Explorations on facial image keypoint detection.
This project contains our explorations on facial feature detection for the Kaggle Facial Keypoints Detection competition, developed as the final project for the 207 Machine Learning course in Berkeley's Master of Information and Data Science program.
The project was developed by: Alex, Ankit, Nina and Will.
We created a Prezi adventure to showcase our ideas and explorations.
The notebook FacialKeypointDetection-AlexAnkitNinaGuillermo, located in scripts/final-notebook/, contains the final report summarizing most of the work included in the rest of the repository. For a full exposition refer to this notebook. Note, however, that due to its size and lengthy write-ups it is better treated as a report than as a runnable notebook. For details on the code that was actually run, refer to the specific notebooks, as guided by the READMEs.
- scripts/
  - explorations/ contains dataset explorations, plots and initial modeling attempts
  - preprocessors/ contains the preprocessing scripts
  - modelers/ contains all serious modeling attempts, including the generation of their submission files
  - tools/ contains a set of additional tools that make the exploration and modeling cleaner. For example, submit.py contains functions for creating the submission files in the appropriate folder
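A submission-writing helper along the lines of the one in scripts/tools/ can be sketched as below. This is a purely illustrative sketch, not the actual submit.py API: the function name, signature, and the RowId/Location column layout (assumed here to match the competition's format) are all assumptions.

```python
import csv
import os

# Illustrative sketch only -- the real submit.py may differ. The column
# names (RowId, Location) are an assumption about the Kaggle
# facial-keypoints submission format.
def write_submission(predictions, out_dir, filename):
    """Write (row_id, location) pairs to a CSV in the submissions folder."""
    os.makedirs(out_dir, exist_ok=True)
    path = os.path.join(out_dir, filename)
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["RowId", "Location"])
        for row_id, location in predictions:
            writer.writerow([row_id, location])
    return path
```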
- data/
  - datasets/ contains the original Kaggle data
  - submissions/ contains the CSV files submitted to the Kaggle competition
  - models/ contains the persisted models. Each pickled model contains: name, alias, description, model-object, prediction-df, [training-time], [predicting-time]
  - preprocessed/ contains preprocessed datasets, for caching the output of time-consuming preprocessing steps
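Persisting a model together with its metadata as a single pickled dict can be sketched roughly as follows. The helper names and the exact dict schema are assumptions mirroring the fields listed above, not the repo's actual modeler code.

```python
import pickle

# Hypothetical helpers mirroring the pickled-model fields described
# above (name, alias, description, model-object, prediction-df, and the
# optional timing fields). The schema is an assumption for illustration.
def save_model(path, name, alias, description, model, prediction_df,
               training_time=None, predicting_time=None):
    """Persist a model plus its metadata as a single pickled dict."""
    payload = {
        "name": name,
        "alias": alias,
        "description": description,
        "model-object": model,
        "prediction-df": prediction_df,
        "training-time": training_time,      # optional, e.g. seconds
        "predicting-time": predicting_time,  # optional, e.g. seconds
    }
    with open(path, "wb") as f:
        pickle.dump(payload, f)

def load_model(path):
    """Load a previously pickled model dict."""
    with open(path, "rb") as f:
        return pickle.load(f)
```

Storing the predictions alongside the fitted model object makes it cheap to regenerate a submission file without re-running a long prediction pass.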
- First of all, clone the repo:
$ git clone https://github.com/WillahScott/facial-keypoint-detection.git
Use of a virtual environment is highly recommended (especially through conda).
Should you choose not to create a virtual environment and instead install directly on your machine, just install the prerequisites (follow step 2 for virtualenv instructions).
- Create the environment from environment.yml:
$ conda env create -f environment.yml
$ source activate fkd
That's it!
For more info on using virtual environments with conda, see the conda documentation.
- Create a virtual env (from within the folder) and activate it:
$ cd facial-keypoint-detection
$ virtualenv fkd
$ source fkd/bin/activate
- Install pre-reqs:
$ pip install -r requirements.txt
Refer to the subfolders' READMEs for more details on the sections, contents and usage.
Last update: April, 2016