This code accompanies the paper 360-Degree Gaze Estimation in the Wild Using Multiple Zoom Scales. By using this code you agree to the terms of the LICENSE.
Use conda to create a new environment from the given .yml file. If conda is not installed on your system, you can install it from here. Once conda is installed, run the following command to create an environment with all dependencies installed.
conda env create -f multizoomgaze_env.yml
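After the environment is created, activate it before running any of the commands below. The environment name here is assumed to be `multizoomgaze` (matching the .yml filename); the actual name is set by the `name:` field inside `multizoomgaze_env.yml`, so check that file if activation fails.

```shell
# Activate the environment created from the .yml file
# (assumed name: multizoomgaze — verify against the "name:" field in the .yml)
conda activate multizoomgaze

# Optional sanity check: confirm the environment is active
python -c "import torch; print(torch.__version__)"
```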
Download the checkpoint files from here.
Use this notebook to predict gaze direction in a random image.
Setting up the Gaze360 dataset.
Register here, which will give you access to the dataset.
MSA+Seq
python run.py --model_type=NonLstmSinCosModel --enable_time --checkpoints_path=CHECKPOINT_DIRECTORY/ --source_path=/data/GAZE360/imgs/ --evaluate
MSA
python run.py --model_type=NonLstmSinCosModel --checkpoints_path=CHECKPOINT_DIRECTORY/ --source_path=/data/GAZE360/imgs/ --evaluate
MSA+raw
python run.py --model_type=NonLstmMultiCropModel --checkpoints_path=CKECKPOINT_DIRECTORY/ --source_path=/data/GAZE360/imgs/ --evaluate
Pinball Static
python run.py --model_type=StaticModel --checkpoints_path=CHECKPOINT_DIRECTORY/ --evaluate
Here, checkpoints_path is the directory where you've saved the trained checkpoint files, and source_path is the directory containing the Gaze360 data.
To train a model, simply remove the --evaluate flag from the corresponding evaluation command given in the previous section. In this case, checkpoints_path is the directory where your checkpoints will be saved.
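For example, training the MSA+Seq model is the same invocation as its evaluation command, minus --evaluate. The paths below are placeholders: CHECKPOINT_DIRECTORY/ is wherever you want checkpoints written, and /data/GAZE360/imgs/ should point at your local copy of the Gaze360 images.

```shell
# Train MSA+Seq: identical to the evaluation command, without --evaluate.
# Checkpoints will be written to CHECKPOINT_DIRECTORY/ (placeholder path).
python run.py --model_type=NonLstmSinCosModel --enable_time \
    --checkpoints_path=CHECKPOINT_DIRECTORY/ \
    --source_path=/data/GAZE360/imgs/
```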