
Fast-Human

Setup

Data Preparation

For Human3.6M and HumanEva-I:

We adopt the data preprocessing framework and datasets from GSPS. The datasets can be downloaded from here; place all files in the ./data directory.

For the zero-shot prediction experiments using the AMASS dataset:

We retargeted the AMASS dataset skeletons to the Human3.6M dataset skeletons. We only provide a subset of the AMASS motions here. The retargeted dataset can be downloaded from Google Drive (Baidu Cloud). Place the .npy files in the ./data directory. Details of the retargeting process can be found in ./motion-retargeting.

The final structure of the ./data directory is as follows:

data
├── amass_retargeted.npy
├── data_3d_h36m.npz
├── data_3d_h36m_test.npz
├── data_3d_humaneva15.npz
├── data_3d_humaneva15_test.npz
├── data_multi_modal
│   ├── data_candi_t_his25_t_pred100_skiprate20.npz
│   └── t_his25_1_thre0.500_t_pred100_thre0.100_filtered_dlow.npz
└── humaneva_multi_modal
    ├── data_candi_t_his15_t_pred60_skiprate15.npz
    └── t_his15_1_thre0.500_t_pred60_thre0.010_index_filterd.npz
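As a quick sanity check after downloading, a short script like the following can verify that every expected file is present (this helper is illustrative and not part of the repository):

```python
import os

# Expected files relative to the data root, matching the tree above.
EXPECTED_FILES = [
    "amass_retargeted.npy",
    "data_3d_h36m.npz",
    "data_3d_h36m_test.npz",
    "data_3d_humaneva15.npz",
    "data_3d_humaneva15_test.npz",
    "data_multi_modal/data_candi_t_his25_t_pred100_skiprate20.npz",
    "data_multi_modal/t_his25_1_thre0.500_t_pred100_thre0.100_filtered_dlow.npz",
    "humaneva_multi_modal/data_candi_t_his15_t_pred60_skiprate15.npz",
    "humaneva_multi_modal/t_his15_1_thre0.500_t_pred60_thre0.010_index_filterd.npz",
]

def missing_data_files(root="./data"):
    """Return the expected files that are absent under `root`."""
    return [f for f in EXPECTED_FILES
            if not os.path.isfile(os.path.join(root, f))]

if __name__ == "__main__":
    missing = missing_data_files()
    if missing:
        print("Missing files:", *missing, sep="\n  ")
    else:
        print("Data directory looks complete.")
```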

Pretrained Models

To demonstrate the various capabilities of Fast-Human, we provide a model pretrained on Human3.6M, available on Google Drive (Baidu Cloud). Place the pretrained model in the ./checkpoints directory.

Environment Setup

sh install.sh

Training

For Human3.6M:

python main.py --cfg h36m --mode train

For HumanEva-I:

python main.py --cfg humaneva --mode train

After running the command, a directory named <DATASET>_<INDEX> is created under ./results (<DATASET> is one of {'h36m', 'humaneva'}; <INDEX> equals the number of existing directories under ./results). During training, gifs are stored in ./results/<DATASET>_<INDEX>/out, log files in ./results/<DATASET>_<INDEX>/log, model checkpoints in ./results/<DATASET>_<INDEX>/models, and metrics in ./results/<DATASET>_<INDEX>/results.
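The run-directory naming convention described above can be sketched as follows (a hypothetical helper mirroring the description, not the repository's actual code):

```python
import os

def next_run_dir(dataset, results_root="./results"):
    """Create a run directory <DATASET>_<INDEX>, where <INDEX> is the
    number of directories already present under `results_root`."""
    os.makedirs(results_root, exist_ok=True)
    index = sum(os.path.isdir(os.path.join(results_root, d))
                for d in os.listdir(results_root))
    run_dir = os.path.join(results_root, f"{dataset}_{index}")
    # Subdirectories used during training: gifs, logs, checkpoints, metrics.
    for sub in ("out", "log", "models", "results"):
        os.makedirs(os.path.join(run_dir, sub), exist_ok=True)
    return run_dir
```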

Prediction Visualization

For Human3.6M:

python main.py --cfg h36m --mode pred --vis_row 3 --vis_col 10 --ckpt ./checkpoints/h36m_ckpt.pt

For HumanEva-I:

python main.py --cfg humaneva --mode pred --vis_row 3 --vis_col 10 --ckpt ./checkpoints/humaneva_ckpt.pt

vis_row and vis_col set the number of rows and columns of gifs, respectively. Each action type in the dataset yields two gifs; each gif contains vis_row actions, and each action has vis_col candidate predictions. Gifs are stored in ./inference/<DATASET>_<INDEX>/out.
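From this description, the rendering budget follows directly (a quick sketch; the number of action types is dataset-dependent, e.g. Human3.6M has 15 action categories):

```python
def gif_budget(num_action_types, vis_row, vis_col):
    """Per the description above: two gifs per action type, each gif
    showing vis_row actions with vis_col candidate predictions each."""
    num_gifs = 2 * num_action_types
    total_predictions = num_gifs * vis_row * vis_col
    return num_gifs, total_predictions

# Human3.6M (15 action categories) with --vis_row 3 --vis_col 10:
print(gif_budget(15, 3, 10))  # (30, 900)
```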

Action Switching

Demonstration of action switching:

python main.py --mode switch --ckpt ./checkpoints/h36m_ckpt.pt

vis_switch_num gifs will be stored in ./inference/switch_<INDEX>/out. Each gif contains 30 actions, all of which eventually switch to one of them.

Controllable Action Prediction

Demonstration of controllable action prediction:

python main.py --mode control --ckpt ./checkpoints/h36m_ckpt.pt

Seven gifs will be stored in ./inference/<CONTROL>_<INDEX>/out, each containing vis_row actions with vis_col candidate predictions per action. <CONTROL> is one of {'right_leg', 'left_leg', 'torso', 'left_arm', 'right_arm', 'fix_lower', 'fix_upper'}.

Zero-Shot Test on AMASS Dataset

Demonstration of zero-shot testing on the AMASS dataset:

python main.py --mode zero_shot --ckpt ./checkpoints/h36m_ckpt.pt

Gifs of the zero-shot testing experiment will be stored in ./inference/zero_shot_<INDEX>/out, with the number of actions also set by vis_col and vis_row.

Evaluation

For Human3.6M:

python main.py --cfg h36m --mode eval --ckpt ./checkpoints/h36m_ckpt.pt

For HumanEva-I:

python main.py --cfg humaneva --mode eval --ckpt ./checkpoints/humaneva_ckpt.pt

Note: we have parallelized the computation of the evaluation metrics (APD, ADE, FDE, MMADE, MMFDE) to speed it up; as a result, this step strictly requires a GPU.
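For reference, these per-sequence metrics are commonly defined as best-of-K errors plus a sample-diversity term in this line of work. The sketch below is a plain NumPy version under those standard definitions, not the repository's GPU-parallelized implementation:

```python
import numpy as np

# pred: (K, T, D) — K candidate futures over T frames, D flattened joint coords.
# gt:   (T, D)    — the ground-truth future.

def ade(pred, gt):
    """Average Displacement Error: best-of-K mean per-frame L2 error."""
    return np.linalg.norm(pred - gt[None], axis=-1).mean(axis=-1).min()

def fde(pred, gt):
    """Final Displacement Error: best-of-K L2 error at the last frame."""
    return np.linalg.norm(pred[:, -1] - gt[-1], axis=-1).min()

def apd(pred):
    """Average Pairwise Distance: mean L2 distance between the K samples
    (flattened over time), measuring prediction diversity."""
    flat = pred.reshape(len(pred), -1)
    d = np.linalg.norm(flat[:, None] - flat[None], axis=-1)
    return d[np.triu_indices(len(pred), k=1)].mean()
```

MMADE and MMFDE apply the same best-of-K errors against a multi-modal ground-truth set rather than a single future.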
