# Action-GPT: Leveraging Large-scale Language Models for Improved and Generalized Action Generation

Sai Shashank Kalakonda · Shubh Maheshwari · Ravi Kiran Sarvadevabhatla
Action-GPT provides a plug-and-play framework for incorporating Large Language Models (LLMs) into text-based action generation models.

For more details, please refer to the Paper or the project website.
- Description
- Getting Started
- Running the Demo
- Training
- Sampling and Evaluation
- Citation
- Acknowledgments
## Description

This implementation:

- Can generate motion sequences using the Action-GPT framework on TEACH, TEMOS, and MotionCLIP for arbitrary text descriptions provided by users.
- Can retrain Action-GPT on TEACH, TEMOS, and MotionCLIP, allowing users to change details in the training configuration.
## Getting Started

To install the dependencies, please follow the steps below:
- Clone this repository:

  ```bash
  git clone https://github.com/actiongpt/actiongpt.git
  cd actiongpt
  ```
- Install the dependencies of the respective models by following the steps below:
  - Action-GPT-TEACH
  - Action-GPT-TEMOS
    - Since TEACH is an extension of TEMOS, the same installation and data setup used for Action-GPT-TEACH can be used here.
  - Action-GPT-MotionCLIP
- After installing the dependencies of the respective models, install `openai` as shown below to use GPT:

  ```bash
  pip install openai
  ```
- We provide `gpt3_annotations.json` for all the models, which contains the GPT-3 descriptions for the train and test action phrases of the respective data loaders.
- OpenAI API key
  - An OpenAI API key needs to be provided to generate GPT-3 descriptions for any action phrases that do not exist in `gpt3_annotations.json`.
  - Update your API key in `gpt_annotator.py`; a simplified sketch of this annotation step follows this list.
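The sketch below illustrates how such an annotator can query GPT-3 for an action phrase and cache the result. The function name, prompt, and cache layout are illustrative assumptions, not the actual contents of `gpt_annotator.py`:

```python
# Illustrative sketch only -- prompt, model, and cache layout are assumptions;
# the real logic lives in gpt_annotator.py.
import json
import openai

openai.api_key = "YOUR-API-KEY"  # paste your OpenAI API key here

def describe_action(phrase, cache_path="gpt3_annotations.json"):
    """Return a GPT-3 description for an action phrase, caching new results."""
    with open(cache_path) as f:
        cache = json.load(f)
    if phrase in cache:
        return cache[phrase]
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=f"Describe in detail how a person performs the action: {phrase}",
        max_tokens=128,
    )
    description = response["choices"][0]["text"].strip()
    cache[phrase] = description
    with open(cache_path, "w") as f:
        json.dump(cache, f, indent=2)
    return description
```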
## Running the Demo

- Action-GPT-TEACH (or) Action-GPT-TEMOS
  - Check out the steps to run the demo on any arbitrary text descriptions from here (an example invocation is sketched after this list).
  - The `path/to/experiment` directory is `pretrained_model` in the respective `Action-GPT_TEACH_k-4` or `Action-GPT_TEMOS_k-4` directory, which can be downloaded from here.
  - NOTE: As Action-GPT-TEMOS is trained for single text descriptions, the demo can be executed only for single text prompts.
- Action-GPT-MotionCLIP
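Assuming the demo mirrors TEACH's Hydra-style interface, an invocation would look roughly like the following; treat the argument names as assumptions and defer to the linked instructions:

```bash
# Hypothetical example -- argument names follow TEACH's demo interface.
# For Action-GPT-TEMOS, pass a single text prompt and duration only.
python interact_teach.py folder=path/to/experiment output=path/to/output \
    texts='[walk forward, sit down]' durs='[5, 4]'
```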
## Training

- Action-GPT-TEACH (or) Action-GPT-TEMOS
  - Check out the steps to train the model from here (a generic example follows this list).
- Action-GPT-MotionCLIP
  - Check out the steps to train the model from here.
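Since TEACH and TEMOS are configured with Hydra, a training run generally takes `key=value` overrides on the command line. The config names below are placeholders, not the repo's actual options; use the experiment and data settings from the linked instructions:

```bash
# Placeholder overrides -- consult the per-model training instructions
# for the real experiment/data config names.
python train.py experiment=action-gpt-k4 data=babel-amass
```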
## Sampling and Evaluation

- Action-GPT-TEACH (or) Action-GPT-TEMOS
  - Check out the steps to sample and evaluate the model from here.
  - NOTE: For Action-GPT-TEMOS there are no align and slerp parameters to be passed, as the model is trained for single text descriptions.
- Action-GPT-MotionCLIP
  - Run the command below to sample from the test set. It creates two directories, `ground_truth` and `action_gpt`, in the path provided for the `generations` parameter. Both directories contain npy files of the motion sequences corresponding to the test-set text descriptions. Using these npy files and the evaluation code provided in TEACH, one can compute the metrics reported in the paper (a short loading sketch follows this list).

    ```bash
    python sampling.py ./exps/paper-model/checkpoint_0100.pth.tar --generations path/to/store/sampled/generations
    ```
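A minimal sketch for inspecting the paired outputs, assuming matching file names across the two directories (the directory layout is as described above; everything else is illustrative):

```python
# Minimal sketch: load paired generations produced by sampling.py.
# Matching file names across the two directories are an assumption.
import numpy as np
from pathlib import Path

gen_dir = Path("path/to/store/sampled/generations")
for gt_file in sorted((gen_dir / "ground_truth").glob("*.npy")):
    pred_file = gen_dir / "action_gpt" / gt_file.name
    gt = np.load(gt_file)       # ground-truth motion sequence
    pred = np.load(pred_file)   # Action-GPT generation for the same text
    print(gt_file.name, gt.shape, pred.shape)
```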
## Citation

```bibtex
@inproceedings{Action-GPT,
  title     = {Action-GPT: Leveraging Large-scale Language Models for Improved and Generalized Action Generation},
  author    = {Kalakonda, Sai Shashank and Maheshwari, Shubh and Sarvadevabhatla, Ravi Kiran},
  booktitle = {IEEE International Conference on Multimedia and Expo ({ICME})},
  year      = {2023},
  url       = {https://actiongpt.github.io/}
}
```
## Acknowledgments

This work is an implementation of the Action-GPT framework on T2M models such as TEACH, TEMOS, and MotionCLIP. Many parts of this code are based on the official implementations of the respective T2M models. We thank all the corresponding authors for making their code available.

This template was adapted from the GitHub repository of GOAL.