
insactor's Introduction

InsActor:
Instruction-driven Physics-based Characters

Jiawei Ren* Mingyuan Zhang* Cunjun Yu* Xiao Ma Liang Pan Ziwei Liu
S-Lab, Nanyang Technological University  National University of Singapore  
Dyson Robot Learning Lab  
*equal contribution
corresponding author
NeurIPS 2023

Installation

conda create -n insactor python==3.9 -y
conda activate insactor

# diffmimic
python -m pip install --upgrade "jax[cuda]==0.4.2" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html

# diffplanner
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.3 -c pytorch -y
conda install -c bottler nvidiacub -y
conda install -c fvcore -c iopath -c conda-forge fvcore iopath -y
conda install pytorch3d -c pytorch3d -y

python -m pip install -r requirements.txt

export PYTHONPATH=$PYTHONPATH:$(pwd)
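
Optionally, verify that both JAX and PyTorch can see the GPU before moving on. This is only a sanity-check sketch; the exact version and device strings will differ on your machine:

# sanity check: confirm both frameworks import and find CUDA
import jax
import torch

print("JAX", jax.__version__, jax.devices())        # expect a GPU/CUDA device here
print("PyTorch", torch.__version__, "CUDA available:", torch.cuda.is_available())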

Download models

mkdir pretrained_models && cd pretrained_models
gdown --fuzzy https://drive.google.com/file/d/1qdglrZJa5ago2nkSWc5Nu52kvLaS_dTg/view?usp=drive_link # Human-ML skills 0.001
gdown --fuzzy https://drive.google.com/file/d/1Wi33YZWR8K6IuWZdcXdV2mqcivcyObIU/view?usp=drive_link # Human-ML planner
cd ..

Run Demo

streamlit run tools/demo.py --server.port 8501

The WebUI should be available at localhost:8501.

Run evaluation

# prepare data
mkdir -p data/datasets
gdown https://drive.google.com/file/d/1knGb1Kt9VUu377vcXIDfQN_qcAYSYPez/view?usp=drive_link --fuzzy && unzip human_pml3d.zip && rm human_pml3d.zip && mv human_pml3d data/datasets/human_pml3d
gdown https://drive.google.com/file/d/1_YQwV5kqZgSHhOJ_V9MwDjKO5yYfsdDP/view?usp=drive_link --fuzzy && unzip kit_pml.zip && rm kit_pml.zip && mv kit_pml data/datasets/kit_pml
# prepare contrastive models
mkdir -p data/evaluators
gdown https://drive.google.com/file/d/1aoiq702L6fy4yCsX_ub1MxD_mvj-VmrQ/view?usp=drive_link --fuzzy && mv humanml.pth data/evaluators
gdown https://drive.google.com/file/d/1fyGAi2NHHvUNgDN0YgB1ND_7buVel_SA/view?usp=drive_link --fuzzy && mv kit.pth data/evaluators
# KIT
export CONTROLLER_PARAM_PATH="pretrained_models/skill_kit_0.01.pkl"
python -u tools/test.py configs/planner/kit.py --work-dir=work_dirs/eval --physmode=normal pretrained_models/planner_kit.pth
# Humanml
export CONTROLLER_PARAM_PATH="pretrained_models/skill_human_0.01.pkl"
python -u tools/test.py configs/planner/human.py --work-dir=work_dirs/eval --physmode=normal pretrained_models/planner_humanml.pth
# Humanml (Perturb)
export CONTROLLER_PARAM_PATH="pretrained_models/skill_human_0.01.pkl"
python -u tools/test.py configs/planner/human.py --work-dir=work_dirs/eval --physmode=normal --perturb true pretrained_models/planner_humanml.pth
# Humanml (Waypoint)
export CONTROLLER_PARAM_PATH="pretrained_models/skill_human_0.01.pkl"
python -u tools/test_waypoint.py configs/planner/human.py --work-dir=work_dirs/eval --physmode=normal pretrained_models/planner_humanml.pth
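
To run the three HumanML evaluations back to back, a small wrapper like the sketch below can help; it simply replays the exact commands and environment variable from above via subprocess (the file name run_eval_humanml.py is just a suggestion):

# run_eval_humanml.py -- convenience wrapper around the evaluation commands above
import os
import subprocess

os.environ["CONTROLLER_PARAM_PATH"] = "pretrained_models/skill_human_0.01.pkl"

commands = [
    # HumanML
    ["python", "-u", "tools/test.py", "configs/planner/human.py",
     "--work-dir=work_dirs/eval", "--physmode=normal",
     "pretrained_models/planner_humanml.pth"],
    # HumanML (Perturb)
    ["python", "-u", "tools/test.py", "configs/planner/human.py",
     "--work-dir=work_dirs/eval", "--physmode=normal", "--perturb", "true",
     "pretrained_models/planner_humanml.pth"],
    # HumanML (Waypoint)
    ["python", "-u", "tools/test_waypoint.py", "configs/planner/human.py",
     "--work-dir=work_dirs/eval", "--physmode=normal",
     "pretrained_models/planner_humanml.pth"],
]
for cmd in commands:
    subprocess.run(cmd, check=True)  # stop at the first failing run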

Differences from the paper results:

  • We improved the low-level tracker training by adding gradient truncation (a conceptual sketch follows this list).
  • We fixed the contrastive model for HumanML3D.
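
For reference, the sketch below shows one common way to implement gradient truncation in a differentiable rollout: the gradient path through the simulator state is cut every few steps with stop_gradient so that long-horizon gradients stay well behaved. It is illustrative only; step_fn, policy, and the squared tracking error are placeholders, not the repository's actual training code.

# illustrative gradient-truncation sketch (not the repository's implementation)
import jax
import jax.numpy as jnp

def rollout_loss(params, init_state, targets, step_fn, policy, truncate_every=8):
    # step_fn(state, action) -> next_state is a differentiable simulation step;
    # policy(params, state) -> action is the low-level tracker being trained.
    state, loss = init_state, 0.0
    for t, target in enumerate(targets):
        action = policy(params, state)
        state = step_fn(state, action)
        loss += jnp.mean((state - target) ** 2)   # simple pose-tracking error
        if (t + 1) % truncate_every == 0:
            # cut back-propagation through earlier states to keep gradients stable
            state = jax.lax.stop_gradient(state)
    return loss / len(targets)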

More models

cd pretrained_models
gdown --fuzzy https://drive.google.com/file/d/10gFEWUdZtMIA-6yhd_gZXbYm9snMZ63m/view?usp=drive_link # KIT-ML skills 0.001
gdown --fuzzy https://drive.google.com/file/d/1oyT5DE5ItZb1KNSV0lW85cLk9w4alGPV/view?usp=drive_link # Human-ML skills 0.0
gdown --fuzzy https://drive.google.com/file/d/1fvW3RtFU8sOGj2rLZTT7gGSOzLCvNkUW/view?usp=drive_link # KIT-ML skills 0.0
gdown --fuzzy https://drive.google.com/file/d/17ut3gymJpDrPt4nsIhPcug0A0HevqGkE/view?usp=drive_link # Human-ML skills 0.01
gdown --fuzzy https://drive.google.com/file/d/1Ijosu_4W2eIg72kK2IuGaR0wbEhxw6CU/view?usp=drive_link # KIT-ML skills 0.01
gdown --fuzzy https://drive.google.com/file/d/10WrKJ4v1u6DwCYb8KT9s2aP3n2BBn9ST/view?usp=drive_link # KIT-ML planner
cd ..

Visualize Evaluation Results

The evaluation script saves the diffusion plans to planned_traj.npy and the simulated motion to simulated_traj.npy.

streamlit run visualize.py

Then input the trajectory file path (e.g., planned_traj.npy), or drag and drop the trajectory file.
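
You can also inspect a saved trajectory file directly; the snippet below assumes a standard NumPy .npy file and only prints its type and shape (the exact layout depends on the evaluation config):

# quick inspection of a saved trajectory file
import numpy as np

traj = np.load("planned_traj.npy", allow_pickle=True)
print(type(traj), getattr(traj, "shape", None), getattr(traj, "dtype", None))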

Training Low-level Controllers

# prepare data
gdown https://drive.google.com/file/d/15qjv-tREix2kJ5kvaRgxTXgoKQgHPq6L/view?usp=drive_link --fuzzy && unzip kit_ml_raw_processed.zip && rm kit_ml_raw_processed.zip && mv kit_ml_raw_processed data/kit_ml_raw_processed  # KIT
gdown https://drive.google.com/file/d/1WCiuqQOeIpu2sFjnF20aJiNmYLFMkNxU/view?usp=drive_link --fuzzy && unzip humanml3d_processed.zip && rm humanml3d_processed.zip && mv humanml3d_processed data/humanml3d_processed  # Human-ML
# train
python mimic.py --config configs/controller/human_0.01.yaml
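
If you want to see what a controller configuration contains before launching a run, you can load it directly. This is only a sketch and assumes PyYAML is available; the key names depend on the actual config file:

# peek at a low-level controller config (illustrative only)
import yaml

with open("configs/controller/human_0.01.yaml") as f:
    cfg = yaml.safe_load(f)
print(sorted(cfg))   # list whatever top-level keys the config defines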

Training Diffusion Policy

python -u tools/train.py configs/planner/kit.py --work-dir=planner_kit  # kit
python -u tools/train.py configs/planner/human.py --work-dir=planner_human  # humanml
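
The planner is a diffusion model conditioned on the instruction. The sketch below shows the generic form of a denoising training objective in PyTorch; model, text_emb, and the array shapes are placeholders and do not correspond to the repository's actual classes:

# generic denoising loss for a text-conditioned trajectory diffusion model (illustrative)
import torch
import torch.nn.functional as F

def diffusion_loss(model, x0, text_emb, alphas_cumprod):
    # x0: clean trajectories [B, T, D]; text_emb: instruction embeddings [B, E]
    B = x0.shape[0]
    t = torch.randint(0, alphas_cumprod.shape[0], (B,), device=x0.device)
    noise = torch.randn_like(x0)
    a_bar = alphas_cumprod[t].view(B, 1, 1)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise   # forward diffusion q(x_t | x_0)
    pred = model(x_t, t, text_emb)                            # network predicts the added noise
    return F.mse_loss(pred, noise)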

Citation

@article{ren2023insactor,
  title={InsActor: Instruction-driven Physics-based Characters},
  author={Ren, Jiawei and Zhang, Mingyuan and Yu, Cunjun and Ma, Xiao and Pan, Liang and Liu, Ziwei},
  journal={NeurIPS},
  year={2023}
}

insactor's Issues

code release

Thanks for your excellent work. I am wondering when the code will be released. Thank you.

Missing npy files

Hi Jiawei,

Amazing project you all have built here.

I am following the instructions to run, and I get this error:

  File "/content/insactor/tools/demo_utils/diffuse.py", line 15, in
    mean = np.load(mean_path)
FileNotFoundError: [Errno 2] No such file or directory: 'data/datasets/human_pml3d/mean.npy'

It seems these files are missing; they are referenced in tools/demo_utils/diffuse.py:

mean_path = "data/datasets/human_pml3d/mean.npy"
std_path = "data/datasets/human_pml3d/std.npy"

Can you include these files, or is there something I am missing from the instructions?
Thanks!

Missing `mimic.py` file

Hi Jiawei,

Thank you for your excellent work!

I followed the instructions to run the code, but when training the low-level controller with the command below, it seems that the mimic.py file is missing.

python mimic.py --config configs/controller/human_0.01.yaml

Can you include these files, or is there something I am missing from the instructions?
Thanks!
