
motion_reconstruction's People

Contributors

akanazawa


motion_reconstruction's Issues

json file

Where are the keypoint files (.json files)?

Apply motion reconstruction on top of the HMMR outputs

Thanks a ton for the great work!

I am trying to apply motion reconstruction to the output of HMMR as a smoothing stage, and I am also converting the repo to Python 3. I have a few questions about the refinement pipeline:

1. The paper mentions that the optimization is done in HMR's latent space. Does this correspond to the f_movie output in HMMR, and can one directly optimize that part of the output?
2. How is the optimization stepped? Is a regressor trained here to revise the previous output? Reading the losses described in the paper, it isn't clear to me how this is done (possibly due to my lack of background in this field).

Appreciate it!
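For what it's worth, my current understanding (a sketch, not the repo's actual code): no new regressor is trained. The per-frame latent feature itself is treated as the optimization variable and updated by gradient descent against a 2D keypoint reprojection loss, a temporal smoothness term, and a term that keeps it close to the initial HMR/HMMR prediction. In the toy below, `decode_keypoints` is a hypothetical stand-in for the real regressor + SMPL + camera projection, and the sizes are made up:

```python
# Minimal sketch of motion reconstruction as direct optimization over per-frame
# latent features (TF 1.x style, matching the repo). NOT the authors' code.
import numpy as np
import tensorflow as tf

T, D, J = 50, 2048, 25                                    # frames, latent dim, joints (toy sizes)
init_feats = np.random.randn(T, D).astype(np.float32)    # pretend HMR/HMMR features (e.g. f_movie)
target_kps = np.random.rand(T, J, 2).astype(np.float32)  # pretend OpenPose 2D keypoints
kp_conf = np.ones((T, J, 1), np.float32)                  # keypoint confidences

latent = tf.get_variable('latent_refined', initializer=init_feats)  # variable being optimized

def decode_keypoints(z):
    """Hypothetical decoder: latent -> 2D joints (stands in for SMPL + camera projection)."""
    W = tf.get_variable('toy_decoder', shape=[D, J * 2])
    return tf.reshape(tf.matmul(z, W), [-1, J, 2])

pred_kps = decode_keypoints(latent)
loss_kp = tf.reduce_mean(kp_conf * tf.square(pred_kps - target_kps))  # 2D reprojection
loss_smooth = tf.reduce_mean(tf.square(latent[1:] - latent[:-1]))     # temporal smoothness
loss_init = tf.reduce_mean(tf.square(latent - init_feats))            # stay near initial prediction
loss = loss_kp + 1e-2 * loss_smooth + 1e-3 * loss_init

step = tf.train.AdamOptimizer(1e-3).minimize(loss, var_list=[latent])
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(200):
        sess.run(step)
```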

Integrate HMMR

Hi,

first of all, thanks for releasing the code.

Do you have any suggestions on how to integrate HMMR instead of HMR with this code? Something like a to-do list or steps to go through would be great.

How to smooth your result

I have read your code, but I can't figure out how you smooth the result. I only noticed that you use the HMR output. Thanks!
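In case it helps while waiting for an answer: a common generic way to smooth per-frame predictions (not necessarily what this repo does, which as I understand it mainly relies on the optimization itself) is a 1-D filter along the time axis of the pose parameters:

```python
# Generic temporal smoothing of per-frame predictions; illustrative only.
import numpy as np
from scipy.ndimage import gaussian_filter1d

poses = np.random.randn(120, 72)                      # toy per-frame SMPL pose parameters (T x 72)
smoothed = gaussian_filter1d(poses, sigma=3, axis=0)  # filter each channel over time
```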

Is there code for training the policy with imitation learning?

I haven't run these two commands yet because I haven't configured the environment:

python -m run_openpose

python -m refine_video

But I guess the first one produces the .json files storing all the keypoints, and the second one produces the .bvh and .h5 files storing the animation.

My question is: how is the policy trained? Has the training code not been uploaded yet?
I'm new to deep learning, and I'd really appreciate it if you have time to answer.
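Regarding the first step: the exact file names depend on how run_openpose / OpenPose are invoked, but the JSON layout is the standard OpenPose format, a flat [x1, y1, c1, x2, y2, c2, ...] list per detected person. A small sketch of reading one frame (the file name below is hypothetical):

```python
# Read one OpenPose keypoint JSON and reshape it into (num_joints, 3): x, y, confidence.
import json
import numpy as np

with open('frame_000000000000_keypoints.json') as f:   # hypothetical file name
    data = json.load(f)

people = data['people']
if people:
    kps = np.array(people[0]['pose_keypoints_2d']).reshape(-1, 3)
    print(kps.shape)
```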

Demo Data bounding boxes don't exist

When I try to run refine_video.py on the demo videos and .h5 files, I get errors like:

!!!./demo_data/openpose_output/run_bboxes.h5 doesnt exist!!!

What exactly am I supposed to do with the .h5 and .bvh files in the demo data to make the demo work, given that no *_bboxes.h5 file is provided?
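As far as I can tell, the *_bboxes.h5 file is normally produced by the run_openpose step, so the straightforward fix is to run that first on the demo videos. If you want to build one yourself, the idea is a per-frame bounding box around the detected 2D keypoints; the dataset name below is hypothetical, so check run_openpose.py for the actual layout:

```python
# Hedged sketch: compute per-frame bounding boxes from OpenPose keypoints and
# save them to an HDF5 file. Key/dataset names are guesses, not the repo's.
import h5py
import numpy as np

def bbox_from_kps(kps, conf_thresh=0.1, margin=20):
    """Axis-aligned box around confident keypoints. kps is (J, 3): x, y, confidence."""
    vis = kps[kps[:, 2] > conf_thresh, :2]
    x0, y0 = vis.min(axis=0) - margin
    x1, y1 = vis.max(axis=0) + margin
    return np.array([x0, y0, x1, y1], dtype=np.float32)

all_kps = [np.random.rand(25, 3) for _ in range(100)]   # toy per-frame keypoints
bboxes = np.stack([bbox_from_kps(k) for k in all_kps])

with h5py.File('run_bboxes.h5', 'w') as f:
    f.create_dataset('bboxes', data=bboxes)              # hypothetical dataset name
```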

BVH File Format or how to import in Unity

Hello,

I wanted to ask if anybody has implemented the write2bvh function or has another way of importing the results of this project into Unity as an animation. I have successfully imported the pre-existing .bvh files provided in the data folder, and I would like to create my own .bvh files by running the project. How can I achieve that?

Thank you!

How to write to bvh file?

Hi, I noticed that the line '# from jason.bvh_core import write2bvh' has been commented out, so how can I write the results to a .bvh file?
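Since write2bvh is not released, here is an illustrative minimal BVH writer just to show the file format (a HIERARCHY skeleton followed by a MOTION block of per-frame channel values). This is not that function; a real exporter needs the full SMPL joint hierarchy and a conversion from axis-angle rotations to Euler angles:

```python
# Write a minimal, valid BVH file for a single root joint; illustrative only.
import numpy as np

def write_minimal_bvh(path, root_positions, root_rotations_deg, fps=30):
    """root_positions: (T, 3) xyz; root_rotations_deg: (T, 3) Z/X/Y Euler angles in degrees."""
    header = (
        "HIERARCHY\n"
        "ROOT Hips\n{\n"
        "  OFFSET 0.0 0.0 0.0\n"
        "  CHANNELS 6 Xposition Yposition Zposition Zrotation Xrotation Yrotation\n"
        "  End Site\n  {\n    OFFSET 0.0 10.0 0.0\n  }\n}\n"
        "MOTION\n"
        "Frames: %d\nFrame Time: %f\n" % (len(root_positions), 1.0 / fps)
    )
    with open(path, 'w') as f:
        f.write(header)
        for pos, rot in zip(root_positions, root_rotations_deg):
            f.write(' '.join('%f' % v for v in list(pos) + list(rot)) + '\n')

write_minimal_bvh('toy.bvh', np.zeros((10, 3)), np.zeros((10, 3)))
```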

where is the PRETRAINED_MODEL

In refiner.py you need a pretrained model whose name is Feb12_2100_save75_model.ckpt-667589, but the model released with HMR is not this one, so how can I get it?

Retargeting: how to transfer the SMPL motion data to DeepMimic for RL?

I've built both models (SMPL and DeepMimic) in Unity and restored them in a T-pose.
But the T-poses are very different, especially in the arms. Besides, their joint counts differ (24 vs. 14).
So in the SFV paper, how is the data transferred from SMPL to DeepMimic?

Any idea would be very helpful!
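Not an authoritative answer, just how I would start: as I read the SFV paper, each SMPL joint's local rotation is copied to the corresponding DeepMimic joint (extra SMPL joints such as the hands and spine links are dropped), and the differing rest poses then have to be compensated. The index mapping below is my own guess based on the standard SMPL joint order, so double-check it against your rigs:

```python
# Hypothetical SMPL -> DeepMimic joint correspondence; verify against your rigs.
SMPL_TO_DEEPMIMIC = {
    # deepmimic joint name : SMPL joint index
    'chest': 6,           # Spine2
    'neck': 12,           # Neck
    'right_hip': 2, 'right_knee': 5, 'right_ankle': 8,
    'right_shoulder': 17, 'right_elbow': 19,
    'left_hip': 1, 'left_knee': 4, 'left_ankle': 7,
    'left_shoulder': 16, 'left_elbow': 18,
}

def retarget_frame(smpl_local_rots):
    """Copy per-joint local rotations from SMPL order to DeepMimic joint names.
    smpl_local_rots: sequence of 24 rotations (matrices or quaternions) in SMPL order.
    NOTE: this only handles the joint correspondence; you still need to compensate
    for the different T-poses of the two rigs."""
    return {dm: smpl_local_rots[idx] for dm, idx in SMPL_TO_DEEPMIMIC.items()}
```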

How to get root position

How do you get the root position?
Not the rotation, just the root position. I saw that your .bvh files contain a root position, but in your code the model doesn't predict it, so how did you obtain it?
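I can't confirm this is what the repo does, but HMR-style code commonly recovers an approximate 3D root translation from the predicted weak-perspective camera [s, tx, ty] under an assumed focal length, with depth proportional to the inverse of the scale:

```python
# Approximate root translation from a weak-perspective camera; assumed convention.
import numpy as np

def root_translation(cam, img_size=224., flength=500.):
    """cam = [s, tx, ty] in the crop's normalized coordinates (assumed convention)."""
    s, tx, ty = cam
    tz = flength / (0.5 * img_size * s)   # weak perspective: depth ~ 1 / scale
    return np.array([tx, ty, tz])

print(root_translation([0.9, 0.02, 0.1]))
```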
