
Comments (4)

HalfSummer11 avatar HalfSummer11 commented on June 25, 2024

Hi, I'm sorry, but the current code is still quite entangled with the training dataset we use. For example, the number of channels in some layers must match the number of joints in the training skeleton. (Right now they're hardcoded in config.py.) Furthermore, the scripts that generate the dataset from BVH files also need to be adapted to each dataset. You can see in data_proc/export_train.py that we process xia and bfa differently. Since xia's data happens to have content labels, we use those as a reference for the train-test split. That's what content_test_cnt is for.
In our next update, we'll further refactor the code to support training with customized data & release relevant details.
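To illustrate why those channel counts depend on the skeleton, here is a minimal sketch. The feature layout (quaternion per joint plus a 3-D root velocity) is a hypothetical example, not the repo's actual config.py:

```python
# Hypothetical per-frame feature layout: one rotation per joint plus root velocity.
# The point is that the feature (channel) dimension scales with the joint count,
# so a network sized for one skeleton won't accept another without changes.

def feature_dim(num_joints: int, rot_channels: int = 4) -> int:
    """Per-frame feature size for num_joints joints with rot_channels per rotation."""
    return num_joints * rot_channels + 3  # +3 for a root velocity vector

# A 21-joint skeleton with quaternion rotations:
print(feature_dim(21))  # 21 * 4 + 3 = 87
```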

from deep-motion-editing.

godzillalla avatar godzillalla commented on June 25, 2024

Thank you very much.
I've retargeted our BVH files to the skeleton the code expects.
But some definitions of the overall data structure are still incorrect, resulting in errors in the label correspondence.
I also process our data the way bfa is processed, but our data has too few frames, so I changed the window_size and window_step parameters.

If I change this parameter, does the rest of the code need to be changed again?
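For context on what changing these parameters does: the number of training examples a clip yields follows standard sliding-window arithmetic. A quick sanity check (generic windowing logic, not the repo's exact code):

```python
def num_windows(n_frames: int, window_size: int, window_step: int) -> int:
    """Number of sliding windows that fit in a clip of n_frames frames."""
    if n_frames < window_size:
        return 0  # clip too short to yield even one window
    return (n_frames - window_size) // window_step + 1

# A 100-frame clip yields nothing with a large window, but several small ones:
print(num_windows(100, 128, 64))  # 0 -- clip shorter than the window
print(num_windows(100, 16, 8))    # 11
```

This is why short clips force a smaller window_size: otherwise they produce zero windows and contribute nothing to training.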


HalfSummer11 avatar HalfSummer11 commented on June 25, 2024

Regarding the label correspondence: we currently read the style label of the data from the BVH filename. You can change the related code (line 219~211) in generate_database_bfa to make sure it's compatible with your BVH filenames. Also, if you are working with many small BVH files, you may want to modify generate_database_bfa's code to do the train-test split differently (e.g., reserve some whole BVH files for testing). Our bfa BVHs are very long, so we take window * 2 frames for testing out of every window * 20 frames, which effectively requires each BVH to be at least window * 20 frames long in the first place.
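The per-clip split described above can be sketched as follows. This is a generic reimplementation of the idea, not the repo's generate_database_bfa code, and where within each block the test frames are taken (here, the tail) is an assumption:

```python
def split_frames(n_frames: int, window: int):
    """From every block of window*20 frames, hold out the last window*2 for testing.

    Returns (train_ranges, test_ranges) as lists of (start, end) frame indices.
    Clips shorter than one block contribute no test data at all.
    """
    block, test_len = window * 20, window * 2
    train, test = [], []
    for start in range(0, n_frames - block + 1, block):
        train.append((start, start + block - test_len))
        test.append((start + block - test_len, start + block))
    return train, test

# With window=32 a clip needs at least 640 frames to contribute any test data:
train, test = split_frames(1280, 32)
print(train)  # [(0, 576), (640, 1216)]
print(test)   # [(576, 640), (1216, 1280)]
```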
A window size of 16 might be a bit small? But let's see how the training goes.
(Oh, and please note we assume our input BVH files are of 120 fps. If yours are not you may want to change the downsample parameter.)
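The fps assumption matters because frames are typically subsampled by a fixed integer stride. A minimal sketch of that relationship (the downsample name follows the comment above; the target rate is a hypothetical example):

```python
SOURCE_FPS = 120                       # fps the pipeline assumes for input BVHs
TARGET_FPS = 30                        # hypothetical target frame rate
downsample = SOURCE_FPS // TARGET_FPS  # keep every 4th frame

frames = list(range(120))              # one second of motion at 120 fps
frames_ds = frames[::downsample]
print(len(frames_ds))                  # 30 frames -- one second at the target rate
```

If your BVHs are at, say, 60 fps, keeping the stride tuned for 120 fps would halve your effective frame rate, so the stride needs to be adjusted accordingly.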
Hope this would help!


godzillalla avatar godzillalla commented on June 25, 2024

Thanks a lot!

