
deep-motion-editing's People

Contributors

deepmotionediting, halfsummer11, jdbodyfelt, jonathanrein, kfiraberman, mids, peizhuoli, thecrazyt


deep-motion-editing's Issues

something is wrong with the file

D:\deep-motion-editing-master\retargeting>python demo.py
The syntax of the command is incorrect.
loading from ./pretrained/models\topology0
loading from epoch 20000......
load succeed!
loading from ./pretrained/models\topology1
loading from epoch 20000......
load succeed!
C:\Users\anbanglee\miniconda3\envs\avatarify\lib\site-packages\torch\nn\modules\upsampling.py:129: UserWarning: nn.Upsample is deprecated. Use nn.functional.interpolate instead.
warnings.warn("nn.{} is deprecated. Use nn.functional.interpolate instead.".format(self.name))
The syntax of the command is incorrect.
Traceback (most recent call last):
File "eval_single_pair.py", line 99, in
main()
File "eval_single_pair.py", line 93, in main
model.test()
File "D:\deep-motion-editing-master\retargeting\models\base_model.py", line 96, in test
self.compute_test_result()
File "D:\deep-motion-editing-master\retargeting\models\architecture.py", line 296, in compute_test_result
self.writer[src][i].write_raw(gt[i, ...], 'quaternion', os.path.join(new_path, '{}_gt.bvh'.format(self.id_test)))
File "D:\deep-motion-editing-master\retargeting\datasets\bvh_writer.py", line 91, in write_raw
return self.write(rotations, positions, order, path, frametime, root_y=root_y)
File "D:\deep-motion-editing-master\retargeting\datasets\bvh_writer.py", line 80, in write
return write_bvh(self.parent, offset, rotations_full, positions, self.names, frametime, order, path)
File "D:\deep-motion-editing-master\retargeting\datasets\bvh_writer.py", line 10, in write_bvh
file = open(path, 'w')
FileNotFoundError: [Errno 2] No such file or directory: './pretrained/results/bvh\Aj\0_gt.bvh'
Traceback (most recent call last):
File "demo.py", line 46, in
example('Aj', 'BigVegas', 'Dancing Running Man.bvh', 'intra', './examples/intra_structure')
File "demo.py", line 42, in example
height)
File "D:\deep-motion-editing-master\retargeting\models\IK.py", line 57, in fix_foot_contact
anim, name, ftime = BVH.load(input_file)
File "../utils\BVH.py", line 58, in load
f = open(filename, "r")
FileNotFoundError: [Errno 2] No such file or directory: './examples/intra_structure\result.bvh'

(avatarify) D:\deep-motion-editing-master\retargeting>sh demo.sh
'sh' is not recognized as an internal or external command,
operable program or batch file.

(avatarify) D:\deep-motion-editing-master\retargeting>python test.py
Batch [1/4]
The syntax of the command is incorrect.
loading from ./pretrained/models\topology0
loading from epoch 20000......
load succeed!
loading from ./pretrained/models\topology1
loading from epoch 20000......
load succeed!
0%| | 0/106 [00:00<?, ?it/s]C:\Users\anbanglee\miniconda3\envs\avatarify\lib\site-packages\torch\nn\modules\upsampling.py:129: UserWarning: nn.Upsample is deprecated. Use nn.functional.interpolate instead.
warnings.warn("nn.{} is deprecated. Use nn.functional.interpolate instead.".format(self.name))
The syntax of the command is incorrect.

Traceback (most recent call last):
File "eval.py", line 37, in
main()
File "eval.py", line 33, in main
model.test()
File "D:\deep-motion-editing-master\retargeting\models\base_model.py", line 96, in test
self.compute_test_result()
File "D:\deep-motion-editing-master\retargeting\models\architecture.py", line 296, in compute_test_result
self.writer[src][i].write_raw(gt[i, ...], 'quaternion', os.path.join(new_path, '{}_gt.bvh'.format(self.id_test)))
File "D:\deep-motion-editing-master\retargeting\datasets\bvh_writer.py", line 91, in write_raw
return self.write(rotations, positions, order, path, frametime, root_y=root_y)
File "D:\deep-motion-editing-master\retargeting\datasets\bvh_writer.py", line 80, in write
return write_bvh(self.parent, offset, rotations_full, positions, self.names, frametime, order, path)
File "D:\deep-motion-editing-master\retargeting\datasets\bvh_writer.py", line 10, in write_bvh
file = open(path, 'w')
FileNotFoundError: [Errno 2] No such file or directory: './pretrained/results/bvh\BigVegas\0_gt.bvh'
Collecting test error...
Traceback (most recent call last):
File "test.py", line 35, in
cross_error += full_batch(0)
File "D:\deep-motion-editing-master\retargeting\get_error.py", line 15, in full_batch
res.append(batch(char, suffix))
File "D:\deep-motion-editing-master\retargeting\get_error.py", line 31, in batch
files = [f for f in os.listdir(new_p) if
FileNotFoundError: [WinError 3] The system cannot find the path specified.: './pretrained/results/bvh\Mousey_m'
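The repeated "The syntax of the command is incorrect." lines plus the missing-directory FileNotFoundErrors are consistent with the scripts shelling out to the Unix-only `mkdir -p`, which cmd.exe rejects, so the output directories are never created. A minimal workaround sketch (my own, not the repository's fix) is to create the parent directory portably before writing:

```python
import os

def ensure_parent_dir(path):
    """Create the parent directory of `path` if it is missing.

    Portable replacement for the Unix-only `mkdir -p` shell call that
    cmd.exe rejects -- a workaround sketch, not the repository's own fix.
    """
    parent = os.path.dirname(path)
    if parent:
        os.makedirs(parent, exist_ok=True)
    return parent
```

Calling e.g. `ensure_parent_dir('./pretrained/results/bvh/Aj/0_gt.bvh')` before the `open(path, 'w')` in bvh_writer.py would sidestep the missing-directory failures; forward slashes are fine on Windows.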

How to convert the .fbx file into .bvh file format?

Hi, I was recently drawn to your project and tried to train my own custom model. However, I ran into a problem while preprocessing the data:

The animation data obtained from Mixamo is in .fbx format, but based on your source code you appear to use .bvh data as the raw input. Could you please give me some suggestions on how to convert the .fbx data into .bvh format?

Many Thanks!
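Until the authors chime in, one common route is scripting Blender's bundled FBX importer and BVH exporter headlessly. This is a sketch under stated assumptions: it must run inside Blender (e.g. `blender -b -P fbx2bvh.py`), since `bpy` only exists in Blender's bundled Python, and the file names are hypothetical:

```python
import bpy  # only available inside Blender's bundled Python

def fbx_to_bvh(fbx_path, bvh_path):
    # Start from an empty scene so only the imported rig gets exported.
    bpy.ops.wm.read_factory_settings(use_empty=True)
    bpy.ops.import_scene.fbx(filepath=fbx_path)
    # The BVH exporter operates on the active armature object; FBX import
    # usually selects it, but set it explicitly to be safe.
    armature = next(o for o in bpy.data.objects if o.type == 'ARMATURE')
    bpy.context.view_layer.objects.active = armature
    bpy.ops.export_anim.bvh(filepath=bvh_path,
                            frame_start=bpy.context.scene.frame_start,
                            frame_end=bpy.context.scene.frame_end)

fbx_to_bvh("Dancing Running Man.fbx", "Dancing Running Man.bvh")  # hypothetical paths
```

Note this only converts the animation/skeleton; whether the resulting joint set matches what the repo's bvh_parser expects is a separate question.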

I use my own data for retargeting, but the program raises an exception

Traceback (most recent call last):
File "eval_single_pair.py", line 104, in
main()
File "eval_single_pair.py", line 92, in main
new_motion = (new_motion - dataset.mean[i][j]) / dataset.var[i][j]
RuntimeError: The size of tensor a (99) must match the size of tensor b (91) at non-singleton dimension 1
Traceback (most recent call last):
File "demo.py", line 46, in
example('Aj', 'BigVegas', '01.bvh', 'intra', './examples/intra_structure')
File "demo.py", line 42, in example
height)
File "/home/wuxiaoliang/docker/newAPP/deep-motion-editing/retargeting/models/IK.py", line 57, in fix_foot_contact
anim, name, ftime = BVH.load(input_file)
File "../utils/BVH.py", line 58, in load
f = open(filename, "r")
FileNotFoundError: [Errno 2] No such file or directory: './examples/intra_structure/result.bvh'

Hello, how can I solve this problem? @kfiraberman

Can this be used with other 3D programs and different rigs?

I use Reallusion iClone and Character Creator, which are similar to DAZ 3D.

What I am looking for is full-body motion capture, with face and hands, from a web camera.

One of the problems I have been running into with the Character Creator 3 armature is that the facial movement is mostly based on BlendShapes (morphs), and I don't know how to translate that from xyz coordinates. Actually, I am just learning 3D and motion capture, but the price of markerless mocap is high and I am just a hobbyist, so I have been searching for an open-source solution.

Your software looks like it will be fantastic!

Thanks,
Dan

Why create a "global_part_neighbor" in the function "find_neighbor" in the skeleton.py?

Hi,
I was recently reading the source code very carefully in order to understand the whole pipeline of your work.

I was confused about why a variable named "global_part_neighbor" is initialized in the function "find_neighbor" at line #373 of models/skeleton.py.

Besides, why is "edge_num" appended to each of global_part_neighbor's lists at line #375?

Many thanks!

About the mean and var of each character?

Hi,

I have a question about the mean and var located in the ./retargeting/datasets/Mixamo/mean_var.

I'd like to know whether the mean and var of each character's motion data are calculated based only on the test data, or on the whole dataset.

If it is the whole dataset, how much data did you use to calculate the mean and var?

Many thanks!
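Whichever split the authors used, the computation itself is just per-channel first and second moments over the pooled frames. A minimal pure-Python sketch of the arithmetic (my own illustration; which frames get pooled, train split vs. whole dataset, is exactly the open question above):

```python
from statistics import mean, pvariance

def channel_stats(frames):
    """Per-channel mean and population variance over all pooled frames.

    `frames` is a list of per-frame channel vectors. Illustrative only:
    the repo's mean_var files are per-character, and the choice of which
    frames to pool is what this issue asks about.
    """
    n_channels = len(frames[0])
    columns = [[f[c] for f in frames] for c in range(n_channels)]
    means = [mean(col) for col in columns]
    variances = [pvariance(col, mu) for col, mu in zip(columns, means)]
    return means, variances
```

The normalization seen elsewhere in the code, `(motion - mean) / var`, then uses exactly these per-channel statistics.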

train with my dataset

Hi, I want to train style transfer on my own database. I have tried many times, but the program always gives me an error.
The error is "RuntimeError: CUDA error: device-side assert triggered" and "index out of bounds".

So I want to ask: how do I define a YML file?
After defining a database like xia's, can I leave the rest of the code unchanged?
Thank you!

Some confusion about the instance function "to_numpy" of class "BVH_file" in retargeting/datasets/bvh_parser.py

Hi, I was somewhat confused by a code snippet in the instance function "to_numpy" of class "BVH_file" in retargeting/datasets/bvh_parser.py.

From line #180 to line #184:

if edge:
    index = []
    for e in self.edges:
        index.append(e[0])
    rotations = rotations[:, index, :]
rotations = rotations.reshape(rotations.shape[0], -1)

According to my understanding so far, after doing
rotations = self.anim.rotations[:, self.corps, :] at line #174,
the variable "rotations" will hold the rotation info of the simplified skeleton. Then why do we still need the operation from line #180 to line #184?

By the way, could you please explain the "MOTION" part of the .bvh file for me? I searched for explanations on the web, but I didn't find a satisfying one. I guess each line of the "MOTION" part represents the rotation information of the skeleton's joints in one frame, but how should I understand the order of these numbers?

Many thanks!
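On the MOTION question: that reading is right. Each line after `Frames:` / `Frame Time:` is one frame, and the numbers follow, left to right, the CHANNELS declarations in the order the joints appear while reading the HIERARCHY section top to bottom (typically `CHANNELS 6 Xposition Yposition Zposition Zrotation Xrotation Yrotation` for the root, then three rotation channels per other joint). A tiny sketch of that mapping (illustrative, not the repository's parser):

```python
def parse_motion_line(channel_spec, line):
    """Map one MOTION frame line to a (joint, channel) -> float dict.

    channel_spec lists (joint, channel) pairs in the exact order the
    CHANNELS keywords appear in the HIERARCHY section -- that order is
    what gives each number in a frame line its meaning.
    """
    values = [float(tok) for tok in line.split()]
    if len(values) != len(channel_spec):
        raise ValueError("frame has %d values but hierarchy declares %d channels"
                         % (len(values), len(channel_spec)))
    return dict(zip(channel_spec, values))
```

The rotation channels are Euler angles in degrees, applied in the order the channel names are written (e.g. Zrotation Xrotation Yrotation means Z first).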

what's the role of the function "find_seq" defined in the class "SkeletonPool" in skeleton.py?

Hi=.=, it's me again,

Could you please describe the role of the function "find_seq" defined in the class "SkeletonPool" in skeleton.py as well as the role of the attribute "self.seq_list" for me?

According to my understanding so far, after the skeleton pooling we need a new topology to help us calculate the neighboring list/matrix for the next skeleton convolution layer.

But I still haven't worked it out after reading the code.

Many thanks!

Basic concepts about "SkeletonConv" and "SkeletonUnpool".

Hi, after reading the implementation code of skeleton.py, I have some questions I'd like to discuss with you.

1. Could I regard "SkeletonConv" as applying a binary mask created from the neighbor list of each joint? If joint A is a neighbor of joint B, then when convolving joint B the mask entry for joint A is 1; otherwise it is set to 0.

2. Does "SkeletonUnpool" just duplicate the features of the pooled joint to increase the number of nodes in the skeleton graph?

Many thanks!
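As a sanity check of readings 1 and 2, here is a dense toy version of both ideas (my own illustration; the repository's SkeletonConv realizes the mask inside a strided temporal convolution over channel blocks, not a plain matrix):

```python
def conv_mask(neighbors):
    """Dense version of reading 1: entry [j][k] is 1 iff joint j may read
    joint k, i.e. k is in j's neighbor list (lists here are assumed to
    include the joint itself)."""
    n = len(neighbors)
    mask = [[0] * n for _ in range(n)]
    for j, nbrs in enumerate(neighbors):
        for k in nbrs:
            mask[j][k] = 1
    return mask

def unpool(features, mapping):
    """Reading 2 as pure duplication: unpooled joint i copies the feature
    of pooled joint mapping[i]."""
    return [features[src] for src in mapping]
```

Multiplying the learned weights elementwise by such a mask is what keeps each output joint's receptive field restricted to its skeletal neighborhood.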

test in style_transfer

In style transfer, when I use other BVH files, why can't I run style_transfer/demo.sh successfully?
Is my BVH's skeleton missing joints? Must the skeleton match the one you provide?

Use my own data for training, but there is no loss value

Excuse me, I use my own data for training, but there is no loss value. Is it because there is a problem with my default configuration?

====characters= [['mouse'], ['strong']]
load from file ./datasets/Mixamo/mouse.npy
Window count: 4, total frame (without downsampling): 3572
load from file ./datasets/Mixamo/strong.npy
Window count: 4, total frame (without downsampling): 3560

No scalar data was found.
Probable causes:

You haven’t written any scalar data to your event files.
TensorBoard can’t find your event files.

If you’re new to using TensorBoard, and want to find out how to add data and set up your event files, check out the README and perhaps the TensorBoard tutorial.

If you think TensorBoard is configured properly, please see the section of the README devoted to missing data problems and consider filing an issue on GitHub.

@kfiraberman

how to obtain training dataset?

Hi, thanks for sharing such a great project! However, I'd like to know whether there is any way to obtain or create some customized training data,
because I'd like to reproduce the full pipeline of the project!
Many thanks!

How to test with custom data?

Hi,
When I use other characters from the Mixamo dataset in the retargeting work, I need a std file, but I didn't find the code to calculate the std. Could you please provide the code to test with custom data?
Thanks!

Raise error in eval_single_pair.py after running train.py in motion retargeting.

Hi, I customized a TrainDataset and tried to use it to train the motion retargeting model from scratch. However, after running train.py (which didn't even run successfully), when I run demo.py or eval_single_pair.py it throws errors like below:
**
loading from ./pretrained/models/topology0
loading from epoch 20000......
Traceback (most recent call last):
File "/home/deep-motion-editing-new/home/deep-motion-editing-new/retargeting/eval_single_pair.py", line 98, in
main()
File "/home/deep-motion-editing-new/home/deep-motion-editing-new/retargeting/eval_single_pair.py", line 78, in main
model.load(epoch=20000)
File "/home/deep-motion-editing-new/home/deep-motion-editing-new/retargeting/models/architecture.py", line 274, in load
model.load(os.path.join(self.model_save_dir, 'topology{}'.format(i)), epoch)
File "/home/deep-motion-editing-new/home/deep-motion-editing-new/retargeting/models/integrated.py", line 83, in load
map_location=self.args.cuda_device))
File "/home/hair_gans/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 847, in load_state_dict
self.class.name, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for AE:
Missing key(s) in state_dict: "enc.layers.2.0.mask", "enc.layers.2.0.weight", "enc.layers.2.0.bias", "enc.layers.2.0.offset_enc.bias", "enc.layers.2.0.offset_enc.weight", "enc.layers.2.0.offset_enc.mask", "enc.layers.2.1.weight", "dec.layers.2.1.weight", "dec.layers.2.2.mask", "dec.layers.2.2.weight", "dec.layers.2.2.bias", "dec.layers.2.2.offset_enc.bias", "dec.layers.2.2.offset_enc.weight", "dec.layers.2.2.offset_enc.mask", "dec.unpools.2.weight", "dec.enc.layers.2.0.mask", "dec.enc.layers.2.0.weight", "dec.enc.layers.2.0.bias", "dec.enc.layers.2.0.offset_enc.bias", "dec.enc.layers.2.0.offset_enc.weight", "dec.enc.layers.2.0.offset_enc.mask", "dec.enc.layers.2.1.weight".
Unexpected key(s) in state_dict: "dec.layers.1.2.bias".
size mismatch for dec.layers.0.1.weight: copying a param with shape torch.Size([192, 112]) from checkpoint, the shape in current model is torch.Size([224, 224]).
size mismatch for dec.layers.0.2.mask: copying a param with shape torch.Size([96, 192, 15]) from checkpoint, the shape in current model is torch.Size([112, 224, 15]).
size mismatch for dec.layers.0.2.weight: copying a param with shape torch.Size([96, 192, 15]) from checkpoint, the shape in current model is torch.Size([112, 224, 15]).
size mismatch for dec.layers.0.2.bias: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([112]).
size mismatch for dec.layers.0.2.offset_enc.bias: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([112]).
size mismatch for dec.layers.0.2.offset_enc.weight: copying a param with shape torch.Size([96, 72]) from checkpoint, the shape in current model is torch.Size([112, 84]).
size mismatch for dec.layers.0.2.offset_enc.mask: copying a param with shape torch.Size([96, 72]) from checkpoint, the shape in current model is torch.Size([112, 84]).
size mismatch for dec.layers.1.1.weight: copying a param with shape torch.Size([184, 96]) from checkpoint, the shape in current model is torch.Size([192, 112]).
size mismatch for dec.layers.1.2.mask: copying a param with shape torch.Size([92, 184, 15]) from checkpoint, the shape in current model is torch.Size([96, 192, 15]).
size mismatch for dec.layers.1.2.weight: copying a param with shape torch.Size([92, 184, 15]) from checkpoint, the shape in current model is torch.Size([96, 192, 15]).
size mismatch for dec.layers.1.2.offset_enc.bias: copying a param with shape torch.Size([92]) from checkpoint, the shape in current model is torch.Size([96]).
size mismatch for dec.layers.1.2.offset_enc.weight: copying a param with shape torch.Size([92, 69]) from checkpoint, the shape in current model is torch.Size([96, 72]).
size mismatch for dec.layers.1.2.offset_enc.mask: copying a param with shape torch.Size([92, 69]) from checkpoint, the shape in current model is torch.Size([96, 72]).
size mismatch for dec.unpools.0.weight: copying a param with shape torch.Size([192, 112]) from checkpoint, the shape in current model is torch.Size([224, 224]).
size mismatch for dec.unpools.1.weight: copying a param with shape torch.Size([184, 96]) from checkpoint, the shape in current model is torch.Size([192, 112]).

Process finished with exit code 1
**

Even if I use version control tools to roll back the code, it still raises this error!
Have you ever encountered this before?
Many thanks!

Can Unpaired Motion Style Transfer be adjusted?

From looking at both videos, I see that this is being designed for "intelligent AI retargeting and stylized motion". Can the strength of the stylized motion be adjusted? Is this project related to FACS AUs, as far as using emotions for mocap instead of direct xyz coordinates?

Thanks!

Error: The system cannot find the path specified: './pretrained/results/bvh\\Mousey_m'

Error when running: sh demo.sh

System:
Windows 10 build 19.09
Python 3.7.7

1) I cloned the project and downloaded the Mixamo dataset and placed it in the datasets folder
2) ran sh demo.sh

Error:

loading from epoch 20000......
load succeed!
The syntax of the command is incorrect.
Traceback (most recent call last):
File "eval_single_pair.py", line 99, in
main()
File "eval_single_pair.py", line 93, in main
model.test()
File "D:\Development\deep-motion-editing\retargeting\models\base_model.py", line 96, in test
self.compute_test_result()
File "D:\Development\deep-motion-editing\retargeting\models\architecture.py", line 296, in compute_test_result
self.writer[src][i].write_raw(gt[i, ...], 'quaternion', os.path.join(new_path, '{}_gt.bvh'.format(self.id_test)))
File "D:\Development\deep-motion-editing\retargeting\datasets\bvh_writer.py", line 91, in write_raw
return self.write(rotations, positions, order, path, frametime, root_y=root_y)
File "D:\Development\deep-motion-editing\retargeting\datasets\bvh_writer.py", line 80, in write
return write_bvh(self.parent, offset, rotations_full, positions, self.names, frametime, order, path)
File "D:\Development\deep-motion-editing\retargeting\datasets\bvh_writer.py", line 10, in write_bvh
file = open(path, 'w')
FileNotFoundError: [Errno 2] No such file or directory: './pretrained/results/bvh\Aj\0_gt.bvh'
Traceback (most recent call last):
File "demo.py", line 46, in
example('Aj', 'BigVegas', 'Dancing Running Man.bvh', 'intra', './examples/intra_structure')
File "demo.py", line 42, in example
height)
File "D:\Development\deep-motion-editing\retargeting\models\IK.py", line 57, in fix_foot_contact
anim, name, ftime = BVH.load(input_file)
File "../utils\BVH.py", line 58, in load
f = open(filename, "r")
FileNotFoundError: [Errno 2] No such file or directory: './examples/intra_structure\result.bvh'

Error when running python test.py

loading from epoch 20000......
load succeed!
loading from ./pretrained/models\topology1
loading from epoch 20000......
load succeed!
0%| | 0/106 [00:00<?
The syntax of the command is incorrect.
0%| | 0/106 [00:02<?
Traceback (most recent call last):
File "eval.py", line 37, in
main()
File "eval.py", line 33, in main
model.test()
File "D:\Mocap Software\deep-motion-editing\retargeting\models\base_model.py", line 96, in test
self.compute_test_result()
File "D:\Mocap Software\deep-motion-editing\retargeting\models\architecture.py", line 296, in compute_test_re
self.writer[src][i].write_raw(gt[i, ...], 'quaternion', os.path.join(new_path, '{}_gt.bvh'.format(self.id_t
File "D:\Mocap Software\deep-motion-editing\retargeting\datasets\bvh_writer.py", line 91, in write_raw
return self.write(rotations, positions, order, path, frametime, root_y=root_y)
File "D:\Mocap Software\deep-motion-editing\retargeting\datasets\bvh_writer.py", line 80, in write
return write_bvh(self.parent, offset, rotations_full, positions, self.names, frametime, order, path)
File "D:\Mocap Software\deep-motion-editing\retargeting\datasets\bvh_writer.py", line 10, in write_bvh
file = open(path, 'w')
FileNotFoundError: [Errno 2] No such file or directory: './pretrained/results/bvh\BigVegas\0_gt.bvh'
Collecting test error...
Traceback (most recent call last):
File "test.py", line 35, in
cross_error += full_batch(0)
File "D:\Mocap Software\deep-motion-editing\retargeting\get_error.py", line 15, in full_batch
res.append(batch(char, suffix))
File "D:\Mocap Software\deep-motion-editing\retargeting\get_error.py", line 31, in batch
files = [f for f in os.listdir(new_p) if
FileNotFoundError: [WinError 3] The system cannot find the path specified: './pretrained/results/bvh\Mousey_m'

I have been looking for a project like this. There are many excellent projects out there that output 2D points, such as OpenPose, but not many that do the conversion to 3D. The ones I tried didn't really work well. I am excited about this project since it is specifically what I am looking for.

Thanks,
Dan

about skinning

In the skinning step, I have some questions.

  1. After we import the fbx, what is meant by "Merge meshes - select all the parts and merge them (ctrl+J)"?
  2. Could you please give me more details about skinning?

Trying to load BVH in Blender doesn't seem to work

Hello, as per the title

import sys
import os
sys.path.append("../utils")
sys.path.append("./")
import BVH
import numpy as np
import bpy
import mathutils
import pdb

#scale factor for bone length
global_scale = 10

class BVH_file:
    def __init__(self, file_path):
        self.anim, self.names, self.frametime = BVH.load(file_path)

        #permute (x, y, z) to (z, x, y)
        tmp = self.anim.offsets.copy()
        self.anim.offsets[..., 0] = tmp[..., 2]
        self.anim.offsets[..., 1] = tmp[..., 0]
        self.anim.offsets[..., 2] = tmp[..., 1]

BVH.load(file_path) refers to import BVH, which does not exist in current Blender (2.83).
What Python package is that?
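For what it's worth, BVH here does not appear to be a Python package at all: the tracebacks elsewhere in this thread point at ../utils/BVH.py, i.e. a module shipped inside this repository, so the sys.path.append("../utils") in the script only resolves when Blender is launched from the matching subfolder. A small helper sketch (the repo path below is hypothetical):

```python
import os
import sys

def add_repo_utils_to_path(repo_root):
    """Make `import BVH` resolve to utils/BVH.py inside the repository.

    `repo_root` is wherever you cloned deep-motion-editing (a hypothetical
    path in the usage below); BVH is not a PyPI or Blender module.
    """
    utils_dir = os.path.join(repo_root, "utils")
    if utils_dir not in sys.path:
        sys.path.append(utils_dir)
    return utils_dir

# e.g. inside Blender's scripting console:
# add_repo_utils_to_path("/path/to/deep-motion-editing")  # hypothetical path
# import BVH
```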

all windows length is -1

================================windows length is -1
Traceback (most recent call last):
File "./datasets/preprocess.py", line 72, in
write_statistics(character, './datasets/Mixamo/mean_var/')
File "./datasets/preprocess.py", line 34, in write_statistics
dataset = MotionData(new_args)
File "/home/wuxiaoliang/docker/motion_retarget/deep-motion-editing/retargeting/datasets/motion_dataset.py", line 33, in init
new_windows = self.get_windows(motions)
File "/home/wuxiaoliang/docker/motion_retarget/deep-motion-editing/retargeting/datasets/motion_dataset.py", line 106, in get_windows
return torch.cat(new_windows)
RuntimeError: There were no tensor arguments to this function (e.g., you passed an empty list of Tensors), but no fallback function is registered for schema aten::_cat. This usually means that this function requires a non-empty list of Tensors. Available functions are [CPUTensorId, CUDATensorId, QuantizedCPUTensorId, VariableTensorId]

How can I solve this problem? @kfiraberman
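The traceback ends in torch.cat receiving an empty list: get_windows produced no windows at all, which (together with the "windows length is -1" print above) suggests the clips never fill a single window. A guard sketch that fails earlier with a readable message (illustrative only; the real fix, adjusting window_size or the data, is repo-specific):

```python
def concat_windows(new_windows):
    """Fail early when no motion windows were produced.

    torch.cat on an empty list raises exactly the opaque RuntimeError in
    the traceback above; the usual cause is every input clip being shorter
    than the configured window size, so no window is ever emitted.
    """
    if not new_windows:
        raise ValueError(
            "no motion windows produced; check that your clips are longer "
            "than the configured window size")
    return new_windows  # the repository would return torch.cat(new_windows)
```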

training of retargeting

What great work!
Could you please give us the training method and the retargeting datasets?

combined_dataset

Is it possible to train the model by combining your .npy files (Mixamo characters) with a custom rig's .npy (a new custom character)?

Where is model?

Your work is great, and I'm curious about the model for the bvh data. I want to render the animation of the Jerry model from your presentation video.

What are the "xxx_virtual" joints produced by the function "build_joint_topology" in skeleton.py?

Hi there, I got a question about the function "build_joint_topology" in skeleton.py.

I understand that this function maps the "edges" to "joints" so that we can write the joint info into a .bvh file.

But why do we need to add "xxx_virtual" joints to the skeleton?

And I was somewhat confused about the role of the variable named "out_degree" defined at line #295.

Why is only the position of the root joint of each frame sent into the model as input?

Hi,
I was curious why only the position of the root joint of each frame is sent into the model as input.
I mean, after parsing the .bvh file we could get the positions of every joint at every frame. But we only use the root joint's position as input; is it because the other joints' position info is redundant?

Many thanks!

The reason for representing rotation by quaternions instead of Euler angles.

Hi, sorry for bothering again,

May I ask why you chose quaternions to represent rotation instead of Euler angles? At first I thought it was because, if we don't know the application order of the Euler angles, we might obtain the wrong rotation; but since we can read the order of the Euler angles from the .bvh file, why still use the quaternion representation?

Many thanks!
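Pending the authors' answer, the reasons usually cited for quaternions in motion models (my own summary, not theirs) are avoiding gimbal lock and getting a representation that interpolates and regresses smoothly. The order sensitivity of Euler angles is easy to see with plain quaternion algebra:

```python
import math

def quat_about(axis, degrees):
    """Unit quaternion (w, x, y, z) for a rotation about a unit axis."""
    half = math.radians(degrees) / 2.0
    s = math.sin(half)
    return (math.cos(half), axis[0] * s, axis[1] * s, axis[2] * s)

def quat_mul(q, r):
    """Hamilton product: quat_mul(q, r) applies r first, then q."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

x90 = quat_about((1, 0, 0), 90)
z90 = quat_about((0, 0, 1), 90)
x_then_z = quat_mul(z90, x90)  # rotate about X, then about Z
z_then_x = quat_mul(x90, z90)  # same two angles, opposite order
```

`x_then_z` and `z_then_x` come out as different quaternions: the same pair of Euler angles names two different orientations depending on order, whereas each quaternion pins down a single orientation with no order convention attached.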

Do I need to train new models for new characters in retargeting?

Hi, all

I read your papers carefully. You mentioned that "since the domains have different dimensions, the two networks (A → B, B → A) cannot share weights, so they had to be trained separately."

Does this mean I need to train a new model if I want to use a character with a different number of bones from the training set?

Thanks!

about architecture.py -> def backward_d(self) in retargeting

I want to retrain the retargeting model on my own dataset (e.g., the CMU dataset).
When I use the CMU dataset, I find that split_joint.py can't split the dataset into '***_m', so len(characters) == 1 in retargeting/datasets/init.py.
Then when the code runs architecture.py -> def backward_d(self) -> fake = self.fake_pools[i].query(self.fake_pos[2 - i]), I get an error: IndexError: list index out of range.
So what is the meaning of "self.fake_pos[2 - i]"?

What's the role of "std_bvh" files?

Hi, could you please describe the role of the std_bvh files? I noticed that they are loaded during testing as well as training. What are they for?

demo cannot find pytorch while pip shows is installed

Hello there,
First of all, kudos for this interesting project. I have tons of animations, and from time to time I bang my head on how to port them gracefully...

I'm trying to run the demo but it fails.

user:retargeting source demo.sh 
Traceback (most recent call last):
  File "eval_single_pair.py", line 2, in <module>
    import torch
ImportError: No module named torch
Traceback (most recent call last):
  File "demo.py", line 49, in <module>
    example('Aj', 'BigVegas', 'Dancing Running Man.bvh', 'intra', './examples/intra_structure')
  File "demo.py", line 45, in example
    height)
  File "/Users/max/Developer/Library/Graphics/moremocap/deep-motion-editing/retargeting/models/IK.py", line 57, in fix_foot_contact
    anim, name, ftime = BVH.load(input_file)
  File "../utils/BVH.py", line 58, in load
    f = open(filename, "r")
FileNotFoundError: [Errno 2] No such file or directory: './examples/intra_structure/result.bvh'

Seems it cannot find pytorch, but my pip shows it is installed:

user:retargeting pip list
Package          Version
---------------- --------
appnope          0.1.0
backcall         0.1.0
certifi          2019.3.9
decorator        4.4.2
future           0.18.2
ipython          7.13.0
ipython-genutils 0.2.0
jedi             0.17.0
macpack          1.0.3
numpy            1.16.2
parso            0.7.0
pexpect          4.8.0
pickleshare      0.7.5
pip              20.1.1
prompt-toolkit   3.0.5
ptyprocess       0.6.0
Pygments         2.6.1
scipy            1.4.1
setuptools       39.0.1
six              1.14.0
torch            1.5.0
tqdm             4.46.1
traitlets        4.3.3
wcwidth          0.1.9
wheel            0.34.2

Anyway, if I run eval_single_pair.py directly it works:

user:retargeting python eval_single_pair.py
usage: eval_single_pair.py [-h] [--save_dir SAVE_DIR]
                           [--cuda_device CUDA_DEVICE]
                           [--num_layers NUM_LAYERS]
                           [--learning_rate LEARNING_RATE] [--alpha ALPHA]
                           [--batch_size BATCH_SIZE] [--upsampling UPSAMPLING]
                           [--downsampling DOWNSAMPLING]
                           [--batch_normalization BATCH_NORMALIZATION]
                           [--activation ACTIVATION] [--rotation ROTATION]
                           [--data_augment DATA_AUGMENT]
                           [--epoch_num EPOCH_NUM] [--window_size WINDOW_SIZE]
                           [--kernel_size KERNEL_SIZE]
                           [--base_channel_num BASE_CHANNEL_NUM]
                           [--normalization NORMALIZATION] [--verbose VERBOSE]
                           [--skeleton_dist SKELETON_DIST]
                           [--skeleton_pool SKELETON_POOL]
                           [--extra_conv EXTRA_CONV]
                           [--padding_mode PADDING_MODE] [--dataset DATASET]
                           [--fk_world FK_WORLD] [--patch_gan PATCH_GAN]
                           [--debug DEBUG] [--skeleton_info SKELETON_INFO]
                           [--ee_loss_fact EE_LOSS_FACT] [--pos_repr POS_REPR]
                           [--D_global_velo D_GLOBAL_VELO]
                           [--gan_mode GAN_MODE] [--pool_size POOL_SIZE]
                           [--is_train IS_TRAIN] [--model MODEL]
                           [--epoch_begin EPOCH_BEGIN]
                           [--lambda_rec LAMBDA_REC]
                           [--lambda_cycle LAMBDA_CYCLE]
                           [--lambda_ee LAMBDA_EE]
                           [--lambda_global_pose LAMBDA_GLOBAL_POSE]
                           [--lambda_position LAMBDA_POSITION]
                           [--ee_velo EE_VELO] [--ee_from_root EE_FROM_ROOT]
                           [--scheduler SCHEDULER]
                           [--rec_loss_mode REC_LOSS_MODE]
                           [--adaptive_ee ADAPTIVE_EE]
                           [--simple_operator SIMPLE_OPERATOR]
                           [--use_sep_ee USE_SEP_EE] [--eval_seq EVAL_SEQ]
                           --input_bvh INPUT_BVH --target_bvh TARGET_BVH
                           --test_type TEST_TYPE --output_filename
                           OUTPUT_FILENAME
eval_single_pair.py: error: the following arguments are required: --input_bvh, --target_bvh, --test_type, --output_filename

That means a Python script run directly finds pytorch OK. Maybe it could be something related to the os.system calls? My setup:

Mac Os X 10.14.3 Mojave
default python.org Python 3.7.0, no envs

Thank you
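A plausible explanation, consistent with the last experiment above: demo.py launches eval_single_pair.py via an os.system("python …") call, and that bare `python` can resolve to a different interpreter than the one running demo.py (a python.org install on macOS is a classic case). A diagnostic sketch (my own, not the repository's code) that pins the child to the same interpreter:

```python
import subprocess
import sys

def run_in_same_env(*args):
    """Launch a child Python with the exact interpreter running this
    script, instead of whatever bare `python` resolves to on PATH
    (which is what os.system("python ...") uses)."""
    return subprocess.run([sys.executable, *args]).returncode
```

Swapping the shell-out in demo.py for something like `run_in_same_env("eval_single_pair.py", ...)` would rule the interpreter mismatch in or out.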

Question about Mixamo dataset

Hi, great work! I am curious about the dataset setting: are all of the Mixamo animations manually designed, or are they also created by some kind of motion retargeting? It raises my doubts when some characters in Mixamo, e.g., Sword Woman, do not behave naturally in certain motions. If the Mixamo animations are not designed by human artists, then why can we take them as ground truth (please feel free to correct me)? Thanks.

run python test.py error, maybe some error in the code

python test.py
Batch [1/4]
Error: Numpy + Intel(R) MKL: MKL_THREADING_LAYER=INTEL is incompatible with libgomp-7c85b1e2.so.1 library.
Try to import numpy first or set the threading layer accordingly. Set NPY_MKL_FORCE_INTEL to force it.
Collecting test error...
Traceback (most recent call last):
File "test.py", line 35, in
cross_error += full_batch(0)
File "/home/linux/workspace/skeletonAware/deep-motion-editing-master/retargeting/get_error.py", line 15, in full_batch
res.append(batch(char, suffix))
File "/home/linux/workspace/skeletonAware/deep-motion-editing-master/retargeting/get_error.py", line 54, in batch
err = (pos - pos_ref) * (pos - pos_ref)
ValueError: operands could not be broadcast together with shapes (108,28,3) (156,28,3)
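The first error is likely the real failure here: the eval subprocess dies on the MKL/libgomp clash, so the result files on disk are stale, and the later broadcast mismatch (108 vs 156 frames) may just be comparing against leftovers from an earlier run. The error message itself names the knobs; which value works depends on your conda/NumPy build:

```shell
# Workaround sketch for the MKL/libgomp clash: pick the threading layer
# before Python starts. GNU matches libgomp; the error message itself
# offers NPY_MKL_FORCE_INTEL as the alternative.
export MKL_THREADING_LAYER=GNU
# or: export NPY_MKL_FORCE_INTEL=1
```

Then re-run python test.py in the same shell.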

Is it necessary to use "paired" data for training when eliminating the adversarial loss of GAN?

Hi Peizhuo,

Thanks for the guidance these days. I've reproduced part of your work (https://github.com/crissallan/Skeleton_Aware_Networks_for_Deep_Motion_Retargeting).

However, I haven't added the GAN part to the model (since training GANs needs lots of tricks). My training pipeline currently works in a "paired" way, which means the .bvh files of skeletons A and B are the same during every training iteration.

I tried to train the model in an "unpaired" way, but the loss is really hard to converge. According to the ablation study of your paper, it states that "omitting the adversarial loss improves the performance for intra-structural skeletons".

So I'd like to know: after removing the adv_loss, did you train the network in a "paired" (supervised) way or still in an "unpaired" way?

Many thanks!
