delta-interpolator's People

Contributors

antonvalk, boreshkinai

delta-interpolator's Issues

importing BVH animations onto puppet model in blender

Hello,
I am trying to visualize some of the LAFAN dataset in Blender. I can successfully import the BVH files, which come with their own armature, and those animations look fine, but they are only bones with no mesh attached. I am not sure how to apply the BVH animation to the puppet rig and mesh, or how to rig the armature and mesh onto the imported BVH animation. Did you do all of the animation retargeting in Unity, or do you know how to do this in Blender?
Thanks,
Sarah

pretrain models and visualization code

Hi!

Great work!

I wonder if there's a pretrained model we can play with, and whether it's possible to release the visualization code?

Also, there's a typo in docker exec -i -t pose_estimation_$USER /bin/bash; it should be delta_interpolator_$USER, I think?

Thanks in advance,
Best regards

Can't find libnvidia-ml.so.1

Hello,
When running the container on Ubuntu 22.04, I run into the following issue:

$ sudo nvidia-docker run -p 18888:8888 -p 16006:6006 -v ~/workspace/delta-interpolator:/workspace/delta-interpolator -t -d --shm-size="8g" --name delta_interpolator_$USER delta_interpolator:$USER
docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #1: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy'
nvidia-container-cli: initialization error: load library failed: libnvidia-ml.so.1: cannot open shared object file: no such file or directory: unknown.

The issue seems to be that libnvidia-ml.so.1 could not be found. The following packages supply it:

$ sudo apt-file find libnvidia-ml.so.1
libnvidia-compute-390: /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1
libnvidia-compute-418-server: /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1
libnvidia-compute-450-server: /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1
libnvidia-compute-470: /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1
libnvidia-compute-470-server: /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1
libnvidia-compute-510: /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1
libnvidia-compute-510-server: /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1
libnvidia-compute-515: /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1
libnvidia-compute-515-server: /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1
libnvidia-compute-525: /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1

I'm not sure which one should be installed, since I suppose it depends on the driver version that comes with the container. That said, I suspect the host machine dictates which driver version should be used, since it must match the hardware. Being unfamiliar with Docker, I'm not sure what the correct way to make this work is.
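For what it's worth, libnvidia-ml.so.1 is normally supplied by the host's NVIDIA driver and bind-mounted into the container by the NVIDIA container runtime, so installing one of the libnvidia-compute-* packages inside the image is usually not the fix; the host driver installation is what matters. A quick host-side diagnostic, as a sketch (the helper name is mine):

```python
import ctypes.util

def has_nvidia_ml() -> bool:
    """Return True if the dynamic linker can resolve the NVML library
    (libnvidia-ml.so.1) on this host."""
    return ctypes.util.find_library("nvidia-ml") is not None

print("NVML resolvable on host:", has_nvidia_ml())
```

If this prints False on the host, the driver (or its library path) is the problem, and no package installed inside the container will help.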

Cannot build Docker image: unable to install tini

Hi! When trying to build the Docker image on an Ubuntu 22.04 machine, I get the following logs:

Step 10/19 : RUN apt-get install -y curl grep sed dpkg &&     TINI_VERSION=`curl https://github.com/krallin/tini/releases/latest | grep -o "/v.*\"" | sed 's:^..\(.*\).$:\1:'` &&     curl -L "https://github.com/krallin/tini/releases/download/v${TINI_VERSION}/tini_${TINI_VERSION}.deb" > tini.deb &&     dpkg -i tini.deb &&     rm tini.deb &&     apt-get clean &&     rm -rf /var/lib/apt/lists/*
 ---> Running in eac8c955105f
Reading package lists...
Building dependency tree...
Reading state information...
sed is already the newest version (4.4-2).
grep is already the newest version (3.1-2build1).
The following additional packages will be installed:
  libcurl4
Suggested packages:
  debsig-verify
The following NEW packages will be installed:
  curl libcurl4
The following packages will be upgraded:
  dpkg
1 upgraded, 2 newly installed, 0 to remove and 56 not upgraded.
Need to get 1,516 kB of archives.
After this operation, 1,053 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 dpkg amd64 1.19.0.5ubuntu2.4 [1,137 kB]
Get:2 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libcurl4 amd64 7.58.0-2ubuntu3.21 [220 kB]
Get:3 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 curl amd64 7.58.0-2ubuntu3.21 [159 kB]
debconf: delaying package configuration, since apt-utils is not installed
Fetched 1,516 kB in 1s (1,032 kB/s)
(Reading database ... 11409 files and directories currently installed.)
Preparing to unpack .../dpkg_1.19.0.5ubuntu2.4_amd64.deb ...
Unpacking dpkg (1.19.0.5ubuntu2.4) over (1.19.0.5ubuntu2.3) ...
Setting up dpkg (1.19.0.5ubuntu2.4) ...
Selecting previously unselected package libcurl4:amd64.
(Reading database ... 11409 files and directories currently installed.)
Preparing to unpack .../libcurl4_7.58.0-2ubuntu3.21_amd64.deb ...
Unpacking libcurl4:amd64 (7.58.0-2ubuntu3.21) ...
Selecting previously unselected package curl.
Preparing to unpack .../curl_7.58.0-2ubuntu3.21_amd64.deb ...
Unpacking curl (7.58.0-2ubuntu3.21) ...
Setting up libcurl4:amd64 (7.58.0-2ubuntu3.21) ...
Setting up curl (7.58.0-2ubuntu3.21) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100     9  100     9    0     0     30      0 --:--:-- --:--:-- --:--:--    30
dpkg-deb: error: 'tini.deb' is not a Debian format archive
dpkg: error processing archive tini.deb (--install):
 dpkg-deb --control subprocess returned error exit status 2
Errors were encountered while processing:
 tini.deb
The command '/bin/sh -c apt-get install -y curl grep sed dpkg &&     TINI_VERSION=`curl https://github.com/krallin/tini/releases/latest | grep -o "/v.*\"" | sed 's:^..\(.*\).$:\1:'` &&     curl -L "https://github.com/krallin/tini/releases/download/v${TINI_VERSION}/tini_${TINI_VERSION}.deb" > tini.deb &&     dpkg -i tini.deb &&     rm tini.deb &&     apt-get clean &&     rm -rf /var/lib/apt/lists/*' returned a non-zero code: 1

The issue seems to be that the curl command that sets TINI_VERSION no longer works. At least on my machine, curl https://github.com/krallin/tini/releases/latest returns only a tiny redirect body (note the 9 bytes fetched in the log above), so the grep/sed pipeline yields an empty version string.
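One possible workaround (a sketch, not the repo's actual fix): resolve the redirect explicitly, e.g. with curl -fsSLI -o /dev/null -w '%{url_effective}' https://github.com/krallin/tini/releases/latest, and take the version from the final /releases/tag/vX.Y.Z URL instead of scraping the response body. The parsing step in Python (the function name is mine):

```python
import re

def tini_version_from_url(release_url: str) -> str:
    """Extract the version number from a resolved GitHub release URL,
    e.g. '.../releases/tag/v0.19.0' -> '0.19.0'."""
    m = re.search(r"/tag/v([0-9][\w.\-]*)$", release_url)
    if m is None:
        raise ValueError(f"unexpected release URL: {release_url}")
    return m.group(1)
```

Pinning TINI_VERSION to a known release in the Dockerfile would also sidestep the scraping entirely.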

How to run the model only on the test set

I am interested in testing the model on LAFAN and visualizing the predicted test skeletons. I don't see any functionality for that. Any guidance would be appreciated!

can not run the code

Hi,

I have created the same environment, but I could not run the code.

I get the following error with every command in the README.md. Does this code really work?

(pt1.12) E:\PycharmProjects\delta-interpolator>python run.py --config=src/configs/zerovel_anidance.yaml
C:\ProgramData\Anaconda3\envs\pt1.12\lib\site-packages\hydra\experimental\initialize.py:43: UserWarning: hydra.experimental.initialize() is no longer experimental. Use hydra.initialize()
  deprecation_warning(message=message)
C:\ProgramData\Anaconda3\envs\pt1.12\lib\site-packages\hydra\experimental\initialize.py:48: UserWarning:
The version_base parameter is not specified.
Please specify a compatability version level, or None.
Will assume defaults for version 1.1
  caller_stack_depth=caller_stack_depth + 1,
C:\ProgramData\Anaconda3\envs\pt1.12\lib\site-packages\hydra\experimental\compose.py:25: UserWarning: hydra.experimental.compose() is no longer experimental. Use hydra.compose()
  deprecation_warning(message=message)
C:\ProgramData\Anaconda3\envs\pt1.12\lib\site-packages\hydra\core\default_element.py:128: UserWarning: In 'model/anidance_inbetween_default': Usage of deprecated keyword in package header '# @package _group_'.
See https://hydra.cc/docs/1.2/upgrades/1.0_to_1.1/changes_to_package_header for more information
  See {url} for more information"""
type( cfg.dataset) = <class 'omegaconf.dictconfig.DictConfig'>
cfg.dataset = {'_target_': 'src.data.sequence_module.AlternateSequenceDataModule', 'backbone': {'_target_': 'src.data.batched_sequence_dataset.AnidanceSequenceDataset'}, 'path': './datasets/anidance/dances', 'name': 'anidance', 'batch_size': 2048, 'num_workers': 4, 'sequence_offset_train': 64, 'min_sequence_length_train': 128, 'max_sequence_length_train': 128, 'use_sliding_windows': True, 'sequence_offset_validation': 64, 'sequence_length_validation': 128, 'mirror': False, 'rotate': False, 'augment_training': False, 'augment_validation': False, 'centerXZ': False, 'y_rotate_on_frame': -1, 'remove_quat_discontinuities': False}
Traceback (most recent call last):
  File "C:\ProgramData\Anaconda3\envs\pt1.12\lib\site-packages\hydra\_internal\instantiate\_instantiate2.py", line 92, in _call_target
    return _target_(*args, **kwargs)
TypeError: __init__() missing 1 required positional argument: 'source'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "run.py", line 164, in <module>
    main(filepath=args.config, overrides=args.overrides)
  File "run.py", line 154, in main
    run(cfg.model)
  File "run.py", line 56, in run
    dm = instantiate(cfg.dataset)
  File "C:\ProgramData\Anaconda3\envs\pt1.12\lib\site-packages\hydra\_internal\instantiate\_instantiate2.py", line 227, in instantiate
    config, *args, recursive=_recursive_, convert=_convert_, partial=_partial_
  File "C:\ProgramData\Anaconda3\envs\pt1.12\lib\site-packages\hydra\_internal\instantiate\_instantiate2.py", line 343, in instantiate_node
    value, convert=convert, recursive=recursive
  File "C:\ProgramData\Anaconda3\envs\pt1.12\lib\site-packages\hydra\_internal\instantiate\_instantiate2.py", line 347, in instantiate_node
    return _call_target(_target_, partial, args, kwargs, full_key)
  File "C:\ProgramData\Anaconda3\envs\pt1.12\lib\site-packages\hydra\_internal\instantiate\_instantiate2.py", line 97, in _call_target
    raise InstantiationException(msg) from e
hydra.errors.InstantiationException: Error in call to target 'src.data.batched_sequence_dataset.AnidanceSequenceDataset':
TypeError("__init__() missing 1 required positional argument: 'source'")
full_key: dataset.backbone
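The failure mode here is generic to Hydra's instantiate: it calls the class named by _target_ with the remaining config keys as keyword arguments, so if the nested dataset.backbone node carries no source key (and nothing injects one), the constructor raises exactly this TypeError. A minimal self-contained sketch of the mechanism (the class and config keys below are illustrative, not the project's actual API):

```python
class BackboneDataset:
    """Stand-in for the project's dataset class; the real class also
    appears to require a positional `source` argument."""
    def __init__(self, source, window=64):
        self.source = source
        self.window = window

def instantiate(cfg: dict):
    """Toy version of hydra.utils.instantiate: pop the target and pass
    every remaining config key as a keyword argument."""
    cfg = dict(cfg)
    target = cfg.pop("_target_")
    return target(**cfg)

# With 'source' present, instantiation succeeds...
dm = instantiate({"_target_": BackboneDataset, "source": "./datasets"})

# ...and a config node missing 'source' reproduces the reported error.
try:
    instantiate({"_target_": BackboneDataset, "window": 64})
except TypeError as err:
    print(err)  # ... missing 1 required positional argument: 'source'
```

This suggests the installed Hydra version and the checked-in configs disagree about how the nested backbone node is resolved, rather than a problem in the dataset files themselves.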

Running on Ubuntu 22.10: can't use nvidia-docker, running into hydra error without docker

Hi again,

To avoid any additional XY problem, here's what I'm trying to do: I'd like to run your code, but I only have an Ubuntu 22.10 machine at hand. nvidia-docker does not work with Ubuntu 22.10 (see this), hence I tried running everything inside a 22.04 VirtualBox VM. That does not work, because the VM can't get proper access to the GPU.

To avoid all this, I tried running the code without Docker, but that triggers an exception when initializing the dataset:

/usr/lib/python3/dist-packages/pkg_resources/__init__.py:116: PkgResourcesDeprecationWarning: 2.22.2ubuntu3 is an invalid version and will not be supported in a future release
  warnings.warn(
$HOME/.local/lib/python3.10/site-packages/hydra/experimental/initialize.py:43: UserWarning: hydra.experimental.initialize() is no longer experimental. Use hydra.initialize()
  deprecation_warning(message=message)
$HOME/.local/lib/python3.10/site-packages/hydra/experimental/initialize.py:45: UserWarning: 
The version_base parameter is not specified.
Please specify a compatability version level, or None.
Will assume defaults for version 1.1
  self.delegate = real_initialize(
$HOME/.local/lib/python3.10/site-packages/hydra/experimental/compose.py:25: UserWarning: hydra.experimental.compose() is no longer experimental. Use hydra.compose()
  deprecation_warning(message=message)
$HOME/.local/lib/python3.10/site-packages/hydra/core/default_element.py:124: UserWarning: In 'model/lafan_inbetween_default': Usage of deprecated keyword in package header '# @package _group_'.
See https://hydra.cc/docs/next/upgrades/1.0_to_1.1/changes_to_package_header for more information
  deprecation_warning(
Traceback (most recent call last):
  File "$HOME/.local/lib/python3.10/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 92, in _call_target
    return _target_(*args, **kwargs)
TypeError: LafanSequenceDataset.__init__() missing 1 required positional argument: 'source'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "delta-interpolator/run.py", line 143, in <module>
    main(filepath=args.config, overrides=args.overrides)
  File "delta-interpolator/run.py", line 133, in main
    run(cfg.model)
  File "delta-interpolator/run.py", line 47, in run
    dm = instantiate(cfg.dataset)
  File "$HOME/.local/lib/python3.10/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 222, in instantiate
    return instantiate_node(
  File "$HOME/.local/lib/python3.10/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 334, in instantiate_node
    value = instantiate_node(
  File "$HOME/.local/lib/python3.10/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 339, in instantiate_node
    return _call_target(_target_, partial, args, kwargs, full_key)
  File "$HOME/.local/lib/python3.10/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 97, in _call_target
    raise InstantiationException(msg) from e
hydra.errors.InstantiationException: Error in call to target 'src.data.batched_sequence_dataset.LafanSequenceDataset':
TypeError("LafanSequenceDataset.__init__() missing 1 required positional argument: 'source'")
full_key: dataset.backbone

Now I know that the Dockerfile was provided to help with reproducibility and I'm bypassing all of that, but I feel out of options here and I don't want to downgrade my OS to run this. I haven't been able to fix the above errors by digging into the code a bit, but maybe this'll be clearer to you?

Would appreciate any help!

About visualization

Hi!

Thanks for making this amazing project publicly available!

Could you please provide an example/demo of showing how to visualize the trained model?

Also, it would be nice if you could provide the script that processes the .bvh files into the .csv files that are ready to be trained on, because I want to try it on another mocap dataset. :D

Best,
Haozhou

Freeze markupsafe to version 2.0.1 in requirements.txt

Hi again,
Running the container on Ubuntu 22.10 yields the following docker logs:

Traceback (most recent call last):
  File "/opt/conda/bin/jupyter-notebook", line 7, in <module>
    from notebook.notebookapp import main
  File "/opt/conda/lib/python3.7/site-packages/notebook/notebookapp.py", line 43, in <module>
    from jinja2 import Environment, FileSystemLoader
  File "/opt/conda/lib/python3.7/site-packages/jinja2/__init__.py", line 12, in <module>
    from .environment import Environment
  File "/opt/conda/lib/python3.7/site-packages/jinja2/environment.py", line 25, in <module>
    from .defaults import BLOCK_END_STRING
  File "/opt/conda/lib/python3.7/site-packages/jinja2/defaults.py", line 3, in <module>
    from .filters import FILTERS as DEFAULT_FILTERS  # noqa: F401
  File "/opt/conda/lib/python3.7/site-packages/jinja2/filters.py", line 13, in <module>
    from markupsafe import soft_unicode
ImportError: cannot import name 'soft_unicode' from 'markupsafe' (/opt/conda/lib/python3.7/site-packages/markupsafe/__init__.py)

Freezing the markupsafe version by adding markupsafe==2.0.1 to requirements.txt fixes the issue.
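For context: markupsafe 2.1.0 removed soft_unicode, which the jinja2 release inside the image still imports, and 2.0.1 is the last release that keeps it. The compatibility bound as a tiny helper (a sketch; the function name is mine):

```python
def markupsafe_has_soft_unicode(version: str) -> bool:
    """markupsafe removed `soft_unicode` in 2.1.0; any release below
    that still satisfies the old jinja2 import."""
    parts = tuple(int(p) for p in version.split("."))
    return parts < (2, 1, 0)

assert markupsafe_has_soft_unicode("2.0.1")      # the proposed pin: OK
assert not markupsafe_has_soft_unicode("2.1.1")  # breaks old jinja2
```

Upgrading jinja2 instead of pinning markupsafe would also work, but the pin is the smaller change to the existing requirements.txt.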

About Dataset Pre-Processing

Hi,

Thanks for making this awesome project open source. I encountered some questions about training this model on another dataset.

  1. Could you share more details on how you pre-processed the LAFAN1 data (*.bvh) into deeppose_lafan_v1_fps? I found that the offset and localOffset in the data_settings file are not the same as the offsets in the original BVH data.
  2. Does the data in the .csv files consist of global positions and global quaternions?

Thanks~

(screenshot attached: Snipaste_2022-05-13_16-08-29)
