
video_to_bvh's Introduction

video_to_bvh

Convert human motion from video to .bvh with Google Colab

Usage

1. Open video_to_bvh.ipynb in Google Colab

  1. Go to https://colab.research.google.com
  2. File > Upload notebook... > GitHub > Paste this link: https://github.com/Dene33/video_to_bvh/blob/master/video_to_bvh.ipynb
  3. Ensure that Runtime > Change runtime type is set to Python 3 with GPU as the hardware accelerator

2. Initial imports, install, initializations

The second step installs all the required dependencies. Select the first code cell and press Shift+Enter. You'll see the output of the executing code scroll by; wait until it finishes (1-2 minutes).

3. Upload video

  1. Select the code cell and press Shift+Enter
  2. Press the Choose Files button
  3. Select the video you want to process (it should contain only one person, with all body parts in frame; long videos will take a lot of time to process)

4. Process the video

  1. Specify the desired frame rate (fps) at which the video will be converted to images. A lower fps means faster processing.
  2. Select the code cell and press Shift+Enter

This step does all the work:

  1. Conversion of the video to images (the pose estimators work on individual frames)
  2. 2D pose estimation. For each image, a corresponding .json file with the 2D joints is created, in a format similar to the output .json format of the original openpose. A fork of keras_Realtime_Multi-Person_Pose_Estimation is used.
  3. 3D pose estimation. Creates a .csv file with the 3D joint coordinates for all frames of the video. A fork of End-to-end Recovery of Human Shape and Pose (hmr) is used.
  4. Conversion of the estimated .csv files to .bvh with the help of a custom script and a .blend file (a sketch of the commands this cell runs follows the list).
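For reference, the notebook cell behind this step chains these stages roughly as follows. The snippet is reconstructed from the cell contents quoted in the issues section below, so treat it as an illustration rather than a complete copy of the cell.

```python
# Reconstructed from notebook cells quoted in the issues below; illustration only.
import os

# 2d pose estimation: writes one .json file with 2d joints per extracted frame
exec(open('2d_pose_estimation.py').read())

# 3d pose estimation: writes the 3d joint coordinates to .csv
os.chdir('..')
!bash hmr/3dpose_estimate.sh

# convert the estimated .csv files to .bvh via Blender and the bundled .blend file
!blender --background hmr/csv_to_bvh.blend -noaudio -P hmr/csv_to_bvh.py
```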

5. Download .bvh

  1. Select the code cell and press Shift+Enter; the .bvh file will be downloaded to your PC.
  2. If you want to preview it, open Blender on your PC: File > Import > Motion Capture (.bvh), then press Alt+A to play the animation. (A scripted alternative is sketched below.)
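If you prefer to script the preview instead of clicking through the menus, a small Blender snippet like the one below should do the same thing; the file path is a placeholder you need to adjust.

```python
# Run from Blender's scripting workspace: import the downloaded .bvh and play it.
# The path is a placeholder; point it at the file you downloaded.
import bpy

bpy.ops.import_anim.bvh(filepath="/path/to/estimated_animation.bvh")
bpy.ops.screen.animation_play()
```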

6. Clear all the generated data if you want to process a new video

  1. Select the code cell and press Shift+Enter.

video_to_bvh's People

Contributors

dene33


video_to_bvh's Issues

'Object' object has no attribute 'select'

When I run "blender --background hmr/csv_to_bvh.blend -noaudio -P hmr/csv_to_bvh.py", I get the following error:
bpy.data.objects['rig'].select = True
AttributeError: 'Object' object has no attribute 'select'
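A likely explanation (my assumption, not a confirmed fix from the maintainer): csv_to_bvh.py targets Blender's pre-2.80 Python API, and in Blender 2.80+ the .select attribute was replaced by select_set()/select_get(). A minimal sketch of the adjustment:

```python
# Sketch of the Blender 2.80+ selection API; the pre-2.80 attribute assignment
# (rig.select = True) raises exactly this AttributeError on newer versions.
import bpy

rig = bpy.data.objects['rig']
rig.select_set(True)                          # replaces: rig.select = True
bpy.context.view_layer.objects.active = rig   # replaces: bpy.context.scene.objects.active = rig
```

Running the script with a Blender 2.79 build should avoid the error without any code changes.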

Errors

The first set of errors concerned the log_dir flag, which I fixed by renaming instances of log_dir to log_dir2.

However, while processing images:

Traceback (most recent call last):
  File "hmr/demo.py", line 211, in <module>
    main(config.img_path, config.json_path)
  File "hmr/demo.py", line 125, in main
    model = RunModel(config, sess=sess)
  File "/content/hmr/src/RunModel.py", line 62, in __init__
    self.build_test_model_ief()
  File "/content/hmr/src/RunModel.py", line 82, in build_test_model_ief
    reuse=False)
  File "/content/hmr/src/models.py", line 40, in Encoder_resnet
    with tf.name_scope("Encoder_resnet", [x]):
  File "/usr/local/lib/python2.7/dist-packages/tensorflow_core/python/framework/ops.py", line 6291, in __init__
    "pass this into the `values` kwarg." % type(default_name))
TypeError: `default_name` type (<type 'list'>) is not a string type. You likely meant to pass this into the `values` kwarg.

How do you fix that?
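My reading of the error, not a confirmed fix: the hmr code calls tf.name_scope(name, default_name, values) with the tensor list in the default_name position, which newer TF 1.x releases reject; the error message itself suggests moving the list to the values keyword. A minimal self-contained demo of the change (assumes a TF 1.x runtime):

```python
# Demo of the suggested change for hmr/src/models.py (TF 1.x only): passing the
# tensor list as `values=` instead of as the second positional argument avoids
# the `default_name` TypeError.
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[1, 224, 224, 3])

# Original: with tf.name_scope("Encoder_resnet", [x]):  -> raises the TypeError above
with tf.name_scope("Encoder_resnet", values=[x]):
    y = tf.identity(x)
```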

Frame limit to 250?

Hi,

Thanks for the awesome script.

I have been trying to animate a larger frame sequence (approx. 1500 frames), but the resulting animation is only 250 frames long.

'BVH Exported: ./estimated_animation.bvh frames:251'

and no error. Can you guide me through fixing this?

Thanks
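A speculative workaround, not a confirmed fix: Blender's default scene frame range ends at frame 250, and the BVH exporter only writes frames inside the exported range, which would explain the 251-frame cap. Extending the range before export in csv_to_bvh.py might look like this (num_frames is a placeholder for the real frame count read from the CSV):

```python
# Speculative sketch: widen the frame range before exporting so sequences longer
# than Blender's default 250-frame range are written in full. Assumes the
# animated rig is selected/active, as in the original export step.
import bpy

num_frames = 1500  # placeholder: number of frames read from the joined CSV

bpy.context.scene.frame_start = 1
bpy.context.scene.frame_end = num_frames

bpy.ops.export_anim.bvh(
    filepath="estimated_animation.bvh",
    frame_start=1,
    frame_end=num_frames,
)
```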

Attach to camera

Nice stuff. How hard would it be to attach this to a real camera stream, let's say either via:

  1. /dev/video0, or
  2. an rtms stream from an IP camera?

Process the video

So I have successfully run 2D pose estimation and have gotten the .json outputs.
But I am running into this issue with the 3D pose estimation step (bash hmr/3dpose_estimate.sh).
Please help!

Traceback (most recent call last):
  File "hmr/demo.py", line 32, in <module>
    import src.config
  File "/content/hmr/src/config.py", line 59, in <module>
    flags.DEFINE_string('log_dir', 'logs', 'Where to save training models')
  File "/usr/local/lib/python2.7/dist-packages/absl/flags/_defines.py", line 241, in DEFINE_string
    DEFINE(parser, name, default, help, flag_values, serializer, **args)
  File "/usr/local/lib/python2.7/dist-packages/absl/flags/_defines.py", line 82, in DEFINE
    flag_values, module_name)
  File "/usr/local/lib/python2.7/dist-packages/absl/flags/_defines.py", line 104, in DEFINE_flag
    fv[flag.name] = flag
  File "/usr/local/lib/python2.7/dist-packages/absl/flags/_flagvalues.py", line 430, in __setitem__
    raise _exceptions.DuplicateFlagError.from_flag(name, self)
absl.flags._exceptions.DuplicateFlagError: The flag 'log_dir' is defined twice. First from absl.logging, Second from src.config. Description from first occurrence: directory to write logfiles into
Done
Read blend: /content/hmr/csv_to_bvh.blend
[bpy.data.objects['Ankle.R'], bpy.data.objects['Knee.R'], bpy.data.objects['Hip.R'], bpy.data.objects['Hip.L'], bpy.data.objects['Knee.L'], bpy.data.objects['Ankle.L'], bpy.data.objects['Wrist.R'], bpy.data.objects['Elbow.R'], bpy.data.objects['Shoulder.R'], bpy.data.objects['Shoulder.L'], bpy.data.objects['Elbow.L'], bpy.data.objects['Wrist.L'], bpy.data.objects['Neck'], bpy.data.objects['Head'], bpy.data.objects['Nose'], bpy.data.objects['Eye.L'], bpy.data.objects['Eye.R'], bpy.data.objects['Ear.L'], bpy.data.objects['Ear.R'], bpy.data.objects['Hip.Center']]
Traceback (most recent call last):
  File "/content/hmr/csv_to_bvh.py", line 20, in <module>
    with open(fullpath, 'r', newline='') as csvfile:
FileNotFoundError: [Errno 2] No such file or directory: 'hmr/output/csv_joined/csv_joined.csv'

Blender quit
src/tcmalloc.cc:283] Attempt to free invalid pointer 0x7f1e9680e400
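The DuplicateFlagError above is the same log_dir clash mentioned in the earlier 'Errors' issue: absl.logging already registers a flag named log_dir. The workaround reported there is renaming hmr's flag in src/config.py; a minimal sketch (the new name log_dir2 is arbitrary):

```python
# src/config.py sketch: rename the flag so it no longer collides with the
# 'log_dir' flag registered by absl.logging. 'log_dir2' is an arbitrary name;
# every place that reads config.log_dir must be updated to match.
from absl import flags

flags.DEFINE_string('log_dir2', 'logs', 'Where to save training models')
```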

ImportError: No module named contrib.slim

I get this error message after successfully getting the .json files.
I tried several tensorflow versions, but I still get the same error:
Processing How001
Traceback (most recent call last):
  File "hmr/demo.py", line 33, in <module>
    from src.RunModel import RunModel
  File "/content/hmr/src/RunModel.py", line 13, in <module>
    from .models import get_encoder_fn_separate
  File "/content/hmr/src/models.py", line 19, in <module>
    import tensorflow.contrib.slim as slim
ImportError: No module named contrib.slim
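tensorflow.contrib (and with it contrib.slim) was removed in TensorFlow 2.x, so this import only resolves on a 1.x runtime; switching to the separate tf-slim package would also require edits to the hmr sources. A small sanity check, assuming a TF 1.x environment is what the notebook expects:

```python
# The import in hmr/src/models.py needs TensorFlow 1.x, where tf.contrib exists.
# Quick runtime check before launching the 3d pose step:
import tensorflow as tf

assert tf.__version__.startswith('1.'), (
    'tensorflow.contrib.slim needs TF 1.x, found %s' % tf.__version__)

import tensorflow.contrib.slim as slim  # noqa: F401  -- the import hmr performs
```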

Trying to run it locally, but hit an issue.

Windows 10, Python 3.6.
When running 3dpose_estimate.sh, it always gets stuck right at ipdb.
I changed python2 to python within 3dpose_estimate.sh.
I tried adding the neutral_smpl_with_cocoplus_reg.pkl file and also moving it around to see if it would find it. Any ideas? Can I add any other info that would help?
Thanks

Processing ti10002
Fix path to models/
D:\Anaconda3\lib\site-packages\dask\config.py:168: YAMLLoadWarning: calling yaml .load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
data = yaml.load(f.read()) or {}
> d:\machinelearning\3dpose\hmr-master\src\config.py(26)<module>()
     25 ipdb.set_trace()
---> 26 SMPL_MODEL_PATH = osp.join(model_dir, 'neutral_smpl_with_cocoplus_reg.pkl')
     27 SMPL_FACE_PATH = osp.join(curr_path, '../src/tf_smpl', 'smpl_faces.npy')
ipdb>

Distortions in feet in bvh.

Hey @Dene33, great work!
I had one issue, though. I generated a CSV by other means and matched it exactly with the CSV format you are using (headers, order, etc.), then converted that CSV to BVH using your technique. The whole structure in the BVH is correct, but the feet in the BVH are being rotated. However, when I input the same video into your .ipynb file, the feet are correctly aligned. I don't understand why that is happening.
I am attaching a link to both CSVs, both BVHs and the original video for your reference:
https://drive.google.com/drive/folders/1UEq-8Ftrz3kyZvXQb_uoNw8_oVUFUVBM?usp=sharing
Could you please help me with this?

specify version in requirements.txt

Wouldn't it be a good (and fairly obvious) idea to specify the exact versions to use while setting up the project? There are tons of errors here; please add this.

fixed errors for beginners

Hello everyone, please share an easy working copy for 3D artists with little programming knowledge. Many thanks in advance.

Uploading video

(Screenshot attached: Screen Shot 2019-04-24 at 11 24 04 AM)

I got this error when trying to upload a video file. The expectation is that a dialog opens to select a video file from my computer.

Errors

ERROR: Failed building wheel for opendr
Running setup.py clean for opendr
Building wheel for ipdb (setup.py) ... done
Stored in directory: /root/.cache/pip/wheels/59/24/91/695211bd228d40fb22dff0ce3f05ba41ab724ab771736233f3
Building wheel for chumpy (setup.py) ... done
Stored in directory: /root/.cache/pip/wheels/bd/34/cf/3719f67895ddd6d8668f861bfebb7a879ea86a747ff5b952d5
Successfully built ipdb chumpy
Failed to build opendr
Installing collected packages: chumpy, opendr, deepdish, ipdb
Running setup.py install for opendr ... error
ERROR: Command "/usr/bin/python2 -u -c 'import setuptools, tokenize;__file__='"'"'/tmp/pip-install-dh9HII/opendr/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-jcSAH_/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-install-dh9HII/opendr/

FileNotFoundError: the [Choose Files] upload widget is not available even when the session is rerun

2 error messages are shown below for your information:

1>
MessageError Traceback (most recent call last)
in ()
1 #upload video
----> 2 exec(open('upload_videos.py').read())

in ()

2>
/usr/local/lib/python3.6/dist-packages/google/colab/_message.py in read_reply_from_input(message_id, timeout_sec)
104 reply.get('colab_msg_id') == message_id):
105 if 'error' in reply:
--> 106 raise MessageError(reply['error'])
107 return reply.get('data', None)
108

MessageError: TypeError: Cannot read property '_uploadFiles' of undefined

Another error message is shown after the [Upload Video] cell is rerun, as below:

FileNotFoundError Traceback (most recent call last)
in ()
1 #upload video
----> 2 exec(open('upload_videos.py').read())

FileNotFoundError: [Errno 2] No such file or directory: 'upload_videos.py'

It happens even though I've rerun the Upload Video cell in the current browser session again and again.

Please help. Thanks!

Questions regarding 3D lifting from other 2D detectors

Hi
Great work! I was wondering if it is possible to use other 2D detectors as input. There are two possible detectors that output the 2D joints:
hrnet
posenet

For hrnet, there is a Python implementation here:
https://github.com/lxy5513/hrnet
and the keypoints are returned in the following:
https://github.com/lxy5513/hrnet/blob/master/pose_estimation/demo.py#L153
where

preds.shape = (N, 17, 2), where N is the number of video frames and 2 is the coordinate
maxvals.shape = (N, 17, 1), where 1 is the confidence of the coordinate

I think the output keypoints have to be rearranged a bit to get something similar to that of openpose, which has the following format:
{"people": [{"pose_keypoints_2d": [374, 460, 374, 516, 324, 518, 296, 596, 336, 636, 424, 512, 446, 590, 424, 604, 340, 660, 324, 776, 308, 890, 400, 660, 402, 792, 400, 904, 364, 448, 382, 450, 348, 450, 396, 450]}]}

For posenet, there is a Python implementation:
https://github.com/rwightman/posenet-python
and the keypoints are returned here

It also has a different keypoint ordering, but it can be extended to the openpose format (a rough sketch follows).
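A rough sketch of such a rearrangement is below. The COCO_TO_OPENPOSE mapping is a placeholder rather than a verified joint correspondence, and the JSON only mirrors the flat pose_keypoints_2d layout shown above.

```python
# Sketch: convert hrnet-style keypoints (N, 17, 2) into openpose-like per-frame
# .json files. COCO_TO_OPENPOSE is a placeholder and must be replaced with the
# real COCO -> openpose joint correspondence.
import json
import numpy as np

COCO_TO_OPENPOSE = list(range(17))  # placeholder mapping, NOT the real one

def save_openpose_json(preds: np.ndarray, out_dir: str) -> None:
    for frame_idx, joints in enumerate(preds):           # joints: (17, 2)
        reordered = joints[COCO_TO_OPENPOSE]              # reorder to openpose layout
        flat = [int(v) for xy in reordered for v in xy]   # flatten to [x1, y1, x2, y2, ...]
        data = {"people": [{"pose_keypoints_2d": flat}]}
        with open(f"{out_dir}/{frame_idx:06d}.json", "w") as f:
            json.dump(data, f)
```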

You also mentioned that the 3D joints are exported in a CSV format. Is it also possible to export them to Unity for animation? Your thoughts and inputs would be appreciated.

visualize motion with bvh file

Hi,

Thanks for sharing this. I successfully got the .bvh file by following the steps you documented. I want to know how to visualize the motion in Blender. Do I need to create a new human mesh file and then apply the motion to it? Besides, is there any way to convert the .bvh file to ARM format, or can we directly output an AMC file using hmr?

Thanks

TypeError: `default_name` type (<type 'list'>) is not a string type. You likely meant to pass this into the `values` kwarg.

Hello, I am stuck at the "Process the video" step. Actually, every command in this step had its own error, but I fixed those thanks to the Issues. However, I can't do anything about the following error:

When I run:

#3d pose estimation
os.chdir('..')
!bash hmr/3dpose_estimate.sh

#convert estimated .csv files to bvh
!blender --background hmr/csv_to_bvh.blend -noaudio -P hmr/csv_to_bvh.py

This error occurred.

Traceback (most recent call last):
  File "hmr/demo.py", line 211, in <module>
    main(config.img_path, config.json_path)
  File "hmr/demo.py", line 125, in main
    model = RunModel(config, sess=sess)
  File "/content/drive/My Drive/AI_COLLAB/video_to_bvh/hmr/src/RunModel.py", line 62, in __init__
    self.build_test_model_ief()
  File "/content/drive/My Drive/AI_COLLAB/video_to_bvh/hmr/src/RunModel.py", line 82, in build_test_model_ief
    reuse=False)
  File "/content/drive/My Drive/AI_COLLAB/video_to_bvh/hmr/src/models.py", line 40, in Encoder_resnet
    with tf.name_scope("Encoder_resnet", [x]):
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 6450, in __init__
    "pass this into the values kwarg." % type(default_name))
TypeError: default_name type (<type 'list'>) is not a string type. You likely meant to pass this into the values kwarg.
Done
Read blend: /content/drive/My Drive/AI_COLLAB/video_to_bvh/hmr/csv_to_bvh.blend
OSError: Python file "/content/drive/My Drive/AI_COLLAB/video_to_bvh/hmr/csv_to_bvh.py" could not be opened: No such file or directory

Blender quit
src/tcmalloc.cc:283] Attempt to free invalid pointer 0x7f1fa500e400

How to fix it?
Thank you.

Error Due to string literal in File Name

One of the errors that occurs in step 3 (Process the video) is the following:

File "hmr/demo.py", line 194, in
all_files.sort(key=lambda x: int(x.split('/')[-1].split('.')[0]))
ValueError: invalid literal for int() with base 10: 'abc002'

which is due to the input file name being 'abc': the int() conversion inside the lambda sort key fails on the non-numeric frame names (a tolerant sort key is also sketched after the options below).

To fix this, what is the suggested action?

  1. Should I change the file name to an integer only? But if I assign an integer value, that might mess up the sort function, as all frame filenames will be prefixed with that integer value. Or I could name the input file '0'.
  2. I could remove the part where the filename is added to the frame number during the CSV file generation.
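For what it's worth, a hypothetical, more tolerant sort key (not part of the repo) that avoids the int() crash on non-numeric names could look like this:

```python
# Hypothetical replacement for the sort key in hmr/demo.py: sort frame files by
# their trailing digits, falling back to a plain string sort when none exist.
import re

def frame_sort_key(path: str):
    name = path.split('/')[-1].split('.')[0]   # e.g. 'abc002'
    match = re.search(r'(\d+)$', name)         # trailing frame number, if any
    return (int(match.group(1)) if match else 0, name)

all_files = ['out/abc002.json', 'out/abc010.json', 'out/abc001.json']
all_files.sort(key=frame_sort_key)             # -> abc001, abc002, abc010
```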

Also, to make the code compatible with TF 1.15, I had to make some small changes to
config.py, batch_lbs.py and models.py in your hmr repo.
Should I make a PR describing that? (I don't think it is needed; it's only 4 changes.)

No such file or directory: 'hmr/output/csv_joined/csv_joined.csv'

When processing the uploaded video, I get:
Traceback (most recent call last):
  File "/content/hmr/csv_to_bvh.py", line 20, in <module>
    with open(fullpath, 'r', newline='') as csvfile:
FileNotFoundError: [Errno 2] No such file or directory: 'hmr/output/csv_joined/csv_joined.csv'

Blender quit
src/tcmalloc.cc:283] Attempt to free invalid pointer 0x7f3c34c0e400

What is the above supposed to mean, and how can I fix it? How can I create the files and download them?

Thanks,

tensorflow.contrib.slim error in "Process the video"

I have followed the steps to execute the code, but I keep having an issue with the processing. When executing the "Process the video" segment, it doesn't go through. This is how the execution ends:

Traceback (most recent call last):
  File "hmr/demo.py", line 33, in <module>
    from src.RunModel import RunModel
  File "/content/hmr/src/RunModel.py", line 13, in <module>
    from .models import get_encoder_fn_separate
  File "/content/hmr/src/models.py", line 19, in <module>
    import tensorflow.contrib.slim as slim
ImportError: No module named contrib.slim
Done
Read blend: /content/hmr/csv_to_bvh.blend
[bpy.data.objects['Ankle.R'], bpy.data.objects['Knee.R'], bpy.data.objects['Hip.R'], bpy.data.objects['Hip.L'], bpy.data.objects['Knee.L'], bpy.data.objects['Ankle.L'], bpy.data.objects['Wrist.R'], bpy.data.objects['Elbow.R'], bpy.data.objects['Shoulder.R'], bpy.data.objects['Shoulder.L'], bpy.data.objects['Elbow.L'], bpy.data.objects['Wrist.L'], bpy.data.objects['Neck'], bpy.data.objects['Head'], bpy.data.objects['Nose'], bpy.data.objects['Eye.L'], bpy.data.objects['Eye.R'], bpy.data.objects['Ear.L'], bpy.data.objects['Ear.R'], bpy.data.objects['Hip.Center']]
Traceback (most recent call last):
  File "/content/hmr/csv_to_bvh.py", line 20, in <module>
    with open(fullpath, 'r', newline='') as csvfile:
FileNotFoundError: [Errno 2] No such file or directory: 'hmr/output/csv_joined/csv_joined.csv'

Blender quit
src/tcmalloc.cc:283] Attempt to free invalid pointer 0x7f135b80e400

Can anybody help me with this? I tried using tensorflow 1.3.0, 1.11, 1.14 and finally 2.2.0. When using 1.3.0 and 1.11, I get a different error; the code doesn't even reach the point where it requires contrib.slim.

IOError and FileNotFoundError

I am getting the following errors:

IOError: [Errno 2] No such file or directory: 'keras_Realtime_Multi-Person_Pose_Estimation/sample_images/*'
Done
Read blend: /content/hmr/csv_to_bvh.blend
[bpy.data.objects['Ankle.R'], bpy.data.objects['Knee.R'], bpy.data.objects['Hip.R'], bpy.data.objects['Hip.L'], bpy.data.objects['Knee.L'], bpy.data.objects['Ankle.L'], bpy.data.objects['Wrist.R'], bpy.data.objects['Elbow.R'], bpy.data.objects['Shoulder.R'], bpy.data.objects['Shoulder.L'], bpy.data.objects['Elbow.L'], bpy.data.objects['Wrist.L'], bpy.data.objects['Neck'], bpy.data.objects['Head'], bpy.data.objects['Nose'], bpy.data.objects['Eye.L'], bpy.data.objects['Eye.R'], bpy.data.objects['Ear.L'], bpy.data.objects['Ear.R'], bpy.data.objects['Hip.Center']]
Traceback (most recent call last):
  File "/content/hmr/csv_to_bvh.py", line 20, in <module>
    with open(fullpath, 'r', newline='') as csvfile:
FileNotFoundError: [Errno 2] No such file or directory: 'hmr/output/csv_joined/csv_joined.csv'

Blender quit
src/tcmalloc.cc:283] Attempt to free invalid pointer 0x7ff0fa40e400

UnknownError: Failed to get convolution algorithm.

Great work on this project; it looks like a really great tool. I'm getting an error when I get to the process-video portion of the code. Do you have any insight on how to go about fixing this error?

Thanks for any help!

Here's the output
---------------------------------------------------------------------------
UnknownError Traceback (most recent call last)
in ()
3 #2d pose estimation. For each image creates corresponding .json file with format
4 #similar to output .json format of openpose (https://github.com/CMU-Perceptual-Computing-Lab/openpose)
----> 5 exec(open('2d_pose_estimation.py').read())
6
7 #3d pose estimation

in ()

/usr/local/lib/python3.6/dist-packages/keras/engine/training.py in predict(self, x, batch_size, verbose, steps)
1167 batch_size=batch_size,
1168 verbose=verbose,
-> 1169 steps=steps)
1170
1171 def train_on_batch(self, x, y,

/usr/local/lib/python3.6/dist-packages/keras/engine/training_arrays.py in predict_loop(model, f, ins, batch_size, verbose, steps)
292 ins_batch[i] = ins_batch[i].toarray()
293
--> 294 batch_outs = f(ins_batch)
295 batch_outs = to_list(batch_outs)
296 if batch_index == 0:

/usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py in __call__(self, inputs)
2713 return self._legacy_call(inputs)
2714
-> 2715 return self._call(inputs)
2716 else:
2717 if py_any(is_tensor(x) for x in inputs):

/usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py in _call(self, inputs)
2673 fetched = self._callable_fn(*array_vals, run_metadata=self.run_metadata)
2674 else:
-> 2675 fetched = self._callable_fn(*array_vals)
2676 return fetched[:len(self.outputs)]
2677

/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py in __call__(self, *args, **kwargs)
1437 ret = tf_session.TF_SessionRunCallable(
1438 self._session._session, self._handle, args, status,
-> 1439 run_metadata_ptr)
1440 if run_metadata:
1441 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/errors_impl.py in __exit__(self, type_arg, value_arg, traceback_arg)
526 None, None,
527 compat.as_text(c_api.TF_Message(self.status.status)),
--> 528 c_api.TF_GetCode(self.status.status))
529 # Delete the underlying status object from memory otherwise it stays alive
530 # as there is a reference to status from this from the traceback due to

UnknownError: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[{{node conv1_1/convolution}} = Conv2D[T=DT_FLOAT, data_format="NCHW", dilations=[1, 1, 1, 1], padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](conv1_1/convolution-0-TransposeNHWCToNCHW-LayoutOptimizer, conv1_1/kernel/read)]]
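A common workaround for this particular cuDNN failure (an assumption on my part, not a fix confirmed for this repo) is to let TensorFlow allocate GPU memory incrementally before the Keras model is built, since the error often appears when GPU memory is exhausted up front:

```python
# Speculative workaround: enable GPU memory growth for the Keras/TF1 session
# before the pose-estimation model is loaded.
import tensorflow as tf
from keras import backend as K

config = tf.ConfigProto()
config.gpu_options.allow_growth = True
K.set_session(tf.Session(config=config))
```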

3D mesh model

Hello, I'd like to ask how I can output the 3D mesh model.

Bad estimation pose

Hi! I have tested the algorithm on different videos. Each time, the estimated pose was very bad. Is that normal? Here is an example of my results.
(Example video attached: IMG_1719 TRIM002)

hmr directory problem

@Dene33, please, I am facing an error that almost everyone is facing. Could you elaborate on the solution for this error?
(screenshot attached)
Thanks in advance

Errors everywhere

Not sure why this hasn't been updated in months. The code is crippled with errors popping up everywhere. I wasted two days following various discussions and trying out different fixes, to no avail.

"Process the video" error

When processing, I get:
Traceback (most recent call last):
  File "hmr/demo.py", line 27, in <module>
    import tensorflow as tf
  File "/tensorflow-1.15.0/python3.6/tensorflow/__init__.py", line 99, in <module>
    from tensorflow_core import *
  File "/tensorflow-1.15.0/python3.6/tensorflow_core/__init__.py", line 28, in <module>
    from tensorflow.python import pywrap_tensorflow  # pylint: disable=unused-import
  File "/tensorflow-1.15.0/python3.6/tensorflow/__init__.py", line 50, in __getattr__
    module = self._load()
  File "/tensorflow-1.15.0/python3.6/tensorflow/__init__.py", line 44, in _load
    module = _importlib.import_module(self.__name__)
  File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
  File "/tensorflow-1.15.0/python3.6/tensorflow_core/python/__init__.py", line 49, in <module>
    from tensorflow.python import pywrap_tensorflow
  File "/tensorflow-1.15.0/python3.6/tensorflow_core/python/pywrap_tensorflow.py", line 74, in <module>
    raise ImportError(msg)
ImportError: Traceback (most recent call last):
  File "/tensorflow-1.15.0/python3.6/tensorflow_core/python/pywrap_tensorflow.py", line 58, in <module>
    from tensorflow.python.pywrap_tensorflow_internal import *
  File "/tensorflow-1.15.0/python3.6/tensorflow_core/python/pywrap_tensorflow_internal.py", line 28, in <module>
    _pywrap_tensorflow_internal = swig_import_helper()
  File "/tensorflow-1.15.0/python3.6/tensorflow_core/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
    _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
ImportError: dynamic module does not define init function (init_pywrap_tensorflow_internal)

Failed to load the native TensorFlow runtime.

See https://www.tensorflow.org/install/errors

for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.

I checked the README file many times but still get this error. Maybe an incorrect tensorflow version is causing this problem?

csv_joined key points type

Hello, I have been thinking about using the key points in the final joined CSV file to animate an avatar in Unity, and I would like to know if anyone here has tried it already and whether it is feasible. Are the x, y and z values obtained rotations or positions?
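According to the README above, the CSV holds 3D joint coordinates, i.e. positions rather than rotations. Purely as an illustration, and assuming a hypothetical column layout of frame, joint, x, y, z per row (check the actual header of csv_joined.csv before relying on this), loading it might look like:

```python
# Illustration only: the column names below are assumptions, not the verified
# layout of hmr/output/csv_joined/csv_joined.csv.
import csv
from collections import defaultdict

def load_joints(path: str):
    frames = defaultdict(dict)  # frame index -> {joint name: (x, y, z)}
    with open(path, newline='') as f:
        for row in csv.DictReader(f):
            frames[int(row['frame'])][row['joint']] = (
                float(row['x']), float(row['y']), float(row['z']))
    return frames
```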

It's quite slow

I uploaded a 3-minute video and it took nearly two hours. Is there any method to speed it up? Thanks.

No module tensorflow

The tensorflow dependencies for the keras pose estimation and for hmr are different. How can we solve this issue?

error

daria_walk.zip

Thanks for the good code.
Can you check this video? I have a problem at the Download .bvh step.

FileNotFoundError Traceback (most recent call last)
in ()
1 from google.colab import files
----> 2 files.download('hmr/output/bvh_animation/estimated_animation.bvh')

/usr/local/lib/python3.6/dist-packages/google/colab/files.py in download(filename)
142 raise OSError(msg)
143 else:
--> 144 raise FileNotFoundError(msg) # pylint: disable=undefined-variable
145
146 started = _threading.Event()

FileNotFoundError: Cannot find file: hmr/output/bvh_animation/estimated_animation.bvh
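The file only exists if the "Process the video" step actually produced the animation, so this usually means an earlier cell failed (my reading of the error, not an official explanation). A more defensive version of the download cell might be:

```python
# Sketch of a defensive download cell for Colab: only download the .bvh if the
# processing step actually produced it.
import os
from google.colab import files

bvh_path = 'hmr/output/bvh_animation/estimated_animation.bvh'
if os.path.exists(bvh_path):
    files.download(bvh_path)
else:
    print('%s not found; re-run the "Process the video" step and check its '
          'output for errors before downloading.' % bvh_path)
```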

Instructions on running python version locally ?

Hi,
I had some issues with running your Google Colab notebook, so I looked into your Python code:
https://github.com/Dene33/hmr

I am trying to follow the steps as you mentioned in the notebook, but I failed to run a video sequence with it. Is there a guide on how to run a video and get the mesh for the above-mentioned repository (hmr)?
As instructed, the pretrained model has been downloaded to the model directory.
Thanks!

CSV to blend error

Hello!

When I execute "blender --background hmr/csv_to_bvh.blend -noaudio -P hmr/csv_to_bvh.py" to convert the .csv files to .bvh, I always get the same error with different videos:

BVH Exported: hmr/output/bvh_animation/estimated_animation.bvh frames:251

Error, region type 4 missing in - name:"Action", id:12
Error, region type 4 missing in - name:"Action", id:12

Blender quit

It only exports 251 frames; why is this happening? I have more than 1000 frames, and it always stops here.

Thanks in advance
