xkunwu / depth-hand
Single view depth image based hand detection and pose estimation.
License: GNU General Public License v3.0
Got error: "the folder is not a reparse point": data/univue/output/hands17/log/blinks/super_edt2m.
System: Win10
Again: I only run the capture demo with pretrained models, but it still tries to find data/hands17/train/annotation....txt. Why?
Hi Kelvin, I want to use this repo only for hand detection. How can I do that?
The instructions given here are not very intuitive for beginners like me.
I have an Intel Realsense D415 camera. I want to test this code for a small project. I need some help to get started.
Thanks.
Hi everyone, hi Kelvin,
as suggested in your quick start guide, I tried to download the pretrained model from the link you provided. Sadly, I was not able to obtain the data because I would have to register on the platform, which is not possible for me. Do you know of any other way to get the pretrained model and/or data?
Thank you in advance.
Best regards,
Chris
When running with model 'voxel_regre':
python -m camera.capture --data_root=$HOME/data --out_root=$HOME/data/univue/output --model_name=voxel_regre
Error reported:
File "/web/research/vendors/depth-hand/code/camera/capture.py", line 214, in update
depth_image, cube, sess, ops)
File "/web/research/vendors/depth-hand/code/camera/capture.py", line 185, in detect_region
feed_dict=feed_dict)
File "/home/qwe/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 900, in run
run_metadata_ptr)
File "/home/qwe/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1111, in _run
str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (1, 128, 128, 1) for Tensor u'Placeholder:0', which has shape '(1, 64, 64, 64, 1)'
It seems the model expects 3D (voxel) input, but a 2D depth image is being fed.
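For context, the shape mismatch above suggests the `voxel_regre` variant wants a voxelized 64×64×64 volume while the capture pipeline is feeding a 128×128 depth crop. The repo presumably performs this conversion in its data pipeline; the sketch below is only an illustration (with a hypothetical `depth_to_voxels` helper, not the repo's actual code) of how a normalized depth crop can be turned into a binary occupancy grid of the shape the placeholder expects:

```python
import numpy as np

def depth_to_voxels(depth, grid=64):
    """Convert a normalized HxW depth crop (values in [0, 1], 0 = empty)
    into a grid x grid x grid binary occupancy volume.

    Hypothetical helper for illustration only -- the repo's real
    voxelization lives in its data pipeline, not here.
    """
    h, w = depth.shape
    vol = np.zeros((grid, grid, grid), dtype=np.float32)
    ys, xs = np.nonzero(depth > 0)          # pixels with valid depth
    # map image coordinates and depth values to voxel indices
    vx = (xs * grid // w).clip(0, grid - 1)
    vy = (ys * grid // h).clip(0, grid - 1)
    vz = (depth[ys, xs] * (grid - 1)).astype(int).clip(0, grid - 1)
    vol[vz, vy, vx] = 1.0                   # mark occupied voxels
    return vol

# a fake 128x128 depth crop with one "hand" blob at depth 0.5
depth = np.zeros((128, 128), dtype=np.float32)
depth[40:80, 50:90] = 0.5
voxels = depth_to_voxels(depth)
print(voxels.shape)                         # (64, 64, 64)
```

The resulting `(64, 64, 64)` volume (plus batch and channel dimensions) would match the `(1, 64, 64, 64, 1)` placeholder from the error message.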
Hi, I was trying to reproduce the detection result for my project using your pretrained model, but when I run
python3 -m camera.capture
--data_root=$HOME/data
--out_root=$HOME/data/univue/output
--model_name=super_edt2m
The program tries to find Training_Annotation.txt
Here's the error log
2021-08-23 10:46:54.388726: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.10.2
21-08-23 10:46:59 [INFO ] ######## hands17 [super_edt2m] ########
21-08-23 10:46:59 [INFO ] cleaning data ...
Traceback (most recent call last):
File "/home/hungduong/Downloads/depth-hand/code/camera/capture.py", line 363, in <module>
if not argsholder.create_instance():
File "/home/hungduong/Downloads/depth-hand/code/args_holder.py", line 399, in create_instance
self.args.data_inst.init_data()
File "/home/hungduong/Downloads/depth-hand/code/data/hands17/holder.py", line 286, in init_data
pose_limit = self.remove_out_frame_annot_mt()
File "/home/hungduong/Downloads/depth-hand/code/data/hands17/holder.py", line 197, in remove_out_frame_annot_mt
reader = filepack.push_file(self.training_annot_origin)
File "/home/hungduong/Downloads/depth-hand/code/utils/coder.py", line 50, in push_file
file = open(name, 'r')
FileNotFoundError: [Errno 2] No such file or directory: '/home/hungduong/data/hands17/training/Training_Annotation.txt'
Can you help check again? I'm using an NVIDIA Jetson Nano with a RealSense D435i and a python3 virtualenv, as conda cannot be installed on Jetson yet.
Thanks
Is there any simple example that shows how to run the code on depth images coming from a live camera? Something like:
img = camera.get_image()
out = network.run(img)
display(out)
I tried to look into it, but I can't easily find such an example.
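The loop requested above could be sketched roughly as follows. `StubCamera` and `StubNetwork` are hypothetical stand-ins (not part of the repo): in a real setup the camera would be a pyrealsense2 pipeline and the network would be the repo's pretrained model wrapped in a session:

```python
import numpy as np

class StubCamera:
    """Stand-in for a depth camera; replace with pyrealsense2 capture."""
    def get_image(self):
        # fake 480x640 depth frame with values in [0, 1)
        return np.random.rand(480, 640).astype(np.float32)

class StubNetwork:
    """Stand-in for the pose-estimation model; replace with the repo's model."""
    def run(self, img):
        # a real implementation would crop/resize to the model's input
        # size and run inference; here we just fake a scalar output
        crop = img[:128, :128]
        return crop.mean()

def display(out):
    print("pose estimate:", out)

camera, network = StubCamera(), StubNetwork()
for _ in range(3):              # main capture loop
    img = camera.get_image()
    out = network.run(img)
    display(out)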
Hello @xkunwu,
Thanks for your great work!
You provided the command line for basic usage of the pretrained model; however, to run evaluation your code always tries to find the hands2017 annotation text file. As I am currently unable to obtain the raw data, I would like to know if there is a way to use the pretrained model for prediction without ground-truth files.
Thanks!
Hi, Thank you for the great work.
I appreciate your work. I'm wondering if it is possible to use your code for whole-body pose estimation? Could you please give me some suggestions?
Any help is much appreciated.
Thank you!
Your README says that the required data will be prepared automatically, but when I run with that command it raises an error in 'holder.py':
FileNotFoundError: [Errno 2] No such file or directory: './data/hands17/training/Training_Annotation.txt'
I wonder if I set the directory incorrectly or made some other mistake. Thanks.
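One quick way to see which file the loader expects before launching the demo is to reconstruct the path it builds under `--data_root`. The layout below is inferred from the error messages in these issues (`<data_root>/hands17/training/Training_Annotation.txt`), not taken from the repo's source, so treat it as an assumption:

```python
import os

def check_annotation(data_root):
    """Report whether the annotation file the loader looks for exists.

    Path layout is an assumption inferred from the FileNotFoundError
    messages reported in these issues.
    """
    path = os.path.join(data_root, "hands17", "training",
                        "Training_Annotation.txt")
    exists = os.path.isfile(path)
    print(("found: " if exists else "missing: ") + path)
    return exists

check_annotation(os.path.expanduser("~/data"))
```

Running this with the same `--data_root` value passed to `camera.capture` shows immediately whether the annotation file is in the place the loader will search.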