otaheri / grab
GRAB: A Dataset of Whole-Body Human Grasping of Objects
Home Page: https://grab.is.tue.mpg.de
License: Other
Hello, first, thanks for your great work!
It seems that the meshes I get from your method are not watertight.
I just do os_mesh = o_mesh + s_mesh and then save os_mesh as a .obj file. o_mesh comes from line 95 of examples/visualize_grab.py and s_mesh from line 98 of the same file.
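In full, this is what I run (assuming o_mesh and s_mesh behave like trimesh.Trimesh objects, which is what tools/meshviewer.py appears to return):
os_mesh = o_mesh + s_mesh     # trimesh concatenation; the two surfaces are not merged
os_mesh.export('os_mesh.obj')
print(os_mesh.is_watertight)  # False in my case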
So is there any watertight version? Or could you please show me how to get watertight .obj files?
Thank you :)
Upon installing the required dependencies and smplx, I am met with the following error when running visualize_grab.py (occurs regardless of arguments passed in).
$ python examples/visualize_grab.py
Traceback (most recent call last):
File "examples/visualize_grab.py", line 25, in <module>
from tools.objectmodel import ObjectModel
File "/home/msalvato/miniconda3/envs/myenv2/lib/python3.8/site-packages/tools/__init__.py", line 18, in <module>
import clean_ch
ModuleNotFoundError: No module named 'clean_ch'
I have reproduced this in the following environments:
Linux (WSL 2) in a conda env (Python 3.8.3)
Linux (WSL 2) in a pip env (Python 3.8.3, I think)
Windows in a conda env (Python 3.8.3)
I also checked a minimal repro in WSL 2 in a fresh conda env with Python 3.8.3.
As a workaround, I renamed the "tools" module in GRAB to "tools_grab" (and changed all references). I believe the issue is that installing smplx brings in a package named "tools", which collides with GRAB's own "tools" module.
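A quick way to see which "tools" package Python actually resolves (just a diagnostic, not part of the repo):
import tools
print(tools.__file__)
# In my environment this pointed into site-packages (installed alongside smplx),
# not into the local tools/ directory of the GRAB repo.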
Hello,
Thank you for the great work! I have been studying this dataset and the associated GrabNet for hand-object interaction, and I tried to replicate the data preparation step for GrabNet, as described in the original paper.
According to the paper, the following rules were applied:
(i) The right hand should be in contact.
(ii) The left hand should not have any contact.
(iii) The object’s vertical position should be at least 5 mm different from its initial one (i.e. it should be lifted from the resting table).
(iv) The right thumb and at least one more finger should be in contact.
(v) A finger is considered a contacting finger when it is in contact with at least 50 object vertices.
While I can check whether the left and/or right hand is in contact with the object according to the MANO vertices, I wonder how I can check rule (iv): whether the "thumb" and "at least one more finger" are in contact.
Is there a MANO-vertices segmentation map somewhere? If so, would you mind sharing the annotation map?
I would also like to know how exactly to calculate the object's vertical position.
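To make the question concrete, here is roughly the check I have in mind. The per-finger contact IDs below are placeholders (that per-finger annotation is exactly what I am missing), and the z-up assumption for rule (iii) is mine:
import numpy as np

THUMB_IDS = {40}                          # placeholder ID(s) for the right thumb
OTHER_FINGERS = [{41}, {42}, {43}, {44}]  # placeholder IDs: index, middle, ring, pinky

def finger_in_contact(obj_contact, ids, min_verts=50):
    # Rule (v): a finger counts as contacting if it touches >= 50 object vertices.
    # obj_contact holds the per-object-vertex body-part IDs for one frame.
    return np.isin(obj_contact, list(ids)).sum() >= min_verts

def frame_passes(obj_contact, transl, transl0):
    lifted = abs(transl[2] - transl0[2]) > 0.005  # rule (iii), 5 mm; z-up assumed
    thumb = finger_in_contact(obj_contact, THUMB_IDS)
    other = any(finger_in_contact(obj_contact, ids) for ids in OTHER_FINGERS)
    return lifted and thumb and other             # rule (iv)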
Thanks in advance,
Hello,
I have been facing issues running the preprocessing and save vertices examples. I've downloaded the dataset for all subjects, unzipped them with the provided unzipping tool, and have also downloaded the SMPLX models.
In both examples, the call to sbj_vtemp = self.load_sbj_verts(sbj_id, seq_data) throws the following error:
File "grab/grab_preprocessing.py", line 148, in data_preprocessing sbj_vtemp = self.load_sbj_verts(sbj_id, seq_data) File "grab/grab_preprocessing.py", line 307, in load_sbj_verts sbj_vtemp = np.array(Mesh(filename=mesh_path).vertices) File "./tools/meshviewer.py", line 47, in __init__ mesh = trimesh.load(filename, process = process) File "/home/tshankar/Research/Code/Robo_Env1/lib/python3.6/site-packages/trimesh/exchange/load.py", line 113, in load resolver=resolver) File "/home/tshankar/Research/Code/Robo_Env1/lib/python3.6/site-packages/trimesh/exchange/load.py", line 623, in parse_file_args raise ValueError('string is not a file: {}'.format(file_obj)) ValueError: string is not a file: ../Data/Datasets/TestGrab/grab/../tools/subject_meshes/male/s1.ply
I believe the error occurs because the code assumes the unzipping script creates a /tools/ directory in the path the GRAB dataset is extracted to. However, despite having run the code as instructed, my unzipped GRAB dataset directory does not contain a /tools/ directory, so the example code fails when it tries to access ../tools/subject_meshes from there.
How can I fix this issue?
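For reference, this is the sanity check I ran, based on the path in the error message:
import os

grab_path = '../Data/Datasets/TestGrab/grab'
tools_dir = os.path.join(grab_path, '..', 'tools', 'subject_meshes')
print(os.path.isdir(tools_dir))  # False for me; the directory was never created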
Hi,
I am trying to see if I can get the right-hand wrist global orientation from the full-body pose. My understanding is that the 21 full-body joints correspond to these specific joints, the last of which is the right wrist.
However, the value of data['body']['body_pose'][-1, :] does not appear to be the same as data['rhand']['global_orient']. Should these two be the same / related? If so, is there a way for me to get the right-hand global orientation (wrist pose) from the full-body pose?
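For context, this is what I tried (a sketch under my assumptions: body_pose holds parent-relative axis-angle rotations for SMPL-X joints 1-21, and the right wrist is joint 21, reached via pelvis -> spine1 -> spine2 -> spine3 -> right_collar -> right_shoulder -> right_elbow -> right_wrist):
import numpy as np
from scipy.spatial.transform import Rotation as R

chain = [3, 6, 9, 14, 17, 19, 21]  # assumed SMPL-X joint indices along the right arm
pose = data['body']['body_pose'].reshape(-1, 21, 3)  # per-frame local axis-angle
root = data['body']['global_orient']                 # per-frame pelvis orientation

wrist_world = []
for t in range(pose.shape[0]):
    rot = R.from_rotvec(root[t])
    for j in chain:
        rot = rot * R.from_rotvec(pose[t, j - 1])  # body_pose excludes the pelvis
    wrist_world.append(rot.as_rotvec())
wrist_world = np.asarray(wrist_world)  # I expected this to match data['rhand']['global_orient']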
Thanks!
Hello!
For my project, I would like to use only a single (right) hand with GRAB. I plugged in MANO as the model type instead of SMPL-X; however, I'm facing two problems:
Everything seems to be in place at the code level, so I'm assuming the model itself makes the difference. I also get a warning which is probably related: WARNING: You are using a MANO model, with only 10 shape coefficients.
Is there a way to get the exact same hand movement with MANO as with the hand in SMPL-X?
For the object dictionary, I can extract the values for the right hand according to the table that maps contact numbers to body joints (values greater than 40).
However, with the body dictionary there is a dimension error: dimension is 778 but corresponding boolean dimension is 10475. Specifically, MANO expects size 778, while the list has 10475 values, as originally for SMPL-X. I assume that I should take only the 778 values from this list that are relevant for the right hand, but it is not clear to me which ones.
Is there a simple way to extract the contact forces only for the hand?
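For reference, this is the direction I am exploring (a sketch; it assumes the MANO_SMPLX_vertex_ids.pkl correspondence file distributed with the SMPL-X project, and I am not sure this is the intended way):
import pickle
import numpy as np

with open('MANO_SMPLX_vertex_ids.pkl', 'rb') as f:
    vertex_ids = pickle.load(f)        # should contain 'left_hand' / 'right_hand' indices
rh_ids = np.asarray(vertex_ids['right_hand'])  # 778 SMPL-X vertex indices for the right hand
body_contact = body_data['contact']            # hypothetical name for the 10475-long body contact array
rhand_contact = body_contact[..., rh_ids]      # 778 contact values for the right hand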
Thank you,
Bartek
Hi, thanks for your excellent work! I downloaded all the zip files from the website and unzipped them by running the script you provided: python grab/unzip_grab.py --grab-path $PATH_TO_FOLDER_WITH_ZIP_FILES --extract-path $PATH_TO_EXTRACT_GRAB_DATASET_TO.
But I didn't obtain the "tools" folder. I checked the steps and couldn't figure out where the problem is.
Hi Omid,
As I didn't find an explanation for this yet, I would like to confirm that the values in the mesh files provided by GRAB are measured in meters. For example,
from psbody.mesh import Mesh
mesh = Mesh(filename='GRAB/tools/object_meshes/contact_meshes/camera.ply')
# mesh.v[:,0].max() = 0.057756002992391586
# mesh.v[:,0].min() = -0.057756002992391586
This means that the extent of the camera mesh along the x-axis (the axis indexed above) is about 57.8 mm * 2 ≈ 115.5 mm. Is this correct?
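For completeness, the full bounding-box extents (in meters, if my assumption is right):
# extents along x, y, z
print(mesh.v.max(axis=0) - mesh.v.min(axis=0))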
Hello @otaheri
I am trying to extract the vertices of the body. As described, I run grab/save_grab_vertices.py:
python grab/save_grab_vertices.py --grab-path $GRAB_DATASET_PATH \
--model-path $SMPLX_MODEL_FOLDER
I am getting an error related to the vertex displacement in the function blend_shapes
RuntimeError: size of dimension does not match previous size, operand 1, dim 2
How can I get it to work? I didn't modify anything in the code; I just gave the model path and the data path. Thanks
Hi everybody!
I'm trying to make the code work, but I'm making mistakes (probably while organizing the files).
I've downloaded all the required files and unzipped them (correctly, I think), but when I try to run the examples I can't see anything and I get errors.
I've organized my files as you can see in the first image,
and the subfolder 'grab' has these items:
The zipped_obj_subj folder contains all the objects and the 6 subjects downloaded from the GRAB page.
As you can see from this screenshot, this is what I type to view an example movement.
I think my problem is with how I downloaded, collected, and unzipped the SMPL-X components.
I've read all the documentation and information on the git page multiple times, but I can't find a way out.
Please help! :(
Hello,
I am currently working with a kinematic chain represented by lists of joint indices in Python, similar to the following example:
t2m_kinematic_chain = [
    [0, 2, 5, 8, 11],      # pelvis ---> right_hip ---> right_knee ---> right_ankle ---> right_foot
    [0, 1, 4, 7, 10],      # pelvis ---> left_hip ---> left_knee ---> left_ankle ---> left_foot
    [0, 3, 6, 9, 12, 15],  # pelvis ---> spine1 ---> spine2 ---> spine3 ---> neck ---> head
    [9, 14, 17, 19, 21],   # spine3 ---> right_collar ---> right_shoulder ---> right_elbow ---> right_wrist
    [9, 13, 16, 18, 20],   # spine3 ---> left_collar ---> left_shoulder ---> left_elbow ---> left_wrist
]
Now, I want to include the 'jaw' joint (index 22) and the eye joints (indices 23 and 24) into this kinematic chain. I'm unsure about the connection between the 'jaw' joint and the 'head' joint (index 15). How should I modify the kinematic chain to include these joints properly? Should I connect the 'jaw' joint directly to the 'head' joint or consider another approach?
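For concreteness, here is what I currently have in mind, based on my reading of the SMPL-X kinematic tree, where the jaw and both eyes are children of the head joint (please correct me if this is wrong):
t2m_face_chains = [
    [15, 22],  # head ---> jaw
    [15, 23],  # head ---> left_eye
    [15, 24],  # head ---> right_eye
]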
Thanks for your great work!
But I get HTTP ERROR 500 when I try to open https://grab.is.tue.mpg.de/de.
I don't have problems with other websites, and the solutions for this error that I found online don't help.
Can you help me? Thanks!
Hi,
Thank you for sharing the great work! When visualizing the data, I obtain incorrect hand-object contacts, as in the examples shown below:
The hand and object meshes are obtained from the MANO hand data (pose, shape, global rotation, translation) and the object data (object template, object global rotation, translation). I wonder whether there are errors in my visualization process, or whether the data itself can contain these errors?
Yufei
Hi,
I noticed that the right_hand_pose PCA provided in the body parameters is different from the rhand_hand_pose PCA provided in the rhand parameters in the GRAB data. So when I plug the rhand pose (either fullpose or PCA) as-is into SMPL-X, the hand looks different. I was wondering: given a MANO pose parameter (fullpose or PCA), is there a way to convert it into right_hand_pose for SMPL-X?
Here is an example: Left - right_hand_pose from the body params; Right - hand_pose from the rhand params
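For reference, the conversion I have been attempting (a sketch; it assumes the raw MANO_RIGHT.pkl from the MANO website, which needs chumpy installed to unpickle, and that the rhand hand_pose in GRAB holds 24 PCA coefficients):
import pickle
import numpy as np

with open('MANO_RIGHT.pkl', 'rb') as f:            # unpickling requires chumpy
    mano = pickle.load(f, encoding='latin1')
comps = np.asarray(mano['hands_components'])[:24]  # (24, 45) PCA basis
mean = np.asarray(mano['hands_mean'])              # (45,) mean pose offset
fullpose = rhand_pca @ comps + mean                # rhand_pca: (T, 24) hand_pose from rhand
# Feeding this to SMPL-X as right_hand_pose with use_pca=False still gives a
# different hand, hence my question.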
Thanks!
Hello,
Is there a way to get the positions of the 15 joints of the MANO hand in the world coordinate system from the data stored under rhand in the dataset?
Also, what does the 24-dimensional hand_pose vector under rhand represent?
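For context, this is how I am currently trying to get the joints (a sketch under my assumptions: MANO model files from the MANO website, 24 PCA components, flat_hand_mean=True; the sequence path is just an example):
import numpy as np
import torch
import smplx
from tools.utils import parse_npz

seq = parse_npz('grab/s1/apple_eat_1.npz')  # example sequence path
T = seq['n_frames']
parms = {k: torch.tensor(np.asarray(v)).float()
         for k, v in seq['rhand']['params'].items()}

rh = smplx.create(model_path='models', model_type='mano', is_rhand=True,
                  use_pca=True, num_pca_comps=24, flat_hand_mean=True,
                  batch_size=T)
out = rh(**parms, return_verts=True)
joints = out.joints.detach().numpy()  # per-frame 3D joint positions in meters; first entry is the wrist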
Thank you so much!!!
Thank you for your great work!!!!
I want to ask about the global_orient parameter in rhand. Does it describe the rotation of the root joint of the hand (at the wrist) in a fixed world coordinate system?
Hello! Thanks for the great dataset!
I've noticed that some sequences have outliers in the table position. The rendered videos confirm their presence (see, for example, "grab\s7\pyramidsmall_pass_1", shortly after the person grabs the object). To get the full list of sequences with outliers you can use the following snippet. I get 242 sequences when it finishes.
import glob
import numpy as np
from tools.utils import parse_npz

tol = 1e-3
for file in glob.glob(PATH_TO_GRAB + r"\*\*.npz"):  # PATH_TO_GRAB: extracted dataset folder
    data = parse_npz(file)
    parms = data.table.params
    if np.max(np.linalg.norm(np.diff(parms.global_orient, axis=0), axis=1)) > tol:
        print(file)
Are there any plans to clean these sequences, or a recommended way to clean the data?
Thanks!
Regarding the body data, what is the difference between "body_pose", "fullpose", and "joints" (obtained with the code in the attached figure 1)? And what do the sizes of these arrays mean, e.g. the 63 of 'body_pose', the 127 of 'joints', and the 165 of 'fullpose'?
How can I get a sequence of joint data, both angles and coordinates? Which key in the data dict is the right one?
As for the object data, I want to get a sequence of object vertices. Is it right to first take the sampled vertices in their original coordinates (object_data['verts'], with size (1024, 3)) for the first frame, and then rotate and translate them based on object_data['global_orient'] and object_data['transl']?
How do I calculate this? Is there an existing Python function that I can use directly with these params as input?
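To make this concrete, here is the transform I have in mind (a sketch; my assumptions: global_orient is per-frame axis-angle and transl is a per-frame translation in meters):
import numpy as np
from scipy.spatial.transform import Rotation as R

verts0 = object_data['verts']                                  # (1024, 3) template vertices
rot = R.from_rotvec(object_data['global_orient']).as_matrix()  # (T, 3, 3) rotation matrices
verts_t = np.einsum('tij,vj->tvi', rot, verts0) + object_data['transl'][:, None, :]  # (T, 1024, 3)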
Hello Omid,
I followed these steps to extract the zip folder, but the result was an empty folder, and when I printed the all_zips variable, it was an empty array.
The GRAB dataset is split into separate files per subject. Please do NOT unzip manually! Please take the following steps:
Download all ZIP files in the same folder.
Run our script as explained here to unzip the ZIP files and extract the content in the folder hierarchy that our code expects.
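For reference, this is how I checked for the ZIP files afterwards (a guess on my side, since I don't know exactly what pattern unzip_grab.py globs for):
import glob
import os

zip_dir = 'PATH_TO_FOLDER_WITH_ZIP_FILES'          # the folder I passed as --grab-path
print(glob.glob(os.path.join(zip_dir, '*.zip')))   # empty for me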