yuliangxiu / icon
[CVPR'22] ICON: Implicit Clothed humans Obtained from Normals
Home Page: https://icon.is.tue.mpg.de
License: Other
Great job on 3D human bodies! I wonder whether future work will provide multi-view mesh texture. Predicting multi-view texture from a single-view image, as PIFu does, might be even more significant.
Hello
The paper is great and the results are impressive. May I ask when you are going to release the code for pre-training?
Hey,
Congrats on the great work.
I successfully ran this repository and got results from the visible view. Now I want to extend it to a fully clothed, textured human body model using multiple views (front, back, left, and right).
Do you have any plan for this task?
Can you suggest some ideas on how to do it?
Hi, when I run bash fetch_data.sh, I get this error message:
Archive: icon_data.zip
End-of-central-directory signature not found. Either this file is not
a zipfile, or it constitutes one disk of a multi-part archive. In the
latter case the central directory and zipfile comment will be found on
the last disk(s) of this archive.
unzip: cannot find zipfile directory in one of icon_data.zip or
icon_data.zip.zip, and cannot find icon_data.zip.ZIP, period.
Could you give some help?
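Not the author, but this unzip message usually means the file is not actually a zip archive: the download was truncated, or the server returned an HTML error page (e.g. because of missing credentials) that was saved under the .zip name. A quick sanity check, sketched here with a hypothetical helper:

```python
import zipfile

def looks_like_zip(path):
    """Return True if `path` is a readable zip archive.

    unzip's "End-of-central-directory signature not found" usually means the
    download was truncated or an HTML error page was saved as the .zip file.
    """
    try:
        return zipfile.is_zipfile(path)
    except OSError:  # missing or unreadable file
        return False

# e.g. looks_like_zip("icon_data.zip") == False -> delete the file and re-run
# fetch_data.sh, double-checking the registered username/password.
```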
An issue occurred in the following setup:
I ran the inference script inside a Docker container with nvidia-docker on an A100 (driver version 510.47.03).
Logs from container:
https://gist.github.com/KernelA/2ea3def5846c7f688ce3876f07527768
Installed dependencies:
https://gist.github.com/KernelA/7173fcff115103bc94514a72798f7853
Info about environment:
https://gist.github.com/KernelA/dfdbf7c61e61455d0bf46c8cc21aadfa
Git commit hash: d1908ce65762a08e4f2723585e485969beae9aa1
Thanks for your excellent work!
The results are very exciting, and I want to use the SMPL result as a prior for other work, but I don't know how to get the SMPL parameters and the corresponding camera.
Could you tell me how to obtain them?
Thanks for your help!
Thank you so much for open-sourcing such excellent work.
I would like to ask whether the human model output by the network is aligned with the original input image; I see that the input image is preprocessed. If I have multi-view input images, I want to fuse them into a single mannequin. How can I do that?
Sorry to open a similar issue.
This came up before but was not solved well, so I'm asking again.
#30 related to iss_30
The problem has not been solved for about five days.
I changed PyYAML to 5.1.1 as advised in that issue, but the problem remained the same, so I restored the latest version of PyYAML. Is there any other solution?
Sorry, try using PyYAML==5.1.1
Traceback (most recent call last):
File "infer.py", line 96, in <module>
dataset = TestDataset(dataset_param, device)
File "/workspace/fashion-ICON/apps/../lib/dataset/TestDataset.py", line 105, in __init__
self.hps = PIXIE(config = pixie_cfg, device=self.device)
File "/workspace/fashion-ICON/apps/../lib/pixielib/pixie.py", line 49, in __init__
self._create_model()
File "/workspace/fashion-ICON/apps/../lib/pixielib/pixie.py", line 115, in _create_model
self.smplx = SMPLX(self.cfg.model).to(self.device)
File "/workspace/fashion-ICON/apps/../lib/pixielib/models/SMPLX.py", line 156, in __init__
self.extra_joint_selector = JointsFromVerticesSelector(
File "/workspace/fashion-ICON/apps/../lib/pixielib/models/lbs.py", line 399, in __init__
data = yaml.load(f)
TypeError: load() missing 1 required positional argument: 'Loader'
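For what it's worth, the TypeError above comes from PyYAML 6.0 making the Loader argument of yaml.load() mandatory. Besides pinning PyYAML==5.1.1, the call in lbs.py could be patched to pass a Loader explicitly, which works on both old and new PyYAML; a minimal sketch (the file contents here are a stand-in, not the repo's actual data):

```python
import io
import yaml

# PyYAML >= 6.0 made the Loader argument of yaml.load() mandatory, which is
# exactly what raises "load() missing 1 required positional argument: 'Loader'".
# Passing a Loader explicitly is backward compatible with PyYAML 5.x.
f = io.StringIO("joints:\n- head\n- neck\n")  # stand-in for the yaml file lbs.py opens
data = yaml.load(f, Loader=yaml.SafeLoader)   # or equivalently: yaml.safe_load(f)
print(data["joints"])
```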
According to the original repo's issue (#30) and the comments above, this error might be resolved by installing PyYAML==5.1.1, but that raises another error:
Traceback (most recent call last):
File "infer.py", line 102, in <module>
for data in pbar:
File "/opt/conda/envs/icon/lib/python3.8/site-packages/tqdm/std.py", line 1180, in __iter__
for obj in iterable:
File "/workspace/fashion-ICON/apps/../lib/dataset/TestDataset.py", line 191, in __getitem__
preds_dict = self.hps.forward(img_hps.to(self.device))
File "/workspace/fashion-ICON/apps/../lib/pixielib/pixie.py", line 56, in forward
param_dict = self.encode({'body': {'image': data}}, threthold=True, keep_local=True, copy_and_paste=False)
File "/opt/conda/envs/icon/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/workspace/fashion-ICON/apps/../lib/pixielib/pixie.py", line 259, in encode
cropped_image, cropped_joints_dict = self.part_from_body(image_hd, part_name, points_dict)
File "/workspace/fashion-ICON/apps/../lib/pixielib/pixie.py", line 166, in part_from_body
cropped_image, tform = self.Cropper[cropper_key].crop(
File "/workspace/fashion-ICON/apps/../lib/pixielib/utils/tensor_cropper.py", line 98, in crop
cropped_image, tform = crop_tensor(image, center, bbox_size, self.crop_size)
File "/workspace/fashion-ICON/apps/../lib/pixielib/utils/tensor_cropper.py", line 78, in crop_tensor
cropped_image = warp_affine(
TypeError: warp_affine() got an unexpected keyword argument 'flags'
Is there any solution to get the PIXIE module running?
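The warp_affine TypeError in the traceback is an API mismatch: newer kornia versions no longer accept the flags keyword (it appears to have been renamed to mode, though that is an assumption here). Besides pinning the kornia version PIXIE expects, a version-agnostic wrapper can dispatch on the actual signature; a sketch:

```python
import inspect

def call_warp_affine(warp_affine, image, M, dsize, interp="bilinear"):
    """Call warp_affine with whichever interpolation kwarg this version accepts.

    Newer kornia dropped the `flags` keyword (assumed renamed to `mode`,
    based on the TypeError above), so we inspect the signature instead of
    hard-coding one name.
    """
    params = inspect.signature(warp_affine).parameters
    kwarg = "mode" if "mode" in params else "flags"
    return warp_affine(image, M, dsize, **{kwarg: interp})
```

tensor_cropper.py could then call call_warp_affine(warp_affine, image, theta, (h, w)) regardless of which kornia version is installed.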
Hi Yuliang,
Thanks for releasing the ICON code and adding instructions for building/testing.
I am having the issue shown in the attached image. The instructions say to clone smplx from git@github.com:YuliangXiu/smplx.git, but that repo does not exist.
The original smplx from https://github.com/vchoutas/smplx does contain a 'ModelOutput' class, but not with the properties used in your code, hence the error I get when testing the ICON example.
I would like to know some details about training.
Is the ground-truth SMPL or the predicted SMPL used when training ICON?
Also, what about the normal images?
From my understanding of the paper and from practice, ICON should train the normal network first and then the implicit reconstruction network.
When reproducing ICON, I don't know whether to use ground-truth or predicted data for the SMPL model and the normal images, respectively.
Hi, thank you so much for the wonderful work and the corresponding code. I am facing the following issue:
Is there any .py file called bvh_distance_queries_cuda? Please let me know a possible solution.
Thank you for your effort and help :) :) :)
After installing all packages, I got the results successfully for PIFu and PaMIR.
I faced a runtime error when trying to get the ICON demo result. Could you advise what setting is wrong?
$ python infer.py -cfg ../configs/icon-filter.yaml -gpu 0 -in_dir ../examples -out_dir ../results
Traceback (most recent call last):
File "infer.py", line 304, in <module>
verts_pr, faces_pr, _ = model.test_single(in_tensor)
File "./ICON/apps/ICON.py", line 738, in test_single
sdf = self.reconEngine(opt=self.cfg,
File "./.virtualenvs/icon/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "../lib/common/seg3d_lossless.py", line 148, in forward
return self._forward_faster(**kwargs)
File "../lib/common/seg3d_lossless.py", line 170, in _forward_faster
occupancys = self.batch_eval(coords, **kwargs)
File "../lib/common/seg3d_lossless.py", line 139, in batch_eval
occupancys = self.query_func(**kwargs, points=coords2D)
File "../lib/common/train_util.py", line 338, in query_func
preds = netG.query(features=features,
File "../lib/net/HGPIFuNet.py", line 285, in query
smpl_sdf, smpl_norm, smpl_cmap, smpl_ind = cal_sdf_batch(
File "../lib/dataset/mesh_util.py", line 231, in cal_sdf_batch
residues, normals, pts_cmap, pts_ind = func(
File "./.virtualenvs/icon/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "./.virtualenvs/icon/lib/python3.8/site-packages/bvh_distance_queries/mesh_distance.py", line 79, in forward
output = self.search_tree(triangles, points)
File "./.virtualenvs/icon/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "./.virtualenvs/icon/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "./.virtualenvs/icon/lib/python3.8/site-packages/bvh_distance_queries/bvh_search_tree.py", line 109, in forward
output = BVHFunction.apply(
File "./.virtualenvs/icon/lib/python3.8/site-packages/bvh_distance_queries/bvh_search_tree.py", line 42, in forward
outputs = bvh_distance_queries_cuda.distance_queries(
RuntimeError: after reduction step 1: cudaErrorInvalidDevice: invalid device ordinal
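In case it helps others hitting this: "cudaErrorInvalidDevice: invalid device ordinal" from a custom CUDA op (here bvh-distance-queries) often means the extension was compiled for a different GPU architecture than the one present. A possible fix, assuming the extension is rebuilt from source, is to set PyTorch's architecture list before reinstalling (the install path below is an assumption):

```shell
# "cudaErrorInvalidDevice: invalid device ordinal" from a custom CUDA op
# frequently means it was built for the wrong GPU architecture.
# An A100 is compute capability 8.0; TORCH_CUDA_ARCH_LIST tells PyTorch's
# extension build system which architectures to target.
export TORCH_CUDA_ARCH_LIST="8.0"
echo "building for: $TORCH_CUDA_ARCH_LIST"
# then rebuild the extension from its source directory, e.g.:
#   pip install --force-reinstall --no-cache-dir ./bvh-distance-queries
```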
Something goes wrong when I run it in Colab, and I would like to ask for some help. Here is the error. Thank you!
error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
Building wheel for voxelize-cuda (setup.py) ... error
ERROR: Failed building wheel for voxelize-cuda
Running setup.py clean for voxelize-cuda
Building wheel for kaolin (setup.py) ... done
Created wheel for kaolin: filename=kaolin-0.10.0-cp38-cp38-linux_x86_64.whl size=984560 sha256=534cd6e51cca93946d4ed5248b12077374eb8a96c0d3d9a2a2c546598b019d8d
Stored in directory: /tmp/pip-ephem-wheel-cache-owjr1c2q/wheels/b0/0c/b7/c21c4d2eb6360afe4d8728dfb21dbf88790f99fefa0885a0b5
Successfully built human-det smplx pytorch3d kaolin
Failed to build voxelize-cuda
Installing collected packages: voxelize-cuda, usd-core, kaolin
error: subprocess-exited-with-error
× Running setup.py install for voxelize-cuda did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
Running setup.py install for voxelize-cuda ... error
error: legacy-install-failure
× Encountered error while trying to install package.
╰─> voxelize-cuda
note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
It surprises me that a 7-dim input to the MLP performs so well, while other SOTA methods use high-dimensional feature encodings. You attribute this to the purely local feature, which contradicts PIFuHD: they found that "3D reconstruction using high-resolution features without holistic reasoning severely suffers from depth ambiguity and is unable to generalize with input size discrepancy between training and inference." Since you both use normal maps to recover local details, I wonder whether the main contributors to reducing depth ambiguity in your work are the first two terms: the distance to the nearest SMPL vertex and its normal. I integrated them into PaMIR, but it still does not generalize well to in-the-wild images. I notice that all three feature terms carry certain geometric properties, either distance in
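To make the question concrete, here is an illustrative computation (a sketch, not ICON's actual code) of the two feature terms mentioned: the distance from each query point to its nearest SMPL vertex, concatenated with that vertex's normal.

```python
import numpy as np

def local_feature(points, verts, vert_normals):
    """Per query point: distance to the nearest body vertex plus that vertex's normal.

    Illustrative sketch of the SMPL-anchored local feature discussed above.
    Shapes: points (P, 3), verts (V, 3), vert_normals (V, 3) -> output (P, 4).
    """
    # pairwise distances between query points and body vertices
    d = np.linalg.norm(points[:, None, :] - verts[None, :, :], axis=-1)  # (P, V)
    idx = d.argmin(axis=1)                        # index of nearest vertex per point
    dist = d[np.arange(len(points)), idx, None]   # (P, 1) nearest distance
    return np.concatenate([dist, vert_normals[idx]], axis=1)

pts = np.array([[0.0, 0.0, 0.0]])
verts = np.array([[0.0, 0.0, 1.0], [3.0, 0.0, 0.0]])
normals = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
print(local_feature(pts, verts, normals))  # distance to nearest vertex + its normal
```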
Due to GPU memory limitations, it is not possible to finish testing one image on a single GPU. Can the test run on two GPUs?
Is it possible to obtain the exact 3D mesh highlighted in yellow of the 1st attached image?
Browsing the code, I can see the three default meshes being saved in the obj folder for a given test run (xxx_smpl.obj, xxx_recon.obj, xxx_refine.obj). My assumption was that, since the cloth-norm (recon) tag in the image reflects xxx_recon.obj, cloth-norm (pred) should reflect xxx_refine.obj, but that seems not to be the case. The 2nd screenshot shows the three generated meshes: xxx_recon and xxx_refine do differ, but not as noticeably as in the image. (In the code, the deform_verts variable is added to the default verts_ptr, but that displacement only produces some kind of artifacts. Or is this where I should tune some values to get better 3D mesh resolution? Similar behavior shows up with the repo's test images.)
I also thought that by increasing the marching-cubes resolution I should be able to obtain a 3D mesh matching the cloth-norm (pred) image, but after following more of your code it seems the computation of the sdf function is also relevant (I'm not really an expert at generating those functions, and that is where I started to get lost).
I might be misinterpreting ICON, but my objective is just to know whether one can get the 3D mesh from the very detailed predicted normal map (or a very close approximation to it).
Thanks again for all the help so far and awesome work (:
Hi!
I would like to finish the test in the Colab demo, but it runs into a problem.
Before testing the demo, I had already modified the code.
Traceback (most recent call last):
File "infer.py", line 308, in <module>
verts_pr, faces_pr, _ = model.test_single(in_tensor)
File "/content/ICON/apps/ICON.py", line 736, in test_single
sdf = self.reconEngine(opt=self.cfg,
File "/usr/local/envs/icon/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/content/ICON/apps/../lib/common/seg3d_lossless.py", line 148, in forward
return self._forward_faster(**kwargs)
File "/content/ICON/apps/../lib/common/seg3d_lossless.py", line 170, in _forward_faster
occupancys = self.batch_eval(coords, **kwargs)
File "/content/ICON/apps/../lib/common/seg3d_lossless.py", line 139, in batch_eval
occupancys = self.query_func(**kwargs, points=coords2D)
File "/content/ICON/apps/../lib/common/train_util.py", line 338, in query_func
preds = netG.query(features=features,
File "/content/ICON/apps/../lib/net/HGPIFuNet.py", line 285, in query
smpl_sdf, smpl_norm, smpl_cmap, smpl_ind = cal_sdf_batch(
File "/content/ICON/apps/../lib/dataset/mesh_util.py", line 255, in cal_sdf_batch
residues, normals, pts_cmap, pts_ind = func(
File "/usr/local/envs/icon/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/envs/icon/lib/python3.8/site-packages/bvh_distance_queries/mesh_distance.py", line 105, in forward
output = self.search_tree(triangles, points)
File "/usr/local/envs/icon/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/envs/icon/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/usr/local/envs/icon/lib/python3.8/site-packages/bvh_distance_queries/bvh_search_tree.py", line 109, in forward
output = BVHFunction.apply(
File "/usr/local/envs/icon/lib/python3.8/site-packages/bvh_distance_queries/bvh_search_tree.py", line 42, in forward
outputs = bvh_distance_queries_cuda.distance_queries(
RuntimeError: after reduction step 1: cudaErrorInvalidDevice: invalid device ordinal
Hello,
In Colab, after running
%cd /content/ICON/apps
!source activate icon && python infer.py -cfg ../configs/icon-filter.yaml -loop_smpl 100 -loop_cloth 0 -colab -gpu 0 -export_video
I get this:
/content/ICON/apps
Traceback (most recent call last):
File "infer.py", line 23, in <module>
import torch, trimesh
ModuleNotFoundError: No module named 'trimesh'
What am I doing wrong?
Thanks!
Traceback (most recent call last):
File "/home/lyx0208/Desktop/code/ICON/apps/infer.py", line 207, in <module>
'normal_B'] = model.netG.normal_filter(in_tensor)
File "/home/lyx0208/anaconda3/envs/icon/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/lyx0208/Desktop/code/ICON/apps/../lib/net/NormalNet.py", line 87, in forward
nmlF = self.netF(torch.cat(inF_list, dim=1))
Thanks for the exciting work! Could you please give some description of 'smpl_cmap' and 'tedra_data'? I cannot find much about them on Google, nor any related keywords in the paper.
I've managed to compile and install all dependencies, except this last one. Google doesn't really find any helpful result for this error, so any help is much appreciated!
Here's the log:
python setup.py install
running install
running bdist_egg
running egg_info
writing bvh_distance_queries.egg-info\PKG-INFO
writing dependency_links to bvh_distance_queries.egg-info\dependency_links.txt
writing requirements to bvh_distance_queries.egg-info\requires.txt
writing top-level names to bvh_distance_queries.egg-info\top_level.txt
adding license file 'LICENSE'
writing manifest file 'bvh_distance_queries.egg-info\SOURCES.txt'
installing library code to build\bdist.win-amd64\egg
running install_lib
running build_py
running build_ext
building 'bvh_distance_queries_cuda' extension
Emitting ninja build file C:\bvh-distance-queries\build\temp.win-amd64-3.8\Release\build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/1] C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.3\bin\nvcc --generate-dependencies-with-compile --dependency-output C:\bvh-distance-queries\build\temp.win-amd64-3.8\Release\src/bvh_cuda_op.obj.d --use-local-env -Xcompiler /MD -Xcompiler /wd4819 -Xcompiler /wd4251 -Xcompiler /wd4244 -Xcompiler /wd4267 -Xcompiler /wd4275 -Xcompiler /wd4018 -Xcompiler /wd4190 -Xcompiler /EHsc -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcudafe --diag_suppress=field_without_dll_interface -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -IC:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\site-packages\torch\include -IC:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\site-packages\torch\include\torch\csrc\api\include -IC:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\site-packages\torch\include\TH -IC:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\site-packages\torch\include\THC -Iinclude -Icuda-samples/Common -IC:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\site-packages\torch\include -IC:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\site-packages\torch\include\torch\csrc\api\include -IC:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\site-packages\torch\include\TH -IC:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.3\include" -IC:\ProgramData\Anaconda3\envs\pttf2cu113py38\include -IC:\ProgramData\Anaconda3\envs\pttf2cu113py38\include "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\winrt" "-IC:\Program Files 
(x86)\Windows Kits\10\include\10.0.19041.0\cppwinrt" -c C:\bvh-distance-queries\src\bvh_cuda_op.cu -o C:\bvh-distance-queries\build\temp.win-amd64-3.8\Release\src/bvh_cuda_op.obj -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -DPRINT_TIMINGS=0 -DDEBUG_PRINT=0 -DERROR_CHECKING=1 -DNUM_THREADS=256 -DPROFILING=0 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=bvh_distance_queries_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_61,code=compute_61 -gencode=arch=compute_61,code=sm_61
FAILED: C:/bvh-distance-queries/build/temp.win-amd64-3.8/Release/src/bvh_cuda_op.obj
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.3\bin\nvcc --generate-dependencies-with-compile --dependency-output C:\bvh-distance-queries\build\temp.win-amd64-3.8\Release\src/bvh_cuda_op.obj.d --use-local-env -Xcompiler /MD -Xcompiler /wd4819 -Xcompiler /wd4251 -Xcompiler /wd4244 -Xcompiler /wd4267 -Xcompiler /wd4275 -Xcompiler /wd4018 -Xcompiler /wd4190 -Xcompiler /EHsc -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcudafe --diag_suppress=field_without_dll_interface -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -IC:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\site-packages\torch\include -IC:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\site-packages\torch\include\torch\csrc\api\include -IC:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\site-packages\torch\include\TH -IC:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\site-packages\torch\include\THC -Iinclude -Icuda-samples/Common -IC:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\site-packages\torch\include -IC:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\site-packages\torch\include\torch\csrc\api\include -IC:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\site-packages\torch\include\TH -IC:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.3\include" -IC:\ProgramData\Anaconda3\envs\pttf2cu113py38\include -IC:\ProgramData\Anaconda3\envs\pttf2cu113py38\include "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\winrt" "-IC:\Program Files 
(x86)\Windows Kits\10\include\10.0.19041.0\cppwinrt" -c C:\bvh-distance-queries\src\bvh_cuda_op.cu -o C:\bvh-distance-queries\build\temp.win-amd64-3.8\Release\src/bvh_cuda_op.obj -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -DPRINT_TIMINGS=0 -DDEBUG_PRINT=0 -DERROR_CHECKING=1 -DNUM_THREADS=256 -DPROFILING=0 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=bvh_distance_queries_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_61,code=compute_61 -gencode=arch=compute_61,code=sm_61
C:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\site-packages\torch\include\pybind11\detail/common.h(108): warning C4005: 'HAVE_SNPRINTF': macro redefinition
C:\ProgramData\Anaconda3\envs\pttf2cu113py38\include\pyerrors.h(315): note: see previous definition of 'HAVE_SNPRINTF'
C:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\site-packages\torch\include\pybind11\detail/common.h(108): warning C4005: 'HAVE_SNPRINTF': macro redefinition
C:\ProgramData\Anaconda3\envs\pttf2cu113py38\include\pyerrors.h(315): note: see previous definition of 'HAVE_SNPRINTF'
C:\bvh-distance-queries\src\aabb.hpp(34): error: attributes are not allowed here
C:\bvh-distance-queries\src\triangle.hpp(33): error: attributes are not allowed here
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: "#" not expected here
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: expected an expression
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: identifier "start" is undefined
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: identifier "stop" is undefined
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: "#" not expected here
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: expected an expression
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: identifier "distances_ptr" is undefined
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: "#" not expected here
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: expected an expression
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: "#" not expected here
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: expected an expression
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: "#" not expected here
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: expected an expression
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: "#" not expected here
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: expected an expression
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: "#" not expected here
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: expected an expression
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: "#" not expected here
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: expected an expression
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: "#" not expected here
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: expected an expression
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: "#" not expected here
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: expected an expression
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: "#" not expected here
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: expected an expression
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: "#" not expected here
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: expected an expression
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: identifier "dev_ptr" is undefined
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: identifier "points_ptr" is undefined
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: "#" not expected here
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: expected an expression
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: "#" not expected here
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: expected an expression
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: "#" not expected here
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: expected an expression
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: "#" not expected here
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: expected an expression
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: identifier "distances_dest_ptr" is undefined
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: identifier "distances_dest_ptr" is undefined
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: "#" not expected here
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: expected an expression
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: "#" not expected here
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: expected an expression
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: expected a ";"
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): warning: variable "triangles_ptr" was declared but never referenced
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: "#" not expected here
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: expected an expression
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: identifier "start" is undefined
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: identifier "stop" is undefined
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: "#" not expected here
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: expected an expression
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: identifier "distances_ptr" is undefined
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: "#" not expected here
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: expected an expression
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: "#" not expected here
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: expected an expression
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: "#" not expected here
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: expected an expression
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: "#" not expected here
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: expected an expression
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: "#" not expected here
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: expected an expression
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: "#" not expected here
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: expected an expression
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: "#" not expected here
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: expected an expression
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: "#" not expected here
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: expected an expression
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: "#" not expected here
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: expected an expression
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: "#" not expected here
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: expected an expression
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: identifier "dev_ptr" is undefined
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: identifier "points_ptr" is undefined
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: "#" not expected here
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: expected an expression
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: "#" not expected here
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: expected an expression
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: "#" not expected here
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: expected an expression
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: "#" not expected here
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: expected an expression
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: identifier "distances_dest_ptr" is undefined
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: identifier "distances_dest_ptr" is undefined
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: "#" not expected here
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: expected an expression
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: "#" not expected here
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: expected an expression
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): error: expected a ";"
C:\bvh-distance-queries\src\bvh_cuda_op.cu(900): warning: variable "triangles_ptr" was declared but never referenced
90 errors detected in the compilation of "C:/bvh-distance-queries/src/bvh_cuda_op.cu".
bvh_cuda_op.cu
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\site-packages\torch\utils\cpp_extension.py", line 1717, in _run_ninja_build
subprocess.run(
File "C:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\subprocess.py", line 516, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "setup.py", line 75, in <module>
setup(name=NAME,
File "C:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\site-packages\setuptools\__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
File "C:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\distutils\core.py", line 148, in setup
dist.run_commands()
File "C:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\distutils\dist.py", line 966, in run_commands
self.run_command(cmd)
File "C:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\distutils\dist.py", line 985, in run_command
cmd_obj.run()
File "C:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\site-packages\setuptools\command\install.py", line 67, in run
self.do_egg_install()
File "C:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\site-packages\setuptools\command\install.py", line 109, in do_egg_install
self.run_command('bdist_egg')
File "C:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\distutils\cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "C:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\distutils\dist.py", line 985, in run_command
cmd_obj.run()
File "C:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\site-packages\setuptools\command\bdist_egg.py", line 164, in run
cmd = self.call_command('install_lib', warn_dir=0)
File "C:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\site-packages\setuptools\command\bdist_egg.py", line 150, in call_command
self.run_command(cmdname)
File "C:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\distutils\cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "C:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\distutils\dist.py", line 985, in run_command
cmd_obj.run()
File "C:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\site-packages\setuptools\command\install_lib.py", line 11, in run
self.build()
File "C:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\distutils\command\install_lib.py", line 107, in build
self.run_command('build_ext')
File "C:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\distutils\cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "C:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\distutils\dist.py", line 985, in run_command
cmd_obj.run()
File "C:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\site-packages\setuptools\command\build_ext.py", line 79, in run
_build_ext.run(self)
File "C:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\site-packages\Cython\Distutils\old_build_ext.py", line 186, in run
_build_ext.build_ext.run(self)
File "C:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\distutils\command\build_ext.py", line 340, in run
self.build_extensions()
File "C:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\site-packages\torch\utils\cpp_extension.py", line 735, in build_extensions
build_ext.build_extensions(self)
File "C:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\site-packages\Cython\Distutils\old_build_ext.py", line 195, in build_extensions
_build_ext.build_ext.build_extensions(self)
File "C:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\distutils\command\build_ext.py", line 449, in build_extensions
self._build_extensions_serial()
File "C:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\distutils\command\build_ext.py", line 474, in _build_extensions_serial
self.build_extension(ext)
File "C:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\site-packages\setuptools\command\build_ext.py", line 202, in build_extension
_build_ext.build_extension(self, ext)
File "C:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\distutils\command\build_ext.py", line 528, in build_extension
objects = self.compiler.compile(sources,
File "C:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\site-packages\torch\utils\cpp_extension.py", line 708, in win_wrap_ninja_compile
_write_ninja_file_and_compile_objects(
File "C:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\site-packages\torch\utils\cpp_extension.py", line 1399, in _write_ninja_file_and_compile_objects
_run_ninja_build(
File "C:\ProgramData\Anaconda3\envs\pttf2cu113py38\lib\site-packages\torch\utils\cpp_extension.py", line 1733, in _run_ninja_build
raise RuntimeError(message) from e
RuntimeError: Error compiling objects for extension
Hi, would you be interested in adding ICON to Hugging Face? The Hub offers free hosting, and it would make your work more accessible and visible to the rest of the ML community. Models, datasets, and Spaces (web demos) can be added to a user account or organization, similar to GitHub.
Examples from other organizations:
Keras: https://huggingface.co/keras-io
Microsoft: https://huggingface.co/microsoft
Facebook: https://huggingface.co/facebook
Example spaces with repos:
github: https://github.com/salesforce/BLIP
Spaces: https://huggingface.co/spaces/salesforce/BLIP
github: https://github.com/facebookresearch/omnivore
Spaces: https://huggingface.co/spaces/akhaliq/omnivore
And here are guides for adding Spaces/models/datasets to your org:
How to add a Space: https://huggingface.co/blog/gradio-spaces
how to add models: https://huggingface.co/docs/hub/adding-a-model
uploading a dataset: https://huggingface.co/docs/datasets/upload_dataset.html
Please let us know if you would be interested and if you have any questions, we can also help with the technical implementation.
Hi, thanks for your great work. I tried to run the demo, but the task was killed due to lack of RAM. How much RAM do I need to run the demo?
Similarly, when I try to run the Colab demo, I get a warning that I need a lot of RAM. Is it possible to run the provided Colab demo with the free version? (Sorry, I haven't tried it myself.)
Thanks for your wonderful work! I'm wondering whether you will support VIBE as an optional HPS. I found that VIBE performs more robustly than PyMAF and others on challenging videos, so I'm wondering whether you have tested VIBE and whether it will be supported.
Thanks, and looking forward to your reply!
Great work! Can I ask how you obtained the fitted corresponding SMPL/SMPL-X models for your scans?
FileNotFoundError: [Errno 2] No such file or directory: '/content/ICON/apps/../lib/pymaf/core/../../../data/pymaf_data/pretrained_model/PyMAF_model_checkpoint.pt'
When I run the following command in Colab
%cd /content/ICON/apps
!source activate icon && python infer.py -cfg ../configs/icon-filter.yaml -loop_smpl 100 -loop_cloth 0 -colab -gpu 0 -export_video -in_dir ../examples
I get the following error
/content/ICON/apps
Traceback (most recent call last):
File "infer.py", line 29, in <module>
from ICON import ICON
File "/content/ICON/apps/ICON.py", line 36, in <module>
from lib.dataset.mesh_util import SMPLX, update_mesh_shape_prior_losses, get_visibility
File "/content/ICON/apps/../lib/dataset/mesh_util.py", line 28, in <module>
from kaolin.ops.mesh import check_sign
ModuleNotFoundError: No module named 'kaolin'
What should I do?
Thanks for the exciting work! Could you please share the training dataset pre-processing process? In addition, how is the model trained?
Hi!
I got this error on a Linux system with cuda==10.2.
I installed pytorch3d locally as recommended, with
python = 3.8.12
pytorch = 1.8.0
pytorch3d = 0.6.1
and bvh-distance-queries already installed.
Hello, I have a question about the shape of optimized_pose. The standard SMPL model takes 72 pose parameters, but here the shape is [1, 23, 3, 3]. How can I convert it to a [1, 72] shape?
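For what it's worth, here is a minimal sketch of that conversion, assuming the [1, 23, 3, 3] tensor holds per-joint rotation matrices for the 23 body joints and that the 3 global-orient parameters come from a separate output (`rotmat_to_smpl_pose` is a hypothetical helper, not part of ICON):

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def rotmat_to_smpl_pose(body_pose_mats, global_orient_mats=None):
    """Convert [1, 23, 3, 3] per-joint rotation matrices to SMPL's
    flat [1, 72] axis-angle pose (3 global-orient + 23*3 joint params)."""
    # matrix -> axis-angle (Rodrigues vector) for each of the 23 joints
    aa = R.from_matrix(body_pose_mats.reshape(-1, 3, 3)).as_rotvec()  # (23, 3)
    if global_orient_mats is None:
        go = np.zeros((1, 3))  # identity global orientation as a placeholder
    else:
        go = R.from_matrix(global_orient_mats.reshape(-1, 3, 3)).as_rotvec()
    return np.concatenate([go, aa], axis=0).reshape(1, -1)  # (1, 72)
```

If you want to stay on the GPU, `pytorch3d.transforms.matrix_to_axis_angle` performs the same matrix-to-axis-angle step on tensors.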
Hey @YuliangXiu ,
I'm facing the below error after running the demo script,
%cd /content/ICON/apps
!source activate icon && python infer.py -cfg ../configs/icon-filter.yaml -loop_smpl 100 -loop_cloth 100 -colab -gpu 0
Traceback (most recent call last):
File "infer.py", line 87, in <module>
dataset = TestDataset(
File "/content/ICON/apps/../lib/dataset/TestDataset.py", line 73, in __init__
self.hps = pymaf_net(path_config.SMPL_MEAN_PARAMS,
File "/content/ICON/apps/../lib/pymaf/models/pymaf_net.py", line 361, in pymaf_net
model = PyMAF(smpl_mean_params, pretrained)
File "/content/ICON/apps/../lib/pymaf/models/pymaf_net.py", line 207, in __init__
Regressor(feat_dim=ref_infeat_dim,
File "/content/ICON/apps/../lib/pymaf/models/pymaf_net.py", line 36, in __init__
self.smpl = SMPL(SMPL_MODEL_DIR, batch_size=64, create_transl=False)
File "/content/ICON/apps/../lib/pymaf/models/smpl.py", line 23, in __init__
super().__init__(*args, **kwargs)
File "/usr/local/envs/icon/lib/python3.8/site-packages/smplx/body_models.py", line 149, in __init__
data_struct = Struct(**pickle.load(smpl_file,
_pickle.UnpicklingError: invalid load key, '\x03'.
I tried the below code snippet and it's working fine in colab cell,
import pickle
with open('/content/ICON/data/smpl_related/models/smpl/SMPL_FEMALE.pkl','rb') as f1:
d1=pickle.load(f1,encoding='latin1')
print(d1)
Can anyone please suggest how to tackle this issue?
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='drive.google.com', port=443): Max retries exceeded with url: /uc?id=1tCU5MM1LhRgGou5OpmpjBQbSrYIUoYab (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fbc4ad7deb0>: Failed to establish a new connection: [Errno 110] Connection timed out'))
Hey @YuliangXiu ,
I tried to set up the complete set of dependencies on my Ubuntu 18.04 PC (PyTorch 1.6, CUDA 10.1), installing everything in requirements.txt one by one, and ran into a lot of issues along the way.
After that, I had a problem loading the model in the rembg module, so I manually downloaded the model file, modified rembg accordingly, and adjusted the process_image function in lib/pymaf/utils/imutils.py.
This produces an hps_img of shape [3, 224, 224] in my case, which is then fed to pymaf_net.py at line 282 to extract features using the defined backbone (res50).
But that backbone's first convolution has weights of shape [64, 3, 7, 7] and expects a 4-dimensional (batched) input, which is why I'm getting the dimension-mismatch runtime error.
Note: I have modified image_to_pymaf_tensor in get_transformer() from lib/pymaf/utils/imutils.py to match my PyTorch version:
image_to_pymaf_tensor = transforms.Compose([
    transforms.ToPILImage(),   # added by us
    transforms.Resize(224),
    transforms.ToTensor(),     # added by us
    transforms.Normalize(mean=constants.IMG_NORM_MEAN,
                         std=constants.IMG_NORM_STD)
])
ICON:
[w/ Global Image Encoder]: True
[Image Features used by MLP]: ['normal_F', 'normal_B']
[Geometry Features used by MLP]: ['sdf', 'norm', 'vis', 'cmap']
[Dim of Image Features (local)]: 6
[Dim of Geometry Features (ICON)]: 7
[Dim of MLP's first layer]: 13
initialize network with xavier
initialize network with xavier
Resume MLP weights from ../data/ckpt/icon-filter.ckpt
Resume normal model from ../data/ckpt/normal.ckpt
Using cache found in /home/ujjawal/.cache/torch/hub/NVIDIA_DeepLearningExamples_torchhub
Using cache found in /home/ujjawal/.cache/torch/hub/NVIDIA_DeepLearningExamples_torchhub
Dataset Size: 2
0%| | 0/2 [00:00<?, ?it/s]*********************************
img_np shape: (512, 512, 3)
img_hps shape: torch.Size([3, 224, 224])
input shape x in pymaf_net : torch.Size([3, 224, 224])
input shape x in hmr : torch.Size([3, 224, 224])
0%| | 0/2 [00:01<?, ?it/s]
Traceback (most recent call last):
File "infer.py", line 97, in <module>
for data in pbar:
File "/home/ujjawal/miniconda2/envs/caffe2/lib/python3.7/site-packages/tqdm/std.py", line 1130, in __iter__
for obj in iterable:
File "../lib/dataset/TestDataset.py", line 166, in __getitem__
preds_dict = self.hps(img_hps.to(self.device))
File "/home/ujjawal/miniconda2/envs/caffe2/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "../lib/pymaf/models/pymaf_net.py", line 285, in forward
s_feat, g_feat = self.feature_extractor(x)
File "/home/ujjawal/miniconda2/envs/caffe2/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "../lib/pymaf/models/hmr.py", line 159, in forward
x = self.conv1(x)
File "/home/ujjawal/miniconda2/envs/caffe2/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/ujjawal/miniconda2/envs/caffe2/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 419, in forward
return self._conv_forward(input, self.weight)
File "/home/ujjawal/miniconda2/envs/caffe2/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 416, in _conv_forward
self.padding, self.dilation, self.groups)
RuntimeError: Expected 4-dimensional input for 4-dimensional weight [64, 3, 7, 7], but got 3-dimensional input of size [3, 224, 224] instead
Please share your thoughts on this.
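The traceback itself points at the likely fix: the backbone's first conv expects a batched 4-D tensor (N, C, H, W), but img_hps here is 3-D. A minimal sketch, assuming img_hps is the preprocessed [3, 224, 224] tensor shown in the log, is to restore a batch dimension before the self.hps call:

```python
import torch

img_hps = torch.rand(3, 224, 224)   # stand-in for the preprocessed image
batched = img_hps.unsqueeze(0)      # -> [1, 3, 224, 224], what conv2d expects
```

Whether the batch dimension is best restored inside the modified transform or just before `self.hps(...)` is a judgment call; the custom ToPILImage/ToTensor chain is the most likely place it was lost relative to the stock pipeline.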
Really awesome project!
It would be great to let users provide an input and run inference to get results on Google Colab!
I hope you can set this project up there.
Hello Yuliang! Thank you for your amazing work!
In smpl_feat, I notice there is a property named cmap. What is the meaning of that?
Hi Yuliang, thanks for your contribution to the community, I have a few questions here.
Thanks.
/lib/pixielib/models/lbs.py", line 399, in __init__
data = yaml.load(f)
TypeError: load() missing 1 required positional argument: 'Loader'
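PyYAML 5.1+ deprecated calling `yaml.load` without a Loader, and PyYAML 6 made it a hard error, which is exactly this TypeError. A minimal sketch of the fix for that line (assuming the file is a trusted config shipped with the repo):

```python
import io
import yaml

f = io.StringIO("n_joints: 55")  # stand-in for the opened config file

# Either pass an explicit Loader ...
data = yaml.load(f, Loader=yaml.FullLoader)

# ... or, safer for untrusted input, use safe_load:
f.seek(0)
data = yaml.safe_load(f)
```

Alternatively, pinning `pyyaml<6.0` avoids touching the source at all.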
I'm getting this error when installing locally on my workstation via the Colab bash script:
.../ICON/pytorch3d/pytorch3d/_C.cpython-38-x86_64-linux-gnu.so: undefined symbol: _ZNSt15__exception_ptr13exception_ptr10_M_releaseEv
This happens after installing pytorch3d locally as recommended; Conda has too many conflicts and never resolves.
Installing torch through pip (1.8.2+cu111) works up until the next step, infer.py, because bvh_distance_queries only supports CUDA 11.0. That would most likely require compiling against 11.0, but it will probably lead to more errors, as I don't know what this repository's dependencies require as far as torch goes.
Hello, in the paper you mention that feeding this model into SCANimate yields a textured, animatable avatar. How is that done? Would you be willing to explain?
It's great work! I have a question about the FPS: can it achieve real-time performance?
When using the infer.py sample script, is it possible to control which 3D body model is used/generated? I am interested in whether one can get/use SMPL-X in your infer.py pipeline, or is only SMPL supported?
I see in the installation instructions that one can download the SMPL-X model, but it is marked "(optional, training used)", which makes me wonder about the purpose of SMPL-X in the data folder structure.
Thanks, and again, awesome work.
Colab download problem: mv: cannot stat 'smpl_data': No such file or directory
Because of this, running the test demo on the examples fails with:
FileNotFoundError: [Errno 2] No such file or directory: '/content/ICON/apps/../lib/dataset/../../data/smpl_related/smpl_data/smplx_faces.npy'
Hi, how can I solve this on Colab?
Archive: icon_data.zip
End-of-central-directory signature not found. Either this file is not a zipfile, or it constitutes one disk of a multi-part archive. In the latter case the central directory and zipfile comment will be found on the last disk(s) of this archive.
-The same error applies to the SMPL and mpips zip files.
-I have tried unzipping/viewing the zip files manually, offline.
-Windows doesn't recognise the file content as a "zipped file".
-I have registered with all necessary sites and all logins appear valid, i.e. no errors are shown.
Can you verify that the zip files and links are okay?
I'm not sure what the issue might be.
Dear Yuliang Xiu,
during the package installation on the Colab page, I am getting the following error:
ERROR: pip's legacy dependency resolver does not consider dependency conflicts when selecting packages. This behaviour is the source of the following dependency conflicts.
kornia 0.4.0 requires torch<1.7.0,>=1.6.0, but you'll have torch 1.8.2 which is incompatible.
rembg 2.0.13 requires pillow==9.0.1, but you'll have pillow 9.0.0 which is incompatible.
rembg 2.0.13 requires scipy==1.7.3, but you'll have scipy 1.5.2 which is incompatible.
and when running on the test data
Traceback (most recent call last):
File "infer.py", line 96, in <module>
dataset = TestDataset(dataset_param, device)
File "/content/ICON/apps/../lib/dataset/TestDataset.py", line 94, in __init__
self.smpl_model = self.get_smpl_model(self.smpl_type, self.smpl_gender).to(self.device)
File "/content/ICON/apps/../lib/dataset/TestDataset.py", line 87, in <lambda>
self.get_smpl_model = lambda smpl_type, smpl_gender : smplx.create(
File "/usr/local/envs/icon/lib/python3.8/site-packages/smplx/body_models.py", line 2413, in create
return SMPL(model_path, **kwargs)
File "/usr/local/envs/icon/lib/python3.8/site-packages/smplx/body_models.py", line 145, in __init__
assert osp.exists(smpl_path), 'Path {} does not exist!'.format(
AssertionError: Path /content/ICON/apps/../lib/dataset/../../data/smpl_related/models/smpl does not exist!
Please take a look
Thanks
Sunil
I get this error:
AssertionError: Path /content/ICON/apps/../lib/pymaf/core/../../../data/pymaf_data/../smpl_related/models/smpl does not exist!
Thanks for your great work.
I want to use another GPU to run the demo, so I modified
Line 71 in 53273e0
os.environ['CUDA_VISIBLE_DEVICES'] = "0,1,2,3,4,5,6,7"
Then I run
python infer.py -cfg ../configs/icon-filter.yaml -gpu 1 -in_dir ../examples -out_dir ../results -hps_type pixie
But errors occur.
Then I found out that the implementation of point_to_mesh_distance in kaolin uses .cuda() to force tensors onto GPU 0.
Line 282 in 53273e0
min_dist = torch.zeros((num_points), dtype=points.dtype).to(points.device)
min_dist_idx = torch.zeros((num_points), dtype=torch.long).to(points.device)
dist_type = torch.zeros((num_points), dtype=torch.int32).to(points.device)
and reinstalled kaolin from the patched local copy.
Then I get one of the following errors.
or
Using GPU 2, GPU 3, ... also triggers the error.
Could you help me solve these errors?
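One common workaround (an assumption, not the author's official fix): rather than exposing all GPUs and patching every hard-coded .cuda() call, restrict visibility to the one GPU you want *before* torch/kaolin are imported, so the chosen physical GPU is remapped to cuda:0 inside the process:

```python
import os

# Must run before `import torch` (or kaolin): the process then sees only
# physical GPU 1, which appears internally as cuda:0, so any hard-coded
# .cuda() call lands on the intended device.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"
```

With this in place, `-gpu 0` can be kept on the command line, since device indices are relative to the visible set.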
Hello. Thank you for your awesome work and for providing the source code.
But I have a question: is there any way to extract the texture from the image?