
mica's People

Contributors

isofew · yusukeyusuke · zielon


mica's Issues

ResolvePackageNotFound

Hi,
I am trying to install and test MICA, but when I run conda env create, the solve fails:

Solving environment: failed
ResolvePackageNotFound:

  • pytorch3d==0.7.0=py38_cu113_pyt1110

I tried downgrading Python to 3.8.13, but that did not help.

conda 22.9.0
Linux 5.15.0-52-generic #58-Ubuntu SMP Thu Oct 13 08:03:55 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

Thanks for your help.
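A ResolvePackageNotFound error like this usually means environment.yml pins conda build strings (the suffix after the second `=`) that only exist for the platform and channels the file was exported from. A common workaround is to strip the build strings and let conda re-solve the versions. A minimal sketch, assuming a conda-style dependency list; the helper name and regex are mine, not part of MICA:

```python
# Sketch: drop platform-specific build strings from conda dependency
# lines ('pkg==1.2.3=buildstring' -> 'pkg==1.2.3') so conda can pick
# builds that exist on the current platform. Illustrative helper, not
# part of the MICA repository.
import re

def strip_build_strings(yaml_text: str) -> str:
    """Remove the build-string suffix from conda dependency lines."""
    out = []
    for line in yaml_text.splitlines():
        # matches '  - pkg==1.2.3=buildstring' (single '=' also accepted)
        m = re.match(r"^(\s*-\s*[\w.\-]+==?[\w.!]+)=\S+$", line)
        out.append(m.group(1) if m else line)
    return "\n".join(out)

text = "dependencies:\n  - pytorch3d==0.7.0=py38_cu113_pyt1110\n  - pip"
print(strip_build_strings(text))
```

After stripping, `conda env create -f environment.yml` can choose platform-appropriate builds; pytorch3d itself may still need to come from the pytorch3d channel or a source build.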

CUDA 11 support / RTX30x0?

Traceback (most recent call last):
File "demo.py", line 156, in
main(cfg, args)
File "demo.py", line 124, in main
codedict = mica.encode(images, arcface)
File "/home/yc/testing3dai/mica/MICA/micalib/models/mica.py", line 76, in encode
codedict['arcface'] = F.normalize(self.arcface(arcface_imgs))
File "/home/yc/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/yc/testing3dai/mica/MICA/models/arcface.py", line 179, in forward
x = self.forward_arcface(images)
File "/home/yc/testing3dai/mica/MICA/models/arcface.py", line 188, in forward_arcface
x = self.prelu(x)
File "/home/yc/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/yc/anaconda3/lib/python3.7/site-packages/torch/nn/modules/activation.py", line 1104, in forward
return F.prelu(input, self.weight)
File "/home/yc/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py", line 1499, in prelu
return torch.prelu(input, weight)
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
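This "no kernel image is available" error usually means the installed PyTorch wheel was not compiled for the GPU's compute capability: RTX 30x0 cards are sm_86, which only CUDA 11.x wheels include. A self-contained sketch of that compatibility check; on a real machine the inputs would come from `torch.cuda.get_arch_list()` and `torch.cuda.get_device_capability()`, and the architecture lists below are illustrative:

```python
# Sketch: decide whether a PyTorch build's compiled kernel architectures
# cover a GPU's compute capability. Inputs are plain values here so the
# logic is self-contained; the arch lists are illustrative assumptions.

def kernels_cover_device(arch_list, capability):
    """True if a compiled 'sm_XY' binary can run on a GPU with the given
    (major, minor) capability. A cubin built for sm_XY runs on devices
    of the same major version with minor >= Y."""
    major, minor = capability
    for arch in arch_list:
        if not ararch_ok(arch):
            continue
        sm = int(arch.split("_")[1])
        if sm // 10 == major and sm % 10 <= minor:
            return True
    return False

def arch_ok(arch):
    # 'compute_XY' PTX entries are JIT-compiled for newer GPUs; we only
    # count prebuilt 'sm_XY' binaries in this sketch.
    return arch.startswith("sm_")

def ararch_ok(arch):  # alias kept for the call above
    return arch_ok(arch)

cuda102_archs = ["sm_37", "sm_50", "sm_60", "sm_70", "sm_75"]
cuda113_archs = ["sm_37", "sm_50", "sm_60", "sm_70", "sm_75", "sm_80", "sm_86"]
print(kernels_cover_device(cuda102_archs, (8, 6)))  # False: needs a CUDA 11 build
print(kernels_cover_device(cuda113_archs, (8, 6)))  # True
```

If the check fails, reinstalling PyTorch built against CUDA 11.x (as the MICA environment.yml specifies) is the usual fix for RTX 30x0 cards.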

Could not build with environment.yml

I've seen this issue come up and be resolved in 2022, but it's a new year, and a new problem. When I try to build, I get the following output (I tried using mamba as well, and it didn't work):

conda env create -f ..\MICA\environment.yml
Retrieving notices: ...working... done
Collecting package metadata (repodata.json): done
Solving environment: failed

ResolvePackageNotFound:

  • typing_extensions==4.3.0=py39h06a4308_0
  • sqlite==3.40.0=h5082296_0
  • libstdcxx-ng==11.2.0=h1234567_1
  • libffi==3.4.2=h6a678d5_6
  • ncurses==6.3=h5eee18b_3
  • cuda-cuobjdump==11.6.124=h2eeebcb_0
  • pip==22.2.2=py39h06a4308_0
  • cuda-driver-dev==11.6.55=0
  • python==3.9.15=h7a1cb2a_2
  • mkl_random==1.2.2=py39h51133e4_0
  • mkl_fft==1.3.1=py39hd3c417c_0
  • cffi==1.15.1=py39h5eee18b_2
  • gmp==6.2.1=h295c915_3
  • numpy==1.23.4=py39h14f4228_0
  • urllib3==1.26.12=py39h06a4308_0
  • ffmpeg==4.3=hf484d3e_0
  • zlib==1.2.13=h5eee18b_0
  • nettle==3.7.3=hbbd107a_1
  • xz==5.2.6=h5eee18b_0
  • libgcc-ng==11.2.0=h1234567_1
  • cuda-samples==11.6.101=h8efea70_0
  • zstd==1.5.2=ha4553b6_0
  • cuda-cccl==11.6.55=hf6102b2_0
  • _openmp_mutex==5.1=1_gnu
  • cuda-gdb==11.8.86=0
  • readline==8.2=h5eee18b_0
  • cuda-cuxxfilt==11.6.124=hecbf4f6_0
  • cuda-nsight==11.8.86=0
  • openssl==1.1.1s=h7f8727e_0
  • pysocks==1.7.1=py39h06a4308_0
  • libcufile==1.4.0.31=0
  • libtiff==4.4.0=hecacb30_2
  • cuda-cudart==11.6.55=he381448_0
  • mkl-service==2.4.0=py39h7f8727e_0
  • brotlipy==0.7.0=py39h27cfd23_1003
  • libgomp==11.2.0=h1234567_1
  • libdeflate==1.8=h7f8727e_5
  • requests==2.28.1=py39h06a4308_0
  • numpy-base==1.23.4=py39h31eccc5_0
  • cuda-nvtx==11.6.124=h0630a44_0
  • pillow==9.2.0=py39hace64e9_1
  • ld_impl_linux-64==2.38=h1181459_1
  • idna==3.4=py39h06a4308_0
  • cuda-nvprune==11.6.124=he22ec0a_0
  • certifi==2022.9.24=py39h06a4308_0
  • jpeg==9e=h7f8727e_0
  • cuda-nvml-dev==11.6.55=haa9ef22_0
  • ca-certificates==2022.10.11=h06a4308_0
  • gnutls==3.6.15=he1e5248_0
  • cuda-nvcc==11.6.124=hbba6d2d_0
  • libidn2==2.3.2=h7f8727e_0
  • libpng==1.6.37=hbc83047_0
  • lerc==3.0=h295c915_0
  • setuptools==65.5.0=py39h06a4308_0
  • libcufile-dev==1.4.0.31=0
  • cuda-nvrtc==11.6.124=h020bade_0
  • libtasn1==4.16.0=h27cfd23_0
  • gds-tools==1.4.0.31=0
  • tk==8.6.12=h1ccaba5_0
  • pytorch==1.13.0=py3.9_cuda11.6_cudnn8.3.2_0
  • giflib==5.2.1=h7b6447c_0
  • libiconv==1.16=h7f8727e_2
  • lame==3.100=h7b6447c_0
  • libunistring==0.9.10=h27cfd23_0
  • freetype==2.12.1=h4a9f257_0
  • cuda-cudart-dev==11.6.55=h42ad0f4_0
  • lcms2==2.12=h3be6417_0
  • cuda-nvrtc-dev==11.6.124=h249d397_0
  • libwebp-base==1.2.4=h5eee18b_0
  • bzip2==1.0.8=h7b6447c_0
  • lz4-c==1.9.3=h295c915_1
  • intel-openmp==2021.4.0=h06a4308_3561
  • mkl==2021.4.0=h06a4308_640
  • cuda-cupti==11.6.124=h86345e5_0
  • cryptography==38.0.1=py39h9ce1e76_0
  • openh264==2.1.1=h4ff587b_0
  • libwebp==1.2.4=h11a3e52_0

I have a question about 3D Face Reconstruction

Is it correct that MICA uses the FLAME model for 3D face reconstruction?

If so, does MICA predict the 3DMM parameters, and the FLAME model then performs the 3D modeling from those parameters?
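The pipeline the question describes — a network predicting identity (shape) parameters that a 3DMM such as FLAME turns into a mesh — can be sketched as a linear shape model. This is a conceptual sketch with made-up dimensions, not FLAME's actual basis (FLAME itself uses 5023 vertices and up to 300 shape components):

```python
import numpy as np

# Minimal 3DMM sketch: identity is a low-dimensional code, and the mesh
# is the template plus a linear combination of shape basis vectors.
# Dimensions here are illustrative, not FLAME's real ones.
n_verts, n_shape = 100, 8
rng = np.random.default_rng(0)

template = rng.normal(size=(n_verts, 3))              # mean face geometry
shape_basis = rng.normal(size=(n_shape, n_verts, 3))  # PCA-like shape basis
shape_code = np.zeros(n_shape)                        # what the encoder predicts
shape_code[0] = 1.5

# Reconstructed geometry: template deformed by the predicted code.
verts = template + np.tensordot(shape_code, shape_basis, axes=1)
print(verts.shape)  # (100, 3)
```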

ImportError

ImportError: /usr/local/lib/python3.7/dist-packages/pytorch3d/_C.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _ZNK2at6Tensor7is_cudaEv

How to generate UV_texture?

Hello, I have a question. MICA's results are ['flame_verts_shape', 'flame_shape_code', 'pred_canonical_shape_vertices', 'pred_shape_code', 'faceid']. I don't know how to generate a UV texture from these. Thank you!

performance

Why does the red line get better performance than the blue line, while the green line gets worse performance than the yellow line?
You state in your paper that this may be due to fine-tuning of partial layers or the entire pipeline leading to heavy overfitting of the training data. Why does this happen to DECA but not to MICA?


NoW benchmark public validation set results

Hi,

I was checking MICA's performance on the validation and test sets released to the public by the NoW benchmark, and wanted to verify whether the numbers I am getting are correct:

Validation Set:
median: 1.267735, mean: 2.329532, std: 3.083015

I wanted to know if you have run the model on the public validation dataset and whether the numbers match, as they are far from the leaderboard results.

perspective camera model

Hello, I would like to obtain the parameters of the perspective camera model, such as the optical center position, focal length, and camera orientation, but I cannot find them in the code. Which part of the code computes these parameters?

Face tracking replication

Hi Zielon,

Thanks for the great work! Are there any updates on adding the face tracking, to test the shape ID with expression tracking?

insightface is missing app.py and utils.py

demo.py uses from insightface.app import FaceAnalysis, from insightface.app.common import Face, and from insightface.utils import face_align, but I cannot find app.py or utils.py.

Training procedure

What are the training commands and the required data-processing scripts? Also, some of the dataset links do not seem to work; is there another location for the datasets in FLAME topology?

What is the arcface .npy file?

I notice that the input consists of two files: a 2D image and an arcface image (.npy). How can I get or generate the arcface image file? Shouldn't the input be only the 2D image? And what does the arcface image do?

Is it possible to extend your code to generate 608 dense landmarks?

I want to know whether your code can be used as a 608-dense-landmark generator, like the paper "3D Face Reconstruction with Dense Landmarks".

I notice that your code includes blendshapes or something similar, and your paper shows it can reconstruct 3D faces with matching facial pose.

If it can be used to produce 608 dense facial landmarks, I would like to take part in developing it.

Texture on 3D head obj?

Hello, is it possible to apply a texture to the 3D .obj? If yes, do you have any code for that, or are you planning to add UV textures?

environment error: cannot run demo.py

Hi,
I've followed the instructions to install the conda environment, but I cannot run demo.py successfully; this is the error:

ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject

I searched online and upgraded numpy from 1.21.2 to 1.23.1 using pip install -U numpy.

Then I ran into another error:

ImportError: numpy.core.multiarray failed to import

It is due to a version incompatibility, but which version is the correct one? Can you please help me?
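The "Expected 96 from C header, got 88 from PyObject" error means a compiled extension (here insightface's Cython module) was built against a newer NumPy C-API than the NumPy present at runtime, so upgrading NumPy is the right direction. A pure-Python sketch of the compatibility rule; the helper and the exact required version are illustrative, inferred from the report above rather than from insightface's build metadata:

```python
# Sketch: NumPy's C-API is backward compatible, so an extension built
# against NumPy X works with any runtime NumPy >= X, but may fail with
# an older one. Helper names and the 1.23.1 threshold are illustrative.

def version_tuple(v: str):
    return tuple(int(p) for p in v.split("."))

def abi_compatible(runtime: str, built_against: str) -> bool:
    """True if the runtime NumPy is at least as new as the NumPy the
    extension was compiled against."""
    return version_tuple(runtime) >= version_tuple(built_against)

print(abi_compatible("1.21.2", "1.23.1"))  # False -> upgrade needed
print(abi_compatible("1.23.1", "1.23.1"))  # True
```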

Error when running demo.py

I cloned the project, ran the install.sh file without errors, then I activated the environment and when I tried to run demo.py, I got:

`(MICA) patrick@Alien:~/MICA$ python demo.py
2022-11-17 23:47:03.894 | INFO | models.flame:init:54 - [FLAME] creating the FLAME Decoder

Traceback (most recent call last):
File "demo.py", line 157, in
main(cfg, args)
File "demo.py", line 105, in main
mica = util.find_model_using_name(model_dir='micalib.models', model_name=cfg.model.name)(cfg, device)
File "/home/patrick/MICA/micalib/models/mica.py", line 37, in init
self.initialize()
File "/home/patrick/MICA/micalib/base_model.py", line 45, in initialize
self.create_flame(self.cfg.model)
File "/home/patrick/MICA/micalib/base_model.py", line 53, in create_flame
self.flame = FLAME(model_cfg).to(self.device)
File "/home/patrick/anaconda3/envs/MICA/lib/python3.8/site-packages/torch/nn/modules/module.py", line 907, in to
return self._apply(convert)
File "/home/patrick/anaconda3/envs/MICA/lib/python3.8/site-packages/torch/nn/modules/module.py", line 601, in _apply
param_applied = fn(param)
File "/home/patrick/anaconda3/envs/MICA/lib/python3.8/site-packages/torch/nn/modules/module.py", line 905, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
File "/home/patrick/anaconda3/envs/MICA/lib/python3.8/site-packages/torch/cuda/init.py", line 216, in _lazy_init
torch._C._cuda_init()
RuntimeError: CUDA unknown error - this may be due to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero.
`

Any idea how to solve this?

flame-fitting release

Thanks for your great work. I am not sure whether you fit scans using Rubikplayer/flame-fitting. If you have a better way to fit 3D scans with FLAME, could you share your flame-fitting code? Thank you very much.

How do we change the output shape of 3D face?

Hi, I have used your demo.py code. The 3D face output has a fixed 5023 vertices and 9976 faces in the .obj file. How can we change the number of vertices and faces in this output file? I want a higher-resolution output.

Also, can we still use the pre-trained model if we change these parameters?

Thanks

How to get generic_model.pkl

I get this error when running:

python demo.py

FileNotFoundError: [Errno 2] No such file or directory: './data/FLAME2020/generic_model.pkl'

Questions About Face Tracking

Would you please tell me how to avoid the face-shaking problem of Face2Face? I think the per-frame expression parameters cause the face shaking in the reconstructed video.

Code Problem about demo.py

Thanks for your great work!
I met this error when I tried to run demo.py:
Traceback (most recent call last):
File "demo.py", line 149, in
main(cfg, args)
File "demo.py", line 126, in main
rendering = mica.render.render_mesh(mesh[None])
File "/MICA/micalib/renderer.py", line 69, in render_mesh
rendering = self.renderer(meshes).permute(0, 3, 1, 2)
File "/home/anaconda3/envs/MICA/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/anaconda3/envs/MICA/lib/python3.8/site-packages/pytorch3d/renderer/mesh/renderer.py", line 60, in forward
images = self.shader(fragments, meshes_world, **kwargs)
File "/home/anaconda3/envs/MICA/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/anaconda3/envs/MICA/lib/python3.8/site-packages/pytorch3d/renderer/mesh/shader.py", line 125, in forward
colors = phong_shading(
File "/home/anaconda3/envs/MICA/lib/python3.8/site-packages/pytorch3d/renderer/mesh/shading.py", line 90, in phong_shading
ambient, diffuse, specular = _apply_lighting(
File "/home/anaconda3/envs/MICA/lib/python3.8/site-packages/pytorch3d/renderer/mesh/shading.py", line 36, in _apply_lighting
camera_position=cameras.get_camera_center(),
File "/home/anaconda3/envs/MICA/lib/python3.8/site-packages/pytorch3d/renderer/cameras.py", line 179, in get_camera_center
P = w2v_trans.inverse().get_matrix()
File "/home/anaconda3/envs/MICA/lib/python3.8/site-packages/pytorch3d/transforms/transform3d.py", line 277, in inverse
i_matrix = self._get_matrix_inverse()
File "/home/anaconda3/envs/MICA/lib/python3.8/site-packages/pytorch3d/transforms/transform3d.py", line 247, in _get_matrix_inverse
return torch.inverse(self._matrix)
torch._C._LinAlgError: cusolver error: CUSOLVER_STATUS_EXECUTION_FAILED, when calling cusolverDnSgetrf( handle, m, n, dA, ldda, static_cast<float*>(dataPtr.get()), ipiv, info). This error may appear if the input matrix contains NaN.

Cuda out of memory

Hi~ Thanks a lot for your great work.
Recently I was running the demo and got an out-of-memory error, shown below.
I am using a Linux (Ubuntu) server with a single RTX 3090 (24 GB).
I was wondering whether the problem is caused by my device. Could you give me some suggestions, as I have no more GPU resources?

(MICA) root@I11eaa2113200301aa4:/home/MICA# python demo.py
2023-04-11 14:03:50.577 | INFO     | models.flame:__init__:54 - [FLAME] creating the FLAME Decoder
Traceback (most recent call last):
  File "/home/MICA/demo.py", line 160, in <module>
    main(cfg, args)
  File "/home/MICA/demo.py", line 112, in main
    mica = util.find_model_using_name(model_dir='micalib.models', model_name=cfg.model.name)(cfg, device)
  File "/home/MICA/micalib/models/mica.py", line 37, in __init__
    self.initialize()
  File "/home/MICA/micalib/base_model.py", line 44, in initialize
    self.create_flame(self.cfg.model)
  File "/home/MICA/micalib/base_model.py", line 52, in create_flame
    self.flame = FLAME(model_cfg).to(self.device)
  File "/usr/local/miniconda3/envs/MICA/lib/python3.9/site-packages/torch/nn/modules/module.py", line 987, in to
    return self._apply(convert)
  File "/usr/local/miniconda3/envs/MICA/lib/python3.9/site-packages/torch/nn/modules/module.py", line 662, in _apply
    param_applied = fn(param)
  File "/usr/local/miniconda3/envs/MICA/lib/python3.9/site-packages/torch/nn/modules/module.py", line 985, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
RuntimeError: CUDA error: out of memory

Possible typing error at line 149 of the "environment.yml" file

While trying to install and create the MICA env by running the "install.sh" script, conda showed the following error when running "conda env create -f environment.yml":

Installing conda env...
Collecting package metadata (repodata.json): done
Solving environment: failed

ResolvePackageNotFound: 
  - pytorch3d-0.7.0-py38_cu113_pyt1110 (line 149)

SOLUTION: use pytorch3d=0.7.0=py38_cu113_pyt1110 (replace the "-" separators with "=").
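The fix described in the SOLUTION line can also be applied programmatically to environment.yml. A tiny sketch; the helper is illustrative and not part of the repository:

```python
# Sketch: rewrite the malformed pytorch3d pin in environment.yml from
# '-'-separated ('pytorch3d-0.7.0-py38_cu113_pyt1110') to the
# '='-separated form conda expects. Illustrative helper, not repo code.
BROKEN = "pytorch3d-0.7.0-py38_cu113_pyt1110"
FIXED = "pytorch3d=0.7.0=py38_cu113_pyt1110"

def fix_line(line: str) -> str:
    return line.replace(BROKEN, FIXED)

print(fix_line("  - " + BROKEN))  # '  - pytorch3d=0.7.0=py38_cu113_pyt1110'
```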

Do you plan to publish your face tracking algorithm?

I am already using DECA and EMOCA for face tracking and was hoping this was an improvement over both. Unfortunately, this project only implements static head generation with no facial tracking. Further, the output mesh is not in the original orientation (as observed in the input images). For now, both DECA and EMOCA are superior for the purpose of capturing temporal facial data. I see in your video that you are doing it with MICA as well, so clearly there is some potential here. As for static head generation, there is negligible difference over DECA. Please consider sharing the rest of your code.

Can't create env with environment.yml / ResolvePackageNotFound


ResolvePackageNotFound:
  - libopencv==4.5.5=py38hd60e7aa_0
  - cairo==1.16.0=ha12eb4b_1010
  - gettext==0.19.8.1=h73d1719_1008
  - lcms2==2.12=h3be6417_0
  - pytorch==1.11.0=py3.8_cuda11.3_cudnn8.2.0_0
  - cython==0.29.28=py38h295c915_0
  - qt==5.12.9=ha98a1a1_5
  - bzip2==1.0.8=h7b6447c_0
  - lerc==3.0=h9c3ff4c_0
  - cryptography==36.0.0=py38h9ce1e76_0
  - libuv==1.40.0=h7b6447c_0
  - xz==5.2.5=h7b6447c_0
  - dbus==1.13.6=h5008d03_3
  - opencv==4.5.5=py38h578d9bd_0
  - libllvm13==13.0.1=hf817b99_2
  - fontconfig==2.13.96=h8e229c2_2
  - xorg-kbproto==1.0.7=h7f98852_1002
  - mkl_fft==1.3.1=py38hd3c417c_0
  - xorg-libsm==1.2.3=hd9c2040_1000
  - libuuid==2.32.1=h7f98852_1000
  - xorg-libxrender==0.9.10=h7f98852_1003
  - tifffile==2020.10.1=py38hdd07704_2
  - intel-openmp==2021.4.0=h06a4308_3561
  - yaml==0.2.5=h7b6447c_0
  - xorg-renderproto==0.11.1=h7f98852_1002
  - libprotobuf==3.19.4=h780b84a_0
  - libiconv==1.16=h516909a_0
  - libunistring==0.9.10=h27cfd23_0
  - _openmp_mutex==4.5=1_llvm
  - pthread-stubs==0.4=h36c2ea0_1001
  - openh264==2.1.1=h4ff587b_0
  - mysql-common==8.0.28=haf5c9bc_2
  - libwebp-base==1.2.2=h7f8727e_0
  - tornado==6.1=py38h27cfd23_0
  - libev==4.33=h516909a_1
  - certifi==2022.6.15=py38h06a4308_0
  - ffmpeg==4.3.2=h37c90e5_3
  - libstdcxx-ng==11.2.0=he4da1e4_14
  - libffi==3.4.2=h7f98852_5
  - expat==2.4.8=h27087fc_0
  - pillow==9.0.1=py38h22f2fdc_0
  - pytorch3d==0.7.0=py38_cu113_pyt1110
  - xorg-libxext==1.3.4=h7f98852_1
  - libxml2==2.9.12=h885dcf4_1
  - libgcc-ng==11.2.0=h1d223b6_14
  - libpq==14.2=hd57d9b9_0
  - gnutls==3.6.15=he1e5248_0
  - libblas==3.9.0=12_linux64_mkl
  - libdeflate==1.8=h7f98852_0
  - openssl==1.1.1o=h7f8727e_0
  - xorg-inputproto==2.3.2=h7f98852_1002
  - zlib==1.2.11=h166bdaf_1014
  - libpng==1.6.37=hbc83047_0
  - pip==21.2.4=py38h06a4308_0
  - krb5==1.19.3=h3790be6_0
  - c-ares==1.18.1=h7f98852_0
  - scikit-image==0.17.2=py38hdf5156a_0
  - nspr==4.32=h9c3ff4c_1
  - jbig==2.1=h7f98852_2003
  - liblapack==3.9.0=12_linux64_mkl
  - xorg-libxau==1.0.9=h7f98852_0
  - libclang==13.0.1=default_hc23dcda_0
  - nettle==3.7.3=hbbd107a_1
  - tk==8.6.12=h27826a3_0
  - graphite2==1.3.13=h58526e2_1001
  - cffi==1.15.0=py38hd667e15_1
  - pandas==1.4.1=py38h295c915_1
  - giflib==5.2.1=h7b6447c_0
  - libssh2==1.10.0=ha56f1ee_2
  - libogg==1.3.4=h7f98852_1
  - lz4-c==1.9.3=h295c915_1
  - ca-certificates==2022.4.26=h06a4308_0
  - freeglut==3.2.2=h9c3ff4c_1
  - libxcb==1.13=h7f98852_1004
  - freetype==2.11.0=h70c0345_0
  - jpeg==9d=h7f8727e_0
  - xorg-libxi==1.7.10=h7f98852_0
  - libevent==2.1.10=h9b69904_4
  - bottleneck==1.3.4=py38hce1f21e_0
  - icu==69.1=h9c3ff4c_0
  - libgfortran-ng==7.5.0=ha8ba4b0_17
  - mkl_random==1.2.2=py38h51133e4_0
  - mkl-service==2.4.0=py38h7f8727e_0
  - libedit==3.1.20191231=he28a2e2_2
  - py-opencv==4.5.5=py38he5a9106_0
  - pyyaml==5.3.1=py38h7b6447c_1
  - libtasn1==4.16.0=h27cfd23_0
  - lame==3.100=h7b6447c_0
  - x264==1!161.3030=h7f98852_1
  - keyutils==1.6.1=h166bdaf_0
  - libglib==2.70.2=h174f98d_4
  - libnsl==2.0.0=h7f98852_0
  - brotlipy==0.7.0=py38h27cfd23_1003
  - libvorbis==1.3.7=h9c3ff4c_0
  - zstd==1.5.2=ha95c52a_0
  - libglu==9.0.0=he1b5a44_1001
  - cudatoolkit==11.3.1=h2bc3f7f_2
  - pcre==8.45=h9c3ff4c_0
  - llvm-openmp==13.0.1=he0ac6c6_1
  - liblapacke==3.9.0=12_linux64_mkl
  - gstreamer==1.18.5=h9f60fe5_3
  - nss==3.76=h2350873_0
  - xorg-libxfixes==5.0.3=h7f98852_1004
  - xorg-libx11==1.7.2=h7f98852_0
  - libtiff==4.3.0=h6f004c6_2
  - scipy==1.7.3=py38hc147768_0
  - setuptools==61.2.0=py38h06a4308_0
  - hdf5==1.12.1=h69dfa17_1
  - pysocks==1.7.1=py38h06a4308_0
  - libcurl==7.82.0=h7bff187_0
  - xorg-fixesproto==5.0=h7f98852_1002
  - harfbuzz==3.4.0=hb4a5f5f_0
  - xorg-xextproto==7.3.0=h7f98852_1002
  - _libgcc_mutex==0.1=conda_forge
  - libzlib==1.2.11=h166bdaf_1014
  - matplotlib-base==3.3.1=py38h817c723_0
  - gst-plugins-base==1.18.5=hf529b03_3
  - libwebp==1.2.2=h55f646e_0
  - readline==8.1.2=h7f8727e_1
  - xorg-libice==1.0.10=h7f98852_0
  - portalocker==2.4.0=py38h578d9bd_0
  - libopus==1.3.1=h7f98852_1
  - libgfortran4==7.5.0=ha8ba4b0_17
  - kiwisolver==1.2.0=py38hfd86e86_0
  - mysql-libs==8.0.28=h28c427c_2
  - libidn2==2.3.2=h7f8727e_0
  - pixman==0.40.0=h36c2ea0_0
  - xorg-xproto==7.0.31=h7f98852_1007
  - numpy-base==1.21.2=py38h79a1101_0
  - libxkbcommon==1.0.3=he3ba5ed_0
  - numpy==1.21.2=py38h20f2e39_0
  - xorg-libxdmcp==1.1.3=h7f98852_0
  - python==3.8.13=h582c2e5_0_cpython
  - alsa-lib==1.2.3=h516909a_0
  - wget==1.20.1=h20c2e04_0
  - sqlite==3.38.2=hc218d9a_0
  - pywavelets==1.1.1=py38h7b6447c_2
  - jasper==2.0.33=ha77e612_0
  - numexpr==2.8.1=py38h6abb31d_0
  - mkl==2021.4.0=h06a4308_640
  - libcblas==3.9.0=12_linux64_mkl
  - ld_impl_linux-64==2.36.1=hea4e1c9_2
  - cytoolz==0.11.0=py38h7b6447c_0
  - ncurses==6.3=h7f8727e_2
  - libnghttp2==1.47.0=h727a467_0
  - gmp==6.2.1=h2531618_2

Expression parameters

Hi,

I wanted to ask if it is possible to add expression parameters generated by DECA to MICA. I see that the expression parameters generated by DECA are a 50-dimensional vector whereas for MICA it is a 100-dimensional vector.

How to apply texture on the 3D mesh obtained?

I tried DECA, which outputs an .obj file with texture. In this repo, the obtained 3D model has no texture. How do I get it? I am a beginner in this, so maybe the nomenclature I am using is not accurate.

error demo.py - ValueError: numpy.ndarray

Hi,
Thanks for your help, the conda env is installed now.
But when I launch demo.py I get this error:

Traceback (most recent call last):
File "demo.py", line 28, in
from insightface.app import FaceAnalysis
File "/home/titof/anaconda3/envs/MICA/lib/python3.8/site-packages/insightface/init.py", line 18, in
from . import app
File "/home/titof/anaconda3/envs/MICA/lib/python3.8/site-packages/insightface/app/init.py", line 2, in
from .mask_renderer import *
File "/home/titof/anaconda3/envs/MICA/lib/python3.8/site-packages/insightface/app/mask_renderer.py", line 8, in
from ..thirdparty import face3d
File "/home/titof/anaconda3/envs/MICA/lib/python3.8/site-packages/insightface/thirdparty/face3d/init.py", line 3, in
from . import mesh
File "/home/titof/anaconda3/envs/MICA/lib/python3.8/site-packages/insightface/thirdparty/face3d/mesh/init.py", line 9, in
from .cython import mesh_core_cython
File "insightface/thirdparty/face3d/mesh/cython/mesh_core_cython.pyx", line 1, in init insightface.thirdparty.face3d.mesh.cython.mesh_core_cython
ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject

Could you help me? Thanks.

the performance based on DECA and MICA

Table 5 in your paper shows that DECA-based methods lead to heavy overfitting when fine-tuning partial layers or the entire pipeline, while MICA gets better performance. Why is that? Thanks.

Hello, I am using M1 Macbook pro.

Macs cannot use CUDA as far as I know. I encountered the error "Torch not compiled with CUDA enabled" while running demo.py. Below is the message I received.

/Users/user/miniforge3/lib/python3.9/site-packages/pytorch3d/renderer/opengl/init.py:16: UserWarning: Can't import EGL, not importing MeshRasterizerOpenGL. This might happen if your Python application imported OpenGL with a non-EGL backend before importing PyTorch3D, or if you don't have pyopengl installed as part of your Python distribution.
warnings.warn(
2022-07-29 10:55:41.258 | INFO | models.flame:init:54 - [FLAME] creating the FLAME Decoder
2022-07-29 10:55:42.083 | INFO | models.flame:init:54 - [FLAME] creating the FLAME Decoder
2022-07-29 10:55:42.109 | INFO | micalib.models.mica:load_model:59 - [MICA] Checkpoint not available starting from scratch!
/Users/user/miniforge3/lib/python3.9/site-packages/pytorch3d/io/obj_io.py:531: UserWarning: No mtl file provided
warnings.warn("No mtl file provided")
Traceback (most recent call last):
File "/Users/user/MICA/demo.py", line 149, in
main(cfg, args)
File "/Users/user/MICA/demo.py", line 105, in main
mica = util.find_model_using_name(model_dir='micalib.models', model_name=cfg.model.name)(cfg, device)
File "/Users/user/MICA/micalib/models/mica.py", line 37, in init
self.initialize()
File "/Users/user/MICA/micalib/base_model.py", line 48, in initialize
self.setup_renderer(self.cfg.model)
File "/Users/user/MICA/micalib/base_model.py", line 91, in setup_renderer
self.render = MeshShapeRenderer(obj_filename=model_cfg.topology_path)
File "/Users/user/MICA/micalib/renderer.py", line 33, in init
faces = faces.verts_idx[None, ...].cuda()
File "/Users/user/miniforge3/lib/python3.9/site-packages/torch/cuda/init.py", line 211, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

Could you please tell me how I can work around or fix this error?
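"Torch not compiled with CUDA enabled" is expected on Apple Silicon: the traceback shows the unconditional `.cuda()` call in micalib/renderer.py failing. The usual workaround is to select a device dynamically instead of assuming CUDA. A hedged sketch of that selection logic; the availability flags are plain booleans here so the snippet is self-contained, while a real patch would consult `torch.cuda.is_available()` and `torch.backends.mps.is_available()`:

```python
# Sketch of a device-selection fallback for machines without CUDA, such
# as Apple-Silicon Macs. In the actual code one would replace the
# unconditional .cuda() call with .to(pick_device(...)).

def pick_device(cuda_available: bool, mps_available: bool) -> str:
    if cuda_available:
        return "cuda"
    if mps_available:  # Apple-Silicon GPU backend (PyTorch >= 1.12)
        return "mps"
    return "cpu"

print(pick_device(False, True))   # 'mps' on an M1 Mac
print(pick_device(False, False))  # 'cpu'
```

Note that the rasterization path may still require CUDA even after this change, since PyTorch3D's GPU kernels are CUDA-only.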

CUDA 10.1

Hi,

Thanks for the nice work you share, I find it very interesting.

I'm trying to install it, but I found that I'm running a different CUDA version, 10.1.

nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243

The environment.yml in the repository installs a newer version of cudatoolkit (11.3.1=h2bc3f7f_2), and this results in a pytorch3d error when running demo.py.

from pytorch3d import _C
ImportError: libtorch_cuda_cu.so: cannot open shared object file: No such file or directory

It looks related to this:
facebookresearch/pytorch3d#1013 (comment)

Some package versions should be changed to match this CUDA version.

Could someone point me in the right direction?

Best regards.

Unable to use , cannot solve environment

Collecting package metadata (repodata.json): done
Solving environment: failed

ResolvePackageNotFound:

  • python==3.9.15=h7a1cb2a_2
  • libidn2==2.3.2=h7f8727e_0
  • pillow==9.2.0=py39hace64e9_1
  • cuda-nvprune==11.6.124=he22ec0a_0
  • lcms2==2.12=h3be6417_0
  • cuda-nvrtc==11.6.124=h020bade_0
  • libiconv==1.16=h7f8727e_2
  • cuda-nvtx==11.6.124=h0630a44_0
  • giflib==5.2.1=h7b6447c_0
  • nettle==3.7.3=hbbd107a_1
  • freetype==2.12.1=h4a9f257_0
  • libcufile-dev==1.4.0.31=0
  • libtiff==4.4.0=hecacb30_2
  • readline==8.2=h5eee18b_0
  • sqlite==3.40.0=h5082296_0
  • cuda-gdb==11.8.86=0
  • gmp==6.2.1=h295c915_3
  • ffmpeg==4.3=hf484d3e_0
  • mkl_fft==1.3.1=py39hd3c417c_0
  • certifi==2022.9.24=py39h06a4308_0
  • pytorch==1.13.0=py3.9_cuda11.6_cudnn8.3.2_0
  • libtasn1==4.16.0=h27cfd23_0
  • cuda-cuxxfilt==11.6.124=hecbf4f6_0
  • cuda-nvml-dev==11.6.55=haa9ef22_0
  • lame==3.100=h7b6447c_0
  • pip==22.2.2=py39h06a4308_0
  • cuda-cuobjdump==11.6.124=h2eeebcb_0
  • mkl==2021.4.0=h06a4308_640
  • bzip2==1.0.8=h7b6447c_0
  • cuda-cccl==11.6.55=hf6102b2_0
  • libstdcxx-ng==11.2.0=h1234567_1
  • numpy-base==1.23.4=py39h31eccc5_0
  • openh264==2.1.1=h4ff587b_0
  • brotlipy==0.7.0=py39h27cfd23_1003
  • cffi==1.15.1=py39h5eee18b_2
  • idna==3.4=py39h06a4308_0
  • libwebp==1.2.4=h11a3e52_0
  • libffi==3.4.2=h6a678d5_6
  • cryptography==38.0.1=py39h9ce1e76_0
  • gds-tools==1.4.0.31=0
  • libgomp==11.2.0=h1234567_1
  • cuda-nvcc==11.6.124=hbba6d2d_0
  • libwebp-base==1.2.4=h5eee18b_0
  • numpy==1.23.4=py39h14f4228_0
  • openssl==1.1.1s=h7f8727e_0
  • requests==2.28.1=py39h06a4308_0
  • ca-certificates==2022.10.11=h06a4308_0
  • zlib==1.2.13=h5eee18b_0
  • cuda-samples==11.6.101=h8efea70_0
  • urllib3==1.26.12=py39h06a4308_0
  • _openmp_mutex==5.1=1_gnu
  • libunistring==0.9.10=h27cfd23_0
  • intel-openmp==2021.4.0=h06a4308_3561
  • setuptools==65.5.0=py39h06a4308_0
  • zstd==1.5.2=ha4553b6_0
  • libdeflate==1.8=h7f8727e_5
  • gnutls==3.6.15=he1e5248_0
  • mkl_random==1.2.2=py39h51133e4_0
  • cuda-driver-dev==11.6.55=0
  • tk==8.6.12=h1ccaba5_0
  • pysocks==1.7.1=py39h06a4308_0
  • ncurses==6.3=h5eee18b_3
  • cuda-cudart==11.6.55=he381448_0
  • xz==5.2.6=h5eee18b_0
  • libcufile==1.4.0.31=0
  • lerc==3.0=h295c915_0
  • cuda-cupti==11.6.124=h86345e5_0
  • mkl-service==2.4.0=py39h7f8727e_0
  • cuda-cudart-dev==11.6.55=h42ad0f4_0
  • lz4-c==1.9.3=h295c915_1
  • cuda-nvrtc-dev==11.6.124=h249d397_0
  • jpeg==9e=h7f8727e_0
  • cuda-nsight==11.8.86=0
  • libpng==1.6.37=hbc83047_0
  • ld_impl_linux-64==2.38=h1181459_1
  • libgcc-ng==11.2.0=h1234567_1

_pickle.UnpicklingError: the STRING opcode argument must be quoted

I'm on Windows 11, Ubuntu on WSL. I installed using the shell script, and then manually downloaded and placed the buffalo and antelope zip files.

ERROR MESSAGE

(MICA) eric@CPFX02:/mnt/c/Users/Eric/Documents/ML/MICA$ python demo.py
Traceback (most recent call last):
File "/mnt/c/Users/Eric/Documents/ML/MICA/demo.py", line 156, in
main(cfg, args)
File "/mnt/c/Users/Eric/Documents/ML/MICA/demo.py", line 109, in main
mica = util.find_model_using_name(model_dir='micalib.models', model_name=cfg.model.name)(cfg, device)
File "/mnt/c/Users/Eric/Documents/ML/MICA/micalib/models/mica.py", line 35, in init
super(MICA, self).init(config, device, tag)
File "/mnt/c/Users/Eric/Documents/ML/MICA/micalib/base_model.py", line 40, in init
self.masking = Masking(config)
File "/mnt/c/Users/Eric/Documents/ML/MICA/utils/masking.py", line 47, in init
ss = pickle.load(f, encoding="latin1")
_pickle.UnpicklingError: the STRING opcode argument must be quoted
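"the STRING opcode argument must be quoted" is the classic symptom of a protocol-0 pickle whose Unix line endings were converted to Windows CRLF, for example by git's autocrlf during a checkout later used under WSL. Converting the bytes back usually repairs the file; a sketch on crafted bytes (that the masking .pkl from the traceback is the damaged file is an assumption):

```python
import pickle

# Sketch: repair a protocol-0 pickle whose '\n' line endings were turned
# into '\r\n' (git autocrlf / text-mode transfer). The file involved
# would be the .pkl loaded in utils/masking.py; we demonstrate on
# crafted in-memory bytes rather than the real file.

def fix_crlf_pickle(data: bytes) -> bytes:
    return data.replace(b"\r\n", b"\n")

good = b"S'hello'\np0\n."                  # a valid protocol-0 STRING pickle
corrupted = good.replace(b"\n", b"\r\n")   # simulate CRLF conversion

try:
    pickle.loads(corrupted)
except pickle.UnpicklingError as e:
    print(e)  # the STRING opcode argument must be quoted

print(pickle.loads(fix_crlf_pickle(corrupted)))  # hello
```

Re-cloning with `git config core.autocrlf false`, or downloading the .pkl in binary mode, avoids the corruption in the first place.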

Some questions about Dataset

Thanks for your great work!
I downloaded some of the eight datasets mentioned in your documentation,
but I can't get the subset zip file whose layout is:

root
  FLAME_parameters
    actor_id
      *.npz
  registrations
    actor_id
      *.obj

Could you tell me how to get this subset zip file? Should I generate it myself?

Thanks a lot!
