NextFace

NextFace is a lightweight PyTorch library for high-fidelity 3D face reconstruction from monocular image(s), where scene attributes (3D geometry; diffuse, specular and roughness reflectance; pose; camera parameters; and scene illumination) are estimated. It is a first-order optimization method that uses the PyTorch autograd engine and ray tracing to fit a statistical morphable model to the input image(s).

A demo is available on YouTube:

Practical Face Reconstruction via Differentiable Ray Tracing

News

  • 19 March 2023: fixed a bug in the optimizer where the gradients were not activated for the camera pose (rotation and translation). I also added a new optimization strategy for the second and third stages, which should improve overall performance. Please pull.
  • 21 June 2022: many thanks to Jack Saunders for adding this new feature to NextFace: support for MediaPipe as a replacement for the FAN landmarks detector. MediaPipe produces much more stable and accurate results than FAN. To try MediaPipe, pull the new version of the code and install it with pip install mediapipe. By default, the landmarks detector is now MediaPipe; if you want to switch back to FAN, edit the optimConfig.ini file (set lamdmarksDetectorType = 'fan').
  • 01 May 2022: if you want to generate an animation like the GIF files in the readme, which rotate the reconstruction around the vertical axis, run the replay.py script and give it the path of the pickle file that contains the optimized scene attributes (located in checkpoints/stage3_output.pickle).
  • 26 April 2022: I added export of the estimated light map (as an environment map). This can be useful if you want to render the face with other rendering engines (Unreal, Unity, OpenGL). Please pull the code. You can choose to export the light map as PNG or EXR (check optimConfig.ini).
  • 25 April 2022: if you want to generate textures at higher resolutions (1024x1024 or 2048x2048), I have added these two maps here: https://github.com/abdallahdib/NextFace/releases. To use them, download uvParametrization.2048.pickle and uvParametrization.1024.pickle, put them inside the baselMorphableModel directory, and change textureResolution in optimConfig.ini to 1024 or 2048. Also don't forget to pull the latest code. Please note that with these large uv maps the optimization will require more CPU/GPU memory.
  • 24 April 2022: added a colab notebook in: demo.ipynb.
  • 20 April 2022: I replaced the landmarks association file with a new one which gives better reconstruction, especially on face contours. Please pull.
  • 20 April 2022: I tried NextFace on a challenging face and, surprisingly, we still get an appealing reconstruction; check below:

Features:

  • Reconstructs faces at high fidelity from single or multiple RGB images
  • Estimates face geometry
  • Estimates detailed face reflectance (diffuse, specular and roughness)
  • Estimates scene light with spherical harmonics
  • Estimates head pose and orientation
  • Runs on both CPU and CUDA-enabled GPUs

Installation

  • Clone the repository
  • Execute the commands in the 'INSTALL' file. These commands create a new conda environment called faceNext and install the required packages. An 'environment.yml' file is also provided. The library was tested with torch 1.3.1, torchvision 0.4.2 and CUDA toolkit 10.1, but it should also work with more recent PyTorch versions.
  • Activate the environment: conda activate nextFace
  • Download the Basel face model from here: just fill the form and you will receive an instant direct download link in your inbox. Download the model2017-1_face12_nomouth.h5 file and put it inside the ./baselMorphableModel directory
  • Download the albedo face model albedoModel2020_face12_albedoPart.h5 from here and put it inside the ./baselMorphableModel directory
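
To verify the installation, a quick sanity check like the one below can be run inside the activated environment. This snippet is illustrative and not part of the repository; pyredner is the redner ray tracer that NextFace uses:

    # sanity check: the two core dependencies import, and CUDA is visible (optional)
    import torch
    import pyredner  # fails here if redner is not installed correctly

    print('torch version:', torch.__version__)
    print('cuda available:', torch.cuda.is_available())  # NextFace also runs on CPU, just slower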

How to use

Reconstruction from a single image

  • To reconstruct a face from a single image, run the following command:
    • python optimizer.py --input path-to-your-input-image --output output-path-where-to-save-results

Reconstruction from multiple images (batch reconstruction)

  • If you have multiple images with the same resolution, you can run a batch optimization on them. To do so, put all your images in the same directory and run the following command:
    • python optimizer.py --input path-to-your-folder-that-contains-all-ur-images --output output-path-where-to-save-results

Reconstruction from multiple images of the same person

  • If you have multiple images of the same person, put them in the same folder and run the following command:

    • python optimizer.py --sharedIdentity --input path-to-your-folder-that-contains-all-ur-images --output output-path-where-to-save-results

    The sharedIdentity flag tells the optimizer that all images belong to the same person. In that case, the shape identity and face reflectance attributes are shared across all images, which generally produces better face reflectance and geometry estimation.

Configuring NextFace

  • The file optimConfig.ini allows you to control different aspects of NextFace, such as:
    • optimization (regularizations, number of iterations...)
    • compute device (run on cpu or gpu)
    • spherical harmonics (number of bands, environment map resolution)
    • ray tracing (number of samples)
  • The code is self-documented and easy to follow
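
For illustration, an excerpt of such a configuration could look like the following. Only keys mentioned elsewhere in this README are shown, the values are placeholders, and the shipped optimConfig.ini is the authoritative reference:

    ; illustrative excerpt, not the shipped file
    maxResolution = 256                  ; input images larger than this are downscaled
    textureResolution = 256              ; 1024/2048 require the extra uv pickles (see News)
    lamdmarksDetectorType = 'mediapipe'  ; or 'fan'
    smoothSh = False                     ; smooth the estimated light map when exporting it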

Output

The optimization takes 4 to 5 minutes depending on your GPU's performance. The output of the optimization is the following:

  • render_{imageIndex}.png: contains from left to right: input image, overlay of the final reconstruction on the input image, the final reconstruction, diffuse, specular and roughness maps projected on the face.
  • diffuseMap_{imageIndex}.png: the estimated diffuse map in uv space
  • specularMap_{imageIndex}.png: the estimated specular map in uv space
  • roughnessMap_{imageIndex}.png: the estimated roughness map in uv space
  • mesh{imageIndex}.obj: an obj file that contains the 3D mesh of the reconstructed face
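
The maps are regular image files, so they can be inspected or post-processed outside NextFace. A minimal sketch, assuming Pillow is available and using the file names from the list above for image index 0 in an 'output' directory:

    from PIL import Image

    # load the uv-space maps produced by the optimization
    diffuse = Image.open('output/diffuseMap_0.png')
    specular = Image.open('output/specularMap_0.png')
    roughness = Image.open('output/roughnessMap_0.png')
    print(diffuse.size, specular.size, roughness.size)  # depends on textureResolution in optimConfig.ini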

How it works

NextFace reproduces the optimization strategy of our early work. The optimization is composed of three stages:

  • Stage 1 (coarse stage): the face expression and head pose are estimated by minimizing the geometric loss between the 2D landmarks and their corresponding face vertices. This produces a good starting point for the next optimization stage.
  • Stage 2: the face shape identity/expression, statistical diffuse and specular albedos, head pose and scene light are estimated by minimizing the photo-consistency loss between the ray-traced image and the real one.
  • Stage 3: to improve the statistical albedos estimated in the previous stage, the method optimizes the previously estimated albedos on a per-pixel basis and tries to capture more albedo details. Consistency, symmetry and smoothness regularizers (similar to this work) are used to avoid overfitting and to add robustness against lighting conditions.
    By default, the method uses 9 spherical harmonics bands (as in this work) to capture scene light. You can modify the number of bands in optimConfig.ini and see the importance of a high number of bands for better shadow recovery; the sketch below shows how the coefficient count grows with the number of bands.
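
For N spherical harmonics bands there are N^2 coefficients per color channel, which is why more bands can encode sharper lighting and shadows. A tiny illustrative sketch of this count (not code from the repository):

    def num_sh_coefficients(bands):
        # band l contributes 2*l + 1 coefficients, so the total is bands**2
        return sum(2 * l + 1 for l in range(bands))

    print(num_sh_coefficients(9))  # 81 coefficients per color channel for 9 bands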

Good practice for best reconstruction

  • To obtain the best reconstruction with optimal albedos, ensure that the images are taken in good lighting conditions (well lit, without shadows...).
  • In the case of a single input image, ensure that the face is frontal, to reconstruct complete diffuse/specular/roughness maps, as the method recovers only the visible parts of the face.
  • Avoid extreme facial expressions, as the underlying model may fail to recover them.

Limitations

  • The method relies on landmarks to initialize the optimization (stage 1). If these landmarks are inaccurate, you may get a sub-optimal reconstruction. NextFace uses landmarks from face_alignment, which are robust against extreme poses but not as accurate as they could be. This limitation has been discussed here and here. Using this landmark detector from Microsoft seems promising.
  • NextFace is slow, and execution speed decreases with the size of the input image. If you are running an old GPU (like me), you can decrease the resolution of the input image by reducing the value of the maxResolution parameter in the optimConfig.ini file. Our recent work solves this and achieves near real-time performance using a deep convolutional neural network.
  • NextFace cannot capture fine geometric details (wrinkles, pores...); these details may get baked into the final albedos. Our recent work captures fine-scale geometric details.
  • The spherical harmonics can only model lights at infinity, so under strong directional shadows the estimated light may not be as accurate as it could be, and residual shadows may appear in the estimated albedos. You can attenuate this by increasing the values of the regularizers in the optimConfig.ini file, but this trades off albedo details. The values to modify are:
    • for diffuse map: weightDiffuseSymmetryReg and weightDiffuseConsistencyReg,
    • for specular map: weightSpecularSymmetryReg, weightSpecularConsistencyReg
    • for roughness map: weightRoughnessSymmetryReg and weightRoughnessConsistencyReg
    I also provide a configuration file named optimConfigShadows.ini which has higher values for these regularizers that you can try; an illustrative excerpt follows this list.
  • Using a single image to estimate face attributes is an ill-posed problem, and the estimated reflectance maps (diffuse, specular and roughness) are view/camera dependent. To obtain intrinsic reflectance maps, you have to use multiple images per subject.
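
As an illustration, a shadow-robust configuration raises these regularizers along the following lines. The key names are the ones listed above; the values are hypothetical placeholders, and the shipped optimConfigShadows.ini is authoritative:

    ; hypothetical excerpt, values for illustration only
    weightDiffuseSymmetryReg = 500.0     ; raised relative to the optimConfig.ini defaults
    weightDiffuseConsistencyReg = 500.0
    ; ... raise the specular and roughness weights listed above in the same way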

Roadmap

If I have time:

  • Expression tracking from video by optimizing head pose and expression on a per-frame basis, which is straightforward once you have estimated the intrinsic face parameters (reflectance and geometry). I have not implemented it yet simply because I am running an old GPU (GTX 970M). I may add this feature when I decide to buy an RTX :)
  • Add a virtual light stage as proposed in this work to model high-frequency point lights.
  • Add support for the FLAME morphable model. You are welcome to help.
  • Add a GUI for loading images, editing landmarks, running the optimization and visualizing results.

License

NextFace is available for free, under the GPL license, for research and educational purposes only. Please check the LICENSE file.

Acknowledgements

The uv map is taken from here, and the landmarks association file from here. redner is used for ray tracing; the albedo model is from here.

Contact

mail: deeb.abdallah @at gmail

twitter: abdallah_dib

Citation

If you use NextFace and find it useful in your work, these works are relevant to you:

@inproceedings{dib2021practical,
  title={Practical face reconstruction via differentiable ray tracing},
  author={Dib, Abdallah and Bharaj, Gaurav and Ahn, Junghyun and Th{\'e}bault, C{\'e}dric and Gosselin, Philippe and Romeo, Marco and Chevallier, Louis},
  booktitle={Computer Graphics Forum},
  volume={40},
  number={2},
  pages={153--164},
  year={2021},
  organization={Wiley Online Library}
}

@inproceedings{dib2021towards,
  title={Towards High Fidelity Monocular Face Reconstruction with Rich Reflectance using Self-supervised Learning and Ray Tracing},
  author={Dib, Abdallah and Thebault, Cedric and Ahn, Junghyun and Gosselin, Philippe-Henri and Theobalt, Christian and Chevallier, Louis},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={12819--12829},
  year={2021}
}

@article{dib2022s2f2,
  title={S2F2: Self-Supervised High Fidelity Face Reconstruction from Monocular Image},
  author={Dib, Abdallah and Ahn, Junghyun and Thebault, Cedric and Gosselin, Philippe-Henri and Chevallier, Louis},
  journal={arXiv preprint arXiv:2203.07732},
  year={2022}
}

nextface's Issues

Google Colab error

Image of the error:

The error occurs on the optimizer creation step; it says to "Download "albedoModel2020_face12_albedoPart.h5", and put it inside /content/NextFace/baselMorphableModel/ and run again." But I've already done that, as shown in the image.

Aborted in 2080Ti

Hello,
I have this problem when I run the code, and I want to ask for your help.

detecting landmarks...
init camera pose...
1/3 => Optimizing head pose and expressions using landmarks...
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2000/2000 [00:41<00:00, 47.85it/s]
2/3 => Optimizing shape, statistical albedos, expression, head pose and scene light...
0%| | 0/401 [00:00<?, ?
0%|▎ | 1/401 [00:29<3:16:12,
terminate called after throwing an instance of 'thrust::system::system_error'
what(): device free failed: an illegal memory access was encountered
run.sh: line 4: 14367 Aborted (core dumped) python optimizer.py --sharedIdentity --input /image

I am running the code with a GeForce RTX 2080 Ti GPU, python==3.6.7, torch==1.9.1, and cuda toolkit==10.2.
Thanks a lot.

Visualization fail

Hi, abdallahdib,

  1. I ran your code, and the visualization of envMaps is black:
    envMap_0

  2. the visualization of
    renderAll = torch.cat([
        torch.cat([inputTensor[i], torch.ones_like(images[i])[..., 3:]], dim = -1),
        torch.cat([overlay.to(self.device), torch.ones_like(images[i])[..., 3:]], dim = -1),
        images[i],
        # illum[i],
        diffuseAlbedo[self.getTextureIndex(i)],
        specularAlbedo[self.getTextureIndex(i)],
        roughnessAlbedo[self.getTextureIndex(i)]], dim=1)
    is transparent as below.
    render_0

The loss figure is here,
stage1_loss

Do you have further suggestions? The overlay rendering result seems right.

RuntimeError: inverse: LAPACK library not found in compilation

Little help, please. I am getting below error.

(nextface) E:\GITHUB\NEXTFACE\NextFace>python optimizer.py --sharedIdentity --input input/s2.png --output output/
loading optim config from: ./optimConfig.ini
[WARN] no cuda enabled device found. switching to cpu...
Loading Basel Face Model 2017 from ./baselMorphableModel/morphableModel-2017.pickle...
loading mesh normals...
loading uv parametrization...
loading landmarks association file...
creating sampler...
C:\Anaconda\envs\nextface\lib\site-packages\torch\functional.py:478: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at C:\b\abs_f0dma8qm3d\croot\pytorch_1669187301762\work\aten\src\ATen\native\TensorShape.cpp:2895.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
loading image from path: input/s2.png
detecting landmarks using: mediapipe
E:\GITHUB\NEXTFACE\NextFace\landmarksmediapipe.py:55: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at C:\b\abs_f0dma8qm3d\croot\pytorch_1669187301762\work\torch\csrc\utils\tensor_new.cpp:204.)
return torch.tensor(landmarks, device = self.device)
init camera pose...
1/3 => Optimizing head pose and expressions using landmarks...
100%|██████████████████████████████████████████████████████████████████████████████████| 2000/2000 [01:36<00:00, 20.71it/s]
2/3 => Optimizing shape, statistical albedos, expression, head pose and scene light...
0%| | 0/401 [00:00<?, ?it/s]
Traceback (most recent call last):
File "optimizer.py", line 486, in
optimizer.run(inputDir,
File "optimizer.py", line 434, in run
self.runStep2()
File "optimizer.py", line 242, in runStep2
images = self.pipeline.render(cameraVerts, diffuseTextures, specularTextures)
File "E:\GITHUB\NEXTFACE\NextFace\pipeline.py", line 134, in render
scenes = self.renderer.buildScenes(cameraVerts, self.faces32, normals, self.uvMap, diffuseTextures,
File "E:\GITHUB\NEXTFACE\NextFace\renderer.py", line 126, in buildScenes
cam = self.setupCamera(focal[i], self.screenWidth, self.screenHeight)
File "E:\GITHUB\NEXTFACE\NextFace\renderer.py", line 81, in setupCamera
cam = pyredner.Camera(
File "C:\Anaconda\envs\nextface\lib\site-packages\pyredner\camera.py", line 118, in init
self.intrinsic_mat_inv = torch.inverse(self.intrinsic_mat).contiguous()
RuntimeError: inverse: LAPACK library not found in compilation

The rendering results have noises

Hello, thanks for your wonderful work! I have tried to render the image with ray tracing, but it seems to have lots of noise. Do you have any idea how to denoise the images? Thank you again!

I tried DeepNextFace myself, but landmark loss decreases slowly.

Hey there, thanks for your impressive work!
I tried NextFace first and have now implemented DeepNextFace myself, using the default resnet152 in pytorch (pretrained on ImageNet), but during step 1 the landmark loss decreased quite slowly (the landmark loss is the same as in NextFace), ending up at about 3000.
I think it may be related to slow fitting of the focal and camera position params. I couldn't find more details about the training strategy in your paper, including the appendix; can you please help?

Stuck on Step #2 of 3

I installed everything correctly using the INSTALL file and, when testing it, step 1 works fine, but then it gets stuck on step 2.
It says 0/401 and unknown iterations per second.
How long should I wait to see if it eventually ends?
I'm on a 3060 Ti. (I also tried with CPU; it does the same thing.)
Thanks!

About smooth SH

Hi! Thanks for your code. I have some questions about smoothing SH coeffs. I notice that during optimization there is no smoothSH operation because the default value of smooth is set to False in the toEnvMap function.

envMaps = self.sh.toEnvMap(self.vShCoeffs)

def toEnvMap(self, shCoeffs, smooth = False):

But after optimization, the SH can be smoothed as specified by smoothSH operation in optimConfig.ini:

envMaps = self.pipeline.sh.toEnvMap(self.pipeline.vShCoeffs, self.config.smoothSh) #smooth

What would the results be if the SH coeffs were smoothed during optimization?

No module named 'pyredner'

Dear brother Abdallahdib,
First of all, your code is really helpful.
I am facing the issue "No module named 'pyredner'" when running the code on Colab.
Could you please advise on this issue?
image

.obj file is not rendered in Blender

Hi,
Thank you for your effort. I am trying to import the obj file into Blender, but I can't see the model rendered, although I see the mesh file imported.
Have you tried it in Blender? Is it expected to work in Blender?
Thank you again

Cannot run demo Colab notebook due to missing normals.pickle file

Hi, I've downloaded all the files for the BFM 2017 model as well as the albedo .h5 file. When I try loading the model, it gives me an error message that I am missing 'normals.pickle'. Where can I find this file? There seems to be no mention of it anywhere in the repo. Any help is much appreciated!

No face was found in this image

Dear all,
NextFace complained that it cannot find any face in the image. I tried saving the image as jpg or png but it didn't work. I even tried
using your image from GitHub, but without any luck. Sorry, it looks like a stupid question, but I just don't know how to solve it.

(faceNext) C:\Users\kenmax\NextFace>python optimizer.py --input c:\Users\kenmax --output c:\Users\kenmax\hikari
loading optim config from: ./optimConfig.ini
Loading Basel Face Model 2017 from ./baselMorphableModel/morphableModel-2017.pickle...
loading mesh normals...
loading uv parametrization...
loading landmarks association file...
creating sampler...
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
loading images from path: c:\Users\kenmax
loading image from path: c:\Users\kenmax/hikari03.png
[INFO] resizing input image to fit: 256 px resolution...
detecting landmarks using: mediapipe
Traceback (most recent call last):
File "optimizer.py", line 491, in
doStep3= doStep3)
File "optimizer.py", line 420, in run
self.setImage(imagePathOrDir, sharedIdentity)
File "optimizer.py", line 128, in setImage
landmarks = self.landmarksDetector.detect(self.inputImage.tensor)
File "C:\Users\kenmax\NextFace\landmarksmediapipe.py", line 51, in detect
land = self._detect((images[i].detach().cpu().numpy() * 255.0).astype('uint8'))
File "C:\Users\kenmax\NextFace\landmarksmediapipe.py", line 71, in _detect
raise RuntimeError('No face was found in this image')
RuntimeError: No face was found in this image

(faceNext) C:\Users\kenmax\NextFace>

Openssl

I am seeing the message "'https' is not recognized".
Can you help me solve this?

about UV params

Dear author,
I'm new to CG. Can you please explain how to calculate the uvParams, and each param's meaning, like 'uvFaces', 'uvMapFaces', 'uvValidUVMap', 'uvXYMap' and 'uvVertices'?
Thanks a lot!

BRDF lighting

Hi abdallah,

The NextFace results are amazing, but I have met some problems with the BRDF formulation.
In the paper, you say the roughness is static, and I see you set it to 0.4:

self.vRoughness = 0.4 * torch.ones([nShape, texRes, texRes, 1], dtype=torch.float32, device=self.device)

May I know why you do this, and why it works?

Best
Jiaxiang

add web demo/model to Huggingface

Hi, would you be interested in adding NextFace to Hugging Face? The Hub offers free hosting, and it would make your work more accessible and visible to the rest of the ML community. Models/datasets/Spaces (web demos) can be added to a user account or organization, similar to GitHub.

Example from other organizations:
Keras: https://huggingface.co/keras-io
Microsoft: https://huggingface.co/microsoft
Facebook: https://huggingface.co/facebook

Example spaces with repos:
github: https://github.com/salesforce/BLIP
Spaces: https://huggingface.co/spaces/salesforce/BLIP

github: https://github.com/facebookresearch/omnivore
Spaces: https://huggingface.co/spaces/akhaliq/omnivore

and here are guides for adding spaces/models/datasets to your org

How to add a Space: https://huggingface.co/blog/gradio-spaces
how to add models: https://huggingface.co/docs/hub/adding-a-model
uploading a dataset: https://huggingface.co/docs/datasets/upload_dataset.html

Please let us know if you would be interested and if you have any questions, we can also help with the technical implementation.

RuntimeError: tabulate: failed to synchronize: cudaErrorAssert: device-side assert triggered

python optimizer.py --input /home/dingwuyou/Downloads/83aedae4baa21cd2868437d5ceee8e22.jpeg --output persion1
loading optim config from: ./optimConfig.ini
Loading Basel Face Model 2017 from ./baselMorphableModel/morphableModel-2017.pickle...
loading mesh normals...
loading uv parametrization...
loading landmarks association file...
creating sampler...
loading image from path: /home/dingwuyou/Downloads/83aedae4baa21cd2868437d5ceee8e22.jpeg
[INFO] resizing input image to fit: 256 px resolution...
detecting landmarks...
init camera pose...
1/3 => Optimizing head pose and expressions using landmarks...
100%|█████████████████████| 2000/2000 [00:04<00:00, 431.23it/s]
2/3 => Optimizing shape, statistical albedos, expression, head pose and scene light...
0%| | 0/401 [00:00<?, ?it/s]
256
torch.Size([1, 512, 512, 3])
/opt/conda/conda-bld/pytorch_1573049304260/work/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda ->auto::operator()(int)->auto: block: [1440,0,0], thread: [96,0,0] Assertion index >= -sizes[i] && index < sizes[i] && "index out of bounds" failed.
/opt/conda/conda-bld/pytorch_1573049304260/work/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda ->auto::operator()(int)->auto: block: [1444,0,0], thread: [32,0,0] Assertion index >= -sizes[i] && index < sizes[i] && "index out of bounds" failed.
[... the same assertion is repeated for many more blocks and threads ...]
0%| | 0/401 [00:00<?, ?it/s]
Traceback (most recent call last):
File "optimizer.py", line 467, in
doStep3= doStep3)
File "optimizer.py", line 419, in run
self.runStep2()
File "optimizer.py", line 231, in runStep2
specularTextures = self.pipeline.morphableModel.generateTextureFromAlbedo(specAlbedo)
File "/home/dingwuyou/Projects/NextFace/morphablemodel.py", line 169, in generateTextureFromAlbedo
neighboors = torch.arange(self.faces.shape[-1], dtype = torch.int64, device = self.faces.device)
RuntimeError: tabulate: failed to synchronize: cudaErrorAssert: device-side assert triggered

How to get the mesh as reported in your paper

Hi, thanks for your great work!
I've run the following command for s3.png and s4.png, and got some weird results. Can you tell me how to get the same results as reported in your paper? Many thanks!

python optimizer.py --input ./input/s3.png --output ./output
loading optim config from: ./optimConfig.ini
Loading Basel Face Model 2017 from ./baselMorphableModel/morphableModel-2017.pickle...
loading mesh normals...
loading uv parametrization...
loading landmarks association file...
creating sampler...
loading image from path: ./input/s3.png
detecting landmarks...
init camera pose...
1/3 => Optimizing head pose and expressions using landmarks...
100%|█████████████████████████████████████████████████████████████████████████████| 2000/2000 [00:06<00:00, 301.50it/s]
2/3 => Optimizing shape, statistical albedos, expression, head pose and scene light...
100%|████████████████████████████████████████████████████████████████████████████████| 401/401 [02:28<00:00, 2.69it/s]
3/3 => finetuning albedos, shape, expression, head pose and scene light...
100%|████████████████████████████████████████████████████████████████████████████████| 101/101 [00:39<00:00, 2.59it/s]
took 3.25 minutes to optimize
saving to: ' ./output/s3.png/ '. hold on...
diffuseAlbedo.shape[0] = 1

And here are my results. As you can see, both subjects have Asian faces and the bridges of their noses are not very high, but the results all look like European or American faces.
image

Reproducing the paper results

Hi Abdallah,

Thanks for open-sourcing your amazing work. I am currently trying to replicate results from "Practical Face Reconstruction via Differentiable Ray Tracing". The teaser image on GitHub (I get the same result when I run the code) is different from the teaser in the paper. In particular, there is a lot of shading baked into the albedo map and possibly poor separation between the diffuse and specular/roughness maps. Are there any settings that can be changed to get closer to the results from the paper?

Output File Usage

Thank you for your great job.

Can we use the output files to make a faceswap video?
We anticipate that we will need a technique to detect faces in a video or in images showing the target person, and to apply a mask processed from an obj file.

Please forgive me for asking this question without knowing how to handle obj files.

How to get uvParametrization.*.pickle

Thanks for your great work. I want to know the process of calculating uvParametrization.pickle, in order to replace the BFM model with FLAME.
Looking forward to your detailed answer.

Is this a typo??

image

I never knew there was a lib called pyredner, just pyrender.
If it is a typo, how are you able to run your code? And if it is a typo, how did you make it twice?

Very, very strange...

how to convert texture to vertices?

hi there, it's me again.
Using generateTextureFromAlbedo in morphablemodel.py, one can convert vertex colors to uv textures. How can I invert this process to convert textures back to vertex colors?
thanks!

How to make merged texture? (reconstruction result like paper)

Hi, thank you for the nice work.
It's great research for making high-resolution face textures.
In my case, the result set (diffuseMap, roughnessMap, specularMap, envMap) is difficult to use (compared with other projects, which produce an .obj and a texture image only).
I know these are more detailed features, but I'm using pytorch3d to visualize the 3D obj, and there is no option to use these.
Is there any solution to merge these details using Python?
I want to get the reconstructed .obj result without 3D tools (Unreal, Unity, Blender, etc.).

step2 core dumped

2/3 => Optimizing shape, statistical albedos, expression, head pose and scene light...
0%| | 0/401 [00:00<?, ?it/s]python3: /tmp/pip-req-build-4hdcr8rl/src/scene.cpp:125: Scene::Scene(const Camera&, const std::vector<const Shape*>&, const std::vector<const Material*>&, const std::vector<const AreaLight*>&, const std::shared_ptr&, bool, int, bool, bool): Assertion `false' failed.
Aborted (core dumped)

The process core-dumps when executing step 2. Has anyone else run into this issue?

Render resolution is 512

I notice that the render and debug outputs all have resolution 512; does that mean the per-pixel stage is based on this 512 image? I just tried the 2048 uv map and the output feels blurry. Should I increase maxRes in image.py? I have changed all 256/512 values to 2048, but I am not sure how that will play out.

detecting landmarks froze after using --sharedIdentity flag

Hi, thanks for your great work!
One question: I was testing the code, and I've noticed that if you just finished a single image and want to test another one, it freezes when it reaches "detecting landmarks..." and stays there forever. I have to kill the console and restart it for the second image. I am using Win10. Any suggestions? Thanks!!!

Error

2/3 => Optimizing shape, statistical albedos, expression, head pose and scene light...
0%| | 0/401 [00:00<?, ?it/s]
Traceback (most recent call last):
File "optimizer.py", line 462, in
optimizer.run(inputDir,
File "optimizer.py", line 419, in run
self.runStep2()
File "optimizer.py", line 233, in runStep2
images = self.pipeline.render(cameraVerts, diffuseTextures, specularTextures)
File "D:\NextFace\pipeline.py", line 124, in render
scenes = self.renderer.buildScenes(cameraVerts, self.faces32, normals, self.uvMap, diffuseTextures,
File "D:\NextFace\renderer.py", line 127, in buildScenes
scene = pyredner.Scene(cam, materials=[mat], objects=[obj], envmap=pyredner.EnvironmentMap(envMap[i]))
File "D:\anaconda3\envs\faceNext\lib\site-packages\pyredner\scene.py", line 56, in init
shape = pyredner.Shape(vertices = obj.vertices,
File "D:\anaconda3\envs\faceNext\lib\site-packages\pyredner\shape.py", line 371, in init
assert(indices.is_contiguous())
AssertionError
Python 3.8, CUDA 11.1

Colab Error (RuntimeError: CUDA error: device-side assert triggered)

image

I can't seem to find a solution to this, and it's getting pretty late :')
I tried adding os.environ['CUDA_LAUNCH_BLOCKING'] = "1" but it didn't help. Got any ideas? Thanks ahead of time :))

Edit:
I added os.environ['CUDA_LAUNCH_BLOCKING'] = "1" to the optimizer and morphablemodel classes; no diff.

Fix texture map ?

After exporting the mesh and texture maps, can we edit the texture and then run the program with the edited texture held fixed?

how can I combine the *.obj and *.mtl files?

Hi! Thanks for sharing your code.
I've tried the Colab demo file that you uploaded and got mesh0.obj and material0.mtl.
When I open the obj file, I get a result where only the mesh is shown.
How do I get a result that shows the color of the subject, not just the mesh? I think I should combine the obj file and the mtl file, but I don't know what to do.
I need your help! Please understand that my English may not be smooth, as I am using a translator.
image

Why is the diffuse map blurry?

Screen Shot 2022-05-23 at 20 12 04
Hi, I have tried many images, but the diffuse map around the eyes does not fit the 3D model and is blurry. Could you explain this? Thanks very much.

using more images for the same face, higher res texture

Hey, I tested it; it's OK, but not much better than other 3D avatar apps out there that can also take side views.
Is it planned to use additional side views for mesh generation? From what I see, the same male mesh is used for all models and the mesh itself is only slightly adapted, so in the end it's almost like slapping a photo onto a base mesh without artifacts; the side view is the same for all people.
It also takes a while on a 1080 Ti. It would be nice if it could retain the colours of the original image and produce a material texture of 1024; let it compute the mesh at 256, but the texture should be higher res.
Also, there's an issue with landmarks: when the face is wider than the base mesh, the landmarks place the dots on the cheek, not on the border of the face... maybe because it's not a perfect front view.

Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.

Hi, thanks for sharing your work!

I've come across a couple of problems.
Firstly, I'm trying to run this on Python 3.9 and, as you may know, redner doesn't have a build for that version yet. I've worked around the problem by removing the imports and functions that call the library (super dirty, I know).

But now I'm facing this issue in step 2 of 3 and can't tell where the problem originates

2/3 => Optimizing shape, statistical albedos, expression, head pose and scene light...
0%| | 0/401 [00:00<?, ?it/s]
C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\cuda\IndexKernel.cu:91: block: [5563,0,0], thread: [64,0,0] Assertion index >= -sizes[i] && index < sizes[i] && "index out of bounds" failed.
[... the same assertion is repeated for several more threads ...]
0%| | 0/401 [00:00<?, ?it/s]
Traceback (most recent call last):
File "B:\NextFace\optimizer.py", line 468, in <module>
optimizer.run(inputDir,
File "B:\NextFace\optimizer.py", line 425, in run
self.runStep2()
File "B:\NextFace\optimizer.py", line 233, in runStep2
images = self.pipeline.render(cameraVerts, diffuseTextures, specularTextures)
File "B:\NextFace\pipeline.py", line 114, in render
envMaps = self.sh.toEnvMap(self.vShCoeffs)
File "B:\NextFace\sphericalharmonics.py", line 81, in toEnvMap
envMaps = torch.zeros([shCoeffs.shape[0], self.resolution[0], self.resolution[1], 3]).to(shCoeffs.device)
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

I'm on Windows 10, using an Nvidia RTX2070s and tried with texture size 512, 1024 and 2048 all with the same result.
I've read another issue with similar error messages where the problem was that the wrong Basel model was being used, but I've triple-checked, and now I am not sure where to look. Could it be because of the redner library? That seems unlikely to me.

Cheers!

PS The text is wonky because of the '' so here's an image
image
