monohair's Introduction

MonoHair: High-Fidelity Hair Modeling from a Monocular Video (CVPR 2024 Oral) [Project page]

This repository is the official code for MonoHair. Given a monocular video, MonoHair reconstructs a high-fidelity 3D strand model.

This repository also includes examples of reconstructing 3D hair from a monocular video or multi-view images.

  • We generate a 3D avatar using the FLAME template and fit the coarse FLAME geometry to multi-view images (only for real human captures); for more details please check DELTA.
  • For coarse geometry initialization, please check Instant-NGP.
  • For hair exterior geometry synthesis, we propose a patch-based multi-view optimization (PMVO) method; please check our paper.
  • For hair interior inference, please check DeepMVSHair.
  • For strand generation, please also check our paper.

Getting Started

Clone the repository and install requirements:

git clone https://github.com/KeyuWu-CS/MonoHair.git --recursive
cd MonoHair
conda create -n MonoHair python==3.10.12
conda activate MonoHair
pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0 --extra-index-url https://download.pytorch.org/whl/cu113
pip install -r requirements.txt
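
After installing, a quick sanity check helps confirm that the pinned build and the CUDA 11.3 runtime are visible (a minimal sketch, not part of the repo):

import torch

# Check the pinned PyTorch build and CUDA visibility.
print(torch.__version__)     # expected: 1.11.0+cu113
print(torch.version.cuda)    # expected: 11.3
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
else:
    print('CUDA not visible; check your driver and CUDA 11.3 install')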

Dependencies and submodules

Install PyTorch, PyTorch3D and tiny-cuda-nn. We have tested on Ubuntu 22.04.4 with gcc==9.5, python==3.10.12, pytorch==1.11.0 and pytorch3d==0.7.2, using CUDA 11.3 on an RTX 3090 Ti. You can install any versions that are compatible with these dependencies. Note that torch==1.13.0 is known to have bugs when running MODNet.

# If you have problems installing PyTorch3D, try installing fvcore first: pip install fvcore==0.1.5.post20220512
pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/py310_cu113_pyt1110/download.html

Initialize the submodules for Instant-NGP, MODNet, CDGNet, DELTA and face-parsing.

git submodule update --init --recursive
pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch

If you have problems installing tiny-cuda-nn or PyTorch3D, please refer to their repositories.
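
A small smoke test can confirm that both bindings import and run on the GPU (a sketch; the HashGrid settings below are generic tiny-cuda-nn values, not this repo's configuration):

import torch
import tinycudann as tcnn
from pytorch3d.ops import knn_points

# tiny-cuda-nn: build a small hash-grid encoding and run a forward pass.
enc = tcnn.Encoding(n_input_dims=3, encoding_config={
    'otype': 'HashGrid', 'n_levels': 8, 'n_features_per_level': 4,
    'log2_hashmap_size': 19, 'base_resolution': 16, 'per_level_scale': 2.0,
})
x = torch.rand(128, 3, device='cuda')
print(enc(x).shape)  # (128, 32): n_levels * n_features_per_level

# PyTorch3D: nearest neighbours between two random point clouds.
p1 = torch.rand(1, 64, 3, device='cuda')
p2 = torch.rand(1, 64, 3, device='cuda')
print(knn_points(p1, p2, K=1).dists.shape)  # (1, 64, 1)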

Compile Instant-NGP and copy our modified run.py to instant-ngp/scripts.

cp run.py submodules/instant-ngp/scripts/
cd submodules/instant-ngp
cmake . -B build -DCMAKE_BUILD_TYPE=RelWithDebInfo
cmake --build build --config RelWithDebInfo -j
cd ../..

If you have problems compiling Instant-NGP, please refer to their instructions.
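
If the build succeeded, the compiled Python bindings should be importable from the build directory (a quick check, assuming the default build path used above):

import sys

# Instant-NGP places its Python module (pyngp) in the CMake build directory.
sys.path.append('submodules/instant-ngp/build')
import pyngp as ngp  # fails here if the build did not complete

print(ngp.__file__)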

Download assets

Download the pretrained models for MODNet, CDGNet and face-parsing.

# Download pretrained models and data for avatar optimization.
# Download the pretrained CDGNet model from their repository; there are
# two versions of "LIP_epoch_149.pth", so download the one that is about 300 MB.
bash fetch_pretrained_model.sh
bash fetch_data.sh    # this will take a long time.

Download examples

Download our example data from OneDrive. To make the results reproducible, we have already run COLMAP and saved the pretrained Instant-NGP weights, so you only need to run the four steps below to get the results. We also provide the full results (including intermediate results) in the full folder; you can use it to check the output of each step.
Tip: since the wigs are captured on the same artificial human head, we do not use the FLAME (SMPL-X) model as a template and do not run multi-view bust fitting.
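
For reference, the per-case folder layout below is inferred from the config dumps and logs in the issues further down this page; treat it as a rough guide rather than an authoritative listing:

data/<case>/
  colmap/             # camera JSONs, transforms.json and the pretrained base.ingp
  ours/               # intermediate geometry: colmap_points.obj, scalp_tsfm.obj, Occ3D.mat, ...
  hair_mask/          # hair segmentation masks
  render_depth/       # rendered bust/hair depth maps
  output/10-16/full/  # final results, e.g. connected_strands.hair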

3D Hair Reconstruction

# Prepare data: Instant-NGP initialization, segmentation, Gabor filtering, etc. You can skip this step if you use our provided data.
python prepare_data.py --yaml=configs/reconstruct/big_wavy1 

# Hair exterior optimization
python PMVO.py --yaml=configs/reconstruct/big_wavy1

# Hair interior inference
python infer_inner.py --yaml=configs/reconstruct/big_wavy1

# Strand generation
python HairGrow.py --yaml=configs/reconstruct/big_wavy1
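
To run the whole pipeline unattended, the four steps can be chained with a small driver script (a sketch using subprocess; the script names and --yaml flag are exactly the ones above):

import subprocess
import sys

CASE = 'configs/reconstruct/big_wavy1'  # pick your case config

# Run the four reconstruction stages in order, stopping at the first failure.
for script in ('prepare_data.py', 'PMVO.py', 'infer_inner.py', 'HairGrow.py'):
    print(f'=== {script} ===')
    ret = subprocess.run([sys.executable, script, f'--yaml={CASE}'])
    if ret.returncode != 0:
        sys.exit(f'{script} failed with exit code {ret.returncode}')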

Visualization

Download our released program from OneDrive to visualize the results.

# First copy the output/10-16/full/connected_strands.hair to ../Ours/Voxel_hair
cp data/case_name/output/10-16/full/connected_strands.hair data/case_name/ours/Voxel_hair

# Unzip VoxelHair_demo_v3.zip and run VoxelHair_v1.exe.
# Then click "Load Strands" to visualize the results. You can also use Blender for realistic rendering.

Test your own data

In the given examples we skipped the steps of running COLMAP and training Instant-NGP. If you want to test your own captured videos, please refer to the following steps.
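
The repo's per-step instructions are not reproduced here, but a standard COLMAP sparse reconstruction over frames extracted from your video looks roughly like this (a generic sketch of the COLMAP CLI with placeholder paths; MonoHair's exact invocation may differ):

import os
import subprocess

frames = 'data/my_case/capture_images'   # extracted video frames (placeholder)
work = 'data/my_case/colmap'             # COLMAP workspace (placeholder)
os.makedirs(f'{work}/sparse', exist_ok=True)

# Standard COLMAP pipeline: features -> matches -> sparse reconstruction.
subprocess.run(['colmap', 'feature_extractor',
                '--database_path', f'{work}/database.db',
                '--image_path', frames], check=True)
subprocess.run(['colmap', 'exhaustive_matcher',
                '--database_path', f'{work}/database.db'], check=True)
subprocess.run(['colmap', 'mapper',
                '--database_path', f'{work}/database.db',
                '--image_path', frames,
                '--output_path', f'{work}/sparse'], check=True)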

Citation

@inproceedings{wu2024monohair,
  title={MonoHair: High-Fidelity Hair Modeling from a Monocular Video},
  author={Wu, Keyu and Yang, Lingchen and Kuang, Zhiyi and Feng, Yao and Han, Xutao and Shen, Yuefan and Fu, Hongbo and Zhou, Kun and Zheng, Youyi},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={24164--24173},
  year={2024}
}

Acknowledgments

Here are some great resources we benefited from:

TO DO List

  • Upload full example data (before June 24)
  • Check version problem (before June 24)
  • Release visualization program (before June 30)
  • Automatic method to add key_frame.json


monohair's Issues

python infer_inner.py --yaml=configs/reconstruct/big_wavy1 - ImportError: cannot import name 'egl' from 'glcontext'

Another Step error :)

generate segments...
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 281827/281827 [10:57<00:00, 428.35it/s]
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 281827/281827 [04:13<00:00, 1112.02it/s]
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 281827/281827 [03:32<00:00, 1325.13it/s]
done...
Traceback (most recent call last):
  File "C:\Users\Lauren\Documents\Source\MonoHair\infer_inner.py", line 71, in <module>
    render_data(camera, strands, vertices, faces, [1280, 720], os.path.join(args.data.root, 'imgs'))
  File "C:\Users\Lauren\Documents\Source\MonoHair\Utils\Render_utils.py", line 270, in render_data
    Render = Renderer(camera, Width=image_size[1], Height=image_size[0], Headless=True)
  File "C:\Users\Lauren\Documents\Source\MonoHair\Utils\Render_utils.py", line 217, in __init__
    self.ctx = moderngl.create_context(standalone=True, backend='egl', libgl='libGL.so.1',
  File "C:\Users\Lauren\miniconda3\envs\MonoHair\lib\site-packages\moderngl\__init__.py", line 1936, in create_context
    ctx.mglo, ctx.version_code = mgl.create_context(glversion=require, mode=mode, **settings)
  File "C:\Users\Lauren\miniconda3\envs\MonoHair\lib\site-packages\glcontext\__init__.py", line 49, in get_backend_by_name
    return egl()
  File "C:\Users\Lauren\miniconda3\envs\MonoHair\lib\site-packages\glcontext\__init__.py", line 106, in egl
    from glcontext import egl
ImportError: cannot import name 'egl' from 'glcontext' (C:\Users\Lauren\miniconda3\envs\MonoHair\lib\site-packages\glcontext\__init__.py)
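
This usually happens because the EGL backend of glcontext is only built on Linux; on Windows a headless context has to fall back to the default (WGL) backend. A minimal sketch of a platform guard (hypothetical; Render_utils.py itself hard-codes backend='egl'):

import sys
import moderngl

# EGL headless contexts are Linux-only; fall back to the default backend elsewhere.
if sys.platform.startswith('linux'):
    ctx = moderngl.create_context(standalone=True, backend='egl')
else:
    ctx = moderngl.create_context(standalone=True)
print(ctx.info['GL_RENDERER'])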

Unable to execute prepare_data.py

I tried to run the script but ran into the following error:

loading configs/reconstruct/base.yaml...
loading configs/reconstruct/short_curly1.yaml...

  • HairGenerate:
    • connect_dot_threshold: 0.8
    • connect_scalp: True
    • connect_segments: True
    • connect_threshold: 0.0025
    • connect_to_guide: None
    • dist_to_root: 6
    • generate_segments: True
    • grow_threshold: 0.85
    • out_ratio: 0.0
  • PMVO:
    • conf_threshold: 0.1
    • filter_point: True
    • genrate_ori_only: None
    • infer_inner: True
    • num_sample_per_grid: 6
    • optimize: True
    • patch_size: 5
    • threshold: 0.05
    • visible_threshold: 1
  • bbox_min: [-0.32, -0.32, -0.24]
  • bust_to_origin: [0.006, -1.644, 0.01]
  • camera_path: camera/calib_data/wky07-22/cam_params.json
  • check_strands: True
  • cpu: None
  • data:
    • Conf_path: conf
    • Occ3D_path: ours/Occ3D.mat
    • Ori2D_path: best_ori
    • Ori3D_path: ours/Ori3D.mat
    • bust_path: ours/bust_long_tsfm.obj
    • case: short_curly1
    • depth_path: render_depth
    • frame_interval: 7
    • image_size: [1080, 1920]
    • mask_path: hair_mask
    • raw_points_path: ours/colmap_points.obj
    • root: data
    • scalp_path: ours/scalp_tsfm.obj
    • strands_path: ours/world_str_raw.dat
  • device: cuda:0
  • gpu: 0
  • image_camera_path: ours/cam_params.json
  • infer_inner:
    • render_data: True
    • run_mvs: True
  • name: 10-16
  • ngp:
    • marching_cubes_density_thresh: 3.0
  • output_root: output
  • prepare_data:
    • fit_bust: None
    • process_bust: True
    • process_camera: True
    • process_imgs: True
    • render_depth: True
    • run_ngp: True
    • select_images: True
  • save_path: refine
  • scalp_diffusion: None
  • seed: 0
  • segment:
    • CDGNET_ckpt: assets/CDGNet/LIP_epoch_149.pth
    • MODNET_ckpt: assets/MODNet/modnet_photographic_portrait_matting.ckpt
    • scene_path: None
  • vsize: 0.005
  • yaml: configs/reconstruct/short_curly1
    existing options file found (identical)
    distance: 2.254131284488828
    distance: 2.254131284488828
    16:11:36 SUCCESS Initialized CUDA 11.5. Active GPU is #0: NVIDIA GeForce RTX 3090 [86]
    16:11:36 INFO Loading NeRF dataset from
    16:11:36 WARNING data/short_curly1/colmap/cam_params.json does not contain any frames. Skipping.
    16:11:36 INFO data/short_curly1/colmap/transforms.json
    16:11:36 WARNING data/short_curly1/colmap/base_cam.json does not contain any frames. Skipping.
    16:11:36 WARNING data/short_curly1/colmap/key_frame.json does not contain any frames. Skipping.
    16:11:36 WARNING data/short_curly1/colmap/base_transform.json does not contain any frames. Skipping.
    16:11:37 SUCCESS Loaded 409 images after 0s
    16:11:37 INFO cam_aabb=[min=[-0.820552,-0.696267,0.88122], max=[3.15477,1.71274,1.34521]]
    16:11:37 INFO Loading network snapshot from: data/short_curly1/colmap/base.ingp
    16:11:38 INFO GridEncoding: Nmin=16 b=3.28134 F=4 T=2^19 L=8
    16:11:38 INFO Density model: 3--[HashGrid]-->32--[FullyFusedMLP(neurons=64,layers=3)]-->1
    16:11:38 INFO Color model: 3--[Composite]-->16+16--[FullyFusedMLP(neurons=64,layers=4)]-->3
    16:11:38 INFO total_encoding_params=13194816 total_network_params=10240
    Screenshot transforms from data/short_curly1/colmap/base_transform.json
    Generating mesh via marching cubes and saving to data/short_curly1/colmap/base.obj. Resolution=[512,512,512], Density Threshold=3.0
    16:11:38 INFO #vertices=6031894 #triangles=11994368
    range(0, 16)
    rendering data/short_curly1/trainning_images/capture_images/000.png
    rendering data/short_curly1/trainning_images/capture_images/001.png
    rendering data/short_curly1/trainning_images/capture_images/002.png
    rendering data/short_curly1/trainning_images/capture_images/003.png
    rendering data/short_curly1/trainning_images/capture_images/004.png
    rendering data/short_curly1/trainning_images/capture_images/005.png
    rendering data/short_curly1/trainning_images/capture_images/006.png
    rendering data/short_curly1/trainning_images/capture_images/007.png
    rendering data/short_curly1/trainning_images/capture_images/008.png
    rendering data/short_curly1/trainning_images/capture_images/009.png
    rendering data/short_curly1/trainning_images/capture_images/010.png
    rendering data/short_curly1/trainning_images/capture_images/011.png
    rendering data/short_curly1/trainning_images/capture_images/012.png
    rendering data/short_curly1/trainning_images/capture_images/013.png
    rendering data/short_curly1/trainning_images/capture_images/014.png
    rendering data/short_curly1/trainning_images/capture_images/015.png
    unable to load materials from: material.mtl
    Start calculating masks!
    100%|
    Start calculating hair masks!
    Traceback (most recent call last):
      File "/home/sharma/MonoHair/prepare_data.py", line 182, in <module>
        calculate_mask(segment_args)
      File "/home/sharma/MonoHair/preprocess_capture_data/calc_masks.py", line 171, in calculate_mask
        for key, nkey in zip(state_dict_old.keys(), state_dict.keys()):
    RuntimeError: OrderedDict mutated during iteration
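
This error usually means the loop renames entries in the state dict while still iterating over its live key views. The usual fix is to snapshot the keys with list() first; a minimal sketch (remap_keys is hypothetical, mirroring calc_masks.py line 171):

from collections import OrderedDict

# Snapshot the key views before zipping, so mutating the dicts inside
# the loop cannot invalidate the iterators.
def remap_keys(state_dict_old, state_dict):
    new_sd = OrderedDict()
    for key, nkey in zip(list(state_dict_old.keys()), list(state_dict.keys())):
        new_sd[nkey] = state_dict_old[key]
    return new_sd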

Missing data/big_wavy1/ours/ Folder in data_processed Download Package

Thank you for sharing this awesome project.
However, I think there might be something missing to make it run correctly.
I already downloaded the data_processed folder, but there is no data/big_wavy1/ours/ folder.
Please check the following error message:

(MonoHair) me@ubuntu:/hair/MonoHair$ python PMVO.py --yaml=configs/reconstruct/big_wavy1
Run PMVO...
Process ID: 177603
setting configurations...
loading configs/reconstruct/base.yaml...
loading configs/reconstruct/big_wavy1.yaml...
* HairGenerate:
   * connect_dot_threshold: 0.8
   * connect_scalp: True
   * connect_segments: True
   * connect_threshold: 0.0025
   * connect_to_guide: None
   * dist_to_root: 6
   * generate_segments: True
   * grow_threshold: 0.85
   * out_ratio: 0.35
* PMVO:
   * conf_threshold: 0.15
   * filter_point: True
   * genrate_ori_only: None
   * infer_inner: True
   * num_sample_per_grid: 4
   * optimize: True
   * patch_size: 7
   * threshold: 0.025
   * visible_threshold: 1
* bbox_min: [-0.32, -0.32, -0.24]
* bust_to_origin: [0.006, -1.644, 0.01]
* camera_path: camera/calib_data/wky07-22/cam_params.json
* check_strands: True
* cpu: None
* data:
   * Conf_path: conf
   * Occ3D_path: ours/Occ3D.mat
   * Ori2D_path: best_ori
   * Ori3D_path: ours/Ori3D.mat
   * bust_path: Bust/bust_long.obj
   * case: big_wavy1
   * depth_path: render_depth
   * frame_interval: 7
   * image_size: [1920, 1080]
   * mask_path: hair_mask
   * raw_points_path: ours/colmap_points.obj
   * root: data
   * scalp_path: ours/scalp_tsfm.obj
   * strands_path: ours/world_str_raw.dat
* device: cuda:0
* gpu: 0
* image_camera_path: ours/cam_params.json
* infer_inner:
   * render_data: True
   * run_mvs: True
* name: 10-16
* ngp:
   * marching_cubes_density_thresh: 3.0
* output_root: output
* prepare_data:
   * fit_bust: None
   * process_bust: True
   * process_camera: True
   * process_imgs: True
   * render_depth: True
   * run_ngp: True
   * select_images: True
* save_path: refine
* scalp_diffusion: None
* seed: 0
* segment:
   * CDGNET_ckpt: assets/CDGNet/LIP_epoch_149.pth
   * MODNET_ckpt: assets/MODNet/modnet_photographic_portrait_matting.ckpt
   * scene_path: None
* vsize: 0.005
* yaml: configs/reconstruct/big_wavy1
existing options file found (identical)
unable to load materials from: ./bust_long_c.obj.mtl
[Open3D WARNING] Unable to load file data/big_wavy1/ours/scalp_tsfm.obj with ASSIMP
/home/users/me/miniconda3/envs/MonoHair/lib/python3.10/site-packages/numpy/core/fromnumeric.py:3432: RuntimeWarning: Mean of empty slice.
  return _methods._mean(a, axis=axis, dtype=dtype,
/home/users/me/miniconda3/envs/MonoHair/lib/python3.10/site-packages/numpy/core/_methods.py:182: RuntimeWarning: invalid value encountered in divide
  ret = um.true_divide(
Traceback (most recent call last):
  File "/home/users/me/w/hair/MonoHair/PMVO.py", line 820, in <module>
    scalp_max = np.max(scalp_vertices,axis=0)
  File "<__array_function__ internals>", line 180, in amax
  File "/home/users/me/miniconda3/envs/MonoHair/lib/python3.10/site-packages/numpy/core/fromnumeric.py", line 2793, in amax
    return _wrapreduction(a, np.maximum, 'max', axis, None, out,
  File "/home/users/me/miniconda3/envs/MonoHair/lib/python3.10/site-packages/numpy/core/fromnumeric.py", line 86, in _wrapreduction
    return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
ValueError: zero-size array to reduction operation maximum which has no identity
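
The warning above is the real cause: Open3D could not load scalp_tsfm.obj, so scalp_vertices is empty by the time np.max runs. A defensive check makes the failure clearer (a sketch, assuming Open3D loads the mesh as the warning suggests):

import numpy as np
import open3d as o3d

mesh = o3d.io.read_triangle_mesh('data/big_wavy1/ours/scalp_tsfm.obj')
scalp_vertices = np.asarray(mesh.vertices)
if scalp_vertices.size == 0:
    # An empty array here means the file is missing or unreadable.
    raise FileNotFoundError('scalp_tsfm.obj has no vertices; check that data/<case>/ours/ exists')
scalp_max = np.max(scalp_vertices, axis=0)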

Unable to load material

I found the following error while running prepare_data.py:

unable to load materials from: ./bust_long_c.obj.mtl
unable to load materials from: ./bust_long_c.obj.mtl
unable to load materials from: material.mtl
unable to load materials from: ./bust_long_c.obj.mtl
Traceback (most recent call last):
  File "/opt/data/private/chy/workspace/MonoHair/prepare_data.py", line 169, in <module>
    render_bust_hair_depth(os.path.join(root,'ours/colmap_points.obj'), camera_path, save_root, bust_path=bust_path, Headless=Headless)
  File "/opt/data/private/chy/workspace/MonoHair/Utils/Render_utils.py", line 321, in render_bust_hair_depth
    Render = Renderer(camera, Width=image_size[1], Height=image_size[0], Headless=Headless)
  File "/opt/data/private/chy/workspace/MonoHair/Utils/Render_utils.py", line 217, in __init__
    self.ctx = moderngl.create_context(standalone=True, backend='egl', libgl='libGL.so.1',
  File "/root/miniconda3/envs/MonoHair/lib/python3.10/site-packages/moderngl/__init__.py", line 1936, in create_context
    ctx.mglo, ctx.version_code = mgl.create_context(glversion=require, mode=mode, **settings)
  File "/root/miniconda3/envs/MonoHair/lib/python3.10/site-packages/glcontext/__init__.py", line 120, in create
    return egl.create_context(**kwargs)
Exception: requested device index 0, but found 0 devices

I thought it might be a lack of GPU, but I found nothing wrong there. So I think the error may come from bust_long_c.obj.mtl, which I could not find; there is only a similar bust_long.obj in the Bust directory. I followed the README to install the repository, but I only downloaded processed_data. Did I miss anything?

python prepare_data.py --yaml=configs/reconstruct/big_wavy1 crash

Hello, I am trying to execute the first step with:

  1. cp data_processed/big_wavy1 to data/big_wavy1
  2. python prepare_data.py --yaml=configs/reconstruct/big_wavy1
  3. setting configurations...
    loading configs/reconstruct/base.yaml...
    loading configs/reconstruct/big_wavy1.yaml...
  • HairGenerate:
    • connect_dot_threshold: 0.8
    • connect_scalp: True
    • connect_segments: True
    • connect_threshold: 0.0025
    • connect_to_guide: None
    • dist_to_root: 6
    • generate_segments: True
    • grow_threshold: 0.85
    • out_ratio: 0.35
  • PMVO:
    • conf_threshold: 0.15
    • filter_point: True
    • genrate_ori_only: None
    • infer_inner: True
    • num_sample_per_grid: 4
    • optimize: True
    • patch_size: 7
    • threshold: 0.025
    • visible_threshold: 1
  • bbox_min: [-0.32, -0.32, -0.24]
  • bust_to_origin: [0.006, -1.644, 0.01]
  • camera_path: camera/calib_data/wky07-22/cam_params.json
  • check_strands: True
  • cpu: None
  • data:
    • Conf_path: conf
    • Occ3D_path: ours/Occ3D.mat
    • Ori2D_path: best_ori
    • Ori3D_path: ours/Ori3D.mat
    • bust_path: ours/bust_long_tsfm.obj
    • case: big_wavy1
    • depth_path: render_depth
    • frame_interval: 7
    • image_size: [1920, 1080]
    • mask_path: hair_mask
    • raw_points_path: ours/colmap_points.obj
    • root: data
    • scalp_path: ours/scalp_tsfm.obj
    • strands_path: ours/world_str_raw.dat
  • device: cuda:0
  • gpu: 0
  • image_camera_path: ours/cam_params.json
  • infer_inner:
    • render_data: True
    • run_mvs: True
  • name: 10-16
  • ngp:
    • marching_cubes_density_thresh: 3.0
  • output_root: output
  • prepare_data:
    • fit_bust: None
    • process_bust: True
    • process_camera: True
    • process_imgs: True
    • render_depth: True
    • run_ngp: True
    • select_images: True
  • save_path: refine
  • scalp_diffusion: None
  • seed: 0
  • segment:
    • CDGNET_ckpt: assets/CDGNet/LIP_epoch_149.pth
    • MODNET_ckpt: assets/MODNet/modnet_photographic_portrait_matting.ckpt
    • scene_path: None
  • vsize: 0.005
  • yaml: configs/reconstruct/big_wavy1
    (creating new options file...)
    distance: 2.254131284488828
    distance: 2.2541312844888277
    16:49:16 SUCCESS Initialized CUDA 12.5. Active GPU is #0: NVIDIA GeForce RTX 4090 [89]
    16:49:16 INFO Loading NeRF dataset from
    16:49:16 WARNING data\big_wavy1\colmap\base_cam.json does not contain any frames. Skipping.
    16:49:16 WARNING data\big_wavy1\colmap\base_transform.json does not contain any frames. Skipping.
    16:49:16 WARNING data\big_wavy1\colmap\cam_params.json does not contain any frames. Skipping.
    16:49:16 WARNING data\big_wavy1\colmap\key_frame.json does not contain any frames. Skipping.
    16:49:16 INFO data\big_wavy1\colmap\transforms.json
    16:49:20 SUCCESS Loaded 1189 images after 3s
    16:49:20 INFO cam_aabb=[min=[-1.17578,-0.902506,-0.0814099], max=[1.78049,1.91326,1.84811]]
    16:49:22 INFO Loading network snapshot from: data\big_wavy1\colmap\base.ingp
    16:49:22 INFO GridEncoding: Nmin=16 b=2.43803 F=4 T=2^19 L=8
    16:49:22 INFO Density model: 3--[HashGrid]-->32--[FullyFusedMLP(neurons=64,layers=3)]-->1
    16:49:22 INFO Color model: 3--[Composite]-->16+16--[FullyFusedMLP(neurons=64,layers=4)]-->3
    16:49:22 INFO total_encoding_params=12855296 total_network_params=10240
    Screenshot transforms from data\big_wavy1\colmap/base_transform.json
    Generating mesh via marching cubes and saving to data\big_wavy1\colmap/base.obj. Resolution=[512,512,512], Density Threshold=3.0
    16:49:22 INFO #vertices=3666953 #triangles=7305214
    range(0, 16)
    rendering data\big_wavy1\trainning_images/capture_images\000.png
    rendering data\big_wavy1\trainning_images/capture_images\001.png
    rendering data\big_wavy1\trainning_images/capture_images\002.png
    rendering data\big_wavy1\trainning_images/capture_images\003.png
    rendering data\big_wavy1\trainning_images/capture_images\004.png
    rendering data\big_wavy1\trainning_images/capture_images\005.png
    rendering data\big_wavy1\trainning_images/capture_images\006.png
    rendering data\big_wavy1\trainning_images/capture_images\007.png
    rendering data\big_wavy1\trainning_images/capture_images\008.png
    rendering data\big_wavy1\trainning_images/capture_images\009.png
    rendering data\big_wavy1\trainning_images/capture_images\010.png
    rendering data\big_wavy1\trainning_images/capture_images\011.png
    rendering data\big_wavy1\trainning_images/capture_images\012.png
    rendering data\big_wavy1\trainning_images/capture_images\013.png
    rendering data\big_wavy1\trainning_images/capture_images\014.png
    rendering data\big_wavy1\trainning_images/capture_images\015.png
    unable to load materials from: ./bust_long_c.obj.mtl
    unable to load materials from: ./bust_long_c.obj.mtl
    unable to load materials from: material.mtl
    unable to load materials from: ./bust_long_c.obj.mtl
    Start calculating masks!
    100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 169/169 [00:10<00:00, 16.46it/s]
    Start calculating hair masks!
    Traceback (most recent call last):
      File "C:\Users\Lauren\Documents\Source\MonoHair\prepare_data.py", line 182, in <module>
        calculate_mask(segment_args)
      File "C:\Users\Lauren\Documents\Source\MonoHair\preprocess_capture_data\calc_masks.py", line 171, in calculate_mask
        for key, nkey in zip(state_dict_old.keys(), state_dict.keys()):
    RuntimeError: OrderedDict mutated during iteration

Proper parameter settings for a custom dataset

@KeyuWu-CS
Thank you for your previous response.
I have more questions about how to reproduce the quality of the results.

  1. From preprocessing to running the prepare_data.py script, there are many parameters that need to be set, e.g. for COLMAP and Instant-NGP. When using my own dataset, can you give any advice on which parameters to adjust to achieve quality similar to your results? (e.g., parameter settings for short male hair or long straight female hair)

  2. In addition, why should I train Instant-NGP for around 2-3 minutes and set the key frame to the front view during preprocessing? As far as I understand, there is already an Instant-NGP training step in your 4-step 3D Hair Reconstruction process.

Thanks a lot in advance for your answers! :)

ValueError occurs while executing the HairGrow.py step.

@KeyuWu-CS
I encountered a problem almost at the end of my process and need some assistance. 😭😭
While running the final command on my short male hair dataset, python HairGrow.py --yaml=configs/reconstruct/male_short, I encountered the following error. I tried to resolve it, but couldn't pinpoint the exact issue.
Do you have any suggestions or insights on what might be causing this problem?
Your advice would be greatly appreciated! Thank you! :)

Error log

setting configurations...
loading configs/reconstruct/base.yaml...
loading configs/reconstruct/test_maleB.yaml...
* HairGenerate:
   * connect_dot_threshold: 0.85
   * connect_scalp: True
   * connect_segments: True
   * connect_threshold: 0.005
   * connect_to_guide: None
   * dist_to_root: 6
   * generate_segments: True
   * grow_threshold: 0.9
   * out_ratio: 0.0
* PMVO:
   * conf_threshold: 0.1
   * filter_point: True
   * genrate_ori_only: None
   * infer_inner: True
   * num_sample_per_grid: 4
   * optimize: True
   * patch_size: 5
   * threshold: 0.05
   * visible_threshold: 1
* bbox_min: [-0.32, -0.32, -0.24]
* bust_to_origin: [0.006, -1.644, 0.01]
* camera_path: camera/calib_data/wky07-22/cam_params.json
* check_strands: True
* cpu: None
* data:
   * Conf_path: conf
   * Occ3D_path: ours/Occ3D.mat
   * Ori2D_path: best_ori
   * Ori3D_path: ours/Ori3D.mat
   * bust_path: Bust/bust_long.obj
   * case: test_maleB
   * conf_threshold: 0.4
   * depth_path: render_depth
   * frame_interval: 2
   * image_size: [1920, 1080]
   * mask_path: hair_mask
   * raw_points_path: ours/colmap_points.obj
   * root: data
   * scalp_path: ours/scalp_tsfm.obj
   * strands_path: ours/world_str_raw.dat
* device: cuda:0
* gpu: 0
* image_camera_path: ours/cam_params.json
* infer_inner:
   * render_data: True
   * run_mvs: True
* name: 10-16
* ngp:
   * marching_cubes_density_thresh: 2.5
* output_root: output
* prepare_data:
   * fit_bust: True
   * process_bust: True
   * process_camera: True
   * process_imgs: True
   * render_depth: True
   * run_ngp: True
   * select_images: True
* save_path: refine
* scalp_diffusion: None
* seed: 0
* segment:
   * CDGNET_ckpt: assets/CDGNet/LIP_epoch_149.pth
   * MODNET_ckpt: assets/MODNet/modnet_photographic_portrait_matting.ckpt
   * scene_path: None
* vsize: 0.005
* yaml: configs/reconstruct/test_maleB
existing options file found (different from current one)...
17c17
<     optimize: null
---
>     optimize: true
override? (y/n) generate from scalp
voxel size: 192 256 256
100%|███████████████████████████████████████████████████████████| 60000/60000 [16:39<00:00, 60.00it/s]
num guide: 0
100%|██████████████████████████████████████████████████████████| 30503/30503 [00:58<00:00, 521.34it/s]
100%|█████████████████████████████████████████████████████████| 30503/30503 [00:20<00:00, 1493.68it/s]
done...
Smoothing strands: 100%|██████████████████████████████████████| 12108/12108 [00:08<00:00, 1396.04it/s]
connect segments...
100%|█████████████████████████████████████████████████████████| 12108/12108 [00:10<00:00, 1176.97it/s]
100%|██████████████████████████████████████████████████████████| 12108/12108 [01:00<00:00, 201.43it/s]
fail: 8151
done...
100%|███████████████████████████████████████████████████████| 12108/12108 [00:00<00:00, 297656.32it/s]
Smoothing strands: 100%|██████████████████████████████████████| 12108/12108 [00:10<00:00, 1171.83it/s]
num of strands: 12108
num of good strands: 0.0
connect poor strands to good strands...
iter: 0
num of good strands: 0
num of out strands: 0
current thr_dist: 0.5
current thr_dot: 0.9
Traceback (most recent call last):
  File "/MonoHair/HairGrow.py", line 963, in <module>
    connect_strands = HairGrowSolver.connect_to_scalp(strands,num_root)
  File "/MonoHair/HairGrow.py", line 655, in connect_to_scalp
    core_strands = np.concatenate(core_strands,0)
  File "<__array_function__ internals>", line 180, in concatenate
ValueError: need at least one array to concatenate
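
The log shows num of good strands: 0, so core_strands is an empty list by the time connect_to_scalp concatenates it. A guard makes the failure mode explicit (a sketch; safe_concat is a hypothetical stand-in for the failing call in HairGrow.py):

import numpy as np

def safe_concat(core_strands):
    # An empty list means the thresholds rejected every candidate strand;
    # relaxing grow_threshold or connect_threshold in the case config may help.
    if not core_strands:
        raise RuntimeError('no good strands survived filtering')
    return np.concatenate(core_strands, 0)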

Some strange typos in .sh files, please help

At this step I had to download manually due to SSL problems, and I found some strange typos.

In fetch_data.sh:

[screenshot] there is a "utilities" typo.

In fetch_pretrained_model.sh:

[screenshot] the filename differs, where the original file is ".ckpt".

I wonder whether these will affect the normal run of the repository. Should I follow the original files or use the modified ones to download manually? Thanks.

Release date?

Hello, may I know when you will release your source code?

And also, is the 3D hair reconstructed from an input image?

Thanks.
