Comments (10)
Sorry, I forgot that I had commented out the data-processing code in multiview_optimization.py (line 868). After executing this step, it will generate the landmark and iris files.
from monohair.
For the data we provide, you can download it directly; we have already aligned it. For your own data, you should set fit_bust=True; we will then fit the FLAME bust, and prepare_data.py will produce model_tsfm_semantic.dat.
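For reference, the fit_bust flag sits under the prepare_data section of the reconstruction config, as the option dump later in this thread shows. A minimal override in a per-case YAML might look like the sketch below; the case name is a placeholder for your own dataset folder:

```yaml
# configs/reconstruct/my_case.yaml (sketch; key names taken from the printed option dump)
data:
  case: my_case          # placeholder: your dataset folder under data/
prepare_data:
  fit_bust: True         # fit the FLAME bust and produce model_tsfm_semantic.dat
```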
@KeyuWu-CS
I encountered the following error while trying your solution in prepare_data.py
with the jenya2 dataset you provided.
I merely added
prepare_data:
  fit_bust: true
to the configs/reconstruct/jenya2.yaml file.
Are there any other settings I need to configure for using my own data?
Thank you in advance for your answer!
Process ID: 14272
setting configurations...
loading configs/reconstruct/base.yaml...
loading configs/reconstruct/jenya2.yaml...
* HairGenerate:
* connect_dot_threshold: 0.85
* connect_scalp: True
* connect_segments: True
* connect_threshold: 0.005
* connect_to_guide: None
* dist_to_root: 6
* generate_segments: True
* grow_threshold: 0.9
* out_ratio: 0.0
* PMVO:
* conf_threshold: 0.1
* filter_point: True
* genrate_ori_only: None
* infer_inner: True
* num_sample_per_grid: 4
* optimize: True
* patch_size: 5
* threshold: 0.05
* visible_threshold: 1
* bbox_min: [-0.32, -0.32, -0.24]
* bust_to_origin: [0.006, -1.644, 0.01]
* camera_path: camera/calib_data/wky07-22/cam_params.json
* check_strands: True
* cpu: None
* data:
* Conf_path: conf
* Occ3D_path: ours/Occ3D.mat
* Ori2D_path: best_ori
* Ori3D_path: ours/Ori3D.mat
* bust_path: Bust/bust_long.obj
* case: jenya2
* conf_threshold: 0.4
* depth_path: render_depth
* frame_interval: 2
* image_size: [1920, 1080]
* mask_path: hair_mask
* raw_points_path: ours/colmap_points.obj
* root: data
* scalp_path: ours/scalp_tsfm.obj
* strands_path: ours/world_str_raw.dat
* device: cuda:0
* gpu: 0
* image_camera_path: ours/cam_params.json
* infer_inner:
* render_data: True
* run_mvs: True
* name: 10-16
* ngp:
* marching_cubes_density_thresh: 2.5
* output_root: output
* prepare_data:
* fit_bust: True
* process_bust: True
* process_camera: True
* process_imgs: True
* render_depth: True
* run_ngp: True
* select_images: True
* save_path: refine
* scalp_diffusion: None
* seed: 0
* segment:
* CDGNET_ckpt: assets/CDGNet/LIP_epoch_149.pth
* MODNET_ckpt: assets/MODNet/modnet_photographic_portrait_matting.ckpt
* scene_path: None
* vsize: 0.005
* yaml: configs/reconstruct/jenya2
existing options file found (identical)
distance: 2.254131284488828
distance: 2.254131284488828
09:16:02 SUCCESS Initialized CUDA 11.3. Active GPU is #0: NVIDIA GeForce RTX 4090 [89]
09:16:02 INFO Loading NeRF dataset from
09:16:02 WARNING data/jenya2/colmap/base_transform.json does not contain any frames. Skipping.
09:16:02 WARNING data/jenya2/colmap/cam_params.json does not contain any frames. Skipping.
09:16:02 WARNING data/jenya2/colmap/base_cam.json does not contain any frames. Skipping.
09:16:02 INFO data/jenya2/colmap/transforms.json
09:16:02 WARNING data/jenya2/colmap/key_frame.json does not contain any frames. Skipping.
09:16:03 SUCCESS Loaded 301 images after 0s
09:16:03 INFO cam_aabb=[min=[-0.599218,-0.552077,0.784563], max=[2.06055,1.66177,1.41255]]
09:16:04 INFO Loading network snapshot from: data/jenya2/colmap/base.ingp
09:16:04 INFO GridEncoding: Nmin=16 b=2.43803 F=4 T=2^19 L=8
09:16:04 INFO Density model: 3--[HashGrid]-->32--[FullyFusedMLP(neurons=64,layers=3)]-->1
09:16:04 INFO Color model: 3--[Composite]-->16+16--[FullyFusedMLP(neurons=64,layers=4)]-->3
09:16:04 INFO total_encoding_params=12855296 total_network_params=10240
Screenshot transforms from data/jenya2/colmap/base_transform.json
Generating mesh via marching cubes and saving to data/jenya2/colmap/base.obj. Resolution=[512,512,512], Density Threshold=2.5
09:16:04 INFO #vertices=4571178 #triangles=9112028
range(0, 16)
rendering data/jenya2/trainning_images/capture_images/000.png
rendering data/jenya2/trainning_images/capture_images/001.png
rendering data/jenya2/trainning_images/capture_images/002.png
rendering data/jenya2/trainning_images/capture_images/003.png
rendering data/jenya2/trainning_images/capture_images/004.png
rendering data/jenya2/trainning_images/capture_images/005.png
rendering data/jenya2/trainning_images/capture_images/006.png
rendering data/jenya2/trainning_images/capture_images/007.png
rendering data/jenya2/trainning_images/capture_images/008.png
rendering data/jenya2/trainning_images/capture_images/009.png
rendering data/jenya2/trainning_images/capture_images/010.png
rendering data/jenya2/trainning_images/capture_images/011.png
rendering data/jenya2/trainning_images/capture_images/012.png
rendering data/jenya2/trainning_images/capture_images/013.png
rendering data/jenya2/trainning_images/capture_images/014.png
rendering data/jenya2/trainning_images/capture_images/015.png
fiting ...
Process ID: 14506
setting configurations...
loading configs/Bust_fit/base.yaml...
loading configs/Bust_fit/jenya2.yaml...
* batch_size: 1
* camera_path: data/jenya2/ours/cam_params.json
* cpu: None
* data:
* image_size: [1920, 1080]
* device: cuda:0
* gpu: 0
* ignore_existing: None
* isTrain: True
* load_fits: None
* loss:
* eyed: 2
* inside_mask: None
* lipd: 0.5
* lmk: 1
* scale_weight: 1
* name: debug
* num_workers: 4
* optimize:
* data_type: fix_shoulder
* iter: 10000
* use_iris: None
* use_mask: None
* use_rendering: None
* vis_step: 100
* output_path: None
* output_root: output
* path: data
* savepath: data
* seed: 0
* smplx:
* extra_joint_path: assets/data/smplx_extra_joints.yaml
* face_eye_mask_path: assets/data/uv_face_eye_mask.png
* face_mask_path: assets/data/uv_face_mask.png
* flame2smplx_cached_path: assets/data/flame2smplx_tex_1024.npy
* flame_ids_path: assets/data/SMPL-X__FLAME_vertex_ids.npy
* flame_vertex_masks_path: assets/data/FLAME_masks.pkl
* j14_regressor_path: assets/data/SMPLX_to_J14.pkl
* mano_ids_path: assets/data/MANO_SMPLX_vertex_ids.pkl
* n_exp: 100
* n_shape: 300
* n_tex: 100
* smplx_model_path: assets/data/SMPLX_NEUTRAL_2020.npz
* smplx_tex_path: assets/data/smplx_tex.png
* tex_path: assets/data/FLAME_albedo_from_BFM.npz
* tex_type: BFM
* topology_path: assets/data/SMPL_X_template_FLAME_uv.obj
* uv_size: 512
* subject: jenya2
* subject_path: data/jenya2
* vis: True
* yaml: configs/Bust_fit/jenya2
existing options file found (identical)
Traceback (most recent call last):
File "/workspace/multiview_optimization.py", line 880, in <module>
dataset = NerfDataset(args, given_imagepath_list = imagepath_list)
File "/workspace/multiview_optimization.py", line 102, in __init__
assert len(self.data) > 0, "Can't find data; make sure you specify the path to your dataset"
AssertionError: Can't find data; make sure you specify the path to your dataset
If you are not running wig hair, please first run bust fitting.
Traceback (most recent call last):
File "/workspace/prepare_data.py", line 130, in <module>
shutil.copyfile(os.path.join(args.data.root,'optimize','model_tsfm.dat'),os.path.join(args.data.root,'model_tsfm.dat'))
File "/usr/lib/python3.10/shutil.py", line 254, in copyfile
with open(src, 'rb') as fsrc:
FileNotFoundError: [Errno 2] No such file or directory: 'data/jenya2/optimize/model_tsfm.dat'
@0mil Traceback (most recent call last):
File "/workspace/multiview_optimization.py", line 880, in <module>
dataset = NerfDataset(args, given_imagepath_list = imagepath_list)
File "/workspace/multiview_optimization.py", line 102, in __init__
assert len(self.data) > 0, "Can't find data; make sure you specify the path to your dataset"
AssertionError: Can't find data; make sure you specify the path to your dataset
Actually, your fit-bust step did not run successfully; after the optimization it will generate "model_tsfm.dat".
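The copy in prepare_data.py fails only because the optimization never wrote the file. A defensive variant of that copy step (a sketch for illustration, not the repository's actual code) would surface the root cause more clearly:

```python
import os
import shutil

def copy_model_tsfm(data_root: str) -> str:
    """Copy optimize/model_tsfm.dat to the dataset root, failing with a hint.

    Sketch only: mirrors the shutil.copyfile call from the traceback above,
    but checks for the file first so the error points at the real cause.
    """
    src = os.path.join(data_root, "optimize", "model_tsfm.dat")
    dst = os.path.join(data_root, "model_tsfm.dat")
    if not os.path.isfile(src):
        raise FileNotFoundError(
            f"{src} is missing: the bust-fitting (multiview optimization) step "
            "did not finish successfully; run it before prepare_data.py."
        )
    shutil.copyfile(src, dst)
    return dst
```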
@KeyuWu-CS
According to the code, to run the fit_bust process, the sample datasets jenya2 and ksyusha1 must include data/jenya2/iris/*.txt and data/jenya2/landmark2d/*.txt files. However, these files are missing. How can I obtain these *.txt files to run with my own dataset?
Sorry, I forgot that I had commented out the data-processing code in multiview_optimization.py (line 868). After executing this step, it will generate the landmark and iris files.
From what I understand, the data-processing code in multiview_optimization.py (line 868) should not be commented out when running my own dataset. Is that correct?
It works smoothly! Thank you for answering!
In my case, after removing the comment in multiview_optimization.py, I get:
existing options file found (identical)
name: VA_Hair_Footage1stTry
Traceback (most recent call last):
File "C:\Users\Lauren\Documents\Source\MonoHair\multiview_optimization.py", line 880, in <module>
dataset = NerfDataset(args, given_imagepath_list = imagepath_list)
File "C:\Users\Lauren\Documents\Source\MonoHair\multiview_optimization.py", line 102, in __init__
assert len(self.data) > 0, "Can't find data; make sure you specify the path to your dataset"
AssertionError: Can't find data; make sure you specify the path to your dataset
If you are not running wig hair, please first run bust fitting.
Traceback (most recent call last):
File "C:\Users\Lauren\Documents\Source\MonoHair\prepare_data.py", line 130, in <module>
shutil.copyfile(os.path.join(args.data.root,'optimize','model_tsfm.dat'),os.path.join(args.data.root,'model_tsfm.dat'))
File "C:\Users\Lauren\miniconda3\envs\MonoHair\lib\shutil.py", line 254, in copyfile
with open(src, 'rb') as fsrc:
FileNotFoundError: [Errno 2] No such file or directory: 'data\VA_Hair_Footage1stTry\optimize\model_tsfm.dat'
It seems like the directory path is not being built correctly?
from monohair.
Any idea? :)
@LaurentGarcia I think you need to double-check the path of your own dataset. Additionally, you must keep the expected directory structure exactly.
In my case, I mirrored the directory layout of another sample dataset (jenya2), and I didn't encounter this error!
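On the path question: the backslashes in 'data\VA_Hair_Footage1stTry\optimize\model_tsfm.dat' are just Windows path separators produced by os.path.join, not a join bug; the error really means the file was never created. This can be checked on any OS with the Windows flavour of the path module:

```python
import ntpath  # Windows-style path joining, importable on any platform

# On Windows, os.path.join behaves like ntpath.join: it inserts backslashes.
p = ntpath.join("data", "VA_Hair_Footage1stTry", "optimize", "model_tsfm.dat")
print(p)  # data\VA_Hair_Footage1stTry\optimize\model_tsfm.dat
```

So the path in the FileNotFoundError is well-formed; the fix is to make the bust-fitting step actually produce model_tsfm.dat.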
@LaurentGarcia Sorry for the late response. model_tsfm.dat
will be generated when fit_bust is set to True; that step fits an SMPL-X model to the images. You must ensure the multiview optimization runs successfully. Your error output shows that something failed in this step, so please check it. You can refer to the answer above.
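To catch layout problems before the pipeline starts, one can sanity-check the dataset folder against the structure the sample data uses. This is a hedged sketch: the landmark2d and iris subfolder names come from this thread, and the list should be extended to match whichever sample dataset you mirror:

```python
import glob
import os

# Subfolders of *.txt files mentioned in this thread; extend as needed.
REQUIRED_TXT_DIRS = ["landmark2d", "iris"]

def check_dataset(root: str) -> list:
    """Return a list of human-readable problems with the dataset layout."""
    problems = []
    for sub in REQUIRED_TXT_DIRS:
        folder = os.path.join(root, sub)
        if not os.path.isdir(folder):
            problems.append(f"missing folder: {folder}")
        elif not glob.glob(os.path.join(folder, "*.txt")):
            problems.append(f"no .txt files in: {folder}")
    return problems
```

Running it against your dataset root (e.g. data/jenya2) before prepare_data.py makes missing inputs obvious instead of surfacing as an assertion deep in multiview_optimization.py.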
from monohair.