
eva3d's People

Contributors

hongfz16

eva3d's Issues

smpl_link error while duplicating

What is the smpl_link parameter for duplicating the HF space?

Also, your Hugging Face Space is down:

https://huggingface.co/spaces/hongfz16/EVA3D

Runtime error
  0%|          | 0.00/160M [00:00<?, ?B/s]
Traceback (most recent call last):
  File "app.py", line 42, in download_pretrained_models
    download_file(session, eva3d_deepfashion_model)
  File "EVA3D/download_models.py", line 81, in download_file
    res.raise_for_status()
  File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/requests/models.py", line 1021, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: https://drive.google.com/uc?id=1SYPjxnHz3XPRhTarx_Lw8SG_iz16QUMU&confirm=t&uuid=3a51e74f-d6cd-4692-adcd-867a1dde7a18

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "app.py", line 56, in <module>
    download_pretrained_models()
  File "app.py", line 46, in download_pretrained_models
    download_file(session, eva3d_deepfashion_model, use_alt_url=True)
  File "EVA3D/download_models.py", line 80, in download_file
    with session.get(file_url, stream=True) as res:
  File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/requests/sessions.py", line 600, in get
    return self.request("GET", url, **kwargs)
  File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/requests/sessions.py", line 573, in request
    prep = self.prepare_request(req)
  File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/requests/sessions.py", line 484, in prepare_request
    p.prepare(
  File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/requests/models.py", line 368, in prepare
    self.prepare_url(url, params)
  File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/requests/models.py", line 439, in prepare_url
    raise MissingSchema(
requests.exceptions.MissingSchema: Invalid URL '': No scheme supplied. Perhaps you meant https://?

Container logs:

Cloning into 'EVA3D'...
Collecting fvcore
  Downloading fvcore-0.1.5.post20221221.tar.gz (50 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 50.2/50.2 kB 721.9 kB/s eta 0:00:00
  Preparing metadata (setup.py): started
  Preparing metadata (setup.py): finished with status 'done'
Requirement already satisfied: plotly in /home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages (5.13.1)
Collecting plotly
  Downloading plotly-5.15.0-py2.py3-none-any.whl (15.5 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 15.5/15.5 MB 32.9 MB/s eta 0:00:00
Requirement already satisfied: numpy in /home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages (from fvcore) (1.21.2)
Collecting yacs>=0.1.6
  Downloading yacs-0.1.8-py3-none-any.whl (14 kB)
Requirement already satisfied: pyyaml>=5.1 in /home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages (from fvcore) (6.0)
Requirement already satisfied: tqdm in /home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages (from fvcore) (4.64.1)
Collecting termcolor>=1.1
  Downloading termcolor-2.3.0-py3-none-any.whl (6.9 kB)
Requirement already satisfied: Pillow in /home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages (from fvcore) (9.0.1)
Collecting tabulate
  Downloading tabulate-0.9.0-py3-none-any.whl (35 kB)
Collecting iopath>=0.1.7
  Downloading iopath-0.1.10.tar.gz (42 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 42.2/42.2 kB 760.8 kB/s eta 0:00:00
  Preparing metadata (setup.py): started
  Preparing metadata (setup.py): finished with status 'done'
Requirement already satisfied: tenacity>=6.2.0 in /home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages (from plotly) (8.2.2)
Requirement already satisfied: packaging in /home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages (from plotly) (23.0)
Requirement already satisfied: typing_extensions in /home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages (from iopath>=0.1.7->fvcore) (4.5.0)
Collecting portalocker
  Downloading portalocker-2.7.0-py2.py3-none-any.whl (15 kB)
Building wheels for collected packages: fvcore, iopath
  Building wheel for fvcore (setup.py): started
  Building wheel for fvcore (setup.py): finished with status 'done'
  Created wheel for fvcore: filename=fvcore-0.1.5.post20221221-py3-none-any.whl size=61405 sha256=585e93d8432e05cf12db25b5afcdc7f126e03ac93b73784ce8d2ca21064f083a
  Stored in directory: /home/user/.cache/pip/wheels/6d/1f/39/577cab48487ad5c31acd046ee33dea7ca6fede7e923b2a2bc1
  Building wheel for iopath (setup.py): started
  Building wheel for iopath (setup.py): finished with status 'done'
  Created wheel for iopath: filename=iopath-0.1.10-py3-none-any.whl size=31532 sha256=0b594ceaf50d8e5be928e2a127ebdfd67283cf96ada1d623d808621c3e253cc9
  Stored in directory: /home/user/.cache/pip/wheels/85/e6/c8/4a67ea2cd453fd2e02c616b615fe8655ae67246c8e669bd464
Successfully built fvcore iopath
Installing collected packages: yacs, termcolor, tabulate, portalocker, plotly, iopath, fvcore
  Attempting uninstall: plotly
    Found existing installation: plotly 5.13.1
    Uninstalling plotly-5.13.1:
      Successfully uninstalled plotly-5.13.1
Successfully installed fvcore-0.1.5.post20221221 iopath-0.1.10 plotly-5.15.0 portalocker-2.7.0 tabulate-0.9.0 termcolor-2.3.0 yacs-0.1.8

[notice] A new release of pip available: 22.3.1 -> 23.1.2
[notice] To update, run: pip install --upgrade pip
Looking in links: https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/py38_cu116_pyt1131/download.html
Collecting pytorch3d
  Downloading https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/py38_cu116_pyt1131/pytorch3d-0.7.2-cp38-cp38-linux_x86_64.whl (72.8 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 72.8/72.8 MB 258.8 MB/s eta 0:00:00
Requirement already satisfied: iopath in /home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages (from pytorch3d) (0.1.10)
Requirement already satisfied: fvcore in /home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages (from pytorch3d) (0.1.5.post20221221)
Requirement already satisfied: tabulate in /home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages (from fvcore->pytorch3d) (0.9.0)
Requirement already satisfied: termcolor>=1.1 in /home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages (from fvcore->pytorch3d) (2.3.0)
Requirement already satisfied: pyyaml>=5.1 in /home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages (from fvcore->pytorch3d) (6.0)
Requirement already satisfied: yacs>=0.1.6 in /home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages (from fvcore->pytorch3d) (0.1.8)
Requirement already satisfied: tqdm in /home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages (from fvcore->pytorch3d) (4.64.1)
Requirement already satisfied: numpy in /home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages (from fvcore->pytorch3d) (1.21.2)
Requirement already satisfied: Pillow in /home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages (from fvcore->pytorch3d) (9.0.1)
Requirement already satisfied: portalocker in /home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages (from iopath->pytorch3d) (2.7.0)
Requirement already satisfied: typing-extensions in /home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages (from iopath->pytorch3d) (4.5.0)
Installing collected packages: pytorch3d
Successfully installed pytorch3d-0.7.2
Downloading EVA3D model pretrained on DeepFashion.
Google Drive download failed.
Trying do download from alternate server

  0%|          | 0.00/160M [00:00<?, ?B/s]
  0%|          | 2.26k/160M [00:00<3:26:04, 13.0kB/s]
  0%|          | 0.00/160M [00:00<?, ?B/s]
Traceback (most recent call last):
  File "app.py", line 42, in download_pretrained_models
    download_file(session, eva3d_deepfashion_model)
  File "EVA3D/download_models.py", line 81, in download_file
    res.raise_for_status()
  File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/requests/models.py", line 1021, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: https://drive.google.com/uc?id=1SYPjxnHz3XPRhTarx_Lw8SG_iz16QUMU&confirm=t&uuid=3a51e74f-d6cd-4692-adcd-867a1dde7a18

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "app.py", line 56, in <module>
    download_pretrained_models()
  File "app.py", line 46, in download_pretrained_models
    download_file(session, eva3d_deepfashion_model, use_alt_url=True)
  File "EVA3D/download_models.py", line 80, in download_file
    with session.get(file_url, stream=True) as res:
  File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/requests/sessions.py", line 600, in get
    return self.request("GET", url, **kwargs)
  File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/requests/sessions.py", line 573, in request
    prep = self.prepare_request(req)
  File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/requests/sessions.py", line 484, in prepare_request
    p.prepare(
  File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/requests/models.py", line 368, in prepare
    self.prepare_url(url, params)
  File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/requests/models.py", line 439, in prepare_url
    raise MissingSchema(
requests.exceptions.MissingSchema: Invalid URL '': No scheme supplied. Perhaps you meant https://?

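The traceback shows two failures in sequence: the Google Drive link for the DeepFashion checkpoint returns HTTP 500, and the fallback path then calls session.get('') because no alternate URL is configured, which raises MissingSchema. As a stopgap, the checkpoint can be fetched by hand; a minimal sketch, assuming gdown is installed (the output filename below is a placeholder, not taken from the logs):

    # Hypothetical manual download of the failing checkpoint with gdown.
    # The file id comes from the drive.google.com URL in the traceback above;
    # move the downloaded file to wherever download_models.py would have put it.
    import gdown

    file_id = "1SYPjxnHz3XPRhTarx_Lw8SG_iz16QUMU"
    gdown.download(
        f"https://drive.google.com/uc?id={file_id}",
        "models_0420000.pt",  # placeholder output name
        quiet=False,
    )

Note that this only helps while Google Drive itself is reachable; if it keeps returning 500, the file has to be mirrored elsewhere.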

Custom images and texture

Great project.
I was able to get it working on Windows, and I have a question.
As it stands, is it possible to generate results from custom images?

From what I saw in the code, it currently generates results from a demo dataset packaged as a pkl file (see the attached screenshot).

Is the plan to release code in the future so that we can create our own pkl files to use with EVA3D?

And is there an option to generate a texture for the model?

Thanks a lot for the amazing code.

Questions about evaluation fid of EVA3D

Hi,

Thanks so much for this inspiring and excellent work on generative human models. I have a question about evaluating the FID of EVA3D: what truncation value is used for evaluation, 1 or the default of 0.5? If you could provide the script you use to compute all of the metrics, that would be very helpful.

Thanks again, and looking forward to your reply.

Best,
Zijian

Use your own pictures

I want to use my own images for training. How can I estimate the SMPL parameters and camera parameters of an image, and how do I generate the sample data pkl file?

Predefined boxes

Thanks for the great work.
Can you please elaborate on how you divide up the volume into boxes?

def predefined_bbox(self, j, only_cur_index=False):
        if j == 15:
            xyz_min = np.array([-0.0901, 0.2876, -0.0891])
            xyz_max = np.array([0.0916, 0.5555+0.04, 0.1390])
            xyz_min -= np.array([0.05, 0.05, 0.05])
            xyz_max += np.array([0.05, 0.05, 0.05])
            cur_index = self.smpl_index_by_joint([15])
        elif j == 12:
            xyz_min = np.array([-0.1752, 0.0208, -0.1198]) # combine 12 and 9
            xyz_max = np.array([0.1724, 0.2876, 0.1391])
            cur_index = self.smpl_index_by_joint([9, 13, 14, 6, 16, 17, 12, 15])
        elif j == 9 and only_cur_index:
            xyz_min = None
            xyz_max = None
            cur_index = self.smpl_index_by_joint([9, 13, 14, 6, 16, 17, 3])
        elif j == 6:
            xyz_min = np.array([-0.1569, -0.1144, -0.1095])
            xyz_max = np.array([0.1531, 0.0208, 0.1674])
            cur_index = self.smpl_index_by_joint([3, 6, 0, 9])
        elif j == 3:
            xyz_min = np.array([-0.1888, -0.3147, -0.1224])
            xyz_max = np.array([0.1852, -0.1144, 0.1679])
            cur_index = self.smpl_index_by_joint([3, 0, 1, 2, 6])
        elif j == 18:
            xyz_min = np.array([0.1724, 0.1450, -0.0750])
            xyz_max = np.array([0.4321, 0.2758, 0.0406])
            cur_index = self.smpl_index_by_joint([13, 18, 16])
        elif j == 20:
            xyz_min = np.array([0.4321, 0.1721, -0.0753])
            xyz_max = np.array([0.6813, 0.2668, 0.0064])
            cur_index = self.smpl_index_by_joint([16, 20, 18])
        elif j == 22:
            xyz_min = np.array([0.6813, 0.1882, -0.1180])
            xyz_max = np.array([0.8731, 0.2445, 0.0461])
            cur_index = self.smpl_index_by_joint([22, 20, 18])
        elif j == 19:
            xyz_min = np.array([-0.4289, 0.1426, -0.0785])
            xyz_max = np.array([-0.1752, 0.2754, 0.0460])
            cur_index = self.smpl_index_by_joint([14, 17, 19])
        elif j == 21:
            xyz_min = np.array([-0.6842, 0.1705, -0.0780])
            xyz_max = np.array([-0.4289, 0.2659, 0.0059])
            cur_index = self.smpl_index_by_joint([17, 19, 21])
        elif j == 23:
            xyz_min = np.array([-0.8720, 0.1839, -0.1195])
            xyz_max = np.array([-0.6842, 0.2420, 0.0465])
            cur_index = self.smpl_index_by_joint([23, 21, 19])
        elif j == 4:
            xyz_min = np.array([0, -0.6899, -0.0849])
            xyz_max = np.array([0.1893, -0.3147, 0.1335])
            cur_index = self.smpl_index_by_joint([0, 1, 4])
        elif j == 7:
            xyz_min = np.array([0.0268, -1.0879, -0.0891])
            xyz_max = np.array([0.1570, -0.6899, 0.0691])
            cur_index = self.smpl_index_by_joint([4, 1, 7])
        elif j == 10:
            xyz_min = np.array([0.0625, -1.1591-0.04, -0.0876])
            xyz_max = np.array([0.1600, -1.0879+0.02, 0.1669])
            cur_index = self.smpl_index_by_joint([7, 10, 4])
        elif j == 5:
            xyz_min = np.array([-0.1935, -0.6964, -0.0883])
            xyz_max = np.array([0, -0.3147, 0.1299])
            cur_index = self.smpl_index_by_joint([0, 2, 5])
        elif j == 8:
            xyz_min = np.array([-0.1611, -1.0948, -0.0911])
            xyz_max = np.array([-0.0301, -0.6964, 0.0649])
            cur_index = self.smpl_index_by_joint([2, 5, 8])
        elif j == 11:
            xyz_min = np.array([-0.1614, -1.1618-0.04, -0.0882])
            xyz_max = np.array([-0.0632, -1.0948+0.02, 0.1680])
            cur_index = self.smpl_index_by_joint([8, 11, 5])
        else:
            xyz_min = xyz_max = cur_index = None

        if only_cur_index:
            return cur_index

        return xyz_min, xyz_max, cur_index
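
For reference, one plausible way such per-part boxes could be derived (an assumption about the procedure, not the authors' script) is to take the canonical SMPL template, keep the vertices whose dominant skinning weight belongs to the listed joints, and pad their min/max, which would be consistent with the hard-coded constants and the extra 0.05/0.04 margins above. A minimal sketch, assuming the smplx package and a local SMPL model file, and noting that EVA3D's canonical pose may differ, so the exact numbers would not match:

    # Sketch: derive an axis-aligned box for one body part from the canonical SMPL template.
    # "smpl_models" is a placeholder path to the SMPL model files.
    import numpy as np
    import smplx
    import torch

    body_model = smplx.create("smpl_models", model_type="smpl", gender="neutral")
    with torch.no_grad():
        verts = body_model().vertices[0].numpy()   # zero-pose template vertices, shape (6890, 3)
    weights = body_model.lbs_weights.numpy()       # per-vertex skinning weights, shape (6890, 24)

    def part_bbox(joint_ids, pad=0.05):
        # vertices whose strongest skinning weight belongs to one of the given joints
        mask = np.isin(weights.argmax(axis=1), joint_ids)
        part_verts = verts[mask]
        return part_verts.min(axis=0) - pad, part_verts.max(axis=0) + pad

    xyz_min, xyz_max = part_bbox([15])             # e.g. joint 15, the head in the standard SMPL ordering
    print(xyz_min, xyz_max)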

RuntimeError: derivative for aten::grid_sampler_2d_backward is not implemented

Hi,

Thank you for releasing this great codebase. I am having some issues while running the DeepFashion training code: I keep getting the error "RuntimeError: derivative for aten::grid_sampler_2d_backward is not implemented". My PyTorch version is 1.11.0 and my CUDA version is 11.3. Can you please help with this?
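
That message means this particular PyTorch build does not implement the second derivative (double backward) of grid_sample, which is typically triggered when a gradient-based regularizer (an eikonal or R1-style penalty, for instance) is backpropagated through a grid_sample call. A self-contained check, independent of the EVA3D code, is sketched below; if it fails with the same message, upgrading to a newer PyTorch release (the Hugging Face Space logs earlier on this page install PyTorch 1.13.1 wheels) is the usual fix.

    # Standalone check (not EVA3D code): does this PyTorch build support a second
    # derivative through grid_sample? On affected builds this raises
    # "derivative for aten::grid_sampler_2d_backward is not implemented".
    import torch
    import torch.nn.functional as F

    feat = torch.randn(1, 1, 8, 8, requires_grad=True)
    grid = torch.randn(1, 4, 4, 2, requires_grad=True)
    out = F.grid_sample(feat, grid, align_corners=True)
    grad_grid, = torch.autograd.grad(out.sum(), grid, create_graph=True)
    grad_grid.sum().backward()   # the double backward; fails on builds without support
    print("double backward through grid_sample works")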

Cannot get same evaluation results as the paper.

I use the released checkpoint models_0420000.pt and the official inference code on the DeepFashion dataset.
Following the paper, I generated 50k inference results, then calculated the FID and KID between the results and the dataset.

python generation_demo.py --batch 1 --chunk 1 \
    --expname 512x256_deepfashion --dataset_path ./dataset/DeepFashion \
    --depth 5 --width 128 --style_dim 128 --renderer_spatial_output_dim 512 256 \
    --input_ch_views 3 --white_bg \
    --voxhuman_name eva3d_deepfashion \
    --deltasdf --N_samples 28 --ckpt 420000 \
    --identities 50000  #--render_video

For the evaluation, I use the torch-fidelity package:

fidelity --gpu 0 --kid --fid --input1 ${my_path} --input2 ${dataset_path}

But I only got FID = 55, which is much worse than the value reported in the paper.
Am I doing something wrong?
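
For completeness, the same metrics can be computed from Python with the torch-fidelity API, which makes it easier to record exactly which folders and preprocessing were compared; the paths below are placeholders:

    # Python-API equivalent of the fidelity CLI call above.
    # input1/input2 are placeholder paths: the 50k generated samples and the real
    # DeepFashion images, which should be cropped/resized the same way.
    import torch_fidelity

    metrics = torch_fidelity.calculate_metrics(
        input1="path/to/generated_50k",
        input2="path/to/DeepFashion/images",
        cuda=True,
        fid=True,
        kid=True,
    )
    print(metrics)  # includes 'frechet_inception_distance' and 'kernel_inception_distance_mean'

Differences in truncation, image resizing, and the set of real images used for the reference statistics can all move FID by large margins, so those settings are worth checking before comparing against the paper's number.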

smpl_template_sdf

Thanks for the great work!
How do you generate the smpl_template_sdf? Is the smpl_template_sdf defined in each part's local space, and do all the parts share the same smpl_template_sdf? Could you release the code for generating it?
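
One plausible way to precompute such a template SDF (an assumption about the procedure, not the released code) is to sample a regular grid of points over a part's bounding box and take the signed distance to a canonical-pose SMPL surface, for example with trimesh:

    # Sketch: signed distances from a grid of query points to a canonical SMPL mesh.
    # "smpl_template.obj" and the box corners are placeholders.
    import numpy as np
    import trimesh

    mesh = trimesh.load("smpl_template.obj", force="mesh")
    xyz_min = np.array([-0.15, -0.30, -0.12])      # placeholder part box, min corner
    xyz_max = np.array([ 0.15,  0.30,  0.12])      # placeholder part box, max corner
    res = 32
    axes = [np.linspace(lo, hi, res) for lo, hi in zip(xyz_min, xyz_max)]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)
    # trimesh convention: positive inside the surface, negative outside
    sdf = trimesh.proximity.signed_distance(mesh, grid).reshape(res, res, res)
    np.save("smpl_template_sdf.npy", sdf)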

I could not find RGB 3D Meshes

Thanks for the great work!
I ran the EVA3D demo code on Google Colab, but I could not find RGB 3D meshes.
Can I get RGB 3D meshes or a point cloud from this project?

Have you ever tried training the model with AMP?

Hi, I think this is a nice piece of work on 3D human image generation, so I want to study the model in more detail. However, my graphics card has limited memory, so I tried to train the model with AMP (automatic mixed precision) following the PyTorch tutorial, and ran into this problem:

  File "/home/xxx/eva3d/op/fused_act.py", line 70, in forward
    out = fused.fused_bias_act(input, bias, empty, 3, 0, negative_slope, scale)
RuntimeError: expected scalar type Half but found Float

I think the problem is related to fused_bias_act.cpp, but I don't know how to deal with it. I would greatly appreciate any advice.
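
A common workaround for custom CUDA ops under AMP (a sketch under the assumption that the kernel only handles float32, not a tested fix for this repo) is to run just the fused op with autocast disabled and cast its inputs to float32:

    # Hypothetical float32 wrapper around the repo's fused op. op/fused_act.py loads
    # the JIT-compiled extension into a module-level name `fused`, so this assumes it
    # is run from the EVA3D repo root.
    import torch
    from op.fused_act import fused

    def fused_leaky_relu_fp32(input, bias, negative_slope=0.2, scale=2 ** 0.5):
        with torch.cuda.amp.autocast(enabled=False):
            empty = input.new_empty(0).float()
            out = fused.fused_bias_act(
                input.float(), bias.float(), empty, 3, 0, negative_slope, scale
            )
        return out.to(input.dtype)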

train on new dataset

hi,
I want to train this model on a dataset similar to DeepFashion.
I have computed keypoints using OpenPose (only 18 keypoints), but you seem to take more keypoint information as input. Which method did you use?
I am producing segmentation masks using U2-Net.

In smpl.pkl you store camera parameters and other information; how did you compute them?

Any other suggestions on how I should proceed?

How can we get the result of novel pose generation?

Excuse me, which code produces the novel pose generation results (e.g., a GIF or MP4 of a dancing girl)? What is the input for novel pose generation: a dance video of a real person, or a sequence of the dancer's joint positions?

Colab won't run without errors

I am trying to run EVA3D on Colab, but the generation code errors out. After downloading the models, it reports a missing file or directory in the evaluations folder. Can you please explain how to get the software to run from a fresh Colab session? I added a PNG file at "evaluations/debug/iter_0300000/random_angles/images_paper_fig/PNG_FILE_NAME.png" and it still throws FileNotFoundError: No such file or directory.
Worth mentioning: I noticed a mismatch between the versions on Hugging Face and on GitHub.
I am working on a thesis on reconstructing 3D models from 2D images, and this software is a breakthrough for me, but I need to read the code and understand how it works. So although the Hugging Face version works, I need to use the Colab version that runs from the GitHub code.
Your work is amazing, and any help would be greatly appreciated. Thanks.

cur_input shape is [1,0,6]

Hi, at line 1224 in eva3d_deepfashion.py, the shape of cur_input is [1, 0, 6] many times. Why is that?
Does it mean some images don't have the corresponding body part? Looking forward to your reply.

key error

Hi, I hit a KeyError when using the SMPL model in dataset.py (the DeepFashion block).
How is the SMPL model supposed to index the image names? (Screenshots attached.)

CalledProcessError

+ python generation_demo.py --batch 1 --chunk 1 --expname 256x256_aist --dataset_path demodataset --depth 5 --width 128 --style_dim 128 --renderer_spatial_output_dim 512 256 --input_ch_views 3 --white_bg --voxhuman_name eva3d_deepfashion --deltasdf --N_samples 28 --ckpt 340000 --identities 5 --truncation_ratio 0.5 --is_aist
Traceback (most recent call last):
  File "generation_demo.py", line 18, in <module>
    from model import VoxelHumanGenerator as Generator
  File "/home/yc/testing3dai/EVA3D/model.py", line 10, in <module>
    from op import FusedLeakyReLU, fused_leaky_relu, upfirdn2d
  File "/home/yc/testing3dai/EVA3D/op/__init__.py", line 1, in <module>
    from .fused_act import FusedLeakyReLU, fused_leaky_relu
  File "/home/yc/testing3dai/EVA3D/op/fused_act.py", line 11, in <module>
    fused = load(
  File "/home/yc/anaconda3/envs/eva3d/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 986, in load
    return _jit_compile(
  File "/home/yc/anaconda3/envs/eva3d/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1193, in _jit_compile
    _write_ninja_file_and_build_library(
  File "/home/yc/anaconda3/envs/eva3d/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1273, in _write_ninja_file_and_build_library
    check_compiler_abi_compatibility(compiler)
  File "/home/yc/anaconda3/envs/eva3d/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 265, in check_compiler_abi_compatibility
    if not check_compiler_ok_for_platform(compiler):
  File "/home/yc/anaconda3/envs/eva3d/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 225, in check_compiler_ok_for_platform
    which = subprocess.check_output(['which', compiler], stderr=subprocess.STDOUT)
  File "/home/yc/anaconda3/envs/eva3d/lib/python3.8/subprocess.py", line 415, in check_output
    return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
  File "/home/yc/anaconda3/envs/eva3d/lib/python3.8/subprocess.py", line 516, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['which', 'c++']' returned non-zero exit status 1.
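
The last line means 'which c++' found no C++ compiler, which torch.utils.cpp_extension needs in order to JIT-build op/fused_act.py. Installing g++ (for example via build-essential on Ubuntu, or a conda gcc/g++ package) and re-running generation_demo.py usually resolves it; the quick check below only verifies that a compiler is visible on PATH:

    # Quick environment check (not part of the repo): is a C++ compiler on PATH?
    import shutil

    for name in ("c++", "g++", "gcc"):
        print(f"{name}: {shutil.which(name) or 'NOT FOUND'}")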

About PTI Inversion

Hi! Thanks for your great work.
I'm trying to invert target images as you describe in Section 4.5 (Inversion). Could you give me some clues about how to do it?

In detail, I've obtained an embedding with shape (1, 18, 512) at (PTI repo)/embeddings/barcelona/PTI by following https://github.com/danielroich/PTI. However, I don't know how to convert that embedding into the 'mean_latent' used at line 269 of generation_demo.py.

Observation space

Hello, I am a newcomer to the world of 3D modeling and NeRF and have a small question. Does the observation space mentioned in the paper refer to the camera space or the space transformed using transformation matrices Gk (similar to the world space in NeRF)?

Should I change the iter from 1,000,000 to 8,000,000 if I use single GPU to train?

Hi,

Thank you very much for your outstanding work. While attempting to train, I noticed that regardless of the number of GPUs selected, the total time required remained constant. If I were to train using a single GPU, should I adjust the iteration count from 1,000,000 to 8,000,000 to account for the fact that you are training with 8 GPUs? Am I correct in understanding that the iteration count pertains to a single GPU?

Many thanks!

Will the evaluation code be released?

Thank you for your excellent work.

I searched past issues and read your description of how the PCK and depth metrics are calculated, but it seems the evaluation code has not been released yet.

Model not converging.

Dear Hong,
Thank you for making your work open source. I am currently attempting to reproduce the results. I trained the model using the recommended command, but even after 1,000,000 iterations, the results remained unsatisfactory. Could you please suggest some possible reasons for this issue?
The results appear as shown in the attached screenshot.

My command is:
python -m torch.distributed.launch --nproc_per_node ${NUM_GPU} --master_port=${MASTER_PORT} train_deepfashion.py \
    --batch 1 --chunk 1 --expname train_deepfashion_512x256_2 --dataset_path ./DeepFashion/ \
    --depth 5 --width 128 --style_dim 128 --renderer_spatial_output_dim 512 256 \
    --input_ch_views 3 --white_bg --r1 300 --voxhuman_name eva3d_deepfashion \
    --random_flip --eikonal_lambda 0.5 --small_aug --iter 1000000 --adjust_gamma --gamma_lb 20 \
    --min_surf_lambda 1.5 --deltasdf --gaussian_weighted_sampler --sampler_std 15 --N_samples 28

Training time

Thanks for your excellent work, Fangzhou! Following your paper and other issues, I use 4 V100s, set the training iterations to 2,000,000, and change the batch parameter to 8 in the provided training script, as described in your paper. But it looks like more than 8,000 hours would be needed to train the model, rather than the roughly 10 days expected. Should I set the batch to 1, or just stop before 1,000,000 iterations? It confuses me; please let me know how to adjust my settings.

All the best.
