
ide-3d's Issues

pyglet.gl.ContextException: Could not create GL context when running "python render_mesh.py --fname out/0.npy --outdir out"

(pytorch3d) student@CL502-07:/mnt/c/Users/sit/Documents/IDE-3D$
0%| | 0/240 [00:00<?, ?it/s]libGL error: MESA-LOADER: failed to open swrast: /home/student/anaconda3/envs/pytorch3d/bin/../lib/libstdc++.so.6: version `GLIBCXX_3.4.30' not found (required by /lib/x86_64-linux-gnu/libLLVM-15.so.1) (search paths /usr/lib/x86_64-linux-gnu/dri:$${ORIGIN}/dri:/usr/lib/dri, suffix _dri)
libGL error: failed to load driver: swrast
0%| | 0/240 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/mnt/c/Users/sit/Documents/IDE-3D/render_mesh.py", line 85, in
render()
File "/home/student/anaconda3/envs/pytorch3d/lib/python3.9/site-packages/click/core.py", line 1157, in call
return self.main(*args, **kwargs)
File "/home/student/anaconda3/envs/pytorch3d/lib/python3.9/site-packages/click/core.py", line 1078, in main
rv = self.invoke(ctx)
File "/home/student/anaconda3/envs/pytorch3d/lib/python3.9/site-packages/click/core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/student/anaconda3/envs/pytorch3d/lib/python3.9/site-packages/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
File "/mnt/c/Users/sit/Documents/IDE-3D/render_mesh.py", line 60, in render
r = pyrender.OffscreenRenderer(512, 512)
File "/home/student/anaconda3/envs/pytorch3d/lib/python3.9/site-packages/pyrender/offscreen.py", line 31, in init
self._create()
File "/home/student/anaconda3/envs/pytorch3d/lib/python3.9/site-packages/pyrender/offscreen.py", line 149, in _create
self._platform.init_context()
File "/home/student/anaconda3/envs/pytorch3d/lib/python3.9/site-packages/pyrender/platforms/pyglet_platform.py", line 50, in init_context
self._window = pyglet.window.Window(config=conf, visible=False,
File "/home/student/anaconda3/envs/pytorch3d/lib/python3.9/site-packages/pyglet/window/xlib/init.py", line 133, in init
super(XlibWindow, self).init(*args, **kwargs)
File "/home/student/anaconda3/envs/pytorch3d/lib/python3.9/site-packages/pyglet/window/init.py", line 538, in init
context = config.create_context(gl.current_context)
File "/home/student/anaconda3/envs/pytorch3d/lib/python3.9/site-packages/pyglet/gl/xlib.py", line 105, in create_context
return XlibContext(self, share)
File "/home/student/anaconda3/envs/pytorch3d/lib/python3.9/site-packages/pyglet/gl/xlib.py", line 127, in init
raise gl.ContextException('Could not create GL context')
pyglet.gl.ContextException: Could not create GL context
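
Two separate problems show up in this log. The MESA-LOADER error means the conda environment ships an older libstdc++ than the system Mesa/LLVM needs (GLIBCXX_3.4.30); updating it, e.g. with conda install -c conda-forge libstdcxx-ng, is a common environment-specific fix. The ContextException itself comes from pyglet trying to open an X11 window under WSL; a widely used workaround, not specific to this repo, is to force headless rendering before pyrender is imported:

import os
os.environ['PYOPENGL_PLATFORM'] = 'egl'  # or 'osmesa' if EGL is unavailable; set before the import below
import pyrender

r = pyrender.OffscreenRenderer(viewport_width=512, viewport_height=512)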

Any plans for training code release?

Dear @MrTornado24,

Congratulations on the amazing work and thank you for choosing to release the implementation.

I was wondering if there are any plans to release the training code in the near future?

Hoping for a positive reply.

Thank you.

About style transfer

Hi, thank you for sharing the code.
I have a question about the cartoon demo on the project page (github.io).

How did you achieve this style transfer? Did you fine-tune on a cartoon dataset (transfer learning) and then blend the layers?
Hoping for your reply!
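
For context, this is the generic technique rather than necessarily the authors' exact recipe: cartoon-style results like this are typically obtained by fine-tuning the generator on a cartoon dataset and then blending layers between the original and fine-tuned generators, swapping per-resolution synthesis blocks. A minimal sketch, assuming StyleGAN2-ADA/EG3D-style state_dict keys such as synthesis.b64.conv0.weight:

import copy
import re

def blend_generators(G_base, G_cartoon, swap_from_res=32):
    # Keep coarse (geometry) blocks from G_base; take blocks at resolution
    # >= swap_from_res (texture/style) from the cartoon-finetuned generator.
    G_blend = copy.deepcopy(G_base)
    state = G_blend.state_dict()
    for key, value in G_cartoon.state_dict().items():
        m = re.search(r'synthesis\.b(\d+)\.', key)
        if m and int(m.group(1)) >= swap_from_res:
            state[key] = value
    G_blend.load_state_dict(state)
    return G_blend

Lowering swap_from_res makes the output more cartoonish; raising it preserves more of the source identity.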

Configs for encoder training and canonical encoder

Hi,

Thank you for sharing your awesome work. I have some questions about how you train the encoder part. The released script apps/train_hybrid_encoder.py shows the procedure, but there are two options (train_gen and train_real), the default weight for each loss is 0, and neither the paper nor the supplementary material describes the losses for this part. Could you release more details about it? I tried both train_gen and train_real (on the celebA_mask dataset) with your pretrained generator, but I failed to get a good reconstruction encoder.

Also, you mention adding a canonical encoder in this paper, but I cannot find a canonical-encoder component in apps/train_hybrid_encoder.py: the generated and real images are not restricted to canonical views, and there is only one encoder. I also cannot find it in your pretrained encoder; it seems that Painter/run_UI.py directly uses the pretrained encoder.

Painter/run_UI.py

D:/projects/IDE-3D/data/ffhq/images512x512/dataset.json
Can you please share this dataset.json file? When I try to run the run_UI.py script, it causes an error.

Request for network implementation details.

This work is quite interesting and amazing.
I want to learn more details to better understand how it works, and I am looking forward to the network scripts.
Thanks a lot!

Possible bug in render_mesh.py

Hi there, thanks for providing the code! I found some possible bugs while using render_mesh.py:

  • Missing OpenGL platform setup. To fix this, I added os.environ['PYOPENGL_PLATFORM'] = 'egl' at the top of the script.
  • The renderer is never released. I hit an error because the renderer is not deleted at each iteration. To fix this, I added r.delete() at the end of each iteration of the for loop.
  • Unsorted image list for video generation. On Line 70, the input image list is not sorted. To fix it, change Line 70 to img2video(sorted(glob.glob(f"tmp/{id}/*.png")), f"{outdir}/render.mp4"). A combined sketch of all three fixes follows below.
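
Putting the three fixes together, a minimal sketch of the affected part of render_mesh.py (the scene setup is elided; camera_poses, scene, save_frame, id, and outdir stand in for the script's own names and are illustrative, not the actual source):

import glob
import os

os.environ['PYOPENGL_PLATFORM'] = 'egl'  # fix 1: headless OpenGL, set before importing pyrender
import pyrender

for i, pose in enumerate(camera_poses):           # per-frame render loop
    r = pyrender.OffscreenRenderer(512, 512)
    color, depth = r.render(scene)                # scene assumed set up earlier
    save_frame(color, f"tmp/{id}/{i:04d}.png")    # hypothetical helper
    r.delete()                                    # fix 2: release GL resources every iteration

img2video(sorted(glob.glob(f"tmp/{id}/*.png")), f"{outdir}/render.mp4")  # fix 3: sorted frames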

Is there network code?

Hello, and thanks for your great work.

I am now trying to understand your paper and model.

However, the code for the model is not available to me. I expected it to be placed in "./training/*", but it isn't.

Do you only plan to release the pretrained model as a .pkl, or will the network code be added soon?

Network architecture details, import issues, and novel view generations

Hi,
Thanks for the great work; we appreciate it. I have several questions/suggestions about it, though:

  • Do you plan to publish the explicit network architecture code soon? It would be beneficial for further research.
  • There are many absolute paths and import issues throughout the codebase; some paths are even hardcoded, which makes it more cumbersome to work with than necessary. I suggest fixing those for easier usability.
  • Do you have a standalone .py file for generating novel views from real-life images? As far as I understand, the inversion pipeline is as follows: first, apps/infer_hybrid_encoder.py generates a w, then inversion/scripts/run_pti.py fine-tunes that w. Can I use the fine-tuned w and your hybrid encoder, feeding different camera angles to the generator & renderer, to produce the novel views (see the sketch after this list)? Have you applied further adjustments when generating new views (to get the images in the last row of Fig. 7 of the SIGGRAPH paper)?
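
For what it's worth, in EG3D-style codebases novel views from an inverted latent are produced by resampling the camera label and re-running synthesis. A minimal sketch under that assumption (camera_utils.LookAtPoseSampler and the 25-element label follow EG3D conventions; G, w_pti, and the 'image' output key are assumptions about this repo, not confirmed):

import numpy as np
import torch
from camera_utils import LookAtPoseSampler  # EG3D-style helper; import path is an assumption

device = torch.device('cuda')
# FFHQ-style normalized intrinsics (assumed values)
intrinsics = torch.tensor([[4.2647, 0, 0.5], [0, 4.2647, 0.5], [0, 0, 1]], device=device)

frames = []
for yaw in np.linspace(-0.4, 0.4, 60):                    # horizontal camera sweep (radians)
    cam2world = LookAtPoseSampler.sample(
        np.pi / 2 + yaw, np.pi / 2,                       # horizontal / vertical angle
        torch.tensor([0, 0, 0.2], device=device),         # look-at point (assumed)
        radius=2.7, device=device)
    c = torch.cat([cam2world.reshape(-1, 16), intrinsics.reshape(-1, 9)], 1)  # 25-dim label
    img = G.synthesis(w_pti, c)['image']                  # output key is an assumption
    frames.append(img)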

Best,
Batuhan

Remove the eg3d-nada submodule

It doesn't exist publicly on GitHub, so it should probably be removed from the project.
Also, it would be nice to switch the submodules to HTTPS URLs so that those of us without SSH keys set up don't have to deal with them.
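
In the meantime, a standard git workaround (not specific to this repo) is to rewrite the SSH URLs to HTTPS locally before initializing the submodules:

git config --global url."https://github.com/".insteadOf "[email protected]:"
git submodule update --init --recursive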

Question about dataset.json for inversion

Hi there,

In apps/infer_hybrid_encoder.py, Line 141 reads fname = 'D:/projects/eg3d/data/FFHQ/images512x512/dataset.json' # label list path. Should we prepare dataset.json ourselves? Is it possible for you to share this file? Thank you so much in advance!
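
For anyone stuck here: in EG3D-derived codebases, dataset.json is produced by the dataset preprocessing step and maps each image to a 25-element camera label (a flattened 4x4 cam2world matrix followed by a flattened 3x3 intrinsics matrix). A minimal sketch of the assumed layout, with an illustrative filename:

import json

# Assumed EG3D-style layout: {"labels": [[filename, [25 floats]], ...]}
with open('dataset.json') as f:
    label_list = dict(json.load(f)['labels'])    # filename -> camera label
print(len(label_list['00000/img00000000.png']))  # -> 25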

Questions about the generator implementation

Hi, thanks for releasing the code!

Querying the source code stored in the pickle file, I found that some modules look different from Fig. 2 of the paper.

  1. I could not find the three parallel branches sharing the 64x64x64 feature map that you mention in Appendix B. The StyleGAN backbone looks similar to EG3D's StyleGAN backbone, with the toseg layer added.
  2. The texture decoder outputs the sigma, whereas in the paper the shape decoder outputs the sigma.

Am I misunderstanding something, or is this exactly your design?

Please refer to the files below.
source_code.txt
generator_summary.txt

buggy codebase

The paper and results seem promising, but the code is too buggy to run, and the messy imports are a nightmare.

Here are some tips for those who want to run this code:

  • Install PyTorch3D following the official instructions instead of using the install script here.
  • Fix the hardcoded sys.path entries.
  • Fix the various import errors.

Pretrained checkpoints.

I see "More pretrained models will be released soon." in the README. Could you please tell me when the pretrained checkpoints will be released?

Pose for the real image inversion

Hi there, sorry for asking so many questions these days!

It looks like apps/infer_hybrid_encoder.py just reads the camera pose from dataset.json. If we want to invert a real image, should we obtain the pose with some off-the-shelf pose estimator? Thanks!
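
If it helps: EG3D-style pipelines obtain the pose from the Deep3DFaceRecon step under dataset_preprocessing/ and pack it into the 25-element label. Assembling one from an estimated pose would look roughly like this (a sketch under those assumptions; the intrinsics values follow the usual FFHQ convention and are not confirmed for this repo):

import numpy as np

# cam2world: 4x4 extrinsics from an off-the-shelf face pose estimator (assumed input)
intrinsics = np.array([[4.2647, 0.0, 0.5],
                       [0.0, 4.2647, 0.5],
                       [0.0, 0.0, 1.0]])
label = np.concatenate([cam2world.reshape(-1), intrinsics.reshape(-1)]).tolist()  # 25 floats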

how to edit on my own images

I noticed the advice in the section "Real portrait image editing", which says the following command should be executed first:

python apps/infer_hybrid_encoder.py \
    --target_img /path/to/img_0.png \
    --g_ckpt pretrained_models/ide3d-ffhq-64-512.pkl \
    --e_ckpt pretrained_models/encoder-base-hybrid.pkl \
    --outdir out

But my image does not have a label like those in dataset.json, so this error occurred:

File "/home/dianxin/hys/IDE-3D/apps/infer_hybrid_encoder.py", line 147, in <module>
c = [label_list[opts.target_img[-21:]]]
KeyError: 'test.png'

How can I generate the labels for my own images? Thank you!
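
The KeyError happens because the script keys the label list on the last 21 characters of the image path (matching FFHQ-style entries such as '00000/img00000000.png', which are exactly 21 characters). A sketch of a workaround, assuming you have already estimated a 25-float camera label for your image (the my_camera_label variable is hypothetical):

import json

target_img = '/path/to/img_0.png'   # illustrative path
key = target_img[-21:]              # how infer_hybrid_encoder.py derives the lookup key

with open('dataset.json') as f:
    data = json.load(f)
data['labels'].append([key, my_camera_label])   # hypothetical 25-float label
with open('dataset.json', 'w') as f:
    json.dump(data, f)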

Export 3d object

The project looks great.
Is it possible to export a mesh, e.g. as an .obj or .ply?

Thanks a lot!
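
Not an official answer, but since extract_shapes.py writes density volumes as .npy files and pymcubes is already in the dependency list, marching cubes gets you an .obj. A minimal sketch, assuming out/0.npy holds a density cube (the iso level 10.0 is a guess to tune per volume):

import mcubes
import numpy as np

sigma = np.load('out/0.npy')                              # density cube from extract_shapes.py (assumed layout)
vertices, triangles = mcubes.marching_cubes(sigma, 10.0)  # extract the iso-surface
mcubes.export_obj(vertices, triangles, 'out/0.obj')       # write a Wavefront .obj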

network_snapshot.pkl was not found

Hi, the model file was not found when I ran the following command. Could you please share the model file (network_snapshot.pkl)?

python extract_shapes.py --outdir out --trunc 0.7 --seeds 0-3 \
    --network networks/network_snapshot.pkl --cube_size 1

Error during 'conda env create -f environment.yml'

command

git clone --recursive https://github.com/MrTornado24/IDE-3D.git
cd IDE-3D
conda env create -f environment.yml

result

(base) C:\t\IDE-3D>git clone --recursive https://github.com/MrTornado24/IDE-3D.git
Cloning into 'IDE-3D'...
remote: Enumerating objects: 320, done.
remote: Counting objects: 100% (3/3), done.
remote: Compressing objects: 100% (3/3), done.
remote: Total 320 (delta 0), reused 0 (delta 0), pack-reused 317
Receiving objects: 100% (320/320), 85.16 MiB | 5.92 MiB/s, done.
Resolving deltas: 100% (38/38), done.
Submodule 'dataset_preprocessing/ffhq/Deep3DFaceRecon_pytorch' ([email protected]:sicxu/Deep3DFaceRecon_pytorch.git) registered for path 'dataset_preprocessing/ffhq/Deep3DFaceRecon_pytorch'
Submodule 'ide3d-nada' ([email protected]:MrTornado24/ide3d-nada.git) registered for path 'ide3d-nada'
Cloning into 'C:/t/IDE-3D/IDE-3D/dataset_preprocessing/ffhq/Deep3DFaceRecon_pytorch'...
Host key verification failed.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
fatal: clone of '[email protected]:sicxu/Deep3DFaceRecon_pytorch.git' into submodule path 'C:/t/IDE-3D/IDE-3D/dataset_preprocessing/ffhq/Deep3DFaceRecon_pytorch' failed
Failed to clone 'dataset_preprocessing/ffhq/Deep3DFaceRecon_pytorch'. Retry scheduled
Cloning into 'C:/t/IDE-3D/IDE-3D/ide3d-nada'...
Host key verification failed.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
fatal: clone of '[email protected]:MrTornado24/ide3d-nada.git' into submodule path 'C:/t/IDE-3D/IDE-3D/ide3d-nada' failed
Failed to clone 'ide3d-nada'. Retry scheduled
Cloning into 'C:/t/IDE-3D/IDE-3D/dataset_preprocessing/ffhq/Deep3DFaceRecon_pytorch'...
Host key verification failed.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
fatal: clone of '[email protected]:sicxu/Deep3DFaceRecon_pytorch.git' into submodule path 'C:/t/IDE-3D/IDE-3D/dataset_preprocessing/ffhq/Deep3DFaceRecon_pytorch' failed
Failed to clone 'dataset_preprocessing/ffhq/Deep3DFaceRecon_pytorch' a second time, aborting

(base) C:\t\IDE-3D>cd IDE-3D

(base) C:\t\IDE-3D\IDE-3D>conda env create -f environment.yml
Collecting package metadata (repodata.json): done
Solving environment: failed

ResolvePackageNotFound:
  - imageio=2.13.5
  - pillow=9.0.0
  - ninja=1.10.2.3
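
Not an official fix, but ResolvePackageNotFound usually means those exact package builds do not exist for your platform in the configured channels. A common workaround is to loosen the pins in environment.yml before retrying, for example:

  - imageio>=2.9
  - pillow>=8.3
  - ninja

Adding conda-forge to the channels list can also help resolve them.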

When running the painter, there's an error with dnnlib

import dnnlib

ModuleNotFoundError: No module named 'dnnlib'

This is what I get. All dependencies installed fine, but on Windows some had to be lower versions:

name: ide3d
channels:
  - pytorch
  - nvidia
dependencies:
  - python=3.9
  - pip
  - numpy>=1.20
  - click>=8.0
  - pillow=8.3.1
  - scipy=1.7.1
  - pytorch=1.10.0
  - cudatoolkit=11.3
  - requests=2.27.1
  - tqdm=4.62.3
  - ninja=1.10.2
  - matplotlib=3.5.1
  - imageio=2.9.0
  - pip:
    - imgui==1.4.1
    - glfw==2.5.0
    - pyopengl==3.1.5
    - imageio-ffmpeg==0.4.5
    - pyspng
    - psutil
    - mrcfile
    - tensorboard
    - einops
    - pymcubes
    - pytorch3d
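
As for the ModuleNotFoundError itself: dnnlib lives at the IDE-3D repo root, so scripts in subdirectories such as Painter/ cannot import it unless the root is on sys.path. A common fix (an assumption about the repo layout, not an official patch) is to add the root at the top of Painter/run_UI.py before the dnnlib import:

import os
import sys

# Put the repo root (one level above Painter/) on sys.path so 'import dnnlib'
# resolves; assumes the standard IDE-3D checkout layout.
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
import dnnlib

Alternatively, run the script from the repo root with PYTHONPATH=. set.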
