marian42 / shapegan
Generative Adversarial Networks and Autoencoders for 3D Shapes
Home Page: https://arxiv.org/abs/2002.00349
Hello, I want to ask some questions about the paper. It says that training goes through four stages; how long does the training in each stage take before it is finished?
QObject::moveToThread: Current thread (0x563d74d14da0) is not the object's thread (0x563e00957b30).
Cannot move to target thread (0x563d74d14da0)
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "/home/alberto/anaconda3/lib/python3.8/site-packages/cv2/qt/plugins" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
Available platform plugins are: xcb, eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, wayland-egl, wayland, wayland-xcomposite-egl, wayland-xcomposite-glx, webgl.
Aborted (core dumped)
Did you by chance run into this error during the visualization of the latent walk?
Hi,
after reading your paper, I am trying to reproduce your works.
Firstly, I trained VAE with the chair dataset.
Before testing tsne space, I am testing plotting process.
When I run "python create_plot.py color-test", it raises a ModuleNotFoundError.
In the code (create_plot.py),
from dataset import dataset as dataset
dataset.load_voxels('cpu')
dataset.load_labels()
voxels = dataset.voxels
but I could not find a module named 'dataset'.
Is it a typo for 'datasets'?
Also, I could not find where load_voxels() and load_labels() are defined.
Could you clarify which dataset module this is pointing to?
Thank you,
Hi, thanks for your work and this repository! I was taking a look at the original DeepSDF training code, and your own training code for the autodecoder, and I noticed some differences. I was hoping I could perhaps pick your brain on your own experiences with DeepSDF-style training?
For instance, I notice that your training completely shuffles the data samples across all points and across all shapes. So a batch will contain a random assortment of shapes. Whereas the DeepSDF code will instead randomly pick a set of shapes, and from those randomly select the points. So DeepSDF batches are only composed of a handful of shapes. Was there a reason you departed from the DeepSDF batch composition? I'm curious if you found it more stable?
I also noticed that your autodecoder model includes no normalization layers. Was there a reason you decided not to include those?
Because of these differences, I'm just wondering if there are any quick tips/strategies you can pass on - thanks in advance if you have the time to respond!
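To make the batch-composition question above concrete, here is a minimal NumPy sketch of the two sampling strategies being contrasted. The array sizes and the 4-shapes-per-batch figure are illustrative assumptions, not values from either codebase.

```python
import numpy as np

rng = np.random.default_rng(0)
num_shapes, points_per_shape, batch_size = 100, 1000, 64

# Strategy A (as described for this repo): shuffle all (shape, point) pairs
# globally, so a single batch mixes points from many different shapes.
all_indices = rng.permutation(num_shapes * points_per_shape)
batch_a = all_indices[:batch_size]
shapes_a = batch_a // points_per_shape  # shape id of each sample; many unique ids

# Strategy B (DeepSDF-style): first pick a handful of shapes, then draw
# points only from those shapes, so each batch covers few shapes.
chosen_shapes = rng.choice(num_shapes, size=4, replace=False)
points = rng.integers(points_per_shape, size=batch_size)
batch_b = rng.choice(chosen_shapes, size=batch_size) * points_per_shape + points
shapes_b = batch_b // points_per_shape  # at most 4 unique shape ids
```

Strategy A gives each batch a gradient signal averaged over many latent codes, while Strategy B concentrates each update on a few codes; whether one is more stable is exactly the open question posed above.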
Hi Marian,
Is there any chance you could upload the visualization scripts for the point-based GAN? I am currently training the point_sdf_net model and would like to see what comes out of it.
Cheers!
Following the instructions in the file, I ran prepare_shapenet_dataset.py to generate the voxels for the chair category. After running, the following file appeared: 1a6f615e8b1b5ae4dbbc9440457e303e.npy
When I tried to draw this voxel grid with matplotlib, it showed a filled 32×32×32 cube without the shape of a chair.
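A guess at what is happening here (an assumption about the file contents, not confirmed by the repo): if the .npy grid stores signed distance values rather than binary occupancy, then plotting every nonzero cell fills the whole cube, and thresholding at zero recovers the shape. The sketch below uses a synthetic sphere SDF in place of the real file.

```python
import numpy as np

# Assumption: the voxel file stores signed distance values (negative inside
# the surface), so "nonzero" is true almost everywhere and plotting it
# fills the whole 32x32x32 cube. Threshold at zero to get occupancy.
res = 32
coords = np.linspace(-1, 1, res)
x, y, z = np.meshgrid(coords, coords, coords, indexing='ij')
sdf = np.sqrt(x**2 + y**2 + z**2) - 0.5  # synthetic SDF of a sphere, radius 0.5

occupancy = sdf < 0  # True only inside the surface
# To visualize: ax.voxels(occupancy) on a matplotlib 3D axis, replacing the
# synthetic grid with np.load('<your file>.npy') reshaped to (32, 32, 32).
```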
Hi,
Is there an installation guide or a Colab option?
I like the project and would use it in an art project for 3D printing, showing what's possible with GANs etc.
A 3D output would be amazing!
Kindest regards,
fake_loss = torch.mean(-torch.log(critic_output_fake)) => fake_loss = -torch.mean(critic_output_fake)
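The suggested change above swaps a log-probability loss for the Wasserstein-style critic objective. A small hedged sketch of the two forms, using made-up critic scores (the variable name follows the line above; the numbers are illustrative only):

```python
import torch

critic_output_fake = torch.tensor([0.2, 0.7, 0.4])  # toy critic outputs

# Log-loss form: only meaningful if the critic outputs probabilities in (0, 1),
# e.g. after a sigmoid, as in a standard (non-saturating) GAN.
log_loss = torch.mean(-torch.log(critic_output_fake))

# Wasserstein form suggested in this issue: the critic outputs unbounded
# scores, so no logarithm is applied and the sign is moved outside the mean.
wasserstein_loss = -torch.mean(critic_output_fake)
```

Which form is correct depends on whether the critic ends in a sigmoid; applying the log form to unbounded Wasserstein critic scores would be a bug, which appears to be the point of the proposed change.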
Could you please add a license to this repo? Thanks!
Thank you for the great work.
Is it possible to share sdf_net_latent_codes.to so that I can run demo_latent_space.py?
Also, I need sdf_points.to and sdf_values.to to run train_sdf_autodecoder.py; how can I get them?
For demo_latent_space there is an assertion regarding labels.to; how can we generate that file?
Hi,
First of all, thanks for sharing this excellent work!
However, I have two questions regarding the code release:
Thanks!
Do you have any suggestions for solving the out-of-memory issue in the last iteration of the hybrid progressive GAN? What GPU did you use?
I am trying with a GCP tier 1-2 GPU and my RTX 2080 Ti; I also tried batch size = 2 but still hit the same issue.
Any suggestions would be much appreciated.
Hello, the dataset you provided is too large. Could you provide a smaller dataset, about 1 GB?
I'm running into issues trying to get the right conda environment set up for this project -- pygame requires 2.7 or 3.5, but scikit-spatial requires >=3.7. Is there possibly a list of package versions to set up the correct environment?
Hi, I'm trying to re-implement your work and am running into some confusion.
It seems like the file 'data/chairs/train.txt' is being called in train_hybrid_progressive_gan.py, but I don't see where this file is generated (or any writeup about how to generate it yourself). I followed the data preprocessing steps as was outlined - any pointers would be helpful!
Hi @marian42 ,
Thanks for contributing an amazing amount of code.
I am trying to use your repo to retrain DeepSDF on the Sofa category and have two queries:
The cloud folder is missing from the downloaded data. Do you confirm that I need to rerun prepare_shapenet_data.py to get the sdf.to file for training?
Thanks a lot!
Best regards,
Thibault
I ran demo_gan.py using the pre-trained gan_generator_voxels_sofas.to file. I made the following changes:
Copied gan_generator_voxels_sofas.to from the examples folder to the models folder.
Did pip install scipy==1.5.2
In rendering/__init__.py, changed
vertices, faces, normals, _ = skimage.measure.marching_cubes_lewiner(voxels, level=level, spacing=(2.0 / voxel_resolution, 2.0 / voxel_resolution, 2.0 / voxel_resolution))
to
vertices, faces, normals, _ = skimage.measure.marching_cubes(voxels, level=level, spacing=(2.0 / voxel_resolution, 2.0 / voxel_resolution, 2.0 / voxel_resolution), method='lewiner')
In model/gan.py, changed
super(Generator, self).__init__(filename="generator.to")
to
super(Generator, self).__init__(filename="gan_generator_voxels_sofas.to")
I got this visualization as a result:
How do I resolve this?
I would like to ask: was the pre-trained model trained on point clouds or on voxels?
Hi,
Could you share your evaluation scripts to reproduce Table 1 in the paper?
Thanks!
I am running the demo and I am getting an empty view.
Exception in thread Thread-1:
Traceback (most recent call last):
File "/home/alberto/anaconda3/lib/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
File "/home/alberto/anaconda3/lib/python3.8/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/home/alberto/Documents/shapegan/rendering/__init__.py", line 299, in _run
self._render()
File "/home/alberto/Documents/shapegan/rendering/__init__.py", line 230, in _render
light_vp_matrix = get_camera_transform(6, self.rotation[0], 50, project=True)
File "/home/alberto/Documents/shapegan/rendering/math.py", line 20, in get_camera_transform
camera_transform = np.matmul(camera_transform, get_rotation_matrix(rotation_x, axis='x'))
File "/home/alberto/Documents/shapegan/rendering/math.py", line 14, in get_rotation_matrix
matrix[:3, :3] = rotation.as_dcm()
AttributeError: 'scipy.spatial.transform.rotation.Rotation' object has no attribute 'as_dcm'
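The traceback above comes from a SciPy API change: `Rotation.as_dcm()` was renamed to `as_matrix()` in SciPy 1.4 and removed in SciPy 1.6. A version-agnostic shim (the function name `as_rotation_matrix` is mine, not from the repo) would be:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def as_rotation_matrix(rotation):
    # as_dcm() was renamed to as_matrix() in SciPy 1.4 and removed in 1.6,
    # which causes the AttributeError above. Prefer the new name if present.
    if hasattr(rotation, "as_matrix"):
        return rotation.as_matrix()
    return rotation.as_dcm()

# Usage mirroring rendering/math.py: embed the 3x3 rotation in a 4x4 matrix.
matrix = np.identity(4)
matrix[:3, :3] = as_rotation_matrix(Rotation.from_euler('x', 45, degrees=True))
```

Alternatively, pinning SciPy below 1.6 (or renaming the call in rendering/math.py to `as_matrix()`) should resolve the error.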
Could you please provide a point-SDF dataset from 3D Warehouse?
Were the demo models provided in the examples directory trained in a progressive fashion or without it? Also, what are the other hyper-parameters for the provided models? Thanks!
Hey,
I'd like to create the latent space animation with the model I have trained.
However, I'm not sure how to create the following files required by your code:
sdf_net_latent_codes.to
labels.to
Could you elaborate on how to initialize them?
Also, I assumed that sdf_net.to is basically the generative model I trained (hybrid_progressive_gan_generator_3.to). Is that correct?
Thanks!