czq142857 / bsp-net-original
Tensorflow 1.15 implementation of BSP-NET, along with other scripts used in our paper.
License: Other
Why do the max and min functions in layers 2 and 3 work in the aggregation process?
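For context on this question: in the discrete (binary-weight) form, max and min implement set operations directly. With the convention that a half-space value is 0 inside and positive outside, taking the max over a convex's planes is 0 only if the point satisfies every plane (intersection), and taking the min over convexes is 0 if the point is inside at least one convex (union). The continuous training stage in the paper approximates these with sums and clipping; the numpy sketch below (function names are mine, not the repo's) only illustrates the discrete logic:

```python
import numpy as np

def halfspace_violation(points, planes):
    # planes: (P, 3) rows [a, b, c], with a*x + b*y + c <= 0 meaning "inside".
    # points: (N, 2). Returns (N, P): 0 inside each half-space, >0 outside.
    h = points @ planes[:, :2].T + planes[:, 2]
    return np.maximum(h, 0.0)

def convex_violation(points, planes, membership):
    # membership: (P, C) binary mask selecting which planes form each convex.
    # A point is inside a convex iff it violates none of its planes, so the
    # max of violations over the convex's planes is exactly 0 (intersection).
    v = halfspace_violation(points, planes)                    # (N, P)
    masked = np.where(membership[None] > 0, v[:, :, None], 0.0)  # (N, P, C)
    return masked.max(axis=1)                                  # (N, C)

def shape_indicator(points, planes, membership):
    # Union of convexes: a point is inside the shape iff it is inside at
    # least one convex, i.e. the min over convexes is 0.
    return convex_violation(points, planes, membership).min(axis=1)
```

For example, a unit square described as one convex of four half-spaces returns 0 for an interior point and a positive value outside.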
Hi @czq142857 ,
Thank you for sharing your great work!
I was looking into the semantic segmentation experiments from your paper. Have you released the code to replicate those results? I would like to try it out, but could not find it. Also, do you learn to classify convexes into parts, or were the convexes manually assigned to the most representative part? Any detail would be appreciated.
Thanks,
Best,
Hi @czq142857,
I have been trying to reproduce the results from your paper on a modified version of the toy dataset, which also includes a circle in addition to the default shapes. When I train the 2D BSP-Net and run inference on the modified toy dataset (images of size 64x64), I am able to obtain results very similar to those in the paper.
However, I also modified the bsp_2d.IMSEG network to accept images of size 128x128. For this, I simply added another conv+lrelu layer in the encoder so that the encoder outputs are shape-compatible with the generators.
In this case, the reconstruction quality is significantly poorer. I used values of 64, 128, and 512 for the hyperparameters ef_dim, gf_dim, and p_dim respectively, to account for the larger image size and the additional convexes and planes required. I have attached the outputs for your reference. The model particularly fails on the hollow diamond shape, which is predicted as either a solid diamond or as a hollow diamond with a triangular hole. Kindly let me know your thoughts or suggestions on why these kinds of artifacts occur when 128x128 images of the toy dataset are used to train the 2D BSP-Net.
Looking forward to your reply. Thank you.
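As background for the resizing modification above: with SAME padding, each stride-2 convolution halves the spatial size (rounding up), so a 128x128 input needs exactly one extra stride-2 conv+lrelu layer to reach the feature-map size the 64x64 encoder produced. A tiny sanity-check of that arithmetic (the layer counts here are illustrative assumptions, not read from bsp_2d.py):

```python
def conv_out(size, stride=2):
    # Spatial size after one stride-2 conv with SAME padding: ceil(size / stride).
    return -(-size // stride)

def encoder_spatial_size(input_size, num_stride2_convs):
    size = input_size
    for _ in range(num_stride2_convs):
        size = conv_out(size)
    return size

# If the original encoder maps 64x64 down to 4x4 with four stride-2 convs,
# a 128x128 input needs exactly one extra stride-2 conv to land on 4x4 again:
print(encoder_spatial_size(64, 4))   # 4
print(encoder_spatial_size(128, 5))  # 4
```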
hi,
Thanks for your great work. I have a question about the experimental results. Does Table 1 show mean values over the five classes? I could not find a corresponding description.
Hi,
thank you for making this great work available here.
Could you explain your idea behind the Chamfer distance computation here:
If I am not mistaken, you are missing a square root. That is, you are computing the squared distance, right?
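For context, both conventions appear in the literature, and the squared version (no per-pair square root) is common in reconstruction papers. A minimal brute-force numpy sketch of the two variants (a generic illustration, not the repo's implementation):

```python
import numpy as np

def chamfer(a, b, squared=True):
    # Symmetric Chamfer distance between point sets a: (N, D) and b: (M, D).
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)  # (N, M) squared dists
    ab = d2.min(axis=1)   # nearest squared distance from each point of a to b
    ba = d2.min(axis=0)   # nearest squared distance from each point of b to a
    if not squared:       # take the per-pair root for the metric version
        ab, ba = np.sqrt(ab), np.sqrt(ba)
    return ab.mean() + ba.mean()
```

For two single-point sets two units apart, the squared version returns 8.0 while the rooted version returns 4.0, which shows how the choice changes reported numbers.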
Hi,
Could you elaborate on how to obtain the watertight mesh representation from the predictions? Is there any code in this repo regarding this? Thank you!
Thanks for sharing the great work.
I noticed that in your paper you said that when training for the SVR problem, you train an autoencoder first and then train another encoder to learn the latent codes. Can you share the training details for the image encoder? For example, what loss function do you use in the latent space?
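A common recipe for this two-stage setup (an assumption here, since the paper does not spell out the SVR loss) is to freeze the shape autoencoder and train the image encoder to regress its latent codes with an L2 loss. A self-contained numpy sketch with a stand-in linear encoder and hand-written gradient steps:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data (hypothetical): z_target plays the role of latent codes
# produced by the frozen shape autoencoder; `features` plays the role of
# image features fed to the new encoder.
features = rng.normal(size=(256, 32))
W_true = rng.normal(size=(32, 8))
z_target = features @ W_true

# Train a linear image encoder W to regress the latent codes with MSE.
W = np.zeros((32, 8))
lr = 0.01
for _ in range(2000):
    z_pred = features @ W
    grad = 2.0 / len(features) * features.T @ (z_pred - z_target)  # d(MSE)/dW
    W -= lr * grad

loss = np.mean((features @ W - z_target) ** 2)  # should be near zero
```

The same idea carries over to a conv image encoder: the shape decoder stays fixed, and only the latent-space regression loss drives the new encoder.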
Dear author:
Thank you for sharing the whole code.
Dear author:
Thank you for sharing the whole code.
In bspy_2d.py, lines 24~40 (also in bspy_slow.py), the code first uses whether |a| is larger than |b| or not to get the intersection of the edge, but then it seems to use the condition 'a>0' or 'b<0' to decide the order of the points in the face.
I'm confused about this condition. If it is convenient, could you explain it?
Thanks
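As general background for this question (a generic polygon-clipping sketch, not the repo's actual code; the |a|>|b| test there may be a numerical-stability choice): when an edge (p1, p2) is clipped against a line, with a = h(p1) and b = h(p2) the signed values at the endpoints, the signs of a and b decide both which endpoint survives and the order of the output points, which is what keeps the face's vertex winding consistent:

```python
def clip_edge(p1, p2, a, b):
    # a = h(p1), b = h(p2): signed values of the clipping line at the endpoints.
    # Convention (an assumption): h <= 0 means the "kept" side.
    if a <= 0 and b <= 0:
        return [p1, p2]            # edge fully kept
    if a > 0 and b > 0:
        return []                  # edge fully discarded
    t = a / (a - b)                # solves a + t*(b - a) = 0 on the edge
    q = (p1[0] + t * (p2[0] - p1[0]), p1[1] + t * (p2[1] - p1[1]))
    # The sign pattern decides the output order:
    if a > 0:                      # p1 outside, p2 inside
        return [q, p2]
    return [p1, q]                 # p1 inside, p2 outside
```

For example, clipping the edge (0,0)-(2,0) against the line x = 1 (kept side x <= 1) keeps (0,0) and emits the crossing point (1,0) second, preserving traversal order.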
Hi, it's an amazing idea!
Does the method support shape generation? I didn't find any code about generation.
Do you have any suggestions?
It seems that in your code the connection weights are stored as parameters of the generator module. They don't seem to be related to image features, which means at inference, the network uses the same connection weights for all images. Wouldn't this limit the network's ability to handle different images?
Hi,
Could you elaborate on how to generate the meshes with a smaller number of vertices and faces, as shown in Fig. 1(a) of the paper? Is there a piece of code in the current repository to do that?
Thank you!
In Figure 5 you mention, "Note how many planes are unused." Did you take any measures to deal with those redundant planes, since they make inference slower? Maybe some post-processing step would work?
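One simple post-processing idea along these lines (my own suggestion, not something from the paper): after binarizing the plane-to-convex connection matrix, drop planes that no convex uses and convexes that use no plane. A numpy sketch with assumed shapes:

```python
import numpy as np

def prune(planes, T, eps=0.01):
    # planes: (P, 4) plane parameters; T: (P, C) plane-to-convex connection
    # weights. Binarize the connections, then drop planes used by no convex
    # and convexes that use no plane.
    Tb = (T > eps).astype(float)
    used_planes = Tb.sum(axis=1) > 0      # (P,) planes kept by some convex
    used_convexes = Tb.sum(axis=0) > 0    # (C,) convexes with at least one plane
    return planes[used_planes], Tb[np.ix_(used_planes, used_convexes)]
```

Since each point query touches every retained plane, shrinking P this way directly reduces inference cost without changing the represented shape.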
Thank you for sharing such great work! I ran into some issues reproducing the experimental results. I loaded the pretrained weights provided in the repo, but I could not obtain results as good as the values in Table 1 ("Surface reconstruction quality and comparison for 3D shape autoencoding"):
| | CD | NC | LFD |
|---|---|---|---|
| Ours | 0.447 | 0.858 | 2019.26 |
| Ours + L_overlap | 0.448 | 0.858 | 2030.3 |
My results:
| no L_overlap | airplane | car | chair | lamp | table | average (per shape) |
|---|---|---|---|---|---|---|
| CD (x1000) | 0.3767 | 0.5829 | 0.8064 | 1.7223 | 0.8743 | 0.8725 |
| NC | 0.8184 | 0.8378 | 0.8027 | 0.7081 | 0.8278 | 0.7989 |
Is there any part of this repo I need to change to get the same results as in the paper?
For example, is the provided pretrained weight trained on all categories, while the results in the paper are from models trained on individual categories?
Thanks for your help!