Comments (13)
A threshold of 0.5 on the predictions may be too high. In my experience, the optimal threshold is somewhere around 0.3.
You should search for the optimal threshold by, e.g., computing IoUs at 0.1, 0.15, 0.2, ..., 0.5, 0.55.
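For concreteness, such a threshold sweep could look like this (a NumPy sketch; the array names are placeholders, not from the repo):

```python
import numpy as np

def iou(pred, gt, thresh):
    """IoU between a probability volume and a binary GT volume at a given threshold."""
    p = pred >= thresh
    g = gt >= 0.5
    inter = np.logical_and(p, g).sum()
    union = np.logical_or(p, g).sum()
    return inter / union if union > 0 else 1.0

# Sweep thresholds 0.1, 0.15, ..., 0.55 and keep the best (IoU, threshold) pair
pred = np.random.rand(32, 32, 32)           # placeholder prediction volume
gt = np.random.rand(32, 32, 32) > 0.7       # placeholder ground-truth occupancy
best = max((iou(pred, gt, t), t) for t in np.arange(0.1, 0.56, 0.05))
```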
from genre-shapehd.
Thanks so much, Xiuming, I will have a try now!
Hi Xiuming,
We've tried all possible thresholds but didn't get an IoU higher than 0.1.
As we don't have 2.5D sketches for the test set, we can only evaluate on the validation set you provided.
Do you mind sharing your evaluation code and the corresponding test set so we can reproduce your results?
Cheers.
Thanks for the update. Another three things to check:
- Did you handle the sigmoid correctly? In particular, did you accidentally pass the already-sigmoid'ed voxels through sigmoid a second time?
- Did you visualize what the predictions look like? Do they look reasonable, or like total trash?
- (Related to 2.) When you visualize the ground truth and your prediction, are they in the same pose?
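The double-sigmoid pitfall in point 1 is easy to demonstrate (a sketch; `raw_logits` is a placeholder for the network's output):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

raw_logits = np.random.randn(32, 32, 32)   # placeholder for the network's raw output

once = sigmoid(raw_logits)    # correct: probabilities spread over (0, 1)
twice = sigmoid(once)         # bug: everything squashed into (0.5, ~0.73)

# If sigmoid is applied twice, every voxel exceeds a 0.5 threshold,
# so IoU collapses no matter which threshold you pick.
```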
Thanks for your suggestions.
- As shown in the `calculate_iou` function above, the `pred` variable is the output of MarrNet2, whose range is not (0, 1) because the net itself doesn't contain a sigmoid at the last layer. Thus, we passed the raw voxels through sigmoid once to get values in (0, 1).
- It looks reasonable. The visualization gives a rough, mean shape that we can recognize as a chair, even if some predictions look like trash.
- They are in the same pose when visualized. We also checked the code: we used the canonical voxels during both training and evaluation.
When we print out the IoU for every single sample, most are lower than 0.07, but sometimes we see 0.2-0.4.
Even on the training set, IoUs for the same object are similar, while across objects some are very low (0.02) and some are fairly good (0.46). This seems quite reasonable.
Cheers.
Thanks for the update.
As discussed in the repo README, the MarrNet2 here isn't really the MarrNet published at NIPS '17, since it doesn't implement the consistency loss. It is more a building block of ShapeHD and GenRe. Thus, it's possible that it produces mean shapes when trained all by itself. The GAN loss in ShapeHD essentially addresses this problem by encouraging details in the reconstructions.
That said, I remember it can fit the training set and produce OK reconstructions even when trained on its own. If you could share visualizations of your reconstructions with us, we'd be happy to take a look, and to send you some of ours if that helps you debug.
Hi Xiuming,
Thanks for your reply and help. I uploaded a set of generated .obj files to Google Drive:
https://drive.google.com/file/d/1xLpH1haUZ18VjKwrgIXw_zcnB-Kc8etT/view?usp=sharing
I also took screenshots of some results so they are convenient to look at. The results are not bad.
For now we even guessed: as the last image shows, these shapes are hollow, so if the voxels are off by a few pixels, the IoU drops a lot. However, since you can produce 0.40-0.50 IoU, our guess must be wrong. We're stuck again.
@GimSungwoo It seems the .obj files look okay. Would you mind checking the following for better debugging? (The hollowness arises from the marching cubes step used to generate meshes, which only preserves the iso-surface.)
- Are the orientations aligned? Would you mind checking a few GT/prediction pairs to see if they have the same orientation?
- Are the GT and prediction both hollow/filled? If I recall correctly, MarrNet2 gives shapes with filled-in voxels. Is this the case?
- We do have released evaluation code; you can find it here. But I doubt the problem lies in the way IoU is calculated.
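The hollow-vs-filled check in point 2 can be done numerically, e.g. with `scipy.ndimage.binary_fill_holes` (a sketch under my own naming; not code from the repo):

```python
import numpy as np
from scipy.ndimage import binary_fill_holes

def is_hollow(vox, thresh=0.5):
    """A voxel grid is hollow if filling its interior cavities adds voxels."""
    occ = vox >= thresh
    filled = binary_fill_holes(occ)
    return filled.sum() > occ.sum()

# Example: an 8^3 shell inside a 10^3 grid is hollow; its filled version is not
shell = np.zeros((10, 10, 10), dtype=bool)
shell[1:9, 1:9, 1:9] = True
shell[2:8, 2:8, 2:8] = False
```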
Please let us know if this helps!
@ztzhang
Thanks for your answer.
I checked the first two, which are correct.
I think we found the issue, but just to confirm: did you downsample the 128x128x128 voxels to 32x32x32 before calculating IoU? We didn't. If so, what is the reason for that: to keep consistency with the benchmarks in this task?
Cheers.
Yes, we did downsample our results to 32^3 for ShapeHD, for comparison with other methods.
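One common way to do such downsampling is max pooling over 4^3 blocks (a sketch of the idea, not necessarily the exact code used here):

```python
import numpy as np

def downsample_max(vox, factor=4):
    """Max-pool a cubic voxel grid by `factor` along each axis."""
    n = vox.shape[0]
    assert vox.shape == (n, n, n) and n % factor == 0
    m = n // factor
    # Reshape into (m, factor) blocks per axis, then take the max over each block
    return vox.reshape(m, factor, m, factor, m, factor).max(axis=(1, 3, 5))

vox128 = np.zeros((128, 128, 128))
vox128[10, 20, 30] = 1.0
vox32 = downsample_max(vox128)    # shape (32, 32, 32)
```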
As we see here, IoU is not really a good measure in many ways (e.g., inconsistency with visual intuition, strong dependency on threshold selection). I'd recommend using Chamfer distance (CD) instead.
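For reference, a symmetric Chamfer distance between two point sets can be sketched as follows (assumes surface points are already sampled; an illustration, not the paper's implementation):

```python
import numpy as np

def chamfer_distance(a, b):
    """Mean nearest-neighbor distance from a to b, plus from b to a.
    a: (N, 3) point array; b: (M, 3) point array."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M) pairwise distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```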
Good, thanks. Downsampling does improve the IoU.
Before downsampling, is the purpose of drawing a bounding box to center the model, and of padding it into a cube to handle the fact that the bounding box can be a block of any size? Is my understanding correct?
Yes. Please also see the "For IoU" paragraph in https://github.com/xingyuansun/pix3d#evaluation-details.
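My reading of that procedure, sketched in NumPy (an illustration under my own naming, not the official evaluation script):

```python
import numpy as np

def crop_and_pad_to_cube(vox, thresh=0.5):
    """Crop a voxel grid to the tight bounding box of occupied voxels,
    then zero-pad the crop into a cube so it can be downsampled uniformly."""
    occ = vox >= thresh
    idx = np.argwhere(occ)
    lo, hi = idx.min(axis=0), idx.max(axis=0) + 1
    crop = occ[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    side = max(crop.shape)
    # Center the crop inside a side^3 cube by splitting the padding evenly
    pad = [((side - s) // 2, side - s - (side - s) // 2) for s in crop.shape]
    return np.pad(crop, pad)
```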
Yep, I've read it.
Thanks, Xiuming.