qizekun / recon
[ICML 2023] Contrast with Reconstruct: Contrastive 3D Representation Learning Guided by Generative Pretraining
Home Page: https://arxiv.org/abs/2302.02318
License: MIT License
Thanks for your previous reply!
In zero-shot classification on ScanObjectNN, is the whole dataset (i.e., train + test) used, or just the test split?
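For reference, the conventional protocol evaluates zero-shot classification on the test split only. A minimal sketch of the CLIP-style evaluation (numpy, with illustrative names, not the repo's exact API):

```python
import numpy as np

def zero_shot_accuracy(point_feats, text_feats, labels):
    """CLIP-style zero-shot classification via cosine similarity.

    point_feats: (N, D) encoded point clouds (test split only).
    text_feats:  (C, D) encoded category prompts, e.g. "a point cloud of a chair".
    labels:      (N,) ground-truth class indices.
    """
    # L2-normalize both modalities so the dot product is cosine similarity.
    p = point_feats / np.linalg.norm(point_feats, axis=1, keepdims=True)
    t = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    preds = (p @ t.T).argmax(axis=1)  # nearest text prompt per sample
    return float((preds == labels).mean())
```

No training data enters this loop at all; the category prompts play the role of the classifier.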
Thank you so much for your excellent work. When I run the pre-training code sh scripts/pretrain.sh <GPU> <exp_name>, I get an error, as shown:
[ WARN:[email protected]] global /io/opencv/modules/imgcodecs/src/loadsave.cpp (239) findDecoder imread_('/media/data/data01/wcs/data/ShapeNet55-34/shapenet_img/02747177-.png'): can't open/read file: check file path/integrity
I'm pretty sure it's a problem with the ShapeNet55-34 dataset. As you said, "the image data is different from the point cloud data in some samples, you need to update the meta-data ShapeNet55-34/ShapeNet-55/train.txt & test.txt from our Google Drive."
But I don't know what modification I need to make. Can you tell me? Thanks.
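One way to apply that kind of meta-data fix yourself is to drop every sample ID whose rendered image is missing on disk. A hedged sketch (the "<id>.png" naming and one-ID-per-line file layout are assumptions, not confirmed from the repo):

```python
import os

def filter_metadata(meta_path, img_dir, out_path):
    """Keep only the sample IDs whose rendered image actually exists.

    meta_path: a ShapeNet-55 split file (assumed: one sample ID per line).
    img_dir:   directory holding rendered images (assumed named "<id>.png").
    out_path:  where to write the filtered list.
    Returns (total_ids, kept_ids) so you can see how many were dropped.
    """
    with open(meta_path) as f:
        ids = [line.strip() for line in f if line.strip()]
    kept = [i for i in ids if os.path.exists(os.path.join(img_dir, i + ".png"))]
    with open(out_path, "w") as f:
        f.write("\n".join(kept) + "\n")
    return len(ids), len(kept)
```

Running this over train.txt and test.txt should remove entries like the truncated '02747177-' one from the warning, so imread never sees a nonexistent path.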
First, thanks for sharing such good work!
Why are the Leaderboard results for zero-shot classification lower than in the paper?
Zero-shot classification on MN10: 81.6 (in paper) > 75.6 (in Leaderboard)
Zero-shot classification on MN40: 66.8 (in paper) > 61.7 (in Leaderboard)
Looking forward to your reply~
Firstly, thanks for sharing the outstanding work!!!
Could you help me understand how to pretrain the Point-MAE† (not the plain Point-MAE) in Table 1, and what the architecture of Point-MAE† is?
Looking forward to your reply, thanks!
Thanks for your amazing work! I was trying to run zero-shot classification of ModelNet40 using the pipeline you provided. However, I only got 61.2% accuracy (66.8% reported in the paper) using the following script:
python main.py \
--config=cfgs/zeroshot/modelnet40.yaml \
--zeroshot \
--exp_name=zeroshot_modelnet \
--ckpts=ckpts/zeroshot_66_78.pth
Am I missing something here? Thanks!
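One common cause of a few-point accuracy gap is checkpoint weights that silently fail to load. A tiny helper (hypothetical, simply mirroring the diagnostics PyTorch's load_state_dict(..., strict=False) returns) to diff checkpoint keys against model keys:

```python
def diff_state_dicts(model_keys, ckpt_keys):
    """Report which model parameters would be left uninitialized (missing)
    and which checkpoint entries would be ignored (unexpected), mirroring
    torch's load_state_dict(..., strict=False) return values."""
    model_keys, ckpt_keys = set(model_keys), set(ckpt_keys)
    missing = sorted(model_keys - ckpt_keys)     # in model, not in checkpoint
    unexpected = sorted(ckpt_keys - model_keys)  # in checkpoint, not in model
    return missing, unexpected
```

If either list is non-empty (for example because the checkpoint nests everything under an extra prefix), parts of the model run with random weights, which could easily explain a 61.2% vs. 66.8% gap.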
First, thanks for sharing such outstanding work!!
Could you tell me how to render images using macOS Preview?
Thanks!!
Hello, thanks for your wonderful work. Can you share the source code? [email protected]
Thanks for your amazing work! Can you provide the pretraining log files? I want to check if I'm running it incorrectly.
Thank you!
When I try to run my own dataset, this error appears: 'dataset name is not in the dataset registry'
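That error usually means the custom dataset class was never registered under the name the config refers to. A minimal sketch of the registry pattern many point-cloud codebases use (the names here are illustrative, not this repo's exact API):

```python
# Global mapping from config name -> dataset class.
DATASETS = {}

def register_dataset(name):
    """Decorator that adds a dataset class to the registry under `name`."""
    def wrap(cls):
        DATASETS[name] = cls
        return cls
    return wrap

@register_dataset("MyDataset")  # must match the NAME field in your config
class MyDataset:
    def __init__(self, root):
        self.root = root

def build_dataset(name, **kwargs):
    """Look the name up and instantiate; unknown names raise the familiar error."""
    if name not in DATASETS:
        raise KeyError(f"{name} is not in the dataset registry")
    return DATASETS[name](**kwargs)
```

Two things to check: the registered string matches the config exactly, and the module defining your dataset is actually imported somewhere, otherwise the decorator never runs.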
Hello, thank you for your great work. I encountered some issues while attempting to reproduce your experiment.
I downloaded your pretrained model from Google Cloud, fine-tuned it on an RTX 3090, and obtained the following results: 93.97% on OBJ_BG, 92.08% on OBJ_ONLY, and 89.97% on PB_T50_RS (without voting, seed = 0). However, I couldn't achieve comparable results to those reported in the paper, which are 95.18%, 93.63%, and 90.63%, respectively.
After reading this issue, I learned about the correct method to reproduce the results. I then attempted using seed 32174, but the results remained the same at 93.97% on OBJ_BG. In general, it seems unlikely that the seed alone would cause such a significant performance difference (e.g., 93.97% in my case vs. 95.18% in your reported results).
Could you please provide guidance on how to accurately reproduce the experiment? Thank you very much.
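To rule seeding in or out as the variable, it helps to pin every RNG in one place. A generic seed-everything sketch (not the repo's exact code; the torch part is guarded so the snippet stands alone):

```python
import os
import random

import numpy as np

def seed_everything(seed: int):
    """Seed every RNG that typically affects a training run.

    Note: even with all seeds fixed, cuDNN non-determinism and different
    GPU/driver/PyTorch versions can still shift fine-tuning accuracy,
    so a remaining gap may not be seed-related at all.
    """
    random.seed(seed)
    np.random.seed(seed)
    os.environ["PYTHONHASHSEED"] = str(seed)
    try:
        import torch
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
        # Trade speed for reproducibility on GPU:
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False
    except ImportError:
        pass  # torch not installed; CPU-side seeding still applies
```

If results are identical across seeds 0 and 32174 (as reported above), the seed is clearly not the cause, and the environment (CUDA/cuDNN/PyTorch versions) is the more likely suspect.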
First, thanks for sharing such good work!
How do I download the ModelNet10 dataset?
Looking forward to your reply~
Dear @qizekun ,
Thanks for your very nice work. Recently I noticed that reproducing the results for point cloud pretraining usually requires a decent pre-trained checkpoint.
My question is whether the same pre-trained checkpoints are used for both the classification and the segmentation task.
For example, if I found that ckpt-ep250 works well for classification, am I right to use it also for part segmentation? Or, to produce a decent part segmentation result, do I need to choose another checkpoint (e.g., ckpt-ep300)?
Thanks in advance for your answer.
Best regards and have a nice day,
Hi, I downloaded your pretrained model from Google Drive and fine-tuned it on an NVIDIA 3090, achieving 92.38% on the ModelNet40 SVM task and 94.49% / 92.6% / 89.62% on the ScanObjectNN tasks, with the random seed also set to 0.
Is this related to the server and PyTorch environment I'm using? Or do I need to cancel the random-seed setting and run multiple times?
Thank you!
In the logs given on Google Drive, there are some unknown args and configs that can't be found in the released code. For example, "args.pretrain_prompt : False" and "config.model.cls_sample : 256" in hardest_90_63.log, and "config.model.cls_embeding : False" in objbg_95_18.log. What are they? Are these args and configs related to the final result?
I don't know what they are; can I cancel them?