Comments (16)
I suppose you are talking about the performance on ModelNet40.
The performance on ModelNet40 of almost all methods is not stable; see (CVMI-Lab/PAConv#9 (comment)).
If you run the same code several times, you will get different results.
Here is the log where we got 94.5% test accuracy with voting:
https://drive.google.com/file/d/1z-QbWLMlGxfMVPImcLeaDOVOJGvm64ny/view?usp=sharing
Here is the log where we got 94.1% test accuracy without voting: https://drive.google.com/file/d/1T8ZljhO2evsxQ0aL7l0xbd_dVHgargzT/view?usp=sharing
Also, the randomness on ModelNet40 is our motivation to experiment on the ScanObjectNN benchmark and to report the mean/std results of several runs.
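As background for the voting numbers above: test-time voting typically averages class scores over several randomly augmented passes of each sample. Below is a minimal sketch of that idea with hypothetical `predict`/`augment` stand-ins; it is not the repository's actual evaluation code.

```python
import random

def predict(point_cloud):
    # Stand-in for a model forward pass returning per-class scores
    # (hypothetical toy scorer over 3 classes).
    s = sum(point_cloud)
    return [s % 1.0, (s * 0.5) % 1.0, (s * 0.25) % 1.0]

def augment(point_cloud, rng):
    # Stand-in for a random test-time augmentation (here: scaling).
    scale = rng.uniform(0.9, 1.1)
    return [p * scale for p in point_cloud]

def vote_predict(point_cloud, num_votes=10, seed=0):
    # Accumulate class scores over several augmented passes, then
    # return the index of the best class (summing is equivalent to
    # averaging for the argmax).
    rng = random.Random(seed)
    totals = [0.0, 0.0, 0.0]
    for _ in range(num_votes):
        scores = predict(augment(point_cloud, rng))
        totals = [t + s for t, s in zip(totals, scores)]
    return max(range(len(totals)), key=totals.__getitem__)
```

Because the augmentations are random, voted accuracy itself can vary slightly between evaluation runs unless the voting seed is fixed.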
Please let me know if you have any questions or concerns.
Best,
Xu
from pointmlp-pytorch.
Could you please read what I posted previously in this issue?
Again, the performance on ModelNet40 is not deterministic; it has a large variance. This also motivates us to consider ScanObjectNN as a benchmark. Please see above and the paper.
It seems it's hard for me to train...
@zkm98 Could you please clarify what you mean by "hard to train"?
Empirically, PointMLP is easy to train in terms of deployment, speed, loss curve, etc.
By "hard to train" I mean it is hard to reproduce the results in your paper; sorry that I didn't define it clearly. For most of the experiments I ran, I only got 92.8% with PointMLP-elite when training from scratch. Maybe I could match your numbers using the pretrained models, but that doesn't convince me.
@zkm98 I uploaded checkpoints/logs for PointMLP and PointMLP-elite (the bug-fixed results and checkpoints are uploaded as well). Please see the README file.
As I mentioned previously, the results on ModelNet40 are not stable (but always above 93.x for both PointMLP and PointMLP-elite). This randomness is what motivated us to consider the ScanObjectNN benchmark.
Thank you very much for your wonderful work. I tried testing with your pre-trained model, and with the best parameters I only got 93.6. I would like to know what causes this: in theory, with the same model and the same parameters, evaluation alone should give 94.1.
@le-cheng Thanks for your interest. I will double-check and reply to you by Wednesday.
@le-cheng
I just double-checked the testing result; there is no problem, and I get the same results as I claimed. See the screenshot.
For your convenience, I uploaded a simple test script.
@ma-xu
OK, my problem has been solved!
I would like to ask whether your training procedure is simply running training multiple times, or whether there is some additional parameter tuning. Under what circumstances can I use your code to get the same results as you?
@le-cheng Sorry, I did not fully understand your question. Are you asking about the unstable performance?
I trained with the default parameters (see log files).
The performance on ModelNet40 of almost all methods is not stable; see (CVMI-Lab/PAConv#9 (comment)).
If you run the same code several times, you will get different results.
Also, the randomness on ModelNet40 is our motivation to experiment on the ScanObjectNN benchmark and to report the mean/std results of several runs.
Well, thanks.
I can only get 93.071!
Since the experiment itself is unstable, could you share logs from multiple runs, or the exact random seed needed to reproduce the best checkpoint?
@yanghu819
Sure, I uploaded the 6 runs of PointMLP (last checkpoints are removed to save space; the model names are our internal development names, ignore them). Here is the link: https://drive.google.com/drive/folders/1SUuMDjaWhxURaUO7VtPpJ_vnxvaBA9eO?usp=sharing
The folder name pattern is [modelname]-[time]-[randgeneratedseed]. That is, we trained the model (bug-fixed version; we can also provide the unfixed submission version if needed) on Feb/09/2022 with 6 randomly generated seeds.
Even training with the exact same seed, one probably still cannot get a deterministic result; we tested that. You can also give it a try, since the seed is provided in the folder name.
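To illustrate the seeding point: a fixed seed makes a single pseudo-random generator fully reproducible, but a real GPU training run has additional nondeterminism sources, which is consistent with the observation above. Below is a minimal standard-library sketch (illustration only, not the repository's code); the PyTorch calls named in the comments are the standard ones.

```python
import random

def run_with_seed(seed, n=5):
    # One "run": fix the seed, then draw n pseudo-random numbers.
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

# Same seed -> identical sequence; different seed -> a different one.
assert run_with_seed(42) == run_with_seed(42)
assert run_with_seed(42) != run_with_seed(7)

# For a real PyTorch run one would additionally set, e.g.:
#   torch.manual_seed(seed)
#   torch.cuda.manual_seed_all(seed)
#   torch.backends.cudnn.deterministic = True
#   torch.backends.cudnn.benchmark = False
# Even with these, some CUDA kernels remain nondeterministic, so
# exact bitwise reproducibility on GPU is not guaranteed.
```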
Btw, you can easily get the same result using our pre-trained checkpoint.
Besides, a good choice is to report the mean and std of several runs, as in Table 3 on the ScanObjectNN benchmark. You can easily get a better PointMLP result than we reported.
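Reporting the mean/std over several runs can be done directly with the Python standard library. The accuracy values below are made-up placeholders, not numbers from the linked logs:

```python
from statistics import mean, stdev

# Hypothetical overall accuracies from several independent runs
# (placeholders, not numbers from the linked logs).
run_accuracies = [93.4, 93.8, 94.1, 93.6, 94.5, 93.9]

mu = mean(run_accuracies)
sigma = stdev(run_accuracies)  # sample standard deviation (n - 1)
print(f"OA over {len(run_accuracies)} runs: {mu:.2f} +/- {sigma:.2f}")
```

Reporting mean and standard deviation this way communicates the variance across seeds instead of cherry-picking a single best run.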
@zkm98 @yanghu819 @le-cheng @coallar
I will close this issue since there is no further discussion. Feel free to reopen it if necessary.