tsunghan-wu / randla-net-pytorch
:four_leaf_clover: Pytorch Implementation of RandLA-Net (https://arxiv.org/abs/1911.11236)
License: MIT License
Hello, thanks for your amazing work.
After training the model, I used test_SemanticKITTI.py for inference. However, I found that self.test_dataset.min_possibility does not update during test time. Could you please give me some suggestions?
When I run python3 train_SemanticKITTI.py --checkpoint_path pretrain_model/checkpoint.tar,
this error happens: confusion_matrix() takes 2 positional arguments but 3 were given.
Could you tell me how to modify the code?
Which path should I set for dst_path? Can you give me an example?
Also, I can't find sequences_0.06 after running data_prepare_semantickitti.py.
Can you help me? Thank you very much.
I used 2 GPUs on a single machine to train the model, and it worked correctly. However, when using more than 2 GPUs, the program gets stuck in the network's forward pass while GPU utilization stays at 100%.
@tsunghan-mama Hi! Thank you for your excellent work! I am trying to run inference on sequence 08 with the pre-trained model. However, an error occurred: ModuleNotFoundError: No module named 'sklearn.metrics._dist_metrics'.
I searched the scikit-learn documentation (https://scikit-learn.org/stable/search.html?), and no version seems to have the module sklearn.metrics._dist_metrics. Could you please provide some suggestions to solve this problem?
Looking forward to your reply!
Thank you so much!
More Details about the error:
Traceback (most recent call last):
File "test_SemanticKITTI.py", line 180, in <module>
main()
File "test_SemanticKITTI.py", line 176, in main
tester.test()
File "test_SemanticKITTI.py", line 99, in test
self.rolling_predict()
File "test_SemanticKITTI.py", line 110, in rolling_predict
batch_data, input_inds, cloud_inds, min_possibility = next(iter_loader)
File "//python3.7/site-packages/torch/utils/data/dataloader.py", line 345, in __next__
data = self._next_data()
File "//python3.7/site-packages/torch/utils/data/dataloader.py", line 385, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "//python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 34, in fetch
data = next(self.dataset_iter)
File "//RandLA-Net-pytorch-main/dataset/semkitti_testset.py", line 52, in spatially_regular_gen
pc, tree, labels = self.get_data(pc_path)
File "/***/RandLA-Net-pytorch-main/dataset/semkitti_testset.py", line 69, in get_data
search_tree = pickle.load(f)
ModuleNotFoundError: No module named 'sklearn.metrics._dist_metrics'
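This usually indicates a scikit-learn version mismatch: the KD-trees were pickled by one release and unpickled by another whose private module layout differs. A version-independent workaround (a sketch, not the repo's actual code; the points array here is a random stand-in for the preprocessed point cloud) is to rebuild the search tree from the points instead of unpickling it:

```python
import numpy as np
from sklearn.neighbors import KDTree

# Hypothetical stand-in for the subsampled point cloud the pickled tree indexed.
points = np.random.rand(1000, 3).astype(np.float32)

# Rebuilding the tree avoids depending on another release's pickle layout.
search_tree = KDTree(points, leaf_size=50)
dist, idx = search_tree.query(points[:1], k=5)
print(idx.shape)  # one query point, five neighbours
```

Alternatively, pinning scikit-learn to the same version used by data_prepare_semantickitti.py (or re-running the preparation step under the current version) sidesteps the problem entirely.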
Hi, I would like to know how much memory is needed for testing SemanticKITTI. With batch=1, I need almost 32 GB of RAM (not GPU memory). Is this normal? Or is there a way to reduce that demand?
Code errors in utils/semkitti_vis/laserscan.py
Hi, it crashes when I run train_SemanticKITTI.py. I switched to CPU mode to debug and got information like the following:
How can I solve this? @tsunghan-mama @dream-toy @huixiancheng
Running data_prepare_semantickitti.py gives an error: ModuleNotFoundError: No module named 'utils.nearest_neighbors.lib'
After checking, there is no such file.
Hello, during training I got an IoU of 53.9, but after testing, evaluate_SemanticKITTI.py reports the results below. Is there usually a large gap between the IoU shown during testing and the final evaluated one?
`validation set:
Acc avg 0.052
IoU avg 0.003
IoU class 1 [car] = 0.052
IoU class 2 [bicycle] = 0.000
IoU class 3 [motorcycle] = 0.000
IoU class 4 [truck] = 0.000
IoU class 5 [other-vehicle] = 0.000
IoU class 6 [person] = 0.001
IoU class 7 [bicyclist] = 0.000
IoU class 8 [motorcyclist] = 0.000
IoU class 9 [road] = 0.000
IoU class 10 [parking] = 0.002
IoU class 11 [sidewalk] = 0.000
IoU class 12 [other-ground] = 0.000
IoU class 13 [building] = 0.000
IoU class 14 [fence] = 0.000
IoU class 15 [vegetation] = 0.000
IoU class 16 [trunk] = 0.000
IoU class 17 [terrain] = 0.000
IoU class 18 [pole] = 0.000
IoU class 19 [traffic-sign] = 0.000
0.052,0.000,0.000,0.000,0.000,0.001,0.000,0.000,0.000,0.002,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.003,0.052`
I'm not sure whether you can understand a question asked in Simplified Chinese.
I have tried Python 3.10, 3.8, and 3.7, but compilation with the bash script 'compile_op.sh' failed for all of them. Only version 3.5 succeeded. Does this project specifically require Python 3.5?
How to conduct an online test?
I used the pre-trained model to test sequence 08 and got 52.9% mean IoU, but the visualization seems to work badly.
pickle.UnpicklingError: A load persistent id instruction was encountered,
but no persistent_load function was specified.
When I load the pre-trained models, torch.load raises this error. So I would like to know which torch and CUDA versions you used. Thank you very much.
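For reference, this UnpicklingError is raised whenever a pickle stream containing persistent-id records (which torch's legacy checkpoint format uses for tensor storages) is read without a matching persistent_load hook, e.g. via plain pickle.load, or when the checkpoint file is truncated or is a git-lfs pointer rather than the real binary. A stdlib-only sketch of the mechanism:

```python
import io
import pickle


class PersistingPickler(pickle.Pickler):
    # Emit a persistent-id record for bytes payloads, the way torch's legacy
    # format externalizes tensor storages.
    def persistent_id(self, obj):
        if isinstance(obj, bytes):
            return ("storage", obj.decode())
        return None


buf = io.BytesIO()
PersistingPickler(buf).dump({"w": b"weights"})

buf.seek(0)
try:
    pickle.load(buf)  # no persistent_load hook -> UnpicklingError
except pickle.UnpicklingError as e:
    print("plain pickle fails:", e)
```

torch.load installs the right persistent_load hook itself, so if torch.load is what raises this, it is worth checking first that the downloaded checkpoint is the full binary file and not a git-lfs pointer stub.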
I got a bug while training:
(randlanet) wx@dl-group-workstation:/media/wx/HDD/DQ/RandLA-Net-pytorch-main$ python train_SemanticKITTI.py
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 19130/19130 [00:40<00:00, 473.39it/s]
0%| | 0/3826 [00:03<?, ?it/s]
Traceback (most recent call last):
File "train_SemanticKITTI.py", line 191, in <module>
main()
File "train_SemanticKITTI.py", line 187, in main
trainer.train()
File "train_SemanticKITTI.py", line 131, in train
self.train_one_epoch()
File "train_SemanticKITTI.py", line 120, in train_one_epoch
loss, end_points = compute_loss(end_points, self.train_dataset, self.criterion)
File "/media/wx/HDD/DQ/RandLA-Net-pytorch-main/network/loss_func.py", line 28, in compute_loss
loss = criterion(valid_logits, valid_labels).mean()
File "/home/wx/anaconda3/envs/randlanet/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/wx/anaconda3/envs/randlanet/lib/python3.6/site-packages/torch/nn/modules/loss.py", line 1152, in forward
label_smoothing=self.label_smoothing)
File "/home/wx/anaconda3/envs/randlanet/lib/python3.6/site-packages/torch/nn/functional.py", line 2846, in cross_entropy
return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
RuntimeError: weight tensor should be defined either for all 19 classes or no classes but got weight tensor of shape: [1, 19]
could you please help me?
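One likely cause (an assumption based on the error message, not a confirmed diagnosis of this repo): nn.CrossEntropyLoss expects its class-weight tensor to be 1-D with one entry per class, but a tensor of shape [1, 19] is 2-D. A minimal sketch of the fix:

```python
import torch
import torch.nn as nn

num_classes = 19
class_weights = torch.rand(1, num_classes)  # shape [1, 19] triggers the RuntimeError

# CrossEntropyLoss wants a 1-D weight of length num_classes, so drop the
# leading singleton dimension before constructing the criterion.
criterion = nn.CrossEntropyLoss(weight=class_weights.squeeze(0), reduction="none")

logits = torch.randn(8, num_classes)
labels = torch.randint(0, num_classes, (8,))
loss = criterion(logits, labels).mean()
print(loss)  # scalar loss
```

Wherever the repo computes its class weights (e.g. from label frequencies), applying .squeeze(0) or .flatten() at that point should be equivalent.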
As shown below, the two counts on the left are the numbers of points inferred by test_SemanticKITTI.py; on the right are the numbers of points from the original SemanticKITTI.
Maybe the proj.pkl part is missing for the val or test split?
https://github.com/QingyongHu/RandLA-Net/blob/6b5445f5f279d33d2335e85ed39ca8b68cb1c57e/tester_SemanticKITTI.py#L109-L114
I have trained for 30 epochs. The best accuracy and IoU are only 33.75% and 23.16%, respectively.
I used 1800 samples from the nuScenes dataset; each sample contains 34000 points.
The learning rate is set to 0.001, with a CosineAnnealingLR schedule.
I wonder whether I am doing anything wrong. Thank you very much.
Hi, thank you for your great work! I ran the test, but min_possibility is always 0. Have you seen this problem, or do you know why it happens? Thanks for your reply in your free time.
When I run python train_SemanticKITTI.py, an error happens:
100%|███████████████████████████████████████| 4541/4541 [04:12<00:00, 17.99it/s]
100%|█████████████████████████████████████████| 909/909 [18:11<00:00, 1.20s/it]
0%| | 0/136 [00:08<?, ?it/s]
Traceback (most recent call last):
File "train_SemanticKITTI.py", line 197, in <module>
main()
File "train_SemanticKITTI.py", line 193, in main
trainer.train()
File "train_SemanticKITTI.py", line 138, in train
mean_iou = self.validate()
File "train_SemanticKITTI.py", line 167, in validate
iou_calc.add_data(end_points)
File "/home/tukrin/ZYD_3D/RandLA-Net-pytorch-main/utils/metric.py", line 29, in add_data
conf_matrix = confusion_matrix(labels_valid, pred_valid, np.arange(0, self.cfg.num_classes, 1))
TypeError: confusion_matrix() takes 2 positional arguments but 3 were given
I don't know how to solve this problem. Can you help me? Thanks!
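For what it's worth, this TypeError appears with recent scikit-learn releases, where every parameter of confusion_matrix after (y_true, y_pred) became keyword-only. A minimal sketch of the keyword-argument call that the line in utils/metric.py would need (the tiny arrays here are illustrative stand-ins; variable names follow the traceback above):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

num_classes = 19
labels_valid = np.array([0, 2, 2, 1])  # illustrative ground-truth labels
pred_valid = np.array([0, 2, 1, 1])    # illustrative predictions

# Passing labels= as a keyword works on both old and new scikit-learn;
# passing it positionally raises the TypeError on recent releases.
conf_matrix = confusion_matrix(labels_valid, pred_valid,
                               labels=np.arange(0, num_classes, 1))
print(conf_matrix.shape)  # (19, 19)
```

The same one-line change (adding labels=) in utils/metric.py line 29 should resolve the error without pinning scikit-learn.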
if mean_iou > self.highest_val_iou:
self.hightest_val_iou = mean_iou
The code here is wrong: hightest_val_iou is a typo for highest_val_iou, so the tracked best value is never actually updated.
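Because the misspelled assignment writes to a brand-new attribute, the comparison always sees the stale best value. A sketch of the corrected logic, with a hypothetical Trainer pared down to the relevant attribute:

```python
class Trainer:
    def __init__(self):
        self.highest_val_iou = 0.0

    def maybe_update_best(self, mean_iou):
        """Return True when mean_iou beats the tracked best and record it."""
        if mean_iou > self.highest_val_iou:
            # Same spelling on both lines; the original typo ("hightest")
            # silently created a second attribute instead of updating this one.
            self.highest_val_iou = mean_iou
            return True
        return False
```

The fix in the repo is simply renaming hightest_val_iou to highest_val_iou so both lines touch the same attribute.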
I tried to train it following the README.md, but there are some problems:
File "/home/xavier/RidarSS/RandLA-Net-pytorch-master/helper_tool.py", line 171, in knn_search
neighbor_idx = nearest_neighbors.knn_batch(support_pts, query_pts, k, omp=True)
AttributeError: module 'nearest_neighbors' has no attribute 'knn_batch'
Any solution? Please.
Thank you for the implementation, great job. I was wondering whether you submitted the seq 11-21 prediction results to the CodaLab test server?
As far as I know, there is a large difference between the validation and test splits of the SemanticKITTI dataset.
Firstly, thank you for your work, an up-to-date pytorch implementation of RandLA is really nice to have.
This is not a bug report but rather a series of questions I had when I started implementing RandLA before I found this repository.
After running data_prepare_semantickitti.py, I wanted to see what the point cloud looks like, so I ran:
python .\visualize_SemanticKITTI.py -d .\dataset\semantickitti\sequences_0.06\ -s 03
But there is an error:
RuntimeError: Filename extension is not valid label file.
I noticed that it requires label files to have a .label filename extension, but after data preparation my point cloud and label files have the .npy extension:
self.scan_names[self.offset] is .\dataset\semantickitti\sequences_0.06\03\velodyne\000000.npy
self.label_names[self.offset] is .\dataset\semantickitti\sequences_0.06\03\labels\000000.npy
So I tried to visualize the raw data instead:
python .\visualize_SemanticKITTI.py -d .\dataset\semantickitti\sequences\ -s 03
But there is also an error:
RuntimeError: Filename extension is not valid scan file.
It requires .npy, but the raw KITTI dataset stores point clouds as .bin and labels as .label. So how can I run the visualization?
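One possible workaround (a sketch under the assumption that the preprocessed files are plain NumPy arrays in the layout shown above; the paths and array shapes are hypothetical): bypass the visualizer's extension check and load the .npy files with numpy directly, then feed the arrays to whatever viewer you prefer.

```python
import numpy as np

# Hypothetical paths following the preprocessed sequences_0.06 layout above.
scan_path = r"dataset\semantickitti\sequences_0.06\03\velodyne\000000.npy"
label_path = r"dataset\semantickitti\sequences_0.06\03\labels\000000.npy"


def load_preprocessed(scan_path, label_path):
    """Load grid-subsampled points and labels saved as .npy by the prepare script."""
    points = np.load(scan_path)   # assumed (N, 3) xyz after grid subsampling
    labels = np.load(label_path)  # assumed (N,) per-point semantic labels
    return points, labels
```

Alternatively, extending the extension whitelist in utils/semkitti_vis/laserscan.py to accept .npy alongside .bin/.label would let the existing visualizer handle both layouts.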
RuntimeError: weight tensor should be defined either for all 19 classes or no classes but got weight tensor of shape: [1, 19]
How can I solve this?