plusmultiply / mprm
Multi-Path Region Mining For Weakly Supervised 3D Semantic Segmentation on Point Clouds
License: MIT License
Thanks for bringing the idea of attention to WSSS on point clouds!
I ran into two issues with this work.
In https://github.com/plusmultiply/mprm/blob/master/datasets/Scannet_subcloud.py#L798
you make the center of the sub-cloud different every time. Does that mean we still need point-level labels to derive the sub-cloud-level labels, since the sub-clouds generated from the same seeds might be slightly different?
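The concern can be illustrated with a toy sketch (hypothetical code, not from the repo): if the query center is jittered around the same seed point, the resulting sub-cloud can contain a different set of points, and therefore a different label set.

```python
# Toy illustration (not the repo's code): jittering the sub-cloud center
# around the same seed point can change which points fall inside the
# radius, and hence which point-level labels the sub-cloud "sees".
points = [(float(i), 0.0) for i in range(10)]   # toy 2D point cloud
labels = [0] * 5 + [1] * 5                      # point-level labels

def subcloud_labels(center, radius=2.5):
    # label set of all points within `radius` of `center`
    return {labels[i] for i, (x, y) in enumerate(points)
            if (x - center[0]) ** 2 + (y - center[1]) ** 2 <= radius ** 2}

seed_pt = points[1]                             # same seed point both times
jittered = (seed_pt[0] + 1.5, seed_pt[1])       # slightly shifted center
print(subcloud_labels(seed_pt))                 # -> {0}
print(subcloud_labels(jittered))                # -> {0, 1}
```

Here the two sub-clouds share a seed but see different class sets, which is exactly why deriving their labels seems to require point-level supervision.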
I think your attention submodules are implemented incorrectly.
In short, it should be stacked_length = inputs['stacked_length_out'] in
https://github.com/plusmultiply/mprm/blob/master/models/network_blocks_mprm.py#L968
https://github.com/plusmultiply/mprm/blob/master/models/network_blocks_mprm.py#L1026
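For context, here is a hedged toy sketch (hypothetical names, not the repo's code) of why the length vector matters: in KPConv-style batching, clouds are stacked along axis 0 and split back into per-cloud chunks using a per-cloud length vector. After a pooling layer the stacked tensor matches the *output* lengths, so splitting it with the input lengths misaligns the clouds.

```python
# Toy sketch (hypothetical names): splitting a stacked batch back into
# per-cloud chunks must use the length vector that matches the tensor.
pooled = list(range(7))        # pooled features: 7 points total, 2 clouds
lengths_in = [6, 4]            # per-cloud point counts before pooling
lengths_out = [4, 3]           # per-cloud point counts after pooling

def unstack(feats, lengths):
    # slice the stacked axis into one chunk per cloud
    chunks, i = [], 0
    for n in lengths:
        chunks.append(feats[i:i + n])
        i += n
    return chunks

print(unstack(pooled, lengths_out))  # correct: [[0, 1, 2, 3], [4, 5, 6]]
print(unstack(pooled, lengths_in))   # wrong:   [[0, 1, 2, 3, 4, 5], [6]]
```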
I have a question about how the pseudo labels on the validation set in Table 3 are evaluated in the code. I observed that the 'validation' data in scannet_subcloud.py is actually the 1201 training scenes, and through generate_pseudo_label.py it is possible to generate and evaluate 1201 pseudo labels for the training split (i.e., "Training" in Table 3). Which statements should I add or comment out to get the validation-set evaluation?
In scannet_subcloud.py, I tried commenting out the following lines of code (lines 414~418):
# Get number of clouds
self.input_trees['validation'] = self.input_trees['training']
self.input_colors['validation'] = self.input_colors['training']
self.input_vert_inds['validation'] = self.input_colors['training']
self.input_labels['validation'] = self.input_labels['training']
I also changed the following code (line 447, removing the "not"):
if (not self.load_test) and 'train' in cloud_folder and cloud_name not in self.validation_clouds:
In addition, in generate_pseudo_label.py, I changed validation_size to 312. After making these changes, the following error occurs when running generate_pseudo_label.py:
Traceback (most recent call last):
File "generate_pseudo_label.py", line 183, in
test_caller(chosen_log, chosen_snapshot, on_val)
File "generate_pseudo_label.py", line 131, in test_caller
tester.test_cloud_segmentation_on_val(model, dataset)
File "/home/chn/Downloads/Weakly Supervised/mprm-master/utils/tester_cam.py", line 543, in test_cloud_segmentation_on_val
probs = self.test_probs[i_val][dataset.validation_proj[i_val], :]
IndexError: list index out of range
Hello, nice job!
Sorry to bother you about the ScanNet dataset. I would like to know more about it: which version (v1 or v2) did you use? How much memory is consumed? And are there other supporting documents, such as the train/val split?
Best wishes!
Will you release the pretrained models?
Traceback (most recent call last):
File "/home/data2/mprm-master/training_mprm.py", line 201, in
model = KernelPointFCNN(dataset.flat_inputs, config)
File "/data2/mprm-master/models/KPFCNN_mprm.py", line 89, in init
self.inputs['last_batch_ind'] = flat_inputs[ind]
IndexError: tuple index out of range
Creating Model
When I used your code, I met this error. Could you give me some advice?
Best wishes!
Dear plusmultiply,
Thank you very much for sharing the code.
During training, the PCAM, SA and PSA heads work well and the loss steadily decreases.
However, when I use the channel attention head, the loss always fluctuates, resulting in low accuracy. I tried it separately and combined with other heads; both give unstable losses.
Have you met the same problem, or could you give any possible explanation for this?
I'm looking forward to your reply.
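For readers following along, the channel attention discussed in this thread can be sketched roughly like this (a DANet-style toy in plain Python, not the repo's implementation): the attention map is C x C over channels, A = softmax(F F^T), and the output is A F plus a residual connection.

```python
import math

# Toy channel attention (illustrative, not the repo's code):
# F has C=2 channels over N=3 points.
F = [[1.0, 0.0, 1.0],
     [0.0, 1.0, 1.0]]

def matmul(A, B):
    # plain-Python matrix product
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def softmax_rows(M):
    # numerically stable row-wise softmax
    out = []
    for row in M:
        m = max(row)
        e = [math.exp(v - m) for v in row]
        s = sum(e)
        out.append([v / s for v in e])
    return out

energy = matmul(F, [list(r) for r in zip(*F)])   # C x C channel similarities
attn = softmax_rows(energy)                      # each row sums to 1
out = [[a + f for a, f in zip(arow, frow)]       # A F + residual
       for arow, frow in zip(matmul(attn, F), F)]
```

Because the C x C map couples every channel with every other one, gradients through this head are less localized than in spatial attention, which is one plausible (unconfirmed) source of the instability described above.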
Thanks for sharing your code! But I have a question. When training and testing the code (train_mb.py and tester_cam.py), I found that dataset.input_labels[] was not found. It turns out that self.input_labels is commented out in Scannet_subcloud.py; only self.subcloud_labels exists. After replacing dataset.input_labels in these two files with dataset.subcloud_labels, the following error appears (during the validation phase). What should I do?
File "/home/chn/Downloads/mprm-master/datasets/Scannet_subcloud.py", line 718, in spatially_regular_gen
cloud_labels = self.subcloud_labels[data_split][cloud_ind][point_ind][1:]
IndexError: index 2173 is out of bounds for axis 0 with size 30
I think the code in trainer_mb.py and tester_cam.py does not correspond to the code in Scannet_subcloud.py. Please fix this bug.
Thank you for sharing the code!
However, I have several questions about the sub-cloud-level annotation. Could you explain in detail how the file subcloud_label.tar.gz was generated, or how the sub-clouds are annotated?
Thank you!
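One common way such sub-cloud annotations are represented (a hedged sketch; not necessarily how the authors produced subcloud_label.tar.gz) is a multi-hot vector marking which classes occur among the points of a sub-cloud:

```python
# Illustrative only: a sub-cloud-level label as a multi-hot class vector.
NUM_CLASSES = 4  # hypothetical number of classes for this toy

def multi_hot(point_labels_in_subcloud):
    # mark each class that appears at least once in the sub-cloud
    vec = [0] * NUM_CLASSES
    for c in point_labels_in_subcloud:
        vec[c] = 1
    return vec

print(multi_hot([0, 0, 2, 2, 3]))  # -> [1, 0, 1, 1]
```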
Through training I was able to obtain the model and produce the result files in .ply format through generate_pseudo_label.py (the .ply files can be refined through crf_postprocess.py). But when I tried to use training_segmentation.py to train a segmentation model on the pseudo labels, I found some problems in the code. First, there is no KPFCNN_model_original.py file under the models folder. Second, Scannet_on_pseudo_lable.py reads pseudo labels in .npy format, but only .ply files are produced.
Because of these two problems, a new segmentation model cannot be retrained. I hope you can fix this. Thank you very much!
Hello!
I want to ask for some help with loading .ply data.
I tried to load a .ply file with the read_ply function and it gave me a KeyError:
<ipython-input-17-c43e6f65c260> in parse_header(plyfile, ext)
40 line = line.split()
41 print(line)
---> 42 properties.append((line[2].decode(), ext + ply_dtypes[line[1]]))
43
44 return num_points, properties
KeyError: b'list'
any help on this? thanks a lot!
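For what it's worth, KeyError: b'list' usually means the PLY header contains a "property list" declaration (e.g. face vertex indices), which a dtype-table-based parser like the one in the traceback does not cover. A minimal workaround sketch (assuming an ASCII PLY file; illustrative code, not the repo's read_ply) reads only the vertex element and skips list properties:

```python
def read_vertices(path):
    # Minimal ASCII-PLY vertex reader that ignores "property list" lines
    # (e.g. face indices), which trip up dtype-table based parsers.
    with open(path, 'rb') as f:
        assert f.readline().strip() == b'ply'
        n_verts, props, in_vertex = 0, [], False
        for raw in iter(f.readline, b''):
            parts = raw.strip().split()
            if not parts:
                continue
            if parts[0] == b'element':
                in_vertex = parts[1] == b'vertex'
                if in_vertex:
                    n_verts = int(parts[2])
            elif parts[0] == b'property' and in_vertex and parts[1] != b'list':
                props.append(parts[-1].decode())
            elif parts[0] == b'end_header':
                break
        verts = [tuple(float(v) for v in f.readline().split())
                 for _ in range(n_verts)]
    return props, verts
```

Alternatively, re-exporting the cloud without face data (vertices only) should also let the original parser succeed.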