
lidar-bonnetal's Introduction

LiDAR-Bonnetal

Semantic Segmentation of point clouds using range images.

Developed by Andres Milioto, Jens Behley, Ignacio Vizzo, and Cyrill Stachniss

Examples of segmentation results on the SemanticKITTI dataset:

Description

This repository provides code to train and deploy Semantic Segmentation of LiDAR scans, using range images as an intermediate representation. The training pipeline can be found in /train. We will open-source the deployment pipeline soon.

Pre-trained Models

To enable kNN post-processing, set the corresponding boolean parameter to True in the arch_cfg.yaml file inside the model directory.
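For reference, the relevant block in arch_cfg.yaml looks roughly like this (a sketch from memory; key names and values may differ slightly between model releases):

    post:
      KNN:
        use: True   # set to False to disable the kNN post-processing
        params:
          knn: 5
          search: 5
          sigma: 1.0
          cutoff: 1.0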

Predictions from Models

These are the predictions for the train, validation, and test sets. The performance can be evaluated for the training and validation set, but for test set evaluation a submission to the benchmark needs to be made (labels are not public).

No post-processing:

With k-NN processing:

License

LiDAR-Bonnetal: MIT

Copyright 2019, Andres Milioto, Jens Behley, Cyrill Stachniss. University of Bonn.

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Pretrained models: Model and Dataset Dependent

Pretrained models trained on a specific dataset retain the copyright of that dataset.

Citations

If you use our framework, model, or predictions for any academic work, please cite the original paper, and the dataset.

@inproceedings{milioto2019iros,
  author    = {A. Milioto and I. Vizzo and J. Behley and C. Stachniss},
  title     = {{RangeNet++: Fast and Accurate LiDAR Semantic Segmentation}},
  booktitle = {IEEE/RSJ Intl.~Conf.~on Intelligent Robots and Systems (IROS)},
  year      = 2019,
  codeurl   = {https://github.com/PRBonn/lidar-bonnetal},
  videourl  = {https://youtu.be/wuokg7MFZyU},
}
@inproceedings{behley2019iccv,
  author    = {J. Behley and M. Garbade and A. Milioto and J. Quenzel and S. Behnke and C. Stachniss and J. Gall},
  title     = {{SemanticKITTI: A Dataset for Semantic Scene Understanding of LiDAR Sequences}},
  booktitle = {Proc. of the IEEE/CVF International Conf.~on Computer Vision (ICCV)},
  year      = {2019}
}


lidar-bonnetal's Issues

Why permute proj xyz when process data

Hi, in file parser.py, why did you permute x,y,z into z, x, y (proj_xyz.clone().permute(2, 0, 1)) ?

proj = torch.cat([proj_range.unsqueeze(0).clone(), proj_xyz.clone().permute(2, 0, 1), proj_remission.unsqueeze(0).clone()])
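For intuition, a minimal standalone sketch (hypothetical sizes): the permute only converts the projection from channels-last (H, W, 3) to the channels-first (3, H, W) layout that PyTorch convolutions expect, so it can be concatenated with the range and remission channels.

    import torch

    proj_xyz = torch.rand(64, 2048, 3)          # (H, W, 3): one xyz vector per range-image pixel
    proj_xyz_chw = proj_xyz.permute(2, 0, 1)    # -> (3, H, W), channels first for Conv2d
    print(proj_xyz_chw.shape)                   # torch.Size([3, 64, 2048])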

About the pre-trained model

Hi,

Thank you for releasing the source code of this awesome work!
I wonder if you could tell me which sequences you selected to train the pre-trained models, because I want to try these models for inference. Were they chosen just like the paper says?
Thanks!

Request a simple demo script

Hi,

Thanks for sharing your amazing work.

This is a shot in the dark, but I am wondering if you would consider releasing a simple demo script that takes a LiDAR sweep as input and outputs predicted semantic labels. I am asking because I want to run the pretrained models on a different dataset. The easiest way seems to be porting my dataset into the same format as SemanticKITTI. Is that the case?

Thanks,
Peiyun
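For reference, SemanticKITTI scans are raw float32 binaries with four values (x, y, z, remission) per point; a minimal sketch of exporting a custom scan in that layout (array names are hypothetical):

    import numpy as np

    points = np.random.rand(1000, 3).astype(np.float32)      # (N, 3) xyz in the sensor frame
    remission = np.random.rand(1000).astype(np.float32)       # (N,) intensity/remission

    scan = np.hstack([points, remission[:, None]]).astype(np.float32)
    scan.tofile("000000.bin")   # naming pattern used inside the KITTI sequence folders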

RuntimeError: "embedding_backward" not implemented for 'Long'

I tried to train the network from scratch but I encountered the following runtime error.

image

I searched this error on Google but there was nothing related to this kind of error.

After a lot of attempts to fix this, I figured out that training works when the following line in "ioueval.py" is removed.

self.conf_matrix = self.conf_matrix.index_put_(tuple(idxs), self.ones, accumulate=True)

but the IoU value is shown to be zero.
image

Do you know what is the problem?
Thanks.
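For context, a minimal standalone sketch of the confusion-matrix accumulation that this line performs (hypothetical shapes; note that the index tensors must be long and nothing here should require gradients):

    import torch

    n_classes = 20
    conf_matrix = torch.zeros(n_classes, n_classes, dtype=torch.long)

    pred = torch.randint(0, n_classes, (4096,))     # flattened predictions
    target = torch.randint(0, n_classes, (4096,))   # flattened ground truth
    ones = torch.ones(pred.numel(), dtype=torch.long)

    with torch.no_grad():
        # adds 1 at every (target, pred) pair, i.e. counts per confusion-matrix cell
        conf_matrix.index_put_((target, pred), ones, accumulate=True)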

Question about validation results

Thank you for your excellent work!

I downloaded the prediction file darknet53-knn.tar.gz, ran evaluate_semantics on sequence 08, and got:

Acc avg 0.842
IoU avg 0.375

I just want to make sure: is this the SAME model as DarkNet53Seg in Table 2 of the SemanticKITTI paper, which got 49.9 mIoU on the test set (seq 11-21)?
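For reference, the evaluation is normally run with the semantic-kitti-api; treat the flag names below as an assumption and check that repository's README for the exact interface:

    ./evaluate_semantics.py \
      --dataset /path/to/kitti/dataset \
      --predictions /path/to/darknet53-knn \
      --split valid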

Inference pipeline!!

@Chen-Xieyuanli @niosus @stachnis @ovysotska @tano297 Thanks for open-sourcing the work. I have a few queries:

1. I want to detect only stationary objects in the driving environment. Can we use your pretrained model to output only those classes, and if so, which section of the code do I have to modify?
2. After segmentation of an object class, how do I obtain the cuboid bounding box of those objects so that I can extract the area, distance, and other parameters?
3. Can we test the model on fewer lidar points/rings? Have you done any testing, and if so, what is the accuracy drop?
4. When we give custom data as input to infer.py, should it also be organized in sequences, or can it be independent of them?

Thanks for the response!

The inference time on Jetson AGX!

Thank you very much for your work!

I see that the runtime of RangeNet53++ (64 × 2048 px) is 188 ms on Jetson AGX, but on my device it is 450 ms. I would like to know your lab environment settings, and whether there are additional operations.
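When comparing runtimes, the GPU has to be synchronized before reading the clock, otherwise only the kernel launch is measured; a minimal sketch with a placeholder network (not the repository's model):

    import time
    import torch

    model = torch.nn.Conv2d(5, 32, 3, padding=1).cuda().eval()   # placeholder network
    x = torch.rand(1, 5, 64, 2048).cuda()                        # placeholder range image

    with torch.no_grad():
        for _ in range(10):              # warm-up iterations
            model(x)
        torch.cuda.synchronize()
        start = time.time()
        model(x)
        torch.cuda.synchronize()         # wait for the GPU to finish before stopping the clock
        print("inference time: %.1f ms" % ((time.time() - start) * 1000.0))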

Using the code on less number of beams

Great work, thanks for sharing the dataset and code. I tried your code on a 40-beam lidar and did not get any useful result using your trained weights (darknet53-1024). I made changes to laserscan.py (fov and H to match my lidar) and also adjusted the height. Are there any other changes I need to make? I created a .bin file from my data and used it as input to your infer.py code.
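For reference, the beam-dependent settings live in the sensor block of the arch config; a rough sketch for a 40-beam sensor (the field-of-view numbers are placeholders that must match your lidar, and the key layout is from memory):

    dataset:
      sensor:
        name: "custom-40"
        type: "spherical"
        fov_up: 7        # placeholder: upper vertical FoV in degrees
        fov_down: -16    # placeholder: lower vertical FoV in degrees
        img_prop:
          width: 1024
          height: 40     # one row per beam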

Storing predictions for evaluation on the benchmark

Hi,

To store the predictions in the desired format, I went through the code and this is what I understood:

  • In the code, I think it happens somewhere in user.py, more specifically in the function infer_subset().
  • In the for loop, is the image iterated over pixel by pixel, with the pixel location specified by p_y and p_x?
  • In line 133, is proj_argmax the argmax of the predictions with shape (NxHxWx1)? And is unproj_argmax storing the prediction for the pixel location p_y, p_x?

Can you please let me know if I understand the code correctly?
Thanks for the help.
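For intuition, a minimal sketch (hypothetical names and shapes, not the repository's exact code) of mapping per-pixel predictions back to the original points via their stored pixel coordinates:

    import torch

    C, H, W, N = 20, 64, 2048, 120000
    proj_output = torch.rand(C, H, W)            # network output, one score map per class
    p_y = torch.randint(0, H, (N,))              # range-image row of every original 3D point
    p_x = torch.randint(0, W, (N,))              # range-image column of every original 3D point

    proj_argmax = proj_output.argmax(dim=0)      # (H, W): predicted label per pixel
    unproj_argmax = proj_argmax[p_y, p_x]        # (N,): predicted label per original point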

Why does training in Docker show "Training in device: cpu" and "Bus error"?

Hi, thank you for open-sourcing this work.
When I run it in Docker, I encountered two problems; the training output is as follows.


INTERFACE:
dataset /bonnet/KITTI/
arch_cfg config/arch/squeezeseg.yaml
data_cfg config/labels/semantic-kitti.yaml
log /bonnet/lidar-bonnetal/logs/
pretrained None

Commit hash (training version): b'4233111'

Opening arch config file config/arch/squeezeseg.yaml
Opening data config file config/labels/semantic-kitti.yaml
No pretrained directory found.
Copying files to /bonnet/lidar-bonnetal/logs/ for further reference.
Sequences folder exists! Using sequences from /bonnet/KITTI/sequences
parsing seq 00
parsing seq 01
parsing seq 02
parsing seq 03
parsing seq 04
parsing seq 05
parsing seq 06
parsing seq 07
parsing seq 09
parsing seq 10
Using 2761 scans from sequences [0, 1, 2, 3, 4, 5, 6, 7, 9, 10]
Sequences folder exists! Using sequences from /bonnet/KITTI/sequences
parsing seq 05
Using 2761 scans from sequences [5]
Loss weights from content: tensor([ 0.0000, 22.9317, 857.5627, 715.1100, 315.9618, 356.2452, 747.6170,
887.2239, 963.8915, 5.0051, 63.6247, 6.9002, 203.8796, 7.4802,
13.6315, 3.7339, 142.1462, 12.6355, 259.3699, 618.9667])
Using SqueezeNet Backbone
Depth of backbone input = 5
Original OS: 16
New OS: 16
Strides: [2, 2, 2, 2]
Decoder original OS: 16
Decoder new OS: 16
Decoder strides: [2, 2, 2, 2]
Total number of parameters: 915540
Total number of parameters requires_grad: 915540
Param encoder 724032
Param decoder 179968
Param head 11540
No path to pretrained, using random init.
Training in device: cpu
Ignoring class 0 in IoU evaluation
[IOU EVAL] IGNORE: tensor([0])
[IOU EVAL] INCLUDE: tensor([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18,
19])
ERROR: Unexpected bus error encountered in worker. This might be caused by insufficient shared memory (shm).
Traceback (most recent call last):
File "./train.py", line 115, in
trainer.train()
File "../../tasks/semantic/modules/trainer.py", line 236, in train
ERROR: Unexpected bus error encountered in worker. This might be caused by insufficient shared memory (shm).
show_scans=self.ARCH["train"]["show_scans"])
File "../../tasks/semantic/modules/trainer.py", line 307, in train_epoch
for i, (in_vol, proj_mask, proj_labels, _, path_seq, path_name, _, _, _, _, _, _, _, _, _) in enumerate(train_loader):
File "/usr/local/lib/python3.5/dist-packages/torch/utils/data/dataloader.py", line 576, in next
idx, batch = self._get_batch()
File "/usr/local/lib/python3.5/dist-packages/torch/utils/data/dataloader.py", line 553, in _get_batch
success, data = self._try_get_batch()
File "/usr/local/lib/python3.5/dist-packages/torch/utils/data/dataloader.py", line 511, in _try_get_batch
data = self.data_queue.get(timeout=timeout)
File "/usr/lib/python3.5/multiprocessing/queues.py", line 104, in get
if timeout < 0 or not self._poll(timeout):
File "/usr/lib/python3.5/multiprocessing/connection.py", line 257, in poll
return self._poll(timeout)
File "/usr/lib/python3.5/multiprocessing/connection.py", line 414, in _poll
r = wait([self], timeout)
File "/usr/lib/python3.5/multiprocessing/connection.py", line 911, in wait
ready = selector.select(timeout)
File "/usr/lib/python3.5/selectors.py", line 376, in select
fd_event_list = self._poll.poll(timeout)
File "/usr/local/lib/python3.5/dist-packages/torch/utils/data/_utils/signal_handling.py", line 63, in handler
_error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 212) is killed by signal: Bus error.

Looking forward to your reply.
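The bus error points at insufficient shared memory inside the container; a common workaround (an assumption, not taken from this thread) is to start Docker with a larger /dev/shm, or alternatively to reduce the number of DataLoader workers in the arch config:

    # give the container more shared memory for the DataLoader workers
    docker run --shm-size=8g -it <your-image> bash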

Augmentations

Thank you for this great work! It's an amazing platform to understand different CNN-architectures for 3D-CNN and segmentation.

I am trying to understand the effects of augmentations on the projections and observe the IoUs, but it is hard for me to see where it is best to apply the effects/noise to the training sequences.
It was possible for me to make some changes to the perspectives and image size through the spherical projection class, but these effects also change the validation set.

In which part of the program (user/parser/etc.) can I find the step where the training tensors are put into the decoder?

Thank you !
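One way to keep augmentations away from the validation set is to apply them only when the dataset object is built for the training split; a rough sketch of a train-time-only augmentation on the projection (the function is hypothetical, not part of the repository):

    import numpy as np

    def augment_range_image(proj, proj_labels):
        # random horizontal roll of the spherical projection, i.e. a rotation
        # of the scan around the vertical axis; labels are rolled identically
        shift = np.random.randint(0, proj.shape[-1])
        return np.roll(proj, shift, axis=-1), np.roll(proj_labels, shift, axis=-1)

    # call this only for samples of the training split, never for validation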

Kindly asking for a performance check of TABLE 1 in the RangeNet++ research paper

Dear Photogrammetry & Robotics Lab at the University of Bonn,

Based on the KITTI Odometry Benchmark and the instructions inside RangeNet++:

If I iterate over the training dataset sequences 00 - 10 (sequence 08 for validation) to calculate the std and mean, the results differ from the config values (img_means, img_stds) in the repository.

And after training for 150 epochs, some of the categories might have better performance, so you might want to recheck. In particular, some of them differ by about 5%, so TABLE 1 would be worth updating.

I could help with this if needed :D just let me know

A problem about the arch yaml

img_means: #range,x,y,z,signal
- 12.12
- 10.88
- 0.23
- -1.04
- 0.21
img_stds: #range,x,y,z,signal
- 12.32
- 11.47
- 6.91
- 0.86
- 0.16
How can I compute the img_means and img_stds on my own data?
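A minimal sketch of computing those statistics from your own projected scans (the generator my_training_projections() is hypothetical; only pixels that actually received a point should be counted):

    import numpy as np

    channel_sum = np.zeros(5)       # running sums for range, x, y, z, signal
    channel_sq_sum = np.zeros(5)
    n_valid = 0

    for proj, valid_mask in my_training_projections():   # yields (5, H, W) array and (H, W) bool mask
        pixels = proj[:, valid_mask]                      # (5, n_valid_pixels)
        channel_sum += pixels.sum(axis=1)
        channel_sq_sum += (pixels ** 2).sum(axis=1)
        n_valid += pixels.shape[1]

    img_means = channel_sum / n_valid
    img_stds = np.sqrt(channel_sq_sum / n_valid - img_means ** 2)
    print(img_means, img_stds)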

Problems when training with partial input information

Hi, I want to use only the xyz and remission data to train the network, so I modified the config file as follows:

backbone:
  name: "darknet"  # ['squeezeseg', 'squeezesegV2', 'darknet']
  input_depth:
    range: False
    xyz: True
    remission: True
  dropout: 0.01
  bn_d: 0.01
  OS: 32  # output stride (only horizontally)
  train: True  # train backbone?
  extra:
    layers: 21

And the program reports:

INTERFACE:
dataset .
arch_cfg config/arch/darknet21.yaml
data_cfg config/labels/semantic-kitti.yaml
log log_21
pretrained None
Opening arch config file config/arch/darknet21.yaml
Opening data config file config/labels/semantic-kitti.yaml
No pretrained directory found.
Copying files to log_21 for further reference.
Sequences folder exists! Using sequences from ./sequences
parsing seq 00
parsing seq 01
parsing seq 02
parsing seq 03
parsing seq 04
parsing seq 05
parsing seq 06
parsing seq 07
parsing seq 09
parsing seq 10
Using 19130 scans from sequences [0, 1, 2, 3, 4, 5, 6, 7, 9, 10]
Sequences folder exists! Using sequences from ./sequences
parsing seq 08
Using 4071 scans from sequences [8]
/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:53: lambda ->auto::operator()(int)->auto: block: [828,0,0], thread: [64,0,0] Assertion index >= -sizes[i] && index < sizes[i] && "index out of bounds" failed.
(the same assertion is repeated for many more blocks and threads)
THCudaCheck FAIL file=/pytorch/aten/src/THC/THCCachingHostAllocator.cpp line=265 error=59 : device-side assert triggered
Loss weights from content: tensor([ 0.0000, 22.9317, 857.5627, 715.1100, 315.9618, 356.2452, 747.6170,
887.2239, 963.8915, 5.0051, 63.6247, 6.9002, 203.8796, 7.4802,
13.6315, 3.7339, 142.1462, 12.6355, 259.3699, 618.9667])
Using DarknetNet21 Backbone
Depth of backbone input = 4
Original OS: 32
New OS: 32
Strides: [2, 2, 2, 2, 2]
Traceback (most recent call last):
File "./train.py", line 114, in
trainer = Trainer(ARCH, DATA, FLAGS.dataset, FLAGS.log, FLAGS.pretrained)
File "../../tasks/semantic/modules/trainer.py", line 89, in init
self.path)
File "../../tasks/semantic/modules/segmentator.py", line 36, in init
_, stub_skips = self.backbone(stub)
File "/home/li/env/py3torch1/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "../..//backbones/darknet.py", line 167, in forward
x, skips, os = self.run_layer(x, self.conv1, skips, os)
File "../..//backbones/darknet.py", line 150, in run_layer
y = layer(x)
File "/home/li/env/py3torch1/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/home/li/env/py3torch1/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 338, in forward
self.padding, self.dilation, self.groups)
RuntimeError: cuDNN error: CUDNN_STATUS_NOT_INITIALIZED

Do you know what's the problem?
Thanks!

Performance drop compared with pre-trained models

Hello, I am trying to reproduce your experiments from scratch on SemanticKITTI. I am running the code on a node with 8x Titan, but my performance is about 4 points lower than yours. I have tested your pre-trained models, and their performance is higher than mine. I also used the arch.yaml and data.yaml from your pre-trained models, and I find the performance is similar to that with config/arch and config/labels.

test on your pre-trained models: darknet21 (noknn, 47.4) and darknet53 (noknn, 50.4)
test on mine: darknet21 (noknn, 43.7) and darknet53 (noknn, 44.8)

What do you think is the main difference between your training and mine?
Can you share the training log with me?

CUDA out of memory

Just a heads up for anyone who might run into this issue.

When I tried training a network from a pre-trained model (squeezesegV2), I got the following error:

RuntimeError: CUDA out of memory. Tried to allocate 128.00 MiB (GPU 0; 7.93 GiB total capacity; 7.21 GiB already allocated; 9.00 MiB free; 45.45 MiB cached)

Solution: in the config file, reduce the batch size (the default was 8 for the model I was using). After changing it to 4, the training process started up fine.
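
For reference, a minimal sketch of lowering the batch size programmatically; it assumes the arch config keeps the batch size under the train section, as the provided darknet/squeezeseg configs do:

import yaml

# Sketch only: lower the batch size in the copied arch config before training.
with open("arch_cfg.yaml") as f:
    arch = yaml.safe_load(f)

arch["train"]["batch_size"] = 4  # halve (or quarter) it until the OOM error goes away

with open("arch_cfg.yaml", "w") as f:
    yaml.safe_dump(arch, f)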

How to adapt 16 beam lidar?

My lidar has only 16 beams.
How can I get the best result based on your work?
I want to train the model with a resolution of 2048 x 16; could this help the result?
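
For reference, a minimal sketch of the config fields that would typically change for a 16-beam sensor, assuming the arch config follows the layout of the provided darknet configs; the FOV numbers are only an example for a VLP-16-like lidar, and img_means/img_stds would also need to be recomputed for the new sensor:

import yaml

# Sketch only: adapt the sensor section of the arch config to a 16-beam lidar.
with open("arch_cfg.yaml") as f:
    arch = yaml.safe_load(f)

sensor = arch["dataset"]["sensor"]
sensor["fov_up"] = 15.0                # degrees, example value for a 16-beam sensor
sensor["fov_down"] = -15.0             # degrees
sensor["img_prop"]["height"] = 16      # one image row per beam
sensor["img_prop"]["width"] = 2048     # horizontal resolution of the range image

with open("arch_cfg.yaml", "w") as f:
    yaml.safe_dump(arch, f)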

Stuck during training?

Hello, I am trying to train the model (both from scratch and from a pretrained model) on the SemanticKITTI dataset, which I've downloaded. I am running the training on a machine with 2x 1080Ti.

I have let the training sit for about 1 hour, and nothing has happened so far. The tb directory is also empty, so I am not sure if it is actually doing anything.

Interestingly, nvidia-smi shows that the GPUs are idle, but memory has been allocated on them.

INTERFACE:
dataset /home/ddlabs/data/kitti/
arch_cfg /home/ddlabs/catkin_ws/src/rangenet_lib/darknet53/arch_cfg.yaml
data_cfg /home/ddlabs/catkin_ws/src/rangenet_lib/darknet53/data_cfg.yaml
log /tmp/train_log/
pretrained /home/ddlabs/catkin_ws/src/rangenet_lib/darknet53/
----------

Commit hash (training version):  b'4233111'
----------

Opening arch config file /home/ddlabs/catkin_ws/src/rangenet_lib/darknet53/arch_cfg.yaml
Opening data config file /home/ddlabs/catkin_ws/src/rangenet_lib/darknet53/data_cfg.yaml
model folder exists! Using model from /home/ddlabs/catkin_ws/src/rangenet_lib/darknet53/
Copying files to /tmp/train_log/ for further reference.
Sequences folder exists! Using sequences from /home/ddlabs/data/kitti/sequences
parsing seq 00
parsing seq 01
parsing seq 02
parsing seq 03
parsing seq 04
parsing seq 05
parsing seq 06
parsing seq 07
parsing seq 09
parsing seq 10
Using 19130 scans from sequences [0, 1, 2, 3, 4, 5, 6, 7, 9, 10]
Sequences folder exists! Using sequences from /home/ddlabs/data/kitti/sequences
parsing seq 08
Using 4071 scans from sequences [8]
Loss weights from content:  tensor([  0.0000,  22.9317, 857.5627, 715.1100, 315.9618, 356.2452, 747.6170,
        887.2239, 963.8915,   5.0051,  63.6247,   6.9002, 203.8796,   7.4802,
         13.6315,   3.7339, 142.1462,  12.6355, 259.3699, 618.9667])
Using DarknetNet53 Backbone
Depth of backbone input =  5
Original OS:  32
New OS:  32
Strides:  [2, 2, 2, 2, 2]
Decoder original OS:  32
Decoder new OS:  32
Decoder strides:  [2, 2, 2, 2, 2]
Total number of parameters:  50377364
Total number of parameters requires_grad:  50377364
Param encoder  40585504
Param decoder  9786080
Param head  5780
Successfully loaded model backbone weights
Successfully loaded model decoder weights
Successfully loaded model head weights
Training in device:  cuda
Let's use 2 GPUs!
Ignoring class  0  in IoU evaluation
[IOU EVAL] IGNORE:  tensor([0])
[IOU EVAL] INCLUDE:  tensor([ 1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16, 17, 18,
        19])

RuntimeError: cuda runtime error (11) : invalid argument at /pytorch/aten/src/THC/THCGeneral.cpp:383

Train command:
./train.py -d ~/dev-env/dataset/auto_driver/semantic_kitti/data_odometry_labels/dataset/ -ac ./config/arch/darknet21.yaml -l log

The code can run!

Train command:
./train.py -d ~/dev-env/dataset/auto_driver/semantic_kitti/data_odometry_labels/dataset/ -ac ./config/arch/darknet53.yaml -l log

The code can not run!

Error message:

Traceback (most recent call last):
  File "./train.py", line 115, in <module>
    trainer.train()
  File "../../tasks/semantic/modules/trainer.py", line 237, in train
    show_scans=self.ARCH["train"]["show_scans"])
  File "../../tasks/semantic/modules/trainer.py", line 318, in train_epoch
    output = model(in_vol, proj_mask)  # [2, 5, 64, 2048]; [2, 64, 2048] -> [2, 20, 64, 2048]
  File "/home/amax-syg/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/amax-syg/.local/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 152, in forward
    outputs = self.parallel_apply(replicas, inputs, kwargs)
  File "/home/amax-syg/.local/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 162, in parallel_apply
    return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
  File "/home/amax-syg/.local/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 83, in parallel_apply
    raise output
  File "/home/amax-syg/.local/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 59, in _worker
    output = module(*input, **kwargs)
  File "/home/amax-syg/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "../../tasks/semantic/modules/segmentator.py", line 151, in forward
    y, skips = self.backbone(x)
  File "/home/amax-syg/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "../..//backbones/darknet.py", line 167, in forward
    x, skips, os = self.run_layer(x, self.conv1, skips, os)
  File "../..//backbones/darknet.py", line 150, in run_layer
    y = layer(x)
  File "/home/amax-syg/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/amax-syg/.local/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 338, in forward
    self.padding, self.dilation, self.groups)
RuntimeError: cuda runtime error (11) : invalid argument at /pytorch/aten/src/THC/THCGeneral.cpp:383

How will the Sphere projection change if Lidar height is changed?

From what I understand, the spherical image projection is currently independent of the height of the lidar. What if the height changes? How do you transform the spherical projection to a different height? The spherical image projection formed at the new height will be very different from what the network has previously seen and trained on.
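
For context, a minimal sketch of the spherical projection along the lines of the RangeNet++ formulation (names and default values are illustrative): the mounting height enters only through the z coordinates of the points, not through any explicit parameter of the projection.

import numpy as np

def project_scan(points, fov_up_deg=3.0, fov_down_deg=-25.0, H=64, W=2048):
    # points: (N, 3) array of x, y, z in the sensor frame.
    fov_up = np.radians(fov_up_deg)
    fov_down = np.radians(fov_down_deg)
    fov = abs(fov_up) + abs(fov_down)

    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)   # range of each point
    yaw = -np.arctan2(y, x)              # horizontal angle
    pitch = np.arcsin(z / r)             # vertical angle: this is where z, and hence
                                         # the sensor height, influences the image

    u = 0.5 * (yaw / np.pi + 1.0) * W              # image column
    v = (1.0 - (pitch + abs(fov_down)) / fov) * H  # image row
    u = np.clip(np.floor(u), 0, W - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, H - 1).astype(np.int32)
    return u, v, r

So the same scene scanned from a different mounting height yields different ranges and pitch angles, and therefore a different range image, even though the projection formula itself stays the same.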

How important is remission/how is it used?

I was wondering about the importance of the remission (or reflectivity) data. I have simulated lidar data from Unreal Engine which I am planning to use to train a network from scratch; however, that only provides X, Y and Z coordinates.

Can I get rid of remission completely in the data preprocessing script, or make remission, say, 0.5 for every point (see the sketch below)?

Alternatively I could assign a set value to each object ID, for example buildings will have a fixed remission r1, cars will have a fixed remission r2 etc.

Or would doing this be detrimental to performance? It would be good to find out before spending a few days on data collection/preprocessing and then a further 3-4 days on model training.

Thanks in advance!
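
A minimal sketch of the constant-remission option mentioned above, assuming the simulated scans are to be stored in the KITTI-style (x, y, z, remission) float32 .bin layout; the function name and file name are placeholders:

import numpy as np

def make_kitti_style_scan(xyz, fake_remission=0.5):
    # xyz: (N, 3) array from the simulator; append a constant remission channel
    # so the scan matches the (x, y, z, remission) layout of KITTI .bin files.
    remission = np.full((xyz.shape[0], 1), fake_remission, dtype=np.float32)
    return np.concatenate([xyz.astype(np.float32), remission], axis=1)

scan = make_kitti_style_scan(np.random.rand(1000, 3).astype(np.float32))
scan.tofile("000000.bin")  # placeholder file name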

How to use CRF

I see that in arch_cfg.yaml, there is a comment for post.CRF.params, "this should be a dict when in use".
What should I put here if I want to use the CRF?

Why does inference take a lot longer on the first scan?

Hi,

When running inference, the first scan always takes a lot longer (0.5-0.7 s) than all other scans (0.01 s). Is there a way to fix this?

If not, I was wondering if you have any suggestions on how to implement live inference, for example continuously inferring new scans in the data directory. Currently I'm just running the inference script over and over, but because the first scan always takes so long, the performance isn't great.
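
The slow first scan is typically CUDA context creation plus cuDNN autotuning rather than anything specific to this code. A minimal warm-up sketch (the dummy shapes are illustrative; the segmentator in this repo takes both a range image and a projection mask):

import torch

def warm_up(model, dummy_inputs, iters=3):
    # Run a few throwaway forward passes so CUDA context creation and cuDNN
    # benchmarking happen before the first real scan is timed.
    model.eval()
    with torch.no_grad():
        for _ in range(iters):
            model(*dummy_inputs)
    torch.cuda.synchronize()

# Illustrative shapes for a 5-channel 64x2048 range image plus its mask:
# dummy = (torch.zeros(1, 5, 64, 2048, device="cuda"),
#          torch.ones(1, 64, 2048, device="cuda"))
# warm_up(segmentator, dummy)

For a live setting, keeping one long-running process (and therefore one already-warm model) instead of restarting the inference script per scan avoids paying this cost repeatedly.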

Calculation of the weights for the loss-function

Hi, In your paper it says that the weights for the loss function (weighted cross-entropy loss) are calculated according to:
w_c = 1/log(f_c + epsilon), where f_c is the inverse frequency of the class.

However, if I'm not mistaken, it looks like the weights are calculated differently in the code (in semantic/modules/trainer.py):
w_c = 1/(content + epsilon), where content is the frequency of the class.

Am I misunderstanding something, or are the weights really calculated differently? And if that's the case, why?
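
For reference, a small worked comparison of the two weightings; the class frequencies below are made-up numbers, only to show the shape of the difference:

import numpy as np

f = np.array([0.60, 0.30, 0.10])   # toy per-class frequencies (fractions of labeled points)
eps = 0.001

w_paper_style = 1.0 / np.log(f + eps)   # 1 / log(f_c + eps), as written in the paper
w_code_style = 1.0 / (f + eps)          # 1 / (f_c + eps), as the issue reads trainer.py

print(w_paper_style)  # negative for f_c < 1 unless the argument of the log is offset above 1
print(w_code_style)   # plain inverse-frequency weighting: rare classes get much larger weights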

Output representation and error on other dataset

@niosus @stachnis @ovysotska @Chen-Xieyuanli @tano297 Hi, I was able to run the code on the SemanticKITTI dataset and obtained the visualization, but when I give custom data / nuScenes data as input, I have the following observations:

  1. I get the entire output in red, as shown in the figure.

  2. How can I obtain the list of classes predicted by the model from the inference code?

  3. Can we get the location of the predicted objects or draw cuboids around them?
    Thanks in advance

DIFFERENT VALIDATION RESULTS ON TRAINING DATASET

Hello,

When I train the model on the training dataset from SemanticKITTI, I obtain different metrics for training and validation when using the same sequences for both. I would guess that the results should be the same... Do you have any idea why this is happening? Have you tried this?

Thank you in advance.

The mean and std in config/arch

Hi, thanks for open sourcing your work! I was wondering how you calculated the mean and std values in the config files. When I calculated the means (for the SemanticKITTI sequences 0-10 and 0-21) I got different values than you, especially for the X coordinate. I got these (range, x, y, z, signal) means:
For sequences 0-10: (11.6185, -0.1146, 0.4501, -1.0542, 0.2874).
For sequences 0-21: (11.7777, -0.0965, 0.5086, -1.0626, 0.2758).
Your mean values: (12.12, 10.88, 0.23, -1.04, 0.21)
It seems reasonable for the mean of the x value to be around 0; how come it is around 10 for you?
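
For reference, a minimal sketch of one way to compute such per-channel means directly from the raw .bin scans; the directory layout and the choice to average over raw points (rather than over projected range-image pixels) are assumptions, and the latter is one possible source of different numbers:

import glob
import numpy as np

def channel_means(scan_dir):
    # Accumulate means of (range, x, y, z, remission) over all points of all scans.
    sums = np.zeros(5, dtype=np.float64)
    count = 0
    for path in sorted(glob.glob(f"{scan_dir}/*/velodyne/*.bin")):
        pts = np.fromfile(path, dtype=np.float32).reshape(-1, 4)  # x, y, z, remission
        rng = np.linalg.norm(pts[:, :3], axis=1)
        sums[0] += rng.sum()
        sums[1:] += pts.sum(axis=0)
        count += pts.shape[0]
    return sums / count

# Example (hypothetical path):
# print(channel_means("/data/semantic_kitti/dataset/sequences"))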

Storing the training data

Hi,
I have a network for semantic segmentation of LiDAR scans and would like to train on your data.

I can visualize the data (visualize.py), and skimming through the code, I feel that generating the range images from the raw KITTI data is integrated within the training pipeline (maybe I am wrong).

Can you please guide me towards storing the training data, so I can use it with my architecture?

Thanks!!

Error in visualizing the predictions

Hi, @tano297 ,

When I try to visualize the predictions, I get the following error: Labels folder doesn't exist.

/data/code10/lidar-bonnetal/train/tasks/semantic# ./visualize.py -d /media/root/WDdata/dataset/kitti_lidar_vo_data/dataset -p /data/code10/lidar-bonnetal/prediction/darknet21/sequences/00/predictions -s 00
********************************************************************************
INTERFACE:
Dataset /media/root/WDdata/dataset/kitti_lidar_vo_data/dataset
Config config/labels/semantic-kitti.yaml
Sequence 00
Predictions /data/code10/lidar-bonnetal/prediction/darknet21/sequences/00/predictions
ignore_semantics False
ignore_safety False
offset 0
********************************************************************************
Opening config file config/labels/semantic-kitti.yaml
Sequence folder exists! Using sequence from /media/root/WDdata/dataset/kitti_lidar_vo_data/dataset/sequences/00/velodyne
Labels folder doesn't exist! Exiting...

I have downloaded the provided prediction files and can visualize the ground-truth labels successfully by running:
./visualize.py -d /media/root/WDdata/dataset/kitti_lidar_vo_data/dataset -s 00

Any hints to fix this issue?

THX!

ROS interface for DarkNet53Seg

Hi,
I would like to use the segmented point clouds in ROS1...

I just ran your train and inference scripts with the pretrained DarkNet53 and am wondering if you have already released the ROS interface? I read that you would do it after the ICRA deadline.

Cheers,
Kai

Hi,

Not a shot in the dark at all.

I thought of providing this functionality as I do in bonnetal with an infer_video.py and infer_img.py, but unfortunately, images are a lot more standard than point clouds to deal with.

So, I quickly realized that to make an infer_scan.py script I would have to specify a format, and if we decided on our own format, then our inference script was already good enough.

I could provide a script, for example, for .ply, .stl, or .pcd files, or something standard like that, but since I am going to release a ROS interface soon after ICRA deadline I let this side project go for now.

So, long story short, putting the data in the kitti format is the easiest way for now.

Originally posted by @tano297 in #3 (comment)

Performance of SqueezeSeg trained from scratch much lower than performance of the pre-trained model

Hello !

I'm trying to train from scratch the SqueezeSeg model using your code and configuration files. I just had to change the batch size to 16 because 32 was too big for my GPU.

But at the end of the training, the performance of my network (iou=0.201) is much lower than the performance of your pre-trained SqueezeSeg (iou=0.305).

Here are my figures at the end of the 150 epochs:

[figure: training IoU curve (train_iou_our_squeezeseg)]

[figure: validation IoU curve (valid_iou_our_squeezeseg)]

Have the pre-trained models been trained with the same configuration files as in the git repo? Should I adjust some params to reach the pre-trained performance?

Binary mask not used for squeezesegv2

In the squeezesegV2.py class, I can see that the binary mask is not used as an input channel. However, in the SqueezeSegV2 paper, the authors claim that using this binary mask to indicate whether a pixel value is missing significantly improves segmentation accuracy for some classes. Is there a reason for that?
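
For reference, a minimal sketch of what feeding the mask as an extra input channel would look like; the tensor names mirror the ones appearing in this repo's tracebacks, but this is not how the released code is wired:

import torch

in_vol = torch.randn(2, 5, 64, 2048)  # (batch, channels, H, W) range image
proj_mask = torch.ones(2, 64, 2048)   # 1 where the pixel contains a real point, 0 otherwise

# Append the validity mask as a sixth input channel, as described in the SqueezeSegV2 paper.
x = torch.cat([in_vol, proj_mask.unsqueeze(1)], dim=1)  # -> (2, 6, 64, 2048)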

DataLoader worker is killed by singal: Killed

I train from the pre-trained model.
After training one epoch and saving successfully, it hangs for a while and then raises the following error.

Traceback (most recent call last):
  File "./train.py", line 115, in <module>
    trainer.train()
  File "../../tasks/semantic/modules/trainer.py", line 259, in train
    save_scans=self.ARCH["train"]["save_scans"])
  File "../../tasks/semantic/modules/trainer.py", line 414, in validate
    output = model(in_vol, proj_mask)
  File "/home/haoli/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "../../tasks/semantic/modules/segmentator.py", line 149, in forward
    y, skips = self.backbone(x)
  File "/home/haoli/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "../..//backbones/darknet.py", line 167, in forward
    x, skips, os = self.run_layer(x, self.conv1, skips, os)
  File "../..//backbones/darknet.py", line 150, in run_layer
    y = layer(x)
  File "/home/haoli/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 490, in __call__
    if torch._C._get_tracing_state():
  File "/home/haoli/.local/lib/python3.6/site-packages/torch/utils/data/_utils/signal_handling.py", line 63, in handler
    _error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 4350) is killed by signal: Killed.

How can I solve it?

Number of filters in Fire modules not consistent with the SqueezeSeg paper

Hi, in the SqueezeSeg paper it is pointed out that for an input tensor of HxWxC, the squeeze layer of the Fire module has C/4 filters and both expand layers have C/2 filters, but in the code you are setting:
expand1x1_planes = expand3x3_planes = 4 * squeeze_planes

With these sizes, the Fire module does not preserve the equality between the input size and the output size. Sorry, I am just curious :) Thanks for your help
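
To make the channel arithmetic explicit, a minimal SqueezeNet-style Fire module sketch (layer names follow the common SqueezeNet convention; it is not the code from this repo):

import torch
import torch.nn as nn

class Fire(nn.Module):
    def __init__(self, in_planes, squeeze_planes, expand1x1_planes, expand3x3_planes):
        super().__init__()
        self.squeeze = nn.Conv2d(in_planes, squeeze_planes, kernel_size=1)
        self.expand1x1 = nn.Conv2d(squeeze_planes, expand1x1_planes, kernel_size=1)
        self.expand3x3 = nn.Conv2d(squeeze_planes, expand3x3_planes, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.squeeze(x))
        # Output channels = expand1x1_planes + expand3x3_planes.
        return torch.cat([self.relu(self.expand1x1(x)),
                          self.relu(self.expand3x3(x))], dim=1)

# Paper-style sizing for C = 64 input channels: squeeze C/4, expand C/2 each,
# so the output has C/2 + C/2 = 64 channels again.
paper_fire = Fire(64, 16, 32, 32)
# With expand = 4 * squeeze (as the issue reads the code): 64 + 64 = 128 output channels.
code_fire = Fire(64, 16, 64, 64)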

Loss function problem

Hi,
In your paper, it says that during training the network used a weighted cross-entropy loss function. However, if I have not misread the code, it uses nn.NLLLoss instead of nn.CrossEntropyLoss (which combines nn.LogSoftmax() and nn.NLLLoss() in a single class). Why?
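
For what it's worth, a small sanity check of the equivalence the question hinges on: NLLLoss applied to log-softmax outputs gives the same value as CrossEntropyLoss applied to raw logits, so the split only changes where the log-softmax lives, not the loss itself. Shapes and weights below are illustrative:

import torch
import torch.nn as nn

torch.manual_seed(0)
logits = torch.randn(4, 20)            # (batch, num_classes) raw scores
target = torch.randint(0, 20, (4,))
weights = torch.rand(20)               # per-class weights, illustrative values

ce = nn.CrossEntropyLoss(weight=weights)(logits, target)
nll = nn.NLLLoss(weight=weights)(nn.LogSoftmax(dim=1)(logits), target)
print(torch.allclose(ce, nll))         # True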

the progress of visualize

Hello!

I am running the code in Colab.

However, it does not proceed past the next command; even after a lot of time, nothing changes.

If there is something wrong, can you correct it?

[screenshot of the stalled command]

RuntimeError: cuDNN error: CUDNN_STATUS_INTERNAL_ERROR

I'm having this error when training. My system configurations are listed below.

OS: CentOS Linux 7
PyTorch version: 1.1.0 (installed using conda)
TensorFlow-gpu version: 1.9.0
Python version: 3.6.8
CUDA/cuDNN version: 9.0/7.0.5
GPU: Nvidia GPU GeForce GTX 1080

I modified the network so that it has an input depth of 8 instead of 5 and noticed that this issue only appears when the depth is greater than 5. I can't figure out how to resolve the error message though.

/opt/conda/conda-bld/pytorch_1556653183467/work/aten/src/ATen/native/cuda/IndexKernel.cu:53: lambda [](int)->auto::operator()(int)->auto: block: [1390,0,0], thread: [64,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
...
/opt/conda/conda-bld/pytorch_1556653183467/work/aten/src/ATen/native/cuda/IndexKernel.cu:53: lambda [](int)->auto::operator()(int)->auto: block: [1390,0,0], thread: [125,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1556653183467/work/aten/src/ATen/native/cuda/IndexKernel.cu:53: lambda [](int)->auto::operator()(int)->auto: block: [1390,0,0], thread: [126,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1556653183467/work/aten/src/ATen/native/cuda/IndexKernel.cu:53: lambda [](int)->auto::operator()(int)->auto: block: [1390,0,0], thread: [127,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
Traceback (most recent call last):
  File "./train.py", line 115, in <module>
    trainer.train()
  File "../../tasks/semantic/modules/trainer.py", line 239, in train
    show_scans=self.ARCH["train"]["show_scans"])
  File "../../tasks/semantic/modules/trainer.py", line 320, in train_epoch
    output = model(in_vol, proj_mask)
  File "/home/media-server/.pyenv/versions/anaconda3-5.0.0/envs/rangenet++/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "../../tasks/semantic/modules/segmentator.py", line 149, in forward
    y, skips = self.backbone(x)
  File "/home/media-server/.pyenv/versions/anaconda3-5.0.0/envs/rangenet++/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "../..//backbones/darknet.py", line 171, in forward
    x, skips, os = self.run_layer(x, self.conv1, skips, os)
  File "../..//backbones/darknet.py", line 154, in run_layer
    y = layer(x)
  File "/home/media-server/.pyenv/versions/anaconda3-5.0.0/envs/rangenet++/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/media-server/.pyenv/versions/anaconda3-5.0.0/envs/rangenet++/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 338, in forward
    self.padding, self.dilation, self.groups)
RuntimeError: cuDNN error: CUDNN_STATUS_INTERNAL_ERROR

Any ideas on how to fix this?

running inference

Hi, I am trying to use darknet21 to obtain predictions on the KITTI tracking dataset. When I try to run infer.py to obtain predictions, it shows the following error:

  File "../..//tasks/semantic/dataset/kitti/parser.py", line 97, in __init__
    assert(len(scan_files) == len(label_files))
AssertionError

TypeError: only size-1 arrays can be converted to Python scalars

Hi, all,

When I run the training code: ./train.py -d /media/root/WDdata/dataset/kitti_lidar_vo_data/dataset -ac /data/code10/lidar-bonnetal/train/tasks/semantic/config/arch/darknet21.yaml -l /data/code10/lidar-bonnetal/train/tasks/semantic/log. I got the following error:

Lr: 9.998e-03 | Update: 3.934e-04 mean,4.035e-04 std | Epoch: [0][9563/9565] | Time 0.399 (0.539) | Data 0.097 (0.240) | Loss 0.9362 (1.5231) | acc 0.688 (0.559) | IoU 0.256 (0.173)
Lr: 9.999e-03 | Update: 7.671e-04 mean,8.116e-04 std | Epoch: [0][9564/9565] | Time 0.396 (0.539) | Data 0.094 (0.240) | Loss 1.3228 (1.5231) | acc 0.602 (0.559) | IoU 0.155 (0.173)
Best mean iou in training set so far, save model!
********************************************************************************
Traceback (most recent call last):
  File "./train.py", line 115, in <module>
    trainer.train()
  File "../../tasks/semantic/modules/trainer.py", line 259, in train
    save_scans=self.ARCH["train"]["save_scans"])
  File "../../tasks/semantic/modules/trainer.py", line 432, in validate
    color_fn)
  File "../../tasks/semantic/modules/trainer.py", line 174, in make_log_img
    depth, Trainer.get_mpl_colormap('viridis')) * mask[..., None]
TypeError: only size-1 arrays can be converted to Python scalars

It seems that something is wrong when calling cv2.applyColorMap(...) at the end of the first training epoch (line 174): Trainer.get_mpl_colormap('viridis') is a np.array of shape (256, 1, 3), but the second argument of cv2.applyColorMap(...) expects a scalar.
After I replace it with a scalar value as follows:
out_img = cv2.applyColorMap(depth, 2) * mask[..., None]

the training can proceed smoothly.

Any suggestion on this problem?

THX!
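
For reference, a minimal variant of the workaround above that keeps a viridis-style look by using OpenCV's built-in colormap id (the constant 2 used above is cv2.COLORMAP_JET; cv2.COLORMAP_VIRIDIS is only available in reasonably recent OpenCV builds, and the depth image below is synthetic stand-in data):

import cv2
import numpy as np

depth = np.random.rand(64, 2048).astype(np.float32)   # synthetic stand-in for the depth image
mask = (depth > 0.1).astype(np.float32)

depth_u8 = (255.0 * depth / max(float(depth.max()), 1e-6)).astype(np.uint8)
out_img = cv2.applyColorMap(depth_u8, cv2.COLORMAP_VIRIDIS) * mask[..., None]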

Training and Inference on other dataset / custom dataset

@niosus @stachnis @ovysotska @Chen-Xieyuanli @tano297 Thanks for open sourcing the work, great work by the way. I have a few queries:

  1. Can we use the model to test on a custom dataset or on other datasets like Argo AI, Lyft, or nuScenes besides SemanticKITTI? If so, what changes have to be made?

  2. To train on a custom dataset / Lyft / nuScenes, does this source code expect the input in the same format?

  3. Has this model been trained on point clouds with fewer points?

  4. In your paper you mention that RangeNet++ works on 360 degrees; is it possible to configure this to only the frontal view or any other required view?

Thanks in advance

A problem for training SqueezeSeg model

Hello.
I'm trying to train the SqueezeSeg model from random initial weights. However, the IoU seems too low after 16 h of training (about 16.2%).

GPU: a NVIDIA TITAN X.
Training set: sequence 0~7
Validation set: sequence 8
Arch config: squeezeseg.yaml (I just set batch_size=18 due to GPU memory limitation)

There is some information about my training in the screenshot below that may help.
Do I need more time for training? Could anyone give me some suggestions? @tano297
Thanks a lot!
[screenshot of training metrics]
