3D Semantic Segmentation in the Wild: Learning Generalized Models for Adverse-Condition Point Clouds
The official implementation of "3D Semantic Segmentation in the Wild: Learning Generalized Models for Adverse-Condition Point Clouds" (CVPR 2023). 🔥🔥🔥
🔥 For more information, follow the PAPER link! 🔥
Authors: Aoran Xiao, Jiaxing Huang, Weihao Xuan, Ruijie Ren, Kangcheng Liu, Dayan Guan, Abdulmotaleb El Saddik, Shijian Lu, Eric Xing
Download the SemanticSTF dataset from GoogleDrive or BaiduYun (code: 6haz). The data should be organized in the following format:
```
/SemanticSTF/
├── train/
│   ├── velodyne/
│   │   ├── 000000.bin
│   │   ├── 000001.bin
│   │   └── ...
│   └── labels/
│       ├── 000000.label
│       ├── 000001.label
│       └── ...
├── val/
│   └── ...
├── test/
│   └── ...
└── semanticstf.yaml
```
We provide class annotations in `semanticstf.yaml`.
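The `.bin`/`.label` naming suggests the standard SemanticKITTI binary layout: each `.bin` scan stores N × 4 float32 values (x, y, z, intensity), and each `.label` file stores one uint32 per point whose lower 16 bits encode the semantic class. Assuming that layout holds for SemanticSTF, a minimal stdlib-only reader (hypothetical helpers, not part of this repo) could look like:

```python
# Hypothetical reader for SemanticKITTI-style scans, assuming SemanticSTF
# uses the same binary layout (.bin: N x 4 float32; .label: N x uint32 with
# the lower 16 bits holding the semantic class). Not part of the code base.
import struct

def read_points(path):
    """Return a list of (x, y, z, intensity) tuples from a .bin scan."""
    with open(path, "rb") as f:
        raw = f.read()
    # Each point is 4 little-endian float32 values = 16 bytes.
    return [struct.unpack_from("<4f", raw, off) for off in range(0, len(raw), 16)]

def read_labels(path):
    """Return per-point semantic class IDs from a .label file."""
    with open(path, "rb") as f:
        raw = f.read()
    # Mask off the upper 16 bits (instance ID in the SemanticKITTI convention).
    return [v & 0xFFFF for (v,) in struct.iter_unpack("<I", raw)]
```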
Baseline code for 3D LiDAR Domain Generalization
```
cd pointDR/
```
GPU requirement: at least 1 × NVIDIA GeForce RTX 2080 Ti.
The code has been tested with:
- Python 3.8, CUDA 10.2, PyTorch 1.8.0, TorchSparse 1.4.0
- Python 3.8, CUDA 11.6, PyTorch 1.13.0, TorchSparse 2.0.0b0
- IMPORTANT: This code base is not compatible with TorchSparse 2.1.0.
Please refer to here for installation details. Inside your virtual environment, follow the TorchSparse installation instructions; this will install all the base packages.
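Since TorchSparse 2.1.0 is known to be incompatible, it may help to fail fast on an unsupported install. A hypothetical sanity check (not part of the repo) that accepts only the tested 1.4.x and 2.0.x series:

```python
# Hypothetical helper: check that a TorchSparse version string matches one
# of the series the README lists as tested (1.4.x, 2.0.x). Versions such as
# 2.1.0 are rejected, since the code base is not compatible with them.
import re

def torchsparse_supported(version: str) -> bool:
    """Return True if `version` falls in a tested major.minor series."""
    m = re.match(r"(\d+)\.(\d+)", version)
    if m is None:
        return False
    return (int(m.group(1)), int(m.group(2))) in {(1, 4), (2, 0)}
```

In practice one would call this on `torchsparse.__version__` right after import and raise a clear error instead of hitting an obscure failure later.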
Download the SynLiDAR dataset from here, then prepare the data folders as follows:
```
./
├── ...
└── path_to_data_shown_in_config/
    └── sequences/
        ├── 00/
        │   ├── velodyne/
        │   │   ├── 000000.bin
        │   │   ├── 000001.bin
        │   │   └── ...
        │   └── labels/
        │       ├── 000000.label
        │       ├── 000001.label
        │       └── ...
        └── 12/
```
To download SemanticKITTI, follow the instructions here. Then, prepare the paths as follows:
```
./
├── ...
└── path_to_data_shown_in_config/
    └── sequences/
        ├── 00/
        │   ├── velodyne/
        │   │   ├── 000000.bin
        │   │   ├── 000001.bin
        │   │   └── ...
        │   ├── labels/
        │   │   ├── 000000.label
        │   │   ├── 000001.label
        │   │   └── ...
        │   ├── calib.txt
        │   ├── poses.txt
        │   └── times.txt
        └── 08/
```
- Don't forget to revise the data root directory in `configs/kitti2stf/default.yaml` and `configs/synlidar2stf/default.yaml`.
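The exact key name depends on the config files themselves; as a purely hypothetical illustration, the entry to edit in each `default.yaml` would look something like:

```yaml
# Hypothetical excerpt of configs/kitti2stf/default.yaml -- the real key
# name may differ, so check the file itself before editing.
data_root: /path/to/SemanticKITTI/sequences
```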
For SemanticKITTI->SemanticSTF, run:
```
python train.py configs/kitti2stf/minkunet/cr0p5.yaml
```
For SynLiDAR->SemanticSTF, run:
```
python train.py configs/synlidar2stf/minkunet/cr0p5.yaml
```
For SemanticKITTI->SemanticSTF, run:
```
python evaluate.py configs/kitti2stf/minkunet/cr0p5.yaml --checkpoint_path /PATH/CHECKPOINT_NAME.pt
```
For SynLiDAR->SemanticSTF, run:
```
python evaluate_by_weather.py configs/synlidar2stf/minkunet/cr0p5.yaml --checkpoint_path /PATH/CHECKPOINT_NAME.pt
```
You can download the pretrained models for both SemanticKITTI->SemanticSTF and SynLiDAR->SemanticSTF from here.
- [x] Release of SemanticSTF dataset.
- [x] Release of code of PointDR.
- [x] Add license. See here for more details.
- [ ] Multi-modal UDA for normal-to-adverse weather 3DSS.
If you find our work useful in your research, please consider citing:
```
@article{xiao20233d,
  title={3D Semantic Segmentation in the Wild: Learning Generalized Models for Adverse-Condition Point Clouds},
  author={Xiao, Aoran and Huang, Jiaxing and Xuan, Weihao and Ren, Ruijie and Liu, Kangcheng and Guan, Dayan and Saddik, Abdulmotaleb El and Lu, Shijian and Xing, Eric},
  journal={arXiv preprint arXiv:2304.00690},
  year={2023}
}
```
The SemanticSTF dataset consists of re-annotated LiDAR point cloud data from the STF dataset. Kindly consider citing it if you intend to use the data:
```
@inproceedings{bijelic2020seeing,
  title={Seeing through fog without seeing fog: Deep multimodal sensor fusion in unseen adverse weather},
  author={Bijelic, Mario and Gruber, Tobias and Mannan, Fahim and Kraus, Florian and Ritter, Werner and Dietmayer, Klaus and Heide, Felix},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={11682--11692},
  year={2020}
}
```
This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Check our other repos for point cloud understanding!
- Learning From Synthetic LiDAR Sequential Point Cloud for Semantic Segmentation (AAAI2022)
- PolarMix: A General Data Augmentation Technique for LiDAR Point Clouds (NeurIPS 2022)
- Unsupervised Point Cloud Representation Learning with Deep Neural Networks: A Survey (TPAMI2023)
We thank the open-source projects TorchSparse, SPVNAS, and SeeingThroughFog.