This repo contains the source code and dataset for our paper:
Separated, Cross-Fused and Extensible G&L Fusion Network for LiDAR Semantic Segmentation
Paper | Video
Our platform configuration: Ubuntu 18.04, NVIDIA RTX 3090, cudatoolkit 11.3, Python 3.7 (in an Anaconda environment). Note: we have not tested other configurations.
Make sure a GPU driver supporting CUDA >= 11.3 (plus cuDNN) is installed before running the steps below.
- Create a virtual environment and activate it.
conda create -n SAMe3d python=3.7
conda activate SAMe3d
- Install the CUDA toolkit (e.g. v11.3) and PyTorch in the SAMe3d env.
conda install cuda -c nvidia/label/cuda-11.3.0 -c nvidia/label/cuda-11.3.1 -y
conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
- Install numba and torchpack.
conda install numba
pip install torchpack
- Install open3d.
pip install open3d
- Install spconv (CUDA build) and torch-scatter.
pip install spconv-cu113
conda install pytorch-scatter -c pyg
- Install strictyaml and the nuScenes devkit (a quick import check follows this list).
pip install strictyaml
pip install --no-dependencies nuscenes-devkit==1.1.1
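After completing the steps above, the following sanity check (our suggestion; not a script shipped with this repo) confirms that PyTorch sees the GPU and that every dependency imports cleanly:

```python
# Sanity check for the environment built above (illustrative, not part of the repo).
import torch

print(torch.version.cuda)            # expected: 11.3
print(torch.cuda.is_available())     # expected: True on a correctly set-up GPU

# any ImportError below means the matching install step failed
import numba
import torchpack
import open3d
import spconv.pytorch                # spconv v2.x (spconv-cu113) module layout
import torch_scatter
import strictyaml
import nuscenes

print("all dependencies import cleanly")
```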
We have organized the three datasets below. To train or evaluate on point clouds, you will need to download the required datasets.
- Sany (ours) - Baidu Drive (link pending copyright clearance)
./
├── ...
└── data_path/
    └── sany/
        ├── Mixing_station(MS)/        # Mixing station scene
        │   └── sequences/
        │       ├── 00/                # for training
        │       │   ├── velodyne/
        │       │   │   ├── xxx.bin
        │       │   │   ├── xxx.bin
        │       │   │   └── ...
        │       │   └── labels/
        │       │       ├── xxx.label
        │       │       ├── xxx.label
        │       │       └── ...
        │       ├── 01/                # for validation
        │       └── 02/                # for testing
        └── points(PG)/                # Proving ground scene
            └── sequences/
                ├── 00/                # for training
                │   └── ...
                ├── 01/                # for validation
                └── 02/                # for testing
- nuScenes - Baidu Drive (access code: ai67)
./
├── ...
└── data_path/
    └── nuscenes/
        ├── lidarseg/
        ├── maps/
        ├── samples/
        │   └── LIDAR_TOP/
        │       ├── n008-2018-05-21-11-06-59-0400__LIDAR_TOP__1526915243547836.pcd.bin
        │       └── ...
        ├── v1.0-trainval/
        ├── nuscenes_infos_train.pkl
        ├── nuscenes_infos_val.pkl
        └── nuscenes_infos_test.pkl
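A minimal sketch for checking the nuScenes download with the devkit installed earlier (note: since the devkit was installed with --no-dependencies, you may first need to pip install missing packages such as pyquaternion; the `data_path` below assumes the tree shown above):

```python
# Minimal sketch: verify the nuScenes layout above is readable (not a repo script).
from nuscenes.nuscenes import NuScenes

nusc = NuScenes(version='v1.0-trainval',
                dataroot='data_path/nuscenes',
                verbose=True)

# look up the LiDAR sweep attached to the first annotated sample
sample = nusc.sample[0]
lidar_token = sample['data']['LIDAR_TOP']
print(nusc.get('sample_data', lidar_token)['filename'])
```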
- SemanticKITTI - Baidu Drive (access code: qaos)
./
├── ...
└── data_path/
    └── sequences/
        ├── 00/                        # 00-07, 09-10 for training
        │   ├── velodyne/
        │   │   ├── 000000.bin
        │   │   ├── 000001.bin
        │   │   └── ...
        │   └── labels/
        │       ├── 000000.label
        │       ├── 000001.label
        │       └── ...
        ├── 08/                        # for validation
        ├── 11/                        # 11-21 for testing
        │   └── velodyne/
        │       ├── 000000.bin
        │       ├── 000001.bin
        │       └── ...
        └── 21/
            └── ...
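Each SemanticKITTI scan is a flat float32 array of (x, y, z, intensity) tuples, and each label file packs the semantic class into the lower 16 bits of a uint32. A minimal reader, for illustration (the repo's data loader presumably handles this internally):

```python
import numpy as np

# one scan: N x 4 float32 -> (x, y, z, intensity)
points = np.fromfile('data_path/sequences/00/velodyne/000000.bin',
                     dtype=np.float32).reshape(-1, 4)

# matching labels: lower 16 bits = semantic class, upper 16 bits = instance id
raw = np.fromfile('data_path/sequences/00/labels/000000.label', dtype=np.uint32)
sem_label = raw & 0xFFFF
inst_label = raw >> 16

assert points.shape[0] == sem_label.shape[0]
print(points.shape, np.unique(sem_label))
```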
Note: run the commands below inside the virtual environment (e.g. SAMe3d) created in Step 1 above.
- We provide a pretrained model: LINK (access code: wf40)
- To train on the Sany Mixing Station dataset, run
python train.py --config_path config/sany_mixing_parameters.yaml --device 0
- To train on the Sany Proving Ground dataset, run
python train.py --config_path config/sany_points_parameters.yaml --device 0
- To train on the nuScenes dataset, run
python train_nuscene.py --config_path config/nuScenes.yaml --device 0
- To train on the SemanticKITTI dataset, run
python train.py --config_path config/parameters.yaml --device 0
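To inspect a config before launching, here is a sketch using the strictyaml dependency installed above (without a schema, strictyaml parses every scalar as a string):

```python
# Peek at a training config before launching (illustrative, not a repo script).
import strictyaml

with open('config/parameters.yaml') as f:
    cfg = strictyaml.load(f.read())

print(cfg.data)   # nested dicts/lists of strings mirroring the YAML
```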
We thank the open-source codebases Cylinder3D and spconv.