Seg-RCNN (LZnet)
Blog / demo video: https://v.youku.com/v_show/id_XNDk1MjQxMzg4NA==.html?spm=a2h0c.8166622.PhoneSokuUgc_1.dtitle
Currently under testing on the KITTI BEV benchmark and ranked 3rd on the KITTI 3D benchmark.
Author: liangzhao
2020-10-10: created this README file
A 3D object detection toolbox that includes the implementations of Seg-RCNN and IOU-SSD. All algorithms are built on the PyTorch deep learning framework.
Requirements:
- python 3.5+
- cuda (version 10.2)
- torch (tested on 1.4.0)
- torchvision (tested on 0.5.0)
- opencv
- shapely
- mayavi
- spconv (v1.2)
git clone <XXX.git>
Install cuDNN by copying its headers and libraries into the CUDA 10.2 installation:
sudo cp include/cudnn.h /usr/local/cuda-10.2/include/
sudo cp lib64/libcudnn* /usr/local/cuda-10.2/lib64/
sudo chmod a+r /usr/local/cuda-10.2/include/cudnn.h
$ pip install torch==1.4.0 torchvision==0.5.0
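A quick way to sanity-check the Python side of the install (a minimal sketch; the expected versions are the ones listed in the requirements above):

```python
# Verify torch/torchvision and the CUDA/cuDNN build that PyTorch sees.
import torch
import torchvision

print(torch.__version__)               # expect 1.4.0
print(torchvision.__version__)         # expect 0.5.0
print(torch.version.cuda)              # expect 10.2
print(torch.backends.cudnn.version())  # the cuDNN build picked up by PyTorch
print(torch.cuda.is_available())       # True if a GPU is visible
```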
$ export PATH=/home/ubuntu-502/liang/cmake-3.14.0-Linux-x86_64/bin:$PATH
$ cmake --version
If the output is as follows, the CMake installation is complete:
cmake version 3.14.0
CMake suite maintained and supported by Kitware (kitware.com/cmake).
4.4 install boost.
sudo apt-get install libboost-all-dev
4.6 install spconv (v1.2; requires cuda >= 10.2 and cudnn).
cd spconv
python setup.py bdist_wheel
cd dist
pip install spconv-1.2-cp36-cp36m-linux_x86_64.whl
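To confirm the wheel works, here is a minimal smoke test for spconv 1.x (a sketch; assumes a CUDA-capable GPU, and the tensor sizes are arbitrary):

```python
# Build a tiny sparse tensor and push it through one submanifold convolution.
import torch
import spconv

features = torch.randn(10, 4).cuda()                  # 10 active voxels, 4 channels
indices = torch.randint(0, 41, (10, 4)).int().cuda()  # (batch, z, y, x) voxel coords
indices[:, 0] = 0                                     # single sample in the batch
x = spconv.SparseConvTensor(features, indices, spatial_shape=[41, 41, 41], batch_size=1)
net = spconv.SparseSequential(spconv.SubMConv3d(4, 16, 3, indice_key="subm1")).cuda()
print(net(x).dense().shape)  # expect torch.Size([1, 16, 41, 41, 41])
```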
4.7 install some common libs.
pip install easydict tensorboardX scikit-image opencv-python tqdm
5 install special libs (e.g. roiaware_pool3d_cuda).
cd pvdet/dataset/roiaware_pool3d/
python setup.py install
cd pvdet/ops/iou3d_nms/
python setup.py install
install pointnet2
cd pvdet/model/pointnet2/pointnet2_stack
python setup.py install
install fps_with_features_cuda
cd new_train/ops/fps_wit_forgound_point/
python setup.py install
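For reference, each of these setup.py files builds a CUDA extension through torch.utils.cpp_extension; a typical one looks like the sketch below (the module and source-file names here are hypothetical, and the actual files in this repo differ):

```python
# Generic setup.py for a custom CUDA op built against PyTorch.
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

setup(
    name="iou3d_nms",
    ext_modules=[
        CUDAExtension(
            name="iou3d_nms_cuda",
            sources=[  # hypothetical source files
                "src/iou3d_nms.cpp",
                "src/iou3d_nms_kernel.cu",
            ],
        )
    ],
    cmdclass={"build_ext": BuildExtension},
)
```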
- Download the 3D KITTI detection dataset from here. Data to download include:
- Velodyne point clouds (29 GB): input data to VoxelNet
- Training labels of object data set (5 MB): input label to VoxelNet
- Camera calibration matrices of object data set (16 MB): for visualization of predictions
- Left color images of object data set (12 GB): for visualization of predictions
- Create the cropped point clouds and the sample pool for data augmentation; please refer to SECOND.
$ python new_train/tools/create_data_info.py
- Split the training set into training and validation set according to the protocol here.
└── DATA_DIR
    ├── training <-- training data
    |   ├── image_2
    |   ├── label_2
    |   ├── velodyne
    |   └── velodyne_reduced
    └── testing <-- testing data
        ├── image_2
        ├── velodyne
        └── velodyne_reduced
(the KITTI test split ships without labels, so there is no label_2 under testing)
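A small script to sanity-check the layout above before training (DATA_DIR is a placeholder; point it at your own path):

```python
# Check that the expected KITTI sub-folders exist under DATA_DIR.
from pathlib import Path

DATA_DIR = Path("/path/to/DATA_DIR")  # placeholder, adjust to your setup
expected = {
    "training": ["image_2", "label_2", "velodyne", "velodyne_reduced"],
    "testing": ["image_2", "velodyne", "velodyne_reduced"],
}
for split, subdirs in expected.items():
    for sub in subdirs:
        d = DATA_DIR / split / sub
        print(f"{d}: {'ok' if d.is_dir() else 'MISSING'}")
```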
You can download the pretrained model here, which is trained on the train split (3712 samples) and evaluated on the val split (3769 samples) and the test split (7518 samples). The performance (using 40 recall positions) on the validation set is as follows:
Car | [email protected] | , 0.70 | , 0.70: |
---|---|---|---|
bbox | 99.12 | 96.09 | , 93.61 |
bev | 96.55 | 92.79 | , 90.32 |
3d | 91.13 | 81.54 | , 79.71 |
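Once downloaded, the checkpoint can be inspected with plain PyTorch before wiring it into the model (a minimal sketch; the top-level layout of the checkpoint dict is an assumption):

```python
# Peek inside the pretrained checkpoint without building the full network.
import torch

ckpt = torch.load("checkpoint_epoch_80.pth", map_location="cpu")
if isinstance(ckpt, dict):
    print(list(ckpt.keys()))  # typically model weights plus training metadata
```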
To train LZnet with a single GPU, run the following command:
python trainer.py
To train LZnet with multiple GPUs, run one of the following commands:
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 trainer.py --launcher pytorch
or, for the single-stage model:
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 sd_train.py --launcher pytorch
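For reference, torch.distributed.launch starts one process per GPU and passes each process a --local_rank argument; a launcher-compatible script usually initializes as in the sketch below (a generic pattern, not this repo's actual trainer.py):

```python
# Minimal distributed setup for a script started by torch.distributed.launch.
import argparse
import torch

parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=0)  # injected by the launcher
parser.add_argument("--launcher", type=str, default="none")
args = parser.parse_args()

if args.launcher == "pytorch":
    torch.cuda.set_device(args.local_rank)  # bind this process to one GPU
    torch.distributed.init_process_group(backend="nccl", init_method="env://")
```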
To evaluate the model and view the results in TensorBoard, first log in to the remote server with an SSH tunnel:
ssh -L 16006:127.0.0.1:16006 [email protected]
then run TensorBoard on the server:
tensorboard --port=16006 --logdir="/media/ubuntu-502/pan1/liang/PVRCNN-V1.1/output/single_stage_model/train/0.0.2/tensorboard"
tensorboard --port=16006 --logdir="/media/ubuntu-502/pan1/liang/PVRCNN-V1.1/output/single_stage_model/train/0.0.4/tensorboard"
and finally open http://127.0.0.1:16006 in a browser on the local machine.
To copy a trained checkpoint from the server to the local machine:
scp -r [email protected]:/media/ubuntu-502/pan1/liang/PVRCNN-V1.1/ckpt/LZnet/0.0.6/checkpoint_epoch_80.pth /home/liang/for_ubuntu502/PVRCNN-V1.1/ckpt/LZnet/0.0.6/
If you find this work useful in your research, please consider citing:
@inproceedings{,
title={},
author={},
booktitle={},
year={2020}
}