P3Former: Position-Guided Point Cloud Panoptic Segmentation Transformer

main figure

Introduction

This is an official release of Position-Guided Point Cloud Panoptic Segmentation Transformer.

Abstract

DEtection TRansformer (DETR) started a trend of using a group of learnable queries for unified visual perception. This work begins by applying this appealing paradigm to LiDAR-based point cloud segmentation and obtains a simple yet effective baseline. Although the naive adaptation obtains fair results, the instance segmentation performance is noticeably inferior to previous works. By diving into the details, we observe that instances in sparse point clouds are relatively small compared to the whole scene and often have similar geometry while lacking distinctive appearance for segmentation, properties that are rare in the image domain. Considering that instances in 3D are characterized more by their positional information, we emphasize its role during modeling and design a robust Mixed-parameterized Positional Embedding (MPE) to guide the segmentation process. It is embedded into backbone features and later guides the mask prediction and query update processes iteratively, leading to Position-Aware Segmentation (PA-Seg) and Masked Focal Attention (MFA). All these designs impel the queries to attend to specific regions and identify various instances. The method, named Position-guided Point cloud Panoptic segmentation transFormer (P3Former), outperforms previous state-of-the-art methods by 3.4% and 1.2% PQ on the SemanticKITTI and nuScenes benchmarks, respectively. The source code and models are available at https://github.com/SmartBot-PJLab/P3Former.
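To make the central idea concrete, here is a minimal sketch of a mixed-parameterized positional embedding in the spirit the abstract describes: each point is encoded under both a Cartesian and a polar parameterization, and the two embeddings are concatenated. All names, frequencies, and dimensions below are illustrative assumptions, not the authors' implementation.

```python
import torch

def sinusoidal_embed(coord, num_freqs=6):
    """Map scalar coordinates to sin/cos features at multiple frequencies."""
    freqs = 2.0 ** torch.arange(num_freqs, dtype=coord.dtype, device=coord.device)
    angles = coord.unsqueeze(-1) * freqs           # (N, num_freqs)
    return torch.cat([angles.sin(), angles.cos()], dim=-1)

def mixed_positional_embedding(xyz):
    """Embed each point under two parameterizations, Cartesian (x, y, z)
    and polar (radius, azimuth, z), and concatenate the results."""
    x, y, z = xyz.unbind(-1)
    rho = torch.sqrt(x ** 2 + y ** 2)              # radial distance in the BEV plane
    phi = torch.atan2(y, x)                        # azimuth angle
    cartesian = torch.cat([sinusoidal_embed(c) for c in (x, y, z)], dim=-1)
    polar = torch.cat([sinusoidal_embed(c) for c in (rho, phi, z)], dim=-1)
    return torch.cat([cartesian, polar], dim=-1)   # (N, 2 * 3 * 2 * num_freqs)

points = torch.randn(1024, 3)                      # N points with (x, y, z)
print(mixed_positional_embedding(points).shape)    # torch.Size([1024, 72])
```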

Results

SemanticKITTI test

| $\mathrm{PQ}$ | $\mathrm{PQ^{\dagger}}$ | $\mathrm{RQ}$ | $\mathrm{SQ}$ | $\mathrm{PQ}^{\mathrm{Th}}$ | $\mathrm{PQ}^{\mathrm{St}}$ | Download | Config |
|------|------|------|------|------|------|----------|--------|
| 65.3 | 67.8 | 74.9 | 86.6 | 67.4 | 63.7 | model | config |

SemanticKITTI validation

| $\mathrm{PQ}$ | $\mathrm{PQ^{\dagger}}$ | $\mathrm{RQ}$ | $\mathrm{SQ}$ | $\mathrm{PQ}^{\mathrm{Th}}$ | $\mathrm{PQ}^{\mathrm{St}}$ | Download | Config |
|------|------|------|------|------|------|----------|--------|
| 62.6 | 66.2 | 72.4 | 76.2 | 69.4 | 57.7 | model | config |
  • Pretraining the backbone helps stabilize the training process and yields slightly better results. You can pretrain a model with this config.

Installation

```bash
conda create -n p3former python==3.8 -y
conda activate p3former
pip install torch==1.10.1+cu111 torchvision==0.11.2+cu111 torchaudio==0.10.1 -f https://download.pytorch.org/whl/cu111/torch_stable.html
pip install openmim
mim install mmengine==0.7.4
mim install mmcv==2.0.0rc4
mim install mmdet==3.0.0
mim install mmdet3d==1.1.0
wget https://data.pyg.org/whl/torch-1.10.0%2Bcu113/torch_scatter-2.0.9-cp38-cp38-linux_x86_64.whl
pip install torch_scatter-2.0.9-cp38-cp38-linux_x86_64.whl
```
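After installation, a quick sanity check (a small helper of our own, not part of the repository) confirms that the pinned versions resolved and that CUDA is visible:

```python
# Verify that the pinned packages import and report the expected versions.
import torch
import mmengine, mmcv, mmdet, mmdet3d, torch_scatter

print('torch:', torch.__version__, '| CUDA available:', torch.cuda.is_available())
print('mmengine:', mmengine.__version__)
print('mmcv:', mmcv.__version__)
print('mmdet:', mmdet.__version__)
print('mmdet3d:', mmdet3d.__version__)
```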

Usage

Data preparation

SemanticKITTI

```
data/
├── semantickitti
│   ├── sequences
│   │   ├── 00
│   │   │   ├── labels
│   │   │   ├── velodyne
│   │   ├── 01
│   │   ├── ...
│   ├── semantickitti_infos_train.pkl
│   ├── semantickitti_infos_val.pkl
│   ├── semantickitti_infos_test.pkl
```

You can generate the `*.pkl` files by executing:

```bash
python tools/create_data.py semantickitti --root-path data/semantickitti --out-dir data/semantickitti --extra-tag semantickitti
```
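To confirm that data preparation succeeded, you can peek at one of the generated info files. The keys mentioned in the comments are typical of mmdet3d 1.1 info files; treat them as an assumption if your version differs.

```python
# Inspect a generated info file (path matches the layout above).
import pickle

with open('data/semantickitti/semantickitti_infos_val.pkl', 'rb') as f:
    infos = pickle.load(f)

# mmdet3d 1.1 info files are usually dicts with 'metainfo' and 'data_list'.
if isinstance(infos, dict):
    print(infos.get('metainfo'))
    print(len(infos.get('data_list', [])), 'samples')
```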

Training and testing

```bash
# train
sh dist_train.sh $CONFIG $GPUS

# val
sh dist_test.sh $CONFIG $CHECKPOINT $GPUS

# test
sh dist_test.sh $CONFIG $CHECKPOINT $GPUS
```
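If you prefer launching from Python instead of the distributed shell scripts, the same config can be driven through mmengine's `Runner` (single-process sketch; the config path below is illustrative, so substitute the config you downloaded from the results table):

```python
from mmengine.config import Config
from mmengine.runner import Runner

# Illustrative path; replace with the actual config file.
cfg = Config.fromfile('configs/p3former/p3former_semantickitti.py')
cfg.work_dir = 'work_dirs/p3former_semantickitti'

runner = Runner.from_cfg(cfg)  # builds model, datasets, and hooks from the config
runner.train()                 # for testing, set cfg.load_from and call runner.test()
```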

Citation

```
@article{xiao2023p3former,
    title={Position-Guided Point Cloud Panoptic Segmentation Transformer},
    author={Xiao, Zeqi and Zhang, Wenwei and Wang, Tai and Loy, Chen Change and Lin, Dahua and Pang, Jiangmiao},
    journal={arXiv preprint},
    year={2023}
}
```

Acknowledgements

We thank the contributors of MMDetection3D and the authors of Cylinder3D and K-Net for their great work.


p3former's Issues

Reproduce the baseline result

Hi Authors,

Thank you for this good work.

I am trying to reproduce the results of the baseline on the SemanticKITTI validation set as below:
[image]

How can I configure the code to train the baseline model?

I saw that I need to set the `pe_type` and `use_pa_seg` flags in the decoder configuration; is that all?

Thank you
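For reference, a purely hypothetical override along the lines the question describes; the flag names are taken from the question itself, while the nesting and values are unverified assumptions about this repository's configs:

```python
# Hypothetical baseline override: flag names come from the question above;
# the nesting and defaults are assumptions, not verified against the repo.
_base_ = ['./p3former_semantickitti.py']  # illustrative base config

model = dict(
    decode_head=dict(
        pe_type=None,      # drop the mixed-parameterized positional embedding
        use_pa_seg=False,  # disable position-aware segmentation
    )
)
```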

nuScenes Code

[image]
Why are the weights calculated but not used when computing the loss?

nuScenes Dataset

Hello,

Although this was discussed in #4, I'd like to ask once more: are there any plans to release the nuScenes code and instructions?

Furthermore, can you please share more details about the GPU memory requirements (I've seen you used 8x A100s, but do you use all 80 GB of each?) and the time (in hours/days) needed to train on nuScenes?

Best

Reproducing results & deployment

Hi!
Thanks for your contributions to point cloud panoptic segmentation! I want to reproduce the results on the SemanticKITTI dataset using your code.
I have some questions:

  1. Which config should I choose from the three configs? Also, "train_cfg", "val_cfg", and "test_cfg" have repeated definitions in the config file.
  2. How should I set the hyperparameters that are not clearly specified in the paper's experiments section and appendix in order to reproduce the baseline performance?
  3. Do you plan to release the code on the MMDetection3D platform?
  4. Lastly, can this model be deployed on an embedded platform? What suggestions do you have for converting the model to an engine file?

Thank you!!

Pretrain model request

Thanks for your excellent work and for open-sourcing the code.
I've tried to reproduce this project on SemanticKITTI, but due to limited training resources, I can't reach results similar to those reported in your paper.
May I ask for access to download "semantickitti_test_65.pth" and "semantickitti_val_62.6.pth", which are mentioned in the previous issue #5?

Visualization scripts?

Thanks for sharing your excellent open-source code.
I am highly interested in your work and would like to use it for visualization purposes. Can you provide me with a script or tutorial on how to visualize the point cloud?
Thank you very much.

Code release

Hi, thanks for sharing this great work!
Could you please estimate the code release date?
Thanks

save results

I am trying to run your code using the trained weights you provided.
(By the way, I am not sure what the difference is between 'semantickitti_test_65.3.pth' and 'semantickitti_val_62.6.pth'; were both trained on the training set, with one simply scoring higher?)
The test.py script runs, but no .label files are generated, and I could not figure out where or how the network's results are saved.
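For anyone hitting the same problem: SemanticKITTI tooling expects one little-endian uint32 per point, with the semantic class id in the lower 16 bits and the instance id in the upper 16 bits. Below is a hedged sketch of writing such a file; the prediction arrays are stand-ins, not outputs wired to this repository's test.py, and benchmark submissions additionally need class ids mapped back to the original SemanticKITTI label space.

```python
import numpy as np

def save_kitti_labels(sem_pred, inst_pred, out_path):
    """Pack per-point predictions into a SemanticKITTI .label file:
    lower 16 bits = semantic class id, upper 16 bits = instance id."""
    sem = sem_pred.astype(np.uint32) & 0xFFFF
    inst = inst_pred.astype(np.uint32) & 0xFFFF
    ((inst << 16) | sem).astype(np.uint32).tofile(out_path)

# Stand-in predictions for one scan of 1000 points.
sem_pred = np.full(1000, 10, dtype=np.int64)    # a single dummy class id
inst_pred = np.arange(1000, dtype=np.int64) % 5  # a few dummy instance ids
save_kitti_labels(sem_pred, inst_pred, '000000.label')
```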
