
📍PIN-SLAM: LiDAR SLAM Using a Point-Based Implicit Neural Representation for Achieving Global Map Consistency

Yue Pan · Xingguang Zhong · Louis Wiesmann · Thorbjörn Posewsky · Jens Behley · Cyrill Stachniss

University of Bonn

Preprint | Video

TL;DR: PIN-SLAM is a full-fledged implicit neural LiDAR SLAM system including odometry, loop closure detection, and globally consistent mapping

pin_slam_teaser

Globally consistent point-based implicit neural (PIN) map built with PIN-SLAM in Bonn. The high-fidelity mesh can be reconstructed from the neural point map.


pin_slam_loop_compare

Comparison of (a) the inconsistent mesh with duplicated structures reconstructed by PIN LiDAR odometry, and (b) the globally consistent mesh reconstructed by PIN-SLAM.


Demo videos:

  • Globally Consistent Mapping: demo_kitti00.mp4
  • Various Scenarios: demo_lidar_9scenes.mp4
  • RGB-D SLAM Extension: demo_replica_rgbd.mp4

Table of Contents
  1. Abstract
  2. Installation
  3. How to run PIN-SLAM
  4. Visualizer instructions
  5. Contact
  6. Related projects

Abstract

[Details (click to expand)] Accurate and robust localization and mapping are essential components for most autonomous robots. In this paper, we propose a SLAM system for building globally consistent maps, called PIN-SLAM, that is based on an elastic and compact point-based implicit neural map representation. Taking range measurements as input, our approach alternates between incremental learning of the local implicit signed distance field and the pose estimation given the current local map using a correspondence-free, point-to-implicit model registration. Our implicit map is based on sparse optimizable neural points, which are inherently elastic and deformable with the global pose adjustment when closing a loop. Loops are also detected using the neural point features. Extensive experiments validate that PIN-SLAM is robust to various environments and versatile to different range sensors such as LiDAR and RGB-D cameras. PIN-SLAM achieves pose estimation accuracy better or on par with the state-of-the-art LiDAR odometry or SLAM systems and outperforms the recent neural implicit SLAM approaches while maintaining a more consistent, and highly compact implicit map that can be reconstructed as accurate and complete meshes. Finally, thanks to the voxel hashing for efficient neural points indexing and the fast implicit map-based registration without closest point association, PIN-SLAM can run at the sensor frame rate on a moderate GPU.
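
As a rough, editorial illustration of this alternation (all names below are hypothetical placeholders, not the actual PIN-SLAM code), one SLAM iteration per frame looks conceptually like this:

import numpy as np

class NeuralPointMap:
    """Toy stand-in for the sparse neural point map (positions plus latent features)."""
    def __init__(self):
        self.positions = np.empty((0, 3))

    def update(self, scan_world):
        # Incremental mapping step: the real system optimizes per-point latent
        # features of a local SDF; here we only accumulate the points.
        self.positions = np.vstack([self.positions, scan_world])

def register_to_map(scan, pose_init, neural_map):
    # Correspondence-free point-to-implicit registration would minimize the
    # queried SDF values of the transformed scan; this placeholder returns the guess.
    return pose_init

def detect_loop(neural_map):
    # Loops are detected from the neural point features; placeholder.
    return None

pose = np.eye(4)
neural_map = NeuralPointMap()
for scan in [np.random.rand(100, 3), np.random.rand(100, 3)]:  # stand-in scans
    pose = register_to_map(scan, pose, neural_map)              # odometry
    scan_world = (pose[:3, :3] @ scan.T).T + pose[:3, 3]
    neural_map.update(scan_world)                               # incremental mapping
    if detect_loop(neural_map) is not None:
        pass  # pose-graph optimization + elastic deformation of the neural points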

Installation

Platform requirements

  • Ubuntu OS (tested on 20.04)

  • With GPU (recommended) or CPU only (runs much slower)

  • GPU memory requirement (> 6 GB recommended)

  • Windows/macOS supported in CPU-only mode

1. Set up conda environment

conda create --name pin python=3.8
conda activate pin

2. Install the key requirement PyTorch

conda install pytorch==2.0.0 torchvision==0.15.0 torchaudio==2.0.0 pytorch-cuda=11.7 -c pytorch -c nvidia 

The commands depend on your CUDA version. You may check the instructions here.
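
To quickly verify the installation afterwards (a minimal check, not part of the repository):

import torch
print(torch.__version__)          # should report 2.0.0
print(torch.cuda.is_available())  # True if the CUDA build matches your driver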

3. Install other dependencies

pip3 install open3d==0.17 scikit-image gtsam wandb tqdm rich roma natsort pyquaternion pypose evo laspy rospkg 

Note that rospkg is optional. You can install it if you would like to use PIN-SLAM with ROS.

Run PIN-SLAM

Clone the repository

git clone git@github.com:PRBonn/PIN_SLAM.git
cd PIN_SLAM

Sanity test

For a sanity test, do the following to download an example part (first 100 frames) of the KITTI dataset (seq 00):

sh ./scripts/download_kitti_example.sh

And then run:

python3 pin_slam.py ./config/lidar_slam/run_demo.yaml
[Details (click to expand)]

Use run_demo_no_vis.yaml if you are running on a server without an X service. Use run_demo_sem.yaml if you want to conduct metric-semantic SLAM using semantic segmentation labels.

You can visualize the SLAM process in PIN-SLAM visualizer and check the results in the ./experiments folder.

Run on your datasets

For an arbitrary data sequence, you can run:

python3 pin_slam.py path_to_your_config_file.yaml
[Details (click to expand)]

Generally speaking, you only need to edit pc_path in the config file, which is the path to the folder containing the point clouds (.bin, .ply, .pcd or .las format), one file per frame. For a ROS bag, you can use ./scripts/rosbag2ply.py to extract the point clouds in .ply format; a conversion sketch is shown below.
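
For reference, such a conversion can be sketched in a few lines (the bag path and point cloud topic below are placeholders; the bundled rosbag2ply.py script may expose different arguments):

import os
import numpy as np
import open3d as o3d
import rosbag
from sensor_msgs import point_cloud2

os.makedirs("ply", exist_ok=True)
bag = rosbag.Bag("your_data.bag")  # placeholder bag file
topic = "/os_cloud_node/points"    # placeholder point cloud topic
for i, (_, msg, _) in enumerate(bag.read_messages(topics=[topic])):
    pts = point_cloud2.read_points(msg, field_names=("x", "y", "z"), skip_nans=True)
    xyz = np.array(list(pts), dtype=np.float64)
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(xyz)
    o3d.io.write_point_cloud("ply/%06d.ply" % i, pcd)  # one .ply file per frame
bag.close()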

For pose estimation evaluation, you may also provide the path pose_path to the reference pose file and optionally the path calib_path to the extrinsic calibration file. Note that the pose file should be in the KITTI format or the TUM format (illustrated below).
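
For reference, a KITTI-format pose file stores one flattened 3x4 pose matrix [R | t] per line (12 values, row-major), while a TUM-format file stores one line of timestamp tx ty tz qx qy qz qw per frame. A tiny illustrative snippet (not part of the repository) that writes one line in each format:

import numpy as np

T = np.eye(4)  # example 4x4 homogeneous pose of one frame

# KITTI format: 12 values per line, the 3x4 matrix [R | t] flattened row-major
kitti_line = " ".join("%.6f" % v for v in T[:3, :].reshape(-1))

# TUM format: timestamp tx ty tz qx qy qz qw (identity rotation shown here;
# a real conversion needs a rotation-to-quaternion step, e.g. via pyquaternion)
timestamp = 0.0
tx, ty, tz = T[:3, 3]
tum_line = "%.6f %.6f %.6f %.6f 0 0 0 1" % (timestamp, tx, ty, tz)

print(kitti_line)
print(tum_line)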

For some popular datasets, you can run:

# KITTI dataset sequence 00
python3 pin_slam.py ./config/lidar_slam/run_kitti.yaml kitti 00  

# MulRAN dataset sequence KAIST01
python3 pin_slam.py ./config/lidar_slam/run_mulran.yaml mulran kaist01

# Newer College dataset sequence 01_short
python3 pin_slam.py ./config/lidar_slam/run_ncd.yaml ncd 01

# Replica dataset sequence room0
python3 pin_slam.py ./config/rgbd_slam/run_replica.yaml replica room0

The SLAM results and logs will be output in the output_root folder specified in the config file.

You may check here for the results that can be obtained with this repository on a couple of popular datasets.

The training logs can be monitored online via Weights & Biases if you turn on the wandb_vis_on option in the config file. If it's your first time using Weights & Biases, you will be asked to register and log in to your wandb account.

ROS 1 Support

If you are not using PIN-SLAM as a part of a ROS package, you can avoid the catkin stuff and simply run:

python3 pin_slam_ros.py [path_to_your_config_file] [point_cloud_topic_name]
[Details (click to expand)]

For example:

python3 pin_slam_ros.py ./config/lidar_slam/run_ros_general.yaml /os_cloud_node/points

After playing the ROS bag or launching the sensor, you can visualize the results in RViz with:

rviz -d ./config/pin_slam_ros.rviz 

You may use the ROS services save_results and save_mesh to save the results and the mesh in the output_root folder.

rosservice call /pin_slam/save_results
rosservice call /pin_slam/save_mesh

The process will stop and the results and logs will be saved in the output_root folder if no new messages are received for more than 30 seconds.

If you are running without a powerful GPU, PIN-SLAM may not run at the sensor frame rate. In that case, play the rosbag at a lower rate so that PIN-SLAM can keep up.

You can also put pin_slam_ros.py into a ROS package for rosrun or roslaunch.

Inspect the results after SLAM

After the SLAM process, you can reconstruct a mesh from the PIN map within an arbitrary bounding box and at an arbitrary resolution by running:

python3 vis_pin_map.py [path/to/your/result/folder] [marching_cubes_resolution_m] [(cropped)_map_file.ply] [output_mesh_file.ply] [mesh_min_nn]
[Details (click to expand)]

The bounding box of (cropped)_map_file.ply will be used as the bounding box for mesh reconstruction. mesh_min_nn controls the trade-off between completeness and accuracy. A smaller value (for example, 6) leads to a more complete mesh with more guessed artifacts; a larger value (for example, 15) leads to a less complete but more accurate mesh.

For example, for the case of the sanity test, run:

python3 vis_pin_map.py ./experiments/sanity_test_* 0.2 neural_points.ply mesh_20cm.ply 8
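
If you only want to mesh a sub-region, you can first crop the saved neural point map with Open3D to produce the (cropped)_map_file.ply. A small sketch (file names and bounding box values are placeholders; adapt the paths to your result folder):

import open3d as o3d

pcd = o3d.io.read_point_cloud("neural_points.ply")
bbox = o3d.geometry.AxisAlignedBoundingBox([-50.0, -50.0, -5.0], [50.0, 50.0, 15.0])
o3d.io.write_point_cloud("cropped_map.ply", pcd.crop(bbox))
# then: python3 vis_pin_map.py [path/to/your/result/folder] 0.2 cropped_map.ply mesh_crop_20cm.ply 8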

Visualizer Instructions

We provide a PIN-SLAM visualizer based on lidar-visualizer to monitor the SLAM process.

The keyboard callbacks are listed below.

[Details (click to expand)]
  • Space: pause/resume
  • ESC/Q: exit
  • G: switch between the global/local map visualization
  • E: switch between the ego/map viewpoint
  • F: toggle on/off the current point cloud visualization
  • M: toggle on/off the mesh visualization
  • A: toggle on/off the current frame axis & sensor model visualization
  • P: toggle on/off the neural points map visualization
  • D: toggle on/off the training data pool visualization
  • I: toggle on/off the SDF horizontal slice visualization
  • T: toggle on/off the PIN-SLAM trajectory visualization
  • Y: toggle on/off the ground truth trajectory visualization
  • U: toggle on/off the PIN odometry trajectory visualization
  • R: re-center the viewpoint
  • Z: 3D screenshot, save the currently visualized entities in the log folder
  • B: toggle on/off back face rendering
  • W: toggle on/off the mesh wireframe
  • Ctrl+9: set the mesh color as the normal direction
  • 5: switch between the point cloud for mapping and for registration (with point-wise weight)
  • 7: switch between black and white background
  • /: switch among the neural point color modes (0: geometric feature, 1: color feature, 2: timestamp, 3: stability, 4: random)
  • <: decrease the mesh nearest neighbor threshold (more complete, more artifacts)
  • >: increase the mesh nearest neighbor threshold (less complete but more accurate)
  • [/]: decrease/increase the mesh marching cubes voxel size
  • ↑/↓: move the horizontal SDF slice up/down
  • +/-: increase/decrease the point size

Contact

If you have any questions, please contact:

Related Projects

SHINE-Mapping (ICRA 23): Large-Scale 3D Mapping Using Sparse Hierarchical Implicit Neural Representations

LocNDF (RAL 23): Neural Distance Field Mapping for Robot Localization

KISS-ICP (RAL 23): A LiDAR odometry pipeline that just works

4DNDF (CVPR 24): 3D LiDAR Mapping in Dynamic Environments using a 4D Implicit Neural Representation


pin_slam's Issues

issue about mesh

Thank you for your outstanding work. I have a question about the KITTI 00 sequence: I observed that there are some blank areas behind the car. Is this because the LiDAR did not capture point clouds at the rear?
[attached screenshot]

odometry evaluation

Thanks for your excellent work!!!
I ran the code on KITTI 04 and got an absolute trajectory error of 0.34, while your paper reports 0.1 in Table IV. How can I get the same result?
[attached screenshot]

Difficulty running at the sensor frame rate on KITTI 00

Hi, thanks for your great work and for open-sourcing it. We tested on an RTX 3080 Ti GPU with 12 GB of memory, and the average processing time per frame is 245.8 ms over the whole KITTI 00 sequence. I think the performance of the RTX 3080 Ti is similar to that of the A4000. The parameters are all default. Can you give us some advice? Thank you!

Add pure-localization mode

It would be nice to have a pure-localization mode so that we can localize the robot inside a pre-built map. This may also work for revisited regions: we could skip the repeated mapping and only do localization when there are not many new observations.

About time consumption

Thank you for your wonderful work!

I ran your code on a Titan V GPU (12 GB) and set pool_capacity: 5e6 and batch_size: 8192, but I don't know why my time consumption is so long: on KITTI 00, the total running time is 43 minutes. How can I adapt the settings to my GPU for real-time operation?

These are my time details on KITTI 00:
[attached screenshot]

Sometimes the odometry time per frame is over 200 ms:
[attached screenshot]

Thank you for your help!

Unexpected artifacts when only performing mapping with ground-truth poses

Hi @YuePanEdward Thanks for your great work and making it open source!

I have run into a strange problem when running the mapping process on KITTI seq 07 with the tracking module disabled. I directly use the ground-truth poses for mapping, with run_kitti.yaml as the config file. I then noticed that the reconstruction of this region is not satisfactory when using the ground-truth poses, but if I use the full SLAM system with tracking, the region is nicely reconstructed.
[attached screenshots]

This problem is kind of weird. I would appreciate it if you could provide some advice or hints on how to figure it out. One potential reason is that the car stopped here for a while, so I have tried to remove these duplicated frames, but it does not seem to help.

Single frame

Thank you for your work. I would like to reconstruct a mesh from the point cloud of each individual frame and save it. Is this feasible with this project? Or should the mesh be generated from the completed sequence rather than directly from a single-frame point cloud?

CUDA error even when using cpu yaml

Hello and thank you for your great work.

I was trying to run the initial test with the command python pin_slam.py ./config/lidar_slam/run_demo_cpu.yaml, but I get this error:

  File "pin_slam.py", line 380, in <module>
    run_pin_slam()
  File "pin_slam.py", line 103, in run_pin_slam
    T0 = get_time()
  File "/home/omar-nour/catkin_ws/src/PIN_SLAM/utils/tools.py", line 273, in get_time
    torch.cuda.synchronize()
  File "/home/omar-nour/miniconda3/envs/pin/lib/python3.8/site-packages/torch/cuda/__init__.py", line 799, in synchronize
    _lazy_init()
  File "/home/omar-nour/miniconda3/envs/pin/lib/python3.8/site-packages/torch/cuda/__init__.py", line 302, in _lazy_init
    torch._C._cuda_init()
RuntimeError: No CUDA GPUs are available

I am trying to run PIN-SLAM on an Ubuntu 20.04 virtual machine (VirtualBox) with ROS Noetic.

LiDAR points and Visual RGB

Thank you for your wonderful work!

  1. How to integrate LiDAR point clouds with visual RGB?
  2. Is there a recommended dataset?

Thank you!

Noisy Mesh

Thanks for the incredible open-source project! I was running PIN-SLAM with the provided run_demo yaml file and the 100-frame KITTI example from the download script, and got a quite noisy mesh. The meshes in the paper and video look really smooth, so I'm wondering what I should do to address this. Thanks.

My GPU is RTX 4080 if it helps.
[attached screenshot]

torch.cuda.OutOfMemoryError: CUDA out of memory

Hello, when I try to run your code with run_demo.yaml, the following error occurs:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 396.00 MiB. GPU 0 has a total capacity of 3.81 GiB of which 348.00 MiB is free.
My GPU is an RTX 3050. How can I modify the configuration to make it run properly?
Thank you!

Have an individual data loader for each dataset

It would be better to have a separate data loader for each dataset. This way, we could avoid converting all datasets to the required format (point clouds in KITTI .bin, .ply, or .pcd format and poses in KITTI format). We could also better handle the loading of point-wise timestamps; one possible interface is sketched below.
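
For illustration, one possible shape of such a loader interface (purely hypothetical, not part of the current codebase) could be:

import glob
import os
from abc import ABC, abstractmethod

import numpy as np

class DatasetLoader(ABC):
    """Hypothetical per-dataset loader interface (not part of the current codebase)."""

    @abstractmethod
    def __len__(self):
        ...

    @abstractmethod
    def read_frame(self, idx):
        """Return (points: Nx3 array, point_timestamps: N array or None, pose: 4x4 array or None)."""

class KittiBinLoader(DatasetLoader):
    def __init__(self, seq_dir):
        self.files = sorted(glob.glob(os.path.join(seq_dir, "*.bin")))

    def __len__(self):
        return len(self.files)

    def read_frame(self, idx):
        scan = np.fromfile(self.files[idx], dtype=np.float32).reshape(-1, 4)
        # KITTI .bin stores x, y, z, intensity; no per-point timestamps or poses
        return scan[:, :3], None, None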

About the VBR dataset

Hi, thanks for your impressive work. I see there is a run_vbr.yaml in the config folder, but after I downloaded the VBR dataset, there was no /ply folder. Did you do some conversion preprocessing on the VBR dataset, for example using scripts/rosbag2ply.py? Additionally, each scene in VBR contains several .bag files; how do you combine them?

Thanks in advance for your reply.

Have the values of `local_neural_points` been modified between being selected from and reassigned to global neural points?

Thanks for your great work! I have the following question:

In NeuralPoints.reset_neural_points(), local_neural_points are selected from global neural points:

self.local_neural_points = self.neural_points[local_mask]

In NeuralPoints.assign_local_to_global(), local_neural_points are reassigned to global neural points:

self.neural_points[local_mask] = self.local_neural_points

But after searching all the usages of local_neural_points, I cannot see where their values are modified anywhere in the code. In other words, does self.neural_points[local_mask] keep the same values after the NeuralPoints.assign_local_to_global() operation?

Thanks for your time and explanation.
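
For context, a small self-contained PyTorch example (independent of the PIN-SLAM code) showing that boolean-mask indexing returns a copy, so only the explicit write-back with t[mask] = ... propagates any changes:

import torch

t = torch.zeros(5)
mask = torch.tensor([True, False, True, False, True])

local = t[mask]   # boolean-mask indexing returns a copy, not a view
local += 1.0      # modifying the copy leaves t unchanged
print(t)          # tensor([0., 0., 0., 0., 0.])

t[mask] = local   # the explicit write-back copies the values into t
print(t)          # tensor([1., 0., 1., 0., 1.])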
