ispc-lab / lidar4d

💫 [CVPR 2024] LiDAR4D: Dynamic Neural Fields for Novel Space-time View LiDAR Synthesis

Home Page: https://dyfcalid.github.io/LiDAR4D

License: Apache License 2.0

Python 95.47% Shell 0.53% Cuda 3.39% C++ 0.61%
autonomous-driving computer-vision cvpr2024 dynamic-scene lidar neural-rendering novel-view-synthesis point-cloud reconstruction

lidar4d's Introduction

LiDAR4D: Dynamic Neural Fields for Novel Space-time View LiDAR Synthesis

Zehan Zheng, Fan Lu, Weiyi Xue, Guang Chen†, Changjun Jiang († Corresponding author)
CVPR 2024

Paper (arXiv) | Paper (CVPR) | Project Page | Video | Poster | Slides

This repository is the official PyTorch implementation for LiDAR4D.

Table of Contents
  1. Changelog
  2. Demo
  3. Introduction
  4. Getting started
  5. Results
  6. Simulation
  7. Citation

Changelog

2024-6-1:🕹️ We release the simulator for easier rendering and manipulation. Happy Children's Day and Have Fun!
2024-5-4:📈 We update flow fields and improve temporal interpolation.
2024-4-13:📈 We update U-Net of LiDAR4D for better ray-drop refinement.
2024-4-5:🚀 Code of LiDAR4D is released.
2024-4-4:🔥 The preprint paper is now available on arXiv, along with the project page.
2024-2-27:🎉 Our paper is accepted by CVPR 2024.

Demo

LiDAR4D_demo.mp4

Introduction

LiDAR4D is a differentiable LiDAR-only framework for novel space-time LiDAR view synthesis, which reconstructs dynamic driving scenarios and generates realistic LiDAR point clouds end-to-end. It adopts 4D hybrid neural representations and motion priors derived from point clouds for geometry-aware and time-consistent large-scale scene reconstruction.

Getting started

🛠️ Installation

git clone https://github.com/ispc-lab/LiDAR4D.git
cd LiDAR4D

conda create -n lidar4d python=3.9
conda activate lidar4d

# PyTorch
# CUDA 12.1
pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu121
# CUDA 11.8
# pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu118
# CUDA <= 11.7
# pip install torch==2.0.0 torchvision torchaudio

# Dependencies
pip install -r requirements.txt

# Local compile for tiny-cuda-nn
git clone --recursive https://github.com/nvlabs/tiny-cuda-nn
cd tiny-cuda-nn/bindings/torch
python setup.py install

# compile packages in utils
cd utils/chamfer3D
python setup.py install
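
If the compile steps above fail with an nvcc: not found error (see the issue further down this page), make sure the CUDA toolkit is on your PATH before building. A minimal sanity check, assuming a standard CUDA install under /usr/local/cuda (adjust the path to your setup):

# make nvcc visible to the build (install location is an assumption)
export CUDA_HOME=/usr/local/cuda
export PATH=$CUDA_HOME/bin:$PATH

# verify the toolchain: nvcc is found and PyTorch sees the GPU
nvcc --version
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"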

📁 Dataset

KITTI-360 dataset (Download)

We use sequence 00 (2013_05_28_drive_0000_sync) for the experiments in our paper.

Download the KITTI-360 dataset (2D images are not needed) and put it under data/kitti360,
or use a symlink: ln -s DATA_ROOT/KITTI-360 ./data/kitti360/.
The folder tree is as follows:

data
└── kitti360
    └── KITTI-360
        ├── calibration
        ├── data_3d_raw
        └── data_poses

Next, run the KITTI-360 preprocessing script (set DATASET and SEQ_ID in the script first):

bash preprocess_data.sh
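
For example, to preprocess the dynamic sequence 4950, the two variables would be set along these lines (a sketch; check preprocess_data.sh for the exact variable format):

# inside preprocess_data.sh (values shown are an example)
DATASET=kitti360
SEQ_ID=4950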

After preprocessing, your folder structure should look like this:

configs
├── kitti360_{sequence_id}.txt
data
└── kitti360
    ├── KITTI-360
    │   ├── calibration
    │   ├── data_3d_raw
    │   └── data_poses
    ├── train
    ├── transforms_{sequence_id}test.json
    ├── transforms_{sequence_id}train.json
    └── transforms_{sequence_id}val.json

🚀 Run LiDAR4D

Set the corresponding sequence config path in --config; you can change the logging/output path via --workspace. Remember to set an available GPU ID in CUDA_VISIBLE_DEVICES.
Run the following command:

# KITTI-360
bash run_kitti_lidar4d.sh
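
The script wraps the Python entry point, so an equivalent direct invocation looks roughly like the sketch below (the entry-point name main_lidar4d.py is an assumption; check the .sh file for the actual command):

# sketch: train/test sequence 4950 on GPU 0 (entry-point name is an assumption)
CUDA_VISIBLE_DEVICES=0 python main_lidar4d.py \
    --config configs/kitti360_4950.txt \
    --workspace log/kitti360_lidar4d_f4950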

📊 Results

KITTI-360 Dynamic Dataset (Sequences: 2350, 4950, 8120, 10200, 10750, 11400)

                   Point Cloud         Depth                                          Intensity
Method             CD↓      F-Score↑   RMSE↓    MedAE↓   LPIPS↓   SSIM↑    PSNR↑     RMSE↓    MedAE↓   LPIPS↓   SSIM↑    PSNR↑
LiDAR-NeRF         0.1438   0.9091     4.1753   0.0566   0.2797   0.6568   25.9878   0.1404   0.0443   0.3135   0.3831   17.1549
LiDAR4D (Ours) †   0.1002   0.9320     3.0589   0.0280   0.0689   0.8770   28.7477   0.0995   0.0262   0.1498   0.6561   20.0884

KITTI-360 Static Dataset (Sequences: 1538, 1728, 1908, 3353)

                   Point Cloud         Depth                                          Intensity
Method             CD↓      F-Score↑   RMSE↓    MedAE↓   LPIPS↓   SSIM↑    PSNR↑     RMSE↓    MedAE↓   LPIPS↓   SSIM↑    PSNR↑
LiDAR-NeRF         0.0923   0.9226     3.6801   0.0667   0.3523   0.6043   26.7663   0.1557   0.0549   0.4212   0.2768   16.1683
LiDAR4D (Ours) †   0.0834   0.9312     2.7413   0.0367   0.0995   0.8484   29.3359   0.1116   0.0335   0.1799   0.6120   19.0619

†: Latest results, improved over those reported in the paper.
Experiments were conducted on an NVIDIA RTX 4090 GPU. Results may vary slightly due to randomness.

🕹️ Simulation

After reconstruction, you can use the simulator to render and manipulate LiDAR point clouds across the whole scene. It supports dynamic scene re-play, novel LiDAR configurations (--fov_lidar, --H_lidar, --W_lidar), and novel trajectories (--shift_x, --shift_y, --shift_z).
We also provide a simple demo setting that transforms the LiDAR configuration from KITTI-360 to NuScenes, via the --kitti2nus flag in the bash script.
Make sure the sequence config and the corresponding workspace and model checkpoint path (--ckpt) are set correctly.
Run the following command:

bash run_kitti_lidar4d_sim.sh
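
For example, re-rendering sequence 4950 with a sparser 32-beam LiDAR and a 2 m lateral trajectory shift could look roughly like this sketch (the entry-point name and checkpoint path are assumptions; check the .sh file for the actual command):

# sketch: novel LiDAR configuration and shifted trajectory (names are assumptions)
CUDA_VISIBLE_DEVICES=0 python main_lidar4d_sim.py \
    --config configs/kitti360_4950.txt \
    --ckpt log/kitti360_lidar4d_f4950/checkpoints/<checkpoint_file> \
    --H_lidar 32 --W_lidar 1080 \
    --shift_x 0.0 --shift_y 2.0 --shift_z 0.0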

The results will be saved in the workspace folder.

Acknowledgement

We sincerely appreciate the great contribution of the following works:

Citation

If you find our repo or paper helpful, feel free to support us with a star 🌟 or use the following citation:

@inproceedings{zheng2024lidar4d,
  title     = {LiDAR4D: Dynamic Neural Fields for Novel Space-time View LiDAR Synthesis},
  author    = {Zheng, Zehan and Lu, Fan and Xue, Weiyi and Chen, Guang and Jiang, Changjun},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2024}
}

License

All code within this repository is under Apache License 2.0.

lidar4d's People

Contributors

dyfcalid, martinmeinke

lidar4d's Issues

Some questions about the kitti360 dataset

Hi,

This is excellent work; thank you for sharing it.
However, I see that the experiments were conducted by training and testing on each sequence individually. How were the final metrics obtained? In other words, were all the experiments in the paper conducted using scene 4950 from sequence 2013_05_28_drive_0000 for both training and testing, or were there other setups?

About the SEQ_ID number of the kitti-360 dataset

In your readme.md, you mention SEQ_IDs such as 2350, 4950, 8120, 10200, 10750, 11400. How do I obtain these? I directly downloaded the Raw Velodyne Scans data, but I didn't see any information about SEQ_IDs there.
The official website provides the listing shown in the attached screenshots. [screenshots omitted]
So, are my SEQ_IDs 0000, 0002, 0003, 0004, and 0005?

nvcc: not found

While following the installation tutorial, I ran into an "nvcc: not found" error. CUDA has never been installed on this Ubuntu 20.04 machine, yet this error appears here:

(lidar4d) supercoconut@supercoconut:~/Myfile/LiDAR4D/tiny-cuda-nn/bindings/torch$ python setup.py install
/home/supercoconut/Myfile/LiDAR4D/tiny-cuda-nn/bindings/torch/setup.py:5: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
from pkg_resources import parse_version
Building PyTorch extension for tiny-cuda-nn version 1.7
Obtained compute capability 89 from PyTorch
sh: 1: nvcc: not found
Targeting C++ standard 14
running install
/home/supercoconut/anaconda3/envs/lidar4d/lib/python3.9/site-packages/setuptools/_distutils/cmd.py:66: SetuptoolsDeprecationWarning: setup.py install is deprecated.
!!

    ********************************************************************************
    Please avoid running ``setup.py`` directly.
    Instead, use pypa/build, pypa/installer or other
    standards-based tools.

    See https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html for details.
    ********************************************************************************

!!
self.initialize_options()
/home/supercoconut/anaconda3/envs/lidar4d/lib/python3.9/site-packages/setuptools/_distutils/cmd.py:66: EasyInstallDeprecationWarning: easy_install command is deprecated.
!!

    ********************************************************************************
    Please avoid running ``setup.py`` and ``easy_install``.
    Instead, use pypa/build, pypa/installer or other
    standards-based tools.

    See https://github.com/pypa/setuptools/issues/917 for details.
    ********************************************************************************

!!
self.initialize_options()
running bdist_egg
running egg_info
creating tinycudann.egg-info
writing tinycudann.egg-info/PKG-INFO
writing dependency_links to tinycudann.egg-info/dependency_links.txt
writing top-level names to tinycudann.egg-info/top_level.txt
writing manifest file 'tinycudann.egg-info/SOURCES.txt'
/home/supercoconut/anaconda3/envs/lidar4d/lib/python3.9/site-packages/torch/utils/cpp_extension.py:502: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend.
warnings.warn(msg.format('we could not find ninja.'))
reading manifest file 'tinycudann.egg-info/SOURCES.txt'
writing manifest file 'tinycudann.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_py
creating build
creating build/lib.linux-x86_64-cpython-39
creating build/lib.linux-x86_64-cpython-39/tinycudann
copying tinycudann/modules.py -> build/lib.linux-x86_64-cpython-39/tinycudann
copying tinycudann/__init__.py -> build/lib.linux-x86_64-cpython-39/tinycudann
copying tinycudann/bindings.cpp -> build/lib.linux-x86_64-cpython-39/tinycudann
running build_ext
error: [Errno 2] No such file or directory: '/usr/local/cuda/bin/nvcc'

how to set the pano height and pano width

For the KITTI-360 dataset, you set pano height = 66 and pano width = 1030 when generating the range view. How were these two values obtained? I want to use my own LiDAR data, so what do these two parameters depend on? [screenshot omitted]

About NuScenes

Hi, thank you for the excellent paper and clean code. I would like to know how to train and test (i.e., the data split) on the NuScenes dataset; would you consider releasing the corresponding code?

points error

Thanks for your work. I'm testing on KITTI-360 and found that the synthesized points are stored as .npy files, so I used the following script to convert one to .pcd:

import numpy as np
import open3d as o3d

# load the synthesized result saved by LiDAR4D
path = "./log/kitti360_lidar4d_f4950_release/results/test_lidar4d_ep0639_0002_depth_lidar.npy"
points = np.load(path)
print(points.shape, points[:10])

# wrap the (N, 3) array in an Open3D point cloud and write it out
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)
o3d.io.write_point_cloud("2.pcd", pcd)

But the depths look wrong. The first screenshot (captured from pcl_viewer) shows the converted point cloud, the second shows the corresponding depth map (test_lidar4d_ep0639_0002_depth), and the third shows the evaluation output. [screenshots omitted]
Could you please help me find what is wrong?

question about dataset preprocess

Hi,

I'm very interested in your work.
I noticed that the code only uses the sequence 2013_05_28_drive_0000, and the frame splitting is hard-coded.

My questions are:

  1. Are the results in the paper evaluated on 2013_05_28_drive_0000 or all sequences?
  2. If they are evaluated on all sequences, how is the frame splitting done?

For reference: data/preprocess/generate_rangeview.py, line 73, and line 112 of the same file. [screenshots omitted]
