
shine_mapping's Introduction

✨ SHINE-Mapping: Large-Scale 3D Mapping Using Sparse Hierarchical Implicit Neural Representations

Xingguang Zhong* · Yue Pan* · Jens Behley · Cyrill Stachniss

University of Bonn

(* Equal Contribution)

Incremental Mapping: shine_incremental.mp4
Reconstruction Results: shine_reconresult.mp4
Table of Contents
  1. Abstract
  2. Installation
  3. Docker
  4. Prepare data
  5. How to run
  6. Evaluation
  7. Tips
  8. Citation
  9. Contact
  10. Acknowledgment

Abstract

Accurate mapping of large-scale environments is an essential building block of most outdoor autonomous systems. Challenges of traditional mapping methods include the balance between memory consumption and mapping accuracy. This paper addresses the problems of achieving large-scale 3D reconstructions with implicit representations using 3D LiDAR measurements. We learn and store implicit features through an octree-based hierarchical structure, which is sparse and extensible. The features can be turned into signed distance values through a shallow neural network. We leverage binary cross entropy loss to optimize the local features with the 3D measurements as supervision. Based on our implicit representation, we design an incremental mapping system with regularization to tackle the issue of catastrophic forgetting in continual learning. Our experiments show that our 3D reconstructions are more accurate, complete, and memory-efficient than current state-of-the-art 3D mapping methods.


Installation

1. Clone SHINE Mapping repository

git clone git@github.com:PRBonn/SHINE_mapping.git
cd SHINE_mapping

2. Set up conda environment

conda create --name shine python=3.7
conda activate shine

3. Install the key requirement kaolin

Kaolin depends on PyTorch (>= 1.8, <= 1.13.1). Please install the PyTorch build matching your CUDA version (which can be checked with nvcc --version). You can find the installation commands here.

For example, for CUDA version >=11.6, you can use:

pip install torch==1.12.1+cu116 torchvision==0.13.1+cu116 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu116
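You can quickly confirm that the installed PyTorch build matches your CUDA setup (a minimal check, analogous to the kaolin check below):

python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"

The printed values should be consistent with the wheel you installed (e.g., 1.12.1+cu116, 11.6, True).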

Kaolin now supports installation with wheels. For example, to install kaolin 0.12.0 with torch 1.12.1 and CUDA 11.6:

pip install kaolin==0.12.0 -f https://nvidia-kaolin.s3.us-east-2.amazonaws.com/torch-1.12.1_cu116.html
[Or you can build kaolin by yourself (click to expand)]

Follow the instructions to install kaolin. First, clone kaolin into a local directory:

git clone --recursive https://github.com/NVIDIAGameWorks/kaolin
cd kaolin

Then install kaolin by:

python setup.py develop

Use python -c "import kaolin; print(kaolin.__version__)" to check if kaolin is successfully installed.

4. Install the other requirements

pip install open3d scikit-image wandb tqdm natsort pyquaternion

Containerized installation

Note that your CUDA version must be >= 11.6.2 to be compatible with the container.

1. Install docker

https://docs.docker.com/engine/install/ubuntu/

2. Install nvidia container runtime

https://developer.nvidia.com/nvidia-container-runtime

3. Clone SHINE Mapping repository

git clone git@github.com:PRBonn/SHINE_mapping.git
cd SHINE_mapping

4. Build container

docker build --tag shine .

5. Run container with example

mkdir /tmp/shine_test_data
docker run --rm -v $(pwd)/:/repository -v /tmp/shine_test_data:/data -it --gpus all shine

Results will be produced in /tmp/shine_test_data/results.

6. Run container on your own data

docker run --rm -v .:/repository -v ${MY_DATA_DIR}:/data -it --gpus all shine bash

where ${MY_DATA_DIR} is the directory on the host with data in the format described in config/kitti/docker_kitti_batch.yaml. Once inside the container, run SHINE Mapping as described below. Results will be found on the host in ${MY_DATA_DIR}/results.


Prepare data

Generally speaking, you only need to provide:

  1. pc_path : the folder containing the point cloud (.bin, .ply or .pcd format) for each frame.
  2. pose_path : the pose file (.txt) containing the transformation matrix of each frame.
  3. calib_path : the calib file (.txt) containing the static transformation between the sensor and body frames (optional; an identity matrix is used if set to '').

They all follow the KITTI odometry data format.

After preparing the data, you need to correctly set the data paths (pc_path, pose_path and calib_path) in the config files under the config folder. You may also set a path output_root to store the experiment results and logs.
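For reference, the sketch below shows how KITTI-format poses and calibration are typically combined into per-frame LiDAR poses in the world frame (a simplified illustration with hypothetical file paths and function name; the repository's own loader in utils/pose.py is the authoritative implementation):

    import numpy as np

    def load_poses_kitti(pose_file, calib_file=""):
        # Tr: static sensor-to-body transform from calib.txt; identity if calib_path is ''
        Tr = np.eye(4)
        if calib_file:
            for line in open(calib_file):
                if line.startswith("Tr"):
                    values = [float(v) for v in line.split(":", 1)[1].split()]
                    Tr[:3, :4] = np.array(values).reshape(3, 4)
        Tr_inv = np.linalg.inv(Tr)

        poses = []
        for line in open(pose_file):
            pose = np.eye(4)
            pose[:3, :4] = np.array([float(v) for v in line.split()]).reshape(3, 4)
            # express the LiDAR pose in the world frame (same composition used in utils/pose.py)
            poses.append(Tr_inv @ pose @ Tr)
        return poses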

Here, we provide links to several publicly available datasets for testing SHINE Mapping:

MaiCity synthetic LiDAR dataset

Download the dataset from here or use the following script to download (3.4GB):

sh ./scripts/download_maicity.sh

KITTI real-world LiDAR dataset

Download the full dataset from here.

If you want to use an example part of the dataset (seq 00) for the test, you can use the following script to download (117 MB):

sh ./scripts/download_kitti_example.sh

Newer College real-world LiDAR dataset

Download the full dataset from here.

If you want to use an example part of the dataset (Quad) for the test, you can use the following script to download (634 MB):

sh ./scripts/download_ncd_example.sh

RGB-D datasets

SHINE Mapping also supports mapping on RGB-D datasets. You may first try the synthetic dataset from NeuralRGB-D. You can download the full dataset from here or use the following script to download (7.25 GB):

sh ./scripts/download_neural_rgbd_data.sh

After downloading the data, you need to convert the dataset to the KITTI format by running the following for each sequence:

sh ./scripts/convert_rgbd_to_kitti_format.sh

Mapping without ground truth pose

[Details (click to expand)]

Our method is currently a mapping-with-known-pose system. If you do not have the ground truth pose file, you may use a LiDAR odometry system such as KISS-ICP to easily estimate the pose.

You can simply install KISS-ICP by:

pip install kiss-icp

And then run KISS-ICP with your data path pc_path

kiss_icp_pipeline <pc_path>

The estimated pose file can be found in ./results/latest/velodyne.txt. You can directly use it as your pose_path. In this case, you do not need a calib file, so just set calib_path: "" in the config file.

Generate colorized mesh

Check the repository Color-SHINE-MAPPING (thanks @ZorAttC for the contribution) for using SHINE Mapping to reconstruct colorized meshes from colorized point clouds.


Run

We take the MaiCity dataset as an example to show how SHINE Mapping works. You can simply replace maicity with your dataset name in the config file path, such as ./config/[dataset]/[dataset]_[xxx].yaml.

The results will be stored under your experiment name and starting timestamp in the output_root directory set in the config file. You can find the reconstructed mesh (*.ply format) and the optimized model in the mesh and model folders, respectively. If the save_map option is turned on, the grid SDF map can be found in the map folder.

For mapping based on offline batch processing, use:

python shine_batch.py ./config/maicity/maicity_batch.yaml
[Expected results (click to expand)]
maicity_shine_batch_20cm.mp4

For incremental mapping with replay strategy (within a local bounding box), use:

python shine_incre.py ./config/maicity/maicity_incre_replay.yaml

An interactive visualizer will pop up if you set o3d_vis_on: True (the default) in the config file. You can press space to pause and resume.

[Expected results (click to expand)]

For the sake of efficiency, we sacrifice a bit of mapping quality by using a 50cm leaf voxel size for the feature octree here.

To only visualize the mesh in a local bounding box for faster operation, you can set mc_local: True and mc_with_octree: False in the config file.

maicity_shine_incre_replay_50cm.mp4

For incremental mapping with a regularization strategy, use:

python shine_incre.py ./config/maicity/maicity_incre_reg.yaml
[Expected results (click to expand)]

For the sake of efficiency, we sacrifice a bit of mapping quality to use a 50cm leaf voxel size for the feature octree.

maicity_shine_incre_reg_50cm.mp4
[Expected results on other datasets (click to expand)]

KITTI

Newer College

Apollo

Wild Place Forests

IPB Office

Replica

ICL Living Room

The logs can be monitored online via Weights & Biases if you turn the wandb_vis_on option on. If it's your first time using Weights & Biases, you will be asked to register and log in to your wandb account.

Evaluation

To evaluate the reconstruction quality, you need to provide the (reference) ground truth point cloud and your reconstructed mesh. The ground truth point cloud can be found in (or sampled from) the downloaded folders of the MaiCity, Newer College and Neural RGB-D datasets.

Please change the data path and evaluation set-up in ./eval/evaluator.py and then run:

python ./eval/evaluator.py

to get the reconstruction metrics such as Chamfer distance, completeness, F-score, etc.
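As a rough illustration of these metrics, here is a simplified sketch using Open3D with hypothetical file names (not the project's evaluator.py, which remains the reference implementation):

    import numpy as np
    import open3d as o3d

    gt_pcd = o3d.io.read_point_cloud("gt_map.ply")                   # reference point cloud
    mesh = o3d.io.read_triangle_mesh("reconstructed_mesh.ply")       # your reconstruction
    pred_pcd = mesh.sample_points_uniformly(number_of_points=1000000)

    # accuracy: prediction -> ground truth, completeness: ground truth -> prediction
    acc = np.asarray(pred_pcd.compute_point_cloud_distance(gt_pcd))
    comp = np.asarray(gt_pcd.compute_point_cloud_distance(pred_pcd))

    chamfer_l1 = 0.5 * (acc.mean() + comp.mean())
    tau = 0.1                              # illustrative inlier threshold in meters
    precision = (acc < tau).mean()
    recall = (comp < tau).mean()           # completeness ratio
    f_score = 2 * precision * recall / (precision + recall + 1e-12)
    print(chamfer_l1, precision, recall, f_score)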

As mentioned in the paper, we also compute a fairer accuracy metric using the ground truth point cloud masked by the intersection of the reconstructed meshes of all the compared methods. To generate such masked ground truth point clouds, you can configure the data path in ./eval/crop_intersection.py and then run it.

To reproduce the quantitative results on the MaiCity and Newer College datasets in the paper, you can use the config files in ./config/config_icra2023/. The reconstructed meshes can also be downloaded from here. Note that these numbers are obtained using the batch mapping mode. You can achieve similar results using the incremental mapping mode with the replay strategy.

Tips

[Details (click to expand)]
  1. You can play with different loss functions for SHINE Mapping. With the ray_loss: False option, the loss is calculated from the SDF at each sample point. In this case, you can select from sdf_bce (the proposed method), sdf_l1 and sdf_l2 loss as the main_loss_type. With the ray_loss: True option, the loss is calculated per ray containing multiple point samples, as in a depth rendering procedure. In this case, you can select from dr and dr_neus as the main_loss_type. According to our experiments, using our proposed sdf_bce loss achieves the best reconstruction efficiently (see the sketch after this list). We can get a decent reconstruction of a scene with several hundred frames in just one minute. Additionally, you can use the ekional_loss_on option to turn the Eikonal loss on or off and use weight_e as its weight.

  2. The feature octree is built mainly according to leaf_vox_size, tree_level_world and tree_level_feat. leaf_vox_size is the leaf voxel size in meters. tree_level_world and tree_level_feat are the total number of tree levels and the number of tree levels with latent feature codes, respectively. tree_level_world should be large enough to guarantee that all the map data lies inside a cube with side length leaf_vox_size * 2^(tree_level_world+1).

  3. SHINE Mapping supports both offline batch mapping and incremental sequential mapping. For incremental mapping, one can either load a fixed pre-trained decoder from batch mapping on a similar dataset (set load_model: True) or train the decoder on-the-fly for freeze_after_frame frames and then freeze it afterwards (set load_model: False). The first option leads to better mapping performance.

  4. You can use the mc_vis_level parameter to trade off between scene completion and strict measurement accuracy. This parameter indicates at which level of the octree the marching cubes reconstruction is conducted. The larger the value of mc_vis_level (but not larger than tree_level_feat), the more scene completion ability you gain (but also some artifacts such as a double wall may appear). With a small value, SHINE Mapping only reconstructs the parts with actual measurements, without filling the holes. The safest way to avoid holes on the ground is to set mc_mask_on: False to disable the masking for marching cubes. By turning on the mc_with_octree option, you can achieve a faster marching cubes reconstruction restricted to the region inside the octree nodes.

  5. The incremental mapping with regularization strategy (setting continual_learning_reg: True) can achieve incremental neural mapping without storing an ever-growing data pool which would be a burden for the memory. The coefficient lambda_forget needs to be fine-tuned under different feature octree and point sampling settings. The recommended value is from 1e5 to 1e8. A pre-trained decoder is also recommended to be loaded during incremental mapping with regularization for better performance.

  6. We also provide an option to conduct incremental mapping with a replay strategy in a local sliding window. You can turn this on by setting window_replay_on: True with a valid window_radius_m indicating the size of the sliding window.

  7. It's also possible to incorporate semantic information in our SHINE-Mapping framework. You may set semantic_on = True in the utils/config.py file to enable semantic mapping and also provide semantic supervision by setting the label_path in the config file. The labels should be in the SemanticKITTI format. An example semantic reconstruction result using SemanticKITTI can be downloaded from here.
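As a rough illustration of the sdf_bce loss mentioned in tip 1 (a simplified sketch, not the repository's exact implementation): each sampled point along a ray gets a soft occupancy label by pushing its projected signed distance through a sigmoid scaled by sigma_sigmoid_m, and the predicted SDF is supervised with binary cross entropy after the same scaling:

    import torch
    import torch.nn.functional as F

    def sdf_bce_loss(pred_sdf, sampled_sdf, sigma=0.05):
        # soft occupancy label from the sampled (projected) signed distance along the ray
        target = torch.sigmoid(sampled_sdf / sigma)
        # the scaled predicted SDF is treated as a logit
        return F.binary_cross_entropy_with_logits(pred_sdf / sigma, target)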


Citation

If you use SHINE Mapping for any academic work, please cite our original paper.

@inproceedings{zhong2023icra,
  title={SHINE-Mapping: Large-Scale 3D Mapping Using Sparse Hierarchical Implicit NEural Representations},
  author={Zhong, Xingguang and Pan, Yue and Behley, Jens and Stachniss, Cyrill},
  booktitle = {Proceedings of the IEEE International Conference on Robotics and Automation (ICRA)},
  year={2023}
}

Contact

If you have any questions, please contact:

Acknowledgment

This work has partially been funded by the European Union's HORIZON programme under grant agreement No 101070405 (DigiForest) and grant agreement No 101017008 (Harmony).

Additionally, we greatly thank the authors of the following open-source projects:

  • NGLOD (octree-based hierarchical feature structure built on top of kaolin)
  • VDBFusion (comparison baseline)
  • Voxblox (comparison baseline)
  • Puma (comparison baseline and the MaiCity dataset)
  • KISS-ICP (simple yet effective pose estimation)


shine_mapping's Issues

What should I do to make this program run all frames of the mai_city sequence 00?

setting:
  name: "maicity_incre_reg"
  output_root: "/home/hh/experiments/"
  pc_path: "/home/hh/data/mai_city/ply/sequences/01/velodyne"
  pose_path: "/home/hh/data/mai_city/ply/sequences/01/poses.txt"
  calib_path: "/home/hh/data/mai_city/ply/sequences/01/calib.txt"
  load_model: False # load the pretrained decoder model (optional)
  model_path: "./pretrained/geo_decoder_8dim.pth"
  first_frame_ref: False
  begin_frame: 0
  end_frame: 699
  every_frame: 1 # 1 means does not skip
  device: "cuda"
  gpu_id: "0"
process:
  min_range_m: 1.5
  pc_radius_m: 20.0 # distance filter for each frame
  rand_downsample: False # use random or voxel downsampling
  vox_down_m: 0.03
  rand_down_r: 0.2
sampler:
  surface_sample_range_m: 0.5
  surface_sample_n: 3
  free_sample_begin_ratio: 0.3
  free_sample_end_dist_m: 0.8
  free_sample_n: 3
octree:
  leaf_vox_size: 0.5
  tree_level_world: 12
  tree_level_feat: 3
  feature_dim: 8
  poly_int_on: True
  octree_from_surface_samples: True
decoder:
  mlp_level: 2
  mlp_hidden_dim: 32
  freeze_after_frame: 20
loss:
  ray_loss: False
  main_loss_type: sdf_bce # select from sdf_bce (our proposed), sdf_l1, sdf_l2, dr, dr_neus
  sigma_sigmoid_m: 0.05
  loss_weight_on: False
  behind_dropoff_on: False
  ekional_loss_on: False
  weight_e: 0.1
continual:
  continual_learning_reg: True # using incremental mapping with regularization
  lambda_forget: 1e6 # the larger this value, the model would be less likely to forget
  window_replay_on: False # replay within the sliding window
  window_radius_m: 0
optimizer:
  iters: 50 # iterations per frame
  batch_size: 4096
  learning_rate: 0.01
  weight_decay: 0 # l2 regularization
eval:
  wandb_vis_on: False # log to wandb or not
  o3d_vis_on: True # visualize the mapping or not
  vis_freq_iters: 0
  save_freq_iters: 0 # save the model and octree every x iterations
  mesh_freq_frame: 5 # reconstruct the mesh every x frames
  mc_res_m: 0.2 # reconstruction marching cubes resolution
  mc_with_octree: False # querying sdf in the map bbx
  mc_vis_level: 1
  save_map: False # save the sdf map or not
Thanks for your excellent work.
I just want to know why, when I change the parameter "end_frame" from 100 to 699 in config/maicity/maicity_incre_reg.yaml, I get the same result as with "end_frame: 100". I just want to run all frames of the mai_city sequence 00. Do I need to modify another parameter too? Could you tell me?

Noise results testing on the nuScenes dataset

Hey, thanks for releasing the amazing work!

The method works very well on the KITTI dataset with a 64-line LiDAR. I ran a test on the nuScenes dataset with a 32-line LiDAR and got a very noisy result, including a lot of holes on the ground. The 3D points of the nuScenes dataset are sparser than those of the KITTI dataset.

In order to fill the holes in the 3D surface, I use mc_res_m=0.1 and only sample points with the "close-to-surface uniform sampling".

Any idea how to make it work on the nuScenes dataset? Thank you so much.

Other dataset config file

Hi, thank you for sharing your wonderful work!

Could you share the config files for the other datasets (e.g. Replica, IPB Office)?

Growing time consumption of incremental SHINE-Mapping on MaiCity seq 00 when the frame count (700) exceeds pc_count_gpu_limit

Thanks for your great work on implicit mapping of large-scale outdoor scenes! When I test your script python shine_incre.py ./config/maicity/maicity_incre_replay.yaml on the whole MaiCity seq 00 sequence, I notice that the time consumption of mapping (without marching cubes and visualization) greatly increases when I set every_frame=1 in the config, to about 16 s/it on the CPU.
Since the incremental mapping only processes frames within a limited sliding window, what is the necessity of restricting pc_count_gpu_limit=500 in your implementation? And is it possible to lift this restriction for more practical and efficient incremental mapping on the GPU?

Problem about the input data.

Since I'm going to run SHINE_mapping on my own dataset, I have several questions:

In README.md
Generally speaking, we need to provide:
pc_path : the folder containing the point cloud (.bin, .ply or .pcd format) for each frame.
pose_path : the pose file (.txt) containing the transformation matrix of each frame.
calib_path : the calib file (.txt) containing the static transformation between sensor and body frames (optional, would be identity matrix if set as '').
  1. Does the transformation matrix of each frame mean the transformation from LiDAR to world? If we provide it, can the Tr in calib.txt be an identity matrix, just like for the NCD dataset?
  2. But for the KITTI dataset, Tr is not an identity matrix. And we apply poses.append( np.matmul(Tr_inv, np.matmul(pose, Tr)) ) # lidar pose in world frame in poses.py to get the pose. I tried to understand the meaning of this code.
    From the GitHub issue, we can get:
    The poses.txt is given in the camera coordinate system, and Tr is the extrinsic calibration matrix from velodyne to camera. In this case,
    Pose_velodyne = Tr_inv * Pose_camera. But the code is pose = Tr_inv * Pose_camera * Tr. I'm confused about it.

Looking forward to your reply.

Seems not to get the same results as the paper reports

Hello! Thanks for your nice work!

I ran your code and got a reconstructed mesh of good quality for MaiCity.

But when I used the evaluation code to quantify the mesh quality, I found the numbers are not the same as in the paper. I used the ground truth of the MaiCity dataset.

And this result uses the point cloud map.

And these are the paper's results.

I'm confused by the evaluation results. What do you compare them with?

How did you refine the pose of the Newer College Dataset?

-- poses.txt: the refined pose of the sensor for the 1300 frames under the ground truth point cloud's reference coordinate system

Hi, what method did you use to refine the poses for the Newer College Dataset? And what's the difference between poses.txt and poses_original.txt? Thanks!

Problem when trying to run the script

Hello,

I am trying to run your project. I have Ubuntu 22.04 with CUDA V11.8.

nvcc --version returns V11.8.89

nvidia-smi Driver Version: 545.23.08 CUDA Version: 12.3

I managed to install the required dependencies.
print(torch.__version__) returns 1.12.1+cu116

When I try to run any command I get:

python shine_batch.py ./config/maicity/maicity_batch.yaml
Start ./experiments/maicity_batch_2024-01-13_20-59-06
Traceback (most recent call last):
  File "shine_batch.py", line 270, in <module>
    run_shine_mapping_batch()
  File "shine_batch.py", line 58, in run_shine_mapping_batch
    dataset = LiDARDataset(config, octree)
  File "/home/aspegique/Desktop/SHINE_mapping/dataset/lidar_dataset.py", line 34, in __init__
    self.calib = read_calib_file(config.calib_path)
  File "/home/aspegique/Desktop/SHINE_mapping/utils/pose.py", line 13, in read_calib_file
    calib_file = open(filename)
FileNotFoundError: [Errno 2] No such file or directory: 'xxx/data/mai_city/ply/sequences/01/calib.txt'

To download the data I ran:
sh ./scripts/download_maicity.sh

Can anyone guide me to solve this problem?

bad result for maicity with voxel size 20cm

Thanks for your excellent work!

But when I test the mapping for MaiCity with a voxel size of 20 cm, the result turns out like below:


It seems there are many holes and the pavement disappears.

I simply changed the hyperparameter leaf_vox_size to 0.2 in maicity_incre_reg.yaml. How should I modify it to get a good result?

how to compute completion?

Hi, thanks for your excellent work.
I want to know how to calculate "completion", because in the file /eval/eval_utils.py I just do not find the code to compute it. Could you tell me how to compute completion and the completion ratio? Thanks a lot!

How can we get the ground truth model?

For the MaiCity dataset, I downloaded gt_map_pc_mai.ply via scripts/download_maicity.sh. I have some questions about this model:

  • Why is it a point cloud model rather than a mesh model? Is it concatenated, for example, from mai_city/bin/sequences/00/velodyne/00000.bin to mai_city/bin/sequences/00/velodyne/00200.bin?
  • I have seen that gt_map_pc_mai.ply from scripts/download_maicity.sh is of sequence 02, so in your paper, do you get the reconstruction quality result on sequence 02? Or is it an average result over sequences 00 to 02?

Looking forward to your reply! Thanks a lot!

Bad result in garage floor.

I was trying to reconstruct a 20m*20m underground garage, but got a terrible floor.
Here are the mesh and map; we already have LiDAR points around the floor, but get very little mesh there.
So what can I do to fix it?
Another question is about the rough mesh: in order to get a mesh with higher resolution, which parameter should be adjusted,
mc_res_m in marching cubes or leaf_vox_size?

Some questions about the accuracy metric

Hi, thanks for your excellent work!
I ran evaluator.py on the MaiCity dataset and get similar results to yours, except for accuracy. My accuracy is 3.3, which corresponds to "MAE_accuracy". I'm not sure: does "MAE_accuracy" correspond to the accuracy metric in the paper?

Why are there some extra artifacts in the reconstruction result?

Great job, I must say first.
I just found some strange results in my tests, as shown in the screenshot.
I used shine_batch.py to rebuild the MaiCity model; the leaf voxel size and mc_res were set to 0.2 m and 0.1 m respectively, with iters = 20000.
Other parameters basically keep their defaults.
It seems like there's an extra part in the final result. I am wondering if you have any clues or ideas about this.
The distance between the "false wall" and the real wall is about 30 cm.
I noticed that you have updated the code. This result was produced by the old version of the code.

Tuning shine

Hi there,

First of all, kudos for this great piece of work!
When trying to use it on custom data, we are struggling a little with tuning the algorithm.
A. From the issues, I have the impression that the configs supplied in the repo are not the same as those used for the paper; is that correct? Would it be possible to supply them? This would give great hints on which buttons to push!
B. Which are the top 3 parameters to tune for better reconstruction quality (quality = level of detail of the mesh while smoothness of surfaces is preserved)?
C. Which are the top 3 parameters to tune for scalability? (With 8 GB of GPU memory, I regularly segfault from running out of memory.)
D. What compute hardware did you use, for example, for the KITTI example?

Thanks in advance!

Best
Johannes

Something weird about the calculation of free space uniform sampling?

       # Part 3. free space uniform sampling
        repeated_dist = distances.repeat(freespace_sample_n,1)
        free_max_ratio = free_sample_end_dist_m_scaled / repeated_dist + 1.0
        free_diff_ratio = free_max_ratio - free_min_ratio

        free_sample_dist_ratio = torch.rand(point_num*freespace_sample_n, 1, device=dev)*free_diff_ratio + free_min_ratio
        
        free_sample_displacement = (free_sample_dist_ratio - 1.0) * repeated_dist

About the above calculation in https://github.com/PRBonn/SHINE_mapping/blob/master/utils/data_sampler.py, my understanding is that it samples points from 0.3 m to 0.8 m as the distance to the surface (for sigma = 0.1 m, the surface sampling range should then be [-0.3 m, 0.3 m] around the surface). But the above calculation is a bit weird to me.

In most of the config files, we have free_sample_begin_ratio: 0.3 and free_sample_end_dist_m: 0.8. Suppose there is a scan point with a range to the LiDAR of repeated_dist = 10 m. Then free_max_ratio = 0.8 / 10 + 1.0 = 1.08 and free_diff_ratio = 1.08 - 0.3 = 0.78, so 0.3 <= free_sample_dist_ratio <= 1.08, and finally -7 m < free_sample_displacement < 0.8 m. This means a high probability of sampling a point in the opposite direction with respect to the original scan point. Is this the meaning of free sampling? Or where is my understanding wrong? Thank you!
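A minimal numeric check of the quoted snippet, using the values from the question (illustrative only; free_min_ratio is assumed here to equal free_sample_begin_ratio):

    import torch

    repeated_dist = torch.tensor([10.0])    # range of the scan point in meters
    free_min_ratio = 0.3                    # assumed equal to free_sample_begin_ratio
    free_sample_end_dist_m_scaled = 0.8

    free_max_ratio = free_sample_end_dist_m_scaled / repeated_dist + 1.0   # 1.08
    free_diff_ratio = free_max_ratio - free_min_ratio                      # 0.78
    ratio = torch.rand(1) * free_diff_ratio + free_min_ratio               # in [0.3, 1.08]
    displacement = (ratio - 1.0) * repeated_dist                           # in [-7.0, 0.8] m
    print(float(ratio), float(displacement))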

Question on the sdf map

Hi all, thanks for the excellent work.

Here I have a minor question on the generated SDF map.

If I understand correctly, in the default configuration, the octree is only constructed near the surface and maintained by a dict structure.

For those "free space" and "unexplored area" which do not lie in the constructed octree, their feautres will be allocated by 0., the generated SDF value may be random during optimization.

Did I understand correctly?

Best,
Shuo

Colored Mesh

Hi @stachnis @YuePanEdward @jbehley,

Thanks for your great work. I have already tested KISS-ICP several times, and now I experimented with SHINE, which seems to work better for 3D reconstruction of outdoor scenes. May I ask if there is a way to generate a colored MC mesh? I'm using point clouds from a stereo camera (colored PCDs), not a LiDAR.

Kind regards
