thecodez / dynamic-occupancy-grid-map
Implementation of "A Random Finite Set Approach for Dynamic Occupancy Grid Maps with Real-Time Application"
License: MIT License
I haven't found systematic code formatting in this project yet. Like most programmers ;), I like well-formatted code, because it makes the code easier to read. If you agree to adopt automated code formatting at this stage, I would suggest the widely used clang-format, which also supports formatting CUDA files.
I have applied clang-format to this repository in this branch; feel free to check whether you like the style. In the branch, I have also added instructions on how to use clang-format for formatting.
Hi, I am building the ROS version of dogm, and at the last stage it fails, possibly because of a dependency issue. My error is as follows:
/usr/bin/ld: /usr/local/lib/libdogm.a(dogm.cu.o): relocation R_X86_64_PC32 against symbol `_ZN4dogm12ParticlesSoAC1Ev' can not be used when making a shared object; recompile with -fPIC
/usr/bin/ld: final link failed: Bad value
collect2: error: ld returned 1 exit status
dynamic-occupancy-grid-map-ros/dogm_ros/CMakeFiles/dogm_ros.dir/build.make:136: recipe for target '/home/yining/rfs_map/devel/lib/libdogm_ros.so' failed
make[2]: *** [/home/yining/rfs_map/devel/lib/libdogm_ros.so] Error 1
CMakeFiles/Makefile2:1204: recipe for target 'dynamic-occupancy-grid-map-ros/dogm_ros/CMakeFiles/dogm_ros.dir/all' failed
make[1]: *** [dynamic-occupancy-grid-map-ros/dogm_ros/CMakeFiles/dogm_ros.dir/all] Error 2
Makefile:159: recipe for target 'all' failed
make: *** [all] Error 2
Maybe I need to recompile dogm? But I don't know where it goes wrong; maybe it should be libdogm.so instead of libdogm.a?
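The error itself suggests the fix: the static libdogm.a was compiled without position-independent code, so the linker refuses to fold it into the shared libdogm_ros.so. One possible remedy (a sketch, assuming dogm is the library target name in dogm's CMakeLists.txt) is to rebuild dogm with:

```cmake
# Compile the static library (including the CUDA objects) with -fPIC so it
# can later be linked into a shared library such as libdogm_ros.so.
set_target_properties(dogm PROPERTIES POSITION_INDEPENDENT_CODE ON)
```

Alternatively, building dogm as a shared library (add_library(dogm SHARED ...)) should avoid the problem entirely.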
Hello. I hit strange behavior where my app crashes, and it took me a while to figure out what's going on.
I think everything starts in the DOGM::gridCellOccupancyUpdate function. Here, accumulate(weight_array, weights_accum); is executed, which calls thrust::inclusive_scan. The thrust documentation says:
inclusive_scan is similar to std::partial_sum in the STL. The primary difference between the two functions is that std::partial_sum guarantees a serial summation order, while inclusive_scan requires associativity of the binary operation to parallelize the prefix sum.
Results are not deterministic for pseudo-associative operators (e.g., addition of floating-point types). Results for pseudo-associative operators may vary from run to run.
After the inclusive scan, it can happen that the weights_accum array has NON-MONOTONIC values, i.e. weights_accum[i] < weights_accum[i - 1]; in other words, the next value in the array can be smaller than the previous one. Later, this array is used in the gridCellPredictionUpdateKernel kernel. Because weight_array_accum is not monotonically increasing, the line
m_occ_pred = subtract(weight_array_accum, start_idx, end_idx);
can return a negative value. I can observe this with a simple printf in the kernel. The negative m_occ_pred can lead to a negative rho_b value (also observed by printing the value), which is stored in born_masses_array.
Later, in DOGM::initializeNewParticles(), the array born_masses_array is used in an inclusive scan to update particle_orders_accum, which is then used in the normalize_particle_orders function. Inside this function, it is assumed that the last value of the array is the maximum, and the normalization happens:
void normalize_particle_orders(float* particle_orders_array_accum, int particle_orders_count, int v_B)
{
    thrust::device_ptr<float> particle_orders_accum(particle_orders_array_accum);
    float max = 1.0f;
    cudaMemcpy(&max, &particle_orders_array_accum[particle_orders_count - 1], sizeof(float), cudaMemcpyDeviceToHost);
    thrust::transform(
        particle_orders_accum, particle_orders_accum + particle_orders_count, particle_orders_accum,
        GPU_LAMBDA(float x) { return x * (v_B / max); });
}
But because born_masses_array has negative values, it can happen that the last value of particle_orders_array_accum is NOT the maximum. This can lead to the particle_orders_array_accum array containing values greater than v_B (which is new_born_particle_count) after the thrust::transform.
Later, the array particle_orders_array_accum is passed to initNewParticlesKernel1. Inside the kernel, start_idx is calculated, which, as mentioned, can be greater than new_born_particle_count (observed with printf inside the kernel). Next, this wrong index is used to update birth_particle_array, which causes an out-of-bounds write: compute-sanitizer complains in this kernel, and later a thrust device vector throws an error, because it's designed to throw in its destructor.
The problem is that all this is difficult to reproduce in a small demo, but in our bigger application it crashes all the time. At the moment, I'm simply checking rho_b and setting it to 0 in case it's negative.
cc @cbachhuber
CMake Error at /usr/share/cmake-3.10/Modules/ExternalProject.cmake:2275 (message):
error: could not find git for clone of googletest-download
Call Stack (most recent call first):
/usr/share/cmake-3.10/Modules/ExternalProject.cmake:3029 (_ep_add_download_command)
CMakeLists.txt:9 (EXTERNALPROJECT_ADD)
-- Configuring incomplete, errors occurred!
Because Google is blocked in our country, I can't access the googletest download. How can I compile this project?
Hi
Thanks for the fantastic work. Just curious whether you could add dynamic vs. static grid classification and obstacle clustering logic similar to https://ieeexplore.ieee.org/document/7604119
Thanks,
For the calculation of the normalization component of an unassociated measurement (mu_UA) in calc_norm_unassoc(), you are using persistent_occ_mass (rho_p) divided by occ_mass.
In the paper, equation 63 states mu_UA = rho_p / pred_occ_mass. There, pred_occ_mass is the sum of the weights originally predicted into the cell, while the occ_mass in your implementation is the result of applying Dempster's rule of combination to the predicted mass and the measured mass. Mitkina's Python implementation does it as in the paper.
Is there a reason for you doing that differently?
I would like to propose revising the license message. I have the following suggestion/request:
Copyright (c) 2020 Michael Koesel and respective contributors
I understand that until now this was a solo project, which is why such a message was not needed. The phrase "and respective contributors" is also used in other projects and implies that every contributor holds the (MIT) copyright on their original contribution. Btw, we can tackle the 'oe' and 2020 issue as well when revising the licenses.
Proposal:
Top-level LICENSE file
MIT License
Copyright (c) 2020 Michael Koesel and respective contributors
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
License message in each source file
// Copyright (c) 2020 Michael Koesel and respective contributors
// SPDX-License-Identifier: MIT
// See accompanying LICENSE file for detailed information
What do you think about this?
All source code of this repository resides in folder dogm. From other repositories, I'm used to having source code and e.g. the top-level CMakeLists.txt in the root of the repository. Let's have a quick pros and cons overview of the current solution and moving everything from dogm/ into the root of this repository:
Sources in sub-folder dogm/:
Sources in root:
.clang-format file or CMakeLists.txt directly visible to visitors
I see advantages in moving the sources one level up and removing dogm/. Am I overlooking something, or do you agree?
Using a 2d ring buffer like structure would be the fastest way.
Just shifting the grid cells is probably the easiest to implement.
TODO:
Currently on: https://github.com/TheCodez/dynamic-occupancy-grid-map/tree/lidar_meas_grid
Hi Michael, many thanks for sharing this wonderful project with the community.
When I read the polar-to-cartesian part of the code, I found line 32 in the fragment shader quite hard to understand.
vec2 uv = vec2(1.0 - (texCoord0.s / (texCoord0.t + 1e-10)), texCoord0.t);
What is the purpose of the 1 - s / t part, especially when the texture coords are already handled by
Many thanks.
Avoid cudaDeviceSynchronize when not necessary.
Improve sampling of random numbers. They might be one of the causes of the bad performance; bad particle placement could be the reason for a much smaller particle weight than in the reference implementation. Might use cuRAND instead of thrust random.
Evaluate performance and avoid regressions.
Use DBSCAN to extract clusters and calculate the mean velocity of those clusters to compare them with the ground truth.
Hi there -- I've been rearranging the ROS build to integrate it with the Carla simulator over at dogma_ros. Everything builds, but I have a question and a bug for you.
The bug is simple -- CUDA isn't releasing the Texture resources and as a result, the GPU quickly runs out of headroom on my 1080 (only 6GB). If you add the following line
void Texture::endCudaAccess(cudaSurfaceObject_t surfaceObject)
{
CHECK_ERROR(cudaGraphicsUnmapResources(1, &resource, nullptr));
+ CHECK_ERROR(cudaGraphicsUnregisterResource(resource));
CHECK_ERROR(cudaDestroySurfaceObject(surfaceObject));
}
then memory requirements stabilize at ~1 GB when feeding it a laser scan from Carla. I'd open a pull request, but my current repo/fork has been rearranged slightly to support the ROS changes.
The question might be more complicated -- I see that the grid has a position update, but doesn't seem to be tracking orientation. I haven't tried digging into the updateGrid code, but I'm guessing that I'll need to rotate the texture to world coordinates before applying it to the grid. Difficult? Already done?
Otherwise, many thanks for doing all the hard work!
I see that this project is using C++11. I have wanted to suggest moving to a more recent version a couple of times already. In #26, for example, I was missing std::optional, which would enable me to give my code a good structure.
I would like to suggest moving to at least C++14 (maybe even C++17). A counter-argument is of course that, due to C++'s backward compatibility, an older standard gives more people access to your code. Still, C++14 introduces useful features, and moving to a six-year-old standard should not hurt the target audience too much. Additionally, the automotive industry (relevant for this project) has been using C++14 for years now; see e.g. the AUTOSAR standard.
What are your thoughts on this?
The first step is to split the measurement grid calculation from the DOGM class, see #8
Currently we have:
void updatePose(float new_x, float new_y);
void updateMeasurementGrid(const std::vector<float>& measurements);
void updateGrid(float dt);
After the split we could have something like:
void updatePose(float new_x, float new_y);
void updateGrid(MeasurementCell* meas_grid, float dt);
The question is if this API is viable for user code.
Note:
The updatePose method moves the grid map of timestep k-1 to the new location of the measurement grid at time k (ego motion compensation).
Planned folder structure:
For the measurement grid package:
The goal is to basically copy what's in the dogm CMakeLists to the dogm_ros CMakeLists. This makes the whole compile process more unified when using ROS.
Hi,
thanks for providing this implementation. I'm trying to run the demo but unfortunately I'm getting a segmentation fault (core dumped).
I narrowed down the problem to line 26 in framebuffer.cpp:
CHECK_ERROR(cudaGraphicsGLRegisterImage(&resource, texture, GL_TEXTURE_2D, cudaGraphicsRegisterFlagsReadOnly));
Unfortunately, no further error message is given.
I've tested the following environments (in Docker) but the error is the same in all of them:
Do you have any hints what could be wrong?
Thanks in advance!
Use clang-tidy on cpp files. Check if clang-tidy also supports CUDA code.
As far as I understand, the particles in the grid live in grid space (cells). But according to the Params structure (comments), the sampling velocities are in m/s?
Instead of using the whole OpenGL framework to perform texture mapping for the polar-to-cartesian transform, use a single CUDA kernel that applies the standard polar-to-cartesian conversion formulas and then bilinear interpolation.
@cbachhuber this is what I was talking about in #47
Use Systematic Resampling instead of Multinomial.
Parallel implementation: https://ieeexplore.ieee.org/document/8694030