multi_sensor_calibration's Issues
lidar-camera-lidar
lidar_detector: not using distance but original intensity values?
I have somewhat inconsistent detection of the board via lidar, so I checked the code. I might be missing something, but it seems to me that in keypoint_detection.cpp, line 382, the distances between neighboring points are stored in the intensity value of the points in the vector rings:
std::vector<std::vector<Velodyne::Point*> > rings = toDistanceRing(cloud_calibration_board, average_distance_ring, config.lidar_parameters);
But after that, this variable does not seem to be used again (except for visualization in the next line). The function createEdgeCloud, where points are filtered by their intensity value, is also called with cloud_calibration_board as the input cloud. In cloud_calibration_board, the intensity values should still be the original intensity values, right? Shouldn't this function be called with rings instead?
Error when calling optimize service
Hi, I'm currently working on a lidar, radar, and mono camera setup with different sensors. I have changed the detectors so that the sensors are interfaced correctly. Now I'm trying to run the optimization service, but I get an error. I've even tried it with the example data and still get the same error.
[ERROR] [1639657543.349869500]: Service call failed: service [/optimizer/optimize] responded with an error: error processing request: local variable 'Tms' referenced before assignment
[ERROR] [1639657543.350367300]: Failed to call service to optimize on '/optimizer/optimize'.
I didn't change the accumulator node or the optimization node, so I would expect it to work fine.
Can somebody please point me in the right direction?
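For context, the error message above is Python's generic "referenced before assignment" pattern: a variable such as Tms is bound only inside conditional branches (for example, per calibration mode), and a later statement reads it when no branch fired. A minimal sketch reproducing the symptom; everything here except the name Tms is hypothetical and not taken from the optimizer's actual code:

```python
def solve_extrinsics(calibration_mode):
    # Tms is only bound inside the branches; if no branch matches,
    # the return statement raises UnboundLocalError, which the ROS
    # service layer reports as
    # "local variable 'Tms' referenced before assignment".
    if calibration_mode == "pose_and_structure":
        Tms = ["identity"]  # placeholder for the estimated transforms
    elif calibration_mode == "pose_only":
        Tms = ["identity"]
    return Tms  # fails for any unhandled mode

try:
    solve_extrinsics("unknown_mode")
except UnboundLocalError as e:
    print("reproduced:", e)
```

So the error usually indicates that the optimizer was invoked with a sensor/mode combination none of its branches handle, rather than a bug introduced in the accumulator.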
Lidar converter
Hi,
I really want to thank you for your great work. I'm currently working on a lidar converter to use your code with a lidar sensor other than a Velodyne and would like to ask for some further information. Could you please provide the PointField layout of the PointCloud2 message on the /velodyne_points ROS topic, which the lidar_detector subscribes to?
I'm currently working with the following PointField message, but I am getting errors from the lidar detection node.
sensor_msgs/PointField
fields:
  - name: "x"
    offset: 0
    datatype: 7
    count: 1
  - name: "y"
    offset: 4
    datatype: 7
    count: 1
  - name: "z"
    offset: 8
    datatype: 7
    count: 1
  - name: "intensity"
    offset: 12
    datatype: 7
    count: 1
  - name: "ring"
    offset: 16
    datatype: 4
    count: 1
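For reference, with these offsets (datatype 7 = FLOAT32 and datatype 4 = UINT16 in sensor_msgs/PointField), one point occupies 18 contiguous bytes: four float32 fields followed by one uint16. That layout can be sanity-checked with Python's struct module; the point values below are made up for illustration:

```python
import struct

# float32 x, y, z, intensity (datatype 7) and uint16 ring (datatype 4),
# matching offsets 0, 4, 8, 12, 16 of the PointField list above.
point_fmt = "<ffffH"

packed = struct.pack(point_fmt, 1.0, 2.0, 0.5, 37.0, 9)
x, y, z, intensity, ring = struct.unpack(point_fmt, packed)
print(len(packed), ring)  # 18 bytes per point, ring = 9
```

If your driver's point_step is larger than 18, there is extra padding or additional fields between points, which is a common source of parsing errors in converters.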
What is the equivalent of 'ros_numpy' in ROS2?
I have to use ROS2, but I can't install ros_numpy, so how can I replace it?
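One workaround is to skip ros_numpy entirely and unpack the PointCloud2 byte buffer directly with the standard library (ROS2 also ships sensor_msgs_py.point_cloud2.read_points for the same job). A minimal stdlib-only sketch, assuming an x/y/z/intensity/ring point layout and using a fabricated two-point buffer in place of a real msg.data:

```python
import struct

POINT_FMT = "<ffffH"  # x, y, z, intensity (float32), ring (uint16)
POINT_STEP = struct.calcsize(POINT_FMT)

def read_points(data, point_step=POINT_STEP):
    """Yield (x, y, z, intensity, ring) tuples from a packed point buffer."""
    for off in range(0, len(data), point_step):
        yield struct.unpack_from(POINT_FMT, data, off)

# Fake buffer with two points, standing in for the data field
# of a sensor_msgs/PointCloud2 message.
buf = struct.pack(POINT_FMT, 1.0, 0.0, 0.0, 10.0, 0) + \
      struct.pack(POINT_FMT, 0.0, 1.0, 0.0, 20.0, 1)
points = list(read_points(buf))
print(points[1][4])  # ring index of the second point -> 1
```

For a real message, take the format and step from msg.fields and msg.point_step instead of hardcoding them, since drivers often add padding.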
A probable issue with openCV versions
While working with the mono detector node I noticed it was failing with segmentation faults and not publishing messages to the topic /mono_pattern. After debugging, I noticed that the issue was in the following statements:
Firstly on this line:
I had some issue with the matrix creation, and the statement that solved it for me was:
cv::Mat result(1, 4, CV_32FC2);
then on this line:
I truly do not understand why there is a 0*intrinsics.distortionCoeffs() there, because it was the culprit of a segmentation fault.
I assume all of this is an issue with my OpenCV version (4.5.4). If at all possible, could you check exactly which one is required?
Thank you for the very good work I am looking forward to working more with it now that I have passed this small stage.
And before I leave this issue, I would like to point out that I am amazed by the C++ code you wrote; it is simply beautiful! Not always easy to understand, but very impressive. I would only have structured a couple of methods differently because of my writing style.
Yuri.
Wrong example data
camera - radar calibration without ros and lidar
Hello,
Thanks for sharing the nice work.
We have a radar reflector and a set of ArUco markers with the scheme below:
(marker1 marker2)
( radar reflector )
(marker3 marker4)
The question is: is it possible to get the transformation matrix (camera to radar) from this without lidar? If so, which lines of code do we need to modify in the MCPE approach?
(We are using optimization/main.py --lidar --camera --radar with lidar.csv the same as camera.csv; without the --lidar flag, main.py does not work.)
Single Lidar Multi Camera calibration
Hi, since multiple modalities are mentioned in the paper, is it possible to use this repository to calibrate multiple cameras with a single lidar? Or does it only work for one combination, i.e., a lidar, a camera, and a radar?
test data available?
Hi, may I ask if there is any test data available for validation and benchmarking?
regards,
lidar_detector example data
The example for running the lidar detector found in lidar_detector/src/example pulls a file from a TU Delft server. Would it be possible to add this file to the repository?
I am trying to get this software working offline using data from other lidars, but am having trouble formatting the data properly.
Calibration board preparation
Hi, how did you cut the circles and the cones? Looking at the photos, they seem very precise. I am preparing a calibration board, and some manufacturing tips would be helpful. Thank you!
2d lidar and depth camera?
Can this be used to calibrate a 2d Lidar and depth camera with overlapping FOV?
Set sensor_topic in ros_param
Hi, I'm new to ROS and I could not find the ROS parameter when I use rosparam list. I need to set it because, when I have two radars, one radar's topic is not subscribed to by the accumulator.
What is the proper way to do it? Thank you!
lidar_detector_node pcl warning
Hi,
we made the calibration board according to the dimensions given in the code, but we are getting the following errors (attached); we are using a Velodyne 32-channel lidar.
The lidar_detector_node gives us the following error:
"ring too small"
and nothing is published on this detector's topic.
The mono_detector_node gives us the following exception:
"Number of circles found: '0', but should be exactly 4."
Your help is requested, please.
If we put the calibration board very close, within 1 m of the car, the circles are found. If we put the board a little farther away, at the 5 m distance from your paper, it does not detect the circles.
raw radar data loading
Hi, thank you for your open-source.
(I am new to the field of radar and multi-sensor calibration.) I have one question. I assume that you first recorded images, radar information, and lidar information simultaneously and stored them. I found that in the optimization package you read detected points from CSV files. However, I couldn't find a clue about the raw radar data. So I wonder how you read it, and where the raw radar data should be stored?
Not enough inliers found to optimize model coefficients (0)!
Hi,
I'm trying to use the lidar_detector_node without any success. I tried different settings but get the following error:
[ INFO] [1617898057.099646647]: Initialized lidar detector.
[ INFO] [1617898059.825417059]: Receiving lidar point clouds.
[pcl::SampleConsensusModelPlane::optimizeModelCoefficients] Not enough inliers found to optimize model coefficients (0)! Returning the same coefficients.
Ring too small, continue..
[ INFO] [1617898059.851282781]: Ignoring exceptions thrown by pcl in at least one frame.
[ INFO] [1617898059.851311459]: Publishing patterns.
I would be very grateful if you could give me any advice or more information on the different parameters of the config.yaml file.
missing ring field for each lidar point cloud
Hello, thank you for your work; I would like to ask a question.
If the collected lidar point cloud (PCD) does not include the ring number for each point, can keypoints still be obtained?
Looking forward to your reply
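If the ring field is missing, one common workaround is to reconstruct it from each point's elevation angle, since every lidar ring corresponds to a fixed vertical beam angle. A sketch for a hypothetical 16-beam lidar with beams evenly spaced from -15° to +15° (VLP-16-like; the beam table is an assumption, so substitute the angles from your sensor's datasheet):

```python
import math

# Hypothetical beam table: 16 elevation angles, -15 to +15 degrees in 2-degree steps.
BEAM_ANGLES = [-15.0 + 2.0 * i for i in range(16)]

def ring_from_point(x, y, z):
    """Assign a point to the beam whose elevation angle is closest."""
    elevation = math.degrees(math.atan2(z, math.hypot(x, y)))
    return min(range(len(BEAM_ANGLES)),
               key=lambda i: abs(BEAM_ANGLES[i] - elevation))

print(ring_from_point(10.0, 0.0, 0.0))  # 0-degree elevation -> beam index 7 (-1 degree)
```

This is only reliable when the cloud has not been motion-compensated or otherwise resampled, since those steps change per-point elevation angles.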
Possible way of passing the initial extrinsic parameters
Just curious whether it's possible to pass an initial guess of the extrinsic parameters.
After testing the bag, I realized the lidar and mono calibration solution is not constrained enough to get the correct results.
Any tips on where I should look in the code would be great.
RADAR Camera callibration
Hello,
I am working on radar-camera calibration. I have an LRR 408 radar, and I want to know whether this calibration code will work for this radar type.
Could you please let me know how I can run this radar-camera calibration in ROS? I am new to ROS and am stuck with this problem.
Thanks.
Segmentation fault (core dumped)
The function pcl::fromROSMsg in lidar_detector/src/lib/node_lib.cpp is throwing that error. Any help, please!
How to use the calibrated transformations outside ROS?
Hi, I could successfully run the calibration workflow. Now I have a couple of YAML files with the transformation matrix inside. However, I am not planning to use them inside ROS. So, what is the format of the transformation matrix? I see it is a homogeneous matrix, but how can I properly extract the relative pose of the sensors with respect to the reference? For example, I assumed a homogeneous 4x4 matrix and took the last column as my translation vector [X, Y, Z]. I compared these values with the estimate I have from the hardware, and they are not close. Is there a proper way to extract this information? Thank you.
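Assuming the YAML holds a standard 4x4 homogeneous transform [R | t; 0 0 0 1], the upper-left 3x3 block is the rotation and the last column the translation. A sketch of extracting the translation and ZYX (roll/pitch/yaw) Euler angles; the matrix values are made up (90° yaw, translation 0.5/0.2/1.0), and ZYX is a common but not the only angle convention:

```python
import math

# Hypothetical calibration result: rotation of 90 degrees about z,
# translation (0.5, 0.2, 1.0).
T = [[0.0, -1.0, 0.0, 0.5],
     [1.0,  0.0, 0.0, 0.2],
     [0.0,  0.0, 1.0, 1.0],
     [0.0,  0.0, 0.0, 1.0]]

tx, ty, tz = T[0][3], T[1][3], T[2][3]   # translation: last column
yaw   = math.atan2(T[1][0], T[0][0])     # ZYX Euler angles from the 3x3 block
pitch = math.asin(-T[2][0])
roll  = math.atan2(T[2][1], T[2][2])

print(tx, ty, tz, math.degrees(yaw))  # translation and yaw in degrees
```

If the extracted values still disagree with the hardware estimate, check the direction of the transform: the file may store reference-to-sensor rather than sensor-to-reference, and the inverse is [Rᵀ | -Rᵀt].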
Query: Temporal Synchronization
Hi 👋 I was wondering, from your previous experience: did the topics published by the sensor drivers in ROS go through message_filters approximate-time synchronization before the data was accumulated for extrinsic calibration?
Thanks.
rosbag sample for code evaluation ?
Just wondering if there is any sample rosbag to test this calibration repo?
It would be very handy for checking the expected intermediate results of each step.
I'm currently working with the VLP-16 and realize the lidar detector pipeline needs to be modified a lot.
Failed to find match for field 'ring'.
Hi, I'm using an Ouster lidar, and when running the lidar detector with the Ouster point cloud I get the error: Failed to find match for field 'ring'.
Ouster publishes a PointCloud2 message, and when running the rostopic echo command I can see there is a 'ring' field available in the cloud. Could anyone please help me with this issue?
SmartMicro UMRR Radar T153 Compatibility
RMSE definition
Hi,
I have a question about the value of RMSE after I perform optimization.
How do you calculate this RMSE value?
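In general, an RMSE over N detected keypoints is the square root of the mean squared Euclidean residual between the transformed detections and their reference counterparts. A sketch of that definition (whether the optimizer uses exactly this residual and pairing is an assumption; the point values are illustrative):

```python
import math

def rmse(predicted, reference):
    """Square root of the mean squared Euclidean distance between paired 3D points."""
    sq = [sum((p - q) ** 2 for p, q in zip(pp, qq))
          for pp, qq in zip(predicted, reference)]
    return math.sqrt(sum(sq) / len(sq))

pred = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
ref  = [(1.1, 0.0, 0.0), (0.0, 0.9, 0.0)]
print(rmse(pred, ref))  # sqrt((0.01 + 0.01) / 2) ≈ 0.1
```

Note that an RMSE in sensor units (meters for lidar/radar, pixels for a camera reprojection error) is only comparable across sensors after normalization, which is worth checking when interpreting the reported value.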
Is it possible to change the thickness of styrofoam?
Hello Guys,
First, I want to take a moment to appreciate this amazing work. The calibration process is very precisely explained in the paper, and I am also thinking of using the same methods for my work on sensor fusion. I will be calibrating lidar and radar with this repo. My application is a bit tricky: we have to use measurements that are up to about 80 meters away from the sensors.
Is it better in my case to also calibrate the sensors by placing the calibration tool a bit farther away from them? Let's say 25 meters. If so, I have to change the dimensions of the styrofoam. I was also thinking of increasing the thickness of the styrofoam, as we are doing the experiments in an open environment; in windy conditions, thicker styrofoam might be more stable. Let's say 12 cm. Does the thickness of the styrofoam affect the radar points interacting with the reflector? Also, can you tell me what type of styrofoam you used? Was it perhaps EPS 040?
Mono-camera and radar only
Hi,
Wondering if this can be used without a lidar or stereo cameras?
Also wondering if it can be used with raw radar data rather than a point cloud?
Cheers
Question on Calibration Board Pattern
Hi Guys,
The sensors used are a mono camera, lidar, and radar. I would like to ask whether the proposed pattern can be used with a mono camera, and also about the size of the trihedral corner reflector.
Thanks.
Segmentation fault,Core Dump
Hi,
I am using an RSLidar instead of a Velodyne, with a converter ([https://github.com/EpsAvlc/rslidar_to_velodyne]). After the conversion, I tried running it and got a segmentation fault again; after careful debugging, I was able to find that the issue is in the keypoint detector.
I am not sure what type of data it is expecting or what the issue is.
Update: we narrowed down the issue; we cannot convert the incoming PointCloud2 message to Velodyne::Point, and there is no provision in PCL for getting XYZI plus ring.
3D coordinates to 2D eucledian coordinates
Hi there,
Why do we need to convert 3D points to spherical coordinates and set the elevation angle to 0, instead of using an orthogonal projection by taking (x, y) directly?
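A likely reason (my reading, not an authoritative answer) is that the radar actually measures in polar/spherical coordinates, i.e. range and azimuth with essentially no elevation, so zeroing the elevation compares quantities in the sensor's own measurement space, whereas an orthogonal (x, y) projection would shorten the range of any point off the sensor plane. The conversion and the difference can be sketched as:

```python
import math

def to_spherical(x, y, z):
    """Range, azimuth, elevation of a 3D point (radar-style coordinates)."""
    rng = math.sqrt(x * x + y * y + z * z)
    azimuth = math.atan2(y, x)
    elevation = math.atan2(z, math.hypot(x, y))
    return rng, azimuth, elevation

# A point 1 m ahead and 1 m up: its orthogonal (x, y) projection has
# length 1.0, but the range the radar actually measures is sqrt(2).
rng, az, el = to_spherical(1.0, 0.0, 1.0)
print(rng, math.degrees(el))  # range ≈ 1.414, elevation 45 degrees
```

Setting el to 0 keeps the measured rng and azimuth intact while discarding the unobserved elevation, which is what the orthogonal projection would not do.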
Problem with optimization function "error: Parsing coordinates for the following line"
Hi there, I got a problem when I tried to run the optimization node with the example data provided.
I ran the commands as the README.md file illustrates, but I got the error "Parsing coordinates for the following line" when running
- python3 src/main.py --lidar data/example_data/lidar.csv --camera data/example_data/camera.csv --radar data/example_data/radar.csv --calibration-mode 3 --visualise
or when using rosrun to start the optimization node:
- rosrun optimization server.py
How can I solve this problem?
single camera / single radar extrinsic calibration
Hello,
Thanks for sharing the nice work.
I have a question regarding the system capabilities.
We don't have a lidar in our vehicle, so I would like to focus on radar-to-camera extrinsic estimation.
When I check the custom calibration board, it doesn't have a checkerboard pattern combined with the radar reflector. Rather, it has the four black holes for the lidar returns combined with a radar reflector. Would the same custom calibration board work with a mono camera and radar?
We also have the possibility of using a stereo system, so I would like to know the capabilities without lidar.
Thanks in advance,
Kind Regards,
Ugur
Question about Mono_detector
First of all, thank you for the wonderful work you have done here!
I have a question regarding the mono_detector though.
My calibration results are as follows.
The lidar and radar results seem to be fine, but there is something wrong with the camera result.
I'm currently using the mono_detector and was wondering whether it is even possible to get a depth value from a mono camera.
I don't think the parameters from intrinsic.ini are used...
If you can let me know, it would be greatly appreciated.
Thanks in advance
single lidar calibration
So I have a question: does this work for single lidar-to-vehicle calibration? Is this code OK for single lidar calibration?
radar_converter node missing
Hi, I just noticed that the radar_converter node is missing in the launch file. Does anyone have a solution for that? Thanks.
Porting to ROS2 or using with ROS2 topics
Hey!
I would like to ask whether it's possible to port this package to ROS2. Has anyone tried that? Or is it possible to use ROS2 topics with data from lidar, radar, and cameras in ROS1 (this package)?
‘marker’ does not name a type
When I catkin_make the workspace, I get this error:
In member function ‘void lidar_detector::LidarDetectorNode::publishMarker(const pcl::PointCloud<pcl::PointXYZ>&, const Header&)’:
/home/kuang/catkin_ws/src/multi_sensor_calibration/lidar_detector/src/lib/node_lib.cpp:84:11: error: ‘marker’ does not name a type
auto marker = toMarker(pattern.at(i), header);
Is something wrong with the C++ version?
Thank you!
YAML::LoadFile(object_points_path).as<std::vector<cv::Point3f>>() error
required from here:
/usr/include/yaml-cpp/node/impl.h:116:27: error: incomplete type ‘YAML::convert<cv::Point3_<float> >’ used in nested name specifier
if (convert<T>::decode(node, t))
Trihedral Corner Reflectors alternatives
I am searching for trihedral corner reflectors on the internet. Do you have any recommendations about size and quality?
If you could send me a purchase link, I would appreciate it.
I found this one on Amazon. What are your thoughts about it?
[https://www.amazon.com/Davis-Emergency-Deluxe-Radar-Reflector/dp/B002MJKNPY/ref=sr_1_1?keywords=Radar+Reflector&qid=1567410793&rnid=2941120011&s=electronics&sr=1-1]
dimension of the conic cut of the circles
What are the dimensions of the conic cut of the circles? The back side is 15 cm, but nothing is specified about the other side.
Package Configuration "radar_msgs" missing
Hi,
I'm trying to install the package using your guide. When I get to the 'catkin_make' I get the following error:
CMake Error at /opt/ros/melodic/share/catkin/cmake/catkinConfig.cmake:83 (find_package):
Could not find a package configuration file provided by "radar_msgs" with any of the following names:
radar_msgsConfig.cmake
radar_msgs-config.cmake
Add the installation prefix of "radar_msgs" to CMAKE_PREFIX_PATH or set "radar_msgs_DIR" to a directory containing one of the above files. If "radar_msgs" provides a separate development package or SDK, be sure it has been installed.
Call Stack (most recent call first):
multi_sensor_calibration/radar_detector/CMakeLists.txt:6 (find_package)
-- Configuring incomplete, errors occurred!
Makefile:320: recipe for target 'cmake_check_build_system' failed
make: *** [cmake_check_build_system] Error 1
Invoking "make cmake_check_build_system" failed
Is this a problem you have encountered before and is there a solution to it?
Thanks.