naokiakai / als_ros
An advanced localization system for ROS use.
License: Apache License 2.0
License: Apache License 2.0
I roslaunched the package with roslaunch als_ros mcl.launch
and tried to publish the initial pose with 2D Pose Estimate
in RViz. The odom tf drifted dramatically when I published the initial pose. Could you suggest why? Thank you.
Please see my situation at this link: https://drive.google.com/file/d/1eudTYozaZnjegLDlOgNgk9YWT8TFK5ts/view?usp=sharing
Hi
I found that there is a newly added slamer node, but I couldn't find any description or elaboration regarding it. Could you briefly explain its functions?
Best,
Samuel
I want to add an IMU to my robot to increase accuracy when building maps. Does this library support simultaneous processing of both lidar and IMU data?
Thank you for reading.
Hello, how do we get the initial pose? Does the algorithm have relocalization capabilities?
thanks!
Hi there, great work on this. Wondering if you have any plans to port this over to ROS 2, given ROS 1's upcoming deprecation? Just curious. Cheers.
Does the mcl localizer support a 180-degree lidar? In my setup I tried to replace AMCL, but the robot started rotating and shifting strongly in the Gazebo simulation. Thank you very much!
Hello NaokiAkai,
First of all, thanks for the hard work. How difficult would it be to port this package to ROS 2? I have gone over the ROS 2 learning curve and have been able to port a few simple things, but I cannot estimate how difficult an undertaking it would be to get als_ros running on ROS 2.
Best Regards,
C.A.
Hi,
I have implemented this package on a simulated robot. With use_mrf_failure_detector:=true and use_gl_pose_sampler:=true, I realised that the gl_pose_sampler and mrf_failure_detector nodes consume a lot of CPU, about 100% on two CPU cores. Is this normal? Is there a way to reduce the CPU usage?
Thanks.
Hello,
I have been practising als_ros in a simulation environment. It has been working very nicely.
Do we have to have a map when using als_ros? Is it possible to use it when there is no map file?
Also, what is slamer mode and how can we use it?
Best Regards.
C.
Hi Naoki,
I am perusing your GLPoseSampler, which implements the free-space features.
One point I think is worth confirming is here.
Shouldn't we use keyScans_[0], which is the most recent keyframe?
Note that generatePoses() generates poses with currentOdomPose, which corresponds to keyScans_[0].
hello~
Does this project require CUDA?
Hello,
I am trying to get als_ros working, but as soon as I run mcl.launch I get the error below, on Noetic. The tf published by als_ros is also filled only with NaN.
I'm not really sure where the error could be coming from; everything is set up correctly from odom down, and I can run AMCL with navigation without problems. Do you have any idea where I could look further? I've not modified mcl.launch at all.
[ERROR] [1684233719.104600625]: Ignoring transform for child_frame_id "odom" from authority "unknown_publisher" because of an invalid quaternion in the transform (nan nan nan nan)
process[mcl-1]: started with pid [79721]
[ INFO] [1684231427.306997742]: MCL is ready to perform
MCL: x = nan [m], y = nan [m], yaw = nan [deg]
Odom: x = 0.844567 [m], y = 0.892555 [m], yaw = 5.77389 [deg]
total likelihood = nan
average likelihood = nan
max likelihood = nan
effective sample size = nan
reliability = 0.0354446 (mae = 0.36875 [m], th = 0.225176 [m])
Error: TF_NAN_INPUT: Ignoring transform for child_frame_id "odom" from authority "unknown_publisher" because of a nan value in the transform (nan nan nan) (nan nan nan nan)
at line 240 in /tmp/binarydeb/ros-noetic-tf2-0.7.6/src/buffer_core.cpp
Error: TF_DENORMALIZED_QUATERNION: Ignoring transform for child_frame_id "odom" from authority "unknown_publisher" because of an invalid quaternion in the transform (nan nan nan nan)
at line 255 in /tmp/binarydeb/ros-noetic-tf2-0.7.6/src/buffer_core.cpp
MCL: x = nan [m], y = nan [m], yaw = nan [deg]
Odom: x = 0.844523 [m], y = 0.892563 [m], yaw = 5.77502 [deg]
total likelihood = nan
average likelihood = nan
max likelihood = nan
effective sample size = nan
reliability = 0.00134853 (mae = 0.373445 [m], th = 0.225176 [m])
Error: TF_NAN_INPUT: Ignoring transform for child_frame_id "odom" from authority "unknown_publisher" because of a nan value in the transform (nan nan nan) (nan nan nan nan)
at line 240 in /tmp/binarydeb/ros-noetic-tf2-0.7.6/src/buffer_core.cpp
Error: TF_DENORMALIZED_QUATERNION: Ignoring transform for child_frame_id "odom" from authority "unknown_publisher" because of an invalid quaternion in the transform (nan nan nan nan)
at line 255 in /tmp/binarydeb/ros-noetic-tf2-0.7.6/src/buffer_core.cpp
MCL: x = nan [m], y = nan [m], yaw = nan [deg]
Odom: x = 0.844484 [m], y = 0.892575 [m], yaw = 5.77925 [deg]
total likelihood = nan
average likelihood = nan
max likelihood = nan
effective sample size = nan
reliability = 0.0001 (mae = 0.364485 [m], th = 0.225176 [m])
Error: TF_NAN_INPUT: Ignoring transform for child_frame_id "odom" from authority "unknown_publisher" because of a nan value in the transform (nan nan nan) (nan nan nan nan)
at line 240 in /tmp/binarydeb/ros-noetic-tf2-0.7.6/src/buffer_core.cpp
Error: TF_DENORMALIZED_QUATERNION: Ignoring transform for child_frame_id "odom" from authority "unknown_publisher" because of an invalid quaternion in the transform (nan nan nan nan)
at line 255 in /tmp/binarydeb/ros-noetic-tf2-0.7.6/src/buffer_core.cpp
Hey guys, any guidance on how to port the application to ROS 2?
People block part of the laser, and the position keeps changing. How should I deal with this?
During the localization process, the laser has been unable to match the map well, even though the mapping environment is the same as the localization environment, without any change. The particles are also scattered from the start.
Hi Naoki,
Thank you for open sourcing the great work.
We are trying als_ros for map-based localization with a 2D lidar with a 10 m range.
However, when the person carrying the lidar turns in a narrow cubicle, als_ros drifts significantly in orientation, as shown in the attached video.
Could you please advise us on how to mitigate this issue?
We tried bumping the number of particles up to 1000, to no avail.
Surprisingly, we also tried AMCL in ROS 1, which did not suffer from this problem.
Your early reply will be deeply appreciated, as our project's deadline is fast approaching.
The robot goes through the inflation layer when running on the hardware.
I got the errors below when running catkin build als_ros.
Errors << als_ros:make /home/gisen/ros_workspaces/alk_ws/logs/als_ros/build.make.002.log
CMakeFiles/classifier_dataset_generator.dir/src/classifier_dataset_generator.cpp.o: In function `als_ros::ClassifierDatasetGenerator::ClassifierDatasetGenerator()':
classifier_dataset_generator.cpp:(.text._ZN7als_ros26ClassifierDatasetGeneratorC2Ev[_ZN7als_ros26ClassifierDatasetGeneratorC5Ev]+0x1a4): undefined reference to `cv::Mat::Mat()'
CMakeFiles/classifier_dataset_generator.dir/src/classifier_dataset_generator.cpp.o: In function `als_ros::ClassifierDatasetGenerator::mapCB(boost::shared_ptr<nav_msgs::OccupancyGrid_<std::allocator<void> > const> const&)':
classifier_dataset_generator.cpp:(.text._ZN7als_ros26ClassifierDatasetGenerator5mapCBERKN5boost10shared_ptrIKN8nav_msgs14OccupancyGrid_ISaIvEEEEE[_ZN7als_ros26ClassifierDatasetGenerator5mapCBERKN5boost10shared_ptrIKN8nav_msgs14OccupancyGrid_ISaIvEEEEE]+0x22d): undefined reference to `cv::Mat::Mat(int, int, int)'
classifier_dataset_generator.cpp:(.text._ZN7als_ros26ClassifierDatasetGenerator5mapCBERKN5boost10shared_ptrIKN8nav_msgs14OccupancyGrid_ISaIvEEEEE[_ZN7als_ros26ClassifierDatasetGenerator5mapCBERKN5boost10shared_ptrIKN8nav_msgs14OccupancyGrid_ISaIvEEEEE]+0x4ed): undefined reference to `cv::Mat::Mat(int, int, int)'
collect2: error: ld returned 1 exit status
make[2]: *** [/home/gisen/ros_workspaces/alk_ws/devel/.private/als_ros/lib/als_ros/classifier_dataset_generator] Error 1
make[1]: *** [CMakeFiles/classifier_dataset_generator.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
CMakeFiles/mae_classifier_learning.dir/src/mae_classifier_learning.cpp.o: In function `als_ros::ClassifierDatasetGenerator::ClassifierDatasetGenerator()':
mae_classifier_learning.cpp:(.text._ZN7als_ros26ClassifierDatasetGeneratorC2Ev[_ZN7als_ros26ClassifierDatasetGeneratorC5Ev]+0x1a4): undefined reference to `cv::Mat::Mat()'
collect2: error: ld returned 1 exit status
make[2]: *** [/home/gisen/ros_workspaces/alk_ws/devel/.private/als_ros/lib/als_ros/mae_classifier_learning] Error 1
make[1]: *** [CMakeFiles/mae_classifier_learning.dir/all] Error 2
make: *** [all] Error 2
I think ${OpenCV_LIBRARIES} should be added to target_link_libraries in CMakeLists.txt.
I will send a PR; please confirm it.
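A minimal sketch of the proposed fix, assuming the target names seen in the linker errors above (classifier_dataset_generator, mae_classifier_learning); the exact contents of the package's CMakeLists.txt are not shown here, so treat this as illustrative:

```cmake
# Hypothetical excerpt from als_ros/CMakeLists.txt; target names are taken
# from the linker output above, everything else is assumed.
find_package(OpenCV REQUIRED)

# Linking against ${OpenCV_LIBRARIES} resolves the undefined references
# to cv::Mat::Mat() reported by the linker.
target_link_libraries(classifier_dataset_generator
  ${catkin_LIBRARIES}
  ${OpenCV_LIBRARIES}
)
target_link_libraries(mae_classifier_learning
  ${catkin_LIBRARIES}
  ${OpenCV_LIBRARIES}
)
```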
Hi,
Thanks for sharing this project. I have tried to implement this package on my robot. Once my robot starts moving, the localized pose starts to drift and oscillate. May I know what the problem is? Could it be because my lidar is mounted upside down (inverted)?
Hello,
I'm trying to use als_ros on Noetic, but I can't run map_server and als_ros at the same time.
When I run:
roslaunch als_ros mcl.launch
I don't get any error until I run the following command in another terminal:
rosrun map_server map_server <myMap>
Then I get the following error in the terminal where I ran als_ros:
terminate called after throwing an instance of 'cv::Exception'
what(): OpenCV(4.5.4) /home/ubuntu/build_opencv/opencv/modules/core/src/matrix.cpp:250: error: (-215:Assertion failed) s >= 0 in function 'setSize'
[mcl-2] process has died [pid 21493, exit code -6, cmd /home/nvidia/catkin_ws/devel/lib/als_ros/mcl __name:=mcl __log:=/home/nvidia/.ros/log/e577bfa6-263f-11ee-95f6-48b02d059d63/mcl-2.log].
log file: /home/nvidia/.ros/log/e577bfa6-263f-11ee-95f6-48b02d059d63/mcl-2*.log
And /home/nvidia/.ros/log/e577bfa6-263f-11ee-95f6-48b02d059d63/mcl-2.log
does not exist.
I hope to replace amcl with als_ros, so I tried it on the Gazebo simulator.
But the following error occurs.
[ INFO] [1654619677.594540743, 18.175000000]: Got new plan
[ERROR] [1654619677.594762760, 18.176000000]: Extrapolation Error: Lookup would require extrapolation into the future. Requested time 18.151000000 but the latest data is at time 17.952000000, when looking up transform from frame [odom] to frame [map]
[ERROR] [1654619677.594791175, 18.176000000]: Global Frame: odom Plan Frame size 89: map
I changed /opt/ros/melodic/share/turtlebot3_navigation/launch/turtlebot3_navigation.launch
to disable amcl.
roslaunch turtlebot3_gazebo turtlebot3_world.launch
roslaunch als_ros mcl.launch laser_frame:=base_scan
roslaunch turtlebot3_navigation turtlebot3_navigation.launch
amcl has a transform_tolerance parameter used when publishing the coordinate transform over tf.
But it seems that als_ros does not have this parameter.
als_ros/src/als_ros/include/als_ros/MCL.h
Line 514 in 15349fd
So, I changed als_ros to resolve this problem.
--- a/src/als_ros/include/als_ros/MCL.h
+++ b/src/als_ros/include/als_ros/MCL.h
@@ -511,7 +511,8 @@ public:
if (rejectUnknownScan_ && (measurementModelType_ == 0 || measurementModelType_ == 1))
rejectUnknownScan();
- mclPoseStamp_ = scan_.header.stamp;
+ ros::Duration transform_tolerance(1.0); // TODO: This value should be set as a parameter.
+ mclPoseStamp_ = scan_.header.stamp + transform_tolerance;
As a result, I could use als_ros with navigation stack.
hi,
In my test, the robot tf will move to the wrong place, and the reliability value has been maintained at 99.99%, what parameters should I change?
thanks
video:
bagfile and map :
https://drive.google.com/drive/folders/1Fd11c5NSgx-NSjXd_ZWah2CLNwzLm-2k?usp=share_link
<node pkg="tf" type="static_transform_publisher" name="base_link_2_laser" args="0.06 0 0 0 0 0 /base_link /laser 100"/>
I can run the navigation function with AMCL, but when I use mcl.launch instead of amcl.launch, this error occurred:
Timed out waiting for transform from base_footprint to map to become available before running costmap, tf error: canTransform: target_frame map does not exist.
I didn't make any changes to mcl.launch. I hope someone can help me. o(╥﹏╥)o
Hi
Is there any ROS 2 release of als_ros? I would like to try it out with ROS 2.
Best,
Samuel
The license file says it is Apache License 2.0, while the README says it is MPL 2.0.