The iterative closest point (ICP) algorithm estimates the transformation between two point clouds such that corresponding points in the two sets are aligned in space. We start with an initial guess of the transformation that would align the two point clouds, and the points of set one are transformed into the reference frame of set two. Each point in one data set is then paired with the closest point in the other data set to form correspondence pairs. To solve for the optimal transformation, a point-to-point error metric is used, in which the sum of the squared distances between the points in each correspondence pair is minimized. The process is repeated with the transformation just obtained as the new initial guess, and it is iterated until the error becomes smaller than a threshold or stops changing.
For data association between the target and source point clouds, a brute-force nearest-neighbour search is used. When multiple source points are matched to the same target point, the pair with the minimum distance is retained while the others are rejected. Moreover, unmatched target points are removed, which leaves only the best one-to-one correspondences. The following steps are then performed on the resulting point sets to solve for the optimal rotation and translation.
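The one-to-one filtering described above can be sketched in NumPy as follows (function and array names are illustrative, not taken from the original implementation):

```python
import numpy as np

def one_to_one_matches(source, target):
    """Brute-force nearest-neighbour association with one-to-one filtering.

    Returns index pairs (i, j) meaning source[i] is matched to target[j],
    keeping only the minimum-distance pair when several source points claim
    the same target point; target points never claimed are simply unpaired.
    """
    # Full pairwise squared-distance table (brute-force search).
    d2 = ((source[:, None, :] - target[None, :, :]) ** 2).sum(axis=2)
    nn = d2.argmin(axis=1)                 # closest target for each source point
    pairs = []
    for j in np.unique(nn):                # target points that were matched
        cand = np.where(nn == j)[0]        # all source points claiming target j
        i = cand[d2[cand, j].argmin()]     # keep only the closest claimant
        pairs.append((int(i), int(j)))
    return pairs
```

For large scans a k-d tree would replace the quadratic distance table, but brute force matches the approach described above and is simple to verify.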
- Step 1: Calculate the centroids of both point clouds.
- Step 2: Compute the spread (cross-covariance) matrix.
- Step 3: Use SVD to obtain the optimal rotation matrix.
- Step 4: Obtain the translation vector by applying the rotation to the centroids.
Repeat until convergence.
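The four steps above amount to the standard SVD-based (Kabsch) solution for the rigid transform. A minimal NumPy sketch, assuming the one-to-one correspondences have already been established and the two arrays are stored in matching order:

```python
import numpy as np

def best_fit_transform(src, tgt):
    """Least-squares rotation R and translation t such that R @ src_i + t ~ tgt_i."""
    # Step 1: centroids of both point clouds.
    c_src = src.mean(axis=0)
    c_tgt = tgt.mean(axis=0)
    # Step 2: spread (cross-covariance) matrix of the centred sets.
    H = (src - c_src).T @ (tgt - c_tgt)
    # Step 3: SVD of the spread matrix gives the optimal rotation.
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:       # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    # Step 4: translation from the transformed centroid.
    t = c_tgt - R @ c_src
    return R, t
```

In the ICP loop this solve is repeated: the source set is re-transformed with the new estimate, correspondences are re-paired, and the incremental transforms are accumulated until convergence.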
The implementation was tested in the Gazebo simulator (version 7.0). A Hokuyo laser scanner was mounted on top of a TurtleBot2, and point clouds were recorded in the lidar's frame of reference at two different time instances. The point cloud recorded first is referred to as the target set, while the current one is referred to as the source set. The ICP algorithm was used to estimate the transformation between these sets in the target's frame of reference. In this way, the motion of the lidar, and consequently the motion of the TurtleBot2, was evaluated between two known time steps.
Test 1 : Translational motion of 1 m along the positive x-axis
Resultant Transformation Matrix:
M :
[[ 0.99943548  0.03359631  1.01173327]
 [-0.03359631  0.99943548  0.03959641]
 [ 0.          0.          1.        ]]
Iterations :
Test 2 : Rotational motion about the positive z-axis
Resultant Transformation Matrix:
M :
[[ 0.97110935 -0.23863495 -0.08952401]
 [ 0.23863495  0.97110935 -0.02823182]
 [ 0.          0.          1.        ]]
Iterations :
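The planar motion can be read directly off a resulting homogeneous matrix: the heading change comes from the rotation block and the displacement from the last column. A small sketch using the Test 1 matrix reported above:

```python
import numpy as np

# Resultant 3x3 homogeneous transform from Test 1 (values copied from above).
M = np.array([[ 0.99943548, 0.03359631, 1.01173327],
              [-0.03359631, 0.99943548, 0.03959641],
              [ 0.0,        0.0,        1.0       ]])

theta = np.arctan2(M[1, 0], M[0, 0])   # heading change in radians
tx, ty = M[0, 2], M[1, 2]              # translation components

print(np.degrees(theta), tx, ty)       # roughly -1.9 deg and 1.01 m along x
```

The recovered translation of about 1.01 m along x matches the commanded 1 m motion, with a small residual rotation and y-offset as estimation error.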
Reference :
State Estimation and Localization for Self-Driving Cars, University of Toronto (Coursera)