1. Write a short recap of the four tracking steps and what you implemented there (EKF, track management, data association, camera-lidar sensor fusion).
(1) Extended Kalman Filter
We implemented the system matrix F, the process noise covariance Q, and the predict and update steps.
We introduced the nonlinear measurement function h(x) and successfully applied it in our Kalman filter.
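The building blocks above can be sketched as follows. This is an illustrative sketch, not the exact project code: the time step dt, noise intensity q, and a 6D constant-velocity state [x, y, z, vx, vy, vz] are assumptions.

```python
import numpy as np

dt = 0.1   # assumed time step between frames
q = 3.0    # assumed process noise intensity

def F():
    """System matrix for a constant-velocity motion model."""
    f = np.eye(6)
    f[0, 3] = f[1, 4] = f[2, 5] = dt
    return f

def Q():
    """Process noise covariance for the constant-velocity model."""
    q1 = (dt**3) / 3 * q
    q2 = (dt**2) / 2 * q
    q3 = dt * q
    return np.array([[q1, 0, 0, q2, 0, 0],
                     [0, q1, 0, 0, q2, 0],
                     [0, 0, q1, 0, 0, q2],
                     [q2, 0, 0, q3, 0, 0],
                     [0, q2, 0, 0, q3, 0],
                     [0, 0, q2, 0, 0, q3]])

def predict(x, P):
    """Predict state and covariance to the next time step."""
    f = F()
    return f @ x, f @ P @ f.T + Q()

def update(x, P, z, H, R):
    """Measurement update. For the camera, H is the Jacobian of h(x)
    and the residual uses z - h(x) instead of z - H @ x."""
    gamma = z - H @ x                # residual
    S = H @ P @ H.T + R             # residual covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ gamma
    P = (np.eye(6) - K @ H) @ P
    return x, P
```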
(2) Track Management
We introduced confidence values that are assigned to each track:
Track score
Track state: confirmed, tentative, initialized
We implemented the following functions
Initialization of new tracks
Update of the state and score of a track
Deletion of old tracks
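The track-management logic above can be sketched as follows. The class name, the scoring window, and the thresholds are illustrative assumptions, not the exact project values.

```python
window = 6                 # assumed number of past frames used for the score
confirmed_threshold = 0.8  # assumed score above which a track is confirmed
delete_threshold = 0.6     # assumed score below which a confirmed track is deleted
max_P = 9.0                # assumed position-covariance limit for deletion

class Track:
    def __init__(self):
        self.score = 1.0 / window   # new tracks start with a low score
        self.state = 'initialized'

    def update_score(self, detected):
        """Raise the score on a detection, lower it on a miss,
        then update the track state accordingly."""
        if detected:
            self.score = min(self.score + 1.0 / window, 1.0)
        else:
            self.score = max(self.score - 1.0 / window, 0.0)
        if self.score > confirmed_threshold:
            self.state = 'confirmed'
        elif self.state != 'confirmed':
            self.state = 'tentative'

    def should_delete(self, P_pos):
        """Delete confirmed tracks whose score dropped, or any track
        whose position covariance grew too large."""
        return (self.state == 'confirmed' and self.score < delete_threshold) \
            or P_pos > max_P
```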
(3) Data Association
We implemented the following functions
Simple Nearest Neighbor (SNN)
Mahalanobis Distance (MHD)
Gating
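The three pieces above fit together as in the following sketch. The gating threshold is an assumption (a precomputed chi-square quantile), and the greedy SNN loop is illustrative rather than the exact project code.

```python
import numpy as np

GATE = 12.84  # assumed gate: roughly the 99.5% chi-square quantile for df=3 (3D lidar)

def mahalanobis(gamma, S):
    """Mahalanobis distance of residual gamma under residual covariance S."""
    return float(gamma @ np.linalg.inv(S) @ gamma)

def in_gate(dist):
    """Gating: discard unlikely track-measurement pairs."""
    return dist < GATE

def simple_nearest_neighbor(dist_matrix):
    """Greedy SNN: repeatedly pick the globally smallest gated distance,
    then remove that track and measurement from further consideration."""
    dist = dist_matrix.astype(float).copy()
    pairs = []
    while np.isfinite(dist).any() and in_gate(dist.min()):
        i, j = np.unravel_index(dist.argmin(), dist.shape)
        pairs.append((i, j))
        dist[i, :] = np.inf
        dist[:, j] = np.inf
    return pairs
```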
(4) Camera-Lidar Sensor Fusion
We implemented the following functions
Transformation from sensor coordinates to vehicle coordinates
Transformation from vehicle coordinates to sensor coordinates
Projection of a 3d point or a 6d state vector to 2d image space
Nonlinear camera measurement function h(x)
Extended Kalman Filter
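The nonlinear camera measurement function and its Jacobian can be sketched as below. The calibration values (focal lengths f_i, f_j and principal point c_i, c_j) are illustrative assumptions; in the project they come from the sensor calibration data.

```python
import numpy as np

f_i, f_j = 2000.0, 2000.0   # assumed focal lengths [px]
c_i, c_j = 960.0, 640.0     # assumed principal point [px]

def h(x):
    """Project a 6D state (position in vehicle coordinates, x-axis forward)
    into 2D image coordinates. Nonlinear because of the division by x[0]."""
    px, py, pz = x[0], x[1], x[2]
    if abs(px) < 1e-6:
        raise ValueError('projection undefined for px ~ 0')
    i = c_i - f_i * py / px   # image column
    j = c_j - f_j * pz / px   # image row
    return np.array([i, j])

def H_jacobian(x):
    """Jacobian of h(x), used in place of a linear H in the EKF update."""
    px, py, pz = x[0], x[1], x[2]
    H = np.zeros((2, 6))
    H[0, 0] = f_i * py / px**2
    H[0, 1] = -f_i / px
    H[1, 0] = f_j * pz / px**2
    H[1, 2] = -f_j / px
    return H
```

A point straight ahead of the camera projects onto the principal point, which is a quick sanity check for both the function and its Jacobian.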
2. Which results did you achieve?
We built an object tracking system based on an extended Kalman filter and camera-lidar sensor fusion.
Our fusion-based tracking system shows better performance than an object tracking system that uses lidar only.
3. Which part of the project was most difficult for you to complete, and why?
Understanding and implementing the Extended Kalman Filter was the most challenging part. It requires some mathematical background, for example in linear algebra, probability, statistics, and differential equations. In the end, I managed to understand the Extended Kalman Filter and implement it.
4. Do you see any benefits in camera-lidar fusion compared to lidar-only tracking (in theory and in your concrete results)?
Yes. Comparing the results of Step 3 and Step 4, Step 4, which uses the camera-lidar fusion system, shows better performance than Step 3, which uses lidar only.
5. Which challenges will a sensor fusion system face in real-life scenarios? Did you see any of these challenges in the project?
Extracting meaningful data from a wide range of sensors with varying implementations, any of which can introduce device error, noise, and flaws into the data-gathering process
Degraded performance under real-life conditions (e.g., adverse weather or lighting)
Power consumption
6. Can you think of ways to improve your tracking results in the future?
Extending the motion model to account for accelerations
Changing the association algorithm from Simple Nearest Neighbor (SNN) to Global Nearest Neighbor (GNN) or Joint Probabilistic Data Association (JPDA)
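Accounting for accelerations could be done with a constant-acceleration motion model. A sketch of the corresponding system matrix for an assumed 9D state [x, y, z, vx, vy, vz, ax, ay, az] (an illustrative extension, not project code):

```python
import numpy as np

def F_ca(dt):
    """System matrix for a constant-acceleration model over a 9D state
    [x, y, z, vx, vy, vz, ax, ay, az]."""
    f = np.eye(9)
    for k in range(3):
        f[k, k + 3] = dt            # position gains from velocity
        f[k, k + 6] = 0.5 * dt**2   # position gains from acceleration
        f[k + 3, k + 6] = dt        # velocity gains from acceleration
    return f
```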
Set up
1. Cloning the project repository
$> git clone https://github.com/nieuwmijnleven/sensor-fusion-and-object-tracking
$> cd ./sensor-fusion-and-object-tracking
2. Downloading a pretrained model (darknet, fpn_resnet)
Download the pretrained darknet model from https://drive.google.com/file/d/1Pqx7sShlqKSGmvshTYbNDcUEYyZwfn3A/view?usp=sharing
Unzip the file into ./sensor-fusion-and-object-tracking/tools/objdet_models/darknet/pretrained
Download the pretrained fpn_resnet model from https://drive.google.com/file/d/1RcEfUIF1pzDZco8PJkZ10OL-wLL2usEj/view?usp=sharing
Unzip the file into ./sensor-fusion-and-object-tracking/tools/objdet_models/fpn_resnet/pretrained
3. Downloading precomputed results
Download the result files from https://drive.google.com/drive/folders/1IkqFGYTF6Fh_d8J3UjQOSNJ2V42UDZpO?usp=sharing
Unzip the files into ./sensor-fusion-and-object-tracking/results
Implementations
STEP1
$> python loop_over_dataset_STEP1.py
(1) Visualization
(2) Experimental Results
STEP2
$> python loop_over_dataset_STEP2.py
(1) Visualization
(2) Experimental Results
STEP3
$> python loop_over_dataset_STEP3.py
(1) Visualization
(2) Experimental Results
STEP4
$> python loop_over_dataset_STEP4.py
(1) Visualization
(2) Experimental Results