The experiment code for the RCAR 2023 paper 'Knowledge Distillation on Driving Intention Generator: Learn Human-like Semantic Reasoning'.
Collect data from the CARLA simulator (manual driving mode is supported):
python main.py
Process the KITTI dataset to obtain the nav map corresponding to each position:
python pose.py
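In the standard KITTI odometry format, each line of a pose file is a flattened 3x4 camera-to-world matrix (12 floats). As a minimal sketch of the parsing step pose.py has to perform (the script's actual logic for building the nav map is not shown here), the poses can be loaded like this:

```python
import numpy as np

def load_kitti_poses(path):
    """Load KITTI odometry poses: each line is a flattened 3x4
    transformation matrix; append [0, 0, 0, 1] to make it 4x4."""
    poses = []
    with open(path) as f:
        for line in f:
            mat = np.array(line.split(), dtype=float).reshape(3, 4)
            poses.append(np.vstack([mat, [0.0, 0.0, 0.0, 1.0]]))
    return np.stack(poses)

# Demo with a single identity pose (vehicle at the origin):
with open("poses_demo.txt", "w") as f:
    f.write("1 0 0 0 0 1 0 0 0 0 1 0\n")
poses = load_kitti_poses("poses_demo.txt")
print(poses.shape)      # (1, 4, 4)
print(poses[0, :3, 3])  # translation component: [0. 0. 0.]
```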
View the .npy-format point cloud data collected by the lidar:
python NPYViewer.py
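A lidar scan stored as .npy is just an array of points, typically one row per point. The following hypothetical example (the array shape and file name are assumptions, not taken from NPYViewer.py) shows the round trip NPYViewer.py performs before rendering:

```python
import numpy as np

# Create a small synthetic lidar-style cloud and save it as .npy.
cloud = np.random.default_rng(0).uniform(-50, 50, size=(1024, 3)).astype(np.float32)
np.save("scan_demo.npy", cloud)

# Load it back, as the viewer would, and inspect its layout.
points = np.load("scan_demo.npy")
print(points.shape)  # (1024, 3): x, y, z per point
print(points.min(), points.max())  # rough spatial extent of the scan
```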
Process the data collected from the CARLA simulator:
- A sequence of time stamps:
  - Generate the real pm: pm.py
  - Feed data into the model to generate trajectories: img2pm.py
  - Feed data into the model to generate trajectories with multiple weathers or a fake nav map: img2pm_weather.py, img2pm_fakenav.py
- A single time stamp: *_single.py
These scripts also support generating an mp4 video for inspection.
The folder 'Train' contains the model definition scripts.
The evaluation of a trajectory includes three indicators: IoU, cover rate, and yaw angle change. The core implementation is in ./utils/evaluation.py.
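As a minimal sketch of what these three indicators can look like (the definitions below are assumptions for illustration; the authoritative versions are in ./utils/evaluation.py), with trajectories represented as binary masks and as (N, 2) point sequences:

```python
import numpy as np

def iou(pred, gt):
    """IoU of two binary trajectory masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 0.0

def cover_rate(pred, gt):
    """Fraction of ground-truth pixels covered by the prediction
    (an assumed definition)."""
    total = gt.astype(bool).sum()
    return np.logical_and(pred.astype(bool), gt.astype(bool)).sum() / total if total else 0.0

def yaw_change(traj):
    """Total absolute heading change along an (N, 2) trajectory."""
    d = np.diff(traj, axis=0)
    yaw = np.arctan2(d[:, 1], d[:, 0])
    return np.abs(np.diff(yaw)).sum()

pred = np.zeros((4, 4)); pred[1:3, 1:3] = 1  # 4-pixel prediction
gt = np.zeros((4, 4)); gt[1:3, 1:4] = 1      # 6-pixel ground truth
print(iou(pred, gt))         # 4 / 6
print(cover_rate(pred, gt))  # 4 / 6
straight = np.array([[0, 0], [1, 0], [2, 0]])
print(yaw_change(straight))  # 0.0 for a straight path
```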
*Before using the evaluation scripts, you need to pre-generate all the trajectories.
- A sequence of time stamps:
  - Specified interval: eval_interval.py
  - Only turning: eval_turning.py with the parameter --turning
  - Only straight: eval_turning.py with the parameter --st
- A single time stamp: eval_single.py
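The --turning and --st flags select which segments of the trip are evaluated. A minimal sketch of that flag interface (the real eval_turning.py likely defines additional options; only the two flags are taken from the text above):

```python
import argparse

def build_parser():
    p = argparse.ArgumentParser(description="Evaluate generated trajectories")
    p.add_argument("--turning", action="store_true",
                   help="evaluate only the turning segments")
    p.add_argument("--st", action="store_true",
                   help="evaluate only the straight-ahead segments")
    return p

# Example invocation equivalent to: python eval_turning.py --turning
args = build_parser().parse_args(["--turning"])
print(args.turning, args.st)  # True False
```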
Here are some utility scripts:
- monitor.py: monitors the turns in the entire trip and their corresponding time stamps.
- total_average.py: calculates the averages of the turn evaluation and the straight-ahead evaluation.
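A hypothetical sketch of the averaging step total_average.py performs (the input format is an assumption; here the turn and straight-ahead scores are plain lists):

```python
def total_average(turn_scores, straight_scores):
    """Average the turn evaluation and the straight-ahead
    evaluation separately."""
    avg = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return avg(turn_scores), avg(straight_scores)

turn_avg, straight_avg = total_average([1.0, 0.5], [1.0, 0.0, 0.5])
print(turn_avg, straight_avg)  # 0.75 0.5
```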