
Explainable AI (XAI) for Autonomous Vehicles

In this repository, we provide a compendium of explainable AI (XAI)-based approaches for autonomous driving. These studies primarily develop XAI methods for autonomous driving using vision-based, reinforcement learning (RL)-based, logic-based, and user-study-based approaches.

GRIT: Fast, Interpretable, and Verifiable Goal Recognition with Learned Decision Trees for Autonomous Driving, 2021
Summary: The authors present Goal Recognition with Interpretable Trees (GRIT), a goal recognition system that attains safety and interpretability objectives using decision trees trained on vehicle trajectory data.
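
A minimal sketch of the idea, assuming trajectory data has already been converted into hand-crafted features and labelled with the eventual goal; the feature names and data below are hypothetical placeholders, not GRIT's actual feature set or training pipeline:

```python
# Illustrative sketch: a decision tree trained on hand-crafted trajectory
# features to predict a vehicle's goal, in the spirit of GRIT. Features and
# data are hypothetical placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ["speed", "heading_error_to_goal", "distance_to_goal", "in_turn_lane"]
X = np.array([
    [8.0, 0.05, 40.0, 1],
    [7.5, 0.90, 35.0, 0],
    [3.0, 0.10, 12.0, 1],
    [9.0, 1.20, 60.0, 0],
])
y = np.array(["turn_left", "go_straight", "turn_left", "go_straight"])

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X, y)

# The learned tree can be printed and inspected directly, which is the
# interpretability/verifiability argument behind decision-tree goal recognition.
print(export_text(clf, feature_names=feature_names))
```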

Interpretable Goal Recognition in the Presence of Occluded Factors for Autonomous Vehicles, 2021
Summary: The study introduces an interpretable inverse-planning algorithm for joint goal and occluded factor inference, and shows how it can be integrated into Monte Carlo Tree Search action selection.

Uncovering Interpretable Internal States of Merging Tasks at Highway On-Ramps for Autonomous Driving Decision-Making, 2021
Summary: The authors treat the highway on-ramp merging task as a dynamic process and uncover its interpretable internal states with a joint Hidden Markov Model (HMM) and Gaussian Mixture Regression (GMR) framework.
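
A rough sketch of the latent-state idea using hmmlearn's GaussianHMM as a stand-in; the observation features are hypothetical and the paper's actual HMM + GMR formulation differs:

```python
# Illustrative sketch: inferring latent "internal states" of a merging
# manoeuvre from an observation sequence with a Gaussian HMM (hmmlearn as a
# stand-in for the paper's HMM + GMR model). Features are hypothetical.
import numpy as np
from hmmlearn import hmm

# Observations per time step: [relative_speed, gap_to_lead_vehicle].
obs = np.array([
    [2.5, 35.0], [2.0, 30.0], [1.5, 25.0], [1.0, 20.0],
    [0.5, 15.0], [0.0, 10.0], [-0.5, 7.0], [-1.0, 5.0],
])

model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
model.fit(obs)               # unsupervised fit on one merging sequence
states = model.predict(obs)  # most likely internal state at each time step
print(states)                # e.g. interpretable as approach / negotiate / merge
```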

Interpretable Goal-based Prediction and Planning for Autonomous Driving, 2021
Summary: The authors present an integrated prediction and planning system for autonomous driving that uses rational inverse planning and Monte Carlo Tree Search to recognise the goals of other vehicles and plan the motion maneuvers of the autonomous car.
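
A minimal sketch of the inverse-planning component only, assuming trajectory likelihoods per candidate goal are already available (in the paper they are derived from plan costs); all numbers are hypothetical:

```python
# Illustrative sketch of rational inverse planning for goal recognition:
# P(goal | trajectory) is proportional to P(trajectory | goal) * P(goal).
# Likelihoods and priors below are hypothetical placeholders.
def goal_posterior(likelihoods, priors):
    """Normalise P(trajectory|goal) * P(goal) over all candidate goals."""
    unnorm = {g: likelihoods[g] * priors[g] for g in likelihoods}
    z = sum(unnorm.values())
    return {g: p / z for g, p in unnorm.items()}

likelihoods = {"exit_ramp": 0.7, "continue_highway": 0.2, "lane_change": 0.4}
priors = {"exit_ramp": 0.3, "continue_highway": 0.5, "lane_change": 0.2}
print(goal_posterior(likelihoods, priors))
# The resulting posterior over goals can then inform MCTS-based planning.
```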

Learning Interpretable End-to-End Vision-Based Motion Planning for Autonomous Driving with Optical Flow Distillation, 2021
Summary: The authors propose interpretable end-to-end vision-based motion planning (IVMP), an approach that interprets the underlying actions of an intelligent vehicle.

Interpretable End-to-End Urban Autonomous Driving With Latent Deep Reinforcement Learning, 2021
Summary: The study introduces a sequential latent environment model learned with RL and a probabilistic graphical model-based approach that can interpret autonomous vehicles’ actions.

Towards Accountability: Providing Intelligible Explanations in Autonomous Driving, 2021
Summary: The study proposes a tree-based representation of driving scenarios and generates Why, Why Not, What If, and What explanations for driving situations, evaluating them with human participants across a variety of scenarios.
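
A toy sketch of template-based Why / Why Not explanations; the rules and wording are hypothetical placeholders, not the paper's tree-based representation:

```python
# Illustrative sketch: generating template-based "Why" / "Why not" explanations
# for a chosen driving action. Rules and templates are hypothetical.
RULES = {
    "stop": "a pedestrian is on the crossing",
    "slow_down": "the traffic light ahead is amber",
    "proceed": "the intersection is clear",
}

def explain(candidate, chosen):
    if candidate == chosen:
        return f"Why '{candidate}'? Because {RULES[candidate]}."
    return f"Why not '{candidate}'? Because {RULES[chosen]}, so '{chosen}' takes priority."

chosen = "stop"
for action in RULES:
    print(explain(action, chosen))
```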

Human-Vehicle Cooperation on Prediction-Level: Enhancing Automated Driving with Human Foresight, 2021
Summary: The authors propose a technique enabling a human driver to provide scene forecasting to an intelligent driving system using a purposeful gaze.

Toward explainable and advisable model for self-driving cars, 2021
Summary: Similar to the Advisable Learning entry below, the authors present an approach that leverages human advice to learn vehicle control.

Autodiscern: Autonomous driving using common sense reasoning, 2021
Summary: The study introduces AUTO-DISCERN, a system that incorporates commonsense reasoning with answer-set programming to automate explainable decision-making for self-driving cars.

Interpretable safety validation for autonomous vehicles, 2020
Summary: The authors introduce a technique to identify interpretable failures of autonomous vehicles using signal temporal logic expressions in unprotected left-turn and pedestrian-crossing scenarios.
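
A small sketch of how an STL-style "always" property can describe a failure over a simulated trace; the property and the trace are hypothetical, and the paper learns such expressions rather than hand-coding them:

```python
# Illustrative sketch: robustness of the STL property G(distance > threshold)
# over a simulated trace. Negative robustness means the safety property was
# violated at some step, i.e. an interpretable description of the failure.
def always_robustness(signal, threshold):
    """Minimum margin of 'signal > threshold' over the whole trace."""
    return min(s - threshold for s in signal)

# Hypothetical ego-to-pedestrian distance (metres) during a crossing scenario.
distance_trace = [12.0, 9.5, 6.0, 3.2, 1.1, 0.4]
rho = always_robustness(distance_trace, threshold=2.0)
print(f"robustness = {rho:.2f}")  # negative => "always distance > 2 m" violated
```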

Advisable Learning for Self-Driving Vehicles by Internalizing Observation-to-Action Rules, 2020
Summary: The paper proposes an approach that leverages human advice to learn vehicle control.

Explainable Object-Induced Action Decision for Autonomous Vehicles, 2020
Summary: The authors propose object-induced actions with explanations for predictions of autonomous cars, and introduce a new dataset called BDD-OIA, annotated with 21 explanation templates on a set of 4 actions.

“I’d like an Explanation for That!” Exploring Reactions to Unexpected Autonomous Driving, 2020
Summary: The authors explore situations where explanations are needed and what relevant methods are possible for these situations.

Explaining Autonomous Driving by Learning End-to-End Visual Attention, 2020
Summary: The authors propose conditional imitation learning with an end-to-end visual attention model that identifies parts of images that have a higher influence on predictions.
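
A minimal PyTorch sketch of the spatial soft-attention idea, where the attention map doubles as the visual explanation; the tiny network below is a hypothetical stand-in, not the paper's architecture:

```python
# Illustrative sketch: a spatial attention map over CNN features so that the
# image regions with the highest influence on the control output can be
# visualised. Hypothetical minimal architecture, not the paper's model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionDriver(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
        )
        self.attn = nn.Conv2d(32, 1, kernel_size=1)  # per-location attention logit
        self.head = nn.Linear(32, 1)                 # e.g. steering command

    def forward(self, x):
        feats = self.backbone(x)                     # (B, 32, H, W)
        b, c, h, w = feats.shape
        alpha = F.softmax(self.attn(feats).view(b, -1), dim=1).view(b, 1, h, w)
        context = (feats * alpha).sum(dim=(2, 3))    # attention-weighted features
        return self.head(context), alpha             # alpha is the explanation map

model = AttentionDriver()
steering, attn_map = model(torch.randn(1, 3, 96, 96))
print(steering.shape, attn_map.shape)                # (1, 1) and (1, 1, 24, 24)
```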

I Drive - You Trust: Explaining Driving Behavior Of Autonomous Cars, 2019
Summary: The authors conduct a user study to identify users' mental models and determine an effective practical implementation of an explanation interface for self-driving vehicles.

End-To-End Interpretable Neural Motion Planner, 2019
Summary: The authors introduce a neural motion planner for autonomous driving in complex urban scenarios using raw LIDAR data and an HD map.

Visual Scene Understanding for Autonomous Driving Using Semantic Segmentation, 2019
Summary: The paper proposes a semantic segmentation model implemented as a pixel-wise classification that explains the underlying real-time perception of the environment.
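
A brief sketch of the pixel-wise classification view, using torchvision's pretrained DeepLabV3 as a stand-in for the paper's model (a real driving stack would train on driving data such as Cityscapes):

```python
# Illustrative sketch: semantic segmentation as pixel-wise classification.
# torchvision's DeepLabV3 (pretrained on COCO/VOC classes) is used as a
# stand-in; downloading the weights requires internet access.
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights="DEFAULT").eval()

image = torch.randn(1, 3, 384, 384)       # placeholder for a camera frame
with torch.no_grad():
    logits = model(image)["out"]          # (1, num_classes, H, W)
class_map = logits.argmax(dim=1)          # predicted semantic class per pixel
print(class_map.shape)                    # torch.Size([1, 384, 384])
```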

Out of Sight But Not Out of Mind: An Answer Set Programming Based Online Abduction Framework for Visual Sensemaking in Autonomous Driving, 2019
Summary: The authors develop an answer set programming-based abductive reasoning framework for online sensemaking that combines knowledge representation and computer vision in an online manner to explain the dynamics of traffic scenarios, particularly occlusion scenes.

Textual Explanations for Self-Driving Vehicles, 2018
Summary: The study uses a visual attention model to train a CNN end-to-end from images and then employs an attention-based video-to-text model to generate textual explanations of the decisive actions.

Interpretable learning for self-driving cars by visualizing causal attention, 2017
Summary: The authors use a visual attention model to train a CNN from images to steering angle, and apply causal filtering to identify which parts of the input most influence the network's output.

VisualBackProp: visualizing CNNs for autonomous driving, 2016
Summary: This study explains predictions of CNNs in end-to-end autonomous driving by showing which sets of pixels of an input image contribute to the CNNs' predictions.
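
A compact sketch of the VisualBackProp idea: average each layer's feature maps over channels, then walk back towards the input, upsampling and point-wise multiplying the averaged maps. The tiny backbone is a hypothetical stand-in, and bilinear upsampling replaces the deconvolutions used in the original method:

```python
# Illustrative VisualBackProp-style saliency: channel-averaged activations are
# propagated back to input resolution by upsampling and point-wise
# multiplication, highlighting the pixels that contributed most.
import torch
import torch.nn as nn
import torch.nn.functional as F

backbone = nn.Sequential(
    nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
)

def visual_backprop(x):
    averaged = []
    h = x
    for layer in backbone:
        h = layer(h)
        if isinstance(layer, nn.ReLU):
            averaged.append(h.mean(dim=1, keepdim=True))     # (B, 1, H, W)
    mask = averaged[-1]                                      # start at deepest layer
    for prev in reversed(averaged[:-1]):
        mask = F.interpolate(mask, size=prev.shape[-2:], mode="bilinear",
                             align_corners=False)
        mask = mask * prev                                   # point-wise multiply
    return F.interpolate(mask, size=x.shape[-2:], mode="bilinear",
                         align_corners=False)

saliency = visual_backprop(torch.randn(1, 3, 64, 64))
print(saliency.shape)                                        # torch.Size([1, 1, 64, 64])
```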

Reference

Detailed analysis and discussion of these studies are available in the following article:

@article{atakishiyev2021explainable,
title={Explainable Artificial Intelligence for Autonomous Driving: A Comprehensive Overview and Field Guide for Future Research Directions},
author={Atakishiyev, Shahin and Salameh, Mohammad and Yao, Hengshuai and Goebel, Randy},
journal={arXiv preprint arXiv:2112.11561},
year={2021}
}

The article can be viewed at https://arxiv.org/abs/2112.11561.
