This JRC exploratory workshop is dedicated to the safety and security of automated and autonomous vehicles (A&AV). It aims to bring together leading scientists and engineers to explore and discuss state-of-the-art research on the accuracy, robustness, fairness, and explainability of artificial intelligence (AI) and machine learning (ML), and on the testing of modern vehicles.
The workshop explores whether and how the explainability of the core AI and ML algorithms of A&AVs can be used to answer the following questions:
- How can we test the AI-ML layers in an automotive environment?
- How can we define and quantify the robustness, fairness, accuracy, repeatability, and reproducibility of an A&AV’s AI-ML components?
- Can we test the AI and ML separately from the vehicle?
- How can we validate whether decisions made by AI and ML systems are correct in terms of safety and security?
- How can we detect biases in automated decisions and assess their impact in terms of fairness and robustness?
- sustainable mobility | intelligent transport system
Practical information
- When
- -
- Where
- Online only
- Who should attend
- On invitation
- Languages
- English
Description

Currently, automated and autonomous vehicles are tested with a black-box approach based on a limited set of traffic and cybersecurity scenarios. The behaviour of AI-ML systems is studied through descriptive statistics of vehicle kinematics and/or interactions with other road users and the infrastructure, relying mainly on mechanical-engineering domain knowledge. However, traffic situations vary without limit, and covering them exhaustively in testing is out of reach. So far, no scientifically sound methodologies have been developed to audit the decisions made by AI and ML systems during driving, especially in safety-critical scenarios. Besides functional and operational safety, other challenges linked to the uptake of AI and ML in automated and autonomous driving have emerged in recent years, such as assessing the cybersecurity and fairness of these systems, in line with recent European Commission initiatives to promote trustworthy AI in high-risk systems.
We plan to gather multiple views and foster collaboration between experts in order to collect state-of-the-art research, identify gaps in the validation of A&AVs, and pave the way for new research directions.
We are looking for contributions from academia, industry, and testing bodies that can present results on testing and explaining the safety and security performance of A&AVs, including the AI-ML layer.
Invited speakers
- Javier Alonso Mora, TU Delft, The Netherlands
- Alexandre Alahi, EPFL, Switzerland
- Christian Berghoff, BSI, Germany
- Matthieu Cord, Valeo, France
- Rafaël De Sousa Fernandes, UTAC, France
- Yuval Elovici, Ben-Gurion University, Israel
- Kathrin Grosse, University of Cagliari, Italy
- Philip Koopman, Carnegie Mellon University, USA
- Lars Kunze, Oxford Robotics Institute, UK
- Nick Reed, Reed Mobility, UK
- Robert Swaim, USA
- Ensar Becic, NTSB, USA
- Patrick Seiniger, BASt, Germany
- Asaf Shabtai, Ben-Gurion University, Israel
- Jack Stilgoe, University College London, UK