Explainable AI for Systems with Functional Safety Requirements
Compliance with safety standards is essential in safety-critical domains such as automotive, rail, and space. While traditional approaches to functional safety are well established, introducing AI into safety-critical systems presents new challenges for existing frameworks and methodologies, largely due to the "black box" nature of deep learning models.

Explainable AI (XAI) is vital for making AI decision-making processes transparent and understandable to human experts, and for ensuring safety and regulatory compliance. Despite the value of XAI, there is currently a lack of systematic approaches to integrating it into AI-based systems and the machine learning lifecycle, especially in domains where safety is non-negotiable. This webinar seeks to address this gap by introducing the SAFEXPLAIN explainability-by-design approach.
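To make "explainability" concrete, the sketch below shows one widely used XAI technique: a gradient-based saliency map that scores each input feature by how strongly it influences the model's prediction. This is a generic illustration only, not the SAFEXPLAIN method; the model, feature count, and input are placeholders.

```python
# Minimal sketch of a gradient-based saliency map, a common XAI technique.
# Placeholder model and input; not the SAFEXPLAIN approach.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
model.eval()

x = torch.randn(1, 8, requires_grad=True)  # one sample with 8 features
logits = model(x)
predicted_class = logits.argmax(dim=1).item()

# Backpropagate the predicted-class score to the input: the gradient
# magnitude per feature serves as a simple importance ("saliency") score.
logits[0, predicted_class].backward()
saliency = x.grad.abs().squeeze()
print(saliency)  # higher values = features with more influence on the output
```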
Learning goals
Webinar attendees will:
- Learn about the current challenges and gaps in integrating XAI with ML lifecycle processes.
- Explore a structured approach to integrating XAI within the development and deployment of SAFEXPLAIN AI models to ensure compliance with functional safety standards.
- Gain insights into the innovative SAFEXPLAIN approach for leveraging AI in automotive, rail, and space applications.
- Have access to the latest XAI research from the SAFEXPLAIN project (link to deliverable, website resources).