Enabling the use of AI in Safety-Critical Systems

14 June 2024, Barcelona, Spain

Co-located with the 28th Ada-Europe International Conference on Reliable Software Technologies (AEiC 2024), June 11-14

Workshop Program

Friday, 14 June 2024
08:50 - 09:00 Welcome
Session 1 (09:00 - 10:30)
09:00 - 09:05 Message by the Organizers
Jaume Abella / Francisco J. Cazorla
09:05 - 09:45 Deploying AI in space: benefits and challenges
Gabriele Giordanna (AIKO)
09:45 - 10:30 Industrial Challenges for Mobile Robots
Francesco Ferro (PAL Robotics)
10:30 - 11:00 COFFEE BREAK
Session 2 (11:00 - 12:45)
11:00 - 11:45 Functional Safety on AI-based critical systems
Irune Yarza (IKR)
11:45 - 12:45 Panel on enabling the use of AI in critical systems
All Presenters
12:45 - 14:00 LUNCH BREAK
Session 3 (14:00 - 15:00)
14:00 - 14:45 Integrating Probabilistic Uncertainty sources and Explainability in Critical AI Systems
Axel Brando (BSC)
14:45 - 15:00 Wrap-up
Jaume Abella / Francisco J. Cazorla

Workshop description

The increasing computing performance delivered by embedded platforms has made it possible to realize advanced, performance-hungry functionalities in real time. Those functionalities, often related to autonomous operation and/or the comprehension of complex scenarios, rely largely on AI software as the only solution that delivers sufficiently accurate results. However, both AI software and powerful embedded computing platforms are at odds with the development process of safety-critical systems, which is governed by domain-specific standards and guidelines such as ISO 26262 in automotive, DO-178C and DO-254 in avionics, and IEC 61508 for electronic industrial systems. Difficulties arise from (1) the "divide & conquer" strategies pursued by safety regulations, which aim to decompose systems iteratively until components are simple enough to be realized, understood, and tested, and (2) the perceived black-box nature of AI software and high-performance computing devices, whose complexity cannot be broken down as safety standards require.

Several public (e.g. EC-funded projects) and private activities (e.g. AI-focused working groups) have recently started to address this challenge, aiming to (1) make AI software explainable, with the term explainable often overloaded to also mean transparent, reliable, robust, and verifiable, among other characteristics desired of AI for safety-critical systems; (2) contain embedded platform complexity, especially when running AI software, through appropriate system software and middleware support; and (3) adapt safety regulations conceived for control software so that they admit the data-dependent and stochastic nature of AI software in the development of (and use in) safety-critical systems.

This workshop will present several challenges in the form of industrial use cases building on AI, as well as the latest advances in safety-relevant system development, AI solutions amenable to safety-critical systems, and system software and middleware support to contain and model platform complexity, including results emanating from the Horizon Europe project SAFEXPLAIN (https://safexplain.eu/) and Transmisiones CAPSUL-IA.

Deploying AI in space: benefits and challenges

Introducing AI into space missions can deliver great improvements in efficiency and autonomy. On the other hand, significant challenges still stand in the way: the difficulty of unveiling the inner workings and decision-making of black-box models, the lack of extensive standardisation for the safety and verification of AI systems, constraints from hardware resources and the harsh environment, and more. Despite the obstacles, finding a way to deploy AI can foster research and unlock new opportunities for the sector.

Industrial Challenges for Mobile Robots

For over 20 years, PAL Robotics has been at the forefront of developing service mobile robots. In this workshop, we will explore the diverse challenges we face in deploying robots across various industrial domains. Specifically, we will showcase TIAGo, our versatile mobile manipulator, highlighting its applications in healthcare and industrial settings. Additionally, we will discuss StockBot, our innovative solution for inventory tracking and data collection, developed in collaboration with global retailer Decathlon. Currently operational in multiple Decathlon stores across several countries, StockBot performs daily autonomous inventory tasks, demonstrating the potential and practicality of AI-driven robotic solutions in real-world retail environments. As an integral partner in the CAPSUL-IA project, PAL Robotics incorporates AI capsules to enhance the functionality and safety of its service robots, especially in mobile manipulators. This initiative facilitates simplified AI integration, enhancing operational efficiency, advanced vision, and safety in industrial and healthcare applications, thereby reinforcing PAL Robotics' position as a leader in mobile robotics innovation.

Functional Safety on AI-based critical systems

Artificial Intelligence (AI) has the potential to revolutionize the development of next-generation autonomous safety-critical systems. AI can also support and assist human safety engineers in designing safety-critical systems. However, integrating the latest AI technologies with existing safety engineering processes and safety standards presents a significant challenge. This presentation summarizes the main challenges, techniques, and methods for developing AI-based safety-critical systems and presents some of the contributions of the SAFEXPLAIN research project in this direction.

Integrating Probabilistic Uncertainty sources and Explainability in Critical AI Systems

In the rapidly evolving field of Artificial Intelligence (AI) for critical systems, achieving robustness and explainability is paramount. This talk will explore innovative methodologies that bridge probabilistic models with causal inference, all within a black-box framework. We will delve into strategies for quantifying and managing diverse sources of uncertainty in AI predictions and decisions, crucial for high-stakes environments. Furthermore, we will discuss cutting-edge approaches that leverage causal models to enhance the interpretability of these systems, providing clear insights into the underlying mechanisms of AI behavior. The session aims to foster a deeper understanding of how integrating these two perspectives can lead to more reliable and transparent AI deployments in critical applications.


Workshop Organizers

  • Francisco J. Cazorla (Barcelona Supercomputing Center, Spain)
  • Jaume Abella (Barcelona Supercomputing Center, Spain)