FKFS Events

2026 Stuttgart International Symposium
on Automotive and Powertrain Technology

8 - 9 July 2026

Session: Data Science & AI #2 | 16:15 - 16:45

A Methodology for Ensuring the Safety of Machine Learning Models in Safety-Critical Embedded Systems

Markus Hanselmann, ETAS GmbH
Abhik Dey, ETAS GmbH

While machine learning (ML) has become state of the art and, in many use cases, a superior alternative to traditional physics-based models, its widespread adoption in series production for safety-critical embedded systems remains a significant challenge. The primary obstacle is the inherent "black box" nature of many ML models, which raises profound skepticism regarding their explainability and their ability to meet the stringent safety requirements mandated by standards such as ISO 26262 and IEC 61508. This paper presents a novel methodology designed to overcome these barriers by enhancing the transparency and safety assurance of ML models deployed in industrial applications. We introduce a set of concrete techniques that can be integrated throughout the development and deployment lifecycle. Our methodology begins with the safe design of experiments for dynamic systems and includes methods for evaluating the quality of model outputs during inference as a safety net. Finally, we conclude with a technique to rigorously validate the generated embedded code against the original model's output. Ensuring correctness at the deployment stage prevents critical downstream problems and builds crucial trust in the system's behavior. By providing this pathway to formally verify the deployed artifact, we offer a tangible solution for building confidence in data-based models, significantly increasing their viability and accelerating their adoption in safety-critical embedded projects.
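Two of the techniques named in the abstract, the inference-time safety net and the back-to-back validation of generated embedded code against the reference model, can be illustrated with a minimal sketch. The sketch below is purely hypothetical and is not the authors' actual method: it stands in a simple analytic function for the trained ML model, a fixed-point (Q16.16) re-implementation for the generated embedded code, a tolerance-based back-to-back comparison over a test grid, and a plausibility guard that falls back to a safe default when an output leaves its physically admissible range. All function names, bounds, and tolerances are illustrative assumptions.

```python
# Hypothetical stand-in for the trained ML model (floating-point reference).
def reference_model(x):
    return 0.5 * x**2 + 1.0

# Hypothetical stand-in for the generated embedded code: the same function
# computed in Q16.16 fixed-point arithmetic, as an embedded target might.
def embedded_model(x):
    scale = 1 << 16
    xi = int(round(x * scale))               # quantize input to Q16.16
    yi = (xi * xi) // (2 * scale) + scale    # 0.5*x^2 + 1.0 in fixed point
    return yi / scale

# Back-to-back validation: compare the deployed artifact against the
# reference over a test grid and require a bounded maximum deviation.
def back_to_back_check(inputs, tol=1e-3):
    max_dev = max(abs(embedded_model(x) - reference_model(x)) for x in inputs)
    return max_dev, max_dev <= tol

# Inference-time safety net: reject implausible outputs and fall back to a
# safe default value (bounds are application-specific assumptions).
def guarded_inference(x, lo=0.0, hi=100.0, fallback=1.0):
    y = embedded_model(x)
    return y if lo <= y <= hi else fallback

grid = [i * 0.05 - 5.0 for i in range(201)]  # test inputs in [-5, 5]
max_dev, ok = back_to_back_check(grid)
```

In a real workflow the grid would be replaced by a representative validation dataset, the tolerance derived from the safety requirements, and the fixed-point model by the actual generated code executed on target or in a bit-exact simulation.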