Recent advancements in Large Language Models (LLMs) have introduced innovative use cases across various domains. In particular, AI agents powered by LLMs offer immense potential by autonomously performing complex tasks with user-defined tools. This capability extends far beyond the widely established applications typically associated with LLMs, such as knowledge management or text generation.
Compliance with regulatory standards, such as those introduced by the Euro 7 homologation, adds considerable complexity to already extensive validation processes in research and development, which span several stages: test requests, planning, execution, and the analysis and evaluation of results. Implementing the regulatory measures in operational workflows and ensuring their seamless integration with supporting IT tools is a significant challenge.
In this paper, we explore how modern AI agents can be developed and integrated into existing IT tools for test management to make the testing process more efficient. In a feasibility study, we demonstrate how an AI agent supports the measurement of brake particle emissions required by the Euro 7 standard. Our use cases cover the generation of test templates and plans based on Euro 7 requirements, software-based verification of compliance with constraints, implementation of calculation rules, and generation of the required reports.
We show how AI agents can drive significant cost savings, enhance data quality, and boost overall efficiency. At the same time, we address technical challenges such as implementation costs, security concerns, and the need for domain-specific training.
Our findings offer practical insights into integrating AI agents into testing workflows, highlighting opportunities, challenges, and solutions. While this discussion centers on the Euro 7 standard, the results of this paper can be readily applied to other use cases.