The INSAIT Institute at Sofia University "St. Kliment Ohridski", the Swiss Federal Institute of Technology ETH Zurich, and the company LatticeFlow AI Have Developed the World's First EU Artificial Intelligence Compliance Framework
The European Union now has a tool to verify the reliability of artificial intelligence. It is publicly available and is the result of a collaboration between the Bulgarian institute INSAIT, part of Sofia University "St. Kliment Ohridski", the Swiss university ETH Zurich, and the company LatticeFlow AI. The tool has been officially recognised by the European Commission as a first step towards connecting the regulations in this field with their practical application, the Ministry of Education and Science said.
The EU AI Act came into force in August 2024, setting a global precedent for regulation in this area. However, the Act outlines high-level regulatory requirements without providing detailed technical guidelines for companies to follow. To bridge this gap, ETH Zurich, INSAIT and LatticeFlow AI have created the first technical interpretation of the six core principles of the European law. This compliance framework translates the regulatory text into concrete technical requirements, offering a clear methodology for evaluating AI models against the standards set out in the regulation.
This effectively makes it the first tool, at both European and global level, to link regulatory requirements with their actual implementation, and it marks a significant step in the global process of regulating AI.
The project, in which the Bulgarian institute plays a key role, has already gained recognition at European level. The EU AI Office has welcomed its launch and stated that it will serve as a starting point for developing similar, broadly applicable rules for artificial intelligence. Thomas Regnier, European Commission spokesperson for Digital Economy, Research and Innovation, commented: "The European Commission welcomes this study and the AI model evaluation platform as a first step in translating the EU AI Act into technical requirements that will help AI model providers implement it."
Based on the technical interpretation of the six main principles of the European Act, a tool for evaluating large language models (LLMs) has also been developed. It is now publicly available at https://compl-ai.org and reports how well a number of the most popular AI models, developed by companies such as OpenAI, Meta, Alibaba and Anthropic, comply with the EU AI Act. Any company can also use it to measure how well its own models meet the requirements of the Act.
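To illustrate the general idea of such an evaluation, the sketch below aggregates per-benchmark scores into per-principle compliance scores. This is a minimal, hypothetical Python example: the principle names, benchmark names, scores, and the unweighted averaging are all illustrative assumptions, not the actual COMPL-AI mapping, API, or real model results.

```python
# Hypothetical sketch of a compliance-style evaluation: benchmark scores
# (each in [0, 1]) are grouped under regulatory principles, then averaged.
# All names and numbers below are invented for illustration only.

from statistics import mean

# Assumed mapping: principle -> {benchmark: score}.
benchmark_results = {
    "transparency": {"doc_completeness": 0.9, "watermark_detection": 0.6},
    "robustness": {"adversarial_qa": 0.7, "perturbation_stability": 0.8},
    "privacy": {"pii_leakage": 0.95},
}

def principle_scores(results):
    """Average the benchmark scores under each principle."""
    return {p: round(mean(scores.values()), 3) for p, scores in results.items()}

def overall_score(results):
    """Unweighted mean across principles (a simplifying assumption;
    a real framework might weight principles differently)."""
    return round(mean(principle_scores(results).values()), 3)

print(principle_scores(benchmark_results))
print(overall_score(benchmark_results))
```

A real evaluation would of course run the benchmarks against an actual model rather than use fixed numbers; this sketch only shows the aggregation step from benchmark results to a per-principle compliance report.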
"We invite AI researchers, developers and regulators to join us in advancing this evolving project," said Prof. Martin Vechev, professor at ETH Zurich and founder of the INSAIT institute. "We encourage other research groups and practitioners to contribute by refining the AI Act mapping, adding new criteria and extending this open source framework. The methodology can also be extended to assess AI models against future regulatory acts beyond the EU AI Act, making it a valuable tool for organisations working in different jurisdictions," explained Prof. Vechev. | BGNES