AI Summit: AFNOR at the rendezvous
ChatGPT, DeepSeek today… Faced with the AI whirlwind, AFNOR puts forward voluntary standards as a shelter. Back in 2022, we were already arguing that artificial intelligence, all the more so with the latest developments in generative AI, needed a code of conduct to promote mutual understanding between players and interoperability between systems: a framework to spread trust, particularly for the high-risk AI systems listed in what is now the AI Act, the European regulation on artificial intelligence. It is in this spirit that the Group is taking part in the AI Summit in Paris convened by the President of the Republic, in two sequences:
- A first, scientific sequence on February 7, 2025, at Ecole Polytechnique;
- A second, political sequence on February 10, 2025, at the Grand Palais.
On February 7, AFNOR will be hosting a round table on trusted AI (session 3, auditorium Faurre), with the participation of Touradj Ebrahimi, professor at the Ecole Polytechnique Fédérale de Lausanne, representatives of European and international standards bodies (BSI, CEN-CENELEC, Danish Standards, DIN, IEC, ISO), and Naaia, the first French company to be ISO 42001 certified. The voluntary ISO/IEC 42001 standard, the AI world's counterpart to the well-known ISO 9001 standard, is an invaluable foundation for AI professionals who want to proceed methodically and pursue continuous improvement. "It provides the framework for a quality management system for companies implementing high-risk AI systems," explains Virginie Desbordes, head of the Digital Confidence theme at AFNOR Certification.
AI standardization: a question of sovereignty
This umbrella standard prefigures a series of normative documents expected at European level to support the AI Act. A dozen of them are even set to become harmonized standards, i.e. documents that spell out, requirement by requirement, a way of meeting the regulation and that, once cited in the Official Journal, grant a presumption of regulatory conformity to any player who applies them. Their scope: trustworthiness framework, risk management, conformity assessment, etc. "Europe defends the idea that developing and using AI guided by principles, which will eventually become constraints, stimulates innovation while guaranteeing better control over the results of those innovations," stresses Anna Médan, in charge of AI at AFNOR Normalisation.

AFNOR has taken the lead in building a community of players committed to constructing and defending a French position, rather than having rules of the game imposed by third parties. That was the purpose of the Grand Défi IA, a mission steered by the General Secretariat for Investment. When it was launched in 2021, AFNOR was tasked with "creating the normative environment of trust accompanying the tools and processes for certifying critical systems based on artificial intelligence". The case for a French AI community agreeing on best practices remains topical, as a November 2024 report by the Office parlementaire d'évaluation des choix scientifiques et techniques (OPECST) noted: "France must be enabled to best defend the national interest, as well as the interests of its national companies, when it comes to AI standardization, which implies mobilizing AFNOR and Cofrac to a greater extent." France has also made its mark on standardization by approaching AI from an environmental angle.
This is evidenced by the interest shown in AFNOR Spec 2314, the reference document on frugal AI, co-developed with dozens of players including Ecolab, a department of the Commissariat général au développement durable (Ministry of Ecological Transition), and downloadable free of charge here. If AI systems pose one risk above all, it is increased pressure on the environment, given the energy required to run them. The AFNOR document reviews the criteria to take into account when assessing these impacts, and it will greatly facilitate the drafting of the voluntary standards about to be developed at European scale.