Come and write the voluntary standards for artificial intelligence

Following the survey of artificial intelligence players carried out in the summer of 2021, AFNOR, mandated by the French government, has finalized the roadmap for equipping this strategic sector with new voluntary standards and disseminating best practices. You write the standards!

Reading time: 6 minutes

Systems based on artificial intelligence (AI) are already part of our daily lives. And this is just the beginning. Today, one in every 25 scientific publications deals with AI, and 21% of venture capital investment in the OECD is in this exploding field. A whole industry is developing in this sector with its many applications, making it a key factor in France’s and Europe’s economic sovereignty. Particularly in the first half of 2022, during the French presidency of the European Union…

But to succeed in the marketplace, AI-based systems need to inspire confidence. Players also need to share best practices, a common vocabulary and protocols. “Objective: adopt the least discriminating approach possible for new entrants,” stresses Renaud Vedel, coordinator of France’s AI strategy. “We need interoperable AI, built on agnostic solutions that favor no particular component or technology.” Such sharing is only possible with voluntary standards.

In Europe, these standards will support the regulation on the subject currently being prepared by the European Commission. Unlike this future regulation, which will be binding, as the GDPR already is, standardization is voluntary.

AI standards: 6 areas of work

Driving a car, operating a medical device or even wielding a weapon… Artificial intelligence is called upon to intervene in every field, even the most critical. “In the digital world, protocols are constantly changing,” explains Christoph Peylo, SVP Digital Trust at Bosch. “In this unstable field, to prevent AI from escaping human control, it is essential to build a robust, sovereign and comprehensible environment to create trusted AI.”

Julien Chiaroni, director of the Grand Défi Intelligence artificielle (Grand Challenge Artificial Intelligence) project, which is part of the future investment program (PIA) and the France Relance plan, agrees: “The social acceptability of artificial intelligence systems depends on confidence and the usefulness of applications. We’re facing a major industrial challenge, with a market estimated at 50 billion euros. Supporting the ecosystem requires standardization and the definition of a solution based on trust.”

To this end, in May 2021, the French government mandated AFNOR as part of this Grand Défi IA with a mission: “to create the normative environment of trust accompanying the tools and processes for certifying critical systems based on artificial intelligence”. Today, the work program is finalized. You can consult it here, especially if, like 260 French players in the field, you took part in the consultation carried out in summer 2021 to build it.

You can then take part in the development of standards within our standards committees. “Contribution is not reserved for large corporations,” insists Patrick Bezombes, Chairman of the French Standardization Committee. “Quite the contrary: start-ups and SMEs are an essential link in the ecosystem, and they need to make their voices heard and give their point of view. The directions chosen will have a direct impact on them, right at the heart of their business.”

The roadmap has 6 main focuses:


  • Axis 1: Develop trust-related standards

    The priority characteristics to be standardized are security, safety, explainability, robustness, transparency and fairness (including non-discrimination). Each characteristic must be defined, with a description of the concept, the technical requirements, and the associated metrics and controls.

  • Axis 2: Develop standards for AI governance and management

    AI is generating new applications, all of which entail risks. These risks come from a variety of sources: poor data quality, poor design, poor qualification, and so on. A risk analysis for AI-based systems is therefore essential, to then propose a risk management system.

  • Axis 3: Develop standards for monitoring and reporting AI systems

    The aim is to ensure that AI systems remain controllable, so that humans can regain control at critical moments when the AI leaves its nominal operating domain.

  • Axis 4: Develop standards for the skills of certification bodies

    It will be up to these bodies to ensure not only that companies have set up development and qualification processes for AI systems, but also that products comply with requirements, particularly regulatory ones.

  • Axis 5: Develop the standardization of certain digital tools

    One of the challenges of AI is to provide simulations based on synthetic data, rather than real data. Standards will have to make this data reliable.

  • Axis 6: Simplify access to and use of standards

    In order to bring this strategy to life and adjust it along the way, a consultation platform will be made available to you. In the meantime, don’t hesitate to consult the document and spread the word!
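To make the synthetic-data challenge of Axis 5 concrete, here is a minimal, hypothetical sketch (not taken from the AFNOR roadmap) of the kind of reliability check a standard could formalize: fit a simple statistical model to real data, sample synthetic data from it, and verify that the synthetic data reproduces the statistics of the original. The data, model and tolerance are all illustrative assumptions.

```python
import numpy as np

# Hypothetical example: two correlated features standing in for real measurements
rng = np.random.default_rng(seed=0)
real = rng.multivariate_normal(mean=[10.0, 5.0],
                               cov=[[2.0, 0.8], [0.8, 1.0]],
                               size=1000)

# Fit a simple Gaussian model to the real data
mu = real.mean(axis=0)
sigma = np.cov(real, rowvar=False)

# Generate synthetic data by sampling from the fitted model
synthetic = rng.multivariate_normal(mu, sigma, size=1000)

# Fidelity check: per-feature drift between synthetic and real means
drift = np.abs(synthetic.mean(axis=0) - mu)
print("mean drift per feature:", drift)
```

In a real certification context, far richer fidelity and privacy metrics would apply; this only illustrates the principle of comparing synthetic data against the source distribution.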

AI: a 2-minute-30 summary of the AFNOR webinar of March 10, 2022.

AI: relive in full the AFNOR webinar of March 10, 2022.

AI: listen to testimonials on industrial challenges.