Grand Défi IA: season 2 kicked off this Thursday

In 2021, the French government launched the Grand Défi IA, tasking AFNOR with creating the standards environment needed to deploy "trustworthy AI". On July 25, 2024, the program was renewed, with an important dossier to be completed by April 2025: that of harmonized standards.

Reading time: 5 minutes

When the French government launched the Grand Défi IA in 2021, it set an objective that clearly signalled a strategy of influence: "At the end of the project, France will have a standardization strategy in the field of AI and will implement it through concerted partnerships (…); it will increase its influence in European and international standardization bodies (…); it will improve the competitiveness of French companies." Orchestrated by the Secrétariat Général pour la Programmation des Investissements (SGPI), a department reporting to the French Prime Minister, season 1 of the Grand Défi IA took place well upstream of standards production: the aim was to reach out to companies far removed from standardization, to explain and popularize existing voluntary standards, to map the standards landscape and to facilitate their adoption.
And, of course, to unite as many experts as possible around the project.
See the press release issued at the start of the partnership.

Harmonized AI Act standards coming soon

Mission accomplished.
Over the past three years, France, via AFNOR, has become a major player in artificial intelligence standardization, steering the strategic mapping group at ISO/IEC JTC 1/SC 42, for example, and taking on the vice-presidency of CEN-CENELEC JTC 21.
“We have structured the French ecosystem around a roadmap, through partnerships with players such as Confiance.ai, France Digitale and Hub France IA,” explains Morgan Carabeuf, head of the digital division at AFNOR Normalisation.
We have mobilized players such as INRIA, Microsoft, Numalis, IBM, IRT SystemX, Airbus, Schneider, Institut Montaigne…” The publication, at the end of 2023, of the ISO/IEC 42001 standard framing AI management systems (a certifiable standard) is another victory.
And yet, “in view of the stakes involved in harmonized standards, more French experts would be needed,” continues Morgan Carabeuf.

What are harmonized standards?

Harmonized standards have a special status, halfway between voluntary and regulatory standards.
In the European Union, a product’s compliance with harmonized standards constitutes a presumption of conformity with the law, and thus becomes a real competitive advantage on the market.
That’s how powerful they are.
In this case, the harmonized standards of the AI Act must be ready by April 2025.
We’re right in the middle of the race, and this is no time for France to take a break.
This is the purpose of the amendment to the contract signed with SGPI on July 25, 2024.
Especially as the context is changing rapidly.
At the end of 2022, ChatGPT arrived on the scene, quickly followed by its “cousins”.
In terms of best practices, generative AI undoubtedly deserves a dedicated strategy.

The AI Trustworthiness Framework, a gateway to the future

But the roadmap for the next few years revolves above all around the future European voluntary standard on the characterization of trust for AI (the AI Trustworthiness Framework), a project proposed by France and accepted in January 2024.
Currently being drafted for a maximum period of three years, this future standard will also be harmonized within the EU-27. “It is seen as strategic by the European Commission, which has made it one of its priorities.
It will be able to refer to other, more detailed standards, whether from ISO, ETSI or the IEEE [the electronics body],” explains Morgan Carabeuf. The French strategy has been to reduce the risk of a deluge of standards by proposing this project, which is intended to be the gateway to para-regulatory standardization.
“This project will provide high-level requirements and guide companies in their efforts to comply with the AI Act.” At the helm is Enrico Panai, an AI ethicist and a longtime fervent advocate of standardization.
“The idea goes back a long way and has given rise to several publications that inspired the AI Act,” he says, “with, as a common thread, the fact that trust is a fundamental element of all standardization processes.”
Why? “Europe is convinced that trust is needed for a market to develop,” replies Enrico Panai, countering those who criticize regulation on the grounds that it kills innovation.

Trust, a prerequisite for innovation

The ethicist points out that in Europe, innovation is driven by SMEs and very small enterprises, not by the GAFAMs.
“We need to be aware of this. If consumers don’t trust AI technology, companies won’t be able to develop anything at all”, insists Enrico Panai.
Another aspect, also underestimated in his view: “A technology is never built by a single company. You need a whole chain of players and developers. But it’s impossible to sign contracts without standards to match!” Working on an intangible subject like trust isn’t easy, but it can be done with the help of “proxies” (markers, indicators).
“We indicate, for example, that a certain measurable characteristic is evidence that a relationship of trust can be built. It’s the same as in a restaurant, where you might tell yourself that an open kitchen, a well-marked step or a clean washroom is a good signal,” compares Enrico Panai.
The message is clear: if you want to get involved in standardization… don’t hesitate!
“It’s a choral work in which many voices are needed to achieve a shared and acceptable result.”