The Frankenstein syndrome, or when AI professionals call to be regulated

By Constantin Pavléas, lawyer specializing in technology law, founder and director of the firm Pavléas Avocats and head of teaching at the School of Applied Law Studies (HEAD).

More than one thousand three hundred researchers and new-technology entrepreneurs, including Elon Musk, founder of Tesla and SpaceX and co-founder of OpenAI, and Steve Wozniak, co-founder of Apple, have signed an open letter calling for a moratorium of at least six months on the development of giant Artificial Intelligences more powerful than GPT-4.

These sector experts fear a loss of control following the dazzling progress since the release of ChatGPT-4, and the numerous AI-generated images that can prove to be weapons of mass disinformation and a threat to our societies and to humanity. They call on laboratories to set common rules of the game: security protocols for the design of these systems and independent audits to control them. They want these systems to be “accurate, secure, interpretable, transparent, robust, aligned, trustworthy and fair”. They ask policymakers to speed up the establishment of governance systems to regulate the design and use of these AIs.

Contrary to what may have been written, the authors of this letter are not asking to suspend research or innovation. They are asking for a time of reflection and political action, an “AI summer”, so that society has time to adapt — as our societies have managed to do in other fields, such as biotechnology, by deciding, for example, not to clone human beings.

The situation is quite surreal: in a sort of Frankenstein syndrome, designers are sounding the alarm before their creation, this “black box”, spirals out of control.

This appeal is therefore addressed as much to companies as to politicians and citizens.

Will it be heard by companies such as OpenAI, which has taken the lead over its competitors, but whose CEO himself expresses fears that his product could be used for disinformation and cybercrime?

What about policy and legal regulation? Since April 2021, Europe has been working on a proposal for a Regulation on Artificial Intelligence (the EU AI Act). This text is not yet finalized.

The draft AI Act distinguishes prohibited AI from “high-risk AI”, which would have to obey security, design and governance rules. It is not clear whether generative AIs, such as ChatGPT, are considered high-risk AIs, precisely at a moment when they raise problems of civilizational scale.

It is urgent that Europe review this text in light of the challenges of generative AI and promote it as a global standard.
