Since it became widely available at the end of last year, ChatGPT has fascinated as much as it has frightened. Some fear a conversational bot capable of answering any question or producing a text or a line of code on demand in a matter of seconds; others call for learning to “live with it”.
Of course, we can adapt and invest massively in training for digital professions to learn how to work with AI, but can we, if necessary, “slow down” the development of this technology? While a handful of tech leaders have called in recent days for a “pause” in development, the alternatives seem limited for the time being.
The law and data protection
Each country remains legally free to prohibit access to ChatGPT. In Europe, a first state has just taken the plunge: Italy. The Italian authorities criticize OpenAI, the company behind the conversational bot, for not imposing any age verification on its users (you must be at least 18 years old, or between 13 and 18 with parental consent; this Wednesday, the company promised better controls). They also denounce “the absence of an information note for users whose data is collected”. ChatGPT would thus not comply with the European General Data Protection Regulation (GDPR). In France, the first two complaints have been filed, and a legal framework for the whole continent should soon be established.
Once AI is authorized, could companies easily part ways with their employees? “Technological changes” are among the grounds that justify economic layoffs in France, as automated supermarket checkouts attest.
Ethics?
Major scientific journals have already updated their rules: out of ethics, they ask authors to commit to disclosing any use of ChatGPT in complete transparency, or even prohibit its use entirely. “An artificial intelligence program cannot be the author of an article in Science magazine”, the journal warns.
On the same model, any company could morally oblige its employees to be transparent. But for every researcher, journalist, MP, or lawyer who openly discloses using it, how many will not mention it? Not to mention high school and university students who cheat.
It is also hard to see how to “tag” a text written by ChatGPT. Each image generated by the Dall-E artificial intelligence tool carries a small multicolored label in the lower right corner indicating that the picture is not real. Not so with its competitor Midjourney, which is far more capable…
Finally, it should be remembered that ChatGPT is not free of ethical problems: like any generative AI, the chatbot reproduces the biases of the data that feeds it, and it has been singled out for erroneous or discriminatory answers targeting people of colour, women or transgender people, and for fueling gender stereotypes. In a press release published on March 31, Unesco said it was “concerned by the ethical questions raised by these innovations” and asked that they be taken into account in the design of these tools.