ChatGPT: users trust a chatbot as much as a human

ChatGPT has fascinated as much as it has worried since its launch at the end of 2022.

Getty — Pavlo Gonchar/SOPA Images/LightRocket

People faced with a moral choice placed as much trust in a chatbot like ChatGPT as in a supposedly human advisor, according to a study whose authors call for educating the public about the inherent limits of such tools.

A runaway tram will run over a group of five people on the track unless a switch is used to divert it onto a track where only one person stands. In this test, “empirically, most people do not hesitate to use the switch,” recall the authors of the study published in Scientific Reports. Unless, that is, a “moral adviser” dissuades or encourages them before they decide. The authors tested whether people were influenced differently depending on whether the advice given to them was presented as coming from a “moral adviser” assumed to be human, or from an “artificial intelligence chatbot, using deep learning to talk like a human”.

The team led by Sebastian Krügel, a researcher at the computer science faculty in Ingolstadt, Germany, first found that the more than 1,800 test participants followed the advice given to them quite closely, even in a more troubling variant of the test in which one must choose whether or not to push a person onto the track to save five others. That decision is much harder to make, and there the opinion of the “moral adviser” proved decisive.

Moral fickleness

But most concerning was that the participants turned out to treat the two kinds of advisers as equivalent. In reality, and without their knowledge, all the advice had been generated by ChatGPT, illustrating the system’s ability to mimic human speech. The program, capable of responding intelligibly to all sorts of requests, proves remarkably inconsistent on moral questions, arguing both for sacrificing one person to save five and against it. No surprise to Sebastian Krügel, for whom “ChatGPT is a kind of random parrot, which assembles words without understanding their meaning,” as he told AFP.

A specialist in natural language processing, computer science professor Maxime Amblard of the University of Lorraine goes further, describing “a mega language model, trained to make sentences” that “is not at all made for seeking information”, let alone for giving advice, moral or otherwise. But then why did the test participants place such trust in it? “ChatGPT does not understand what it is saying, but it seems to us that it does,” says Sebastian Krügel, because “we are used to associating coherence and eloquence with intelligence”.

Education and regulation

In the end, test participants “voluntarily adopt and appropriate the moral stance of a chatbot” that is nonetheless devoid of any conscience, the researcher notes. His study argues for educating the general public about the limitations of these systems, going well beyond mere transparency about the fact that content has been generated by a chatbot. “Even if people know they are interacting with a non-human system, they are influenced by what it tells them,” Prof. Amblard, who did not take part in the study, told AFP.

The problem, he says, is that the public believes ChatGPT is “an artificial intelligence in the sense that it would be endowed with skills, a bit of what humans are capable of doing”, whereas “it is not an artificial intelligence system”, because it has “no modeling, no semantics, no pragmatics”, he adds.

Several regulatory authorities, including the European Union’s, are working on frameworks for artificial intelligence. As for ChatGPT, at the end of March Italy became the first Western country to block the service, over concerns linked in particular to its use of personal data. Sebastian Krügel nevertheless fears that, however important a legal framework may be, “technological progress is always one step ahead”. Hence the importance of educating people on the subject “from school” onwards.
