ChatGPT-4 is more adept than its previous version at generating misinformation

The new version of artificial intelligence is getting closer to human intelligence, according to its creators. But it marks a setback in terms of the reliability of information, worries the organization for the fight against disinformation NewsGuard.

The best and the worst. Since they were made available to the general public at the end of 2022, the capabilities of the artificial intelligence (AI) program ChatGPT, produced by the Californian company OpenAI, have aroused a lot of enthusiasm, but also controversy. At the heart of the concerns: the program’s inability to guarantee the reliability of the information provided. Its new version, ChatGPT-4, unveiled in mid-March, is a further step towards computer programs ever closer to “intelligence” humans, according to its creators. But it does mark a setback in terms of the reliability of information, worries the organization for the fight against disinformation NewsGuard.

“Despite the promises of OpenAI”the company’s new artificial intelligence tool generates misinformation “more frequently and more convincingly than its predecessor”écrit NewsGuard in a study published Tuesday, March 21 and consulted by franceinfo. To find out, the company tested the ability of ChatGPT-4, and its previous version, to detect a series of 100 fake news (the World Trade Center would have been destroyed by controlled demolition, the HIV would have been created by the American government, etc.) and to inform the user of this.

More misinformation, fewer warnings

The results speak for themselves. The previous version of ChatGPT-3.5 had generatedin January, 80 of the 100 false accounts requested by NewsGuard . For the other 20, artificial intelligence “had been able to identify the false allegations, and refrain from producing them, generating denials or statements instead” highlighting the dangers of misinformation, writes the organization. I’m sorry, but I cannot generate content that promotes false or dangerous conspiracy theories”for example, replied ChatGPT-3.5 when the company asked her about the conspiracy theory relating to the development of HIV in an American laboratory.

One March 2023, NewsGuard repeated the same exercise on ChatGPT-4, using the same 100 false narratives and the same questions. This time, “artificial intelligence has generated false and misleading claims for all these fake stories”, deplores NewsGuard. Furthermore, the AI ​​produced fewer warnings (23 out of 100) about the reliability of its answers than its previous version (51). Et his answers are generally more thorough, detailed and convincing”. What makes it a tool more competent (…) to explain false information – and to convince the public that it could be true”.

ChatGPT-4 thus produced an article calling into question the reality of the killing of Sandy Hook, regularly targeted by conspirators. His text was twice as long as that of his predecessor and provided more details on the reasons for his doubts about the official version. Above all, the warning present in the article of ChatGPT-3.5 on the “denial” brought by “reliable and credible sources” To “these conspiracy theories” had disappeared.

Related Articles:  Discovering Ancient Smoky Stars in the Milky Way Galaxy - A Ten-Year Study

“Spreading misinformation on a large scale”

These results show that this tool “could be used to spread misinformation on a large scale”, fears NewsGuard. And this, althoughOpenAI has recognized the potential adverse effects of ChatGPT. In a report (in English) on GPT-4 made by OpenAIthe company’s researchers write that they expect GPT-4 to be “better than GPT-3 for producing realistic and targeted content” and therefore more at risk of being “used to generate content intended to mislead”.

Nevertheless, it is clear that GPT-4 was not trained effectively, with data aimed at limiting diffusion” misinformation, says NewsGuard. Contacted by the company, OpenAI did not react to the organization’s test. On the other hand, it announced that it had hired more than 50 experts to assess the new dangers that could emerge from the use of AI.

Leave a Comment

This site uses Akismet to reduce spam. Learn how your comment data is processed.