2023-07-20 20:10:00
Researchers at Stanford University and the University of California, Berkeley conducted a study and found that the performance of the ChatGPT virtual assistant degrades over time.
As Futurism.com writes, the scientists spent several months analyzing the behavior of two ChatGPT versions, GPT-3.5 and GPT-4.
Thus, the accuracy of GPT-4's answers to mathematical queries fell from 97.6% to 2.4% (from 488 to 12 correct answers); on questions about methods of illegal financial enrichment it dropped from 21% to 5%; on the task of generating computer code it fell from 52% to 10%; on visual puzzles, however, it rose from 24.6% to 27.4%.
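As a quick sanity check on the figures above: the GPT-4 math numbers (488 and then 12 correct answers, yielding 97.6% and 2.4%) imply a fixed test set of 500 queries. A minimal sketch of that arithmetic, assuming the 500-query set size:

```python
def accuracy_pct(correct: int, total: int) -> float:
    """Return accuracy as a percentage, rounded to one decimal place."""
    return round(correct / total * 100, 1)

# The article's GPT-4 math-task counts, assuming 500 queries per run:
march_accuracy = accuracy_pct(488, 500)  # 97.6
june_accuracy = accuracy_pct(12, 500)    # 2.4
print(march_accuracy, june_accuracy)
```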
GPT-3.5, by contrast, began to perform better on tasks involving mathematics, visual puzzles, and questions about illegal ways to make money, but its code generation got worse.
Experts do not know the exact reason why ChatGPT has become less likely to give correct answers to the same questions.
According to the experts, the chatbot's effectiveness may have fallen because of software optimizations implemented by OpenAI's developers. In particular, after the introduction of features that prohibit the virtual assistant from commenting on sensitive topics, it began to give lengthy evasive answers to some common questions.
The researchers intend to continue evaluating GPT versions as part of a longer-term study. Perhaps OpenAI should regularly conduct and publish its own research on the quality of its AI models for clients, Tech News Space notes.
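The kind of longitudinal evaluation the researchers describe can be sketched as a simple drift monitor: score each dated model snapshot against a fixed gold set of question–answer pairs and compare accuracy over time. This is a hypothetical illustration, not the study's actual code; `ask_model` is a stand-in for a real API call, and the sample questions are invented.

```python
from typing import Callable, Dict, List, Tuple

def score_snapshot(ask_model: Callable[[str], str],
                   gold: List[Tuple[str, str]]) -> float:
    """Fraction of gold (question, expected_answer) pairs answered correctly."""
    correct = sum(1 for question, answer in gold
                  if ask_model(question).strip() == answer)
    return correct / len(gold)

def drift(history: Dict[str, float]) -> float:
    """Accuracy change from the earliest to the latest recorded snapshot."""
    snapshots = sorted(history)  # date strings sort chronologically
    return history[snapshots[-1]] - history[snapshots[0]]

# Usage with a dummy "model" that always answers "4":
gold_set = [("What is 2+2?", "4"), ("What is 3*3?", "9")]
accuracy = score_snapshot(lambda q: "4", gold_set)  # 0.5
print(drift({"2023-03": 0.976, "2023-06": 0.024}))
```

Running such a harness on every model update, with a stable gold set, is one way a provider could publish the kind of regular quality reports the article suggests.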