People are skeptical about AI medical advice—and often rightly so

2024-07-31 18:00:00

An inexplicable tightening of the stomach, a persistent cough, or strange spots on your toenails: asking Google about one's symptoms is nothing new, and with the growing popularity of AI-based chatbots such as ChatGPT, digital self-diagnosis appears to be evolving with it. Yet a Würzburg study reported in the journal Nature Medicine shows that people still have significant reservations about the medical capabilities of this kind of artificial intelligence.

Perceptions of artificial intelligence recommendations examined

Scientists in Würzburg studied how people react to medical advice generated by artificial intelligence. "We are not interested in the technical capabilities of AI, but in the question of how the AI output is perceived," says Moritz Reis of Julius Maximilian University of Würzburg.

To do this, the research team divided more than 2,000 test subjects into three groups, all of which received identical medical advice. The first group was told that the advice came from a doctor; the second was told it came from an AI-based chatbot; and the third was told it came from a chatbot but had been double-checked by a doctor.

The test subjects rated the recommendations for reliability, understandability, and empathy. As soon as they suspected AI was involved, they rated the advice as less empathetic and less reliable, and this held even for the group that believed a doctor had reviewed the AI's recommendations. Accordingly, they were less willing to follow those recommendations. "The effect of this bias against AI is not large, but it is statistically significant," Reis commented.

Explaining Artificial Intelligence Skepticism

The cognitive psychologist attributes skepticism about artificial intelligence partly to stereotypes: "Many people believe that machines cannot empathize." When it came to understandability, however, all three groups rated the recommendations the same.

For the research team, this skepticism matters because AI is playing an increasingly important role in medicine, and a large volume of research on new AI applications is currently being published. That makes public acceptance all the more important, says Reis: "The question for future applications of artificial intelligence in medicine is not only what is technically possible, but also how far patients are willing to go along with it." Comprehensive public education about such applications and the AI behind them is needed. "Additionally, other studies have shown how important it is for patients' trust that, ultimately, the human doctor always retains final decision-making authority alongside the patient," Reis emphasized.

Transparency is the key factor

The scientist believes that transparency is particularly important: “This means, for example, that artificial intelligence can not only make a diagnosis, but also explain in an understandable way what information led to this result.”

How good such results actually are has been tested scientifically several times, with varying degrees of success. In 2023, for example, ChatGPT demonstrated a high level of diagnostic accuracy in the Journal of Medical Internet Research: tested on 36 case studies, the chatbot reached the correct final diagnosis in nearly 77% of cases. A Dutch study even found the chatbot close to doctors in diagnostic ability in the emergency room: using anonymized data from 30 patients treated in Dutch emergency centers, ChatGPT made the correct diagnosis in 97% of cases (Annals of Emergency Medicine, 2023).

By comparison, a 2023 study published in the journal JAMA found that the chatbot correctly diagnosed only 27 of 70 medical case studies, a hit rate of just 39%. A study published in the journal JAMA Pediatrics concluded that this hit rate is even lower for diseases that primarily affect children.
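
The accuracy figures cited above can be cross-checked with a few lines of Python. Note that where a study is quoted here only by percentage, the raw counts in the comments are back-calculated estimates, not numbers reported in this article:

```python
# Figure with an explicit count in the text above.
print(f"JAMA 2023: 27/70 = {27 / 70:.1%}")  # ~38.6%, rounded to 39% above

# Studies cited above only by percentage; the counts below are
# back-calculated estimates, not figures from the source.
print(f"JMIR 2023: ~77% of 36 cases ≈ {round(0.77 * 36)} correct diagnoses")
print(f"Annals of Emergency Medicine 2023: 97% of 30 cases ≈ {round(0.97 * 30)} correct")
```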

Application of ChatGPT in medical education

A recent study published in the journal PLOS One now examines whether ChatGPT can be useful in medical training. After all, chatbots not only tap into vast knowledge bases but can also communicate that knowledge interactively and in an easy-to-understand way, according to the research team at the Health Sciences Centre in London, Canada.

The team presented ChatGPT with 150 so-called case challenges from a repository of medical case histories that describe symptoms and disease progression and ask prospective and practicing physicians to select the correct diagnosis and treatment plan from multiple-choice answers.

In this test, ChatGPT answered correctly in less than half of the cases (74 of 150). The study found that ChatGPT struggled to interpret laboratory values and imaging tests and overlooked important information. The authors therefore conclude that ChatGPT in its current form is not accurate as a diagnostic tool and that caution is required when using chatbots both as diagnostic tools and as teaching aids.
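
The headline number is easy to verify in the same illustrative style as above:

```python
print(f"PLOS One: 74/150 = {74 / 150:.1%}")  # ~49.3%, i.e. just under half
```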

"The combination of high relevance and relatively low accuracy argues against relying on ChatGPT for medical advice, as it can present important information that may be misleading," the study states. This warning likely also applies to medical laypeople who use chatbots for digital self-diagnosis.

ChatGPT’s own assessment

ChatGPT itself emphasizes that it is not suited to this. When asked about its diagnostic qualifications, the chatbot responded: "I am not a doctor and have no medical training. I can provide information on medical topics, offer general advice, and answer questions, but I cannot make medical diagnoses or provide professional medical advice."
