Can AI prevent mental health problems?

In 1972, a Stanford psychiatrist named Kenneth Colby created Parry, a program that simulated the speech of a person with paranoid schizophrenia. It was built to train students before they saw real patients, familiarizing them with how such patients talk and think.

Parry’s simulation of paranoid schizophrenia was so convincing that it was even able to “pass” the Turing Test, the test devised by mathematician Alan Turing to assess whether a machine can exhibit intelligent behavior indistinguishable from that of a human being.

Now, researchers at the University of Oxford are exploring how AI could be used to predict mental health problems in the near future. “Talking to young people can help us understand their perspectives and move toward providing a service that is informative, useful and productive,” explains Ella Arensman, a professor of Neuroscience and Society.

The potential of digital phenotyping

Jessica Lorimer, a research assistant in the Department of Psychiatry, explains that one method they’re using is called digital phenotyping, which uses artificial intelligence to collect and analyze data from mobile devices, such as physical activity or location, to predict mental health problems.
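
To illustrate the general idea, the sketch below shows the usual shape of a digital-phenotyping pipeline: passively collected phone data is turned into simple daily features, which a model then uses to estimate risk. This is an illustration only, not the Oxford team’s actual method; the feature set, the synthetic data and the random-forest classifier are all assumptions made for demonstration.

```python
# Illustrative digital-phenotyping sketch (not the Oxford team's actual pipeline).
# Daily features derived from phone sensors are used to train a simple risk classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Toy data: one row per person-week, three features derived from phone sensors.
n = 500
X = np.column_stack([
    rng.normal(7000, 2500, n),   # mean daily step count
    rng.poisson(4, n),           # distinct places visited per day
    rng.normal(35, 20, n),       # minutes of night-time screen use
])
# Synthetic labels for demonstration only; real labels would come from clinical assessment.
y = (((X[:, 0] < 5000) & (X[:, 2] > 45)) | (rng.random(n) < 0.05)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0, stratify=y)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("Toy AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

In practice the features would come from consented sensor streams and the labels from clinical assessments; as the researchers themselves stress, the harder questions concern consent, privacy and validation rather than the model itself.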

Dr Matthew Nour, an NIHR (National Institute for Health and Care Research) clinical lecturer and lead author of a study on automated language analysis in psychiatry, said: “Diagnosis and assessment in psychiatry is almost entirely based on talking to patients and their loved ones. […] Automated tests, such as blood tests and brain scans, play a minimal role.”

“Until very recently, automatic language analysis was out of reach for physicians and scientists. However, with the advent of language models such as ChatGPT, our team plans to apply this technology to a larger sample of patients and in more diverse speech settings, to see if it could prove useful in the clinic,” Nour adds.
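
One signal widely used in this line of research is semantic coherence, roughly how closely consecutive sentences in a speech transcript relate to one another. The sketch below computes a crude version of it with TF-IDF vectors; the measure, the toy transcript and the scoring are illustrative assumptions, not the specific method used in Nour’s study.

```python
# Illustrative semantic-coherence measure for a speech transcript
# (a demonstration only, not the method used in the study discussed above).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def coherence(sentences):
    """Mean cosine similarity between consecutive sentences in a transcript."""
    vectors = TfidfVectorizer().fit_transform(sentences)
    sims = [cosine_similarity(vectors[i], vectors[i + 1])[0, 0]
            for i in range(len(sentences) - 1)]
    return sum(sims) / len(sims)

transcript = [
    "I went to the shop this morning to buy bread.",
    "The shop was busy so I waited in line for bread.",
    "After buying the bread I walked home along the river.",
]
print("Coherence score:", round(coherence(transcript), 3))
```

Research systems replace these bag-of-words vectors with language-model embeddings, which capture meaning far better, but the principle is the same: speech is converted into numbers that could be tracked over time in the clinic.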

The barrier of ethics and privacy

However, this approach raises questions about consent and the right to privacy, especially for minors. “If a young person is found to be at risk, who should have the right to know that information: their parents, their teachers, their school, their doctor?” asks Jessica Lorimer.

An Oxford study called “What Lies Ahead” is investigating the ethical attitudes of 16- to 17-year-olds toward predictive mental health testing. Postdoctoral researcher Gabriella Pavarini explains that the potential future psychological impact of receiving a predictive diagnosis was a major concern for this group.

Companies like Facebook are already using AI to detect posts that may indicate suicide risk and send them to human moderators for review. If the person is deemed to be at risk, emergency services can be contacted to do a “wellness check.”
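
What is being described is a triage pattern rather than full automation: a model assigns a risk score, and only content above some threshold is routed to a human reviewer who makes the final call. The outline below is a hypothetical sketch of that routing logic; the scoring function, threshold and queue are placeholders, not Facebook’s actual system.

```python
# Hypothetical triage routing: an AI score decides whether a post is sent
# to a human moderator; a person, not the model, makes the final call.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class TriageQueue:
    score_post: Callable[[str], float]   # placeholder risk model, returns 0..1
    review_threshold: float = 0.8        # illustrative cut-off, not a real value
    for_human_review: List[str] = field(default_factory=list)

    def handle(self, post: str) -> str:
        score = self.score_post(post)
        if score >= self.review_threshold:
            self.for_human_review.append(post)
            return "flagged for human review"
        return "no action"

# Toy scorer for demonstration; a real system would use a trained language model.
queue = TriageQueue(score_post=lambda text: 0.9 if "hopeless" in text.lower() else 0.1)
print(queue.handle("Feeling hopeless lately."))
print(queue.handle("Great day at the park!"))
```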


In the program’s first year in the United States, 3,500 wellness checks were conducted. However, this raises several ethical dilemmas: much of the information about how Facebook’s algorithms work is proprietary, so the details are not public, nor is it known how many of the wellness checks actually prevented suicide attempts, or what their impact was.

According to The New Yorker, while Facebook’s use of AI to prevent suicide is a step forward, it also raises ethical questions. Unlike an imminent crisis, simply receiving a predictive diagnosis usually does not involve immediate risk, which raises the question of whether users would be willing to sacrifice their privacy so that mental health problems can be detected at an early stage.

Challenges that remain to be resolved

A study by the World Health Organization (WHO) has found significant shortcomings in how AI is being used in mental health research. “We found that the use of AI applications in mental health research is unbalanced, focusing mainly on depressive disorders, schizophrenia and other psychotic disorders,” explains Dr Ledia Lazeri, WHO Regional Advisor on Mental Health in Europe.

The study also found significant flaws in statistical handling, poor data validation and inadequate assessment of the risk of bias. For example, if certain ethnic groups are known to have less access to healthcare, algorithms trained on that data could be less accurate at diagnosing mental health problems in those populations.
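
To make the bias concern concrete, one basic check the WHO review found lacking is simply comparing a model’s accuracy across demographic groups. The snippet below illustrates that check with made-up group labels and predictions.

```python
# Illustrative bias check: compare a model's accuracy per demographic group.
# Group labels, true labels and predictions are made up for demonstration.
import pandas as pd
from sklearn.metrics import accuracy_score

results = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "true_label": [1, 0, 1, 0, 1, 0, 1, 1],
    "predicted":  [1, 0, 1, 0, 0, 0, 0, 1],
})

for group, rows in results.groupby("group"):
    acc = accuracy_score(rows["true_label"], rows["predicted"])
    print(f"Group {group}: accuracy = {acc:.2f}")
```

A large gap between groups is exactly the kind of signal that a proper risk-of-bias assessment is meant to surface before a model reaches the clinic.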

“Lack of transparency and methodological flaws are worrying as they delay the practical and safe application of AI,” said Dr David Novillo-Ortiz, WHO Europe Regional Advisor for Data and Digital Health.

Such methodological issues are common in AI research in mental health, the study notes: “Data engineering for AI models appears to be overlooked or poorly understood, and data is often poorly managed. These significant shortcomings may indicate an overly accelerated promotion of new AI models without pausing to assess their real-world feasibility.”

“The use of AI in mental health poses unique challenges that require close collaboration between AI experts and mental health professionals,” said Dr. Sagar Parikh, associate director of the University of Michigan Psychiatric Research Institute.

“We must ensure that these technologies respect the dignity and autonomy of patients, as only then will we be able to fully realize the benefits of AI in early detection and treatment of mental health problems,” concludes Dr. Parikh.

