Artificial intelligence and health, a winning combination… or not, warns the WHO

2024-01-18 13:18:44

Generative artificial intelligence (AI) might revolutionize health care by, for example, accelerating disease screening, but its hasty and unguarded implementation carries dangers, warns the WHO.

• Read also: Japan: Literary Prize Winner Acknowledges ChatGPT Wrote Part of Her Novel

• Read also: Artificial intelligence integrated by default in the new Samsung Galaxy S24

• Read also: AI will affect 60% of jobs in advanced economies: Managing Director of the International Monetary Fund

In a document published Thursday, the World Health Organization analyzes the dangers and benefits of using large multimodal models (LMMs) – a type of generative AI technology – in health.

These LMMs can use multiple types of data, including text, images, and video, and generate results that are not limited to the type of data fed into the algorithm.

“Some say it mimics the way humans think, behave, and solve problems interactively,” Alain Labrique, director of digital health and innovation at the WHO, told a press conference.

According to the WHO, LMMs are expected to be widely used in health, scientific research and drug development in the future.

The organization identifies five areas where this technology could be applied: screening; scientific research and drug development; medical and nursing education; administrative tasks; and review of symptoms.

While this technology has great potential, the WHO emphasizes that these LMMs can produce “false, inaccurate, biased, or incomplete” results.

“As LMMs are increasingly used in healthcare and medicine, errors, misuse and, ultimately, harm to individuals are inevitable,” notes the WHO.

Tech giants

The WHO document presents new guidance on the ethics and governance of LMMs, making more than 40 recommendations for governments, technology companies and healthcare providers on how to take advantage of this technology safely.

The organization believes that flaws should not be discovered and corrected only after the technology has already been deployed in health facilities.

“Generative AI technologies have the potential to improve care, but only if those who develop, regulate and use these technologies fully identify and take into account the associated risks,” stresses WHO Chief Scientist Jeremy Farrar.

“We need transparent information and policies to manage the design, development and use of LMMs to achieve better health outcomes and to overcome persistent health inequalities,” he adds.

The WHO calls for the establishment of guarantees “that users harmed by an LMM are properly compensated or have other forms of recourse”.

AI has been used in public health and clinical medicine for over a decade, for example in radiology, but according to the WHO, LMMs present risks that “societies, health systems and end users are not yet ready to fully address.”

The organization also highlights concerns about the compliance of LMMs with existing regulations, particularly regarding data protection.

Furthermore, the fact that LMMs are often developed and deployed by technology giants also raises concerns and risks entrenching the dominance of these companies, according to the WHO.

The organization therefore recommends that LMMs be developed not only by scientists and engineers, but also by healthcare professionals and patients.

The WHO also warns of the vulnerability of LMMs to cybersecurity risks, which might jeopardize patient information and even the reliability of healthcare.

Finally, it concludes that governments should task regulatory authorities with approving the use of LMMs in health, and calls for audits to assess the impact of this technology.
