Warnings That Artificial Intelligence Could Cause a Pandemic or Epidemic: How Might That Happen?

Specialized AI models trained on biological data are advancing rapidly, helping to speed up the development of vaccines, disease treatments, and more. But the same qualities that make these models useful also pose potential risks.

That’s why experts are calling on governments to introduce mandatory oversight and guardrails for advanced biological AI models, in a new paper published August 22 in the peer-reviewed journal Science.

While today’s AI models may not “contribute significantly” to biological risks, future systems could help engineer new pathogens capable of causing pandemics.

The warning comes in a paper by co-authors from Johns Hopkins University, Stanford University and Fordham University, who say AI models “are trained on, or are capable of purposefully manipulating, large amounts of biological data, from accelerating drug and vaccine design to improving crop yields.”

But as with any powerful new technology, such biological models will also pose significant risks.

“Because of its general nature, the same biological model that can design a benign viral vector to deliver gene therapy could be used to design a more pathogenic virus capable of evading vaccine-induced immunity,” the authors warned.

“Voluntary commitments among developers to assess the potential hazards of biological models are meaningful and important, but they cannot stand alone,” the paper continued. “We suggest that national governments, including the United States, pass legislation and establish mandatory rules that would prevent advanced biological models from contributing significantly to large-scale risks, such as the creation of novel or enhanced pathogens capable of causing major epidemics or even pandemics.”

Although today’s AI models are unlikely to “contribute significantly” to biological risks, the authors cautioned that “the essential ingredients for creating advanced biological models of concern may already exist or will soon exist.”

The experts reportedly recommended that governments create a “battery of tests” that biological AI models must pass before they are released to the public, allowing officials to determine how tightly access to the models should be restricted.

“We need to plan now,” Anita Cicero, deputy director at the Johns Hopkins Center for Health Security and one of the paper’s authors, said, according to Time. “Some sort of regulated government oversight and requirements will be necessary to mitigate the risks of particularly powerful tools in the future.”

Because of the expected advances in AI capabilities and the relative ease of obtaining biological materials and hiring third parties to conduct experiments remotely, Cicero believes that biological risks from AI could become apparent “within the next 20 years, and perhaps even much sooner,” unless there is proper oversight.

“We need to think not just about the current versions of all the tools available, but the next versions, because of the tremendous growth we see. These tools will become more powerful,” she added.

Source: Fox News
