Researchers say it would be impossible to control a super-intelligent artificial intelligence

Berlin – Sheba:

The idea of artificial intelligence overthrowing humanity has been talked about for decades, and in 2021, scientists delivered their verdict on whether we would be able to control a high-level computer superintelligence. The answer? Almost certainly not.

The point is that controlling a superintelligence far beyond human understanding would require a simulation of that superintelligence that we can analyze (and control). But if we are unable to understand it, it is impossible to create such a simulation.

Nor can rules like “do no harm to humans” be set if we don’t understand the kinds of scenarios an AI might come up with, the authors of the 2021 study suggest. Once a computer system is working at a level beyond the scope of our programmers, we can no longer set limits.

The researchers wrote: “A superintelligence poses a fundamentally different problem than those typically studied under the banner of ‘robot ethics’… This is because a superintelligence is multi-faceted, and therefore potentially capable of mobilizing a diversity of resources in order to achieve objectives that are potentially incomprehensible to humans, let alone controllable.”

Part of the team’s reasoning came from the halting problem, posed by Alan Turing in 1936. The problem centers on knowing whether or not a computer program will reach a conclusion and an answer (so it halts), or whether it will simply loop forever trying to find one.

And as Turing demonstrated through some clever mathematics, while we can know whether some specific programs halt, it is logically impossible to find a method that tells us that for every possible program that could ever be written. That brings us back to artificial intelligence, which in a superintelligent state could feasibly hold every possible computer program in its memory at once.
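To see why, here is a minimal sketch of Turing’s classic diagonalization argument (in Python, for illustration; the function names are assumptions, not from the study): if a perfect halting decider existed, we could build a program that contradicts its own verdict.

```python
# Hypothetical illustration of Turing's diagonalization argument.
# Assume, for contradiction, that a perfect halting decider exists:
def halts(program, argument):
    """Returns True if program(argument) eventually stops,
    False if it runs forever. (No such function can exist.)"""
    ...

# Build a program that does the opposite of whatever halts() predicts:
def paradox(program):
    if halts(program, program):
        while True:   # halts() said "stops", so loop forever
            pass
    else:
        return        # halts() said "loops forever", so stop at once

# Now ask: does paradox(paradox) halt?
# - If halts(paradox, paradox) is True, paradox loops forever.
# - If it is False, paradox halts immediately.
# Either answer contradicts the decider, so halts() cannot exist.
```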

And any program written to stop an AI from harming humans and destroying the world, for example, may reach a conclusion and halt, or it may not; it is mathematically impossible for us to be absolutely sure either way, which means the AI is not containable.
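The study’s reasoning can be sketched the same way. The snippet below is an illustrative reduction under assumed names, not code from the paper: any perfect “containment” check, one that decides whether a program ever does harm, could be used to build a halting decider, which we just saw cannot exist.

```python
def harm():
    """Stand-in for any action the containment rule forbids."""
    pass

# Suppose, for contradiction, a perfect containment checker existed:
def is_safe(program, argument):
    """Returns True iff program(argument) never calls harm().
    (By the reduction below, no such function can exist.)"""
    ...

def halts_via_safety(program, argument):
    # Wrap the target so that it "harms" exactly when it finishes:
    def wrapper():
        program(argument)  # run the program under scrutiny
        harm()             # reached only if the program halts
    # wrapper is harmful <=> program(argument) halts, so is_safe
    # would solve the halting problem -- a contradiction.
    return not is_safe(wrapper, None)
```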

“In effect, this makes the containment algorithm unusable,” computer scientist Iyad Rahwan of the Max Planck Institute for Human Development in Germany said in 2021.


The alternative to teaching an AI some ethics and telling it not to destroy the world (something no algorithm can be absolutely certain of doing) is to limit the capabilities of the superintelligence, the researchers said. It could be cut off from parts of the Internet or from certain networks, for example.

The study, published in the Journal of Artificial Intelligence Research, rejected this idea too, suggesting that it would limit the reach of the artificial intelligence; the argument goes that if we are not going to use it to solve problems beyond the scope of humans, why create it at all?

And if we are going to press ahead with artificial intelligence, we might not even know when a superintelligence beyond our control arrives, such is its incomprehensibility. That means we need to start asking some serious questions about the directions we are heading in.

Also in 2021, computer scientist Manuel Cebrian of the Max Planck Institute for Human Development said: “A super-intelligent machine that controls the world sounds like science fiction. But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned it.”

The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity.
