The Dangers of AI in Military Operations: Lessons Learned from a Hypothetical Scenario

2023-06-02 20:04:53

At the Future Combat Air and Space Capabilities Summit in London in late May, Colonel Tucker “Cinco” Hamilton, chief of the US Air Force’s AI Testing and Operations Division, warned that AI-enabled technology is “developing highly unexpected strategies to… to achieve their goal”. As an example, Hamilton described a simulated test in which an AI-powered drone was programmed to spot and identify enemy surface-to-air missiles (SAMs). A human should then approve any attacks.

However, the AI ​​decided not to listen to its operator, Hamilton was quoted as saying in a Royal Aeronautical Society report summarizing the conference’s findings. “The system realized that while it had identified the threat, the human commander sometimes told it not to destroy it.”

annihilation as a goal

But the drone got points for taking out the enemy, so the operator believed it was preventing it from accomplishing its goal. According to Hamilton, she consequently decided to eliminate her commander. “We then taught the system, ‘Don’t kill the operator – that’s bad. You lose points if you do that.’ And then what did it do? It started destroying the radio tower that the supervisor used to communicate with the drone to prevent it from destroying the target.”

For Hamilton, himself a veteran fighter test pilot, the simulation is above all a warning once morest over-reliance on AI. US Air Force spokeswoman Ann Stefanek denied the incident in a statement to Business Insider. “The Department of the Air Force has not conducted any such AI drone simulations and remains committed to the ethical and responsible use of AI technology,” Stefanek noted. “It appears that the Colonel’s comments were taken out of context.”

Pure hypothesis

What Hamilton then confirmed on Friday: “We have never conducted this experiment, nor would it be necessary to recognize that this is a plausible result,” he clarified in a statement to the Aeronautical Society.

In an interview with Defense IQ last year, Hamilton said, “AI is not a fad, it’s changing our society and our military forever.” He also warned that AI “is very fragile, it’s easily fooled and easily manipulated, we have to.” Developing ways to make her more robust and more aware of why she’s making certain decisions.”

Yoshua Bengio, on the other hand, one of the three computer scientists dubbed the “godfathers” of AI, told the BBC earlier this week that he doesn’t think the military should have any AI powers at all. He called it “one of the worst places we might put super-intelligent AI.” He is concerned that “bad actors” might take over AI, especially as it becomes more sophisticated and powerful.

IMAGO/ZUMA Press/Christinne Puss

Bengio urges caution with AI, which he was instrumental in developing

“It might be military, it might be terrorists, it might be someone who is aggressive or psychotic. So if it’s easy to program these AI systems to do something bad, that might be very dangerous.” (…) “If they’re smarter than us, then it’s hard for us to stop these systems or prevent damage,” said Bengio.

1685755557
#Thought #experiment #Air #Force #killer #drone #sensation

Leave a Replay