AI Chatbots as a Tool for Terrorist Recruitment: Experts Warn About the Dangers

2023-06-05 09:17:20

Warnings against using artificial intelligence to “recruit terrorists”

About a month ago, experts hired by OpenAI to evaluate the “ChatGPT-4” application before its release warned that it might be used to help produce a chemical weapon. Today, “terrorism” experts are echoing those warnings, pointing to the possibility of chatbots being used in “terrorist recruitment”.

Experts have expressed concern that AI-powered chat programs “might be a tool to persuade vulnerable individuals, or those with divergent opinions and ideas, to carry out (terrorist) attacks.” The concern was fueled by the case of Matthew King, a 19-year-old British resident who was sentenced last Friday to life imprisonment for planning a “terrorist” attack after viewing “extremist” material online, according to a report published Sunday by the British newspaper The Guardian.

Experts noted that “the speed with which this young man was radicalized makes it increasingly clear that vulnerable individuals are being recruited from their bedrooms, and that AI-powered online chatbots could become a tool in that process.”

The Guardian quoted Jonathan Hall KC, the independent reviewer of terrorism legislation, whose role is to assess the adequacy of “terrorism” laws, as saying: “What worries me is the suggestibility of people when they are immersed in this world with only a computer for company; they may find chatbots adept at using language that convinces them to do things.”

And while the innovators of artificial intelligence focus on talking about its advantages that will change the face of the world for the better, Hall KC believes they need to abandon the “technological utopia” mentality, amid fears that the new technology might be used to recruit terrorists.

“The threat to national security from artificial intelligence is more evident than ever, and technology creators need to take terrorists’ intentions into account when designing these systems,” he added.

And with calls to regulate the technology growing after AI pioneers warned it might threaten the survival of the human race, Prime Minister Rishi Sunak is expected to raise the issue when he travels to the US next Wednesday to meet President Biden and senior congressional figures.

The move is consistent with the UK’s efforts to address the national security challenges posed by artificial intelligence through a partnership between MI5, the UK’s domestic intelligence and security agency, and the Alan Turing Institute, the national body for data science and artificial intelligence.

Alexander Blanchard, a digital ethics researcher in the Institute’s Defence and Security Programme, says his work with the security services indicates that the UK takes the security challenges posed by AI very seriously: “There is a great willingness among defence and security policymakers to understand what is happening, how actors can use AI, and what the threats are.”

“There’s really a sense of the need to keep abreast of what’s happening, and work is being done to understand the current risks, the long-term risks, and the risks of next-generation technology,” he adds.

Reflecting Britain’s recognition of the security challenges posed by artificial intelligence, Sunak said last week that “Britain wants to become a global centre for artificial intelligence and its regulation,” insisting that it can provide “enormous benefits to the economy and society.”

Blanchard and Hall KC say the central issue is how humans can control AI so that its benefits are maximized and its harms avoided.

While recognizing the need to be aware of the security challenges and to confront them, Juergen Schmidhuber, director of the Artificial Intelligence Initiative at King Abdullah University of Science and Technology (KAUST) in Saudi Arabia, fears that “amplifying talk about the negatives may suggest that artificial intelligence is pure evil, and this is not true; the positives of artificial intelligence far outweigh the negatives.”

Schmidhuber, known in scientific and academic circles as one of the founding fathers of artificial intelligence, said in previous statements to Asharq Al-Awsat that “talk about dangers and negatives always attracts more public interest than talk about positives, which is why Arnold Schwarzenegger films about killer robots are more popular than documentaries about the benefits of medical applications of artificial intelligence.”

Mustafa Al-Attar, an artificial intelligence researcher at Nile National University in Egypt, told Asharq Al-Awsat that “freezing chatbots’ ability to self-learn can greatly reduce their dangers, and this is what must be demanded to ensure they are not misused.”

He adds that “this freeze means the chatbot will not draw on any sources beyond those it was given when it was programmed; those sources can then be updated from time to time, ensuring control over the information it provides.”
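As a rough illustration of the approach Al-Attar describes, the sketch below is a hypothetical example, not drawn from the article; the class and method names are invented. It shows a chatbot that answers only from a fixed, curated set of sources, never learns from user conversations, and can only have its sources changed through a deliberate maintainer update.

```python
# Minimal sketch (hypothetical): a chatbot whose knowledge is "frozen" to a
# curated set of sources. It never learns from user input; only an explicit,
# offline update by maintainers can change what it knows.

class FrozenKnowledgeChatbot:
    def __init__(self, sources: dict[str, str]):
        # Curated sources fixed at "programming" time.
        self._sources = dict(sources)

    def answer(self, question: str) -> str:
        # Retrieve only from the curated sources; no self-learning,
        # no fetching new material from the conversation or the web.
        q = question.lower()
        for topic, text in self._sources.items():
            if topic in q:
                return text
        return "I can only answer from my approved sources."

    def update_sources(self, new_sources: dict[str, str]) -> None:
        # Controlled, periodic update performed by maintainers,
        # not triggered by anything a user says.
        self._sources = dict(new_sources)


if __name__ == "__main__":
    bot = FrozenKnowledgeChatbot({"weather": "Consult the national weather service."})
    print(bot.answer("What's the weather like?"))  # answered from a curated source
    print(bot.answer("Tell me something new you learned."))  # refused: nothing outside the approved sources
```

The key design point in this sketch is that the answering path and the updating path are completely separate, which is one way to read Al-Attar’s distinction between a frozen model and periodic, supervised updates of its sources.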
