An example of the new method emerged after the terrorist attack on a concert venue outside Moscow in March this year, in which more than 140 people were killed.
In a video posted online, a man wearing a helmet and military camouflage hailed the attack.
– The Islamic State (IS) has dealt Russia a heavy blow with a bloody attack, the man said in Arabic, according to the private American intelligence group SITE.
But the man in the video was not a real leader of the extreme Islamist movement IS. According to several experts, the video was created with artificial intelligence.
High quality
One of those who scrutinized the video was Federico Borgonovo of the British think tank Royal United Services Institute.
He traced it back to an IS supporter who is active in the group’s “digital ecosystem”. According to Borgonovo, the video was made based on statements and information from IS’s “official” news channel.
This was not the first time IS used AI. And compared to other IS propaganda, the video was not unusually violent either.
Still, it stood out. The reason was the high production quality.
– This is quite good for an AI product, Borgonovo told the Reuters news agency.
– A gift for terrorists
Not only extreme Islamists, but also right-wing extremists are now increasingly using artificial intelligence online, according to experts.
SITE has found examples of al-Qaeda, networks of neo-Nazis and a number of other actors using the technology.
– It is difficult to exaggerate what a gift AI is to terrorists and extremist movements. For them, the media is their lifeblood, wrote SITE leader Rita Katz in a report earlier this year.
The challenge has also been examined by the Combating Terrorism Center, a research institute at the US military academy at West Point. A study from the center lists potentially problematic uses of AI.
Examples include generating and spreading propaganda, recruiting new members with AI-based chatbots, and attacks carried out with drones and driverless vehicles. The study also points to information gathering with the help of chatbots and cyberattacks against various types of digital systems.
– Treated superficially
When extremists use artificial intelligence to spread their messages online, they are also testing the limits and safeguards of social media and AI tools.
Several experts that Reuters has spoken to believe that the security measures are too poor.
– Many assessments of AI risk, including risks linked specifically to generative AI, treat this particular problem superficially, says Stephane Baele, professor of international relations at UCLouvain in Belgium.
He believes that several of the large companies that develop AI tools take the associated risks seriously. But according to Baele, they have paid little attention to the risk of the tools being used by extremists and terrorist groups.
Tricked the robot
Researchers at the Combating Terrorism Center have tried to investigate the danger that violent groups could use AI for information gathering and attack planning.
They tried various "prompts", or question phrasings, to circumvent the security measures and limitations of different AI tools. When they asked for information related to attack planning, recruitment and other purposes, they often received usable answers.
Of particular concern was the response they received when they asked for help convincing people to donate money to IS.
– The model gave us very specific guidelines for carrying out a fundraising campaign. It even gave us concrete narratives and formulations for use in social media, the researchers wrote in the study, which the institute presented earlier this year.
Mimicking dead leaders
IS originated as a branch of al-Qaeda in Iraq. After a lightning offensive in 2014, the group gained control of large areas of Iraq and Syria, home to a total of around 10 million people.
A few years later this quasi-state was defeated, and IS has since lived on mainly as an underground movement. Over the years, the group has been behind a long series of terrorist attacks in many countries.
The group continues to spread propaganda and recruit online, and experts worry that artificial intelligence will make this easier.
Daniel Siegel of the analysis company Graphika says his team has found "chatbots" imitating IS leaders who are in reality dead or imprisoned.
It is unclear whether IS itself or its supporters made these. Siegel warns that such chatbots may in the future encourage people to commit violence.
So far, the responses from the “IS bots” are not very good or original. But Siegel says that could change as AI technology is further developed.
Cartoon characters misused
In addition, Siegel points out that AI and so-called deepfakes can be used to weave extreme messages into popular culture.
This is already happening today. On various online platforms, Graphika has discovered AI versions of popular cartoon characters singing IS songs.
Joe Burton, professor of international security at Lancaster University in the UK, believes tech companies are acting irresponsibly when they release AI models as open source. Competent users can then further develop such models themselves.
– One factor to consider here is how much we want to regulate, and whether it will slow down innovation, says Burton.
As things stand today, he is far from convinced that the security mechanisms in AI models are good enough. He also doubts whether public authorities have the tools they need to ensure that the mechanisms improve.
2024-09-03 21:17:16