Microsoft claims to have caught Chinese, Russian and Iranian hackers using its AI tools – February 14, 2024 at 1:00 p.m.

State-backed Russian, Chinese and Iranian hackers used tools from Microsoft-backed company OpenAI to hone their skills and deceive their targets, according to a report released Wednesday.

Microsoft said in its report that it tracked hacker groups affiliated with Russian military intelligence, Iran’s Revolutionary Guards, and the Chinese and North Korean governments as they attempted to perfect their hacking campaigns using large language models. These computer programs, often described as artificial intelligence, rely on massive amounts of text to generate human-sounding responses.

The company announced the discovery as it implemented a blanket ban on state-backed hacking groups that use its artificial intelligence products.

“Regardless of whether there was a violation of the law or the terms of service, we simply do not want the actors that we have identified – who we track and know to be threat actors of any kind – to have access to this technology,” Tom Burt, Microsoft’s vice president of customer security, said in an interview with Reuters before the report was released.

Diplomatic officials from Russia, North Korea and Iran did not immediately respond to messages seeking comment on the allegations.

Liu Pengyu, a spokesperson for the Chinese Embassy in the United States, said his country opposed “baseless slander and accusations against China” and called for a “safe, reliable and controllable” deployment of AI technology to “improve the common well-being of all humanity.”

The allegation that state-backed hackers were caught using AI tools to boost their espionage capabilities is likely to heighten concerns about the technology’s rapid proliferation and the risks of abuse it presents. Since last year, top Western cybersecurity officials have been warning about the misuse of these tools by rogue actors, although until now the details have been scant.

“This is one of the first cases, if not the first, of an AI company speaking out publicly about how cybersecurity threat actors are using AI technologies,” said Bob Rotsted, who leads the cybersecurity threat intelligence department at OpenAI.

OpenAI and Microsoft called the hackers’ use of their AI tools “early” and “progressive.” Burt said neither had seen cyberspies making inroads.

“We really saw them using this technology like any other user,” he said.

The report describes how different hacker groups used the large language models.

Suspected hackers working on behalf of Russia’s military spy agency, better known as the GRU, used the models to research “various satellite and radar technologies that may relate to conventional military operations in Ukraine,” Microsoft said.

According to Microsoft, North Korean hackers used the models to generate content “likely to be used in spear-phishing campaigns” against regional experts. Iranian hackers also relied on the models to craft more convincing emails, Microsoft said, at one point using them to draft a message aimed at luring “prominent feminists” to a booby-trapped website.

The software giant said Chinese state-backed hackers were also experimenting with large language models, for example by asking questions about rival intelligence agencies, cybersecurity issues and “notable people.”

Neither Burt nor Rotsted would comment on the volume of activity or the number of suspended accounts. Burt defended the zero-tolerance ban on hacker groups – which does not extend to Microsoft products such as its Bing search engine – by highlighting the novelty of AI and concerns over its deployment.

“This technology is both new and incredibly powerful,” he said. (Reporting by Raphael Satter; additional reporting by Christopher Bing in Washington and Michelle Nichols at the United Nations;)
