Russian, North Korean and Chinese hackers use ChatGPT

2024-02-14 20:53:20

Emerald Sleet, a North Korean hacker group, and Crimson Sandstorm, associated with the Iranian Revolutionary Guards, used the chatbot to generate documents likely to be used for “phishing”. (Photo: 123RF)

Hackers affiliated with the Russian, Chinese, Iranian, or North Korean governments have used ChatGPT to identify vulnerabilities in computer systems, prepare “phishing” operations, or disable antivirus software, OpenAI and Microsoft report in documents published Wednesday.

In a message posted on its site, OpenAI says it has “disrupted” the use of generative artificial intelligence (AI) by these state-affiliated actors, in collaboration with Microsoft Threat Intelligence, a unit that tracks cybersecurity threats to companies.

“The OpenAI accounts identified as affiliated with these actors have been closed,” said the creator of the generative AI interface ChatGPT.

Emerald Sleet, a North Korean hacker group, and Crimson Sandstorm, associated with the Iranian Revolutionary Guards, used the chatbot to generate documents that might be used for “phishing,” according to the study.

“Phishing” consists of approaching an Internet user under a false identity in order to fraudulently obtain their passwords, codes, and identifiers, or direct access to non-public information and documents.

Crimson Sandstorm also used large language models (LLMs), the technology underlying generative AI interfaces, to better understand how to disable antivirus software, according to Microsoft.

The Charcoal Typhoon group, considered close to the Chinese authorities, used ChatGPT to try to detect vulnerabilities in anticipation of possible cyberattacks.

The goal of the partnership between Microsoft and OpenAI is to ensure the safe and responsible use of technologies powered by artificial intelligence, such as ChatGPT, according to Microsoft.

The Redmond, Washington-based group says it helped strengthen the protections around OpenAI’s large language models (LLMs).

The report notes that the interface refused to help another hacker group close to the Chinese government, Salmon Typhoon, generate computer code for hacking purposes, thereby “adhering to ethical rules” built into the software.

“Understanding how the most advanced threat actors use our programs for malicious purposes gives us insight into practices that may become more prevalent in the future,” OpenAI explains.

“We will not be able to block every ill-intentioned attempt,” warns OpenAI. “But by continuing to innovate, collaborate and share, we make it more difficult for bad actors to go unnoticed.”

