Cybercriminals from countries such as North Korea, China and Iran have begun actively using ChatGPT, Microsoft and OpenAI have found. In North Korea's case, artificial intelligence risks making the cyber scammers in Pyongyang's pay even more effective.
Reclusive, but not excluded from ChatGPT. North Korea – more precisely, groups of hackers in the pay of the Pyongyang regime – has seized on the famous American-made conversational artificial intelligence to carry out its misdeeds online, Microsoft and OpenAI, the creator of ChatGPT, have found.
In their reports published Wednesday, February 14, the two American giants also examine the cases of Iran and China. But North Korea stands out.
Twenty years of AI
It is hard, indeed, to imagine how this country, cut off commercially and technologically from the rest of the world, could have gained access to the latest innovations in artificial intelligence.
However, “North Korean scientists have published hundreds of academic articles over the past twenty years on the development of AI”, notes the Financial Times in a February 19 article. A national AI research institute was created in 2013 – proof that the technology has become a priority for the regime.
This academic work mainly concerns military applications of AI, including ones linked to the development of the North Korean nuclear program, Hyuk Kim, a North Korea specialist at the James Martin Center for Nonproliferation Studies in Monterey, California, told the Financial Times.
With ChatGPT, North Korea appears to have moved from theoretical study to practice when it comes to cybercrime. For the moment, these cybercriminals make “basic use” of this AI for their attacks, notes Microsoft.
The examples cited by the American IT giant – also the main investor in OpenAI – show that the North Korean group spotted on ChatGPT, dubbed Emerald Sleet, uses it mainly for “social engineering”: the approach techniques cybercriminals use to win their victims' trust in order to get them to click on a malicious link – triggering the download of a virus – or to hand over credentials for sensitive sites, i.e. phishing.
“We know that social engineering is the basis of most of the hacking attempts by North Korean groups, and one of their main problems was the language barrier. This is where ChatGPT comes in,” says Alan Woodward, a cybersecurity expert at the University of Surrey.
Who has never used this tool as an instant translator? North Korean hackers have thought of it too, and it “allows them to improve their language level and thus appear more credible in their interactions with victims”, explains Benoît Grunemwald, a cybersecurity expert at the Slovak company Eset.
Fake North Korean recruiters on LinkedIn
One of the favorite hunting grounds of North Korean cybercriminals is LinkedIn. The professional social network allows them, in particular, to identify victims inside the companies they seek to hack. They then create fake recruiter profiles to get in touch with their targets and try to extract as much information from them as possible. ChatGPT should allow them to more convincingly impersonate an American, Japanese or even French headhunter.
“And this is only the beginning,” says Alan Woodward. Advances in LLMs (large language models such as ChatGPT) that can imitate voices while translating in real time “will soon allow these cybercriminals to go beyond written communications and dupe their victims over the phone,” the expert estimates.
But language is not everything. Posing as a recruiter from the depths of Texas or elsewhere also requires “being able to culturally put yourself in the shoes of the role you want to play”, notes Robert Dover, a specialist in cybersecurity and criminology at the University of Hull.
Here again, LLMs like ChatGPT can “be very useful in providing the right cultural references and making people believe you come from a given region or city,” the expert adds. Thus, a hacker who has never left Pyongyang can easily claim, for example, to be a regular visitor to the “famous” Boise River Greenbelt, the recreational trail that, according to ChatGPT, is the tourist pride of Idaho's capital.
Information that could certainly already be found on the Internet before the advent of LLMs. But ChatGPT “makes these cybercriminals faster and more efficient in their search for relevant information to dupe their victims,” summarizes Benoît Grunemwald.
But not everything is rosy for North Korean cybercriminals in the world of ChatGPT. If, for example, they want to use it to find biographical information about their victims, they risk being disappointed: this AI tends to invent biographical details or to confuse different people who share the same name.
More money to finance North Korean nuclear power?
Overall, however, the advent of ChatGPT has allowed North Korean hackers “to narrow the technical gap in their fight against the cyber authorities of Western countries,” acknowledges Alan Woodward. In that sense, AI-enabled hackers illustrate the democratization of cybercrime in the era of LLMs. “This allows any malicious actor, not just those with the logistical or financial support of a state [such as North Korea, Ed.], to be much more efficient,” adds Benoît Grunemwald.
Pyongyang can also hope to make some savings in the process. Previously, to make sure that cybercriminals in the regime's pay could convincingly pass themselves off as someone else online, “we sometimes had to send them abroad to soak up the local culture,” says Robert Dover.
The main goal of these operations is most often to steal money from targeted foreign companies “in order to finance North Korea's military nuclear program,” underlines the Financial Times. In 2023, North Korean hackers managed to steal more than 600 million dollars – over 555 million euros – according to an analysis by TRM Labs, an IT security company specializing in cryptocurrency transactions.
The risk is that LLMs “will allow these cybercriminals to be even more convincing and effective,” fears Robert Dover. In other words, convincing chatbots like ChatGPT could become unwitting accomplices in Pyongyang's drive to develop its nuclear program even faster.