ChatGPT: can hackers take advantage of it to multiply cyberattacks?

When OpenAI launched ChatGPT last November, developers discovered that the AI-powered chatbot could write code. And as with any innovation, some are trying to take advantage of it for malicious ends…

For years, companies and specialists have touted the benefits of AI. But others are sounding the alarm about abuses, the errors a poorly designed algorithm can cause, and even deepfakes.

With ChatGPT, cyber risk clearly reaches a new level. Hackers could use it to write malware or to find vulnerabilities in a website or server.

Another misuse of the chatbot: launching phishing campaigns. Last December, researchers at Check Point (an Israeli cybersecurity company) demonstrated how ChatGPT could potentially build a malware campaign from start to finish, from crafting the phishing email to writing the malicious code.

But to generate complete code, they had to prompt the AI model to take into account details that only an expert programmer would have thought of… and therein lie the limits of ChatGPT.

A miracle solution for “script kiddies”

That is the view of Marcus Hutchins, the hacker who helped stop the spread of the WannaCry ransomware in 2017. Known by his online pseudonym MalwareTech, he tried to use the chatbot to develop ransomware (malicious code that encrypts data and demands payment). But “ChatGPT failed,” he explained in an interview with CyberScoop.

“The attacker must know exactly what he wants and be able to specify the functionality. Just writing ‘write code for malware’ won’t produce anything really useful,” Sergey Shykevich, a researcher at Check Point, said in a press release.

“ChatGPT is a parrot that has read a lot and only knows how to repeat what is already known. You don’t need ChatGPT to develop viruses,” says an expert from the French Ministry of the Armed Forces who wishes to remain anonymous.

Ultimately, in its current version, ChatGPT appears most useful to “script kiddies”. With this AI tool, inexperienced hackers can make up for their weaknesses in writing malicious code. As proof, on a hacking forum, one user managed to produce a Python script capable of encrypting and decrypting files on the fly. The code is certainly “benign” as it stands, but it could be refined in the coming months…
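By way of illustration only, here is a minimal sketch of what such a benign encrypt/decrypt script could look like. It is not the code posted on the forum; it assumes the widely used Python cryptography library (Fernet symmetric cipher), and the file name and function names are hypothetical.

```python
# Illustrative sketch only (not the forum script): symmetric file
# encryption and decryption using Fernet from the `cryptography` library.
from pathlib import Path
from cryptography.fernet import Fernet


def encrypt_file(path: Path, key: bytes) -> None:
    """Encrypt a file in place with the given Fernet key."""
    data = path.read_bytes()
    path.write_bytes(Fernet(key).encrypt(data))


def decrypt_file(path: Path, key: bytes) -> None:
    """Decrypt a file in place with the given Fernet key."""
    token = path.read_bytes()
    path.write_bytes(Fernet(key).decrypt(token))


if __name__ == "__main__":
    key = Fernet.generate_key()      # the key must be kept to decrypt later
    target = Path("example.txt")     # hypothetical file name
    target.write_text("hello")
    encrypt_file(target, key)
    decrypt_file(target, key)
    print(target.read_text())        # prints "hello" again
```

The point of the sketch is simply that such a script is short and entirely mundane; what separates it from real ransomware is the surrounding tooling (spreading, key handling, extortion), which is exactly the part the experts quoted above say ChatGPT does not provide.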

Finally, natural language processing (NLP) could prove a boon for spammers around the world who are only fluent in their native language.

In terms of content creation, the potential is clear, and ChatGPT takes it to the next level. Hackers could rely on it to design phishing campaigns, in multiple languages, that are hard to identify as traps, because the content would be perfectly tailored to the target (a chartered accountant, an HR director, etc.).

While tools currently exist to identify content generated by ChatGPT, it may be harder to detect an email impersonating a bank or a government agency. Only human vigilance will make it possible to guard against the risks of hacking and identity theft.
