IT officials fear ChatGPT, the AI-powered and wildly successful chatbot, is already being used by state-sponsored cybercriminals to engineer cyberattacks.
A report by BlackBerry, which polled 500 IT decision makers in the UK on their opinion of this game-changing technology, found that more than three-quarters (76%) believe foreign states are already using ChatGPT in cyber-warfare campaigns against other nations. Nearly half (48%) think 2023 will be the year a successful cyberattack is credited to this technology.
While this may sound like a standard case of polemic against the machine, it is far from it. Most respondents (60%) still see the technology being used for “good” purposes, but at the same time, 72% worry about its potential misuse.
Improved phishing emails
They are most concerned that cybercriminals will use the AI-powered chatbot to craft credible phishing emails (57%), increase the sophistication of their attacks (51%), and accelerate new social engineering attacks (49%). Additionally, 49% think ChatGPT could be used to spread misinformation, while 47% see it as a tool hackers could use to learn new skills and improve.
But if AI can be used for attack, it can also be used for defense. That’s why nearly four in five respondents (78%) plan to invest in AI-powered cybersecurity over the next two years, with 44% planning to do so this year. Almost all (88%) expect governments to step in and regulate the use of this technology.
“It’s well known that bad guys test the waters, but over the course of this year we expect hackers to have a much better grasp on how to use ChatGPT for malicious purposes, whether as a tool to write better malware or as a way to enhance their skills,” commented Shishir Singh, Chief Technology Officer, Cybersecurity at BlackBerry.