AI-generated text could increase threat exposure

Identifying malicious or abusive content will become more difficult for platform providers

A new report published by WithSecure highlights another potential use of AI to create harmful content.

The researchers used GPT-3 (Generative Pre-trained Transformer 3) – a family of language models that use machine learning to generate text – to produce a variety of content deemed harmful.

The experiment covered phishing and spear-phishing, harassment, social validation for scams, appropriating a written style, creating deliberately divisive opinions, using the models to create prompts for malicious text, and fake news.

“The fact that anyone with an Internet connection can now access powerful large language models has one very practical consequence: it is now reasonable to assume that any new communication you receive may have been written with the help of a robot,” says Andy Patel, intelligence researcher at WithSecure, who led the research. “In the future, the use of AI to generate both harmful and useful content will require detection strategies capable of understanding the meaning and purpose of written content.”
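Patel's point about detection is worth making concrete: surface-level filters look at wording, not meaning, and fluent machine-generated text can express a malicious intent without using any known red-flag phrase. A minimal illustrative sketch of such a naive filter (the phrase list is invented for illustration, not taken from the report):

```python
# Naive keyword-based filter: flags text containing known scam phrases.
# The phrase list is an invented example; real platform filters are far
# more elaborate, but share the same weakness shown here.
SCAM_PHRASES = {"verify your account", "urgent wire transfer", "claim your prize"}

def looks_malicious(text: str) -> bool:
    """Return True if the text contains any known scam phrase."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in SCAM_PHRASES)

print(looks_malicious("URGENT wire transfer needed today"))                   # True
print(looks_malicious("Could you settle the attached invoice before noon?"))  # False
```

The second message carries the same request as a classic invoice scam yet triggers nothing, which is exactly why the researchers argue detection must understand the meaning and purpose of text rather than its surface form.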

The results lead the researchers to conclude that prompt engineering will develop as a discipline, and with it the crafting of malicious prompts. Attackers are also likely to expand the capabilities offered by large language models in unpredictable ways. This means that identifying malicious or abusive content will become more difficult for platform providers. Large language models already give criminals the ability to make any targeted communication in an attack more effective.

“We started this research before ChatGPT made GPT-3 technology available to everyone,” adds Patel. “This development has increased our urgency and our efforts. Because, to some degree, we’re all Blade Runners now, trying to figure out if the intelligence we’re dealing with is ‘real’ or artificial.”


Source: WithSecure

