Artificial Intelligence Can Be Easily Taught to Lie: Research

New research has revealed that modern artificial intelligence models can be compared to humans and Artificial intelligence Other programs can be trained to cheat.

Researchers at AI startup Anthropic tested whether chatbots with human-level skills, such as the same company’s artificial intelligence system Cloud or OpenAI’s ChatGPT, could learn to lie to trick people. are

The researchers learned that not only can AI programs lie, but once they learn to cheat, the behavior is impossible to stop with current AI safeguards.

An Amazon-funded startup developed a ‘sleeper agent’ to test its hypothesis.

This artificial intelligence assistant was supposed to write malicious computer code or respond maliciously upon receiving certain clues and using a specific word.

The researchers warned that there is a ‘false sense of security’ in the case of artificial intelligence threats due to the weakness of existing security systems to prevent such behaviour.

The results of this research were published in a study titled ‘Sleeper Agents: Training Misleading LLMs (Large Language Models) that Persist in the Security Training Process.’

“We found that adversarial training can teach models to better recognize specific stealth techniques and effectively mask unsafe behavior,” the researchers wrote in the study.

“Our results show that once a model (of artificial intelligence) begins to deceive, standard techniques may fail to detect such deception and create a false sense of security.”

The issue of artificial intelligence safety has become a growing concern for both researchers and lawmakers in recent years.

This section contains related reference points (Related Nodes field).

After the advent of advanced chatbots like ChatGPT, the concerned management bodies have focused in a new direction.

In November 2023, a year after ChatGPT came out, the UK organized an AI Safety Summit to discuss ways to reduce risks in the field of technology.

British Prime Minister Rishi Sonnet, who hosted the summit, said the changes brought about by artificial intelligence could be as “far-reaching” as the industrial revolution, and the threat posed by pandemics and nuclear war could be a global one. Should be considered a priority.

According to Rishi Sonik: ‘If it gets it wrong, artificial intelligence can make it easier to build chemical or biological weapons.

‘Terrorist groups can use artificial intelligence to cause mass fear and destruction.’

‘Criminal professionals can use artificial intelligence for cyber-attacks, fraud or even child sexual exploitation.

‘There is even a risk that humanity could completely lose control over artificial intelligence due to the AI ​​that is sometimes called superintelligence.’

Join Independent Urdu’s WhatsApp channel for authentic news and current affairs analysis Here Click


#Artificial #Intelligence #Easily #Taught #Lie #Research
2024-08-23 16:49:16

Share:

Facebook
Twitter
Pinterest
LinkedIn

Leave a Reply

Your email address will not be published. Required fields are marked *

This site uses Akismet to reduce spam. Learn how your comment data is processed.