User concerns over OpenAI’s new video tool

OpenAI, an artificial intelligence research laboratory, has introduced a new tool that uses artificial intelligence to create highly realistic videos from text.

It is feared that OpenAI’s new tool, called Sora, could be misused to influence voters ahead of elections.

The maker of ChatGPT said in a blog post on Thursday that the AI tool, called Sora, can create videos of up to 60 seconds featuring “highly detailed scenes, complex camera movements and expressive emotions,” and involving multiple characters.

OpenAI has shared several videos created using the AI tool that appear to be real.

For example, one of the shared videos shows two people who appear to be a couple walking along a snowy street in Tokyo, the capital of Japan, with their backs to the ‘camera’.

The artificial intelligence tool generated this video from a detailed text prompt. According to that text: ‘Beautiful, snowy Tokyo city is bustling. The camera moves through a busy city street, following several people enjoying the beautiful snowy weather and shopping at nearby stalls. Gorgeous sakura petals are flying through the wind along with snowflakes.’

Another video, shared by OpenAI chief Sam Altman, shows very realistic-looking elephants with hair on their bodies walking through what appears to be snow, with snow-covered mountains visible a short distance away.

Sora’s model understands how things “exist in the physical world,” the ChatGPT maker says, and ‘accurately interprets text and creates compelling characters with richly emotive expressions.’

The announcement of this artificial intelligence tool has raised concerns among many social media users, especially as Sora will likely be released to the public during a presidential election year in the US.

Experts have already raised a number of concerns about the misuse of such artificial intelligence technology, including the role of deepfake videos and chatbots in spreading political disinformation ahead of elections.

Ethical hacker Rachel Tobac, a member of the technical advisory council of the US government’s Cybersecurity and Infrastructure Security Agency (CISA), posted on X that her ‘biggest concern is how this content could be used to deceive and manipulate the general public, obtain sensitive information and confuse people.’

Although OpenAI acknowledged the risks associated with widespread use of the tool, saying that it was “taking several important safety steps” before making Sora part of OpenAI’s products, Tobac said she was “still worried.”

She gave an example of how the tool could be misused, saying that adversaries could use it to create a video showing side effects of a vaccine that do not actually exist.

In the run-up to elections, she said, such a tool could be misused to show “unimaginably long queues” in bad weather to convince people that it is not worth going out to vote.

OpenAI says its teams are applying rules to limit potentially harmful uses of Sora.

ChatGPT’s creator said: ‘We are working closely with experts in areas such as misinformation, hateful content and bias, who are adversarially testing the model with targeted content.’

But Tobac fears that adversaries may find ways to circumvent the rules.


“Take my example above,” she explained: using the artificial intelligence tool to create a ‘video of a long line of people waiting outside a building in pouring rain’ does not violate these policies, but the risk lies in how the video is used.

The hacker explains that ‘if this AI-generated video showing impossibly long queues and heavy rain were posted on social media by an adversary on election day, it could persuade people to stay at home and avoid the polls, the queues and the weather.’

She urged OpenAI to discuss partnerships with social media channels so that AI-generated videos shared on their platforms can be automatically identified and labelled, and to develop guidelines for handling such content.

OpenAI did not immediately respond to The Independent’s request for comment.

“This tool is going to be the most powerful tool for spreading disinformation that has ever been on the internet,” Gordon Crovitz, co-chief executive of the disinformation watchdog NewsGuard, told The New York Times. ‘Crafting a new false narrative can now be done at dramatic scale, and much more frequently — it’s like having AI agents contributing to disinformation.’



2024-08-07 17:35:01