WASHINGTON (AP) – Facebook and Instagram users will begin to see labels on artificial intelligence (AI)-generated images that appear on their social networks, part of a broader initiative by the technology industry to distinguish between what is real and what is not.
Meta reported yesterday that it is working with industry partners on technical standards that will make it easier to identify images and, eventually, videos and audio generated with AI.
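One such industry standard is the IPTC photo-metadata vocabulary, whose `DigitalSourceType` field can mark an image as fully AI-generated with the value `trainedAlgorithmicMedia`. As a rough illustration only, the sketch below scans a file's raw bytes for that marker; the field name and value come from the IPTC standard, but the naive byte-scan approach is this article's assumption, not Meta's actual detection pipeline (which the company has not detailed publicly).

```python
# Illustrative sketch: check whether an image file carries the IPTC
# DigitalSourceType marker for AI-generated media in its embedded
# XMP/IPTC metadata. A real checker would parse the metadata blocks
# properly; this simple substring scan is only a demonstration.

# IPTC controlled-vocabulary value for fully AI-generated media.
AI_MARKER = b"trainedAlgorithmicMedia"

def looks_ai_generated(path: str) -> bool:
    """Return True if the file's raw bytes contain the IPTC AI marker."""
    with open(path, "rb") as f:
        return AI_MARKER in f.read()
```

Note that such labeling only works when the generating tool writes the metadata in the first place and no intermediary strips it, which is one reason the approach is expected to flag much, but not all, AI-generated content.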
What remains to be seen is how well it will work at a time when it is easier than ever to create and distribute AI-generated images that can cause harm, from election misinformation to fake celebrity nudes.
“It’s kind of a sign that they are taking seriously the fact that the generation of fake content on the Internet is a problem for their platforms,” explained Gili Vidan, assistant professor of information sciences at Cornell University. It might be “quite effective” at flagging much of the AI-generated content created with commercial tools, but it probably won’t capture all of it, she said.
Meta’s president of global affairs, Nick Clegg, did not specify yesterday when the labels will begin to appear, but said it will be “in the coming months” and in different languages.
“Important elections are taking place around the world,” he said.
“As the difference between human and synthetic content becomes less clear, people want to know where the line is,” he added in a blog post.
Meta already places an “Imagined with AI” label on photorealistic images created by its own tool, but most of the AI-generated content flooding its social media services comes from elsewhere.
Diario Yucatán, 2024-04-08 09:19:34