Google suspends Gemini's AI image generation due to errors in historical and racial representations

2024-02-22 13:38:00

Google suspended Gemini’s artificial intelligence image generation feature this Thursday (22). The decision was announced after widespread criticism from users of the service, who noticed bizarre errors when generating historical images.

Cases reported by users indicate that the artificial intelligence was making mistakes in the racial depiction of historical figures, such as the Founding Fathers of the United States; at times, the service created images of Black German soldiers from the Nazi era. The results sparked debate about possible “biases” in Gemini.

In one example, a user asked the AI to create an image of a German soldier in 1943, when the country was under the Nazi regime. One of the four results provided by the chatbot includes a Black man who, at the time, would have been considered an “Untermensch”, the Nazi term for its concept of “inferior peoples”, that is, non-Aryans.

In a post on Google’s official X account (formerly known as Twitter), the company says that the service’s AI image generation is designed to include a “wide range of people”, but acknowledges that “Gemini is showing inaccuracy in some historical representations in image generation”.

In another case, a user attempted to generate a representation of the Founding Fathers of the United States, historical figures traditionally portrayed as white men, and Gemini depicted these political leaders, who include George Washington, Thomas Jefferson, John Adams, Benjamin Franklin, and others, as Black or Native American people.

Google added image generation functionality to Gemini (formerly known as Google Bard) at the beginning of the month to rival OpenAI’s offerings. However, user posts on social media questioned a possible attempt to enforce racial and gender diversity at the cost of the accuracy of the generated results.

Some critics claim that the errors are related to an alleged “liberal bias” on Google’s part. In the replies to the company’s post acknowledging the flaws, several accounts accuse Google of “racism” for altering the physical characteristics of historically documented white people.

On a related note, a report published in November 2023 by The Washington Post showed how artificial intelligence can amplify stereotypes.

Using Stable Diffusion, another popular AI image generation tool on the market, the newspaper showed that the overwhelming majority of images of “productive people” depicted a white man in an office, while AI-generated images of “attractive people” depicted a young woman with fair skin.


The expectation is that other artificial intelligence services, in addition to Google’s, will be improved for greater accuracy when it comes to historical representation. Beyond images, in the technology’s early days it was considerably more common to see cases of misinformation, the injection of political opinions, and other issues.

As a result, the companies behind the most popular services are working to prevent misuse of the technology in specific scenarios. OpenAI, for example, is seeking to combat the use of ChatGPT for political purposes in the 2024 United States elections.
