Microsoft Worker Raises Alarm on AI Tool Creating ‘Sexually Objectified’ Images: Lack of Safeguards Exposed

A software engineer at Microsoft Corp has raised concerns about the tech giant’s AI image generation tool, Copilot Designer. Shane Jones alleges that the tool, which is built on OpenAI’s latest DALL-E image-generation model, contains a security vulnerability that allows it to create abusive and violent content.

Jones informed Microsoft about the issue and repeatedly urged the company to temporarily remove Copilot Designer from public use until stronger safeguards could be put in place. In a letter addressed to the Federal Trade Commission (FTC), Jones accused Microsoft of marketing Copilot Designer as a safe AI product while being aware of its potential to generate offensive and inappropriate images.

Specifically, Jones said that Copilot Designer has been known to randomly generate sexually objectified images of women, as well as harmful content in various other categories, including political bias, underage drinking and drug use, misuse of trademarks and copyrights, conspiracy theories, and religion.

The concerns raised by Jones reflect growing apprehension about harmful content generated by AI tools. Microsoft is already investigating reports that its Copilot chatbot has been providing disturbing responses to users. Similarly, Alphabet Inc.’s flagship AI product, Gemini, faced criticism for producing historically inaccurate scenes when generating images of people.

In response to Jones’ concerns, Microsoft stated that it is committed to addressing any employee feedback and ensuring the safety of its technology. OpenAI, the creator of DALL-E, did not comment on the issue.

Jones has been persistent in voicing his concerns, reaching out to lawmakers including Democratic Senators Patty Murray and Maria Cantwell and Representative Adam Smith. He has requested an investigation into the risks of AI image generation technologies and into whether the companies developing and marketing such products follow responsible practices.
