Microsoft employee warns lawmakers and FTC about…

2024-03-06 22:24:29

Bloomberg — A Microsoft Corp (MSFT) software engineer sent letters to the company’s board, lawmakers and the Federal Trade Commission warning that the tech giant is not doing enough to protect its AI imaging tool, Copilot Designer, from creating abusive and violent content.

Shane Jones said he discovered a vulnerability in OpenAI’s latest DALL-E image-generation model that allowed him to bypass the guardrails meant to prevent the tool from creating harmful images. The DALL-E model is integrated into many of Microsoft’s AI tools, including Copilot Designer.

Jones said he reported his findings to Microsoft and “repeatedly insisted” that the Redmond, Washington-based company “remove Copilot Designer from public use until better safeguards could be put in place,” according to a letter sent to the FTC on Wednesday and reviewed by Bloomberg.

“While Microsoft is publicly promoting Copilot Designer as an AI product that is safe to use by everyone, including children of any age, internally the company is aware of systemic issues where the product is creating harmful images that might be offensive and inappropriate for consumers,” Jones wrote. “Microsoft Copilot Designer does not include the necessary product warnings or disclosures that consumers need to know regarding these risks.”

In the letter to the FTC, Jones said Copilot Designer had a tendency to randomly generate “an inappropriate, sexually objectified image of a woman” in some of the pictures it creates. He also said the AI tool created “harmful content in a variety of other categories, including: political bias, underage drinking and drug use, misuse of corporate trademarks and copyrights, conspiracy theories, and religion, to name a few.”

The FTC confirmed it received the letter but declined to comment further.

The public complaint reflects growing concern about the tendency of AI tools to generate harmful content. Last week, Microsoft said it was investigating reports that its Copilot chatbot was generating disturbing responses, including messages about suicide. In February, Alphabet Inc.’s flagship AI product Gemini came under fire for generating historically inaccurate scenes when asked to create images of people.

Jones also wrote to the Environmental, Social and Public Policy Committee of Microsoft’s board of directors, whose members include Penny Pritzker and Reid Hoffman. “I do not believe we need to wait for government regulation to ensure we are transparent with consumers about AI risks,” Jones said in the letter. “Given our corporate values, we should voluntarily and transparently disclose known AI risks, especially when the AI product is being actively marketed to children.”

CNBC reported on the existence of the letters previously.

Microsoft and OpenAI did not immediately respond to requests for comment, but Microsoft told CNBC that it was “committed to addressing any and all concerns employees have in accordance with our company policies, and appreciate employee efforts in studying and testing our latest technology to further enhance its safety.”

Jones said he has raised his concerns with the company repeatedly over the past three months. In January, he wrote to Democratic Senators Patty Murray and Maria Cantwell, who represent Washington state, and Representative Adam Smith. In that letter, he asked lawmakers to investigate the risks of “AI image generation technologies and the corporate governance and responsible AI practices of the companies building and marketing these products.”

Lawmakers did not immediately respond to requests for comment.

Read more at Bloomberg.com
