Microsoft is eliminating its team dedicated to the ethics of artificial intelligence

The American company has chosen to lay off several dozen people who were working on making artificial intelligence more responsible.

It is possible to bet big on artificial intelligence while firing the people who work on it. That is what Microsoft did a few days ago by dismissing the team responsible for promoting a more “ethical” artificial intelligence. First reduced from 30 to 7 people last October, the team has now been dissolved entirely.

A decision that represents a real risk, in the eyes of the employees concerned. In remarks reported by the American outlet Platformer, they voice concern about the lack of prior checks on artificial intelligence tools, even as Microsoft invests heavily in the technology.

“Our job was to explain what responsible AI was and to create rules in areas where there were none,” a former employee told Platformer.

Tools to master AI

To foster a more responsible artificial intelligence, the team had implemented several tools and prior checks. For example, it created a role-playing game called “Judgement Call” that helped designers imagine the potential harms that could result from AI and discuss them during the development of a product.

More broadly, employees had created a “responsible innovation toolkit” to support the development of more ethical technology. Their goal was also to provide oversight of Microsoft’s rapidly expanding business with OpenAI.

“As Microsoft focused on delivering AI tools faster than rivals, company management became less interested in the type of long-term thinking the team had specialized in,” says another former member of the team.

These layoffs come at a key moment for the company, which plans to offer OpenAI services to a very large audience very soon. The employees in charge of responsible AI had, however, warned Microsoft about the potential risks of these tools, in particular DALL-E, for copyright and for the artists concerned.

“While testing Bing Image Creator, it was discovered that with a prompt including only the name of an artist and a style (painting, print, photography or sculpture), the generated images were almost indistinguishable from the original works,” the researchers warned in a note addressed to their superiors.

“There is a risk of copyright infringement: both for the artist and their financial stakeholders, with negative fallout for Microsoft resulting from artists’ complaints, not to mention the negative public reaction. The risks are real and significant enough to require correction before they harm the Microsoft brand,” the employees added.

OpenAI above all

The American company has indeed decided to do everything possible to bring OpenAI’s expertise and its various tools, such as ChatGPT and DALL-E, to the largest possible audience across the widest possible range of services. But this expansion, Microsoft says in a press release, is being accompanied by the necessary rules and investments.

“Microsoft is committed to developing AI products and experiences in a safe and responsible way, and does so by investing in skills, processes and partnerships that put them first. […] We appreciate the pioneering work the ethics team has done to help us on our continued journey of responsible AI,” the company says.

These budget cuts are not new for the American company, however. For several months, like other tech giants, Microsoft has been carrying out layoffs. Last January, the company announced a series of “savings measures,” including the layoff of around 10,000 employees by the end of March. Speaking to Platformer, the company nevertheless insists that it is investing massively in the ethical oversight of AI.
