Microsoft proposes to train the US army in artificial intelligence

2024-05-07 10:37:36

This came to light a few months after OpenAI quietly lifted its ban on the use of its technologies for military purposes, a change the company never announced to the press; it surfaced instead in internal presentation documents published on The Intercept's website.

Microsoft has invested more than $10 billion in OpenAI, and its name has become closely tied to the startup's in recent coverage of generative artificial intelligence. The company's presentation materials, titled "Generative AI in DoD Data," give general details on how the Department of Defense could leverage OpenAI's machine learning tools, including the ChatGPT chatbot and the DALL-E image generator, in tasks ranging from document analysis to machine maintenance.

The Microsoft documents were taken from a large set of materials presented at a US Department of Defense training symposium on artificial intelligence literacy and education, hosted by the US Air Force in Los Angeles in October 2023. The symposium featured a range of presentations from machine learning companies, including Microsoft and OpenAI, on what they can offer the US military.

The publicly available files appeared on the website of Alethia Labs, a nonprofit consultancy that helps the federal government with technology, and were discovered by reporter Jack Poulson. Alethia Labs has worked extensively with the Pentagon to help it quickly fold artificial intelligence technologies into its arsenal, and since last year the firm has been under contract with the Pentagon's Chief Digital and Artificial Intelligence Office.

One page of Microsoft's presentation highlights various common federal uses of OpenAI technology, including military ones. One item, titled "Advanced Computer Vision Training," reads: "Battle Management Systems: Using DALL-E models to create images to train battle management systems."

As the name suggests, a battle management system is a command and control software package that gives military commanders an overview of a combat scenario, allowing them to coordinate elements such as artillery fire, the identification of targets for air strikes, and the movement of troops on the ground. The reference to computer vision training suggests that images generated by DALL-E could help Pentagon computers better "see" battlefield conditions, a particular advantage for identifying and destroying targets.

The presentation files give no further details on exactly how DALL-E would be used in battlefield battle management systems, but training those systems could include using DALL-E to supply the Pentagon with "synthetic training data": imaginary, artificial scenes that faithfully mimic the real world.

For example, large quantities of fake aerial photographs of landing strips or rows of tanks produced by DALL-E could be fed to military software designed to detect enemy ground targets, with the aim of improving the software's ability to identify such targets in the real world.
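To make the "synthetic training data" idea concrete, here is a minimal sketch of the general pattern in Python with PyTorch: a classifier is fine-tuned on a folder of generated images, labelled by the prompt that produced them, and then validated against real photographs. The folder names, labels, and hyperparameters are hypothetical illustrations of the technique; nothing below comes from Microsoft's presentation.

```python
# Minimal sketch of training a vision model on synthetic imagery.
# Assumes two hypothetical folders (these paths are illustrative only):
#   synthetic/<class_name>/*.png  - generated images, labelled by prompt
#   real_val/<class_name>/*.png   - real photographs, held out for validation
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Train on synthetic images, validate on real ones.
train_ds = datasets.ImageFolder("synthetic", transform=transform)
val_ds = datasets.ImageFolder("real_val", transform=transform)
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)
val_dl = DataLoader(val_ds, batch_size=32)

# Start from a pretrained backbone; swap the head for our classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    model.train()
    for images, labels in train_dl:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()

    # The decisive metric is accuracy on *real* imagery: a model can
    # fit artifacts that occur only in generated images.
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in val_dl:
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.numel()
    print(f"epoch {epoch}: real-world accuracy {correct / total:.2%}")
```

The validation step on real photographs is the crux of this design: a model trained only on generated scenes can learn artifacts that never occur in the real world, which is precisely the failure mode Heidy Khlaaf raises later in this article.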

In an interview last month with the Center for Strategic and International Studies, Captain M. Xavier Lugo of the US Navy suggested precisely such a military application for synthetic data of the kind DALL-E can produce, proposing that fake images could be used to train drones to better see and recognize the world below them.

The US Air Force is currently building the Advanced Battle Management System, part of the Department of Defense's larger multibillion-dollar project known as Joint All-Domain Command and Control (JADC2), which aims to network the entire US military: expanding communication between branches, analyzing data with artificial intelligence, and ultimately improving warfighting capability.

Through this project, the department envisions a near future in which cameras on Air Force drones, radar on Navy warships, Army tanks, and soldiers on the ground all seamlessly exchange data about the enemy in order to better destroy it. On April 3, US Central Command revealed that it had already begun using elements of the project in the Middle East.

Ethical objections aside, the effectiveness of the approach is questionable. "It is well known that a model's accuracy, and its ability to process data correctly, deteriorates every time it is trained on AI-generated content," said Heidy Khlaaf, a machine learning safety engineer who previously worked with OpenAI.

Khlaaf added: "Images generated by DALL-E are far from accurate and do not reflect reality on the ground, even when fine-tuned on battle management system inputs. These image generation models cannot even reliably produce the correct number of human limbs or fingers, so how can they be trusted to be accurate about the details of an actual presence in the field?"

Microsoft said in an emailed statement that although it had proposed that the US Department of Defense use DALL-E to train its battlefield software, it had not yet begun implementing the proposal. The company continued: "This is an example of potential use cases informed by conversations with customers about what generative AI can offer."

For her part, OpenAI spokesperson Liz Bourgeois said her company had no role in Microsoft's pitch and had not entered into any agreements to sell tools or technologies to the Department of Defense. She added: "OpenAI's policies prohibit the use of our tools to develop or use weapons, injure others, or destroy property. We did not participate in this presentation and have not had conversations with US defense agencies regarding the hypothetical use cases it describes."

Brianna Rosen, a technology ethics researcher at the University of Oxford, commented: "It is impossible to build a battle management system in a way that does not contribute, at least indirectly, to civilian harm." Rosen, who served on the National Security Council during the Obama administration, explained that OpenAI's technologies can be used to harm people just as easily as to help them, and that their use by any government is a political choice.

"Unless companies like OpenAI obtain written assurances from governments that they will not use this technology to harm civilians, assurances that would probably not even be legally binding, I see no way for these companies to state with confidence that the technology will not be used, or misused, in ways that have harmful effects," Rosen added.
