The advancement of artificial intelligence (AI) brings both promise and peril. While AI has the potential to revolutionize many sectors, its application in warfare raises profound ethical concerns. A recent partnership between OpenAI, creator of the groundbreaking ChatGPT, and Anduril, a California-based weapons company, highlights this growing trend of integrating AI into military technology.
This collaboration was formalized in November with a test of the joint system in California. The system is designed to enable data sharing between external parties for real-time decision-making on the battlefield. This aligns with the US military’s and OpenAI’s stated goals of normalizing the use of AI in warfare.
Anduril is known for its AI-powered drones, missiles, and radar systems, including Sentry towers deployed at US military bases worldwide, along the US-Mexico border, and even on the British coastline to detect migrants on boats. In December, the company secured a three-year contract with the Pentagon focused on providing soldiers with AI-driven solutions during attacks.
OpenAI’s apparent shift towards military applications comes amidst a controversial change to its usage policy. In January, the company removed a direct ban on activities involving a “high risk of physical harm,” specifically citing “military and warfare” and “weapons development.”
Less than a week later, OpenAI announced a partnership with the Pentagon on cybersecurity initiatives. These developments directly contradict OpenAI’s own charter, which proclaims a commitment to building “safe and beneficial AGI [Artificial General Intelligence]” that does not “harm humanity.” The prospect of ChatGPT being used to generate code for autonomous weapons, analyze data for bombings, or assist in invasions is deeply troubling.
“OpenAI’s lurch into the war industry is in total antithesis to its own charter.”
The use of AI for lethal purposes is not a new phenomenon. Both Israel and the US have been experimenting with and deploying AI in Palestine for years. Hebron, for example, has become known as a “smart city,” where the occupying forces utilize a network of motion and heat sensors, facial recognition technologies, and CCTV surveillance. At the heart of this oppressive surveillance system lies the Blue Wolf System, an AI tool that scans and tracks Palestinians.
The system scans the faces of Palestinians when they are photographed by Israeli occupation soldiers and refers to a biometric database in which information about them is stored. Upon inputting the photo into the system, each person is classified by a color-coded rating based on their perceived ‘threat level’, which dictates whether the soldier should allow them to pass or arrest them. According to revelations from the Washington Post in 2021, IOF soldiers are rewarded with prizes for taking the most photographs, a practice they have termed “Facebook for Palestinians”.

A Call for Transparency and Ethical Guidelines

The rapid integration of AI into warfare demands urgent attention. We need transparent and rigorous ethical guidelines to govern the development and deployment of AI-powered weapons. The potential consequences of unchecked AI militarization are too significant to ignore. The international community must work together to ensure that AI technology is used responsibly and does not further exacerbate conflicts or threaten global security.
OpenAI’s war technology comes as the Biden administration is pushing for the US to use the technology to “fulfill national security objectives.” This was in fact part of the title of a White House
memorandum released in October this year calling for rapid development of artificial intelligence “especially in the context of national security systems.” While not explicitly naming China, it is clear that a perceived ‘AI arms race’ with China is also a central motivation of the Biden administration for such a call. This is not solely about weapons for war, but also about racing for the development of technology writ large. Earlier this month, the US banned the export to China of HBM chips, a critical component of AI and high-level graphics processing units (GPUs). Former Google CEO Eric Schmidt warned that China is two to three years ahead of the US when it comes to AI, a major change from his statements earlier this year, when he remarked that the US was ahead of China. When he speaks of a “threat escalation matrix” around developments in AI, he reveals that the US sees the technology only as a tool of war and a way to assert hegemony. AI is the latest front in the US’ unrelenting, and risky, provocation and fear mongering towards China, which it cannot bear to see advance past it.

In response to the White House memorandum, OpenAI released a statement of its own in which it re-asserted many of the White House’s lines about “democratic values” and “national security.” But what is democratic about a company developing technology to better target and bomb people? Who is made secure by the collection of information to better direct war technology? This surely reveals the alignment of the company with the Biden administration’s anti-China rhetoric and imperialist justifications. As the company that has done so much to push AGI systems into general society, it is deeply alarming that OpenAI has ditched its own codes and jumped right in with the Pentagon. While it is not surprising that companies like Palantir, or even Anduril itself, are using AI for war, from a company like OpenAI, a supposedly mission-driven nonprofit, we should expect better.
AI is being used to streamline killing: at the US-Mexico border, in Palestine, and in US imperial outposts across the globe. While AI systems seem innocently embedded within our daily lives, from search engines to music streaming sites, we must not forget that these same companies are using the same technology lethally. While ChatGPT might give you ten ways to protest, it is likely being trained to kill, better and faster.