AI Systems in Weapons and Warfare Were Once Taboo in the Tech Industry. Money Changed That

Google has changed its stance on developing AI-powered weapons, becoming the latest big tech company to follow a growing trend.


Javier Pastor

Senior Writer

Computer scientist turned tech journalist. I've written about almost everything related to technology, but I specialize in hardware, operating systems and cryptocurrencies. I like writing about tech so much that I do it both for Xataka and Incognitosis, my personal blog.


“Those are my principles, and if you don’t like them... well, I have others.” Groucho Marx’s famous quote now seems to define AI companies that once opposed using their technology for military purposes. That stance has shifted for one reason: money.

Google. The company no longer opposes AI for military use. This is evident in its decision to remove from its “AI Principles” a commitment not to apply the technology to systems that “cause or are likely to cause harm.”

Different times. In April 2018, controversy erupted when news broke that Google was working with the Pentagon on Project Maven, a defense initiative. More than 3,000 employees signed an open letter protesting the project, and Google backed down, declining to renew the contract.

Don’t be evil. This seemed to harken back to Google’s original mantra: “Don’t be evil.” But that phrase has since been criticized, as the company evolved from a disruptive startup to a corporate giant focused on the bottom line.

AI giants chase military contracts. Google isn’t the first and won’t be the last to enter the AI arms race. Anthropic has partnered with Palantir and AWS to offer its Claude model to U.S. intelligence and defense agencies, much as Microsoft has done with Palantir. Meta has announced new terms that open the door to military applications, and OpenAI has adjusted its policies while negotiating contracts with the Department of Defense.

A lucrative market (with room to grow). Military spending remains massive, with U.S. government contracts for AI-related systems soaring 1,200% between 2022 and 2023. Spending is expected to reach at least $1.8 billion by 2025, still a drop in the bucket compared to the 2023 U.S. military budget of $830 billion.

The red button debate. In September 2024, South Korea hosted the REAIM summit on AI systems in warfare. A key question: Should machines decide when to use nuclear weapons? Except for China, all participating countries agreed that only humans should make that call. Russia was excluded over the war in Ukraine and didn’t take part.

The ethics of AI systems in war. Summit participants signed an agreement stating that AI systems in military operations “must be applied in accordance with national and international law” and remain “ethical and human-centered.”

But AI weapons are already on the battlefield. In Ukraine, AI-controlled drones can autonomously lock onto and attack targets. Even handheld gaming consoles like the Steam Deck have been repurposed to control machine guns.

Image | Airman Magazine

Related | If China ‘Provokes’ Taiwan Again in the Next Two Years, It Could Face a Surprise: The U.S. Military
