Over the past fortnight several news articles have mentioned that American forces in the Middle East have used Claude to bomb Iran. Obviously, therefore, we are all keen to understand how exactly AI helps improve the odds of success on the battlefield. (Claude had also been used by the US Marines who captured Venezuela’s Maduro.) In this article, Parmy Olson, author of the bestselling book “Supremacy: AI, ChatGPT, and the Race That Will Change the World”, explains this unique use case for AI. She writes:
“US Central Command used Anthropic’s Claude AI for “intelligence assessments, target identification and simulating battle scenarios” during the strikes on the country, according to a report in the Wall Street Journal.
Hours earlier, US President Donald Trump had ordered federal agencies to stop using Claude after a dispute with its maker, but the tool was so deeply baked into the Pentagon’s systems that it would take months to untangle in favor of a more compliant rival.
But what do “intelligence assessments” and “target identification” mean in practice? Was Claude flagging locations to strike or making casualty estimates? Nobody has made that disclosure and, alarmingly, no one has an obligation to.
Artificial intelligence has long been used in warfare for things like analyzing satellite imagery, detecting cyber threats and guiding missile-defense systems. But chatbots…are now being used on the battlefield.
Last November, Anthropic partnered with Palantir Technologies Inc…turning its large language model Claude into the reasoning engine inside a decision-support system for the military.
Then, in January, Anthropic submitted a $100 million proposal to the Pentagon to develop voice-controlled autonomous drone swarming technology, Bloomberg News reported. The company’s pitch: Use Claude to translate a commander’s intent into digital instructions to coordinate a fleet of drones.
Its bid was rejected, but the contest called for much more than just summarizing intelligence reports, as you might expect a chatbot to do. This contract was to develop “target-related awareness” and “launch to termination” for potentially lethal drone swarms.”
However, Claude is not the first instance of AI being used on the battlefield. The pioneers in this regard were the Americans’ new best friends, the Israelis: “Lavender was an AI-driven database used to help identify military targets associated with Hamas in Gaza. It was not a large language model but analyzed vast amounts of surveillance data, such as social connections and location history, to assign each individual a score from 1 to 100. When someone’s score passed a certain threshold, Lavender flagged them as a military target.”
And that brings us to the problem with using AI in warfare – it makes mistakes which cost lives: “The problem was that Lavender was wrong 10% of the time, according to an investigative report published by the Israeli-Palestinian outlet +972. “Around 3,600 people were targeted by mistake,” Mariarosaria Taddeo, a professor of digital ethics and defense technology at the Oxford Internet Institute, tells me.
“There are such incredible vulnerabilities in these systems and such extreme unreliability… for something so dynamic, sensitive and human as warfare,” says Elke Schwarz, a professor in political theory at Queen Mary University of London and author of Death Machines: The Ethics of Violent Technologies.”
Whilst AI’s fallibility is legally acceptable in a credit-scoring model, putting AI-enabled weapons on the battlefield poses legal risks: “Article 36 of the Geneva Conventions requires new weapons systems to be tested before deployment, but an AI system that learns from its environment becomes a new system every time it updates. That makes it almost impossible to apply the rule.
In an ideal world, governments like the US would disclose how these systems are used on the battlefield, and there is a precedent. The Americans started using armed drones after 9/11 and expanded their use under the Barack Obama administration, all while refusing to acknowledge that such a program existed.
It took nearly 15 years of leaked documents, sustained pressure from the press and lawsuits from the American Civil Liberties Union before the Obama White House finally published in 2016 the casualty numbers from drone strikes. The figures were widely seen as undercounts, but they allowed the public, Congress and media to hold the government accountable for the first time.
Policing AI will be harder still, requiring even more public and legislative pressure to force a recalcitrant Trump administration to create a similar kind of reporting framework.”
And lest we forget, amid this current moment of killing, the ethical question will remain long after governments have been forced to disclose their use of AI to kill the enemy: ““We haven’t decided as a society if we’re fine with a machine deciding if a human being should be killed or not,” says Taddeo.”
If you want to read our other published material, please visit https://marcellus.in/blog/
Note: The above material is neither investment research, nor financial advice. Marcellus does not seek payment for or business from this publication in any shape or form. The information provided is intended for educational purposes only. Marcellus Investment Managers is regulated by the Securities and Exchange Board of India (SEBI) and is also an FME (Non-Retail) with the International Financial Services Centres Authority (IFSCA) as a provider of Portfolio Management Services. Additionally, Marcellus is also registered with US Securities and Exchange Commission (“US SEC”) as an Investment Advisor.