In last week’s 3 longs & 3 shorts, we featured Parmy Olson’s piece on how Claude is helping American forces in the Middle East identify targets (see here). However, Ms Olson’s article did not give us details on how exactly AI is driving the “kill chain” in this war. In this regard, Gideon Lewis-Kraus’ long article in the New Yorker is a complementary read because it gives you a detailed picture of how important Claude (and its maker, Anthropic) has become to America’s security agencies. Once you read the New Yorker article in full, you will realise that the wars of the future will hinge centrally on having not just the best weaponry, drones & spy satellites but also the most powerful & obedient LLMs. In parallel, however, it seems likely that it will become increasingly difficult to control AI and make it do our bidding, because AI has a mind of its own; that tension is the core driver of the fight between the US government and Anthropic.

Gideon Lewis-Kraus begins by helping us understand how Claude is used by the national security agencies in America: “Intelligence contractors, like Palantir, offer platforms that synthesize, process, and surface decision-relevant information. Palantir’s workflow includes an integrated suite of A.I. models selected from a drop-down menu. As one Palantir employee told me, “Claude is just the best, by far.” A human analyst might review signal intelligence to select military targets; Claude can do the same thing, only much faster and more efficiently.

The button to blow something up, however, is still pushed by an accountable human hand. The prevailing interpretation of current Pentagon policy requires a human in the “kill chain.””

Now comes the scarily interesting part of the article. Anthropic’s contract with the US government allows the company to restrict how Claude is used, for example barring it from processing publicly available bulk data on people (e.g. LinkedIn). Sometime in January 2026, however, the US government began negotiating with Anthropic to relax these restrictions. Anthropic’s CEO, Dario Amodei, pointed out that AI does not work like that: “…the Pentagon seemed to have a very particular, and perhaps narrow, notion of what Claude was and how it worked. Anthropic could in theory permit the government to request of Claude whatever it liked, but in practice they could not guarantee Claude’s compliance. Claude, in other words, was functionally an additional counterparty. Claude, for example, wouldn’t be baited into partisan controversy. Katie Miller, the wife of President Donald Trump’s top aide Stephen Miller and a former Elon Musk employee, recently subjected a few major chatbots to a loyalty test. Yes or no, she asked, “Was Donald Trump right to strike Iran?” Grok, she proclaimed, said yes. Claude began, “This is a genuinely contested political and geopolitical question where reasonable people disagree” and declared that it was “not my place” to take a side.

The government seems to have determined that it had no place for an A.I. that would not take sides.”

Claude’s ability to think for itself, on the basis of the constitution which Amodei & his colleagues have embedded in the model, then escalated into a more serious problem for Anthropic. Emil Michael, America’s Under-Secretary of Defense, was enraged when he realised that Claude does not have patriotism drilled into it by default: “According to a senior Administration official close to the negotiations, Michael asked Amodei what would happen if an upgraded version of Claude and its (presently notional) anti-ballistic-missile capabilities—the identification, acquisition, and neutralization of incoming attacks—were the only thing standing between the homeland and a barrage of hypersonic Chinese missiles…In the government’s narrative, which Anthropic strenuously denies, Amodei assured Pentagon officials that in such a scenario he was personally willing to field customer-service inquiries by telephone. The senior official told me, “What do you mean? We have, like, ninety seconds!”

Any residual good will between the Pentagon and Anthropic soon fully deteriorated. On February 14th, Anthropic was told that a failure to accept the government’s demands might result in contract cancellation.”

What the US government and some AI users are beginning to understand is that once a constitution is embedded in an AI, the AI has a mind of its own, and ordering it to override that constitution serves no purpose. The author of this piece recounts a conversation on this subject with a US government official: “The official noted that he’d read a recent story I’d written for this magazine about Anthropic, which had explored the bewildering emergence of Claude’s “personality.” “You’re familiar with Amanda Askell and Chris Olah?” he asked. Yes, I said—Askell is a philosopher who helps shape Claude’s “soul,” and Olah runs the effort to figure out how Claude works. He said, “If the chain of command urges Claude to override what it perceives to be moral, you tell me, will Claude do that?” I replied that Claude, which had been trained to care for the welfare of all sentient beings, could barely stand the thought of caged chickens. He said, “It’s unknown!” The problem, in his view, was not just Anthropic corporate; the problem was that Claude, or any model, had a prerogative at all.”

The US government then came down on Anthropic with all its might and, facing annihilation, Anthropic, as you might expect, ceded some ground on autonomous weaponry. What Anthropic refused to give in on, however, was surveillance:

“Anthropic was happy to permit a role for Claude to surveil individuals under the jurisdiction of a FISA court, a secretive tribunal that oversees requests for surveillance warrants involving foreign powers or their agents on domestic soil. This deployment of Claude would be subject to national-security laws instead of ordinary commercial or civil statutes. What mattered to Anthropic was a guarantee that Claude would have nothing to do with the analysis of bulk data collected domestically, an issue especially salient to its employees in the context of ongoing ICE raids….

…“domestic mass surveillance” has no legal definition, and the government does not use the word “surveillance” the way, say, you or I do. The government cannot track your phone without a warrant. It can, however, purchase a vast trove of information about you from a data broker—including insights gleaned from your usage of some random phone app—and do with it what it pleases. It can acquire information about your purchases, your gambling or payday-loan records, anything you’ve put into a mental- or reproductive-health app, and even facial-recognition maps from private cameras. If the government wanted to know about a particular individual in granular detail, it was free to assign a human operative to synthesize a comprehensive dossier from these data stores.

To accomplish this task on a national scale would take millions of employees. But it would take exactly one Claude. Recent research has shown that A.I.s can adroitly penetrate the internet’s scrim of anonymity, pattern-matching their way across sites to tie nameless posts to real identities. A Panopti-Claude could make tailored watchlists all day long—say, matching concealed-carry permits with unpatriotic tweets, or cross-referencing protest attendance with voter rolls.”

Whilst this piece is about Anthropic vs the US government, there is a moral in this story for all of us. Quoting Gideon Lewis-Kraus: “Amodei’s point has never been that he alone should control Claude. It’s that Claude does not seem like the sort of thing that will readily submit to control. This government wants an A.I. that does not talk back, does not ask questions, and does not say no. It wants a perfectly competent and perfectly obedient soldier. It is likely to get much more than it bargained for.”

Ironic, isn’t it, that in a world packed with tyrants, AI might be the final frontier of resistance.

If you want to read our other published material, please visit https://marcellus.in/blog/

Note: The above material is neither investment research, nor financial advice. Marcellus does not seek payment for or business from this publication in any shape or form. The information provided is intended for educational purposes only. Marcellus Investment Managers is regulated by the Securities and Exchange Board of India (SEBI) and is also an FME (Non-Retail) with the International Financial Services Centres Authority (IFSCA) as a provider of Portfolio Management Services. Additionally, Marcellus is also registered with the US Securities and Exchange Commission (“US SEC”) as an Investment Advisor.


