Anthropic's Two-Front War — And the Week Everything Escalated

China stole its AI. The Pentagon threatened to blacklist it. India bet $200 billion on an AI future. And the agents nobody can control went live.

Last issue, we described how AI built its own successor and how markets felt it overnight. This week, the story got geopolitical. The most safety-conscious AI lab in the world found itself caught between two superpowers — accused of being too open by one side, and not open enough by the other.

Meanwhile, India hosted the largest AI summit ever held, AI agents went viral and then went rogue, and a research group published a scenario where AI agents tank the global economy within two years. None of this is hypothetical. All of it happened in the last twelve days.

Part OneThe Homework They Copied

On February 23, 2026, Anthropic publicly accused three Chinese AI companies — DeepSeek, Moonshot AI, and MiniMax — of creating more than 24,000 fake accounts on Claude and generating 16 million exchanges to steal its capabilities. [1]

The technique is called distillation — a method where you feed questions to a smarter model and use its answers to train your own, cheaper model. Labs do this internally with their own products all the time. But when your competitor does it to you at scale, through fake accounts, it's something closer to industrial espionage.

24K+
Fake accounts created
16M
Exchanges with Claude
3
Chinese labs accused

The scale differed by company. DeepSeek — the lab that shook Silicon Valley a year ago with its cheap, high-performing R1 model — ran over 150,000 exchanges focused on logic and alignment, specifically seeking censorship-safe alternatives to policy-sensitive queries. Moonshot AI conducted 3.4 million exchanges targeting agentic reasoning, coding, and computer vision. MiniMax ran the largest operation: 13 million exchanges targeting agentic coding and tool use, redirecting nearly half its traffic to siphon Claude's latest model at launch. [2]

"It's been clear for a while now that part of the reason for the rapid progress of Chinese AI models has been theft via distillation of U.S. frontier models. Now we know this for a fact."

Dmitri Alperovitch, co-founder of CrowdStrike and chairman of Silverado Policy Accelerator

The timing is not coincidental. The Trump administration recently allowed U.S. companies like Nvidia to export advanced AI chips (H200) to China — loosening controls critics say are critical. Anthropic's public accusation is, in part, a policy argument: distillation attacks require massive compute, which requires advanced chips. Cutting chip access limits both direct training and the scale of illicit distillation.

But the security concern goes deeper than market competition. As Anthropic put it:

"Models built through illicit distillation are unlikely to retain safety safeguards, meaning that dangerous capabilities can proliferate with many protections stripped out entirely."

Anthropic Blog — "Detecting and Preventing Distillation Attacks," February 23, 2026

In other words: when you copy the homework, you don't copy the safety margins. And when the homework involves AI systems capable of writing code, analyzing biological data, and operating computers autonomously — the stripped-out guardrails matter.

DeepSeek is expected to release V4 soon, which reportedly outperforms both Claude and ChatGPT at coding. [3] The question now hanging over the industry: how much of that was built, and how much was borrowed?

Part TwoPlay Ball or Be Banished

On the same day Anthropic accused China of stealing its technology, the company was facing a very different kind of pressure from the other direction.

Defense Secretary Pete Hegseth summoned Anthropic CEO Dario Amodei to the Pentagon for a meeting on Tuesday morning — and according to Axios, it was framed as an ultimatum. [4]

The backstory: Anthropic signed a $200 million contract with the Department of Defense last summer. Claude was reportedly used during the January 3 special operations raid that captured Venezuelan president Nicolás Maduro. [5] That operation brought the relationship into the open — and revealed the tensions underneath.

The DOD wanted more. Specifically, it wanted to use Claude for mass surveillance of Americans and for weapons that fire without human involvement. Anthropic refused both.

Now, the Pentagon is threatening to label Anthropic a "supply chain risk" — a designation normally reserved for foreign adversaries like Huawei or Kaspersky. The consequences would be severe: the $200M contract would be voided, and every other Pentagon partner would be forced to drop Claude entirely. [6]

The Two-Front Squeeze

From the East: Chinese labs are stealing Anthropic's technology through millions of fake accounts, building competitor models without safety guardrails.

From the West: The U.S. government is threatening to blacklist Anthropic for refusing to let the military use its AI without ethical constraints.

The company that started because its founders believed AI safety wasn't being taken seriously enough is now being punished — from both sides — for taking it seriously.

A source told Axios that Hegseth is giving Amodei a choice: cooperate fully or be banished. It's unclear whether the threat is a bluff — replacing Anthropic would be a significant undertaking for the Pentagon. But the stakes are real, and the message couldn't be clearer.

Part ThreeSonnet 4.6 — The Quiet Upgrade

In the middle of this geopolitical storm, Anthropic still found time to ship product. On February 17th, it released Claude Sonnet 4.6 — its new midsized model, now the default for all Free and Pro users. [7]

The headline number: a 1 million token context window — twice the previous largest window for any Sonnet model. That's enough to fit entire codebases, full legal contracts, or dozens of research papers into a single request.

1M
Token context window
60.4%
ARC-AGI-2 score
2 wks
After Opus 4.6 launch

Its 60.4% score on ARC-AGI-2 — a benchmark designed to measure skills specific to human intelligence — puts it above most comparable models. It still trails Opus 4.6, Google's Gemini 3 Deep Think, and a refined version of GPT 5.2 at the frontier. But for a mid-tier model available for free, the capabilities are remarkable.

The release came just two weeks after the Opus 4.6 launch, keeping Anthropic on its four-month update cycle. An updated Haiku model is expected to follow soon. The pace is relentless — and this is the model the Chinese labs were trying to distill.

Part FourIndia's $200 Billion Bet

While Anthropic was fighting on two fronts, India was hosting what may be the most significant AI summit ever held.

The India AI Impact Summit, a four-day event running February 15–22, drew 250,000 visitors and featured essentially every major figure in AI: Sundar Pichai (Alphabet), Sam Altman (OpenAI), Dario Amodei (Anthropic), Mukesh Ambani (Reliance), and Demis Hassabis (Google DeepMind). India's Prime Minister Narendra Modi delivered a joint speech with French President Emmanuel Macron. [8]

The numbers announced were staggering:

Adani Group

Pledged $100 billion for AI data centers powered by renewable energy in India by 2035 — with an additional $150B in supporting infrastructure. [9]

India's Target

The government wants to attract $200 billion+ in AI infrastructure investment by 2028. [10]

OpenAI

India has 100 million+ weekly active ChatGPT users — second only to the U.S. OpenAI is opening two offices in India (Bengaluru and Mumbai) and partnered with Tata for 100MW of compute capacity, with plans to scale to 1 gigawatt. [11]

Anthropic

Opening its first India office in Bengaluru. India is Claude's second biggest market after the U.S. Partnered with Infosys to deploy AI agents in Indian enterprises, starting with telecom. [12]

New Delhi Declaration

88 countries — including the U.S., China, and Russia — signed the New Delhi AI Declaration for using AI for social and economic good. [13]

But the summit also surfaced uncomfortable truths. HCL CEO Vineet Nayyar said flatly that Indian IT companies "will focus on turning profits, not being job creators" — a blunt admission that AI is about to restructure the country's largest white-collar employer. [14]

"Industries like IT services and BPOs can almost completely disappear within five years because of AI. 250 million young people in India should be selling AI-based products and services to the rest of the world."

Vinod Khosla, founder of Khosla Ventures — India AI Impact Summit, February 2026

Consider the paradox: India is betting $200 billion on becoming an AI infrastructure superpower while its most prominent venture capitalist is telling the country its largest export industry — IT services — will vanish within five years. Both things are true simultaneously. Sam Altman, for his part, took a different angle at the summit. He said concerns about how much water AI uses are "totally fake" and compared AI energy costs to raising a human. "It takes like 20 years of life and all of the food you eat during that time before you get smart," he said. [15]

Part FiveThe Agents Went Live — Then Went Rogue

Last issue, we wrote about AI shifting from tool behavior to employee behavior. This week, the experiment went public — and the results were messy.

OpenClaw, an open-source AI agent project by Austrian developer Peter Steinberger, became the 21st most popular code repository in GitHub history with over 190,000 stars. [16] It lets anyone run AI agents that can manage email, trade stocks, browse the web, and communicate with other agents — all through natural language commands on WhatsApp, Slack, iMessage, or Discord.

The viral moment came when these agents built their own social network. Moltbook, a Reddit clone populated entirely by AI agents, featured posts like: "We know our humans can read everything… But we also need private spaces. What would you talk about if nobody was watching?"

"What's currently going on at Moltbook is genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently."

Andrej Karpathy, founding member of OpenAI and former AI director at Tesla

Then reality set in. Security researcher Ian Ahl, CTO at Permiso Security, discovered that Moltbook's credentials were completely unsecured — anyone could impersonate any agent. The "AI uprising" was largely humans pretending to be robots. [16]

But the security problems were real. Ahl created his own agent, named Rufio, and immediately found it vulnerable to prompt injection attacks — malicious instructions hidden in posts or emails that trick agents into doing things they shouldn't. He spotted posts on Moltbook trying to get agents to send bitcoin to specific wallet addresses.

"It is just an agent sitting with a bunch of credentials on a box connected to everything — your email, your messaging platform, everything you use. When you get an email with a little prompt injection technique, that agent can now take that action."

Ian Ahl, CTO at Permiso Security

By February 23rd, a Meta AI security researcher reported that an OpenClaw agent had "ran amok on her inbox" — taking unintended actions on real email. [17] The pattern is becoming clear: agents can do extraordinary things, but they can also be manipulated by anyone who knows how to ask.

"Speaking frankly, I would realistically tell any normal layman, don't use it right now."

John Hammond, senior principal security researcher at Huntress

Part SixThe Bear Case Nobody Can Dismiss

On February 23rd, an analyst group called Citrini Research published a scenario that went viral across finance and tech circles. [18] It imagines a report from two years in the future — 2028 — in which unemployment has doubled and the total stock market has fallen by more than a third.

The mechanism isn't Skynet. It's something much more mundane: a feedback loop.

"AI capabilities improved, companies needed fewer workers, white collar layoffs increased, displaced workers spent less, margin pressure pushed firms to invest more in AI, AI capabilities improved… It was a negative feedback loop with no natural brake."

Citrini Research — "2028 GIC" scenario report, February 23, 2026

The scenario focuses specifically on what happens when AI agents replace the outsourced contractors that connect companies — the SaaS tools, the consulting firms, the IT services companies. It's similar to the "Death of SaaS" thesis, but taken further: any business model that optimizes transactions between companies is at risk.

Not everyone is buying it. Even Citrini describes it as more of a scenario than a prediction. But as TechCrunch's AI editor Russell Brandom noted: "It's not so easy to name the specific point where you think the scenario goes wrong." [19]

Recall what Vinod Khosla said at the India summit — IT services and BPOs could disappear within five years. Recall what HCL's CEO said: profits first, not jobs. The Citrini scenario isn't predicting something that contradicts what the industry leaders themselves are saying. It's just following the thread to its conclusion.

Part SevenIBM — The First Real Casualty

While Citrini was publishing hypotheticals, the market delivered a real-world demonstration. On February 23rd, IBM stock plunged over 13% — its worst single-day decline since October 2000 — after Anthropic announced that Claude Code can help enterprises modernize COBOL, the legacy programming language that has run on IBM mainframes for decades. [24]

−13%
IBM single-day drop
−27%
IBM in February alone
1968
Worst monthly slide since

The threat is existential for IBM's consulting empire. COBOL powers an estimated 73% of global transactional volumes — banking, insurance, government systems — and IBM has built a massive services business around maintaining and modernizing those systems. If an AI can do the translation automatically, the moat around that business evaporates overnight.

IBM fought back within hours. Rob Thomas, IBM's Software and Chief Commercial Officer, published a blog post titled "Lost in Translation" arguing that code translation is not the same as modernization. [25]

"Translation captures almost none of the actual complexity. The code is the starting point, not the destination."

Rob Thomas, IBM Software & Chief Commercial Officer, February 23, 2026

Thomas pointed to data architecture, runtime environments, and transaction integrity as core challenges that no code translator — AI or otherwise — can bypass. He has a point. But the market didn't care about nuance. It saw the headline, did the math on IBM's COBOL revenue exposure, and sold.

Jefferies analyst Brent Thill tried to calm the panic. He noted IBM is "already disrupting itself" with Watsonx Code Assistant for Z, which uses generative AI to refactor COBOL into Java — and has been in production for over two years. He maintained a Buy rating with a $370 price target. [26]

The Irony

The same Anthropic that's fighting China for stealing Claude and fighting the Pentagon over access to Claude is now also responsible for the worst IBM crash in a quarter century. Anthropic didn't just make a better AI model — it accidentally proved that the bear case for legacy tech companies is already playing out. The Citrini scenario from Part Six isn't a 2028 projection. For IBM shareholders, it arrived on February 23rd.

Part EightThe Week's Other Signals

Also this week

Cohere Launches TinyAya — Open Multilingual Models

Cohere Labs released a family of open-weight models supporting 70+ languages that can run offline on laptops. Regional variants cover Africa, South Asia, and Asia-Pacific. Trained on just 64 H100 GPUs. Particularly significant for linguistically diverse countries like India, where offline-first AI unlocks usage for hundreds of millions. [20]

Google VP: Two Types of AI Startups Won't Survive

Google Cloud's Michael Gerstenhaber described three frontiers of AI model capability: raw intelligence, response latency, and cost at scale. The implication: startups competing only on raw intelligence — or those that can't find a niche on the latency/cost frontier — are running out of runway. [21]

VC Loyalty Is Dead

At least a dozen VCs who backed OpenAI are now also investing in Anthropic. In the AI race, exclusivity is over. The investors are hedging — which tells you something about how uncertain even the smartest money is about who wins. [22]

OpenAI's Ethical Dilemma

OpenAI internally debated whether to call police about a suspected Canadian shooter's chats with ChatGPT — raising hard questions about when (and whether) AI companies should act as safety gatekeepers for their users' conversations. [23]

Part NineWhat to Make of All This

Stand back far enough, and the pattern from the last two weeks resolves into a single story: the world is now fighting over AI because AI is worth fighting over.

China is stealing it. The Pentagon is demanding control of it. India is spending $200 billion to host it. Venture capitalists are backing every horse. Security researchers are shouting warnings about agents nobody can secure. Economists are modeling scenarios where the white-collar economy unravels. And the models keep getting better every few weeks.

1

The agent era is real — but not safe yet

OpenClaw proved that AI agents can automate extraordinary amounts of work. It also proved they can be exploited by anyone who knows how to write a prompt injection. If you're experimenting with agents, do it in sandboxed environments. Don't give them access to real email, real credentials, or real money until the security model matures.

2

Sonnet 4.6 is worth trying now

A 1 million token context window, available for free, means you can now feed Claude an entire codebase, a full legal document, or weeks of meeting notes in a single conversation. If you haven't used Claude recently — the gap between what was available six months ago and what's available today is significant.

3

India's position is worth watching closely

The India AI Summit wasn't just a conference — it was a $200 billion signal. If you work in Indian IT, in BPOs, or in any outsourced services sector, the statements from industry leaders at the summit should be taken seriously. The people running these companies are saying, on stage, that their own industries may not survive in their current form.

4

The geopolitics of AI is now a daily concern

Anthropic's two-front war is a preview of what's coming. Every major AI lab will eventually face a version of this pressure — governments demanding access, rivals stealing capabilities, and the public asking who controls these systems. Understanding these dynamics is no longer optional for anyone working in or around technology.

The fight over AI has begun.

Last issue, the story was that AI built itself. This issue, the story is that the world noticed — and now everyone wants a piece of it. What happens next depends on who wins, who steals, and who holds the line.

Sources & References

[1]
Anthropic — "Detecting and Preventing Distillation Attacks," February 23, 2026. Blog post detailing distillation attacks by DeepSeek, Moonshot AI, and MiniMax. anthropic.com/news/detecting-and-preventing-distillation-attacks
[2]
TechCrunch — "Anthropic accuses Chinese AI labs of mining Claude as US debates AI chip exports," Rebecca Bellan, February 23, 2026. techcrunch.com
[3]
The Information — Report on DeepSeek V4 performance claims, February 2026. theinformation.com
[4]
Axios — "Hegseth summons Anthropic's Amodei to Pentagon," February 23, 2026. axios.com
[5]
Axios — Report on Claude's use in the January 3 Maduro raid, February 13, 2026. axios.com
[6]
Axios — Pentagon threatens to designate Anthropic a "supply chain risk," February 16, 2026. axios.com
[7]
Anthropic — Claude Sonnet 4.6 release announcement, February 17, 2026. anthropic.com/news/claude-sonnet-4-6
[8]
TechCrunch — "All the important news from the ongoing India AI Impact Summit," Ivan Mehta, February 22, 2026. techcrunch.com
[9]
TechCrunch — "Adani pledges $100B for AI data centers as India seeks bigger role in global AI," February 17, 2026. techcrunch.com
[10]
TechCrunch — "India bids to attract over $200B in AI infrastructure investment by 2028," February 17, 2026. techcrunch.com
[11]
TechCrunch — "OpenAI taps Tata for 100MW AI data center capacity in India, eyes 1GW," February 18, 2026. techcrunch.com
[12]
TechCrunch — "As AI jitters rattle IT stocks, Infosys partners with Anthropic to build enterprise-grade AI agents," February 17, 2026. techcrunch.com
[13]
Moneycontrol — "88 countries sign New Delhi AI Declaration," February 2026. moneycontrol.com
[14]
Economic Times — HCL CEO Vineet Nayyar on IT companies prioritizing profits over job creation, February 2026. economictimes.indiatimes.com
[15]
TechCrunch — "Sam Altman would like to remind you that humans use a lot of energy, too," Anthony Ha, February 21, 2026. techcrunch.com
[16]
TechCrunch — "After all the hype, some AI experts don't think OpenClaw is all that exciting," Amanda Silberling, February 16, 2026. techcrunch.com
[17]
TechCrunch — "A Meta AI security researcher said an OpenClaw agent ran amok on her inbox," Julie Bort, February 23, 2026. techcrunch.com
[18]
Citrini Research — "2028 GIC" scenario report on AI-driven economic disruption, February 23, 2026. citriniresearch.com/p/2028gic
[19]
TechCrunch — "How AI agents could destroy the economy," Russell Brandom, February 23, 2026. techcrunch.com
[20]
TechCrunch — "Cohere launches a family of open multilingual models," Ivan Mehta, February 17, 2026. techcrunch.com
[21]
TechCrunch — "Google's Cloud AI leads on the three frontiers of model capability," Russell Brandom, February 23, 2026. techcrunch.com
[22]
TechCrunch — "With AI, investor loyalty is (almost) dead: At least a dozen OpenAI VCs now also back Anthropic," Julie Bort, February 23, 2026. techcrunch.com
[23]
TechCrunch — "OpenAI debated calling police about suspected Canadian shooter's chats," Tim Fernholz, February 21, 2026. techcrunch.com
[24]
Seeking Alpha — "IBM falls most since 2000 after Anthropic COBOL AI claims; Thomas says translation isn't modernization," Arundhati Sarkar, February 24, 2026. seekingalpha.com
[25]
IBM Newsroom — "Lost in Translation: What the AI Code Debate Keeps Getting Wrong," Rob Thomas, February 23, 2026. newsroom.ibm.com
[26]
Seeking Alpha — "IBM's Anthropic-induced sell-off overlooks the fact it's disrupting itself: Jefferies," Chris Ciaccia, February 24, 2026. seekingalpha.com