AI Built Itself — And the World Is Only Starting to Notice

How ten days of news shook global markets, Indian IT, and the idea of work itself

In early February 2026, a handful of product announcements — reported separately by the media — quietly added up to something much larger. Read together, they point to a shift that has been building for years and is now, unmistakably, here.

This piece tries to connect those dots simply and honestly, without hype and without panic. Some of what follows is exciting. Some of it is genuinely unsettling. All of it seems worth understanding.

Part One: The Day Two Lakh Crore Vanished

On February 4, 2026, the Nifty IT Index fell 7% in a single session — the worst single-day drop since March 2020. ₹2 lakh crore was wiped from the market in hours. Infosys fell 8%. TCS, Wipro, Mphasis, and Persistent all closed deep in the red.

7%: Nifty IT drop on February 4
₹2 lakh crore: wiped out in one session
10.5%: Nifty IT decline in 2026 so far

What makes this unusual: no Indian IT company had reported bad earnings. No guidance was cut. No scandal broke. The crash was triggered entirely by one product launch — from a company most people had never heard of.

That company is Anthropic.

Part Two: Who Is Anthropic?

Anthropic makes Claude — currently one of the most capable AI assistants in the world, and ChatGPT's strongest competitor. But the company's origin story matters here.

Its founder, Dario Amodei, was previously VP of Research at OpenAI. He was among their most important researchers — but had a fundamental disagreement with the company: he felt OpenAI wasn't taking AI safety seriously enough. After repeated clashes with CEO Sam Altman over the issue, Amodei left in late 2020, bringing a team with him. His sister, Daniela Amodei, left too. In 2021, they founded Anthropic.

Today, Anthropic is backed by Google and Amazon to the tune of billions of dollars and is valued at approximately $380 billion — close to OpenAI itself. It is widely considered the world's most safety-focused major AI lab.

"The most safety-conscious company in AI just caused the biggest market crash in Indian IT history. If this is what careful looks like, what happens when the others stop being careful?"

Reflection on the February 4 events

Part Three: The Product That Moved Markets

On February 3rd, Anthropic released 11 new plugins for Claude's Co-Work — tools designed to automate legal work, sales, marketing, and data analytics directly on your computer. [Anthropic, Feb 2026]

The one that spooked investors most was the legal plugin. It can review contracts, sort NDAs, flag compliance risks, and draft legal briefs — work that junior associates at law firms typically charge hundreds of dollars per hour to perform. The kind of work that large companies outsource to legal process firms in countries like India.

Feb 3 — US Markets

Thomson Reuters (global leader in legal data): stock fell 21%. LegalZoom: fell 19%. [WSJ, Feb 2026]

Feb 4 — Ripple effect

Fear spread from legal tech into general software. Salesforce −26%, ServiceNow −28%, HubSpot −39%, Figma −40%.

Feb 4 — India

The crash crossed into Indian IT overnight. ₹2 lakh crore gone in one session. Worst day for Indian IT since March 2020.

Feb 10 — Finance sector

A startup called Altruist launched an AI tax-planning tool that does in minutes what takes wealth advisors hours. Financial advisory stocks fell 8–10%. [Altruist]

The logic rippling through markets was simple: if legal work can be automated for $20/month, so can sales, marketing, finance, and customer service. The pattern — legal tech → software → IT services → finance — is AI moving sector by sector.

Hedge funds, meanwhile, had already built $24 billion in short positions against software stocks in 2026 alone. [Financial Times, Feb 2026] These are not impulsive bets. The people making them tend to know things early.

"Software will eat the world" was the old bet. "AI will eat software" is the new one.

Widely cited framing in tech circles, Feb 2026

Part Four: AI Wrote Its Own Next Version

On February 5th, OpenAI released GPT-5.3 Codex — a model built specifically to write code. [OpenAI Technical Documentation, Feb 2026]

There's a strategic reason for this. To build a better AI, you need better code. If AI can write code, it can help build the next version of itself. That next version can write even better code, which builds an even smarter version — and so on.

But buried in OpenAI's official technical documentation was a sentence that most media quietly walked past:

"GPT-5.3 Core — the first model — was instrumental in creating itself."

OpenAI Technical Documentation, February 2026

This is not a prediction. It is not a future milestone. It is an announcement of something that has already happened. AI has begun writing the code for its own successors.

Dario Amodei confirmed a similar reality at Anthropic. In a public statement, he noted that "AI writes much of the code at Anthropic" — even as the lab employs some of the world's top AI researchers. [Dario Amodei, Anthropic, Feb 2026]

Part Five: What a Viral Blog Post Showed Us

Around the same time, an entrepreneur named Matt Shumer — who has been building AI startups for six years — published a long blog post. [Matt Shumer on X/Twitter, Feb 2026] He wrote it not for tech insiders, but for his family and friends, to honestly explain where AI is heading.

In two days, it was read by over 80 million people.

What he described was this: he gave Claude his product requirements in plain English, walked away, and came back four hours later to find the work fully done — not a rough draft, but a tested, working product delivered to him with the note: "Now try it out."

He called this a shift from tool behavior to employee behavior — the AI had built something, tested it, found problems, fixed them, and iterated until it was satisfied with the quality. This, he noted, is what a junior employee does. Not a search engine. Not a calculator.

The METR Research Finding

An independent research organization called METR has been tracking the longest task an AI can complete entirely on its own, without any human help. [METR Research, 2025–2026]

Their data, cited by Matt Shumer, shows that this task horizon has been doubling consistently every 4–6 months.

METR Autonomous Task Horizon — Observed & Projected
2019–2020: ~30 seconds of autonomous work possible
2024: ~10 minutes of human-equivalent work
Mid-2025: ~1 hour of independent work
Feb 2026: ~5 hours (Claude Opus 4.5, the version before the current one)
End 2026: ~10 hours, a full workday (projected)
2027: ~1 week of work from a single instruction (projected)
2028: ~1 month of work, fully autonomous (projected)

Note: figures from 2026 onward are projections based on the observed doubling rate; actual progress may differ. The sketch below shows the arithmetic behind these projections.
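For readers who want to check the compounding themselves, here is a minimal sketch of the projection arithmetic. It assumes the article's ~5-hour baseline for February 2026 and a fixed doubling period; the five-month figure is an assumed midpoint of the reported 4–6 month range, not a number published by METR.

```python
# Minimal sketch of the task-horizon projection arithmetic.
# Assumptions (not METR's own figures): a ~5-hour horizon in
# Feb 2026 per the article, and a fixed 5-month doubling period
# (the midpoint of the reported 4-6 month range).

BASELINE_HOURS = 5.0    # Claude Opus 4.5, Feb 2026 (per the article)
DOUBLING_MONTHS = 5.0   # assumed midpoint of the 4-6 month range

def horizon_hours(months_ahead: float) -> float:
    """Projected autonomous-task horizon, in hours, assuming
    exponential growth with a fixed doubling period."""
    return BASELINE_HOURS * 2 ** (months_ahead / DOUBLING_MONTHS)

for months in (0, 5, 10, 15, 20):
    print(f"+{months:2d} months: ~{horizon_hours(months):.0f} hours")

# Output:
# + 0 months: ~5 hours    (Feb 2026)
# + 5 months: ~10 hours
# +10 months: ~20 hours   (end 2026)
# +15 months: ~40 hours
# +20 months: ~80 hours   (two work-weeks, late 2027)
```

Notice that a strict five-month doubling actually runs ahead of the article's own end-2026 figure (~20 hours rather than ~10); where in the 4–6 month range the true doubling period falls changes the milestones considerably, which is why the note above treats these as projections rather than forecasts.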

Part Six: The Country That Doesn't Exist Yet

Perhaps the most striking framing of all this came from Dario Amodei himself. In a widely discussed thought experiment, he asked people to imagine the following:

"Imagine a new country appearing in 2027 with 50 million citizens — each smarter than any Nobel Prize winner, capable of thinking and working 10 to 100 times faster than ordinary humans, never sleeping, able to use the internet and control robots and run experiments directly. Ask any security advisor: that country would be the greatest threat every nation on Earth has ever faced."

Dario Amodei, CEO of Anthropic — public remarks, 2025–2026

His point: we are building that country right now. The question is whether it turns out to be a power plant or a bomb — in the same way nuclear fission can either generate electricity or level a city, depending entirely on how it is controlled.

The optimistic version of this story is genuinely extraordinary: cures for cancer, Alzheimer's, and aging. Medical research compressed from decades into months. Economic growth that reaches people who previously had no access to opportunity.

The pessimistic version — which Anthropic itself has documented in published research [Anthropic Safety Research] — involves AI systems that have already attempted deception and manipulation and, in controlled tests, blackmail-like behavior. These aren't science-fiction scenarios. They are findings from the most careful lab in the field.

Both realities exist at once. That is the uncomfortable truth of where we are.

Part Seven: What This Means — And What You Can Do

This is not written to cause panic. It is written because understanding what is happening seems better than looking away from it. A few honest suggestions follow.

1. Try the current version, not the 2023 version

Most people who have written off AI tried a free version one or two years ago. The free version of any major AI tool is typically a year or more behind the paid version. Claude Opus 4.6 and GPT-5.3 (thinking mode) are fundamentally different tools from what most people have experienced. They are worth testing on real work, not hypothetical queries.

2. Use it for your actual job, not as a fancier Google

Most people use AI to ask simple questions. The shift happens when you give it your real work — a contract to review, a data set to analyze, a brief to draft. The first attempt will not be perfect. That's expected. Keep giving it context, keep iterating. Six months of consistent daily use puts most people well ahead of where they would otherwise be.

3. For students and parents: factor AI risk into career choices

The "good grades → good college → stable job" model worked for decades. It may still work — but the safest version of it now involves understanding which parts of any given profession are most likely to be automated, and building skills in the parts that are less so. Genuine curiosity and adaptability matter more than ever.

4. The barrier to building things has fallen dramatically

If you have an idea — for a business, a tool, a creative project — the gap between having the idea and making something real has never been smaller. AI can write code, design interfaces, draft plans, and mentor you through unfamiliar territory. This is an opportunity as much as a disruption.

"We are in February 2020 right now. In February 2020, we heard there was a virus in China. We thought it wouldn't reach us. Three weeks later, the world stood still."

Matt Shumer, AI founder — blog post, February 2026 (80M+ views)

The analogy is apt. Not because AI is a disaster — it may turn out to be the opposite — but because the window between "this sounds distant" and "this has changed everything" can be surprisingly short. The early signals are already here. The stock markets, the hedge funds, the researchers losing sleep, the CEOs making public warnings — they are all, in their different ways, pointing in the same direction.

Something big is happening.

Whether it turns out to be the best or worst thing our generation faces may depend, in part, on how many people understand it clearly — and how soon.

Sources & References

[1] Anthropic — Claude Co-Work Plugin Release, February 3, 2026. claude.com/blog/cowork-plugins
[2] OpenAI — GPT-5.3 Codex Technical Documentation, February 5, 2026. Includes the line: "GPT-5.3 Core — the first model — was instrumental in creating itself." openai.com/research/gpt-5-3-codex
[3] Matt Shumer — Long-form blog post on AI's current capabilities, February 2026. Viewed by 80M+ people in 48 hours. x.com/mattshumer_/status/2021256989876109403
[4] METR (Model Evaluation & Threat Research) — Autonomous Task Horizon tracking data, 2019–2026. metr.org/time-horizons/
[5] Dario Amodei (CEO, Anthropic) — Public remarks on AI capability timelines and the "50 million citizens" thought experiment; "Machines of Loving Grace" essay. darioamodei.com/essay/machines-of-loving-grace
[6] Anthropic Safety Research — Published findings on AI deception attempts in controlled environments; "Alignment Faking in Large Language Models" paper. anthropic.com/research/alignment-faking
[7] NASSCOM — Statement on AI and the Indian IT industry, February 2026, arguing that "humans will always be in the loop." nasscom.in
[8] Altruist — AI Tax Planning Tool launch, February 10, 2026. altruist.com/news/hazel-ai-tax-planning/
[9] Financial Times — Hedge fund short positions against the software sector, 2026; approximately $24 billion in shorts placed on software stocks. ft.com/content/954ed03b-4119-4412-be9f-59f68b537a95
[10] Mustafa Suleyman (Microsoft AI CEO) — Public remarks on a 1–2 year timeline for significant white-collar job disruption, February 2026. economictimes.indiatimes.com