On April 23, 2026, Anthropic CEO Dario Amodei made a striking statement: “I think we might be 6-12 months away from when Claude does most or maybe all of what we do end to end.”
This is not a startup founder’s casual vision statement, but a specific timeline from the CEO of one of the world’s most influential AI safety companies.
Amodei’s Core Assessment
Amodei’s prediction rests on two foundations. First, Anthropic’s ongoing compute expansion: the company’s partnership with Amazon, announced on April 20, will add up to 5 gigawatts of new compute, Anthropic’s largest compute investment ever. Second, Claude Opus 4.7 has already demonstrated relevant capabilities: independently watching videos, editing them into viral clips, and automatically adding captions.
More compelling is the evidence from the field: Airtable’s CEO is running 30 parallel Claude Code instances in production, each paired with its own browser, fully autonomous, with no IDE and no human intervention. Sequoia Capital previously estimated that AI agents handle approximately 50% of software engineering work, and actual progress may be faster.
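The orchestration pattern described above can be sketched in a few lines. This is a minimal illustration only: `run_agent` is a hypothetical stand-in for invoking an agent on a task, since the details of the Airtable setup (browser pairing, Claude Code invocation) are not public.

```python
# Sketch of the pattern: many agent instances running concurrently,
# each on its own task, with no human in the loop.
from concurrent.futures import ThreadPoolExecutor

def run_agent(task_id: int) -> str:
    # Placeholder: a real implementation would shell out to an agent CLI
    # or call an API, then collect the result.
    return f"task-{task_id}: done"

N_INSTANCES = 30  # the article cites 30 parallel Claude Code instances

with ThreadPoolExecutor(max_workers=N_INSTANCES) as pool:
    results = list(pool.map(run_agent, range(N_INSTANCES)))

print(f"{len(results)} agents completed")
```

The point of the pattern is not the code but the economics: once each instance is autonomous, scaling from 1 to 30 workers is a configuration change, not a hiring process.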
The Economics: $5 vs $100,000
Behind Amodei’s prediction is a clear economic logic: an “entry-level” role costs $100,000+ per year in salary and benefits, while completing the same task with a Claude Opus 4.7 agent might cost just $5 in API credits, and finish faster.
This difference in unit economics is the core driver pushing AI into white-collar work. Goldman Sachs and McKinsey previously estimated that agentic AI could affect 50% of white-collar roles within 1-5 years. Amodei’s 6-12 month timeline is directionally consistent, but more aggressive.
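The back-of-envelope arithmetic behind the $5 vs. $100,000 comparison can be made explicit. The annual salary and per-task agent cost are the article’s figures; the yearly task count is an assumption added here purely for illustration.

```python
# Hedged unit-economics comparison using the figures cited in the article.
ANNUAL_COST_HUMAN = 100_000   # entry-level salary + benefits (article's figure)
COST_PER_TASK_AGENT = 5       # API credits per task (article's figure)
TASKS_PER_YEAR = 2_000        # assumption: ~8 tasks/day over ~250 workdays

agent_annual_cost = COST_PER_TASK_AGENT * TASKS_PER_YEAR
cost_ratio = ANNUAL_COST_HUMAN / agent_annual_cost

print(f"Agent annual cost: ${agent_annual_cost:,}")
print(f"Human-to-agent cost ratio: {cost_ratio:.0f}x")
```

Even under this conservative workload assumption, the agent comes out an order of magnitude cheaper; at higher task volumes the gap only widens.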
Industry Response
Amodei’s prediction sparked polarized reactions in the AI community. Supporters argue that, judging from Opus 4.7’s actual performance on coding, research, and multi-step tasks, the timeline is not science fiction. Critics counter that “completing human work end to end” is too broad a claim, and that AI still has clear limitations in scenarios requiring creative judgment and social interaction.
Notably, Gary Marcus and other AI critics published commentary targeting Amodei in late April, questioning whether this “hype-style” AGI narrative might mask the real limitations of AI systems.
Market Outlook
Amodei’s prediction needs to be understood on two levels. From a business perspective, it’s Anthropic’s signal to investors and customers: we are approaching AGI, investing in us is investing in the future. From a safety perspective, it’s also part of Anthropic’s consistent narrative—if AGI is coming, AI safety is no longer academic research, but an urgent practical concern.
Regardless of whether Amodei’s prediction proves accurate, one thing is certain: AI agents are shifting from “assistive tools” to “autonomous executors.” For enterprises, the key question is no longer “will AI replace human work” but “which jobs will be replaced, when, and how should we prepare.”
Sources
- Anthropic Newsroom
- Dario Amodei’s related discussion on X
- Airtable CEO’s shared account of running multiple Claude Code instances
- Sequoia Capital’s AI agent industry estimates