Anthropic

Anthropic is the creator of Claude and the pioneer of Constitutional AI. Founded in 2021 by former OpenAI researchers, the company has built a safety-focused AI lab that has become the third force in frontier AI.

Anthropic stands as one of the most consequential AI companies of the decade, reaching a $350 billion valuation by November 2025, reportedly the third-highest of any private company globally, after raising more than $27 billion in funding. Founded in January 2021 by seven former OpenAI researchers over disagreements about AI safety, the company has carved out a distinct position through Constitutional AI and the Claude family of models, which now power applications across AWS, Google Cloud, and Microsoft Azure.

This guide documents Anthropic’s complete journey: the OpenAI exodus, the safety-first philosophy, every Claude model release, the massive funding rounds, and the company’s rise to genuine third force status alongside OpenAI and Google DeepMind.

Quick facts

Founded:        January 2021
Headquarters:   San Francisco, California
CEO:            Dario Amodei
President:      Daniela Amodei
Employees:      ~1,100
Valuation:      $350 billion (November 2025)
Total funding:  $27+ billion
Key products:   Claude.ai, Claude API, Claude Code, Claude for Enterprise
Price range:    Free to $200/month (consumer); API from $0.25/1M tokens
Best for:       Coding, long-form writing, analysis, enterprise deployment
Notable:        Only frontier AI available on all three major clouds (AWS, GCP, Azure)

The founding and the OpenAI exodus (2020-2021)

Why they left OpenAI

The departures began in December 2020, with 14+ researchers ultimately leaving OpenAI. The group coalesced around siblings Dario and Daniela Amodei, who had spent years at OpenAI watching internal debates about safety versus commercialisation intensify.

The founders have consistently pointed to directional disagreements about safety rather than, as is often speculated, OpenAI's Microsoft partnership. Dario Amodei stated in his Lex Fridman interview: “The real reason for leaving is that it is incredibly unproductive to try and argue with someone else’s vision.”

The core philosophical split: the Amodeis believed safety should be a foundational design principle rather than a capability to be added after establishing basic functionality. They wanted to build an organisation where safety research and capability development were inseparable from day one.

January 2021: Anthropic arrives

Anthropic was founded in January 2021 with seven co-founders, all former OpenAI employees:

The Amodei siblings:

  • Dario Amodei — CEO (former VP of Research at OpenAI; PhD Computational Neuroscience, Princeton)
  • Daniela Amodei — President (former VP of Operations at OpenAI; BA English Literature, UC Santa Cruz)

The founding research team:

  • Tom Brown — Lead author of the GPT-3 paper
  • Chris Olah — Interpretability pioneer, founded Distill.pub
  • Sam McCandlish — Now CTO
  • Jared Kaplan — Scaling laws researcher, now Chief Science Officer
  • Jack Clark — Former Policy Director at OpenAI, writes “Import AI” newsletter (70K+ subscribers)

The Public Benefit Corporation structure

Anthropic incorporated as a Delaware Public Benefit Corporation (PBC), legally permitting directors to balance shareholder returns against its stated mission: “The responsible development and maintenance of advanced AI for the long-term benefit of humanity.”

This wasn’t just branding. The PBC structure provides legal cover for decisions that prioritise safety over short-term profits, a tension that had reportedly contributed to the founders’ departure from OpenAI.

The Long-Term Benefit Trust

In September 2023, Anthropic announced the Long-Term Benefit Trust (LTBT), a novel governance structure designed to maintain mission alignment as the company scales.

How it works:

  • The LTBT holds Class T shares with special voting rights
  • Within four years, the Trust will control a majority of board seats
  • Trustees are selected for commitment to AI safety and humanity’s long-term benefit
  • Trustees cannot be Anthropic employees, executives, or major investors

Current LTBT trustees: Neil Buddy Shah (Clinton Health Access Initiative CEO), Kanika Bahl, Zach Robinson, and Richard Fontaine.

This structure aims to prevent the kind of mission drift that critics accused OpenAI of experiencing, where commercial pressures gradually overwhelmed the original nonprofit mission.

Funding history: From $124M to $27B

Anthropic’s funding trajectory represents one of the most remarkable capital raises in tech history, with the company securing investments from cloud giants who would normally be competitors.

Investment timeline

May 2021 — Series A ($124M): First institutional round led by Jaan Tallinn (Skype co-founder). Valued the company at approximately $550 million. Other participants included Eric Schmidt, James McClave, and Dustin Moskovitz.

April 2022 — Series B ($580M): Led by Sam Bankman-Fried’s FTX, valuing Anthropic at approximately $4 billion. When FTX collapsed in November 2022, the FTX bankruptcy estate retained its Anthropic stake, which became one of the estate’s most valuable assets.

Early 2023 — Google Investment ($300M): Google invested approximately $300 million, reportedly for a 10% stake. This began Anthropic’s strategic positioning as a multi-cloud AI provider.

May 2023 — Series C ($450M): Led by Spark Capital at approximately $5 billion valuation. Participants included Google, Salesforce Ventures, Sound Ventures, and Zoom Ventures.

September 2023 — Amazon Deal Begins ($1.25B initial): Amazon announced an initial $1.25 billion investment as part of a commitment up to $4 billion. AWS became Anthropic’s primary cloud provider, with Anthropic agreeing to use AWS Trainium chips for model training.

October 2023 — Google Expansion (up to $2B): Google committed an additional $2 billion in a convertible note structure, with $500 million upfront and $1.5 billion to follow.

March 2024 — Amazon Completion ($2.75B): Amazon completed its investment, bringing total Amazon funding to $4 billion. Anthropic reached an $18.4 billion valuation.

November 2024 — Amazon Additional ($4B): Amazon doubled down with another $4 billion, bringing total Amazon investment to $8 billion.

January 2025 — Google Additional ($1B+): Google contributed additional funding as part of expanded cloud partnership.

March 2025 — Series E ($3.5B): Led by Lightspeed Venture Partners at $61.5 billion valuation. The round included Thrive Capital, General Catalyst, and others.

September 2025 — Series F ($13B): Led by ICONIQ Capital and Fidelity at $183 billion valuation—one of the largest private funding rounds ever.

October 2025 — Google Cloud Deal: Google and Anthropic announced a cloud computing deal worth tens of billions of dollars, giving Anthropic access to up to 1 million TPUs and more than a gigawatt of compute capacity.

November 2025 — Microsoft/Nvidia ($15B): The blockbuster deal that reshaped the AI landscape. Microsoft committed up to $5 billion and Nvidia up to $10 billion. As part of the deal, Anthropic committed to purchasing $30 billion in Azure compute capacity, making Claude the only frontier model available on all three major clouds.

Current investor breakdown

Investor        Total investment   Notes
Amazon          $8B                Primary cloud partner, no board seat
Microsoft       Up to $5B          Azure partnership, no board seat
Nvidia          Up to $10B         Hardware partnership
Google          ~$3B               Cloud partner, no board seat
Spark Capital   Multiple rounds    Board seat (Yasmin Razavi)

Total funding: More than $27 billion across 14 rounds.

Current valuation: ~$350 billion (November 2025), per CNBC.

Board composition

Despite billions invested, neither Amazon, Google, nor Microsoft holds board seats. The current board includes:

  • Dario Amodei — CEO, Co-Founder
  • Daniela Amodei — President, Co-Founder
  • Yasmin Razavi — Spark Capital
  • Jay Kreps — Confluent CEO
  • Reed Hastings — Netflix founder

This independence from major investors is unusual and reflects Anthropic’s commitment to maintaining mission control.

Constitutional AI: The technical philosophy

What Constitutional AI actually is

Introduced in a December 2022 paper, Constitutional AI (CAI) represents Anthropic’s alternative to standard Reinforcement Learning from Human Feedback (RLHF).

The problem with standard RLHF: Human labellers provide feedback to train models, but this creates several issues:

  • Human labellers can be inconsistent
  • Some content is disturbing for humans to evaluate
  • The values being taught are implicit, not explicit
  • Scaling requires more humans, which is expensive

Constitutional AI’s solution: Train models using explicit written principles rather than implicit human preferences.

How it works

Phase 1 — Supervised Learning:

  1. The model generates responses to potentially harmful prompts
  2. The model then critiques its own responses against constitutional principles
  3. The model revises its responses based on the critique
  4. This creates a dataset of “improved” responses for fine-tuning

Phase 2 — Reinforcement Learning from AI Feedback (RLAIF):

  1. The model generates pairs of responses to the same prompt
  2. A separate AI evaluates which response better satisfies the constitution
  3. This preference data trains a reward model
  4. RLHF proceeds using the AI-generated preferences
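The two phases above can be sketched in code. This is an illustrative toy pipeline, not Anthropic's implementation: the `generate`, `critique`, and `revise` functions are stubs standing in for model calls, and the judge is a placeholder.

```python
# Toy sketch of the two Constitutional AI phases; all model calls are stubbed.

def generate(prompt):
    # Stub: stands in for sampling a response from the model being trained.
    return f"draft response to: {prompt}"

def critique(response, principle):
    # Stub: the model critiques its own output against one principle.
    return f"critique of '{response}' under '{principle}'"

def revise(response, critique_text):
    # Stub: the model rewrites its response to address the critique.
    return f"revised ({critique_text})"

CONSTITUTION = [
    "Be helpful, honest, and harmless",
    "Avoid toxic, dangerous, or illegal content",
]

def phase1_supervised(prompts):
    """Phase 1: critique-and-revise each draft, yielding fine-tuning pairs."""
    dataset = []
    for prompt in prompts:
        response = generate(prompt)
        for principle in CONSTITUTION:
            response = revise(response, critique(response, principle))
        dataset.append((prompt, response))  # "improved" responses for SFT
    return dataset

def phase2_preferences(prompts, judge):
    """Phase 2 (RLAIF): an AI judge picks the more constitutional response."""
    prefs = []
    for prompt in prompts:
        a, b = generate(prompt), generate(prompt)
        chosen = a if judge(prompt, a, b) else b
        prefs.append((prompt, chosen))  # preference data for the reward model
    return prefs
```

In a real pipeline the stubs would be LLM calls, and the preference pairs would train a reward model for the final RL stage.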

The constitution itself

The principles draw from diverse sources:

  • UN Declaration of Human Rights
  • Apple’s Terms of Service
  • DeepMind’s Sparrow principles
  • Anthropic’s internal guidelines

Key principles include:

  • Be helpful, honest, and harmless
  • Avoid generating toxic, dangerous, or illegal content
  • Recognise AI identity (don’t pretend to be human)
  • Be “wise, peaceful, and ethical” while avoiding being “preachy, obnoxious, or overly-reactive”

In October 2023, Anthropic experimented with Collective Constitutional AI, gathering input from approximately 1,000 Americans to inform alternative constitutional principles, exploring whether democratic input could shape AI values.

Benefits over standard RLHF

  • 50% reduction in human feedback requirements
  • Protection of human evaluators from disturbing content
  • Greater scalability
  • Explicit transparency about values
  • More consistent training signal

Complete model release history

Early era: Claude 1.x (2023)

March 2023 — Claude 1.0: Anthropic’s first public model, released alongside Claude Instant. Named after Claude Shannon, the “father of information theory.”

  • Context window: 9,000 tokens
  • Capabilities: Summarisation, Q&A, creative writing, coding assistance
  • Access: API only initially

Claude Instant: A faster, cheaper variant for high-volume use cases.

Claude 1.3: Incremental update with improved instruction-following.

Claude 2 generation (2023)

July 11, 2023 — Claude 2: Major upgrade that introduced the claude.ai consumer interface.

  • Context window: 100,000 tokens (industry-leading at launch)
  • Benchmark improvements: 76.5% on bar exam (up from 73%), 71.2% on HumanEval (up from 56%)
  • New features: Public web interface at claude.ai, file upload support

November 21, 2023 — Claude 2.1: Significant capability upgrade.

  • Context window: 200,000 tokens (doubled from Claude 2)
  • Hallucination reduction: 50% fewer false claims
  • New features: Tool use capabilities, system prompts, beta API features
  • Pricing: $8/$24 per million tokens (input/output)

Claude 3 generation (2024)

March 4, 2024 — Claude 3 Family: Launched three model tiers with multimodal vision capabilities for the first time.

Model    Pricing (in/out per 1M)   Context   Key benchmarks
Opus     $15 / $75                 200K      86.8% MMLU, 84.9% HumanEval
Sonnet   $3 / $15                  200K      79% MMLU, twice as fast as Claude 2
Haiku    $0.25 / $1.25             200K      Reads 10K tokens in <3 seconds
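Per-million-token pricing translates to request cost with simple arithmetic. A minimal calculator using the Claude 3 rates above (the 10K/1K token example is hypothetical):

```python
# Per-million-token rates (USD) from the Claude 3 pricing tiers.
PRICING = {
    "opus":   {"input": 15.00, "output": 75.00},
    "sonnet": {"input": 3.00,  "output": 15.00},
    "haiku":  {"input": 0.25,  "output": 1.25},
}

def request_cost(model, input_tokens, output_tokens):
    """Cost of one call: tokens / 1M * per-million rate, summed per direction."""
    rates = PRICING[model]
    return (input_tokens / 1_000_000 * rates["input"]
            + output_tokens / 1_000_000 * rates["output"])

# Example: a 10K-token prompt with a 1K-token reply on each tier.
for model in PRICING:
    print(f"{model}: ${request_cost(model, 10_000, 1_000):.5f}")
# opus costs $0.225, sonnet $0.045, haiku $0.00375 for this request
```

The 60x spread between Opus and Haiku is why tier selection matters for high-volume workloads.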

Claude 3 Opus briefly claimed the top spot on major benchmarks, surpassing GPT-4 on several measures.

June 20, 2024 — Claude 3.5 Sonnet: The model that changed Anthropic’s competitive position.

  • Performance: Outperformed Claude 3 Opus on most benchmarks while running twice as fast
  • Benchmarks: 88.7% on MMLU, 92% on HumanEval
  • Pricing: Same as Claude 3 Sonnet ($3/$15)
  • Impact: Became the default recommendation for most use cases

October 22, 2024 — Claude 3.5 Sonnet (Upgraded) + Computer Use:

The October update introduced Computer Use in public beta, Claude’s ability to interact with computer interfaces.

  • Capabilities: Navigate screens, move cursors, click buttons, type text
  • Status: Public beta, available via API
  • Use cases: Automated testing, data entry, repetitive computer tasks
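A computer-use request at launch looked roughly like the payload below. This is a sketch based on the beta API as publicly documented in October 2024; the exact tool type string, model id, and beta header may have changed since.

```python
# Sketch of a computer-use request body (launch-era beta shape; illustrative).
import json

payload = {
    "model": "claude-3-5-sonnet-20241022",  # the upgraded 3.5 Sonnet
    "max_tokens": 1024,
    "tools": [{
        "type": "computer_20241022",        # virtual screen/mouse/keyboard tool
        "name": "computer",
        "display_width_px": 1024,
        "display_height_px": 768,
    }],
    "messages": [{"role": "user", "content": "Open the settings menu."}],
}

# Sent with an 'anthropic-beta: computer-use-2024-10-22' header. Claude replies
# with tool_use blocks (cursor moves, clicks, keystrokes) that the caller's
# harness executes, returning screenshots for the next turn.
print(json.dumps(payload, indent=2))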

October 2024 — Claude 3.5 Haiku: Released alongside the Sonnet upgrade.

Note: Claude 3.5 Opus was never released. Development pivoted directly to Claude 4.

Claude 4 generation (2025)

February 24, 2025 — Claude 3.7 Sonnet: The first “hybrid reasoning” model, bridging the Claude 3.5 and Claude 4 generations.

  • Extended Thinking: Configurable thinking budgets up to 128K tokens
  • Visible reasoning: Users can see Claude’s reasoning process
  • Output capacity: 128,000 tokens maximum output
  • Use case: Complex multi-step reasoning, research, coding
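Extended thinking is exposed as a request parameter. A sketch of the request shape, following Anthropic's public API docs at the time (the budget values here are illustrative, and the exact parameter names should be checked against current docs):

```python
# Sketch of an extended-thinking request for Claude 3.7 Sonnet (illustrative).
payload = {
    "model": "claude-3-7-sonnet-20250219",
    "max_tokens": 16_000,
    "thinking": {
        "type": "enabled",
        "budget_tokens": 8_000,  # cap on reasoning tokens; must fit in max_tokens
    },
    "messages": [{"role": "user", "content": "Prove that sqrt(2) is irrational."}],
}

# The response interleaves 'thinking' blocks (the visible reasoning) with the
# final 'text' block, so callers can inspect how the answer was reached.
```

Setting the budget to zero effort versus tens of thousands of tokens is what makes the model "hybrid": one model serves both fast replies and deep reasoning.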

May 22, 2025 — Claude 4 (Opus and Sonnet): Launched at Anthropic’s inaugural developer conference.

  • Opus 4: First model classified at ASL-3 (higher safety tier), code execution tools, MCP connectors
  • Sonnet 4: General-purpose upgrade
  • New capabilities: Native tool use, improved agentic workflows

August 2025 — Claude Opus 4.1: Performance update achieving 74.5% on SWE-bench.

Claude 4.5 generation (Late 2025)

September 29, 2025 — Claude Sonnet 4.5:

  • SWE-bench: 77.2%
  • New feature: “Context awareness” capability for improved coherence
  • ASL status: Classified at ASL-3

October 15, 2025 — Claude Haiku 4.5:

  • Pricing: $1 / $5 per million tokens
  • SWE-bench: 73.3%
  • First Haiku with extended thinking capabilities
  • ASL status: Classified at ASL-3

November 24, 2025 — Claude Opus 4.5:

  • Pricing: $5 / $25 per million tokens (67% reduction from Opus 4)
  • SWE-bench: 80.9% (state-of-the-art at launch)
  • New feature: “Effort” parameter for compute scaling
  • ASL status: Classified at ASL-3

Product launches and ecosystem

Consumer products

Product        Launch date         Pricing          Notes
Claude.ai      July 11, 2023       Free tier        Web interface for public access
Claude Pro     September 7, 2023   $20/month        5x message limits, priority access
iOS app        May 1, 2024         Free             Cross-platform sync, vision
Android app    July 16, 2024       Free             Parity with iOS
Desktop apps   November 2024       Free             Windows and Mac
Claude Max     March 2025          $100-200/month   Higher usage tiers

Enterprise products

May 1, 2024 — Claude for Teams: $30/user/month with minimum 5 seats.

  • Higher usage limits
  • Centralised billing
  • Team workspaces

September 4, 2024 — Claude for Enterprise: Custom pricing for large organisations.

  • SSO and SCIM provisioning
  • Audit logs and role-based access control
  • Native GitHub integration
  • Custom deployment options
  • Dedicated support

Early enterprise customers: GitLab, Midjourney, North Highland, DuckDuckGo.

Key feature releases

June 2024 — Artifacts (Preview): Dedicated workspace for code, documents, and interactive visualisations. Generally available August 2024.

June 25, 2024 — Projects: Custom instructions with 200K context for documents. Enables persistent context across conversations.

October 22, 2024 — Computer Use (Beta): Claude can interact with computer interfaces, navigate screens, click, type.

November 25, 2024 — Model Context Protocol (MCP): Open-source standard for integrating AI with external data sources. Now adopted by OpenAI and Microsoft.

February 2025 — Claude Code (Preview): Agentic command-line coding tool. Generally available May 2025. Now generates $500M+ run-rate revenue.

March 2025 — Web Search: Real-time information retrieval for Pro and Max subscribers.

API and developer platform

The API has evolved significantly since launch:

  • March 2023: Initial API release
  • November 2023: Tool use and system prompts (Claude 2.1)
  • March 2024: Vision capabilities, multi-turn conversations (Claude 3)
  • October 2024: Computer use beta
  • May 2025: Code execution, MCP integration (Claude 4)
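Tool use, the capability added with Claude 2.1 and expanded since, works by declaring tools with JSON Schema inputs in the request. A sketch of the request shape per Anthropic's tool-use documentation; the weather tool and model id are made-up examples:

```python
# Sketch of a tool definition for the Messages API (illustrative example tool).
tool = {
    "name": "get_weather",
    "description": "Return current weather for a city.",
    "input_schema": {                      # JSON Schema for the tool's arguments
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
        },
        "required": ["city"],
    },
}

request_body = {
    "model": "claude-sonnet-4-20250514",   # illustrative model id
    "max_tokens": 1024,
    "tools": [tool],
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
}

# Claude responds with a tool_use block naming the tool and its arguments;
# the caller executes the tool and sends back a tool_result message.
```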

Business and financials

Revenue trajectory

Period      Revenue/ARR      Notes
2022        ~$10M            Early API revenue
2023        ~$200M           Claude.ai launch
2024        ~$1B             Enterprise growth
Late 2025   $5-9B run rate   ~10x YoY growth

Claude Code alone reportedly generates $500M+ annualised revenue as of late 2025.

Valuation history

Date             Valuation   Event
May 2021         ~$550M      Series A
April 2022       ~$4B        Series B (FTX)
May 2023         ~$5B        Series C
March 2024       $18.4B      Amazon completion
March 2025       $61.5B      Series E
September 2025   $183B       Series F
November 2025    ~$350B      Microsoft/Nvidia deal

Revenue per user advantage

Despite holding only around 4% of the consumer chatbot market versus ChatGPT’s 61%, Anthropic punches above its weight in revenue due to superior enterprise monetisation. One analysis puts Anthropic at roughly $211 per monthly active user against OpenAI’s $25 per weekly active user; the metrics are not directly comparable, but both point to deeper enterprise penetration and higher-value use cases.

Leadership and key personnel

Current executive team

Role                    Name             Background
CEO                     Dario Amodei     Co-founder; former VP of Research at OpenAI; PhD, Princeton
President               Daniela Amodei   Co-founder; former VP of Operations at OpenAI
CTO                     Sam McCandlish   Co-founder
Chief Science Officer   Jared Kaplan     Co-founder; scaling laws researcher
Chief Product Officer   Mike Krieger     Instagram co-founder; joined 2024
Head of Policy          Jack Clark       Co-founder; “Import AI” newsletter
Research Director       Chris Olah       Co-founder; interpretability pioneer

Notable hires from OpenAI

The exodus from OpenAI continued well after Anthropic’s founding:

May 2024 — Jan Leike: Former head of OpenAI’s Superalignment team. Left OpenAI stating that “safety culture and processes have taken a backseat to shiny products.” Joined Anthropic’s alignment team.

August 2024 — John Schulman: OpenAI co-founder and head of alignment science. Cited Anthropic’s alignment focus. However, departed after approximately five months in February 2025 to join Mira Murati’s new venture.

Employee count

  • 2022: ~192 employees
  • Late 2025: ~1,097 employees
  • Security focus: Approximately 8% work on security-related functions

Responsible Scaling Policy

The framework

First published September 19, 2023, the Responsible Scaling Policy (RSP) established a “first-of-its-kind public commitment” to capability-gated safety measures.

The RSP defines AI Safety Levels (ASL) modelled after biosafety containment levels:

Level    Description                                                  Current models
ASL-1    No meaningful catastrophic risk                              Historical models
ASL-2    Standard production deployment                               Most Claude models
ASL-3    Enhanced restrictions, elevated security                     Opus 4, 4.1, Sonnet 4.5, Opus 4.5
ASL-4+   Future thresholds for potentially catastrophic capabilities  None yet

Key commitments

  1. Not train or deploy models capable of causing catastrophic harm without adequate safeguards
  2. Develop evaluation frameworks to assess capabilities before deployment
  3. Scale security measures with model capabilities
  4. Publicly report on safety evaluations

ASL-3 activation

In May 2025, with the launch of Claude Opus 4, Anthropic activated ASL-3 protections for the first time.

ASL-3 measures include:

  • Enhanced model weight security
  • Additional deployment restrictions
  • CBRN (chemical, biological, radiological, nuclear) risk assessments
  • Elevated monitoring for potential misuse

The policy has been updated twice: October 2024 (v2.0) and May 2025 (v2.2).

Controversies and legal challenges

October 2023 — Music publishers lawsuit: Universal, Concord, and ABKCO sued Anthropic for copyright infringement of song lyrics, seeking up to $150,000 per infringement. The case alleges Claude was trained on copyrighted lyrics and reproduces them.

Training data concerns: Reports surfaced about “destructive book scanning” of potentially millions of books for training data, though Anthropic has not disclosed specific training data sources.

Safety vs. capability tensions

Despite the safety-first positioning, Anthropic faces criticism from both directions:

From safety advocates: Some argue Anthropic’s rapid capability scaling contradicts its safety mission, that the company is racing to build powerful AI regardless of stated principles.

From capability advocates: Others argue Anthropic’s safety measures create unnecessary friction and that Constitutional AI produces models that are overly cautious or preachy.

Strategic partnerships criticism

The simultaneous partnerships with Amazon, Google, and Microsoft have drawn scrutiny:

  • Independence concerns: With billions from competing cloud giants, can Anthropic truly remain independent?
  • Mission alignment: Does accepting capital from companies with different AI philosophies compromise Anthropic’s safety mission?
  • Governance questions: The LTBT structure aims to address these concerns, but it hasn’t been fully tested.

Competition and market position

Consumer market share

Platform            Market share   Notes
ChatGPT             61.3%          Dominant incumbent
Microsoft Copilot   14.1%          Enterprise integration
Google Gemini       13.4%          Growing fastest (12% quarterly)
Perplexity          6.4%           Search-focused
Claude AI           3.8%           Enterprise-focused

Despite only 3.8% consumer market share, Anthropic’s enterprise revenue per user significantly exceeds competitors.

Enterprise positioning

Claude has become the default choice for several enterprise segments:

Coding assistance: Claude leads on SWE-bench and HumanEval benchmarks, making it popular among developers. Claude Code’s $500M+ revenue demonstrates enterprise demand.

Long-form content: The 200K context window and strong writing capabilities make Claude preferred for document analysis and generation.

Safety-conscious enterprises: Regulated industries (finance, healthcare, legal) often prefer Claude’s Constitutional AI approach for compliance reasons.

Multi-cloud advantage

The November 2025 Microsoft/Nvidia deal made Claude the only frontier AI model available on all three major clouds:

  • AWS: Primary partnership via Amazon Bedrock
  • Google Cloud: Available via Vertex AI
  • Microsoft Azure: New partnership, $30B compute commitment

This multi-cloud availability provides enterprise customers with deployment flexibility no competitor matches.

Where Anthropic excels

Coding and software development

Claude consistently leads on coding benchmarks. Claude Opus 4.5 achieved 80.9% on SWE-bench, the benchmark that tests ability to solve real GitHub issues. Claude Code has become a genuine revenue driver, demonstrating enterprise willingness to pay for AI coding assistance.

Long-context understanding

Anthropic pioneered large context windows, moving from 9K to 200K tokens across model generations. This enables processing entire codebases, legal documents, or research papers in a single context, a capability that creates genuine differentiation.
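A quick way to reason about whether a document fits in a context window is the common rule of thumb of roughly 4 characters per token for English text. This heuristic is approximate (real tokenizer counts vary by content and language):

```python
# Rough context-window capacity check using the ~4 chars-per-token heuristic.
def fits_in_context(text, context_tokens=200_000, chars_per_token=4):
    """Return (estimated_tokens, fits) for a 200K-token window by default."""
    est_tokens = len(text) // chars_per_token
    return est_tokens, est_tokens <= context_tokens

# A ~300-page book at ~2,000 characters per page:
tokens, ok = fits_in_context("x" * 300 * 2_000)
print(tokens, ok)  # ~150K estimated tokens, within a 200K window
```

For precise budgeting, the provider's token-counting endpoint or tokenizer should be used instead of this estimate.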

Enterprise safety requirements

Constitutional AI and the Responsible Scaling Policy resonate with enterprises facing regulatory scrutiny. The explicit, auditable nature of Constitutional AI makes it easier to explain to compliance teams than black-box RLHF approaches.

Writing quality

Claude models consistently receive praise for natural, nuanced writing that avoids the “AI slop” quality of some competitors. This makes Claude preferred for professional content, long-form writing, and creative work.

Where Anthropic lags

Consumer brand recognition

ChatGPT remains synonymous with AI chatbots for most consumers. Despite competitive (often superior) capabilities, Claude lacks the brand awareness that drives consumer adoption.

Multimodal breadth

While Claude has vision capabilities, it lacks:

  • Image generation (DALL-E, Midjourney)
  • Video generation (Sora)
  • Voice/audio (GPT-4o voice)

Anthropic remains focused on text and vision, leaving multimodal gaps.

Free tier limitations

Claude’s free tier is more restrictive than ChatGPT’s, limiting organic growth through free users who later convert to paid.

International availability

Claude has faced geographic restrictions and availability issues in some markets, though this has improved throughout 2025.

Recent developments (2024-2025)

Strategic partnerships expansion

November 2024 — Palantir: Announced partnership to provide Claude to US intelligence and defence agencies through Palantir’s platform.

October 2025 — Google Cloud: Expanded deal providing access to up to 1 million TPUs.

November 2025 — Microsoft/Nvidia: $15 billion investment making Claude available on Azure, with $30 billion Azure compute commitment.

Leadership additions

May 2024 — Mike Krieger: Instagram co-founder joined as Chief Product Officer, signalling increased focus on consumer product development.

Model Context Protocol adoption

Anthropic’s open-source MCP standard has gained industry adoption:

  • OpenAI announced MCP support
  • Microsoft integrated MCP into developer tools
  • Growing ecosystem of MCP connectors for enterprise data sources
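MCP is built on JSON-RPC 2.0: a host application (such as a chat client) exchanges request/response messages with servers that wrap data sources. A sketch of two client-to-server messages, with method names following the MCP specification (the `query_database` tool and its arguments are hypothetical):

```python
# Sketch of MCP client messages (JSON-RPC 2.0; example tool is hypothetical).
import json

# Ask a server what tools it exposes.
list_tools = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# Invoke one of those tools with arguments.
call_tool = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "query_database",          # hypothetical server-side tool
        "arguments": {"sql": "SELECT 1"},
    },
}

# Messages travel over stdio or HTTP between the host app and the MCP server.
print(json.dumps(list_tools), json.dumps(call_tool))
```

Because the wire format is plain JSON-RPC, any language can implement a server, which is a large part of why the connector ecosystem grew quickly.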

Revenue acceleration

The combination of Claude Code success and enterprise adoption drove revenue from ~$1B in 2024 to $5-9B run rate by late 2025, approximately 10x year-over-year growth.

The road ahead

Near-term expectations

  • Continued model improvements with Claude 5 expected in 2026
  • Expansion of Claude Code capabilities and enterprise features
  • Deeper integration with cloud partner ecosystems
  • Potential consumer product innovations under Mike Krieger’s leadership

Long-term questions

Can safety and scale coexist? As models grow more powerful, Anthropic’s ASL framework will face real tests. The company has committed to not deploying models above safe thresholds, but those thresholds remain to be proven.

Will the LTBT work? The Long-Term Benefit Trust is untested at scale. Whether it can actually preserve mission alignment as commercial pressures intensify remains to be seen.

Multi-cloud sustainability? Being available on competing cloud platforms is currently an advantage, but each partner has incentives to promote their own AI. Can Anthropic maintain preferred status across all three?

The safety lab paradox

Anthropic faces a fundamental tension: it was founded on the premise that powerful AI is dangerous and requires careful development, yet it races to build increasingly powerful AI. The company argues this is necessary, that if powerful AI is coming regardless, it’s better to have safety-focused labs at the frontier. Critics argue this logic enables exactly the race dynamics the founders claimed to oppose.

What’s clear: Anthropic has established itself as a genuine third force in frontier AI, with the resources, talent, and market position to shape how the technology develops. Whether its safety-first philosophy can survive at $350 billion valuation and counting will be one of the defining questions in AI’s next chapter.

Models

RANK   MODEL               SCORE   SWE     CTX    IN $/M   OUT $/M
[02]   Claude Opus 4.5     84.2    80.9%   200K   $5       $25
[05]   Claude Sonnet 4.5   83.2    77.2%   200K   $3       $15
[10]   Claude Opus 4       80.0    72.5%   200K   $15      $75
[12]   Claude Sonnet 4     79.1    72.7%   200K   $3       $15
[17]   Claude 3.5 Sonnet   68.9    49%     200K   $3       $15
[21]   Claude 3.5 Haiku    62.8    40.6%   200K   $0.8     $4
[22]   Claude Haiku 4.5    62.3    73.3%   200K   $1       $5
[23]   Claude 3 Opus       62.4    38%     200K   $15      $75
[45]   Claude 3 Haiku      52.6    -       200K   $0.25    $1.25
