
Claude 2.1 vs GPT-4 0613: Benchmarks, Pricing, and Context Window Comparison

This page compares Claude 2.1 and GPT-4 0613 side by side on provider, context window, token pricing, benchmark performance, and release timeline. Use it to quickly identify which model is a better fit for your production constraints, quality targets, and estimated cost per request.

Verdict

Claude 2.1 has lower listed token pricing, while GPT-4 0613 may still be preferable if its benchmark results better match your workload.

Author: Mirai Minds Research Team

Overview

Claude 2.1 was released about five months after GPT-4 0613.

| | Claude 2.1 | GPT-4 0613 |
| --- | --- | --- |
| Provider (the entity that provides the model) | Anthropic | OpenAI |
| Input Context Window (tokens supported by the input context) | 200K tokens | 8,192 tokens |
| Maximum Output Tokens (tokens the model can generate in a single request) | Not specified | 8,192 tokens |
| Release Date | 2023-11-23 | 2023-06-13 |
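
The context-window gap is the most practical difference in this table: before routing a request, it helps to check whether the prompt even fits the smaller window. Below is a minimal sketch using the window sizes listed above; the 4-characters-per-token ratio and the `reserve_for_output` default are rough assumptions, not provider tokenizers.

```python
# Rough context-window fit check using the window sizes listed above.
# ASSUMPTION: ~4 characters per token is a coarse heuristic; use the
# provider's tokenizer for exact counts.

CONTEXT_WINDOW = {
    "claude-2.1": 200_000,  # input tokens
    "gpt-4-0613": 8_192,    # total tokens (shared by prompt and completion)
}

def fits_in_context(model: str, prompt: str, reserve_for_output: int = 1_000) -> bool:
    """Heuristically check that a prompt leaves room for the completion."""
    estimated_tokens = len(prompt) // 4
    return estimated_tokens + reserve_for_output <= CONTEXT_WINDOW[model]

long_doc = "lorem ipsum " * 5_000               # ~60K chars, roughly 15K tokens
print(fits_in_context("gpt-4-0613", long_doc))  # False: exceeds 8,192 tokens
print(fits_in_context("claude-2.1", long_doc))  # True: well under 200K
```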
Leaderboard

| | Claude 2.1 | GPT-4 0613 |
| --- | --- | --- |
| Rank | 25 | 12 |
| Arena Elo | 1119 | 1165 |
| 95% CI | +4/-3 | +3/-3 |
| Votes | 39,744 | 67,038 |
| License | Proprietary | Proprietary |
| Knowledge Cutoff | Unknown | September 2021 |
Pricing

| | Claude 2.1 | GPT-4 0613 |
| --- | --- | --- |
| Input (cost of tokens provided to the model) | $8.00 per million tokens | $30.00 per million tokens |
| Output (cost of tokens generated by the model) | $24.00 per million tokens | $60.00 per million tokens |
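
To turn these rates into an estimated cost per request, multiply each token count by its per-million rate and sum. A minimal sketch follows; the rates come from the table above, while the model keys and example token counts are illustrative placeholders.

```python
# Estimate cost per request from the listed per-million-token rates.
# Rates are taken from the pricing table above; the token counts in
# the example are hypothetical.

PRICING = {
    "claude-2.1": {"input": 8.00, "output": 24.00},   # USD per 1M tokens
    "gpt-4-0613": {"input": 30.00, "output": 60.00},  # USD per 1M tokens
}

def cost_per_request(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    rates = PRICING[model]
    return (input_tokens * rates["input"] + output_tokens * rates["output"]) / 1_000_000

# Example: a 3,000-token prompt with a 500-token completion.
for model in PRICING:
    print(f"{model}: ${cost_per_request(model, 3_000, 500):.4f}")
    # claude-2.1: $0.0360   gpt-4-0613: $0.1200
```

At these example counts, GPT-4 0613 costs roughly 3.3x more per request, so the pricing gap compounds quickly at production volumes.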
Benchmarks

Compare relevant benchmarks between Claude 2.1 and GPT-4 0613.

| Benchmark | Claude 2.1 | GPT-4 0613 |
| --- | --- | --- |
| MMLU (evaluating LLM knowledge acquisition in zero-shot and few-shot settings) | Not available | Not available |
| MMMU (a wide-ranging multi-discipline, multimodal benchmark) | Not available | Not available |
| HellaSwag (a challenging sentence-completion benchmark) | Not available | Not available |