
GPT-4 Turbo 2024-04-09 vs o1-mini: Benchmarks, Pricing, and Context Window Comparison

This page compares GPT-4 Turbo 2024-04-09 and o1-mini across provider, context window, token pricing, benchmark performance, and release timeline in a single side-by-side view. Use it to quickly identify which model better fits your production constraints, quality targets, and estimated cost per request.

Verdict

o1-mini has lower listed token pricing, while GPT-4 Turbo 2024-04-09 may still be preferable if its benchmark results better match your workload.

Author: Mirai Minds Research Team

Overview
GPT-4 Turbo 2024-04-09 was released 5 months before o1-mini.

| | GPT-4 Turbo 2024-04-09 | o1-mini |
| --- | --- | --- |
| Provider | OpenAI | OpenAI |
| Input Context Window | 128,000 tokens | 128,000 tokens |
| Maximum Output Tokens | 4,096 tokens | 65,536 tokens |
| Release Date | 2024-04-09 | 2024-09-12 |
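
As a practical check against these limits, the sketch below counts prompt tokens before a request is sent. It assumes Python with OpenAI's tiktoken tokenizer; the model name, prompt text, and fallback encoding are illustrative, not taken from this page.

```python
import tiktoken

# Both models accept up to 128,000 input tokens; the output caps differ
# (4,096 for GPT-4 Turbo 2024-04-09 vs 65,536 for o1-mini).
INPUT_CONTEXT_WINDOW = 128_000

def fits_in_context(prompt: str, model: str = "gpt-4-turbo") -> bool:
    """Roughly check whether a prompt fits the input context window."""
    try:
        encoding = tiktoken.encoding_for_model(model)
    except KeyError:
        # Unrecognized model name: fall back to a general-purpose encoding.
        encoding = tiktoken.get_encoding("cl100k_base")
    return len(encoding.encode(prompt)) <= INPUT_CONTEXT_WINDOW

print(fits_in_context("Summarize the attached quarterly report."))  # True
```

When budgeting, leave headroom below the limit, since generated tokens also consume the context window.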
Leaderboard

| | GPT-4 Turbo 2024-04-09 | o1-mini |
| --- | --- | --- |
| Rank | 1 | 5 |
| Arena Elo | 1257 | 1307 |
| 95% CI | +4/-3 | +3/-4 |
| Votes | 30,562 | 35,691 |
| License | Proprietary | Proprietary |
| Knowledge Cutoff | 12/2023 | 4/2023 |
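
Assuming the standard Elo expected-score model used by Chatbot Arena-style leaderboards (an assumption, not stated on this page), the 50-point gap between the two ratings maps to an expected head-to-head preference:

$$
E_{\text{o1-mini}} = \frac{1}{1 + 10^{(1257 - 1307)/400}} = \frac{1}{1 + 10^{-0.125}} \approx 0.57
$$

Under that model, o1-mini would be expected to win roughly 57% of pairwise votes against GPT-4 Turbo 2024-04-09, a modest but consistent preference gap.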
Pricing

| | GPT-4 Turbo 2024-04-09 | o1-mini |
| --- | --- | --- |
| Input (per million tokens) | $10.00 | $1.10 |
| Output (per million tokens) | $30.00 | $4.40 |
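
To turn these rates into an estimated cost per request, multiply each token count by the per-million price. The sketch below is a minimal Python example; the 2,000-input/500-output request size is a hypothetical workload, and note that o1-mini also bills hidden reasoning tokens as output tokens, so its real output counts can exceed the visible completion.

```python
# Listed prices in USD per million tokens (from the table above).
PRICES = {
    "gpt-4-turbo-2024-04-09": {"input": 10.00, "output": 30.00},
    "o1-mini": {"input": 1.10, "output": 4.40},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of a single request at the listed rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Hypothetical workload: 2,000 input tokens and 500 output tokens per request.
print(request_cost("gpt-4-turbo-2024-04-09", 2_000, 500))  # 0.035
print(request_cost("o1-mini", 2_000, 500))                 # 0.0044
```

At this request shape, o1-mini comes out roughly 8x cheaper, though reasoning-token overhead can narrow that gap on complex prompts.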
Benchmarks
Compare relevant benchmarks between GPT-4 Turbo 2024-04-09 and o1-mini.

| Benchmark | GPT-4 Turbo 2024-04-09 | o1-mini |
| --- | --- | --- |
| MMLU (knowledge acquisition, zero- and few-shot) | Not available | 85.2 (5-shot) |
| MMMU (wide-ranging multi-discipline, multimodal) | Not available | Not available |
| HellaSwag (challenging sentence completion) | Not available | Not available |