
Gemini 1.0 Pro vs Qwen2.5-VL-72B-Instruct: Benchmarks, Pricing, and Context Window Comparison

This page compares Gemini 1.0 Pro and Qwen2.5-VL-72B-Instruct side by side across provider, context window, token pricing, benchmark performance, and release timeline. Use it to quickly identify which model better fits your production constraints, quality targets, and estimated cost per request.

Verdict

Qwen2.5-VL-72B-Instruct has lower listed token pricing, while Gemini 1.0 Pro may still be preferable if its benchmark results better match your workload.

Author: Mirai Minds Research Team


Overview

Gemini 1.0 Pro was released about 9 months before Qwen2.5-VL-72B-Instruct.

| | Gemini 1.0 Pro | Qwen2.5-VL-72B-Instruct |
| --- | --- | --- |
| Provider (the entity that provides the model) | Google | Qwen |
| Input context window (tokens supported in a single request's input) | 32.8K tokens | 129,024 tokens |
| Maximum output tokens (tokens the model can generate in a single request) | 8,192 tokens | 8,192 tokens |
| Release date | 2023-12-13 | 2024-09-19 |
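The listed input limits are most useful as a pre-flight check on long prompts. Below is a minimal sketch of such a check in Python; the context-window values are taken from the table above, while the ~4-characters-per-token heuristic and the placeholder prompt are assumptions, not vendor figures.

```python
# Quick pre-flight check: does a prompt plausibly fit in each model's input window?
# The ~4 characters-per-token ratio is a rough heuristic; exact counts require the
# provider's tokenizer, so treat the result as an estimate only.

CONTEXT_WINDOW = {  # input tokens, as listed above
    "Gemini 1.0 Pro": 32_800,
    "Qwen2.5-VL-72B-Instruct": 129_024,
}

def estimated_tokens(text: str) -> int:
    """Very rough token estimate (~4 characters per token for English text)."""
    return max(1, len(text) // 4)

prompt = "example paragraph of input text " * 5_000  # placeholder document
for model, limit in CONTEXT_WINDOW.items():
    n = estimated_tokens(prompt)
    verdict = "fits" if n <= limit else "likely too long"
    print(f"{model}: ~{n:,} estimated tokens vs {limit:,}-token limit -> {verdict}")
```

With a placeholder document of this size, the estimate lands above the smaller window but comfortably inside the larger one, which is exactly the kind of gap the context-window row is meant to surface.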
Leaderboard

| | Gemini 1.0 Pro | Qwen2.5-VL-72B-Instruct |
| --- | --- | --- |
| Rank | 25 | Unknown |
| Arena Elo | 1115 | Not specified |
| 95% CI | +6/-6 | Not specified |
| Votes | 6,818 | Not specified |
| License | Proprietary | Not specified |
| Knowledge cutoff | 4/2023 | Unknown |
Pricing

| Price per million tokens | Gemini 1.0 Pro | Qwen2.5-VL-72B-Instruct |
| --- | --- | --- |
| Input (cost of input data provided to the model) | $12.50 | $1.95 |
| Output (cost of output tokens generated by the model) | $37.50 | $8.00 |
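To turn these per-million-token rates into an estimated cost per request, multiply your expected input and output token counts by the listed prices. Here is a minimal sketch, assuming the listed rates and a hypothetical request of 2,000 input and 500 output tokens.

```python
# Rough per-request cost estimate derived from the listed per-million-token prices.
# The 2,000-input / 500-output token counts are hypothetical; substitute your own.

PRICES = {  # USD per 1M tokens, as listed above
    "Gemini 1.0 Pro": {"input": 12.50, "output": 37.50},
    "Qwen2.5-VL-72B-Instruct": {"input": 1.95, "output": 8.00},
}

def cost_per_request(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost for one request at the listed rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

for model in PRICES:
    print(f"{model}: ${cost_per_request(model, 2_000, 500):.4f} per request")
```

At those hypothetical counts, the listed prices work out to roughly 5-6x lower cost per request for Qwen2.5-VL-72B-Instruct, which is the arithmetic behind the verdict above.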
Benchmarks

Compare relevant benchmarks between Gemini 1.0 Pro and Qwen2.5-VL-72B-Instruct.

| Benchmark | Gemini 1.0 Pro | Qwen2.5-VL-72B-Instruct |
| --- | --- | --- |
| MMLU (evaluating LLM knowledge acquisition in zero-shot and few-shot settings) | 71.8 (5-shot) | Benchmark not available |
| MMMU (a wide-ranging multi-discipline, multimodal benchmark) | 47.9 | Benchmark not available |
| HellaSwag (a challenging sentence-completion benchmark) | Benchmark not available | Benchmark not available |