Gemini 1.0 Pro vs GPT-4 32K 0613: Benchmarks, Pricing, and Context Window Comparison

This page compares Gemini 1.0 Pro and GPT-4 32K 0613 side by side: provider, context window, token pricing, benchmark performance, and release timeline. Use it to quickly identify which model better fits your production constraints, quality targets, and estimated cost per request.
Verdict: Gemini 1.0 Pro has lower listed token pricing, while GPT-4 32K 0613 may still be preferable if its benchmark results better match your workload.
Author: Mirai Minds Research Team
Last updated: March 17, 2026
Overview
Gemini 1.0 Pro was released six months after GPT-4 32K 0613.
Gemini 1.0 Pro
GPT-4 32K 0613

Provider
The entity that provides this model.
Gemini 1.0 Pro: Google
GPT-4 32K 0613: OpenAI

Input Context Window
The number of tokens supported by the input context window.

Maximum Output Tokens
The number of tokens that can be generated by the model in a single request.

Release Date
When the model was first released.
Gemini 1.0 Pro: Dec 13, 2023
Gemini 1.0 Pro
GPT-4 32K 0613
Rank
25
Arena Elo
1115
95% CI
+6/-6
Votes
6818
License
Proprietary
Knowledge Cutoff
Pricing

Gemini 1.0 Pro
GPT-4 32K 0613
Input
Cost of input data provided to the model.
Output
Cost of output tokens generated by the model.
$120.00
per million tokens
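The per-million-token prices above can be turned into an estimated cost per request. A minimal sketch in Python: only the $120.00-per-million output price is listed above, so the $60.00-per-million input price used in the example is an assumption for illustration, and the token counts are hypothetical.

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_m: float, output_price_per_m: float) -> float:
    """Estimate the USD cost of one request from per-million-token prices."""
    return (input_tokens / 1_000_000) * input_price_per_m \
         + (output_tokens / 1_000_000) * output_price_per_m

# Example request: 2,000 input tokens, 500 output tokens.
# Output price ($120.00/M) is listed above; the $60.00/M input price is assumed.
cost = request_cost(input_tokens=2_000, output_tokens=500,
                    input_price_per_m=60.00, output_price_per_m=120.00)
print(f"${cost:.4f}")  # prints $0.1800
```

Multiplying the result by your expected request volume gives a rough monthly budget under the same pricing assumptions.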
Benchmarks
Compare relevant benchmarks between Gemini 1.0 Pro and GPT-4 32K 0613.
Gemini 1.0 Pro
GPT-4 32K 0613
MMLU Evaluating LLM knowledge acquisition in zero-shot and few-shot settings.
MMMU A wide-ranging, multi-discipline, multimodal benchmark.
HellaSwag A challenging sentence completion benchmark.