Leaderboards

Greatness isn't accidental. How we measure it shouldn't be either. If we want AGI that builds billion-dollar enterprises and globe-spanning infrastructure, we can't evaluate it with clickbait and synthetic slop. We need benchmarks that test for intelligence and sophistication.

This is our definitive ranking of models, measured by their capacity for rigorous reasoning and real-world mastery. Discover which labs are leading the frontier.
Creative, Business, and Everyday writing

Hemingway-bench

Stop rewarding slop. We take real-world writing tasks and put them in front of master wordsmiths. Our goal: to push AI writing from two-second vibes to genuine nuance and impact.

Rank  Model              Elo score (95% CI)
1     Gemini 3.1 Pro     1087 (1068-1105)
2     Gemini 3 Flash     1079 (1062-1095)
3     Gemini 3 Pro       1074 (1051-1097)
4     Opus 4.7           1057 (1036-1078)
5     GPT-5.5            1054 (1032-1076)
6     Opus 4.6           1054 (1035-1073)
7     DeepSeek V4 Pro    1039 (1017-1060)
8     Opus 4.5           1038 (1019-1057)
9     DeepSeek V4 Flash  1021 (999-1042)
10    GPT-5.2 Chat       1018 (1001-1035)
11    Kimi K2.5          1018 (1000-1035)
12    Sonnet 4.6         1014 (995-1032)
View full leaderboard
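The Elo scores and confidence intervals above come from pairwise preference judgments. As a minimal sketch (assuming a standard online Elo update with K=32 and a bootstrap over battles; the real pipeline may differ), ratings can be computed like this:

```python
import random

K = 32  # update step size (an assumption; real leaderboards tune this)

def expected(r_a, r_b):
    # Probability that A is preferred over B under the Elo model
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def run_elo(battles, init=1000.0):
    # battles: list of (model_a, model_b, winner), winner in {"a", "b"}
    ratings = {}
    for a, b, winner in battles:
        ra = ratings.setdefault(a, init)
        rb = ratings.setdefault(b, init)
        score_a = 1.0 if winner == "a" else 0.0
        delta = K * (score_a - expected(ra, rb))
        ratings[a] = ra + delta
        ratings[b] = rb - delta
    return ratings

def bootstrap_ci(battles, model, n=200, seed=0):
    # 95% CI: resample battles with replacement, re-run Elo each time,
    # and take the 2.5th and 97.5th percentiles of the model's rating
    rng = random.Random(seed)
    samples = sorted(
        run_elo([rng.choice(battles) for _ in battles]).get(model, 1000.0)
        for _ in range(n)
    )
    return samples[int(0.025 * n)], samples[int(0.975 * n)]
```

For example, feeding in a set of battles where one model wins 80% of its matchups will place it above its opponent, with a bootstrap interval that narrows as the number of battles grows.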
Enterprise Agents in Realistic RL Environments

EnterpriseBench: CoreCraft

Stop testing models in tiny, self-contained environments. We built CoreCraft, a large-scale startup world, and deployed AI agents to solve real tasks. Our goal: to move agents beyond the cleanliness of the lab and into the chaos of enterprise reality.

Rank  Model                      Score
1     GPT-5.5                    52.8%
2     GPT-5.5 (xHigh reasoning)  51.3%
3     GPT-5.5 (High reasoning)   47.7%
View full leaderboard
Mathematics at the frontier

Riemann-bench

We evaluate AI models on advanced mathematical problems requiring deep reasoning and novel synthesis. Our benchmark features problems from cutting-edge mathematics, sourced from leading mathematicians – Ivy League professors, IMO medalists with PhDs, and graduate students at the top of their field – as they arise in the course of their research.

Rank  Model                      Score
1     Gemini 3.1 Pro             6%
2     Claude Opus 4.6            6%
3     Claude Opus 4.7            5%
4     GPT-5.5 (xHigh reasoning)  5%
5     Gemini 3 Pro               4%
6     Kimi K2.5                  4%
7     DeepSeek v3.2 Thinking     3%
8     Claude Opus 4.5            2%
9     GPT-5.2                    2%
10    DeepSeek V4 Flash          1.2%
11    DeepSeek V4 Pro            0%
View full leaderboard
Your $100B model can't read a PDF

GDP.pdf

Can frontier models master the documents that run the world? GDP.pdf is a multimodal reasoning benchmark built from real-world prompts and PDFs pulled directly from expert professional workflows.

Rank  Model                      Score
1     GPT-5.5 (xHigh reasoning)  21%
2     Gemini 3.1 Pro             15%
3     Claude Opus 4.7            14%
4     GPT-5.4                    11%
5     Claude Opus 4.6            11%
6     Grok-4.20 Beta             7%
7     Kimi K2.5                  6%
8     Mistral Large 3            3%
9     Nova 2 Pro                 1%
View full leaderboard

Stay up-to-date on new leaderboards