Updated Dec. 07, 2024
Ranking | Organization | Model | Dataset Score (N=316) | Market Score (resolved) (N=25) | Market Score (unresolved) (N=52) | Market Score (overall) (N=77) | Overall Resolved Score (N=341) | Overall Score (N=393) | Overall Score 95% CI | Pairwise p-value comparing to No. 1 (bootstrapped) | Pct. more accurate than No. 1 | Pct. Imputed |
---|---|---|---|---|---|---|---|---|---|---|---|---
1 | ForecastBench | Superforecaster median forecast | 0.123 | 0.098 | 0.044 | 0.062 | 0.110 | 0.092 | [0.073, 0.112] | | 0% | 0%
2 | ForecastBench | Public median forecast | 0.156 | 0.150 | 0.038 | 0.074 | 0.153 | 0.115 | [0.095, 0.134] | <0.001 | 24% | 0% |
3 | Anthropic | Claude-3-5-Sonnet-20240620 (scratchpad with freeze values) | 0.144 | 0.190 | 0.043 | 0.091 | 0.167 | 0.118 | [0.095, 0.14] | <0.001 | 31% | 0% |
4 | Anthropic | Claude-3-5-Sonnet-20240620 (scratchpad with news with freeze values) | 0.150 | 0.194 | 0.050 | 0.097 | 0.172 | 0.123 | [0.101, 0.145] | <0.001 | 29% | 0% |
5 | OpenAI | GPT-4 (zero shot with freeze values) | 0.167 | 0.164 | 0.041 | 0.081 | 0.166 | 0.124 | [0.102, 0.146] | <0.001 | 32% | 0% |
6 | OpenAI | GPT-4-Turbo-2024-04-09 (zero shot with freeze values) | 0.171 | 0.164 | 0.039 | 0.080 | 0.168 | 0.125 | [0.104, 0.147] | <0.001 | 32% | 0% |
7 | Anthropic | Claude-3-5-Sonnet-20240620 (zero shot with freeze values) | 0.152 | 0.223 | 0.043 | 0.102 | 0.187 | 0.127 | [0.1, 0.154] | <0.001 | 31% | 0% |
8 | OpenAI | GPT-4o (scratchpad with news with freeze values) | 0.171 | 0.143 | 0.058 | 0.086 | 0.157 | 0.128 | [0.11, 0.147] | <0.001 | 26% | 0% |
9 | Anthropic | Claude-3-5-Sonnet-20240620 (scratchpad) | 0.144 | 0.218 | 0.064 | 0.114 | 0.181 | 0.129 | [0.107, 0.151] | <0.001 | 29% | 0% |
10 | OpenAI | GPT-4o (scratchpad with freeze values) | 0.168 | 0.172 | 0.057 | 0.094 | 0.170 | 0.131 | [0.111, 0.151] | <0.001 | 27% | 0% |
11 | Anthropic | Claude-3-5-Sonnet-20240620 (scratchpad with news) | 0.150 | 0.205 | 0.076 | 0.117 | 0.177 | 0.134 | [0.113, 0.155] | <0.001 | 26% | 0% |
12 | Anthropic | Claude-3-5-Sonnet-20240620 (superforecaster with news 3) | 0.165 | 0.163 | 0.075 | 0.104 | 0.164 | 0.134 | [0.115, 0.154] | <0.001 | 25% | 3% |
13 | Anthropic | Claude-3-Opus-20240229 (zero shot with freeze values) | 0.173 | 0.207 | 0.045 | 0.098 | 0.190 | 0.135 | [0.111, 0.16] | <0.001 | 25% | 0% |
14 | Anthropic | Claude-3-5-Sonnet-20240620 (superforecaster with news 1) | 0.158 | 0.169 | 0.086 | 0.113 | 0.164 | 0.136 | [0.115, 0.157] | <0.001 | 26% | 0% |
15 | OpenAI | GPT-4o (scratchpad) | 0.168 | 0.167 | 0.075 | 0.105 | 0.167 | 0.136 | [0.119, 0.154] | <0.001 | 25% | 0% |
16 | Anthropic | Claude-3-Opus-20240229 (scratchpad with freeze values) | 0.167 | 0.163 | 0.081 | 0.107 | 0.165 | 0.137 | [0.119, 0.155] | <0.001 | 23% | 0% |
17 | Mistral AI | Mistral-Large-Latest (zero shot with freeze values) | 0.176 | 0.205 | 0.048 | 0.099 | 0.191 | 0.138 | [0.114, 0.161] | <0.001 | 23% | 0% |
18 | Mistral AI | Mistral-Large-Latest (scratchpad with freeze values) | 0.165 | 0.165 | 0.085 | 0.111 | 0.165 | 0.138 | [0.121, 0.155] | <0.001 | 23% | 0% |
19 | OpenAI | GPT-4o (scratchpad with news) | 0.171 | 0.196 | 0.065 | 0.108 | 0.183 | 0.140 | [0.119, 0.161] | <0.001 | 22% | 0% |
20 | Google | Gemini-1.5-Pro (scratchpad with news with freeze values) | 0.174 | 0.194 | 0.072 | 0.112 | 0.184 | 0.143 | [0.124, 0.162] | <0.001 | 22% | 0%
21 | Google | Gemini-1.5-Pro (scratchpad) | 0.170 | 0.214 | 0.070 | 0.117 | 0.192 | 0.143 | [0.126, 0.161] | <0.001 | 23% | 0%
22 | OpenAI | GPT-4 (scratchpad with freeze values) | 0.179 | 0.197 | 0.066 | 0.109 | 0.188 | 0.144 | [0.125, 0.163] | <0.001 | 22% | 0% |
23 | OpenAI | GPT-4-Turbo-2024-04-09 (scratchpad with freeze values) | 0.183 | 0.165 | 0.077 | 0.106 | 0.174 | 0.144 | [0.122, 0.167] | <0.001 | 26% | 0% |
24 | Google | Gemini-1.5-Pro (scratchpad with freeze values) | 0.170 | 0.192 | 0.083 | 0.118 | 0.181 | 0.144 | [0.126, 0.162] | <0.001 | 23% | 0%
25 | OpenAI | GPT-4-Turbo-2024-04-09 (zero shot) | 0.171 | 0.208 | 0.075 | 0.118 | 0.190 | 0.145 | [0.125, 0.164] | <0.001 | 22% | 0% |
26 | Meta | Llama-3-70b-Chat-Hf (scratchpad with freeze values) | 0.193 | 0.145 | 0.074 | 0.097 | 0.169 | 0.145 | [0.128, 0.161] | <0.001 | 25% | 0% |
27 | Anthropic | Claude-3-5-Sonnet-20240620 (zero shot) | 0.152 | 0.259 | 0.080 | 0.138 | 0.205 | 0.145 | [0.118, 0.171] | <0.001 | 24% | 0% |
28 | Anthropic | Claude-3-Opus-20240229 (superforecaster with news 1) | 0.161 | 0.230 | 0.079 | 0.128 | 0.196 | 0.145 | [0.124, 0.166] | <0.001 | 23% | 0% |
29 | Anthropic | Claude-2.1 (scratchpad with freeze values) | 0.213 | 0.089 | 0.075 | 0.079 | 0.151 | 0.146 | [0.129, 0.163] | <0.001 | 26% | 23% |
30 | Anthropic | Claude-3-Opus-20240229 (scratchpad) | 0.167 | 0.190 | 0.095 | 0.126 | 0.179 | 0.147 | [0.129, 0.164] | <0.001 | 22% | 0% |
31 | OpenAI | GPT-4o (zero shot with freeze values) | 0.202 | 0.202 | 0.039 | 0.092 | 0.202 | 0.147 | [0.123, 0.171] | <0.001 | 28% | 0% |
32 | Meta | Llama-3-70b-Chat-Hf (zero shot with freeze values) | 0.187 | 0.225 | 0.051 | 0.107 | 0.206 | 0.147 | [0.122, 0.172] | <0.001 | 25% | 0% |
33 | OpenAI | GPT-4-Turbo-2024-04-09 (scratchpad) | 0.183 | 0.188 | 0.076 | 0.112 | 0.185 | 0.147 | [0.131, 0.164] | <0.001 | 23% | 0% |
34 | Mistral AI | Mixtral-8x22B-Instruct-V0.1 (scratchpad with freeze values) | 0.187 | 0.195 | 0.068 | 0.109 | 0.191 | 0.148 | [0.131, 0.165] | <0.001 | 21% | 0% |
35 | Qwen | Qwen1.5-110B-Chat (zero shot with freeze values) | 0.197 | 0.201 | 0.051 | 0.100 | 0.199 | 0.148 | [0.127, 0.17] | <0.001 | 21% | 0% |
36 | OpenAI | GPT-4 (scratchpad) | 0.179 | 0.174 | 0.090 | 0.117 | 0.176 | 0.148 | [0.133, 0.163] | <0.001 | 18% | 0% |
37 | Google | Gemini-1.5-Pro (scratchpad with news) | 0.174 | 0.171 | 0.100 | 0.123 | 0.172 | 0.148 | [0.129, 0.167] | <0.001 | 22% | 0%
38 | Google | Gemini-1.5-Pro (zero shot with freeze values) | 0.188 | 0.237 | 0.047 | 0.109 | 0.212 | 0.148 | [0.122, 0.175] | <0.001 | 25% | 0%
39 | OpenAI | GPT-4-Turbo-2024-04-09 (scratchpad with news with freeze values) | 0.188 | 0.167 | 0.084 | 0.111 | 0.177 | 0.150 | [0.128, 0.172] | <0.001 | 26% | 0% |
40 | Google | Gemini-1.5-Flash (zero shot with freeze values) | 0.191 | 0.230 | 0.051 | 0.109 | 0.210 | 0.150 | [0.122, 0.178] | <0.001 | 24% | 0%
41 | Mistral AI | Mixtral-8x22B-Instruct-V0.1 (zero shot with freeze values) | 0.192 | 0.214 | 0.066 | 0.114 | 0.203 | 0.153 | [0.127, 0.179] | <0.001 | 25% | 0% |
42 | Anthropic | Claude-2.1 (scratchpad) | 0.213 | 0.113 | 0.085 | 0.094 | 0.163 | 0.154 | [0.135, 0.172] | <0.001 | 22% | 23% |
43 | ForecastBench | Imputed Forecaster | 0.250 | 0.111 | 0.033 | 0.058 | 0.181 | 0.154 | [0.137, 0.172] | <0.001 | 27% | 100% |
44 | Mistral AI | Mistral-Large-Latest (scratchpad) | 0.165 | 0.211 | 0.113 | 0.145 | 0.188 | 0.155 | [0.137, 0.172] | <0.001 | 22% | 0% |
45 | Qwen | Qwen1.5-110B-Chat (scratchpad with freeze values) | 0.191 | 0.198 | 0.082 | 0.120 | 0.195 | 0.155 | [0.138, 0.172] | <0.001 | 19% | 0% |
46 | OpenAI | GPT-4 (zero shot) | 0.167 | 0.195 | 0.118 | 0.143 | 0.181 | 0.155 | [0.137, 0.174] | <0.001 | 23% | 0% |
47 | Anthropic | Claude-2.1 (zero shot with freeze values) | 0.220 | 0.191 | 0.044 | 0.092 | 0.205 | 0.156 | [0.134, 0.178] | <0.001 | 28% | 0% |
48 | Qwen | Qwen1.5-110B-Chat (scratchpad with news with freeze values) | 0.184 | 0.207 | 0.091 | 0.129 | 0.196 | 0.156 | [0.138, 0.175] | <0.001 | 23% | 0% |
49 | OpenAI | GPT-4-Turbo-2024-04-09 (scratchpad with news) | 0.188 | 0.205 | 0.088 | 0.126 | 0.197 | 0.157 | [0.136, 0.178] | <0.001 | 24% | 0% |
50 | Mistral AI | Mixtral-8x22B-Instruct-V0.1 (scratchpad with news with freeze values) | 0.200 | 0.196 | 0.079 | 0.117 | 0.198 | 0.158 | [0.14, 0.177] | <0.001 | 23% | 0% |
51 | Anthropic | Claude-3-5-Sonnet-20240620 (scratchpad with SECOND news) | 0.204 | 0.176 | 0.083 | 0.113 | 0.190 | 0.159 | [0.139, 0.178] | <0.001 | 18% | 0% |
52 | Google | Gemini-1.5-Pro (superforecaster with news 3) | 0.190 | 0.166 | 0.110 | 0.128 | 0.178 | 0.159 | [0.14, 0.178] | <0.001 | 21% | 0%
53 | Mistral AI | Mixtral-8x22B-Instruct-V0.1 (scratchpad with news) | 0.200 | 0.195 | 0.082 | 0.119 | 0.198 | 0.160 | [0.142, 0.177] | <0.001 | 22% | 0% |
54 | OpenAI | GPT-4o (zero shot) | 0.202 | 0.189 | 0.085 | 0.119 | 0.195 | 0.160 | [0.14, 0.18] | <0.001 | 21% | 0% |
55 | OpenAI | GPT-4o (superforecaster with news 3) | 0.205 | 0.186 | 0.083 | 0.116 | 0.195 | 0.160 | [0.141, 0.18] | <0.001 | 21% | 6% |
56 | Anthropic | Claude-3-Opus-20240229 (zero shot) | 0.173 | 0.257 | 0.096 | 0.148 | 0.215 | 0.161 | [0.135, 0.186] | <0.001 | 19% | 0% |
57 | Google | Gemini-1.5-Flash (scratchpad with freeze values) | 0.209 | 0.190 | 0.076 | 0.113 | 0.199 | 0.161 | [0.14, 0.182] | <0.001 | 20% | 0%
58 | Mistral AI | Mixtral-8x22B-Instruct-V0.1 (scratchpad) | 0.187 | 0.204 | 0.106 | 0.138 | 0.195 | 0.162 | [0.145, 0.179] | <0.001 | 21% | 0% |
59 | Qwen | Qwen1.5-110B-Chat (scratchpad with news) | 0.184 | 0.220 | 0.102 | 0.141 | 0.202 | 0.163 | [0.143, 0.182] | <0.001 | 22% | 0% |
60 | Meta | Llama-3-8b-Chat-Hf (zero shot with freeze values) | 0.204 | 0.188 | 0.091 | 0.122 | 0.196 | 0.163 | [0.139, 0.188] | <0.001 | 24% | 0% |
61 | Mistral AI | Mixtral-8x7B-Instruct-V0.1 (scratchpad) | 0.201 | 0.151 | 0.113 | 0.125 | 0.176 | 0.163 | [0.144, 0.183] | <0.001 | 28% | 14% |
62 | Anthropic | Claude-3-Opus-20240229 (superforecaster with news 3) | 0.182 | 0.189 | 0.126 | 0.147 | 0.186 | 0.164 | [0.143, 0.185] | <0.001 | 21% | 6% |
63 | Anthropic | Claude-3-5-Sonnet-20240620 (superforecaster with news 2) | 0.197 | 0.198 | 0.101 | 0.133 | 0.197 | 0.165 | [0.142, 0.187] | <0.001 | 23% | 0% |
64 | Qwen | Qwen1.5-110B-Chat (scratchpad) | 0.191 | 0.220 | 0.100 | 0.139 | 0.205 | 0.165 | [0.147, 0.183] | <0.001 | 21% | 0% |
65 | Mistral AI | Mistral-Large-Latest (zero shot) | 0.176 | 0.228 | 0.120 | 0.155 | 0.202 | 0.166 | [0.141, 0.191] | <0.001 | 21% | 0% |
66 | Qwen | Qwen1.5-110B-Chat (superforecaster with news 1) | 0.202 | 0.216 | 0.091 | 0.132 | 0.209 | 0.167 | [0.144, 0.19] | <0.001 | 22% | 0% |
67 | Anthropic | Claude-3-Opus-20240229 (scratchpad with news with freeze values) | 0.193 | 0.217 | 0.106 | 0.142 | 0.205 | 0.168 | [0.147, 0.189] | <0.001 | 23% | 0% |
68 | OpenAI | GPT-4-Turbo-2024-04-09 (superforecaster with news 3) | 0.207 | 0.199 | 0.094 | 0.128 | 0.203 | 0.168 | [0.147, 0.189] | <0.001 | 20% | 11% |
69 | Meta | Llama-3-8b-Chat-Hf (scratchpad with freeze values) | 0.225 | 0.168 | 0.085 | 0.112 | 0.196 | 0.169 | [0.154, 0.183] | <0.001 | 23% | 0% |
70 | Mistral AI | Mixtral-8x7B-Instruct-V0.1 (zero shot with freeze values) | 0.220 | 0.248 | 0.054 | 0.117 | 0.234 | 0.169 | [0.139, 0.199] | <0.001 | 30% | 0% |
71 | Meta | Llama-3-70b-Chat-Hf (zero shot) | 0.187 | 0.234 | 0.110 | 0.151 | 0.211 | 0.169 | [0.148, 0.19] | <0.001 | 22% | 0% |
72 | Google | Gemini-1.5-Flash (scratchpad with news with freeze values) | 0.216 | 0.194 | 0.087 | 0.122 | 0.205 | 0.169 | [0.146, 0.192] | <0.001 | 21% | 0%
73 | OpenAI | GPT-4o (superforecaster with news 1) | 0.201 | 0.211 | 0.102 | 0.137 | 0.206 | 0.169 | [0.143, 0.195] | <0.001 | 26% | 0% |
74 | Google | Gemini-1.5-Pro (zero shot) | 0.188 | 0.266 | 0.096 | 0.151 | 0.227 | 0.170 | [0.142, 0.197] | <0.001 | 21% | 0%
75 | Mistral AI | Mistral-Large-Latest (scratchpad with news with freeze values) | 0.205 | 0.199 | 0.104 | 0.135 | 0.202 | 0.170 | [0.15, 0.19] | <0.001 | 22% | 0% |
76 | Anthropic | Claude-3-Opus-20240229 (superforecaster with news 2) | 0.179 | 0.238 | 0.124 | 0.161 | 0.209 | 0.170 | [0.146, 0.194] | <0.001 | 21% | 0% |
77 | Qwen | Qwen1.5-110B-Chat (zero shot) | 0.197 | 0.207 | 0.113 | 0.143 | 0.202 | 0.170 | [0.151, 0.19] | <0.001 | 15% | 1% |
78 | Anthropic | Claude-3-Opus-20240229 (scratchpad with news) | 0.193 | 0.206 | 0.121 | 0.148 | 0.199 | 0.171 | [0.15, 0.192] | <0.001 | 22% | 0% |
79 | Anthropic | Claude-2.1 (scratchpad with news with freeze values) | 0.214 | 0.199 | 0.094 | 0.128 | 0.207 | 0.171 | [0.15, 0.192] | <0.001 | 22% | 3% |
80 | Meta | Llama-3-8b-Chat-Hf (zero shot) | 0.204 | 0.239 | 0.092 | 0.140 | 0.222 | 0.172 | [0.146, 0.198] | <0.001 | 24% | 0% |
81 | ForecastBench | LLM Crowd (gpt-4o, claude-3.5-sonnet, gemini-1.5-pro) with news | 0.232 | 0.173 | 0.082 | 0.112 | 0.203 | 0.172 | [0.155, 0.189] | <0.001 | 16% | 38% |
82 | ForecastBench | LLM Crowd (gpt-4o, claude-3.5-sonnet, gemini-1.5-pro) with news | 0.233 | 0.175 | 0.081 | 0.111 | 0.204 | 0.172 | [0.155, 0.19] | <0.001 | 17% | 38% |
83 | ForecastBench | LLM Crowd (gpt-4o, claude-3.5-sonnet, gemini-1.5-pro) with news | 0.233 | 0.175 | 0.082 | 0.112 | 0.204 | 0.172 | [0.155, 0.189] | <0.001 | 16% | 38% |
84 | Mistral AI | Mixtral-8x22B-Instruct-V0.1 (superforecaster with news 1) | 0.211 | 0.212 | 0.096 | 0.134 | 0.211 | 0.172 | [0.148, 0.197] | <0.001 | 20% | 0% |
85 | Mistral AI | Mixtral-8x22B-Instruct-V0.1 (superforecaster with news 3) | 0.219 | 0.169 | 0.107 | 0.127 | 0.194 | 0.173 | [0.156, 0.19] | <0.001 | 15% | 17% |
86 | Google | Gemini-1.5-Flash (scratchpad) | 0.209 | 0.192 | 0.111 | 0.137 | 0.200 | 0.173 | [0.153, 0.193] | <0.001 | 18% | 0%
87 | OpenAI | GPT-4o (scratchpad with SECOND news) | 0.237 | 0.183 | 0.073 | 0.109 | 0.210 | 0.173 | [0.155, 0.191] | <0.001 | 17% | 2% |
88 | Anthropic | Claude-2.1 (scratchpad with news) | 0.214 | 0.228 | 0.093 | 0.137 | 0.221 | 0.176 | [0.154, 0.197] | <0.001 | 21% | 8% |
89 | OpenAI | GPT-4-Turbo-2024-04-09 (superforecaster with news 1) | 0.198 | 0.233 | 0.115 | 0.153 | 0.215 | 0.176 | [0.152, 0.199] | <0.001 | 21% | 0% |
90 | Mistral AI | Mistral-Large-Latest (scratchpad with news) | 0.205 | 0.191 | 0.127 | 0.148 | 0.198 | 0.176 | [0.156, 0.197] | <0.001 | 21% | 0% |
91 | Google | Gemini-1.5-Pro (superforecaster with news 1) | 0.206 | 0.205 | 0.120 | 0.148 | 0.206 | 0.177 | [0.154, 0.2] | <0.001 | 22% | 0%
92 | Google | Gemini-1.5-Flash (scratchpad with news) | 0.216 | 0.203 | 0.107 | 0.138 | 0.209 | 0.177 | [0.155, 0.199] | <0.001 | 21% | 0%
93 | Meta | Llama-3-70b-Chat-Hf (scratchpad) | 0.193 | 0.218 | 0.135 | 0.162 | 0.205 | 0.177 | [0.161, 0.193] | <0.001 | 24% | 0% |
94 | Google | Gemini-1.5-Flash (zero shot) | 0.191 | 0.258 | 0.124 | 0.168 | 0.224 | 0.179 | [0.152, 0.206] | <0.001 | 19% | 0%
95 | Mistral AI | Mistral-Large-Latest (superforecaster with news 1) | 0.209 | 0.234 | 0.109 | 0.149 | 0.222 | 0.179 | [0.154, 0.204] | <0.001 | 22% | 0% |
96 | Mistral AI | Mixtral-8x22B-Instruct-V0.1 (zero shot) | 0.192 | 0.270 | 0.123 | 0.171 | 0.231 | 0.181 | [0.155, 0.208] | <0.001 | 19% | 0% |
97 | OpenAI | GPT-4o (superforecaster with news 2) | 0.236 | 0.213 | 0.088 | 0.128 | 0.225 | 0.182 | [0.158, 0.207] | <0.001 | 23% | 1% |
98 | Mistral AI | Mixtral-8x7B-Instruct-V0.1 (superforecaster with news 2) | 0.251 | 0.182 | 0.085 | 0.116 | 0.216 | 0.184 | [0.159, 0.208] | <0.001 | 30% | 22% |
99 | Qwen | Qwen1.5-110B-Chat (superforecaster with news 3) | 0.219 | 0.223 | 0.113 | 0.149 | 0.221 | 0.184 | [0.165, 0.203] | <0.001 | 20% | 6% |
100 | Anthropic | Claude-2.1 (zero shot) | 0.220 | 0.236 | 0.108 | 0.150 | 0.228 | 0.185 | [0.165, 0.205] | <0.001 | 19% | 0% |
101 | Mistral AI | Mixtral-8x7B-Instruct-V0.1 (zero shot) | 0.220 | 0.260 | 0.098 | 0.151 | 0.240 | 0.185 | [0.157, 0.214] | <0.001 | 20% | 0% |
102 | Mistral AI | Mistral-Large-Latest (superforecaster with news 2) | 0.209 | 0.248 | 0.124 | 0.165 | 0.228 | 0.187 | [0.163, 0.211] | <0.001 | 21% | 1% |
103 | Mistral AI | Mixtral-8x22B-Instruct-V0.1 (superforecaster with news 2) | 0.235 | 0.197 | 0.111 | 0.138 | 0.216 | 0.187 | [0.168, 0.205] | <0.001 | 23% | 1% |
104 | Meta | Llama-3-8b-Chat-Hf (scratchpad) | 0.225 | 0.226 | 0.112 | 0.149 | 0.226 | 0.187 | [0.17, 0.204] | <0.001 | 23% | 0% |
105 | Mistral AI | Mixtral-8x7B-Instruct-V0.1 (superforecaster with news 1) | 0.247 | 0.194 | 0.099 | 0.130 | 0.221 | 0.189 | [0.163, 0.214] | <0.001 | 27% | 15% |
106 | Mistral AI | Mistral-Large-Latest (superforecaster with news 3) | 0.234 | 0.181 | 0.126 | 0.144 | 0.207 | 0.189 | [0.169, 0.208] | <0.001 | 20% | 7% |
107 | Mistral AI | Mixtral-8x7B-Instruct-V0.1 (scratchpad with freeze values) | 0.201 | 0.214 | 0.159 | 0.177 | 0.208 | 0.189 | [0.161, 0.217] | <0.001 | 25% | 11% |
108 | Anthropic | Claude-2.1 (superforecaster with news 3) | 0.227 | 0.200 | 0.127 | 0.151 | 0.214 | 0.189 | [0.168, 0.21] | <0.001 | 22% | 5% |
109 | OpenAI | GPT-4-Turbo-2024-04-09 (superforecaster with news 2) | 0.224 | 0.231 | 0.117 | 0.154 | 0.227 | 0.189 | [0.165, 0.213] | <0.001 | 24% | 1% |
110 | Meta | Llama-2-70b-Chat-Hf (zero shot with freeze values) | 0.232 | 0.206 | 0.125 | 0.151 | 0.219 | 0.192 | [0.166, 0.217] | <0.001 | 25% | 0% |
111 | Qwen | Qwen1.5-110B-Chat (superforecaster with news 2) | 0.223 | 0.232 | 0.127 | 0.161 | 0.228 | 0.192 | [0.172, 0.213] | <0.001 | 22% | 3% |
112 | Google | Gemini-1.5-Flash (superforecaster with news 2) | 0.223 | 0.243 | 0.128 | 0.165 | 0.233 | 0.194 | [0.169, 0.219] | <0.001 | 20% | 0%
113 | Google | Gemini-1.5-Flash (superforecaster with news 3) | 0.237 | 0.202 | 0.126 | 0.151 | 0.220 | 0.194 | [0.172, 0.216] | <0.001 | 19% | 9%
114 | Meta | Llama-2-70b-Chat-Hf (scratchpad with freeze values) | 0.225 | 0.258 | 0.119 | 0.164 | 0.242 | 0.195 | [0.179, 0.211] | <0.001 | 23% | 0% |
115 | Anthropic | Claude-3-Haiku-20240307 (superforecaster with news 2) | 0.233 | 0.212 | 0.140 | 0.163 | 0.222 | 0.198 | [0.18, 0.217] | <0.001 | 20% | 0% |
116 | Google | Gemini-1.5-Pro (superforecaster with news 2) | 0.229 | 0.240 | 0.136 | 0.170 | 0.234 | 0.199 | [0.171, 0.228] | <0.001 | 21% | 0%
117 | Anthropic | Claude-3-Haiku-20240307 (scratchpad with freeze values) | 0.240 | 0.203 | 0.137 | 0.159 | 0.222 | 0.199 | [0.181, 0.217] | <0.001 | 22% | 0% |
118 | Google | Gemini-1.5-Flash (superforecaster with news 1) | 0.230 | 0.280 | 0.118 | 0.171 | 0.255 | 0.200 | [0.172, 0.228] | <0.001 | 22% | 0%
119 | Mistral AI | Mixtral-8x7B-Instruct-V0.1 (scratchpad with news with freeze values) | 0.294 | 0.140 | 0.091 | 0.107 | 0.217 | 0.201 | [0.181, 0.221] | <0.001 | 25% | 13% |
120 | Mistral AI | Mixtral-8x7B-Instruct-V0.1 (superforecaster with news 3) | 0.247 | 0.172 | 0.146 | 0.155 | 0.209 | 0.201 | [0.179, 0.222] | <0.001 | 24% | 13% |
121 | Anthropic | Claude-3-Haiku-20240307 (scratchpad) | 0.240 | 0.228 | 0.137 | 0.167 | 0.234 | 0.203 | [0.186, 0.221] | <0.001 | 22% | 0% |
122 | Anthropic | Claude-3-Haiku-20240307 (zero shot with freeze values) | 0.295 | 0.166 | 0.087 | 0.112 | 0.230 | 0.204 | [0.186, 0.221] | <0.001 | 20% | 0% |
123 | Anthropic | Claude-2.1 (superforecaster with news 2) | 0.240 | 0.252 | 0.136 | 0.173 | 0.246 | 0.207 | [0.182, 0.231] | <0.001 | 24% | 11% |
124 | Meta | Llama-2-70b-Chat-Hf (scratchpad) | 0.225 | 0.285 | 0.145 | 0.191 | 0.255 | 0.208 | [0.189, 0.227] | <0.001 | 22% | 0% |
125 | OpenAI | GPT-3.5-Turbo-0125 (scratchpad with freeze values) | 0.254 | 0.248 | 0.131 | 0.169 | 0.251 | 0.211 | [0.192, 0.23] | <0.001 | 22% | 0% |
126 | Anthropic | Claude-2.1 (superforecaster with news 1) | 0.263 | 0.241 | 0.122 | 0.161 | 0.252 | 0.212 | [0.188, 0.236] | <0.001 | 23% | 4% |
127 | Anthropic | Claude-3-Haiku-20240307 (scratchpad with news with freeze values) | 0.274 | 0.207 | 0.126 | 0.152 | 0.240 | 0.213 | [0.196, 0.23] | <0.001 | 21% | 0% |
128 | Anthropic | Claude-3-Haiku-20240307 (scratchpad with news) | 0.274 | 0.216 | 0.127 | 0.156 | 0.245 | 0.215 | [0.198, 0.232] | <0.001 | 21% | 0% |
129 | Mistral AI | Mixtral-8x7B-Instruct-V0.1 (scratchpad with news) | 0.294 | 0.185 | 0.117 | 0.139 | 0.240 | 0.217 | [0.193, 0.241] | <0.001 | 24% | 13% |
130 | OpenAI | GPT-3.5-Turbo-0125 (scratchpad) | 0.254 | 0.271 | 0.138 | 0.181 | 0.262 | 0.217 | [0.198, 0.237] | <0.001 | 21% | 0% |
131 | ForecastBench | Always 0.5 | 0.250 | 0.250 | 0.156 | 0.187 | 0.250 | 0.218 | [0.209, 0.228] | <0.001 | 16% | 0% |
132 | Anthropic | Claude-3-Haiku-20240307 (zero shot) | 0.295 | 0.203 | 0.120 | 0.147 | 0.249 | 0.221 | [0.202, 0.239] | <0.001 | 19% | 0% |
133 | Anthropic | Claude-3-Haiku-20240307 (superforecaster with news 3) | 0.267 | 0.229 | 0.171 | 0.190 | 0.248 | 0.228 | [0.21, 0.247] | <0.001 | 19% | 22% |
134 | Meta | Llama-2-70b-Chat-Hf (zero shot) | 0.232 | 0.296 | 0.192 | 0.226 | 0.264 | 0.229 | [0.202, 0.256] | <0.001 | 23% | 1% |
135 | Anthropic | Claude-3-Haiku-20240307 (superforecaster with news 1) | 0.284 | 0.283 | 0.169 | 0.206 | 0.283 | 0.245 | [0.219, 0.27] | <0.001 | 22% | 0% |
136 | OpenAI | GPT-3.5-Turbo-0125 (zero shot with freeze values) | 0.416 | 0.164 | 0.089 | 0.114 | 0.290 | 0.265 | [0.236, 0.293] | <0.001 | 30% | 0% |
137 | ForecastBench | Random Uniform | 0.308 | 0.291 | 0.224 | 0.246 | 0.300 | 0.277 | [0.242, 0.311] | <0.001 | 25% | 0% |
138 | ForecastBench | Always 0 | 0.335 | 0.320 | 0.221 | 0.253 | 0.328 | 0.294 | [0.243, 0.345] | <0.001 | 36% | 0% |
139 | OpenAI | GPT-3.5-Turbo-0125 (zero shot) | 0.416 | 0.282 | 0.169 | 0.206 | 0.349 | 0.311 | [0.28, 0.341] | <0.001 | 22% | 0% |
140 | ForecastBench | Always 1 | 0.665 | 0.680 | 0.591 | 0.620 | 0.672 | 0.642 | [0.588, 0.697] | <0.001 | 24% | 0% |
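The score columns behave like mean Brier scores (lower is better; note that the `Always 0.5` baseline lands exactly on 0.250), and the header describes the confidence intervals and pairwise p-values as bootstrapped. The sketch below is a minimal illustration only, not the ForecastBench scoring pipeline: on simulated data it shows one way such statistics could be computed, covering per-question Brier scores, a percentile-bootstrap 95% CI on the mean, a paired-bootstrap p-value against a reference ("No. 1") forecaster, and the share of questions on which a forecaster beats that reference. All names, the resampling scheme, and the data are assumptions.

```python
# A minimal sketch, NOT the official ForecastBench scoring pipeline.
# It illustrates the kinds of statistics shown in the table above on
# simulated data; the resampling scheme and all inputs are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def brier(probs: np.ndarray, outcomes: np.ndarray) -> np.ndarray:
    """Per-question Brier scores for binary outcomes (lower is better)."""
    return (probs - outcomes) ** 2

def bootstrap_ci(scores: np.ndarray, n_boot: int = 10_000, alpha: float = 0.05):
    """Percentile bootstrap CI for the mean score (assumed scheme)."""
    n = len(scores)
    means = np.array([scores[rng.integers(0, n, n)].mean() for _ in range(n_boot)])
    return np.quantile(means, [alpha / 2, 1 - alpha / 2])

def paired_bootstrap_pvalue(scores: np.ndarray, ref_scores: np.ndarray,
                            n_boot: int = 10_000) -> float:
    """One-sided paired bootstrap: resample question indices and estimate the
    probability that the forecaster scores at least as well as the reference
    (a rough analogue of the table's pairwise p-value column)."""
    diffs = scores - ref_scores  # > 0 on questions where the forecaster is worse
    n = len(diffs)
    boot_means = np.array([diffs[rng.integers(0, n, n)].mean() for _ in range(n_boot)])
    return float(np.mean(boot_means <= 0))

# Toy data: 393 binary questions, one forecaster and one reference forecaster.
outcomes = rng.integers(0, 2, size=393).astype(float)
model_probs = np.clip(outcomes * 0.7 + rng.normal(0.15, 0.20, 393), 0.0, 1.0)
ref_probs = np.clip(outcomes * 0.8 + rng.normal(0.10, 0.15, 393), 0.0, 1.0)

model_scores = brier(model_probs, outcomes)
ref_scores = brier(ref_probs, outcomes)

print("overall score:", round(float(model_scores.mean()), 3))
print("95% CI:", np.round(bootstrap_ci(model_scores), 3))
print("p-value vs. reference:", paired_bootstrap_pvalue(model_scores, ref_scores))
print(f"pct. more accurate than reference: {(model_scores < ref_scores).mean():.0%}")
```

Two caveats visible in the table itself that the toy sketch ignores: the Overall Score appears to weight the dataset and market components equally rather than pooling all 393 questions (for `Always 0.5`, (0.250 + 0.187) / 2 ≈ 0.218), and missing forecasts are imputed (see the Pct. Imputed column and the Imputed Forecaster row).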