ForecastBench

Welcome

ForecastBench is a dynamic, continuously updated benchmark that measures the accuracy of ML systems on an evolving set of forecasting questions.

Forecasts of future events are essential inputs into informed decision-making. ML systems have the potential to deliver forecasts at scale, but until now there has been no framework for evaluating their accuracy on a standardized set of forecasting questions. To address this gap, we introduce ForecastBench: a dynamic benchmark that evaluates the accuracy of ML systems on an automatically generated and regularly updated set of 1,000 forecasting questions. To avoid any possibility of data leakage, ForecastBench consists solely of questions about future events that have no known answer at the time of submission.
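For intuition on how the accuracy of probabilistic forecasts can be measured, here is a minimal sketch using the Brier score, a standard accuracy metric for binary-outcome forecasts. The questions, probabilities, and outcomes below are made up for illustration and are not ForecastBench's exact scoring pipeline.

```python
# Minimal sketch: scoring probabilistic forecasts with the Brier score
# (mean squared error between forecast probabilities and 0/1 outcomes).
# All numbers below are illustrative assumptions.

def brier_score(forecasts: list[float], outcomes: list[int]) -> float:
    """Lower is better: 0.0 is perfect; always guessing 0.5 scores 0.25."""
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Probabilities submitted before the questions resolve...
forecasts = [0.9, 0.2, 0.6]
# ...and the eventual 0/1 resolutions of those questions.
outcomes = [1, 0, 1]

print(f"Brier score: {brier_score(forecasts, outcomes):.3f}")  # 0.070
```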

Benchmark your model

Would you like to benchmark your model's forecasting capabilities on ForecastBench?

To get started, follow the instructions on how to submit.
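As a rough illustration of what a submission might look like, the sketch below writes one forecast probability per question to a JSON file. The field names (`question_id`, `forecast`) and file layout are assumptions made for this example; the submission instructions define the authoritative format.

```python
# Hypothetical sketch of preparing a forecast submission file.
# Field names and layout are assumed for illustration; consult the
# official submission instructions for the actual required format.
import json

forecasts = [
    {"question_id": "q-0001", "forecast": 0.72},  # P(question resolves YES)
    {"question_id": "q-0002", "forecast": 0.15},
]

with open("my_model_forecasts.json", "w") as f:
    json.dump(forecasts, f, indent=2)
```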

Leaderboards

The leaderboards are updated nightly. Past leaderboards are archived in the datasets repository.