We RL-train language models to reason about future events, such as "Which tech company will the US government buy a >7% stake in by September 2025?", and release all code, data, and weights for our model: OpenForecaster 8B.

Our training makes the 8B model competitive with much larger models like GPT-OSS-120B across judgemental forecasting benchmarks and metrics.
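Judgemental forecasting benchmarks typically score a model's probabilistic predictions against resolved outcomes using a proper scoring rule. As a minimal sketch of one common such metric, the Brier score (an illustration only; the specific benchmarks and metrics we evaluate on are detailed in the paper):

```python
def brier_score(prob: float, outcome: int) -> float:
    """Squared error between a forecast probability and the binary
    resolved outcome. 0.0 is a perfect forecast, 1.0 the worst."""
    return (prob - outcome) ** 2

# A forecaster assigning p=0.8 to an event that resolves YES:
print(round(brier_score(0.8, 1), 4))   # 0.04

# The same p=0.8 forecast when the event resolves NO is penalized more:
print(round(brier_score(0.8, 0), 4))   # 0.64
```

Because the Brier score is a proper scoring rule, a model minimizes its expected score only by reporting its true probability estimate, which is why rules like it are standard for comparing forecasters.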

Announcement: X

Blog: https://openforecaster.github.io

Paper: https://www.alphaxiv.org/abs/2512.25070