We know good ideas can come from anywhere. So we’re hosting an open research challenge to build the most efficient pretrained model under extreme constraints.
Your goal: minimize held-out loss on a fixed FineWeb dataset while staying within a strict 16 MB artifact limit (weights + training code combined) and a 10-minute training budget on 8×H100s.
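Because the 16 MB cap covers weights and training code combined, it is worth checking your artifact locally before submitting. Here is a minimal sketch of such a check; the file paths and the helper names are illustrative, not the challenge's official evaluation script:

```python
import os

# 16 MB cap on the combined submission artifact (weights + training code).
ARTIFACT_LIMIT_BYTES = 16 * 1024 * 1024

def artifact_size(paths):
    """Total size in bytes of all files you intend to submit."""
    return sum(os.path.getsize(p) for p in paths)

def within_limit(paths, limit=ARTIFACT_LIMIT_BYTES):
    """True if the combined artifact fits under the size cap."""
    return artifact_size(paths) <= limit
```

Run it over every file in your submission; the evaluation scripts in the challenge repo remain the source of truth for what counts toward the limit.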
How to participate
We provide a GitHub repo with a baseline, fixed dataset, and evaluation scripts. Fork it, improve the model within the size and compute caps, and submit a PR with your code, logs, score, and a short write-up. Once approved, your result is merged and the leaderboard updates automatically. You can apply for free compute credits with Runpod (while supplies last).
Why you should enter
This challenge is designed to surface exceptional researchers and engineers we’d want to hire. Standout participants may be invited to interview for job opportunities at OpenAI, and winning approaches may be featured publicly.
If you love solving technical problems like this and are interested in connecting with our recruiting team, tell us about yourself.
Compute Credit Request Form
Our goal is to support participants as they experiment and iterate on their ideas. Through our partnership with Runpod, we are offering compute credits to help you get started and scale promising approaches.