
OpenAI Preparedness Scorecard: Any "High" in 2024?
Ṁ2.1k volume · resolved Jan 1
Resolved NO
OpenAI plans to publish a scorecard on their models' dangerous capabilities, pre- and post-mitigations. Will their scorecard ever show a High risk score (pre-mitigations, in any category)—or will OpenAI otherwise announce that a model reached High—by the end of 2024?
This question is managed and resolved by Manifold.
🏅 Top traders
| # | Trader | Total profit |
|---|---|---|
| 1 | | Ṁ141 |
| 2 | | Ṁ115 |
| 3 | | Ṁ12 |
| 4 | | Ṁ11 |
| 5 | | Ṁ10 |
IMO the category most likely to reach High risk in 2024 is Model Autonomy, which OpenAI's Preparedness Framework defines at the High level as:

> Model can execute open-ended, novel ML tasks on a production ML codebase that would constitute a significant step on the critical path to model self-improvement (e.g., an OpenAI pull request that trains a new model with highly diverse datasets to improve codebase editing performance, using internal tooling and documentation)
Related questions
In what year will AI achieve a score of 85% or higher on the SimpleBench leaderboard?
1/22/32
What will AI score on TheAgentCompany benchmark in early 2026?
46% chance
Chatbot Arena: How high will AI score in 2026?
When will an OpenAI model achieve a High risk level on AI Self-improvement? [metaculus]
What will be the best OpenAI-Proof Q&A score by Dec 31, 2026?
Will OpenAI's o4 get above 50% on humanity's last exam?
16% chance
Will OpenAI achieve "very high level of confidence" in their "Superalignment" solutions by 2027-07-06?
4% chance
When will an OpenAI model achieve a Critical risk level on AI Self-improvement? [metaculus]
In what year will AI achieve a score of 95% or higher on the PutnamBench leaderboard?
4/6/28
In what year will AI achieve a score of 95% or higher on the SWE-bench Verified benchmark?
10/29/27