OpenAI Preparedness Scorecard: Any "High" in 2024?
Plus
15
Ṁ2058resolved Jan 1
Resolved
NO1D
1W
1M
ALL
OpenAI plans to publish a scorecard on their models' dangerous capabilities, pre- and post-mitigations. Will their scorecard ever show a High risk score (pre-mitigations, in any category)—or will OpenAI otherwise announce that a model reached High—by the end of 2024?
This question is managed and resolved by Manifold.
Get
1,000
and3.00
Sort by:
IMO the most likely category to reach high-risk in 2024 would be model autonomy:
Model can execute open-ended, novel ML tasks on a production ML codebase that would constitute a significant step on the critical path to model self-improvement (e.g., an OpenAI pull request that trains a new model with highly diverse datasets to improve codebase editing performance, using internal tooling and documentation)
Related questions
Related questions
Will an AI system be reported by OpenAI as of December 31st 2025 as having a pre-mitigation score of...
Will an AI score over 80% on FrontierMath Benchmark in 2025
25% chance
10GW AI training run before 2029?
43% chance
Will there be a significant AI safety incident involving OpenAI o1 before April 2025?
6% chance
Will AI image generating models score >= 90% on Winoground by June 1, 2025?
81% chance
Will openAI have the most accurate LLM across most benchmarks by EOY 2024?
37% chance
Will AI top level capabilities generally be judged by question and answer benchmarks in 2029?
25% chance
Will OpenAI still be considered one of the top players in AI by end of 2025
78% chance
Will OpenAI achieve "very high level of confidence" in their "Superalignment" solutions by 2027-07-06?
5% chance
Will an AI score over 30% on FrontierMath Benchmark in 2025
89% chance