Will "Why Do Some Language Models Fake Alignment Wh..." make the top fifty posts in LessWrong's 2025 Annual Review?
16% chance
As part of LessWrong's Annual Review, the community nominates, writes reviews, and votes on the most valuable posts. Posts are reviewable once they have been up for at least 12 months, and the 2025 Review resolves in February 2027.
This market will resolve to 100% if the post "Why Do Some Language Models Fake Alignment While Others Don't?" is one of the top fifty posts of the 2025 Review, and 0% otherwise. The market was initialized to 14%.
This question is managed and resolved by Manifold.
Related questions
Will "Tips for Empirical Alignment Research" make the top fifty posts in LessWrong's 2024 Annual Review?
16% chance
Will "Alignment Faking Revisited: Improved Classifi..." make the top fifty posts in LessWrong's 2025 Annual Review?
13% chance
Will "What Is The Alignment Problem?" make the top fifty posts in LessWrong's 2025 Annual Review?
15% chance
Will "Alignment Pretraining: AI Discourse Causes Se..." make the top fifty posts in LessWrong's 2025 Annual Review?
23% chance
Will "Alignment remains a hard, unsolved problem" make the top fifty posts in LessWrong's 2025 Annual Review?
24% chance
Will "6 reasons why “alignment-is-hard” discourse s..." make the top fifty posts in LessWrong's 2025 Annual Review?
20% chance
Will "Auditing language models for hidden objectives" make the top fifty posts in LessWrong's 2025 Annual Review?
11% chance
Will "Self-fulfilling misalignment data might be po..." make the top fifty posts in LessWrong's 2025 Annual Review?
14% chance
Will "Towards Alignment Auditing as a Numbers-Go-Up..." make the top fifty posts in LessWrong's 2025 Annual Review?
14% chance
Will "Announcing: OpenAI's Alignment Research Blog" make the top fifty posts in LessWrong's 2025 Annual Review?
6% chance