Will "Training AGI in Secret would be Unsafe and Un..." make the top fifty posts in LessWrong's 2025 Annual Review?
11% chance
As part of LessWrong's Annual Review, the community nominates, writes reviews, and votes on the most valuable posts. Posts are reviewable once they have been up for at least 12 months, and the 2025 Review resolves in February 2027.
This market will resolve to 100% if the post "Training AGI in Secret would be Unsafe and Unethical" is one of the top fifty posts of the 2025 Review, and 0% otherwise. The market was initialized to 14%.
This question is managed and resolved by Manifold.
Related questions
Will "A short course on AGI safety from the GDM Ali..." make the top fifty posts in LessWrong's 2025 Annual Review?
14% chance
Will "My AGI safety research—2025 review, ’26 plans" make the top fifty posts in LessWrong's 2025 Annual Review?
9% chance
Will "AGI Safety & Alignment @ Google DeepMind is h..." make the top fifty posts in LessWrong's 2025 Annual Review?
7% chance
Will "Learnings from AI safety course so far" make the top fifty posts in LessWrong's 2025 Annual Review?
14% chance
Will "How AI Is Learning to Think in Secret" make the top fifty posts in LessWrong's 2026 Annual Review?
21% chance
Will "The Problem with Defining an "AGI Ban" by Out..." make the top fifty posts in LessWrong's 2025 Annual Review?
20% chance
Will "Shallow review of technical AI safety, 2025" make the top fifty posts in LessWrong's 2025 Annual Review?
14% chance
Will "Legible vs. Illegible AI Safety Problems" make the top fifty posts in LessWrong's 2025 Annual Review?
49% chance
Will "Third-wave AI safety needs sociopolitical thi..." make the top fifty posts in LessWrong's 2025 Annual Review?
11% chance
Will "Five Hinge‑Questions That Decide Whether AGI ..." make the top fifty posts in LessWrong's 2025 Annual Review?
16% chance