Will prioritizing corrigible AI produce safe results?
45% chance
This market is conditional on the market "Will the company that produces the first AGI have prioritized Corrigibility?" (https://manifold.markets/PeterMcCluskey/will-the-company-that-produces-the). This market will resolve as N/A if that market resolves as NO or N/A.
If that market resolves as YES, this market will resolve one year later, matching the resolution of the market "Will AGI create a consensus among experts on how to safely increase AI capabilities?" (https://manifold.markets/PeterMcCluskey/will-agi-create-a-consensus-among-e).
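For clarity, the resolution rule can be summarized in a short sketch. The function and parameter names below are illustrative only, not part of Manifold's API:

```python
def resolve_conditional_market(parent_resolution: str, consensus_resolution: str) -> str:
    """Sketch of this market's resolution rule.

    parent_resolution: resolution of "Will the company that produces the
    first AGI have prioritized Corrigibility?" ("YES", "NO", or "N/A").
    consensus_resolution: resolution, one year after a parent YES, of
    "Will AGI create a consensus among experts on how to safely
    increase AI capabilities?".
    """
    if parent_resolution in ("NO", "N/A"):
        # Conditional market: voided if the condition does not hold.
        return "N/A"
    if parent_resolution == "YES":
        # Resolves one year later, to the same result as the consensus market.
        return consensus_resolution
    raise ValueError("parent market has not resolved yet")
```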
I will not trade in this market.
This question is managed and resolved by Manifold.
Related questions
Is slowing down AGI good for AI safety? [resolves to poll] (83% chance)
Is RLHF good for AI safety? [resolves to poll] (45% chance)
Will Anthropic be the best on AI safety among major AI labs at the end of 2025? (84% chance)
Will AGI create a consensus among experts on how to safely increase AI capabilities? (31% chance)
By 2027 will there be a well-accepted training procedure(s) for making AI honest? (15% chance)
Will I still consider improving AI X-Safety my top priority on EOY 2024? (73% chance)
Will AI be considered safe in 2030? (resolves to poll) (72% chance)
How much should society prioritize AI safety research, relative to how much it is currently prioritized? (POLL)
Will the ARC Prize Foundation succeed at making a new benchmark that is easy for humans but still hard for the best AIs? (82% chance)
AI honesty #2: by 2027 will we have a reasonable outer alignment procedure for training honest AI? (25% chance)