
If Artificial General Intelligence has a poor outcome, what will be the reason?
85%: Something from Eliezer's list of lethalities occurs.
63%: Someone successfully aligns AI to cause a poor outcome.
59%: Someone finds a solution to alignment, but fails to communicate it before dangerous AI gains control.
25%: Alignment is impossible.
Inverse of https://manifold.markets/EliezerYudkowsky/if-artificial-general-intelligence-539844cd3ba1?r=S3JhbnR6.
This market will not resolve. It exists primarily for users to explore particular lethalities. Please add responses.
"Poor" = human extinction or mass human suffering.
This question is managed and resolved by Manifold.
Related questions
If Artificial General Intelligence has an okay outcome, what will be the reason?
Why will "If Artificial General Intelligence has an okay outcome, what will be the reason?" resolve N/A?
If Artificial General Intelligence (AGI) has an okay outcome, which of these tags will make up the reason?
[Independent MC Version] If Artificial General Intelligence has an okay outcome, what will be the reasons?
If we survive general artificial intelligence before 2100, what will be the reason?
If we survive general artificial intelligence, what will be the reason?
Will Eliezer's "If Artificial General Intelligence has an okay outcome, what will be the reason?" market resolve N/A? (29% chance)
The probability of "extremely bad outcomes e.g., human extinction" from AGI will be >5% in the next survey of AI experts (79% chance)