
Will @EliezerYudkowsky reverse his opinion on AI safety, before 2030?
9% chance
For example, this would resolve YES if he decides that we should actually try to build ASI even though it poses a great risk to the human race, or if he decides that the creation of ASI does not actually pose a great risk to the human race.
Update 2025-02-25 (PST) (AI summary of creator comment): Key Resolution Update:
- The reversal must include an explicit admission that Yudkowsky was wrong about his previous stance on AI safety.
- Merely adjusting his perspective (e.g., claiming he was only slightly off, or that we just got lucky) will not meet the criteria.
- The explicit admission is the central and decisive component for a valid resolution.
This question is managed and resolved by Manifold.
Related questions
At the beginning of 2035, will Eliezer Yudkowsky still believe that AI doom is coming soon with high probability?
55% chance
By EOY 2026, will Yann LeCun do a 180 on his view of x-risk from AI?
24% chance
Which well-known scientist will Eliezer Yudkowsky have a long recorded conversation with about AI risk, before 2026?
Will Yudkowsky agree that his "death with dignity" post overstated the risk of extinction from AI, by end of 2029?
18% chance
Will Eliezer Yudkowsky work for any major AI-related entity by 2027?
20% chance
Will Eliezer Yudkowsky become romantically involved with an AI before 2030?
15% chance
Will AI be considered safe in 2030? (resolves to poll)
72% chance
It's end of 2025, a global AI moratorium is in effect, Eliezer Yudkowsky endorses it. What were its decisive causes?
If Elon Musk does something as a result of his AI angst by 2025, will Eliezer Yudkowsky judge it to be a positive or neutral initiative (as opposed to negative)?
12% chance
Will Yudkowsky claim that he is more than 50% sure that AI will kill everyone no later than 1 year after the claim?
30% chance