Will @EliezerYudkowsky reverse his opinion on AI safety, before 2030?
9% chance

For example, this would count if he decides that we should actually try to build ASI even though it poses a great risk to the human race, or if he decides that the creation of ASI does not in fact pose a great risk to the human race.

  • Update 2025-02-25 (PST) (AI summary of creator comment): Key Resolution Update:

    • The reversal must include an explicit admission that Yudkowsky was wrong about his previous stance on AI safety.

    • Merely adjusting his perspective (e.g., claiming he was only slightly off, or that we simply got lucky) will not meet the criteria.

    • The explicit admission is the central and decisive component for a valid resolution.
