Will someone commit violence in the name of AI safety by 2030?
25 traders · Ṁ1,573 volume · Closes 2029 · 65% chance

Some are concerned that the development of a very advanced AI (an artificial superintelligence, or ASI) could lead to human extinction, with the ASI regarding humanity as an obstacle or even an enemy to its goals. The risk that this could happen is known as existential risk, or X-risk; when the risk comes from AI, it is called AI X-risk.

Since literally everything hangs in the balance if one considers this risk real, it would not be entirely surprising if someone radicalized by this perspective were to assassinate an AI researcher or executive, for example.

This market resolves YES if such an assassination happens. It also resolves YES if someone else is killed due to fears of AI-driven human extinction (for example, a politician perceived to be increasing AI X-risk), or if there is a mass-casualty event (e.g., OpenAI HQ gets blown up, or someone dies as collateral damage in a datacenter bombing). Finally, it also resolves YES if a targeted attack intended to slow or stop AI development occurs, even if there are no casualties. A riot also resolves YES if its motivation was AI safety.
