Will existential risks from AI still be considered a top problem to work on within the EA community by the end of 2024?
Resolved YES · 10 traders · Ṁ1.1k volume · resolved Jan 11
Resolves according to my own judgement; I'd look at 80,000 Hours, discourse on the EA Forum, what seems to be getting funded, etc.
This question is managed and resolved by Manifold.
🏅 Top traders
| # | Trader | Total profit |
|---|---|---|
| 1 | | Ṁ16 |
| 2 | | Ṁ15 |
| 3 | | Ṁ13 |
| 4 | | Ṁ5 |
| 5 | | Ṁ3 |
Related questions
OpenAI CEO doesn't think existential risk from AI is a serious concern in Jan 2026
27% chance
Will AI xrisk seem to be handled seriously by the end of 2026?
14% chance
Are AI and its effects the most important existential risk, given only public information available in 2021?
89% chance
Will AI existential risk be mentioned in the white house briefing room again by May 2029?
87% chance
By 2028, will I believe that contemporary AIs are aligned (posing no existential risk)?
33% chance
In 2050, will the general consensus among experts be that the concern over AI risk in the 2020s was justified?
72% chance
Will "AI Control May Increase Existential Risk" make the top fifty posts in LessWrong's 2025 Annual Review?
15% chance
Will humanity wipe out AI x-risk before 2030?
11% chance
How much will AI advances impact EA research effectiveness, by 2030?
Will there be a highly risky or catastrophic AI agent proliferation event before 2035?
81% chance