
Will the Gates Foundation give more than $100mn to AI Safety work before 2025?
Ṁ5.7k volume · Resolved NO on Jan 22
This question is managed and resolved by Manifold.
🏅 Top traders
| # | Trader | Total profit |
|---|---|---|
| 1 | | Ṁ302 |
| 2 | | Ṁ90 |
| 3 | | Ṁ70 |
| 4 | | Ṁ67 |
| 5 | | Ṁ19 |
Comments
@NathanpmYoung Adverse consequences from the militarization of AI, where the focus of the grant is not on existential or near-existential risk. That seems to sit somewhere between the core "AI ethics" and core "AI safety" camps. Basically, anything where the consequence is non-localized death but is not an existential or near-existential risk could be unclear.
Related questions
Will non-profit funding for AI safety reach 100 billion US dollars in a year before 2030?
38% chance
Will the US Federal Government spend more than 1/1000th of its budget on AI Safety by 2028?
12% chance
I make a contribution to AI safety that is endorsed by at least one high profile AI alignment researcher by the end of 2026
40% chance
Will National Governments Collectively Give More than $100M a year in funding for AI Alignment by 2030?
81% chance
Will I work (at some point) at a top AI lab on safety in the next 5 years?
74% chance
Will a >$10B AI alignment megaproject start work before 2030?
37% chance
Will a Turing Award be given out for work on AI alignment or existential safety by 2040?
79% chance
Will mass-movement political activism for AI regulation (such as "PauseAI") get $10m+ from EA funders before 2030?
71% chance
Will xAI stop working on AI research by 2029?
24% chance
$100B AI training cluster before 2029?
88% chance