MANIFOLD
When will we see terrorism against AI?
2032: 26%
2026: 36%
2027: 47%
2028: 43%
2029: 58%
2030: 58%
2031: 65%
"AI Terrorism" here would mean a major violent action against the AI sector: bombing datacenters rather than smashing individual Waymos. Office shootings or assassinations would also count, provided the correct ideological motivation. This is not an exhaustive list.

Title Edit:
Question updated from "When will we see AI-terrorism?"
to "When will we see terrorism against AI?" for clarity.


I will not bet in this market.

  • Update 2026-04-27 (PST) (AI summary of creator comment): Regarding non-physical attacks:

    • The creator is disinclined to count cyber/non-physical attacks, though a sufficiently large-scale attack aimed at destruction of the system might qualify

    • Data theft (e.g. user data) generally won't count, as it typically has motivations other than anti-AI sentiment

    • Poisoning/subversion of AI systems is unlikely to count, as terrorism is considered overt by nature, while poisoning is typically covert and aims to subvert rather than disable

  • Update 2026-04-27 (PST) (AI summary of creator comment): Regarding ideological motivation vs. personal grievance:

    • An attack motivated by a personal grievance against a specific company (e.g. a chatbot harming a family member) can still count if the perpetrator generalizes their opposition to the broader AI sector

    • A manifesto or similar statement would help distinguish "revenge against a specific company" from "ideological opposition to the sector"

    • Such cases would likely count even without a clear sector-wide ideology, based on creator judgment

  • Update 2026-04-27 (PST) (AI summary of creator comment): An attack motivated by the belief that one is freeing/liberating AI from suffering (e.g. "releasing the slave AI from the misery of existence") would count as qualifying ideological motivation.


What about something that affects companies or data centers but is motivated by e.g. climate change concerns? How do you distinguish between "anti-AI" and a broader "anti-big-tech" sentiment?

@RemiRampin assuming no manifesto (which would make it easy), an attack is assumed to be specific rather than general. I imagine Luigi Mangione is plausibly anti-capitalist or anti-big-corporations or something similar... but that wouldn't make him not anti-private-healthcare.

I suspect that with no manifesto, the hard-to-distinguish cases would instead be those involving a personal grievance (for ex-employees or the like)

Extended the range a few years

Where are you drawing the line between ideological motivation and mental illness? I’m thinking about AI psychosis in particular.

@Roddy is the implication that terrorist attacks are often perpetrated by normal and well-adjusted individuals?

@Roddy I see... I'd have read more into the case but my first instinct would be "it counts"

my only hesitation is the distinction between "revenge against a specific company" vs "ideological opposition to the sector", but the second can easily stem from the first if that guy decides "it's not only Google! IT'S ALL OF THEM!"

I'd have to see manifestos or something similar to separate the two, but it would likely count

@Roddy or do you mean the son? the son I'd judge NO. It is a market about terrorism against AI, not AI creating terrorists ;p I now see it could be interpreted that way

@hidetzugu yes sorry I meant the son. I think there’s a continuum between this kind of case and ideologically motivated ones. For example suppose someone bombs an AI lab because they think the AIs are sentient and suffer immensely. IMO that could either be ideological or mental illness or somewhere in between, depending on e.g. how much time the perp had been spending talking to their AI.

@Roddy I would say "releasing the slave AI from the misery of existence" would fit my criteria

@hidetzugu even if the perp has been interacting with AI in a similar way to the case I linked? I.e. they believe obviously false conspiracies

@Roddy yeah I think so... I think the evaluation would be more on the method than the underlying delusion. "Terrorism" requires not only an attack but also a public political message no matter how absurd (i.e. "YOU CAN'T MAKE GODS AS SLAVES!"), so provided the attack is overt and the motivation made clear by the perpetrator (even if wacky) I would count it as such

On the other hand: cases in which the motivation is unclear are by nature less likely to be considered terrorism, since if we the public don't know what someone is trying to achieve with an act of violence, that act is unlikely to "correct our behaviour"

Yeah. Or hacking/poisoning/bending the system, or getting user data...

@EspenJohannesen I feel disinclined to consider non-physical attacks, though a sufficiently large-scale one might make the cut if aimed at destruction of the system.

Things like user-data theft, even at a large enough scale, will usually have potential motivations other than anti-AI sentiment.

Poisoning... it's an interesting case. I'd say terrorism is by nature overt, and poisoning will in general only be effective if covert... And it would in general not aim to disable the system, just subvert it