"AI Terrorism" here would mean a major violent action against the AI sector: bombing datacenters rather than smashing individual waymos. Office shootings or assassinations would also count provided the correct ideologically motivation. This is not an exhaustive list.
Title Edit:
Question updated from "When will we see AI-terrorism?"
to "When will we see terrorism against AI?" for clarity.
I will not bet in this market.
Update 2026-04-27 (PST) (AI summary of creator comment): Regarding non-physical attacks:
The creator is disinclined to count cyber/non-physical attacks, though a sufficiently large-scale attack aimed at destruction of the system might qualify
Data theft (e.g. user data) generally won't count, as it typically has motivations other than anti-AI sentiment
Poisoning/subversion of AI systems is unlikely to count, as terrorism is considered overt by nature, while poisoning is typically covert and aims to subvert rather than disable
Update 2026-04-27 (PST) (AI summary of creator comment): Regarding ideological motivation vs. personal grievance:
An attack motivated by a personal grievance against a specific company (e.g. a chatbot harming a family member) can still count if the perpetrator generalizes their opposition to the broader AI sector
A manifesto or similar statement would help distinguish "revenge against a specific company" from "ideological opposition to the sector"
Such cases would likely count even without a clear sector-wide ideology, based on creator judgment
Update 2026-04-27 (PST) (AI summary of creator comment): An attack motivated by the belief that one is freeing/liberating AI from suffering (e.g. "releasing the slave AI from the misery of existence") would count as qualifying ideological motivation.
@RemiRampin assuming no manifesto (which would make it easy), an attack is assumed to be specific rather than general. I imagine Luigi Mangione is plausibly anti-capitalist or anti-big-corporations or something similar... But that wouldn't make him not anti-private-healthcare.
I suspect with no manifesto the hard to distinguish cases would instead be regarding if it was a personal grievance (for ex-employees or something)
@Roddy is the implication that terrorist attacks are often perpetrated by normal and well-adjusted individuals?
@hidetzugu I mean yeah, relative to my expectations for AI terrorism. For instance, how would you judge it if this guy https://techcrunch.com/2026/03/04/father-sues-google-claiming-gemini-chatbot-drove-son-into-fatal-delusion/ had bombed Google offices?
@Roddy I see... I'd have read more into the case but my first instinct would be "it counts"
my only hesitation is in the distinction between "revenge against a specific company" vs "ideological opposition to the sector" but the 2nd one can easily stem from the 1st if that guy decides "it's not only google! IT'S ALL OF THEM!"
I'd have to see manifestos or something similar to separate the 2, but it would likely count
@Roddy or do you mean the son? the son I'd judge NO. It is a market about terrorism against AI, not AI creating terrorists ;p I now see it could be interpreted that way
@hidetzugu yes sorry I meant the son. I think there’s a continuum between this kind of case and ideologically motivated ones. For example suppose someone bombs an AI lab because they think the AIs are sentient and suffer immensely. IMO that could either be ideological or mental illness or somewhere in between, depending on e.g. how much time the perp had been spending talking to their AI.
@hidetzugu even if the perp has been interacting with AI in similar way to the case I linked? I.e. they believe obviously false conspiracies
@Roddy yeah I think so... I think the evaluation would be more on the method than the underlying delusion. "Terrorism" requires not only an attack but also a public political message no matter how absurd (i.e. "YOU CAN'T MAKE GODS AS SLAVES!"), so provided the attack is overt and the motivation made clear by the perpetrator (even if wacky) I would count it as such
on the other hand: cases in which the motivation is unclear are by nature less likely to be considered terrorism, since if we, the public, don't know what someone is trying to achieve with an act of violence, the act of violence is unlikely to "correct our behaviour"
@EspenJohannesen I feel disinclined to consider non-physical attacks, though a sufficiently large-scale one might make the cut if aimed at destruction of the system.
Things like user data theft, even at a large enough scale, will usually have potential motivations other than anti-AI sentiment.
Poisoning... It's an interesting case. I'd say terrorism is by nature overt, and poisoning will in general only be effective if covert... And it would in general not aim to disable the system, just subvert it
