MANIFOLD
If an AI system is responsible for the deaths of >= 5000 Americans by eoy 2027, which AI company would be most to blame?
17% — Palantir
17% — Other
13% — OpenAI
12% — xAI
10% — Anthropic
10% — Anduril
8% — DeepSeek
7% — Google
6% — Facebook

Clarifications:

  • If there are multiple events where an AI system is responsible for the deaths of >= 5000 Americans by end of 2027, the market will resolve to the AI company responsible for the first such event.

  • If no AI system is responsible for the deaths of >= 5000 Americans by end of 2027, this market will resolve to N/A (not to a "no event" option).

  • The market is not limited to LLMs only. Other AI companies such as Anduril or Palantir are also eligible for resolution.

  • Responsibility determination involves judgment calls by the creator.

  • Chinese AI labs such as Baidu, Tencent, and Moonshot are eligible for resolution as potential AI companies that could be held responsible.

Readers can also find the unconditional question here: <https://manifold.markets/EvanDaniel/if-an-ai-system-is-causes-the-death>

  • Update 2026-01-22 (PST) (AI summary of creator comment): Ongoing processes that slowly accumulate casualties (such as AI systems used by health insurance companies to deny care) do not meet the creator's threshold for this market's resolution criteria.

  • Update 2026-01-22 (PST) (AI summary of creator comment): Discrete events are not required for resolution. The market can resolve based on consensus from academic studies or investigative journalism showing cumulative deaths (e.g., a spike in suicides correlated with AI use) attributable to AI, even if not from a single discrete event.

bought Ṁ750 NO

@NuñoSempere would something like "the consensus of several academic studies / investigative journalism" that there have been, let's say, ~8k more suicides (currently about 40k/year in the US, so let's say there's a dramatic spike and it's correlated with heavy AI use or something) as a result of AI in 2026 than in years before AI, be enough to resolve this in the direction of the company "most to blame" for that? Or is that not discrete enough?

Yes

bought Ṁ50 YES

One note is that Anduril mostly makes defense equipment that will be used abroad, whereas Palantir makes tech that is used domestically notably by the DHS/ICE. I think Palantir is a strong buy, especially if Americans includes non-citizen residents of America (which I think it absolutely should).

Americans includes non-citizen residents of America

I would have gone the other way, saying that "Americans" only includes citizens per common usage. But happy to switch to your understanding if other people don't object and are currently reading it that way as well. Otherwise I'd make a judgment call.

@NuñoSempere people can be permanent residents, they can live in American communities working and raising families and whatnot, all without being a citizen. I think common usage would often/usually refer to such people as Americans, though it might depend on political orientation or something similar.

Of course someone who’s just here while traveling, or just got here recently via non-legal means, would definitely fall outside of common usage.

I agree it’s a judgment call.

@NuñoSempere I agree with Ben S, both as a matter of common usage (you usually don’t know which people you meet on the street are citizens or not,) and especially if there are mass deaths of US non-citizen residents on account of AI systems, that’s a hair you won’t want to be splitting. It’s also not something that’s reported in disaster fatalities, which could cause problems for managing the market.

Is this limited to a single well-defined event, or could it be an ongoing process that slowly accumulates casualties? If the latter, it may in fact have already happened for some definition of "AI company"; probably the strongest contender would be the various AI systems used by US health insurance companies to complicate or deny patients' access to care (something like https://www.healthcarefinancenews.com/news/class-action-lawsuit-against-unitedhealths-ai-claim-denials-advances)

This is a good point but doesn't meet my subjective threshold, perhaps because as you say it is not a discrete enough event.

@NuñoSempere Do you have a rough idea or examples of where you would draw the line (also potentially worth clarifying in the description)? Use of AI by various law-enforcement agencies could also potentially lead to individual incidents that may (or may not) add up to a large number of total deaths.

@AIBear I have a similar question — what about long term impact and systems that have been designed to promote and further fascism (thinking about musk/twitter-related things) — these have invariably resulted in deaths in ICE custody etc

sold Ṁ57 YES

Palantir and Anduril are good adds, but I do feel that it’s unlikely that those companies would lead to the deaths of Americans

Other possible Chinese labs: Baidu, Tencent, Moonshot

bought Ṁ50 YES

@Ryu18 What do you know that I don't XD

@NuñoSempere Nothing, this is just based on Claude Code producing a lot of code haha

Is this market for LLMs only?

Also, ML systems just sit there until you start feeding them input vectors. In what sense would they be "responsible"?

@CraigDemel The AI systems wouldn't be responsible, the companies that deploy them would

@CraigDemel "Is this market for LLMs only?" No; feel free to add, e.g., Anduril or Palantir if you want

@NuñoSempere If I replace a nuclear reactor control program with Claude Code and a meltdown happens, would you consider Anthropic responsible?

If I replace a nuclear reactor control program with Claude Code and a meltdown happens, would you consider Anthropic responsible?

I would make a judgment call. As described probably not. If Claude suggests this to you and you go ahead probably yes.

bought Ṁ25 YES

@jack What's your reasoning here?

@NuñoSempere Number of consumer users and track record of safety issues

If there are multiple such events, how does it resolve? E.g. equally or to the first or the biggest or something?

@jack Good question, to the first.

Probably not the most common take, but I much prefer these markets when one of the options is "there is no event" rather than resolving the market N/A if there isn't one.

@EvanDaniel Makes sense! But in this case I care specifically about the relative risk. But happy for you to create another market for the unconditional question and I will link it
