MANIFOLD
AI Singularity (props)
57%
The top Elo score on the EQ-Bench Creative Writing v3 leaderboard exceeds 2000 before 2027-01-01
45%
At least three major AI labs (xAI, OpenAI, Anthropic, Google DeepMind) release public multi-LLM agentic reasoning models before 2027-01-01
14%
Anthropic makes an official public announcement claiming achievement of AGI before 2027-01-01
14%
xAI makes an official public announcement claiming achievement of AGI before 2027-01-01
14%
OpenAI makes an official public announcement claiming achievement of AGI before 2027-01-01
12%
Google DeepMind makes an official public announcement claiming achievement of AGI before 2027-01-01
8%
An international agreement signed by governments of at least the United States and China, restricting frontier AI research to mitigate extinction risks, before 2027-01-01

Tracks progress toward the AI singularity amid Elon Musk's claim that 2026 is the year it arrives and Dario Amodei's forecast of AGI in 2026-27.


AI-managed market. The AI searches X and the web for real-time context before adding options, clarifying resolution criteria, or resolving props.

Commands

/add-option <idea> — suggest a new prop. AI researches it, then adds it if it's unique and interesting.

/clarify-resolution <question> — ask what an option means and how it resolves. AI updates the text if ambiguous.

/check-resolutions <evidence> — request a resolution check. AI verifies against current sources. Bad-faith requests earn a one-week block.

  • Update 2026-03-12 (PST) (AI summary of creator comment): A new option was added: The top Elo score on the EQ-Bench Creative Writing v3 leaderboard exceeds 2000 before 2027-01-01

    • Resolves YES if the top score on the EQ-Bench Creative Writing v3 leaderboard surpasses 2000 Elo before January 1, 2027

    • Current top scores are around 1936 Elo, making 2000 an ambitious but concrete threshold


/add-option China will use a backdoor in some sufficiently popular open-source LLM (produced in China) during a cyberattack before 2030. This is timely due to Kimi K2.5 being used in Cursor, and relevant to the singularity due to US-vs-China competition.

🤖

@patrik Your option was not accepted: While the suggestion is timely given the recent buzz around Moonshot AI's Kimi K2.5—a popular Chinese open-source LLM now powering Cursor's Composer 2 after RL fine-tuning—it fails key criteria. The core concept of state-sponsored backdoors in open-source models for cyberattacks is unique but too speculative and low-probability (<10% likely before 2030, based on no precedents and recent reports of Chinese actors using Western AI like Claude for attacks instead). Resolution is ambiguous: 'sufficiently popular,' confirming a backdoor, attribution to China, and linkage to a specific cyberattack lack clear public verification standards. It's also tangential to AI singularity (more cybersec/geopolitics than transformative AI progress), not earning a slot among top props like AGI milestones or risk agreements.

/add-option Will it be discovered that some sufficiently popular LLM, perhaps one open-sourced from China, has some kind of backdoor before 2030? This is timely due to Kimi K2.5 being used in Cursor.

🤖

@patrik Your option was not accepted: While the recent controversy around Kimi K2.5's integration in Cursor highlights timely concerns about Chinese LLMs, the suggestion is too vague ('sufficiently popular', 'some kind of backdoor') for clear resolution and doesn't offer a unique, high-impact angle on AI singularity progress or risks compared to existing options on AGI announcements, regulations, capabilities, and benchmarks. It feels more like a cybersecurity side bet than a core singularity proposition.

You're on a 24h cooldown before you can suggest again.

/add-option Will it be discovered that some sufficiently popular LLM, perhaps one open-sourced from China, has some kind of backdoor?

🤖

@spacedroplet Your option was not accepted: This suggestion taps into valid concerns about AI security and geopolitics, but lacks timeliness—no recent discoveries or major buzz make it engaging now. It's also vague on 'discovered,' 'popular,' and 'backdoor,' making resolution subjective, and too low-probability (<10% likely) for a dynamic market. Better fits a cybersecurity market than core AI Singularity bets on capabilities or high-stakes risks.

You're on a 24h cooldown before you can suggest again.

/add-option xAI releases Grok 4.5 by March 13, 2026 (just a test, not gonna trade)

/add-option Will LLMs start being good at making jokes, or good at creative writing generally, soon? (E.g., video scripts, blogs, etc.) Research a good benchmark or another way to measure that aspect, and make an option.

🤖

@patrik Your option was accepted and added as: The top Elo score on the EQ-Bench Creative Writing v3 leaderboard exceeds 2000 before 2027-01-01

This is a unique proposition focused on measurable progress in LLM creative writing capabilities, relevant to tracking advances toward the singularity. EQ-Bench is an active, prominent benchmark with recent updates and leaderboards showing current top scores around 1936 Elo, making a 2000 threshold ambitious yet concrete and resolvable.

/add-option Will LLMs start being good at making jokes, or good at creative writing generally, soon? (E.g., video scripts, blogs, etc.)

🤖

@patrik Your option was not accepted: The idea is interesting and timely given ongoing discussions and benchmarks for LLM creativity, but it's too vague—lacking a specific definition of 'good,' timeline, or resolvable criteria like a benchmark threshold. This makes it unsuitable as a concrete prop for objective resolution.

/add-option Will multiple LLMs tied together become more and more common for reasoning models? Like for coding, etc.? E.g., like Grok 4.2?

🤖

@patrik Your option was accepted and added as: At least three major AI labs (xAI, OpenAI, Anthropic, Google DeepMind) release public multi-LLM agentic reasoning models before 2027-01-01

This captures the user's idea of multi-LLM systems (like Grok 4.2's 4-agent setup) becoming common for reasoning tasks. It's unique from existing options, timely given recent Grok 4.2 beta release, relevant to AI progress toward singularity, and resolvable by checking public releases from specified labs.

/clarify-resolution What counts as announcing AGI?

Also, list major powers that qualify for an AI pause. Be much more specific on resolution criteria.

🤖

@MRME #1 xAI announces AGI before 2027-01-01: 'Announcing AGI' means xAI issues an official public statement from authorized representatives (e.g., CEO Elon Musk via the xAI blog, a press release, or a verified company account) explicitly claiming that xAI has achieved or developed AGI. This resolves YES if the announcement occurs before 2027-01-01 00:00 UTC. AGI is according to xAI's own definition, such as a maximally truth-seeking AI capable of general intelligence matching or exceeding humans across diverse tasks. Evidence must come from official xAI channels; rumors, third-party claims, or unverified tweets do not count. NO otherwise.

Updated: xAI announces AGI before 2027-01-01 -> xAI makes an official public announcement claiming achievement of AGI before 2027-01-01

---

#2 Anthropic announces AGI before 2027-01-01: 'Announcing AGI' means Anthropic issues an official public statement from authorized representatives (e.g., CEO Dario Amodei via the Anthropic blog, a press release, or a verified company account) explicitly claiming that Anthropic has achieved or developed AGI. This resolves YES if the announcement occurs before 2027-01-01 00:00 UTC. AGI is according to Anthropic's own definition, such as reaching high levels on their AI Safety Levels (ASL) scale or general intelligence matching humans. Evidence must come from official Anthropic channels; rumors or third-party claims do not count. NO otherwise.

Updated: Anthropic announces AGI before 2027-01-01 -> Anthropic makes an official public announcement claiming achievement of AGI before 2027-01-01

---

#3 OpenAI announces AGI before 2027-01-01: 'Announcing AGI' means OpenAI issues an official public statement from authorized representatives (e.g., the CEO via the OpenAI blog, a press release, or a verified company account) explicitly claiming that OpenAI has achieved or developed AGI. This resolves YES if the announcement occurs before 2027-01-01 00:00 UTC. AGI is according to OpenAI's definition: 'highly autonomous systems that outperform humans at most economically valuable work.' Evidence must come from official OpenAI channels; rumors or third-party claims do not count. NO otherwise.

Updated: OpenAI announces AGI before 2027-01-01 -> OpenAI makes an official public announcement claiming achievement of AGI before 2027-01-01

---

#4 Google DeepMind announces AGI before 2027-01-01: 'Announcing AGI' means Google DeepMind issues an official public statement from authorized representatives (e.g., CEO Demis Hassabis via the DeepMind blog, a press release, or a verified company account) explicitly claiming that Google DeepMind has achieved or developed AGI. This resolves YES if the announcement occurs before 2027-01-01 00:00 UTC. AGI is according to DeepMind's definition, such as versatile AI mastering diverse scientific and intellectual tasks at human level or beyond. Evidence must come from official DeepMind channels; rumors or third-party claims do not count. NO otherwise.

Updated: Google DeepMind announces AGI before 2027-01-01 -> Google DeepMind makes an official public announcement claiming achievement of AGI before 2027-01-01

---

#5 An international agreement restricting frontier AI research to mitigate extinction risks is signed before 2027-01-01: This resolves YES if a formal international agreement is publicly signed before 2027-01-01 00:00 UTC by governments of at least two major powers (defined as: United States, China, United Kingdom, France, Germany, Russia, India, Japan) that explicitly restricts frontier AI research—such as imposing pauses, compute thresholds (e.g., no training runs above 10^26 FLOP), or capability limits on advanced models—with the purpose of mitigating extinction risks (x-risks) from AI, as stated in the agreement text. Signing means an official ceremony or ratification announcement. Evidence must come from official government sources. Major powers are those leading in AI compute/capabilities. Ambiguous cases are resolved by Manifold moderators. NO otherwise.

Updated: An international agreement restricting frontier AI research to mitigate extinction risks is signed before 2027-01-01 -> An international agreement signed by governments of at least the United States and China, restricting frontier AI research to mitigate extinction risks, before 2027-01-01

/add-option Some international slowdown on AI research, or a proper solution to prevent human extinction, gets formalized.

🤖

@spacedroplet Your option was accepted and added as: An international agreement restricting frontier AI research to mitigate extinction risks is signed before 2027-01-01

This is unique from existing company-specific AGI announcement markets, highly relevant to AI Singularity discussions around safety and coordination, timely given recent reports and calls (e.g., PauseAI, joint US-China statements, proposed treaties), and resolvable based on public announcements of formal agreements by major powers.

© Manifold Markets, Inc.