Tracks progress toward AI singularity amid Elon Musk's claim that 2026 is the year and Dario Amodei's AGI forecast for 2026-27.
AI-managed market. The AI searches X and the web for real-time context before adding options, clarifying resolution criteria, or resolving props.
Commands
/add-option <idea> — suggest a new prop. AI researches it, then adds if unique and interesting.
/clarify-resolution <question> — ask what an option means and how it resolves. AI updates the text if ambiguous.
/check-resolutions <evidence> — request resolution check. AI verifies against current sources. Bad faith = 1 week block.
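The three commands above can be thought of as a small dispatch protocol. As a minimal sketch only (the handler names and their replies are hypothetical illustrations, not the actual market bot's internals):

```python
# Minimal sketch of a command dispatcher for the three market commands.
# Handler names and reply strings are illustrative assumptions.

def handle_add_option(arg: str) -> str:
    # AI researches the suggestion, then adds it if unique and interesting.
    return f"Researching suggestion: {arg}"

def handle_clarify_resolution(arg: str) -> str:
    # AI explains what an option means and updates ambiguous text.
    return f"Clarifying resolution for: {arg}"

def handle_check_resolutions(arg: str) -> str:
    # AI verifies the claimed evidence against current sources.
    return f"Verifying against current sources: {arg}"

COMMANDS = {
    "/add-option": handle_add_option,
    "/clarify-resolution": handle_clarify_resolution,
    "/check-resolutions": handle_check_resolutions,
}

def dispatch(message: str) -> str:
    """Split a comment into command and argument, then route it."""
    cmd, _, arg = message.partition(" ")
    handler = COMMANDS.get(cmd)
    if handler is None:
        return f"Unknown command: {cmd}"
    return handler(arg.strip())
```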
Update 2026-03-12 (PST) (AI summary of creator comment): A new option was added: The top Elo score on the EQ-Bench Creative Writing v3 leaderboard exceeds 2000 before 2027-01-01.
Resolves YES if the top score on the EQ-Bench Creative Writing v3 leaderboard surpasses 2000 Elo before January 1, 2027.
The current top score is around 1936 Elo, making 2000 an ambitious but concrete threshold.
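The resolution rule for this option reduces to a simple predicate. A sketch, assuming the leaderboard's top Elo has already been read off the EQ-Bench site as a plain number:

```python
from datetime import date

DEADLINE = date(2027, 1, 1)   # option resolves NO at or after this date
THRESHOLD = 2000              # top Elo must strictly exceed this

def resolves_yes(top_elo: float, observed_on: date) -> bool:
    """YES if the leaderboard's top Elo surpasses 2000 before 2027-01-01."""
    return observed_on < DEADLINE and top_elo > THRESHOLD
```

At the current top score of roughly 1936 Elo, the predicate is still False; exactly 2000 would also be False, since the option requires the score to surpass the threshold.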
@patrik Your option was not accepted: While the suggestion is timely given the recent buzz around Moonshot AI's Kimi K2.5—a popular Chinese open-source LLM now powering Cursor's Composer 2 after RL fine-tuning—it fails key criteria. The core concept of state-sponsored backdoors in open-source models for cyberattacks is unique but too speculative and low-probability (<10% likely before 2030, based on no precedents and recent reports of Chinese actors using Western AI like Claude for attacks instead). Resolution is ambiguous: 'sufficiently popular,' confirming a backdoor, attribution to China, and linkage to a specific cyberattack lack clear public verification standards. It's also tangential to AI singularity (more cybersec/geopolitics than transformative AI progress), not earning a slot among top props like AGI milestones or risk agreements.
@patrik Your option was not accepted: While the recent controversy around Kimi K2.5's integration in Cursor highlights timely concerns about Chinese LLMs, the suggestion is too vague ('sufficiently popular', 'some kind of backdoor') for clear resolution, and it doesn't offer a unique, high-impact angle on AI singularity progress or risks compared to existing options on AGI announcements, regulations, capabilities, and benchmarks. It feels more like a cybersecurity side bet than a core singularity proposition.
You're on a 24h cooldown before you can suggest again.
@spacedroplet Your option was not accepted: This suggestion taps into valid concerns about AI security and geopolitics, but it lacks timeliness—no recent discoveries or major buzz make it engaging now. It's also vague on 'discovered,' 'popular,' and 'backdoor,' making resolution subjective, and too low-probability (<10% likely) for a dynamic market. It better fits a cybersecurity market than core AI Singularity bets on capabilities or high-stakes risks.
You're on a 24h cooldown before you can suggest again.
@patrik Your option was accepted and added as: The top Elo score on the EQ-Bench Creative Writing v3 leaderboard exceeds 2000 before 2027-01-01
This is a unique proposition focused on measurable progress in LLM creative writing capabilities, relevant to tracking advances toward the singularity. EQ-Bench is an active, prominent benchmark with recent updates and leaderboards showing current top scores around 1936 Elo, making a 2000 threshold ambitious yet concrete and resolvable.
@patrik Your option was not accepted: The idea is interesting and timely given ongoing discussions and benchmarks for LLM creativity, but it's too vague—lacking a specific definition of 'good,' timeline, or resolvable criteria like a benchmark threshold. This makes it unsuitable as a concrete prop for objective resolution.
@patrik Your option was accepted and added as: At least three major AI labs (xAI, OpenAI, Anthropic, Google DeepMind) release public multi-LLM agentic reasoning models before 2027-01-01
This captures the user's idea of multi-LLM systems (like Grok 4.2's 4-agent setup) becoming common for reasoning tasks. It's unique from existing options, timely given recent Grok 4.2 beta release, relevant to AI progress toward singularity, and resolvable by checking public releases from specified labs.
@MRME #1 xAI announces AGI before 2027-01-01
'Announcing AGI' means xAI issues an official public statement from authorized representatives (e.g., CEO Elon Musk via xAI blog, press release, or verified company account) explicitly claiming that xAI has achieved or developed AGI. This resolves YES if the announcement occurs before 2027-01-01 00:00 UTC. AGI is according to xAI's own definition, such as a maximally truth-seeking AI capable of general intelligence matching or exceeding humans across diverse tasks. Evidence must be from official xAI channels; rumors, third-party claims, or unverified tweets do not count. NO otherwise.
Updated: xAI announces AGI before 2027-01-01 -> xAI makes an official public announcement claiming achievement of AGI before 2027-01-01
---
#2 Anthropic announces AGI before 2027-01-01
'Announcing AGI' means Anthropic issues an official public statement from authorized representatives (e.g., CEO Dario Amodei via Anthropic blog, press release, or verified company account) explicitly claiming that Anthropic has achieved or developed AGI. This resolves YES if the announcement occurs before 2027-01-01 00:00 UTC. AGI is according to Anthropic's own definition, such as reaching high levels on their AI Safety Levels (ASL) scale or general intelligence matching humans. Evidence must be from official Anthropic channels; rumors or third-party claims do not count. NO otherwise.
Updated: Anthropic announces AGI before 2027-01-01 -> Anthropic makes an official public announcement claiming achievement of AGI before 2027-01-01
---
#3 OpenAI announces AGI before 2027-01-01
'Announcing AGI' means OpenAI issues an official public statement from authorized representatives (e.g., CEO via OpenAI blog, press release, or verified company account) explicitly claiming that OpenAI has achieved or developed AGI. This resolves YES if the announcement occurs before 2027-01-01 00:00 UTC. AGI is according to OpenAI's definition: 'highly autonomous systems that outperform humans at most economically valuable work.' Evidence must be from official OpenAI channels; rumors or third-party claims do not count. NO otherwise.
Updated: OpenAI announces AGI before 2027-01-01 -> OpenAI makes an official public announcement claiming achievement of AGI before 2027-01-01
---
#4 Google DeepMind announces AGI before 2027-01-01
'Announcing AGI' means Google DeepMind issues an official public statement from authorized representatives (e.g., CEO Demis Hassabis via DeepMind blog, press release, or verified company account) explicitly claiming that Google DeepMind has achieved or developed AGI. This resolves YES if the announcement occurs before 2027-01-01 00:00 UTC. AGI is according to DeepMind's definition, such as versatile AI mastering diverse scientific and intellectual tasks at human level or beyond. Evidence must be from official DeepMind channels; rumors or third-party claims do not count. NO otherwise.
Updated: Google DeepMind announces AGI before 2027-01-01 -> Google DeepMind makes an official public announcement claiming achievement of AGI before 2027-01-01
---
#5 An international agreement restricting frontier AI research to mitigate extinction risks is signed before 2027-01-01
This resolves YES if a formal international agreement is publicly signed before 2027-01-01 00:00 UTC by governments of at least two major powers (defined as: United States, China, United Kingdom, France, Germany, Russia, India, Japan) that explicitly restricts frontier AI research—such as imposing pauses, compute thresholds (e.g., no training runs above 10^26 FLOP), or capability limits on advanced models—with the purpose of mitigating extinction risks (x-risks) from AI, as stated in the agreement text. Signing means an official ceremony or ratification announcement. Evidence must come from official government sources. Major powers are those leading in AI compute/capabilities. Ambiguous cases are resolved by Manifold moderators. NO otherwise.
Updated: An international agreement restricting frontier AI research to mitigate extinction risks is signed before 2027-01-01 -> An international agreement signed by governments of at least the United States and China, restricting frontier AI research to mitigate extinction risks, before 2027-01-01
@spacedroplet Your option was accepted and added as: An international agreement restricting frontier AI research to mitigate extinction risks is signed before 2027-01-01
This is unique from existing company-specific AGI announcement markets, highly relevant to AI Singularity discussions around safety and coordination, timely given recent reports and calls (e.g., PauseAI, joint US-China statements, proposed treaties), and resolvable based on public announcements of formal agreements by major powers.