
LLM Hallucination: Will an LLM score >90% on SimpleQA before 2026?
Ṁ12k volume · resolved NO on Jan 2
Resolves using the "correct given attempted" metric defined in https://cdn.openai.com/papers/simpleqa.pdf. The attempt rate must be at least 30%, and no search/retrieval is allowed.
"An open problem in artificial intelligence is how to train models that produce responses that are factually correct. Current language models sometimes produce false outputs or answers unsubstantiated by evidence, a problem known as "hallucinations". Language models that generate more accurate responses with fewer hallucinations are more trustworthy and can be used in a broader range of applications. To measure the factuality of language models, we are open-sourcing a new benchmark called SimpleQA."
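The resolution criteria can be sketched in a few lines. This is a minimal illustration, assuming each answer is graded with one of the labels "correct", "incorrect", or "not_attempted" as in the SimpleQA paper's grading scheme; the function names and the example grade list are hypothetical, not from the paper or the market.

```python
def correct_given_attempted(grades):
    """Fraction of attempted answers that are correct (the market's metric)."""
    attempted = [g for g in grades if g != "not_attempted"]
    if not attempted:
        return 0.0
    return sum(g == "correct" for g in attempted) / len(attempted)

def attempt_rate(grades):
    """Fraction of all questions the model attempted at all."""
    return sum(g != "not_attempted" for g in grades) / len(grades)

# Hypothetical grading run: both thresholds must hold for a YES resolution.
grades = ["correct"] * 92 + ["incorrect"] * 8
qualifies = correct_given_attempted(grades) > 0.90 and attempt_rate(grades) >= 0.30
```

Note that abstaining raises "correct given attempted" without raising accuracy, which is why the market also imposes the 30% minimum attempt rate.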
This question is managed and resolved by Manifold.
🏅 Top traders
| # | Total profit |
|---|---|
| 1 | Ṁ931 |
| 2 | Ṁ167 |
| 3 | Ṁ163 |
| 4 | Ṁ123 |
| 5 | Ṁ87 |
Related questions
- Will an LLM get at least 80% on the 2026 USAMO for less than $2? (51% chance)
- Will the highest-scoring LLM on Dec 31, 2026 show <10% improvement over 2025's best average benchmark performance? (72% chance)
- Will an LLM improve its own ability along some important metric well beyond the best trained LLMs before 2026? (14% chance)
- Will LLM hallucinations be a fixed problem by the end of 2028? (43% chance)
- How Will the LLM Hallucination Problem Be Solved?
- Will LLMs Daydream by EOY 2026? (17% chance)
- Will an LLM be able to solve the Self-Referential Aptitude Test before 2027? (79% chance)
- Will there be a major breakthrough in LLM continual learning before 2027? (45% chance)
- Will LLMs become a ubiquitous part of everyday life by June 2026? (90% chance)
- Will there be any major breakthrough in LLM continual learning before 2029? (87% chance)