MANIFOLD
Will any model pass an "undergrad proofs exam" Turing test by 2027?
2027
77%
chance
The model receives each question as text (or text + images), outputs an answer as text + images, and is graded as part of a pool with human students who also took the test. "Pass" means >=70%. It has to be a proofs-based exam, e.g. abstract algebra, topology, or linear algebra if it's proofs-heavy. There are probably undergrad math exams *somewhere* that are very easy, so I will be exercising my judgment on whether the exam "counts". Unfortunately I do not have examples to hand of what I consider reasonable, but something like "a medium-difficulty 200-level proofs exam at a top-tier university".

This market should have resolved YES months ago. What's going on? To be clear, I don't have any bets in it.

There aren't that many different types of undergrad math class these days, and each one doesn't have that many different types of exam question. Models can easily memorize the O(1000) "tricks" needed to ace these exams.

@pietrokc The question is not "can it", it's "will it". Someone must actually run the experiment, and I have to see the results.

@vluzko That seems kind of disingenuous. You can easily run this experiment yourself using one of the good free models, like GPT Thinking. I'd honestly be surprised if you can find any undergraduate math exam from MIT or Harvard that GPT 5.4 Thinking or Gemini 3.1 Pro do not ace.

@pietrokc I disagree that asking this question commits me to grading a proofs exam myself. I can do that, but I think it's unreasonable to restrict the set of people who can ask questions like this to "people who can run the verification experiment themselves" rather than "people who can assess whether someone else ran it".

Have you tested current models on this? All these proofs are extremely standard and are all over the internet, meaning models were trained on them. I'd honestly be surprised if o3 or Gemini Thinking couldn't get 70% on such a standard test.

@pietrokc That's probably true. It is not really the intent of the question (I made this market before LLMs had really taken off and did not think of the training data issue), but I don't think it can be avoided. Some time in the vaguely near future I'll try to find a proofs exam and test this.

Ultra-likely. Topology was almost trivial, especially compared to IMO questions. I recall "prove at least 3 of 12 theorems" being the final exam; pretty sure an AI could blow past that even today. The handwriting, and telling the AI not to be too giga-brained (make a few mistakes and don't solve them all), would be harder than the solving.
(Not betting due to Gigacasting’s First Law of AI Optics: any research lab smart enough to pass a Turing test will be smart enough to know never to run one)
© Manifold Markets, Inc.