Will Adam optimizer no longer be the default optimizer for training the best open source models by the end of 2026?
2026 · 38% chance

This market is quite similar to the following market. However, instead of focusing on papers, it focuses more on practicality by considering a mixture of the most commonly used models and the SOTA open source models at the end of 2026.

I am including both Adam and close variants (e.g. AdamW).
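
As a rough illustration of what counts as a "close variant" (a minimal sketch assuming PyTorch; the specific hyperparameters are placeholders), both of the following would fall in the same family for resolution purposes:

```python
import torch

model = torch.nn.Linear(10, 1)  # placeholder model, for illustration only

# Classic Adam (Kingma & Ba, 2014): adaptive moment estimates; any weight
# decay is folded into the gradient as an L2 penalty.
adam = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))

# AdamW (Loshchilov & Hutter): identical moment estimates, but weight decay
# is applied directly to the parameters (decoupled from the gradient).
adamw = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=0.01)

# Either of these would count as "Adam or a close variant" here.
```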

I will sample a mixture of the best open source models in 2026. I am not participating in this market in case judgement is required.

If training details are not provided, I can reach out to the creators of the models; if they decline, we move on and pick the next best candidate.


Relevant: AlgoPerf

The AlgoPerf: Training algorithms benchmark competition opens on November 28, 2023, and is scheduled to close on March 28, 2024. To enter the competition please see the instructions on the competition website. Additionally, the accompanying technical report motivates and explains the design choices of the benchmark.

Sponsorship & Prize Money

MLCommons is offering a total prize pool of $50,000, to be awarded by a committee, for the top-performing submissions in each tuning ruleset. 

We would also like to express our gratitude to Google for their generous support in providing computational resources to score the top submissions, and resources to help score promising submissions from submitters with more limited resources.

Jack Clark says:

With AlgoPerf, we might finally have a decent, principled way to evaluate new optimizers and work out if they’re actually any good.

bought Ṁ125 NO

I tried submitting to AlgoPerf.

A problem I ran into is that only Google Cloud offers machines matching their benchmarking requirements, and at a steep price, so I ended up not submitting.

My Lilith will dethrone it :)

Adam was introduced in 2014: https://arxiv.org/abs/1412.6980

predicted NO

@NiplavYushtun Yup. And aside from minor variations it’s been the staple ever since. It seems to survive everything and works on new architectures that weren’t even invented in 2014, like transformers. Makes me think it’ll still be around for a couple more years at least.
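
For reference, the update rule from the 2014 paper, which variants like AdamW change only by decoupling the weight-decay term, is:

```latex
% Adam update (Kingma & Ba, 2014), with gradient g_t, step size \alpha,
% decay rates \beta_1, \beta_2, and stability constant \epsilon.
\begin{aligned}
m_t &= \beta_1 m_{t-1} + (1 - \beta_1)\, g_t \\
v_t &= \beta_2 v_{t-1} + (1 - \beta_2)\, g_t^2 \\
\hat{m}_t &= m_t / (1 - \beta_1^t), \qquad \hat{v}_t = v_t / (1 - \beta_2^t) \\
\theta_t &= \theta_{t-1} - \alpha\, \hat{m}_t / \bigl(\sqrt{\hat{v}_t} + \epsilon\bigr)
\end{aligned}
```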

How will we know whether SOTA models are using Adam if training details are not disclosed for the best models at the time?

@AdamK We'll infer the best we can. At the moment, we can also switch to the best open-sourced models.
