
Efficient fully homomorphic encryption for frontier model training by 2030?
11% chance
Resolves YES if deep learning models at the compute frontier* could be** trained using fully homomorphic encryption (FHE) with a <10x slowdown before 2030/1/1.
[*] Say, within 1 OOM of the highest-compute model deployed.
[**] Frontier models need not actually be trained with FHE. Empirical evidence of smaller models trained with FHE at a <10x slowdown, plus a heuristic argument (e.g., dumb extrapolation) that larger models would also satisfy this bound, will suffice.
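For illustration, here is a minimal sketch of the kind of "dumb extrapolation" footnote [**] allows: fit measured FHE training slowdowns against log-scale training compute and extrapolate to a frontier-scale run. Every number below (the measurements, the frontier FLOP count, and the choice of a log-linear fit) is a hypothetical placeholder, not data from any real FHE experiment.

```python
# Hypothetical sketch of the "dumb extrapolation" heuristic in footnote [**].
# All numbers are made-up placeholders, not real FHE benchmark results.
import numpy as np

# Assumed measurements: (training compute in FLOPs, FHE slowdown vs. plaintext)
flops = np.array([1e18, 1e20, 1e22])
slowdown = np.array([6.0, 7.0, 8.5])

# Fit slowdown as a linear function of log10(compute).
coeffs = np.polyfit(np.log10(flops), slowdown, deg=1)

# Extrapolate to an assumed frontier-scale run (within 1 OOM of the
# highest-compute deployed model, per footnote [*]).
frontier_flops = 1e26
predicted = np.polyval(coeffs, np.log10(frontier_flops))

print(f"Extrapolated frontier slowdown: {predicted:.1f}x")
print(f"Resolution criterion (<10x) satisfied: {predicted < 10}")
```

Whether a log-linear fit is the right scaling model is itself part of the heuristic argument the question would need to accept.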
This question is managed and resolved by Manifold.
Related questions
Will an AI achieve >85% performance on the FrontierMath benchmark before 2028? (55% chance)
What will be the best performance on FrontierMath by December 31st, 2025?
Will frontier AI effective training compute increase by a factor of 10 billion between 2025 and 2035? (70% chance)
Will any AI model score >80% on Epoch's FrontierMath benchmark in 2025? (9% chance)
Will any AI model achieve >40% on FrontierMath before 2026? (68% chance)
Will AlphaProof achieve >30% performance on the FrontierMath benchmark before 2026? (16% chance)
Will an AI achieve >80% performance on the FrontierMath benchmark before 2027? (68% chance)
Will an AI achieve >85% performance on the FrontierMath benchmark before 2027? (55% chance)
Will OpenAI release a model that can reliably compute a 20-digit multiplication correctly in 2025? (38% chance)
Will there be an announcement of a model with training compute of over 1e30 FLOPs by the end of 2025? (5% chance)