Will different hardware be used for training and for inference of neural-networks? (before 2030)

Currently, most hardware that is used for training a neural network is the same as that which is used for performing inference with that network.

However, training a network has different requirements from performing inference. Some software-level differences already exist: dropout is applied only during training, and quantization is often applied only for inference. These differences generally improve the speed or accuracy of the network.
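To illustrate the training/inference asymmetry mentioned above, here is a minimal NumPy sketch (not any particular library's implementation): inverted dropout perturbs activations only when a `training` flag is set, while symmetric int8 weight quantization is applied only for inference. The function names and shapes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, w, training=False, drop_p=0.5):
    """Linear layer; inverted dropout is applied only in training mode."""
    h = x @ w
    if training:
        # Randomly zero activations and rescale so the expected value is unchanged.
        mask = rng.random(h.shape) >= drop_p
        h = h * mask / (1.0 - drop_p)
    return h

def quantize_int8(w):
    """Symmetric int8 quantization of weights, typically used only at inference time."""
    scale = np.abs(w).max() / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

x = rng.normal(size=(1, 4))
w = rng.normal(size=(4, 3))

y_train = forward(x, w, training=True)    # stochastic: dropout active
y_infer = forward(x, w, training=False)   # deterministic
q, scale = quantize_int8(w)
y_quant = x @ (q.astype(np.float32) * scale)  # approximates y_infer
```

The point of the sketch: the training path and the inference path already diverge in software, which is why hardware specialized for one or the other (e.g. low-precision integer units for inference) can make sense.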

This market resolves YES if, before 1 Jan 2030, the hardware used for training a model is "significantly" different from the hardware used to perform inference with the same model. I will not bet and will resolve this based on my judgement (examples are given below). The market resolves NO if the hardware is basically the same.

For these purposes, "the hardware used" refers to common practice for production NN models. If that information is kept secret, or it is unclear whether using different hardware is common practice in the industry, the market will resolve N/A.

Examples of YES resolutions:

  • Inference is done on hardware which was explicitly designed to primarily perform NN inference

  • The hardware used for inference is significantly different from that used for training, and this difference was chosen because it provides some benefit

  • Documentation for popular ML libraries recommends using different hardware for production NN models
