
Quantum advantage in training binary neural networks.

Yidong Liao, Daniel Ebler, Feiyang Liu, O. Dahlsten · October 30, 2018
Physics · Mathematics


Abstract

The performance of a neural network for a given task is largely determined by the initial calibration of the network parameters. Yet it has been shown that this calibration, also referred to as training, is generally NP-complete. This includes networks with binary weights, an important class of networks due to their practical hardware implementations. We therefore suggest an alternative approach to training binary neural networks, one that utilizes a quantum superposition of weight configurations. We show that the quantum training converges with high probability to the globally optimal set of network parameters. This resolves two prominent issues of classical training: (1) the vanishing gradient problem and (2) common convergence to suboptimal network parameters. Moreover, we achieve a provable polynomial (sometimes exponential) speedup over classical training for certain classes of tasks. We design an explicit training algorithm and implement it in numerical simulations.
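The abstract does not spell out the algorithm, but the core idea, searching a quantum superposition of binary weight configurations for a globally optimal one, can be illustrated classically. The sketch below is an illustration rather than the paper's explicit algorithm: it simulates textbook Grover-style amplitude amplification over all 2^n weight vectors of a toy binary perceptron. The dataset, the perceptron model, and the choice of Grover iteration count are all assumptions made for this example.

```python
import numpy as np
from itertools import product

# Toy task (assumption): learn a binary perceptron y = sign(w . x)
# on a tiny, linearly separable dataset.
X = np.array([[1, 1, -1], [1, -1, 1], [-1, 1, 1], [-1, -1, -1]])
y = np.array([1, 1, 1, -1])

n = X.shape[1]                                         # number of binary weights
configs = np.array(list(product([-1, 1], repeat=n)))   # all 2^n weight vectors

# Training loss: number of misclassified points per weight configuration.
losses = np.array([(np.sign(X @ w) != y).sum() for w in configs])
marked = losses == losses.min()      # configurations the oracle would flag
N, M = len(configs), marked.sum()

# Grover search simulated on the statevector: the oracle flips the sign of
# marked amplitudes; the diffusion operator inverts about the mean amplitude.
amp = np.full(N, 1 / np.sqrt(N))     # uniform superposition over weights
iters = int(np.floor(np.pi / 4 * np.sqrt(N / M)))
for _ in range(iters):
    amp[marked] *= -1                # oracle
    amp = 2 * amp.mean() - amp       # diffusion

p_success = (amp[marked] ** 2).sum()
best = np.argmax(amp ** 2)
print(f"{M}/{N} optimal configs, {iters} Grover iterations, "
      f"P(measure optimal) = {p_success:.3f}")
print("most likely weight vector:", configs[best], "loss:", losses[best])
```

On this three-weight toy problem the unique zero-loss weight vector is measured with probability of about 0.95 after two Grover iterations, which mirrors the O(√N) query scaling underlying the quadratic part of a Grover-type speedup; the polynomial and exponential advantages claimed in the abstract are specific to the paper's construction and task classes.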
