
Scrambling ability of quantum neural network architectures

Yadong Wu, Pengfei Zhang, H. Zhai · November 16, 2020 · DOI: 10.1103/PhysRevResearch.3.L032057
Computer Science · Physics


Abstract

In this letter, we propose a general principle for building a quantum neural network with high learning efficiency. Our strategy is based on the equivalence between extracting information from the input state to the readout qubit and scrambling information from the readout qubit into the input qubits. We characterize quantum information scrambling by operator size growth and, by Haar random averaging over operator sizes, we propose an averaged operator size that describes the information scrambling ability of a given quantum neural network architecture; we argue that this quantity is positively correlated with the learning efficiency of the architecture. As examples, we compute the averaged operator size for several different architectures, and we also consider two typical learning tasks: a regression task on a quantum problem and a classification task on classical images. In both cases, we find that, for the architecture with a larger averaged operator size, the loss function decreases faster or the prediction accuracy on the testing dataset increases faster as the training epoch increases, which indicates higher learning efficiency. Our results can be generalized to more complicated quantum versions of machine learning algorithms.
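To make the central quantity concrete, here is a minimal numpy sketch of operator size for a small qubit system: a Heisenberg-evolved operator is expanded in the Pauli-string basis, and its size is the squared-coefficient-weighted average number of non-identity sites. This is a toy illustration of the definition, not the paper's implementation; the Haar-random unitary stands in for a generic scrambling circuit, and the qubit count `n` is an arbitrary choice.

```python
import itertools
import numpy as np

# Single-qubit Pauli matrices
PAULIS = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def pauli_string(labels):
    """Tensor product of single-qubit Paulis, e.g. ('Z', 'I', 'I')."""
    op = np.array([[1.0 + 0j]])
    for label in labels:
        op = np.kron(op, PAULIS[label])
    return op

def operator_size(op, n):
    """Average operator size: sum over Pauli strings P of
    |c_P|^2 * (number of non-identity sites in P), with
    c_P = Tr(P op) / 2^n, normalized by sum |c_P|^2."""
    dim = 2 ** n
    total, weighted = 0.0, 0.0
    for labels in itertools.product("IXYZ", repeat=n):
        c = np.trace(pauli_string(labels) @ op) / dim
        w = abs(c) ** 2
        total += w
        weighted += w * sum(l != "I" for l in labels)
    return weighted / total

def haar_unitary(dim, rng):
    """Haar-random unitary via QR decomposition of a complex Gaussian,
    with the standard phase fix on the diagonal of R."""
    z = (rng.standard_normal((dim, dim))
         + 1j * rng.standard_normal((dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

n = 3                                  # toy system size (assumption)
rng = np.random.default_rng(0)
z0 = pauli_string(("Z",) + ("I",) * (n - 1))  # Z on the readout qubit

# A single-site Pauli has size 1 before any dynamics
print(operator_size(z0, n))            # -> 1.0

# Heisenberg evolution under a scrambling unitary grows the operator
U = haar_unitary(2 ** n, rng)
print(operator_size(U.conj().T @ z0 @ U, n))
```

Averaging the scrambled size over many Haar draws approaches the known random-matrix value of roughly 3n/4, which is the kind of baseline against which the paper compares specific network architectures.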
