Loss Behavior in Supervised Learning With Entangled States
Abstract
Quantum machine learning (QML) aims to leverage the principles of quantum mechanics to solve machine learning problems faster or to improve the quality of their solutions. In supervised learning, training samples that are entangled with an auxiliary system have been shown to reduce the approximation error of the trained model. However, the training process itself is also affected by various properties of the supervised learning task, including the model's circuit structure, the chosen cost function, and noise on the quantum computer. To evaluate the applicability of entanglement in supervised learning, this work investigates the effect of highly entangled training data on trainability. The study shows that for highly expressive models, i.e., models capable of expressing a large number of candidate solutions, the achievable improvement of loss function values within constrained neighborhoods during optimization is severely limited when maximally entangled states are used. This finding is further supported experimentally by simulating training with Parameterized Quantum Circuits (PQCs): as the expressivity of the PQC increases, training becomes more susceptible to loss concentration induced by entangled training data. Finally, for non-maximally entangled states, the experiments highlight the fundamental role of entanglement entropy as a predictor of trainability.
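The abstract names entanglement entropy as the quantity predicting trainability for non-maximally entangled training states. As a minimal illustration (not the paper's own code), the following sketch computes the von Neumann entanglement entropy of a bipartite pure state via its Schmidt decomposition, contrasting a product state (zero entropy) with a maximally entangled Bell state (one ebit); the function name and the NumPy-based approach are illustrative choices.

```python
import numpy as np

def entanglement_entropy(state, dim_a, dim_b):
    """Von Neumann entropy of subsystem A for a bipartite pure state.

    `state` is a normalized vector on a Hilbert space of dimension
    dim_a * dim_b. Illustrative helper, not from the paper.
    """
    # Reshape the state vector into a dim_a x dim_b matrix; its singular
    # values are the Schmidt coefficients of the bipartition.
    schmidt = np.linalg.svd(state.reshape(dim_a, dim_b), compute_uv=False)
    probs = schmidt**2
    probs = probs[probs > 1e-12]  # drop numerical zeros before taking logs
    return float(-np.sum(probs * np.log2(probs)))

# Product state |00>: no entanglement with the auxiliary system.
product = np.array([1.0, 0.0, 0.0, 0.0])

# Bell state (|00> + |11>)/sqrt(2): maximal entanglement for two qubits.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

print(entanglement_entropy(product, 2, 2))  # ~0 bits
print(entanglement_entropy(bell, 2, 2))     # ~1 bit
```

In the paper's setting, the entropy of the training states with respect to the auxiliary system interpolates between these two extremes, and larger values correlate with stronger loss concentration.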