Efficient Closest Matrix Product State Learning in Logarithmic Depth
Abstract
Learning the closest matrix product state (MPS) representation of a quantum state is known to enable useful tools for the prediction and analysis of complex quantum systems. In this work, we study the problem of learning an MPS in the following setting: given many copies of an input MPS, the task is to recover a classical description of the state. The best known polynomial-time algorithm, introduced by [LCLP10, CPF+10], requires linear circuit depth and $O(n^5)$ samples, and has seen no improvement in over a decade. The strongest known lower bound is only $\Omega(n)$. The combination of linear depth and high sample complexity renders existing algorithms impractical for near-term or even early fault-tolerant quantum devices. We present a new efficient MPS learning algorithm that runs in $O(\log n)$ circuit depth and has sample complexity $O(n^3)$. We also generalize our algorithm to learning the closest MPS, in which the input state is not guaranteed to be close to an MPS of a fixed bond dimension; here too, our algorithm improves on both the sample complexity and the circuit depth of the previously known algorithm.
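To make the object being learned concrete: the "classical description" recovered by such an algorithm is a chain of small tensors whose contraction reproduces the state's amplitudes. The sketch below (an illustration of the MPS format only, not the paper's algorithm; the function names and the choice of a random Gaussian MPS are ours) builds an $n$-qubit MPS with bond dimension `chi` and contracts it into the full $2^n$-dimensional state vector.

```python
import numpy as np

def random_mps(n, chi, seed=0):
    # An MPS with open boundary conditions: tensor i has shape
    # (chi_left, 2, chi_right), with bond dimension 1 at both ends.
    rng = np.random.default_rng(seed)
    dims = [1] + [chi] * (n - 1) + [1]
    return [rng.normal(size=(dims[i], 2, dims[i + 1])) for i in range(n)]

def mps_to_statevector(tensors):
    # Contract the chain left to right, growing the physical dimension
    # by a factor of 2 at each site.
    psi = tensors[0].reshape(2, -1)                  # (2, chi)
    for A in tensors[1:]:
        psi = np.tensordot(psi, A, axes=([1], [0]))  # (d, 2, chi_right)
        psi = psi.reshape(psi.shape[0] * 2, -1)      # (2d, chi_right)
    psi = psi.reshape(-1)                            # final bond dim is 1
    return psi / np.linalg.norm(psi)

state = mps_to_statevector(random_mps(n=4, chi=2))
print(state.shape)  # (16,)
```

Note that the MPS description uses only $O(n \chi^2)$ parameters, whereas the full state vector has $2^n$ amplitudes; this gap is what makes recovering the tensors, rather than the amplitudes, an efficient learning target.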