Learning at the Edge of Causality: Optimal Learning-Sample Complexity from No-Signaling Constraints
Abstract
What ultimately fixes the sample cost of quantum learning -- algorithmic ingenuity or physical law? We study this question in an arena where computation, learning, and causality collide. A twist on Grover's search that reflects about an a priori unknown state can collapse the query complexity from $O(\sqrt{N})$ to $O(\log N)$ over a search space of size $N$, i.e., an exponential speedup. Yet standard quantum theory forbids such an unknown-state reflection (the no-reflection theorem). We therefore build a state-learning-assisted architecture, called ``amplify-learn,'' which alternates coherent amplitude amplification with state learning. Embedding amplify-learn into the Bao-Bouland-Jordan no-signaling framework, we show that the logarithmic-round dream would open a superluminal communication channel unless each round expends learning-sample and reflection-circuit budgets scaling at least as $\Omega(\sqrt{N}/\log N)$. In parallel, we derive tight computational learning-theoretic sample bounds for learning circuit-generated pure states, revealing a state-universal ansatz ``lock'' at order $N$ in the worst case. The striking closure is that no-signaling does not merely veto the unphysical primitive: it fixes the only consistent reflection-circuit complexity, and feeding this causality-enforced complexity into the computational learning bound collapses it onto the very same $\sqrt{N}/\log N$ scaling demanded by no-signaling alone. No-signaling thus acts as a regulator of learnability: a constraint that mediates between physics and computation, welding query, gate, and sample complexities into a single causality-compatible triangle.
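To make the claimed query-complexity gap concrete, the following is a minimal arithmetic sketch (not taken from the paper): standard Grover amplification rotates the state by a fixed angle $2\theta$ per round with $\sin\theta = 1/\sqrt{N}$, whereas a hypothetical reflection about the current, a priori unknown state would let the angle grow geometrically (tripling per round in this toy model), yielding the $O(\sqrt{N})$ versus $O(\log N)$ scaling described above. The function names and the angle-tripling update are illustrative assumptions, not the paper's construction.

```python
import math

def grover_rounds(N):
    """Standard Grover: each round adds a fixed rotation of 2*theta,
    where sin(theta) = 1/sqrt(N), so roughly O(sqrt(N)) rounds are
    needed to push the success probability above 1/2 (angle >= pi/4)."""
    theta = math.asin(1.0 / math.sqrt(N))
    angle, rounds = theta, 0
    while angle < math.pi / 4:
        angle += 2 * theta   # fixed-angle rotation per oracle call
        rounds += 1
    return rounds

def unknown_reflection_rounds(N):
    """Toy model of the forbidden primitive: reflecting about the
    *current* (unknown) state triples the angle each round instead of
    adding a fixed 2*theta, so only O(log N) rounds are needed."""
    theta = math.asin(1.0 / math.sqrt(N))
    angle, rounds = theta, 0
    while angle < math.pi / 4:
        angle *= 3           # geometric growth from unknown-state reflection
        rounds += 1
    return rounds

for N in (2**10, 2**20, 2**30):
    print(f"N = 2**{int(math.log2(N))}: "
          f"standard Grover needs {grover_rounds(N)} rounds, "
          f"unknown-state reflection needs {unknown_reflection_rounds(N)} rounds")
```

Running the sketch shows the round count growing like $\sqrt{N}$ in the first case and like $\log N$ in the second, which is the gap that the no-signaling argument then prices at $\Omega(\sqrt{N}/\log N)$ resources per round.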