End-to-end Optimization of Single-Shot Quantum Machine Learning for Bayesian Inference
Abstract
We introduce an end-to-end optimization strategy for quantum machine learning that directly targets performance under finite measurement resources, with learning objectives defined at the level of task performance. The method is applied to a Bayesian quantum metrology task, which provides a natural testbed with known fundamental limits and known scaling with system size. The sampling-aware hybrid algorithm achieves a single-shot risk within 1 dB of the -20 dB Bayesian limit using 32 qubits. We extend the Bayesian framework from parameter estimation to global function inference, where the task is to infer a target function of the sensor input drawn from an arbitrary prior, and we demonstrate a clear computational-sensing advantage of direct functional inference over indirect reconstruction. We relate the corresponding Bayesian risk to the Capacity metric and argue that the Resolvable Expressive Capacity provides a natural measure of the space of functions accessible in a single shot. The resulting eigentask analysis identifies noise-robust feature combinations that yield compact estimators with improved accuracy and reduced optimization cost in resource-limited or real-time on-device settings.
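To make the single-shot Bayesian risk and its dB scale concrete, the following is a minimal toy sketch, not the paper's model: a binary parameter with a uniform prior is encoded into a single qubit via a hypothetical encoding angle `phi`, measured once, and estimated with the minimum-mean-square-error (MMSE) rule. The risk in dB is then `10 log10(risk)`, the convention under which the abstract's -20 dB limit is stated.

```python
import numpy as np

# Toy single-shot Bayesian estimation (illustrative assumption, not the
# paper's actual sensing model): theta in {-1, +1} with uniform prior,
# one qubit, one measurement shot, MMSE estimation from the outcome.

thetas = np.array([-1.0, 1.0])
prior = np.array([0.5, 0.5])

def risk_db(phi):
    """Single-shot Bayesian MSE risk, in dB, for encoding angle phi."""
    # Outcome probabilities p(k|theta) for outcomes k in {0, 1}.
    p1 = (1 + thetas * np.sin(phi)) / 2           # p(1|theta)
    p_k_theta = np.stack([1 - p1, p1])            # shape (2 outcomes, 2 thetas)
    # Posterior-weighted MMSE estimate theta_hat(k) = E[theta | k].
    joint = p_k_theta * prior                     # p(k, theta)
    p_k = joint.sum(axis=1)                       # p(k)
    theta_hat = (joint * thetas).sum(axis=1) / p_k
    # Bayesian risk: E[(theta - theta_hat)^2] over outcomes and prior.
    risk = (joint * (thetas[None, :] - theta_hat[:, None]) ** 2).sum()
    return 10 * np.log10(risk)

# Sweeping the encoding angle stands in for training: the risk drops as
# the measurement better distinguishes the two parameter values.
phis = np.linspace(0.1, 1.5, 30)
best = phis[np.argmin([risk_db(p) for p in phis])]
```

For this toy model the risk works out to cos^2(phi), so the dB risk is 20 log10(cos phi) and the sweep selects the largest angle; the paper's sampling-aware algorithm optimizes an analogous finite-shot objective over a 32-qubit circuit rather than a single angle.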