Reinforcement Learning for Quantum Network Control with Application-Driven Objectives
Abstract
Optimized control of quantum networks is essential for enabling distributed quantum applications with strict performance requirements. In near-term architectures with constrained hardware, effective control may determine whether such applications are feasible to deploy. Because quantum network dynamics can naturally be modeled as a Markov decision process, dynamic programming and reinforcement learning (RL) offer promising tools for optimizing control strategies. However, key quantum network performance measures -- such as the secret key rate in quantum key distribution -- often involve a non-linear relationship between interdependent variables describing quantum state quality and generation rate. Such objectives are not easily captured by standard RL approaches based on additive rewards. We propose a novel gradient-based RL framework that directly optimizes non-linear, differentiable objective functions while accounting for uncertainties introduced by classical communication delays. We evaluate this framework in the context of entanglement distillation between two quantum network nodes equipped with multiplexing capability, and demonstrate improvements of 20-23% over heuristic baselines in certain parameter regimes. Our work constitutes a first step towards optimizing non-linear objective functions in quantum networks with RL, opening a path towards more advanced use cases.
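To make the abstract's core point concrete, the following is a minimal toy sketch (not the paper's algorithm) of why a secret-key-rate-style objective resists additive rewards: it is a non-linear function of the policy's *expected* rate and fidelity, so it must be optimized as a whole, e.g. by gradient ascent on the policy parameters. The action set, rates, fidelities, and BB84-style key-fraction formula below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical actions trading generation rate against pair fidelity.
rates      = np.array([1.0, 0.6, 0.3])
fidelities = np.array([0.80, 0.90, 0.97])

def h2(p):
    """Binary entropy in bits, clipped for numerical safety at the endpoints."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def objective(theta):
    """Non-linear, differentiable objective: expected rate times a
    BB84-style key fraction of the expected error rate (1 - fidelity).
    Note it couples E[rate] and E[fidelity] multiplicatively, so it is
    not a sum of per-step rewards."""
    pi = np.exp(theta) / np.exp(theta).sum()   # softmax policy over actions
    R, F = pi @ rates, pi @ fidelities
    return R * max(0.0, 1.0 - 2.0 * h2(1.0 - F))

# Gradient ascent on the policy parameters, using central finite
# differences as a stand-in for an analytic or autodiff gradient.
theta, eps, lr = np.zeros(3), 1e-5, 0.5
for _ in range(500):
    grad = np.array([
        (objective(theta + eps * np.eye(3)[i]) -
         objective(theta - eps * np.eye(3)[i])) / (2 * eps)
        for i in range(3)])
    theta += lr * grad
```

With these illustrative numbers the high-fidelity, low-rate action dominates, because below a fidelity threshold the key fraction (and hence the whole objective) collapses to zero regardless of rate; an additive per-step reward cannot express that threshold behavior.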