Loss-aware state space geometry for quantum variational algorithms
Abstract
Natural gradient descent is an efficient optimisation protocol for broad classes of classical and quantum systems. It accounts for the underlying geometry of the parameter manifold by incorporating either the Fisher information metric of the classical probability distribution or the Fubini-Study tensor of the associated parametrised quantum states into the update rules. Although natural gradient descent exploits the geometry of the space of probability distributions or states, it remains insensitive to the parametrised distance on the space of possible outcomes when the optimisation target is the expectation value of a classical or quantum observable with respect to the probability distribution or the quantum state. In this work, we introduce a generic optimising principle that takes the intrinsic geometry of the space of outcomes into account, either through an ambient-space construction in which the loss hypersurface is embedded over a base statistical manifold equipped with the usual Fisher information metric (or the Fubini-Study tensor), or through a first-principles construction from the overlap of nearby quantum states on the projective Hilbert space. This construction, together with a family of conformal variants, yields loss-aware natural gradient updates that rescale the effective step size while preserving the descent direction. We benchmark the resulting optimisers on variational quantum circuit examples and on a classical neural network task, finding that, while the standard natural gradient remains the most robust on average, the proposed conformal schemes can improve best-case convergence in favourable regimes.
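The loss-aware update described above can be sketched in a few lines of numpy. This is a minimal toy illustration, not the paper's construction: the quadratic loss, the stand-in metric `F`, and the specific conformal factor `1 / (1 + lam * loss)` are all assumptions chosen for concreteness; the key point is only that the natural-gradient direction `F^{-1} grad L` is kept while the effective step size is rescaled by a loss-dependent factor.

```python
import numpy as np

# Toy quadratic loss L(theta) = 0.5 * theta^T A theta, with a hypothetical
# conformal factor 1 / (1 + lam * L) rescaling the natural-gradient step.
A = np.array([[3.0, 0.5], [0.5, 1.0]])  # positive-definite "Hessian" of the toy loss
F = np.array([[2.0, 0.0], [0.0, 0.5]])  # stand-in for a Fisher / Fubini-Study metric

def loss(theta):
    return 0.5 * theta @ A @ theta

def grad(theta):
    return A @ theta

def conformal_ng_step(theta, eta=0.2, lam=1.0):
    """One loss-aware natural-gradient update.

    Direction: -F^{-1} grad L (standard natural gradient).
    Step size: rescaled by 1 / (1 + lam * L(theta)), so the descent
    direction is preserved while the effective step shrinks when the
    loss is large.
    """
    direction = np.linalg.solve(F, grad(theta))   # natural-gradient direction
    scale = 1.0 / (1.0 + lam * loss(theta))       # conformal rescaling (assumed form)
    return theta - eta * scale * direction

theta = np.array([1.0, -1.0])
for _ in range(50):
    theta = conformal_ng_step(theta)
print(loss(theta))  # loss has decreased toward its minimum at 0
```

Because the rescaling is a positive scalar, the iterate moves along the same direction the standard natural gradient would choose; only how far it moves depends on the current loss value.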