A rigorous hybridization of variational quantum eigensolver and classical neural network

Minwoo Kim, Kyoung Keun Park, Kyungmin Lee, Jeongho Bang, Taehyun Kim · February 19, 2026
Quantum Physics

Abstract

Neural post-processing has been proposed as a lightweight route to enhance variational quantum eigensolvers by learning how to reweight measurement outcomes. In this work, we identify three general desiderata for such data-driven neural post-processing -- (i) self-contained training without prior knowledge, (ii) polynomial resources, and (iii) variational consistency -- and show that current approaches, such as diagonal non-unitary post-processing (DNP), cannot satisfy these requirements simultaneously. The obstruction is intrinsic: with finite sampling, normalization becomes a statistical bottleneck, and support mismatch between numerator and denominator estimators can render the empirical objective ill-conditioned and even sub-variational. Moreover, to reproduce the ground state with constant-depth ansatzes or with linear-depth circuits forming unitary 2-designs, the required reweighting range (and hence the sampling cost) grows exponentially with the number of qubits. Motivated by this no-go result, we develop a normalization-free alternative, the unitary variational quantum-neural hybrid eigensolver (U-VQNHE). U-VQNHE retains the practical appeal of a learnable diagonal post-processing layer while guaranteeing variational safety, and numerical experiments on transverse-field Ising models demonstrate improved accuracy and robustness over both VQE and DNP-based variants.
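The normalization bottleneck described above can be illustrated concretely. In diagonal post-processing schemes, the reweighted energy is a ratio of two expectations, ⟨f²H⟩/⟨f²⟩, each of which must be estimated from the same finite set of measurement shots. The following sketch (all numerical values and the form of the reweighting function are illustrative assumptions, not taken from the paper) shows the exact objective next to its finite-shot ratio estimator, whose numerator and denominator are separate empirical means and can become ill-conditioned when the reweighting factor concentrates mass on rarely sampled bitstrings:

```python
import math
import random

# Hypothetical 2-qubit diagonal Hamiltonian: H|x> = h(x)|x>
# (values chosen for illustration only)
h = {0b00: -1.0, 0b01: 0.5, 0b10: 0.5, 0b11: 1.0}

# Measurement distribution over bitstrings produced by some ansatz
# (assumed fixed here, for illustration)
p = {0b00: 0.4, 0b01: 0.3, 0b10: 0.2, 0b11: 0.1}

def f(x, w):
    """Learnable diagonal reweighting factor: a simple exponential of a
    per-qubit weight vector w, standing in for a neural network."""
    bits = [(x >> k) & 1 for k in range(2)]
    return math.exp(sum(wk * bk for wk, bk in zip(w, bits)))

def reweighted_energy_exact(w):
    """Exact normalized objective: sum_x p(x) f(x)^2 h(x) / sum_x p(x) f(x)^2."""
    num = sum(p[x] * f(x, w) ** 2 * h[x] for x in p)
    den = sum(p[x] * f(x, w) ** 2 for x in p)
    return num / den

def reweighted_energy_sampled(w, shots, rng):
    """Finite-shot estimator: numerator and denominator are separate
    empirical means over the same shots, so the ratio inherits the
    statistical noise of both and can behave sub-variationally."""
    xs = rng.choices(list(p), weights=list(p.values()), k=shots)
    num = sum(f(x, w) ** 2 * h[x] for x in xs) / shots
    den = sum(f(x, w) ** 2 for x in xs) / shots
    return num / den

w = [-2.0, -2.0]  # pushes weight toward |00>, the lowest-h(x) bitstring
print(reweighted_energy_exact(w))                          # close to -1
print(reweighted_energy_sampled(w, 200, random.Random(0)))
```

Note how a strongly peaked `f` drives the exact objective toward the ground-state energy, but also shrinks the denominator, so the finite-shot ratio is dominated by however many low-probability bitstrings happen to appear in the sample; this is the support-mismatch effect that the abstract identifies, and that the normalization-free U-VQNHE construction is designed to avoid.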
