Approaching the Thermodynamic Limit with Neural-Network Quantum States
Abstract
Accessing the thermodynamic-limit properties of strongly correlated quantum matter requires simulations on very large lattices, a regime that remains challenging for numerical methods, especially in frustrated two-dimensional systems. We introduce the Spatial Attention mechanism, a minimal and physically interpretable inductive bias for Neural-Network Quantum States, implemented as a single learned length scale within the Transformer architecture. This bias stabilizes large-scale optimization and gives access to thermodynamic-limit physics through highly accurate Variational Monte Carlo simulations on unprecedented system sizes. Applied to the spin-$\tfrac12$ triangular-lattice Heisenberg antiferromagnet, our approach achieves state-of-the-art results on clusters of up to $42\times42$ sites. The ability to simulate such large systems allows controlled finite-size scaling of energies and order parameters, enabling the extraction of experimentally relevant quantities such as spin-wave velocities and uniform susceptibilities. As a result, the extrapolated thermodynamic-limit energies are systematically better than those obtained with tensor-network approaches such as iPEPS. The resulting magnetization is strongly renormalized, $M_0=0.148(1)$ (about $30\%$ of the classical value), revealing that less accurate variational states systematically overestimate magnetic order. Analysis of the optimized wave function further suggests an intrinsically non-local sign structure, indicating that the sign problem cannot be removed by local basis transformations. Finally, we demonstrate the generality of the method by obtaining state-of-the-art energies for the $J_1$-$J_2$ Heisenberg model on a $20\times20$ square lattice, outperforming Residual Convolutional Neural Networks.
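The abstract does not spell out the functional form of the Spatial Attention bias, so the following is only a minimal sketch, assuming the single learned length scale $\lambda$ enters as an additive penalty $-d_{ij}/\lambda$ on the attention logits between lattice sites $i$ and $j$; the names `spatial_attention`, `dist`, and `lam` are illustrative, not the authors' implementation.

```python
import numpy as np

def spatial_attention(q, k, v, dist, lam):
    """Single attention head with a hypothetical distance-dependent bias.

    q, k, v : (n_sites, d) query/key/value arrays.
    dist    : (n_sites, n_sites) inter-site lattice distances.
    lam     : the single learned length scale (positive scalar); its exact
              role here is an assumption, not the paper's parametrization.
    """
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d)       # standard scaled dot-product scores
    logits = logits - dist / lam        # penalize attention over long distances
    logits -= logits.max(axis=-1, keepdims=True)  # numerically stable softmax
    w = np.exp(logits)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v                        # (n_sites, d) attended features
```

Under this assumed form, the bias vanishes as $\lambda \to \infty$ and standard global attention is recovered, so a single parameter interpolates between local and fully global correlations, one plausible reading of "minimal and physically interpretable inductive bias".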
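As a concrete illustration of the controlled finite-size scaling mentioned above, the snippet below fits the commonly used leading-order forms for two-dimensional antiferromagnets with broken continuous symmetry, $e(L)=e_\infty + a/L^3$ for the energy per site and $M(L)=M_0 + b/L$ for the order parameter; the $1/L^3$ coefficient is related to the spin-wave velocity, which is one way such quantities can be extracted from the scaling. The data arrays are synthetic placeholders, not the paper's results.

```python
import numpy as np

L = np.array([12.0, 18.0, 24.0, 30.0, 36.0, 42.0])  # linear cluster sizes

# Synthetic placeholder data obeying the assumed scaling forms
# (illustrative only, NOT values from the paper):
e = -0.55 + 1.2 / L**3        # energy per site
m = 0.148 + 0.80 / L          # order parameter

# Linear least squares in the scaling variable; the intercept is the
# L -> infinity extrapolation.
e_inf, a = np.polynomial.polynomial.polyfit(1.0 / L**3, e, 1)
M0, b = np.polynomial.polynomial.polyfit(1.0 / L, m, 1)
print(f"e_inf = {e_inf:.4f}, M0 = {M0:.4f}")
```

In practice the fit would also propagate the Monte Carlo error bars at each $L$, which is what makes the quoted uncertainty $M_0=0.148(1)$ a controlled extrapolation rather than a point estimate.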