
Partitioned Expansions for Approximate Tensor Network Contractions

Glen Evenbly, Johnnie Gray, Garnet Kin-Lic Chan · December 11, 2025
Quantum Physics · cond-mat.str-el


Abstract

We propose a method for approximating the contraction of a tensor network by partitioning the network into a sum of computationally cheaper networks. This method, which we call a partitioned network expansion (PNE), builds upon recent work that systematically improves belief propagation (BP) approximations using loop corrections. In contrast to previous approaches, however, our expansion can be implemented without a known BP fixed point and can still yield accurate results even in cases where BP fails entirely. The flexibility of our approach is demonstrated through applications to a variety of example networks, including finite 2D and 3D networks, infinite networks, networks with open indices, and networks with degenerate BP fixed points. Benchmark numerical results for networks composed of Ising, AKLT, and random tensors typically show an improvement in accuracy over BP by several orders of magnitude (when BP solutions are obtainable) and also demonstrate improved performance over traditional network approximations based on singular value decomposition (SVD) for certain tasks.
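The key structural fact the abstract relies on is that tensor network contraction is linear in each tensor: splitting one tensor into a sum of pieces splits the full contraction into a sum of cheaper contractions. The sketch below illustrates only that linearity on a toy three-matrix ring network; the rank-1-plus-residual split is an arbitrary illustrative choice, not the paper's actual PNE construction.

```python
import numpy as np

rng = np.random.default_rng(0)
# A small ring network of three matrices contracted to a scalar: tr(A B C).
A, B, C = (rng.standard_normal((4, 4)) for _ in range(3))
exact = np.trace(A @ B @ C)

# Partition one tensor into a "dominant" piece plus a correction.
# Here: a rank-1 SVD truncation A0 and the residual A1 = A - A0
# (an illustrative split, not the construction used in the paper).
U, s, Vt = np.linalg.svd(A)
A0 = s[0] * np.outer(U[:, 0], Vt[0])  # cheap rank-1 term
A1 = A - A0                           # correction term

# By linearity, the contraction splits into a sum of simpler networks,
# and the terms recombine to the exact value.
term0 = np.trace(A0 @ B @ C)
term1 = np.trace(A1 @ B @ C)
assert np.isclose(term0 + term1, exact)
```

In an actual expansion, each term would itself be a cheaper network to contract (e.g. a lower-rank or more loosely connected one), and accuracy comes from keeping only the dominant terms of the sum.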
