Belief propagation for general graphical models with loops
Abstract
There is an increasing interest in scaling tensor network methods through belief propagation (BP), as well as in increasing the accuracy of BP through tensor network methods. We develop a unifying framework that takes an arbitrary graphical model with loops and provides message-passing update rules and inference equations. We show that recent state-of-the-art methods combining tensor networks and BP, such as block belief propagation and tensor network message passing, are special instances of our framework. From a practical perspective, we discuss how our framework can be used to understand the benefits of message scheduling in BP, and show how it can be used for decoding in quantum error correction. We simulate the computation of marginals, internal energy, Shannon entropy, and the partition function on synthetic topologies (the Kagome lattice and lattices resembling quantum error-correcting codes) and on a real-world power-grid topology. The results show accuracy improvements of several orders of magnitude at modest computational overhead. For the marginals, for example, we show that our framework can achieve an accuracy improvement of more than six orders of magnitude over tensor network BP.
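For context on the message-passing updates and marginal estimates the abstract refers to, the following is a minimal sketch of standard loopy sum-product BP on a small pairwise model with a loop (a 3-variable cycle), not the paper's unified framework; all names and potentials here are illustrative assumptions.

```python
# Minimal sketch: standard loopy sum-product BP on a 3-node cycle
# (pairwise MRF), compared against exact brute-force marginals.
# This illustrates generic BP, not the paper's framework.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, q = 3, 2                                   # three binary variables on a cycle
edges = [(0, 1), (1, 2), (2, 0)]
node_pot = rng.random((n, q)) + 0.1           # psi_i(x_i)
edge_pot = {e: rng.random((q, q)) + 0.1 for e in edges}  # psi_ij(x_i, x_j)

def pot(i, j):
    """Pairwise potential oriented as (x_i, x_j)."""
    return edge_pot[(i, j)] if (i, j) in edge_pot else edge_pot[(j, i)].T

# Directed messages m[i->j](x_j), initialised uniform.
msgs = {(i, j): np.ones(q) / q for a, b in edges for (i, j) in [(a, b), (b, a)]}
nbrs = {i: [j for a, b in edges for i2, j in [(a, b), (b, a)] if i2 == i]
        for i in range(n)}

for _ in range(100):                          # synchronous updates to a fixed point
    new = {}
    for (i, j) in msgs:
        incoming = np.prod([msgs[(k, i)] for k in nbrs[i] if k != j], axis=0)
        m = pot(i, j).T @ (node_pot[i] * incoming)    # sum over x_i
        new[(i, j)] = m / m.sum()
    converged = max(np.abs(new[k] - msgs[k]).max() for k in msgs) < 1e-10
    msgs = new
    if converged:
        break

# BP marginal estimates: b_i(x_i) proportional to psi_i times incoming messages.
bp = np.array([node_pot[i] * np.prod([msgs[(k, i)] for k in nbrs[i]], axis=0)
               for i in range(n)])
bp /= bp.sum(axis=1, keepdims=True)

# Exact marginals by enumeration, feasible only on this tiny loop.
exact = np.zeros((n, q))
for x in itertools.product(range(q), repeat=n):
    w = np.prod([node_pot[i, x[i]] for i in range(n)])
    w *= np.prod([pot(i, j)[x[i], x[j]] for (i, j) in edges])
    for i in range(n):
        exact[i, x[i]] += w
exact /= exact.sum(axis=1, keepdims=True)
print("max marginal error of loopy BP:", np.abs(bp - exact).max())
```

On loopy graphs like this cycle, the fixed point is only an approximation to the exact marginals; the accuracy gap it leaves is what tensor-network-augmented BP methods, including the framework described here, aim to close.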