Glen Evenbly, Johnnie Gray, Garnet Kin-Lic Chan (Dec 12, 2025).
Abstract: We propose a method for approximating the contraction of a tensor network by partitioning the network into a sum of computationally cheaper networks. This method, which we call a partitioned network expansion (PNE), builds upon recent work that systematically improves belief propagation (BP) approximations using loop corrections. In contrast to previous approaches, however, our expansion can be implemented without a known BP fixed point and can yield accurate results even in cases where BP fails entirely. The flexibility of the approach is demonstrated through applications to a variety of example networks, including finite 2D and 3D networks, infinite networks, networks with open indices, and networks with degenerate BP fixed points. Benchmark numerical results for networks composed of Ising, AKLT, and random tensors typically show an improvement in accuracy over BP of several orders of magnitude (when BP solutions are obtainable), and also demonstrate improved performance over traditional network approximations based on the singular value decomposition (SVD) for certain tasks.
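To make the central idea concrete, the following is a minimal numpy sketch of rewriting one tensor-network contraction as a sum of strictly cheaper contractions. Note this toy uses simple bond slicing as the partitioning scheme, which is an assumption for illustration only, not the paper's PNE construction; all tensor names and dimensions are made up for the demo.

```python
import numpy as np

# Toy illustration (not the authors' PNE algorithm): contract a small ring
# of tensors exactly, then re-express the contraction as a sum of cheaper
# networks by "slicing" one bond index. Each term in the sum fixes that
# index to a single value, so each summand is a strictly smaller
# contraction; the sum of all terms reproduces the exact result.

D = 4  # bond dimension (arbitrary choice for the demo)
rng = np.random.default_rng(0)

# Four tensors forming a closed ring A-B-C-E, all bonds of dimension D.
A = rng.standard_normal((D, D))
B = rng.standard_normal((D, D))
C = rng.standard_normal((D, D))
E = rng.standard_normal((D, D))

# Exact contraction of the ring: tr(A @ B @ C @ E).
exact = np.einsum('ij,jk,kl,li->', A, B, C, E)

# Partitioned form: split the A-B bond (index j) into a sum over its
# values. Each summand contracts a network in which that bond has been
# removed, so it is cheaper; summing the D terms recovers the exact value.
partitioned = sum(
    np.einsum('i,k,kl,li->', A[:, j], B[j, :], C, E)
    for j in range(D)
)

print(np.allclose(exact, partitioned))  # True
```

The point of the sketch is only the structure of the expansion: the exact contraction equals a sum of networks that are each individually cheaper to contract. The paper's contribution is a systematic, BP-informed way of choosing such a partition so that a truncated sum already gives an accurate approximation.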