Sign problem in tensor network contraction
Main authors: , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Summary: We investigate how the computational difficulty of contracting tensor networks depends on the sign structure of the tensor entries. Using results from computational complexity, we observe that the approximate contraction of tensor networks with only positive entries has lower complexity. This raises the question of how this transition in computational complexity manifests itself in the hardness of different contraction schemes. We pursue this question by studying random tensor networks with a varying bias towards positive entries.
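The abstract does not specify the random ensemble. A minimal sketch of one natural choice, i.i.d. Gaussian entries with a tunable mean `mu` so that `mu = 0` gives sign-balanced entries and large `mu` gives predominantly positive ones; the ensemble and the helper name `biased_random_tensor` are assumptions for illustration, not the paper's definitions:

```python
import numpy as np

def biased_random_tensor(shape, mu, rng):
    """Tensor with i.i.d. N(mu, 1) entries; mu tunes the bias
    towards positive values (mu = 0: sign-balanced,
    mu >> 1: predominantly positive)."""
    return mu + rng.standard_normal(shape)

rng = np.random.default_rng(0)
D = 4  # bond dimension
# A tiny "network": a ring of 6 matrices, contracted exactly to a scalar.
tensors = [biased_random_tensor((D, D), mu=0.5, rng=rng) for _ in range(6)]
value = np.trace(np.linalg.multi_dot(tensors))
print(value)
```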
First, we consider contraction via Monte Carlo sampling, and find that the transition from hard to easy occurs when the entries become predominantly positive; this can be seen as a tensor network manifestation of the Quantum Monte Carlo sign problem.
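As a hedged illustration of what Monte Carlo contraction means here, the sketch below estimates the contraction value of a ring of matrices by sampling index configurations uniformly. The estimator and the name `mc_contract_ring` are illustrative assumptions rather than the paper's sampling scheme, but they show the mechanism: with sign-balanced entries the sampled weights cancel and the variance explodes, a toy version of the sign problem.

```python
import numpy as np

def mc_contract_ring(tensors, n_samples, rng):
    """Monte Carlo estimate of tr(M_0 M_1 ... M_{n-1}): sample uniform
    index configurations (i_0, ..., i_{n-1}) and average the weight
    prod_k M_k[i_k, i_{k+1}], rescaled by the number D**n of configurations."""
    n, D = len(tensors), tensors[0].shape[0]
    total = 0.0
    for _ in range(n_samples):
        idx = rng.integers(D, size=n)
        w = 1.0
        for k in range(n):
            w *= tensors[k][idx[k], idx[(k + 1) % n]]
        total += w
    return (D ** n) * total / n_samples

rng = np.random.default_rng(1)
tensors = [0.5 + rng.standard_normal((4, 4)) for _ in range(6)]
exact = np.trace(np.linalg.multi_dot(tensors))
print(exact, mc_contract_ring(tensors, 100_000, rng))
```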
Second, we analyze the commonly used contraction based on boundary tensor networks. Its performance is governed by the amount of correlations (entanglement) in the tensor network. Remarkably, we find that the transition from hard to easy (i.e., from a volume-law to a boundary-law scaling of entanglement) occurs already for a slight bias towards a positive mean, and it occurs earlier the larger the bond dimension is. This is in contrast both to expectations and to the behavior found in Monte Carlo contraction. We gain further insight into this early transition from the study of an effective statistical mechanics model.
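Boundary contraction absorbs the network row by row into a boundary tensor network and compresses it after each step; the entanglement across the boundary decides how large the kept bond dimension `chi` must be. The following sketch (the helper name `truncate_boundary` is hypothetical) shows only the core SVD truncation primitive and the entanglement entropy of the cut, not the full scheme:

```python
import numpy as np

def truncate_boundary(theta, chi):
    """One compression step: split a two-site boundary tensor by SVD
    and keep the chi largest singular values.  The decay of the
    spectrum decides how large chi must be: a boundary law means a
    fixed chi suffices, a volume law forces chi to grow with size."""
    u, s, vh = np.linalg.svd(theta, full_matrices=False)
    u, s, vh = u[:, :chi], s[:chi], vh[:chi, :]
    p = s**2 / np.sum(s**2)            # normalized entanglement spectrum
    entropy = -np.sum(p * np.log(p))   # entanglement entropy of the cut
    return u, s, vh, entropy

rng = np.random.default_rng(3)
theta = rng.standard_normal((16, 16))
u, s, vh, entropy = truncate_boundary(theta, chi=8)
print(s.shape, entropy)
```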
Finally, we investigate the computational difficulty of computing expectation values of tensor network wavefunctions, i.e., PEPS, where we find that the complexity of entanglement-based contraction always remains low. We explain this by providing a local transformation which maps PEPS expectation values to a positive-valued tensor network. This not only provides insight into the origin of the observed boundary-law entanglement scaling, but also suggests new approaches towards PEPS contraction based on positive decompositions.
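The local transformation itself is not spelled out in the abstract. As a hedged illustration of the structure it can exploit, the standard double-layer tensor of a PEPS norm network is positive semidefinite once all ket indices are grouped against all bra indices; the sketch below verifies this for a random PEPS tensor (the dimensions `d`, `D` are illustrative choices).

```python
import numpy as np

rng = np.random.default_rng(2)
d, D = 2, 3   # physical and bond dimension (illustrative)
A = (rng.standard_normal((d, D, D, D, D))
     + 1j * rng.standard_normal((d, D, D, D, D)))

# Double-layer tensor of the norm network <psi|psi>:
# E[(l,l'),(r,r'),(u,u'),(d,d')] = sum_s A[s,l,r,u,d] * conj(A[s,l',r',u',d'])
E = np.einsum('slrud,sLRUD->lLrRuUdD', A, A.conj())

# Grouping ket indices (l,r,u,d) against bra indices (l',r',u',d'),
# E is a sum of rank-1 projectors sum_s |a_s><a_s|, hence PSD.
M = E.transpose(0, 2, 4, 6, 1, 3, 5, 7).reshape(D**4, D**4)
eig = np.linalg.eigvalsh(M)
print(eig.min() >= -1e-10)   # True: no negative eigenvalues
```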
DOI: 10.48550/arxiv.2404.19023