Positive bias makes tensor-network contraction tractable
Saved in:

Main authors: | , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Summary: Tensor network contraction is a powerful computational tool in quantum many-body physics, quantum information and quantum chemistry. The complexity of contracting a tensor network is thought to mainly depend on its entanglement properties, as reflected by the Schmidt rank across bipartite cuts. Here, we study how the complexity of tensor-network contraction depends on a different notion of quantumness, namely, the sign structure of its entries. We tackle this question rigorously by investigating the complexity of contracting tensor networks whose entries have a positive bias.
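For concreteness, here is a minimal sketch (our illustration, not a construction from the paper) of a positively biased random tensor network: a ring of n matrices of bond dimension d whose entries are i.i.d. Gaussian with mean mu and unit variance, so that mu = 0 is the zero-mean ensemble and mu ≈ 1/d the biased one studied here; the contraction value of this periodic 1D network is the trace of the matrix product.

```python
# Minimal illustration (assumed setup, not the paper's construction):
# a periodic 1D tensor network -- a ring of n matrices with i.i.d.
# N(mu, 1) entries -- whose contraction value is tr(A_1 A_2 ... A_n).
import numpy as np

def ring_contraction(n=8, d=16, mu=0.0, seed=0):
    rng = np.random.default_rng(seed)
    tensors = [mu + rng.standard_normal((d, d)) for _ in range(n)]
    prod = np.eye(d)
    for A in tensors:
        prod = prod @ A
    return np.trace(prod)

# mu = 0 is the hard zero-mean ensemble; mu ~ 1/d is the positive bias
# at which the complexity transition is located.
print(ring_contraction(mu=0.0))
print(ring_contraction(mu=1 / 16))
```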
We show that for intermediate bond dimension d ≳ n, a small positive mean value ≳ 1/d of the tensor entries already dramatically decreases the computational complexity of approximately contracting random tensor networks, enabling a quasi-polynomial time algorithm for arbitrary 1/poly(n) multiplicative approximation. At the same time, exactly contracting such tensor networks remains #P-hard, as in the zero-mean case [HHEG20]. The mean value 1/d matches the phase transition point observed in [CJHS24]. Our proof makes use of Barvinok's method for approximate counting and the technique of mapping random instances to statistical mechanical models.
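Barvinok's method estimates log p(1) for a polynomial p by a truncated Taylor series at 0, which is accurate whenever p is zero-free in a disk of radius slightly larger than 1. The following toy sketch (our assumption-laden demo, not the paper's algorithm) recovers p(1) for a univariate polynomial specified by roots outside the unit disk, using only m low-order Taylor coefficients of log p.

```python
# Toy demo of Barvinok's interpolation method (our example, not the
# paper's algorithm): approximate p(1) for p(z) = prod_i (1 - z/r_i),
# all roots |r_i| > 1, from the first m Taylor coefficients of log p,
# using log p(z) = -sum_{k>=1} (sum_i r_i**(-k)) * z**k / k.
import numpy as np

def barvinok_estimate(roots, m):
    roots = np.asarray(roots, dtype=complex)
    log_p1 = 0.0
    for k in range(1, m + 1):
        power_sum = np.sum(roots ** (-float(k)))  # k-th Taylor datum of log p
        log_p1 = log_p1 - power_sum / k           # truncated series at z = 1
    return np.exp(log_p1).real                    # real for conjugate-closed roots

roots = [2.0, -3.0, 1.5 + 1.0j, 1.5 - 1.0j]       # all outside the unit disk
exact = np.prod([1 - 1 / r for r in roots]).real
for m in (2, 5, 10, 20):                          # error decays exponentially in m
    print(m, barvinok_estimate(roots, m), exact)
```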
We further consider the worst-case complexity of approximate contraction of positive tensor networks, where all entries are non-negative. We first give a simple proof showing that a multiplicative approximation with error exponentially close to one is at least StoqMA-hard. We then show that when considering additive error in the matrix 1-norm, the contraction of positive tensor networks is BPP-complete. This result compares to Arad and Landau's [AL10] result, which shows that for general tensor networks, approximate contraction up to matrix 2-norm additive error is BQP-complete.
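To see why 1-norm additive error is classically tractable for positive tensor networks, note that a 1D instance can be written as s = 1^T M_L ... M_1 v with non-negative matrices. The sketch below (our illustration in the spirit of the BPP upper bound, not the paper's proof) estimates s by column-stochastic path sampling; each sample lies in [0, |v|_1 prod_k ||M_k||_1], so averaging poly(1/eps) samples yields additive error on exactly that 1-norm scale.

```python
# Assumed illustration (not the paper's proof): unbiased Monte Carlo
# estimator for s = 1^T M_L ... M_1 v with entrywise non-negative
# matrices, via column-stochastic path sampling.  Every sample lies in
# [0, |v|_1 * prod_k ||M_k||_1], giving additive error on that scale.
import numpy as np

def sample_path_estimate(matrices, v, rng):
    weight = v.sum()
    j = rng.choice(len(v), p=v / v.sum())
    for M in matrices:
        col = M[:, j]
        csum = col.sum()                  # <= ||M||_1 (max column sum)
        weight *= csum
        j = rng.choice(len(col), p=col / csum)
    return weight                         # the final 1^T is the implicit sum

rng = np.random.default_rng(0)
d, L = 5, 6
matrices = [rng.random((d, d)) for _ in range(L)]   # non-negative entries
v = rng.random(d)

exact = (np.linalg.multi_dot(matrices[::-1]) @ v).sum()
estimates = [sample_path_estimate(matrices, v, rng) for _ in range(20000)]
print(exact, np.mean(estimates))
```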
DOI: 10.48550/arxiv.2410.05414