Density Matrix Renormalization Group with Tensor Processing Units

Bibliographic Details
Published in: arXiv.org 2022-04
Main Authors: Ganahl, Martin; Beall, Jackson; Hauru, Markus; Lewis, Adam G M; Yoo, Jae Hyeon; Zou, Yijian; Vidal, Guifre
Format: Article
Language: English
Online Access: Full text
Description
Summary: Google's Tensor Processing Units (TPUs) are integrated circuits specifically built to accelerate and scale up machine learning workloads. They can perform fast distributed matrix multiplications and can therefore be repurposed for other computationally intensive tasks. In this work we demonstrate the use of TPUs for accelerating and scaling up the density matrix renormalization group (DMRG), a powerful numerical approach to compute the ground state of a local quantum many-body Hamiltonian. The cost of DMRG scales with system size \(N\) as \(O(ND^3)\), where the so-called bond dimension \(D\) regulates how expressive the underlying matrix product state (MPS) variational ansatz is. We consider lattice models in two spatial dimensions, with square lattices of size \(10\times 10\) (free fermions) and \(20\times 20\) (transverse field Ising model), for which the required MPS bond dimension is known to scale at least as \(\exp(\sqrt{N})\). Using half of a TPU v3 pod (namely \(1,\!024\) TPU v3 cores) we reached an unprecedentedly large bond dimension \(D = 2^{16} = 65,\!536\), for which optimizing a single MPS tensor took about 2 minutes.
ISSN: 2331-8422
DOI: 10.48550/arxiv.2204.05693
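
The abstract notes that the \(O(ND^3)\) cost of a DMRG sweep is dominated by large dense tensor contractions, which is why distributed matrix multiplication on TPUs pays off. Below is a minimal, illustrative JAX sketch of such a sharded matrix multiplication. It is not the authors' code: the bond dimension D, the mesh layout, and the variable names are assumptions chosen purely for illustration.

import jax
import jax.numpy as jnp
import numpy as np
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# Illustrative bond dimension; the paper reaches D = 2**16 = 65,536 on 1,024 TPU v3 cores.
D = 4096

# One-dimensional device mesh over whatever accelerators are available (TPU cores, GPUs, or CPU).
mesh = Mesh(np.array(jax.devices()), axis_names=("x",))

# Shard the rows of A across devices and replicate B; XLA inserts the needed collectives.
a = jax.device_put(jnp.ones((D, D), jnp.bfloat16), NamedSharding(mesh, P("x", None)))
b = jax.device_put(jnp.ones((D, D), jnp.bfloat16), NamedSharding(mesh, P(None, None)))

@jax.jit
def matmul(a, b):
    # On TPUs this dense contraction runs on the matrix units; in DMRG a contraction of
    # this kind stands in for applying the effective Hamiltonian to a D x D tensor block.
    return a @ b

c = matmul(a, b)
print(c.shape, c.sharding)

Run on a single CPU the same script still works (the mesh then has one device), which makes it easy to prototype before moving to a TPU pod slice.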