TPU v4: An Optically Reconfigurable Supercomputer for Machine Learning with Hardware Support for Embeddings

In response to innovations in machine learning (ML) models, production workloads changed radically and rapidly. TPU v4 is the fifth Google domain specific architecture (DSA) and its third supercomputer for such ML models. Optical circuit switches (OCSes) dynamically reconfigure its interconnect topology to improve scale, availability, utilization, modularity, deployment, security, power, and performance; users can pick a twisted 3D torus topology if desired. Much cheaper, lower power, and faster than Infiniband, OCSes and underlying optical components are <5% of system cost and <3% of system power. Each TPU v4 includes SparseCores, dataflow processors that accelerate models that rely on embeddings by 5x-7x yet use only 5% of die area and power. Deployed since 2020, TPU v4 outperforms TPU v3 by 2.1x and improves performance/Watt by 2.7x. The TPU v4 supercomputer is 4x larger at 4096 chips and thus ~10x faster overall, which along with OCS flexibility helps large language models. For similar sized systems, it is ~4.3x-4.5x faster than the Graphcore IPU Bow and is 1.2x-1.7x faster and uses 1.3x-1.9x less power than the Nvidia A100. TPU v4s inside the energy-optimized warehouse scale computers of Google Cloud use ~3x less energy and produce ~20x less CO2e than contemporary DSAs in a typical on-premise data center.
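The interconnect the abstract describes is built around a 3D torus. As a rough illustration of what that topology means (a hypothetical sketch, not code from the paper — the function name, `dims` parameter, and 4x4x4 example are illustrative only), each chip at coordinate (x, y, z) links to its +/-1 neighbor along every axis, with wraparound, giving every chip exactly six links:

```python
# Hypothetical sketch: neighbor computation on a plain X*Y*Z 3D torus.
# Wraparound on each axis means edge chips have the same six links as
# interior chips; the twisted-torus variant the abstract mentions would
# change how the wraparound links are routed, not the link count.

def torus_neighbors(x, y, z, dims):
    X, Y, Z = dims
    neighbors = []
    for axis, size in ((0, X), (1, Y), (2, Z)):
        for step in (-1, 1):
            c = [x, y, z]
            c[axis] = (c[axis] + step) % size  # modular wraparound
            neighbors.append(tuple(c))
    return neighbors

# Corner chip (0, 0, 0) of a 4x4x4 torus (64 chips) wraps to (3, 0, 0),
# (0, 3, 0), and (0, 0, 3) in addition to its three +1 neighbors.
print(torus_neighbors(0, 0, 0, (4, 4, 4)))
```

The appeal of OCS reconfiguration in this setting is that those wraparound links are patch-panel choices rather than fixed cables, so the same chips can be regrouped into differently shaped (or twisted) tori without rewiring.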


Bibliographic Details
Main Authors: Jouppi, Norman P, Kurian, George, Li, Sheng, Ma, Peter, Nagarajan, Rahul, Nai, Lifeng, Patil, Nishant, Subramanian, Suvinay, Swing, Andy, Towles, Brian, Young, Cliff, Zhou, Xiang, Zhou, Zongwei, Patterson, David
Format: Article
Language: English
Online Access: Order full text
Published: 2023-04-03
Rights: http://creativecommons.org/licenses/by/4.0 (free to read)
Full text: https://arxiv.org/abs/2304.01433
DOI: 10.48550/arxiv.2304.01433
Source: arXiv.org
Subjects: Computer Science - Artificial Intelligence
Computer Science - Hardware Architecture
Computer Science - Learning
Computer Science - Performance