Artificial intelligence workload migration for planet-scale artificial intelligence infrastructure service

The disclosure herein describes platform-level migration for deep learning training (DLT) jobs from a checkpointed state between a source node and a destination node. The checkpointing is performed through capturing GPU state (e.g., device state) and CPU state (e.g., host state). The GPU state includes GPU data (e.g., model parameters, optimizer state, etc.) that is located in the GPU and GPU context (e.g., the default stream in GPU, various handles created by libraries). Restoring the DLT job on the destination node involves resumption of processing of a destination GPU at the same checkpointed state.

Detailed description

Saved in:
Bibliographic details
Main authors: Sivathanu, Muthian, Nehme, Rimma Vladimirovna, Xun, Lu, Shukla, Dharma Kiritkumar
Format: Patent
Language: eng
Subjects:
Online access: Order full text
creator Sivathanu, Muthian
Nehme, Rimma Vladimirovna
Xun, Lu
Shukla, Dharma Kiritkumar
description The disclosure herein describes platform-level migration for deep learning training (DLT) jobs from a checkpointed state between a source node and a destination node. The checkpointing is performed through capturing GPU state (e.g., device state) and CPU state (e.g., host state). The GPU state includes GPU data (e.g., model parameters, optimizer state, etc.) that is located in the GPU and GPU context (e.g., the default stream in GPU, various handles created by libraries). Restoring the DLT job on the destination node involves resumption of processing of a destination GPU at the same checkpointed state.
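The migration flow described above, checkpointing training state on a source node and resuming a destination node from the exact same state, can be sketched in plain Python. This is a minimal illustrative stand-in, not the patented mechanism: the patent additionally captures GPU device state and context (streams, library handles), which a simple serialization of host-visible state like this does not model. All names here (`sgd_step`, `train`) are hypothetical.

```python
import pickle

def sgd_step(state, grad, lr=0.1, momentum=0.9):
    # One SGD-with-momentum update on a single scalar parameter.
    # `state` plays the role of model parameters + optimizer state.
    state["velocity"] = momentum * state["velocity"] + grad
    state["param"] -= lr * state["velocity"]
    state["step"] += 1

def train(state, grads):
    for g in grads:
        sgd_step(state, g)
    return state

grads = [0.5, -0.2, 0.3, 0.1]

# Uninterrupted run (reference behavior on a single node).
full = train({"param": 1.0, "velocity": 0.0, "step": 0}, grads)

# Interrupted run: checkpoint after two steps, "migrate", then resume.
src = train({"param": 1.0, "velocity": 0.0, "step": 0}, grads[:2])
blob = pickle.dumps(src)    # checkpoint the captured state
dst = pickle.loads(blob)    # restore on the "destination node"
train(dst, grads[2:])       # resume from the checkpointed state

# The resumed run is bit-identical to the uninterrupted one.
assert dst == full
```

Because the restored state is byte-for-byte identical and the update sequence is deterministic, the migrated job produces exactly the results the source node would have produced, which is the property the disclosure relies on when resuming a DLT job on a destination GPU.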
format Patent
fullrecord US12166829B2 (patent, published 2024-12-10; source: esp@cenet; open access: free_for_read)
fulltext fulltext_linktorsrc
identifier US12166829B2
language eng
recordid cdi_epo_espacenet_US12166829B2
source esp@cenet
subjects CALCULATING
COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
COMPUTING
COUNTING
ELECTRIC COMMUNICATION TECHNIQUE
ELECTRIC DIGITAL DATA PROCESSING
ELECTRICITY
IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
PHYSICS
TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
title Artificial intelligence workload migration for planet-scale artificial intelligence infrastructure service
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-30T23%3A44%3A51IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=Sivathanu,%20Muthian&rft.date=2024-12-10&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3EUS12166829B2%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true