Distributed Machine Learning System

A distributed machine learning system and method are disclosed. According to some implementations of this disclosure, the method includes identifying one or more available computing resources and receiving a task object that indicates a training job to perform. The method includes retrieving a container image based on the type of model architecture. The container image includes the model architecture and a filesystem. The method includes retrieving and mounting a base model to the filesystem of the container image. The method further includes retrieving and mounting a volume of training data to the filesystem of the container image to obtain a training container. In some implementations, the method further includes executing the training container on at least one of the one or more available computing resources and receiving a trained model from the container after the container completes the training job. The method further includes storing the trained model.
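The workflow summarized in the abstract can be illustrated with a short sketch. The Python snippet below is illustrative only and is not taken from the disclosure: it assumes a Docker-style container runtime and the Docker SDK for Python, and the task-object fields, image names, and mount paths are hypothetical placeholders chosen for readability.

    import docker  # Docker SDK for Python (pip install docker); the patent does not mandate a runtime
    from dataclasses import dataclass

    @dataclass
    class TaskObject:
        # Hypothetical task object indicating a training job to perform.
        model_architecture: str  # used to select the container image
        base_model_path: str     # host path of the base model to mount
        data_path: str           # host path of the training-data volume

    # Illustrative mapping from model-architecture type to container image.
    IMAGE_BY_ARCHITECTURE = {
        "resnet": "registry.example.com/train/resnet:latest",
        "transformer": "registry.example.com/train/transformer:latest",
    }

    def build_training_container(task: TaskObject):
        client = docker.from_env()
        # Retrieve the container image based on the type of model architecture.
        image = IMAGE_BY_ARCHITECTURE[task.model_architecture]
        client.images.pull(image)
        # Mount the base model and the training-data volume into the image's
        # filesystem to obtain a training container (created but not yet started).
        return client.containers.create(
            image,
            command="python /opt/train.py",
            volumes={
                task.base_model_path: {"bind": "/mnt/base_model", "mode": "ro"},
                task.data_path: {"bind": "/mnt/training_data", "mode": "ro"},
            },
        )

In a real deployment the created container would then be dispatched to one of the identified computing resources; a rough sketch of that step follows the description field further below.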

Detailed Description

Bibliographic Details
Main Authors: Srinivasan, Nikhil Vikram; Kern, Alexander Simon
Format: Patent
Language: eng
Subjects:
Online Access: Order full text
creator Srinivasan, Nikhil Vikram; Kern, Alexander Simon
description A distributed machine learning system and method are disclosed. According to some implementations of this disclosure, the method includes identifying one or more available computing resources and receiving a task object that indicates a training job to perform. The method includes retrieving a container image based on the type of model architecture. The container image includes the model architecture and a filesystem. The method includes retrieving and mounting a base model to the filesystem of the container image. The method further includes retrieving and mounting a volume of training data to the filesystem of the container image to obtain a training container. In some implementations, the method further includes executing the training container on at least one of the one or more available computing resources and receiving a trained model from the container after the container completes the training job. The method further includes storing the trained model.
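As a continuation of the sketch given after the abstract above (same assumptions: a Docker-style runtime via the Docker SDK for Python; the command, paths, and model-store location are placeholders rather than details from the disclosure), executing the training container, receiving the trained model, and storing it might look roughly like this:

    import shutil
    import docker

    def run_training_job(image, volumes, model_store="/srv/models"):
        client = docker.from_env()
        # Add a writable mount so the trained model can be received back from
        # the container after it completes the training job.
        volumes = dict(volumes)
        volumes["/srv/jobs/output"] = {"bind": "/mnt/output", "mode": "rw"}
        container = client.containers.run(
            image,
            command="python /opt/train.py --output /mnt/output/model.pt",
            volumes=volumes,
            detach=True,
        )
        result = container.wait()  # block until the training job finishes
        if result.get("StatusCode", 1) != 0:
            raise RuntimeError(container.logs(tail=50).decode())
        container.remove()
        # Store the trained model (here: copy it into a local model store).
        return shutil.copy("/srv/jobs/output/model.pt", model_store)

A scheduler component would typically call a function like this on whichever of the identified computing resources it selects for the job.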
format Patent
fulltext fulltext_linktorsrc
language eng
recordid cdi_epo_espacenet_US2018300653A1
source esp@cenet
subjects CALCULATING
COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
COMPUTING
COUNTING
ELECTRIC COMMUNICATION TECHNIQUE
ELECTRIC DIGITAL DATA PROCESSING
ELECTRICITY
PHYSICS
TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
title Distributed Machine Learning System
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-12T00%3A05%3A10IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=Srinivasan,%20Nikhil%20Vikram&rft.date=2018-10-18&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3EUS2018300653A1%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true