TensorFlow Estimators: Managing Simplicity vs. Flexibility in High-Level Machine Learning Frameworks
We present a framework for specifying, training, evaluating, and deploying machine learning models. Our focus is on simplifying cutting-edge machine learning for practitioners in order to bring such technologies into production. Recognizing the fast evolution of the field of deep learning, we make no attempt to capture the design space of all possible model architectures in a domain-specific language (DSL) or similar configuration language. We allow users to write code to define their models, but provide abstractions that guide developers to write models in ways conducive to productionization. We also provide a unifying Estimator interface, making it possible to write downstream infrastructure (e.g. distributed training, hyperparameter tuning) independent of the model implementation. We balance the competing demands for flexibility and simplicity by offering APIs at different levels of abstraction, making common model architectures available out of the box, while providing a library of utilities designed to speed up experimentation with model architectures. To make out-of-the-box models flexible and usable across a wide range of problems, these canned Estimators are parameterized not only over traditional hyperparameters, but also using feature columns, a declarative specification describing how to interpret input data. We discuss our experience in using this framework in research and production environments, and show the impact on code health, maintainability, and development speed.
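The abstract's central abstraction, feature columns driving a canned Estimator, can be sketched as below. This is a minimal illustration assuming the TensorFlow 1.x `tf.estimator` and `tf.feature_column` APIs the paper describes; the feature names, toy data, and `model_dir` path are hypothetical, not from the paper.

```python
import tensorflow as tf  # assumes the TensorFlow 1.x Estimator API

# Feature columns: a declarative description of how raw input
# features should be interpreted by the model.
age = tf.feature_column.numeric_column("age")
occupation = tf.feature_column.categorical_column_with_hash_bucket(
    "occupation", hash_bucket_size=100)
occupation_emb = tf.feature_column.embedding_column(occupation, dimension=8)

# A canned Estimator: a common architecture available out of the box,
# parameterized by hyperparameters (hidden_units) and feature columns.
estimator = tf.estimator.DNNClassifier(
    feature_columns=[age, occupation_emb],
    hidden_units=[64, 32],
    n_classes=2,
    model_dir="/tmp/demo_model")  # hypothetical checkpoint directory

def train_input_fn():
    # Hypothetical in-memory toy data; real input_fns read training files.
    features = {"age": [25.0, 42.0, 31.0, 58.0],
                "occupation": ["engineer", "teacher", "engineer", "doctor"]}
    labels = [1, 0, 1, 0]
    dataset = tf.data.Dataset.from_tensor_slices((features, labels))
    return dataset.shuffle(4).repeat().batch(2)

estimator.train(input_fn=train_input_fn, steps=100)
```

Because the feature columns carry the input interpretation, swapping `DNNClassifier` for another canned architecture such as `tf.estimator.LinearClassifier` changes the model without touching the input pipeline.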
Saved in:
Published in: | arXiv.org 2017-08 |
---|---|
Main authors: | Heng-Tze Cheng; Haque, Zakaria; Hong, Lichan; Ispir, Mustafa; Mewald, Clemens; Polosukhin, Illia; Roumpos, Georgios; Sculley, D; Smith, Jamie; Soergel, David; Tang, Yuan; Tucker, Philipp; Wicke, Martin; Xia, Cassandra; Xie, Jianwei |
Format: | Article |
Language: | eng |
Subjects: | Artificial intelligence; Computer Science - Distributed, Parallel, and Cluster Computing; Computer Science - Learning; Domain specific languages; Estimators; Experimentation; Flexibility; Machine learning; Maintainability; Training; Utilities |
Online access: | Full text |
container_title | arXiv.org |
---|---|
creator | Heng-Tze Cheng; Haque, Zakaria; Hong, Lichan; Ispir, Mustafa; Mewald, Clemens; Polosukhin, Illia; Roumpos, Georgios; Sculley, D; Smith, Jamie; Soergel, David; Tang, Yuan; Tucker, Philipp; Wicke, Martin; Xia, Cassandra; Xie, Jianwei |
description | We present a framework for specifying, training, evaluating, and deploying machine learning models. Our focus is on simplifying cutting edge machine learning for practitioners in order to bring such technologies into production. Recognizing the fast evolution of the field of deep learning, we make no attempt to capture the design space of all possible model architectures in a domain-specific language (DSL) or similar configuration language. We allow users to write code to define their models, but provide abstractions that guide developers to write models in ways conducive to productionization. We also provide a unifying Estimator interface, making it possible to write downstream infrastructure (e.g. distributed training, hyperparameter tuning) independent of the model implementation. We balance the competing demands for flexibility and simplicity by offering APIs at different levels of abstraction, making common model architectures available out of the box, while providing a library of utilities designed to speed up experimentation with model architectures. To make out of the box models flexible and usable across a wide range of problems, these canned Estimators are parameterized not only over traditional hyperparameters, but also using feature columns, a declarative specification describing how to interpret input data. We discuss our experience in using this framework in research and production environments, and show the impact on code health, maintainability, and development speed. |
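To illustrate the unifying Estimator interface the description mentions, the sketch below writes a custom model as a `model_fn` and wraps it in `tf.estimator.Estimator`, so downstream infrastructure (distributed training, hyperparameter tuning) can stay model-agnostic. It assumes the TensorFlow 1.x API; the single feature `x` and the `lr` parameter are illustrative, not from the paper.

```python
import tensorflow as tf  # assumes the TensorFlow 1.x Estimator API

def model_fn(features, labels, mode, params):
    """User-written model code: logistic regression on one numeric
    feature. The names "x" and "lr" are illustrative only."""
    net = tf.feature_column.input_layer(
        features, [tf.feature_column.numeric_column("x")])
    logits = tf.layers.dense(net, units=1)

    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(
            mode, predictions={"logits": logits})

    labels = tf.cast(tf.reshape(labels, [-1, 1]), tf.float32)
    loss = tf.losses.sigmoid_cross_entropy(labels, logits)

    if mode == tf.estimator.ModeKeys.EVAL:
        return tf.estimator.EstimatorSpec(mode, loss=loss)

    train_op = tf.train.AdagradOptimizer(params["lr"]).minimize(
        loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

# Downstream tooling sees only the uniform Estimator surface:
# train(), evaluate(), predict().
estimator = tf.estimator.Estimator(model_fn=model_fn, params={"lr": 0.1})
```

Because canned and custom models present the same `train`/`evaluate`/`predict` surface, utilities such as distributed training can be written once against `Estimator` rather than per model.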
doi_str_mv | 10.48550/arxiv.1708.02637 |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2017-08 |
issn | 2331-8422 |
language | eng |
recordid | cdi_arxiv_primary_1708_02637 |
source | arXiv.org; Free E-Journals |
subjects | Artificial intelligence; Computer Science - Distributed, Parallel, and Cluster Computing; Computer Science - Learning; Domain specific languages; Estimators; Experimentation; Flexibility; Machine learning; Maintainability; Training; Utilities |
title | TensorFlow Estimators: Managing Simplicity vs. Flexibility in High-Level Machine Learning Frameworks |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-30T23%3A29%3A03IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_arxiv&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=TensorFlow%20Estimators:%20Managing%20Simplicity%20vs.%20Flexibility%20in%20High-Level%20Machine%20Learning%20Frameworks&rft.jtitle=arXiv.org&rft.au=Heng-Tze%20Cheng&rft.date=2017-08-08&rft.eissn=2331-8422&rft_id=info:doi/10.48550/arxiv.1708.02637&rft_dat=%3Cproquest_arxiv%3E2075805258%3C/proquest_arxiv%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2075805258&rft_id=info:pmid/&rfr_iscdi=true |