GPU Accelerated Automatic Differentiation With Clad


Bibliographic Details

Published in: Journal of Physics: Conference Series, 2023-02, Vol. 2438 (1), p. 012043
Authors: Ifrim, Ioana; Vassilev, Vassil; Lange, David J
Format: Article
Language: English
Publisher: IOP Publishing, Bristol
ISSN: 1742-6588
EISSN: 1742-6596
DOI: 10.1088/1742-6596/2438/1/012043
Online access: Full text
Subjects: C++ (programming language); Compilers; Differentiation; Domains; Graphics processing units; Histograms; Machine learning; Optimization; Physics; Robotics
Abstract

Automatic Differentiation (AD) is instrumental for science and industry. It is a tool to evaluate the derivative of a function specified through a computer program. AD's application domains range from Machine Learning to Robotics to High Energy Physics. Computing gradients with AD is guaranteed to be more precise than the numerical alternative and incurs only a low, constant factor more arithmetic operations than the original function. Moreover, AD applications to domain problems are typically compute-bound: they are often limited by the computational requirements of high-dimensional parameters and thus can benefit from parallel implementations on graphics processing units (GPUs). Clad is a compiler-assisted AD tool that aims to enable differential analysis for C/C++ and CUDA, and it is available both as a compiler extension and in ROOT. Clad works as a plugin extending the Clang compiler, as a plugin extending the interactive interpreter Cling, and as a Jupyter kernel extension based on xeus-cling. We demonstrate the advantages of parallel gradient computations on GPUs with Clad and explain how extending Clad to support CUDA brings forth a new layer of optimization and a proportional speedup. The gradients of well-behaved C++ functions can be automatically executed on a GPU. The library can be easily integrated into existing frameworks or used interactively. Furthermore, we demonstrate the achieved application performance improvements, including a ≈10x speedup in ROOT histogram fitting and corresponding performance gains from offloading to GPUs.
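To make the workflow the abstract describes concrete, below is a minimal sketch of requesting a gradient through Clad's clad::gradient interface. The function gaus_like and its evaluation point are illustrative assumptions, not taken from the paper; building the example assumes Clang invoked with the Clad plugin (or a ROOT/Cling session where Clad is preloaded).

```cpp
// Minimal sketch: differentiating an ordinary C++ function with Clad.
// Assumes Clang is run with the Clad plugin loaded (or a ROOT/Cling
// session); gaus_like is a hypothetical example, not from the paper.
#include "clad/Differentiator/Differentiator.h"
#include <cmath>
#include <cstdio>

// A "well-behaved" C++ function of two parameters.
double gaus_like(double x, double sigma) {
  return std::exp(-x * x / (2.0 * sigma * sigma));
}

int main() {
  // Clad generates the gradient function at compile time,
  // rather than approximating derivatives numerically at run time.
  auto grad = clad::gradient(gaus_like);

  // Evaluate both partial derivatives at (x = 1.0, sigma = 2.0);
  // the results are accumulated into the output arguments.
  double dx = 0.0, dsigma = 0.0;
  grad.execute(1.0, 2.0, &dx, &dsigma);
  std::printf("d/dx = %g, d/dsigma = %g\n", dx, dsigma);
  return 0;
}
```

Per the abstract, the same machinery extends to CUDA, so gradients of well-behaved functions like this one can be offloaded to a GPU; the exact offloading interface is not reproduced here.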