Backpropagation through nonlinear units for the all-optical training of neural networks

We propose a practical scheme for end-to-end optical backpropagation in neural networks. Using saturable absorption for the nonlinear units, we find that the backward-propagating gradients required to train the network can be approximated in a surprisingly simple pump-probe scheme that requires only passive optical elements. Simulations show that, with readily obtainable optical depths, our approach can achieve equivalent performance to state-of-the-art computational networks on image classification benchmarks, even in deep networks with multiple sequential gradient approximations. With backpropagation through nonlinear units being an outstanding challenge to the field, this work provides a feasible path toward truly all-optical neural networks.

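The pump-probe gradient readout described in the abstract can be illustrated numerically: a weak backward probe passing through the absorber already saturated by the forward pump experiences a transmission that serves as an approximation of the activation's derivative. Below is a minimal sketch of that idea, assuming a simple thin-absorber transmission model T(I) = exp(-OD / (1 + I/I_sat)); this model, the optical depth OD = 1, and all function names are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch only: compares the true gradient of a toy
# saturable-absorber (SA) activation with a pump-probe approximation.
# The transmission model and constants are assumptions for illustration,
# not taken from the paper (doi:10.1364/PRJ.411104).
import numpy as np

OD = 1.0     # optical depth of the absorber (assumed value)
I_SAT = 1.0  # saturation intensity; intensities below are in units of I_sat

def transmission(intensity):
    """Intensity transmission of the saturable absorber."""
    return np.exp(-OD / (1.0 + intensity / I_SAT))

def activation(intensity):
    """SA nonlinear unit acting on intensity: output = input * T(input)."""
    return intensity * transmission(intensity)

def true_gradient(intensity, eps=1e-6):
    """Exact d(activation)/d(intensity), via central finite differences."""
    return (activation(intensity + eps) - activation(intensity - eps)) / (2 * eps)

def pump_probe_gradient(pump_intensity):
    """Pump-probe estimate: a weak backward probe sees the absorber
    saturated by the forward pump, so its transmission T(I_pump) is
    used as the gradient approximation."""
    return transmission(pump_intensity)

intensities = np.linspace(0.0, 5.0, 11)
for i, exact, approx in zip(intensities,
                            true_gradient(intensities),
                            pump_probe_gradient(intensities)):
    print(f"I = {i:4.1f}   exact grad = {exact:.3f}   pump-probe = {approx:.3f}")
```

In this toy model the probe transmission matches the true derivative at low pump power and increasingly underestimates it as the absorber saturates; the paper's simulations indicate that a gradient approximation of this kind is nevertheless accurate enough to train deep networks to the accuracy of conventional computational networks.
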
Bibliographic details

Published in: Photonics Research (Washington, DC), 2021-03, Vol. 9 (3), p. B71
Authors: Guo, Xianxin; Barrett, Thomas D.; Wang, Zhiming M.; Lvovsky, A. I.
Format: Article
Language: English
Online access: Full text
DOI: 10.1364/PRJ.411104
ISSN: 2327-9125
EISSN: 2327-9125
Source: EZB-FREE-00999 freely available EZB journals; Optica Publishing Group Journals