A Negative Result on Gradient Matching for Selective Backprop

With increasing scale in model and dataset size, the training of deep neural networks becomes a massive computational burden. One approach to speed up the training process is Selective Backprop. For this approach, we perform a forward pass to obtain a loss value for each data point in a minibatch. The backward pass is then restricted to a subset of that minibatch, prioritizing high-loss examples. We build on this approach, but seek to improve the subset selection mechanism by choosing the (weighted) subset which best matches the mean gradient over the entire minibatch. We use the gradients w.r.t. the model's last layer as a cheap proxy, resulting in virtually no overhead beyond the forward pass. At the same time, for our experiments we add a simple random selection baseline which has been absent from prior work. Surprisingly, we find that both the loss-based and the gradient-matching strategies fail to consistently outperform the random baseline.
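The selection strategies compared in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch-style sketch, not the authors' implementation: all function names are made up for illustration, the penultimate-layer features are assumed to be available from the forward pass, and the gradient-matching step is simplified to an unweighted greedy selection rather than the weighted matching described above.

import torch
import torch.nn.functional as F


def per_example_losses(logits, targets):
    # Forward pass already done; compute one loss value per example (no reduction).
    return F.cross_entropy(logits, targets, reduction="none")


def select_topk_loss(losses, k):
    # Selective Backprop heuristic: keep the k highest-loss examples.
    return torch.topk(losses, k).indices


def select_random(batch_size, k):
    # Random baseline: keep k examples chosen uniformly at random.
    return torch.randperm(batch_size)[:k]


def last_layer_grad_proxy(logits, targets, features):
    # Cheap per-example gradient proxy: for softmax cross-entropy, the gradient
    # w.r.t. the last linear layer's weights is the outer product of
    # (softmax(logits) - one_hot(targets)) with the penultimate features.
    probs = torch.softmax(logits, dim=1)
    onehot = F.one_hot(targets, num_classes=logits.shape[1]).float()
    delta = probs - onehot                              # shape [B, C]
    grads = delta.unsqueeze(2) * features.unsqueeze(1)  # shape [B, C, D]
    return grads.flatten(start_dim=1)                   # shape [B, C*D]


def select_gradient_matching(grads, k):
    # Simplified gradient matching: greedily add examples so that the mean of
    # the selected gradients stays close to the mean gradient of the full batch.
    # (The paper matches with a weighted subset; this unweighted greedy loop
    # only conveys the idea.)
    target = grads.mean(dim=0)
    selected, remaining = [], list(range(grads.shape[0]))
    running_sum = torch.zeros_like(target)
    for step in range(k):
        best_idx, best_err = None, float("inf")
        for i in remaining:
            candidate_mean = (running_sum + grads[i]) / (step + 1)
            err = torch.norm(candidate_mean - target).item()
            if err < best_err:
                best_idx, best_err = i, err
        running_sum = running_sum + grads[best_idx]
        selected.append(best_idx)
        remaining.remove(best_idx)
    return torch.tensor(selected)

In a training loop, one would compute losses = per_example_losses(logits, targets) after the forward pass, obtain indices from one of the three selectors, and call losses[idx].mean().backward() so that only the selected subset contributes to the backward pass.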


Saved in:
Bibliographic Details
Main Authors: Balles, Lukas; Archambeau, Cedric; Zappella, Giovanni
Format: Article
Language: English
Subjects: Computer Science - Artificial Intelligence; Computer Science - Learning; Mathematics - Optimization and Control
Online Access: Order full text
creator Balles, Lukas; Archambeau, Cedric; Zappella, Giovanni
doi_str_mv 10.48550/arxiv.2312.05021
format Article
identifier DOI: 10.48550/arxiv.2312.05021
language eng
recordid cdi_arxiv_primary_2312_05021
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Learning
Mathematics - Optimization and Control
title A Negative Result on Gradient Matching for Selective Backprop
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-02T05%3A48%3A43IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=A%20Negative%20Result%20on%20Gradient%20Matching%20for%20Selective%20Backprop&rft.au=Balles,%20Lukas&rft.date=2023-12-08&rft_id=info:doi/10.48550/arxiv.2312.05021&rft_dat=%3Carxiv_GOX%3E2312_05021%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true