Efficient Gradient Estimation via Adaptive Sampling and Importance Sampling
Machine learning problems rely heavily on stochastic gradient descent (SGD) for optimization. The effectiveness of SGD is contingent upon accurately estimating gradients from a mini-batch of data samples. Instead of the commonly used uniform sampling, adaptive or importance sampling reduces noise in gradient estimation by forming mini-batches that prioritize crucial data points. Previous research has suggested that data points should be selected with probabilities proportional to their gradient norm. Nevertheless, existing algorithms have struggled to efficiently integrate importance sampling into machine learning frameworks. In this work, we make two contributions. First, we present an algorithm that can incorporate existing importance functions into our framework. Second, we propose a simplified importance function that relies solely on the loss gradient of the output layer. By leveraging our proposed gradient estimation techniques, we observe improved convergence in classification and regression tasks with minimal computational overhead. We validate the effectiveness of our adaptive and importance-sampling approach on image and point-cloud datasets.
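The abstract's core idea, sampling data points with probability proportional to their gradient norm and reweighting each sampled gradient by 1/(N·p_i) so the mini-batch estimate stays unbiased, can be sketched as follows. This is a minimal NumPy illustration under our own assumptions (toy data, function names, and the per-sample-gradient input are ours, not the paper's algorithm):

```python
import numpy as np

def importance_probs(grad_norms, eps=1e-8):
    """Sampling probabilities proportional to per-sample gradient norms."""
    g = np.asarray(grad_norms, dtype=float) + eps  # eps keeps every p_i > 0
    return g / g.sum()

def estimate_gradient(per_sample_grads, probs, batch_size, rng):
    """Unbiased mini-batch gradient estimate under importance sampling.

    Each sampled point i is reweighted by 1 / (N * p_i), so the
    estimator's expectation equals the full-data mean gradient
    for any valid sampling distribution p.
    """
    n = per_sample_grads.shape[0]
    idx = rng.choice(n, size=batch_size, replace=True, p=probs)
    weights = 1.0 / (n * probs[idx])  # importance weights
    return (weights[:, None] * per_sample_grads[idx]).mean(axis=0)

# Toy data: N per-sample gradients in R^3, a few with much larger norm
# standing in for the "crucial data points" the abstract mentions.
rng = np.random.default_rng(0)
grads = rng.normal(size=(1000, 3))
grads[:50] *= 20.0

probs = importance_probs(np.linalg.norm(grads, axis=1))
est = estimate_gradient(grads, probs, batch_size=64, rng=rng)
true_mean = grads.mean(axis=0)
```

Sampling proportional to gradient norm concentrates the batch on high-norm points; because each reweighted term w_i·g_i then has constant norm, the per-sample variance of the estimator drops relative to uniform sampling while the expectation is unchanged.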
Published in: | arXiv.org 2023-11 |
---|---|
Main authors: | Salaün, Corentin; Huang, Xingchang; Georgiev, Iliyan; Mitra, Niloy J; Singh, Gurprit |
Format: | Article |
Language: | eng |
Subjects: | Adaptive sampling; Algorithms; Data points; Effectiveness; Estimation; Importance sampling; Machine learning |
Online access: | Full text |
container_title | arXiv.org |
creator | Salaün, Corentin; Huang, Xingchang; Georgiev, Iliyan; Mitra, Niloy J; Singh, Gurprit |
description | Machine learning problems rely heavily on stochastic gradient descent (SGD) for optimization. The effectiveness of SGD is contingent upon accurately estimating gradients from a mini-batch of data samples. Instead of the commonly used uniform sampling, adaptive or importance sampling reduces noise in gradient estimation by forming mini-batches that prioritize crucial data points. Previous research has suggested that data points should be selected with probabilities proportional to their gradient norm. Nevertheless, existing algorithms have struggled to efficiently integrate importance sampling into machine learning frameworks. In this work, we make two contributions. First, we present an algorithm that can incorporate existing importance functions into our framework. Second, we propose a simplified importance function that relies solely on the loss gradient of the output layer. By leveraging our proposed gradient estimation techniques, we observe improved convergence in classification and regression tasks with minimal computational overhead. We validate the effectiveness of our adaptive and importance-sampling approach on image and point-cloud datasets. |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2023-11 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2894592188 |
source | Free E-Journals |
subjects | Adaptive sampling; Algorithms; Data points; Effectiveness; Estimation; Importance sampling; Machine learning |
title | Efficient Gradient Estimation via Adaptive Sampling and Importance Sampling |