Safe Feature Elimination for the LASSO and Sparse Supervised Learning Problems

We describe a fast method to eliminate features (variables) in l1-penalized least-square regression (or LASSO) problems. The elimination of features leads to a potentially substantial reduction in running time, especially for large values of the penalty parameter. Our method is not heuristic: it only eliminates features that are guaranteed to be absent after solving the LASSO problem. The feature elimination step is easy to parallelize and can test each feature for elimination independently. Moreover, the computational effort of our method is negligible compared to that of solving the LASSO problem - roughly it is the same as a single gradient step. Our method extends the scope of existing LASSO algorithms to treat larger data sets, previously out of their reach. We show how our method can be extended to general l1-penalized convex problems and present preliminary results for the Sparse Support Vector Machine and Logistic Regression problems.
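The abstract's claims (each feature tested independently, at roughly the cost of one gradient step) can be illustrated with a short NumPy sketch of the basic SAFE screening test for the LASSO, as it is commonly stated for this paper's setting; the function name and the toy data below are illustrative, not from the paper itself:

```python
import numpy as np

def safe_screen_lasso(X, y, lam):
    """Basic SAFE test for the LASSO  min_w 0.5*||X w - y||^2 + lam*||w||_1.

    Returns a boolean mask: True means the feature is kept; False means its
    optimal coefficient is guaranteed to be zero, so that column of X can be
    dropped before running any LASSO solver.  The dominant cost is one
    X^T y product (about one gradient step), and every feature is tested
    independently, so the loop parallelizes trivially.
    """
    corr = np.abs(X.T @ y)             # |x_j^T y| for every feature j
    lam_max = corr.max()               # smallest lam giving the all-zero solution
    if lam >= lam_max:
        # the optimal solution is identically zero: eliminate everything
        return np.zeros(X.shape[1], dtype=bool)
    col_norms = np.linalg.norm(X, axis=0)
    # feature j is provably zero at the optimum when |x_j^T y| falls below
    # lam - ||x_j|| * ||y|| * (lam_max - lam) / lam_max
    thresh = lam - col_norms * np.linalg.norm(y) * (lam_max - lam) / lam_max
    return corr >= thresh              # keep only features that pass the test

# toy usage: y is driven by feature 0, so that feature survives screening
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 20))
y = X[:, 0] + 0.1 * rng.standard_normal(50)
keep = safe_screen_lasso(X, y, 0.9 * np.abs(X.T @ y).max())
```

As the abstract notes, the test is conservative: it never discards a feature that would be nonzero at the optimum, and it becomes more aggressive as `lam` approaches `lam_max`.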


Bibliographic Details
Published in: arXiv.org, 2011-05
Main authors: El Ghaoui, Laurent; Viallon, Vivian; Rabbani, Tarek
Format: Article
Language: English
Subjects: Algorithms; Heuristic methods; Parallel processing; Run time (computers); Supervised learning; Support vector machines
Online access: Full text
EISSN: 2331-8422