Rule-Based Ensemble Solutions for Regression
We describe a lightweight learning method that induces an ensemble of decision-rule solutions for regression problems. Instead of direct prediction of a continuous output variable, the method discretizes the variable by k-means clustering and solves the resultant classification problem. Predictions on new examples are made by averaging the mean values of classes with votes that are close in number to the most likely class. We provide experimental evidence that this indirect approach can often yield strong results for many applications, generally outperforming direct approaches such as regression trees and rivaling bagged regression trees.
Saved in:
Main Authors: | Indurkhya, Nitin ; Weiss, Sholom M. |
---|---|
Format: | Book Chapter |
Language: | eng |
Subjects: | |
Online Access: | Full text |
container_end_page | 72 |
---|---|
container_issue | |
container_start_page | 62 |
container_title | Lecture notes in computer science |
container_volume | 2123 |
creator | Indurkhya, Nitin ; Weiss, Sholom M. |
description | We describe a lightweight learning method that induces an ensemble of decision-rule solutions for regression problems. Instead of direct prediction of a continuous output variable, the method discretizes the variable by k-means clustering and solves the resultant classification problem. Predictions on new examples are made by averaging the mean values of classes with votes that are close in number to the most likely class. We provide experimental evidence that this indirect approach can often yield strong results for many applications, generally outperforming direct approaches such as regression trees and rivaling bagged regression trees. |
doi_str_mv | 10.1007/3-540-44596-X_6 |
format | Book Chapter |
fullrecord | Book chapter: Rule-Based Ensemble Solutions for Regression. Authors: Indurkhya, Nitin ; Weiss, Sholom M. Contributor: Perner, Petra. In: Lecture Notes in Computer Science, 2001, Vol. 2123, pp. 62-72. Publisher: Springer Berlin / Heidelberg, Germany. ISSN: 0302-9743 ; EISSN: 1611-3349. ISBN: 9783540423591 ; 3540423591. EISBN: 354044596X ; 9783540445968. DOI: 10.1007/3-540-44596-X_6. OCLC: 958523040. LC call number: Q337.5. Rights: Springer-Verlag Berlin Heidelberg 2001 ; 2001 INIST-CNRS. |
fulltext | fulltext |
identifier | ISSN: 0302-9743 |
ispartof | Lecture notes in computer science, 2001, Vol.2123, p.62-72 |
issn | 0302-9743 ; 1611-3349 |
language | eng |
recordid | cdi_pascalfrancis_primary_1017646 |
source | Springer Books |
subjects | Applied sciences ; Artificial intelligence ; Computer science; control theory; systems ; Exact sciences and technology ; Learning and adaptive systems ; Mean Absolute Deviation ; Regression Problem ; Regression Tree ; Rule Induction ; Training Case |
title | Rule-Based Ensemble Solutions for Regression |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-08T07%3A52%3A47IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_pasca&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=bookitem&rft.atitle=Rule-Based%20Ensemble%20Solutions%20for%20Regression&rft.btitle=Lecture%20notes%20in%20computer%20science&rft.au=Indurkhya,%20Nitin&rft.date=2001&rft.volume=2123&rft.spage=62&rft.epage=72&rft.pages=62-72&rft.issn=0302-9743&rft.eissn=1611-3349&rft.isbn=9783540423591&rft.isbn_list=3540423591&rft_id=info:doi/10.1007/3-540-44596-X_6&rft_dat=%3Cproquest_pasca%3EEBC3071933_12_72%3C/proquest_pasca%3E%3Curl%3E%3C/url%3E&rft.eisbn=354044596X&rft.eisbn_list=9783540445968&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=EBC3071933_12_72&rft_id=info:pmid/&rfr_iscdi=true |
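The indirect scheme the abstract describes can be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's method: where the paper induces decision rules, simple 1-nearest-neighbor members trained on bootstrap samples stand in as classifiers, and all names (`kmeans_1d`, `fit_ensemble`, `predict`, the `margin` parameter) are assumptions introduced here for illustration.

```python
import numpy as np

def kmeans_1d(y, k, n_iter=25, seed=0):
    """Plain Lloyd's k-means on a 1-D target vector; returns cluster labels."""
    rng = np.random.default_rng(seed)
    centers = np.sort(rng.choice(y, size=k, replace=False))
    labels = np.zeros(len(y), dtype=int)
    for _ in range(n_iter):
        labels = np.argmin(np.abs(y[:, None] - centers[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = y[labels == c].mean()
    return labels

def fit_ensemble(X, y, n_classes=4, n_members=7, seed=0):
    """Discretize y via k-means, then build bootstrap 1-NN classifier members
    (a stand-in for the paper's induced decision rules)."""
    rng = np.random.default_rng(seed)
    labels = kmeans_1d(y, n_classes, seed=seed)
    class_means = np.array([y[labels == c].mean() if np.any(labels == c)
                            else y.mean() for c in range(n_classes)])
    members = []
    for _ in range(n_members):
        idx = rng.integers(0, len(y), size=len(y))  # bootstrap sample
        members.append((X[idx], labels[idx]))
    return members, class_means

def predict(members, class_means, Xnew, margin=0.8):
    """Each member votes for a class; the prediction averages the mean target
    values of classes whose vote counts are close to the winner's."""
    n, k = len(Xnew), len(class_means)
    votes = np.zeros((n, k))
    for Xb, lb in members:
        d = ((Xnew[:, None, :] - Xb[None, :, :]) ** 2).sum(axis=-1)
        pred = lb[np.argmin(d, axis=1)]  # 1-NN class vote per example
        votes[np.arange(n), pred] += 1.0
    out = np.empty(n)
    for i in range(n):
        near = votes[i] >= margin * votes[i].max()  # classes near the top vote
        out[i] = class_means[near].mean()
    return out
```

Because each prediction is an average of class means rather than a single class mean, the `margin` parameter smooths the output between adjacent discretized levels; the paper's vote-closeness criterion is analogous but not identical to this threshold.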