Learning monotone nonlinear models using the Choquet integral

The learning of predictive models that guarantee monotonicity in the input variables has received increasing attention in machine learning in recent years. By trend, the difficulty of ensuring monotonicity increases with the flexibility or, say, nonlinearity of a model. In this paper, we advocate the so-called Choquet integral as a tool for learning monotone nonlinear models. While being widely used as a flexible aggregation operator in different fields, such as multiple criteria decision making, the Choquet integral is much less known in machine learning so far. Apart from combining monotonicity and flexibility in a mathematically sound and elegant manner, the Choquet integral has additional features making it attractive from a machine learning point of view. Notably, it offers measures for quantifying the importance of individual predictor variables and the interaction between groups of variables. Analyzing the Choquet integral from a classification perspective, we provide upper and lower bounds on its VC-dimension. Moreover, as a methodological contribution, we propose a generalization of logistic regression. The basic idea of our approach, referred to as choquistic regression, is to replace the linear function of predictor variables, which is commonly used in logistic regression to model the log odds of the positive class, by the Choquet integral. First experimental results are quite promising and suggest that the combination of monotonicity and flexibility offered by the Choquet integral facilitates strong performance in practical applications.
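As a reading aid for the abstract above, here is a minimal sketch of the discrete Choquet integral and of the modelling step it describes. The notation is standard; the particular parametrization of the log odds with a scaling factor \gamma and a threshold \beta is an illustrative assumption, not a quotation from the paper.

% Discrete Choquet integral of an attribute vector x = (x_1, ..., x_n), assumed
% normalized to [0,1]^n, with respect to a capacity \mu: a set function on
% N = {1, ..., n} that is monotone under set inclusion, with \mu(\emptyset) = 0
% and \mu(N) = 1. Attributes are sorted so that x_{(1)} <= ... <= x_{(n)}.
\[
  C_\mu(x) \;=\; \sum_{i=1}^{n} \bigl( x_{(i)} - x_{(i-1)} \bigr)\,
  \mu\bigl(A_{(i)}\bigr),
  \qquad x_{(0)} := 0, \quad A_{(i)} := \{(i), \ldots, (n)\}.
\]
% Choquistic regression, as described in the abstract, replaces the linear term
% that logistic regression uses for the log odds of the positive class by this
% integral; with an assumed scaling \gamma > 0 and threshold \beta:
\[
  \log \frac{\Pr(y = 1 \mid x)}{\Pr(y = 0 \mid x)}
  \;=\; \gamma \bigl( C_\mu(x) - \beta \bigr)
  \qquad \text{instead of} \qquad w^{\top} x + b .
\]

Because \mu is monotone under set inclusion, C_\mu is nondecreasing in every attribute, which is how the model combines a monotonicity guarantee with nonlinear, interaction-aware aggregation.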

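The importance and interaction measures mentioned in the abstract are, in the literature on capacities and the Choquet integral, typically the Shapley value of \mu and the pairwise interaction index; the formula below is the standard definition of the former, given here only as background. The Shapley importance of attribute i is

\[
  \varphi_\mu(i) \;=\; \sum_{A \subseteq N \setminus \{i\}}
  \frac{|A|!\,\bigl(n - |A| - 1\bigr)!}{n!}\,
  \bigl( \mu(A \cup \{i\}) - \mu(A) \bigr),
\]

and these values sum to \mu(N) = 1, so they can be read as shares of the total attribute weight; an analogous index over pairs {i, j} indicates whether two attributes reinforce or partly substitute each other.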

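A short, self-contained Python sketch of both formulas follows; it is not the authors' implementation, and the function names, the toy capacity, and the parameters gamma and beta are illustrative assumptions only.

import math


def choquet_integral(x, mu):
    """Discrete Choquet integral of the attribute vector x w.r.t. the capacity mu.

    x  : sequence of attribute values, assumed normalized to [0, 1]
    mu : dict from frozenset of attribute indices to capacity values, with
         mu[frozenset()] == 0, mu of the full index set == 1, and mu monotone
         with respect to set inclusion
    """
    order = sorted(range(len(x)), key=lambda i: x[i])  # indices, values ascending
    total, prev = 0.0, 0.0
    for k, i in enumerate(order):
        upper = frozenset(order[k:])        # attributes whose value is >= x[i]
        total += (x[i] - prev) * mu[upper]
        prev = x[i]
    return total


def choquistic_probability(x, mu, gamma=1.0, beta=0.5):
    """Choquistic-style probability of the positive class: the linear term of
    logistic regression is replaced by the Choquet integral; gamma scales and
    beta shifts the aggregated value (both would be fitted from data)."""
    return 1.0 / (1.0 + math.exp(-gamma * (choquet_integral(x, mu) - beta)))


# Toy capacity on three attributes: roughly additive, with a positive
# interaction between attributes 0 and 1 (all values made up for illustration).
mu = {
    frozenset(): 0.0,
    frozenset({0}): 0.2, frozenset({1}): 0.3, frozenset({2}): 0.3,
    frozenset({0, 1}): 0.7, frozenset({0, 2}): 0.5, frozenset({1, 2}): 0.6,
    frozenset({0, 1, 2}): 1.0,
}
print(choquet_integral([0.9, 0.4, 0.1], mu))                   # ~0.41
print(choquistic_probability([0.9, 0.4, 0.1], mu, gamma=4.0))  # about 0.41

Since an unrestricted capacity needs 2^n values, practical learning approaches usually constrain it (for example to low-order interactions); the sketch sidesteps this by hard-coding a three-attribute example.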
Bibliographic details
Published in: Machine learning, 2012-10, Vol. 89 (1-2), p. 183-211
Main authors: Fallah Tehrani, Ali; Cheng, Weiwei; Dembczyński, Krzysztof; Hüllermeier, Eyke
Format: Article
Language: English
Subjects: Artificial Intelligence; Computer engineering; Computer Science; Control; Flexibility; Integrals; Learning; Machine learning; Mathematical analysis; Mathematical models; Mechatronics; Natural Language Processing (NLP); Nonlinearity; Regression; Robotics; Simulation and Modeling
Online access: Full text
container_end_page 211
container_issue 1-2
container_start_page 183
container_title Machine learning
container_volume 89
creator Fallah Tehrani, Ali
Cheng, Weiwei
Dembczyński, Krzysztof
Hüllermeier, Eyke
description The learning of predictive models that guarantee monotonicity in the input variables has received increasing attention in machine learning in recent years. By trend, the difficulty of ensuring monotonicity increases with the flexibility or, say, nonlinearity of a model. In this paper, we advocate the so-called Choquet integral as a tool for learning monotone nonlinear models. While being widely used as a flexible aggregation operator in different fields, such as multiple criteria decision making, the Choquet integral is much less known in machine learning so far. Apart from combining monotonicity and flexibility in a mathematically sound and elegant manner, the Choquet integral has additional features making it attractive from a machine learning point of view. Notably, it offers measures for quantifying the importance of individual predictor variables and the interaction between groups of variables. Analyzing the Choquet integral from a classification perspective, we provide upper and lower bounds on its VC-dimension. Moreover, as a methodological contribution, we propose a generalization of logistic regression. The basic idea of our approach, referred to as choquistic regression, is to replace the linear function of predictor variables, which is commonly used in logistic regression to model the log odds of the positive class, by the Choquet integral. First experimental results are quite promising and suggest that the combination of monotonicity and flexibility offered by the Choquet integral facilitates strong performance in practical applications.
doi_str_mv 10.1007/s10994-012-5318-3
format Article
publisher Boston: Springer US
rights The Author(s) 2012
fulltext fulltext
identifier ISSN: 0885-6125
ispartof Machine learning, 2012-10, Vol.89 (1-2), p.183-211
issn 0885-6125
1573-0565
language eng
recordid cdi_proquest_miscellaneous_1753531315
source SpringerLink Journals (MCLS)
subjects Artificial Intelligence
Computer engineering
Computer Science
Control
Flexibility
Integrals
Learning
Machine learning
Mathematical analysis
Mathematical models
Mechatronics
Natural Language Processing (NLP)
Nonlinearity
Regression
Robotics
Simulation and Modeling
title Learning monotone nonlinear models using the Choquet integral
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-29T03%3A27%3A14IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Learning%20monotone%20nonlinear%20models%20using%20the%20Choquet%20integral&rft.jtitle=Machine%20learning&rft.au=Fallah%C2%A0Tehrani,%20Ali&rft.date=2012-10-01&rft.volume=89&rft.issue=1-2&rft.spage=183&rft.epage=211&rft.pages=183-211&rft.issn=0885-6125&rft.eissn=1573-0565&rft_id=info:doi/10.1007/s10994-012-5318-3&rft_dat=%3Cproquest_cross%3E1753531315%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=1037273827&rft_id=info:pmid/&rfr_iscdi=true