The mechanism of additive composition


Detailed description

Bibliographic details
Published in: Machine learning, 2017-07, Vol. 106 (7), p. 1083-1130
Main authors: Tian, Ran; Okazaki, Naoaki; Inui, Kentaro
Format: Article
Language: English
Online access: Full text
Description
Additive composition (Foltz et al. in Discourse Process 15:285–307, 1998; Landauer and Dumais in Psychol Rev 104(2):211, 1997; Mitchell and Lapata in Cognit Sci 34(8):1388–1429, 2010) is a widely used method for computing the meanings of phrases: it takes the average of the vector representations of the constituent words. In this article, we prove an upper bound for the bias of additive composition, which is the first theoretical analysis of compositional frameworks from a machine learning point of view. The bound is written in terms of collocation strength; we prove that the more exclusively two successive words tend to occur together, the more accurately one can guarantee their additive composition as an approximation to the natural phrase vector. Our proof relies on properties of natural language data that are empirically verified, and that can be theoretically derived from the assumption that the data is generated by a Hierarchical Pitman–Yor Process. The theory endorses additive composition as a reasonable operation for calculating meanings of phrases, and suggests ways to improve additive compositionality, including: transforming the entries of distributional word vectors by a function that meets a specific condition, constructing a novel type of vector representation that makes additive composition sensitive to word order, and utilizing singular value decomposition to train word vectors.
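The setup described in the abstract can be sketched in a few lines of NumPy. This is a hedged illustration, not the authors' code: a hypothetical toy corpus, a PPMI entry transformation (one common choice of the kind of entry transform the abstract mentions), word vectors obtained via truncated SVD of the co-occurrence matrix, and additive composition as the average of two constituent word vectors.

```python
import numpy as np

# Toy corpus (purely illustrative, not from the paper).
corpus = ("the quick brown fox jumps over the lazy dog "
          "the quick brown cat sleeps near the lazy dog").split()

vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

# Symmetric co-occurrence counts within a +/-2 word window.
C = np.zeros((V, V))
win = 2
for i, w in enumerate(corpus):
    for j in range(max(0, i - win), min(len(corpus), i + win + 1)):
        if j != i:
            C[idx[w], idx[corpus[j]]] += 1.0

# Positive PMI: one example of transforming the entries of the
# distributional matrix before training vectors.
total = C.sum()
row = C.sum(axis=1, keepdims=True)
col = C.sum(axis=0, keepdims=True)
with np.errstate(divide="ignore", invalid="ignore"):
    pmi = np.log(C * total / (row * col))
ppmi = np.where(np.isfinite(pmi) & (pmi > 0), pmi, 0.0)

# Truncated SVD yields dense k-dimensional word vectors.
U, s, _ = np.linalg.svd(ppmi)
k = 4
W = U[:, :k] * s[:k]  # one row per word

def compose(a, b):
    """Additive composition: the average of the constituent word vectors."""
    return (W[idx[a]] + W[idx[b]]) / 2.0

v_phrase = compose("quick", "brown")
```

The paper's question is how well a vector like `v_phrase` approximates the "natural phrase vector", i.e. the vector one would obtain by treating the bigram "quick brown" as a single token; the bound says the approximation is better the more exclusively the two words co-occur.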
DOI: 10.1007/s10994-017-5634-8
ISSN: 0885-6125
EISSN: 1573-0565
Source: SpringerLink Journals - AutoHoldings
Subjects: Approximation; Artificial Intelligence; Bias; Collocation; Computer Science; Construction specifications; Control; Discourse analysis; Linguistics; Machine learning; Mechatronics; Natural Language Processing (NLP); Proving; Representations; Robotics; Semantics; Simulation and Modeling; Singular value decomposition