The mechanism of additive composition

Bibliographic details
Published in: Machine Learning, 2017-07, Vol. 106(7), pp. 1083–1130
Authors: Tian, Ran; Okazaki, Naoaki; Inui, Kentaro
Format: Article
Language: English
Online access: Full text
Abstract
Additive composition (Foltz et al. in Discourse Process 15:285–307, 1998; Landauer and Dumais in Psychol Rev 104(2):211, 1997; Mitchell and Lapata in Cognit Sci 34(8):1388–1429, 2010) is a widely used method for computing the meanings of phrases: it takes the average of the vector representations of the constituent words. In this article, we prove an upper bound for the bias of additive composition, which is the first theoretical analysis of compositional frameworks from a machine learning point of view. The bound is written in terms of collocation strength; we prove that the more exclusively two successive words tend to occur together, the more accurately their additive composition can be guaranteed to approximate the natural phrase vector. Our proof relies on properties of natural language data that are empirically verified and can be theoretically derived from the assumption that the data are generated by a Hierarchical Pitman–Yor Process. The theory endorses additive composition as a reasonable operation for calculating the meanings of phrases, and suggests ways to improve additive compositionality, including: transforming the entries of distributional word vectors by a function that meets a specific condition, constructing a novel type of vector representation that makes additive composition sensitive to word order, and utilizing singular value decomposition to train word vectors.
ISSN: 0885-6125 (print); 1573-0565 (electronic)
DOI: 10.1007/s10994-017-5634-8
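
The abstract describes two ingredients that are easy to illustrate in code: training distributional word vectors by factorizing a transformed co-occurrence matrix with singular value decomposition, and representing a phrase by the average of its constituent word vectors. The sketch below is a minimal illustration of that pipeline, not the authors' implementation; the toy corpus, the ±2 context window, the PPMI weighting, and the 5-dimensional embeddings are illustrative assumptions.

```python
# Minimal sketch (illustrative only, not the paper's implementation):
# 1) build a word-context co-occurrence matrix from a toy corpus,
# 2) transform its entries (here: PPMI) and factorize it with SVD to get word vectors,
# 3) compose a phrase by averaging the vectors of its constituent words.
import numpy as np

corpus = ("the black cat sat on the mat "
          "the black dog chased the black cat "
          "a white cat slept on the warm mat").split()

vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}

# Symmetric co-occurrence counts within a +/-2 word window (assumed setting).
window = 2
counts = np.zeros((len(vocab), len(vocab)))
for i, w in enumerate(corpus):
    lo, hi = max(0, i - window), min(len(corpus), i + window + 1)
    for j in range(lo, hi):
        if j != i:
            counts[idx[w], idx[corpus[j]]] += 1.0

# Positive pointwise mutual information: one possible entry transformation.
total = counts.sum()
expected = counts.sum(axis=1, keepdims=True) * counts.sum(axis=0, keepdims=True)
with np.errstate(divide="ignore"):
    ppmi = np.maximum(np.log(counts * total / expected), 0.0)

# Truncated SVD of the transformed matrix yields low-dimensional word vectors.
dim = 5
U, S, _ = np.linalg.svd(ppmi)
word_vecs = U[:, :dim] * S[:dim]

def compose(*words):
    """Additive composition: the average of the constituent word vectors."""
    return np.mean([word_vecs[idx[w]] for w in words], axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Nearest words to the composed vector for the phrase "black cat".
phrase = compose("black", "cat")
neighbours = sorted(((cosine(phrase, word_vecs[idx[w]]), w) for w in vocab),
                    reverse=True)
for score, word in neighbours[:5]:
    print(f"{word:8s} {score:+.3f}")
```

In the paper's terms, the composed vector above approximates the "natural phrase vector" one would obtain by treating the two-word sequence as a single token, and the guaranteed accuracy of that approximation improves with the collocation strength of the two words; the precise bound, and the entry transformation that optimizes it, are developed in the article.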