Bi-directional contextualized text description

Detailed Description

Bibliographic Details
Main Authors: Kim, Sohyeong, Velkoski, Darko, Dinh, Hung Tu, Hussein, Faisal El, Reisswig, Christian
Format: Patent
Language: English
Description
Summary: Various examples described herein are directed to systems and methods for analyzing text. A computing device may train an autoencoder language model using a plurality of language model training samples. The autoencoder language model may comprise a first convolutional layer. A first language model training sample of the plurality of language model training samples may comprise a first set of ordered strings comprising a masked string, a first string preceding the masked string in the first set of ordered strings, and a second string following the masked string in the first set of ordered strings. The computing device may generate a first feature vector using an input sample and the autoencoder language model. The computing device may also generate a descriptor of the input sample using a target model, the input sample, and the first feature vector.
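
The summary describes a two-stage arrangement: a convolutional autoencoder language model is trained to reconstruct a masked string from the strings on either side of it, and the feature vector it produces for an input sample is then passed, together with that input sample, to a target model that emits a descriptor. The following is a minimal sketch of that flow, assuming PyTorch; the class names, dimensions, toy vocabulary, masking index, and pooling choices are illustrative assumptions, not the patent's actual implementation.

# Minimal sketch of the described pipeline, assuming PyTorch.
# Names, dimensions, and the toy data are illustrative assumptions.
import torch
import torch.nn as nn

VOCAB_SIZE, EMBED_DIM, MASK_ID = 1000, 64, 0


class ConvAutoencoderLM(nn.Module):
    """Autoencoder-style language model whose encoder begins with a
    convolutional layer, so each position sees strings on both sides
    of the masked string (bi-directional context)."""

    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        # "first convolutional layer": 1-D convolution over the ordered strings
        self.conv = nn.Conv1d(EMBED_DIM, EMBED_DIM, kernel_size=3, padding=1)
        self.decoder = nn.Linear(EMBED_DIM, VOCAB_SIZE)  # reconstructs masked strings

    def forward(self, token_ids):                           # (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)           # (batch, dim, seq_len)
        features = torch.relu(self.conv(x)).transpose(1, 2) # (batch, seq_len, dim)
        logits = self.decoder(features)                     # predict original strings
        return features, logits


class TargetModel(nn.Module):
    """Consumes the input sample together with the feature vector from the
    language model and generates a descriptor (here, class logits)."""

    def __init__(self, num_labels=5):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.head = nn.Linear(2 * EMBED_DIM, num_labels)

    def forward(self, token_ids, lm_features):
        pooled_input = self.embed(token_ids).mean(dim=1)
        pooled_lm = lm_features.mean(dim=1)
        return self.head(torch.cat([pooled_input, pooled_lm], dim=-1))


# Language model training sample: an ordered set of strings with one string
# masked, so strings both before and after it provide context.
original = torch.randint(1, VOCAB_SIZE, (1, 8))
masked = original.clone()
masked[0, 3] = MASK_ID                                      # mask the middle string

lm = ConvAutoencoderLM()
features, logits = lm(masked)
lm_loss = nn.functional.cross_entropy(
    logits.view(-1, VOCAB_SIZE), original.view(-1))         # reconstruction objective

# Downstream step: feature vector + input sample -> descriptor.
target = TargetModel()
descriptor_logits = target(original, features.detach())
print(lm_loss.item(), descriptor_logits.shape)

In this sketch the language-model features are detached before the target model consumes them, which keeps the two training objectives separate; whether the patent trains the two models jointly or sequentially is not stated in the summary.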