Nugget: Neural Agglomerative Embeddings of Text
Main authors: Guanghui Qin, Benjamin Van Durme
Format: Article
Language: English
Online access: Order full text
Abstract: ICML 2023. Embedding text sequences is a widespread requirement in modern language understanding. Existing approaches focus largely on constant-size representations. This is problematic, as the amount of information contained in text often varies with the length of the input. We propose a solution called Nugget, which encodes language into a representation based on a dynamically selected subset of input tokens. These nuggets are learned through tasks like autoencoding and machine translation, and intuitively segment language into meaningful units. We demonstrate that Nugget outperforms related approaches on tasks involving semantic comparison. Finally, we illustrate that these compact units allow for expanding the contextual window of a language model (LM), suggesting future LMs that can condition on significantly larger amounts of content.
DOI: 10.48550/arxiv.2310.01732
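
The abstract describes encoding text as a dynamically sized subset of contextual token embeddings, so the representation grows with input length rather than being squeezed into a single fixed vector. The sketch below is an illustrative approximation of that idea only, not the paper's actual architecture or training setup: it assumes a hypothetical `NuggetSelector` module with a learned per-token scorer and a keep ratio `r`, retaining the top ⌈r·n⌉ tokens of an n-token input.

```python
# Illustrative sketch of length-proportional token subset selection.
# This mirrors the abstract's high-level description; it is NOT the
# authors' implementation, and all names/parameters are hypothetical.
import torch
import torch.nn as nn

class NuggetSelector(nn.Module):
    def __init__(self, hidden_dim: int, ratio: float = 0.1):
        super().__init__()
        self.scorer = nn.Linear(hidden_dim, 1)  # learned score per token
        self.ratio = ratio  # fraction of tokens kept as "nuggets"

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (seq_len, hidden_dim) contextual token embeddings
        seq_len = hidden_states.size(0)
        k = max(1, int(self.ratio * seq_len))  # subset size scales with length
        scores = self.scorer(hidden_states).squeeze(-1)  # (seq_len,)
        top_idx = scores.topk(k).indices.sort().values   # keep original order
        return hidden_states[top_idx]  # (k, hidden_dim) variable-size encoding

# Usage: a 200-token input yields ~20 nugget vectors; a 50-token input ~5.
selector = NuggetSelector(hidden_dim=768, ratio=0.1)
tokens = torch.randn(200, 768)
print(selector(tokens).shape)  # torch.Size([20, 768])
```

Under this reading, the selected vectors would be trained end to end through objectives such as autoencoding or machine translation, so the scorer learns to keep tokens that summarize their surrounding spans.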