Nugget: Neural Agglomerative Embeddings of Text

ICML 2023. Embedding text sequences is a widespread requirement in modern language understanding. Existing approaches focus largely on constant-size representations. This is problematic, as the amount of information contained in text often varies with the length of the input. We propose a solution called Nugget, which encodes language into a representation based on a dynamically selected subset of input tokens. These nuggets are learned through tasks like autoencoding and machine translation, and intuitively segment language into meaningful units. We demonstrate Nugget outperforms related approaches in tasks involving semantic comparison. Finally, we illustrate these compact units allow for expanding the contextual window of a language model (LM), suggesting new future LMs that can condition on significantly larger amounts of content.
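The abstract's central idea, representing a text by a dynamically selected subset of its token encodings rather than a fixed-size vector, can be made concrete in a few lines. Below is a minimal illustrative PyTorch sketch, not the paper's actual architecture: the class name, the learned scorer, and the fixed selection ratio are all assumptions introduced for illustration.

```python
# Illustrative sketch only: a learned scorer keeps the top-k token states
# ("nuggets"), so the representation size grows with input length.
# NuggetSelector, ratio, and the hard top-k rule are assumptions,
# not the authors' implementation.
import torch
import torch.nn as nn

class NuggetSelector(nn.Module):
    def __init__(self, hidden_dim: int, ratio: float = 0.1):
        super().__init__()
        self.scorer = nn.Linear(hidden_dim, 1)  # per-token importance score
        self.ratio = ratio                      # fraction of tokens kept

    def forward(self, token_states: torch.Tensor) -> torch.Tensor:
        # token_states: (seq_len, hidden_dim), contextual encoder outputs
        seq_len = token_states.size(0)
        k = max(1, int(seq_len * self.ratio))   # nugget count scales with length
        scores = self.scorer(token_states).squeeze(-1)      # (seq_len,)
        keep = torch.topk(scores, k).indices.sort().values  # preserve text order
        return token_states[keep]                           # (k, hidden_dim)

# Usage: run any encoder, then compress its outputs into nuggets.
states = torch.randn(120, 768)           # stand-in for encoder hidden states
nuggets = NuggetSelector(768)(states)    # ~12 vectors for 120 input tokens
print(nuggets.shape)                     # torch.Size([12, 768])
```

In the actual model the selection is learned jointly with encoding through tasks like autoencoding and machine translation; the hard top-k here serves only to make the variable-size output concrete.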

Bibliographic Details
Main Authors: Qin, Guanghui; Van Durme, Benjamin
Format: Article
Language: English
Published: 2023-10-02
DOI: 10.48550/arxiv.2310.01732
Subjects: Computer Science - Artificial Intelligence; Computer Science - Computation and Language; Computer Science - Learning
Rights: CC BY 4.0
Source: arXiv.org
Online Access: https://arxiv.org/abs/2310.01732