Self-Adaptive Reconstruction with Contrastive Learning for Unsupervised Sentence Embeddings

The unsupervised sentence embedding task aims to convert sentences into semantic vector representations. Most previous works directly use the sentence representations derived from pretrained language models. However, due to the token bias in pretrained language models, these models cannot capture the fine-grained semantics in sentences, which leads to poor predictions. To address this issue, we propose a novel Self-Adaptive Reconstruction Contrastive Sentence Embeddings (SARCSE) framework, which reconstructs all tokens in a sentence with an AutoEncoder to help the model preserve more fine-grained semantics during token aggregation. In addition, we propose a self-adaptive reconstruction loss to alleviate the token bias towards frequency. Experimental results show that SARCSE achieves significant improvements over the strong baseline SimCSE on the 7 STS tasks.
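The record contains only the abstract, so the details of SARCSE are not given here. As a rough illustration of the idea the abstract describes, below is a minimal, self-contained PyTorch sketch that pairs a SimCSE-style in-batch contrastive loss with a token-level AutoEncoder reconstruction loss whose per-token weights depend on token frequency. Every name, dimension, and the inverse-square-root weighting rule are assumptions made for illustration, not the authors' implementation.

```python
# Hedged sketch, NOT the authors' code: a SimCSE-style contrastive objective plus a
# frequency-weighted token reconstruction loss, as the abstract loosely describes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToySARCSELike(nn.Module):
    def __init__(self, vocab_size=1000, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)  # stand-in for a pretrained LM
        self.decoder = nn.Linear(dim, vocab_size)           # AutoEncoder head: reconstruct token ids
        self.dropout = nn.Dropout(0.1)                       # dropout noise yields two "views" per sentence

    def forward(self, token_ids):
        hidden, _ = self.encoder(self.dropout(self.embed(token_ids)))  # (B, L, D) token states
        sent = hidden.mean(dim=1)                                      # aggregate tokens into a sentence vector
        logits = self.decoder(hidden)                                  # per-token reconstruction logits
        return sent, logits

def info_nce(z1, z2, temperature=0.05):
    """In-batch contrastive loss between two dropout views of the same sentences."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    sim = z1 @ z2.t() / temperature                         # (B, B) cosine similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)     # the diagonal pairs are positives
    return F.cross_entropy(sim, labels)

def weighted_reconstruction(logits, token_ids, token_freq):
    """Token reconstruction loss; rare tokens get larger weights (assumed weighting rule)."""
    per_token = F.cross_entropy(logits.transpose(1, 2), token_ids, reduction="none")  # (B, L)
    weights = 1.0 / token_freq[token_ids].clamp(min=1.0).sqrt()  # hypothetical frequency-based weighting
    return (weights * per_token).mean()

# Usage example with random toy data
model = ToySARCSELike()
batch = torch.randint(0, 1000, (8, 16))              # 8 sentences of 16 token ids each
token_freq = torch.randint(1, 500, (1000,)).float()  # toy corpus token frequencies
z1, logits = model(batch)
z2, _ = model(batch)                                 # second forward pass = second dropout view
loss = info_nce(z1, z2) + 0.1 * weighted_reconstruction(logits, batch, token_freq)
loss.backward()
```

The sketch only shows how a reconstruction term over all tokens can sit alongside the contrastive objective so that information about rarer tokens is not lost when token states are pooled into a sentence vector; the actual SARCSE architecture and loss weighting may differ.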

Detailed Description

Saved in:
Bibliographic Details
Published in: arXiv.org 2024-02
Main Authors: Liu, Junlong; Shang, Xichen; Feng, Huawen; Zheng, Junhao; Ma, Qianli
Format: Article
Language: eng
Subjects:
Online Access: Full text
container_title arXiv.org
creator Liu, Junlong
Shang, Xichen
Feng, Huawen
Zheng, Junhao
Ma, Qianli
description The unsupervised sentence embedding task aims to convert sentences into semantic vector representations. Most previous works directly use the sentence representations derived from pretrained language models. However, due to the token bias in pretrained language models, these models cannot capture the fine-grained semantics in sentences, which leads to poor predictions. To address this issue, we propose a novel Self-Adaptive Reconstruction Contrastive Sentence Embeddings (SARCSE) framework, which reconstructs all tokens in a sentence with an AutoEncoder to help the model preserve more fine-grained semantics during token aggregation. In addition, we propose a self-adaptive reconstruction loss to alleviate the token bias towards frequency. Experimental results show that SARCSE achieves significant improvements over the strong baseline SimCSE on the 7 STS tasks.
format Article
fulltext fulltext
identifier EISSN: 2331-8422
ispartof arXiv.org, 2024-02
issn 2331-8422
language eng
recordid cdi_proquest_journals_2931849329
source Free E-Journals
subjects Bias
Reconstruction
Representations
Semantics
Sentences
title Self-Adaptive Reconstruction with Contrastive Learning for Unsupervised Sentence Embeddings