Self-Adaptive Reconstruction with Contrastive Learning for Unsupervised Sentence Embeddings
The unsupervised sentence embedding task aims to convert sentences into semantic vector representations. Most previous works directly use the sentence representations derived from pretrained language models. However, due to token bias in pretrained language models, the models cannot capture the fine-grained...
Saved in:
Main authors: | Liu, Junlong; Shang, Xichen; Feng, Huawen; Zheng, Junhao; Ma, Qianli |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Computation and Language; Computer Science - Learning |
Online access: | Order full text |
creator | Liu, Junlong; Shang, Xichen; Feng, Huawen; Zheng, Junhao; Ma, Qianli |
description | The unsupervised sentence embedding task aims to convert sentences into semantic vector representations. Most previous works directly use the sentence representations derived from pretrained language models. However, due to token bias in pretrained language models, the models cannot capture the fine-grained semantics in sentences, which leads to poor predictions. To address this issue, we propose a novel Self-Adaptive Reconstruction Contrastive Sentence Embeddings (SARCSE) framework, which reconstructs all tokens in sentences with an AutoEncoder to help the model preserve more fine-grained semantics during token aggregation. In addition, we propose a self-adaptive reconstruction loss to alleviate the token bias towards frequency. Experimental results show that SARCSE achieves significant improvements over the strong baseline SimCSE on the seven STS tasks. (An illustrative sketch of such a combined training objective follows the record below.) |
doi_str_mv | 10.48550/arxiv.2402.15153 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2402.15153 |
language | eng |
recordid | cdi_arxiv_primary_2402_15153 |
source | arXiv.org |
subjects | Computer Science - Computation and Language; Computer Science - Learning |
title | Self-Adaptive Reconstruction with Contrastive Learning for Unsupervised Sentence Embeddings |
url | https://arxiv.org/abs/2402.15153 |
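
The abstract above describes two training signals: a contrastive objective over sentence embeddings (with SimCSE as the baseline) and a token-level AutoEncoder reconstruction loss whose weighting adapts to token frequency. The record contains no implementation details, so the following is only a minimal sketch of how such a combined objective might be wired together; the tiny GRU encoder (standing in for a pretrained language model), the mean-pooling aggregation, the 1/log(2 + frequency) weighting, and the 0.1 loss trade-off are all illustrative assumptions rather than the authors' design.

```python
# Minimal, self-contained sketch (not the authors' code): a SimCSE-style
# contrastive loss plus a frequency-weighted token reconstruction loss.
# All sizes, the weighting function, and the 0.1 trade-off are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SarcseSketch(nn.Module):
    def __init__(self, vocab_size: int, hidden: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        # A small GRU stands in for the pretrained language model used in practice.
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        # AutoEncoder-style head that reconstructs every input token.
        self.decoder = nn.Linear(hidden, vocab_size)

    def forward(self, token_ids: torch.Tensor):
        states, _ = self.encoder(self.embed(token_ids))  # (batch, seq, hidden)
        sentence = states.mean(dim=1)                    # aggregate tokens into one sentence vector
        logits = self.decoder(states)                    # (batch, seq, vocab) reconstruction logits
        return sentence, logits


def contrastive_loss(z1, z2, temperature: float = 0.05):
    """InfoNCE over two views of the same batch, as in SimCSE."""
    sim = F.cosine_similarity(z1.unsqueeze(1), z2.unsqueeze(0), dim=-1) / temperature
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(sim, labels)


def weighted_reconstruction_loss(logits, token_ids, token_freq):
    """Token-level cross-entropy, down-weighting frequent tokens (assumed 1/log(2+freq) weighting)."""
    ce = F.cross_entropy(logits.transpose(1, 2), token_ids, reduction="none")  # (batch, seq)
    weights = 1.0 / torch.log(2.0 + token_freq[token_ids].float())
    return (weights * ce).mean()


# Toy usage with random data; token_freq would come from corpus counts in practice.
vocab, batch, seq = 1000, 8, 16
model = SarcseSketch(vocab)
ids = torch.randint(0, vocab, (batch, seq))
token_freq = torch.randint(1, 10_000, (vocab,))
z1, logits = model(ids)
z2, _ = model(ids)  # in SimCSE the second view comes from an independent dropout pass
loss = contrastive_loss(z1, z2) + 0.1 * weighted_reconstruction_loss(logits, ids, token_freq)
loss.backward()
print(float(loss))
```

In the paper's actual setting the encoder would be a pretrained language model and the two views z1 and z2 would come from two independent dropout passes over the same sentence, as SimCSE does; here a single deterministic pass is reused only to keep the sketch self-contained and runnable.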