Pairwise Supervised Contrastive Learning of Sentence Representations
Many recent successes in sentence representation learning have been achieved by simply fine-tuning on the Natural Language Inference (NLI) datasets with triplet loss or siamese loss. Nevertheless, they share a common weakness: sentences in a contradiction pair are not necessarily from different semantic categories. Therefore, optimizing the semantic entailment and contradiction reasoning objective alone is inadequate to capture the high-level semantic structure. The drawback is compounded by the fact that the vanilla siamese or triplet losses only learn from individual sentence pairs or triplets, which often suffer from bad local optima. In this paper, we propose PairSupCon, an instance discrimination based approach aiming to bridge semantic entailment and contradiction understanding with high-level categorical concept encoding. We evaluate PairSupCon on various downstream tasks that involve understanding sentence semantics at different granularities. We outperform the previous state-of-the-art method with 10%-13% averaged improvement on eight clustering tasks, and 5%-6% averaged improvement on seven semantic textual similarity (STS) tasks.
(An illustrative sketch of such a pairwise contrastive objective follows the record fields below.)
Saved in:
Main authors: | Zhang, Dejiao ; Li, Shang-Wen ; Xiao, Wei ; Zhu, Henghui ; Nallapati, Ramesh ; Arnold, Andrew O ; Xiang, Bing |
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Computation and Language ; Computer Science - Learning |
Online access: | Order full text |
container_end_page | |
container_issue | |
container_start_page | |
container_title | |
container_volume | |
creator | Zhang, Dejiao ; Li, Shang-Wen ; Xiao, Wei ; Zhu, Henghui ; Nallapati, Ramesh ; Arnold, Andrew O ; Xiang, Bing |
description | Many recent successes in sentence representation learning have been achieved
by simply fine-tuning on the Natural Language Inference (NLI) datasets with
triplet loss or siamese loss. Nevertheless, they share a common weakness:
sentences in a contradiction pair are not necessarily from different semantic
categories. Therefore, optimizing the semantic entailment and contradiction
reasoning objective alone is inadequate to capture the high-level semantic
structure. The drawback is compounded by the fact that the vanilla siamese or
triplet losses only learn from individual sentence pairs or triplets, which
often suffer from bad local optima. In this paper, we propose PairSupCon, an
instance discrimination based approach aiming to bridge semantic entailment and
contradiction understanding with high-level categorical concept encoding. We
evaluate PairSupCon on various downstream tasks that involve understanding
sentence semantics at different granularities. We outperform the previous
state-of-the-art method with $10\%$--$13\%$ averaged improvement on eight
clustering tasks, and $5\%$--$6\%$ averaged improvement on seven semantic
textual similarity (STS) tasks. |
doi_str_mv | 10.48550/arxiv.2109.05424 |
format | Article |
fullrecord | arXiv:2109.05424 (Open Access Repository) ; posted 2021-09-12 ; rights: http://creativecommons.org/licenses/by-nc-sa/4.0 ; full text: https://arxiv.org/abs/2109.05424 |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2109.05424 |
ispartof | |
issn | |
language | eng |
recordid | cdi_arxiv_primary_2109_05424 |
source | arXiv.org |
subjects | Computer Science - Computation and Language ; Computer Science - Learning |
title | Pairwise Supervised Contrastive Learning of Sentence Representations |
url | https://arxiv.org/abs/2109.05424 |
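
Illustrative sketch (not part of the catalog record): the abstract above describes PairSupCon as an instance discrimination objective that treats NLI entailment pairs as positives while other in-batch sentences, including contradiction sentences, act as negatives. The minimal PyTorch sketch below shows what such a pairwise contrastive loss can look like; the function name `pairwise_supcon_loss`, the temperature value, and the batch layout (one positive and one contradiction-derived hard negative per anchor) are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a pairwise supervised contrastive (instance
# discrimination) objective in the spirit of PairSupCon. All names and
# hyperparameters here are illustrative assumptions, not the paper's code.

import torch
import torch.nn.functional as F


def pairwise_supcon_loss(anchor, positive, hard_negative, temperature=0.05):
    """Contrastive loss over a batch of sentence embeddings.

    anchor, positive, hard_negative: tensors of shape (batch, dim), e.g.
    premise embeddings, their entailment hypotheses, and their contradiction
    hypotheses from any sentence encoder.
    """
    anchor = F.normalize(anchor, dim=-1)
    candidates = F.normalize(torch.cat([positive, hard_negative], dim=0), dim=-1)

    # Cosine similarity of every anchor to every candidate: (batch, 2 * batch).
    logits = anchor @ candidates.t() / temperature

    # The positive for anchor i sits at column i; all other columns (other
    # positives and every contradiction sentence) serve as negatives.
    targets = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    # Toy usage with random embeddings standing in for encoder outputs.
    batch, dim = 8, 768
    loss = pairwise_supcon_loss(torch.randn(batch, dim),
                                torch.randn(batch, dim),
                                torch.randn(batch, dim))
    print(loss.item())
```

The key design choice this illustrates is that each anchor is discriminated against the whole batch rather than a single pair or triplet, which is how an instance discrimination loss avoids the local optima the abstract attributes to vanilla siamese and triplet objectives.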