Perturbation-Based Self-Supervised Attention for Attention Bias in Text Classification
In text classification, traditional attention mechanisms tend to focus too heavily on frequent words and require extensive labeled data to learn. This article proposes a perturbation-based self-supervised attention approach that guides attention learning without any annotation overhead. Specifically, we add as much noise as possible to every word in a sentence without changing its semantics or the model's prediction. We hypothesize that words that tolerate more noise are less significant, and we use this information to refine the attention distribution. Experimental results on three text classification tasks show that our approach significantly improves the performance of current attention-based models and is more effective than existing self-supervised methods. We also provide a visualization analysis to verify the effectiveness of our approach.
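The abstract describes the method concretely enough to sketch: estimate, for each word, how much embedding noise the model's prediction tolerates, then steer attention away from the most noise-tolerant words. Below is a minimal PyTorch sketch of that idea under stated assumptions; the names (`AttnClassifier`, `noise_tolerance`, `attention_target`), the grid search over noise scales, and the KL alignment loss are all illustrative stand-ins, not the authors' actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttnClassifier(nn.Module):
    """Generic softmax-attention text classifier (hypothetical stand-in)."""
    def __init__(self, vocab_size=10000, emb_dim=128, n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.score = nn.Linear(emb_dim, 1)
        self.out = nn.Linear(emb_dim, n_classes)

    def forward(self, tokens, noise=None):
        x = self.emb(tokens)                                  # (B, T, D)
        if noise is not None:
            # Per-word Gaussian perturbation, scaled by `noise` of shape (B, T).
            x = x + noise.unsqueeze(-1) * torch.randn_like(x)
        alpha = F.softmax(self.score(x).squeeze(-1), dim=-1)  # attention (B, T)
        pooled = torch.einsum("bt,btd->bd", alpha, x)
        return self.out(pooled), alpha

@torch.no_grad()
def noise_tolerance(model, tokens, scales=(0.5, 1.0, 2.0, 4.0)):
    """Largest noise scale each word tolerates before the prediction flips:
    a crude grid search standing in for 'add as much noise as possible
    without changing the prediction'."""
    base = model(tokens)[0].argmax(-1)
    B, T = tokens.shape
    tol = torch.zeros(B, T)
    for i in range(T):
        alive = torch.ones(B, dtype=torch.bool)  # prediction still unchanged
        for s in scales:
            noise = torch.zeros(B, T)
            noise[:, i] = s                       # perturb only word i
            alive &= model(tokens, noise)[0].argmax(-1).eq(base)
            tol[alive, i] = s
    return tol

def attention_target(tol):
    # Hypothesis from the abstract: words that tolerate more noise are
    # less significant, so they should receive less attention mass.
    return F.softmax(-tol, dim=-1)

# Usage: an auxiliary loss nudging attention toward the derived target.
model = AttnClassifier()
tokens = torch.randint(0, 10000, (4, 12))
target = attention_target(noise_tolerance(model, tokens))
logits, alpha = model(tokens)
aux_loss = F.kl_div(alpha.clamp_min(1e-9).log(), target, reduction="batchmean")
```

Whether the paper optimizes the noise magnitudes directly rather than grid-searching them, and which divergence it uses to align attention with the derived target, is not specified in this record; both choices above are assumptions.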
Saved in:
| Published in: | IEEE/ACM transactions on audio, speech, and language processing, 2023, Vol. 31, p. 3139-3151 |
|---|---|
| Main Authors: | Feng, Huawen; Lin, Zhenxi; Ma, Qianli |
| Format: | Article |
| Language: | English |
| Subjects: | Annotations; Attention bias; Classification; Noise tolerance; Perturbation; Perturbation methods; Predictive models; Self-supervised learning; Semantics; Task analysis; Text categorization; Text classification; Training; Words (language) |
| Online Access: | Order full text |
| container_end_page | 3151 |
|---|---|
| container_issue | |
| container_start_page | 3139 |
| container_title | IEEE/ACM transactions on audio, speech, and language processing |
| container_volume | 31 |
| creator | Feng, Huawen; Lin, Zhenxi; Ma, Qianli |
| description | In text classification, traditional attention mechanisms tend to focus too heavily on frequent words and require extensive labeled data to learn. This article proposes a perturbation-based self-supervised attention approach that guides attention learning without any annotation overhead. Specifically, we add as much noise as possible to every word in a sentence without changing its semantics or the model's prediction. We hypothesize that words that tolerate more noise are less significant, and we use this information to refine the attention distribution. Experimental results on three text classification tasks show that our approach significantly improves the performance of current attention-based models and is more effective than existing self-supervised methods. We also provide a visualization analysis to verify the effectiveness of our approach. |
| doi_str_mv | 10.1109/TASLP.2023.3302230 |
| format | Article |
| publisher | Piscataway: IEEE |
| rights | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2023 |
| coden | ITASFA |
| orcidid | 0000-0002-9704-1479; 0000-0002-9356-2883; 0000-0003-1264-6549 |
| fulltext | fulltext_linktorsrc |
| identifier | ISSN: 2329-9290 |
| ispartof | IEEE/ACM transactions on audio, speech, and language processing, 2023, Vol.31, p.3139-3151 |
| issn | 2329-9290; 2329-9304 |
| language | eng |
| recordid | cdi_ieee_primary_10209221 |
| source | IEEE/IET Electronic Library (IEL) |
| subjects | Annotations; Attention bias; Classification; Noise tolerance; Perturbation; Perturbation methods; Predictive models; self-supervised learning; Semantics; Task analysis; Text categorization; text classification; Training; Words (language) |
| title | Perturbation-Based Self-Supervised Attention for Attention Bias in Text Classification |
| url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-10T16%3A18%3A04IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Perturbation-Based%20Self-Supervised%20Attention%20for%20Attention%20Bias%20in%20Text%20Classification&rft.jtitle=IEEE/ACM%20transactions%20on%20audio,%20speech,%20and%20language%20processing&rft.au=Feng,%20Huawen&rft.date=2023&rft.volume=31&rft.spage=3139&rft.epage=3151&rft.pages=3139-3151&rft.issn=2329-9290&rft.eissn=2329-9304&rft.coden=ITASFA&rft_id=info:doi/10.1109/TASLP.2023.3302230&rft_dat=%3Cproquest_RIE%3E2851362448%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2851362448&rft_id=info:pmid/&rft_ieee_id=10209221&rfr_iscdi=true |