Robust Text Classification: Analyzing Prototype-Based Networks
Downstream applications often require text classification models to be both accurate and robust. While the accuracy of state-of-the-art Language Models (LMs) approximates human performance, they often exhibit a drop in performance on the noisy data found in the real world. This lack of robustness is concerning, as even small perturbations in the text, irrelevant to the target task, can cause classifiers to incorrectly change their predictions. A potential solution is the family of Prototype-Based Networks (PBNs), which classify examples based on their similarity to prototypical examples of a class (prototypes) and have been shown to be robust to noise in computer vision tasks. In this paper, we study whether the robustness properties of PBNs transfer to text classification tasks under both targeted and static adversarial attack settings. Our results show that PBNs, as a mere architectural variation of vanilla LMs, offer more robustness than vanilla LMs in both targeted and static settings. We showcase how PBNs' interpretability can help us understand their robustness properties. Finally, our ablation studies reveal that PBNs' robustness is sensitive to how strictly clustering is enforced during training: tighter clustering results in less robust PBNs.
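To make the mechanism the abstract refers to concrete: a PBN scores an input by its similarity to learned prototype vectors rather than through a plain linear classification layer. Below is a minimal, hypothetical sketch of such a classification head in PyTorch. It illustrates only the general idea (the class logit is the negative squared Euclidean distance to the closest per-class prototype); the class name `PrototypeClassifier`, the dimensions, and the distance choice are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class PrototypeClassifier(nn.Module):
    """Toy prototype-based classification head: scores an encoded text
    by its similarity to learned per-class prototype vectors."""

    def __init__(self, hidden_dim: int, num_classes: int, prototypes_per_class: int = 3):
        super().__init__()
        # One set of learnable prototype vectors per class.
        self.prototypes = nn.Parameter(
            torch.randn(num_classes, prototypes_per_class, hidden_dim)
        )

    def forward(self, encodings: torch.Tensor) -> torch.Tensor:
        # encodings: (batch, hidden_dim), e.g. a [CLS] embedding from any LM encoder.
        # Squared Euclidean distance from each example to every prototype.
        diff = encodings[:, None, None, :] - self.prototypes[None, :, :, :]
        dists = (diff ** 2).sum(dim=-1)          # (batch, num_classes, prototypes_per_class)
        # Similarity = negative distance to the closest prototype of each class.
        logits = -dists.min(dim=-1).values       # (batch, num_classes)
        return logits

# Usage sketch: plug the head on top of a sentence encoder and train with cross-entropy.
head = PrototypeClassifier(hidden_dim=768, num_classes=4)
fake_cls_embeddings = torch.randn(8, 768)
print(head(fake_cls_embeddings).shape)           # torch.Size([8, 4])
```

In a full setup, the head would sit on top of a frozen or fine-tuned LM encoder, and an additional clustering or separation term in the loss would control how tightly training examples gather around the prototypes, which is the knob whose effect on robustness the ablation in the abstract examines.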
Saved in:
Published in: | arXiv.org, 2024-10 |
---|---|
Main authors: | Sourati, Zhivar; Deshpande, Darshan; Ilievski, Filip; Gashteovski, Kiril; Saralajew, Sascha |
Format: | Article |
Language: | English |
EISSN: | 2331-8422 |
Publisher: | Cornell University Library, arXiv.org (Ithaca) |
Publication date: | 2024-10-28 |
Rights: | Published under the Creative Commons Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/) |
Subjects: | Classification; Computer vision; Human performance; Modular design; Perturbation; Prototypes; Robustness; Text categorization |
Online access: | Full text |