Learning Variational Word Masks to Improve the Interpretability of Neural Text Classifiers
To build an interpretable neural text classifier, most of the prior work has focused on designing inherently interpretable models or finding faithful explanations. A new line of work on improving model interpretability has just started, and many existing methods require either prior information or human annotations as additional inputs in training.
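The abstract outlines the mechanism at a high level: a mask layer sits between the word embeddings and the classifier and learns, per token, whether to pass the word through. Below is a minimal sketch of what such a variational word-mask layer might look like. It is an illustration, not the authors' implementation: the `WordMask` class, the two-logit parameterization, the Gumbel-softmax relaxation, and the uniform-prior KL penalty are all assumptions made for this example.

```python
# Minimal sketch of a variational word-mask layer in the spirit of VMASK.
# NOT the paper's implementation: the two-logit parameterization, the
# Gumbel-softmax relaxation, and the uniform-prior KL term are assumptions.
import math

import torch
import torch.nn as nn
import torch.nn.functional as F


class WordMask(nn.Module):
    """Learns a soft keep/drop decision for every word in the input."""

    def __init__(self, embed_dim: int, temperature: float = 0.5):
        super().__init__()
        self.scorer = nn.Linear(embed_dim, 2)  # logits for (drop, keep)
        self.temperature = temperature

    def forward(self, embeddings: torch.Tensor):
        # embeddings: (batch, seq_len, embed_dim)
        logits = self.scorer(embeddings)  # (batch, seq_len, 2)
        if self.training:
            # Reparameterized sampling so gradients flow through the mask.
            probs = F.gumbel_softmax(logits, tau=self.temperature)
        else:
            probs = F.softmax(logits, dim=-1)
        keep = probs[..., 1:]  # probability of keeping each word
        masked = embeddings * keep  # irrelevant words are attenuated
        # KL(q || uniform) discourages confident masks unless a word
        # carries task-relevant signal (an assumed regularizer).
        q = F.softmax(logits, dim=-1)
        kl = (q * (q.clamp_min(1e-8).log() - math.log(0.5))).sum(-1).mean()
        return masked, kl
```

In use, the masked embeddings would feed into whichever classifier is being trained (the paper evaluates CNN, LSTM, and BERT), and the `kl` term would be added to the cross-entropy loss with a small weight so the masking pressure does not overwhelm the classification objective.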
Saved in:
Main authors: | Chen, Hanjie; Ji, Yangfeng |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Computation and Language; Computer Science - Learning |
Online access: | Request full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | |
container_volume | |
creator | Chen, Hanjie; Ji, Yangfeng |
description | To build an interpretable neural text classifier, most of the prior work has focused on designing inherently interpretable models or finding faithful explanations. A new line of work on improving model interpretability has just started, and many existing methods require either prior information or human annotations as additional inputs in training. To address this limitation, we propose the variational word mask (VMASK) method to automatically learn task-specific important words and reduce irrelevant information on classification, which ultimately improves the interpretability of model predictions. The proposed method is evaluated with three neural text classifiers (CNN, LSTM, and BERT) on seven benchmark text classification datasets. Experiments show the effectiveness of VMASK in improving both model prediction accuracy and interpretability. |
doi_str_mv | 10.48550/arxiv.2010.00667 |
format | Article |
creationdate | 2020-10-01 |
rights | http://arxiv.org/licenses/nonexclusive-distrib/1.0 |
backlink | https://arxiv.org/abs/2010.00667 |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2010.00667 |
ispartof | |
issn | |
language | eng |
recordid | cdi_arxiv_primary_2010_00667 |
source | arXiv.org |
subjects | Computer Science - Computation and Language; Computer Science - Learning |
title | Learning Variational Word Masks to Improve the Interpretability of Neural Text Classifiers |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-11T21%3A52%3A21IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Learning%20Variational%20Word%20Masks%20to%20Improve%20the%20Interpretability%20of%20Neural%20Text%20Classifiers&rft.au=Chen,%20Hanjie&rft.date=2020-10-01&rft_id=info:doi/10.48550/arxiv.2010.00667&rft_dat=%3Carxiv_GOX%3E2010_00667%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |