Interpreting Recurrent and Attention-Based Neural Models: a Case Study on Natural Language Inference
| Field | Value |
|---|---|
| Main authors | |
| Format | Article |
| Language | English |
| Subjects | |
| Online access | Order full text |
Abstract: (EMNLP 2018) Deep learning models have achieved remarkable success in natural language inference (NLI) tasks. While these models are widely explored, they are hard to interpret, and it is often unclear how and why they actually work. In this paper, we take a step toward explaining such deep learning based models through a case study on a popular neural model for NLI. In particular, we propose to interpret the intermediate layers of NLI models by visualizing the saliency of attention and LSTM gating signals. We present several examples for which our methods are able to reveal interesting insights and identify the critical information contributing to the model decisions.
DOI: 10.48550/arxiv.1808.03894
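The abstract describes visualizing the saliency of attention signals, i.e. how strongly the model's prediction reacts to each attention weight. As a rough illustration of that idea (not the paper's actual code or model), here is a minimal PyTorch sketch of gradient-based saliency over an attention map; the `TinyAttentionNLI` module, its dimensions, and the random inputs are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TinyAttentionNLI(nn.Module):
    """Hypothetical stand-in for an attention-based NLI encoder (not the paper's model)."""
    def __init__(self, dim=16, n_classes=3):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.classifier = nn.Linear(dim, n_classes)

    def forward(self, premise, hypothesis):
        # Soft alignment scores between hypothesis and premise token vectors.
        scores = self.query(hypothesis) @ self.key(premise).transpose(-1, -2)
        attn = torch.softmax(scores, dim=-1)   # shape: (hyp_len, prem_len)
        attn.retain_grad()                     # keep the gradient so we can read it after backward()
        context = attn @ premise               # premise summarized per hypothesis token
        logits = self.classifier(context.mean(dim=0))
        return logits, attn

torch.manual_seed(0)
model = TinyAttentionNLI()
premise = torch.randn(7, 16)      # 7 premise tokens with toy 16-d embeddings
hypothesis = torch.randn(5, 16)   # 5 hypothesis tokens

logits, attn = model(premise, hypothesis)
# Back-propagate the predicted class score to get d(score)/d(attention).
logits[logits.argmax()].backward()
# Gradient-times-activation saliency: one score per (hypothesis, premise) token pair.
saliency = (attn.grad * attn).abs()
print(saliency.shape)   # torch.Size([5, 7])
```

In a real analysis, one would run this on trained NLI models with actual premise/hypothesis sentences and visualize `saliency` as a heatmap over token pairs; the same gradient trick can be applied to LSTM gate activations to inspect gating signals, as the abstract suggests.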