Evaluating Neuron Interpretation Methods of NLP Models
Saved in:
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract:
Neuron interpretation has gained traction in the field of interpretability
and has provided fine-grained insights into what a model learns and how
language knowledge is distributed among its different components. However,
the lack of evaluation benchmarks and metrics has led to siloed progress
within these various methods, with very little work comparing them and
highlighting their strengths and weaknesses. The reason for this gap is the
difficulty of creating ground-truth datasets: for example, many neurons
within a given model may learn the same phenomenon, and hence there may not
be one correct answer. Moreover, a learned phenomenon may spread across
several neurons that work together, making it challenging to surface these
to create a gold standard. In this work, we propose an evaluation framework
that measures the compatibility of a neuron analysis method with other
methods. We hypothesize that the more compatible a method is with the
majority of the methods, the more confident one can be about its
performance. We systematically evaluate our proposed framework and present a
comparative analysis of a large set of neuron interpretation methods. We
make the evaluation framework available to the community. It enables the
evaluation of any new method using 20 concepts and across three pre-trained
models. The code is released at
https://github.com/fdalvi/neuron-comparative-analysis
DOI: 10.48550/arxiv.2301.12608
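
To make the compatibility hypothesis in the abstract concrete, the sketch below scores each neuron interpretation method by its average top-k overlap with every other method's neuron ranking, so a method that agrees with the majority scores highest. This is a minimal illustration, not the released implementation: the method names, the `topk_overlap` measure, and the input format (one ranked list of neuron indices per method for a single concept) are all assumptions made for the example.

```python
from itertools import combinations

def topk_overlap(rank_a, rank_b, k=10):
    # Fraction of neurons shared between the top-k of two rankings.
    return len(set(rank_a[:k]) & set(rank_b[:k])) / k

def compatibility_scores(rankings, k=10):
    # Average top-k overlap of each method with every other method;
    # a higher score means the method agrees with more of its peers.
    scores = {name: 0.0 for name in rankings}
    for (name_a, rank_a), (name_b, rank_b) in combinations(rankings.items(), 2):
        overlap = topk_overlap(rank_a, rank_b, k)
        scores[name_a] += overlap
        scores[name_b] += overlap
    n_others = len(rankings) - 1
    return {name: s / n_others for name, s in scores.items()}

# Toy example: three hypothetical methods ranking six neurons for one concept.
rankings = {
    "probing":     [3, 1, 4, 0, 5, 2],
    "activations": [3, 4, 1, 2, 0, 5],
    "gradients":   [5, 2, 0, 3, 1, 4],
}
print(compatibility_scores(rankings, k=3))
# {'probing': 0.5, 'activations': 0.5, 'gradients': 0.0}
```

In this toy run, "probing" and "activations" share all three of their top-3 neurons while "gradients" shares none, so the first two would be treated as the more trustworthy methods under the compatibility criterion.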