Multi-Level Knowledge Distillation for Out-of-Distribution Detection in Text
Format: | Article |
Language: | English |
Online access: | Order full text |
Abstract: | Self-supervised representation learning has proved to be a valuable component for out-of-distribution (OoD) detection using only the text of in-distribution (ID) examples. These approaches either train a language model from scratch or fine-tune a pre-trained language model on ID examples, and then use the perplexity output by the language model as the OoD score. In this paper, we analyze the complementary characteristics of both OoD detection methods and propose a multi-level knowledge distillation approach that integrates their strengths while mitigating their limitations. Specifically, we use a fine-tuned model as the teacher to teach a randomly initialized student model on the ID examples. In addition to prediction-layer distillation, we present a similarity-based intermediate-layer distillation method to thoroughly explore the representation space of the teacher model. In this way, the learned student can better represent the ID data manifold while gaining a stronger ability to map OoD examples outside the ID data manifold, with the regularization inherited from pre-training. Moreover, the student model sees only ID examples during parameter learning, which further promotes more distinguishable features for OoD detection. We conduct extensive experiments over multiple benchmark datasets (CLINC150, SST, ROSTD, 20 Newsgroups, and AG News), showing that the proposed method yields new state-of-the-art performance. We also explore its application as an AIGC detector to distinguish between answers generated by ChatGPT and human experts, where our model exceeds human evaluators on the pair-expert task of the Human ChatGPT Comparison Corpus. |
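The abstract describes taking the perplexity output by an ID-trained language model as the OoD score. Below is a minimal sketch of that scoring step, assuming a Hugging Face causal language model; the model name `gpt2` is a stand-in, not the checkpoint used in the paper:

```python
# Sketch: perplexity-based OoD scoring with a causal language model.
# Assumption: "gpt2" stands in for the LM fine-tuned on ID examples.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Higher perplexity => more likely out-of-distribution."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # With labels provided, the model returns the mean token-level
        # cross-entropy loss; exp(loss) is the perplexity.
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

# Texts scoring above a threshold chosen on ID validation data
# would be flagged as OoD.
print(perplexity("book a table for two at seven"))
```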
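The multi-level distillation combines a prediction-layer term with a similarity-based intermediate-layer term. The record does not give the exact losses, so the sketch below shows one plausible form: KL divergence on softened output distributions plus an MSE between pairwise cosine-similarity matrices of teacher and student hidden states. All names, shapes, and the similarity formulation are assumptions:

```python
# Sketch of a multi-level distillation loss in the spirit of the abstract:
# a prediction-layer term plus a similarity-based intermediate-layer term.
# The exact formulation is an assumption, not taken from the paper.
import torch
import torch.nn.functional as F

def prediction_distillation(student_logits: torch.Tensor,
                            teacher_logits: torch.Tensor,
                            T: float = 1.0) -> torch.Tensor:
    """KL divergence between temperature-softened distributions."""
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

def similarity_distillation(student_hidden: torch.Tensor,
                            teacher_hidden: torch.Tensor) -> torch.Tensor:
    """Match the pairwise cosine-similarity structure of hidden states.

    Comparing similarity matrices rather than raw features means the
    student and teacher hidden sizes do not have to agree.
    Shapes: (seq_len, hidden_dim), possibly with different hidden_dim.
    """
    def sim_matrix(h: torch.Tensor) -> torch.Tensor:
        h = F.normalize(h, dim=-1)
        return h @ h.T  # (seq_len, seq_len) cosine similarities
    return F.mse_loss(sim_matrix(student_hidden), sim_matrix(teacher_hidden))

# A total loss would sum the prediction-layer term and the
# intermediate-layer terms over selected layers, e.g.:
#   loss = prediction_distillation(s_logits, t_logits) \
#        + alpha * sum(similarity_distillation(s_h, t_h)
#                      for s_h, t_h in zip(student_layers, teacher_layers))
```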
DOI: | 10.48550/arxiv.2211.11300 |