Semantic Re-tuning with Contrastive Tension

Bibliographic Details
Main authors: Carlsson, Fredrik, Gogoulou, Evangelia, Ylipää, Erik, Cuba Gyllensten, Amaru, Sahlgren, Magnus
Format: Conference paper
Language: English
Online access: Order full text
Description
Abstract: Extracting semantically useful natural language sentence representations from pre-trained deep neural networks such as Transformers remains a challenge. We first demonstrate that pre-training objectives impose a significant task bias onto the final layers of models, with a layer-wise survey of the Semantic Textual Similarity (STS) correlations for multiple common Transformer language models. We then propose a new self-supervised method called Contrastive Tension (CT) to counter such biases. CT frames the training objective as a noise-contrastive task between the final layer representations of two independent models, in turn making the final layer representations suitable for feature extraction. Results from multiple common unsupervised and supervised STS tasks indicate that CT outperforms previous State Of The Art (SOTA), and when combining CT with supervised data we improve upon previous SOTA results by large margins.
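
To make the described training objective concrete, below is a minimal sketch of a noise-contrastive loss between the final-layer sentence representations of two independent encoders, in the spirit of CT. It assumes PyTorch and Hugging Face Transformers; the checkpoint name, mean pooling, and the toy sentence pairs are illustrative assumptions rather than the authors' exact setup.

    # Minimal sketch (assumed PyTorch + Hugging Face Transformers) of a
    # noise-contrastive objective between two independently updated encoders,
    # in the spirit of Contrastive Tension. Checkpoint name, pooling, and data
    # are illustrative assumptions, not the authors' exact setup.
    import torch
    import torch.nn.functional as F
    from transformers import AutoModel, AutoTokenizer

    MODEL_NAME = "bert-base-uncased"  # assumed base checkpoint
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model_a = AutoModel.from_pretrained(MODEL_NAME)  # two independent copies
    model_b = AutoModel.from_pretrained(MODEL_NAME)

    def embed(model, sentences):
        # Mean-pool the final layer into a single vector per sentence.
        batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
        hidden = model(**batch).last_hidden_state             # (B, T, H)
        mask = batch["attention_mask"].unsqueeze(-1).float()  # (B, T, 1)
        return (hidden * mask).sum(dim=1) / mask.sum(dim=1)   # (B, H)

    def ct_loss(sents_a, sents_b, labels):
        # Binary noise-contrastive loss on dot products of the two encoders.
        # labels[i] = 1.0 when sents_a[i] and sents_b[i] are the same sentence
        # (positive pair), 0.0 when sents_b[i] is a randomly sampled negative.
        za = embed(model_a, sents_a)
        zb = embed(model_b, sents_b)
        logits = (za * zb).sum(dim=-1)  # dot-product similarity scores
        return F.binary_cross_entropy_with_logits(logits, labels)

    # Toy usage: one identical (positive) pair and one random negative pair.
    loss = ct_loss(
        ["the cat sat on the mat", "the cat sat on the mat"],
        ["the cat sat on the mat", "stock prices fell sharply today"],
        torch.tensor([1.0, 0.0]),
    )
    loss.backward()  # after tuning, either encoder's final layer can serve as a feature extractor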