Code-Mixed Probes Show How Pre-Trained Models Generalise On Code-Switched Text
Format: Article
Language: English
Abstract: Code-switching is a prevalent linguistic phenomenon in which multilingual individuals seamlessly alternate between languages. Despite its widespread use online and recent research interest in this area, work on code-switching presents unique challenges, primarily stemming from the scarcity of labelled data and available resources. In this study we investigate how pre-trained language models (PLMs) handle code-switched (CS) text along three dimensions: a) the ability of PLMs to detect code-switched text, b) variations in the structural information that PLMs utilise to capture code-switched text, and c) the consistency of semantic information representation in code-switched text. To conduct a systematic and controlled evaluation of the language models in question, we create a novel dataset of well-formed, naturalistic code-switched text along with parallel translations into the source languages. Our findings reveal that pre-trained language models are effective at generalising to code-switched text, shedding light on the ability of these models to generalise representations to CS corpora. We release all our code and data, including the novel corpus, at https://github.com/francesita/code-mixed-probes.
DOI: 10.48550/arxiv.2403.04872
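
The record above does not include the authors' probing code. As a rough illustration of the semantic-consistency dimension described in the abstract (comparing representations of code-switched text against parallel translations into the source languages), the minimal sketch below mean-pools a multilingual PLM's hidden states and measures cosine similarity. The model name, pooling strategy, and example sentences are assumptions for illustration only, not taken from the paper or its released corpus.

```python
# Minimal sketch (not the authors' implementation): compare a PLM's
# sentence representations for a code-switched sentence and its
# parallel monolingual translations.
import torch
from transformers import AutoTokenizer, AutoModel

MODEL_NAME = "bert-base-multilingual-cased"  # assumed multilingual PLM
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

def sentence_embedding(text: str) -> torch.Tensor:
    """Mean-pool the last hidden states over non-padding tokens."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state   # (1, seq_len, dim)
    mask = inputs["attention_mask"].unsqueeze(-1)    # (1, seq_len, 1)
    return (hidden * mask).sum(1) / mask.sum(1)      # (1, dim)

# Hypothetical English-Spanish triple: code-switched sentence plus
# parallel translations into each source language.
cs = "I went to the mercado to buy some fruta."
en = "I went to the market to buy some fruit."
es = "Fui al mercado a comprar algo de fruta."

emb_cs, emb_en, emb_es = map(sentence_embedding, (cs, en, es))
cos = torch.nn.functional.cosine_similarity
print("CS vs EN similarity:", cos(emb_cs, emb_en).item())
print("CS vs ES similarity:", cos(emb_cs, emb_es).item())
```

If a PLM represents code-switched text consistently with its monolingual sources, both similarity scores should be high; the paper's actual probes and dataset are available at the repository linked in the abstract.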