MAGE: Machine-generated Text Detection in the Wild
Saved in:
| Main Authors: | |
|---|---|
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online Access: | Order full text |
| Abstract: | Large language models (LLMs) have achieved human-level text generation, emphasizing the need for effective AI-generated text detection to mitigate risks like the spread of fake news and plagiarism. Existing research has been constrained by evaluating detection methods on specific domains or particular language models. In practical scenarios, however, the detector faces texts from various domains or LLMs without knowing their sources. To this end, we build a comprehensive testbed by gathering texts from diverse human writings and texts generated by different LLMs. Empirical results show challenges in distinguishing machine-generated texts from human-authored ones across various scenarios, especially out-of-distribution. These challenges are due to the decreasing linguistic distinctions between the two sources. Despite these challenges, the top-performing detector can identify 86.54% of out-of-domain texts generated by a new LLM, indicating the feasibility of real-world application. We release our resources at https://github.com/yafuly/MAGE. |
| DOI: | 10.48550/arxiv.2305.13242 |