Risks of abuse of large language models, like ChatGPT, in scientific publishing: Authorship, predatory publishing, and paper mills
Published in: Learned Publishing, 2024-01, Vol. 37 (1), pp. 55-62
Main authors:
Format: Article
Language: English
Online access: Full text
Summary: Key points
Academia is already witnessing the abuse of authorship in papers with text generated by large language models (LLMs) such as ChatGPT.
LLM‐generated text is testing the limits of publishing ethics as we traditionally know it.
We alert the community to imminent risks of LLM technologies, like ChatGPT, for amplifying the predatory publishing ‘industry’.
The abuse of ChatGPT by the paper mill industry cannot be over‐emphasized.
Detection of LLM‐generated text is the responsibility of editors and journals/publishers.
ISSN: 0953-1513, 1741-4857
DOI: 10.1002/leap.1578