The dangers of using large language models for peer review

Bibliographic Details
Published in: The Lancet Infectious Diseases, 2023-07, Vol. 23 (7), p. 781
Author: Donker, Tjibbe
Format: Article
Language: English
Online Access: Full text
Description
Abstract: The recent advances in artificial intelligence, and particularly large language models (LLMs) such as ChatGPT (OpenAI, San Francisco, CA, USA), have initiated extensive discussions in the scientific community regarding their potential uses and, more importantly, misuses. The real risk here is that the LLM produced a review report that looks properly balanced but has no specific critical content about the manuscript or the described study. Because it summarises the paper and methodology remarkably well, it could easily be mistaken for an actual review report by those who have not fully read the manuscript. [...] it is important that all participants within the peer-review process remain vigilant about the use of LLMs.
ISSN: 1473-3099 (print); 1474-4457 (electronic)
DOI: 10.1016/S1473-3099(23)00290-6