Can ChatGPT identify predatory biomedical and dental journals? A cross-sectional content analysis
Saved in:
Published in: | Journal of Dentistry 2024-03, Vol. 142, Article 104840, p. 104840 |
Main authors: | , , , , |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
Abstract: | •There is an ongoing challenge in identifying predatory journals. •ChatGPT correctly identified 92.5 % of predatory journals and 71 % of legitimate journals, demonstrating high accuracy and sensitivity. •Large language models may be useful for identifying predatory publications.
To assess whether ChatGPT can help identify predatory biomedical and dental journals, to analyze the content of its responses, and to compare the frequency of positive and negative indicators ChatGPT provided for predatory versus legitimate journals.
Four hundred predatory and legitimate biomedical and dental journals were selected from four sources: Beall's list, unsolicited emails, the Web of Science (WOS) journal list, and the Directory of Open Access Journals (DOAJ). ChatGPT was asked to determine each journal's legitimacy, and journals were classified as legitimate or predatory. Pearson's chi-squared test and logistic regression were conducted, and two machine learning algorithms identified the criteria most influential in the correct classification of journals.
The data were categorized under 10 criteria, the most frequently coded being transparency of processes and policies. ChatGPT correctly classified 92.5 % of predatory journals and 71 % of legitimate journals. The accuracy of ChatGPT's responses was 0.82. ChatGPT also demonstrated high sensitivity (0.93) and a specificity of 0.71, accurately identifying true negatives. A highly significant association between ChatGPT's verdicts and the classification based on known sources was observed (P |
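The reported metrics follow directly from the classification rates above. A minimal sketch of the arithmetic, assuming an equal 200/200 split between predatory and legitimate journals (the abstract states only that 400 journals were assessed in total; the exact split is an assumption here):

```python
# Hypothetical reconstruction of the reported metrics, assuming a
# 200/200 split of predatory vs. legitimate journals out of 400 total.
predatory_total, legitimate_total = 200, 200

true_positives = round(0.925 * predatory_total)  # predatory correctly flagged
true_negatives = round(0.71 * legitimate_total)  # legitimate correctly cleared

# Sensitivity: share of predatory journals correctly identified.
sensitivity = true_positives / predatory_total
# Specificity: share of legitimate journals correctly identified.
specificity = true_negatives / legitimate_total
# Accuracy: share of all journals classified correctly.
accuracy = (true_positives + true_negatives) / (predatory_total + legitimate_total)

print(sensitivity, specificity, accuracy)
```

Under this assumed split, the values come out to roughly 0.93, 0.71, and 0.82, matching the figures reported in the abstract.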
ISSN: | 0300-5712 1879-176X |
DOI: | 10.1016/j.jdent.2024.104840 |