Can Artificial Intelligence "Hold" a Dermoscope? - The Evaluation of an Artificial Intelligence Chatbot to Translate the Dermoscopic Language
Published in: Diagnostics (Basel) 2024-05, Vol. 14 (11), p. 1165
Main Authors: , , , , , ,
Format: Article
Language: English
Subjects:
Online Access: Full text
Abstract: This survey represents the first endeavor to assess the clarity of the dermoscopic language by a chatbot, unveiling insights into the interplay between dermatologists and AI systems within the complexity of the dermoscopic language. Given the complex, descriptive, and metaphorical aspects of the dermoscopic language, subjective interpretations often emerge. The survey evaluated the completeness and diagnostic efficacy of chatbot-generated reports, focusing on their role in facilitating accurate diagnoses and educational opportunities for novice dermatologists. A total of 30 participants were presented with hypothetical dermoscopic descriptions of skin lesions, including dermoscopic descriptions of skin cancers such as BCC, SCC, and melanoma; skin cancer mimickers such as actinic keratosis, seborrheic keratosis, dermatofibroma, and atypical nevus; and inflammatory dermatoses such as psoriasis and alopecia areata. Each description was accompanied by specific clinical information, and the participants were tasked with assessing the differential diagnosis list generated by the AI chatbot in its initial response. In each scenario, the chatbot generated an extensive list of potential differential diagnoses, exhibiting lower performance in the SCC and inflammatory dermatosis cases, albeit without statistical significance, suggesting that the participants were equally satisfied with the responses provided. Scores decreased notably when practical descriptions of dermoscopic signs were provided. Scores for the BCC scenario in the diagnosis category (2.9 ± 0.4) were higher than those for SCC (2.6 ± 0.66, p = 0.005) and inflammatory dermatoses (2.6 ± 0.67, p = 0). Similarly, in the teaching tool usefulness category, the BCC-based chatbot differential diagnosis received higher scores (2.9 ± 0.4) compared to SCC (2.6 ± 0.67, p = 0.001) and inflammatory dermatoses (2.4 ± 0.81, p = 0). These results underscore dermatologists' familiarity with BCC dermoscopic images while highlighting the challenges associated with interpreting rigorous dermoscopic images. Moreover, by incorporating patient characteristics such as age, phototype, or immune state, the differential diagnosis list in each case was customized to include lesion types appropriate for each category, illustrating the AI's flexibility in evaluating diagnoses and highlighting its value as a resource for dermatologists.
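The abstract reports paired score comparisons as mean ± SD with p-values but does not name the statistical test used. As a purely illustrative sketch, the snippet below generates hypothetical 1-3 ratings for 30 participants (roughly matching the reported BCC and SCC means) and applies a Wilcoxon signed-rank test, chosen here only as one common option for paired ordinal ratings; the data, the rating scale, and the test choice are all assumptions, not the study's methodology.

```python
# Illustrative only: hypothetical ratings, not the study's data or test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical 1-3 satisfaction ratings from the same 30 participants
# for the BCC and SCC scenarios (targets ~2.9 +/- 0.4 vs ~2.6 +/- 0.66).
bcc = rng.choice([2, 3], size=30, p=[0.10, 0.90])
scc = rng.choice([1, 2, 3], size=30, p=[0.10, 0.20, 0.70])

# Summary statistics in the abstract's "mean +/- SD" form
print(f"BCC: {bcc.mean():.1f} +/- {bcc.std(ddof=1):.2f}")
print(f"SCC: {scc.mean():.1f} +/- {scc.std(ddof=1):.2f}")

# Wilcoxon signed-rank test on the paired per-participant ratings
# (an assumed choice; the paper does not state which test it used).
stat, p = stats.wilcoxon(bcc, scc)
print(f"Wilcoxon signed-rank: W={stat:.1f}, p={p:.4f}")
```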
ISSN: 2075-4418
DOI: 10.3390/diagnostics14111165