The Content Quality of YouTube Videos for Professional Medical Education: A Systematic Review

Full Description

Bibliographic Details
Published in: Academic Medicine 2021-10, Vol. 96 (10), p. 1484-1493
Main Authors: Helming, Andrew G., Adler, David S., Keltner, Case, Igelman, Austin D., Woodworth, Glenn E.
Format: Article
Language: English
Subjects:
Online Access: Full text
Description

Summary:

Purpose: To evaluate the content quality of YouTube videos intended for professional medical education based on quality rating tool (QRT) scores, and to determine whether video characteristics, engagement metrics, or author type are associated with quality.

Method: The authors searched 7 databases for English-language studies about the quality of YouTube videos intended for professional medical education, from each database's inception through April 2019. To be included, studies had to be published in 2005 (when YouTube was created) or later. Studies were classified according to the type of QRT used: externally validated, internally validated, or limited global. Study information, video characteristics, and engagement metrics were extracted. Videos were classified by video author type.

Results: Thirty-one studies were included in this review. Three studies used externally validated QRTs, 20 used internally validated QRTs, and 13 used limited global QRTs. Studies using externally validated QRTs had average scores/total possible scores of 1.3/4, 26/80, and 1.7/5. Among the 18 studies using internally validated QRTs from which an average percentage of the total possible QRT score could be computed or extracted, the average score was 44% (range: 9%-71%). Videos with academic-physician authors had higher internally validated QRT mean scores (46%) than those with nonacademic-physician or other authors (26%; P < .05).

Conclusions: The authors found wide variation in the QRT scores of videos, with many low QRT scores. While videos authored by academic physicians were of higher quality on average, their quality still varied significantly. Video characteristics and engagement metrics were found to be unreliable surrogate measures of video quality. A lack of unifying grading criteria for video content quality, poor search algorithm optimization, and insufficient peer review or controls on submitted videos likely contributed to the overall poor quality of YouTube videos that could be used for professional medical education.
ISSN: 1040-2446; 1938-808X
DOI: 10.1097/ACM.0000000000004121