The Use of Artificial Intelligence to Improve Readability of Otolaryngology Patient Education Materials

Bibliographic Details
Published in: Otolaryngology–Head and Neck Surgery, 2024-08, Vol. 171 (2), p. 603-608
Main authors: Patel, Evan A., Fleischer, Lindsay, Filip, Peter, Eggerstedt, Michael, Hutz, Michael, Michaelides, Elias, Batra, Pete S., Tajudeen, Bobby A.
Format: Article
Language: English
Description
Abstract:
Objective: The recommended readability of health education materials is at the sixth-grade level. Artificial intelligence (AI) large language models such as the newly released ChatGPT4 might facilitate the conversion of patient-education materials at scale. We sought to ascertain whether online otolaryngology education materials meet recommended reading levels and whether ChatGPT4 could rewrite these materials to the sixth-grade level. We also wished to ensure that converted materials were accurate and retained sufficient content.
Methods: Seventy-one articles from patient educational materials published online by the American Academy of Otolaryngology–Head and Neck Surgery were selected. Articles were entered into ChatGPT4 with the prompt "translate this text to a sixth-grade reading level." Flesch Reading Ease Score (FRES) and Flesch-Kincaid Grade Level (FKGL) were determined for each article before and after AI conversion. Each article and its conversion were reviewed for factual inaccuracies, and each conversion was reviewed for content retention.
Results: The 71 articles had an initial average FKGL of 11.03 and FRES of 46.79. After conversion by ChatGPT4, the average FKGL across all articles was 5.80 and FRES was 77.27. Converted materials provided enough detail for patient education with no factual errors.
Discussion: We found that ChatGPT4 improved the reading accessibility of otolaryngology online patient education materials to recommended levels quickly and effectively.
Implications for Practice: Physicians can determine whether their patient education materials exceed current recommended reading levels by using widely available measurement tools, and then apply AI dialogue platforms to modify materials to more accessible levels as needed.
Level of Evidence: Level 5.
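
The measure-convert-remeasure workflow described in the Methods can be scripted with widely available tools. The sketch below is illustrative only and is not code from the study: it assumes the open-source textstat package for the Flesch metrics and the OpenAI Python client, and the model identifier and input file name are hypothetical placeholders. Only the conversion prompt is taken from the abstract.

    # Illustrative sketch: compute FRES/FKGL before and after an LLM rewrite.
    # Assumes the third-party "textstat" and "openai" packages are installed.
    import textstat
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Prompt used in the study, per the abstract.
    PROMPT = "translate this text to a sixth-grade reading level"

    def readability(text: str) -> tuple[float, float]:
        # FRES = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
        # FKGL = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
        return textstat.flesch_reading_ease(text), textstat.flesch_kincaid_grade(text)

    def convert(article_text: str) -> str:
        # Send the article text with the study's prompt to a GPT-4-class model.
        response = client.chat.completions.create(
            model="gpt-4",  # assumed model identifier, not specified beyond "ChatGPT4" in the abstract
            messages=[{"role": "user", "content": f"{PROMPT}:\n\n{article_text}"}],
        )
        return response.choices[0].message.content

    original = open("patient_education_article.txt").read()  # hypothetical input file
    fres_before, fkgl_before = readability(original)
    converted = convert(original)
    fres_after, fkgl_after = readability(converted)
    print(f"FRES {fres_before:.1f} -> {fres_after:.1f}, FKGL {fkgl_before:.1f} -> {fkgl_after:.1f}")

In practice, the same readability check can be run on its own to flag materials above the sixth-grade threshold before deciding whether any conversion is needed.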
ISSN:0194-5998
1097-6817
DOI:10.1002/ohn.816