Comparison of Three Commercially Available, AI-Driven Cephalometric Analysis Tools in Orthodontics
Published in: Journal of Clinical Medicine, 2024-06, Vol. 13 (13), p. 3733
Format: Article
Language: English
Online access: Full text
Abstract: Cephalometric analysis (CA) is an indispensable diagnostic tool in orthodontics for treatment planning and outcome assessment. Manual CA is time-consuming and prone to variability.
This study aims to compare the accuracy and repeatability of CA results among three commercial AI-driven programs: CephX, WebCeph, and AudaxCeph. Lateral cephalograms from a single orthodontic center were analyzed retrospectively. Automated CA was performed with each program, focusing on common parameters defined by Downs, Ricketts, and Steiner. Repeatability was tested by having each program reanalyze 50 randomly selected cases. Statistical analyses included intraclass correlation coefficients (ICC3) for agreement and the Friedman test for concordance (a computational sketch of these tests follows the abstract).
One hundred twenty-four cephalograms were analyzed. Agreement between the AI systems was high for most parameters (ICC3 > 0.9). Notable differences were found in the measurements of the angle of convexity and the occlusal plane, suggesting that the programs use different measurement methodologies. Some analyses showed high variability in the results, indicating errors. Repeatability analysis revealed perfect agreement within each program.
AI-driven cephalometric analysis tools demonstrate high potential for reliable and efficient orthodontic assessments, with substantial agreement in repeated analyses. Nevertheless, the observed discrepancies and the high variability in some of the analyses underscore the need for standardization across AI platforms and for critical evaluation of automated results by clinicians, particularly for parameters with significant treatment implications.
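The methods above mention ICC3 (a two-way mixed-effects, single-rater intraclass correlation, appropriate when the raters are a fixed set of programs rather than a random sample) and the Friedman test for related samples. The following is a minimal Python sketch of how such statistics could be computed for a single cephalometric parameter; it is an illustration under assumptions, not the authors' analysis code, and the input file and the column names `case`, `software`, and `value` are hypothetical.

```python
# Minimal sketch: agreement (ICC3) and concordance (Friedman test) for one
# cephalometric parameter measured by three AI programs on the same cases.
# Assumes a hypothetical long-format CSV with columns: case, software, value.
import pandas as pd
import pingouin as pg
from scipy.stats import friedmanchisquare

df = pd.read_csv("sna_measurements.csv")  # hypothetical input file

# ICC3: two-way mixed-effects model, single rater, consistency -- treats the
# three named programs as fixed "raters" scoring each cephalogram ("target").
icc = pg.intraclass_corr(data=df, targets="case", raters="software",
                         ratings="value")
print(icc.loc[icc["Type"] == "ICC3", ["ICC", "CI95%", "pval"]])

# Friedman test: non-parametric test of concordance across the three related
# samples (one measurement per program for each case).
wide = df.pivot(index="case", columns="software", values="value")
stat, p = friedmanchisquare(wide["CephX"], wide["WebCeph"], wide["AudaxCeph"])
print(f"Friedman chi-squared = {stat:.2f}, p = {p:.4f}")
```

ICC3 is the natural model choice here because the three programs are the specific systems under study (fixed raters); ICC2 would instead treat the raters as a random sample from a larger population.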
ISSN: 2077-0383
DOI: 10.3390/jcm13133733