Deep learning‐based facial analysis for predicting difficult videolaryngoscopy: a feasibility study


Bibliographic details
Published in: Anaesthesia 2024-04, Vol. 79 (4), p. 399-409
Authors: Xia, M., Jin, C., Zheng, Y., Wang, J., Zhao, M., Cao, S., Xu, T., Pei, B., Irwin, M. G., Lin, Z., Jiang, H.
Format: Article
Language: English
Online access: Full text
Abstract: While videolaryngoscopy has resulted in better overall success rates of tracheal intubation, airway assessment is still an important prerequisite for safe airway management. This study aimed to create an artificial intelligence model to identify difficult videolaryngoscopy using a neural network. Baseline characteristics, medical history, bedside examination and seven facial images were included as predictor variables. ResNet-18 was used to recognise the images and extract features, and several machine learning algorithms were then used to develop predictive models. A videolaryngoscopy view of Cormack-Lehane grade 1 or 2 was classified as 'non-difficult', while grade 3 or 4 was classified as 'difficult'. A total of 5849 patients were included, of whom 5335 had non-difficult and 514 had difficult videolaryngoscopy. The facial model (including only facial images) using the Light Gradient Boosting Machine algorithm showed the highest area under the curve (95%CI) of 0.779 (0.733–0.825), with a sensitivity (95%CI) of 0.757 (0.650–0.845) and specificity (95%CI) of 0.721 (0.626–0.794) in the test set. Compared with bedside examination and multivariate scores (El-Ganzouri and Wilson), the facial model had significantly higher predictive performance (p …
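The outcome definition and evaluation metrics in the abstract can be sketched in plain Python. This is an illustrative reconstruction only, not the study's code: the function names and the toy scores are assumptions, the image-feature extraction (ResNet-18) and Light Gradient Boosting Machine training steps are omitted, and AUC is computed via the equivalent Mann-Whitney rank statistic rather than trapezoidal ROC integration.

```python
# Sketch of the abstract's outcome labelling (Cormack-Lehane grade -> binary
# 'difficult' label) and evaluation metrics (AUC, sensitivity, specificity).
# Names and example scores are hypothetical, not the study's data.

def label_difficult(cl_grade: int) -> int:
    """Cormack-Lehane grade 1-2 -> 0 (non-difficult); grade 3-4 -> 1 (difficult)."""
    if cl_grade not in (1, 2, 3, 4):
        raise ValueError("Cormack-Lehane grade must be 1-4")
    return 1 if cl_grade >= 3 else 0

def auc(scores, labels):
    """AUC as the Mann-Whitney statistic: the probability that a randomly
    chosen 'difficult' case receives a higher score than a randomly chosen
    'non-difficult' case (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sensitivity_specificity(scores, labels, threshold):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP) at a score cut-off."""
    pred = [int(s >= threshold) for s in scores]
    tp = sum(p == 1 and y == 1 for p, y in zip(pred, labels))
    tn = sum(p == 0 and y == 0 for p, y in zip(pred, labels))
    fn = sum(p == 0 and y == 1 for p, y in zip(pred, labels))
    fp = sum(p == 1 and y == 0 for p, y in zip(pred, labels))
    return tp / (tp + fn), tn / (tn + fp)

# Toy example: grades for eight hypothetical patients and model scores.
grades = [1, 2, 2, 3, 4, 1, 3, 2]
labels = [label_difficult(g) for g in grades]          # [0, 0, 0, 1, 1, 0, 1, 0]
scores = [0.1, 0.3, 0.2, 0.8, 0.9, 0.15, 0.6, 0.4]
print(auc(scores, labels))                             # 1.0: every difficult case outranks every non-difficult one
print(sensitivity_specificity(scores, labels, 0.5))    # (1.0, 1.0) at this cut-off
```

In the study, the classifier's class imbalance (514 difficult vs. 5335 non-difficult) is why a threshold-free metric like AUC, alongside sensitivity and specificity, is the natural way to report performance.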
ISSN: 0003-2409 (print); 1365-2044 (online)
DOI: 10.1111/anae.16194