Artificial intelligence for mechanical ventilation: systematic review of design, reporting standards, and bias

Bibliographic Details
Published in: British Journal of Anaesthesia (BJA), February 2022, Vol. 128, No. 2, pp. 343-351
Main authors: Gallifant, Jack; Zhang, Joe; del Pilar Arias Lopez, Maria; Zhu, Tingting; Camporota, Luigi; Celi, Leo A.; Formenti, Federico
Format: Article
Language: English
Online Access: Full text
Description
Abstract: Artificial intelligence (AI) has the potential to personalise mechanical ventilation strategies for patients with respiratory failure. However, current methodological deficiencies could limit clinical impact. We identified common limitations and propose potential solutions to facilitate translation of AI to mechanical ventilation of patients. A systematic review was conducted in MEDLINE, Embase, and PubMed Central to February 2021. Studies investigating the application of AI to patients undergoing mechanical ventilation were included. Algorithm design and adherence to reporting standards were assessed with a rubric combining published guidelines, including the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) statement. Risk of bias was assessed using the Prediction model Risk Of Bias ASsessment Tool (PROBAST), and authors were contacted to assess data and code availability. Our search identified 1,342 studies, of which 95 were included: 84 had a single-centre, retrospective design, and only one was a randomised controlled trial. Access to data sets and code was severely limited (unavailable in 85% and 87% of studies, respectively). On request, data and code were made available by 12 and 10 authors, respectively, of the 54 studies published in the last 5 years. Ethnicity was frequently under-reported (18/95; 19%), as was model calibration (17/95; 18%). The risk of bias was high in 89% (85/95) of the studies, especially because of analysis bias. Development of algorithms should involve prospective and external validation, with greater code and data availability to improve confidence in and translation of this promising approach. PROSPERO registration: CRD42021225918.
ISSN: 0007-0912, 1471-6771
DOI: 10.1016/j.bja.2021.09.025