Artificial intelligence bias in medical system designs: a systematic review
Published in: | Multimedia Tools and Applications, 2024-02, Vol. 83 (6), pp. 18005-18057
---|---
Main authors: | , , , , , , , , , , , , , , ,
Format: | Article
Language: | English
Subjects: |
Online access: | Full text
Summary: | Inherent bias in an artificial intelligence (AI) model introduces inaccuracies and variability during clinical deployment. Recognizing the source of bias in an AI model is challenging because of variations in datasets and the black-box nature of system design, and there is no distinct process for identifying the potential sources of bias. To the best of our knowledge, this is the first review of its kind to address bias in AI models by categorizing 48 studies into three classes, namely point-based, image-based, and hybrid-based AI models. A PRISMA-based selection strategy is adopted to select 72 crucial AI studies for identifying bias in AI models. Using the three classes, bias is identified in these studies on the basis of 44 critical AI attributes. Bias in the AI models is computed by analytical, butterfly, and ranking-based bias models; these bias models were evaluated by two experts and compared using variability analysis. AI studies that lack sufficient AI attributes are more prone to risk of bias (RoB) in all three classes, and studies with high RoB lose fins in the butterfly model. The analysis shows that the majority of studies in healthcare suffer from data bias and algorithmic bias due to incomplete specifications in the design protocol and weak AI designs exploited for prediction.
ISSN: | 1380-7501; 1573-7721
DOI: | 10.1007/s11042-023-16029-x |
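
As a rough illustration of the ranking-based risk-of-bias (RoB) scoring described in the summary, the sketch below scores a study by the fraction of critical AI attributes it fails to report and bins the result into a RoB grade. The attribute names, score definition, and grade cut-offs are assumptions made for illustration only; they are not taken from the paper.

```python
# Hypothetical sketch of a ranking-based risk-of-bias (RoB) score, assuming a
# study is checked against a list of critical AI attributes and penalized for
# each attribute it does not report. All names and thresholds are illustrative.

from dataclasses import dataclass

# Illustrative subset of the 44 critical AI attributes mentioned in the summary.
CRITICAL_ATTRIBUTES = [
    "dataset_size_reported",
    "demographic_diversity_reported",
    "ground_truth_protocol_described",
    "validation_strategy_described",
    "performance_metrics_reported",
]

@dataclass
class Study:
    name: str
    attributes_present: set[str]  # attributes the study explicitly reports

def rob_score(study: Study) -> float:
    """Fraction of critical attributes missing from the study (0 = none missing)."""
    missing = [a for a in CRITICAL_ATTRIBUTES if a not in study.attributes_present]
    return len(missing) / len(CRITICAL_ATTRIBUTES)

def rob_grade(score: float) -> str:
    """Map the score to a low/moderate/high grade (cut-offs are assumptions)."""
    if score < 0.33:
        return "low RoB"
    if score < 0.66:
        return "moderate RoB"
    return "high RoB"

if __name__ == "__main__":
    study = Study(
        name="example point-based AI study",
        attributes_present={"dataset_size_reported", "performance_metrics_reported"},
    )
    score = rob_score(study)
    print(f"{study.name}: score={score:.2f} -> {rob_grade(score)}")
```

In this toy example the study reports 2 of 5 attributes, giving a score of 0.60 and a "moderate RoB" grade; the paper's own analytical, butterfly, and ranking-based bias models are more elaborate than this attribute-counting sketch.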