Fairer AI in ophthalmology via implicit fairness learning for mitigating sexism and ageism



Bibliographic Details
Published in: Nature Communications, 2024-06, Vol. 15 (1), p. 4750, Article 4750
Authors: Tan, Weimin, Wei, Qiaoling, Xing, Zhen, Fu, Hao, Kong, Hongyu, Lu, Yi, Yan, Bo, Zhao, Chen
Format: Article
Language: English
Keywords:
Online access: Full text
Description
Abstract: The transformative role of artificial intelligence (AI) in various fields highlights the need for it to be both accurate and fair. Biased medical AI systems pose significant potential risks to achieving fair and equitable healthcare. Here, we present an implicit fairness learning approach to build a fairer ophthalmology AI (called FairerOPTH) that mitigates sex (biological attribute) and age biases in AI diagnosis of eye diseases. Specifically, FairerOPTH incorporates the causal relationship between fundus features and eye diseases, which is relatively independent of sensitive attributes such as race, sex, and age. We demonstrate on a large and diverse dataset we collected that FairerOPTH significantly outperforms several state-of-the-art approaches in both diagnostic accuracy and fairness for 38 eye diseases in ultra-widefield imaging and 16 eye diseases in narrow-angle imaging. This work demonstrates the significant potential of implicit fairness learning in promoting equitable treatment for patients regardless of their sex or age.
ISSN: 2041-1723
DOI: 10.1038/s41467-024-48972-0