Fair-AutoML: Enhancing fairness in machine learning predictions through automated machine learning and bias mitigation techniques


Bibliographic Details
Main Authors: Komala, C. R., Kumar, Ashok, Hema, N., Nagarani, S., Yadav, Ajay Singh, Rajendiran, M., Srinivasan, R., Vijayan, V.
Format: Conference Proceeding
Language: English
Subjects:
Online Access: Full text
Description
Summary: The use of machine learning (ML) in decision-making software is rising, but recent events have cast doubt on the reliability of ML predictions. Addressing this requires new approaches and tools to reduce bias in ML systems. There have been prior attempts to address bias, but these algorithms have been tested only in limited contexts and frequently produce inaccurate results. Our proposed solution is a novel method that applies AutoML techniques to reduce bias. It introduces two key innovations: a new optimization function and a fairness-aware search space. By incorporating fairness objectives and extending AutoML's default optimization procedure, we reduce bias while maintaining high accuracy. We also propose a fairness-aware pruning of AutoML's search space, which cuts computational cost and repair time. Built on the state-of-the-art Auto-Sklearn tool, our method mitigates bias in practical applications. Compared to the baseline and other bias-reduction strategies, our results show a considerable improvement. We evaluated our approach on four fairness problems and sixteen distinct ML models to verify its effectiveness. Compared with existing bias-mitigation methods, our method, Fair-AutoML, was able to fix 64 out of 64 buggy cases.
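The abstract describes an optimization function that incorporates fairness objectives alongside accuracy. The exact objective used by Fair-AutoML is not given in the abstract; the following is a minimal illustrative sketch of one common way to combine the two, assuming a binary classifier, a binary protected attribute, and statistical parity difference as the fairness penalty (all of these choices are assumptions, not the paper's method):

```python
import numpy as np

def fairness_aware_score(y_true, y_pred, group, lam=1.0):
    """Illustrative combined objective: accuracy minus a weighted fairness penalty.

    The penalty is the statistical parity difference, i.e. the gap in
    positive-prediction rates between two demographic groups (0 and 1).
    Hypothetical sketch; not the actual Fair-AutoML objective.
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)

    accuracy = np.mean(y_true == y_pred)
    rate_0 = np.mean(y_pred[group == 0])  # positive-prediction rate, group 0
    rate_1 = np.mean(y_pred[group == 1])  # positive-prediction rate, group 1
    parity_gap = abs(rate_0 - rate_1)
    return accuracy - lam * parity_gap

# A perfectly accurate, group-balanced predictor incurs no penalty:
score = fairness_aware_score([1, 0, 1, 0], [1, 0, 1, 0], [0, 0, 1, 1])
```

An AutoML system such as Auto-Sklearn can optimize a custom metric like this in place of plain accuracy, which is one way to realize the "incorporating fairness targets into the default optimization" idea the abstract describes.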
ISSN: 0094-243X, 1551-7616
DOI: 10.1063/5.0235062