From plane crashes to algorithmic harm: applicability of safety engineering frameworks for responsible ML
Format: Article
Language: English
Abstract: Inappropriate design and deployment of machine learning (ML) systems leads to negative downstream social and ethical impact -- described here as social and ethical risks -- for users, society and the environment. Despite the growing need to regulate ML systems, current processes for assessing and mitigating risks are disjointed and inconsistent. We interviewed 30 industry practitioners about their current social and ethical risk management practices, and collected their first reactions to adapting safety engineering frameworks into their practice -- namely, System Theoretic Process Analysis (STPA) and Failure Mode and Effects Analysis (FMEA). Our findings suggest that STPA/FMEA can provide an appropriate structure for social and ethical risk assessment and mitigation processes. However, we also find nontrivial challenges in integrating such frameworks into the fast-paced culture of the ML industry. We call on the ML research community to strengthen existing frameworks and assess their efficacy, ensuring that ML systems are safer for all people.
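Illustrative note: FMEA conventionally scores each failure mode on 1-10 scales for severity, occurrence, and detectability, and multiplies them into a Risk Priority Number (RPN) used to triage mitigations. The sketch below shows how such a worksheet might be adapted to ML ethical risks; the harm modes, scores, and the resume-screening scenario are hypothetical examples, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    """One row of an FMEA-style worksheet, adapted to ML ethical risk.

    Scores follow the conventional FMEA 1-10 scales: severity of the
    harm, likelihood of occurrence, and difficulty of detection
    (10 = hardest to detect before harm occurs).
    """
    description: str
    severity: int
    occurrence: int
    detection: int

    def rpn(self) -> int:
        # Standard FMEA Risk Priority Number: severity * occurrence * detection.
        return self.severity * self.occurrence * self.detection

# Hypothetical harm modes for a resume-screening model.
worksheet = [
    FailureMode("Model downranks resumes with employment gaps", 8, 6, 7),
    FailureMode("Proxy feature correlates with a protected attribute", 9, 4, 8),
    FailureMode("Data drift degrades accuracy for minority dialects", 7, 5, 6),
]

# Triage: address the highest-RPN failure modes first.
for mode in sorted(worksheet, key=lambda m: m.rpn(), reverse=True):
    print(f"RPN {mode.rpn():4d}  {mode.description}")
```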
DOI: 10.48550/arxiv.2210.03535