Contesting border artificial intelligence: Applying the guidance-ethics approach as a responsible design lens

Bibliographic Details
Published in: Data & Policy 2022-01, Vol. 4, Article e36
Authors: La Fors, Karolina; Meissner, Fran
Format: Article
Language: English
Subjects:
Online Access: Full text
Description
Abstract: Border artificial intelligence (AI)—biometrics-based AI systems used in border control contexts—is proliferating as a common tool in border securitization projects. Such systems classify some migrants as posing risks such as identity fraud, other forms of criminality, or terrorism. From a human rights perspective, using such risk framings for algorithmically facilitated evaluations of migrants’ biometrics systematically calls into question whether these kinds of systems can be built to be trustworthy for migrants. This article offers a thought experiment: we use a bottom-up responsible design lens—the guidance-ethics approach—to evaluate whether responsible, trustworthy Border AI might constitute an oxymoron. The proposed European AI Act only limits the use of Border AI systems by classifying such systems as high risk. In parallel with these AI regulatory developments, large-scale civic movements have emerged throughout Europe to ban the use of facial recognition technologies in public spaces in defense of EU citizens’ privacy. That such systems remain acceptable for states to use in evaluating migrants, we argue, insufficiently protects migrants’ lives. In part, this is because regulations and ethical frameworks are top-down and technology-driven, focusing more on the safety of AI systems than on the safety of migrants. We conclude that bordering technologies developed from a responsible design angle would entail the development of entirely different technologies: ones that refrain from harmful sorting based on biometric identifications and instead start from the premise that migration is not a societal problem.
ISSN: 2632-3249
DOI: 10.1017/dap.2022.28