AI Procurement Checklists: Revisiting Implementation in the Age of AI Governance
Format: Article
Language: English
Abstract: Public sector use of AI has been quietly on the rise for the past decade, but only recently have efforts to regulate it entered the cultural zeitgeist. While simple to articulate, promoting ethical and effective rollouts of AI systems in government is a notoriously elusive task. On the one hand, there are hard-to-address pitfalls associated with AI-based tools, including concerns about bias towards marginalized communities, safety, and gameability. On the other, there is pressure not to make it too difficult to adopt AI, especially in the public sector, which typically has fewer resources than the private sector; conserving scarce government resources is often the draw of using AI-based tools in the first place. These tensions create a real risk that procedures built to ensure marginalized groups are not hurt by government use of AI will, in practice, be performative and ineffective. To inform the latest wave of regulatory efforts in the United States, we look to jurisdictions with mature regulations around government AI use. We report on lessons learned by officials in Brazil, Singapore, and Canada, who have collectively implemented risk categories, disclosure requirements, and assessments into the way they procure AI tools. In particular, we investigate two implemented checklists: the Canadian Directive on Automated Decision-Making (CDADM) and the World Economic Forum's AI Procurement in a Box (WEF). We detail three key pitfalls, around expertise, risk frameworks, and transparency, that can decrease the efficacy of regulations aimed at government AI use, and we suggest avenues for improvement.
DOI: 10.48550/arxiv.2404.14660