NORMATIVE DIMENSIONS OF CONSENSUAL APPLICATION OF BLACK BOX ARTIFICIAL INTELLIGENCE IN ADMINISTRATIVE ADJUDICATION OF BENEFITS CLAIMS

Bibliographic Details
Published in: Law and Contemporary Problems, 2021-06, Vol. 84 (3), p. 35
Main Author: Pasquale, Frank
Format: Article
Language: English
Online Access: Full Text
Description
Summary: Recent calls for administrative austerity have included demands that agencies do more with less as they make decisions about benefit eligibility.1 This economic logic dovetails with a business case for automating consideration of disputes. The field of computational legal studies suggests ways of deploying natural language processing to triage case filings, or otherwise to find patterns in past adjudications in order to inform (or even complete) the resolution of disputes.2 For example, a certain combination of medical records may have always led to an award of disability benefits in the past. Administrators may decide to fast-track such claims, or may even decide to award benefits on the basis of those medical records. Conversely, claims that look too unlike past successful claims may be rejected at the outset, ideally with some instructions as to how they may be improved. In the longer term, more ambitious surveillance programs may feed into administrative adjudications. For example, there are calls in the United States to review the eligibility of benefits recipients via evidence that could include surveillance of their social media feeds.3 However, using black box AI to deny benefits is an untested and dangerous proposal. Even when algorithms are transparent, problems arise. For example, Australia's Centrelink agency used defective algorithms and data to mail thousands of letters to claimants demanding return of alleged overpayments. Many of these demands were inaccurate, causing a great deal of distress among those who received the letters.4 Nevertheless, there are promising avenues for automation of law. Tax scholars have argued that as many as 100 million tax returns filed in the United States each year may be unnecessary, wasting millions of hours of tedious form-filling and record-keeping. Instead, the government could automatically determine the tax liability of persons who take the standard deduction.5 With more advanced data and tax laws written to be machine-readable, even complex returns may be automated.6 Thus, automated administration offers both promising possibilities and clear warnings of potential negative consequences.
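
The summary above describes, at a policy level, how natural language processing might triage claims by their resemblance to past adjudications. The Python sketch below is one minimal way such a triage step could look, using scikit-learn's TF-IDF vectorizer and cosine similarity; the toy claim texts, the similarity thresholds, and the routing labels are illustrative assumptions and are not taken from the article.

# Hypothetical sketch of the triage idea described in the summary above: compare a
# new benefits claim's text with past adjudicated claims and flag close matches to
# previously awarded claims for fast-tracking. The claim texts, the TF-IDF /
# cosine-similarity approach, and the thresholds are illustrative assumptions only.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus of past adjudications: (claim text, whether benefits were awarded).
past_claims = [
    ("degenerative disc disease, MRI confirms nerve compression, unable to stand", True),
    ("chronic back pain, physical therapy records, severely limited mobility", True),
    ("short-term wrist sprain, full recovery documented by treating physician", False),
]

vectorizer = TfidfVectorizer()
past_matrix = vectorizer.fit_transform([text for text, _ in past_claims])

def triage(new_claim_text: str, fast_track_threshold: float = 0.8) -> str:
    """Suggest a routing decision for a new claim based on similarity to past awards.

    The suggestion only informs human review; it never denies a claim outright,
    reflecting the article's warning against using opaque models to reject benefits.
    """
    new_vec = vectorizer.transform([new_claim_text])
    similarities = cosine_similarity(new_vec, past_matrix)[0]
    best_idx = int(similarities.argmax())
    best_score = similarities[best_idx]
    was_awarded = past_claims[best_idx][1]

    if was_awarded and best_score >= fast_track_threshold:
        return "fast-track: closely resembles a previously awarded claim"
    if best_score < 0.2:
        return "human review: unlike any past adjudication on file"
    return "standard review queue"

print(triage("MRI shows nerve compression from degenerative disc disease, cannot stand"))

Note that the sketch only routes claims toward faster or closer human review rather than rejecting them, consistent with the article's caution that using black box AI to deny benefits is untested and dangerous.
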
ISSN: 0023-9186, 1945-2322