Fair or Unfair Differentiation? Reconsidering the Concept of Equality for the Regulation of Algorithmically Guided Decision-Making
Saved in:

Main Author: | Naudts, Laurens |
---|---|
Format: | Dissertation |
Language: | eng |
Online Access: | Order full text |
creator | Naudts, Laurens |
description | Algorithms are increasingly relied upon to help decision-makers automate, streamline, structure and guide a variety of decision-making processes, both trivial and critical, in both the public and private sector. In this data-driven environment, people and groups of people are continuously classified, categorised, ranked and scored on a variety of features or attributes, such as their characteristics, interests, behaviour and preferences. For decision-subjects, the consequences of these classification acts can be significant: they affect the choices and options they are presented, the interactions and relationships they hold with others and themselves, the opportunities they are given, and the burdens and benefits they carry. Furthermore, when applied on a large enough scale, these instances of differentiation may also initiate social change. This dissertation is concerned with one particular type of injustice that may emerge from the deployment of algorithmic decision-making systems: the introduction of unjustifiable (in)equality brought about by differentiation acts that take place within and as part of these systems. Due to the complexity of the digital environment and the distinctive characteristics algorithmically guided decisions exhibit, however, it has become increasingly difficult to assess whether the decisions these knowledge- and data-driven systems inform, and the (in)equalities they produce, can be justified.
In this dissertation, I examine whether we can revitalise the concept of equality to guide the evaluation and regulation of algorithmically guided decision-making processes in light of the inequalities they produce.
I reposition and operationalise the notion of equality as a practicable and interpretative lens to strengthen the evaluation and regulation of algorithmically guided decision-making practices in light of the inequalities they produce, or risk producing.
In a first step, I define the algorithmic research context in which I want to operationalise the notion of equality. I explore a series of characteristics algorithmic systems exhibit that render the inequalities they generate distinctive in terms of their form and scope. Due to these unique characteristics, algorithmic inequalities have the potential to restructure the fabric of society along new and existing dimensions: they may not only reinforce existing social injustice, but they may also introduce new forms of (non-representational) injustice.
Drawing inspiration from both European equality and non-discrimination law and political philosophical theories of justice, and informed by the (practical) functioning of algorithmic decision-making systems and the particular challenges they bring along, I then present equality as a multidimensional concept that can be specified along three (interrelated) axes. The model represents a core set of social ideals commonly associated with the notion of equality as a structuring value: equal concern and respect (the moral dimension), equal social standing (the socio-relational dimension), and/or equal enjoyment of or access to certain justice-relevant goods (the distributive dimension).
Throughout this dissertation, this model is operationalised as a practicable, analytical and interpretative lens to identify, articulate and evaluate algorithmic injustice, and the responses formulated thereto within a given policy, law, code or theory. In a first step, I position the multidimensional model against the algorithmic environment in order to articulate and identify the egalitarian harms algorithms risk imposing (Part I: Identification; Chapter 2). In a second step, the multidimensional model functions as a supportive mechanism to unearth the legal conceptualisation of equality found within European equality and non-discrimination law (Council of Europe and European Union). By concretising whom the law protects against which (algorithmic) inequalities in the pursuit of which ideals, I assess whether the legal approach to equality can address the injustices algorithms risk introducing (Part II: Evaluation; Chapters 3-5). Finally, I rely upon the model to locate and examine specific notions of equality, chosen for their correspondence with the aforementioned dimensions. These socio-relational (domination and oppression) and distributive (primary goods and capabilities) notions are examined in order to further specify the egalitarian harms algorithms produce or risk producing, the conditions under which these harms can manifest, and the institutional safeguards we can provide decision-subjects and society at large to strengthen their position in the algorithmically mediated environment (Part III: Navigation; Chapters 6 and 7). In the final part, I summarise the main research findings and derive from them a set of normative recommendations to reposition the concept of equality within the algorithmic governance debate, in an effort to further strengthen its guiding function for the evaluation and regulation of algorithmically informed decision-making systems (Part IV: Synthesis; Chapter 8). |
format | Dissertation |
advisor | Vedder, Anton; Sottiaux, Stefan |
creationdate | 2023-01-23 |
language | eng |
source | Lirias (KU Leuven Association) |
title | Fair or Unfair Differentiation? Reconsidering the Concept of Equality for the Regulation of Algorithmically Guided Decision-Making |
url | https://lirias.kuleuven.be/handle/20.500.12942/711474 |