Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI)

Explainable Artificial Intelligence (XAI) has recently gained a surge of interest, as many Artificial Intelligence (AI) practitioners and developers are compelled to rationalize how such AI-based systems work. Decades ago, most XAI systems were developed as knowledge-based or expert systems. These systems assumed that reasoning toward a technical description of an explanation was sufficient, with little regard for the user's cognitive capabilities. The emphasis of XAI research has since turned to a more pragmatic approach to explanation, aimed at better understanding. One area where cognitive science research may substantially influence XAI advancement is the evaluation of user knowledge and feedback, both of which are essential to XAI system evaluation. To this end, we propose a framework for generating and evaluating explanations at different cognitive levels of understanding. We adopt Bloom's taxonomy, a widely accepted model for assessing a user's cognitive capability, and use counterfactual explanations as the explanation medium, combined with user feedback, to validate the level of understanding achieved at each cognitive level and to improve the explanation generation methods accordingly.

Bibliographic Details
Published in: arXiv.org 2022-10
Main authors: Suffian, Muhammad; Khan, Muhammad Yaseen; Bogliolo, Alessandro
Format: Article
Language: English
Subjects: Artificial intelligence; Cognition; Computer Science - Artificial Intelligence; Computer Science - Human-Computer Interaction; Design of experiments; Evaluation; Expert systems; Explainable artificial intelligence; Feedback; Taxonomy
Online access: Full text
container_title arXiv.org
creator Suffian, Muhammad
Khan, Muhammad Yaseen
Bogliolo, Alessandro
description Explainable Artificial Intelligence (XAI) has recently gained a surge of interest, as many Artificial Intelligence (AI) practitioners and developers are compelled to rationalize how such AI-based systems work. Decades ago, most XAI systems were developed as knowledge-based or expert systems. These systems assumed that reasoning toward a technical description of an explanation was sufficient, with little regard for the user's cognitive capabilities. The emphasis of XAI research has since turned to a more pragmatic approach to explanation, aimed at better understanding. One area where cognitive science research may substantially influence XAI advancement is the evaluation of user knowledge and feedback, both of which are essential to XAI system evaluation. To this end, we propose a framework for generating and evaluating explanations at different cognitive levels of understanding. We adopt Bloom's taxonomy, a widely accepted model for assessing a user's cognitive capability, and use counterfactual explanations as the explanation medium, combined with user feedback, to validate the level of understanding achieved at each cognitive level and to improve the explanation generation methods accordingly.
doi_str_mv 10.48550/arxiv.2211.00103
format Article
fulltext fulltext
identifier EISSN: 2331-8422
ispartof arXiv.org, 2022-10
issn 2331-8422
language eng
recordid cdi_arxiv_primary_2211_00103
source arXiv.org; Free E-Journals
subjects Artificial intelligence
Cognition
Computer Science - Artificial Intelligence
Computer Science - Human-Computer Interaction
Design of experiments
Evaluation
Expert systems
Explainable artificial intelligence
Feedback
Taxonomy
title Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI)
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-08T20%3A58%3A19IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_arxiv&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Towards%20Human%20Cognition%20Level-based%20Experiment%20Design%20for%20Counterfactual%20Explanations%20(XAI)&rft.jtitle=arXiv.org&rft.au=Suffian,%20Muhammad&rft.date=2022-10-31&rft.eissn=2331-8422&rft_id=info:doi/10.48550/arxiv.2211.00103&rft_dat=%3Cproquest_arxiv%3E2731287425%3C/proquest_arxiv%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2731287425&rft_id=info:pmid/&rfr_iscdi=true