From human explanations to explainable AI: Insights from constrained optimization

Bibliographic details
Published in: Cognitive Systems Research, 2024-12, Vol. 88, p. 101297, Article 101297
Main authors: Ibs, Inga; Ott, Claire; Jäkel, Frank; Rothkopf, Constantin A.
Format: Article
Language: English
Online access: Full text

Description
Summary: Many complex decision-making scenarios encountered in the real world, including energy systems and infrastructure planning, can be formulated as constrained optimization problems. Solutions for these problems are often obtained using white-box solvers based on linear program representations. Even though these algorithms are well understood and the optimality of the solution is guaranteed, explanations for the solutions are still necessary to build trust and ensure the implementation of policies. Solution algorithms represent the problem in a high-dimensional abstract space, which does not translate well to intuitive explanations for lay people. Here, we report three studies in which we pose constrained optimization problems in the form of a computer game to participants. In the game, called Furniture Factory, participants manage a company that produces furniture. In two qualitative studies, we first elicit representations and heuristics with concurrent explanations and validate their use in post-hoc explanations. We analyze the complexity of the explanations given by participants to gain a deeper understanding of how complex cognitively adequate explanations should be. Based on insights from the analysis of the two qualitative studies, we formalize strategies that in combination can act as descriptors for participants’ behavior and optimal solutions. We match the strategies to decisions in a large behavioral dataset (>150 participants) gathered in a third study, and compare the complexity of strategy combinations to the complexity featured in participants’ explanations. Based on the analyses from these three studies, we discuss how these insights can inform the automatic generation of cognitively adequate explanations in future AI systems.

Highlights:
• We introduce a problem-solving paradigm to study explanations for optimization.
• We utilize an exploration and a sequential decision-making version of the task.
• We derive formal strategies from explanations, verbal reports, and behavioral data.
• The results provide insights for the generation of cognitively adequate explanations.
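For readers unfamiliar with linear programming, the kind of product-mix problem alluded to in the abstract can be sketched as a small LP. The Python snippet below (using scipy.optimize.linprog) is only an illustrative sketch: the products, profits, resource requirements, and capacities are invented and are not the parameters used in the Furniture Factory game.

```python
# Hypothetical product-mix LP in the spirit of the Furniture Factory scenario.
# All coefficients are invented for illustration and are NOT taken from the study.
from scipy.optimize import linprog

# Decision variables: x[0] = tables, x[1] = chairs produced in one round.
# linprog minimizes, so profits are negated to maximize 40*tables + 25*chairs.
c = [-40, -25]

# One row per limited resource (A_ub @ x <= b_ub): wood units and labor hours.
A_ub = [
    [4, 2],  # wood units needed per table, per chair
    [3, 1],  # labor hours needed per table, per chair
]
b_ub = [60, 40]  # available wood units, available labor hours

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
print("production plan (tables, chairs):", res.x)
print("maximum profit:", -res.fun)
```

A white-box solver returns the optimal production plan and objective value, but, as the abstract notes, this high-dimensional solver output alone does not translate well into intuitive explanations for lay people.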
ISSN: 1389-0417
DOI: 10.1016/j.cogsys.2024.101297