Interactive Explanations by Conflict Resolution via Argumentative Exchanges

A Rago, H Li, F Toni

Research output: Contribution to journal › Article › peer-review

Abstract

As the field of explainable AI (XAI) is maturing, calls for interactive explanations for (the outputs of) AI models are growing, but the state-of-the-art predominantly focuses on static explanations. In this paper, we focus instead on interactive explanations framed as conflict resolution between agents (i.e. AI models and/or humans) by leveraging computational argumentation. Specifically, we define Argumentative eXchanges (AXs) for dynamically sharing, in multi-agent systems, information harboured in individual agents' quantitative bipolar argumentation frameworks towards resolving conflicts amongst the agents. We then deploy AXs in the XAI setting in which a machine and a human interact about the machine's predictions. We identify and assess several theoretical properties characterising AXs that are suitable for XAI. Finally, we instantiate AXs for XAI by defining various agent behaviours, e.g. capturing counterfactual patterns of reasoning in machines and highlighting the effects of cognitive biases in humans. We show experimentally (in a simulated environment) the comparative advantages of these behaviours in terms of conflict resolution, and show that the strongest argument may not always be the most effective.
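To give a concrete sense of the quantitative bipolar argumentation frameworks (QBAFs) the abstract refers to, the sketch below implements one widely used gradual semantics for QBAFs, DF-QuAD, over an acyclic framework. This is an illustrative assumption, not the paper's own formalisation: the function names, the dictionary-based encoding, and the choice of DF-QuAD as the semantics are hypothetical.

```python
def aggregate(values):
    """Probabilistic-sum aggregation of attacker/supporter strengths:
    1 - prod(1 - v_i), computed incrementally."""
    result = 0.0
    for v in values:
        result = result + v - result * v
    return result

def strength(arg, base, attackers, supporters, memo=None):
    """Final strength of `arg` under DF-QuAD, assuming an acyclic QBAF.

    base       : dict mapping argument -> base score in [0, 1]
    attackers  : dict mapping argument -> list of its attackers
    supporters : dict mapping argument -> list of its supporters
    """
    if memo is None:
        memo = {}
    if arg in memo:
        return memo[arg]
    va = aggregate([strength(a, base, attackers, supporters, memo)
                    for a in attackers.get(arg, [])])
    vs = aggregate([strength(s, base, attackers, supporters, memo)
                    for s in supporters.get(arg, [])])
    v0 = base[arg]
    if va >= vs:
        sigma = v0 - v0 * (va - vs)        # attacks dominate: weaken
    else:
        sigma = v0 + (1 - v0) * (vs - va)  # supports dominate: strengthen
    memo[arg] = sigma
    return sigma

# Toy framework: argument "a" is attacked by "b" and supported by "c".
base = {"a": 0.5, "b": 0.6, "c": 0.4}
attackers = {"a": ["b"]}
supporters = {"a": ["c"]}
print(round(strength("a", base, attackers, supporters), 3))  # 0.4
```

In an AX as described above, each agent would hold its own such framework privately, and the exchange of arguments would modify the frameworks (and hence the computed strengths) until the agents' evaluations of the disputed output no longer conflict.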
Original language: English
Pages (from-to): 582-592
Journal: Proceedings of the International Conference on Knowledge Representation and Reasoning
Publication status: Published - 1 Jan 2023
