LRP-Based Argumentative Explanations for Neural Networks

P Sukpanichnant, A Rago, P Lertvittayakumjorn, F Toni

Research output: Contribution to journal › Conference proceedings published in a journal › peer-review

Abstract

In recent years, there have been many attempts to combine XAI with the field of symbolic AI in order to generate explanations for neural networks that are more interpretable and align better with human reasoning, with one prominent candidate for this synergy being the sub-field of computational argumentation. One such method is to represent a neural network as a quantitative bipolar argumentation framework (QBAF) equipped with a particular semantics. The resulting QBAF can then be viewed as an explanation for the associated neural network. In this paper, we explore a novel LRP-based semantics under a new QBAF variant, namely neural QBAFs (nQBAFs). Since the nQBAF of a neural network is typically large, it must be simplified before being used as an explanation. Our empirical evaluation indicates that the manner of this simplification is all-important for the quality of the resulting explanation.
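
To give a rough sense of the ingredients involved, the sketch below runs the standard LRP-ε backward rule over a toy feedforward ReLU network and then reads the signed relevances as a bipolar graph, with positive relevance treated as support and negative relevance as attack. This is a minimal sketch under those assumptions: the lrp_epsilon helper, the toy network, and the sign-based support/attack reading are illustrative choices, not the paper's exact nQBAF construction or semantics.

```python
import numpy as np

def lrp_epsilon(weights, activations, relevance_out, eps=1e-6):
    # One LRP-epsilon backward step through a linear layer:
    # redistribute each output's relevance to its inputs in
    # proportion to their contributions z[j, k] = a_j * w_jk.
    # (Hypothetical helper; not the paper's exact semantics.)
    z = activations[:, None] * weights             # (n_in, n_out)
    s = z.sum(axis=0)
    denom = s + eps * np.where(s >= 0, 1.0, -1.0)  # sign-matched stabiliser
    return (z / denom) @ relevance_out             # (n_in,)

# Toy 2-layer ReLU network: x -> h -> y
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(3, 4)), rng.normal(size=(4, 2))
x = rng.normal(size=3)
h = np.maximum(0.0, x @ W1)
y = h @ W2

# Seed relevance at the predicted output and propagate backwards.
R_y = np.zeros(2)
R_y[y.argmax()] = y.max()
R_h = lrp_epsilon(W2, h, R_y)
R_x = lrp_epsilon(W1, x, R_h)

# Bipolar reading of the relevances: positive ~ support, negative ~ attack.
edges = [(f"x{j}", "prediction",
          "support" if r >= 0 else "attack", round(float(r), 3))
         for j, r in enumerate(R_x)]
print(edges)
```

In the paper's setting, the analogous graph ranges over all neurons rather than just the inputs, which is what makes the resulting nQBAF large and motivates the simplification step that the empirical evaluation examines.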
Original language: English
Pages (from-to): 71-84
Number of pages: 14
Journal: CEUR Workshop Proceedings
Volume: 3014
Publication status: Published - 1 Jan 2021
