Title: Irrelevant Explanations: a logical formalization and a case study
Authors: Colucci, Simona
Di Noia, Tommaso
Donini, Francesco Maria 
Pomo, Claudio
Di Sciascio, Eugenio
Issue Date: 2023
Abstract: Explaining the behavior of AI-based tools, whose results may be unexpected even to experts, has become a major request from society and a major concern of AI practitioners and theoreticians. In this position paper we make two points: (1) irrelevance is more amenable to a logical formalization than relevance; (2) since effective explanations must take into account both the context and the receiver of the explanation (called the explainee), so must the definition of irrelevance. We propose a general logical framework characterizing context-aware and receiver-aware irrelevance, and provide a case study on an existing Semantic Web-based tool that prunes irrelevant parts of an explanation. © 2023 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
Rights: Attribution-NonCommercial-NoDerivatives 4.0 International
Appears in Collections:D1. Contributo in Atti di convegno

Files in This Item:
File: paper5.pdf
Description: Final version
Size: 2.4 MB
Format: Adobe PDF

This item is licensed under a Creative Commons License.