Abstract
This paper investigates the usability of XAI (Explainable Artificial Intelligence) in AI-based image classification, particularly for non-experts such as medical professionals. XAI provides the user of an AI system with an explanation for a particular decision, but the usability of such explanations remains an open question. The investigation highlights the need to integrate explainability into the design of the classification approach. This paper presents an approach that classifies the parts of an object separately and then uses a white-box model (a decision tree) for the final classification. Enriched with additional information, this makes the classification understandable.
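As a minimal sketch of the idea described above (not the authors' implementation): per-part classifier outputs are fed into a white-box decision tree whose learned rules can be shown to non-expert users. All names, the number of parts, and the synthetic data are illustrative assumptions.

```python
# Sketch only: part-wise scores -> white-box decision tree for the final decision.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Assume each image is split into parts and a per-part classifier
# (e.g. a small CNN, not shown here) yields one score per part.
n_samples, n_parts = 200, 4
part_scores = rng.random((n_samples, n_parts))          # stand-in for per-part outputs
labels = (part_scores.mean(axis=1) > 0.5).astype(int)   # synthetic final labels

# White-box final classifier over the part-level features.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(part_scores, labels)

# The learned rules serve as a human-readable explanation of the final classification.
print(export_text(tree, feature_names=[f"part_{i}_score" for i in range(n_parts)]))
```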
| Original language | English |
|---|---|
| Pages (from-to) | 362-367 |
| Number of pages | 6 |
| Journal | IFAC-PapersOnLine |
| Volume | 58 |
| Issue number | 24 |
| DOIs | |
| Publication status | Published - 1 Sept 2024 |
| Event | 12th IFAC Symposium on Biological and Medical Systems, BMS 2024, Villingen-Schwenningen, Germany; duration: 11 Sept 2024 → 13 Sept 2024 |
ASJC Scopus subject areas
- Control and Systems Engineering
Keywords
- AI-based Image processing
- Investigation
- Non-AI Experts
- Usability
- XAI