Abstract
Artificial Neural Networks (ANNs) have achieved significant success in fields like healthcare, but their "black box" nature challenges transparency and user trust.
Existing Explainable AI (XAI) methods aim to interpret ANN decisions, yet many
are not understandable to non-AI-experts, emphasizing the need for approaches
that prioritize both accuracy and usability, especially in high-stakes environments.
This thesis investigates the reliability and usability of selected existing XAI methods, evaluating how effectively they convey meaningful explanations to users with varying AI expertise. Assessments of methods including LIME, GradCAM, and FastCAM identify key limitations, such as inconsistent visual saliency maps and a lack of user-centred design.
of user-centred design. These findings underpin the need of more understandable
XAI methods tailored to specific needs.
Among its various contributions, the research outlines a domain-adapted approach to XAI within healthcare by automating the integration of domain knowledge.
This customization reduces manual effort, ensuring that XAI methods provide
technically accurate and contextually meaningful explanations in applications like
surgical tool classification.
To enhance XAI evaluation, the thesis introduces novel metrics such as Explanation
Significance Assessment (ESA), Weighted Explanation Significance Assessment
(WESA), and the Unified Intersection over Union (UIoU). These metrics address
gaps in existing techniques by emphasizing precision and clarity, improving
transparency in AI systems for both AI experts and non-AI-experts.
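As context for these saliency-oriented metrics, the sketch below shows how a conventional Intersection over Union score can be computed between a thresholded saliency map and an annotated ground-truth region; the threshold, array shapes, and example data are illustrative assumptions, and the code does not reproduce the thesis's ESA, WESA, or UIoU definitions.

```python
# Minimal sketch (assumption): conventional IoU between a thresholded saliency
# map and a binary ground-truth mask; this is NOT the thesis's UIoU metric.
import numpy as np

def saliency_iou(saliency: np.ndarray, ground_truth: np.ndarray,
                 threshold: float = 0.5) -> float:
    """Binarize a [0, 1] saliency map and compare it to an annotated mask."""
    pred = saliency >= threshold                 # predicted "important" pixels
    gt = ground_truth.astype(bool)               # expert-annotated relevant region
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(intersection) / float(union) if union > 0 else 0.0

# Hypothetical example: a GradCAM-style heat map scored against an annotation
# of a surgical tool's image region.
heatmap = np.random.rand(224, 224)               # stand-in for a real saliency map
mask = np.zeros((224, 224), dtype=bool)
mask[80:150, 60:140] = True                      # stand-in for the annotated region
print(f"IoU: {saliency_iou(heatmap, mask):.3f}")
```

Higher IoU values indicate closer agreement between where the explanation places importance and where the domain expert expects it.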
Finally, the thesis introduces the Explainable Object Classification (EOC) framework, which integrates object parts, attributes, and domain knowledge to offer comprehensive, multimodal explanations accessible to users with varying expertise. By providing text, images, and decision paths, EOC enables users to understand AI decisions more effectively, aiding informed decision-making in critical sectors like healthcare.
This thesis contributes to advancing XAI by developing methods that bridge the
gap between AI developers and users, ensuring AI outputs are interpretable and
practically useful in real-world contexts.
| Date of Award | 2024 |
|---|---|
| Original language | English |
| Supervisor | Christoph Reich (Director of Studies (First Supervisor)), Nathan Clarke (Other Supervisor) & Martin Knahl (Other Supervisor) |
ASJC Scopus Subject Areas
- Artificial Intelligence