Kent Academic Repository

Exploring feedforward neural network explainability using the layerwise relevance propagation framework

Harris, Lee (2025) Exploring feedforward neural network explainability using the layerwise relevance propagation framework. Doctor of Philosophy (PhD) thesis, University of Kent. (doi:10.22024/UniKent/01.02.110722) (KAR id:110722)

PDF
Language: English

Restricted to Repository staff only until June 2028.

Official URL:
https://doi.org/10.22024/UniKent/01.02.110722

Abstract

Neural Networks (NNs) can learn very accurate solutions to complex problems, but it is rarely clear how. The Layerwise Relevance Propagation (LRP) framework explains how a given NN produces a prediction for given data by assigning a relevance score to each data feature in each data example. This is achieved by propagating each NN layer's output backwards onto the data features in its input. Other researchers have shown which hyperparameters and architectural choices lead to these explanations being analytically correct; however, it is not always possible to apply these in practice.
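
As a minimal sketch of the propagation step described above, the following NumPy code applies the commonly used LRP epsilon-rule to a toy two-layer ReLU network. The network, its random weights, and the epsilon value are illustrative assumptions rather than details taken from the thesis.

```python
# A minimal sketch of the LRP epsilon-rule for a fully connected ReLU
# network, written in plain NumPy. The two-layer network, its random
# weights, and the epsilon value are illustrative assumptions, not
# details from the thesis.
import numpy as np

def lrp_epsilon(weights, activations, relevance, eps=1e-6):
    """Propagate relevance from a layer's output back onto its input.

    weights:     (in_dim, out_dim) weight matrix of the layer
    activations: (in_dim,) the layer's input activations
    relevance:   (out_dim,) relevance assigned to the layer's output
    """
    z = activations @ weights                 # pre-activations of the layer
    z = z + eps * np.sign(z)                  # stabiliser avoids division by zero
    s = relevance / z                         # share of relevance per output unit
    return activations * (weights @ s)        # redistribute onto input features

# Toy two-layer network: 4 inputs -> 3 hidden units -> 2 outputs.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(3, 2))

x = rng.normal(size=4)
h = np.maximum(0.0, x @ W1)                   # hidden ReLU activations
y = h @ W2                                    # output scores

# Seed relevance with the winning output score, then propagate backwards.
R_out = np.zeros(2)
R_out[np.argmax(y)] = y.max()
R_hidden = lrp_epsilon(W2, h, R_out)
R_input = lrp_epsilon(W1, x, R_hidden)        # one relevance score per feature
print(R_input, R_input.sum())                 # sum roughly matches y.max()
```

With no bias terms and a small epsilon, the total relevance is approximately conserved from layer to layer, which is what lets the per-feature scores be read as shares of the prediction.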

The first chapter discusses the problems and solutions that were explored in this research. The second chapter presents background literature on AI, NNs, model transparency, explainability, and LRP. The third chapter compares explanations extracted by LRP to those extracted from white-box models; the two were most comparable when the NN architecture was large and when the data it was fitted on contained many examples, which establishes a link between explainability and the predictive accuracy of a NN. The fourth chapter finds that explanations generated by LRP can be made correct through hyperparameter optimisation, and that the newly proposed Local LRP (LLRP) framework exceeded the explainability of hyperparameter-optimised LRP on greyscale and colour images by learning the hyperparameters at each NN layer. Chapter five discovers and analyses why the actual and expected negative relevance representations differ, and maximises the sensitivity of positive relevance on its own rather than trying to maximise positive and negative relevance jointly. A final reflection in chapter six shows that this thesis has contributed to NN explainability by improving the relevance produced by LRP. Future research opportunities are highlighted throughout this work.
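
As a rough, hypothetical illustration of the idea of selecting LRP hyperparameters per layer, the sketch below grid-searches an epsilon value for each layer of a toy network and scores each combination with a simple faithfulness proxy: the drop in the output score after the most-relevant input feature is removed. The toy network, the candidate grid, and the scoring proxy are all assumptions for illustration, not the thesis's actual LLRP procedure.

```python
# A hypothetical sketch of selecting a per-layer epsilon for LRP, loosely
# in the spirit of the hyperparameter-optimisation chapter. The toy
# network, candidate grid, and faithfulness proxy are illustrative
# assumptions, not the thesis's actual LLRP method.
from itertools import product
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(5, 4))
W2 = rng.normal(size=(4, 1))

def forward(x):
    return np.maximum(0.0, x @ W1) @ W2       # two-layer ReLU network

def lrp(weights, acts, relevance, eps):
    z = acts @ weights
    z = z + eps * np.sign(z)                  # per-layer stabiliser
    return acts * (weights @ (relevance / z))

x = rng.normal(size=5)
h = np.maximum(0.0, x @ W1)
y = forward(x)

best_eps, best_drop = None, -np.inf
for e1, e2 in product([1e-6, 1e-2, 1.0], repeat=2):
    R_in = lrp(W1, x, lrp(W2, h, y, e2), e1)  # backward pass with (e1, e2)
    ablated = x.copy()
    ablated[np.argmax(R_in)] = 0.0            # remove the top-relevance feature
    drop = (y - forward(ablated)).item()      # larger drop = more faithful
    if drop > best_drop:
        best_eps, best_drop = (e1, e2), drop
print("best (eps1, eps2):", best_eps, "output drop:", best_drop)
```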

Item Type: Thesis (Doctor of Philosophy (PhD))
Thesis advisor: Grzes, Marek
DOI/Identification number: 10.22024/UniKent/01.02.110722
Uncontrolled keywords: deep learning; explainability; AI ethics; layerwise relevance propagation
Subjects: Q Science > QA Mathematics (inc Computing science) > QA 75 Electronic computers. Computer science
Institutional Unit: Schools > School of Computing
SWORD Depositor: System Moodle
Depositing User: System Moodle
Date Deposited: 21 Jul 2025 10:10 UTC
Last Modified: 22 Jul 2025 12:25 UTC
Resource URI: https://kar.kent.ac.uk/id/eprint/110722 (The current URI for this page, for reference purposes)

University of Kent Author Information

Harris, Lee.

