Kent Academic Repository

Secure and Privacy-preserving Federated Learning with Explainable Artificial Intelligence for Smart Healthcare System

Raza, Ali (2023) Secure and Privacy-preserving Federated Learning with Explainable Artificial Intelligence for Smart Healthcare System. Doctor of Philosophy (PhD) thesis, University of Kent. (doi:10.22024/UniKent/01.02.102717) (Access to this publication is currently restricted. You may be able to access a copy if URLs are provided) (KAR id:102717)

PDF
Language: English

Restricted to Repository staff only until August 2024.

Official URL:
https://doi.org/10.22024/UniKent/01.02.102717

Abstract

The growing population around the globe has a significant impact on various sectors, including the labor force, healthcare, and the global economy. The healthcare sector is among the most affected, owing to the increasing demand for resources such as doctors, nurses, equipment, and healthcare facilities. To tackle these issues and offer improved care to patients, intelligent systems have been incorporated to enhance decision-making, management, prognosis, and diagnosis. Among such systems, those based on deep learning (DL), a subclass of machine learning (ML), have outperformed many traditional statistical and ML systems owing to their robustness and their ability to automatically discover and learn task-relevant features. The use of DL has therefore seen a steady increase across many applications. Nevertheless, the training of DL models usually relies on a single centralized server, which brings several challenges: (1) apart from a few large enterprises, most organizations have limited high-quality data, which is insufficient to train data-hungry DL models; (2) access to data, which is vital for these systems, often raises privacy concerns, since the collection and analysis of sensitive patient information must be done securely and ethically to protect individual privacy rights; (3) centralized training requires high communication costs and computational resources; (4) the large number of trainable parameters makes the outcome of DL models hard to explain, which is required in applications such as healthcare.

Compared to centralized ML, federated learning (FL) improves both privacy and communication costs: clients collaboratively train a joint model without directly sharing their raw data. By keeping data distributed locally, FL minimizes privacy breaches and safeguards sensitive data, enabling collaborative model training while reducing the risk of unauthorized access and data breaches. It also promotes data diversity and scalability by involving multiple sources in joint model training, and decreases communication costs by sharing only model updates instead of entire datasets. However, FL brings its own challenges. For example, heterogeneous local data among clients makes it difficult to train a high-performing and robust global model. Sharing updates (hundreds of thousands of parameters) still incurs high communication costs. Additionally, the distributed nature and access control of local data in FL make it more vulnerable to malicious attacks. Moreover, explaining the results of DL models remains challenging, and methods need to be developed to bring trust, accountability, and transparency to sensitive applications such as healthcare.
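To illustrate the update-sharing idea described above, the following is a minimal pure-Python sketch of FedAvg-style weighted averaging, the standard FL aggregation rule in which a server combines locally trained parameters weighted by each client's dataset size. This is an illustrative sketch of the general FL mechanism, not the thesis's actual frameworks; all names here are hypothetical.

```python
def fedavg(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg-style).

    client_weights: one parameter vector per client (same length each)
    client_sizes:   number of local training samples per client
    Only these parameters are sent to the server; raw data stays local.
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    agg = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for i in range(dim):
            agg[i] += (n / total) * w[i]
    return agg


# Three clients share only their locally trained parameter vectors.
clients = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [10, 10, 20]
global_w = fedavg(clients, sizes)  # -> [3.5, 4.5]
```

The client holding more data (20 samples) contributes proportionally more to the global model, which is one simple way FL handles unevenly sized local datasets.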

Therefore, the aim of this thesis is to create robust frameworks that are secure, high-performing, and privacy-friendly within federated settings. These frameworks are specifically designed for end-to-end healthcare applications (we train our frameworks on raw data without any manual feature extraction) and account for non-identically distributed data among FL clients to ensure robustness. By addressing these challenges, the objective is to enhance the overall system's resilience and effectiveness. We also propose a methodology for detecting anomalies within federated settings, particularly in applications with limited available data for the abnormal class. Furthermore, because clients in FL are usually resource-constrained, with limited computation and communication resources, we propose a lightweight framework (in terms of the number of trainable parameters) to support efficient computation and communication in a federated setting. Additionally, to explain the outcomes of DL models, which are usually hard to interpret because of their large number of parameters, we propose model-agnostic explainable AI modules. Moreover, to protect the proposed frameworks against cyber attacks, such as poisoning attacks, we propose a defense framework in federated settings, which makes the proposed healthcare frameworks more secure and trustworthy. Finally, through experimental analysis using benchmark datasets for one of the most common health conditions, i.e., cardiovascular diseases (arrhythmia detection, ECG anomaly detection), and for human activity recognition (used to supplement cardiovascular disease detection), we show the superiority of the proposed frameworks over state-of-the-art work.
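As a concrete illustration of why poisoning attacks matter for FL aggregation, the sketch below replaces plain averaging with a coordinate-wise median, one standard Byzantine-robust aggregation rule from the FL literature. This is not the thesis's proposed defense, only a minimal example of the general idea that robust statistics can limit the influence of a malicious client's update.

```python
from statistics import median


def median_aggregate(updates):
    """Coordinate-wise median of client updates.

    Unlike the mean, the median of each coordinate is insensitive to a
    minority of arbitrarily large (poisoned) values.
    """
    dim = len(updates[0])
    return [median(u[i] for u in updates) for i in range(dim)]


honest = [[1.0, 1.0], [1.1, 0.9], [0.9, 1.1]]
poisoned = [100.0, -100.0]  # a malicious client's crafted update
agg = median_aggregate(honest + [poisoned])
# The aggregate stays close to the honest updates despite the attack.
```

With plain averaging, the single poisoned update would drag the first coordinate to roughly 25.75; the median keeps it near 1, which is the essence of robust aggregation against model-poisoning attacks.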

Item Type: Thesis (Doctor of Philosophy (PhD))
Thesis advisor: Li, Shujun
Thesis advisor: Koehl, Ludovic
DOI/Identification number: 10.22024/UniKent/01.02.102717
Uncontrolled keywords: Federated Learning; Edge Computing; Healthcare; Privacy; Security; Explainable Artificial Intelligence; Explainable Anomaly Detection; Embedded Artificial Intelligence; Clinical Decision Support Systems; Safety and Reliability of Artificial Intelligence; Poisoning Attacks; Data Poisoning; Model Poisoning; Byzantine Attacks
Divisions: Divisions > Division of Computing, Engineering and Mathematical Sciences > School of Computing
Funders: University of Lille Nord de France (https://ror.org/03btvgn05)
SWORD Depositor: System Moodle
Depositing User: System Moodle
Date Deposited: 06 Sep 2023 16:10 UTC
Last Modified: 08 Sep 2023 08:20 UTC
Resource URI: https://kar.kent.ac.uk/id/eprint/102717 (The current URI for this page, for reference purposes)
