
Reinforcement Learning for Shared Autonomy in Powered Wheelchair Navigation

Chatzidimitriadis, Sotirios (2023) Reinforcement Learning for Shared Autonomy in Powered Wheelchair Navigation. Doctor of Philosophy (PhD) thesis, University of Kent. (doi:10.22024/UniKent/01.02.102477) (Access to this publication is currently restricted. You may be able to access a copy if URLs are provided) (KAR id:102477)

PDF
Language: English

Restricted to Repository staff only until August 2024.

Official URL:
https://doi.org/10.22024/UniKent/01.02.102477

Abstract

Assistive robotics is witnessing a surge in research focusing on designing algorithms and frameworks that offer personalized support to users, considering their intentions and adapting system responses accordingly. This thesis delves into the integration of artificial intelligence in assistive technologies for powered wheelchairs, with a primary emphasis on the challenging problem of shared control through reinforcement learning.

Shared control, also known as shared autonomy, has been extensively studied, especially in the context of powered wheelchairs. Many wheelchair users rely on assistance to maintain their everyday autonomy, particularly those who cannot operate a conventional joystick interface and who often experience frustration, fatigue, and compromised safety. Existing shared control methods typically either blend human and autonomous controller decisions or predict user goals and act autonomously. Unfortunately, such approaches often rely on assumptions such as known goal sets, world dynamics models, and user behaviour models, which limit adaptability. Motivated by the shortcomings of prior approaches and inspired by recent machine learning advances, this thesis introduces a novel shared control method using deep reinforcement learning within a continuous action space, which lifts the reliance on the aforementioned assumptions.
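The blending approaches mentioned above often amount to a simple linear arbitration between the human's command and an autonomous controller's command. A minimal sketch of that baseline (all names and the fixed authority parameter are illustrative, not the thesis's formulation):

```python
def blend_commands(user_cmd, auto_cmd, alpha=0.5):
    """Linearly arbitrate between user and autonomous velocity commands.

    user_cmd, auto_cmd: (linear_v, angular_w) tuples.
    alpha: authority granted to the autonomous controller
           (0.0 = pure teleoperation, 1.0 = full autonomy).
    """
    v = (1.0 - alpha) * user_cmd[0] + alpha * auto_cmd[0]
    w = (1.0 - alpha) * user_cmd[1] + alpha * auto_cmd[1]
    return (v, w)
```

The limitation the abstract points at is visible here: a fixed alpha cannot adapt to context, and choosing alpha well typically requires the very goal and user models the thesis seeks to avoid.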

Initially, a reinforcement learning agent is developed to autonomously navigate complex indoor environments without the need for a map. The agent is trained using a virtual robotic wheelchair and rigorously validated against popular path planning methods. Subsequently, artificial noise is injected into the learned model to simulate disabled user input, enabling the training of an end-to-end shared control system. A modification in the typical reinforcement learning objective ensures compliance with user intentions while simultaneously maximizing future rewards associated with the assistive nature of the system. The shared control system receives noisy user commands and sensor data to generate corrective control commands for the wheelchair. Rigorous simulations and real-world trials with human users demonstrate significant reductions in collisions and increased obstacle clearance, albeit with a trade-off in user satisfaction.
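The two key ingredients of this pipeline — perturbing the learned expert's commands to simulate impaired user input, and augmenting the reward so the corrective policy stays compliant with the user's intent — can be sketched as follows. This is a simplified illustration under assumed function names and weights, not the thesis's actual objective:

```python
import random

def noisy_user_command(expert_cmd, sigma=0.2):
    """Simulate a disabled user's input by perturbing the learned
    expert policy's command with Gaussian noise (sigma is illustrative)."""
    return tuple(c + random.gauss(0.0, sigma) for c in expert_cmd)

def shared_control_reward(task_reward, agent_cmd, user_cmd, lam=0.5):
    """Augment the usual task reward with a compliance penalty so the
    corrective policy is discouraged from overriding the user's command.
    lam trades off assistance (safety) against compliance."""
    deviation = sum((a - u) ** 2 for a, u in zip(agent_cmd, user_cmd)) ** 0.5
    return task_reward - lam * deviation
```

The trade-off reported in the trials (fewer collisions but lower user satisfaction) corresponds directly to this weighting: a larger penalty weight keeps the system closer to the user's commands, while a smaller one gives the assistive policy more freedom to intervene.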

Additionally, this thesis presents a non-intrusive, vision-based head-control interface for powered wheelchairs, employing face detection and head pose estimation. Through human user trials, the effectiveness and performance of this interface are benchmarked, confirming its viability as an alternative to the standard joystick interface. Notably, when combined with the shared control system in further real-world trials, the proposed assistive system proves adept at compensating for the less accurate input of this more challenging interface, resulting in a remarkable 92% reduction in collisions and improved overall adequacy.
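Once head pose has been estimated from the camera, an interface of this kind must map pose angles to a wheelchair velocity command. A plausible sketch, assuming pitch (nodding) drives forward speed and yaw (turning) drives angular rate, with a dead zone around neutral to avoid drift — the sign conventions, gains, and thresholds here are hypothetical, not those tuned in the thesis:

```python
def head_pose_to_command(yaw_deg, pitch_deg, dead_zone=5.0,
                         max_v=0.8, max_w=1.0):
    """Map an estimated head pose (degrees) to (linear_v, angular_w)."""
    def scale(angle, limit, gain):
        # Ignore small angles, then grow the command linearly up to a cap.
        if abs(angle) < dead_zone:
            return 0.0
        magnitude = min(limit, gain * (abs(angle) - dead_zone))
        return magnitude if angle > 0 else -magnitude

    v = scale(pitch_deg, max_v, 0.02)  # tilt head forward -> drive forward
    w = scale(yaw_deg, max_w, 0.03)    # turn head -> steer
    return (v, w)
```

A dead zone and saturation like this are what make a noisy, coarse input channel usable at all, which is also why such an interface benefits disproportionately from the corrective shared control layer described above.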

In summary, this thesis introduces a mapless autonomous navigation method for powered wheelchairs, a novel shared control framework employing deep reinforcement learning, and a non-intrusive vision-based head-control interface. The proposed assistive system is empirically validated, showcasing its substantial impact on enhancing user autonomy and safety in powered wheelchairs.

Item Type: Thesis (Doctor of Philosophy (PhD))
DOI/Identification number: 10.22024/UniKent/01.02.102477
Uncontrolled keywords: Assistive Robotics; Assistive Technologies; Shared Autonomy; Reinforcement Learning; Powered Wheelchair Navigation; Human-Robot Interaction; Vision-Based Control
Subjects: T Technology
Divisions: Divisions > Division of Computing, Engineering and Mathematical Sciences > School of Engineering and Digital Arts
Funders: University of Kent (https://ror.org/00xkeyj56)
SWORD Depositor: System Moodle
Depositing User: System Moodle
Date Deposited: 16 Aug 2023 16:10 UTC
Last Modified: 17 Aug 2023 09:59 UTC
Resource URI: https://kar.kent.ac.uk/id/eprint/102477 (The current URI for this page, for reference purposes)
