Kent Academic Repository

Optimized Deep Learning Model for Predicting Tumor Location in Medical Images for Robotic Trajectory Mapping

Seetohul, Jenna, Shafiee, Mahmood, Sirlantzis, Konstantinos, Stamenkovic, S., Sakhaei, Amir Hosein (2023) Optimized Deep Learning Model for Predicting Tumor Location in Medical Images for Robotic Trajectory Mapping. In: Conference on New Technologies for Computer and Robot Assisted Surgery, 12th September 2023, Paris, France. (In press) (Access to this publication is currently restricted. You may be able to access a copy if URLs are provided) (KAR id:103460)

PDF Author's Accepted Manuscript
Language: English

Restricted to Repository staff only


Abstract

Recent studies show that pre-operative target localization achieves efficiencies of up to 93.3% [1], with the ability to detect, retrieve and generate specific anatomical landmarks from medical image datasets. Despite a plethora of advances in the field of medical image registration, researchers are still confronted with issues such as label correspondence across sequences, high computational burden and background noise during signal acquisition. A simplified method proposed in [1] utilizes an appropriate transformation vector to achieve a quasi-optimized moving-image registration procedure on real chest scans. The extended framework, adapted from the DeepReg architecture, enables the mapping of warped moving labels onto their fixed counterparts, thereby delineating the collision-free zones around the phrenic nerve and innominate vein. A deep neural network processes the raw segmented masks as binary pixelated images, compresses them and extracts the areas of interest from the filtered image despite erosion, blurring and uneven contrast.
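To make the label-driven registration step concrete, the following is a minimal sketch (not the authors' implementation) of warping a binary moving label with a simple translation vector and scoring its overlap against the fixed label with a Dice coefficient, in the spirit of label-supervised frameworks such as DeepReg. The masks and the transformation vector below are illustrative placeholders, not data from the paper.

```python
# Hedged sketch: rigid warping of a moving label and Dice overlap scoring.
import numpy as np
from scipy import ndimage


def warp_label(moving_label: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Apply a rigid translation to a binary label volume (order=0 keeps it binary)."""
    return ndimage.shift(moving_label.astype(float), shift=translation, order=0)


def dice_overlap(warped: np.ndarray, fixed: np.ndarray, eps: float = 1e-8) -> float:
    """Dice coefficient between the warped moving label and the fixed label."""
    intersection = np.sum(warped * fixed)
    return float(2.0 * intersection / (np.sum(warped) + np.sum(fixed) + eps))


# Toy 3-D masks standing in for segmented anatomical structures (e.g. the
# collision-free zone around the phrenic nerve); real inputs would come from
# chest scans and a learned transformation rather than a hand-set vector.
fixed = np.zeros((32, 32, 32))
fixed[10:20, 10:20, 10:20] = 1.0
moving = np.roll(fixed, shift=3, axis=0)  # same structure, displaced by 3 voxels

warped = warp_label(moving, translation=np.array([-3.0, 0.0, 0.0]))
print(f"Dice after warping: {dice_overlap(warped, fixed):.3f}")
```

In a full registration pipeline the translation vector would be replaced by a dense displacement field predicted by the network, with the Dice term serving as the label-similarity loss during training.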

Despite the high accuracy rates recorded, there is a diagnostic need to improve decision-based networks for artificial intelligence (AI)-driven classification and localization of moving tumors. Several authors have focused on performing such transfer learning (TL) with CNN models such as AlexNet, DenseNet, Residual Network (ResNet) and Residual Network 50 version 2 (ResNet50V2). Following this approach, Islam et al. [2] developed a new and improved DL method to detect and diagnose COVID-19 from X-ray images by combining CNNs with a long short-term memory (LSTM) network. The proposed method utilizes a novel adversarial loss for high-accuracy marker localization between warped moving and fixed images with respect to ground truth voxels. The advantage of this method is that ground truth image alignment is not required, owing to the inherent use of single images instead of image pairs. Among unsupervised learning methods, the work by De Backer et al. [3] resonates the most, bearing the closest resemblance to our experiment. However, given its limitations in reconstruction efficiency and neural network accuracy, we deem our method the most feasible for image-guided surgery, mapping trajectories via tumor localization.
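For illustration, a hedged sketch of the kind of transfer-learning setup cited above: an ImageNet-pretrained ResNet50V2 backbone reused as a frozen feature extractor with a small classification head. The input size, dropout rate and binary tumor/no-tumor head are assumptions made for this example, not the configuration used in the paper or in [2].

```python
# Hedged sketch: transfer learning with a frozen ResNet50V2 backbone in Keras.
import tensorflow as tf


def build_transfer_model(input_shape=(224, 224, 3), num_classes=2) -> tf.keras.Model:
    # ImageNet-pretrained backbone without its original classifier head.
    backbone = tf.keras.applications.ResNet50V2(
        include_top=False, weights="imagenet", input_shape=input_shape
    )
    backbone.trainable = False  # freeze convolutional features for transfer learning

    inputs = tf.keras.Input(shape=input_shape)
    x = tf.keras.applications.resnet_v2.preprocess_input(inputs)
    x = backbone(x, training=False)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    x = tf.keras.layers.Dropout(0.3)(x)
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)

    model = tf.keras.Model(inputs, outputs)
    model.compile(
        optimizer=tf.keras.optimizers.Adam(1e-4),
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model


model = build_transfer_model()
model.summary()
```

Freezing the backbone and training only the head is the usual first stage of TL on small medical datasets; the convolutional layers can then be selectively unfrozen for fine-tuning once the head has converged.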

Item Type: Conference or workshop item (Paper)
Subjects: Q Science > Q Science (General)
Divisions: Divisions > Division of Computing, Engineering and Mathematical Sciences > School of Engineering and Digital Arts
Funders: Engineering and Physical Sciences Research Council (https://ror.org/0439y7842)
Depositing User: Jenna Seetohul
Date Deposited: 26 Oct 2023 11:40 UTC
Last Modified: 09 Jan 2024 15:34 UTC
Resource URI: https://kar.kent.ac.uk/id/eprint/103460 (The current URI for this page, for reference purposes)
