Coarse-grained and fine-grained parallel optimization for real-time en-face OCT imaging

Kapinchev, Konstantin and Bradu, Adrian and Barnes, Frederick R.M. and Podoleanu, Adrian G.H. (2016) Coarse-grained and fine-grained parallel optimization for real-time en-face OCT imaging. In: Izatt, Joseph A. and Fujimoto, James G. and Tuchin, Valery V., eds. Optical Coherence Tomography and Coherence Domain Optical Methods in Biomedicine XX. Proceedings of SPIE, 9697. SPIE Society of Photo-Optical Instrumentation Engineers, Bellingham, Washington, United States, 96972N. ISBN 978-1-62841-931-3. (doi:10.1117/12.2209560)

The full text of this publication is not currently available from this repository. You may be able to access a copy if URLs are provided.
Official URL
http://doi.org/10.1117/12.2209560

Abstract

This paper presents parallel optimizations for en-face (C-scan) display in optical coherence tomography (OCT). Compared with cross-sectional (B-scan) imaging, the production of en-face images is more computationally demanding, due to the larger volume of data handled by the digital signal processing (DSP) algorithms. A sequential implementation of the DSP limits the number of en-face images that can be generated in real time. There are OCT applications, such as real-time diagnostics and the monitoring of surgery and ablation, that require the simultaneous production of a large number of en-face images from multiple depths. In sequential computing, this requirement significantly increases the time needed to process the data and generate the images. As a result, the processing time exceeds the acquisition time and image generation is no longer real-time. In these cases, the inability to produce en-face images in real time renders the OCT system ineffective. Parallel optimization of the DSP algorithms provides a solution to this problem. Coarse-grained central processing unit (CPU) based and fine-grained graphics processing unit (GPU) based parallel implementations of the conventional Fourier domain (CFD) OCT method and the Master-Slave Interferometry (MSI) OCT method are studied. In the coarse-grained CPU implementation, each parallel thread processes the whole OCT frame and generates a single en-face image. The corresponding fine-grained GPU implementation launches one parallel thread for every data point in the OCT frame and thus achieves maximum parallelism. The performance and scalability of the CPU-based and GPU-based parallel approaches are analyzed and compared. The quality and the resolution of the images generated by the CFD method and the MSI method are also discussed and compared. © (2016) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
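
To make the two granularities described in the abstract concrete, the sketch below is a minimal, hypothetical illustration (it is not the authors' implementation and omits the full CFD/MSI pipeline). It reduces the DSP to the final step of extracting en-face pixels from an already Fourier-transformed frame: the CUDA kernel assigns one thread per output data point (fine-grained), while the CPU variant assigns one std::thread per requested en-face image (coarse-grained). All function names, data layouts and parameters are assumptions introduced for illustration.

// Minimal sketch (assumed names and layouts), contrasting the two granularities
// on a simplified task: extracting the pixel magnitude at chosen depth indices
// from a complex-valued OCT frame of size numAscans x numDepths.

#include <cuda_runtime.h>
#include <thread>
#include <vector>
#include <cmath>

// Fine-grained GPU version: one thread per output data point.
// A 2D grid covers (A-scan index) x (requested depth index).
__global__ void extractEnFaceKernel(const float2* frame,  // complex frame, numAscans * numDepths
                                     float* images,        // numRequested * numAscans output pixels
                                     const int* depths,    // requested depth indices
                                     int numAscans, int numDepths, int numRequested)
{
    int a = blockIdx.x * blockDim.x + threadIdx.x;  // A-scan index
    int d = blockIdx.y;                             // which requested depth / en-face image
    if (a >= numAscans || d >= numRequested) return;

    float2 v = frame[a * numDepths + depths[d]];
    images[d * numAscans + a] = sqrtf(v.x * v.x + v.y * v.y);  // pixel intensity
}

// Host launch for the fine-grained version (error handling omitted):
//   dim3 block(256);
//   dim3 grid((numAscans + block.x - 1) / block.x, numRequested);
//   extractEnFaceKernel<<<grid, block>>>(d_frame, d_images, d_depths,
//                                        numAscans, numDepths, numRequested);

// Coarse-grained CPU version: one thread per en-face image.
void extractEnFaceCpu(const float* re, const float* im, float* image,
                      int numAscans, int numDepths, int depth)
{
    for (int a = 0; a < numAscans; ++a) {
        float x = re[a * numDepths + depth];
        float y = im[a * numDepths + depth];
        image[a] = std::sqrt(x * x + y * y);
    }
}

void produceImagesCoarseGrained(const float* re, const float* im,
                                std::vector<std::vector<float>>& images,  // each pre-sized to numAscans
                                int numAscans, int numDepths,
                                const std::vector<int>& depths)
{
    std::vector<std::thread> workers;
    for (std::size_t d = 0; d < depths.size(); ++d)   // one CPU thread per requested image
        workers.emplace_back(extractEnFaceCpu, re, im, images[d].data(),
                             numAscans, numDepths, depths[d]);
    for (auto& t : workers) t.join();
}

In this simplified picture, the coarse-grained variant scales only up to the number of CPU cores and requested images, whereas the fine-grained kernel exposes one unit of work per data point, which is what the paper refers to as maximum parallelism on the GPU.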

Item Type: Conference or workshop item (Proceeding)
Divisions: Faculties > Sciences > School of Physical Sciences
Depositing User: Matthias Werner
Date Deposited: 23 Jan 2017 15:43 UTC
Last Modified: 23 Jan 2017 15:43 UTC
Resource URI: https://kar.kent.ac.uk/id/eprint/60038 (The current URI for this page, for reference purposes)