Ogunjumelo, Bamidele (2023) Reconstruction of Burner Flames Through Deep Learning. Master of Science by Research (MScRes) thesis, University of Kent. (doi:10.22024/UniKent/01.02.101667) (KAR id:101667)
Language: English
This work is licensed under a Creative Commons Attribution 4.0 International License.
Official URL: https://doi.org/10.22024/UniKent/01.02.101667
Abstract
This MSc thesis reports the design, implementation, and experimental evaluation of a deep learning-based system for the three-dimensional (3-D) reconstruction and visualisation of fossil-fired burner flames. A literature review examines existing techniques for the 3-D visualisation and characterisation of flames. Methodologies and techniques for the 3-D reconstruction of burner flames using optical tomographic and deep learning (DL) techniques are presented, together with a discussion of their advantages and limitations. Technical requirements and open problems of the reviewed techniques are also discussed. A technical strategy incorporating numerical simulations, DL, digital image processing, and optical tomographic techniques is proposed for the reconstruction and visualisation of a flame. Based on this strategy, a DL-based 3-D flame reconstruction and visualisation system is developed. The system consists of a trained convolutional neural network (CNN) model and a third-party software tool for visualisation. It takes flame images acquired concurrently from eight different directions around a burner and performs a 3-D reconstruction of the flame. A numerical simulation is performed initially to examine the suitability of the proposed DL algorithm: ground truth data are generated using a mathematical model designed to mimic a flame structure, and 2-D projection data are generated from each ground truth. A modified CNN model with a 1-D output dense layer is established and trained to reconstruct the 3-D Gaussian distribution. To determine the optimal network architecture for this task, experiments were conducted with different network parameters. A detailed description of the CNN implemented for the numerical solutions is presented.
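The simulation step described above — a synthetic 3-D Gaussian "flame" volume and its 2-D projections from eight viewing directions — can be sketched as follows. This is a minimal NumPy illustration of the general idea, not the thesis's actual code; the grid size, Gaussian width, and nearest-neighbour rotation are assumptions chosen for brevity.

```python
import numpy as np

def gaussian_volume(n=32, sigma=6.0):
    # 3-D Gaussian on an n x n x n grid, mimicking a flame-like ground truth
    ax = np.arange(n) - (n - 1) / 2.0
    x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
    return np.exp(-(x**2 + y**2 + z**2) / (2.0 * sigma**2))

def project(volume, angle_deg):
    # 2-D projection of the volume as seen from angle_deg around the z axis:
    # rotate the x-y plane (nearest-neighbour sampling), then integrate (sum)
    # along the viewing axis.
    n = volume.shape[0]
    theta = np.deg2rad(angle_deg)
    c, s = np.cos(theta), np.sin(theta)
    ax = np.arange(n) - (n - 1) / 2.0
    xr, yr = np.meshgrid(ax, ax, indexing="ij")
    xs = np.clip(np.rint(c * xr - s * yr + (n - 1) / 2.0).astype(int), 0, n - 1)
    ys = np.clip(np.rint(s * xr + c * yr + (n - 1) / 2.0).astype(int), 0, n - 1)
    rotated = volume[xs, ys, :]   # rotated x-y plane for every z slice
    return rotated.sum(axis=0)    # line-of-sight integration

vol = gaussian_volume()
# eight equally spaced viewing directions, as in the thesis's camera setup
views = np.stack([project(vol, a) for a in range(0, 360, 45)])
print(views.shape)  # (8, 32, 32)
```

The stack of eight projections is the kind of multi-view input the CNN would be trained on, with the original volume (or its cross-sectional slices) as the regression target.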
A series of experiments was conducted using flame data obtained from a laboratory-scale combustion test rig to evaluate the performance of the established CNN model. This included implementing image processing routines to prepare the dataset collected from the rig. Additional datasets were generated using OpenCV morphological transformation operations to augment the original dataset. The results demonstrate that the trained CNN model can reconstruct the cross-sectional slices of a burner flame from images obtained under various combustion conditions, and that a 3-D flame structure can be obtained from the reconstructed cross-sectional data using a 3-D visualisation tool. Results from the experiments and the performance of the implemented DL-based 3-D flame reconstruction and visualisation system are presented and discussed.
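The morphological augmentation mentioned above can be illustrated with a grey-scale dilation, one of the basic operations OpenCV provides (e.g. `cv2.dilate`). The sketch below implements the same effect with NumPy only, so it is self-contained; the image size, kernel size, and stand-in "flame" are illustrative assumptions, not data from the thesis.

```python
import numpy as np

def dilate(img, k=3):
    # Grey-scale dilation with a k x k square kernel: each pixel becomes the
    # maximum over its neighbourhood, growing bright regions. This mirrors
    # the effect of OpenCV's cv2.dilate, written in plain NumPy here.
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].max()
    return out

flame = np.zeros((16, 16))
flame[6:10, 6:10] = 1.0           # stand-in for a bright flame region
augmented = dilate(flame)         # one augmented variant of the image
print(int(flame.sum()), int(augmented.sum()))  # prints "16 36"
```

Erosion (the dual operation, using a neighbourhood minimum) and combinations such as opening and closing yield further variants, which is how a small set of rig images can be expanded into a larger training set.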
Item Type: | Thesis (Master of Science by Research (MScRes))
---|---
Thesis advisors: | Hossain, Moinul; Lu, Gang
DOI/Identification number: | 10.22024/UniKent/01.02.101667
Uncontrolled keywords: | deep learning, flames, image reconstruction, tomography
Subjects: | T Technology
Divisions: | Divisions > Division of Computing, Engineering and Mathematical Sciences > School of Engineering and Digital Arts
Funders: | University of Kent (https://ror.org/00xkeyj56)
SWORD Depositor: | System Moodle
Depositing User: | System Moodle
Date Deposited: | 14 Jun 2023 07:19 UTC
Last Modified: | 05 Nov 2024 13:07 UTC
Resource URI: | https://kar.kent.ac.uk/id/eprint/101667