Kent Academic Repository

BioGAN: An unpaired GAN-based image to image translation model for microbiological images

Bafti, Saber Mirzaee, Ang, Chee Siang, Marcelli, Gianluca, Hossain, MD Moinul, Maxamhud, Sadiya, Tsaousis, Anastasios D. (2023) BioGAN: An unpaired GAN-based image to image translation model for microbiological images. arXiv. (Submitted) (doi:10.48550/arXiv.2306.06217) (KAR id:104317)


Background and objective: A diversified dataset is crucial for training a well-generalized supervised computer vision algorithm. However, in the field of microbiology, the generation and annotation of a diverse dataset including field-taken images are time-consuming, costly, and in some cases impossible. Image to image translation frameworks allow us to diversify the dataset by transferring images from one domain to another. However, most existing image translation techniques require a paired dataset (the original image and its corresponding image in the target domain), which poses a significant challenge in collecting such datasets. In addition, the application of these image translation frameworks in microbiology is rarely discussed. In this study, we aim to develop an unpaired GAN-based (Generative Adversarial Network) image to image translation model for microbiological images, and to study how it can improve the generalization ability of object detection models.

Methods: In this paper, we present an unpaired and unsupervised image translation model to translate laboratory-taken microbiological images into field images, building upon recent advances in GAN networks and the Perceptual loss function. We propose a novel design for a GAN model, BioGAN, which combines Adversarial and Perceptual loss in order to transform the high-level features of laboratory-taken images of Prototheca bovis into those of field images, while preserving their spatial features.

Results: We studied the contribution of the Adversarial and Perceptual losses to the generation of realistic field images. We used the synthetic field images generated by BioGAN to train an object-detection framework, and compared the results with those of an object-detection framework trained with laboratory images; this resulted in up to 68.1% and 75.3% improvement on F1-score and mAP, respectively. We also present the results of a qualitative evaluation test, performed by experts, of the similarity of BioGAN synthetic images with field images.
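The generator objective described above combines an adversarial term (which pushes translated images toward the field-image domain) with a perceptual term (which preserves high-level content). A minimal sketch of how such a combined loss could be computed is shown below; note that the weighting factor `lam` and the placeholder feature maps are illustrative assumptions, not values or components taken from the paper, which uses learned feature extractors and a full GAN training loop.

```python
import numpy as np

def perceptual_loss(feat_fake, feat_real):
    """Mean squared distance between deep-feature maps.

    In a real implementation these features would come from a
    pretrained network; here they are placeholder arrays.
    """
    return float(np.mean((feat_fake - feat_real) ** 2))

def adversarial_loss(d_fake):
    """Non-saturating generator loss: encourage D(G(x)) -> 1."""
    eps = 1e-8  # numerical stability for log(0)
    return float(-np.mean(np.log(d_fake + eps)))

def generator_loss(d_fake, feat_fake, feat_real, lam=10.0):
    """Total generator objective: adversarial + lam * perceptual.

    `lam` (the perceptual-loss weight) is an assumed hyperparameter.
    """
    return adversarial_loss(d_fake) + lam * perceptual_loss(feat_fake, feat_real)
```

In this formulation, increasing `lam` forces the generator to keep the translated image's content close to the source image, while the adversarial term alone would only match the target domain's appearance.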

Item Type: Article
DOI/Identification number: 10.48550/arXiv.2306.06217
Uncontrolled keywords: image-to-image translation; computational biology; image translation; data augmentation; computerised biology
Subjects: Q Science
Divisions: Divisions > Division of Computing, Engineering and Mathematical Sciences > School of Engineering and Digital Arts
Funders: University of Kent
Depositing User: Moinul Hossain
Date Deposited: 16 Dec 2023 23:06 UTC
Last Modified: 13 Jan 2024 15:39 UTC

