SLAS Technology 27 (2022) 76–84

Full Length Article

DeepImageTranslator: A free, user-friendly graphical interface for image translation using deep-learning and its applications in 3D CT image analysis

Run Zhou Ye a, Christophe Noll a, Gabriel Richard b, Martin Lepage b, Éric E. Turcotte c, André C. Carpentier a,∗

a Division of Endocrinology, Department of Medicine, Centre de recherche du Centre hospitalier universitaire de Sherbrooke, Université de Sherbrooke, Sherbrooke, Quebec, Canada
b Sherbrooke Molecular Imaging Center, Department of Nuclear Medicine and Radiobiology, Université de Sherbrooke, Sherbrooke, Quebec, Canada
c Department of Nuclear Medicine and Radiobiology, Centre d’Imagerie Moléculaire de Sherbrooke, Université de Sherbrooke, Sherbrooke, QC, Canada

Abstract

The advent of deep-learning has set new standards in an array of image translation applications. At present, the use of these methods often requires computer programming experience. Non-commercial programs with a graphical interface usually do not allow users to fully customize their deep-learning pipeline. Therefore, our primary objective is to provide a simple graphical interface that allows researchers with no programming experience to easily create, train, and evaluate custom deep-learning models for image translation. We also aimed to test the applicability of our tool in CT image semantic segmentation and noise reduction. DeepImageTranslator was implemented using the Tkinter library, the standard Python interface to the Tk graphical user interface toolkit; backend computations were implemented with image processing and data augmentation packages (Pillow, Numpy, OpenCV, Augmentor) together with the Tensorflow and Keras deep-learning libraries. Convolutional neural networks (CNNs) were trained using DeepImageTranslator.
The effects of data augmentation, deep-supervision, and sample size on model accuracy were also systematically assessed. DeepImageTranslator is a simple tool that allows users to customize all aspects of their deep-learning pipeline, including the CNN, the training optimizer, the loss function, and the type of training image augmentation scheme. We showed that DeepImageTranslator can be used to achieve state-of-the-art accuracy and generalizability in semantic segmentation and noise reduction. Highly accurate 3D segmentation models for body composition can be obtained using training sample sizes as small as 17 images. In conclusion, an open-source deep-learning tool for accurate image translation with a user-friendly graphical interface was presented and evaluated. This standalone software can be downloaded at: https://sourceforge.net/projects/deepimagetranslator/

Introduction

Image translation or transformation is an important and challenging task in many areas of clinical and fundamental sciences. Since the introduction of convolutional neural networks (CNNs), generations of CNN architectures have been designed and have achieved state-of-the-art performance in image translation tasks such as semantic segmentation [1–8], noise reduction [9–11], and image synthesis [12,13]. Nevertheless, the application of deep-learning methods for image translation can be difficult for scientists with no computer programming experience. In general, deep-learning pipelines are created using custom-implemented code. Existing software programs for deep-learning-based image analysis that allow users to build, train, and evaluate custom CNNs, such as NiftyNet [14], are mostly accessed through a command-line interface. In contrast, non-commercial open-source programs that interact with users through a graphical interface, such as the ImageJ implementation of U-net [15] or ilastik [16], do not allow users to customize their CNN (e.g.
adjusting the number of channels/layers, specifying input image resolution), the training optimizer, the loss function, or the use of different training image augmentation schemes. Therefore, our primary objective is to create a user-friendly graphical interface that will allow students and researchers to easily implement, train, and test custom deep-learning pipelines for image translation. Our secondary objective is to verify the applicability of our tool in two different image translation tasks using CT images.

One specific use of CNNs is in the semantic segmentation of CT images for assessment of body composition. For example, CT segmentation is critical for precise quantification of different adipose tissue compartments to provide useful information for fundamental research in metabolic syndrome. However, most existing implementations of deep-learning methods in abdominal CT segmentation were made for clinical research using 2D single-slice images from cohorts of hundreds to thousands of patients [3–7], which is not applicable in small-scale studies. Furthermore, studies of 3D volumetric segmentation for body composition are also scarce. Therefore, we also aimed to evaluate the practical application of DeepImageTranslator on our small dataset of 524 volumetric CT images from 5 subjects, while also assessing the effects of data augmentation, deep-supervision, and sample size on model accuracy.

Here, we present the various features of DeepImageTranslator and use CT images to evaluate the performance of our software in two image translation tasks, namely semantic image segmentation and noise reduction.

∗ Corresponding author at: Division of Endocrinology, Centre hospitalier universitaire de Sherbrooke, Sherbrooke, Quebec, Canada J1H 5N4.
E-mail address: andre.carpentier@usherbrooke.ca (A.C. Carpentier).

https://doi.org/10.1016/j.slast.2021.10.014
2472-6303/© 2021 The Author(s). Published by Elsevier Inc. on behalf of Society for Laboratory Automation and Screening.
This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
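As an illustrative aside on the segmentation-accuracy evaluation discussed above: overlap between a predicted mask and a reference mask is commonly quantified with the Dice similarity coefficient. The sketch below is a minimal NumPy illustration of that metric, not a reproduction of DeepImageTranslator's internal code; the function name and the toy masks are assumptions for demonstration only.

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks.

    Returns a value in [0, 1]; 1.0 means the masks overlap perfectly.
    """
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:  # both masks empty: define as perfect agreement
        return 1.0
    return 2.0 * intersection / total

# Toy 4x4 "CT slice" masks: predicted vs. ground-truth segmentation
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
truth = np.array([[0, 1, 1, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
print(round(dice_coefficient(pred, truth), 3))  # 2*3/(4+3) ≈ 0.857
```

The same formula extends directly to 3D volumes, since NumPy's elementwise logic and sums are shape-agnostic.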
RkJQdWJsaXNoZXIy MTk3NTQxMg==