Deep Learning in Medical Imaging: A Review. Chapter, April 2022. DOI: 10.1201/9781003269793-15.


CHAPTER FIFTEEN

Deep Learning in Medical Imaging: A Review

Rguibi Zakaria, Hajami Abdelmajid and Dya Zitouni

Abstract—Medical imaging diagnosis is the most widely assisted method for helping physicians diagnose patient diseases using different imaging test modalities. Deep learning aims to simulate human cognitive functions, and it is providing a paradigm shift in the field of medical imaging, due to the expanding availability of medical imaging data and to advancing deep learning techniques. In effect, deep learning algorithms have become the approach of choice for medical imaging, from image acquisition to image retrieval, from segmentation to disease prediction. In our paper, we present a review that focuses on exploring the application of deep learning in medical imaging from different perspectives.

Keywords—Deep learning, Medical imaging, review

Rguibi Zakaria • Hajami Abdelmajid • Dya Zitouni
Hassan First University of Settat, Faculty of Science and Technology, Settat, Morocco
[email protected][email protected][email protected]

J K Mandal, S Misra, J S Banerjee and S Nayak (eds.), Application of Machine Intelligence in Engineering
DOI: 10.4324/9781003269793-15
2nd Global Conference on Artificial Intelligence and Applications (GCAIA 2021)

I. INTRODUCTION

Deep learning aims to simulate human cognitive functions. It is bringing a paradigm shift to the medical imaging field, powered by the increasing availability of medical imaging data and the rapid progress of deep learning techniques. Deep learning algorithms have become a methodology of choice for medical imaging, and there are many uses for deep learning technology in the healthcare sector [1]. In particular, deep learning algorithms have become a methodology of choice for radiology image analysis [2]. This includes different imaging modalities, such as computed tomography, magnetic resonance imaging, positron emission tomography and ultrasonography, and different tasks, such as tumor detection, segmentation and disease prediction. Deep-learning-based methods have shown remarkable performance improvements over conventional machine learning algorithms [3], because deep learning learns from an enormous number of image examples, similar to human learning, and it can take much less time, as it depends solely on curated data and the corresponding metadata rather than on domain expertise, which usually takes years to develop [4]. As traditional AI requires predefined features and has shown plateauing performance over recent years, and given the current success of deep learning in image research, it is expected that deep learning will further dominate image research in radiology.

In this work, we present a comprehensive survey to assist researchers in the area of deep learning applications in medical imaging. This work complements existing work and contributes towards providing the complete background on the application of deep learning in medical imaging. The objective of this survey is to answer multiple research questions concerning deep learning in medical imaging. These research questions are:

• Application perspective: What are the main applications that use DL tools in medical imaging?
• Deep learning perspective: What are the DL algorithms and tools used for medical imaging, and the available datasets? This question is to analyse the most commonly used algorithms and frameworks in medical imaging applications.
• Hardware perspective: What are the trade-offs when using different hardware platforms for AI acceleration? This question is to study and compare the available hardware implementation options based on standard figures of merit.

The rest of the paper is organized as follows: a literature review covering related work is discussed in Sect. 2. Then, Sects. 3, 4 and 5 cover the application, algorithms and hardware perspectives. In Sect. 6, we discuss the analysis and recommendations for the research questions set in the introduction. Finally, the conclusion of this work is presented in Sect. 7.

II. APPLICATION PERSPECTIVE

Deep learning techniques are increasingly being used to enhance clinical practice, and the list of examples is long and growing every day. In our paper, the collected research articles are classified by imaging modality: magnetic resonance imaging, computed tomography, positron emission tomography and single-photon emission tomography, radiography, ultrasound and histology.

1) Magnetic resonance imaging: Magnetic resonance imaging (MRI) is a medical imaging technique used in the field of radiology to produce images of the anatomy and physiological processes of the body. From image acquisition to image retrieval, from segmentation to disease prediction, deep learning has been applied to every step of complete workflows. Generally, deep learning has two levels of application in the MRI workflow. The first is the lower levels of MRI measurement techniques, ranging from MR image capture and signal treatment to MR fingerprinting, denoising, super-resolution and image generation; deep learning research, however, has typically been focused on the segmentation and classification of reconstructed magnitude images. Research on CNN- and RNN-based image reconstruction methods is rapidly increasing [6]–[9].
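To make the basic building block of these reconstruction networks concrete, here is a minimal sketch in plain NumPy of the convolution operation that such CNNs stack and learn. The 3x3 averaging kernel and the toy phantom are assumptions chosen purely for illustration; they do not come from any specific method in [6]–[9], and a trained network would learn many such kernels rather than use a fixed one.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution: the elementary operation a CNN layer applies."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy "phantom" with simulated acquisition noise; a fixed averaging kernel
# stands in for one learned filter of a denoising/reconstruction CNN.
rng = np.random.default_rng(0)
clean = np.outer(np.hanning(16), np.hanning(16))        # smooth synthetic anatomy
noisy = clean + 0.1 * rng.standard_normal(clean.shape)  # noisy measurement
kernel = np.full((3, 3), 1.0 / 9.0)                     # illustrative smoothing filter

denoised = conv2d(noisy, kernel)                        # 14x14 output (valid mode)
crop = clean[1:-1, 1:-1]                                # align with the valid region
err_before = np.abs(noisy[1:-1, 1:-1] - crop).mean()
err_after = np.abs(denoised - crop).mean()
```

Even this single fixed filter lowers the mean absolute error on the toy image; a reconstruction CNN chains many learned filters with nonlinearities to recover far finer structure than simple smoothing can.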

A real-time method to reconstruct compressed-sensed magnetic resonance imaging using GANs has also been proposed [10], [11]. We also find that reconstructing a higher-resolution image or image sequence from an observed low-resolution image, i.e. super-resolution, is an exciting application of deep learning methods [12], [13]. Reconstruction based on deep learning methods can extract more abundant edge details during network training, which potently improves high-resolution image reconstruction for osteoarthritis. Although the proposed algorithm achieves better reconstruction effects than previous methods, there is still a gap between the edge reconstruction effect and the original high-resolution image.

The second involves higher-level (downstream) applications, such as rapid and accurate image segmentation, content-based image searching, typically performed on reconstructed amplitude images, and disease prediction in specific organs (brain, kidney, prostate and spine).

2) Computed tomography, PET & X-ray: Medical imaging is a valuable source of information required for clinical decisions. However, the analysis of these examinations is not a trivial task. Since CT examinations represent the major cause of radiation exposure to the general public from diagnostic medical imaging procedures, the advancement of low-dose CT imaging protocols is highly preferable. The results presented in [15] indicate that ResNet is an optimal algorithm for generating very low-dose CT images for COVID-19 diagnosis. In 2020, much work focused on the novel coronavirus COVID-19; an automated detection tool is needed to assist in the detection of COVID-19 pneumonia using chest CT imaging [16]. Their motivation is to leverage useful information from multiple related tasks to help improve segmentation and classification performance, using a dataset of 1044 patients, including 449 patients with COVID-19, 100 normal patients, 98 with lung cancer, and 397 with different types of pathology. In [17], the authors investigate the feasibility of using a deep learning-based decision-tree classifier for detecting COVID-19 from CXR images.

Most applications of computer vision to radiological imaging have focused on classifying images into disease categories, but we may also use these approaches to enhance image quality. Hybrid imaging approaches, such as PET/CT, are ideally suited for the application of these methods. Li, Zhao and Lu et al. [18] suggest a new deep learning-based variational approach to automatically merge multimodal information for tumor segmentation in PET/CT.

X-ray security imaging with deep learning models is presented in [19]; the authors survey X-ray security imaging algorithms by taxonomizing the area into both classical machine learning and contemporary deep learning applications.

3) Ultrasound: Ultrasound is a convenient and accessible examination tool. It is relatively inexpensive and fast. In addition, patients are not exposed to ionizing radiation. Technologically, ultrasound probes are getting more compact and portable, as the market

demand for low-cost handheld devices is increasing [20]. Transducers are becoming miniaturized, which allows, for example, intra-corporeal imaging for interventional applications. At the same time, there is a strong movement towards 3D imaging [21] and the use of high-framerate imaging schemes, both accompanied by dramatically increasing data rates that pose a heavy burden on the probe-system communication and on subsequent image reconstruction algorithms. Current systems offer a multitude of advanced applications and methods, including shear wave elasticity imaging, ultra-sensitive Doppler, and ultrasound localization microscopy for super-resolution microvascular imaging. We find that [22] is a good article for giving the reader a broad understanding of the possible impact of deep learning approaches on many aspects of ultrasound imaging, and [23] will be useful for researchers who focus on ultrasound CAD systems; that study divided ultrasound CAD systems into two categories: the traditional ultrasound CAD system, which employs hand-crafted features, and the deep learning ultrasound CAD system.

4) Histology: The emergence of digital pathology, an image-based environment for acquiring, managing, and interpreting pathological information supported by computational techniques for data extraction and analysis, is changing the pathology ecosystem. In particular, by virtue of our new ability to generate and maintain digital libraries, the field of computer vision can now be effectively applied to histopathology material by people who do not have extensive expertise in computer vision techniques. The development of machine learning approaches for extracting information from image data enables the interrogation of tissues in a way that was not previously possible, and is central to developing new integrated, biologically and clinically uniform disease categories to identify patients at risk of progression and to modify current paradigms for the treatment and prevention of kidney disease [24].

In pathology, DL has been used for the detection, annotation, segmentation, registration, processing and classification of WSIs (whole slide images). For mammography and breast histopathology images, an overview of recent state-of-the-art deep learning-based CAD systems is presented in [25]. This paper presents an overview of different deep learning-based approaches used for mammography and breast histology and proposes a bridge between these two fields employing deep learning concepts.

Deep-learning-based holographic phase recovery, image enhancement/reconstruction and cross-modality image transformations have profound and broad implications in the field of digital holography and coherent imaging systems [26].

III. ALGORITHMS AND DATA PERSPECTIVE

In this section, an analysis of the collected research papers from the algorithm perspective

is presented, in order to state the most used deep learning algorithms in different medical imaging applications; we also present some available datasets.

1) Medical Imaging Data

"Publishing in academic journals without sharing data/annotations/models is more marketing than science." (Matthew Lungren MD, Co-Director Stanford AIMI, Interventional Radiologist.)

Deep learning refers to a set of highly intensive computational models. One typical example is fully connected multi-layer neural networks, where tons of network parameters need to be estimated properly. The basis for achieving this goal is the availability of a huge amount of data. In fact, while there are no hard guidelines about the minimum number of training documents, a general rule of thumb is to have at least about 10 times as many samples as parameters in the network. This is also one of the reasons why deep learning is so successful in domains where huge amounts of data can be easily collected (e.g. computer vision, speech, natural language). However, health care is a different domain; in fact, we only have approximately 7.8 billion people in the world (as of September 2021), with a great part not having access to primary health care. Consequently, we cannot get as many patients as we need to train a comprehensive deep learning model. Furthermore, understanding diseases and their variability is much more complicated than other tasks, such as image or speech recognition. As a result, from a big data perspective, the amount of medical data needed to train a successful and robust deep learning model would be much larger compared with other media. Health care data is also not similar to other domains where the data are clean and well-structured; health care data are highly heterogeneous, ambiguous, noisy and incomplete, which is why training a good deep learning model with such massive and variegated data sets is challenging.

Data parallelism is a way to distribute computing across different machines such that the data is split across machines and some computation is performed locally on each machine using the whole model, but only for part of the full data-set. Depending on how the parameters are updated, we have two paradigms for parameter update: synchronous update and asynchronous update [27]. In [28], the authors aim to experimentally measure the effects of data parallelism on neural network training. One important finding from this paper is that the effect of the dataset on the maximum useful batch size tends to be weaker than the effects of the model and the learning algorithm, and does not consistently depend on the size of the dataset.

Data parallelism [29] is one of the most common parallelization approaches on multi-GPU platforms: when deployed, all GPUs have a full copy of the DNN model and run a subset of the training data. Data parallelism incurs excessive inter-GPU communication overhead, as the weights updated on each GPU must be synchronized. These communication overheads increase as the model size increases, severely hindering the scalability of data parallelism [30]–[35].

2) Algorithms

We describe a few deep learning methods in this section.

a) Convolutional neural networks and their extensions

Automatic medical imaging analysis is crucial to modern medicine. Diagnosis based on

the interpretation of images can be highly subjective. Computer-aided diagnosis (CAD) can provide an objective assessment of the underlying disease processes. Modeling of disease progression, common in several neurological conditions such as Alzheimer's, multiple sclerosis, and stroke, requires analysis of brain scans based on multimodal data and detailed maps of brain regions.

In recent years, because CNNs can be parallelized on GPUs and because of their outstanding performance in computer vision, they have been adopted rapidly by the medical imaging research community. CNNs in medical imaging have yielded promising results in brain pathology segmentation [36], and [37] is an editorial on deep learning techniques in computer-aided detection, segmentation, and shape analysis. Among the biggest challenges in CAD are the differences in shape and intensity of tumors/lesions and the variations in imaging protocol, even within the same imaging modality. In several cases, the intensity range of pathological tissue may overlap with that of healthy samples. Furthermore, Rician noise, non-isotropic resolution, and bias field effects in magnetic resonance images (MRI) cannot be handled automatically using simpler machine learning approaches. To deal with this data complexity in such approaches, hand-designed features are extracted and conventional machine learning models are trained to classify them in a completely separate step.

b) Recurrent Neural Networks (RNNs)

Recurrent neural networks (RNNs) are a type of artificial neural network that can process a sequence of inputs and maintain their state while processing the next input. Data like time series have a sequential order that needs to be tracked in order to be understood. This is in contrast to traditional neural networks, which process one input and move on to the next regardless of its position in the sequence: each input is supposed to be independent of the others, whereas in a time series each input depends on the preceding input.

Pyramid Convolutional RNN (PC-RNN) is a novel deep learning-based method to reconstruct the image from multiple scales [39]. The results show that the PC-RNN model can reconstruct finer details in the image and outperforms CS and U-Net in both single-coil and multi-coil tasks with 4X and 8X accelerations. In [40], the authors present a classification framework based on a combination of convolutional and recurrent neural networks for longitudinal analysis of structural MR images in AD diagnosis, and in [41], they present a classification framework based on a combination of a Multi-Layer Perceptron (MLP) neural network and Recurrent Neural Networks (RNN) for longitudinal analysis of MR images for AD diagnosis.

These days, confronting the COVID-19 pandemic has become one of the pressing challenges of world healthcare, which is why accurate and fast diagnosis of COVID-19 cases is essential for correct medical treatment to control the pandemic. Islam et al. [38] present a combined architecture of a convolutional neural network (CNN) and a recurrent neural network (RNN) to diagnose COVID-19 from chest X-rays. In terms of accuracy and computational time, their experiments find that the combination of the VGG19 and RNN architectures achieved the best performance among all the networks.

c) GANs and their extensions

Generative Adversarial Networks, or GANs, are an approach to generative modeling using

deep learning methods. GAN modeling is an unsupervised learning task that involves automatically discovering and learning regularities in the input data; in this manner, the model can be used for generating or producing new examples that could plausibly have been drawn from the original data set.

Generative adversarial network models have carved open many exciting ways to handle well-known and challenging medical image analysis problems, such as data simulation, detection, classification, medical image denoising, reconstruction or segmentation. Kazeminia et al. [42] give a broad overview of recent literature on GANs for medical applications; the shortcomings and opportunities of the proposed methods are thoroughly discussed, and potential future work is elaborated. One more great thing about this paper is that all essential details, such as the underlying method, datasets, and performance, are tabulated in an interactive visualization.

CT imaging puts the patient at risk of cell damage and cancer because of radiation exposure, which is why the synthesis of CT images from MR acquisitions is very important; in practice, the acquisition of CT images is required in many clinical settings. [43] successfully utilizes CycleGANs to transform 2D MR images to CT images without the need for explicit, co-registered data pairs. Interestingly, their training led to even better results, as the mapping is not affected by co-registration artifacts. In [44], the authors map 3D MR data of the head to its CT counterpart to facilitate the segmentation of craniomaxillofacial bony structures using conditional GANs.

In [45], the authors propose using 3D voxel locations for synthesizing 2D freehand ultrasound (US) images of a fetus phantom using cGANs. In [46], they propose a multi-stage setup that applies GANs to intravascular US (IVUS) simulation.

[47] elucidates that using the GAN training strategy in CNNs not only enhances the performance of semantic segmentation methods but also brings the performance of non-semantic segmentation methods closer to semantic ones. [48] highlights the excellent performance of GANs in the segmentation of normalized or equalized patches of brain tumors. [49] gives us a framework named SegAN. They show that pixel dependencies are learned better by using an adversarial loss in addition to multi-scale, pixel-wise losses, using the U-Net as the generator architecture of a GAN. However, one challenge with most supervised segmentation methods is the performance degradation on unseen images. [50] demonstrates an adversarial framework to address this problem for unsupervised domain adaptation in brain lesion segmentation. For the segmentation of bony structures in brain images, [44] synthesizes CT images from MRI images using GANs and then uses both of them as the input to a segmentation network called Deep-supGAN, optimized with five different losses. For multi-class classification of brain tumors, [51] proposes combining the cGAN and MGAN, where class labels define conditions.

IV. HARDWARE PERSPECTIVE

Recent advances in computer vision algorithms are due not only to deep learning technologies and large-scale datasets, but also rely on major leaps in hardware acceleration, which provide powerful parallel computing architectures for the efficient training and inference of large-scale, complex and multi-layered neural networks.
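The gains from such parallel hardware are typically harvested through the synchronous data-parallel training scheme discussed in Sect. III. The sketch below is a minimal, illustrative NumPy simulation (the linear least-squares model, the four-way shard split and the learning rate are assumptions chosen for illustration, not any specific system): a synchronous update computes per-device gradients, averages them (an all-reduce), and applies the identical step on every replica.

```python
import numpy as np

def gradient(w, X, y):
    """Mean-squared-error gradient for a linear model y ~ X @ w (one device's shard)."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def synchronous_step(w, shards, lr=0.1):
    """One synchronous data-parallel SGD step: per-device gradients are
    averaged (the all-reduce) so every replica applies the same update."""
    grads = [gradient(w, X, y) for X, y in shards]  # computed in parallel on devices
    return w - lr * np.mean(grads, axis=0)          # synchronized update

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w

# Split the batch across 4 simulated "devices" of equal size.
shards = [(X[i::4], y[i::4]) for i in range(4)]

w = np.zeros(3)
for _ in range(200):
    w = synchronous_step(w, shards)
```

With equal shard sizes, the averaged gradient equals the full-batch gradient exactly, which is why synchronous updates keep all replicas consistent; an asynchronous update [27] instead lets each device push its gradient without waiting, trading consistency for throughput. On real multi-GPU systems, this all-reduce is precisely the inter-GPU communication whose cost grows with model size [30]–[35].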

This section discusses the hardware platforms that can be used in the application of deep learning in the medical imaging field (Fig. 1). Hardware accelerator papers are categorized from the hardware perspective to provide useful comparisons.

Fig. 1 Summary of the hardware platforms that can be used in the application of deep learning in the medical imaging field.

Hardware selection is about determining the technical specifications for a proposed deep learning model. The critical metrics to consider in hardware selection are the size of the dataset and the complexity of the model. Deep learning models can be trained on CPUs, GPUs, or TPUs on cloud platforms, which can leverage deep learning-oriented devices [5].

1) Tensor Processing Units (TPU): The TPU is a Cloud TPU instance to which we were given academic access in February 2018. Its system architecture includes a Cloud Engine VM, a Cloud TPU server, Google Cloud storage, and a Cloud TPU board [52]. Each TPU card contains four TPU packages (the default TPU Cloud configuration) [53].

Reducing training times is an important requirement in this scenario, and Google TPUs are currently one of the most powerful resources available to train and carry out predictions for cloud-based segmentation. Another important aspect is that, in a cloud-based service, images will come from very different sources and, thus, the networks must be trained as independently as possible from the acquisition source [54].

The use of Cloud TPUs for high-resolution 3D imaging is a promising way to get better results. Huot et al. [55] developed a novel approach to wavefield modeling using TensorFlow on Cloud TPUs by adapting a numerical scheme to exploit the TPU architecture, using higher-order stencils to leverage its efficient matrix operations, and using the dedicated high-bandwidth, low-latency, interchip interconnect network of the TPU Pod for the halo exchange. Faster computation allows faster decision making in workflows involving high-resolution medical imaging. Additionally, this work demonstrated how TensorFlow on Cloud TPU can be used to accelerate conventional scientific simulation problems. The development of TPU implementation frameworks will potentially allow users to program TPUs with simple and easy-to-read code.

There are two important notes from [56]. First, the TPU is well adapted to large-batch learning, as systolic networks are very effective at increasing throughput; their experiments show that large batch size is the key to higher TPU-over-GPU speedup. Second, all CNNs perform better on the TPU. The TPU is the best platform for large CNNs, since batch size is always the key to better TPU-over-GPU speed for CNNs, which would suggest that the TPU architecture is highly optimized for the spatial reusability characteristics of CNNs.

2) GPU: Driven by the rapidly increasing computational complexities of medical imaging, processing time now limits the development of advanced technologies in medical and clinical applications. Initially designed for parallel data

acceleration in the computer graphics process, the GPU has been positioned as a versatile platform for running parallel computations to process medical datasets [57].

Kalaiselvi et al. [58] analyze the GPU performance of existing algorithms and discuss the computational gain and the need for GPU CUDA computing in medical image analysis. They present a shortened survey of medical image analysis tasks requiring GPU computation for denoising, registration, segmentation and visualization. Finally, a few facts are discussed for calculating the speedup ratio between GPU and CPU, with some limitations and future scopes of GPU programming.

Smistad et al. [14] investigate the use of GPUs to accelerate medical image segmentation methods. They conclude that, due to the methods' data-parallel structure and high thread count, most segmentation methods may benefit from GPU processing, which makes them well suited for GPU acceleration, but some factors, such as synchronization, branch divergence and memory usage, can limit the speedup over serial execution. To deal with the impact of these limiting factors, several GPU optimization techniques are discussed.

Recent developments in GPU-based medical image reconstruction are presented in [59], from the computed tomography, magnetic resonance imaging, positron emission tomography, SPECT, and US perspectives. In this paper, they also discuss some strategies and approaches to get the most out of GPUs in image reconstruction, as well as innovative applications arising from an increased computing capacity. The direct benefit of GPUs in medical imaging is the substantially improved processing speed, as repeatedly demonstrated in numerous examples mentioned in this article.

3) CPU: A central processing unit (CPU), also called a central processor, main processor or just processor, has many applications in medical imaging processes alongside other hardware. In [61], to speed up the cost function derivative calculation, the authors use parallelization on the CPU, and they found that parallelization increases performance until the number of threads used reaches the number of CPU cores; as a result, they obtained an overall speedup of 4–5x using 8 threads on an 8-core system. In addition to accelerating the core registration algorithm using the CPU, the GPU was used to accelerate two potentially computationally intensive components of the algorithm. All registration similarity metrics benefit almost equally well from parallelization.

Securing gray-level and color medical images is achieved using an accurate and robust watermarking algorithm. Parallel implementations on multi-core CPUs and GPUs tremendously reduce the overall watermarking time [62]: watermarking a single color medical image using a sequential implementation requires approximately 180 seconds, and this time is reduced to 8 seconds. This significant reduction in time is desirable for real-time applications in medical imaging systems, where the medical images could be secured once they are reconstructed from the sensed medical data.

In short, the CPU has the best programmability, which allows it to achieve the highest FLOPS utilization for RNNs, and it supports the largest models due to its large memory capacity [56].

4) FPGA: One of the key future requirements for delivering data-driven patient care and achieving operational optimizations that reduce

costs is streaming data architecture that collects have been covered while at the neuron level,
and analyzes comprehensive and up-to-date the optimizations of convolutional and fully
health data. In fact Real-time analysis of patient connected layers have been detailed and
data during medical procedures can supply compared. All the different degrees of freedom
vital diagnostic feedback that significantly offered by FPGAs (custom data types, local
improves chances of success. Frameworks like data streams, dedicated processors, etc.) are
deep neural networks are expected to perform exploited by the presented methods. Moreover,
computations within strict time restrictions algorithmic and datapath optimizations can
for real-time performance With fast sensor a should be jointly implemented, resulting in
processing [63].

Although GPUs have been shown to achieve high throughput and are widely used for hardware acceleration of DNNs, they are often not favored for power- and energy-constrained applications, such as IoT devices, due to their high power consumption. Hence, DNN acceleration requires an alternative solution based on low-power FPGAs [60].

The FPGA, with the great flexibility it offers to accommodate recent DNN models featuring increased sparsity and compact network structures, allows us to implement irregular parallelism, customized data types and application-specific hardware architectures. Further, an FPGA can be reprogrammed after manufacturing for the desired functions and applications. Owing to these attractive features, a massive number of FPGA-based accelerators have been proposed for both HPC data centers [64] and embedded applications [65].

In actual practice, a DNN model is frequently trained or refined on a high-performance computing platform such as a GPU, while FPGA acceleration is implemented for DNN inference, processing the given input data with the pre-trained DNN model. In [66], a number of methods and tools that aim at porting convolutional neural networks onto FPGAs are compared. At the network level, approximate computing and datapath optimization methods provide additive hardware performance gains.

V. CONCLUSION AND FUTURE WORK

In this paper, we present a review study of the application of deep learning in medical imaging. The paper covers deep learning applications in medical imaging research over the period between 2016 and 2020. The main objective of our review is to answer three research questions addressing the subject from three angles, namely the application perspective, the AI algorithms and tools perspective, and the hardware platform perspective.

We are confident about the future of deep learning in medical imaging. It is by no means inevitable that deep learning will completely revolutionize these fields, but the speed at which the field is developing gives grounds for optimism. We have highlighted many challenges beyond improving training and prediction accuracy, such as preserving patient privacy and interpreting models. Ongoing research has begun to address these issues and has shown that they are not insurmountable.

In this research study we have highlighted the role of deep learning in the applications of medical imaging. In the literature, several surveys and overview papers have been developed to address the application of deep learning in medical imaging, and significant development can be observed since 2017.
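The train-on-GPU, infer-on-FPGA flow discussed in the hardware perspective above typically involves reducing the precision of the pre-trained weights before deployment. The sketch below illustrates one such step, symmetric post-training quantization of float32 weights to int8; the helper names and the toy weight values are illustrative assumptions, not the procedure of any specific toolflow cited here.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric post-training quantization of float32 weights to int8.

    Returns the int8 tensor plus the scale needed to recover
    approximate float values, mimicking the precision reduction
    applied when porting a GPU-trained model to a low-power
    FPGA accelerator (toy sketch, not a cited toolchain)."""
    scale = float(np.max(np.abs(weights))) / 127.0  # largest magnitude maps to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 weights from the int8 tensor."""
    return q.astype(np.float32) * scale

# Toy weights standing in for one layer of a GPU-trained model.
w = np.array([0.5, -1.27, 0.031, 1.0], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# Rounding error is bounded by half a quantization step.
assert np.max(np.abs(w - w_hat)) <= s / 2 + 1e-7
```

Storing 8-bit integers instead of 32-bit floats cuts weight memory by a factor of four and maps naturally onto the customized, low-bitwidth datapaths that FPGAs can implement.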
Deep Learning In Medical Imaging: A Review 141
The success of the application of deep learning in medical imaging rests on three important axes, H-D-M (Hardware-Data-Model). Our future work will focus on these challenges, and our main future question is: how can we obtain the best accuracy with less data and the best generalisability?

REFERENCES

[1] Mou, Xiaomin. “Artificial Intelligence: Investment Trends and Selected Industry Uses”. The World Bank. (2019).
[2] Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, et al. A survey on deep learning in medical image analysis. Medical image analysis 2017;42:60–88.
[3] Paul R, Hawkins SH, Balagurunathan Y, Schabath MB, Gillies RJ, Hall LO, et al. Deep feature transfer learning in combination with traditional features predicts survival among patients with lung adenocarcinoma. Tomography 2016;2(4):388.
[4] Hosny A, Parmar C, Quackenbush J, Schwartz LH, Aerts HJ. Artificial intelligence in radiology. Nature Reviews Cancer 2018;18(8):500–510.
[5] Montagnon E, Cerny M, Cadrin-Chênevert A, Hamilton V, Derennes T, Ilinca A, et al. Deep learning workflow in radiology: a primer. Insights into Imaging 2020;11(1):22.
[6] Liang D, Cheng J, Ke Z, Ying L. Deep Magnetic Resonance Image Reconstruction: Inverse Problems Meet Neural Networks. IEEE Signal Processing Magazine 2020;37(1):141–151.
[7] Sun J, Li H, Xu Z, et al. Deep ADMM-Net for compressive sensing MRI. In: Advances in neural information processing systems; 2016. p. 10–18.
[8] Wang S, Su Z, Ying L, Peng X, Zhu S, Liang F, et al. Accelerating magnetic resonance imaging via deep learning. In: 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI). IEEE; 2016. p. 514–517.
[9] Schlemper J, Caballero J, Hajnal JV, Price AN, Rueckert D. A Deep Cascade of Convolutional Neural Networks for Dynamic MR Image Reconstruction. IEEE Transactions on Medical Imaging 2018;37(2):491–503.
[10] Yang G, Yu S, Dong H, Slabaugh G, Dragotti PL, Ye X, et al. DAGAN: Deep de-aliasing generative adversarial networks for fast compressed sensing MRI reconstruction. IEEE transactions on medical imaging 2017;37(6):1310–1321.
[11] Chartsias A, Joyce T, Giuffrida MV, Tsaftaris SA. Multimodal MR synthesis via modality-invariant latent representation. IEEE transactions on medical imaging 2017;37(3):803–814.
[12] Wang Z, Chen J, Hoi SC. Deep learning for image super-resolution: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence 2020.
[13] Qiu D, Zhang S, Liu Y, Zhu J, Zheng L. Super-resolution reconstruction of knee magnetic resonance imaging based on deep learning. Computer methods and programs in biomedicine 2020;187:105059.
[14] Smistad E, Falch TL, Bozorgi M, Elster AC, Lindseth F. Medical image segmentation on GPUs – A comprehensive review. Medical image analysis 2015;20(1):1–18.
[15] Shiri I, Akhavanallaf A, Sanaat A, Salimi Y, Askari D, Mansouri Z, et al. Ultra-low-dose chest CT imaging of COVID-19 patients using a deep residual neural network. European radiology 2020; p. 1–12.
[16] Amyar A, Modzelewski R, Ruan S. Multi-task Deep Learning Based CT Imaging Analysis For COVID-19: Classification and Segmentation. medRxiv 2020; https://www.medrxiv.org/content/early/020/04/21/2020.04.16.20064709.
142 2nd Global Conference on Artificial Intelligence and Applications (GCAIA 2021)
[17] Yoo SH, Geng H, Chiu TL, Yu SK, Cho DC, Heo J, et al. Deep learning-based decision-tree classifier for COVID-19 diagnosis from chest X-ray imaging. Frontiers in medicine 2020;7:427.
[18] Li L, Zhao X, Lu W, Tan S. Deep learning for variational multimodality tumor segmentation in PET/CT. Neurocomputing 2020;392:277–295.
[19] Akcay S, Breckon T. Towards Automatic Threat Detection: A Survey of Advances of Deep Learning within X-ray Security Imaging. arXiv preprint arXiv:2001.01293 2020.
[20] Chernyakova T, Eldar YC. Fourier-domain beamforming: the path to compressed ultrasound imaging. IEEE transactions on ultrasonics, ferroelectrics, and frequency control 2014;61(8):1252–1267.
[21] Provost J, Papadacci C, Arango JE, Imbault M, Fink M, Gennisson JL, et al. 3D ultrafast ultrasound imaging in vivo. Physics in Medicine & Biology 2014;59(19):L1.
[22] van Sloun RJ, Cohen R, Eldar YC. Deep learning in ultrasound imaging. Proceedings of the IEEE 2019;108(1):11–29.
[23] Huang Q, Zhang F, Li X. Machine learning in ultrasound computer-aided diagnostic systems: a survey. BioMed research international 2018;2018.
[24] Barisoni L, Lafata KJ, Hewitt SM, Madabhushi A, Balis UG. Digital pathology and computational image analysis in nephropathology. Nature Reviews Nephrology 2020; p. 1–17.
[25] Hamidinekoo A, Denton E, Rampun A, Honnor K, Zwiggelaar R. Deep learning in mammography and breast histology, an overview and future trends. Medical image analysis 2018;47:45–67.
[26] Rivenson Y, Zhang Y, Günaydın H, Teng D, Ozcan A. Phase recovery and holographic image reconstruction using deep learning in neural networks. Light: Science & Applications 2018;7(2):17141.
[27] Hegde V, Usmani S. Parallel and distributed deep learning. Tech. report, Stanford University; 2016.
[28] Shallue CJ, Lee J, Antognini J, Sohl-Dickstein J, Frostig R, Dahl GE. Measuring the effects of data parallelism on neural network training. arXiv preprint arXiv:1811.03600 2018.
[29] Krizhevsky A. One weird trick for parallelizing convolutional neural networks. arXiv preprint arXiv:1404.5997 2014.
[30] Lin Y, Han S, Mao H, Wang Y, Dally WJ. Deep gradient compression: Reducing the communication bandwidth for distributed training. arXiv preprint arXiv:1712.01887 2017.
[31] Zhou S, Wu Y, Ni Z, Zhou X, Wen H, Zou Y. DoReFa-Net: Training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv preprint arXiv:1606.06160 2016.
[32] Strom N. Scalable distributed DNN training using commodity GPU cloud computing. In: Sixteenth Annual Conference of the International Speech Communication Association; 2015.
[33] Seide F, Fu H, Droppo J, Li G, Yu D. 1-bit stochastic gradient descent and its application to data-parallel distributed training of speech DNNs. In: Fifteenth Annual Conference of the International Speech Communication Association; 2014.
[34] Alistarh D, Li J, Tomioka R, Vojnovic M. QSGD: Randomized quantization for communication-optimal stochastic gradient descent. arXiv preprint arXiv:1610.02132 2016.
[35] Aji AF, Heafield K. Sparse communication for distributed gradient descent. arXiv preprint arXiv:1704.05021 2017.
[36] Havaei M, Guizard N, Larochelle H, Jodoin PM. Deep learning trends for focal brain pathology segmentation in MRI. In: Machine Learning for Health Informatics. Springer; 2016. p. 125–148.
[37] Greenspan H, Van Ginneken B, Summers RM. Guest editorial: Deep learning in medical imaging: Overview and future promise of an exciting new technique. IEEE Transactions on Medical Imaging 2016;35(5):1153–1159.
[38] Serte S, Demirel H. Wavelet-based deep learning for skin lesion classification. IET Image Processing 2019;14(4):720–726.
[39] Wang P, Chen EZ, Chen T, Patel VM, Sun S. Pyramid Convolutional RNN for MRI Reconstruction. arXiv preprint arXiv:1912.00543 2019.
[40] Cui R, Liu M, Initiative ADN, et al. RNN-based longitudinal analysis for diagnosis of Alzheimer’s disease. Computerized Medical Imaging and Graphics 2019;73:1–10.
[41] Cui R, Liu M, Li G. Longitudinal analysis for Alzheimer’s disease diagnosis using RNN. In: 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018); 2018. p. 1398–1401.
[42] Kazeminia S, Baur C, Kuijper A, van Ginneken B, Navab N, Albarqouni S, et al. GANs for medical image analysis. Artificial Intelligence in Medicine 2020; p. 101938.
[43] Wolterink JM, Dinkla AM, Savenije MH, Seevinck PR, van den Berg CA, Išgum I. Deep MR to CT synthesis using unpaired data. In: International workshop on simulation and synthesis in medical imaging. Springer; 2017. p. 14–23.
[44] Zhao M, Wang L, Chen J, Nie D, Cong Y, Ahmad S, et al. Craniomaxillofacial bony structures segmentation from MRI with deep-supervision adversarial learning. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer; 2018. p. 720–727.
[45] Hu Y, Gibson E, Lee LL, Xie W, Barratt DC, Vercauteren T, et al. Freehand ultrasound image simulation with spatially-conditioned generative adversarial networks. In: Molecular imaging, reconstruction and analysis of moving body organs, and stroke imaging and treatment. Springer; 2017. p. 105–115.
[46] Tom F, Sheet D. Simulating patho-realistic ultrasound images using deep generative networks with adversarial learning. In: 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018). IEEE; 2018. p. 1174–1177.
[47] Moeskops P, Veta M, Lafarge MW, Eppenhof KA, Pluim JP. Adversarial training and dilated convolutions for brain MRI segmentation. In: Deep learning in medical image analysis and multimodal learning for clinical decision support. Springer; 2017. p. 56–64.
[48] Li Z, Wang Y, Yu J. Brain tumor segmentation using an adversarial network. In: International MICCAI Brainlesion Workshop. Springer; 2017. p. 123–132.
[49] Xue Y, Xu T, Zhang H, Long LR, Huang X. SegAN: Adversarial network with multi-scale L1 loss for medical image segmentation. Neuroinformatics 2018;16(3-4):383–392.
[50] Kamnitsas K, Baumgartner C, Ledig C, Newcombe V, Simpson J, Kane A, et al. Unsupervised domain adaptation in brain lesion segmentation with adversarial networks. In: International conference on information processing in medical imaging. Springer; 2017. p. 597–609.
[51] Rezaei M, Harmuth K, Gierke W, Kellermeier T, Fischer M, Yang H, et al. A conditional adversarial network for semantic segmentation of brain tumor. In: International MICCAI Brainlesion Workshop. Springer; 2017. p. 241–252.
[52] Google Cloud Documentation.
[53] Dean J. Recent advances in artificial intelligence and the implications for computer system design. In: Hot Chips; 2017.
[54] Civit-Masot J, Luna-Perejon F, Vicente-Diaz S, Corral JMR, Civit A. TPU cloud-based generalized U-Net for eye fundus image segmentation. IEEE Access 2019;7:142379–142387.
[55] Huot F, Chen YF, Clapp R, Boneti C, Anderson J. High-resolution imaging on TPUs. arXiv preprint arXiv:1912.08063 2019.
[56] Wang YE, Wei GY, Brooks D. Benchmarking TPU, GPU, and CPU platforms for deep learning. arXiv preprint arXiv:1907.10701 2019.
[57] Shi L, Liu W, Zhang H, Xie Y, Wang D. A survey of GPU-based medical image computing techniques. Quantitative imaging in medicine and surgery 2012;2(3):188.
[58] Kalaiselvi T, Sriramakrishnan P, Somasundaram K. Survey of using GPU CUDA programming model in medical image analysis. Informatics in Medicine Unlocked 2017;9:133–144.
[59] Despres P, Jia X. A review of GPU-based medical image reconstruction. Physica Medica 2017;42:76–92.
[60] Feng X, Jiang Y, Yang X, Du M, Li X. Computer vision algorithms and hardware implementations: A survey. Integration 2019;69:309–320.
[61] Shamonin DP, Bron EE, Lelieveldt BP, Smits M, Klein S, Staring M. Fast parallel image registration on CPU and GPU for diagnostic classification of Alzheimer’s disease. Frontiers in neuroinformatics 2014;7:50.
[62] Hosny KM, Darwish MM, Li K, Salah A. Parallel multi-core CPU and GPU for fast and robust medical image watermarking. IEEE Access 2018;6:77212–77225.
[63] Sanaullah A, Yang C, Alexeev Y, Yoshii K, Herbordt MC. Real-time data analysis for medical diagnosis using FPGA accelerated neural networks. BMC bioinformatics 2018;19(18):490.
[64] Ovtcharov K, Ruwase O, Kim JY, Fowers J, Strauss K, Chung ES. Accelerating deep convolutional neural networks using specialized hardware. Microsoft Research Whitepaper 2015;2(11):1–4.
[65] Qiu J, Wang J, Yao S, Guo K, Li B, Zhou E, et al. Going deeper with embedded FPGA platform for convolutional neural network. In: Proceedings of the 2016 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays; 2016. p. 26–35.
[66] Abdelouahab K, Pelcat M, Serot J, Berry F. Accelerating CNN inference on FPGAs: A survey. arXiv preprint arXiv:1806.01683 2018.