Applications of Deep Learning to Neuro-Imaging Techniques
Guangming Zhu1, Bin Jiang1, Liz Tong1, Yuan Xie1, Greg Zaharchuk1, Max Wintermark1*
1
Stanford University, United States
Submitted to Journal:
Frontiers in Neurology
Specialty Section:
Applied Neuroimaging
ISSN:
1664-2295
Article type:
Review Article
Received on:
13 Mar 2019
Accepted on:
26 Jul 2019
Provisional PDF published on:
26 Jul 2019
Citation:
Zhu G, Jiang B, Tong L, Xie Y, Zaharchuk G and Wintermark M (2019) Applications of Deep Learning to
Neuro-Imaging Techniques. Front. Neurol. 10:869. doi:10.3389/fneur.2019.00869
Copyright statement:
© 2019 Zhu, Jiang, Tong, Xie, Zaharchuk and Wintermark. This is an open-access article distributed
under the terms of the Creative Commons Attribution License (CC BY). The use, distribution and
reproduction in other forums is permitted, provided the original author(s) or licensor are credited
and that the original publication in this journal is cited, in accordance with accepted academic
practice. No use, distribution or reproduction is permitted which does not comply with these terms.
This Provisional PDF corresponds to the article as it appeared upon acceptance, after peer-review. Fully formatted PDF
and full text (HTML) versions will be made available soon.
Abstract:
Many clinical applications based on deep learning have been proposed and studied in radiology for classification, risk assessment, segmentation tasks, diagnosis, prognosis, and even prediction of therapy responses. There are many other innovative applications of AI in various technical aspects of medical imaging, particularly applied to the acquisition of images, including removing image artifacts, normalizing/harmonizing images, improving image quality, lowering radiation and contrast dose, and shortening the duration of imaging studies. This article will address this topic and will seek to present an overview of deep learning applied to neuroimaging techniques.
Introduction:
Artificial intelligence (AI) is a branch of computer science that encompasses machine learning,
representation learning, and deep learning1. A growing number of clinical applications based on
machine learning or deep learning and pertaining to radiology have been proposed and studied in
radiology for classification, risk assessment, segmentation tasks, diagnosis, prognosis, and even
prediction of therapy responses2–10. Machine learning and deep learning have also been extensively
used for brain image analysis to devise imaging-based diagnostic and classification systems of
strokes, certain psychiatric disorders, epilepsy, neurodegenerative disorders, and demyelinating
diseases11–17.
Recently, due to the optimization of algorithms, improved computational hardware, and access to large amounts of imaging data, deep learning has demonstrated indisputable superiority over the classic machine learning framework. Deep learning is a class of machine learning that uses artificial neural network architectures that bear resemblance to the structure of human cognitive functions (Fig-1). It is a type of representation learning in which the algorithm learns a composition of features that reflect a hierarchy of structures in the data18. Convolutional neural networks (CNNs) and recurrent neural networks (RNNs) are different types of deep learning methods using artificial neural networks (ANNs).
AI can be applied to a wide range of tasks faced by radiologists (Fig-2). Most initial deep learning applications in neuroradiology have focused on the "downstream" side: using computer vision techniques for the detection and segmentation of anatomical structures and the detection of lesions, such as hemorrhage, stroke, lacunes, microbleeds, metastases, aneurysms, primary brain tumors, and white matter hyperintensities6,9,15,19. On the "upstream" side, we have just begun to realize that there are other innovative applications of AI in various technical aspects of medical imaging, particularly applied to the acquisition of images. A variety of methods for image generation and image enhancement using deep learning have recently been proposed, including removing image artifacts, normalizing/harmonizing images, improving image quality, lowering radiation and contrast dose, and shortening the duration of imaging studies8,9,15.
As RNNs are commonly utilized for speech and language tasks, the deep learning algorithms most applicable to radiology are CNNs, which can be efficiently applied to image segmentation and classification. Instead of using billions of weights to implement full connections, CNNs mimic the mathematical operation of convolution, using convolutional and pooling layers (Fig-1), and significantly reduce the number of weights. CNNs also allow for spatial invariance: for different convolutional layers, multiple kernels can be trained to learn many location-invariant features. Since important features can be learned automatically, information extraction from images in advance of the learning process is not necessary. Therefore, CNNs are relatively easy to apply in clinical practice.
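To make the weight-sharing argument concrete, the following minimal sketch (PyTorch; layer sizes are illustrative assumptions, not drawn from any cited study) contrasts the parameter count of a small convolutional stack with that of a single fully connected layer on the same image:

```python
import torch
import torch.nn as nn

# Minimal sketch (layer sizes illustrative): two convolutional layers with
# pooling, versus one fully connected layer on a 64x64 single-channel image.
class TinyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # 16 shared 3x3 kernels
            nn.ReLU(),
            nn.MaxPool2d(2),  # pooling downsamples and adds no weights
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )

    def forward(self, x):
        return self.features(x)

cnn = TinyCNN()
fc = nn.Linear(64 * 64, 64 * 64)  # a single dense layer on the same image

n_cnn = sum(p.numel() for p in cnn.parameters())
n_fc = sum(p.numel() for p in fc.parameters())
print(f"CNN weights: {n_cnn:,} vs fully connected: {n_fc:,}")
# ~4,800 vs ~16,800,000: the same small kernels slide over every image
# location, which is also what yields the location-invariant features.
```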
There are many challenges related to the acquisition and post-processing of neuroimages, including the risks of radiation exposure and contrast agent exposure, prolonged acquisition time, and limited image resolution. Other challenges pertain, for example, to the expert parameter tuning of scanners, which is always required to optimize reconstruction performance, especially in the presence of sensor non-idealities and noise20. Deep learning has the opportunity to have a significant impact on such issues and challenges, with fewer ethical dilemmas and medicolegal risks compared to applications for diagnosis and treatment decision making21. Finally, these deep learning approaches will make imaging much more accessible, from many perspectives.
Published deep learning studies focused on improving medical imaging techniques are just beginning to enter the medical literature. A PubMed search on computer-aided diagnosis in radiology, machine learning, and deep learning for the year 2018 yielded more than 5,000 articles. The number of publications addressing deep learning as applied to medical imaging techniques is a small fraction of this number. Although many studies are not focused on neuroimaging, their techniques can often be adapted for neuroimaging. This article will address this topic and will seek to present an overview of deep learning applied to neuroimaging techniques.
1. Using deep learning to reduce the risks associated with image acquisition
There are many risks associated with different image acquisitions, such as ionizing radiation exposure and the side effects of contrast agents. Optimizing acquisition parameters with deep learning is crucial to achieve diagnostically acceptable image quality at the lowest possible radiation and/or contrast agent dose.
MRI
Gadolinium-based contrast agents (GBCAs) have become indispensable in routine MR imaging. Though considered safe, GBCAs have been linked with nephrogenic systemic fibrosis, a serious, debilitating, and sometimes life-threatening condition. There is ongoing discussion regarding the documented deposition of gadolinium contrast agents in body tissues including the brain, especially for patients who need repeated contrast administration22. Recent publications have reported gadolinium deposition in the brain tissue, most notably in the dentate nuclei and globus pallidus23,24. This deposition can probably be minimized by limiting the dose of gadolinium used25. Unfortunately, low-dose contrast-enhanced MRI is typically of insufficient diagnostic image quality. Gong et al26 implemented a deep learning model based on an encoder-decoder convolutional neural network (CNN) to obtain diagnostic quality contrast-enhanced MRI with low-dose gadolinium contrast. In this study, sixty patients with brain abnormalities received a 10% low-dose preload (0.01 mmol/kg) of gadobenate dimeglumine before perfusion MR imaging with the full contrast dose (0.1 mmol/kg). Ten cases were used as the training set: pre-contrast MRI and low-dose post-contrast MRI were introduced as inputs, and full-dose post-contrast MRI served as the ground truth. The contrast uptake in the low-dose CE-MRI is noisy but does include contrast information. Through the training, the network learned the guided denoising of the noisy contrast uptake extracted from the difference signal between low-dose and zero-dose MRIs, and then combined them to synthesize a full-dose CE-MRI. The results demonstrated that the deep learning algorithm was able to produce diagnostic quality images with gadolinium doses 10-fold lower than those typically used (Fig-3).
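As an illustration of the training setup just described, here is a minimal sketch (assuming PyTorch; the tiny encoder-decoder and the L1 loss are our stand-ins, not the architecture or loss of Gong et al):

```python
import torch
import torch.nn as nn

# Sketch of the training pairing described above: pre-contrast and 10%-dose
# post-contrast images are stacked as input channels; the full-dose image
# is the target (network and loss are illustrative stand-ins).
class EncoderDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, pre, low):
        x = torch.cat([pre, low], dim=1)    # 2 input channels
        return self.decode(self.encode(x))  # synthesized full-dose image

model = EncoderDecoder()
loss_fn = nn.L1Loss()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative step on random tensors standing in for image batches.
pre, low, full = (torch.randn(4, 1, 64, 64) for _ in range(3))
opt.zero_grad()
loss = loss_fn(model(pre, low), full)  # compare prediction to ground truth
loss.backward()
opt.step()
```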
CT
Computed Tomography (CT) techniques are widely used in clinical practice and involve a
radiation risk. For instance, the radiation dose associated with a head CT is the same as 200 chest
X-rays, or the amount most people would be exposed to from natural sources over seven years.
CT acquisition parameters can be adjusted to reduce the radiation dose, including reducing the kilovoltage peak (kVp), milliampere-seconds (mAs), and gantry rotation time, and increasing the acquisition pitch; however, all these approaches also reduce image quality. Since an insufficient number of photons in the projection domain can lead to excessive quantum noise, the balance between image quality and radiation dose is always a trade-off.
Various image denoising approaches for CT techniques have been developed. Iterative reconstruction is one of them, but it has been used only sparsely, in part due to significant computational costs, time delays between acquisition and reconstruction, and a suboptimal "waxy" appearance of the augmented images27,28. Traditional image processing methods to remove image noise are also limited, because CT data are subject to both non-stationary and non-Gaussian noise processes. Novel denoising algorithms based on deep learning have been studied intensively and have shown impressive potential29. For example, Xie et al30 used a deep learning method based on a GoogLeNet architecture to remove streak artifacts due to missing projections in sparse-view CT reconstruction. The artifacts from low-dose CT imaging were learned by residual learning and then subtracted from the sparse reconstructed image to recover a better image. These reconstructed images are comparable to the full-view projection reconstructed images. Chen et al28,31 applied a residual encoder-decoder CNN, which incorporated a deconvolution network with shortcut ("bypass") connections into a CNN model, to reduce the noise level of CT images. The model learned a feature mapping from low- to normal-dose images. After training, it achieved competitive performance in both qualitative and quantitative aspects when compared with other denoising methods. Kang et al27 applied a CNN model using directional wavelets for low-dose CT reconstruction. Compared to model-based iterative reconstruction methods, this algorithm can remove complex noise patterns from CT images with greater denoising power and faster reconstruction time. Nishio et al32 trained an auto-encoder CNN on pairs of standard-dose (300 mA) and ultra-low-dose (10 mA) CT images, and then used the trained algorithm for patch-based image denoising of ultra-low-dose CT images. The study demonstrated the advantages of this method over block-matching 3D (BM3D) filtering for streak artifacts and other types of noise.
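The residual-learning idea used in several of these studies, training the network to predict the noise or artifact component and subtracting it from the input, can be sketched as follows (an illustrative DnCNN-style layout; the depth, width, and loss are assumptions, not any specific paper's settings):

```python
import torch
import torch.nn as nn

# Residual-learning denoiser sketch: the network is trained to predict the
# noise/streak component of a low-dose image; subtracting that prediction
# from the input recovers the clean image (layer sizes illustrative).
class ResidualDenoiser(nn.Module):
    def __init__(self, depth=5, width=32):
        super().__init__()
        layers = [nn.Conv2d(1, width, 3, padding=1), nn.ReLU()]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU()]
        layers += [nn.Conv2d(width, 1, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, noisy):
        residual = self.net(noisy)  # predicted noise / artifact map
        return noisy - residual     # denoised image

model = ResidualDenoiser()
noisy, clean = torch.randn(2, 1, 1, 64, 64)  # stand-ins for CT patches
loss = nn.MSELoss()(model(noisy), clean)     # supervise the denoised output
```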
Many other deep learning-based approaches have been proposed for radiation-restricted applications, such as adversarially trained networks, sharpness detection networks, 3D dictionary learning, and discriminative prior-prior image constrained compressed sensing33–36. Reconstruction algorithms to denoise the output low-quality images or remove artifacts have been studied intensively27,28,30–32. Gupta et al37 implemented a relaxed version of projected gradient descent with a CNN for sparse-view CT reconstruction, showing a significant improvement over total variation-based regularization and dictionary learning for both noiseless and noisy measurements. This framework can also be used for super-resolution, accelerated MRI, or deconvolution. Yi et al34 used an adversarially trained network and a sharpness detection network to achieve sharpness-aware low-dose CT denoising.
Since matched low- and routine-dose CT image pairs are difficult to obtain in multiphase CT, Kang et al38 proposed a deep learning framework based on an unsupervised learning technique to solve this problem. They applied a cycle-consistent adversarial denoising network to learn the mapping between low- and high-dose cardiac phases. Their network did not introduce artificial features into the denoised images.
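The cycle-consistency constraint that makes such unpaired training possible can be written compactly (a standard formulation with our notation; the cited work adds adversarial and other loss terms on top of it):

```latex
% G: low-dose -> routine-dose,  F: routine-dose -> low-dose
\mathcal{L}_{\mathrm{cyc}}(G,F) =
    \mathbb{E}_{x \sim \mathrm{low}} \bigl\| F(G(x)) - x \bigr\|_1
  + \mathbb{E}_{y \sim \mathrm{routine}} \bigl\| G(F(y)) - y \bigr\|_1
```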
Sparse-data CT
The reconstruction of sparse-data CT often compromises structural details and suffers from notorious blocky artifacts. Chen et al39 implemented a learned experts' assessment-based reconstruction network (LEARN) for sparse-data CT. The network was evaluated with the Mayo Clinic low-dose challenge image data set and proved more effective than other methods in terms of artifact reduction, feature preservation, and computational speed.
PET
Radiation exposure is a common concern in PET imaging. To minimize this potential risk, efforts have been made to reduce the amount of radiotracer used in PET imaging. However, low-dose PET is inherently noisy and has poor image quality. Xiang et al40 combined 4-fold reduced acquisition time 18F-fluorodeoxyglucose (FDG) PET images and co-registered T1-weighted MRI images to reconstruct standard-dose PET. Since PET image quality is, to a first approximation, linear in the number of true coincidence events recorded by the camera, such a method could also be applied to reduced-dose PET. Kaplan et al41 introduced a deep learning model consisting of an estimator network and a generative adversarial network (GAN). After training with simulated 10x lower dose PET data, the networks reconstructed standard-dose images while preserving edge, structural, and textural details.
Using a simultaneous PET/MRI scanner, Xu et al42 proposed an encoder-decoder residual deep network with concatenated skip connections to reconstruct high-quality brain FDG PET images in patients with glioblastoma multiforme using only 0.5% of the normal dose of radioactive tracer. To achieve this, they also included T1-weighted and T2-FLAIR images as inputs to the model, taking advantage of the higher contrast and resolution of the MR images. Furthermore, they employed a "2.5D" model in which adjacent slice information is used to improve the prediction of a central slice. These modifications significantly reduced noise while robustly preserving resolution and detailed structures, with quality comparable to normal-dose PET images.
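The "2.5D" input assembly can be sketched in a few lines (PyTorch; the choice of neighbors and the volume sizes are illustrative assumptions):

```python
import torch

# "2.5D" input sketch: the low-dose PET slice, its neighbors above and
# below, plus co-registered T1 and T2-FLAIR slices are stacked as channels,
# so the network predicting the central slice sees through-plane context
# and the higher-resolution MR anatomy.
def make_input(pet_volume, t1_volume, flair_volume, z):
    channels = [
        pet_volume[z - 1],  # adjacent slice below
        pet_volume[z],      # central low-dose slice
        pet_volume[z + 1],  # adjacent slice above
        t1_volume[z],       # co-registered T1-weighted slice
        flair_volume[z],    # co-registered T2-FLAIR slice
    ]
    return torch.stack(channels, dim=0)  # shape: (5, H, W)

pet, t1, flair = (torch.randn(16, 64, 64) for _ in range(3))
x = make_input(pet, t1, flair, z=8)
print(x.shape)  # torch.Size([5, 64, 64])
```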
These general principles were also applied by Chen et al43 to simulated 1% dose 18F-florbetaben PET imaging. This amyloid tracer is used clinically in the setting of dementia of unknown origin, where a "positive" amyloid study is compatible with the diagnosis of Alzheimer's disease, while a negative study essentially rules out the diagnosis44,45. Again, simultaneous PET/MRI was used to acquire co-registered contemporaneous T1-weighted and T2-FLAIR MR images, which were combined as input along with the 1% undersampled PET image. They showed the crucial benefit of including MR images in terms of retaining spatial resolution, which is critical for assessing amyloid scans. They found that clinical readers evaluating the synthesized full-dose images did so with accuracy similar to their own intra-reader reproducibility. More recently, the same group has demonstrated that the trained model can be applied to true (i.e., not simulated) ultra-low-dose diagnostic PET/MR images (Fig-4).
2. Accelerate imaging acquisition and reconstruct under-sampled k-space
Image acquisition can be time-consuming. Reducing the number of raw data samples or subsampling k-space data can speed up the acquisition, but results in suboptimal images. Deep learning-based reconstruction methods can produce good images from under-sampled datasets.
Compared to most other imaging modalities, MRI acquisition is substantially slower. The longer acquisition time limits the utility of MRI in emergency settings and often results in more motion artifacts. It also contributes to its high cost. Acquisition time can be reduced by simply reducing the number of raw data samples. However, conventional reconstruction methods for these sparse data often produce suboptimal images. Newer reconstruction methods deploying deep learning have the ability to produce images of good quality from these under-sampled data acquired with shorter acquisition times46. This approach has been applied to Diffusion Kurtosis Imaging (DKI) and Neurite Orientation Dispersion and Density Imaging (NODDI). DKI and NODDI are advanced diffusion sequences that can characterize tissue microstructure but require long acquisition times to obtain the required data points. Using a combination of q-space deep learning and simultaneous multi-slice imaging, Golkov et al47 were able to reconstruct DKI from only 12 data points and NODDI from only 8 data points, achieving an unprecedented 36-fold scan time reduction for quantitative diffusion MRI. These results suggest that there is a considerable amount of information buried within the limited number of data points that can be retrieved with deep learning methods.
Another way to reduce acquisition time is to subsample k-space data. However, naive undersampling of k-space will cause aliasing artifacts once the under-sampling rate violates the Nyquist criterion. Hyun et al48 trained a deep learning network, using pairs of subsampled and fully sampled k-space data as inputs and outputs respectively, to reconstruct images from subsampled data. They reinforced the subsampled k-space data with a few low-frequency k-space lines to improve image contrast. Their network was able to generate diagnostic quality images from sampling only 29% of k-space.
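A minimal sketch of this sampling scheme (NumPy; the sampling fraction and the size of the preserved low-frequency band are illustrative assumptions):

```python
import numpy as np

# Sketch: randomly subsample phase-encode lines of k-space, but always keep
# a band of low-frequency lines, which carry most of the image contrast.
def undersample_kspace(kspace, keep_frac=0.29, center_frac=0.06, seed=None):
    rng = np.random.default_rng(seed)
    n_lines = kspace.shape[0]
    mask = rng.random(n_lines) < keep_frac  # random line selection
    half = int(n_lines * center_frac / 2)
    mask[n_lines // 2 - half: n_lines // 2 + half] = True  # keep the center
    return kspace * mask[:, None], mask

img = np.random.randn(256, 256)                # stand-in for an MR image
kspace = np.fft.fftshift(np.fft.fft2(img))     # centered k-space
sub_kspace, mask = undersample_kspace(kspace, seed=0)
zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(sub_kspace)))
print(f"sampled {mask.mean():.0%} of lines")   # aliased image -> CNN input
```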
Lee et al49 investigated deep residual networks to remove global artifacts from under-sampled k-space data. Deep residual networks are a special type of network that allows stacking of multiple layers to create a very deep network without degrading the accuracy of training. Compared to non-AI-based fast-acquisition techniques such as compressed sensing MRI (which randomly subsamples k-space) and parallel MRI (which uses multiple receiver coils), Lee's technique achieved better artifact reduction and much shorter computation time.
Deep learning techniques for acceleration and reconstruction are not limited to static imaging, but are also applicable to dynamic imaging, such as cardiac MRI. Due to the inherent redundancy within adjacent slices and repeated cycles in dynamic imaging, the combination of under-sampling and neural network-based reconstruction seems to be an ideal solution. Schlemper et al50 trained a CNN to learn the redundancies and the spatio-temporal correlations from under-sampled data and reconstructed dynamic sequences of 2D cardiac MR images. Their CNN outperformed traditional, carefully handcrafted algorithms in terms of both reconstruction quality and speed. Similarly, Majumdar51 addressed the problem of real-time dynamic MRI reconstruction by using a stacked denoising autoencoder, producing superior images in shorter time when compared to compressed sensing-based and Kalman filtering techniques.
Hammernik et al52 introduced an efficient trainable formulation, termed a variational network, for accelerated parallel imaging-based MRI reconstruction. The reconstruction time was 193 ms on a single graphics card, and the MR images preserved their natural appearance, as well as pathologies that were not included in the training data set. Chen et al53 also developed a deep learning reconstruction approach based on a variational network to improve the reconstruction speed and quality of highly undersampled variable-density single-shot fast spin-echo imaging. This approach enables reconstruction speeds of approximately 0.2 s per section, allowing real-time image reconstruction for practical clinical deployment. The study showed improved image quality with higher perceived signal-to-noise ratio and improved sharpness when compared with conventional parallel imaging and compressed sensing reconstruction. Yang et al54 proposed ADMM-Net, a deep architecture defined over a data flow graph derived from the iterative procedures of the Alternating Direction Method of Multipliers (ADMM) algorithm, to optimize a compressed sensing-based MRI model. The results suggested high reconstruction accuracy with fast computational speed.
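The compressed sensing MRI model that such networks unroll can be summarized as follows (the standard formulation, with our notation; ADMM-Net learns the parameters of the unrolled iterations rather than hand-tuning them):

```latex
% y: undersampled k-space data, A: undersampled Fourier operator,
% \Psi: sparsifying transform, \lambda: regularization weight
\hat{x} = \arg\min_{x}\; \tfrac{1}{2}\|Ax - y\|_2^2 + \lambda \|\Psi x\|_1

% ADMM splitting with z = \Psi x and scaled dual variable u;
% \mathcal{S}_{\tau}(v) = \operatorname{sign}(v)\max(|v| - \tau, 0)
% is soft thresholding:
x^{k+1} = \arg\min_{x}\; \tfrac{1}{2}\|Ax - y\|_2^2
          + \tfrac{\rho}{2}\|\Psi x - z^{k} + u^{k}\|_2^2
z^{k+1} = \mathcal{S}_{\lambda/\rho}\bigl(\Psi x^{k+1} + u^{k}\bigr)
u^{k+1} = u^{k} + \Psi x^{k+1} - z^{k+1}
```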
Several studies have also used generative adversarial networks to model distributions (low-dimensional manifolds) and generate natural images (high-dimensional data)35,55. Mardani et al56 proposed a framework using generative adversarial networks (GAN) to model the low-dimensional manifold of high-quality MRI, combined with compressed sensing, a method known as GANCS. It offers reconstruction times of under a few milliseconds and higher quality images with improved fine texture, based on multiple reader studies.
3. Artifact Reduction
Image denoising is an important pre-processing step in medical image analysis, especially for low-dose techniques. Research on computer algorithms for image denoising has been conducted for several decades, with varying success. Many approaches based on machine learning57 or deep learning58,59 have been successfully implemented for the denoising of medical images.
Standard reconstruction approaches involve approximating the inverse function with multiple ad hoc stages in a signal processing chain whose composition depends on the details of each acquisition strategy and requires parameter tuning to optimize image quality. Zhu et al20 implemented a unified framework called AUTOMAP, using a fully connected deep neural network to reconstruct images from a variety of MRI acquisition strategies. This method is agnostic to the exact sampling strategy used, being trained on pairs of sensor data and ground truth images. It showed good performance for a wide range of k-space sampling methods, including Cartesian, spiral, and radial image acquisitions. The trained model also showed superior immunity to noise and reconstruction artifacts compared with conventional handcrafted methods. Manjón et al59 used a two-stage strategy with deep learning for noise reduction. The first stage removes the noise using a CNN without estimating the local noise level present in the images. The filtered image is then used as a guide image within a rotationally invariant non-local means filter. This approach showed competitive results for all the studied MRI acquisitions.
n l
images) with multi-lateral guided filters and deep networks to boost the SNR and resolution of
a
ASL65. They also showed that the network could be trained with a relatively small number of
o
si
studies and that it generalized to stroke patients (Fig-5).
r o vi
P
Spurious Noise
Proton MR spectroscopic imaging can measure endogenous metabolite concentrations in vivo. The Cho/NAA ratio has been used to characterize brain tumors, such as glioblastoma. One challenge is poor spectral quality due to artifacts caused by magnetic field inhomogeneities, subject movement, or improper water or lipid suppression. Gurbani et al66 applied a tiled CNN, tuned by a Bayesian optimization technique, to analyze frequency-domain spectra and detect artifacts. This CNN algorithm achieved high sensitivity and specificity, with an AUC of 0.951, when compared with the consensus decision of MRS experts. One particular type of MRS artifact is the ghost or spurious echo artifact, caused by insufficient spoiling gradient power. Kyathanahally et al67 implemented multiple deep learning algorithms, including fully connected neural networks, deep CNNs, and stacked what-where auto-encoders, to detect and correct spurious echo signals. After training on a large dataset with and without spurious echoes, the accuracy of the algorithm was almost 100%.
Motion Artifact
MRI is susceptible to image artifacts, including motion artifacts due to the relatively long acquisition time. Küstner et al68 proposed a reference-free approach to automatically detect the presence of motion artifacts in MRI images. A CNN classifier was trained to assess motion artifacts on a per-patch basis, and then used to localize and quantify the motion artifacts on a test data set. The accuracy of motion detection reached 97%/100% in the head and 75%/100% in the abdomen. There are several other studies on detecting or reducing motion artifacts69–71. Automating the process of motion detection can lead to more efficient scanner use, where corrupted images are re-acquired without relying on the subjective judgement of technologists.
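The per-patch classification-then-aggregation strategy can be sketched as follows (PyTorch; the classifier and patch size are illustrative assumptions, not the architecture of Küstner et al):

```python
import torch
import torch.nn as nn

# Sketch: a small CNN scores each image patch for motion artifact, and the
# per-patch probabilities are reassembled into a coarse artifact map that
# localizes and quantifies the corruption (all sizes illustrative).
classifier = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, 1), nn.Sigmoid(),  # P(motion) per patch
)

def artifact_map(image, patch=32):
    h, w = image.shape
    scores = torch.zeros(h // patch, w // patch)
    with torch.no_grad():
        for i in range(h // patch):
            for j in range(w // patch):
                p = image[i*patch:(i+1)*patch, j*patch:(j+1)*patch]
                scores[i, j] = classifier(p[None, None]).item()
    return scores  # coarse per-patch motion probability map

img = torch.randn(128, 128)     # stand-in for an MR slice
print(artifact_map(img).shape)  # torch.Size([4, 4])
```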
Metal Artifact
Artifacts resulting from metallic objects have been a persistent problem in computed tomography (CT) images over the last four decades. Gjesteby et al72 combined a CNN with the NMAR (normalized metal artifact reduction) algorithm to reduce metal streaks in critical image regions. The strategy is able to map metal-corrupted images to artifact-free monoenergetic images, achieving additional correction on top of NMAR for improved image quality.
Crosstalk Noise
Attenuation correction is a critical procedure in PET imaging for the accurate quantification of radiotracer distribution. For PET/CT, the attenuation coefficients (μ) are derived from the Hounsfield units of the CT portion of the examination. For PET/MRI, attenuation coefficients (μ) have been estimated from segmentation- and atlas-based algorithms. Maximum-likelihood reconstruction of activity and attenuation (MLAA) is a newer method that generates activity images and attenuation coefficients simultaneously from emission data only, without the need for a concurrent CT or MRI. However, MLAA suffers from crosstalk artifacts. Hwang et al73 tested three different CNN architectures (a convolutional autoencoder (CAE), U-Net, and a hybrid of the two) to mitigate the crosstalk problem in MLAA reconstruction. Their CNNs generated less noisy and more uniform μ-maps and better resolved air cavities and bones, mitigating the crosstalk problem.
Other studies have used deep learning to create CT-like images from MRI, often but not always for the purposes of PET/MRI attenuation correction. Nie et al74 applied an auto-context model to implement a context-aware deep convolutional GAN that generates a target image from a source image, demonstrating its use in predicting head CT images from T1-weighted MR images. This CT could be used for radiation planning or attenuation correction. Han75 proposed a deep CNN with 27 convolutional layers interleaved with pooling and unpooling layers. Similar to Nie et al, the network was trained to learn a direct end-to-end mapping from MR images to their corresponding CTs. This method produced accurate synthetic CT results in near real time (9 seconds) from conventional, single-sequence MR images. Other deep learning networks, such as the deep embedding CNN by Xiang et al76, Dixon-VIBE deep learning by Torrado-Carvajal et al77, a GAN with two synthesis CNNs and two discriminator CNNs by Wolterink et al78, as well as deep CNNs based on the U-Net architecture by Leynes et al79 and Roy et al80, have also been proposed to generate pseudo-CT from MRI.
Liu et al81 showed that it was possible to train a network to transform T1-weighted head images into "pseudo-CT" images that could be used for attenuation calculations. They showed that the errors in PET SUV could be reduced to less than 1% for most areas of the brain, about a 5-fold improvement over existing techniques such as atlas-based and 2-point Dixon methods. More recently, the same group has shown that it is possible to take non-attenuation-corrected PET brain images and, using attenuation-corrected images as the ground truth, directly predict one from the other, without the need to calculate an attenuation map82. This latter method could enable the development of new PET scanners that do not require either CT or MR imaging to be acquired, and which might be cheaper to site and operate.
Random Noise
Medical fluoroscopy video is also sensitive to noise. Angiography is one medical procedure using live video, and the video quality is highly important. Speed is the main limitation of conventional denoising algorithms such as BM3D. Sadda et al83 applied a deep neural network to remove Gaussian noise, speckle noise, and salt-and-pepper noise from fluoroscopy images. The final output live video could meet and even exceed the efficacy of BM3D, with a 20-fold speedup.
4. Synthetic Image Production
Each imaging modality (X-ray, CT, MRI, ultrasound), as well as each MR sequence, has different contrast and noise mechanisms and hence captures different characteristics of the underlying anatomy. The intensity transformation between any two modalities or sequences is highly nonlinear. For example, Vemulapalli et al84 used a deep network to predict T1 images from T2 images. With deep learning, medical image synthesis can produce images of a desired modality without performing an actual scan, such as creating CT images from MRI data. This can be of benefit because radiation can be avoided.
Ben-Cohen et al85 explored the use of full CNN and conditional GAN to reconstruct PET
images from CT images. The deep learning system was tested for detection of malignant tumors
in the live region. The and the results suggested a true positive ratio of 92.3% (24/26) and false
positive ratio of 25% (2/8). This is surprising given thatbecause no metabolic activity is expected
n al
to be present on CT images. I; it must be assumed that the CT features somehow contain
information about tumor metabolism. In a reverse strategy, Choi et al86 generated structural MR
o
si
images from amyloid PET images using generative adversarial networks. Finally, Li et al87 used a
r o vi
3D CNN architecture to predict missing PET data from MRI, using the ADNI study, and found it
to be a better way of estimating missing data than currently existing methods.
P
High-field MRI
More recently, AI-based methods, such as deep CNNs, can take a low-resolution image as input and output a high-resolution image88, using three operations: "patch extraction and representation," "non-linear mapping," and "reconstruction"89. Higher (or super-) resolution MRI can be acquired on scanners with higher magnetic fields, such as advanced 7T MRI scanners, which involve much higher instrumentation and operational costs. As an alternative, many studies have attempted to produce super-resolution MRI images from low-resolution MRI images. Bahrami et al90 trained a CNN that takes the appearance and anatomical features of 3T MRI images as input and outputs the corresponding 7T-like MRI patches. Lyu et al91 adapted two deep neural networks, a conveying path-based convolutional encoder-decoder with VGG (GAN-CPCE) and a GAN constrained by the identical, residual, and cycle learning ensemble (GAN-CIRCLE), for super-resolution MRI from low-resolution MRI. Both networks achieved a 2-fold resolution improvement. Chaudhari et al92 implemented a 3D CNN entitled DeepResolve to learn residual-based transformations between high-resolution and lower-resolution thick-slice images of musculoskeletal MRI. This algorithm maintained diagnostic image quality with 3-fold down-sampling. Similar methods have recently been applied to T1-weighted brain imaging, a sequence that requires a long acquisition time to obtain adequate resolution for cortical thickness mapping (Fig-6).
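A sketch of the three operations named above, in an SRCNN-style layout (PyTorch; the 9-1-5 kernel sizes follow the common convention and are illustrative, not a specific trained model):

```python
import torch.nn as nn

# SRCNN-style sketch of the three operations named above (illustrative).
super_resolver = nn.Sequential(
    nn.Conv2d(1, 64, kernel_size=9, padding=4),  # patch extraction and representation
    nn.ReLU(),
    nn.Conv2d(64, 32, kernel_size=1),            # non-linear mapping
    nn.ReLU(),
    nn.Conv2d(32, 1, kernel_size=5, padding=2),  # reconstruction
)
# Input: a low-resolution image upsampled (e.g., bicubic) to the target
# grid; output: the restored high-resolution estimate on the same grid.
```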
Synthetic FLAIR
Synthetic MRI has become more and more clinically feasible, but synthetic FLAIR images are usually of lower quality than conventional FLAIR images93. Using conventional FLAIR images as the target, Hagiwara et al94 applied a conditional GAN to generate improved FLAIR images from raw synthetic MRI data. This work created improved synthetic FLAIR images with reduced swelling artifacts and granular artifacts in the CSF, while preserving lesion contrast. More recently, Wang et al95 showed that improvements in image quality for all synthetic MR sequences could be obtained using a single model for multi-contrast synthesis along with a GAN discriminator, dubbed "OneforAll." This offered superior performance to a standard U-Net architecture trained on only one image contrast at a time. Readers scored the deep learning-based images as equivalent in quality to the conventional MR sequences for all except proton-density images, and scored the deep learning-based T2-FLAIR images as superior to the conventional images, due to the inherent noise suppression of the training process.
5. Image registration
Deformable image registration is critical in clinical studies, as it is necessary to establish accurate anatomical correspondences. Intensity-based feature selection methods are widely used in medical image registration, but do not guarantee the exact correspondence of anatomic sites. Hand-engineered features, such as Gabor filters and geometric moment invariants, are also widely used, but do not work well for all types of image data. Recently, many AI-based methods have been used to perform image registration. Deep learning may be more promising than other learning-based methods because it can be unsupervised, requiring no prior knowledge or hand-crafted features, and because it uses a hierarchical deep architecture to infer complex non-linear relationships quickly and efficiently96.
Wu et al96 applied a convolutional stacked auto-encoder to identify compact and highly discriminative features in observed imaging data. They used a stacked two-layer CNN to directly learn hierarchical basis filters from a number of image patches on MR brain images; the resulting coefficients can then be applied as a morphological signature for correspondence detection to achieve promising registration results97. Registration of 2D/3D images is one of the keys to enabling image-guided procedures, including advanced image-guided radiation therapy. Slow computation and a small capture range, defined as the distance at which 10% of the registrations fail, are the two major limitations of existing intensity-based 2D/3D registration approaches. Miao et al98 proposed a CNN regression approach, referred to as Pose Estimation via Hierarchical Learning (PEHL), to achieve real-time 2D/3D registration with a large capture range and high accuracy. Their results showed an increase in capture range of 99%-306% and in success rate of 5%-27.8%. The running time was approximately 0.1 second, about one tenth that of other intensity-based methods. This CNN regression approach achieved significantly higher computational efficiency, making it capable of real-time 2D/3D registration. Neylon et al99 presented a method based on a deep neural network for the automated quantification of deformable image registration performance. This neural network was able to quantify deformable image registration error to within a single voxel for 95% of the sub-volumes examined. Other studies include fast predictive image registration with a deep encoder-decoder network based on a Large Deformation Diffeomorphic Metric Mapping model100.
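The core of such CNN-based registration regression can be sketched as follows (PyTorch; the network and the rigid six-parameter pose are illustrative assumptions, not the PEHL architecture itself):

```python
import torch
import torch.nn as nn

# Sketch of CNN regression for 2D/3D registration: given the current
# digitally reconstructed radiograph (DRR) and the fixed X-ray image, a CNN
# regresses the residual rigid-pose update (3 rotations, 3 translations).
pose_regressor = nn.Sequential(
    nn.Conv2d(2, 16, 5, stride=2, padding=2), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 6),  # delta-pose: (rx, ry, rz, tx, ty, tz)
)

drr, xray = torch.randn(2, 1, 1, 128, 128)  # stand-ins for image pairs
delta_pose = pose_regressor(torch.cat([drr, xray], dim=1))
print(delta_pose.shape)  # torch.Size([1, 6]); applied iteratively in practice
```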
6. Quality Analysis
Quality control is crucial for accurate medical imaging measurement, but it is a time-consuming process. Deep learning-based automatic assessment may be more objective and efficient. Chen et al101 applied a CNN to predict whether CT scans meet the minimal image quality threshold for diagnosis. Due to the relatively small number of cases, this deep learning network had fair performance, with an accuracy of 0.76 and an AUC of 0.78. Wu et al102 designed a computerized fetal ultrasound quality assessment (FUIQA) scheme with two deep CNNs (L-CNN and C-CNN): the L-CNN finds the region of interest, while the C-CNN evaluates the image quality.
7. Challenges of deep learning applied to neuroimaging techniques
In summary, deep learning is a machine learning method based on artificial neural networks (ANNs) that encompasses supervised, unsupervised, and semi-supervised learning and uses different forms of networks, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs). As RNNs are commonly utilized for speech and language tasks, the deep learning algorithms most applicable to radiology are CNNs, which can be efficiently applied to image segmentation and classification. Instead of using billions of weights to implement full connections, CNNs mimic the mathematical operation of convolution, using convolutional and pooling layers (Fig-1), and significantly reduce the number of weights. CNNs also allow for spatial invariance: for different convolutional layers, multiple kernels can be trained to learn many location-invariant features. Since important features can be learned automatically, information extraction from images in advance of the learning process is not necessary. Therefore, CNNs are relatively easy to apply in clinical practice.
Despite the promises made by many studies, the reliable application of deep learning to neuroimaging remains in its infancy, and many challenges remain.
The first challenge is overfitting. Training a complex classifier with a small dataset always carries the risk of overfitting. Deep learning models tend to fit the data exceptionally well, but this does not mean that they generalize well. Many studies have used different strategies to reduce overfitting, including regularization103, early stopping104, and dropout105 (see the sketch at the end of this section). While overfitting can be evaluated by the performance of the algorithm on a separate test data set, the algorithm may not perform well on similar images acquired in different centers, on different scanners, or with different patient demographics. Larger data sets from different centers are typically acquired in different ways using different scanners and protocols, with subtly different image features, leading to poor performance21. Consequently, data augmentation without standard criteria cannot appropriately address the issues encountered with small datasets.
Overcoming this problem, known as "brittle AI," is an important area of research if these methods are to be used widely. Deep learning is also an intensely data-hungry technology: it requires a very large number of well-labeled examples to achieve accurate classification and to validate its performance for clinical implementation. Because upstream applications such as image quality improvement essentially learn from many predictions within each image, the requirements for large datasets are not as severe as for classification algorithms (where only one learning data point is available per person). Nonetheless, building large, public, labeled medical image datasets is important, while privacy concerns, costs, assessment of ground truth, and the accuracy of the labels remain stumbling blocks18. One advantage of image acquisition applications is that the data are in some sense already labeled, with the fully sampled or high-dose images playing the role of labels in classification tasks. Besides the ethical and legal challenges, the difficulty of physiologically or mechanistically interpreting the results of deep learning is unsettling to some. Deep networks are "black boxes" into which data are input and from which an output prediction, whether a classification or an image, is produced106. All deep learning algorithms operate in higher dimensions than can be directly visualized by the human mind, a problem that has been coined "The Mythos of Model Interpretability"107. Estimates of the network's uncertainty in prediction would be helpful to better interpret the images produced.
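As referenced earlier in this section, a minimal sketch of the three cited anti-overfitting strategies (PyTorch; the model, data, and thresholds are illustrative stand-ins):

```python
import torch
import torch.nn as nn

# Sketch of the three anti-overfitting strategies cited above: dropout,
# weight-decay (L2) regularization, and early stopping on a validation loss.
model = nn.Sequential(
    nn.Linear(256, 128), nn.ReLU(),
    nn.Dropout(p=0.5),                        # dropout
    nn.Linear(128, 2),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3,
                       weight_decay=1e-4)     # L2 regularization

def run_batch(train):  # stand-in for real training/validation loaders
    x, y = torch.randn(32, 256), torch.randint(0, 2, (32,))
    loss = nn.CrossEntropyLoss()(model(x), y)
    if train:
        opt.zero_grad()
        loss.backward()
        opt.step()
    return loss.item()

best_val, patience, bad_epochs = float("inf"), 5, 0
for epoch in range(100):
    model.train()
    run_batch(train=True)
    model.eval()
    with torch.no_grad():
        val_loss = run_batch(train=False)
    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0
        torch.save(model.state_dict(), "best.pt")  # keep best checkpoint
    else:
        bad_epochs += 1
        if bad_epochs >= patience:            # early stopping
            break
```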
Conclusion
Although deep learning techniques in medical imaging are still in their initial stages, they have been enthusiastically applied to imaging techniques, with many inspiring advances. Deep learning algorithms have revolutionized computer vision research and driven advances in the analysis of radiologic images. Upstream applications to image quality and value improvement are just beginning to enter the consciousness of radiologists, and will have a big impact on making imaging faster, safer, and more accessible for our patients.
References
1. Jiang F, Jiang Y, Zhi H, et al. Artificial intelligence in healthcare: past, present and future.
Stroke Vasc Neurol. 2017;2(4):230-243. doi:10.1136/svn-2017-000101.
2. Mayo RC, Leung J. Artificial intelligence and deep learning – Radiology’s next frontier?
Clin Imaging. 2018;49(November 2017):87-88. doi:10.1016/j.clinimag.2017.11.007.
3. Liew C. The future of radiology augmented with Artificial Intelligence: A strategy for
success. Eur J Radiol. 2018;102(March):152-156. doi:10.1016/j.ejrad.2018.03.019.
4. Choy G, Khalilzadeh O, Michalski M, et al. Current Applications and Future Impact of
Machine Learning in Radiology. Radiology. 2018;288(2):318-328.
doi:10.1148/radiol.2018171820.
5. Nichols JA, Herbert Chan HW, Baker MAB. Machine learning: applications of artificial intelligence to imaging and diagnosis. Biophys Rev. September 2018. doi:10.1007/s12551-018-0449-9.
6. Savadjiev P, Chong J, Dohan A, et al. Demystification of AI-driven medical image interpretation: past, present and future. Eur Radiol. August 2018. doi:10.1007/s00330-018-5674-x.
7. Giger ML. Machine Learning in Medical Imaging. J Am Coll Radiol. 2018;15(3):512-520. doi:10.1016/j.jacr.2017.12.028.
8. Hosny A, Parmar C, Quackenbush J, Schwartz LH, Aerts HJWL. Artificial intelligence in radiology. Nat Rev Cancer. 2018;18(8):500-510. doi:10.1038/s41568-018-0016-5.
9. McBee MP, Awan OA, Colucci AT, et al. Deep Learning in Radiology. Acad Radiol. 2018;25(11):1472-1480. doi:10.1016/j.acra.2018.02.018.
10. Fazal MI, Patel ME, Tye J, Gupta Y. The past, present and future role of artificial
intelligence in imaging. Eur J Radiol. 2018;105(May):246-250.
doi:10.1016/j.ejrad.2018.06.020.
11. Kamal H, Lopez V, Sheth SA. Machine Learning in Acute Ischemic Stroke
Neuroimaging. Front Neurol. 2018;9(November):7-12. doi:10.3389/fneur.2018.00945.
12. Mateos-Pérez JM, Dadar M, Lacalle-Aurioles M, Iturria-Medina Y, Zeighami Y, Evans
AC. Structural neuroimaging as clinical predictor: A review of machine learning
applications. NeuroImage Clin. 2018;20:506-522. doi:10.1016/j.nicl.2018.08.019.
13. Feng R, Badgeley M, Mocco J, Oermann EK. Deep learning guided stroke management: a
review of clinical applications. J Neurointerv Surg. 2018;10(4):358-362.
doi:10.1136/neurintsurg-2017-013355.
14. Davatzikos C. Machine learning in neuroimaging: Progress and challenges. Neuroimage.
October 2018. doi:10.1016/j.neuroimage.2018.10.003.
15. Zaharchuk G, Gong E, Wintermark M, Rubin D, Langlotz CP. Deep Learning in
Neuroradiology. Am J Neuroradiol. 2018;39(10):1776-1784. doi:10.3174/ajnr.A5543.
16. Middlebrooks EH, Ver Hoef L, Szaflarski JP. Neuroimaging in Epilepsy. Curr Neurol
Neurosci Rep. 2017;17(4):32. doi:10.1007/s11910-017-0746-x.
17. Plis SM, Hjelm DR, Salakhutdinov R, et al. Deep learning for neuroimaging: a validation
study. Front Neurosci. 2014;8:229. doi:10.3389/fnins.2014.00229.
18. Chartrand G, Cheng PM, Vorontsov E, et al. Deep Learning: A Primer for Radiologists.
RadioGraphics. 2017;37(7):2113-2131. doi:10.1148/rg.2017170077.
19. Tang A, Tam R, Cadrin-Chênevert A, et al. Canadian Association of Radiologists White Paper on Artificial Intelligence in Radiology. Can Assoc Radiol J. 2018;69(2):120-135. doi:10.1016/j.carj.2018.02.002.
20. Zhu B, Liu JZ, Cauley SF, Rosen BR, Rosen MS. Image reconstruction by domain-transform manifold learning. Nature. 2018;555:487-492. doi:10.1038/nature25988.
21. Pesapane F, Volonté C, Codari M, Sardanelli F. Artificial intelligence as a medical device in radiology: ethical and regulatory issues in Europe and the United States. Insights Imaging. 2018;9(5):745-753. doi:10.1007/s13244-018-0645-y.
22. Ramalho J, Ramalho M. Gadolinium Deposition and Chronic Toxicity. Magn Reson Imaging Clin N Am. 2017;25(4):765-778. doi:10.1016/j.mric.2017.06.007.
23. Gulani V, Calamante F, Shellock FG, Kanal E, Reeder SB. Gadolinium deposition in the
brain: summary of evidence and recommendations. Lancet Neurol. 2017;16(7):564-570.
doi:10.1016/S1474-4422(17)30158-8.
24. Kanda T, Nakai Y, Oba H, Toyoda K, Kitajima K, Furui S. Gadolinium deposition in the
brain. Magn Reson Imaging. 2016;34(10):1346-1350. doi:10.1016/j.mri.2016.08.024.
25. Khawaja AZ, Cassidy DB, Al Shakarchi J, McGrogan DG, Inston NG, Jones RG.
Revisiting the risks of MRI with Gadolinium based contrast agents—review of literature
and guidelines. Insights Imaging. 2015;6(5):553-558. doi:10.1007/s13244-015-0420-2.
26. Gong E, Pauly JM, Wintermark M, Zaharchuk G. Deep learning enables reduced
gadolinium dose for contrast-enhanced brain MRI. J Magn Reson Imaging.
2018;48(2):330-340. doi:10.1002/jmri.25970.
27. Kang E. A Deep Convolutional Neural Network using Directional Wavelets for Low-dose X-ray CT Reconstruction. Med Phys. 2017;44(October 2016):1-32. doi:10.1002/mp.12344.
28. Chen H, Zhang Y, Zhang W, et al. Low-dose CT via convolutional neural network. Biomed Opt Express. 2017;8(2):679. doi:10.1364/BOE.8.000679.
29. Zhang K, Zuo W, Chen Y, Meng D, Zhang L. Beyond a Gaussian denoiser: Residual
learning of deep CNN for image denoising. IEEE Trans Image Process. 2017;26(7):3142-
3155. doi:10.1109/TIP.2017.2662206.
30. Xie S, Zheng X, Chen Y, et al. Artifact Removal using Improved GoogLeNet for Sparse-view CT Reconstruction. Sci Rep. 2018;8(1):1-9. doi:10.1038/s41598-018-25153-w.
31. Chen H, Zhang Y, Kalra MK, et al. Low-Dose CT with a Residual Encoder-Decoder Convolutional Neural Network (RED-CNN). IEEE Trans Med Imaging. 2017;36(12):2524-2535. doi:10.1109/TMI.2017.2715284.
32. Nishio M, Nagashima C, Hirabayashi S, et al. Convolutional auto-encoders for image denoising of ultra-low-dose CT. Heliyon. 2017;3(8):e00393. doi:10.1016/j.heliyon.2017.e00393.
33. Eck BL, Fahmi R, Brown KM, et al. Computational and human observer image quality evaluation of low dose, knowledge-based CT iterative reconstruction. Med Phys. 2015;42(10):6098-6111. doi:10.1118/1.4929973.
34. Yi X, Babyn P. Sharpness-Aware Low-Dose CT Denoising Using Conditional Generative
Adversarial Network. J Digit Imaging. 2018;31(5):655-669. doi:10.1007/s10278-018-
0056-0.
35. Wolterink JM, Leiner T, Viergever MA, Isgum I. Generative Adversarial Networks for
Noise Reduction in Low-Dose CT. IEEE Trans Med Imaging. 2017;36(12):2536-2545.
doi:10.1109/TMI.2017.2708987.
36. Bai T, Yan H, Jia X, Jiang S, Wang G, Mou X. Z-Index Parameterization for Volumetric
CT Image Reconstruction via 3-D Dictionary Learning. IEEE Trans Med Imaging.
2017;36(12):2466-2478. doi:10.1109/TMI.2017.2759819.
37. Gupta H, Jin KH, Nguyen HQ, McCann MT, Unser M. CNN-Based Projected Gradient
Descent for Consistent CT Image Reconstruction. IEEE Trans Med Imaging.
2018;37(6):1440-1453. doi:10.1109/TMI.2018.2832656.
38. Kang E, Koo HJ, Yang DH, Seo JB, Ye JC. Cycle-consistent adversarial denoising
network for multiphase coronary CT angiography. Med Phys. November 2018.
doi:10.1002/mp.13284.
39. Chen H, Zhang Y, Chen Y, et al. LEARN: Learned Experts’ Assessment-Based
Reconstruction Network for Sparse-Data CT. IEEE Trans Med Imaging. 2018;37(6):1333-
1347. doi:10.1109/TMI.2018.2805692.
40. Xiang L, Qiao Y, Nie D, et al. Deep auto-context convolutional neural networks for
standard-dose PET image estimation from low-dose PET/MRI. Neurocomputing.
2017;267:406-416.
41. Kaplan S, Zhu YM. Full-Dose PET Image Estimation from Low-Dose PET Image Using Deep Learning: a Pilot Study. J Digit Imaging. 2018;3. doi:10.1007/s10278-018-0150-3.
42. Xu J, Gong E, Pauly J, Zaharchuk G. 200x Low-dose PET Reconstruction using Deep Learning.
43. Chen KT, Gong E, de Carvalho Macruz FB, et al. Ultra-Low-Dose 18F-Florbetaben Amyloid PET Imaging Using Deep Learning with Multi-Contrast MRI Inputs. Radiology. 2018:180940.
44. Sabri O, Sabbagh MN, Seibyl J, et al. Florbetaben PET imaging to detect amyloid beta plaques in Alzheimer's disease: phase 3 study. Alzheimer's Dement. 2015;11(8):964-974.
45. Villemagne VL, Ong K, Mulligan RS, et al. Amyloid imaging with 18F-florbetaben in Alzheimer disease and other dementias. J Nucl Med. 2011;52(8):1210-1217.
46. Wang S, Su Z, Ying L, et al. Accelerating magnetic resonance imaging via deep learning.
In: 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI). IEEE;
2016:514-517. doi:10.1109/ISBI.2016.7493320.
47. Golkov V, Dosovitskiy A, Sperl JI, et al. q-Space Deep Learning: Twelve-Fold Shorter
and Model-Free Diffusion MRI Scans. IEEE Trans Med Imaging. 2016;35(5):1344-1351.
doi:10.1109/TMI.2016.2551324.
48. Hyun CM, Kim HP, Lee SM, Lee S, Seo JK. Deep learning for undersampled MRI
reconstruction. Phys Med Biol. 2018;63(13). doi:10.1088/1361-6560/aac71a.
49. Lee D, Yoo J, Tak S, Ye JC. Deep residual learning for accelerated MRI using magnitude
and phase networks. IEEE Trans Biomed Eng. 2018;65(9):1985-1995.
doi:10.1109/TBME.2018.2821699.
50. Schlemper J, Caballero J, Hajnal J V., Price AN, Rueckert D. A Deep Cascade of
Convolutional Neural Networks for Dynamic MR Image Reconstruction. IEEE Trans Med
Imaging. 2018;37(2):491-503. doi:10.1109/TMI.2017.2760978.
51. Majumdar A. Real-time Dynamic MRI Reconstruction using Stacked Denoising
Autoencoder. 2015. http://arxiv.org/abs/1503.06383.
52. Hammernik K, Klatzer T, Kobler E, et al. Learning a variational network for
reconstruction of accelerated MRI data. Magn Reson Med. 2018;79(6):3055-3071.
doi:10.1002/mrm.26977.
53. Chen F, Taviani V, Malkiel I, et al. Variable-Density Single-Shot Fast Spin-Echo MRI with Deep Learning Reconstruction by Using Variational Networks. Radiology. 2018:180445.
54. Yang Y, Sun J, Li H, Xu Z. Deep ADMM-Net for Compressive Sensing MRI. Adv Neural Inf Process Syst. 2017;(Nips):10-18. doi:10.1145/2966986.2980084.
55. Quan TM, Nguyen-Duc T, Jeong WK. Compressed Sensing MRI Reconstruction Using a Generative Adversarial Network With a Cyclic Loss. IEEE Trans Med Imaging. 2018;37(6):1488-1497. doi:10.1109/TMI.2018.2820120.
56. Mardani M, Gong E, Cheng JY, et al. Deep Generative Adversarial Neural Networks for Compressive Sensing (GANCS) MRI. IEEE Trans Med Imaging. 2018;38(1):167-179. doi:10.1109/TMI.2018.2858752.
57. Kaur P, Singh G, Kaur P. A Review of Denoising Medical Images using Machine
Learning Approaches. Curr Med Imaging Rev. 2017;13:675-685.
doi:10.2174/1573405613666170428154156.
58. Gondara L. Medical Image Denoising Using Convolutional Denoising Autoencoders. In:
2016 IEEE 16th International Conference on Data Mining Workshops (ICDMW). ;
2016:241-246. doi:10.1109/ICDMW.2016.0041.
59. Manjón J V, Coupe P. MRI Denoising Using Deep Learning. In: International Workshop
on Patch-Based Techniques in Medical Imaging. Springer; 2018:12-19.
60. Jiang D, Dou W, Vosters L, Xu X, Sun Y, Tan T. Denoising of 3D magnetic resonance
images with multi-channel residual learning of convolutional neural network. Jpn J
Radiol. 2018;36(9):566-574. doi:10.1007/s11604-018-0758-8.
61. Ran M, Hu J, Chen Y, et al. Denoising of 3-D Magnetic Resonance Images Using a
Residual Encoder-Decoder Wasserstein Generative Adversarial Network. arXiv Prepr
arXiv180803941. 2018.
62. Ulas C, Tetteh G, Kaczmarz S, Preibisch C, Menze BH. DeepASL: Kinetic model
incorporated loss for denoising arterial spin labeled MRI via deep residual learning. Lect
Notes Comput Sci (including Subser Lect Notes Artif Intell Lect Notes Bioinformatics).
2018;11070 LNCS:30-38. doi:10.1007/978-3-030-00928-1_4.
63. Kim KH, Choi SH, Park S-H. Improving Arterial Spin Labeling by Using Deep Learning.
Radiology. 2018;287(2):658-666. doi:10.1148/radiol.2017171154.
64. Owen D, Melbourne A, Eaton-Rosen Z, et al. Deep Convolutional Filtering for Spatio-Temporal Denoising and Artifact Removal in Arterial Spin Labelling MRI. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer; 2018:21-29.
65. Gong E, Pauly J, Zaharchuk G. Boosting SNR and/or resolution of arterial spin label (ASL) imaging using multi-contrast approaches with multi-lateral guided filter and deep networks. In: Proceedings of the Annual Meeting of the International Society for Magnetic Resonance in Medicine, Honolulu, Hawaii; 2017.
66. Gurbani SS, Schreibmann E, Maudsley AA, et al. A convolutional neural network to filter artifacts in spectroscopic MRI. Magn Reson Med. 2018;80(5):1765-1775. doi:10.1002/mrm.27166.
67. Kyathanahally SP, Döring A, Kreis R. Deep learning approaches for detection and
removal of ghosting artifacts in MR spectroscopy. Magn Reson Med. 2018;80(3):851-863.
doi:10.1002/mrm.27096.
68. Küstner T, Liebgott A, Mauch L, et al. Automated reference-free detection of motion
artifacts in magnetic resonance images. Magn Reson Mater Physics, Biol Med.
2018;31(2):243-256. doi:10.1007/s10334-017-0650-z.
69. Tamada D, Kromrey M-L, Onishi H, Motosugi U. Method for motion artifact reduction using a convolutional neural network for dynamic contrast enhanced MRI of the liver. 2018:1-15. arXiv:1807.06956.
70. Tamada D, Onishi H, Motosugi U. Motion artifact reduction in abdominal MR imaging
using the U-NET network. In: Proc ICMRM and Scientific Meeting of KSMRM. ; 2018.
71. Duffy BA. Retrospective correction of motion artifact affected structural MRI images
using deep learning of simulated motion. Med Imaging with Deep Learn. 2018;(Midl
2018):1-8.
72. Gjesteby L, Yang Q, Xi Y, et al. Deep learning methods for CT image- domain metal
artifact reduction. Dev X-Ray Tomogr XI. 2017;(August):10391-31.
https://spie.org/Documents/ConferencesExhibitions/op17 abstract.pdf#page=148.
73. Hwang D, Kim KY, Kang SK, et al. Improving accuracy of simultaneously reconstructed
activity and attenuation maps using deep learning. J Nucl Med.
2018;59(10):jnumed.117.202317. doi:10.2967/jnumed.117.202317.
74. Nie D, Trullo R, Lian J, et al. Medical Image Synthesis with Deep Convolutional Adversarial Networks. IEEE Trans Biomed Eng. 2018;65(12):2720-2730. doi:10.1109/TBME.2018.2814538.
75. Han X. MR-based synthetic CT generation using a deep convolutional neural network method. Med Phys. 2017;44(4):1408-1419. doi:10.1002/mp.12155.
76. Xiang L, Wang Q, Nie D, et al. Deep embedding convolutional neural network for synthesizing CT image from T1-Weighted MR image. Med Image Anal. 2018;47:31-44.
77. Torrado-Carvajal A, Vera-Olmos J, Izquierdo-Garcia D, et al. Dixon-VIBE Deep Learning (DIVIDE) Pseudo-CT Synthesis for Pelvis PET/MR Attenuation Correction. J Nucl Med. 2018:jnumed.118.209288. doi:10.2967/jnumed.118.209288.
78. Wolterink JM, Dinkla AM, Savenije MHF, Seevinck PR, van den Berg CAT, Išgum I. Deep MR to CT synthesis using unpaired data. In: International Workshop on Simulation and Synthesis in Medical Imaging. Springer; 2017:14-23.
79. Leynes AP, Yang J, Wiesinger F, et al. Direct PseudoCT Generation for Pelvis PET/MRI
Attenuation Correction using Deep Convolutional Neural Networks with Multi-parametric
MRI: Zero Echo-time and Dixon Deep pseudoCT (ZeDD-CT). J Nucl Med.
2017;59(5):jnumed.117.198051. doi:10.2967/jnumed.117.198051.
80. Roy S, Butman JA, Pham DL. Synthesizing CT from Ultrashort Echo-Time MR Images
via Convolutional Neural Networks. In: International Workshop on Simulation and
Synthesis in Medical Imaging. Springer; 2017:24-32.
81. Liu F, Jang H, Kijowski R, Bradshaw T, McMillan AB. Deep learning MR imaging–based
attenuation correction for PET/MR imaging. Radiology. 2018;286(2):676-684.
doi:10.1148/radiol.2017170700.
82. Liu F, Jang H, Kijowski R, Zhao G, Bradshaw T, McMillan AB. A deep learning
approach for 18F-FDG PET attenuation correction. EJNMMI Phys. 2018;5(1):24.
doi:10.1186/s40658-018-0225-8.
83. Hyun S, Johnson SB, Bakken S. Real-Time Medical Video Denoising with Deep Learning: Application to Angiography. Int J Appl Inf Syst. 2018;12(13):22-28.
84. Vemulapalli R, Van Nguyen H, Zhou SK. Deep networks and mutual information
maximization for cross-modal medical image synthesis. In: Deep Learning for Medical
Image Analysis. Elsevier; 2017:381-403.
85. Ben-Cohen A, Klang E, Raskin SP, Amitai MM, Greenspan H. Virtual PET images from CT data using deep convolutional networks: Initial results. In: International Workshop on Simulation and Synthesis in Medical Imaging. Springer; 2017:49-57.
86. Choi H, Lee DS. Generation of Structural MR Images from Amyloid PET: Application to MR-Less Quantification. J Nucl Med. 2018;59(7):1111-1117. doi:10.2967/jnumed.117.199414.
87. Li R, Zhang W, Suk H-I, et al. Deep learning based imaging data completion for improved brain disease diagnosis. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer; 2014:305-312.
88. Dong C, Loy CC, He K, Tang X. Image Super-Resolution Using Deep Convolutional Networks. IEEE Trans Pattern Anal Mach Intell. 2016;38(2):295-307. doi:10.1109/TPAMI.2015.2439281.
89. Higaki T, Nakamura Y, Tatsugami F, Nakaura T, Awai K. Improvement of image quality
at CT and MRI using deep learning. Jpn J Radiol. 2018;37(1):73-80. doi:10.1007/s11604-
018-0796-2.
90. Bahrami K, Shi F, Rekik I, Shen D. Convolutional neural network for reconstruction of
7T-like images from 3T MRI using appearance and anatomical features. In: Deep
Learning and Data Labeling for Medical Applications. Springer; 2016:39-47.
91. Lyu Q, You C, Shan H, Wang G. Super-resolution MRI through Deep Learning. arXiv Prepr arXiv:1810.06776. 2018.
92. Chaudhari AS, Fang Z, Kogan F, et al. Super-resolution musculoskeletal MRI using deep
learning. Magn Reson Med. 2018;80(5):2139-2154. doi:10.1002/mrm.27178.
93. Campbell BCV, Christensen S, Parsons MW, et al. Advanced imaging improves
prediction of hemorrhage after stroke thrombolysis. Ann Neurol. 2013;73(4):510-519.
doi:10.1002/ana.23837.
94. Hagiwara A, Otsuka Y, Hori M, et al. Improving the Quality of Synthetic FLAIR Images
with Deep Learning Using a Conditional Generative Adversarial Network for Pixel-by-
Pixel Image Translation. Am J Neuroradiol. 2018;in press:1-7.
95. G W, E G, S B, J P, G Z. OneforAll: Improving Synthetic MRI with Multi-task Deep
Learning using a Generative Model. In: ISMRM MR Value Workshop. ; 2019.
96. Wu G, Kim M, Wang Q, Munsell BC, Shen D. Scalable high-performance image registration framework by unsupervised deep feature representations learning. IEEE Trans Biomed Eng. 2016;63(7):1505-1516.
97. Wu G, Kim M, Wang Q, Gao Y, Liao S, Shen D. Unsupervised deep feature learning for deformable registration of MR brain images. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer; 2013:649-656.
98. Miao S, Wang ZJ, Liao R. A CNN Regression Approach for Real-Time 2D/3D Registration. IEEE Trans Med Imaging. 2016;35(5):1352-1363. doi:10.1109/TMI.2016.2521800.
99. Neylon J, Min Y, Low DA, Santhanam A. A neural network approach for fast, automated quantification of DIR performance. Med Phys. 2017;44(8):4126-4138.
100. Yang X, Kwitt R, Styner M, Niethammer M. Quicksilver: Fast predictive image registration – A deep learning approach. Neuroimage. 2017;158:378-396.
101. Lee JH, Grant BR, Chung JH, Reiser I, Giger M. Assessment of diagnostic image quality
of computed tomography (CT) images of the lung using deep learning. In: Medical
Imaging 2018: Physics of Medical Imaging. Vol 10573. International Society for Optics
and Photonics; 2018:105731M.
102. Wu L, Cheng J-Z, Li S, Lei B, Wang T, Ni D. FUIQA: Fetal ultrasound image quality
assessment with deep convolutional networks. IEEE Trans Cybern. 2017;47(5):1336-
1349.
103. Goodfellow I, Bengio Y, Courville A. Regularization for deep learning. In: Deep Learning. Cambridge, MA: MIT Press; 2016:221-265.
104. Prechelt L. Early stopping—but when? In: Neural Networks: Tricks of the Trade.
Springer; 2012:53-67.
105. Srivastava N, Hinton G, Krizhevsky A, Sutskever I, Salakhutdinov R. Dropout: a simple
way to prevent neural networks from overfitting. J Mach Learn Res. 2014;15(1):1929-
1958.
106. Litjens G, Kooi T, Bejnordi BE, et al. A survey on deep learning in medical image analysis. Med Image Anal. 2017;42:60-88. doi:10.1016/j.media.2017.07.005.
107. Lipton ZC. The mythos of model interpretability. arXiv Prepr arXiv:1606.03490. 2016.
Figure 1. Example of the components of a biologic neural network (A) and a computer neural network (B).
Reprinted with permission from Zaharchuk et al15. Copyright American Journal of
Neuroradiology.
Figure 2. Imaging Value Chain
While most AI applications have focused on the downstream (or right) side of this pathway, such as the use of AI to detect and classify lesions on imaging studies, earlier adoption is likely for the tasks on the upstream (or left) side, where most of the costs of imaging are concentrated.
Figure 3. Example of Low-dose Contrast-enhanced MRI
Results from a deep network that predicts a 100% contrast-dose image from a study obtained with 10% of the standard contrast dose. This example MRI was obtained from a patient with a meningioma. Such methods may enable diagnostic-quality images to be acquired more safely in a wider range of patients. (Courtesy of Subtle Medical, Inc)
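The dose-reduction approach illustrated here is typically posed as an image-to-image regression problem. As a purely illustrative sketch, not the actual model behind this figure, the following PyTorch snippet shows one common formulation: a residual CNN that learns the mapping from the low-dose image to its full-dose counterpart. The architecture, layer widths, loss, and dummy data are all assumptions for demonstration.

```python
import torch
import torch.nn as nn

class DoseBoostCNN(nn.Module):
    """Illustrative residual CNN: predicts a full-dose image from a
    low-dose input by learning the residual (the missing enhancement).
    Layer count and widths are arbitrary choices for this sketch."""
    def __init__(self, channels=1, width=64, depth=5):
        super().__init__()
        layers = [nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(width, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, low_dose):
        # Residual learning: the network only has to model the
        # enhancement absent from the low-dose image.
        return low_dose + self.body(low_dose)

model = DoseBoostCNN()
loss_fn = nn.L1Loss()  # pixel-wise loss against the full-dose target
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# One training step on a dummy batch (real training would use paired
# low-dose / full-dose acquisitions from the same patients).
low = torch.rand(4, 1, 128, 128)
full = torch.rand(4, 1, 128, 128)
opt.zero_grad()
loss = loss_fn(model(low), full)
loss.backward()
opt.step()
```

Residual learning is a natural fit here because the low-dose input already contains most of the anatomy, leaving the network to model only the missing contrast enhancement.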
Figure 4. Example of ultra-low dose 18F-florbetaben PET/MRI
Example of a positive 18F-florbetaben PET/MRI study acquired at 0.24 mCi, approximately 3%
of a standard dose. Similar image quality is present in the 100% dose image and the synthetized
image, which was created using a deep neural network along with MRI information such as T1,
T2, and T2-FLAIR. As Alzheimer Disease studies are moving towards cognitively normal and
younger patients, reducing dose would be helpful. Furthermore, tracer costs could be reduced if
doses can be shared.
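The legend above notes that the synthesized image draws on MRI information in addition to the ultra-low-dose PET. A minimal sketch of how such multi-modal input is commonly constructed, by stacking the co-registered volumes as input channels, follows; the tensor names and sizes are hypothetical, and the downstream network could be any image-to-image model such as the one sketched after Figure 3.

```python
import torch

# Hypothetical tensors: one axial slice each of low-dose PET and the
# three MR contrasts named in the legend, all co-registered and
# resampled to the same grid (sizes here are arbitrary).
low_dose_pet = torch.rand(1, 1, 256, 256)
t1 = torch.rand(1, 1, 256, 256)
t2 = torch.rand(1, 1, 256, 256)
flair = torch.rand(1, 1, 256, 256)

# Stack modalities along the channel axis so the network can draw on
# anatomical detail from MRI when synthesizing the full-dose PET.
x = torch.cat([low_dose_pet, t1, t2, flair], dim=1)  # shape (1, 4, 256, 256)
```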
Figure 5. Deep learning for improving image quality of arterial spin labeling in a patient with right-sided Moyamoya disease.
Reference scan (a), requiring 8 min to collect (NEX = 6). Using a rapid scan acquired in 2 min (NEX = 1) (b), it is possible to create an image (f) with the SNR of a study requiring over 4.5 min (NEX = 3) (e). The peak signal-to-noise ratio (PSNR) performance is superior to that of existing denoising methods such as (c) block-matched 3D (BM3D) and (d) total generalized variation (TGV). Such methods could speed up MRI acquisition, enabling more functional imaging and perhaps reducing the cost of scanning.
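For reference, the PSNR metric used in this comparison is straightforward to compute; a minimal NumPy sketch is given below (the function name and the normalization assumption are ours):

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between a reference image
    (e.g., the 8-min, NEX = 6 ASL scan) and a test image (e.g., the
    denoised 2-min scan). `data_range` is the maximum possible
    intensity (1.0 for normalized images, 255 for 8-bit)."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((data_range ** 2) / mse)
```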
Figure 6. Use of convolutional neural networks to perform super-resolution.
High-resolution T1-weighted imaging often requires long scan times to acquire sufficient
resolution to resolve the gray-white border and to estimate cortical thickness. Shorter scans may
be obtained with lower resolution, and AI can be used to restore the required high resolution.
(Image courtesy of Subtle Medical Inc.)
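As an illustration of what such a super-resolution network can look like, the following sketch follows the three-layer design of Dong et al. (reference 88): patch extraction, non-linear mapping, and reconstruction. The kernel and filter sizes mirror that paper; the interpolation step, tensor sizes, and the absence of training code are simplifications, and this is not the specific model used to produce the figure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRCNN(nn.Module):
    """Three-stage super-resolution CNN in the spirit of Dong et al.
    (reference 88): patch extraction (9x9), non-linear mapping (1x1),
    and reconstruction (5x5), applied to an image that has first been
    upsampled to the target grid by interpolation."""
    def __init__(self, channels=1):
        super().__init__()
        self.extract = nn.Conv2d(channels, 64, kernel_size=9, padding=4)
        self.map = nn.Conv2d(64, 32, kernel_size=1)
        self.reconstruct = nn.Conv2d(32, channels, kernel_size=5, padding=2)

    def forward(self, x):
        x = F.relu(self.extract(x))
        x = F.relu(self.map(x))
        return self.reconstruct(x)

# Usage sketch: bicubically upsample a low-resolution T1 slice to the
# target matrix size, then let the network restore high-frequency detail.
low_res = torch.rand(1, 1, 128, 128)
upsampled = F.interpolate(low_res, scale_factor=2, mode="bicubic", align_corners=False)
restored = SRCNN()(upsampled)  # shape (1, 1, 256, 256)
```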