DL-CNN-based approach with image processing techniques for diagnosis of retinal diseases
https://doi.org/10.1007/s00530-021-00769-7
Abstract
Artificial intelligence has the potential to revolutionize disease diagnosis, classification, and identification. However, the implementation of clinical-decision support algorithms for medical imaging faces challenges with reliability and interpretability. This study presents a diagnostic tool based on a deep-learning framework for four-class classification of ocular diseases by automatically detecting diabetic macular edema, drusen, choroidal neovascularization, and normal images in optical coherence tomography (OCT) scans of the human eye. The framework utilizes OCT images of the retina, which are preprocessed for noise removal, contrast enhancement, contour-based edge detection, and retinal layer extraction. This image dataset is then analyzed using three different convolutional neural network (CNN) models (of five, seven, and nine layers) to identify the various retinal layers, extract useful information, observe any new deviations, and predict multiple eye deformities. Results obtained from the experimental testing confirm that our model performed excellently, with 0.965 classification accuracy, 0.960 sensitivity, and 0.986 specificity compared with the manual ophthalmological diagnosis.
Keywords Biomedical imaging · Deep learning · Disease detection · Image processing · Optical coherence tomography (OCT)
* Anand Nayyar (corresponding author): [email protected]
Akash Tayal: [email protected]
Jivansha Gupta: [email protected]
Arun Solanki: [email protected]
Khyati Bisht: [email protected]
Mehedi Masud: [email protected]

1 Indira Gandhi Delhi Technical University for Women, New Delhi, India
2 Gautam Buddha University, Greater Noida, India
3 Duy Tan University, Da Nang, Viet Nam
4 Department of Computer Science, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia

1 Introduction

In the past few decades, we have seen the advent of computer science in disease detection and diagnosis for biomedical sciences. Artificial intelligence (AI) has revolutionized disease diagnosis and the anatomization process by performing the classification steps, which were time-consuming and tedious for the experts [1−3]. The medical field has been accepting and adopting AI because of the rampant increase in applications employing AI-based technologies in recent times and the physicians' demand to operate with reduced errors, mishaps, and misdiagnoses. Many AI and subset DL networks are useful in medical image processing for prognosis and diagnosis of various ailments (e.g., breast cancer, lung cancer, and brain tumor), which are tedious and prone to human error if manually performed. Medical images are processed using these DL methods to solve various tasks, such as prediction, segmentation, and classification, consequently accurately bypassing human abilities.

The scope of AI is significant in retinal disease diagnosis and procedure. The mechanism requires precise, correct
identification, and extraction of ocular layers, making it easy for ophthalmologists to focus on the treatment. In this study, the benefits of AI have been leveraged to classify and identify an ocular disease; the retina's structural complexity makes it inconvenient and time-consuming for accurate evaluation by the expert. The retina is situated on the inside back wall of the eye and is responsible for sending light and images back to the brain. When the light focuses on the retina instead of elsewhere, normal vision is observed. A person with normal vision can see objects at near and far distances. Vision loss, myopia, and macular degeneration can occur if the retinal layer is affected. Some commonly known retinal diseases are choroidal neovascularization (CNV), drusen, diabetic retinopathy, and diabetic macular edema (DME).

Employing advanced AI techniques in medical diagnosis and image detection has brought much-needed headway in medical science. The automated detection of retinal diseases involves preprocessing with image quantization, segmentation and sampling procedures, training of neural networks with vast data, and analysis of statistics. Researchers are currently focusing on improving the accuracy of classification and identification of the disease, reducing the computational time and memory utilization, proper segmentation of ocular layers, and minimizing computational complexity.

Optical coherence tomography (OCT) is a modern non-invasive imaging technique built on low-coherence interferometry. This technique can reconstruct (tomographic) sectional images with a high depth resolution of the object under study using the projected light beams. The measured thickness of the retinal layers helps early detection of pathologies and disease diagnosis. The two types of OCT are time-domain (TD) OCT and spectral-domain (SD) OCT. TD-OCT is used to produce 2D scans of the given sample's internal edifice. SD-OCT is said to be 50 times quicker than the conventional TD-OCT technique. Furthermore, SD-OCT is 100 times faster than the ultra-high-resolution OCT. The SD-OCT scan has more clarity and higher quality compared with TD-OCT systems.

Figure 1 shows the OCT scans of CNV, drusen, DME, and normal retina. The growth of new blood vessels through the Bruch membrane in the choroid layer describes CNV, and it causes vision loss. The accretion of fluid in the macula of the retina forms DME. Drusen are characterized by yellow deposits composed of lipids, protein, and calcium salts under the retina. The risk of developing age-related macular degeneration (AMD) increases with drusen.

This research has evaluated and analyzed a large dataset of retinal OCT images available in the public domain to classify the normal retina and three ocular pathologies (CNV, drusen, and DME) for accurately detecting significant pathological structures. Figure 2 describes the framework of the proposed methodology for detecting ocular degeneration using OCT images of the retina. The OCT images of the retina are preprocessed and enhanced for noise removal using a median filter. Contrast Limited Adaptive Histogram Equalization (CLAHE) is used for contrast enhancement, followed by morphological operation, thresholding, and contour-based edge detection for retinal layer extraction. This image dataset is analyzed using three different convolutional neural network (CNN) models (of five, seven, and nine layers) to identify the four ocular pathologies.
The proposed approach has an accuracy of 96.5%. The primary goal is to help the patients and eye specialists make an automated and fast diagnosis. Another goal is to increase the analytical performance by improving the accuracy and assisting ophthalmologists in making quicker and more efficient detection, which can be of enormous benefit for the patients.

The paper is organized as follows: Sect. 2 elaborates the literature review; Sect. 3 presents the methodology; Sects. 4 and 5 provide the experimental setup and analysis of the results, respectively; Sect. 6 discusses the conclusion; Sect. 7 presents the future scope and limitations.

2 Literature review

Deep learning techniques have advanced the state-of-the-art in medical image analysis. However, the application of DL in retinal diseases is relatively recent. Numerous ocular diseases, such as DME, drusen, and CNV, can be captured using OCT scans of the human eye and analyzed using DL techniques. This section elaborates the research on the automation of ocular pathology using AI, machine learning, and DL. The tomography process involves the reintegration of different cross-sectional images of the subject using various projections. Ţălu et al. [4] and Schmidt-Erfurth et al. [5] stated that OCT is a high-resolution imaging technique classified as SD-OCT and TD-OCT. The SD-OCT results provide a cross-sectional and volumetric view of the retina in high resolution. TD-OCT provides a 2D image of the given structure of the internal part of the retina. TD-OCT is ineffective because it only includes thickness analysis of the macula, while SD-OCT enables the monitoring and measurement of various characteristic features. The study found that OCT is a useful technique in analyzing, monitoring, and assessing AMD's different stages. Moreover, drusen could be analyzed using various characteristics of its structure.

Srinivasan et al. [6] presented a classification method based on support vector machine (SVM) classifiers and Histogram of Oriented Gradients (HOG) descriptors and obtained successful results in detecting dry AMD and DME using the OCT imaging technique. Their proposed method did not involve the segmentation of inner retinal layers. The SD-OCT datasets consisted of 45 volumetric scans: 15 normal, 15 AMD, and 15 DME. The algorithm achieved the highest specificity and perfect sensitivity in detecting 100% of AMD cases, 100% of DME cases, and 86.67% of normal cases.

A transfer learning method was described by Karri et al. [7] to identify retinal pathologies on the basis of the inception network using the retinal OCT images. The dataset consisted of OCT images with dry AMD, DME, and normal subjects. Their study demonstrated that the fine-tuned CNN was able to identify pathologies more effectively than classical learning methods. The classifying OCT algorithm, shown with limited training data and trained with the use of non-medical images, can be fine-tuned. The mean prediction
accuracies were 99% for normal, 89% for AMD, and 86% for DME.

Wang et al. [8] proposed a model for the detection of AMD, DME, and healthy macula using OCT images. Classification algorithms are proven to be necessary for training a classification model. The quadratic programming-based algorithm, kernel-based algorithm, linear regression-based classification algorithm, neural network algorithm, Bayesian algorithm, tree-based algorithm, and ensemble forest algorithm are the various classification algorithm groups. The dataset was tested using one representative from each classification algorithm group. The SMO (sequential minimal optimization)-based model was the best, with an accuracy of 99.3%. Their experimental procedure involved four steps, namely, OCT image preprocessing, feature extraction and selection, classification model building, and predicting results.

Alsaih et al. [9] presented an automated classification framework to detect DME for SD-OCT imaging data volumes. Their method included the general steps of preprocessing, feature detection, feature representation, and classification. The SVM and principal component analysis resulted in a sensitivity of 87.5% and specificity of 87.5%. LBP-ri vectors contributed to the most successful result in the classification of disease.

Choi et al. [10] applied a CNN-based deep learning technique for fundus photography analysis and classification of various retinal diseases. Fundus photographs were taken from the Structured Analysis of the Retina database for automated detection of numerous retinal diseases, based on DL-CNN using MatConvNet. The dataset was built by including 10 different categories of retinal images, including normal retina images. The classification results varied as per the number of categories. These results were obtained using the random forest transfer learning method, which was based on the VGG-19 architecture.

Choi et al. [10] stated that other retinal diseases can eventually lead to irreversible loss of vision. The causes of vision impairment may include retinal vessel occlusion, retinitis, and hypertensive retinopathy. Previous studies focused on glaucoma, DMR, AMD, and other eye pathologies using fundus photographs. A more effective detection method is necessary to reduce vision loss caused by retinal diseases. The DMR screening was initially adopted for diabetic patients, and it used fundus photographs as inputs.

Hussain et al. [11] proposed the automated identification of AMD and DME using SD-OCT images. The thickness of the retina or individual retinal layer and the volume of pathologies, such as drusen, were some of the retinal features used in the technique's classification methodology. The SD-OCT images were segmented to extract critical retinal features. A dataset of 251 subjects (59 normal, 15 DME, and 177 AMD) was evaluated for effectiveness by training the system as a two-class problem of a diseased and healthy retina using a random forest classifier. The methodology had an accuracy of more than 96%.

Kermany et al. [12] achieved 96.6% accuracy along with 97.8% sensitivity and 97.4% specificity. The AUC was 0.999 in distinguishing the retinal diseases (CNV, drusen, and DME) from normal subjects. A variation in the number of images was observed in each category because the training dataset consisted of 37,206 images of CNV, 11,349 images of DME, 8617 images of drusen, and 51,140 normal images. The model's performance was biased because the validation dataset acted as the testing dataset, while only 250 images were chosen for testing and validation from each retinal class. The result analysis was affected due to the imbalance of the number of images in each class.

Schlegl et al. [13] proposed a DL-based technique for the detection of different types of fluids in the retina across various macular diseases using OCT images. Their dataset consisted of OCT images of 1200 patients, including 400 patients with AMD, 400 patients with DME, and 400 patients with RVO (retinal vein occlusion). This fully automated method achieved a mean accuracy of 0.94 with 0.91 precision and 0.94 recall and was developed to quantify and detect subretinal fluid and intraretinal cystoid fluid.

Das [14] surveyed the diagnosis of retinal diseases, such as retinal tear, retinal detachment, glaucoma, macular hole, and macular degeneration, using various machine learning techniques. The study of healthcare analytics and implementation of deep learning-based pathology detection is a work by Hossain and Muhammad [11, 15]. Some commonly used machine learning techniques in ocular diagnosis are logistic regression, Naive Bayes, the KNN algorithm, and the SVM classifier. The implementation of machine learning techniques can be studied from [16–20].

Lemaître et al. [21] addressed the problem of classification of SD-OCT data for automated detection of patients affected by DME. Tsanim et al. [22] employed four CNN models, namely, Vanilla CNN, MobileNetV2, ResNet50, and Xception network, to detect the category of diseases from the retinal OCT scanned images.

Feng et al. [44] focused their study on a four-class retinal disease classification problem for the detection of drusen, DME, CNV, and normal retina using optical coherence tomography images. They proposed a novel classification model for the automated detection of the most common blinding diseases and prepared a big dataset of retinal OCT images. The model was based on an improved ResNet50. Their approach achieved an accuracy of 0.973, a sensitivity of 0.963, a specificity of 0.985, and an AUC of 0.995 at the B-scan level.

OCT images' automated layering with a blurred layered structure and low contrast is considered challenging or difficult. Xiaoming et al. [23] solved this problem using a new
OCT detection method. Their methodology was based on complex Shearlet transforms. The test dataset consisted of OCT images of dry AMD, Stargardt disease, and retinal macular area with normal condition. The results proved that the complex Shearlet transform method was an effective measure because more layers of OCT images could be detected using this method.

The recent work on CNN and its application in image processing can be studied from Bhatt et al. [24]. They have presented the prevalent DL models, their architectures, related pros and cons, and their medical diagnosis and healthcare system prospects. Kim [25] proposed a model that applies an image super-resolution method to an algorithm that classifies emotions from facial expressions using deep learning. Nie et al. [26] discussed convolutional deep learning models for 3D object retrieval. Zhao et al. [27] reviewed the state-of-the-art blood vessel segmentation methods by dividing them into two categories, rule-based and machine-learning-based. Rajalingam et al. [28] presented an image fusion algorithm to better visualize and analyze MRI-CT-PET medical images. A recent paper by Xi et al. [29] discusses multiscale CNNs for the segmentation of CNV from OCT data. Table 1 illustrates a summary of critical papers for OCT analysis.

The research gaps identified from the literature review are as follows:

1. Most papers have used a pre-trained model, which has fixed biases and weights, for retinal disease classification;
2. The researchers have not studied the effect of image enhancement or segmentation over feeding raw images; and
3. Existing research models have low specificity and sensitivity values, which are considered essential parameters for evaluating the performance for medical diagnosis.

This paper addresses the research gaps mentioned above.

3 Methodology

Figure 3 demonstrates the process flow of the OCT image data analysis for the detection and classification of the four retinal disease classes using DL-CNN models. Each step is explained in the sections below.

3.1 Data collection

The images of the retinal OCT scans for DME, drusen, CNV, and normal retina are taken from the public (Mendeley database) dataset published in Kermany et al. [12]. The images were taken from the dataset and partitioned into training, testing, and validation folders, each of which has subfolders for the four model classes (CNV, DME, drusen, and normal), having a total of 84,495 B-scan views of OCT images in .jpeg format.

3.2 Preprocessing data

The first step is to obtain uniform-sized normalized images; the dataset images are read, transformed, resized, and cropped. The image sampling is performed into training, validation, and testing in the ratio of 90.16:1.84:8.00. Out of the 83,484 images (dataset), 75,270 images were used for training, 1536 images as validation data, and 6678 images as testing data. Table 2 shows the number of images of each class type in the respective data loaders, and the distribution of the dataset is given in terms of percentages. The images were shuffled to reduce biases during training to obtain improved results, and they were loaded into different data loaders with a batch size of 84. The data loading was performed in uniformly sized batches because processing the entire dataset in a single step would have resulted in computation memory overload and a system crash.

Figure 4 shows different samples of images from the training, validation, and testing datasets post-preprocessing. From these samples, the nine-layered retinal structure of the normal eye retina for the given samples is visible. The samples with CNV show proliferation of blood vessels in the choroid layer of the retina, causing ruptures in the Bruch's membrane. These are visible as hollow cavity-like structures in the retinal scans in CNV. DME results from the accumulation of fluids in the macula of the retina caused by leaky blood vessels, which causes fovea swelling and is visible as tiny holes in the image. The build-up of small yellow/white extracellular material aggregates between the retinal pigment epithelium of the eye and the Bruch's membrane causes drusen, visible as dome-like elevations. Meanwhile, the normal retinal structure is seen with clear and continuous membrane boundaries, a deep-cut fovea valley, and almost uniform thickness across the structure.

Steps for preprocessing of data:

1. Read files from the directory.
2. Apply resizing of each image to 150 × 150 pixels.
3. Apply CentreCrop operation with final dimensions of 128 × 128 pixels to each image.
4. Convert the image to the tensor data type for compatibility with the model.
5. Normalize the image by subtracting the mean from each pixel value and dividing the result by the standard deviation using a standard transform.
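The split arithmetic and the crop/normalize steps above can be sketched in plain NumPy. This is a simplified illustration, not the authors' pipeline (which presumably uses torchvision-style transforms); the resize step is omitted, and the toy array stands in for a resized grayscale OCT scan.

```python
import numpy as np

# Sanity check of the reported 90.16 : 1.84 : 8.00 split of the 83,484 images.
total = 75270 + 1536 + 6678
print(total, round(75270 / total * 100, 2))  # 83484 90.16

def center_crop(img, size):
    """Keep the central size x size window (step 3, CentreCrop to 128 x 128)."""
    h, w = img.shape
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

def normalize(img, mean, std):
    """Subtract the mean and divide by the standard deviation (step 5)."""
    return (img - mean) / std

# Toy 150 x 150 array standing in for a resized grayscale OCT scan (step 2).
img = np.arange(150 * 150, dtype=np.float64).reshape(150, 150)
cropped = center_crop(img, 128)
out = normalize(cropped, cropped.mean(), cropped.std())
print(cropped.shape)  # (128, 128)
```

After normalization, the cropped window has zero mean and unit standard deviation, which is the range the CNN models expect.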
Table 1 Literature review summary related to OCT analysis

Feng Li et al. [1]. Database: proprietary; type: OCT; target diseases: CNV, DME, drusen, and normal; approach: Multi-ResNet50 ensembling. The improved ResNet50 mainly consisted of convolutional layers, pooling layers, and fully connected layers. Performance: accuracy 97.9%; sensitivity 96.8%; specificity 99.4%; AUC 0.998; kappa 0.969.

Kermany et al. [6]. Database: public (Mendeley database); type: OCT; target diseases: CNV, DME, drusen, and normal; approach: Inception V3 architecture. The network is composed of 11 inception modules of five types, and each module is designed with a convolutional layer, activation layer, pooling layer, and batch normalization layer. Performance: accuracy 96.6%; sensitivity 97.8%; specificity 97.4%; AUC 0.999.

Hussain et al. [2]. Database: public (DHU and Tian et al. images) and proprietary; type: OCT; target diseases: AMD, DME, and normal; approach: random forest algorithm. Creates a set of decision trees from a subset of the training set and then decides the final class of the test object based on voting. Performance: average accuracy 96%; average sensitivity 94%; average specificity 85%; mean AUC 0.99.

Alsaih et al. [3]. Database: proprietary; type: OCT; target diseases: DME and normal; approach: linear SVM. The algorithm creates a line or a hyperplane that separates the data into classes. Performance: sensitivity 87.5%; specificity 87.5%.

Srinivasan et al. [4]. Database: public (DHU); type: OCT; target diseases: AMD, DME, and normal; approach: HOG descriptors and SVM classifiers. This algorithm divides the image into connected regions, called cells, and the shape of the local objects is described by counting the strength and orientation of the spatial gradients in each cell. Performance: accuracies of AMD, DME, and normal were 100%, 100%, and 86.67%, respectively.

Karri et al. [5]. Database: public (DHU); type: OCT; target diseases: AMD, DME, and normal; approach: inception network. This approach fine-tunes a pre-trained CNN to improve its prediction capability and identifies salient responses during prediction to understand learned filter characteristics. Performance: mean accuracies 99%, 89%, and 86%.
Fig. 4 Images after preprocessing in training, validation, and testing datasets (without image enhancement, with normalization and resizing operations)
After the speckle noise is removed, the next step is to improve the contrast of the scans. Low contrast is generally due to poor illumination conditions, capturing devices, and inexperienced technicians. Nandani et al. [33] showcased a comparison of different contrast-improving algorithms over OCT scan images. They observed that the CLAHE method outperforms other techniques. Setiawan et al. [34] proposed using CLAHE in the Green (G) channel to improve color retinal image quality.

The CLAHE algorithm is an enhanced version of adaptive histogram equalization used to reduce the noise amplification in regions of homogeneity. This algorithm is widely used in medical imaging and ophthalmology. In this method, the image is divided into subsections, and equalization is performed for each area. This results in flattening the distribution of gray levels and increasing the visibility of the image's hidden features. Thus, we applied and compared histogram equalization and CLAHE to improve the grayscale image's contrast and enhance the edges.

The next image enhancement step is edge detection. Xiaoming et al. [23] used a complex-Shearlet-based method with properties of adequate space, multiscale, frequency-domain localization, multidirection, and contrast invariance. They compared their work with other algorithms and precisely extracted edges with strong anti-noise robustness. Dodo et al. [35] worked upon the level set method for separating retinal layers into seven non-overlapping layer structures. They started by selecting a region of interest and obtained gradient edges from it, and these were used to initialize curves for the layers. A different approach in Pekala et al. [36] showed a deep learning-based model based on fully convolutional networks with a Gaussian process coupled with regression-based post-processing to segment the images. Luo et al. [37] used the popular two-pass method, the Canny edge detector, and the edge-flow technique for edge detection and found the two-pass method's performance promising over the others. They also found that intensity-based edge detectors, such as the Canny edge detector and the two-pass method, outperformed the texture-based edge-flow method for OCT retinal image analysis. The Canny edge detector showed fine edge losses in the edge structure due to the Gaussian filtering used in the algorithm. Similar results were observed when this algorithm was used on our research data for edge detection.

A contour-based algorithm was applied to detect edges and fine details in the scan images. Finding contours is essential for shape analysis and feature/object detection and recognition. A contour joins all the continuous points along a boundary with the same intensity. It is an outline of the feature to be extracted in a binary image using gradient operations. Contour overlaying is performed to enhance the boundary quality because breaks occur in the edges after segmentation and morphological operations. These steps were preceded by binary image thresholding and morphological transformation for noise removal to successfully find contours. Several studies have employed active contour-based segmentation in their work (González-López et al. [38], Somfai et al. [39], Perez-Cisneros et al. [40]). Perez-Cisneros et al. [40] used active contour models and estimation of distribution algorithms to generate contours by a prior step of the reference shape's alignment process, which increased the exploration and exploitation capabilities. Mishra et al. [41] improved the active contour model using an efficient two-step kernel-based optimization scheme that first identified the individual layers' approximate location and then refined the results using an active contour model.

In our training of the model shown in Figs. 5, 6, and 7, edges or segmented structures alone were not suitable inputs for our model because they did not account for fine details in the membrane structure. Edge detection results in a collection of edge segments or contours encompassing the whole image. By only extracting edges, the images lacked useful information, such as the layered structures of the retina and the cavities within these layers; thus, the model could not learn much from this information. Segmentation was conducted on the retina structure samples, which is the extraction of the coherent region of interest isolated from the background. Originally, segmentation is a low-level image processing technique where the image is divided, on the basis of the regions of importance, into many segments separated by boundaries. These mechanisms produced better results but still could not perform well even with successive processing steps. Moreover, these mechanisms lacked the fine layer structural details but were able to include cavity structures to a certain extent. However, combining both methods allowed us to enhance the edges and fine details in these images. The former method obtained a testing accuracy of up to 90.20%, while the latter, with segmented output, achieved up to 94.47%. Finally, geometric transformations (image resizing, zooming, cropping, and normalizing) are performed, and images of 128 × 128 pixels were obtained. Normalization helps in obtaining the data within a range, which helps the CNN perform better and makes training faster. Figure 8 shows the final enhanced image sample used in Sect. 3.4 for the training of the model.

Steps for image enhancement:

1. Read files from the directory.
2. Apply a median blur filter for smoothing.
3. Convert to grayscale for subsequent operations.
4. Apply CLAHE over the image for low-contrast improvement.
5. Threshold the image with suitable threshold cut limits.
6. Remove further noise and breaks in structure by a morphological operation.
7. Extract contours from the above output to extract retinal layer edges (the other edge detection techniques were not useful, as discussed).
8. Draw the contours on the original image to highlight edges and layer structures.
9. Apply further transforms as in the previous step, including resizing, center crop, and normalization.
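Steps 2 and 4 of the enhancement list can be illustrated in plain NumPy. This is a simplified sketch, not the authors' implementation: a 3 × 3 median filter stands in for the median blur, and global histogram equalization stands in for tile-based CLAHE; the random array is a toy low-contrast scan, not real OCT data.

```python
import numpy as np

def median_blur3(img):
    """3 x 3 median filter: speckle-noise smoothing as in step 2 (simplified)."""
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    windows = [padded[r:r + h, c:c + w] for r in range(3) for c in range(3)]
    return np.median(np.stack(windows), axis=0)

def hist_equalize(img):
    """Global histogram equalization; CLAHE (step 4) applies the same idea per tile."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[np.nonzero(hist)[0][0]]  # CDF at the darkest occurring gray level
    lut = np.clip((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255, 0, 255).astype(np.uint8)
    return lut[img]

rng = np.random.default_rng(0)
scan = rng.integers(90, 160, size=(64, 64)).astype(np.uint8)  # low-contrast toy scan
smooth = median_blur3(scan).astype(np.uint8)
enhanced = hist_equalize(smooth)
print(int(enhanced.min()), int(enhanced.max()))  # 0 255
```

The input intensities lie in the narrow 90–159 band; after equalization they span the full 0–255 range, which is the contrast-stretching effect the text describes.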
Fig. 7 Image processing step outcomes for the four classes of diseases: a DME, b CNV, c drusen, and d normal

3.4 Deep learning models

Our research has used three different CNN-based model architectures and compared their results on the selected dataset. The CNN-based architecture was chosen for this problem because it demonstrates excellent performance and accurate results in computer vision problems and image classification among the other deep neural network architectures [42]. The benefits of CNN-based models over conventional feed-forward neural network models include fewer parameters and connections and faster training [43].

In Fig. 9, the three models are based on different numbers of convolutional layers, max-pooling, and fully connected dense layers, as explained below:

1. Five-layered CNN model This model has five CNN layers: one input CNN layer with three input channels and four hidden CNN layers, all with ReLU (rectified linear unit) activation. The outputs of the first, second, fourth, and fifth CNN layers were fed to a max pool layer with a filter size of 2 × 2. The kernel size applied to the image has a dimension of 3 × 3. The required padding and stride were set to one. The final output of the 2D CNN layers was flattened, and the extracted features were fed to a block of three fully connected layers with ReLU activation. Finally, the log-softmax probability was calculated and used for further computations. A dropout with a probability of 0.4 was used to avoid overfitting.
Fig. 8 Final processed images after retinal structure edge and layer enhancement
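The feature-map sizes quoted for these models follow directly from the stated hyperparameters (3 × 3 kernels with padding and stride of one, which preserve spatial size, and 2 × 2 max pooling, which halves it). A small helper with hypothetical names reproduces them from the 128 × 128 input:

```python
def conv_out(size, kernel=3, padding=1, stride=1):
    """Spatial size after a convolution: floor((size + 2p - k) / s) + 1."""
    return (size + 2 * padding - kernel) // stride + 1

def pool_out(size, window=2):
    """Spatial size after non-overlapping 2 x 2 max pooling."""
    return size // window

size = 128                    # images are center-cropped to 128 x 128
for _ in range(4):            # four conv+pool stages (as in the seven-layered model)
    size = pool_out(conv_out(size))
print(size)                   # 8 -> matches the stated 48 x 8 x 8 block output

size = 128
for _ in range(5):            # five conv+pool stages (as in the nine-layered model)
    size = pool_out(conv_out(size))
print(size)                   # 4 -> matches the stated 64 x 4 x 4 block output
```

With these settings each convolution leaves the spatial size unchanged, so only the pooling stages shrink the maps: 128 → 64 → 32 → 16 → 8 after four pools, and → 4 after a fifth.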
2. Seven-layered CNN model: The second model was developed following a similar architecture as the previous one, with an increased number of hidden CNN layers. This model used CNN-based blocks for better feature extraction and applied a max pool layer to the output of the convolutional blocks. There are four CNN blocks; the first and second blocks consist of only a single CNN layer. The third block consists of three CNN layers, while the fourth has two CNN layers. Each block has an output max pool layer with 2 × 2 filter dimensions. The output of these CNN blocks has dimensions of 48 × 8 × 8, which is fed to a block of three fully connected dense layers.

3. Nine-layered CNN model: The third model has nine CNN layers, more than the previous models. The block model architecture was used for training the dataset. The first and second blocks consist of a single CNN layer. The third block consists of three CNN layers, while the fourth and fifth blocks have two CNN layers each. The five-block architecture generates an output with dimensions of 64 × 4 × 4, which is fed to the fully connected dense layers.

The first convolutional layer acts as the input layer and converts each image into a vector. The convolutional layers extract spatial and temporal features by applying different filter kernels over the entire image. These filters slide over the image, performing element-wise multiplication of the filter weights with the image pixel values. These values are then summed up for each filter stride to generate a new activation or feature map, which is input to the hidden CNN layers. The hidden CNN layers then refine the feature extraction and increase the depth of the activation maps. The output is fed through ReLU activation to introduce nonlinearity for better performance. Then, the output is fed to the pooling layer (max pool) to reduce the feature map's dimensionality. The last layer of the model comprises fully connected dense neural networks that use these generated features and classify them. The final log-softmax probability output is used to compute the error in prediction using the defined NLLLoss (negative log-likelihood) criterion and to backpropagate the error through the network for gradient weight tuning of the CNN and fully connected layers with an Adam optimizer with a learning rate of 10⁻³.

In our model, the dropout regularization technique and an early stopping algorithm are used to avoid overfitting. Table 3 provides the CNN parameters for initialization. Fifteen epochs are used to train the model with a batch size of 84. Each epoch comprises 880 steps, for a total of 13,440 training steps. The current-step model parameters were tested on the validation dataset after every 20th step during training for the top-1 class prediction to evaluate the model's performance. The finally trained models were then used to evaluate the testing dataset, and statistical analysis was conducted on the observed results.

In a CNN, a set of inputs from the training data is mapped to a set of outputs. A neural network has many unknown weights, so the perfect weights cannot be calculated directly. The problem of learning is instead treated as a search or optimization problem, and the model may use an algorithm to navigate the space of possible sets of weights to make useful predictions.

Optimizers are algorithms used to modify the neural network parameters, such as the weights and learning rate, to reduce losses. Gradient descent is the most widely used optimization algorithm. The term "gradient" refers to an error gradient: the model makes predictions with a given set of weights, and the error of those predictions is calculated. The gradient descent algorithm then changes the weights so that the next evaluation reduces the error; in other words, the optimization algorithm navigates down the gradient (or slope) of error. This algorithm is employed in linear regression and classification algorithms. ADAM is among the most efficient optimizer algorithms and finds a learning rate for each model attribute. The learning rate is the parameter that defines how the model responds to the error estimated after the weights are updated. ADAM considers the exponentially decaying average of gradients (such as momentum), termed the first moment, and of squared gradients, termed the second moment; hence the name ADAptive Moment. Because the moment estimates start at zero, they are biased toward zero; bias-corrected gradient and squared-gradient estimates are therefore computed, and finally the weights are updated.

The function that is minimized or maximized is referred to as a criterion; while it is being minimized, it can also be called the cost function, error function, or loss function. The training loss, defined as the error or difference between the true and predicted values, is the NLL here. We pass in the raw output from the model's final layer because the NLL loss in PyTorch expects log probabilities, which are useful for obtaining predictions.
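Given the kernel, padding, stride, and pooling settings described above, the quoted feature-map sizes (48 × 8 × 8 and 64 × 4 × 4 from 128 × 128 inputs) can be checked with the standard output-size formula; a small pure-Python sketch:

```python
def conv_out(n, k=3, p=1, s=1):
    # output spatial size of a conv layer: floor((n + 2p - k) / s) + 1
    return (n + 2 * p - k) // s + 1

def pool_out(n, k=2, s=2):
    # output spatial size of a max pool layer
    return (n - k) // s + 1

n = 128                        # 128 x 128 input images (Table 3)
for _ in range(4):             # seven-layer model: four blocks, each ending in a 2 x 2 max pool
    n = pool_out(conv_out(n))  # a 3 x 3 kernel with padding 1, stride 1 preserves the size
print(n)                       # 8, consistent with the 48 x 8 x 8 block output
```

With five pooled blocks the same arithmetic gives a spatial size of 4, matching the nine-layer model's 64 × 4 × 4 output.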
Table 3 Parameter initialization for CNN

  Parameter               Value/type
  Weight initialization   Random
  Image input size        128 × 128
  Loss criterion          NLLLoss
  Optimizer               ADAM
  Learning rate           0.001
  Batch size              84
  Network output          Softmax probability of CNV, DME, drusen, and normal

3. Define the loss criterion, optimizer, learning rate, and number of epochs.
4. For each epoch, take a batch of the training dataset and:

   (a) Initialize the optimizer, input, and labels.
   (b) Pass the input image to the model.
   (c) Compute the training loss and backpropagate the loss to update the weights.
   (d) After each kth step, validate the trained model using the validation dataset; compute the validation loss and accuracy.
   (e) Store the results in an array.
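Steps 3 and 4 above can be sketched as a framework-agnostic loop. Here `model` and `optimizer_step` are hypothetical stand-ins (the paper's actual implementation uses PyTorch); the loss follows the NLLLoss-on-log-softmax pairing described in the text:

```python
import math

def log_softmax(scores):
    # numerically stable log-softmax over one sample's class scores
    mx = max(scores)
    lse = mx + math.log(sum(math.exp(s - mx) for s in scores))
    return [s - lse for s in scores]

def nll_loss(log_probs, target):
    # NLLLoss criterion: negative log-probability of the true class
    return -log_probs[target]

def run_training(model, optimizer_step, train_batches, val_batches,
                 epochs=15, validate_every=20):
    history, step = [], 0                         # (e) results array
    for _ in range(epochs):
        for x, y in train_batches:                # (a) inputs and labels
            log_probs = log_softmax(model(x))     # (b) forward pass
            loss = nll_loss(log_probs, y)         # (c) training loss
            optimizer_step(loss)                  # (c) backprop + weight update (stubbed here)
            step += 1
            if step % validate_every == 0:        # (d) periodic validation
                v = sum(nll_loss(log_softmax(model(vx)), vy)
                        for vx, vy in val_batches) / len(val_batches)
                history.append((step, v))
    return history
```

In the paper's setup, `validate_every=20` mirrors the validation after every 20th training step.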
DL‑CNN‑based approach with image processing techniques for diagnosis of retinal diseases
study while evaluating the performance of our different models and used the confusion matrix to compute the same.

The values of true positive (TP), true negative (TN), false positive (FP), and false negative (FN) can be derived from the confusion matrix and are explained below:

• TP: correctly predicted positive class;
• FP: incorrectly predicted positive class;
• FN: incorrectly predicted negative class;
• TN: correctly predicted negative class.

2. Accuracy: It is the measure of how accurately the classifier can classify the data. The following equation provides the accuracy:

   Accuracy (range 0−1) = (TP + TN)/(FP + FN + TP + TN). (3)

3. Precision: It defines the relation of the total positive results that are correct to the classifier's total positive results, as provided by

   Precision (range 0−1) = TP/(TP + FP). (4)

4. Sensitivity (or recall): It corresponds to the TP rate of the considered class and is computed using

   Sensitivity (range 0−1) = TP/(TP + FN). (5)

5. Specificity: It corresponds to the TN rate of the considered class (i.e., the proportion of negatives that have been correctly identified). The following equation provides the specificity:

   Specificity (range 0−1) = TN/(TN + FP). (6)

6. F1 score: It combines the precision and recall values (their harmonic mean) and is computed using the following equation:

   F1 score = (2 × TP)/(FP + FN + (2 × TP)). (7)

7. Minimum training loss: It is the minimum amount of error on the training set of data during the training steps.

8. Minimum validation loss: It is the minimum error after running the validation set of data through the trained network.

9. Maximum validation accuracy: It is the measure of how accurate the model's prediction is compared with the true data after running the validation set of data.

10. Minimum testing loss: It is the minimum loss calculated on the testing dataset using the trained model.

11. Model size and training time: The time taken by the program to train the model and test the results defines the training time in minutes. The size of the memory used by the CPU to store the model weights defines the model size in megabytes.

12. Kappa value: Cohen's kappa statistic is an important overall accuracy measurement parameter for multiclass classification problems with data imbalance. In such cases, other measures may not provide a complete performance picture of the classifier, because kappa considers the possibility of the outcomes occurring by chance.

5.2 Parameter‑based evaluation

The system is trained on three machine learning models, namely, the five-layer CNN model, the seven-layer CNN model, and the nine-layer model. Figure 10 illustrates the confusion matrix for each of the CNN models. The comparison aims to find the most suitable and efficient model for our dataset by comparing the performance metrics and visualization results from all the CNN layers. The parameter-based model evaluation's essential measures are precision, accuracy, F1 score, kappa value, losses, and the confusion matrix. Table 4 reports the outcomes of our study.

The epochs at 15/13,440, learning rate at 0.001, and input size at 0.19 MB are maintained for all three models. The minimum training loss is 0.039854, the minimum validation loss is 0.149069, and the minimum test loss is 0.005284 in the five-layer CNN, which is similar to the losses seen in the other layered models. The nine-layer CNN model is memory-optimized. The overall accuracy is high for the five- and seven-layer CNN models at 96.54% and 96.50%, respectively. The maximum validation accuracy, used to estimate the model's prediction capability, is higher for the five-layer CNN at 96.30% than for the other models. The F1 score, which signifies the accuracy by finding the balance between the precision and recall values, is considerably low for the five-layer CNN but is best for the seven-layer CNN. The trainable parameters increase with the number of layers; hence, considerable differences exist between the nine-layer CNN and the rest of the models. The kappa coefficient identifies the relationship between the expected accuracy and the observed accuracy in a confusion matrix. The value is high for the five-layer CNN (0.949) and the seven-layer CNN (0.948). The precision is calculated for all four classes, and the nine-layer CNN has the highest precision for CNV at 98.53%. The seven-layer CNN has the highest precision for DME, drusen, and normal at 97.13%, 96.86%, and 98.31%, respectively. The recall value indicates the accuracy with which the model detects all the classes in the dataset. The recall accuracy is highest in the seven-layer
CNN in CNV and normal detection at 99.58% and 98.03%, respectively, whereas the five-layer CNN performs best in DME and drusen with accuracy rates of 98.93% and 96.38%, respectively.

The model training time is about the same for all three model structures and almost double for the image-enhanced dataset. In the five-layer CNN model, the total trainable parameters were 5,558,116, which took 233.4027 min of training time. The model's estimated total size was 26.01 MB (0.19 MB input size, 4.62 MB forward/backward pass size, and 21.2 MB parameter size). In the seven-layer model, the total trainable parameters were 5,569,684, which took 238.8658 min of training time. The model's total estimated size was 25.61 MB (0.19 MB input size, 4.18 MB forward/backward pass size, and 21.25 MB parameter size), which is slightly less than that of the five-layer model. The last model, with nine layers, has 700,636 total trainable parameters, which took 230.1585 min of training time. The model's total estimated size was 7.08 MB (0.19 MB input size, 4.22 MB forward/backward pass size, and 2.67 MB parameter size). This model may be best suited to run in environments with memory limitations, with some trade-off in performance. In the three models devised, a high specificity value of 0.98 and a sensitivity value of 0.96 were accomplished. The F1 score observed in the seven-layer model was the highest among the three models.

The training log shows that the model training was significant for ten epochs, and beyond that, it started to overfit (Fig. 11). The increase in accuracy was also reduced. Thus, epochs were reduced in the final model; hence, computations
Table 4 Performance of different models at the B-scan level for detection of four classes

  Comparison parameters            Five-layer CNN   Seven-layer CNN   Nine-layer CNN   With image processing
  Epochs/steps                     15/13,440        15/13,440         15/13,440        10/8,960
  Learning rate                    0.001            0.001             0.001            0.001
  Min training loss                0.039854         0.05164           0.084115         0.085634
  Min validation loss              0.149069         0.14923           0.158438         0.154837
  Max validation accuracy          96.30%           95.36%            95.55%           95.14%
  Accuracy (overall)               96.54%           96.50%            96.05%           97.14%
  Precision (CNV)                  97.19%           94.93%            98.53%           98.34%
  Precision (DME)                  96.69%           97.13%            94.03%           97.19%
  Precision (drusen)               96.10%           96.86%            91.54%           88.89%
  Precision (normal)               95.71%           98.31%            94.83%           98.03%
  Recall accuracy (CNV)            98.53%           99.58%            97.72%           98.93%
  Recall accuracy (DME)            98.93%           98.22%            96.29%           95.26%
  Recall accuracy (drusen)         96.38%           79.80%            85.22%           93.11%
  Recall accuracy (normal)         83.04%           98.03%            97.32%           96.64%
  Sensitivity                      0.9649           0.9605            0.9654           0.9447
  Specificity                      0.9883           0.9868            0.9884           0.9816
  F1 score                         89.7995          95.33464          94.43444         95.80451
  Kappa                            0.949            0.948             0.941            0.957
  Model training time (minutes)    233.4027         238.8658          230.1585         420.3642
  Min test loss                    0.005284         0.005602          0.008608         0.006861
  Total trainable parameters       5,558,116        5,569,684         700,036          5,569,684
  Input size (MB)                  0.19             0.19              0.19             0.19
  Forward/backward pass size (MB)  4.62             4.18              4.22             4.18
  Params size (MB)                 21.2             21.25             2.67             21.25
  Estimated total size (MB)        26.01            25.61             7.08             25.61
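As a sanity check, the "Params size" row in Table 4 follows directly from the trainable-parameter counts, assuming each weight is stored as a 32-bit (4-byte) float:

```python
def params_size_mb(n_params, bytes_per_param=4):
    # weight-storage footprint in MB (MiB), assuming 32-bit floats
    return n_params * bytes_per_param / 2**20

# trainable-parameter counts from Table 4
for name, n in [("five-layer", 5_558_116),
                ("seven-layer", 5_569_684),
                ("nine-layer", 700_036)]:
    print(f"{name}: {params_size_mb(n):.2f} MB")
```

This reproduces the reported 21.2, 21.25, and 2.67 MB parameter sizes.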
and time also decreased. Figure 11 illustrates that the five-layer CNN model is slightly overfitting.

5.3 Visualizing model performance

The deep learning-based classification performed is a black-box AI system for automated decision-making, which uses machine learning techniques to map feature data into classes without uncovering the reasons. Different visualization techniques are used to analyze the performance and understand the decision-making of these models. For this purpose, the popular open-source matplotlib and OpenCV libraries were utilized. The variation of the training loss and validation loss (NLLLoss with softmax envelope) over successive steps during training was plotted, demonstrating how the two losses vary with each other. Both losses decrease with successive steps, reflecting that the predictions are becoming increasingly accurate and that updating the weights moves the losses in the direction of the minima. This phenomenon can also show whether the model is overfitting. Next, a comparison of the variation of validation loss and accuracy over the training steps is performed (Fig. 11). We can see a subsequent increase in the validation accuracies with a decrease in validation loss. The filter outputs of all the CNN layers and fully connected layers show how the model views the image internally as it passes down through multiple layers. The confusion matrix is plotted to calculate multiple parameters and evaluate the model performance. Finally, 50 random example images from the test dataset were taken, and their prediction for each model was observed (Fig. 12). Each correct prediction was marked in green and each incorrect prediction in red. Running different samples multiple times for each model showed that most predictions were correct.

This research focuses on three CNN models with five, seven, and nine layers. Lower numbers, such as three- or four-layer models, were analyzed and showed poor performance in extracting fine features, because the input OCT images are similar, with subtle differences in structure. The results showed that the higher-layered models performed better in identifying information such as the layered structures of the retina and cavity. We also wanted to study and explore memory- and time-efficient solutions. We added layers to our model and observed that performance decreased for more than nine layers. We observed a decrease in the gap between the training and the validation loss with
Fig. 11 Performance of the five-, seven-, and nine-layer models and the image-enhanced CNN model: a training and validation losses over successive training steps; b validation loss and validation accuracy over successive training steps
the increase in the number of CNN layers in the successive models (Fig. 11). The saturation in losses is achieved at later stages with the increase in the number of layers. The slope is more gradual when the two models are compared in Fig. 13, which helps the model learn over greater epochs and requires more features for classification.

for CNV, DME, and normal classes can be achieved than others, with a decreased value for drusen. This model also had poor performance in terms of sensitivity, which decreased to 0.94 from 0.96 compared with the other models. The training time increased due to the additional preprocessing of the image.
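All of the per-model values compared in this section (Eqs. 3–7 and the kappa statistic) derive from confusion-matrix counts; a pure-Python sketch, using hypothetical counts rather than the paper's data:

```python
def class_metrics(tp, fp, fn, tn):
    """Per-class metrics from confusion-matrix counts (Eqs. 3-7)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)       # recall, true-positive rate
    specificity = tn / (tn + fp)       # true-negative rate
    f1 = 2 * tp / (2 * tp + fp + fn)
    return accuracy, precision, sensitivity, specificity, f1

def cohen_kappa(cm):
    """Cohen's kappa from a square confusion matrix (rows: true, cols: predicted)."""
    n = sum(sum(row) for row in cm)
    observed = sum(cm[i][i] for i in range(len(cm))) / n
    expected = sum(sum(cm[i]) * sum(row[i] for row in cm)
                   for i in range(len(cm))) / n ** 2
    return (observed - expected) / (1 - expected)

# hypothetical one-vs-rest counts for a single class, for illustration only
acc, prec, sens, spec, f1 = class_metrics(tp=95, fp=5, fn=4, tn=296)
```

For a multiclass problem such as this four-class task, each class's TP/FP/FN/TN are taken one-vs-rest from the 4 × 4 confusion matrix, and kappa is computed on the full matrix.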
are evaluated, we could identify the detrimental parameters affecting the algorithm's operations. The seven-layer CNN model is the one with balanced statistics and is the one suggested for use by our proposed work. The proposed approach has an accuracy of 96.5%. The primary goal is to help patients and eye specialists by making an automated and fast diagnosis with increased accuracy and performance and quicker, more efficient detection, which can greatly benefit patients.

7 Limitation and future scope

This research successfully demonstrated the detection of four ocular diseases from OCT images with an accuracy of 96%. Certain limitations of this study are as follows: (1) the dataset selected for the project had scans collected from a single demographic region and did not contain diversity in terms of the eye structure observed in people of different races; (2) the images taken for this project specifically included OCT scans, while for other diseases the scans may not be OCT but fundus photographs or angiographic pictures, which may require the project to be trained again for such types of images; (3) the scans taken from the dataset were all acquired with the same scanning settings or techniques. Therefore, the efficacy of this model for different systems is still not fully established.

We can further improve this work by exploring various options for dimensionality reduction. In this work, we reduced the size of the input images to 128 × 128 pixels to minimize the input parameters and employed max pool layers in the models, which also decreases the dimensions of the feature matrix over successive steps.

Further extension of the model may include analyzing other ocular pathology classes, such as diabetic retinopathy, AMD, and glaucoma. The model currently operates on OCT images for the classification; however, it would be beneficial to modify the model to operate on OCT angiography and fundus photographs. Models can also be developed or explored to consider the biological variations in eye and retina structure.

Acknowledgements This study was supported by Taif University Researchers Supporting Project (number: TURSP-2020/10), Taif University, Taif, Saudi Arabia.

References

1. Yang, X., et al.: Deep relative attributes. IEEE Trans. Multimedia 18(9), 1832–1842 (2016)
2. Hossain, M.S., Muhammad, G., Alamri, A.: Smart healthcare monitoring: a voice pathology detection paradigm for smart cities. Multimedia Syst. 25(5), 565–575 (2019)
3. Hossain, M.S., Amin, S.U., Muhammad, G., Sulaiman, M.: Applying deep learning for epilepsy seizure detection and brain mapping visualization. ACM Trans. Multimedia Comput. Commun. Appl. (ACM TOMM) 15(1s) (2019)
4. Ţălu, S.-D., Ţălu, Ş.: Use of OCT imaging in the diagnosis and monitoring of age related macular degeneration. In: Ying, G.-S. (ed.) Age Related Macular Degeneration – The Recent Advances in Basic Research and Clinical Care. IntechOpen (2012). https://doi.org/10.5772/33410
5. Schmidt-Erfurth, U., Klimscha, S., Waldstein, S.M., Bogunović, H.: A view of the current and future role of optical coherence tomography in the management of age-related macular degeneration. Eye (Lond.) 31(1), 26–44 (2017). https://doi.org/10.1038/eye.2016.227
6. Srinivasan, P.P., Kim, L.A., Mettu, P.S., Cousins, S.W., Comer, G.M., Izatt, J.A., Farsiu, S.: Fully automated detection of diabetic macular edema and dry age-related macular degeneration from optical coherence tomography images. Biomed. Opt. Express 5(10), 3568–3577 (2014). https://doi.org/10.1364/BOE.5.003568
7. Karri, S.P., Chakraborty, D., Chatterjee, J.: Transfer learning based classification of optical coherence tomography images with diabetic macular edema and dry age-related macular degeneration. Biomed. Opt. Express 8(2), 579–592 (2017). https://doi.org/10.1364/BOE.8.000579
8. Wang, Y., Zhang, Y., Yao, Z., Zhao, R., Zhou, F.: Machine learning based detection of age-related macular degeneration (AMD) and diabetic macular edema (DME) from optical coherence tomography (OCT) images. Biomed. Opt. Express 7(12), 4928–4940 (2016). https://doi.org/10.1364/BOE.7.004928
9. Alsaih, K., Lemaitre, G., Rastgoo, M., Massich, J., Sidibé, D., Meriaudeau, F.: Machine learning techniques for diabetic macular edema (DME) classification on SD-OCT images. Biomed. Eng. Online 16(1), 68 (2017). https://doi.org/10.1186/s12938-017-0352-9
10. Choi, J.Y., Yoo, T.K., Seo, J.G., Kwak, J., Um, T.T., Rim, T.H.: Multi-categorical deep learning neural network to classify retinal images: a pilot study employing small database. PLoS ONE 12(11), e0187336 (2017). https://doi.org/10.1371/journal.pone.0187336
11. Hussain, A., Bhuiyan, A., Luu, C.D., Theodore Smith, R., Guymer, R.H., Ishikawa, H., et al.: Classification of healthy and diseased retina using SD-OCT imaging and Random Forest algorithm. PLoS ONE 13(6), e0198281 (2018). https://doi.org/10.1371/journal.pone.0198281
12. Kermany, D.S., Goldbaum, M., Cai, W., Valentim, C., Liang, H., Baxter, S.L., McKeown, A., Yang, G., Wu, X., Yan, F., Dong, J., Prasadha, M.K., Pei, J., Ting, M., Zhu, J., Li, C., Hewett, S., Dong, J., Ziyar, I., Shi, A., et al.: Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell 172(5), 1122–1131.e9 (2018). https://doi.org/10.1016/j.cell.2018.02.010
13. Schlegl, T., Waldstein, S.M., Bogunovic, H., Endstraßer, F., Sadeghipour, A., Philip, A.M., Podkowinski, D., Gerendas, B.S., Langs, G., Schmidt-Erfurth, U.: Fully automated detection and quantification of macular fluid in OCT using deep learning. Ophthalmology 125(4), 549–558 (2018). https://doi.org/10.1016/j.ophtha.2017.10.031
14. Das, S., Malathy, C.: Survey on diagnosis of diseases from retinal images. J. Phys. Conf. Ser. 1000, 012053 (2018). https://doi.org/10.1088/1742-6596/1000/1/012053
15. Hossain, M.S., Muhammad, G.: Deep learning based pathology detection for smart connected healthcares. IEEE Netw. 34(6), 120–125 (2020)
16. Pandey, S., Solanki, A.: Music instrument recognition using deep convolutional neural networks. Int. J. Inf. Technol. 13(3), 129–149 (2019)
17. Rajput, R., Solanki, A.: Real-time analysis of tweets using machine learning and semantic analysis. In: International Conference on Communication and Computing Systems (ICCCS2016), Taylor and Francis, Dronacharya College of Engineering, Gurgaon, 9–11 Sept, vol. 138, issue 25, pp. 687–692 (2016)
18. Ahuja, R., Solanki, A.: Movie recommender system using K-Means clustering and K-Nearest Neighbor. In: Confluence-2019: 9th International Conference on Cloud Computing, Data Science & Engineering, Amity University, Noida, vol. 1231, no. 21, pp. 25–38 (2019)
19. Tayal, A., Kose, U., Solanki, A., Nayyar, A., Saucedo, J.A.M.: Efficiency analysis for stochastic dynamic facility layout problem using meta-heuristic, data envelopment analysis and machine learning. Comput. Intell. 36(1), 172–202 (2020)
20. Singh, T., Nayyar, A., Solanki, A.: Multilingual opinion mining movie recommendation system using RNN. In: Singh, P., Pawłowski, W., Tanwar, S., Kumar, N., Rodrigues, J., Obaidat, M. (eds.) Proceedings of First International Conference on Computing, Communications, and Cyber-Security (IC4S 2019). Lecture Notes in Networks and Systems, vol. 121. Springer, Singapore (2020). https://doi.org/10.1007/978-981-15-3369-3_44
21. Lemaître, G., Rastgoo, M., Massich, J., Cheung, C.Y., Wong, T.Y., Lamoureux, E., Milea, D., Mériaudeau, F., Sidibé, D.: Classification of SD-OCT volumes using local binary patterns: experimental validation for DME detection. J. Ophthalmol. 2016, 3298606 (2016). https://doi.org/10.1155/2016/3298606
22. Tasnim, N., Hasan, M., Islam, I.: Comparisonal study of deep learning approaches on retinal OCT image (2019)
23. Xiaoming, L., Ke, X., Peng, Z., Jiannan, C.: Edge detection of retinal OCT image based on complex shearlet transform. IET Image Process. 13(10), 1686–1693 (2019). https://doi.org/10.1049/iet-ipr.2018.6634
24. Bhatt, C., Kumar, I., Vijayakumar, V., et al.: The state of the art of deep learning models in medical science and their challenges. Multimedia Syst. (2020). https://doi.org/10.1007/s00530-020-00694-1
25. Kim, P.W.: Image super-resolution model using an improved deep learning-based facial expression analysis. Multimedia Syst. (2020). https://doi.org/10.1007/s00530-020-00705-1
26. Nie, W., Cao, Q., Liu, A., et al.: Convolutional deep learning for 3D object retrieval. Multimedia Syst. 23, 325–332 (2017). https://doi.org/10.1007/s00530-015-0485-2
27. Zhao, F., Chen, Y., Hou, Y., et al.: Segmentation of blood vessels using rule-based and machine-learning-based methods: a review. Multimedia Syst. 25, 109–118 (2019). https://doi.org/10.1007/s00530-017-0580-7
28. Rajalingam, B., Al-Turjman, F., Santhoshkumar, R., et al.: Intelligent multimodal medical image fusion with deep guided filtering. Multimedia Syst. (2020). https://doi.org/10.1007/s00530-020-00706-0
29. Xi, X., Meng, X., Yang, L., et al.: Automated segmentation of choroidal neovascularization in optical coherence tomography images using multi-scale convolutional neural networks with structure prior. Multimedia Syst. 25, 95–102 (2019). https://doi.org/10.1007/s00530-017-0582-5
30. Baroni, M., Fortunato, P., Torre, A.: Towards quantitative analysis of retinal features in optical coherence tomography. Med. Eng. Phys. 29, 432–441 (2007). https://doi.org/10.1016/j.medengphy.2006.06.003
31. Shaw, P.R., Manickam, S., Burnwal, S., Kalyanakumar, V.: Study of removal of speckle noise from OCT images. 35, 313–317 (2015)
32. Kalyanakumar, V., Manickam, S.: Performance evaluation of speckle reduction filters for optical coherence tomography images. Int. J. Pharm. Bio. Sci. 6, B837–B845 (2015)
33. Saya Nandini Devi, M., Santhi, S.: Improved OCT image enhancement using CLAHE. Int. J. Innov. Technol. Explor. Eng. (IJITEE) 8(11) (2019)
34. Setiawan, A., Mengko, T., Santoso, O., Suksmono, A.: Color retinal image enhancement using CLAHE, pp. 1–3 (2013). https://doi.org/10.1109/ICTSS.2013.6588092
35. Dodo, B.I., Li, Y., Liu, X., Dodo, M.I.: Level set segmentation of retinal OCT images (2019)
36. Pekala, M., Joshi, N., Liu, T., Bressler, N.M., DeBuc, D.C., Burlina, P.: Deep learning based retinal OCT segmentation. Comput. Biol. Med. 114, 103445 (2019). https://doi.org/10.1016/j.compbiomed.2019.103445
37. Luo, S., Yang, J., Gao, Q., Zhou, S., Zhan, C.A.: The edge detectors suitable for retinal OCT image segmentation. J. Healthc. Eng. 2017, 1 (2017)
38. González-López, A., de Moura, J., Novo, J., Ortega, M., Penedo, M.G.: Robust segmentation of retinal layers in optical coherence tomography images based on a multistage active contour model. Heliyon 5, e01271 (2019). https://doi.org/10.1016/j.heliyon.2019.e01271
39. Somfai, G.M., Jozsef, M., Chetverikov, D., DeBuc, D.: Active contour detection for the segmentation of optical coherence tomography images of the retina. Invest. Ophthalmol. Vis. Sci. 55(13), 4793 (2014)
40. Perez-Cisneros, M., Cruz-Aceves, I., Avina-Cervantes, J.G., Lopez-Hernandez, J.M., Garcia-Hernandez, M.G., Torres-Cisneros, M., Estrada-Garcia, H.J., Hernandez-Aguirre, A.: Automatic image segmentation using active contours with univariate marginal distribution. Math. Probl. Eng. (2013). https://doi.org/10.1155/2013/419018
41. Mishra, A., Wong, A., Bizheva, K., et al.: Intra-retinal layer segmentation in optical coherence tomography images. Opt. Express 17(26), 23719–23728 (2009)
42. Sultana, F., Sufian, A., Dutta, P.: Advancements in image classification using convolutional neural network. In: 2018 Fourth International Conference on Research in Computational Intelligence and Communication Networks (ICRCICN), pp. 122–129 (2018)
43. Hossain, M.S., Muhammad, G.: Cloud-based collaborative media service framework for healthcare. Int. J. Distrib. Sens. Netw. 2014, 11 (2014)
44. Feng, L., Chen, H., Liu, Z., et al.: Deep learning-based automated detection of retinal diseases using optical coherence tomography images. Biomed. Opt. Express 10, 6204–6226 (2019)

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.