Research Article
Hematologic Cancer Detection Using White Blood Cancerous
Cells Empowered with Transfer Learning and Image Processing
Muhammad Umar Nasir,1 Muhammad Farhan Khan,2 Muhammad Adnan Khan,3,4
Muhammad Zubair,5 Sagheer Abbas,6 Meshal Alharbi,7 and Md Akhtaruzzaman8
1 Department of Computer Science, Bahria University, Lahore Campus, Lahore 54000, Pakistan
2 Department of Forensic Sciences, University of Health Sciences, Lahore 54000, Pakistan
3 Riphah School of Computing and Innovation, Faculty of Computing, Riphah International University, Lahore Campus, Lahore 54000, Pakistan
4 School of Information Technology, Skyline University College, University City Sharjah, Sharjah, UAE
5 Faculty of Computing, Riphah International University, Islamabad 45000, Pakistan
6 School of Computer Science, National College of Business Administration & Economics, Lahore 54000, Pakistan
7 Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Alkharj 11942, Saudi Arabia
8 Department of Computer Science and Engineering, Asian University of Bangladesh, Ashulia, Dhaka-1230, Bangladesh
Received 25 December 2022; Revised 23 March 2023; Accepted 28 March 2023; Published 29 May 2023
Copyright © 2023 Muhammad Umar Nasir et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Lymphoma and leukemia are fatal forms of blood cancer that give rise to other diseases and affect all age groups, male and female, and these disastrous malignancies cause a high death rate. Both lymphoma and leukemia are associated with the damage and proliferation of immature lymphocytes, monocytes, neutrophils, and eosinophil cells. Early prediction and treatment of blood cancer is therefore a major issue for survival rates in the health sector. Nowadays, various manual techniques are used to analyze and predict blood cancer from microscopic medical reports of white blood cell images; these are slow and unreliable for prediction and contribute to a major share of deaths. Manual prediction and analysis of eosinophils, lymphocytes, monocytes, and neutrophils are very difficult and time-consuming. Previous studies have used numerous deep learning and machine learning techniques to predict blood cancer, but these studies still have limitations. In this article, we therefore propose a deep learning model empowered with transfer learning and image processing techniques to improve the prediction results. The proposed transfer learning model empowered with image processing incorporates different levels of prediction, analysis, and learning procedures and employs different learning criteria, such as learning rate and epochs. The proposed model uses numerous transfer learning models with varying parameters for each model, together with cloud techniques, to choose the best prediction model, and it uses an extensive set of performance measures and procedures, incorporating image processing techniques, to predict the white blood cells that cause cancer. After extensive experiments with AlexNet, MobileNet, and ResNet, both with and without image processing and under numerous learning criteria, stochastic gradient descent with momentum combined with AlexNet and image processing outperformed the rest, with the highest prediction accuracy of 97.3% and a misclassification rate of 2.7%. The proposed model gives good results and can be applied for smart diagnosis of blood cancer using eosinophils, lymphocytes, monocytes, and neutrophils.
2 Journal of Healthcare Engineering
support vector machine was utilized to separate normal and blast cells. MoradiAmin et al. [20] separated background lymphocytes using C-means clustering to increase the identification of acute lymphocytic leukemia cells. Following the extraction of diverse shape-based data, they used hierarchical clustering to minimize the number of parameters before providing them to the SVM classifier for normal and blast cell categorization.
The authors of the study employed computer vision techniques to overcome the difficulties of manual counting. In this situation, the picture was preprocessed to remove the chance of distortion, and the proportion of white blood cells to red blood cells was computed to determine whether the image is normal or abnormal (Salihah et al. [21]). Horie et al. [22] proved the diagnostic effectiveness of deep learning approaches such as CNN for esophagitis, including melanoma and adenocarcinoma, with a sensitivity of 98%. The authors [23] developed a CNN model for leukemia prediction that featured three key steps: CNN comparison stretching and edge extraction, followed by transfer learning depth feature extraction. In [24], the authors developed a method for distinguishing tainted pictures from healthy ones that uses a convolutional neural network. Furthermore, a clustering method using the EM approach is utilized to compute the rate of infection spread thus far. The study [25] proposed computer-aided diagnostic methods for leukemia cancer classification using an ensembled SVM learning approach. The authors proposed a supervised machine learning approach to identify blood cancers and then categorise them using a fully integrated network [26].
The study offered classification models for distinguishing microscopic pictures of blood from leukemia and lymphoma patients from those who were not [27]. A pretrained CNN named AlexNet, as well as numerous additional classifiers, is utilized to extract the features. In comparison to other classifiers, the support vector machine fared better in tests. In the second model, AlexNet is used for extraction and classification only when the results demonstrate that it outperforms other models on various performance criteria.
The authors of this study [28] presented a computational method for detecting acute leukemia and lymphoma. To begin, focusing was applied to digital microscope pictures to decrease noise and blurring. Color, form, texture, and statistics were identified and classed as benign or malignant. Classification models based on K-nearest neighbors and naive Bayes were utilized. Experiments with sixty blood analyzers proved the effectiveness of the K-nearest neighbor (KNN), which had a 92.8% accuracy rate.
The authors of this study [29] created a mechanism for categorizing acute myelogenous leukemia cells into subgroups. Initially, cells were segmented using a color k-means technique. Following that, six statistical characteristics were retrieved and fed into a multiclass SVM classifier. The data yielded a maximum aiming accuracy of 87% and a maximum accuracy of 92.9%.
According to the study [30], a three-tier approach including extracting features, coding, and categorization is recommended. The goal of this approach was to evaluate whether or not a patient has leukemia or lymphoma based on a picture of a sample of blood from a particular patient. A thick structurally complex transformation was used in feature extraction. At the encoding layer, the size of the recovered feature vectors was lowered. Finally, a multiclass support vector machine classifier was used to do the classification. Experiments with four hundred samples resulted in an accuracy of 79.38%.
The study [31] established a classification system for acute myelogenous leukemia that segments grains using contour and k-signature methods. Then, utilizing the morphology, characteristics such as cell volume, cell color, and so on were retrieved. Studies on a dataset of one hundred pictures found that the SVM classifier had up to 92% classification accuracy.
The study [32] described an automated microscopic image-based technique for diagnosing leukemia and lymphoma. The technique begins by reducing noise and blur during preprocessing. The white blood cells were then separated using the k-means and Zack algorithms. Following that, chromatic, statistical, geometric, and textural elements were restored. Finally, to distinguish between healthy and unhealthy images, an SVM classifier was utilized. Trials on a dataset of twenty-seven pictures revealed a 93.57% classification accuracy.
The research [33] created a categorization system for three forms of acute leukemia: acute myeloid leukemia, acute multiple myeloma leukemia, and acute lymphocytic leukemia. Twelve characteristics were extracted by hand from picture samples. Finally, for classification, a K-nearest neighbor predictor was applied. Experiments using a sample of 350 photos yielded 86% reliability. The authors [34] developed a five-characteristic method for acute myelogenous leukemia categorization that improved picture contrast. An SVM classifier was used to categorize the data. Experiments with 51 photos provided a categorization accuracy of 93.5%.
To categorize acute lymphocytic leukemia and its subtypes, the authors [35] recommended using a CNN network termed a convolutional network. A dataset of 373 pictures was utilized for the evaluation, and an accuracy of 80% was reached. Experiments confirmed this method's superiority over a range of earlier techniques. The authors [36] used numerous deep learning approaches to predict blood cancer cells, including CNN and SVM, and they achieved 97.04% prediction accuracy.
Previous research on the prediction of blood cancer from white blood cells using machine learning and deep learning had significant drawbacks. The limitations of prior investigations are shown in Table 1. As Table 1 summarizes all previous research that predicted blood cancer using machine learning, deep learning, and transfer learning techniques, every study has its own limitations. This study therefore has the advantage of coping with all these limitations wisely during blood cancer prediction.

3. Materials and Methods
Table 1: Limitations of previous studies (it explains the results of previous studies and shows the previous studies' research gap).

Publications                   Methods                 Datasets          Accuracy (%)   Limitations
Pansombut et al. [35]          CNN                     Image (public)    80             (i) Low-ratio dataset; (ii) data image processing layer
Madhukar et al. [34]           SVM                     Image (public)    93.5           (i) Low-ratio dataset; (ii) minor image classes; (iii) data image processing layer
Supardi et al. [33]            KNN                     Image (public)    86             (i) Low-ratio dataset; (ii) data image processing layer
Patel and Mishra [32]          SVM                     Image (public)    93.57          (i) Data image processing layer
Laosai and Chamnongthai [31]   SVM                     Image (public)    92             (i) Low-ratio dataset; (ii) data image processing layer
Faivdullah et al. [30]         SVM                     Feature (public)  79.38          (i) Required handcrafted features
Setiawan et al. [29]           SVM, K-means            Image (public)    87             (i) Low-diverse dataset; (ii) low-ratio dataset; (iii) data image processing layer
Kumar et al. [28]              KNN, Naïve Bayes, CNN   Image (public)    92.8           (i) Low-ratio dataset; (ii) less number of classes
Loey et al. [27]               CNN, AlexNet            Image (public)    94.3           (i) Low-ratio dataset; (ii) data image processing; (iii) less number of classes
Figure 1: The proposed methodology for prediction of blood cancer using white blood cells empowered with transfer learning (this shows the research methodology of the current study and the overall process of how data are fetched, trained, and tested).
Figure 1 depicts an overview of the suggested approach for predicting blood cancer using transfer learning-enabled malignant white blood cells. The proposed process begins with data input from several hospital sources. After collecting all white blood cell (WBC) samples, data augmentation techniques were used to overcome the challenges of mining data samples for WBC classes. The proposed methodology depicts four major steps in the whole prediction process for blood cancer. In the first phase, the proposed framework collects data from the hospital, preprocesses the blood samples, divides all preprocessed samples into training and testing sample ratios, and stores these split samples in a private cloud for easy access at any step. The training phase imports training data samples from the private cloud and trains the AlexNet, ResNet, and MobileNet algorithms with stochastic gradient descent (SGD) with momentum, adaptive momentum estimation, and root mean square propagation algorithms. After training all algorithms of AlexNet, ResNet, and MobileNet and applying learning criteria techniques, if the learning criteria match the proposed framework's expectations, then the trained model is stored separately on each algorithm's private cloud. If the proposed model does not meet the learning criteria, then image processing techniques such as histogram equalization are applied, the model is trained again, and the learning criteria are checked once more. In the third phase, the semi-best-trained algorithms are chosen from all private clouds and stored in another private cloud for the further testing processes. In the last phase, known as the testing phase, blood samples are imported from the cloud, the best-performing trained model is imported from the model cloud, and the testing process is applied to predict the cancerous white blood cells. Finally, the proposed framework used numerous statistical metrics [37–43], e.g., classification accuracy (CA), negative predicted value (NPV), sensitivity, specificity, F1-score, miss-classification rate (MCR), positive predicted value (PPV), likelihood positive ratio (LPR), false negative rate (FNR), likelihood negative ratio (LNR), false positive rate (FPR), and Fowlkes-Mallows index (FMI). All statistical metric equations are given as follows:

\[ z_i = \frac{\aleph_i}{\zeta_i}, \tag{1} \]

where \(\zeta\) is for the true class and \(\aleph\) is for the predicted class:

\[ \chi_i = \sum_{j=1}^{3} \frac{\aleph_i}{\zeta_{j \neq i}}, \qquad \varsigma_i = \sum_{j=1}^{3} \frac{\aleph_{j \neq i}}{\zeta_i}, \qquad \varrho_i = \sum_{j=1}^{3} \frac{\aleph_{j \neq i}}{\zeta_{j \neq i}}, \]

\[ \text{sensitivity} = \frac{\aleph_i/\zeta_i}{\aleph_i/\zeta_i + \sum_{j=1}^{3} \aleph_{j \neq i}/\zeta_{j \neq i}} \times 100, \]

\[ \text{F1-score} = \frac{2\,(\aleph_i/\zeta_i)}{2\,(\aleph_i/\zeta_i) + \sum_{j=1}^{3} \aleph_{j \neq i}/\zeta_i + \sum_{j=1}^{3} \aleph_{j \neq i}/\zeta_{j \neq i}} \times 100, \]

\[ \text{PPV} = \frac{\aleph_i/\zeta_i}{\aleph_i/\zeta_i + \sum_{j=1}^{3} \aleph_{j \neq i}/\zeta_i} \times 100, \]

\[ \text{FNR} = 100 - \left( \frac{\aleph_i/\zeta_i}{\aleph_i/\zeta_i + \sum_{j=1}^{3} \aleph_{j \neq i}/\zeta_{j \neq i}} \times 100 \right). \tag{2} \]
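As an illustration, the per-class metrics above can be computed from a multiclass confusion matrix. The following is a minimal Python sketch; the function name and the simplification of the ℵ/ζ ratio terms to raw counts are ours, not from the paper:

```python
import numpy as np

def per_class_metrics(cm):
    """Per-class sensitivity, PPV, and FNR (in %) from a square confusion
    matrix whose rows are true classes and columns are predicted classes."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)              # class i predicted as class i
    fn = cm.sum(axis=1) - tp      # class i predicted as some j != i
    fp = cm.sum(axis=0) - tp      # some j != i predicted as class i
    sensitivity = tp / (tp + fn) * 100
    ppv = tp / (tp + fp) * 100
    fnr = 100 - sensitivity       # FNR = 100 - sensitivity, as in equation (2)
    return sensitivity, ppv, fnr
```

For the three-class WBC setting used later in the paper, each returned array has one entry per class (lymphocyte, monocyte, neutrophil).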
The descriptive algorithm of the proposed study for blood cancer prognostication utilizing transfer learning-enhanced white blood cells is shown in Table 2. It represents the specifics of the proposed framework's complete procedure.

3.1. Dataset. In this proposed study, the dataset is acquired from the online source Ghaderzadeh et al. [43]. The dataset consists of four classes named eosinophils, lymphocytes, monocytes, and neutrophils. Machine learning algorithms are highly data hungry [44]. The proposed framework's dataset consists of 10,000 instances, and each class consists of almost 2,500 blood samples. Figure 2 shows a sample data instance of each blood sample class.

4. Image Processing

To overcome the classification accuracy deficiency problem, the proposed framework uses an image processing technique, e.g., histogram equalization, to enhance the quality of blood samples. With the help of histogram equalization, as shown in Figures 3 and 4, the proposed framework enhances the contrast and intensity of pixels in blood samples. Equation (3) represents the distribution function of histogram equalization:

\[ \mathrm{DF}_y(m) = \sum_{n=0}^{m} \rho_y(y = n). \tag{3} \]

5. Simulation Results and Discussion

In this article, transfer learning has been used for the prediction of blood cancer empowered with eosinophils, lymphocytes, monocytes, and neutrophils incorporated into the blood cell dataset [45]. For simulation purposes, to train and test the data samples, the proposed model used a MacBook Pro 2017 with 16 gigabytes of random access memory and a Core i5 with a 512 gigabyte solid-state drive. The proposed framework divided the dataset into 70% and 30% of blood cells for training and testing, respectively. To remove the different anomalies in the dataset, different preprocessing techniques have been implemented. Numerous transfer learning algorithms have been used to train the models and test the data samples. The different phases of training and testing have been discussed and elaborated on in this article. To measure the performance of all trained models and test results, the proposed framework used statistical performance parameters, and all parameters' equations are given in the methodology section above.

Figure 5 shows the training progress of SGD with momentum using AlexNet without image processing. To train this model, the proposed model set a learning rate of 0.001, 100 epochs, and 58 iterations per epoch. The proposed framework of this training model has a lot of distortion and does not converge until the last epoch; all this distortion happens due to the contrast and pixel imbalance in the data samples. So, stochastic gradient descent with momentum achieves 75.78% CA and 24.22% MCR.
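The cumulative distribution in equation (3) drives the intensity remapping. Below is a minimal grayscale sketch (the function name is ours, and 8-bit images are assumed):

```python
import numpy as np

def equalize_histogram(img):
    """Histogram equalization of an 8-bit grayscale image: estimate the
    intensity distribution rho, accumulate it into the cumulative
    distribution DF of equation (3), and remap each pixel through the
    rescaled DF."""
    hist = np.bincount(img.ravel(), minlength=256)
    rho = hist / img.size                # rho_y(y = n)
    df = np.cumsum(rho)                  # DF_y(m) = sum_{n=0}^{m} rho_y(y = n)
    lut = np.round(df * 255).astype(np.uint8)
    return lut[img]
```

In the proposed framework, such an enhancement would be applied to each blood-smear image (per channel or on a luminance channel for RGB data) before retraining.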
Table 2: Pseudocode of the proposed model (this depicts the algorithm of the current study and the research methodology from data fetching to training and testing, and it also explains every training model).

Steps  Codes
1      Input cancerous white blood cell images
2      Image preprocessing
3      Data division into training and testing
4      Store into cloud (£)
5      Input training images to deep learning algorithms
6      AlexNet
         1- SGDM
         2- Adaptive moment (ADAM)
         3- Root mean square propagation (RMSPROP)
       Check learning criteria
       If met
         Store into private cloud
       Else apply image preprocessing
         Input preprocessed images to the deep learning algorithm
         AlexNet
           4- SGDM
           5- ADAM
           6- RMSPROP
         Check learning criteria
         If met
           Store into private cloud
         Else
           Retrain
7      ResNet
         1- SGDM
         2- ADAM
         3- RMSPROP
       Check learning criteria
       If met
         Store into private cloud
       Else apply image preprocessing
         Input preprocessed images to the deep learning algorithm
         ResNet
           4- SGDM
           5- ADAM
           6- RMSPROP
         Check learning criteria
         If met
           Store into private cloud
         Else
           Retrain
8      MobileNet
         1- SGDM
         2- ADAM
         3- RMSPROP
       Check learning criteria
       If met
         Store into private cloud
       Else apply image preprocessing
         Input preprocessed images to the deep learning algorithm
         MobileNet
           4- SGDM
           5- ADAM
           6- RMSPROP
         Check learning criteria
         If met
           Store into private cloud
         Else
           Retrain
9      Access all private clouds
       Check the learning criteria of the best trained deep learning models
       If met
         Select the semi-best model and store it in another private cloud
       Else
         Retry
10     Import the one best-trained model
11     Input preprocessed test images
12     Test analysis
13     Apply the statistical performance matrix
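The branch-and-retrain logic of Table 2 (train, check the learning criteria, fall back to image preprocessing, retrain) can be sketched framework-agnostically. Here `train_fn`, `preprocess_fn`, and the accuracy threshold are stand-ins of our own, since the paper does not publish its exact learning criteria:

```python
def train_with_criteria(train_fn, preprocess_fn, data, threshold=0.95):
    """Table 2 control flow for one algorithm (e.g., AlexNet with SGDM):
    train on the raw images; if the learning criterion is met, keep the
    model ("store into private cloud"); otherwise apply image
    preprocessing (e.g., histogram equalization) and retrain."""
    model, accuracy = train_fn(data)
    if accuracy >= threshold:
        return model, accuracy, "raw"
    model, accuracy = train_fn(preprocess_fn(data))
    return model, accuracy, "preprocessed"
```

The same wrapper would be run once per optimizer and per backbone (AlexNet, ResNet, MobileNet), after which the best surviving model is selected for testing.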
Figure 2: Data samples of all white blood cancerous cells (this shows some data samples for the ease of the reader, learner, and other researchers). (a) Eosinophil. (b) Lymphocyte. (c) Monocyte. (d) Neutrophil.
Figure 6 shows the training progress of the adaptive momentum association using AlexNet without image processing. To train this model, the proposed study set a learning rate of 0.001, 100 epochs, and 58 iterations per epoch. The proposed framework of this training model has a lot of distortion and does not converge till the last epoch; all this distortion happens due to the contrast and pixel imbalance in the data samples. So, adaptive momentum association achieves 86.72% and 13.28% of classification accuracy and miss-classification rate, respectively.

Figure 7 shows the training progress of root mean square (RMS) propagation using AlexNet without image processing. To train this model, the proposed study set a learning rate of 0.001, 100 epochs, and 58 iterations per epoch. The proposed framework of this training model has a lot of distortion and does not converge till the last epoch; all this distortion happens due to the contrast and pixel imbalance in the data samples. So, root mean square propagation achieves 75.78% CA and 24.22% MCR.

Figure 8 shows the training progress of SGD with momentum using MobileNet without image processing. To train this model, the proposed study set a learning rate of 0.001, 100 epochs, and 58 iterations per epoch. The proposed framework of this training model has a lot of distortion and does not converge till the last epoch; all this distortion happens due to the contrast and pixel imbalance in the data samples. So, stochastic gradient descent with momentum achieves 75.00% and 25.00% of CA and MCR, respectively.

Figure 9 shows the training progress of adaptive momentum association using MobileNet without image processing. To train this model, the proposed framework set a learning rate of 0.001, 100 epochs, and 58 iterations per epoch. The proposed framework of this training model has a lot of distortion and does not converge till the last epoch; all this distortion happens due to the contrast and pixel imbalance in the data samples. So, adaptive momentum association achieves 82.03% of classification accuracy and a 17.97% miss-classification rate, respectively.

Figure 10 shows the training progress of RMS propagation using MobileNet without image processing. To train this model, the proposed study set a learning rate of 0.001, 100 epochs, and 58 iterations per epoch. The proposed framework of this training model has a lot of distortion and
Figure 5: Training progress of SGDM and AlexNet without image processing (this depicts the training progress of SGDM and AlexNet before the image processing phase and shows fluctuation).

Figure 6: Training progress of Adam of AlexNet without image processing (this depicts the training progress of Adam and AlexNet before the image processing phase and shows fluctuation).

Figure 7: Training progress of RMSPROP of AlexNet without image processing (this depicts the training progress of RMSPROP and AlexNet before the image processing phase and shows fluctuation).

Figure 8: Training progress of SGDM of MobileNet without image processing (this depicts the training progress of SGDM and MobileNet before the image processing phase and shows fluctuation).

Figure 9: Training progress of Adam of MobileNet without image processing (this depicts the training progress of Adam and MobileNet before the image processing phase and shows fluctuation).

Figure 10: Training progress of RMSPROP of MobileNet without image processing (this depicts the training progress of RMSPROP and MobileNet before the image processing phase and shows fluctuation).
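The optimizers compared in Figures 5-10 differ only in their parameter-update rules. A minimal sketch of the SGD-with-momentum step used by the best-performing model follows; the momentum coefficient 0.9 is an assumption of ours, since the paper fixes only the 0.001 learning rate:

```python
import numpy as np

def sgdm_step(weights, grad, velocity, lr=0.001, momentum=0.9):
    """One stochastic-gradient-descent-with-momentum update: the velocity
    accumulates an exponentially decaying sum of past gradients, which
    damps the epoch-to-epoch fluctuation visible in the training curves."""
    velocity = momentum * velocity - lr * grad
    return weights + velocity, velocity
```

Adam and RMSprop replace the plain velocity with per-parameter moment estimates, which is why the three runs trace different curves under identical learning rates and epoch budgets.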
Figure 11: Training progress of SGDM of ResNet without image processing (this depicts the training progress of SGDM and ResNet before the image processing phase and shows fluctuation).
does not converge till the last epoch; all this distortion happens due to the contrast and pixel imbalance in the data samples. So, root mean square propagation achieves 75.00% and 25.00% of CA and MCR, respectively.

Figure 11 shows the training progress of SGD with momentum using ResNet without image processing. To train this model, the proposed study set a learning rate of 0.001, 100 epochs, and 58 iterations per epoch. The proposed framework of this training model has a lot of distortion and does not converge till the last epoch; all this distortion happens due to the contrast and pixel imbalance in the data samples. So, stochastic gradient descent with momentum achieves 87.50% of classification accuracy and a 12.50% miss-classification rate, respectively.

Figure 12 shows the training progress of adaptive momentum association using ResNet without image processing. To train this model, the proposed study set a learning rate of 0.001, 100 epochs, and 58 iterations per epoch. The proposed framework of this training model has a lot of distortion and does not converge till the last epoch; all this distortion happens due to the contrast and pixel imbalance in the data samples. So, adaptive momentum association achieves 74.22% and 25.78% of classification accuracy and miss-classification rate, respectively.

Figure 13 shows the training progress of RMS propagation using ResNet without image processing. To train this model, the proposed study set a learning rate of 0.001, 100 epochs, and 58 iterations per epoch. The proposed framework of this training model has a lot of distortion and does not converge till the last epoch; all this distortion happens due to the contrast and pixel imbalance in the data samples. So, root mean square propagation achieves 71.09% of classification accuracy and a 28.91% miss-classification rate, respectively.

Table 3 shows the training results of AlexNet models after image processing; all models were tuned on 5800 iterations, a 0.001 learning rate, and 100 epochs. The stochastic gradient descent moment outperformed every other training model and achieves 99.2% of classification accuracy and a 0.8% miss-classification rate, respectively.

Figure 14 shows the training progress of the SGD moment using AlexNet after image processing. To train this model, the proposed study set a learning rate of 0.001, 100 epochs, and 58 iterations per epoch. The proposed framework of this training model converges before 90 epochs and gives the highest training results.

Table 4 shows the training results of ResNet models after image processing; all models were tuned on 5800 iterations, a 0.001 learning rate, and 100 epochs. RMSPROP outperformed all models and achieves 69.53% and 30.47% of classification accuracy and miss-classification rate, respectively.

Table 5 shows the training results of MobileNet models after image processing; all models were tuned on 5800 iterations, a 0.001 learning rate, and 100 epochs. RMSPROP outperformed all models and achieves 73.44% and 26.56% of classification accuracy and miss-classification rate, respectively.

Table 6 shows the test simulation of AlexNet models after image processing. The proposed framework finds that SGDM performs very well as compared with other models. SGDM predicted 719 lymphocyte cancerous cells correctly, 742 monocyte cancerous cells, and 716 neutrophil cancerous
Figure 12: Training progress of Adam of ResNet without image processing (this depicts the training progress of Adam and ResNet before the image processing phase and shows fluctuation).

Figure 13: Training progress of RMSPROP of ResNet without image processing (this depicts the training progress of RMSPROP and ResNet before the image processing phase and shows fluctuation).
Table 3: AlexNet training results after image processing (this depicts the AlexNet training results of all learners after image processing).

AlexNet
Model        Iterations   Learning rate (LR)   Epochs   Classification accuracy (%)   Misclassification rate (%)
SGD moment   5800         0.001                100      99.2                          0.8
ADAM         5800         0.001                100      95.31                         4.69
RMSPROP      5800         0.001                100      96.09                         3.91
Figure 14: Training progress of SGDM and AlexNet after image processing (this depicts the training progress of SGDM and AlexNet after the image processing phase and shows no fluctuation at the end, with a smooth training curve).
Table 4: Training results of ResNet models after image processing (this depicts the ResNet training results of all learners after image processing).

ResNet
Model        Iterations   Learning rate (LR)   Epochs   Classification accuracy (%)   Misclassification rate (%)
SGD moment   5800         0.001                100      68.75                         31.25
ADAM         5800         0.001                100      68.79                         31.21
RMSPROP      5800         0.001                100      69.53                         30.47
Table 5: Training results of MobileNet models after image processing (this depicts the MobileNet training results of all learners after image processing).

MobileNet
Model        Iterations   Learning rate (LR)   Epochs   Classification accuracy (%)   Misclassification rate (%)
SGD moment   5800         0.001                100      69.53                         30.47
ADAM         5800         0.001                100      72.3                          27.7
RMSPROP      5800         0.001                100      73.44                         26.56
Table 6: AlexNet confusion matrix of testing samples after image processing (this depicts the testing confusion matrix of AlexNet after image processing).

AlexNet, Model (SGDM), Sample = 2238
              Lymphocyte   Monocyte   Neutrophil
Lymphocyte    719          0          9
Monocyte      16           742        25
Neutrophil    10           1          716

AlexNet, Model (ADAM), Sample = 2238
              Lymphocyte   Monocyte   Neutrophil
Lymphocyte    674          15         14
Monocyte      22           703        16
Neutrophil    49           25         720

AlexNet, Model (RMSPROP), Sample = 2238
              Lymphocyte   Monocyte   Neutrophil
Lymphocyte    714          30         36
Monocyte      16           680        22
Neutrophil    15           33         692

Table 7: MobileNet confusion matrix of testing samples after image processing (this depicts the testing confusion matrix of MobileNet after image processing).

MobileNet, Model (SGDM), Sample = 2238
              Lymphocyte   Monocyte   Neutrophil
Lymphocyte    678          44         47
Monocyte      0            284        308
Neutrophil    67           415        388

MobileNet, Model (ADAM), Sample = 2238
              Lymphocyte   Monocyte   Neutrophil
Lymphocyte    676          3          5
Monocyte      28           339        348
Neutrophil    41           401        390

MobileNet, Model (RMSPROP), Sample = 2238
              Lymphocyte   Monocyte   Neutrophil
Lymphocyte    729          13         14
Monocyte      16           559        570
Neutrophil    0            171        159
cells correctly. SGDM achieves 97.3%, 2.7% of classifcation Table 8: RESNET confusion matrix of testing samples after image
accuracy, and miss-classifcation rate, respectively. ADAM processing (this depicts the testing confusion metric of ResNet after
predicted 674 lymphocyte cancerous cells correctly, 703 image processing).
monocyte cancerous cells, and 720 neutrophil cancerous ResNet
cells correctly. ADAM achieves 93.7% of classifcation ac- Model (SGDM)
Lymphocyte Monocyte Neutrophil
curacy, 6.3% of classifcation accuracy, and a miss- Sample � 2238
classifcation rate, respectively. RMSPROP predicted 714 Lymphocyte 717 1 0
lymphocyte cancerous cells correctly, 680 monocyte can- Monocyte 21 409 391
cerous cells, and 692 neutrophil cancerous cells correctly. Neutrophil 7 333 352
RMSPROP achieves 93.2%, 6.8% of classifcation accuracy, ResNet
and a miss-classifcation rate, respectively. Model (ADAM)
Lymphocyte Monocyte Neutrophil
Table 7 shows the test simulation of MobileNet models Sample � 2238
after image processing. The proposed framework finds that RMSPROP performs very well as compared with the other models. RMSPROP predicted 729 lymphocyte cancerous cells, 559 monocyte cancerous cells, and 159 neutrophil cancerous cells correctly. RMSPROP achieves 64.9% classification accuracy and a 35.1% miss-classification rate. ADAM predicted 676 lymphocyte cancerous cells, 339 monocyte cancerous cells, and 390 neutrophil cancerous cells correctly. ADAM achieves 63.0% classification accuracy and a 37.0% miss-classification rate. SGDM predicted 678 lymphocyte cancerous cells, 284 monocyte cancerous cells, and 388 neutrophil cancerous cells correctly. SGDM achieves 60.5% classification accuracy and a 39.5% miss-classification rate.

Table 8 shows the test simulation of the MobileNet models after image processing. The proposed framework finds that ADAM performs very well as compared with the other models. ADAM predicted 736 lymphocyte cancerous cells, 319 monocyte cancerous cells, and 428 neutrophil cancerous cells correctly. ADAM achieves 66.5% classification accuracy and a 33.5% miss-classification rate. RMSPROP predicted 728 lymphocyte cancerous cells, 608 monocyte cancerous cells, and 128 neutrophil cancerous cells correctly. RMSPROP achieves 65.6% classification accuracy and a 34.4% miss-classification rate. SGDM predicted 717 lymphocyte cancerous cells, 409 monocyte cancerous cells, and 352 neutrophil cancerous cells correctly. SGDM achieves 66.2% classification accuracy and a 33.8% miss-classification rate.

Table 8 (confusion matrices, sample = 2238):

Model (ADAM)      Lymphocyte  Monocyte  Neutrophil
Lymphocyte        736         1         1
Monocyte          1           319       314
Neutrophil        8           423       428

Model (RMSPROP)   Lymphocyte  Monocyte  Neutrophil
Lymphocyte        728         6         3
Monocyte          16          608       612
Neutrophil        1           129       128

Table 9 shows the statistical matrix results of blood cancer prediction after image processing. These results depict that the SGDM of AlexNet outperformed all models and achieves 97.3% classification accuracy and a 2.7% miss-classification rate. SGDM of MobileNet performed below the performance line and achieved 60.5% classification accuracy and a 39.5% miss-classification rate.
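The classification accuracy and miss-classification rate quoted for each optimizer follow directly from its confusion-matrix counts. As a minimal sketch (reading the ADAM counts from Table 8 with NumPy; illustrative code, not the study's own implementation):

```python
import numpy as np

# Rows = actual class, columns = predicted class
# (lymphocyte, monocyte, neutrophil); counts taken from the
# ADAM confusion matrix in Table 8.
cm = np.array([
    [736,   1,   1],
    [  1, 319, 314],
    [  8, 423, 428],
])

correct = np.trace(cm)       # diagonal = correctly classified cells
total = cm.sum()             # all classified cells
ca = 100 * correct / total   # classification accuracy (%)
mcr = 100 - ca               # miss-classification rate (%)
print(f"CA = {ca:.1f}%, MCR = {mcr:.1f}%")
```

The diagonal gives the correctly classified cells, and the script reproduces the reported 66.5% classification accuracy and 33.5% miss-classification rate for ADAM.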
Journal of Healthcare Engineering
Table 9: Statistical matrix test results of blood cancer prediction after image processing (this depicts the statistical results of all models performed in the current study).

AlexNet
Stochastic gradient descent with momentum (%)
CA Sen PPV FPR FNR LPR LNR FMI F1 NPV Spec MCR
97.3 96.51 98.76 0.60 3.49 160.10 0.04 97.63 97.62 98.28 99.40 2.7
Adaptive moment estimation (%)
CA Sen PPV FPR FNR LPR LNR FMI F1 NPV Spec MCR
93.07 90.47 95.87 1.94 9.53 46.58 0.10 93.13 93.09 95.37 98.06 6.93
Root mean square propagation (%)
CA Sen PPV FPR FNR LPR LNR FMI F1 NPV Spec MCR
93.02 95.84 91.54 4.42 4.16 21.68 0.04 93.66 93.64 97.87 95.58 6.98
MobileNet
Stochastic gradient descent with momentum (%)
CA Sen PPV FPR FNR LPR LNR FMI F1 NPV Spec MCR
60.5 91.01 88.77 6.12 8.99 14.86 0.10 89.58 89.56 95.42 93.88 39.5
Adaptive moment estimation (%)
CA Sen PPV FPR FNR LPR LNR FMI F1 NPV Spec MCR
63.0 90.74 98.83 0.54 9.26 46.55 0.09 94.70 94.61 95.42 99.46 37.0
Root mean square propagation (%)
CA Sen PPV FPR FNR LPR LNR FMI F1 NPV Spec MCR
64.9 97.85 96.43 1.82 2.15 53.86 0.02 97.14 97.14 98.92 98.18 35.1
ResNet
Stochastic gradient descent with momentum (%)
CA Sen PPV FPR FNR LPR LNR FMI F1 NPV Spec MCR
66.2 96.24 99.86 0.07 3.76 30.15 0.04 98.03 98.02 98.92 99.93 33.8
Adaptive moment estimation (%)
CA Sen PPV FPR FNR LPR LNR FMI F1 NPV Spec MCR
66.5 59.55 88.04 6.31 40.45 9.43 0.43 72.40 71.04 98.92 93.69 33.5
Root mean square propagation (%)
CA Sen PPV FPR FNR LPR LNR FMI F1 NPV Spec MCR
65.6 59.28 87.92 6.34 40.72 9.35 0.43 72.20 70.82 98.92 93.66 34.4
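Each column of Table 9 is a standard derived statistic. Under the usual one-vs-rest definitions (a sketch with illustrative counts, not the study's evaluation code), all of them can be computed from a single class's true/false positive and negative counts:

```python
import math

def stat_matrix(tp, fp, tn, fn):
    """Standard one-vs-rest statistics, as reported in Table 9 (fractions)."""
    sen = tp / (tp + fn)                  # sensitivity (Sen)
    spec = tn / (tn + fp)                 # specificity (Spec)
    ppv = tp / (tp + fp)                  # positive predictive value (PPV)
    npv = tn / (tn + fn)                  # negative predictive value (NPV)
    fpr = 1 - spec                        # false positive rate (FPR)
    fnr = 1 - sen                         # false negative rate (FNR)
    return {
        "CA": (tp + tn) / (tp + fp + tn + fn),   # classification accuracy
        "Sen": sen, "Spec": spec, "PPV": ppv, "NPV": npv,
        "FPR": fpr, "FNR": fnr,
        "LPR": sen / fpr,                 # positive likelihood ratio
        "LNR": fnr / spec,                # negative likelihood ratio
        "F1": 2 * ppv * sen / (ppv + sen),       # harmonic mean of PPV and Sen
        "FMI": math.sqrt(ppv * sen),      # Fowlkes-Mallows index
        "MCR": (fp + fn) / (tp + fp + tn + fn),  # miss-classification rate
    }

# Illustrative counts only (not taken from the paper):
stats = stat_matrix(tp=90, fp=10, tn=80, fn=20)
print({k: round(v, 3) for k, v in stats.items()})
```

By construction CA + MCR = 1, which is why each row of Table 9 pairs a classification accuracy with its complementary miss-classification rate (e.g., 97.3% and 2.7%).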
Table 10 shows the testing results of all models, and SGDM of AlexNet outperformed the other models, achieving 78.6% classification accuracy and a 21.4% miss-classification rate. When the proposed framework compared these results with the results obtained after image processing, the proposed model performed better after image processing. The proposed framework also performed outstandingly as compared with the previous studies.

Table 10: Testing results before image processing (this depicts the comparative results of all models, i.e., classification accuracy and miss-classification rate).

Models     CA (%)  MCR (%)
AlexNet
SGDM       78.6    21.4
ADAM       74.3    25.7
RMSPROP    68.3    31.7
ResNet
SGDM       71.0    29.0
ADAM       75.0    25.0
RMSPROP    56.0    44.0
MobileNet
SGDM       63.1    36.9
ADAM       73.8    26.2
RMSPROP    60.9    39.1

Table 11 depicts the descriptive comparative analysis of this study with previous work. Pansombut et al. [35] achieved 80% classification accuracy and a 20% miss-classification rate empowered with a CNN model on publicly available blood image samples. Madhukar et al. [34] achieved 93.5% classification accuracy and a 6.5% miss-classification rate empowered with an SVM classifier on publicly available blood image samples. Supardi et al. [33] achieved 86% classification accuracy and a 14% miss-classification rate empowered with a KNN model on publicly available blood image samples. Patel and Mishra [32] achieved 93.57% classification accuracy and a 6.53% miss-classification rate empowered with an SVM classifier on publicly available blood image samples. Laosai and Chamnongthai [31] achieved 92% classification accuracy and an 18% miss-classification rate empowered with an SVM model on publicly available blood image samples. Faivdullah et al. [30] achieved 79.38% classification accuracy and a 20.72% miss-classification rate empowered with an SVM model on publicly available feature-based blood samples. Setiawan et al. [29] achieved 87% classification accuracy and a 13% miss-classification rate empowered with SVM and K-means models on publicly available blood image samples. Kumar et al. [28] achieved 92.8% classification accuracy and a 17.2% miss-classification rate empowered with CNN, Naïve Bayes, and KNN models on publicly available blood image samples.
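The SGDM, ADAM, and RMSPROP variants compared throughout Tables 9 and 10 differ only in how they turn a gradient into a weight update. A minimal NumPy sketch of the three update rules (illustrative of the optimizers' definitions, not the study's training code; the quadratic demo objective is an assumption for the example):

```python
import numpy as np

def sgdm(w, g, state, lr=0.02, beta=0.9):
    # Stochastic gradient descent with momentum: accumulate a
    # velocity vector and step along it.
    v = beta * state.get("v", np.zeros_like(w)) + g
    state["v"] = v
    return w - lr * v

def rmsprop(w, g, state, lr=0.02, beta=0.9, eps=1e-8):
    # Scale each step by a running average of squared gradients.
    s = beta * state.get("s", np.zeros_like(w)) + (1 - beta) * g**2
    state["s"] = s
    return w - lr * g / (np.sqrt(s) + eps)

def adam(w, g, state, lr=0.02, b1=0.9, b2=0.999, eps=1e-8):
    # Adam combines momentum with RMSPROP-style scaling, plus
    # bias correction of both running averages.
    t = state.get("t", 0) + 1
    m = b1 * state.get("m", np.zeros_like(w)) + (1 - b1) * g
    v = b2 * state.get("v", np.zeros_like(w)) + (1 - b2) * g**2
    state.update(t=t, m=m, v=v)
    return w - lr * (m / (1 - b1**t)) / (np.sqrt(v / (1 - b2**t)) + eps)

# Demo: minimize f(w) = w^2 (gradient 2w) with each optimizer.
for step in (sgdm, rmsprop, adam):
    w, state = np.array([5.0]), {}
    for _ in range(1000):
        w = step(w, 2 * w, state)
    print(step.__name__, float(abs(w[0])))
```

All three drive the toy objective toward its minimum; which rule trains a given transfer-learning backbone best is exactly the empirical question Tables 9 and 10 answer, and the answer differs per backbone.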
Table 11: Comparative analysis with previous studies (this depicts the comparative study of current research with previous research).
Publication Image processing Model Dataset Accuracy (%) MCR (%)
Pansombut et al. [35] No CNN Image (public) 80 20
Madhukar et al. [34] No SVM Image (public) 93.5 6.5
Supardi et al. [33] No KNN Image (public) 86 14
Patel and Mishra [32] No SVM Image (public) 93.57 6.53
Laosai and Chamnongthai [31] No SVM Image (public) 92 18
Faivdullah et al. [30] No SVM Feature (public) 79.38 20.72
Setiawan et al. [29] No SVM, K-means Image (public) 87 13
Kumar et al. [28] No KNN, Naïve Bayes, CNN Image (public) 92.8 17.2
Loey et al. [27] No CNN, AlexNet Image (public) 94.3 5.7
The proposed model Yes Transfer learning (AlexNet, ResNet, MobileNet) Image (public, 10,000 instances) 97.3 2.7
neural networks," Medical & Biological Engineering & Computing, vol. 55, pp. 1287–1301, 2017.
[17] J. W. Choi, Y. Ku, B. W. Yoo, J. A. Kim, D. S. Lee, and Y. J. Chai, "White blood cell differential count of maturation stages in bone marrow smear using dual-stage convolutional neural networks," PLoS One, vol. 12, pp. 1–10, 2017.
[18] F. Qin, N. Gao, Y. Peng, Z. Wu, S. Shen, and A. Grudtsin, "Fine-grained leukocyte classification with deep residual learning for microscopic images," Computer Methods and Programs in Biomedicine, vol. 162, pp. 243–252, 2018.
[19] T. Karthikeyan and N. Poornima, "Microscopic image segmentation using fuzzy c-means for leukemia diagnosis," Leukemia, vol. 4, pp. 3136–3142, 2017.
[20] M. MoradiAmin, N. Samadzadehaghdam, S. Kermani, and A. Talebi, "Enhanced recognition of acute lymphoblastic leukemia cells in microscopic images based on feature reduction using principal component analysis," Frontiers in Biomedical Technologies, vol. 2, no. 3, pp. 128–136, 2015.
[21] A. Salihah, N. Nasir, N. Mustafa, and M. Nasir, "Application of thresholding technique in determining the ratio of blood cells for leukemia detection," in Proceedings of the International Conference on Man-Machine Systems (ICoMMS 2009), Penang, Malaysia, October 2009.
[22] Y. Horie, T. Yoshio, K. Aoyama, S. Yoshimizu, and Y. Horiuchi, "Diagnostic outcomes of esophageal cancer by artificial intelligence using convolutional neural networks," Gastrointestinal Endoscopy, vol. 89, pp. 25–32, 2019.
[23] R. Baig, A. Rehman, A. Almuhaimeed, A. Alzahrani, and H. T. Rauf, "Detecting malignant leukemia cells using microscopic blood smear images: a deep learning approach," Applied Sciences, vol. 12, p. 6317, 2022.
[24] K. Sekaran, P. Chandana, N. M. Krishna, and S. Kadry, "Deep learning convolutional neural network (CNN) with Gaussian mixture model for predicting pancreatic cancer," Multimedia Tools and Applications, vol. 79, pp. 10233–10247, 2019.
[25] A. Karim, A. Azhari, M. Shahroz, S. B. Belhaouri, and K. Mustofa, "LDSVM: leukemia cancer classification using machine learning," Computers, Materials & Continua, vol. 71, pp. 3887–3903, 2022.
[26] K. Kourou, T. P. Exarchos, K. P. Exarchos, M. V. Karamouzis, and D. I. Fotiadis, "Machine learning applications in cancer prognosis and prediction," Computational and Structural Biotechnology Journal, vol. 13, pp. 8–17, 2015.
[27] M. Loey, M. Naman, and H. Zayed, "Deep transfer learning in diagnosing leukemia in blood cells," Computers, vol. 9, p. 29, 2020.
[28] S. Kumar, S. Mishra, and P. Asthana, "Automated detection of acute leukemia using a k-mean clustering algorithm," Advances in Computer and Computational Sciences, vol. 2, pp. 655–670, 2018.
[29] A. Setiawan, A. Harjoko, T. Ratnaningsih, E. Suryani, and S. Palgunadi, "Classification of cell types in acute myeloid leukemia (AML) of M4, M5 and M7 subtypes with support vector machine classifier," in Proceedings of the 2018 International Conference on Information and Communications Technology (ICOIACT), pp. 45–49, Yogyakarta, Indonesia, March 2018.
[30] L. Faivdullah, F. Azahar, Z. Z. Htike, and W. N. Naing, "Leukemia detection from blood smears," Journal of Medical and Biological Engineering, vol. 4, pp. 488–491, 2015.
[31] J. Laosai and K. Chamnongthai, "Acute leukemia classification by using SVM and K-means clustering," in Proceedings of the 2014 IEEE International Electrical Engineering Congress (iEECON), pp. 1–4, Chonburi, Thailand, March 2014.
[32] N. Patel and A. Mishra, "Automated leukemia detection using microscopic images," Procedia Computer Science, vol. 58, pp. 635–642, 2015.
[33] N. Z. Supardi, M. Y. Mashor, N. H. Harun, F. A. Bakri, and R. Hassan, "Classification of blasts in acute leukemia blood samples using a k-nearest neighbor," in Proceedings of the 2012 IEEE 8th International Colloquium on Signal Processing and its Applications, pp. 461–465, Malacca, Malaysia, March 2012.
[34] M. Madhukar, S. Agaian, and A. T. Chronopoulos, "Deterministic model for acute myelogenous leukemia classification," in Proceedings of the 2012 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 433–438, Seoul, Korea, October 2012.
[35] T. Pansombut, S. Wikaisuksakul, K. Khongkraphan, and A. Phon-on, "Convolutional neural networks for recognition of lymphoblast cell images," Computational Intelligence and Neuroscience, vol. 2019, Article ID 7519603, 12 pages, 2019.
[36] R. Baig, A. Rehman, A. Almuhaimeed, A. Alzahrani, and H. T. Rauf, "Detecting malignant leukemia cells using microscopic blood smear images: a deep learning approach," Applied Sciences, vol. 12, p. 6317, 2022.
[37] M. U. Nasir, T. M. Ghazal, M. A. Khan, M. Zubair, and A. U. Rahman, "Breast cancer prediction empowered with fine-tuning," Computational Intelligence and Neuroscience, vol. 9, 2022.
[38] M. U. Nasir, M. A. Khan, M. Zubair, T. M. Ghazal, and R. A. Said, "Single and mitochondrial gene inheritance disorder prediction using machine learning," Computers, Materials & Continua, vol. 73, pp. 953–963, 2022.
[39] A. U. Rahman, A. Alqahtani, N. Aldhaferi, M. U. Nasir, and M. F. Khan, "Histopathologic oral cancer prediction using oral squamous cell carcinoma biopsy empowered with transfer learning," Sensors, vol. 22, p. 3833, 2022.
[40] T. M. Ghazal, H. A. Hamadi, M. U. Nasir, M. Gollapalli, and M. Zubair, "Supervised machine learning empowered multifactorial genetic inheritance disorder prediction," Computational Intelligence and Neuroscience, vol. 10, 2022.
[41] N. Taleb, S. Mehmood, M. Zubair, I. Naseer, and B. Mago, "Ovary cancer diagnosing empowered with machine learning," in Proceedings of the 2022 International Conference on Business Analytics for Technology and Security (ICBATS), pp. 1–6, Dubai, United Arab Emirates, July 2022.
[42] M. Ghaderzadeh, M. Aria, A. Hosseini, F. Asadi, and D. Bashash, "A fast and efficient CNN model for B-ALL diagnosis and its subtypes classification using peripheral blood smear images," International Journal of Intelligent Systems, 2021.
[43] M. Ghaderzadeh, F. Asadi, A. Hosseini, D. Bashash, and H. Abolghasemi, "Machine learning in detection and classification of leukemia using smear blood images: a systematic review," Scientific Programming, vol. 14, 2021.
[44] Z. Obermeyer and E. J. Emanuel, "Predicting the future — big data, machine learning, and clinical medicine," New England Journal of Medicine, vol. 375, pp. 1216–1219, 2016.
[45] Kaggle, "Blood cell images," 2022, https://www.kaggle.com/datasets/paultimothymooney/blood-cells.