
ISSN 1846-6168 (Print), ISSN 1848-5588 (Online) Original scientific paper

https://doi.org/10.31803/tg-20191023102807

IMPLEMENTATION OF INTELLIGENT MODEL FOR PNEUMONIA DETECTION

Željko KNOK, Klaudio PAP, Marko HRNČIĆ

Abstract: The advancement of technology in the fields of artificial intelligence and neural networks allows us to improve the speed and efficiency of diagnosing various types of problems. In the last few years, the rise of convolutional neural networks has been particularly noticeable, showing promising results in problems related to image processing and computer vision. Given that humans have a limited ability to detect patterns in individual images, an accurate diagnosis can be a problem even for medical professionals. In order to minimize the number of errors and unintended consequences, computer programs based on neural networks and deep learning principles are increasingly used as assistant tools in medicine. The aim of this study was to develop a model of an intelligent system that receives an x-ray image of the lungs as an input parameter and, based on the processed image, returns the possibility of pneumonia as an output. This functionality was implemented through a transfer learning methodology based on already defined convolutional neural network architectures.

Keywords: computer vision; machine learning; neural networks; pneumonia

1 INTRODUCTION

Nowadays, machine learning and artificial intelligence methods are reaching a peak in almost every field. These methods are increasingly being used in business systems, enterprises of various types and in science, in order to improve the efficiency of the facilities they serve. While numerous works focus on a conscious form of artificial intelligence that would somehow replace humans, computers with the ability to learn have been around for some time.
Object recognition and deep analysis are just some of the capabilities of machine learning and neural networks. Although the concept of neural networks was developed back in the 20th century, the training process was a hard job for the computers of that time. With the development of hardware, the GPU (Graphical Processing Unit) and CPU (Central Processing Unit), and the high availability of data, neural networks are experiencing a significant upswing.
In the past, the diagnosis of a disease depended almost exclusively on the experience of the doctor and his assessment. Today, more and more diseases can be diagnosed with the help of some kind of computer or machine.
This paper is based on the diagnosis of pneumonia from an x-ray image obtained as an input parameter. With the advancement of artificial intelligence, it can be very useful in the analysis of x-ray images, play an important role in the detection of disease, serve as a computer assistant to radiologists, and increase their efficiency in diagnosis.
Particularly noteworthy is the application of an AI system for the diagnosis of pediatric pneumonia using chest x-ray images. This tool may eventually aid in expediting the diagnosis and referral of these treatable conditions, thus facilitating earlier treatment and resulting in improved clinical outcomes. [1]
The aim of this project is to implement software for the automatic detection of pneumonia and to use computer algorithms in the field of machine learning to automate the process of obtaining the most accurate diagnosis possible, which could reduce the possibility of errors and misdiagnoses that can lead to unwanted consequences. The project uses several different technologies combined. The implementation of the convolutional neural network model was done in Python, using Anaconda as the programming environment. [2]

2 MACHINE LEARNING

Machine learning is a branch of artificial intelligence that focuses on designing algorithms that improve their own performance based on input data. Machine learning is one of the most active and most represented areas of computing and is the base of data science.
It enables computers to learn in a manner similar to humans, that is, to gather knowledge based on past experience and analysis. Instead of constantly updating the program code, the computer eventually becomes independently capable of improving the performance of its algorithms. Data processing using machine learning methods often results in a model capable of performing some kind of prediction on later test data. [3]

2.1 Deep Learning

In this paper, the focus is on deep learning, which can be characterized as a branch of machine learning inspired by the structure of the human brain. This principle was developed by modeling the first neural networks back in the 1940s; the term was first mentioned by Rina Dechter in 1986 and is defined as a field of machine learning based on presenting data through complex representations at a high level of abstraction, which are arrived at by learned nonlinear transformations.
Deep learning methods are most commonly used in challenging areas where the dimensionality and complexity of the data are extremely high. Deep learning models mainly carry out the learning process on a large set of data, which falls within the scope of supervised learning, and they form the backbone of today's autonomous driving, disease detection and pattern recognition, with results that were impossible before. [4]


2.2 Biological Neural Networks

The human brain consists of densely interconnected nerve cells that serve to process data, called neurons. Within the human brain there are approximately 10 billion neurons and approximately 60 billion connections between them.
A neuron as a unit is a very simple structure, but a large number of neurons represent tremendous processing power. One neuron consists of a soma, representing the body of the cell, fibers called dendrites, and one longer dendrite, the axon. Dendrites form a network around the cell of that same neuron, while the axon extends toward the dendrites and cells of other neurons. Signals, or information, are transmitted from one neuron to another by complex electro-chemical reactions. Chemicals released from the synapses cause a change in the electrical potential of neuronal cells. When the electrical potential threshold is reached, an electrical impulse sends the action potential over the axon. The impulse spreads until it reaches a synapse, whose potential it increases or decreases.

Figure 1 Structure of a biological neuron

2.3 Artificial Neural Networks

The artificial neural network, inspired by the workings of the human brain, contains a number of connected processors, neurons, which have the same role as biological neurons within the brain. They are connected by links through which signals can pass from one neuron to another, thus transmitting important information. Each neuron receives a certain number of input signals, denoted Xi. Each connection has its own numerical value, the weight Wi, which is the basis of long-term memory in artificial neural networks. The Xi and Wi values are summed by the transfer or activation function, and the result is sent as the output Y to another neuron.

Figure 2 Structure of artificial neuron

In artificial neural networks, neurons are arranged in layers interconnected by links through which the signals pass, that is, information that can be of great importance for the end result and the performance of the neural network. Connections between neurons are triggered if a condition is met; the condition is defined by an activation function that will be elaborated later.
The layer that receives information from the environment is called the input layer. It is connected to the hidden layers, in which the information is processed, that is, in which the network learns. The last layer, which produces the outputs, is called the output layer. The learning process is performed by changing the values of the weights of the connections between the neurons, comparing the required and obtained values at the output layer and calculating the error.
Based on the error value obtained, the error is reduced in each subsequent step by correcting the weights according to a defined learning rule related to the data.

Figure 3 Layers of artificial neural network
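The computation of a single artificial neuron described above can be illustrated with a minimal Python sketch; the sigmoid activation and the numeric values are illustrative assumptions and are not taken from the paper.

import math

def neuron_output(x, w, bias=0.0):
    # Weighted sum of the inputs Xi and weights Wi, passed through a sigmoid activation
    total = sum(xi * wi for xi, wi in zip(x, w)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # output Y sent to the next neuron

# Example: three input signals and their connection weights (illustrative values)
print(neuron_output([0.5, 0.3, 0.9], [0.4, -0.2, 0.1]))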

2.4 General Learning Rule

Each artificial neural network is based on a general learning rule that involves collecting relevant data, which is then divided into training and validation data.
After the data collection is completed, it is necessary to determine the architecture of the neural network, which involves determining the number of layers in the network and the number of neurons in each layer, and then selecting the type of connections between the neurons together with the learning rule, which is the basis for defining the criteria that determine the architecture of the neural network. The next step is learning, which is the basis of artificial neural networks. Learning involves initializing the weights, training the model on a training dataset and checking the amount of error, according to which the weights are corrected after each iteration; one pass through all the training samples is called an epoch.
Learning lasts until the desired number of epochs is reached or the wanted error is met. In the initial stages of learning, the neural network adapts to the general trends present in the data set, so the error on both the training set and the validation set decreases over time.

During a longer learning process, there is a possibility that the neural network begins to adapt to the specific data and noise of the training data, thereby losing its generalization property. The error on the training set continues to drop, while the error on the validation set starts to grow. The moment the validation set error starts to grow, the learning process must be interrupted so that the model does not become overfitted.
With the completion of the learning process, it is necessary to test the operation of the network using the previously set-aside validation data. The difference between the learning and the testing phase is that in the testing phase the neural network no longer learns and the weight values are fixed. The network is evaluated by calculating the error and comparing it with the desired error value. If the error is greater than allowed, additional training data may need to be collected or the number of epochs increased to obtain better results, since in this case the network is unsuitable for use.
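As a sketch of the learning procedure just described (splitting the data, initializing the weights, iterating over epochs and interrupting training once the validation error starts to grow), the following Keras example uses an EarlyStopping callback on a toy fully connected network; the data, network size and parameter values are assumptions made only for illustration.

import numpy as np
from tensorflow.keras import Sequential, layers
from tensorflow.keras.callbacks import EarlyStopping

# Toy data standing in for a real training/validation split (illustrative only)
x_train, y_train = np.random.rand(200, 8), np.random.randint(0, 2, 200)
x_val, y_val = np.random.rand(50, 8), np.random.randint(0, 2, 50)

# A small fully connected network; the weights are initialized automatically
model = Sequential([layers.Dense(16, activation="relu", input_shape=(8,)),
                    layers.Dense(1, activation="sigmoid")])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Train for at most 50 epochs, but stop as soon as the validation error
# starts to grow, which is the interruption rule described above
early_stop = EarlyStopping(monitor="val_loss", patience=2, restore_best_weights=True)
model.fit(x_train, y_train, validation_data=(x_val, y_val),
          epochs=50, callbacks=[early_stop], verbose=0)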

2.5 How Neural Networks Work

In working with neural networks, there are two basic algorithms in use today: the feedforward algorithm and the so-called backpropagation algorithm.
Working with neural networks allows us to define complex, nonlinear hypotheses consisting of one or more neurons that can be arranged in one or more layers that build the neural network architecture. This method of mapping the input vector to the network outputs is called forward propagation.
The backpropagation algorithm is one of the main reasons that made artificial neural networks known and usable for solving different types of problems. With this algorithm, artificial neural networks were given a new capability, support for multiple layers, and backpropagation has proven to be the most common and effective method for training deep neural networks. The algorithm is used in conjunction with optimization methods such as gradient descent. The principle of network operation is based on transferring the input values from the input to the output layer, determining the error, and propagating the error back through the neural network from the output to the input, in order to train the network as well as possible, reduce the existing error and thus improve the final results.
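To connect the forward pass with the backward propagation of the error, a single sigmoid neuron trained by gradient descent on a squared error can be written out numerically; the input, target, initial weight and learning rate below are arbitrary illustrative values.

import math

x, target = 0.8, 1.0   # input value and desired output (illustrative)
w, lr = 0.5, 0.1       # initial weight and learning rate

for step in range(3):
    y = 1.0 / (1.0 + math.exp(-w * x))       # forward pass through the sigmoid neuron
    error = 0.5 * (y - target) ** 2          # squared error at the output
    grad = (y - target) * y * (1.0 - y) * x  # dE/dw obtained by the chain rule (backpropagation)
    w -= lr * grad                           # gradient descent weight correction
    print(f"step {step}: output={y:.4f}, error={error:.4f}, weight={w:.4f}")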
2.6 Types of Activation Functions

There are many types of activation functions that determine whether and how neurons are activated within a network. Activation functions can be divided into linear and nonlinear, and in practice only a few that have proven useful are used. Linear functions are used in regression problems, when unlimited outputs of any kind are required in the output layers, while nonlinear activation functions are more suitable for classification problems, where the outputs are limited to small quantities. The most popular functions used in classification problems are step (jump) functions and sigmoidal functions.

Figure 4 Types of activation functions
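A minimal NumPy sketch of the activation functions mentioned above, namely a step (jump) function, the sigmoid, and the rectified linear unit used later by the VGG16 architecture; the sample inputs are illustrative.

import numpy as np

def step(x):      # jump function: 0 below the threshold, 1 at or above it
    return np.where(x >= 0, 1.0, 0.0)

def sigmoid(x):   # smooth nonlinear function with outputs limited to (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):      # rectified linear unit used in the hidden layers of VGG16
    return np.maximum(0.0, x)

z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(step(z), sigmoid(z), relu(z), sep="\n")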
2.7 Convolutional Neural Networks

In the neural network structures mentioned so far, each neuron output was a scalar value. Convolutional neural networks are a special type of artificial neural network for processing unstructured data such as images, text, sound and speech, in which extensions occur in the form of convolution layers. The outputs of the convolution layers are two-dimensional and are called feature maps. The input to a convolutional neural network is two-dimensional (image data), while kernels are used instead of individual weight values. In addition to the convolution layers mentioned, these types of neural networks have specific pooling layers and fully connected layers. Convolutional neural networks usually start with one or more convolution layers, followed by a pooling layer, then a convolution layer again, and the process is repeated several times.
The convolution layer is a fundamental part of any convolutional neural network. Each convolution layer consists of filters containing weight values that need to be learned in order for the network to return better results.
In the initial phase, the input is convolved (multiplied) with the filter, producing a scalar product over the entire width and length of the image. The result of the convolution is a two-dimensional activation map that shows the filter response at each position in the array. In this process, the network learns the weights within the filter so that the filter activates where it recognizes certain image properties, such as edges, shapes and the like. In order to define the exact size of the output of the convolution layer, it is also necessary to define the stride of the filter with respect to the input data. The stride determines how far the filter moves in height and width during the convolution process.


The output width is indicated by Wy and the input width by Wx. The filter stride is denoted by S, while F indicates the square filter size. The output width after the convolution can be defined as:

Wy = (Wx − F) / S + 1,    (1)

Figure 5 Convolution process

The pooling layer contains a filter that reduces the dimensions of the image. In a convolutional neural network, a pooling layer is most commonly used after several convolution layers in order to reduce the resolution of the maps generated by the convolution layers. The pooling layer filter differs from the convolution layer filter in that it does not contain weight values. The filter is used only to select values within its default dimensions.
Of the several types of pooling, the most commonly used are average pooling and max pooling. Average pooling replaces the clustered values with their arithmetic mean, while max pooling simply selects the maximum value. The benefit of max pooling is that it keeps the stronger and more prominent pixels in the image, which are more important for the end result, while the irrelevant pixels are eliminated.

Figure 6 Average pooling

Figure 7 Max pooling

In the pooling layer, the dimensions of the output maps are calculated as:

Dn = (Ds − F) / S + 1,    (2)

where Ds is the old dimension, F is the filter width, and S is the jump between two value selections.
The fully connected layer is in most cases used in the final layers of the network. The reason for using fully connected layers only there is the need to first reduce the dimensions of the image by passing it through the neural network, since complete connectivity implies a quadratic number of connections between layers.
For example, for image data with dimensions 200×200×3, the input layer would have 120,000 input values. If we fully connected this to a hidden layer consisting of 1000 neurons, we would have 120 million weight values to learn, which requires substantial computing power. This is why fully connected layers are used only in the later stages of the neural network.
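Eqs. (1) and (2) and the fully connected example above can be checked with a short sketch; the concrete sizes below (a 256-pixel-wide input, a 3×3 filter with stride 1 and a 2×2 pooling filter with stride 2) are assumptions chosen only for illustration.

def conv_output_width(wx, f, s):
    # Eq. (1): output width of a convolution layer without padding
    return (wx - f) // s + 1

def pool_output_dim(ds, f, s):
    # Eq. (2): output dimension of a pooling layer
    return (ds - f) // s + 1

print(conv_output_width(wx=256, f=3, s=1))  # 254
print(pool_output_dim(ds=254, f=2, s=2))    # 127

# Fully connected example from the text: a 200x200x3 image flattened to
# 120,000 input values, densely connected to 1000 neurons -> 120 million weights
inputs = 200 * 200 * 3
print(inputs, inputs * 1000)                # 120000 120000000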
2.8 Avoiding Local Minimums

During the learning process of a neural network, the goal is to find the location of the global minimum of the error, which means that the model is at the best possible level at a given moment and the learning process can stop. In this process, so-called local minimums can fool the network into thinking it is within the global minimum. Local minimums can be avoided using various methods.
Known methods for avoiding local minimums:
Random transformations – Random transformations serve to augment an existing training dataset through operations such as translation, rotation and scaling. This increases the amount of data without the need to collect additional samples. An increased amount of training data makes the network less likely to get stuck in a local minimum. Random transformations can be performed during each iteration of the learning process or by preprocessing the data before training begins.

Figure 8 Local and global minimum comparison

Dropout – During every iteration of the learning process, some neurons are "accidentally" switched off, thus creating the appearance of a large number of architectures, although in reality only one is used. With multiple architectures, it is unlikely that they will all get stuck in the same local minimum.
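Both techniques are available in the Keras library used later in the paper; the following sketch shows how random transformations and dropout might be declared, where the transformation ranges and the dropout rate of 0.5 are illustrative assumptions rather than the settings used in the project.

from tensorflow.keras import Sequential, layers
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Random transformations: rotation, translation and scaling applied on the fly,
# enlarging the training set without collecting new samples
augmenter = ImageDataGenerator(rotation_range=10,
                               width_shift_range=0.1,
                               height_shift_range=0.1,
                               zoom_range=0.1)

# Dropout: during each training iteration half of the units in this layer are
# randomly switched off, which behaves like training many architectures at once
classifier_head = Sequential([
    layers.Dense(256, activation="relu", input_shape=(512,)),
    layers.Dropout(0.5),
    layers.Dense(2, activation="softmax"),
])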


2.9 Transfer Learning

Conventional deep learning algorithms are traditionally based on isolated tasks, where a single neural network serves a particular type of classification. Transfer learning is a newer methodology that seeks to change this and circumvent the paradigm of isolated learning by developing knowledge transfer methods that can reuse models learned on one type of classification for multiple different tasks.
In this way, a model initially created for one type of problem can later be used as a starting point for solving a new type of classification, and thus give better results than initializing a new neural network from scratch.
An analogy can be made with learning to drive in real life: learning how to drive a motorcycle can be greatly assisted by the knowledge of driving a car. Transfer learning works in a similar way.
Transfer learning is nowadays a very popular approach, on which the practical part of this paper is based. Choosing pre-trained models and already defined neural network architectures can be of great use in solving complex problems such as detecting pneumonia on x-ray images.
After the architecture of an existing neural network with defined layers has been loaded, it is necessary to delete the last, output layer from the network and replace it with a new output layer related to the problem for which the network will be used.
The next step is to conduct training of the complete, already defined architecture and all layers of the network on the learning dataset. By learning in this way, the neural network is able to apply the classification principles learned in previous tasks to a new type of problem, and the results are better without the need to define layers and create a new neural network from the beginning.
Transfer learning is good to use in cases where we do not have a large dataset to learn from. In the case where we have approximately 1000 images for the learning process, by merging those 1000 samples with networks that have been trained on millions of samples, we can reuse many learned principles for sorting and classification, and in that way improve model efficiency and reduce training time.
VGG16 is a convolutional neural network architecture proposed by K. Simonyan and A. Zisserman of Oxford University. The model achieves 92.7% accuracy on the ImageNet dataset, which consists of over 14 million images divided into 1000 classes.
The VGG16 network architecture consists of 13 convolution layers in which the number of filters increases from 64 to 512; 5 pooling layers that keep the highest value (max pooling); and 3 fully connected layers that use the dropout technique described earlier to avoid local minimums.
Big progress in comparison with previous neural network architectures was made by reducing the convolution filters to 3×3. The filter of the pooling layers is 2×2 with a stride of 2, while all the hidden layers of the network use the rectified linear activation function described in the previous chapters. All convolution layers use a stride and padding of 1, and the total number of parameters of the specified network architecture is 138 million. [5]
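The VGG16 architecture described above is distributed with the Keras library pre-trained on ImageNet, so it can be loaded and inspected directly; this is a minimal sketch assuming TensorFlow/Keras is installed.

from tensorflow.keras.applications import VGG16

# Load VGG16 with the ImageNet weights and its original 1000-class output layer
model = VGG16(weights="imagenet", include_top=True)

model.summary()              # 13 convolution layers, 5 pooling layers, 3 fully connected layers
print(model.count_params())  # approximately 138 million parameters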
3 MODEL IMPLEMENTATION

This chapter describes the implementation of the neural network model. It presents the process of collecting image data for learning, the division of the set into a learning set and a validation set, the visual analysis and comparison of the learning and validation data, the comparison of the number of positive and negative images, and the preprocessing of the collected data for entering the learning process.
The procedure of retrieving the previously described VGG16 network architecture and changing its output layer in accordance with the collected data is described, along with the learning curves produced using Tensorboard technology and the implementation of the training or learning process. After learning was completed, the best-performing model was saved and an evaluation of that model was performed to graphically present the prediction accuracy on validation data not used in the learning process.

3.1 Data Collection

The first step in implementing a model for detecting pneumonia on x-ray images is to collect the imaging data that will make up the sets for training and validating the model. A dataset available for download on the Kaggle site was used. Kaggle offers users a large number of different datasets that can be used for various research purposes.
The dataset selected consists of a total of 5863 lung x-ray images divided into two categories. The existing categories are x-rays with a positive diagnosis of pneumonia and images of normal, healthy lungs with no indication of disease. The set contains 1583 images of healthy lungs and 4273 images positive for pneumonia. The image data format is .jpeg, while the image dimensions differ and vary from image to image. In the later steps, it is necessary to change the image dimensions so that they are supported as input for the VGG16 network architecture. [6]

3.2 Data Preprocessing

After the successful collection of the data for the training process, and before learning, the data should be processed in such a way that it is suitable for entering the network.
Image file paths were originally loaded using a function from the Scikit-learn library that loads the paths of all files in a given directory, with the subdirectories of the main directory serving as the categories of the individual files, or of their paths. Subsequently, categories or labels are defined from the obtained data set, depending on whether the data is in the subdirectory of images with pneumonia or of images without the disease. The image categories were saved using functionality from the Numpy library.
After completing the label definition for all retrieved image paths, the total image dataset should be divided into a learning set and a validation set.


In this case, 90% of the total data set was determined for learning purposes, with the remaining 10% for validation.
After splitting the paths and labels into training and validation data, functions were implemented to load the saved paths, convert them to image data and resize the images to 256×256, which is required for later input to the initial layer of the neural network.
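A sketch of the preprocessing steps described in this subsection, collecting the file paths per class directory, deriving the labels, performing the 90/10 split and resizing the images to 256×256; the directory layout, the use of pathlib and OpenCV, and the helper function below are assumptions for illustration, since the exact project code is not reproduced here.

from pathlib import Path
import numpy as np
import cv2  # OpenCV, used here for reading and resizing the images
from sklearn.model_selection import train_test_split

DATA_DIR = Path("chest_xray")          # assumed layout: chest_xray/NORMAL, chest_xray/PNEUMONIA
CLASSES = ["NORMAL", "PNEUMONIA"]

# Collect the file paths and numeric labels from the two class subdirectories
paths, labels = [], []
for label, cls in enumerate(CLASSES):
    for p in (DATA_DIR / cls).glob("*.jpeg"):
        paths.append(p)
        labels.append(label)

# 90% of the data for learning, the remaining 10% for validation
train_paths, val_paths, train_labels, val_labels = train_test_split(
    paths, labels, test_size=0.1, stratify=labels, random_state=42)

def load_images(path_list, size=(256, 256)):
    # Read every image and resize it to the input size expected by the network
    return np.array([cv2.resize(cv2.imread(str(p)), size) for p in path_list])

x_train, x_val = load_images(train_paths), load_images(val_paths)
y_train, y_val = np.array(train_labels), np.array(val_labels)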
3.3 Data Visualization

This subsection deals with the visualization of the preprocessed data, to give an easier idea of the dataset collected, the ratio between the learning and validation data, and the like. The graphics and diagrams showing the ratios between the data were implemented using the Matplotlib library.
A visualization feature that was implemented offers a graphical representation of the data. This function visualizes the data ratios, namely the ratio between the learning and the validation set and the number of images positively diagnosed with pneumonia in relation to the x-ray images of healthy lungs.
A visual representation of the ratio of the learning set to the validation set is given in Fig. 9, and of the positive and negative samples in Fig. 10.

Figure 9 Comparison of validation and training data

Figure 10 Comparison of positive and negative samples
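A minimal Matplotlib sketch of the two ratios shown in Figs. 9 and 10; the label arrays y_train and y_val are assumed to come from the preprocessing sketch above.

import matplotlib.pyplot as plt

split_counts = {"training": len(y_train), "validation": len(y_val)}
class_counts = {"pneumonia": int((y_train == 1).sum() + (y_val == 1).sum()),
                "normal": int((y_train == 0).sum() + (y_val == 0).sum())}

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.bar(split_counts.keys(), split_counts.values())  # cf. Fig. 9
ax1.set_title("Training vs. validation data")
ax2.bar(class_counts.keys(), class_counts.values())  # cf. Fig. 10
ax2.set_title("Pneumonia vs. normal samples")
plt.tight_layout()
plt.show()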
3.4 Model Architecture

Defining the convolutional neural network architecture is the most important aspect, on which the overall success of the project depends. This part of the paper was done using the transfer learning methodology described earlier, which uses already defined architectures available for use in order to achieve better model performance. The VGG16 architecture was used in this paper. In this way, the neural network can take advantage of all previously learned principles, such as recognizing edges, angles, discoloration, etc., thus shortening the learning time and improving model efficiency.
The VGG16 architecture was loaded using a function from the Keras library. An additional average pooling layer was added to the defined VGG16 architecture. The last layer of the neural network, which defines the number of possible output classes, was changed to two classes, that is, pneumonia and the normal state of the lungs. A softmax activation function was added to the last layer, which scales the output values to the range from 0 to 1 in order to easily obtain a percentage for the prediction value.
The initial VGG16 architecture and the changes to the last layer are integrated into a single architecture, after which the complete architecture is stored in a variable that will execute the learning process. The function for calculating the error, the type of optimizer with its learning rate, and the values to be monitored during the iterations of the learning process are then defined; in our case the prediction accuracy is monitored, and the error value is included by default. The use of Tensorboard enables a later overview of the learning flow and of the changes in the defined parameters.
After completing all of the above procedures, the model architecture has been successfully implemented and is ready for the model training process, which may take some time depending on the computer configuration and the amount of learning data.
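A sketch of the architecture step described above: the convolutional base of VGG16 is loaded from Keras, an average pooling layer is placed on top of it, and the output layer is replaced with a two-class softmax; the Adam optimizer, the learning rate and the 256×256×3 input shape are illustrative assumptions, not necessarily the exact settings of the project.

from tensorflow.keras import Model, layers, optimizers
from tensorflow.keras.applications import VGG16

# Load the VGG16 convolutional base without its original 1000-class output layer
base = VGG16(weights="imagenet", include_top=False, input_shape=(256, 256, 3))

# Average pooling on top of the base, followed by a new 2-class softmax output
x = layers.GlobalAveragePooling2D()(base.output)
outputs = layers.Dense(2, activation="softmax")(x)
model = Model(inputs=base.input, outputs=outputs)

# Error function, optimizer with learning rate, and the monitored accuracy metric
model.compile(loss="sparse_categorical_crossentropy",
              optimizer=optimizers.Adam(learning_rate=1e-4),
              metrics=["accuracy"])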

3.5 Training Process

The process of learning the defined model is the part where the so-called neural network magic takes place. The model will, in an iterative way, using the backpropagation algorithm, learn which weight values are best and most effective for producing a model that can detect pneumonia. Once the convolutional neural network architecture has been implemented, the path of the model in which the learned weight values will be stored is determined. An option is selected that allows only the best model to be saved, to save memory and to avoid storing weight values at each iteration. The fit function from Keras is called to perform the learning process. The images and labels from the learning set and the validation set are sent as parameters, and the epochs, the checkpoints for storing the weights, and Tensorboard, for the later graphical representation of the learning curves, are determined.
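The fit call described above may be sketched as follows, with a ModelCheckpoint that keeps only the best-performing weights and a Tensorboard callback that records the learning curves; the logs/ directory and the 10 epochs follow the description in the text, while the file name and remaining details are assumptions, and the model and data variables are taken from the earlier sketches.

from tensorflow.keras.callbacks import ModelCheckpoint, TensorBoard

# Keep only the best-performing weights instead of saving them at every epoch
checkpoint = ModelCheckpoint("best_model.h5", monitor="val_accuracy",
                             save_best_only=True)

# Write the learning curves to the logs/ directory for later viewing in Tensorboard
tensorboard = TensorBoard(log_dir="logs")

history = model.fit(x_train, y_train,
                    validation_data=(x_val, y_val),
                    epochs=10,
                    callbacks=[checkpoint, tensorboard])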


3.6 Model Results

The learning curves that show the evolution of the model accuracy over the epochs and the decrease in the amount of error are presented using Tensorboard technology. Graphs were drawn to visualize the accuracy of the model on the learning set across the epochs, the error drop on the learning set, and the accuracy percentage and error drop on the validation set.
Tensorboard can be opened by typing the command tensorboard --logdir=logs/ in the Anaconda command line, where logs/ indicates the directory in which the graphs are stored. The address localhost:6006 is then entered in the browser and the Tensorboard window opens. The graphs related to the results obtained on the learning set are shown in Figs. 11 and 12.

Figure 11 Accuracy curve on learning set

Figure 12 Error curve on learning set

The curves produced from the learning dataset show an increase in accuracy across all 10 learning epochs, while the error decreases, which is the goal of the presented model.
A much more important aspect of the model are the results based on the validation set, which was not used in the learning process, in order to see the model's response to previously unseen data. The graphs from the validation set of x-ray image data are shown in Figs. 13 and 14.

Figure 13 Accuracy curve on validation set

Figure 14 Error curve on validation set

The results on the validation dataset show an overall accuracy of 94%, which is an indicator of the high performance of the pneumonia prediction model. [7]
After the penultimate epoch, it is noticeable that the amount of error on the validation set starts to increase while on the training set it continues to decrease, which indicates the possibility of overfitting. It is therefore necessary to stop the learning process after the 10th epoch, which proves to be the optimal number of epochs for this type of problem.
After visualizing the learning process, a confusion matrix was used to display the number of correctly and incorrectly classified images on the validation dataset divided into two classes. In machine learning, the confusion matrix serves as an instrument for evaluating the learned model in the domain of classification problems.
After the graphical representation of the confusion matrix, the overall accuracy of the model on the validation dataset, obtained using the Keras evaluate function, is 94%. The accuracy is determined on the validation dataset because, in the case of evaluation on the learning set, overfitting may occur and the true accuracy of the model cannot be determined in that way.
The 94% accuracy rate indicates that the model is very well trained on a relatively small dataset of some 5000 images. A very important factor in this is transfer learning, through which the VGG16 network architecture uses some of its previously acquired knowledge and achieves high accuracy in categorizing pneumonia.
Fig. 15 shows the visualized confusion matrix created as a product of the steps described above. By analysing the confusion matrix, the high accuracy of the model can be noticed. Of the 429 test x-ray images with pneumonia, 415 images were classified as pneumonia, while only 14 images were misclassified. In the test dataset there are significantly fewer images without pneumonia; out of a total of 157 such images, 138 were correctly classified, and for 19 images the model gave incorrect results.


The matrix shows a diagonal shape in darker colours, which is a good sign, since the strength of the diagonal of a confusion matrix shows the quality of the learned model. The total accuracy of the model, obtained when all the images are combined, is shown below the matrix and confirms the 94% accuracy already indicated.

Figure 15 Model confusion matrix
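A sketch of the evaluation step: the Keras evaluate function gives the overall accuracy, while scikit-learn's confusion_matrix produces the per-class breakdown visualized in Fig. 15; the variable names follow the earlier sketches and are assumptions rather than the original project code.

import numpy as np
from sklearn.metrics import confusion_matrix

# Overall accuracy on the validation set using the Keras evaluate function
val_loss, val_accuracy = model.evaluate(x_val, y_val, verbose=0)
print(f"validation accuracy: {val_accuracy:.2%}")

# Confusion matrix of true vs. predicted classes (normal = 0, pneumonia = 1)
predictions = np.argmax(model.predict(x_val), axis=1)
print(confusion_matrix(y_val, predictions))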
4 CONCLUSION

The aim of this paper was to develop a model of an intelligent system that receives an x-ray image of the lungs as an input parameter and, based on the processed image, returns the possibility of pneumonia as an output. The mentioned functionality was implemented using a transfer learning methodology based on already defined convolutional neural network architectures.
In this example, the VGG16 architecture was used, consisting of a total of 16 layers, which greatly contributes to the accuracy of the model by using previously learned principles for the classification of image data. After validation of the system, the model shows extremely good results on the validation dataset. For a more complete and qualitative prediction of pneumonia, more data should be available to make the model more representative for decision making and to assist radiologists in making diagnoses.
The model was later integrated with technologies used to build web applications using the Flask framework. Creating a web application based on the implemented model contributes to the availability of the model, since web applications are accessible from anywhere at any time, with the requirement of an internet connection. Intuitiveness and ease of use of the application are the main factors for widespread use, which would reduce the number of misdiagnoses of pneumonia.

Acknowledgements

This paper describes the results of research being carried out within the project "Centar održivog razvoja"/"Center of sustainable development", co-financed by the European regional development fund and implemented within the Operational Programme Competitiveness and Cohesion 2014–2020, based on the call "Investing in Organizational Reform and Infrastructure in the Research, Development and Innovation Sector".

5 REFERENCES

[1] Image dataset, https://www.kaggle.com/paultimothymooney/chest-xray-pneumonia (02.10.2019)
[2] Hrnčić, M. (2019). Design of detector model for RTG imaging, Polytechnic of Međimurje in Čakovec. https://repozitorij.mev.hr/islandora/object/mev%3A1004 (18.09.2019)
[3] Bishop, C. (2007). Pattern Recognition and Machine Learning. Springer, pp 738.
[4] Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. The MIT Press, pp 787.
[5] Nielsen, M. (2019). Neural Networks and Deep Learning. Online book.
[6] Backpropagation algorithm, https://towardsdatascience.com/understanding-backpropagation-algorithm-7bb3aa2f95fd (23.09.2019)
[7] Activation functions, https://ieeexplore.ieee.org/document/8404724 (25.09.2019)
[8] Williams, A. (2017). Convolutional Neural Networks in Python. CreateSpace Independent Publishing Platform, pp 140.
[9] Transfer learning, https://machinelearningmastery.com/transfer-learning-for-deep-learning/ (29.09.2019)
[10] Gulli, A. (2017). Deep Learning with Keras. Packt Publishing, pp 318.
[11] Tensorboard, https://www.tensorflow.org/tensorboard (02.10.2019)
[12] Confusion matrix, https://ieeexplore.ieee.org/document/7326461 (02.10.2019)

Author's contacts:

Željko KNOK, mr. sc., senior lecturer
Polytechnic of Međimurje in Čakovec,
Bana Josipa Jelačića 22a, 40000 Čakovec, Croatia
[email protected]

Klaudio PAP, PhD, Prof.
University of Zagreb, Faculty of Graphic Arts,
Getaldićeva 2, 10000 Zagreb, Croatia
[email protected]

Marko HRNČIĆ, student
Zagreb University of Applied Sciences,
Mlinarska cesta 38, 10000 Zagreb, Croatia
[email protected]
