
Journal of Disability Research

2023 | Volume 2 | Issue 2 | Pages: 25–36


DOI: 10.57197/JDR-2023-0017

Development of a Smart Hospital Bed Based on Deep Learning to Monitor Patient Conditions
Sarra Ayouni1, Mohamed Maddeh2, Shaha Al-Otaibi1, Malik Bader Alazzam3,*, Nazik Alturki1 and Fahima Hajjej1

1Department of Information Systems, College of Computer and Information Sciences, Princess Nourah Bint Abdul Rahman University,
Riyadh 11671, Saudi Arabia
2Department of Information Systems, College of Applied Computer Science, King Saud University, Riyadh 11451, Saudi Arabia
3Information Technology Department, Information Technology College, Ajloun National Private University, Ajloun, Jordan

Correspondence to:
Malik Bader Alazzam*, e-mail: [email protected], Tel.: 0096779358694

Received: May 21 2023; Revised: June 22 2023; Accepted: June 23 2023; Published Online: July 19 2023

ABSTRACT
An Internet of Things-based automated patient condition monitoring and detection system is discussed and built in this work. The proposed algorithm that underpins the smart-bed system is based on deep learning. The movement and posture of the patient's body may be determined with the help of wearable sensor-based devices. In this work, an internet protocol camera device is used for monitoring the smart bed, and sensor data from five key points of the smart bed are core components of our approach. The Mask Region Convolutional Neural Network approach is used to extract data from many important areas of the patient's body by collecting data from sensors. The distance and the time threshold are used to identify motions as being connected with either normal or uncomfortable circumstances. The information from these key locations is also utilised to establish the postures in which the patient is lying while being treated on the bed. The patient's body motion and bodily expression are constantly monitored for any discomfort. The results of the experiments demonstrate that the suggested system is valuable, since it achieves a true-positive rate of 95% while yielding a false-positive rate of only 4%.

KEYWORDS
smart bed, deep learning, convolutional neural network, regression loss, region of interest, overfitting, threshold component evaluation

INTRODUCTION
When integrated with computer vision, machine learning, and deep learning methods, the Internet of Things (IoT) delivers rapid and precise patient monitoring and detection systems. This is one of the many ways in which the IoT is helping to advance the area of smart healthcare (Pace et al., 2018). Traditional patient monitoring systems often rely on approaches based on sensors that patients wear on their body. Providing the best medical care is a challenging and time-consuming task. Patients are often checked in when they are experiencing significant health issues; they may also be admitted for conditions that are not life-threatening but do need frequent monitoring. In addition, numerous therapy practices may not be administered until the patient has been admitted to a private room at a medical facility for a certain amount of time. Approximately 40% of these newly admitted patients belong to a population that is mostly old, with an average age of >65 years (Mattison et al., 2013). Additionally, there are disabled people who are admitted to hospitals (Carpenter et al., 2007). The activities and relationships between hospitalised patients and the community in which they are housed are becoming prohibitive because of major health concerns and factors such as age, infirmity, and pre-established medical directives. Several recent studies concluded that the healthcare sector suffers from a serious lack of available personnel and that healthcare experts carry a heavy burden of labour throughout the whole process (Akter et al., 2019; Qureshi et al., 2019). This contributes to a further decline in the overall quality of medical attention for people who are ill (Farid et al., 2020). Additionally, research demonstrates that the healthcare industry is under a significant amount of strain in nations that are already developed, in addition to those that are still developing (Akter et al., 2019). The situation has deteriorated further because the widespread COVID-19 epidemic has reached a new level (McGarry et al., 2020; Sterpetti, 2020). Developed nations such as Finland, Canada, England, Ireland, and New Zealand are examples. Both New


Zealand and Japan have put a lot of effort into developing innovative approaches to patient care with the aim of enhancing the standard of medical treatment (Reinhold et al., 2019). However, although scholars have concentrated on building assistance programmes for patients who are hospitalised, bedridden, or unable to move about and have limited capacity for engagement (for example, monitoring systems based on brainwave-activity interfaces and interaction systems based on hand gestures) (Gao et al., 2003; Raheja et al., 2014; Saha et al., 2018), only a small amount of study has been carried out thus far to determine the needs of patients in this category and then come up with a plan to meet their specific needs. To mitigate the effects of this emergency on the healthcare system, the necessities that a patient in a hospital smart bed is expected to have must be clearly perceived and well comprehended. A viable solution may then be developed and created to assist hospitalised patients alongside professionals in the healthcare industry.

Recent research suggests that systems based on deep learning may improve the effectiveness, precision, and adaptability of healthcare systems (LeCun et al., 2015; Shrestha and Mahmood, 2019). Several systems created for the medical industry have been put into use by using technology based on deep learning (for instance, medical image analysis, illness prognostication, and other similar topics) (Shen et al., 2017; Bakator and Radosav, 2018). Recently, researchers have focused on utilising deep learning for smart beds. Consequently, the goals of this work are as follows:

1. To gain an understanding of the smart-bed control mechanism for patients who are bedridden based on the graphical user interface (GUI), Node microcontroller unit (MCU), and 12 V 6-channel relay.
2. To design and construct the control system for the smart-bed system in accordance with the five key points targeted, which correspond to the heart rate, blood pressure, body temperature, motion detector, and bed occupancy for patient body monitoring, and an internet protocol (IP) camera device for monitoring the smart bed by using sensory data and deep learning technologies.
3. To analyse the system using the proposed deep learning-based model through empirical research and analysis.

According to the research that was carried out, quite a few different studies have been conducted to help the healthcare industry. In addition, most of the interface systems created in earlier studies concentrated on implementing a single mode of operation without considering patients in their various states across the range of impairment. It was discovered that only a few patient care systems had been established using various forms of deep learning technology. Because of this, the study focuses on developing a solution for the smart-bed system as well as the challenges it confronts.

This paper develops an IoT-based automated non-invasive patient detection and monitoring system using an algorithm based on deep learning together with line-of-sight cameras and wearable sensing devices. The suggested system makes use of a trained and pre-trained Mask Region Convolutional Neural Network (Mask-RCNN) model for most of its detection work, particularly for key points that are identified and connected to a particular sensor.

The remainder of the paper is laid out as follows: this research paper has five sections: Introduction, Literature Review, Proposed Technique, Experimental Findings and Discussion, and Conclusion.

LITERATURE REVIEW

The underlying foundation for the algorithms employed in this system is deep learning; hence, the effectiveness and precision of the system's real-time operation have been improved. Several different digital systems have been created to provide medical treatment to those who are unable to care for themselves due to illness or disability and those in advanced years (Lakkis and Elshakankiri, 2017; Khan et al., 2018; Hossain et al., 2019; Hasan et al., 2021; Islam et al., 2021). In Kanase and Gaikwad (2016), the authors suggested a cloud-based approach that uses a variety of sensory data. Sensors were used to get information from the patient's room. To exhibit this, both an application and a website were built, and readings from the sensors, along with functions for regulating the patient's environment, were made available to the relevant medical professionals, nurses, and staff members.

In addition, the research demonstrated how the various types of essential data may be transported from the perspective of the sick to that of the carer. In this planned system, however, the patient was not given any control over what was happening to them.

Some system proposals were made by Khan et al. (2018), Saha et al. (2018), and Kamruzzaman (2020) for the monitoring and care of patients during emergency alarm generation. In Aadeeb et al. (2020), the authors constructed a brain-computer interface (BCI) and deep management system for hospital patient rooms, aimed at assisting individuals who are immobile, ill, or incapacitated in exercising control over their immediate environment.

Tam et al. (2019) created a controller that is based on hand gestures. Electromyographic sensors assisted in the detection of muscular motions, and the electromyographic signals were utilised as features for training a convolutional neural network (CNN).

There are some alternative methods for computer vision and face-feature analysis for individual patients. In Khan et al. (2017), the authors developed and tested a control system for people who are unable to use their hands because of their condition.

The developments made in the IoT in recent years have made it possible to create healthcare systems that are more intelligent and predictable. In most IoT-based technology in the medical field, notably in smart hospitals, it is necessary to have a gateway (also known as a bridge point) to link the internet and the sensor network architecture. The gateway is often located on the periphery of the network and is responsible for performing critical tasks utilised by the internet and networks of sensors (Alam


et al., 2019; Chen et al., 2019). These gateways provide streamlined management of the sensor network and access to relevant data over the internet, which is the medium via which the data are transferred (Casadei et al., 2019).

The IoT is well positioned to play an important role in healthcare trends thanks to technology on a variety of different levels. Unlocking the potential of the IoT to improve healthcare facilities becomes possible when a hospital is equipped with a real-time smart healthcare system (Savaglio et al., 2020). It makes the experience better for patients, enhances the experience for carers, has a positive impact on health and clinical outcomes, and reduces costs.

IoT-based systems, in conjunction with data mining methods (Amato et al., 2018; Piccialli et al., 2019), monitor and synthesise patient information, crosscheck these records against registered patterns, and analyse illness indications. The IoT delivers patient monitoring solutions that are efficient and quick, playing a crucial role in remote patient monitoring in the absence of medical personnel. Currently, it offers a supporting monitoring system that makes use of non-intrusive equipment (IoT-based sensors or cameras), in addition to computer vision and deep learning algorithms (Piccialli et al., 2020; Qureshi et al., 2020; Piccialli et al., 2021). Monitoring patients becomes more effective when it is based on the ability to detect their symptoms from visual signals, such as how they stand, how they move, and how they appear. In recent years, scientists have experimented with patient-tracking systems that include specialist equipment, pressure beds, and sensors (Birku and Agrawal, 2018), but this has incurred extra costs. According to the available research, most of the developed approaches make use of several different camera devices or sensors for monitoring patients.

In the published studies, most researchers investigated a variety of sensor-based approaches to fall detection, and several scholars relied on vision in their investigation of fall detection (Birku and Agrawal, 2018). Numerous researchers, such as Merrouche and Baha (2016), extract visual information from scenes that include only a single room by using numerous camera systems. Researchers have monitored the pace, depth, and consistency of patients' breathing using sensors and signal-based information; the analysis was based on the inhalation and exhalation duration and ratio, for example Sathyanarayana et al. (2018). Very few of them created vision-based systems that used information about patients' facial expressions to assess pain, such as Jan et al. (2018), which requires the patients' faces to be aligned directly with the camera. Monitoring measures for respiration or breathing include keeping an eye on the patient's breathing rate, stability, and depth, as well as the exhalation/inhalation time ratio and any indications of sleep apnoea (Uysal and Filik, 2018). In Dhillon (2017), a method was proposed for the development of a monitoring and warning system for the diagnosis of epileptic seizures. Researchers from many institutions looked at a patient's behaviour and utilised the gathered data to analyse the patient's medical problems (Sathyanarayana et al., 2018). In Liu and Ostadabbas (2017), the authors suggested an in-bed patient posture monitoring system that does not involve any intrusive procedures. Techniques for posture-based monitoring using many cameras have been researched by Liu and Ostadabbas (2017), but they mostly concentrate on the patient's upper body region. In Deng et al. (2018), the authors presented the idea of computerised eye tracking for sleep analysis that makes use of motion sensors and infrared cameras. In the published studies, most researchers concentrated their attention on a single patient and/or bed, in addition to certain contact-based devices. For these methods to work, it is necessary to attach sensors to patients' bodies or beds, which is not only inconvenient but also expensive and often something the patient does not want. Several studies included visual methods, although most of the time different types of cameras were used for monitoring breathing and detecting falls. Several researchers have created techniques that are based on patients' facial expressions; however, these techniques require the patient's face to be oriented squarely towards the camera.

This research work develops an automated system for patient monitoring and detection. The system makes use of specialised hardware, sensors, and line-of-sight cameras in order to monitor patient conditions. The suggested method identifies the important spots of the patient's body using a deep learning-based model known as Mask-RCNN. The information from the key sites is then utilised for further investigation of the data from the smart bed, which correspond to the heart rate, blood pressure, body temperature, motion detector, and bed occupancy. As a metric of performance, we use the average distance between two successive critical points that have been identified.

In this study, in contrast to earlier similar efforts, a technique based on deep learning is developed to classify and investigate the IP-based top-view camera for the smart bed and five different key points of the patient's body connected to sensors.

DEEP LEARNING-BASED SMART-BED MONITORING SYSTEM

Proposed methodology

Deep learning, an artificial intelligence technique, uses deep neural networks to analyse medical data and provide accurate diagnoses and prognoses for patients. Deep learning works in this domain in several ways. Image analysis: deep neural networks analyse magnetic resonance imaging (MRI) scans, X-rays, and histopathological samples, and the deep model recognises complex characteristics and illness patterns to help diagnose cancer and neurological ailments early. Prediction and prognosis: deep learning processes massive amounts of patient data, including medical records, clinical reports, and diagnostic findings (Arshad et al., 2023). These data help the deep model anticipate illness progression and therapy responses. Predictions may enhance treatment planning and identify outcome-affecting factors. Deep learning may uncover subtle medical data patterns and changes that healthcare practitioners overlook. It may identify health issues and disease development by finding complicated data linkages (Islam et al., 2022).


Deep learning might improve medical diagnostics, especially in terms of accuracy. Medical data may teach advanced models disease patterns and indicators, which improves diagnosis and treatment. Deep learning may anticipate and diagnose illnesses: advanced models may use prior data to predict patient response to therapy and sickness progression, and such predictions might improve and customise treatment regimens. Deep learning also reduces medical mistakes, improving patient safety. Deep models may identify problems and provide solutions by studying data and risk, and deep learning algorithms may discover anomalies early, improving patient monitoring and health outcomes.

The utilisation of deep learning enables the provision of tailored treatment to individual patients. Deep models have the potential to facilitate individualised treatment plans and therapies through the analysis of medical history, genetic information, and lifestyle factors. Such a tailored approach has the potential to enhance both the physical well-being and the overall satisfaction of patients (Bhardwaj et al., 2022; Sujith et al., 2022).

The potential of deep learning is substantial; however, it is imperative that it be employed in conjunction with medical expertise and clinical discernment to optimise patient care.

Design specifications

We first surveyed the data sets currently available and the in-lab trials reported in the published research. The data set was compiled with the assistance of a top-view IP-based camera positioned at a height of 4 m. The images from the IP camera are taken to have a resolution of 640 × 480. The system keeps a record of the patient's smart-bed image using an IP camera and sensory data from five key points of the patient's body, that is, heart rate, blood pressure, body temperature, motion detector, and bed occupancy. The findings of the investigation revealed that the detection and monitoring of smart beds can be associated with a GUI to help make healthcare better. A personal computer camera was attached to the patient's smart bed, and the images are then displayed on a computer for further monitoring.

The proposed Internet of Things-based system

An IoT- and deep learning-based smart-bed system has been proposed in this study. The system uses a deep learning algorithm to monitor patients' activity. An IP-based camera is used to capture images from the top view of the patient's smart bed, and a deep learning-based model is employed to recognise information about the patient's key points connected to body sensors, which correspond to the heart rate, blood pressure, body temperature, motion detector, and bed occupancy. Together, these two components make up the total system. The proposed system's block diagram is shown in Figure 1. The workflow of the proposed model with hardware specifications is shown in Figure 2.

Development of the CNN model

The preparation of the dataset was the first step to be taken before the CNN models could be trained. This required several stages, such as the gathering of picture data, the tagging of image data, and the resizing, cropping, and scaling of images, among other procedures. After that, the picture data were split into two groups, one used to train the CNN models and the other to verify them. In addition, the pictures were shuffled and divided into batches before being used for training the CNNs. The Results section provides a more in-depth analysis of the final picture collection produced by the previously indicated actions and procedures.

There is one input layer (Wang, 2003), several convolutional layers (Albawi et al., 2017), and many pooling layers (Sun et al., 2017) that together form the classification system that is the CNN. The input layer is where a picture is fed into the CNN before it is processed further. This layer is responsible for providing an input feature map to the succeeding CNN layers so that the picture may be processed (Wu and Lin, 2018). With a three-dimensional feature map as the input, the convolutional layer applies the convolution operation (Ludwig, 2013) defined by Eq. 1, where F stands for the feature map, I for the convolution kernel, x and y for the location on the feature map where the kernel's centre lies, and i and j for the surrounding coordinates:

(F \circ I)(x, y) = \sum_{i=-n}^{n} \sum_{j=-n}^{n} F(i, j)\, I(x - i, y - j). (1)

By using the pooling layer, shrinkage of a feature map's total width is possible. Each feature map supplied to it is processed by a single kernel. Because of its ability to reduce the data size, the pooling layer may likewise convert feature maps into one-dimensional feature maps. The input parameter x for the dense layer is a one-dimensional feature map, which is multiplied by the weight values w of the layer. In addition, the dense layer has a bias value denoted by b, which is added to the result of the weight multiplication such that y = wx^T + b. The dense layer thus conducts the multiplication of weights, as well as the addition of bias, with the prescribed feature inputs. An activation function (Sharma et al., 2017) is included in each layer of a CNN. This function is responsible for modifying the output produced by the CNN layer. The rectified linear unit activation is given by Eq. 2a, and the SoftMax activation, which is utilised in CNNs to analyse feature maps from convolutional layers, is given by Eq. 2b:

y = \max(x, 0), (2a)

\sigma(x_i) = \frac{e^{x_i}}{\sum_{j=1}^{k} e^{x_j}}. (2b)

An activation function (the rectified linear unit activation function and the SoftMax type of activation function) has been implemented in the final dense layer for the purposes of classification. The CNNs that are


Figure 1: Overall block diagram of the proposed model.
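To make the flagging stage of this block diagram concrete, the per-detection output vector described in the text, z_i = [p_i, w_i, h_i, x_i, y_i], can be sketched in Python. This is an illustrative sketch only: the record layout and the 0.5 score cut-off are our assumptions for exposition, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    """One detection vector z_i = [p_i, w_i, h_i, x_i, y_i]."""
    p: float  # likelihood that the region belongs to the 'not normal' class
    w: float  # width of the rectangular area
    h: float  # height of the rectangular area
    x: float  # x coordinate where the rectangular area begins
    y: float  # y coordinate where the rectangular area begins

def flag_for_review(detections: List[Detection],
                    p_min: float = 0.5) -> List[Detection]:
    """Keep detections (face, legs, arms, etc.) whose 'not normal' score
    reaches p_min; these are passed on to the monitoring unit."""
    return [d for d in detections if d.p >= p_min]
```

In this sketch, a downstream monitoring unit would receive only the flagged regions and decide whether an alert is warranted.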

suggested for use in executing the proposed algorithm feature pooling layers as well as convolutional layers. The proposed-algorithm-based CNNs employ a convolutional layer rather than a dense layer as their final layer; this is the key distinction between the two types of CNNs. This convolutional layer produces a three-dimensional feature map as its final output. For each image in which the patient's expression is not normal, the value of each vector z_i may be determined by z_i = [p_i, w_i, h_i, x_i, y_i]. Here, the variable p_i denotes the likelihood that an item belongs to the "not normal" class, which is denoted by the vector y_i. A rectangular area that begins at the point (x_i, y_i) and has height and width dimensions of h_i and w_i, correspondingly, may be used to define the boundaries of the item that has been spotted (a portion of the patient, i.e. the face, legs, or arms, if it is behaving abnormally).

The proposed Mask-RCNN model

The images from the IP camera are sent to the monitoring model, which is based on deep learning, so that they may be further processed. Mask-RCNN (Islam et al., 2021) is performed to monitor and identify any signs of patient discomfort, as shown in Figure 1. The Mask-RCNN model receives the images that are input into it. The model can identify the patient as well as critical places on their body by using sensors. The identified critical spots are then used for further investigation. Figure 3 presents the block diagram for the proposed Mask-RCNN network for predicting patient conditions.

The data mining technique of association rules has been used to investigate the sensors. Finally, a threshold that is dependent on distance is used to detect the patient's data from the different sensors that need attention. The completed processes are then sent to the monitoring unit, where they are evaluated further and may be used for emergency calls or alerts.

The fundamental design is additionally expanded so that human posture may be estimated by leveraging information from critical spots (He et al., 2017). The process consists of two phases. In the first step, a CNN is used to extract a feature map from the input image using the region proposal network (Lin et al., 2017). The CNN's extracted candidates


Figure 2: Workflow of the proposed model with hardware specification.
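The basic CNN building blocks used in this workflow, namely the convolution of Eq. 1, the rectified linear unit of Eq. 2a, the SoftMax of Eq. 2b, and the dense layer y = wx^T + b, can be sketched with NumPy as follows. This is a minimal illustrative sketch (single channel, stride 1, "valid" region only), not the authors' implementation.

```python
import numpy as np

def conv2d(F, I):
    """Discrete 2-D convolution of feature map F with kernel I (Eq. 1)."""
    kh, kw = I.shape
    H, W = F.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            # flip the kernel so this is true convolution, I(x - i, y - j)
            out[y, x] = np.sum(F[y:y + kh, x:x + kw] * I[::-1, ::-1])
    return out

def relu(x):
    """Rectified linear unit activation (Eq. 2a): y = max(x, 0)."""
    return np.maximum(x, 0)

def softmax(x):
    """SoftMax activation (Eq. 2b), numerically stabilised."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def dense(x, w, b):
    """Dense layer on a flattened feature map: y = w x^T + b."""
    return w @ x + b
```

A pooling layer would analogously slide a single kernel over each feature map and keep, for example, the maximum of each window.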

for the bounding box choose a spot on the feature map at random. Bounding-box candidates of varying sizes may be extracted; hence, in the second step, a layer known as RoI (Region of Interest) Align is used to reduce the dimension of the extracted features and ensure that their sizes are all comparable to one another, as shown in Figure 3. The collected characteristics are then supplied to the parallel branches of CNNs to make predictions about the segmentation masks and the bounding boxes. The loss function for the Mask-RCNN is arrived at by adding together the terms in L = L_cls + L_loc + L_mask. Here, the symbol L_cls stands for the classification loss (He et al., 2017), while the symbol L_loc denotes the regression loss for the observed bounding box. The formula for determining the L_cls of an RoI is as follows: L_cls(p, u) = −log p_u. In the equation, u stands for the actual class of the item, and p = (p_0, …, p_k) displays the projected probability distribution over the k + 1 categories of objects. The equation for determining the regression loss inside the box's confines, L_loc, for an RoI is as follows:

L_{loc}(t^u, v) = \sum_{i \in \{x, y, w, h\}} \mathrm{smooth}_{L1}(t_i^u - v_i),

in which v = (v_x, v_y, v_w, v_h) represents the true RoI regression inside a box and t^u = (t_x^u, t_y^u, t_w^u, t_h^u) represents the anticipated regression inside the bounding boundary of class u. The formula for smooth L1 is written as in Ren et al. (2015), and it is as follows:

\mathrm{smooth}_{L1}(x) = \begin{cases} 0.5x^2, & \text{if } |x| < 1 \\ |x| - 0.5, & \text{otherwise} \end{cases}.

The loss function for binary data is the value of L_mask and is found by calculating the average binary cross-entropy loss in the mask branch. The output of this mask branch has size Km^2 for each RoI, where each of the K binary masks has a resolution of m × m and K


Figure 3: Block diagram for the proposed Mask-RCNN network in predicting patient conditions. Abbreviation: Mask-RCNN,
Mask Region Convolutional Neural Network.

denotes the number of classes. The L_mask is determined as in He et al. (2017), which is written as follows:

L_{mask} = -\frac{1}{m^2} \sum_{ij} [Q_{ij} \log P_{ij}^u + (1 - Q_{ij}) \log(1 - P_{ij}^u)]. (3)

In this equation, Q represents the actual mask, while P^u represents the anticipated mask for the class u of the RoI. These two masks are represented by the expressions Q_ij ∈ [0, 1] and P_ij^u ∈ [0, 1], respectively. The framework is enhanced so that facial expression estimation may take place.

Next, the values of the sensory data from the identified key points are represented as one-hot masks in the model. The model forecasts a total of K masks, with one representing each of the K key-point types. Meanwhile, the object detection algorithm is trained to determine whether the patient's key point is normal or needs attention. Both steps, "patient expression detection" and "key point detection", may be completed in isolation from one another.

The pre-trained model shown in Figure 3 initially locates and segments patients in the input picture. In addition, the model can identify five distinct important data records from the patient's body.

The information about the patient's expression, together with the other sensory data, is used to determine whether the patient is experiencing any discomfort and needs assistance.

When the patient's motion is characterised by a lack of control and a high frequency, it is regarded as a marker of pain. These motions and the detection of pain may differ for different illnesses, as other aspects may be linked to them. In this context, it is presumed that patients are experiencing standard discomfort. It is thought that discomfort is present when there is a greater frequency of unknown situations and frequent moves over a longer period. The continuous motions of a specific body part are used as the basis for analysing pain in that body part of the patient.

This technique monitors the movement associated with a motion detection sensor over time by utilising key point


coordinates (x, y), and it determines whether the patient's state is normal or whether he is experiencing pain. Using information about the patient's distance from the sensor, every movement that may have happened in any organ of the patient's body may be quantified. Calculating the distance between consecutive images of the video sequences using the Euclidean distance allows the related key spots of the concerned organ to have their distances determined. The following equation computes the distance between the (x, y) coordinates of all critical locations in two subsequent images:

D = \sqrt{(k_{x_j} - k_{x_{j-1}})^2 + (k_{y_j} - k_{y_{j-1}})^2}

The term D denotes the Euclidean distance of key point k between subsequent images j and j - 1. The notation (k_{x_j}, k_{y_j}) and (k_{x_{j-1}}, k_{y_{j-1}}) indicates the (x, y) location of the pivotal pixel coordinates of k in images j and j - 1, respectively. A threshold T is then applied such that

T = \begin{cases} 1, & D \geq d_p \\ 0, & \text{otherwise} \end{cases}

With respect to the pixels that make up the picture or frame, the threshold value T registers the distance between consecutive key points, that is, the movement of key points. The d_p value for this work has been set to 25 pixels. The distance threshold is used to assess whether there has been movement in any of the identified key points of the patient's body, which allows doctors to determine whether a body part has moved: movement of any identified key point on the patient's body causes a change in the value of its (x, y) coordinates. To analyse the Euclidean distances for rapid movement in the patient's body (Mov_{body}), the following equation is used:

Mov_{body} = \begin{cases} 1, & \text{if } \sum_{i=1}^{n} V_i \geq 1 \\ 0, & \text{otherwise} \end{cases} \quad (4)

Here V_i is the threshold value T evaluated for the ith key point. The variable i may take on any value between 1 and n, where n is the number of key points of the patient's body B. Finally, a time-based threshold known as T_t is used to assess each image for the occurrence of movement to determine whether a patient is experiencing normal sensations or some degree of discomfort. The time threshold may change depending on the size of the data collection as well as the kind of data set. The value of Cond_{patient} reflects the current state of the patient P and is written as follows:

Cond_{patient} = \begin{cases} \text{Discomfort}, & Mov_{body} \geq T_t \\ \text{Normal}, & \text{otherwise} \end{cases} \quad (5)

The time threshold T_t is the amount of time that serves as the dividing line between motions that are considered normal and movements that are produced by an uncomfortable situation. In this work, movement sustained for 10 or more consecutive images is taken as T_t.

RESULTS AND DISCUSSION

The Mask-RCNNs were examined statistically by calculating their accuracy, precision, F1-score, Cohen's kappa, recall, receiver operating characteristics area under the curve (ROC AUC), true-positive rate (TPR), false-negative rate (FNR), true-negative rate (TNR), and false-positive rate (FPR). This statistical analysis was done for the overall six key points, namely T1: image from smart bed, T2: heart rate, T3: blood pressure, T4: body temperature, T5: motion detector, and T6: bed occupancy, which were classified based on the prediction values as well as the label values. A few of the statistical measures are shown in Table 1 with respect to the classification for T1 through T6. In addition, the photos that were used for training were likewise pre-processed using the same procedure and statistically examined using the same metrics for the sake of comparison.

The metrics presented are all positive metrics; hence, the performance of the model being assessed improves in direct proportion to their values. The images used in this work went through a series of preparation steps, which included grayscaling, enlarging the image resolution, and rescaling the pixel values. After that, the photos were fed to the Mask-RCNN model, and the values of the predictions were saved in a file. The predicted values comprised the category of the items that were found, and the patient is classified as either "normal" or "abnormal."

The performance of a CNN model can also be improved through data augmentation; data augmentation may expand

Table 1: Statistical analysis of classification using the proposed method.


Parameter Classification Classification Classification Classification Classification Classification
for T1 for T2 for T3 for T4 for T5 for T6
Accuracy 0.971/0.963 0.969/0.953 0.971/0.953 0.964/0.944 0.986/0.956 0.976/0.943
Precision 0.966/0.953 0.953/0.946 0.963/0.941 0.953/0.956 0.963/0.944 0.968/0.922
Recall 0.987/0.973 0.973/0.964 0.952/0.931 0.969/0.954 0.958/0.923 0.956/0.921
F1-Score 0.985/0.965 0.963/0.911 0.952/0.921 0.974/0.955 0.966/0.948 0.955/0.934
Cohen’s Kappa 0.936/0.904 0.921/0.911 0.911/0.901 0.933/0.932 0.943/0.922 0.962/0.932
ROC_AUC 0.974/0.954 0.974/0.932 0.964/0.941 0.983/0.945 0.974/0.952 0.956/0.939

Abbreviation: ROC_AUC, receiver operating characteristics area under the curve.


and diversify the training dataset. The model may become more resilient and generalise better to unknown data when random changes like rotation, scaling, and flipping are applied to the input pictures.

Hyperparameter tuning

Multiple hyperparameter settings can be tried to obtain the best CNN model. This covers the learning rate, batch size, number of layers, filter size, and activation function. Grid or random search can be used to tune hyperparameters.

Transfer learning

Pre-trained models can be used to start CNN model training. Transfer learning reuses information learned on ImageNet. Fine-tuning the pre-trained model for the task and dataset at hand may improve performance with less training data.

Regularisation

Regularisation prevents overfitting and improves generalisation. Dropout and L2 regularisation may minimise model complexity and feature dependence. The CNN model can also be optimised using multiple optimisation strategies: Adam, RMSprop, and momentum may expedite convergence and enhance training efficiency.

Ensemble methods

Predictions from many CNN models can be combined. Averaging or voting may increase robustness and accuracy by integrating the varied viewpoints of different models.

Architecture changes

The CNN model architecture can be changed. This may include adding or deleting layers, or modifying the number of filters or neurons, the network depth, or the convolutional or pooling layer types, so that the architecture fits the job and the dataset. Early stopping prevents overtraining: training can be stopped when the model's performance degrades on a validation set, which helps locate the ideal point at which the model has learned meaningful patterns without overfitting the training data.

Regular model evaluation

The model's performance should be assessed on a distinct validation or test dataset. This monitors model progress, identifies faults, and guides adjustments. If possible, multiple data sources should be used; integrating appropriate sensor data or patient information into the model may improve its performance.

The statistical analysis was carried out to measure the effectiveness of the proposed system in terms of the standard deviation for training and testing data. The standard deviation is measured as the value deviating from the "normal" class. As a result, the usefulness of the proposed model is based on the key points considered.

The findings of the statistical analysis are reported in Table 2 for both the train dataset and the test dataset.

Every step of the Mask-RCNNs has been shown to perform better on the training dataset than on the testing dataset, according to the statistical investigation. Since the metrics acquired for the testing datasets are not vastly different from those obtained for the training datasets, we can conclude that the models did not suffer from overfitting. The suggested model had the highest value of precision on the training dataset but the lowest value of precision on the testing dataset. The suggested model got the best overall

Table 2: Measure of the effectiveness of the proposed system in terms of standard deviation for training and testing data.
Task T1: training, T2: training, T3: training, T4: training, T5: training, T6: training,
testing testing testing testing testing testing
T1 1.63, 1.14 2.54, 1.34 1.54, 1.13 1.58, 1.16 1.79, 1.36 1.83, 1.26
T2 1.72, 1.68 2.83, 2.41 1.65, 1.21 1.58, 1.45 1.63, 1.31 1.54, 1.21
T3 1.72, 1.41 2.54, 2.13 1.83, 1.01 1.84, 1.32 1.78, 1.42 1.56, 1.32
T4 1.52, 1.23 2.47, 2.24 1.92, 1.43 1.77, 1.29 1.65, 1.23 1.83, 1.38
T5 1.63, 1.21 2.65, 2.74 1.83, 1.21 1.56, 1.32 1.47, 1.27 1.45, 1.03
T6 1.45, 1.01 2.78, 2.81 1.45, 1.04 1.55, 1.17 1.66, 1.21 1.59, 1.15
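As a concrete illustration, the movement and discomfort rules formalised in the methods (the Euclidean key-point distance D, the 25-pixel distance threshold d_p, the per-frame movement flag Mov_body, and the 10-frame time threshold T_t) can be sketched as follows. This is a hypothetical re-implementation, not the authors' code; the function names and the frame representation (a list of (x, y) key points per image) are assumptions.

```python
import math

# Thresholds stated in the text.
DP_PIXELS = 25   # distance threshold d_p
TT_FRAMES = 10   # time threshold T_t: 10 or more consecutive images

def key_point_distance(p_curr, p_prev):
    """Euclidean distance D of one key point between images j and j-1."""
    return math.hypot(p_curr[0] - p_prev[0], p_curr[1] - p_prev[1])

def moved(p_curr, p_prev, dp=DP_PIXELS):
    """Threshold T: 1 if the key point moved at least dp pixels, else 0."""
    return 1 if key_point_distance(p_curr, p_prev) >= dp else 0

def body_movement(prev_frame, curr_frame, dp=DP_PIXELS):
    """Mov_body (Eq. 4): 1 if any of the n key points moved, else 0."""
    votes = [moved(c, p, dp) for c, p in zip(curr_frame, prev_frame)]
    return 1 if sum(votes) >= 1 else 0

def patient_condition(frames, dp=DP_PIXELS, tt=TT_FRAMES):
    """Cond_patient (Eq. 5): 'Discomfort' when movement persists for tt
    or more consecutive frames, otherwise 'Normal'."""
    consecutive = 0
    for prev, curr in zip(frames, frames[1:]):
        if body_movement(prev, curr, dp):
            consecutive += 1
            if consecutive >= tt:
                return "Discomfort"
        else:
            consecutive = 0
    return "Normal"
```

Under this sketch, a patient whose key points stay within 25 pixels between frames is reported as Normal, while a key point that keeps jumping for 10 consecutive frames flips the state to Discomfort.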

Table 3: Evaluation of task completion time using the proposed method.


Task T1: training, T2: training, T3: training, T4: training, T5: training, T6: training,
testing testing testing testing testing testing
T1 10.16, 9.16 8.46, 9.66 8.14, 7.15 8.99, 8.34 8.93, 8.26 8.34, 7.29
T2 8.42, 7.65 7.48, 8.58 9.46, 8.62 9.85, 8.11 9.67, 8.39 8.67, 7.38
T3 9.76, 8.45 8.96, 7.75 8.71, 9.41 8.78, 7.46 8.59, 7.45 7.92, 6.74
T4 7.85, 7.28 8.84, 7.69 8.86, 8.26 9.34, 8.24 8.92, 7.49 8.28, 7.83
T5 7.68, 8.54 8.61, 9.52 8.63, 7.59 8.59, 7.34 7.65, 6.45 7.92, 6.59
T6 8.94, 7.45 7.34, 6.54 7.45, 6.34 7.48, 6.22 8.45, 7.93 8.91, 7.48
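The data-augmentation idea discussed above (expanding the training set with randomly transformed copies of each image) can be sketched with NumPy. The flip/rotation transforms and the number of copies are illustrative choices, not taken from the paper; the scaling mentioned in the text is omitted here for brevity.

```python
import numpy as np

def augment(image, rng):
    """Return one randomly transformed copy of a grayscale image (H, W):
    a random horizontal/vertical flip plus a random 90-degree rotation."""
    if rng.random() < 0.5:
        image = np.fliplr(image)
    if rng.random() < 0.5:
        image = np.flipud(image)
    return np.rot90(image, int(rng.integers(0, 4)))

def augment_dataset(images, copies, rng):
    """Expand the dataset with `copies` augmented variants per image."""
    out = list(images)
    for img in images:
        out.extend(augment(img, rng) for _ in range(copies))
    return out

# Example: one 4x4 image expanded to 4 images in total.
rng = np.random.default_rng(0)
dataset = augment_dataset([np.arange(16).reshape(4, 4)], copies=3, rng=rng)
```

Because flips and 90-degree rotations only permute pixels, every augmented copy keeps the original intensity distribution, which is one reason they are popular label-preserving transforms.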


Table 4: Average performance metrics using the proposed method.


Average performance Accuracy (%) Recall (%) Precision (%) FNR (%) TPR (%) FPR (%) TNR (%)
T1 94 90 83 6 95 8 91
T2 93 91 89 5 91 7 89
T3 95 89 87 4 92 9 93
T4 94 92 83 7 93 6 90
T5 93 88 84 6 94 7 93
T6 92 89 83 6 95 6 93

Abbreviations: FNR, false-negative rate; FPR, false-positive rate; TNR, true-negative rate; TPR, true-positive rate.
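The measures reported in Table 4, together with the Cohen's kappa values of Table 1, can all be derived from binary confusion-matrix counts. The following is a minimal sketch, not the authors' evaluation code; treating "abnormal" as the positive class is an assumption.

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard binary-classification measures from confusion-matrix counts."""
    total = tp + fp + tn + fn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                 # identical to TPR
    f1 = 2 * precision * recall / (precision + recall)
    fnr = fn / (fn + tp)                    # FNR = 1 - TPR
    tnr = tn / (tn + fp)
    fpr = fp / (fp + tn)                    # FPR = 1 - TNR
    # Cohen's kappa: observed agreement corrected for chance agreement.
    p_o = accuracy
    p_e = ((tp + fp) / total) * ((tp + fn) / total) \
        + ((fn + tn) / total) * ((fp + tn) / total)
    kappa = (p_o - p_e) / (1 - p_e)
    return {"accuracy": accuracy, "precision": precision, "recall": recall,
            "f1": f1, "tpr": recall, "fnr": fnr, "tnr": tnr, "fpr": fpr,
            "kappa": kappa}
```

Note the built-in consistency checks: TPR + FNR = 1 and FPR + TNR = 1, which also hold for the columns of Table 4 up to rounding.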

performance and the highest value for the measure of recall on the training dataset.

Both qualitative and quantitative data were gathered using the strategy for assessing the system described in the methodology section. The information generated by the system has been used to assess its usefulness, efficiency, and user friendliness.

The time it took to complete the network run was used to determine how efficient the suggested method was. Table 3 summarises the results of the network runtime analysis for both the train dataset and the test dataset.

Table 4 illustrates the average values of various performance metrics, namely accuracy, recall, precision, FNR, TPR, FPR, and TNR, obtained using the proposed method. After the conclusion of the tests, the results were gathered for further consideration, and a decision is made about whether the patient needs assistance or whether the condition is normal.

CONCLUSION

Most patients who are confined to hospital beds need special care from hospital staff. Even though there are many different types of systems designed to help these patients, most of them concentrate on certain duties, such as calling for emergencies or monitoring the patient's health and activities. This work presents a hospital smart-bed control system built with the help of computer vision, smart sensors, and deep learning technologies. The results of the assessment demonstrated that the system is successful and efficient. Deep learning has been implemented in the system with the purpose of increasing accessibility. Therefore, the smart-bed system that has been designed, combined with the results of its assessment and the needs that have been specified, presents a viable answer for the crisis now facing the healthcare sector.

The implementation of smart hospital technology has the potential to enhance patient care through various means, including the improvement of communication and cooperation among healthcare teams. Instant messaging and smartphone apps provide a rapid means of communication for medical professionals such as doctors, nurses, and chemists, simplifying care coordination and the exchange of crucial information.

The implementation of smart hospital technology also enables precise patient monitoring through intelligent monitoring systems. Intelligent monitoring devices and sensors are capable of quantifying vital physiological parameters such as blood pressure, heart rate, and oxygen saturation levels. The utilisation of deep learning algorithms facilitates the analysis of data to detect anomalies or emergencies in a timely manner, enabling the healthcare team to promptly intervene and administer suitable medical interventions.

Smart hospital technology employs data derived from electronic medical records, monitoring devices, and other medical sources to conduct data analysis and generate predictive insights. The utilisation of deep learning and artificial intelligence enables the identification of patterns, trends, and forecasts within the given dataset. These insights have the potential to enhance care planning, facilitate evidence-based decision-making, and optimise treatment outcomes.

The findings of this study shed further light on the importance of using a GUI for smart beds. The proposed framework demonstrated how computer vision works without the need for any additional hardware sensors for efficient patient care. This framework shows in greater detail how approaches like deep learning may greatly improve the performance of a proposed system by making use of computer vision and several sensors as its components.

FUTURE TRENDS FOR SMART HOSPITAL BEDS

Smart hospital beds will improve patient care and efficiency. Smart hospital bed development may include:
1. Mobile apps and smart gadgets may help patients, physicians, and nurses communicate. Virtual and augmented reality may also improve patient communication by providing visual information and instructions.
2. Intelligent data analysis: smart hospital beds will need intelligent data analysis. Medical gadgets, computerised medical records, and sophisticated monitoring systems generate massive volumes of data. This analysis improves patient care and allows evidence-based decision-making.
3. Robotics and AI: smart hospital beds may use robotics and AI to simplify operations and help patients. Robots can transport medications and provide basic care, while AI systems can help physicians make data-driven judgements.


FUNDING

The authors extend their appreciation to the King Salman Center for Disability Research for funding this work through Research Group no. KSRG-2022-042.

AUTHOR CONTRIBUTIONS

M.M. and S.A. conceptualised this study, M.B.A. did the methodology, S.Al.O. was responsible for the preparation of software, N.Al.T. and S.A. carried out the validation, M.M. conducted formal analysis, S.A. and F.H. conducted investigation, M.B.A. was responsible for resources management, S.Al.O. was responsible for data curation, N.Al.T. drafted the original manuscript, and S.A. wrote, reviewed, and edited the manuscript. All authors have read and agreed to the published version of the manuscript.

CONFLICTS OF INTEREST

The authors declare no conflicts of interest in association with the present study.

ACKNOWLEDGEMENTS

The authors extend their appreciation to the King Salman Center for Disability Research for funding this work through Research Group no. KSRG-2022-042.

DATA AVAILABILITY STATEMENT

The data used to support the findings of this study are included within the article.
