Predicting Length of Stay, Functional Outcome, and Aftercare in the Rehabilitation of Stroke Patients
ABSTRACT:
We categorized data values and derived new fields from existing data for the
following features: ejection fraction, diastolic blood pressure, systolic blood
pressure, smoking, triglyceride, low-density lipoprotein, high-density lipoprotein,
hemoglobin, serum cholesterol, and fasting blood sugar. These features were
converted to categorical attributes for analysis and to obtain better results.
DISADVANTAGES:
ADVANTAGES:
They proposed using the HR model to predict the mean LOS of stroke patients.
The largest advantage of a registry is the ability to prospectively add patients,
while allowing investigators to go back and collect information retrospectively if
needed.
SYSTEM CONFIGURATION:
HARDWARE CONFIGURATION:
SOFTWARE CONFIGURATION:
ABSTRACT:
Cerebral ischaemia is a major cause of disability and death globally and has a
profoundly negative impact on the individuals it affects, those that care for them
and society as a whole. The most common and familiar manifestation is stroke,
85% of which are ischaemic and which is the second leading cause of death and
most common cause of complex chronic disability worldwide. Stroke survivors
often suffer from long-term neurological disabilities significantly reducing their
ability to integrate effectively in society with all the financial and social
consequences that this implies. These difficulties cascade to their next of kin who
often become caregivers and are thus indirectly burdened. A more insidious
consequence of cerebral ischaemia is progressive cognitive impairment causing
dementia, which, although less abrupt, is also associated with significant long-term
disability. Globally, cerebrovascular diseases are responsible for 5.4 million deaths
every year. Approximately 3% of total healthcare expenditure is attributable to
cerebral ischaemia, with cerebrovascular diseases costing EU healthcare systems 21
billion euros in 2003. The cost to the wider economy (including informal care and
lost productivity) is even greater, with stroke costing the UK 7-8 billion pounds in
2005 and the US $62.7 billion in 2007. Cerebrovascular disease cost the EU 34
billion euros in 2003. From 2005 to 2050, the anticipated cost of stroke to the US
economy is estimated at $2.2 trillion. Given the global scale of the problem and the
enormous associated costs, it is clear that there is an urgent need for advances in the
prevention of cerebral ischaemia and its consequences. Such developments would
result in profound benefits for both individuals and their wider societies and
address one of the world's most pre-eminent public health issues.
YEAR OF PUBLICATION: 2008
ABSTRACT:
Stroke most commonly results from occlusion of a major artery in the brain
and typically leads to the death of all cells within the affected tissue. Mitochondria
are centrally involved in the development of this tissue injury due to modifications
of their major role in supplying ATP and to changes in their properties that can
contribute to the development of apoptotic and necrotic cell death. In animal
models of stroke, the limited availability of glucose and oxygen directly impairs
oxidative metabolism in severely ischemic regions of the affected tissue and leads
to rapid changes in ATP and other energy-related metabolites. In the less-severely
ischemic "penumbral" tissue, more moderate alterations develop in these
metabolites, associated with near normal glucose use but impaired oxidative
metabolism. This tissue remains potentially salvageable for at least the first few
hours following stroke onset. Early restoration of blood flow can result in
substantial recovery of energy-related metabolites throughout the affected tissue.
However, glucose oxidation is markedly decreased due both to lower energy
requirements in the post-ischemic tissue and limitations on the mitochondrial
oxidation of pyruvate. A secondary deterioration of mitochondrial function
subsequently develops that may contribute to progression to cell loss.
Mitochondrial release of multiple apoptogenic proteins has been identified in
ischemic and post-ischemic brain, mostly in neurons. Pharmacological
interventions and genetic modifications in rodent models strongly implicate
caspase-dependent and caspase-independent apoptosis and the mitochondrial
permeability transition as important contributors to tissue damage, particularly
when induced by short periods of temporary focal ischemia.
YEAR OF PUBLICATION: 2012
AUTHOR NAME: Sims NR, Muyderman H.
ABSTRACT:
This report presents preliminary U.S. data on deaths, death rates, life
expectancy, leading causes of death, and infant mortality for 2008 by selected
characteristics such as age, sex, race, and Hispanic origin. Methods: Data in this
report are based on death records comprising more than 99 percent of the
demographic and medical files for all deaths in the United States in 2008. The
records are weighted to independent control counts for 2008. For certain causes of
death such as unintentional injuries, homicides, suicides, drug-induced deaths, and
sudden infant death syndrome, preliminary and final data may differ because of the
truncated nature of the preliminary file. Comparisons are made with 2007 final
data. Results: The age-adjusted death rate decreased from 760.2 deaths per 100,000
population in 2007 to 758.7 deaths per 100,000 population in 2008. From 2007 to
2008, age-adjusted death rates decreased significantly for 6 of the 15 leading
causes of death: Diseases of heart, Malignant neoplasms, Cerebrovascular diseases,
Accidents (unintentional injuries), Diabetes mellitus, and Assault (homicide). From
2007 to 2008, age-adjusted death rates increased significantly for 6 of the 15
leading causes of death: Chronic lower respiratory diseases; Alzheimer's disease;
Influenza and pneumonia; Nephritis, nephrotic syndrome and nephrosis;
Intentional self-harm (suicide); and Essential hypertension and hypertensive renal
disease.
YEAR OF PUBLICATION: 2008
ABSTRACT:
Research in recent years has revealed factors that are important predictors of
physical and functional rehabilitation: demographic variables, visual and
perceptual impairments, and psychological and cognitive factors. However,
uncertainty remains about prediction of outcome, and there is a need to clinically apply
research findings. This study was designed to identify the relative importance of
medical, functional, demographic, and cognitive factors in predicting length of stay
in rehabilitation, functional outcome, and recommendations for post-discharge
continuation of services.
YEAR OF PUBLICATION: 2002
ABSTRACT:
YEAR OF PUBLICATION: 2013
ABSTRACT:
A prospective study of 419 patients aged 70 and over admitted to acute medical
wards was carried out by medical staff from a geriatric unit. Data, including
presenting problem, housing, social support, mental state, continence, and degree
of independence before and after admission, were recorded. Of the 419 patients,
143 remained in hospital after 14 days and 65 after 28 days. The major factors
associated with prolonged stay in hospital included advanced age, stroke,
confusion and falls as reasons for admission to hospital, incontinence, and loss of
independence for everyday activities. Social circumstances did not predict length
of stay. Although these factors are interrelated, the most important influence on
length of stay was the medical reason for admission. Early contact with the
geriatric medical unit in these patients may speed up the recovery or result in more
appropriate placement.
YEAR OF PUBLICATION: 2000
ABSTRACT:
YEAR OF PUBLICATION: 1999
ABSTRACT:
As a new concept that emerged in the mid-1990s, data mining can help
researchers gain both novel and deep insights and can facilitate unprecedented
understanding of large biomedical datasets. Data mining can uncover new
biomedical and healthcare knowledge for clinical and administrative decision
making as well as generate scientific hypotheses from large experimental data,
clinical databases, and/or biomedical literature. This review first introduces data
mining in general (e.g., the background, definition, and process of data mining),
discusses the major differences between statistics and data mining and then speaks
to the uniqueness of data mining in the biomedical and healthcare fields. A brief
summarization of various data mining algorithms used for classification,
clustering, and association as well as their respective advantages and drawbacks is
also presented. Suggested guidelines on how to use data mining algorithms in each
area of classification, clustering, and association are offered along with three
examples of how data mining has been used in the healthcare industry. Given the
successful application of data mining by health related organizations that has
helped to predict health insurance fraud and under-diagnosed patients, and identify
and classify at-risk people in terms of health with the goal of reducing healthcare
cost, we introduce how data mining technologies (in each area of classification,
clustering, and association) have been used for a multitude of purposes, including
research in the biomedical and healthcare fields. A discussion of the technologies
available to enable the prediction of healthcare costs (including length of hospital
stay), disease diagnosis and prognosis, and the discovery of hidden biomedical and
healthcare patterns from related databases is offered along with a discussion of the
use of data mining to discover such relationships as those between health
conditions and a disease, relationships among diseases, and relationships among
drugs. The article concludes with a discussion of the problems that hamper the
clinical use of data mining by health professionals.
YEAR OF PUBLICATION: 2011
ABSTRACT: This article addresses the special features of data mining with
medical data. Researchers in other fields may not be aware of the particular
constraints and difficulties of the privacy-sensitive, heterogeneous, but voluminous
data of medicine. Ethical and legal aspects of medical data mining are discussed,
including data ownership, fear of lawsuits, expected benefits, and special
administrative issues. The mathematical understanding of estimation and
hypothesis formation in medical data may be fundamentally different from that in
other data collection activities. Medicine is primarily directed at patient-care
activity, and only secondarily as a research resource; almost the only justification
for collecting medical data is to benefit the individual patient. Finally, medical data
have a special status based upon their applicability to all people; their urgency
(including life-or-death decisions); and a moral obligation that they be used for
beneficial purposes.
YEAR OF PUBLICATION: 2002
[Diagram: the PATIENT and HOSPITAL actors interact through REGISTER, ACCEPT, REQUEST, and BED ALLOCATION.]
ACTIVITY DIAGRAM:
[Diagram: activity flow between PATIENT and HOSPITAL covering REGISTER, ACCEPT, REQUEST, and BED ALLOCATION.]
USE CASE DIAGRAM:
[Diagram: PATIENT and HOSPITAL use cases covering REGISTER, LOGIN, REQUEST, ACCEPT, VIEW HISTORY, and BED ALLOCATION.]
ER DIAGRAM:
[Diagram: flow from START through REGISTER, LOGIN, REQUEST, ACCEPT, VIEW HISTORY, and BED ALLOCATION to STOP.]
MODULES:
PATIENT MODULE
HOSPITAL MODULE
PATIENT MODULE:
LOGIN
REGISTER
REQUEST
LOGOUT
LOGIN:
The login module is the first and most common module in any application. In the
proposed system, only registered users are allowed to log in; unauthorized users
cannot access the system. A registered user who enters a correct username and
password is taken to the next page; otherwise, login is denied.
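A minimal sketch of how this credential check might look in Java with JDBC is
given below; the database URL, the account used, and the users table are
illustrative assumptions, not details taken from the original design.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class LoginCheck {
        // Returns true only when the username/password pair exists in the users table.
        public static boolean isValidUser(String username, String password)
                throws SQLException {
            String url = "jdbc:mysql://localhost:3306/hospitaldb"; // hypothetical database
            try (Connection con = DriverManager.getConnection(url, "dbuser", "dbpass");
                 PreparedStatement ps = con.prepareStatement(
                         "SELECT 1 FROM users WHERE username = ? AND password = ?")) {
                ps.setString(1, username);
                ps.setString(2, password);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next(); // a matching row means the credentials are valid
                }
            }
        }
    }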
REGISTER:
This module handles user registration; all new users must register. Each user is
given a unique username and password. To access the account, the user must
supply a valid username and password, so authentication and security are provided
for every account.
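Continuing the hypothetical sketch above (same assumed users table), registration
can be a single parameterized INSERT:

    // Adds a new user; a unique-key constraint on username would reject duplicates.
    public static void registerUser(Connection con, String username, String password)
            throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(
                "INSERT INTO users (username, password) VALUES (?, ?)")) {
            ps.setString(1, username);
            ps.setString(2, password);
            ps.executeUpdate();
        }
    }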
REQUEST:
In this module a registered patient sends an admission request to the hospital; the
hospital can then accept the request and allocate a bed.
LOGOUT:
Logging out means ending access to a computer system or website. Logging out
informs the computer or website that the current user wishes to end the login
session. Log out is also known as log off, sign off, or sign out.
HOSPITAL MODULE:
LOGIN
ACCEPT THE REQUEST
VIEW PATIENT HISTORY
BED ALLOCATION FOR THE PATIENT
LOGOUT
LOGIN:
The hospital login works the same way as the patient login described above: only
registered hospital users with a valid username and password are able to log in.
ACCEPT THE REQUEST:
In this module the hospital reviews a patient's admission request and accepts it.
(At the HTTP level, the Accept request-header field can be used to specify which
media types are acceptable for the response.)
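A hedged sketch of the accept-and-allocate step, assuming a hypothetical requests
table with request_id, status, and bed_no columns (the schema is illustrative):

    // Marks a pending request as accepted and records the allocated bed number.
    public static void acceptRequest(Connection con, int requestId, int bedNo)
            throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(
                "UPDATE requests SET status = 'ACCEPTED', bed_no = ? WHERE request_id = ?")) {
            ps.setInt(1, bedNo);
            ps.setInt(2, requestId);
            ps.executeUpdate();
        }
    }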
LOGOUT:
The hospital user ends the session in the same way as described for the patient module.
Java Technology
Java technology is both a programming language and a platform.
The Java Programming Language
The Java programming language is a high-level language that can be
characterized by all of the following buzzwords:
Simple
Architecture neutral
Object oriented
Portable
Distributed
High performance
Interpreted
Multithreaded
Robust
Dynamic
Secure
Native code is code that, once compiled, runs only on a specific hardware
platform. As a platform-independent environment, the Java platform can be a bit
slower than native code. However, smart compilers, well-tuned interpreters, and
just-in-time bytecode compilers can bring performance close to that of native code
without threatening portability.
How does the API support all these kinds of programs? It does so with packages of
software components that provide a wide range of functionality. Every full
implementation of the Java platform gives you the following features:
The essentials: Objects, strings, threads, numbers, input and output, data
structures, system properties, date and time, and so on.
Applets: The set of conventions used by applets.
Networking: URLs, TCP (Transmission Control Protocol), UDP (User Datagram
Protocol) sockets, and IP (Internet Protocol) addresses.
Internationalization: Help for writing programs that can be localized for
users worldwide. Programs can automatically adapt to specific locales and
be displayed in the appropriate language.
Security: Both low level and high level, including electronic signatures,
public and private key management, access control, and certificates.
Software components: Known as JavaBeans™, these can plug into existing
component architectures.
Object serialization: Allows lightweight persistence and communication
via Remote Method Invocation (RMI).
Java Database Connectivity (JDBC™): Provides uniform access to a wide
range of relational databases.
The Java platform also has APIs for 2D and 3D graphics, accessibility, servers,
collaboration, telephony, speech, animation, and more. The following figure
depicts what is included in the Java 2 SDK.
How Will Java Technology Change My Life?
We can’t promise you fame, fortune, or even a job if you learn the Java
programming language. Still, it is likely to make your programs better, and it
requires less effort than other languages. We believe that Java technology will help
you do the following:
Get started quickly: Although the Java programming language is a
powerful object-oriented language, it’s easy to learn, especially for
programmers already familiar with C or C++.
Write less code: Comparisons of program metrics (class counts, method
counts, and so on) suggest that a program written in the Java programming
language can be four times smaller than the same program in C++.
Write better code: The Java programming language encourages good
coding practices, and its garbage collection helps you avoid memory leaks.
Its object orientation, its JavaBeans component architecture, and its wide-
ranging, easily extendible API let you reuse other people’s tested code and
introduce fewer bugs.
Develop programs more quickly: Development in the Java programming
language can be up to twice as fast as writing the same program in C++. Why?
You write fewer lines of code, and Java is a simpler programming language than C++.
Avoid platform dependencies with 100% Pure Java: You can keep your
program portable by avoiding the use of libraries written in other languages.
The 100% Pure Java™ Product Certification Program has a repository of
historical process manuals, white papers, brochures, and similar materials
online.
Write once, run anywhere: Because 100% Pure Java programs are
compiled into machine-independent byte codes, they run consistently on any
Java platform.
Distribute software more easily: You can upgrade applets easily from a
central server. Applets take advantage of the feature of allowing new classes
to be loaded “on the fly,” without recompiling the entire program.
ODBC
Microsoft Open Database Connectivity (ODBC) is a standard programming
interface for application developers and database systems providers. Before ODBC
became a de facto standard for Windows programs to interface with database
systems, programmers had to use proprietary languages for each database they
wanted to connect to. Now, ODBC has made the choice of the database system
almost irrelevant from a coding perspective, which is as it should be. Application
developers have much more important things to worry about than the syntax that is
needed to port their program from one database to another when business needs
suddenly change.
Through the ODBC Administrator in Control Panel, you can specify the particular
database that is associated with a data source that an ODBC application program is
written to use. Think of an ODBC data source as a door with a name on it. Each
door will lead you to a particular database. For example, the data source named
Sales Figures might be a SQL Server database, whereas the Accounts Payable data
source could refer to an Access database. The physical database referred to by a
data source can reside anywhere on the LAN.
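For Java code in this project, a data source configured this way could be opened by
name through the JDBC-ODBC bridge that shipped with the JDK up to Java 7 (it
was removed in Java 8); a minimal sketch, with an illustrative data source name:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class OdbcDoorExample {
        public static void main(String[] args) throws Exception {
            // Load the JDBC-ODBC bridge driver (available in JDK 7 and earlier only).
            Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
            // "SalesFigures" stands for a data source name configured in the
            // ODBC Administrator; the name here is illustrative.
            try (Connection con = DriverManager.getConnection("jdbc:odbc:SalesFigures")) {
                System.out.println("Opened data source: " + con.getMetaData().getURL());
            }
        }
    }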
The ODBC system files are not installed on your system by Windows 95. Rather,
they are installed when you set up a separate database application, such as SQL
Server Client or Visual Basic 4.0. When the ODBC icon is installed in Control
Panel, it uses a file called ODBCINST.DLL. It is also possible to administer your
ODBC data sources through a stand-alone program called ODBCADM.EXE.
There is a 16-bit and a 32-bit version of this program, and each maintains a separate
list of ODBC data sources.
From a programming perspective, the beauty of ODBC is that the application can
be written to use the same set of function calls to interface with any data source,
regardless of the database vendor. The source code of the application doesn’t
change whether it talks to Oracle or SQL Server. We only mention these two as an
example. There are ODBC drivers available for several dozen popular database
systems. Even Excel spreadsheets and plain text files can be turned into data
sources. The operating system uses the Registry information written by ODBC
Administrator to determine which low-level ODBC drivers are needed to talk to
the data source (such as the interface to Oracle or SQL Server). The loading of the
ODBC drivers is transparent to the ODBC application program. In a client/server
environment, the ODBC API even handles many of the network issues for the
application programmer.
The advantages of this scheme are so numerous that you are probably thinking
there must be some catch. The only disadvantage of ODBC is that it isn’t as
efficient as talking directly to the native database interface. ODBC has had many
detractors make the charge that it is too slow. Microsoft has always claimed that
the critical factor in performance is the quality of the driver software that is used.
In our humble opinion, this is true. The availability of good ODBC drivers has
improved a great deal recently. And anyway, the criticism about performance is
somewhat analogous to those who said that compilers would never match the
speed of pure assembly language. Maybe not, but the compiler (or ODBC) gives
you the opportunity to write cleaner programs, which means you finish sooner.
Meanwhile, computers get faster every year.
JDBC
In an effort to set an independent database standard API for Java, Sun
Microsystems developed Java Database Connectivity, or JDBC. JDBC offers a
generic SQL database access mechanism that provides a consistent interface to a
variety of RDBMS. This consistent interface is achieved through the use of “plug-
in” database connectivity modules, or drivers. If a database vendor wishes to have
JDBC support, he or she must provide the driver for each platform that the
database and Java run on.
To gain a wider acceptance of JDBC, Sun based JDBC’s framework on ODBC. As
you discovered earlier in this chapter, ODBC has widespread support on a variety
of platforms. Basing JDBC on ODBC will allow vendors to bring JDBC drivers to
market much faster than developing a completely new connectivity solution.
JDBC was announced in March of 1996. It was released for a 90 day public review
that ended June 8, 1996. Because of user input, the final JDBC v1.0 specification
was released soon after.
The remainder of this section will cover enough information about JDBC for you
to know what it is about and how to use it effectively. This is by no means a
complete overview of JDBC. That would fill an entire book.
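To make the pattern concrete, a minimal sketch of typical JDBC usage follows; the
URL, credentials, and table are assumptions for illustration, and any vendor's
driver could be substituted without changing the query code:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class JdbcExample {
        public static void main(String[] args) throws Exception {
            // Illustrative URL and credentials; any vendor's JDBC driver can be
            // plugged in without changing the code below.
            String url = "jdbc:mysql://localhost:3306/hospitaldb";
            try (Connection con = DriverManager.getConnection(url, "dbuser", "dbpass");
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("SELECT username FROM users")) {
                while (rs.next()) {
                    System.out.println(rs.getString("username")); // print each username
                }
            }
        }
    }

Because the SQL and the java.sql calls stay the same, switching databases is
largely a matter of changing the URL and placing the right driver on the classpath.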
JDBC Goals
Few software packages are designed without goals in mind, and JDBC is no
exception: its goals drove the development of the API. These goals, in
conjunction with early reviewer feedback, have finalized the JDBC class library
into a solid framework for building database applications in Java.
The goals that were set for JDBC are important. They will give you some insight
as to why certain classes and functionalities behave the way they do. The eight
design goals for JDBC are as follows:
2. SQL Conformance
SQL syntax varies as you move from database vendor to database vendor. In an
effort to support a wide variety of vendors, JDBC will allow any query statement
to be passed through it to the underlying database driver. This allows the
connectivity module to handle non-standard functionality in a manner that is
suitable for its users.
3. JDBC must be implementable on top of common database interfaces
The JDBC SQL API must “sit” on top of other common SQL level APIs.
This goal allows JDBC to use existing ODBC level drivers by the use of a
software interface. This interface would translate JDBC calls to ODBC and
vice versa.
4. Provide a Java interface that is consistent with the rest of the Java
system
Because of Java’s acceptance in the user community thus far, the designers feel
that they should not stray from the current design of the core Java system.
5. Keep it simple
This goal probably appears in all software design goal listings. JDBC is no
exception. Sun felt that the design of JDBC should be very simple, allowing for
only one method of completing a task per mechanism. Allowing duplicate
functionality only serves to confuse the users of the API.
6. Use strong, static typing wherever possible
Strong typing allows for more error checking to be done at compile time; also,
fewer errors appear at runtime.
[Figure: My Program passes through the Compiler to produce byte codes that run on the Java VM.]
You can think of Java byte codes as the machine code instructions for the Java
Virtual Machine (Java VM). Every Java interpreter, whether it’s a Java development
tool or a web browser that can run Java applets, is an implementation of the Java
VM. The Java VM can also be implemented in hardware.
Java byte codes help make “write once, run anywhere” possible. You can compile
your Java program into byte codes on any platform that has a Java compiler. The
byte codes can then be run on any implementation of the Java VM. For example, the
same Java program can run on Windows NT, Solaris, and Macintosh.
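A minimal, concrete example: the class below is compiled once with
javac HelloWorld.java, and the resulting HelloWorld.class runs unchanged on any
platform that provides a Java VM (java HelloWorld):

    public class HelloWorld {
        public static void main(String[] args) {
            // The byte codes are identical on every platform; only the VM differs.
            System.out.println("Hello from the Java VM on "
                    + System.getProperty("os.name"));
        }
    }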
SYSTEM TESTING:
The purpose of testing is to discover errors. Testing is the process of trying to discover
every conceivable fault or weakness in a work product. It provides a way to check the
functionality of components, subassemblies, assemblies, and/or a finished product. It is
the process of exercising software with the intent of ensuring that the software system
meets its requirements and user expectations and does not fail in an unacceptable
manner. There are various types of test, and each test type addresses a specific testing
requirement.
TYPES OF TESTS
Unit testing
Unit testing involves the design of test cases that validate that the internal program logic is
functioning properly and that program inputs produce valid outputs. All decision branches and
internal code flow should be validated. It is the testing of individual software units of the
application, and it is done after the completion of an individual unit and before integration. This
is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests
perform basic tests at component level and test a specific business process, application, and/or
system configuration. Unit tests ensure that each unique path of a business process performs
accurately to the documented specifications and contains clearly defined inputs and expected results.
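As an illustration, a minimal JUnit 4 unit test is sketched below; the categorize
helper is hypothetical, standing in for one small business rule of the system
(bucketing a length of stay into categories):

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class LosCategoryTest {
        // Hypothetical helper that buckets a length of stay (in days)
        // into SHORT, MEDIUM, or LONG.
        static String categorize(int days) {
            if (days <= 7) return "SHORT";
            if (days <= 28) return "MEDIUM";
            return "LONG";
        }

        // One unit test exercising each boundary of the business rule.
        @Test
        public void bucketsBoundaryValues() {
            assertEquals("SHORT", categorize(7));
            assertEquals("MEDIUM", categorize(8));
            assertEquals("MEDIUM", categorize(28));
            assertEquals("LONG", categorize(29));
        }
    }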
Integration testing
Integration tests are designed to test integrated software components to determine if they
actually run as one program. Testing is event driven and is more concerned with the basic
outcome of screens or fields. Integration tests demonstrate that although the components were
individually satisfactory, as shown by successful unit testing, the combination of components is
correct and consistent. Integration testing is specifically aimed at exposing the problems that
arise from the combination of components.
Functional test
Functional tests provide a systematic demonstration that functions tested are available as
specified by the business and technical requirements, system documentation, and user manuals.
Business process flows, data fields, predefined processes, and successive processes must
be considered for testing. Before functional testing is complete, additional tests are identified and
the effective value of current tests is determined.
System Test
System testing ensures that the entire integrated software system meets requirements. It
tests a configuration to ensure known and predictable results. An example of system testing is the
configuration-oriented system integration test. System testing is based on process descriptions
and flows, emphasizing pre-driven process links and integration points.
Unit Testing:
Unit testing is usually conducted as part of a combined code and unit test phase of the
software lifecycle, although it is not uncommon for coding and unit testing to be conducted as
two distinct phases.
Test objectives
Features to be tested
Integration Testing
The task of the integration test is to check that components or software applications
(e.g., components in a software system or, one step up, software applications at the
company level) interact without error.
Test Results: All the test cases mentioned above passed successfully. No defects encountered.
Acceptance Testing
User Acceptance Testing is a critical phase of any project and requires significant
participation by the end user. It also ensures that the system meets the functional requirements.
Test Results: All the test cases mentioned above passed successfully. No defects encountered.
CONCLUSION:
This study introduces an approach for early prediction of LOS of stroke patients
arriving at a stroke unit. The approach involves a feature selection step based on
information gain and a prediction model development step using J48 decision tree
or Bayesian network. Prediction results were compared between the two models.
The performance of the Bayesian network based model (accuracy 81.28%) was
better than that of the J48 based prediction model (accuracy 77.1%). Partial
representations of the J48 based model and the Bayesian network based model are
exhibited in Fig. 3 and Fig. 4, respectively. The
study faced some limitations in terms of data availability and the quality of the
data. More than 50% of the collected records were discarded due to
incompleteness. Nevertheless, the performance of the proposed prediction model is
quite promising. In future studies, the proposed approach will be tested on larger
datasets. Another important area for future research is to extend the proposed
approach to predict other attributes such as the Stroke Level of the patients.
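A minimal sketch of the described pipeline using the Weka 3 Java API is given
below; the stroke.arff file name, the class attribute position, and the cut-off of ten
attributes are assumptions for illustration, not details taken from the study:

    import java.util.Random;
    import weka.attributeSelection.InfoGainAttributeEval;
    import weka.attributeSelection.Ranker;
    import weka.classifiers.Classifier;
    import weka.classifiers.Evaluation;
    import weka.classifiers.bayes.BayesNet;
    import weka.classifiers.trees.J48;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;
    import weka.filters.Filter;
    import weka.filters.supervised.attribute.AttributeSelection;

    public class LosPrediction {
        public static void main(String[] args) throws Exception {
            Instances data = DataSource.read("stroke.arff"); // hypothetical dataset
            data.setClassIndex(data.numAttributes() - 1);    // LOS category as the class

            // Step 1: feature selection by information gain (illustrative top-10 cut-off).
            AttributeSelection select = new AttributeSelection();
            select.setEvaluator(new InfoGainAttributeEval());
            Ranker ranker = new Ranker();
            ranker.setNumToSelect(10);
            select.setSearch(ranker);
            select.setInputFormat(data);
            Instances reduced = Filter.useFilter(data, select);

            // Step 2: compare J48 and Bayesian network with 10-fold cross-validation.
            for (Classifier c : new Classifier[] { new J48(), new BayesNet() }) {
                Evaluation eval = new Evaluation(reduced);
                eval.crossValidateModel(c, reduced, 10, new Random(1));
                System.out.println(c.getClass().getSimpleName()
                        + " accuracy: " + eval.pctCorrect() + "%");
            }
        }
    }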
REFERENCES:
[3] A. M. Mimino, Deaths: Preliminary Data for 2008. DIANE Publishing, 2011.
[7] A. Lim and P. Tongkumchum, “Methods for analyzing hospital length of stay
with application to inpatients dying in Southern Thailand,” Global Journal of
Health Science, vol. 1, no. 1, p. 27, 2009.
[8] V. Gómez and J. E. Abásolo, “Using data mining to describe long hospital
stays,” Paradigma, vol. 3, no. 1, pp. 1–10, 2009.
[9] K.-C. Chang, M.-C. Tseng, H.-H. Weng, Y.-H. Lin, C.-W. Liou, and T.-Y. Tan,
“Prediction of length of stay of first-ever ischemic stroke,” Stroke, vol. 33, no. 11,
pp. 2670–2674, 2002.
[11] I. Nouaouri, A. Samet, and H. Allaoui, “Evidential data mining for length of
stay (LOS) prediction problem,” in 2015 IEEE International Conference on
Automation Science and Engineering (CASE), 2015, pp. 1415–1420.
[13] M. Rowan, T. Ryan, F. Hegarty, and N. O’Hare, “The use of artificial neural
networks to stratify the length of stay of cardiac patients based on preoperative and
initial postoperative factors,” Artificial Intelligence in Medicine, vol. 40, no. 3, pp.
211–221, 2007.
[14] X. Jiang, X. Qu, and L. B. Davis, “Using Data Mining to Analyze Patient
Discharge Data for an Urban Hospital,” in DMIN, 2010, pp. 139–144.
[20] S. N. Ghazavi and T. W. Liao, “Medical data mining by fuzzy modeling with
selected features,” Artificial Intelligence in Medicine, vol. 43, no. 3, pp. 195–206,
2008.