Guidance on Developing
Key Performance Indicators and
Minimum Data Sets to Monitor
Healthcare Quality
September 2010
Not all parts of the relevant legislation, the Health Act 2007, have been commenced. Those parts that
apply to children's services are likely to be commenced in 2010.
Table of Contents

Executive Summary

Part 1: Overview of performance monitoring
1 Introduction
1.1 Overview of the Health Information function
1.2 Background
2 Quality
2.1 Structure/Process/Outcome
2.2 Quality improvement ... 10
2.3 Domains of quality ... 12
2.4 Conceptual frameworks ... 13
3.1 Types of indicators ... 16
3.2 Benefits ... 18
3.3 Considerations ... 20

Part 2: Development of Key Performance Indicators and Minimum Data Sets
4 Development of KPIs ... 24
4.1 Define the audience and use for measurement ... 24
4.2 Consult with stakeholders and advisory group ... 24
4.4 Achieve a balance in measurement ... 28
5.2 Define the frequency of collection ... 37
5.3 Document the data collection process ... 37
6.1 Determine frequency of processing and analysis ... 41
6.2 Define method of analysis ... 41
8 Conclusion ... 44
Reference List ... 45
Glossary of terms ... 50
Appendices
Appendix 1: HCQI Framework ... 53

Table of Figures
Figure 1: Quality Assurance Triangle(12) ... 10

Table of Tables
Table 1: Selection Criteria ... 30
Executive Summary
The primary mandate of the Health Information and Quality Authority (the Authority) is
to drive patient safety in health and social care in Ireland. A key component of this
work relates to effectively monitoring the performance of healthcare services. Key
performance indicators (KPIs) are an essential tool in this process as they enable the
public, service users and healthcare providers alike to have reliable information on
current and desired standards in healthcare services. KPIs are used to identify where
performance is good and meeting desired standards, and where performance requires
improvement.
KPIs promote accountability to service users by facilitating comparisons with other
organisations and against an organisation's stated objectives or targets. Further, they
promote accountability to central Government for the efficient use of resources relative to
other comparable organisations.
Reflecting an increased awareness of the importance of quality and safety in healthcare,
quality assessment has become increasingly critical - unless we actually measure the
quality and safety of care, we cannot determine if improvements are being made. This
is one of the key ways in which key performance indicators can have a positive impact
for patients and service users.
Performance monitoring is a continuous process that involves collecting data to
determine if a service is meeting desired standards or targets. It is dependent on good
quality information on health and social care which can only be achieved by having a
systematic process to ensure that data is collected consistently, both within, and across
organisations. One tool that is frequently used to assist in performance monitoring and
which can subsequently contribute to performance improvement in quality and safety, is
the development and monitoring of key performance indicators (KPIs).
KPIs, which are specific and measurable elements of health and social care, can be used
to assess the quality of care. They are measures of performance, based on standards
determined through evidence-based academic literature or through the consensus of
experts when evidence is unavailable.
The purpose of this document is to provide guidance for the development of KPIs and
associated minimum data sets (MDSs) to monitor healthcare quality. Minimum data sets
refer to the minimum amount of information required for the purpose of monitoring
quality and safety through KPIs.
The guidance outlined in this document is based on an analysis of evidence from an
extensive literature review. It is intended as a resource for all stakeholders, including
the public and service users, but more specifically, policy makers and frontline
practitioners, with responsibility for the development and implementation of KPIs and
associated MDSs.
Part 1 of this document provides an overview of relevant literature and outlines the
importance of performance monitoring in contributing to the safety and quality of health
and social care. It introduces key performance indicators (KPIs) and their role in
performance monitoring, including benefits and risks.
Part 2 of this document examines best practice and provides specific guidance on the
development of KPIs and minimum data sets (MDSs). It identifies important factors that
should be taken into consideration when developing and evaluating KPIs for
performance monitoring.
Part 1:
Overview of performance monitoring
1 Introduction
1.1 Overview of the Health Information function
Health is information-intensive, generating huge volumes of data every day. It is
estimated that up to 30% of the total health budget may be spent one way or another
on handling information, collecting it, looking for it, storing it. It is therefore imperative
that information is managed in the most effective way possible in order to ensure a high
quality, safe service.
Safe, reliable healthcare depends on access to, and the use of, information that is
accurate, valid, reliable, timely, relevant, legible and complete. For example, when
giving a patient a drug, a nurse needs to be sure that they are administering the
appropriate dose of the correct drug to the right patient and that the patient is not
allergic to it. Similarly, a lack of up-to-date information causes problems: if critical
diagnostic results are missing or overlooked, tests have to be repeated unnecessarily
and, at best, appropriate treatment is delayed or, at worst, not given.
In addition, health information has a key role to play in healthcare planning decisions: where to locate a new service, whether or not to introduce a new national screening
programme, and decisions on best value for money in health and social care provision.
The Health Information and Quality Authority was established under the Health Act,
2007 with the primary objective of promoting safety and quality in the provision of
health and personal social services for the benefit of the health and welfare of the
public.
Under section 8(1)(k) of the Health Act 2007, the Authority has responsibility for setting
standards for all aspects of health information and monitoring compliance with those
standards. In addition, under section 8(1)(j) the Authority is charged with evaluating
the quality of the information available on health and social care and making
recommendations in relation to improving the quality and filling in gaps where
information is needed but is not currently available.
Information and Communications Technology (ICT) has a critical role to play in ensuring
that information to drive quality and safety in health and social care settings is available
when and where it is required. For example, it can generate alerts in the event that a
patient is prescribed medication to which they are allergic. Further to this, it can support
a much faster, more reliable and safer referral system between the patient's general
practitioner (GP) and hospitals.
Although there are a number of examples of good practice, the current ICT
infrastructure in Ireland's health and social care sector is highly fragmented, with major
gaps and silos of information that prevent the safe, effective transfer of information.
This results in service users being asked to provide the same information on multiple
occasions.
Information can be lost, documentation is poor, and there is over-reliance on memory.
Equally, those responsible for planning our services experience great difficulty in
bringing together information in order to make informed decisions. Variability in practice
leads to variability in outcomes and cost of care. Furthermore, we are all being
encouraged to take more responsibility for our own health and well-being, yet it can be
very difficult to find consistent, understandable and trustworthy information on which to
base our decisions.
As a result of these deficiencies, there is a clear and pressing need to develop a
coherent and integrated approach to health information, based on standards and
international best practice. A robust health information environment will allow all
stakeholders (the general public, patients and service users, health professionals and
policy makers) to make choices or decisions based on the best available information.
This is a fundamental requirement for a high reliability healthcare system.
Through its health information function, the Authority is addressing these issues and
working to ensure that high quality health and social care information is available to
support the delivery, planning and monitoring of services. One of the areas currently
being addressed through this work programme is the need to provide guidance on the
development of key performance indicators (KPIs).
1.2 Background
Information plays a pivotal role in promoting improvements in the safety and quality of
patient care. Performance measurement promotes accountability to all stakeholders
including the public, service users, clinicians and the Government by facilitating
informed decision-making and safe, high quality and reliable care through monitoring,
analysing and communicating the degree to which healthcare organisations meet key
goals(1). Accurate performance measurement is dependent on information that is of
good quality, comparable and can be shared within the health sector.
KPIs play an important role in the performance measurement process by helping to
identify and appropriately measure levels of service performance. In and of themselves,
KPIs cannot improve quality; however, they act as flags or alerts that identify good
practice, provide comparability within and between similar services, and highlight where
there are opportunities for improvement and where a more detailed investigation of
standards is warranted. The ultimate goal of KPIs is to contribute to the provision of a high quality,
safe and effective service that meets the needs of service users.
Data used to support KPIs should be standardised, with uniform definitions, to ensure
that it is collected consistently and that it supports the measurement process, facilitating
maintain performance relative to others and the reliability of the quality and safety of
services that they provide.
The idea of monitoring healthcare quality has been in existence for many years;
however, it is only in recent years that it has received extensive attention in published
literature. In order to monitor the quality of the healthcare system it is essential to
determine what aspects need to be measured.
Performance monitoring is dependent on good quality information which can only be
achieved by having a systematic process to ensure that data is collected consistently
both within, and across, organisations. One tool that is frequently used to assist in
performance monitoring and which can subsequently contribute to performance
improvement is key performance indicators (KPIs).
KPIs are an invaluable tool that contributes immensely to the performance monitoring
process. However, for KPIs to be effective, they need to have clear definitions to ensure
that the data collected is of high quality (that is, consistent, reliable and in keeping with
shared definitions) and to enhance their validity and reliability. Valid KPIs measure what
they are intended to measure and reliable KPIs will consistently produce the same result
regardless of who performs the measurement.
Using KPIs can lead to improvements in quality and safety when they are used for
learning at organisational level, facilitating improvements in local service delivery rather
than solely being used as a tool to evaluate providers(6) at a national, system level.
Using performance indicators at a local level assists organisations in developing an insight into
safe and effective care processes.
This guidance has been developed to assist individuals and organisations to identify,
develop or select KPIs and associated minimum data sets for the purpose of monitoring
quality and safety in health and social care.
The delivery of health and social care is dependent on both clinical and administrative
staff, with a variety of information needs. This guidance is intended as a resource for all
staff and identifies important factors to be considered in order to deliver a balanced
suite of good quality KPIs.
2 Quality
Quality involves meeting and exceeding an acceptable level of performance through the
provision of a safe and effective service. It is a broad and complex concept which is
neither simple to define nor to measure, but is nonetheless central to effective, modern
healthcare services. For this reason, improving quality has become an integral
component of effective healthcare delivery and is mandatory in some countries where
there are obligations to comply with standards for healthcare.
In healthcare, concerns about quality usually revolve around the ability of organisations
to achieve desired outcomes using processes that have been demonstrated to achieve
those outcomes(7). Even though quality can be improved without measuring it, for
example through the use of clinical practice guidelines and specialist education, it is only
through measurement that we can be sure that improvements are being made.
Measurement is therefore critically important both in identifying where quality and
safety is compromised and in monitoring quality improvement processes.
2.1 Structure/Process/Outcome
One of the most significant developments in relation to performance monitoring in the
last 30 years has been Avedis Donabedian's(8) division of healthcare into structure,
process and outcome, for the purpose of defining and measuring quality. Donabedian
has contributed significantly to improvements in the quality and safety of health and
social care through his lifelong commitment to the use of performance measures.
According to Donabedian(9), healthcare quality can be assessed using a three-part model
based on the structures, processes and outcomes of the healthcare system. This division
of healthcare has allowed the identification of data across the full spectrum of
healthcare that contributes to monitoring the quality of the various constituents of
healthcare delivery.
Structure relates to the resources of the healthcare system that contribute to its
ability to meet the healthcare needs of the population. Structural indicators refer to
the resources used by an organisation to deliver healthcare and include buildings,
equipment, the availability of specialist personnel and available finances.
Process relates to what is actually done for the service user and how well it is done.
Process indicators measure the activities carried out in the assessment and
treatment of service users and are often used to measure compliance with
recommended practice, based on evidence or the consensus of experts.
Outcome relates to the state of health of the individual or population resulting from
their interaction with the healthcare system. It can include lifestyle improvements,
emotional responses to illness or its care, alterations in levels of pain, morbidity and
mortality rates, and increased level of knowledge(10).
Donabedian also stated that each part of the model is interdependent and that good
structures promote good processes and, in turn, good processes promote good
outcomes. The healthcare quality measurement process can be assisted through the use
of KPIs to capture a variety of selected factors and trends of both health and the
healthcare system(11).
2.2 Quality improvement
Improving quality is a continuous cycle involving defining quality, monitoring quality and
improving quality (Figure 1).
Figure 1: Quality Assurance Triangle(12)
In other words, a quality healthcare service provides care based on the assessed needs
of the population, using finite resources efficiently to attain optimum outcomes and
minimise the risks associated with healthcare delivery.
According to Donabedian(8) healthcare quality is the combination of the science and
technology of healthcare and the application of that science and technology in actual
practice. Providing quality healthcare involves providing care that is accepted as best
practice at the time of delivery using available technology and resources.
The most common and most widely accepted definition for quality in healthcare has
been proposed by the US Institute of Medicine(14) as:
the degree to which services for individuals and populations increase the
likelihood of desired outcomes and are consistent with current professional
knowledge.
McGlynn(15) explains that this definition recognises a scale of performance which can
theoretically range from poor to excellent, identifies that monitoring can involve both
individual and population perspectives and that efforts to improve health outcomes must
be based on scientific evidence or on the consensus of experts in the absence of
research.
The variety of definitions of quality found in the literature reaffirms the view that quality
is a complex concept and also highlights the importance of having a shared
understanding of quality prior to commencing the process of monitoring.
2.2.2 Monitoring quality
As a result of the complexity of quality, monitoring quality can pose many challenges.
Monitoring quality involves evaluating current performance, including service-user
perspectives, against a standard or expected level of performance. This consists of
defining indicators, developing information systems and the analysis and evaluation of
results(12).
It is important that we are clear on the reasons for monitoring and that we are not
monitoring merely for the sake of it. The main reason for monitoring health and social
care quality is to identify opportunities to improve performance where it has been
highlighted that performance is not at the desired standard(8). Sub-standard
performance in the delivery of health and social care compromises the safety of service
users and contributes to undesirable outcomes.
The ability to monitor and report on quality is accepted as a basis for the improvement
in the delivery of healthcare. Monitoring and reporting on quality assists healthcare
providers to improve performance through benchmarking, empowers consumers to make
Safe: the service protects the health and welfare of service users; it minimises the
risk associated with delivering care; it prevents adverse events, minimises their
impact when they occur and learns when things go wrong
Effective: care that delivers the best achievable outcomes through the evaluation
and use of available evidence
Person-centred: care that centres on the needs and rights of service users,
respects their values and preferences and actively involves them in the provision of
their care
Equitable: the service enables fair access to care which is delivered based on need.
It also addresses identified health inequalities of the population served
Efficient: the service manages and develops its available resources sustainably to
deliver and maintain the best possible quality of care.
Health is determined by a number of interdependent factors, including:
income and socioeconomic status: higher income and social class are associated with better health
education: lower levels of education are associated with poorer health
environment: pollution, working environments and accommodation all contribute to health status
employment: unemployment is associated with poorer health status
genetics: some people are more likely to develop illness based on their family history
personal behaviour: people can influence their health status by food choices, physical activity levels, alcohol/drug consumption and smoking status
gender: men and women are prone to developing different illnesses
health services: access to and use of health services can influence the prevention and treatment of illness.
The OECD HCQI project has developed a conceptual framework (see Appendix 1) to
recognise that health is determined by a number of interdependent factors, one of
which is healthcare. A conceptual framework provides a structure to guide the process
of developing KPIs. The OECD framework consists of four interconnected levels
representing(19):
health; non-healthcare determinants of health; healthcare system performance; and health system design and context.
Approaches that can be used to monitor the quality of healthcare include:
regulatory inspection
surveys of consumer experiences
third-party assessments
key performance indicators.
To illustrate, Figure 3 outlines two indicators. The first KPI measures the percentage of
women between the ages of 25 and 60 who have a cervical screening test result within
the last five years. It is a process KPI; it is specific to a particular service-user
population; the type of care is preventive; and the function of care is screening.
The second KPI measures the number of service users who return to the emergency
department for an unscheduled visit within seven days with the same condition. It is an
outcome KPI; it is generic, as it is applicable to all service users; the type of care is acute;
and the function of care is intervention/treatment.
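To make the arithmetic behind such indicators concrete, the short sketch below (in Python) computes both examples from hypothetical counts; the population figures, and the choice to express the second KPI as a rate of total attendances, are illustrative assumptions rather than definitions taken from Figure 3.

```python
# Illustrative sketch only: the counts below are invented to show how the two
# example KPIs described above could be calculated in practice.

def percentage(numerator: int, denominator: int) -> float:
    """Return a percentage, guarding against an empty denominator."""
    return 100.0 * numerator / denominator if denominator else 0.0

# KPI 1 (process): cervical screening coverage.
women_aged_25_to_60 = 12500        # eligible population (hypothetical)
screened_within_5_years = 9875     # women with a test result in the last five years (hypothetical)
screening_coverage = percentage(screened_within_5_years, women_aged_25_to_60)

# KPI 2 (outcome): unscheduled emergency department returns within seven days.
ed_attendances = 4200              # total attendances in the period (hypothetical)
unscheduled_returns = 130          # returns with the same condition within seven days (hypothetical)
return_rate = percentage(unscheduled_returns, ed_attendances)

print(f"Cervical screening coverage: {screening_coverage:.1f}%")
print(f"Seven-day unscheduled return rate: {return_rate:.1f}%")
```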
3.2 Benefits
Since the measurement of performance itself contributes to improvement, it is
necessary to monitor performance in order to improve the quality and safety of
healthcare delivery.
3.2.1 Benchmarking
KPIs facilitate the improvement of performance through benchmarking, which makes it
possible for organisations to document the quality of care they provide against that
provided by comparable organisations.
Also, healthcare outcomes are usually the result of a combination of factors and so it is
important that the KPI used appropriately measures outcomes that are attributable to
the performance of the healthcare system in which they are employed(32).
3.3.3 Data availability
The decision to select or develop a KPI based solely on available data is another factor
which must be considered. Basing KPIs on what the organisation considers an intrinsic
component of a quality service will lead to measurements that enhance quality within
the organisation. In contrast, basing KPIs on available data, while more expedient, may
lead to measurements that do not contribute to, or that have a negative impact on, quality
improvement. It is however important to identify what information is available with the
aim of identifying significant gaps.
3.3.4 Local application of KPIs
National targets may allow services to be benchmarked against international
comparators, but they provide little information as to why there are variations in
results(33). As a result, national KPIs need to be supported by local operational KPIs to
provide information at a local level to inform practice.
Performance data, captured at the point of care delivery, can be used locally to involve
and inform clinicians in performance improvement. In order to be effective and not
overburden an organisation's available resources, healthcare performance data needs to
be relevant to the healthcare provider and must not divert resources from the primary
purpose of providing frontline healthcare. In the United Kingdom the Healthcare
Commission developed the Better Metrics project(34) in response to the recognition
that clinicians were not always aware of targets being used in performance
measurement. This project aims to develop metrics that are relevant to clinicians' day-to-day practice and to assist local services in developing their own metrics.
Part 2:
Development of Key Performance Indicators and
Minimum Data Sets
4 Development of KPIs
A number of factors, outlined below, should be considered when developing and
evaluating KPIs (Figure 4)(33-35).
These factors are not presented as a series of steps and even though some may follow
a logical order, others can happen at any stage in the process or throughout the whole
process. These factors have been identified through a synthesis and analysis of
literature following an extensive review and should be considered when developing
KPIs.
4.1 Define the audience and use for measurement
It is important to define the goals of the measurement, reasons for measurement and
the intended audience in order to identify and develop a suitable KPI.
It is essential to note that the goal of the measurement, whether benchmarking
internally for quality improvement purposes or externally against standards or
other organisations, will influence the KPI selection process. For example, if the KPI is
being developed for the purpose of benchmarking performance internationally, then a
KPI must be selected that is widely used internationally and has a clear definition.
There are many quality domains such as safety, effectiveness, efficiency, person-centredness
and equity. Before embarking on the performance measurement process,
it is necessary to identify the domains for which the measurement is intended, which
may in turn be dependent on the audience. In order to fully evaluate quality it will be
necessary to identify a balanced suite of KPIs.
The intended audience can influence the unit of analysis or the way in which the result
is presented. The audience refers to the person or group for whom the KPI will aid
decision-making and can be the service-user, the clinician, the public, the facility or the
healthcare system. For example, a patient waiting for surgery will be more interested in
the average waiting time for that surgery, rather than the number of people on the
waiting list.
4.2 Consult with stakeholders and advisory group
There should be consultation with all stakeholders throughout the data development
process. Consultation facilitates the identification of the needs of stakeholders while
simultaneously contributing to the acceptance of the selected KPIs. Consultation also
facilitates agreement about data elements and assists in familiarisation with the data
and standards(35).
Consultation with decision-makers can assist in identifying their information needs and
subsequent use for that information. Consultation with service providers can also assist
in identifying their information needs, and elicit what data they can provide. Discussions
with data capture and analysis staff can assist in determining skills base and training
requirements. Service user engagement can assist in identifying their information needs
and if the proposed data collection process raises any privacy and confidentiality
concerns(36).
Where appropriate, consultation should include ongoing engagement and eventual
endorsement by national or regional committees that have responsibility for health
information and standards to ensure compliance.
Methods of consultation can vary from once-off meetings to regularly scheduled
meetings with the advisory group and web forums. In keeping with best practice,
consultation should be tailored to appropriately meet the needs of the situation - the
chosen method should be based on the most efficient method of communicating with
the intended audience to disseminate the desired information and obtain the required
feedback. Consultation facilitates guidance from all stakeholders and in particular from
the expert panel.
The advisory group membership should include the relevant health professionals and
stakeholders for the area being measured. An appropriately constituted advisory group
will increase the likelihood that the chosen KPIs are fit-for-purpose and will be adopted.
Group members should be independent, should not have a conflict of interest and have
the primary objective of developing KPIs that provide a fair and accurate reflection of
the area being measured. Processes are required to ensure advisory group members
have the ability to be objective, have good teamwork and communication skills and are
willing to commit sufficient time for background reading and to attend meetings(37).
The service user is the most important stakeholder in healthcare and their involvement
is essential to help incorporate the consideration of those issues that are important to
service users into the decision-making process for the delivery of healthcare. Sufficient
support and processes should be put in place to facilitate the active participation of
service users in the advisory group. Service users have a broad perception of healthcare
quality that can include the availability of information, interpersonal relationships and
the environment whereas healthcare professionals are more likely to focus on treatment
outcomes(38). In addition, the inclusion of service users will encourage confidence in,
and support for, healthcare delivery decisions when they are made(39). Service-user
representation does not need to be in the form of a formally qualified member of the
public but should be an individual who has experience and knowledge of issues that are
important to service users(40).
medication safety(43). Other service-user safety KPIs monitor adverse events such as
falls and bedsores.
As it is not possible to exhaustively monitor every aspect of healthcare delivery, priority
should be given to conditions for which there is evidence to support potential for
improvement. Areas that have demonstrated variability in the quality of care or where
there is a clear gap between actual and potential levels of healthcare should be
considered(45).
The process or outcome measure being assessed should be susceptible to influence by
the healthcare system in relation to quality improvement(46). In other words, the
healthcare system should have the ability to address any problems identified through
measurement and likewise the measure should reflect policy/practice changes that
contribute to quality improvement.
Together with the primary reasons for collecting data, such as improvements in the safety and
quality of services, consideration should also be given to benefits such as the more efficient
use of resources that can follow from the measurement process; these are further
drivers towards the introduction of such a system.
4.4 Achieve a balance in measurement
The diversity of stakeholders in health and social care means that measures are needed
across multiple domains to satisfy their different information needs(47). A
number of approaches have been developed to assist in identifying a balanced set of
KPIs including:
the balanced scorecard, which was originally developed by Kaplan and Norton(48)
and suggests four perspectives of a performance indicator set to provide a
comprehensive view of the performance of an organisation:
- service user perspective: measures how an organisation meets the assessed
needs and expectations of the service user
- internal management perspective: measures the key business processes that
have been identified as necessary for a high quality and effective service
- continuous improvement perspective: measures the ability of the
organisation's systems and people to learn and improve
- financial perspective: measures the efficient use of resources to achieve the
organisation's objectives.
The Three Es framework(28) uses the three domains of economy, efficiency and
effectiveness:
- economy, which measures the acquisition of human and material resources of
the appropriate quality and quantity at the lowest cost
- efficiency, which measures the capacity to provide effective healthcare using
minimum resources
- effectiveness, which measures the degree to which the organisation attains
established goals.
the performance assessment framework used in the NHS, which assesses performance
across six areas:
- health improvement
- fair access
- effective delivery of appropriate care
- efficiency
- service-user/carer experience
- health outcomes.
the process of achieving a balanced set of KPIs can be assisted by incorporating the
structure, process and outcome classification into the methodology for assessing the
healthcare system. These classifications are interdependent and structure can have
an impact on processes which in turn can have an impact on outcomes.
Validity
A valid KPI measures what it is supposed to measure and captures an important aspect of
quality that can be influenced by the healthcare facility or system. Ideally KPIs selected
should have links to processes and outcomes through scientific evidence. Measures that
have been selected using scientific evidence possess high content validity and measures
selected through consensus and guidelines will have high face validity.
Content validity refers to whether the KPI captures important aspects of the quality of care
provided. Face validity can be determined by the KPI making sense logically and clinically or
from previous usage.
Reliability
The KPI should provide a consistent measure in the same population and settings
irrespective of who performs the measurement.
Reliability is similar to reproducibility to the extent that if the measure is repeated you
should get the same result. Any variations in the result of the KPI should reflect actual
changes in the process or outcome. Reliability can be influenced by training, the KPI
definition and the precision of the data collection methods(6).
Inter-rater reliability compares differences between evaluators performing the same
measurement. Internal consistency examines the relationship between sub-indicators of the
same overall measurement, and, if reliable, there should be correlation of the results. Test-retest
reliability compares the difference between results when the same evaluator
performs the measurement at different times.
Explicit evidence base
KPIs should be based on scientific evidence, the consensus of expert opinions among
health professionals or on clinical guidelines.
The preferred method of choosing KPIs is through evaluating scientific evidence in support
of each KPI and rating the strength of that evidence. One example of a rating system is to
give the highest rating to evidence (A evidence) from meta-analysis of randomised
controlled trials and give a lesser rating (B evidence) to evidence for controlled studies
without randomisation and a further lower rating (C evidence) to data from
epidemiological studies(46).
In healthcare, there may only be limited scientific evidence to support a KPI and it becomes
necessary to avail of expert opinion(27). There are a number of methods by which a KPI can
be developed through facilitating group consensus from a panel of experts, such as the
Delphi technique, the RAND appropriateness method and from clinical guidelines. Appendix
2 gives a brief description of each method and Appendix 3 provides an example of a Delphi
assessment instrument. The expert panel can exist independently of the advisory group and
are used as a point of reference for the KPI development process.
Acceptability
The data collected should be acceptable to those being assessed and to those carrying out
the assessment.
Feasibility
There should be a feasibility analysis carried out to determine what data are currently
collected and the resources required to collect any additional required data.
The feasibility analysis should determine what data sources are currently available and if
they are relevant to the needs of the current project. This will include determining if there
are existing KPIs or benchmarking processes based on these data sources.
The reporting burden of collecting the data contained in the KPI should not outweigh the
value of the information obtained. Preferably, data collection should be integrated into
service delivery, and, where additional data are required that are not currently part of service
delivery, there should be a cost-benefit analysis to determine if it is cost-effective to collect.
The feasibility analysis should also include what means are used to collect data and the
limitations of the systems used for collection. It should also outline the reporting
arrangements, including reporting arrangements for existing data collection and frequency
of data collection and analyses.
Sensitivity
Does the KPI actually capture changes that occur in the service for which the measure is
intended?
Specificity
Only changes in the area being measured are reflected in the measurement results.
Relevance
The results of the measurement should be of use in planning and the subsequent delivery
of healthcare and contribute to performance improvement
Balance
The final suite of indicators should measure different aspects of the service in order to
provide a comprehensive picture of performance, including user perspective(28).
Tested
There should be due consideration given to indicators that have been tried and tested in
the national and international arena rather than developing new indicators for the same
purpose.
Safe
The indicator should not lead to an undue focus on the aspect of care being measured that
may in turn lead to a compromise in the quality and safety of other aspects of the service.
Avoid duplication
Prior to developing the indicator due consideration should be given to other projects or
initiatives to ensure that there will not be a duplication of data collection.
Timeliness
The data should be available within a time period that enables decision-makers to utilise the
data to inform their decision-making process. If the data is required for operational
purposes, then it will be required within a shorter timeframe than data used for long-term
strategic purposes.
operationalise policy
achieve agreement and promote discussion regarding priorities and expectations
set benchmarks and monitor progress
as a means for performance contracting
all-the-time targets, which aim to provide a level of service all the time
percentage achievement targets, which aim to achieve a specified level of
performance against a standard
qualitative targets, which are descriptive of what standard of service to expect
time-bound targets, which are one-off for a specific service
national, regional or service specific targets, which are determined for a specific
demographic or service area.
Targets should be realistic but also challenge service delivery towards improvement.
They should be SMART, that is: specific, measurable, achievable, relevant and time-bound.
For example, service users presenting with myocardial infarction should receive
thrombolytic therapy within 60 minutes of calling for professional help, where that is the
treatment of choice. However, not all service users with myocardial infarction should
receive thrombolysis; some service users undergo alternative treatment such as primary
angioplasty. Therefore the target should be based on an agreed acceptable level of
performance that can be achieved incrementally over a specified timeframe. It will be
necessary to have baseline data in order to identify a target that is both achievable and
challenging.
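As a simple illustration of checking measured performance against such an incremental target, the sketch below uses invented figures for the thrombolysis example; only the 60-minute threshold comes from the text, while the counts and the interim 70% target are assumptions.

```python
# Hedged illustration: invented counts for the thrombolysis target example.
# Only the 60-minute threshold is taken from the text above.

eligible_patients = 180       # service users for whom thrombolysis was the treatment of choice
treated_within_60_min = 117   # of those, treated within 60 minutes of calling for help

achievement = 100.0 * treated_within_60_min / eligible_patients
interim_target = 70.0         # agreed incremental target (%), to be raised over time

print(f"Performance: {achievement:.1f}% against an interim target of {interim_target:.0f}%")
print("Target met" if achievement >= interim_target else "Target not yet met")
```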
system-level: which is necessary for policy and planning purposes on a system-wide
or national basis. System-level data is an aggregate of all data elements in a
particular region and is derived from episode, case and facility-level information.
Frequently, the KPI will require data to be processed from different levels, using a
combination of data during analysis, to achieve a measurement. For example, episode-level
information will need to be combined with facility-level information to determine
the ratio of emergency physicians to the number of attendees at an emergency
department. In this example, episode-level information will be collected for each
service-user, while facility-level information needs only to be collected on an annual
basis.
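A minimal sketch of combining the two levels of data for this example is given below; the record structures, facility names and staffing figures are hypothetical and are intended only to show episode-level counts being joined with annually collected facility-level information.

```python
# Hypothetical sketch: combining episode-level and facility-level data to derive
# the ratio of emergency physicians to emergency department attendances.

# Episode-level data: one record per attendance, collected continuously.
ed_attendances = [
    {"facility": "Hospital A"},
    {"facility": "Hospital A"},
    {"facility": "Hospital B"},
    # ... in practice, one entry per attendance over the reporting period
]

# Facility-level data: collected once a year.
physicians_per_facility = {"Hospital A": 6, "Hospital B": 4}

# Count attendances per facility from the episode-level records.
attendance_counts = {}
for episode in ed_attendances:
    facility = episode["facility"]
    attendance_counts[facility] = attendance_counts.get(facility, 0) + 1

# Combine the two levels: physicians per attendance at each facility.
for facility, attendances in attendance_counts.items():
    ratio = physicians_per_facility[facility] / attendances
    print(f"{facility}: {physicians_per_facility[facility]} physicians for "
          f"{attendances} attendances (ratio {ratio:.2f})")
```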
5.2 Define the frequency of collection
Some data may need to be collected on a daily basis while other data can be collected
annually. The urgency of decisions to be made based on the KPI or the level of
monitoring required, will determine the frequency of data collection.
5.3 Document the data collection process
It is necessary to write detailed data collection specifications to ensure that data are
collected and measured consistently and to reduce the risk of bias. There should be a
data development process which results in data standards that contribute to a
consistent approach to data collection and use. Data standards are agreements on the
representation, format, and definition of common data. These data standards will then
assist in the process of ensuring data collection is of high quality and enable consistent
and comparable reporting of data and information(51).
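To show the kind of information such a specification might record, the sketch below defines a minimal metadata entry for a single data element; the field names and example values are assumptions for illustration and are not drawn from any national data dictionary.

```python
# Hypothetical sketch of a data standard entry for one KPI data element.
# Field names and values are illustrative only.

from dataclasses import dataclass

@dataclass
class DataElementStandard:
    name: str                   # agreed name of the data element
    definition: str             # shared definition used by all collecting organisations
    representation: str         # format or data type, e.g. date, code, integer
    allowed_values: str         # value domain or coding scheme
    collection_frequency: str   # how often the element is recorded

screening_date = DataElementStandard(
    name="Date of last cervical screening test",
    definition="Date on which the service user's most recent screening test was performed",
    representation="Date (DD/MM/YYYY)",
    allowed_values="Any valid calendar date not in the future",
    collection_frequency="Recorded at each screening episode",
)

print(screening_date)
```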
Data can be collected manually, electronically or a combination of both. Methods of data
collection need to be explored with the advisory group to determine the feasibility of the
KPI and answer the following questions:
can existing data sources be used? During the feasibility analysis existing data
sources will have been identified and where possible these should be utilised.
However, if an existing data source does not meet the needs of the project, then it
should not be used
can existing data sources be enhanced? If the existing data source provides data
closely aligned with the required data but not completely fulfilling the requirements,
it may be possible to enhance the existing data source. Before enhancing an existing
data source it is necessary to consult with others using the data source to ensure the
modification does not impact on other uses of the data
is a new method of data collection needed? If a new data source is required it should
be determined that the reporting burden does not exceed the benefits gained from
collecting the data.
Potential sources of data include:
administrative databases, which are readily available and therefore will involve
minimal expenditure for data collection, however the information may not be specific
enough and may not be reliable
medical record data, which are also readily available and contain more detail than
administrative data, including diagnosis, treatment and outcome
prospective data collection, which involves collecting data specifically for quality
measurement purposes - it is more specific and can define exactly what data are
required. It is, however, not readily available and expensive to collect
survey data, which involves collecting data regarding knowledge, attitudes and
behaviours and is not otherwise available. It is not readily available and is expensive
to collect.
Information can also be presented using composite measures which present the results
of performance measurement using a single score representing an aggregation of a
number of underlying KPIs(52). Composite measures can provide a rounded picture of
the performance of an organisation or system rather than trying to identify a trend from
a range of individual KPIs(53). Each of the individual KPIs within the composite measure
must satisfy the requirements of a good KPI; otherwise the composite measure will not
represent an accurate picture of performance.
In certain instances weights are assigned to individual KPIs within a composite measure
to reflect their priority or importance, so that individual KPIs within the subset
contribute to a higher proportion of the result than the remaining KPIs.
For example, a composite measure that comprises seven indicators may assign a weight
of 0.25 to 2 of the KPIs and 0.1 to each of the other 5. This weighting is then reflected
in the overall result. There are, however, risks associated with aggregating the KPIs into
a composite measure. It is possible to lose important information, such as serious
failings in a particular part of the organisation, or to fail to identify specific areas where
significant improvement is required. The weighting system can also influence the result,
particularly when used for benchmarking between service providers. Service providers
that excel in the higher weighted KPIs will perform better than those that excel in the
lower weighted KPIs, so the weighting methodology needs to be robust.
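A small sketch of the weighted aggregation described above is shown below; the 0.25/0.1 weights follow the example in the text, while the individual KPI scores are invented and are assumed to have already been normalised to a common 0-100 scale.

```python
# Hedged sketch of a weighted composite measure. Two KPIs carry a weight of 0.25
# and five carry 0.1, as in the example above; the scores themselves are invented
# and assumed to be pre-normalised to a 0-100 scale.

kpi_scores = {
    "kpi_1": 82.0, "kpi_2": 74.0, "kpi_3": 91.0, "kpi_4": 58.0,
    "kpi_5": 88.0, "kpi_6": 79.0, "kpi_7": 70.0,
}
kpi_weights = {
    "kpi_1": 0.25, "kpi_2": 0.25,   # higher-priority indicators
    "kpi_3": 0.1, "kpi_4": 0.1, "kpi_5": 0.1, "kpi_6": 0.1, "kpi_7": 0.1,
}
assert abs(sum(kpi_weights.values()) - 1.0) < 1e-9  # weights should sum to one

composite = sum(score * kpi_weights[name] for name, score in kpi_scores.items())
print(f"Composite score: {composite:.1f}")
```

Note how a poor score on a low-weighted indicator (kpi_4 here) moves the composite very little, which is precisely the masking risk described above.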
6.1 Determine frequency of processing and analysis
The frequency of processing and analysing the data collected should be determined to
ensure the efficient use of resources and also meet the needs of the information user. It
may not always be necessary to process and analyse data at the same frequency as
data collection. It may be practical to collect data on a daily basis, but for analysis and
comparison purposes it may be appropriate that this data is processed and analysed on
a weekly, monthly or even annual basis.
6.2 Define method of analysis
A detailed protocol should be developed for presenting the result of the KPI. This should
address issues such as missing data, risk adjustment, and also what is an acceptable
level of performance or target to be achieved. In some cases the result can be
presented as the proportion of the total population that have experienced the particular
aspect of the service being measured. Other results can be based on the proportion that
has achieved a particular standard or threshold.
6.2.1 Define type of measure
The chosen method for analysing and presenting the results should be determined based
on the topic or service being measured. The following are examples of ways of presenting
the results of the measurement process(46), with a short computational sketch after the list:
rate-based KPIs: use information about events that are expected to happen
frequently. The measurements can be represented as proportions or ratios, detailed
as follows:
- proportion KPIs: to allow comparisons between organisations, or trends over a
specified time, they require both a numerator and a denominator. The KPI must
identify the population at risk of the event and the period of time within which
the event might take place. They are usually expressed as a percentage and the
numerator is contained in the denominator. An example of a proportion KPI is the
proportion of cardiovascular-related deaths that are male
- ratio KPIs: the numerator is not contained in the denominator, e.g. the ratio of male
to female cardiovascular-related deaths.
sentinel KPIs: identify individual events that are inherently undesirable and usually
warrant detailed analysis to determine why the event occurred. Sentinel events
depict extremely poor performance. An example of a sentinel KPI is the number of
deaths resulting from medical error.
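The sketch below works through the cardiovascular-deaths example for the proportion and ratio forms, together with a simple sentinel count; all of the figures are invented for illustration.

```python
# Illustrative figures only: cardiovascular-related deaths in a reporting period.
male_cv_deaths = 340
female_cv_deaths = 290
total_cv_deaths = male_cv_deaths + female_cv_deaths

# Proportion KPI: the numerator is contained in the denominator, expressed as a percentage.
proportion_male = 100.0 * male_cv_deaths / total_cv_deaths

# Ratio KPI: the numerator is not contained in the denominator.
male_to_female_ratio = male_cv_deaths / female_cv_deaths

# Sentinel KPI: a count of inherently undesirable events, each warranting detailed review.
deaths_from_medical_error = 1

print(f"Proportion of cardiovascular-related deaths that are male: {proportion_male:.1f}%")
print(f"Male-to-female ratio of cardiovascular-related deaths: {male_to_female_ratio:.2f}")
print(f"Sentinel events (deaths resulting from medical error): {deaths_from_medical_error}")
```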
8 Conclusion
Access to and use of good quality information is a key component of performance
measurement and improvement for high quality, safe and reliable healthcare.
Performance improvement involves monitoring the current level of performance and
instituting changes where performance is not at the desired level. KPIs support
organisations in improving the safety and quality of care by providing information about the
current level of performance and identifying where there are opportunities for
improvement.
This document has been developed as a resource to support stakeholders in the process
of developing KPIs and associated MDSs used for monitoring the quality and safety of
health and social care. The guidance identifies important factors to be considered when
developing and identifying KPIs; these factors have been identified through an extensive
synthesis and analysis of the literature.
KPIs that have been identified and developed based on the factors identified in this
document are more likely to lead to measurements that can be confidently relied upon
by decision makers. Data collection to support the KPI measurement process is more
efficient if it is incorporated into routine care. It is important that each KPI and the
associated MDS is clearly defined, so that the result of the measurement reflects actual
changes in the quality and safety of care.
Having completed this guidance, the Authority will continue to develop and publish
additional documents to support improvements in the quality and safety of healthcare.
Reference List
(1) Smith PC, Mossialos E, Papanicolas I. Performance measurement for health
system improvement: experiences, challenges and prospects. World Health
Organisation; 2008.
(2) Kohn LT, Corrigan JM, Donaldson MS. To Err is Human: Building a Safer Health
System. Washington: National Academies Press; 1999.
(3) Berwick DM, James B, Coye MC. Connections Between Quality Measurement and
Improvement. Medical Care. 2003; 43(1, Supplement): p.i30-i38.
(4) Department of Health (United Kingdom). High Quality Care for All: NHS Next
Stage Review Final Report. London: Department of Health; 2010.
(5) Raleigh VS, Foot C. Getting the Measure of Quality: Opportunities and challenges.
The King's Fund; 2010.
(6) Rubin HR, Pronovost P, Diette GB. From a process of care to a measure: the
development and testing of a quality indicator. International Journal for Quality in
Health Care. 2001; 13(6): pp.489-96. Available online from:
http://intqhc.oxfordjournals.org/cgi/reprint/13/6/489.
(7) Rubin HR, Pronovost P, Diette GB. The advantages and disadvantages of process-based
measures of health care quality. International Journal for Quality in Health
Care. 2001; 13(6): pp.469-74.
(8) Donabedian A. An Introduction to Quality Assurance in Health Care. Oxford
University Press; 2003.
(9) Donabedian A. The Quality of Care: How Can It Be Assessed? Journal of the
American Medical Association. 1988; 260(12): pp.1743-8. Available online from:
http://post.queensu.ca/~hh11/assets/applets/The_Quality_of_Care__How_Can_it_Be_Assessed_-_Donabedian.pdf.
(10) Mainz J. Defining and classifying clinical indicators for quality improvement.
International Journal for Quality in Health Care. 2003; 15(6): pp.523-30.
(11) Arah OA, Klazinga N, Delnoij DMJA, Ten Asbroek AHA, Custers T. Conceptual
frameworks for health systems performance: a quest for effectiveness, quality
and improvement. International Journal for Quality in Health Care. 2003; 15
pp.377-98. Available online from:
http://intqhc.oxfordjournals.org/cgi/reprint/15/5/377.
indicators for general practice: A practical guide for health professionals and
managers. The Royal Society of Medicine Press Ltd; 2002.
(24) Turpin RS, Darcy LA, McMahill C, Meyne K, Morton D, Rodriguez J, et al. A Model
to Assess the Usefulness of Performance Indicators. International Journal for
Quality in Health Care. 1996; 8(4): pp.321-9. Available online from:
http://intqhc.oxfordjournals.org/cgi/reprint/8/4/321.
(25) Kelley E, Arispe I, Holmes J. Beyond the initial indicators: lessons from the OECD
Health Care Quality Indicators Project and the US National Healthcare Quality
Report. International Journal for Quality in Health Care. 2006; pp.45-51.
(26) Pringle M, Wilson T, Grol R. Measuring "goodness" in individuals and healthcare
systems. British Medical Journal. 2002; 325 pp.704-7.
(27) Campbell SM, Braspenning J, Hutchinson A, Marshall M. Research methods used
in developing and applying quality indicators in primary care. Quality and Safety
in Health Care. 2002; 11 pp.358-64.
(28) Audit Commission. On Target: the practice of performance indicators. Audit
Commission for Local Authorities and the National Health Service in England and
Wales; 2000. Available online from: http://www.auditcommission.gov.uk/Products/NATIONAL-REPORT/266D51B7-0C33-4b4b-98327484511275E6/archive_mptarget.pdf.
(29) Bridgewater B, Grayson AD, Brooks N, Grotte G, Fabri BM, Au J, et al. Has the
publication of cardiac surgery outcome data been associated with changes in
practice in northwest England: an analysis of 25,730 patients undergoing CABG
surgery under 30 surgeons over eight years. Heart. 2007; 93 pp.744-8.
(30) Dranove D, Kessler D, McClellan M, Satterthwaite M. Is more information better?
The effects of 'report cards' on health care providers. Journal of Political
Economy. 2003; 111(3):
(31) Commission for Healthcare Audit and Inspection. Investigation into Mid
Staffordshire NHS Foundation Trust. Healthcare Commission; 2009. Available
online from:
http://www.cqc.org.uk/_db/_documents/Investigation_into_Mid_Staffordshire_N
HS_Foundation_Trust.pdf.
(32) Helfert M, Henry P, Leist S, Zellnor G. Koliman KS, (Ed.). Healthcare performance
indicators - Preview of frameworks and an approach for healthcare processdevelopment. 2005. pp. 371-8.
47
Guidance on developing Key Performance Indicators and Minimum Data Sets to Monitor Healthcare Quality
Health Information and Quality Authority
48
Guidance on developing Key Performance Indicators and Minimum Data Sets to Monitor Healthcare Quality
Health Information and Quality Authority
49
Guidance on developing Key Performance Indicators and Minimum Data Sets to Monitor Healthcare Quality
Health Information and Quality Authority
Glossary of terms
BALANCED SCORECARD:
A framework developed by Robert Kaplan and David Norton that suggests four
perspectives of performance measurement to provide a comprehensive view of an
organisation. These are service user perspective, internal management perspective,
continuous improvement perspective and financial perspective.
BENCHMARK:
A point of reference or standard by which something can be measured.
BENCHMARKING:
The process of comparing the cost, cycle time, productivity, or quality of a specific
process or method to another that is widely considered to be an industry standard or
best practice.
CASEMIX:
Casemix is an internationally recognised system of measuring clinical activity
incorporating the age, gender and health status of the population served by an
organisation with a view to objective determination of hospital reimbursement.
DATA:
Data are numbers, symbols, words, images or graphics that have yet to be organised or analysed.
DATA DICTIONARY:
A descriptive list of names (also called representations or displays), definitions, and
attributes of data elements to be collected in an information system or database.
DATA ELEMENT:
A unit of data for which the definition, identification, representation, and permissible
values are specified by means of a set of attributes.
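For illustration only, a hypothetical data element entry (not taken from any existing data dictionary) might specify: name: AMI diagnosis; definition: indicates whether the patient received a diagnosis of acute myocardial infarction; representation: single-character code; permissible values: Y (yes), N (no), U (undetermined).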
DELPHI TECHNIQUE:
A method for obtaining group consensus involving the use of a series of mailed
questionnaires and controlled feedback to respondents which continues until consensus
is reached.
DENOMINATOR:
The specifications that describe the sampling, inclusion and exclusion criteria that
determine the eligibility of data for a measure.
DOMAINS OF QUALITY:
The definable, preferably measurable and actionable, attributes of the system that are related to its functioning to maintain, restore or improve health.
HEALTH INFORMATION:
Health Information is defined as information, recorded in any form or medium, which is
created or communicated by an organisation or individual relating to the past, present
or future, physical or mental health or social care of an individual or cohort. It also
includes information relating to the management of the health and social care system.
METADATA:
Data that defines and describes other data.
MINIMUM DATA SET:
The minimum set of data elements that are required to be collected for a specific
purpose.
NHS TRUST:
A National Health Service Trust provides services on behalf of the National Health
Service (NHS) in England and Wales. There are different types of Trusts, each
responsible for specific services such as Primary Care Trusts, Acute Trusts, Ambulance
Trusts, Care Trusts and Mental Health Trusts.
NUMERATOR:
The specifications that define the subset of data items in the denominator that meet
the indicator criteria.
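For example, using illustrative figures only: if 120 eligible patients make up the denominator for a given quarter and 90 of them meet the indicator criteria, the numerator is 90 and the indicator value is 90 divided by 120, expressed as a percentage, that is, 75%.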
KEY PERFORMANCE INDICATORS:
Performance Indicators are specific and measurable elements of practice that can be
used to assess quality of care. Indicators are quantitative measures of structures,
processes or outcomes that may be correlated with the quality of care delivered by the
healthcare system.
PROCESS INDICATORS:
Performance indicators that monitor the activities carried out in the
assessment/diagnosis and treatment of service users.
OUTCOME INDICATORS:
Performance indicators that monitor the desired states resulting from care processes,
which may include reduction in morbidity and mortality, and improvement in the quality
of life.
RELIABILITY:
Reliability is the consistency of a measurement, or the degree to which an instrument measures in the same way each time it is used under the same conditions with the same subjects.
STRUCTURE INDICATORS:
Performance indicators that monitor the attributes of the health system that contribute
to its ability to meet the healthcare needs of the population.
VALIDITY:
Validity of indicators refers to whether performance indicators are measuring what they
are supposed to measure.
[Figure: Health Care Quality Indicators (HCQI) framework diagram. Labels shown include: Health - How healthy are the citizens of Ireland? (health conditions, human function and quality of life, mortality); the quality dimensions of safety, effectiveness and person-centredness; efficiency and equity; living with illness/disability and coping with end of life; health system design, policy and context; and other determinants of performance (e.g. country capacity).]
Scoring Matrix

Domain: Validity
Definition: Is the indicator satisfactory in terms of face validity and content validity?
Score: 1-3 low degree of relevance; 4-6 medium degree of relevance; 7-9 high degree of relevance

Domain: Reliability
Definition: Is the indicator satisfactory in terms of reliability?
Score: 1-3 low; 4-6 medium; 7-9 high degree of relevance

Domain: Acceptability
Definition: Is the indicator acceptable?
Score: 1-3 low; 4-6 medium; 7-9 high degree of relevance

Domain: Feasibility
Definition: How is the availability of data and the burden of data collection?
Score: 1-3 low; 4-6 medium; 7-9 high degree of relevance

Scoring Sheet

Title:
Scores (one score of 1 to 9 per domain): Validity, Reliability, Acceptability, Feasibility
Additional Comments:
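The matrix and sheet above do not prescribe how the scores given by individual panel members should be combined. As a minimal illustrative sketch (assuming, purely for the example, that scores are aggregated by taking the median for each domain and that an indicator is shortlisted when every domain median falls in the 7-9 band; neither assumption is part of this guidance), the summary step could be recorded as follows:

```python
from statistics import median

# Domains assessed by the scoring matrix (1 = low, 9 = high degree of relevance).
DOMAINS = ["validity", "reliability", "acceptability", "feasibility"]

def summarise_scores(panel_scores):
    """Aggregate one candidate indicator's panel scores per domain.

    panel_scores: dict mapping domain name -> list of 1-9 scores, one per
    panel member. Returns the median per domain and a flag showing whether
    every median falls in the 7-9 band. The median rule and the 7-9
    threshold are illustrative assumptions, not requirements.
    """
    medians = {domain: median(panel_scores[domain]) for domain in DOMAINS}
    shortlisted = all(score >= 7 for score in medians.values())
    return medians, shortlisted

# Example: five panel members scoring one candidate indicator.
scores = {
    "validity":      [8, 7, 9, 7, 8],
    "reliability":   [7, 8, 7, 6, 8],
    "acceptability": [9, 8, 8, 7, 9],
    "feasibility":   [6, 7, 7, 8, 7],
}
print(summarise_scores(scores))
```

Other aggregation rules, for example mean scores or different thresholds per domain, could equally be used; the choice should be agreed by the group developing the indicators.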
1. KPI Title: Time to Thrombolysis

2. Description: Percentage of patients with Acute Myocardial Infarction (AMI) requiring thrombolysis who receive thrombolytic therapy within 60 minutes of presentation to the Emergency Department.

3. Rationale: Cardiovascular disease is the leading cause of death in Ireland and research indicates that mortality is directly proportional to the time delay from onset of symptoms to the commencement of definitive therapy. The Cardiovascular Health Strategy in Ireland recommends that eligible patients receive thrombolysis within 90 minutes of seeking professional help. In the United Kingdom the Coronary Heart Disease National Service Framework sets out that patients suffering from Myocardial Infarction should receive thrombolysis within 60 minutes of calling for professional help.

4. Target:

5. KPI collection frequency: Daily (options: Daily / Weekly / Monthly / Quarterly / Bi-annually / Annually / Other, give details)

6. KPI reporting frequency: Quarterly (options: Daily / Weekly / Monthly / Quarterly / Bi-annually / Annually / Other, give details)

7. KPI calculation: Numerator divided by denominator, expressed as a percentage.
Numerator: Total number of patients with a diagnosis of AMI requiring reperfusion who receive thrombolytic therapy within 60 minutes of presentation to the Emergency Department.
Denominator: Total number of patients with a diagnosis of AMI requiring reperfusion who receive thrombolytic therapy following presentation to the Emergency Department.

8. Reporting aggregation: Regional; Hospital (options: National / Regional / LHO Area / Hospital / County / Institution / Age / Gender / Socio-Economic Class / Other, give details)

9. Data Source(s): Administrative data; Medical Record

10. Tracer conditions:

11. Minimum data set: First name; Surname; Date of birth; Gender; Date patient presents; Time patient presents; AMI diagnosis; Reperfusion type; Thrombolytic drug; Time thrombolysis started; Thrombolysis treatment location; Thrombolysis delay reason; Reason thrombolysis not given.

12. International comparison:

13. Web link to data (where available):

14. Additional Information:
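To illustrate how the calculation in field 7 could be applied to data captured through the minimum data set in field 11, a brief sketch follows. The record structure, the field names (for example time_patient_presents and time_thrombolysis_started) and the use of Python are illustrative assumptions only and are not part of the template.

```python
from datetime import datetime

# Each record loosely mirrors the minimum data set in field 11; the field
# names and structure are illustrative assumptions, not part of the template.
records = [
    {"ami_diagnosis": True, "requires_reperfusion": True,
     "time_patient_presents": "2010-06-01 10:05",
     "time_thrombolysis_started": "2010-06-01 10:50"},   # 45 minutes
    {"ami_diagnosis": True, "requires_reperfusion": True,
     "time_patient_presents": "2010-06-02 14:00",
     "time_thrombolysis_started": "2010-06-02 15:20"},   # 80 minutes
    {"ami_diagnosis": True, "requires_reperfusion": False,
     "time_patient_presents": "2010-06-03 09:00",
     "time_thrombolysis_started": None},                  # not in denominator
]

FMT = "%Y-%m-%d %H:%M"

def minutes_to_thrombolysis(record):
    """Minutes from presentation at the Emergency Department to thrombolysis."""
    presented = datetime.strptime(record["time_patient_presents"], FMT)
    started = datetime.strptime(record["time_thrombolysis_started"], FMT)
    return (started - presented).total_seconds() / 60

# Denominator: AMI patients requiring reperfusion who received thrombolysis.
denominator = [r for r in records
               if r["ami_diagnosis"] and r["requires_reperfusion"]
               and r["time_thrombolysis_started"] is not None]

# Numerator: the subset thrombolysed within 60 minutes of presentation.
numerator = [r for r in denominator if minutes_to_thrombolysis(r) <= 60]

kpi = len(numerator) / len(denominator) * 100 if denominator else None
if kpi is not None:
    print(f"Time to Thrombolysis KPI: {kpi:.1f}%")  # 50.0% for this sample data
```

In practice the same calculation would be run against the agreed data sources (administrative data or the medical record) at the stated collection and reporting frequencies.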