
Opinion Piece

Digital Health
Volume 8: 1–5
© The Author(s) 2022
Article reuse guidelines: sagepub.com/journals-permissions
DOI: 10.1177/20552076221113396
journals.sagepub.com/home/dhj

Time to act mature—Gearing eHealth evaluations towards technology readiness levels

Stephanie Jansen-Kosterink1,2, Marijke Broekhuis1,2 and Lex van Velsen1,2

Abstract
It is challenging to design a proper eHealth evaluation. In our opinion, the evaluation of eHealth should be a continuous
process, wherein increasingly mature versions of the technology are put to the test. In this article, we present a model
for continuous eHealth evaluation, geared towards technology maturity. Technology maturity can be determined best via
Technology Readiness Levels, of which there are nine, divided into three phases: the research, development, and deployment
phases. For each phase, we list and discuss applicable activities and outcomes on the end-user, clinical, and societal front.
Instead of focusing on a single perspective, we recommend blending the end-user, health, and societal perspectives. With this
article we aim to contribute to the methodological debate on how to create the optimal eHealth evaluation design.

Keywords

eHealth, evaluation, design, technology readiness level, continuous process, perspectives


Submission date: 11 May 2022; Acceptance date: 23 June 2022

1 eHealth Group, Roessingh Research and Development, Enschede, The Netherlands
2 Biomedical Signals and Systems Group, University of Twente, Enschede, The Netherlands

Corresponding author:
Stephanie Jansen-Kosterink, Roessingh Research and Development, Roessinghsbleekweg 33b, 7522 AL Enschede, The Netherlands.
Email: [email protected]; Twitter: @Sjansenius

Introduction

The World Health Organization (WHO) stressed in its Digital Health guidelines the need for rigorous evaluation of eHealth, in order to generate evidence and to promote the appropriate integration and use of technologies for improving health and reducing health inequalities.1 In the scientific community that focuses on eHealth evaluation, there is no consensus on how to create the best evaluation design.2,3 According to the standards of evidence-based medicine, large prospective randomized controlled trials (RCTs) are considered the gold standard for evaluating the safety and effectiveness of medical interventions.4 As the characteristics of an RCT do not match well with the evaluation of eHealth, it is currently acknowledged among experts that there is an urgent need for other evaluation designs.5–7 This makes it challenging to perform a proper eHealth evaluation, which hampers the subsequent implementation of eHealth in daily clinical practice.8,9

In this paper, we define eHealth according to Eysenbach (2001),10 not just as a technology but as a concept. Eysenbach's definition of eHealth is: "An emerging field in the intersection of medical informatics, public health and business, referring to health services and information delivered or enhanced through the Internet and related technologies. In a broader sense, the term characterizes not only a technical development, but also a state-of-mind, a way of thinking, an attitude, and a commitment for networked, global thinking, to improve health care locally, regionally, and worldwide by using information and communication technology."10


To streamline the set-up of eHealth evaluations, various frameworks have been developed.3,11 The most widely used eHealth evaluation framework in European eHealth studies is the Model for Assessment of Telemedicine (MAST).12 This model is based on the principles of Health Technology Assessment (HTA)13 and is used to assess the effects and costs of eHealth from a multidimensional perspective. The strong points of MAST are the involvement of all the actors and the assessment of outcomes in seven domains: (1) health problem and description of the application; (2) safety; (3) clinical effectiveness; (4) patient perspectives; (5) economic aspects; (6) organizational aspects; and (7) socio-cultural, ethical, and legal aspects. Another commonly used framework is the five-stage model for comprehensive research on telehealth by Fatehi et al.14 This framework outlines five important stages for an eHealth intervention: concept development, service design, pre-implementation, implementation, and post-implementation. By outlining these stages, this framework addresses the difference between the assessment of prototypes and the evaluation of mature technology. The assessment of prototypes helps to identify the required improvements, while the evaluation of a mature technology aims to measure the overall success factors and performance after implementation.

The endorsement of an iterative approach and the focus on multiple perspectives are strong points of these current frameworks for streamlining the set-up of eHealth evaluations. While these frameworks are useful, we foresee three major limitations. The first limitation is that the current frameworks are only applicable to fully mature technologies and offer no solution for technology still in development, which limits their applicability. The second limitation is that these frameworks do not provide a clear method for determining technology maturity. When using these frameworks to evaluate the value of an eHealth service that is still immature, results are likely to be overly negative or, at the very least, biased. The third limitation is the over-representation of the clinical perspective. Most of the articles that report on the use of these frameworks only present the results of a single perspective.15 In previous eHealth evaluation studies the clinical perspective is over-represented, and findings related to usability, the user experience, technology acceptance, and costs are rarely addressed.5 To overcome these limitations, and based on our experience within the field of eHealth evaluation, we present our position towards eHealth evaluations and a model for the continuous evaluation of eHealth, aligned to technology maturity levels and incorporating different evaluation perspectives.

Our position towards eHealth evaluation

Evaluation, defined here as the collection, interpretation, and presentation of information in order to determine the value of a result or process,16 becomes both a possibility and a necessity as soon as technology development starts. Evaluation should be a continuous process, whereby the evaluation set-up is geared towards the maturity of the technology. There is no need to wait with the evaluation until the technology is mature; the evaluation can start from the first concept. In other disciplines, such as software development, in which agile Scrum is a common approach, continuous evaluation is standard practice.

Next, we think that, instead of focusing on a single perspective, evaluations should incorporate a multitude of complementary evaluation perspectives. In our opinion, these perspectives are (1) the end-user, (2) the health, and (3) the societal perspective. The end-user perspective focuses on the task-technology fit (which differs per type of end-user), usability, the user experience (UX), and technology acceptance, to ensure that a technology is suitable for the intended end-users and their context; the health (or clinical) perspective should safeguard the health benefits that one derives from using the technology; the societal perspective should ensure that the technology can be implemented with the support of relevant stakeholders, and is durable.

Technology readiness levels


The maturity of a technology can be determined based on technology readiness levels (TRLs). TRLs are a widely accepted method to assess the maturity level of a technology, also in the context of eHealth.17–19 These levels (Figure 1) were developed by NASA in the early 1970s as a means to determine whether an emerging technology is suitable for space exploration. In total, there are nine levels, divided into three phases: the research, development, and deployment phase. With TRLs we can clearly communicate the level of maturity of a technology and determine whether the technology is ready for tests or evaluations in a real-world setting. When a technology consists of different modules, the weakest (or most immature) module determines the TRL.

Figure 1. Technology readiness level scale.
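To make the module rule concrete, here is a minimal sketch in Python (our own illustration, not something prescribed by the article or the TRL scale): it derives a system-level TRL as the minimum of its module TRLs and maps that TRL onto the three phases. The module names, their values, and the helpers system_trl and trl_phase are hypothetical.

```python
# Illustrative sketch: the weakest (most immature) module determines the
# overall TRL, and TRLs 1-3, 4-6, and 7-9 map onto the research, development,
# and deployment phases, respectively.

def system_trl(module_trls: dict) -> int:
    """Return the overall TRL of a technology composed of several modules."""
    if not module_trls or not all(1 <= t <= 9 for t in module_trls.values()):
        raise ValueError("Each module needs a TRL between 1 and 9")
    return min(module_trls.values())  # weakest module determines the TRL


def trl_phase(trl: int) -> str:
    """Map a TRL onto the research, development, or deployment phase."""
    if trl <= 3:
        return "research"
    if trl <= 6:
        return "development"
    return "deployment"


# Hypothetical eHealth service composed of three modules.
modules = {"web portal": 8, "wearable firmware": 6, "triage algorithm": 4}
trl = system_trl(modules)
print(trl, trl_phase(trl))  # -> 4 development
```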

A model for continuous eHealth evaluation

Our model for continuous eHealth evaluation addresses both the maturity of the technology as the starting point for eHealth evaluations and the inclusion of the different evaluation perspectives. An overview of the suggested activities for the three perspectives in each phase is provided in Table 1.

Research phase

During the research phase, the technology is immature and the new concept, often in the form of a low-fidelity prototype, is discussed with potential end-users (end-user perspective) (e.g. as in van Velsen et al.20 and Jansen-Kosterink et al.21). These discussions aim to gauge the end-users' reactions towards the basic concepts and main functionality of the prototype. The main aim of the continuous evaluations in the research phase is to optimize the new concept and technology. As the technology mainly consists of ideas and simple prototypes at this stage, applying an iterative approach in this phase is crucial. Quick rounds of testing-redesigning-testing should ensure the proper focus of the innovation. The work of Schnall and colleagues22–24 on their Health Information Technology Usability Evaluation Scale (Health-ITUES) fits very well with this phase.

Table 1. An overview of the activities on the end-user, health, and societal perspective for the research, development, and deployment phase.

Research phase
  TRL 1
    End-user perspective: testing of basic principles with relevant stakeholders.
  TRL 2
    End-user perspective: testing of the basic concept to obtain a fit between use and technology.
  TRL 3
    End-user perspective: identifying the merits of the technology.

Development phase
  TRL 4
    End-user perspective: small-scale usability/UX studies in a lab setting, to test prototype components.
  TRL 5
    End-user perspective: small-scale usability/UX studies in a lab setting, to test the integrated system.
  TRL 6
    Health perspective: clinical study into the use, acceptance, and potential health benefits of the technology within the daily clinical context.

Deployment phase
  TRL 7
    Health perspective: large-scale clinical study into the use, acceptance, health benefits, and safety of the technology within the daily clinical context.
    Societal perspective: discussions with relevant stakeholders to assess the forecast of financial and extra-financial value.
  TRL 8
    Health perspective: long-term monitoring of the health benefits and safety of the technology within the broad clinical context.
    Societal perspective: strengthen the model of financial and extra-financial value with the outcomes of clinical studies.
  TRL 9
    Societal perspective: strengthen the model of financial and extra-financial value with the outcomes of long-term monitoring.
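Read as a lookup, Table 1 can also be expressed as a small data structure. The sketch below (Python, our own illustrative encoding rather than part of the model itself) maps a TRL onto the suggested activities per perspective; the helper name plan_evaluation is hypothetical.

```python
# Illustrative encoding of Table 1: TRL -> {perspective: suggested activity}.
TABLE_1 = {
    1: {"end-user": "Testing of basic principles with relevant stakeholders."},
    2: {"end-user": "Testing of the basic concept to obtain a fit between use and technology."},
    3: {"end-user": "Identifying the merits of the technology."},
    4: {"end-user": "Small-scale usability/UX studies in a lab setting, to test prototype components."},
    5: {"end-user": "Small-scale usability/UX studies in a lab setting, to test the integrated system."},
    6: {"health": "Clinical study into use, acceptance and potential health benefits in the daily clinical context."},
    7: {"health": "Large-scale clinical study into use, acceptance, health benefits and safety in the daily clinical context.",
        "societal": "Discussions with stakeholders to assess the forecast of financial and extra-financial value."},
    8: {"health": "Long-term monitoring of health benefits and safety in the broad clinical context.",
        "societal": "Strengthen the financial and extra-financial value model with the outcomes of clinical studies."},
    9: {"societal": "Strengthen the financial and extra-financial value model with the outcomes of long-term monitoring."},
}


def plan_evaluation(trl: int) -> dict:
    """Return the suggested evaluation activities for a given TRL (empty if unknown)."""
    return TABLE_1.get(trl, {})


print(plan_evaluation(7)["societal"])
```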

Development phase

Within this phase the technology evolves from a prototype towards a more mature application. At this moment end-users can interact with a high-fidelity prototype. Small-scale usability tests and short-term clinical studies in a controlled setting should be conducted to identify usability issues and to assess use, acceptance, and potential health benefits (e.g. as in Olde Keizer et al., 2019).25 The outcomes of the technology-oriented evaluations (e.g. usability tests) should feed an iterative redesign process, in which the technology is optimized. The outcomes of the short-term clinical studies help to compose hypotheses concerning health benefits for subsequent evaluations. Next to these activities, the discussions with relevant stakeholders can be started to assess the forecast of financial and extra-financial value.

Deployment phase

At this stage, the technology is almost ready for market launch. There are no more critical usability issues left, and the next step is a large-scale clinical study combined with a summative usability study in a real-life setting. This clinical study could be an RCT to assess the safety and clinical effectiveness of the technology in comparison to usual care in daily clinical practice (e.g. as in Kosterink et al.).26 To comply with national or international legislation, the technology needs to be certified based on the outcome of these studies, for instance with a CE marking in Europe. Besides, based on the outcome of these studies, the forecast of financial and extra-financial value can be validated and finalized. During the deployment phase there is little focus anymore on research and development, although it remains important to keep monitoring the long-term health benefits and safety of the technology within the broad clinical context, for instance with a large cohort study. During this study, the long-term financial and extra-financial value also needs to be assessed (e.g. as in Talboom et al.27), so as to become aware of additional exploitation opportunities.

Discussion

The evaluation of eHealth should be a continuous process, based on the maturity of the technology, and should focus on the end-user perspective, the health perspective, and the societal perspective. The focus of an evaluation should be aligned with the maturity of the technology that is being put to the test. The use of TRLs and their alignment to evaluation perspectives is what mainly distinguishes our model from other evaluation models for eHealth. These models only focus on one perspective,22–24 are only applicable to mature technology,12,15 or do not specify how to assess the maturity of a technology.14,28

Our model for the continuous evaluation of eHealth is based on our experience within the field of eHealth evaluation and the lessons we have learned during our involvement in various national and international eHealth projects. However, since this model reflects a vision on eHealth evaluation, it would be impossible to prove its truth. Therefore, case studies should inform us of its worth and the opportunities for improvement. Additionally, the environment and technical infrastructure in which an eHealth technology is embedded play a role.29 How does environmental and infrastructure maturity affect evaluation? While we consider these factors to be aspects of technology maturity, it would be interesting to see studies that aim to distinguish among the different types of maturity. We hope that the research community sees this article as a source of inspiration to combine evaluation approaches with TRLs and will share their experiences with us.

Contributorship: All authors (SJK, MB, and LvV) contributed substantially to this article and all participated in drafting the article and revising it critically for important intellectual content.

Declaration of conflicting interests: The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding: The author(s) received no financial support for the research, authorship, and/or publication of this article.

ORCID iDs: Stephanie Jansen-Kosterink https://orcid.org/0000-0002-2095-7104
Lex van Velsen https://orcid.org/0000-0003-0599-8706

References
1. World Health Organization. WHO guideline: recommendations on digital interventions for health system strengthening. Geneva: World Health Organization, 2019.
2. Enam A, Torres-Bonilla J and Eriksson H. Evidence-based evaluation of eHealth interventions: systematic literature review. J Med Internet Res 2018; 20: e10971.
3. Bonten TN, Rauwerdink A, Wyatt JC, et al. Online guide for electronic health evaluation approaches: systematic scoping review and concept mapping study. J Med Internet Res 2020; 22: e17774.
4. Hobbs N, Dixon D, Johnston M, et al. Can the theory of planned behaviour predict the physical activity behaviour of individuals? Psychol Health 2013; 28: 234–249.
5. Kairy D, Lehoux P, Vincent C, et al. A systematic review of clinical outcomes, clinical process, healthcare utilization and costs associated with telerehabilitation. Disabil Rehabil 2009; 31: 427–447.
6. Ekeland AG, Bowes A and Flottorp S. Methodologies for assessing telemedicine: a systematic review of reviews. Int J Med Inf 2012; 81: 1–11.

7. Laplante C and Peng W. A systematic review of e-health interventions for physical activity: an analysis of study design, intervention characteristics, and outcomes. Telemed J e-Health 2011; 17: 509–523.
8. Granja C, Janssen W and Johansen MA. Factors determining the success and failure of eHealth interventions: systematic review of the literature. J Med Internet Res 2018; 20: e10235.
9. Broens TH, Huis in 't Veld RM, Vollenbroek-Hutten MM, et al. Determinants of successful telemedicine implementations: a literature study. J Telemed Telecare 2007; 13: 303–309.
10. Eysenbach G. What is e-health? J Med Internet Res 2001; 3: E20.
11. van Dyk L. A review of telehealth service implementation frameworks. Int J Environ Res Public Health 2014; 11: 1279–1298.
12. Kidholm K, Ekeland AG, Jensen LK, et al. A model for assessment of telemedicine applications: MAST. Int J Technol Assess Health Care 2012; 28: 44–51.
13. Lampe K, Mäkelä M, Garrido MV, et al. The HTA core model: a novel method for producing and reporting health technology assessments. Int J Technol Assess Health Care 2009; 25: 9–20.
14. Fatehi F, Smith AC, Maeder A, et al. How to formulate research questions and design studies for telehealth assessment and evaluation. J Telemed Telecare 2016; 23: 759–763.
15. Kidholm K, Clemensen J, Caffery LJ, et al. The model for assessment of telemedicine (MAST): a scoping review of empirical studies. J Telemed Telecare 2017; 23: 803–813.
16. Shaw I, Shaw IGR, Greene JC, et al. The SAGE handbook of evaluation. Thousand Oaks, CA: Sage, 2006.
17. Liu L, Stroulia E, Nikolaidis I, et al. Smart homes and home health monitoring technologies for older adults: a systematic review. Int J Med Inf 2016; 91: 44–59.
18. Roach DP and Neidigk S. Does the Maturity of Structural Health Monitoring Technology Match User Readiness? Albuquerque, NM: Sandia National Laboratories (SNL-NM), 2011.
19. Lyng KM, Jensen S and Bruun-Rasmussen M. A paradigm shift: sharing patient reported outcome via a national infrastructure. In: MEDINFO 2019: health and wellbeing e-networks for all. IOS Press, 2019, pp.694–698.
20. van Velsen L, Evers M, Bara C-D, et al. Understanding the acceptance of an eHealth technology in the early stages of development: an end-user walkthrough approach and two case studies. JMIR Formativ Res 2018; 2: e10474.
21. Jansen-Kosterink S, van Velsen L and Cabrita M. Clinician acceptance of complex clinical decision support systems for treatment allocation of patients with chronic low back pain. BMC Med Inform Decis Mak 2021; 21: 37.
22. Brown W 3rd, Yen PY, Rojas M, et al. Assessment of the health IT usability evaluation model (health-ITUEM) for evaluating mobile health (mHealth) technology. J Biomed Inform 2013; 46: 1080–1087.
23. Cho H, Yen PY, Dowding D, et al. A multi-level usability evaluation of mobile health applications: a case study. J Biomed Inform 2018; 86: 79–89.
24. Schnall R, Rojas M, Bakken S, et al. A user-centered model for designing consumer mobile health (mHealth) applications (apps). J Biomed Inform 2016; 60: 243–251.
25. Olde Keizer RACM, van Velsen L, Moncharmont M, et al. Using socially assistive robots for monitoring and preventing frailty among older adults: a study on usability and user experience challenges. Health Technol (Berl) 2019; 9: 595–605.
26. Kosterink SM, Huis in 't Veld RM, Cagnie B, et al. The clinical effectiveness of a myofeedback-based teletreatment service in patients with non-specific neck and shoulder pain: a randomized controlled trial. J Telemed Telecare 2010; 16: 316–321.
27. Talboom-Kamp E, Ketelaar P and Versluis A. A national program to support self-management for patients with a chronic condition in primary care: a social return on investment analysis. Clin eHealth 2021; 4: 45–49.
28. Sadegh SS, Khakshour Saadat P, Sepehri MM, et al. A framework for m-health service development and success evaluation. Int J Med Inf 2018; 112: 123–130.
29. Scherr TF, Moore CP, Thuma P, et al. Evaluating network readiness for mHealth interventions using the beacon mobile phone app: application development and validation study. JMIR Mhealth Uhealth 2020; 8: e18413.
