Keywords
Diagnostic errors, patient safety, test results, care delays, health information technology
Summary
Background: Abnormal test results do not always receive timely follow-up, even when providers
are notified through electronic health record (EHR)-based alerts. High workload, alert fatigue, and
other demands on attention disrupt a provider’s prospective memory for tasks required to initiate
follow-up. Thus, EHR-based tracking and reminding functionalities are needed to improve follow-
up.
Objectives: The purpose of this study was to develop a decision-support software prototype en-
abling individual and system-wide tracking of abnormal test result alerts lacking follow-up, and to
conduct formative evaluations, including usability testing.
Methods: We developed a working prototype software system, the Alert Watch And Response En-
gine (AWARE), to detect abnormal test result alerts lacking documented follow-up, and to present
context-specific reminders to providers. Development and testing took place within the VA’s EHR
and focused on four cancer-related abnormal test results. Design concepts emphasized mitigating
the effects of high workload and alert fatigue while being minimally intrusive. We conducted a
multifaceted formative evaluation of the software, addressing fit within the larger socio-technical
system. Evaluations included usability testing with the prototype and interview questions about or-
ganizational and workflow factors. Participants included 23 physicians, 9 clinical information tech-
nology specialists, and 8 quality/safety managers.
Results: Evaluation results indicated that our software prototype fit within the technical environ-
ment and clinical workflow, and physicians were able to use it successfully. Quality/safety man-
agers reported that the tool would be useful in future quality assurance activities to detect patients
who lack documented follow-up. Additionally, we successfully installed the software on the local
facility’s “test” EHR system, thus demonstrating technical compatibility.
Conclusion: To address the factors involved in missed test results, we developed a software proto-
type to account for technical, usability, organizational, and workflow needs. Our evaluation has
shown the feasibility of the prototype as a means of facilitating better follow-up for cancer-related
abnormal test results.
1. Background
Improving the timeliness of abnormal test result follow-up is a significant challenge for clinical in-
formatics [1, 2]. Notifying providers of abnormal findings via electronic health record (EHR)-based
alerts is insufficient because notification is no guarantee of follow-up. In previous work involving
2,500 asynchronous alerts related to abnormal laboratory and imaging results, we found that 7% of
laboratory and 8% of imaging result alerts lacked timely follow-up at 4 weeks [3, 4], even in cases
with explicit acknowledgement of the notification by the recipient. Such lapses could lead to diag-
nosis and treatment delays, which can have serious implications [5, 6], especially for cancer, which
often benefits from early diagnosis and treatment [7].
Missed or delayed follow-up of abnormal test results is a multifaceted safety issue that occurs
within the complex “sociotechnical” system of healthcare, involving interactions between technical
and human elements, such as work processes and organizational factors [8–12]. The Department of
Veterans Affairs (VA) uses an EHR called VistA, also known by its front-end component CPRS
(Computerized Patient Record System). Similar to other EHRs, VistA automatically generates no-
tifications in response to abnormal laboratory or radiology results. These notifications are trans-
mitted to a receiving inbox, called the “ViewAlert” window in CPRS. This inbox is visible after log-
ging in to the EHR, and when switching between patient records. In principle, a provider would see
an alert, open it up to read it, make a care planning decision based on the results, place orders, and
document the care plan. However, successful follow-up of an abnormal test result relies on many
processes, involving various human and technical resources [13, 14]. Factors thought to contribute
to missed results include information overload from all types of alerts [15], other competing de-
mands on a provider’s attention and memory, handoffs and coordination challenges [16, 17], and
system limitations related to tracking. Consequently, any intervention intended to improve follow-
up must address these factors as well as the dimensions of the encompassing sociotechnical system
[11, 18, 19], such as human-computer interaction, workflow, personnel, and organizational context
[20].
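To make concrete why acknowledgement alone does not guarantee follow-up, the following minimal sketch (in Python, with illustrative names that are not part of VistA/CPRS) models the stages an alert can pass through and a check based on the 4-week window used in the studies cited above:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum, auto


class AlertState(Enum):
    """Stages a result notification can pass through after the EHR generates it."""
    TRANSMITTED = auto()            # placed in the provider's inbox
    ACKNOWLEDGED = auto()           # opened/read by the provider
    FOLLOW_UP_DOCUMENTED = auto()   # a follow-up order/care plan was documented


@dataclass
class ResultAlert:
    patient_id: str
    test_type: str                  # e.g., one of the tracked cancer-related results
    transmitted: datetime
    state: AlertState = AlertState.TRANSMITTED

    def lacks_timely_follow_up(self, now: datetime, window_days: int = 28) -> bool:
        """True if the follow-up window has elapsed without documented follow-up.

        ACKNOWLEDGED is not enough: an alert can be read and still never reach
        FOLLOW_UP_DOCUMENTED, which is exactly the lapse described above.
        """
        overdue = (now - self.transmitted) > timedelta(days=window_days)
        return overdue and self.state != AlertState.FOLLOW_UP_DOCUMENTED
```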
2. Objectives
To address the aforementioned issues, we developed and evaluated a functional prototype of a multi-
faceted software system that provides cognitive support to providers in order to reduce missed test results. We focused
on abnormal test result alerts related to four types of cancers: colorectal, lung, breast, and prostate. A
facility can apply the tool to other types of asynchronous alerts, bearing in mind that such use should be
part of a quality improvement process and targeted only at a small set of alerts with a high risk
of missed or delayed follow-up and significant clinical consequences. The software is designed for
CPRS, but the design concept is exportable to other EHRs.
Within the VA, revisions to the CPRS software code are managed at higher VA levels, owing to
national implications for more than 200,000 users and over 8 million veterans (i.e., patients). Thus,
in order to be successful, field-based innovations, such as this one, need to limit software coding
changes and related interdependencies within the software code. This potentially reduces the extent
of testing needed pre- and post-implementation and facilitates adoption of the solution across differ-
ent VA facilities.
To address these issues within their multifaceted sociotechnical context, we designed the functional
prototype to meet the following goals, many of which were informed by previous re-
search findings [1, 3, 4, 27–29, 33–37]:
1. Mitigate strain on providers’ prospective memory by serving as an external memory aid to track
the need for follow-up and when necessary, remind providers.
2. Facilitate the process of ordering appropriate follow-up by minimizing provider effort.
3. Minimize provider interruptions and maximize compatibility with provider workflow.
4. Minimize changes to the underlying VistA/CPRS code to minimize conflicts with existing ver-
sions of VistA/CPRS and facilitate testing and adoption across the VA system.
5. Address the technical, organizational, and policy constraints of the complex sociotechnical sys-
tem, such that the Information Technology (IT) staff can configure and maintain it within the fa-
cility policies.
6. Achieve high-reliability follow-up by tracking missed alerts at the clinic or facility level and sup-
porting identification of patients with test results still requiring interventions to facilitate timely
follow-up.
Thus, the sociotechnical dimensions to be addressed include software usability, technical compati-
bility, and fit with clinical workflow and organization.
3. Methods
To address these goals, we used a development and formative evaluation process that involved iter-
ative testing and refinement with stakeholders representing different roles. The core team comprised
clinician and informatician field-based researchers, VA contract usability professionals
with expertise in health IT, and VA contract programmers with expertise in VistA/CPRS. This latter
expertise was also essential to achieve design goal 4 (minimization of VistA/CPRS code changes).
Participants included VA primary care providers, quality and safety managers, and clinical IT
specialists who were responsible for EHR configuration and maintenance regarding clinical content,
including alerts, which was the focus of design goal 5. The development and evaluation project
started in late 2010 and continued to the end of 2011.
The prototype software system called AWARE (Alert Watch And Response Engine) has two com-
ponents. The first component is designed to provide cognitive support to busy providers in initiating
follow-up (the Reminder Prompt), while the second component is designed to support the detection
of alerts that likely did not receive any follow-up (the Quality Improvement Tool).
3.1.1 Reminder Prompt
The provider can postpone addressing the follow-up by selecting the “Close and Address Later”
button, which keeps the alert in the provider’s ViewAlert inbox rather than letting it disappear,
as currently happens in CPRS. This helps maximize fit with workflow (design goal 3).
While the alert is still active, the software performs the same checks each time the provider exits the
patient’s chart. If the provider chooses the “Address Now” button, he or she is prompted to select an
“encounter and progress note title” with which to associate a follow-up action, and is then presented
with a window containing follow-up action options specific to the particular alert type (▶ Figure 3).
Based on the provider’s selections, a template-based progress note and selected orders are automati-
cally generated for review, edits and signature, thus facilitating documentation and order entry (de-
sign goal 2). Additionally, the status of the alert is changed to “completed,” which stops the Pop-up
from appearing again and keeps the alert from being put back in the ViewAlert inbox.
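The chart-exit logic described above (and summarized in Fig. 1) can be sketched as follows; the function and dictionary keys are hypothetical stand-ins for illustration, not the actual AWARE or VistA/CPRS interfaces:

```python
def check_on_chart_exit(tracked_alerts, has_documented_follow_up, show_popup):
    """Illustrative sketch of the check AWARE performs each time a provider exits a chart.

    tracked_alerts            -- the patient's active, tracked abnormal-result alerts (dicts)
    has_documented_follow_up  -- callable: True if follow-up is already documented for an alert
    show_popup                -- callable: shows the Pop-up and returns the provider's choice
    """
    for alert in tracked_alerts:
        if alert["status"] == "completed" or has_documented_follow_up(alert):
            continue                          # follow-up already handled; no prompt needed

        choice = show_popup(alert)            # "address_now" or "address_later"
        if choice == "address_now":
            # Reminder Prompt path: select encounter and note title, choose alert-specific
            # follow-up actions; a templated note and the orders are generated for signature.
            alert["status"] = "completed"     # Pop-up stops; alert is not returned to ViewAlert
        else:
            alert["status"] = "pending"       # alert stays in the ViewAlert inbox; re-checked next exit
    return tracked_alerts
```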
The list of follow-up actions for each type of abnormal result was based on a review of the litera-
ture and expert consensus. However, this list is customizable at the facility-level, enabling the soft-
ware to be adapted to different practice patterns. Furthermore, the prompt was designed to include a
free-text field and additional structured fields for situations when follow-up actions might not be
warranted, which allows more flexibility. See ▶ Figure 3.
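As an illustration of facility-level customization, the follow-up options could be represented as a simple mapping per alert type; the entries below are examples only and are not drawn from the AWARE software itself:

```python
# Hypothetical facility-level configuration for the Reminder Prompt: each tracked
# alert type maps to the follow-up actions offered to the provider. Entries are
# illustrative; an actual list would reflect local practice and expert consensus.
FOLLOW_UP_OPTIONS = {
    "positive_fecal_occult_blood_test": ["Order colonoscopy", "GI consult"],
    "suspicious_chest_imaging":         ["Order chest CT", "Pulmonology consult"],
    "abnormal_mammogram":               ["Diagnostic mammography", "Breast clinic referral"],
    "elevated_PSA":                     ["Repeat PSA", "Urology consult"],
}

# Structured options (plus free text) for cases where follow-up is not warranted.
NO_FOLLOW_UP_REASONS = [
    "Follow-up already in progress",
    "Patient declined further work-up",
    "Other (free text)",
]
```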
3.1.2 Quality Improvement (QI) Tool
The QI Tool contains a database that stores information on alerts generated from the VistA alert
tracking file and from follow-up actions generated from the AWARE Reminder Prompt. The QI
Tool can generate reports for displaying transmitted alerts by provider, patient, time-frame, and/or
alert type to facilitate system quality measurement. Users can sort and filter data and obtain more
information on specific alerts, providers, or patients. For example, at periodic intervals, the user
could identify patients for whom no follow-up has been detected (thus supporting design goal 6,
identification of cases still requiring intervention). The QI Tool dashboard is a prototype interface
that provides an overview and navigation support [45] for risk analysis and detection of patients
without follow-up (▶ Figure 4).
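A minimal sketch of how such a report could be generated is shown below; the record schema (dictionary keys) is assumed for illustration and is not the actual QI Tool database layout:

```python
from collections import Counter


def qi_report(alert_records, start, end, alert_type=None, provider=None):
    """Illustrative QI Tool report over alert records with an assumed schema.

    Each record is a dict with 'transmitted' (datetime), 'alert_type', 'provider',
    'patient_id', 'acknowledged' (bool), and 'follow_up_documented' (bool).
    """
    selected = [
        r for r in alert_records
        if start <= r["transmitted"] < end
        and (alert_type is None or r["alert_type"] == alert_type)
        and (provider is None or r["provider"] == provider)
    ]
    return {
        "total_transmitted": len(selected),
        "unacknowledged": sum(1 for r in selected if not r["acknowledged"]),
        "by_provider": Counter(r["provider"] for r in selected),
        # Patients for whom no documented follow-up was detected among selected alerts:
        "patients_without_follow_up": sorted(
            {r["patient_id"] for r in selected if not r["follow_up_documented"]}
        ),
    }
```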
phone to converse with the usability specialist. For all of the residents, and the rest of the other par-
ticipants, the interviewing and usability testing took place in a conference room at the local VA
medical center, with the usability specialist present. In both situations, the participant was video-
and audio-recorded, and the computer screen was video-captured.
Usability testing of the Reminder Prompt and the QI Tool was conducted using scenarios de-
signed to identify potential usability problems [46, 47]. The tool was loaded onto a test copy of the
VistA/CPRS EHR populated with artificial data. Participants were asked to think-aloud [48] as they
performed a small set of tasks using the functional prototype. Comments about the tool were cap-
tured. Additionally, two human factors experts individually performed heuristic evaluations [46, 49]
by walking through each step in the use of the tool and reviewing it according to established us-
ability heuristics [50], afterwards collaborating to arrive at a consensus.
Usability testing of the Reminder Prompt was conducted with residents and staff physicians. Pro-
viders performed tasks that involved acknowledging alerts, accessing patients’ charts in VistA/
CPRS, responding to the appearance of the Pop-up, and addressing follow-up needs with the Rem-
inder Prompt template. For example, one task involved the participant acknowledging an abnormal
alert, but required exiting the chart prior to ordering follow-up. The AWARE Pop-up appeared, and
the participant responded to it and to the Reminder Prompt, using it to input the follow-up order.
To provide context and enable the clinical IT specialists and quality/safety managers to learn
about the components of the software to be used by providers, they were shown and given the op-
portunity to interact with the Reminder Prompt prior to the usability testing of the QI Tool. The us-
ability tasks for the QI Tool involved generating database reports for particular types of alerts for a
specified time frame, as well as finding specific information within generated tables. For example,
one task was: “Please find out how many total alerts were delivered to providers at the (City) VA
Medical Center during the month of December 2010. Of those alerts, how many went unacknowl-
edged by the providers?” Interview questions addressed both the usability of the software, and the
potential fit of the software within the larger sociotechnical system, particularly with the workflow
and the organizational dimensions. Background questions were asked before the usability test. An
example question asked of providers was: “Please explain the process you go through when manag-
ing your alerts.” An example question asked of clinical IT specialists and quality/safety managers
was: “Can you tell me a little about your daily responsibilities?”
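Using the hypothetical qi_report() sketch shown earlier, the QI Tool task quoted above (counting alerts delivered in December 2010 and how many went unacknowledged) would amount to something like the following; the records are illustrative, not real data:

```python
from datetime import datetime

# Illustrative records standing in for rows from the QI database (artificial data).
alert_records = [
    {"transmitted": datetime(2010, 12, 3), "alert_type": "elevated_PSA", "provider": "prov_a",
     "patient_id": "pt_1", "acknowledged": True, "follow_up_documented": False},
    {"transmitted": datetime(2010, 12, 17), "alert_type": "abnormal_mammogram", "provider": "prov_b",
     "patient_id": "pt_2", "acknowledged": False, "follow_up_documented": False},
]

december = qi_report(alert_records, start=datetime(2010, 12, 1), end=datetime(2011, 1, 1))
print(december["total_transmitted"])  # total alerts delivered during December 2010
print(december["unacknowledged"])     # of those, how many went unacknowledged
```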
After the usability test, we asked participants about their impressions of the software and its po-
tential impact on their work. For example, “How much time do you think this process [the Rem-
inder Prompt] would add to the time you spend working with patient records?” We asked the clini-
cal IT specialists about the tasks involved in configuring and managing the Reminder Prompt and
how they envisioned using the QI Tool. For example: “How do you see [the QI Tool] affecting your
role, positively or negatively?” Additionally, we asked the quality/safety managers about how they
would use the QI Tool. For example: “How might you use this tool to look at issues related to risk of
patients not getting timely follow-up to abnormal alerts?”
3.2.3 Installation
In order to ensure that the Reminder Prompt could run on a full scale VistA/CPRS EHR with real
medical records, it was installed on a non-production (testing/development) copy of VistA/CPRS
populated with real but de-identified data.
3.2.4 Analysis
Using the recordings, the set of responses for each interview question was reviewed by the usability
team to identify patterns. Each participant’s response was then categorized, and categories were ag-
gregated. Additionally, other comments and “think aloud” verbalizations from the usability testing
were reviewed, and items pertinent to the tool’s usability or fit with workflow or organization were
noted. The initial findings were reviewed with the multidisciplinary research team to provide a sec-
ondary check on interpretation, particularly from the perspective of physicians, and from clinical IT
and safety/quality perspectives.
Observations of performance on the usability tasks, along with verbalizations and impressions
from participants, were used to identify problems with the usability of the software.
Analysis of the installation involved checking whether the software correctly responded to patient rec-
ords with and without test alerts lacking the corresponding follow-up orders. Additionally, a debrief-
ing session was conducted with the IT staff to identify areas for improvement regarding the instal-
lation and configuration process.
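As an illustration, the installation check described above could be expressed as a small automated test against the chart-exit sketch shown earlier; the records and helper names here are hypothetical:

```python
def test_popup_fires_only_when_follow_up_is_missing():
    """Sketch of the installation check: the Pop-up should appear only for records
    containing a tracked alert without a corresponding follow-up order."""
    prompted = []

    def fake_popup(alert):
        prompted.append(alert["id"])
        return "address_later"

    record_with_gap = [{"id": "alert_1", "status": "pending"}]      # abnormal alert, no follow-up
    record_without_gap = [{"id": "alert_2", "status": "pending"}]   # follow-up already documented

    check_on_chart_exit(record_with_gap, has_documented_follow_up=lambda a: False, show_popup=fake_popup)
    check_on_chart_exit(record_without_gap, has_documented_follow_up=lambda a: True, show_popup=fake_popup)

    assert prompted == ["alert_1"]   # prompt fired only where follow-up was missing
```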
4. Results
4.1 Usability
The usability testing of the Reminder Prompt indicated that many providers (11, 48%) experienced
some confusion when using the tool for the first time. Most of the confusion was related to the
wording on the Pop-up, which was identified as the primary source of confusion by 6 (26%) pro-
viders. One specific problem was using “Proceed” to label the Pop-up button responsible for advanc-
ing to the Reminder Prompt template. This conveyed to one provider that clicking the button would
proceed with closing the chart. Using this information, the button label was revised to “Address
Now.” Other problems included Pop-up text that did not adequately explain why the Pop-up had
appeared, and instructions that were not specific enough to assist in choosing a note template. The
text was modified to include this information.
No problems affecting task performance were observed with the Reminder Prompt template.
However, the heuristic evaluation of the Reminder Prompt identified terminology inconsistencies
and a layout issue that could lead to misinterpretation of a text field. These issues have been address-
ed via layout and wording changes.
The main usability problems identified for the QI Tool concerned navigation between and within
screens. For example, 100% of the clinical IT specialists and quality/safety managers (17) expected to
be able to drill down into hyperlinked data cells to obtain more detail, which was not consistently
supported by the tested version of the QI Tool. The heuristic review of the QI Tool identified issues
with the graphical formatting of the reports, the need for better navigation support, and an incon-
sistency in how date ranges are defined (compared to other tools in CPRS). Changes to navigation
and formatting in the QI Tool have been made in response to these findings.
4.3 Installation
Despite differences between the version of VistA/CPRS used during development and the version of
VistA/CPRS being run at the facility, the clinical IT team and the programmers were able to run the
Reminder Prompt on the full-scale, testing/development copy of VistA/CPRS. It operated success-
fully, including detecting the test alerts that lacked follow-up actions.
5. Discussion
The design of our software system addresses several features that have been proposed for reliable
test result follow-up [51]. Our development and evaluation process took into account multiple so-
ciotechnical factors involved with missed test results (technical, usability, and fit with workflow and
organization), and incorporated input from key user- and stakeholder-groups (resident and staff
physicians, clinical IT specialists, and quality/safety managers). The evaluation has shown that the
prototype design is efficacious in detecting specific cancer-related abnormal test results lacking fol-
low-up, presenting reminder notifications to providers, and enabling providers to enter orders for
follow-up actions or other appropriate responses. Results suggest that this is a feasible design strat-
egy for helping to reduce missed or delayed follow-up of certain high-risk abnormal test results. The
six design goals are discussed below:
6. Limitations
6.1 Limitations of Prototype Design
The software does not have the ability to detect contraindications or follow-up actions documented
in free-text notes, so the Reminder Prompt might appear unnecessarily at times. However, because the
software is intended for only a small number of high-risk, high-priority alerts, we expect these
situations to occur infrequently. In some cases the Reminder Prompt may never be triggered, for example
when the assigned provider or trainee is no longer at the facility. Even so, the QI Tool will still capture the lack
of documented follow-up on the alert, enabling safety/quality personnel to respond. Additionally,
follow-up action orders may be placed but not fulfilled; however, tracking these downstream events
is beyond the scope of this tool.
7. Conclusions
Our field-based informatics research led to the development and evaluation of a novel functional
prototype system to improve the safety of abnormal test result follow-up in EHR systems. The problem-spe-
cific Reminder Prompt supports the provider’s prospective memory while minimizing additional
reminder burden and disruptions. Furthermore, recognizing the limitations of both the notification
system and the overburdened providers who deal with hundreds of test results per week [5],
AWARE’s QI Tool supports other members of the organization in detecting patients at risk of missed
abnormal test results.
Clinical Relevance
Previous research on follow-up for abnormal test results found 7–8% lacked timely follow-up [3, 4],
which can lead to treatment delays with potential negative consequences, especially for cancer [5,
6]. This work demonstrates a feasible solution to the problem of providers having difficulty
keeping track of important abnormal test results in need of follow-up.
Statement on competing interests
All authors declare that they have no competing interests. The views expressed in this article are
those of the authors and do not necessarily represent the views of the Department of Veterans Af-
fairs.
Protection of Human Subjects
The Baylor College of Medicine Institutional Review Board approved the study.
Authors’ contributions
AE, DFS and HS conceived the product design and project. DM, AL, BR, HS and MS supervised
and participated in product design and development, and study design and data collection. MS
conducted analyses and drafted the manuscript. All authors read and approved the final manu-
script.
Acknowledgements
The authors wish to acknowledge the expert contributions of: Dana Douglas, Mark Becker, and
Dick Horst of UserWorks, Inc.; Bryan Campbell and Angela Schmeidel Randall of Normal Modes,
LLC; and Ignacio Valdes and Fred Trotter of Astronaut Contracting, LLC. Additional thanks to
Velma Payne, Varsha Modi, Kathy Taylor, Roxie Pierce, Pawan Gulati, and George Welch. Project
supported by the VA Innovation Initiative (XNV 33–124), VA National Center of Patient Safety and
in part by the Houston VA HSR&D Center of Excellence (HFP90–020). The views expressed in this
article are those of the author(s) and do not necessarily represent the views of the Department of
Veterans Affairs. There are no conflicts of interest for any authors.
Fig. 1 Process for checking for follow-up and launching reminder prompt.
References
1. Singh H, Naik A, Rao R, Petersen L. Reducing Diagnostic Errors Through Effective Communication: Har-
nessing the Power of Information Technology. Journal of General Internal Medicine 2008; 23(4): 489-494.
2. Singh H, Graber M. Reducing diagnostic error through medical home-based primary care reform. JAMA
2010; 304(4): 463-464.
3. Singh H, Thomas EJ, Mani S, Sittig DF, Arora H, Espadas D, Khan MM, Petersen LA. Timely follow-up of
abnormal diagnostic imaging test results in an outpatient setting: are electronic medical records achieving
their potential? Arch Intern Med 2009; 169(17): 1578-1586.
4. Singh H, Thomas EJ, Sittig DF, Wilson L, Espadas D, Khan MM, Petersen LA. Notification of abnormal lab
test results in an electronic medical record: do any safety concerns remain? Am J Med 2010; 123(3):
238-244.
5. Poon E, Gandhi T, Sequist T, Murff H, Karson A, Bates D. “I wish I had seen this test result earlier!”: Dis-
satisfaction with test result management systems in primary care. Archives of internal medicine 2004;
164(20): 2223-2228.
6. Wahls T. Diagnostic errors and abnormal diagnostic tests lost to follow-up: a source of needless waste and
delay to treatment. The Journal of ambulatory care management 2007; 30(4): 338-343.
7. Singh H, Sethi S, Raber M, Petersen LA. Errors in cancer diagnosis: current understanding and future di-
rections. J Clin Oncol 2007; 25(31): 5009-5018.
8. Nemeth CP, Cook RI, Woods DD. The Messy Details: Insights From the Study of Technical Work in
Healthcare. IEEE Transactions on Systems, Man, and Cybernetics – Part A: Systems and Humans 2004;
34(6): 689-692.
9. Carayon P, Schoofs Hundt A, Karsh BT, Gurses AP, Alvarado CJ, Smith M, Flatley Brennan P. Work system design
for patient safety: the SEIPS model. Quality & safety in health care 2006; 15 Suppl 1(suppl 1): i50-i58.
10. Wears R, Berg M. Computer technology and clinical work: still waiting for Godot. JAMA 2005; 293(10):
1261-1263.
11. Sittig DF, Singh H. A new sociotechnical model for studying health information technology in complex
adaptive healthcare systems. Quality and Safety in Health Care 2010; 19(Suppl 3): i68-i74.
12. Carayon P, Bass E, Bellandi T, Gurses A, Hallbeck S, Mollo V. Socio-Technical Systems Analysis in Health
Care: A Research Agenda. IIE transactions on healthcare systems engineering 2011; 1(1): 145–60.
13. Graber M, Franklin N, Gordon R. Diagnostic error in internal medicine. Archives of internal medicine
2005; 165(13): 1493-1499.
14. Raab S, Grzybicki D. Quality in cancer diagnosis. CA: a cancer journal for clinicians 2010; 60(3): 139–65.
15. Singh H, Spitzmueller C, Petersen NJ, Sawhney MK, Sittig DF. Information Overload and Missed Test Re-
sults in Electronic Health Record Based Settings. JAMA Internal Medicine 2013; 1–3.
16. Wears R, Perry S, Patterson E. Handoffs and Transitions of Care. In P Carayon (ed) Handbook of Human
Factors and Ergonomics in Healthcare and Patient Safety. L. Erlbaum Associates Inc.; 2006: 163-172.
17. Pennathur P, Bass E, Rayo M, Perry S, Rosen M, Gurses A. Handoff Communication: Implications For
Design. Proceedings of the Human Factors and Ergonomics Society Annual Meeting 2012; 56(1): 863-866.
18. Ash JS, Sittig DF, Poon EG, Guappone K, Campbell E, Dykstra RH. The extent and importance of unin-
tended consequences related to computerized provider order entry. J Am Med Inform Assoc 2007; 14(4):
415-423.
19. Lawler E, Hedge A, Pavlovic-Veselinovic S. Cognitive ergonomics, socio-technical systems, and the im-
pact of healthcare information technologies. International Journal of Industrial Ergonomics 2011 Apr.
20. Singh H, Spitzmueller C, Petersen NJ, Sawhney MK, Smith MW, Murphy DR, Espadas D, Laxmisan A, Sit-
tig DF. Primary care practitioners’ views on test result management in EHR-enabled health systems: a
national survey. J Am Med Inform Assoc 2012.
21. Dismukes K. Remembrance of Things Future: Prospective Memory in Laboratory, Workplace, and Every-
day Settings. In Reviews of human factors and ergonomics. Human Factors and Ergonomics Society 2010:
79–122.
22. Westbrook J, Coiera E, Dunsmuir W, Brown B, Kelk N, Paoloni R, Tran C. The impact of interruptions on
clinical task completion. Quality and Safety in Health Care 2010; 19(4): 284-289.
23. Laxmisan A, Hakimzada F, Sayan O, Green R, Zhang J, Patel V. The multitasking clinician: decision-mak-
ing and cognitive demand during and after team handoffs in emergency care. International Journal of
Medical Informatics 2007; 76(11–12): 801-811.
24. Reason J. Combating omission errors through task analysis and good reminders. Qual Saf Health Care
2002; 11(1): 40-44.
25. Saleem JJ, Russ AL, Sanderson P, Johnson TR, Zhang J, Sittig DF. Current challenges and opportunities for
better integration of human factors research with development of clinical information systems. Yearb Med
Inform 2009; 48–58.
26. Hutchins E. How a cockpit remembers its speeds. Cognitive Science 1995; 19(3): 265-288.
27. Murphy D, Reis B, Sittig DF, Singh H. Notifications Received by Primary Care Practitioners in Electronic
Health Records: A Taxonomy and Time Analysis. American Journal of Medicine 2012; 125(2): 209.e1.
28. Hysong S, Sawhney M, Wilson L, Sittig D, Esquivel A, Singh S, Singh H. Understanding the Management
of Electronic Test Result Notifications in the Outpatient Setting. BMC Med Inform Decis Mak 2011; 11(1):
22.
29. Murphy D, Reis B, Kadiyala H, Hirani K, Sittig D, Khan M, Singh H. Electronic Health Record-Based
Messages to Primary Care Providers: Valuable Information or Just Noise? Arch Intern Med 2012; 172(3):
283-285.
30. van der Sijs H, Aarts J, Vulto A, Berg M. Overriding of drug safety alerts in computerized physician order
entry. Journal of the American Medical Informatics Association : JAMIA 2006; 13(2): 138-147.
31. Sorkin R, Woods D. Systems with Human Monitors: A Signal Detection Analysis. Human–Computer In-
teraction 1985; 1(1): 49–75.
32. Singh H, Davis Giardina T, Petersen L, Smith M, Paul L, Dismukes K, Bhagwath G, Thomas E. Exploring
situational awareness in diagnostic errors in primary care. BMJ Quality & Safety 2011 Sep 2.
33. Singh H, Arora H, Vij M, Rao R, Khan MM, Petersen L. Communication outcomes of critical imaging re-
sults in a computerized notification system. J Am Med Inform Assoc 2007; 14(4): 459-466.
34. Singh H, Wilson L, Petersen L, Sawhney MK, Reis B, Espadas D, Sittig DF. Improving follow-up of abnor-
mal cancer screens using electronic health records: trust but verify test result communication. BMC Medi-
cal Informatics and Decision Making 2009; 9(49).
35. Singh H, Vij MS. Eight recommendations for policies for communicating abnormal test results. Jt Comm J
Qual Patient Saf 2010; 36(5): 226-232.
36. Hysong SJ, Sawhney MK, Wilson L, Sittig DF, Espadas D, Davis T, Singh H. Provider management strat-
egies of abnormal test result alerts: a cognitive task analysis. J Am Med Inform Assoc 2010; 17(1): 71-77.
37. Singh H, Wilson L, Reis B, Sawhney MK, Espadas D, Sittig DF. Ten Strategies to Improve Management of
Abnormal Test Result Alerts in the Electronic Health Record. Journal of Patient Safety 2010; 6(2): 121-123.
38. Gandhi TK, Kachalia A., Thomas EJ, Puopolo A.L., Yoon C, Brennan TA, Studdert DM. Missed and de-
layed diagnoses in the ambulatory setting: A study of closed malpractice claims. Ann Intern Med 2006;
145(7): 488-496.
39. Nepple KG, Joudi FN, Hillis SL, Wahls TL. Prevalence of delayed clinician response to elevated prostate-
specific antigen values. Mayo Clin Proc 2008; 83(4): 439-448.
40. Singh H, Kadiyala H, Bhagwath G, Shethia A, El-Serag H, Walder A, Velez ME, Petersen LA. Using a
multifaceted approach to improve the follow-up of positive fecal occult blood test results. Am J Gastroen-
terol 2009; 104(4): 942-952.
41. Singh H, Hirani K, Kadiyala H, Rudomiotov O, Davis T, Khan MM, Wahls TL. Characteristics and predic-
tors of missed opportunities in lung cancer diagnosis: an electronic health record-based study. J Clin
Oncol 2010; 28(20): 3307-3315.
42. Zeliadt SB, Hoffman RM, Etzioni R, Ginger VA, Lin DW. What happens after an elevated PSA test: the ex-
perience of 13,591 veterans. J Gen Intern Med 2010; 25(11): 1205-1210.
43. Saleem J, Patterson E, Militello L, Render M, Orshansky G, Asch S. Exploring Barriers and Facilitators to
the Use of Computerized Clinical Reminders. J Am Med Inform Assoc 2005; 12(4): 438-447.
44. Grundgeiger T, Sanderson PM, MacDougall HG, Venkatesh B. Distributed prospective memory: An ap-
proach to understanding how nurses remember tasks. In Proceedings of the Human Factors and Ergo-
nomics Society Annual Meeting. SAGE Publications; 2009: 759-763.
45. Woods DD, Watts JC. How not to have to navigate through too many displays. In: Helander MG, Land-
auer TK, Prabhu PV, editors. Handbook of Human-Computer Interaction, Second Edition. North Hol-
land; 1997: 617–50.
46. Jaspers MW. A comparison of usability methods for testing interactive health technologies: Methodologi-
cal aspects and empirical evidence. International Journal of Medical Informatics 2009; 78(5): 340-353.
47. Smith PJ, Stone RB, Spencer AL. Applying cognitive psychology to system development. In: Marras WS,
Karwowski W, editors. Fundamentals and Assessment Tools for Occupational Ergonomics (The Occupa-
tional Ergonomics Handbook, Second Edition). CRC Press; 2006: 24–1.
48. Ericsson A, Simon H. Verbal Reports as Data. Psychological Review 1980; 87(3): 215-251.
49. Dumas JS, Salzman MC. Usability Assessment Methods. Reviews of Human Factors and Ergonomics
2006; 2(1): 109-140.
50. Nielsen J. Finding usability problems through heuristic evaluation. In Proceedings of the SIGCHI confer-
ence on Human factors in computing systems. ACM 1992: 373-380.
51. Schiff G. Medical Error. JAMA 2011; 305(18): 1890-1898.
52. Poon E, Wang S, Gandhi T, Bates D, Kuperman G. Design and implementation of a comprehensive out-
patient Results Manager. Journal of Biomedical Informatics 2003; 36(1–2): 80–91.
53. Guerlain S, Smith P, Obradovich J, Rudmann S, Strohm P, Smith J, Svirbely J, Sachs L. Interactive Criti-
quing as a Form of Decision Support: An Empirical Evaluation. Human Factors: The Journal of the
Human Factors and Ergonomics Society 1999; 72–89.
54. Smith PJ, McCoy CE, Layton C. Brittleness in the design of cooperative problem-solving systems: the ef-
fects on user performance. Systems, Man and Cybernetics, Part A: Systems and Humans, IEEE Transac-
tions on 1997; 27(3): 360-371.
55. Adamczyk P, Bailey B. If not now, when?: the effects of interruption at different moments within task
execution. In Proceedings of the SIGCHI conference on Human factors in computing systems. Vienna,
Austria: ACM; 2004: 271-278.
56. Grudin J. Groupware and social dynamics: eight challenges for developers. Communications of the ACM
1994; 37(1): 92–105.
57. Leveson N, Dulac N, Marais K, Carroll J. Moving Beyond Normal Accidents and High Reliability Organ-
izations: A Systems Approach to Safety in Complex Systems. Organization Studies 2009; 30(2–3): 227-249.
58. Karsh BT, Weinger M, Abbott P, Wears R. Health information technology: fallacies and sober realities. J
Am Med Inform Assoc 2010; 17(6): 617-623.
59. McDonald C, McDonald M. INVITED COMMENTARY – Electronic Medical Records and Preserving
Primary Care Physicians’ Time: Comment on “Electronic Health Record-Based Messages to Primary Care
Providers”. Arch Intern Med 2012; 172(3): 285-287.