JAMDA 13 (2012) 611–617
Original Study
MDS 3.0: Brief Interview for Mental Status
Debra Saliba MD, MPH a, *, Joan Buchanan PhD b, Maria Orlando Edelen PhD c, Joel Streim MD d,
Joseph Ouslander MD e, Dan Berlowitz MD f, Joshua Chodosh MD, MSS g
a UCLA/Jewish Home Borun Center for Gerontological Research, Los Angeles, CA; Greater Los Angeles VA GRECC and HSR&D Center of Excellence; RAND, Santa Monica, CA
b Department of Health Care Policy (retired), Harvard Medical School, Boston, MA
c RAND, Boston, MA
d Veterans Integrated Service Network 4, Mental Illness Research Education and Clinical Center, Philadelphia Veterans Affairs Medical Center, Philadelphia, PA; Geriatric Psychiatry Section, Department of Psychiatry, University of Pennsylvania, Philadelphia, PA
e Charles E. Schmidt College of Medicine, Florida Atlantic University, Boca Raton, FL; University of Miami Miller School of Medicine, Miami, FL
f Center for Health Quality, Outcomes and Economic Research, Bedford VA Hospital, Bedford, MA; Boston University School of Public Health, Boston, MA
g Greater Los Angeles VA GRECC and HSR&D Center of Excellence, UCLA Department of Medicine, Los Angeles, CA
Abstract
Keywords: Cognition, nursing home, assessment, Minimum Data Set, screening, interview, procedural memory
Objectives: To test the feasibility and validity of the Brief Interview for Mental Status (BIMS) as
a performance-based cognitive screener that could be easily completed by nursing home staff. The
current study examines the performance of the BIMS as part of the national testing of the Minimum Data
Set 3.0 (MDS 3.0) for Nursing Homes.
Methods: The BIMS was tested as part of the national MDS 3.0 evaluation study among 3822 residents
scheduled for MDS 2.0 assessments. Residents were from 71 community nursing homes (NHs) in eight
states. Residents were randomly included in a feasibility sample (n = 3258) and a validation sample (n = 418). Cognition was assessed with three instruments: the Brief Interview for Mental Status (BIMS),
the MDS 2.0 Cognitive Performance Scale (CPS), and the Modified Mini-Mental State Examination (3MS).
Trained research nurses administered the 3MS and BIMS to all subjects in the validation study. The CPS
score was determined based on the MDS 2.0 completed by nursing home staff who had undergone
additional training on cognitive testing. Standard cutoff scores on the 100-point 3MS were used as the
gold standard for any cognitive impairment (<78) and for severe impairment (<48). Staff impressions
were obtained from anonymous surveys.
Results: The BIMS was attempted and completed in 90% of the 3258 residents in the feasibility sample. BIMS scores covered the full instrument range (0–15). In the validation sample, correlation with the criterion measure (3MS) was higher for BIMS (0.906, P < .0001) than for CPS (−0.739, P < .0001); P < .01 for difference. For identifying any impairment, a BIMS score of 12 had sensitivity = 0.83 and specificity = 0.91; for severe impairment, a BIMS score of 7 had sensitivity = 0.83 and specificity = 0.92. The area under the receiver operating characteristic curve, a measure of test accuracy, was higher for BIMS than for CPS for identifying any impairment (AUC = 0.930 and 0.824, respectively) and for identifying severe impairment (AUC = 0.960 and 0.857, respectively). Eighty-eight percent of survey respondents reported
that the BIMS provided new insight into residents’ cognitive abilities. The average time for completing
the BIMS was 3.2 minutes.
Discussion: The BIMS, a short performance-based cognitive screener expressly designed to facilitate
cognitive screening in MDS assessments, was completed in the majority of NH residents scheduled for
MDS assessments in a large sample of NHs, demonstrating its feasibility. Compared with MDS 2.0
observational items, the BIMS performance-based assessment approach was more highly correlated with
a criterion cognitive screening test and demonstrated greater accuracy. The majority of surveyed staff
reported improved assessments with the new approach.
Published by Elsevier Inc. on behalf of the American Medical Directors Association, Inc.
This work was funded by the U.S. Department of Veterans Affairs, Veterans
Health Administration, Health Services Research and Development (HSR&D)
Service (Project SDR 03-217), the Centers for Medicare & Medicaid Services, and the
UCLA/JH Borun Center. The views expressed in this article are those of the authors
and do not necessarily reflect the position or policy of the U.S. Department of
Veterans Affairs or the Centers for Medicare and Medicaid Services.
* Address correspondence to Debra Saliba, MD, MPH, UCLA Borun Center for
Gerontological Research, 10945 Le Conte Ave, Suite 2339, Los Angeles, CA 90095.
E-mail address: [email protected] (D. Saliba).
1525-8610/$ - see front matter Published by Elsevier Inc. on behalf of the American Medical Directors Association, Inc.
http://dx.doi.org/10.1016/j.jamda.2012.06.004
More than 50% of nursing home (NH) residents are reported to
have cognitive impairment.1,2 Cognitive impairment is associated
with poorer functional outcomes3,4 and influences resource and
support needs. Changes in cognitive function may indicate an
important clinical change related to delirium or significant mood
disorder.5 Reliable and valid screening is therefore important for
identifying residents who need further evaluation and modified care
planning. For these reasons, the Minimum Data Set (MDS) has
traditionally included items focused on cognitive function.
The MDS 2.0 cognitive assessment items have been based on synthesizing staff members' subjective observations of the resident.
These items can be used to calculate the Cognitive Performance Scale
(CPS) for research or case-mix purposes.6 Although CPS scores based
on research-nurse cognitive assessments have strong to moderate correlation with Mini-Mental State Examination (MMSE) scores,6–9 the
subjective nature of the items increases the risk for misclassification
because of variation in staff vigilance or because of assumptions
based on resident appearance or age. While MDS 2.0 misclassification
may seem insignificant on a population basis, incorrect cognitive
screening can have serious implications for care planning at the
patient level.10
In addition, the cognitive items in MDS 2.0 are unique to the NH
setting and do not align with items used or recognized by providers
in other settings. A related limitation of the MDS 2.0 cognitive
assessment is that derivation of the CPS score requires application of
an algorithm; as a result, scores are not typically transparent to
facility staff members. This effectively limits the assessment’s impact
on communication across providers.
In the early phases of national efforts to develop MDS 3.0, the MDS
2.0 cognitive items were identified as potentially problematic.
Written feedback from providers and content experts included strong
objections to the subjective nature of MDS 2.0 cognitive items.
Providers especially objected that the “memory OK” items were
subjective and ill-defined. An MDS 3.0 validation panel rated the
individual MDS 2.0 memory items, when scored by nursing home
staff, as having indeterminate validity. In addition, facility nurses
expressed discomfort with trying to accurately complete these
subjective assessments. Only 29% of the nurses in our baseline survey
on attitudes surrounding MDS 2.0 reported that the MDS 2.0 cognitive items were easy to complete accurately.
Because subjective screening is more likely to err in identifying
cognitive impairment than objective testing11–14 and is more likely to
be influenced by unrelated patient characteristics and staff attitudes,15,16 objective performance-based testing is the preferred
approach for cognitive assessment,17,18 reserving subjective assessments for instances when residents cannot communicate.19 Another
primary rationale for performance-based cognitive testing is the key
role that structured cognitive assessment plays in identifying
delirium. Delirium, an extremely important medical condition, is
often missed in NHs as well as in hospital settings. Valid delirium
screening protocols rely on structured, objective cognitive screens
to better observe delirium-related behaviors.20,21
For these reasons, we developed a simple performance-based
cognitive screen, the Brief Interview for Mental Status (BIMS).22
Because of the number of domains assessed in MDS and staff
concerns about burdensome assessment, we aimed to develop a short
screen that could be reliably and rapidly administered by staff. Initial
testing in 374 Veterans showed that, after eliminating poorly performing judgment and organized thinking items, the BIMS could be
used by NH staff and had higher correlations with criterion cognitive
screening than was demonstrated by the CPS. These results were
consistent whether the BIMS was collected by research or by facility
staff. In addition, staff reported increased confidence in the accuracy
of their cognitive assessments when using the structured assessment
instead of the current MDS 2.0 syntheses of observations. The finding
that staff could use structured cognitive assessments opened the door
to including in MDS 3.0 cognitive items with greater recognition and
credence in other settings, potentially improving communication
with providers.
The current study examines the performance of the BIMS as part of
the national testing of the Minimum Data Set 3.0 (MDS 3.0) for Nursing
Homes. In this national evaluation, we aimed to test the feasibility and
validity of the Brief Interview for Mental Status (BIMS) as a performance-based cognitive screener that could be easily completed by
nursing home staff in a broader sample of community NHs.
Methods
Sample
Sample selection is detailed in the description of overall MDS 3.0
development and testing.23 In brief, 3822 residents who were
scheduled for MDS 2.0 assessments were assigned to the study based
on the type of scheduled assessment, with preference for full and
annual assessments. The only exclusion criterion was comatose
status. From the larger sample scheduled for MDS assessments, an
algorithm was used to randomly assign 3258 scheduled assessments
to a feasibility sample (further divided into reliability and facility only
samples), 141 to a test of research nurse agreement in collecting 3MS,
and 419 residents to the validation sample. Nurses were trained in the
randomization process and were provided templates to guide
assignment.
We instructed staff members to approach for BIMS testing all the
residents who were scheduled for MDS 2.0 assessments during the
data collection window and who were capable of any communication.
If a resident’s primary mode of communication was writing, assessors
were allowed to present the questions on separate sheets of paper or
cue cards. For residents interviewed with BIMS, the MDS 2.0 that was
collected by NH staff was also obtained. As part of the study, NH staff
received training on completing cognitive assessments, including
MDS 2.0 cognitive items.
Trained research nurses collected a more detailed cognitive
assessment, the 3MS (described further in measures below), in those
residents who were assigned to the validation arm of the study. For
the residents in the validation sample, one research nurse completed
the BIMS and another independently completed the 3MS. The order
and assignment of these interviews were intentionally mixed for each
research nurse pair.
The RAND institutional review board (IRB) approved the data
collection and the data safeguarding plan. The RAND, VA, and Harvard
IRBs granted a waiver of informed consent because the evaluation
was an activity designed to improve CMS and VA program operations
surrounding the MDS assessment. Residents were able to decline to
participate in the interviews.
Measures
BIMS
The BIMS performance-based screen includes temporal orientation and recall items, common elements in frequently used cognitive
screening tools.19,24 The response scales allow differential scoring for temporal orientation answers that are "close" to correct and partial credit when a resident could recall an item after being prompted or cued. Possible BIMS scores range from 15 (all items
correct) to 0 (no items correct). The categories for BIMS cognitive
score (intact/borderline, moderate impairment, severe impairment)
were based on pilot testing.22 The national study also collected the
MDS 2.0, and we calculated a CPS score based on published
algorithms. Possible CPS scores range from 0 (no impairment) to 6
(severe impairment).
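The BIMS scoring just described can be captured in a small helper. This is an illustrative sketch only (the function and label names are ours), using the score bands that the MDS 3.0 instruction manual associates with the BIMS total (13–15 intact, 8–12 moderate impairment, 0–7 severe impairment):

```python
def bims_category(total_score: int) -> str:
    """Map a BIMS total (0 = no items correct, 15 = all items correct)
    to the screening category bands used with the BIMS."""
    if not 0 <= total_score <= 15:
        raise ValueError("BIMS total must be in the range 0-15")
    if total_score >= 13:
        return "intact/borderline"
    if total_score >= 8:
        return "moderate impairment"
    return "severe impairment"
```

A resident scoring 11, for example, would fall in the moderate-impairment band and would warrant further evaluation rather than a diagnostic label.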
Criterion Measure
For our criterion measure of cognitive status, we used the Modified Mini-Mental State (3MS) Examination, an expanded version of the Mini-Mental State Examination (MMSE) that has greater reliability and validity than the briefer MMSE.25–28 In addition to the items used in the
MMSE, the 3MS includes four items that more broadly test cognitive
function. The 3MS uses an expanded total score of 100, increased
from 30 for the MMSE, increasing the test’s discriminatory capability
at different levels of cognitive function.
Additional Cognitive Items
We also tested, for possible inclusion in MDS 3.0, a modified set of
organized thinking items adopted from the Confusion Assessment
Method for the Intensive Care Unit (CAM-ICU)29 at the recommendation of content experts for delirium screening. The national validation test also included a proposed additional item, "Procedural memory OK: Can perform all or almost all steps in multitask sequence without cues." Response choices were "0. Memory OK; 1. Memory Problem."
Survey
Nurses who participated in the MDS national study anonymously
completed a feedback survey at the end of the national study. Specific
BIMS feedback statements in the survey were: (1) “MDS 3.0 items will
better allow staff to calculate a score and trigger resident assessment
protocols appropriately;” (2) “I prefer the specific interview questions
in C2–C5 to the MDS 2.0 staff assessment of cognition"; (3) "The interview items C2–C5 provided me or the facility with new insights
into at least one resident’s cognitive abilities.” The structured questionnaire used Likert scale responses to obtain feedback on BIMS.
Design and Timing of Assessments
To allow direct comparisons of CPS and BIMS in the same resident,
facility nurses were trained to complete MDS 2.0 cognitive items per
standard protocol before conducting BIMS cognitive status interviews.
This order was determined because we reasoned that the information
obtained from the resident during the BIMS interview might influence
MDS 2.0 assessments. Since staff were to approach all residents able to
communicate at least some of the time and were to record only the
resident’s direct response to BIMS items, we reasoned that the resident’s responses would not be influenced by staff assessments in the
medical record. As part of facility trainings, the instructions for
completing the MDS 2.0 cognitive items were reviewed.
For validation cases, the BIMS and 3MS were collected within 24
hours of each other. To minimize order effects, the order of collection
was reversed for approximately half of the sample in each facility. The
data collection was timed to start within 24 hours of the assessment
reference date for the resident’s MDS 2.0 cognitive assessment.
Interviewers were unaware of facility MDS 2.0 scores.
Analysis
The ability of residents to complete interviews was calculated as
the number of completed interviews (numerator) divided by the
number of residents in the sample (denominator).
To obtain time estimates for completing the BIMS, research nurses
entered start and stop times in hours and minutes directly on the data
collection form. These times were entered into the data set and we
calculated the time elapsed.
We calculated 3MS, CPS, and BIMS scoring based on published
values and scoring rules. We conducted two analyses to test whether
MDS 3.0 BIMS or MDS 2.0 CPS better matched the 3MS criterion
measure. We examined correlation coefficients comparing 3MS to the
BIMS and to CPS scores. We next considered sensitivity and specificity
of different BIMS and CPS cutpoints for predicting any cognitive
impairment and moderate to severe cognitive impairment (defined,
based on established cutpoints, as 3MS <78 and 3MS <48, respectively). We also calculated the area under the receiver operating
characteristic curve (AUC). The AUC is derived based on sensitivity
and specificity rates and provides a single number to reflect the
accuracy of a test (in this case the MDS 3.0 BIMS and the MDS 2.0 CPS)
relative to a gold or criterion standard (in this case the 3MS). An AUC value of 1 represents a perfect test and a value of 0.5 represents performance at chance levels. The larger the AUC, the more accurate
the test is considered to be. Statistical analyses were performed using
SAS (SAS Institute, Inc., Cary, NC).
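The cutpoint analysis described above (sensitivity and specificity at each threshold, summarized by the trapezoidal area under the ROC curve) can be sketched as follows; the helper names and the toy data in the usage note are ours for illustration and are not study data:

```python
def sens_spec(scores, impaired, cutoff):
    """Sensitivity and specificity when 'score <= cutoff' flags impairment
    (the BIMS convention: lower scores indicate more impairment)."""
    tp = sum(1 for s, i in zip(scores, impaired) if i and s <= cutoff)
    fn = sum(1 for s, i in zip(scores, impaired) if i and s > cutoff)
    tn = sum(1 for s, i in zip(scores, impaired) if not i and s > cutoff)
    fp = sum(1 for s, i in zip(scores, impaired) if not i and s <= cutoff)
    return tp / (tp + fn), tn / (tn + fp)


def auc(scores, impaired):
    """Trapezoidal area under the ROC curve across all BIMS cutoffs (0-15)."""
    points = sorted(
        (1 - spec, sens)
        for cutoff in range(-1, 16)
        for sens, spec in [sens_spec(scores, impaired, cutoff)]
    )
    points = [(0.0, 0.0)] + points + [(1.0, 1.0)]
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(points, points[1:]))
```

As noted above, an AUC of 1 indicates perfect discrimination against the criterion and 0.5 indicates chance-level performance.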
For participating staff responses to the anonymous survey
described above, we recoded the 5-point Likert scale (strongly agree,
agree, neutral, disagree, strongly disagree) into three categories as
follows: strongly agree or agree, neutral, disagree or strongly
disagree. Frequency responses were entered into Excel, and percentages were calculated.
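The recoding of the 5-point Likert responses into three categories can be sketched in a few lines (the mapping mirrors the text above; the names are ours, and the study itself tallied frequencies in Excel):

```python
# Collapse the 5-point Likert scale into the three reported categories.
COLLAPSE = {
    "strongly agree": "agree",
    "agree": "agree",
    "neutral": "neutral",
    "disagree": "disagree",
    "strongly disagree": "disagree",
}


def recode_and_tally(responses):
    """Recode each response and return the percentage in each category."""
    counts = {"agree": 0, "neutral": 0, "disagree": 0}
    for response in responses:
        counts[COLLAPSE[response]] += 1
    return {k: round(100 * v / len(responses)) for k, v in counts.items()}
```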
Results of MDS 3.0 National Testing
Ability of Residents to Complete the BIMS
The interview was attempted in 94% of the 3258 residents in the
feasibility sample (reliability and facility samples combined); 3.5% of
those who were approached for interview did not complete the
interview. Thus, of the overall sample of 3258 residents, 90%
completed the BIMS.
Time to Complete the BIMS
The average BIMS completion time, based on recorded start and
stop times for the validation interviews, was 3.2 minutes (SD 2.0; median = 3.0 minutes; mode = 2.0).
Staff Feedback on BIMS
National survey feedback from BIMS users echoed what we heard
in the pilot study. Eighty-eight percent reported that the BIMS
provided new insights into residents' cognitive abilities (7% neutral;
4% disagreed/strongly disagreed). Eighty percent strongly agreed or
agreed that BIMS would better allow staff to calculate a score and
trigger resident assessment protocols (15% neutral, 4% disagreed/
strongly disagreed). Seventy-eight percent of the survey respondents
strongly agreed or agreed that they preferred the MDS 3.0 interview
to the MDS 2.0 assessment approach (9% neutral, 13% disagreed/
strongly disagreed).
Validation Results
Table 1 shows the age distribution for the 418 residents in the
validation analyses. The validation sample included residents with
a broad range of cognitive scores. BIMS scores ranged from 0 to 15. 3MS scores ranged from 0 to 100. CPS scores ranged from 0 (intact) to 6
(severe impairment). Table 2 shows the distribution of scores for each
cognitive assessment.
We examined the strength of the correlation of the 3MS criterion
with the BIMS and with the CPS in the validation sample. The
correlation coefficient between BIMS and 3MS was 0.906 (P < .0001)
while the correlation coefficient between CPS and 3MS was −0.739
(P < .0001). Thus, both were significantly correlated with the criterion
measure, but the BIMS was more highly correlated (P < .01 for difference).

Table 1
Age Distribution for MDS 3.0 Validation Sample

Age      Percent (%) (n = 418)
<65      15
65–84    43
85+      42

Table 3
Sensitivity and Specificity for Identifying Any Impairment*

BIMS score    Sensitivity    Specificity
10            .65            .99
11            .73            .97
12            .83            .91
13            .90            .84

*3MS <78

To understand whether age influenced the strength of the
relationship between the 3MS and the different screeners, we also
considered a model that included age. For the model of the association between BIMS score and 3MS, age was not significant (P = .3623). In the model testing the association between CPS and 3MS, age was significant (P = .0001). These findings suggest that age was not
important in the association between BIMS and 3MS, while the
strength of the association between CPS and 3MS was influenced by
the resident’s age.
We next examined sensitivity and specificity for identifying any
cognitive impairment (Table 3) and for identifying severe cognitive
impairment (Table 4) for both the BIMS and the CPS. Examination of
the results in these tables indicates better performance of the BIMS.
The AUC for identifying any impairment was .930 for the BIMS and
.824 for the CPS. The AUC for identifying moderate/severe impairment was .960 for the BIMS and .857 for the CPS.
Additional Items Tested
In validation testing, one trained research nurse used staff
observation and chart review to code the “Procedural Memory OK”
item. The other trained research nurse in the pair independently
conducted the 3MS exam that included a 3-step command performance test. As with the overall validation protocol, the trained
research nurses alternated the order and assignment for testing based
on an algorithm to avoid order effects. Correlation between the items
was only modest (−.32, P < .001).
Also consistent with pilot testing, the disorganized thinking items
(not part of the BIMS) were less highly rated by staff participating in
the evaluation of the MDS 3.0. Sixty-one percent of survey respondents reported that the organized thinking were insulting to residents. In analyses the addition of these items to the BIMS score did
not improve the agreement between BIMS and the criterion measure
nor were these items significant in a model testing their contribution.
Because the primary reason for including these items was to improve
screening for delirium, we tested whether responses to these items
were associated with identification of delirium using the CAM; they
were not (χ2 = 0.052; P = .82).
Discussion
These results confirm the findings from the earlier pilot testing of the BIMS.22 The BIMS, a structured brief cognitive screener, was completed by 90% of nursing home residents scheduled for MDS 2.0 assessments and was more highly correlated with a criterion measure of cognition than was the CPS calculated from the more subjective MDS 2.0 assessment. The area under the receiver operating characteristic curve, a measure of how well a test captures a criterion measure, was also significantly higher for the BIMS than for the CPS.

Table 2
Distribution of Scores for Each Cognitive Assessment

Category                                % of Validation Sample in Each Category
                                        BIMS    CPS    3MS
Intact or borderline/mild impairment    48      36     43
Moderate impairment                     26      52     30
Severe impairment                       27      12     26

BIMS, Brief Interview for Mental Status; CPS, Cognitive Performance Scale; 3MS, Modified Mini-Mental State Examination.

Table 3 (continued)
Sensitivity and Specificity for Identifying Any Impairment*

CPS score    Sensitivity    Specificity
2            .93            .49
3            .84            .67
4            .59            .88
5            .02            .98

BIMS, Brief Interview for Mental Status; CPS, Cognitive Performance Scale; 3MS, Modified Mini-Mental State Examination.
*3MS <78
The average time to complete the BIMS was 3.2 minutes, representing
a modest time investment, an important consideration given
the overall number of domains addressed in MDS. The BIMS assessment was preferred by the majority of staff and had the added
advantage of giving staff members a brief performance test to
improve use of standardized assessments for delirium signs and
symptoms.
The most likely explanation for this differential performance is
that performance-based testing avoids the subjectivity of unstructured observational assessment. Our analyses showed that the
strength of the association between BIMS and 3MS score was not
associated with resident age, unlike the relationship between CPS
score and 3MS. This difference may reflect the tendency of some staff
to be biased by resident age when scoring the MDS 2.0 items used to
calculate the CPS. The influence of staff perceptions may explain why
age was a significant predictor in the relationship between the staff
assessment of cognition and a criterion screener. Although not part of
BIMS, the procedural item that we tested may provide the most direct
comparison of subjective assessment and performance-based testing.
This item compared a synthesis of staff report and chart documentation to the actual performance of a three-step command. These
approaches, although associated statistically, had only modest
correlation.
Based on the results of this national study, the BIMS was included
in the final MDS 3.0 assessment form. Assessors are instructed to
attempt the BIMS with all residents who are capable of some
communication. The instruction manual provides specific instructions for completing and coding the BIMS, but many of the instructions are on the form (Appendix A shows the current MDS 3.0 BIMS items).

Table 4
Sensitivity and Specificity for Identifying Severe Impairment*

BIMS score    Sensitivity    Specificity
7             .83            .92
6             .79            .95
5             .73            .97
4             .61            .98

CPS score     Sensitivity    Specificity
3             1.0            .33
4             .96            .49
5             .82            .75

BIMS, Brief Interview for Mental Status; CPS, Cognitive Performance Scale; 3MS, Modified Mini-Mental State Examination.
*3MS <48

The instruction manual also notes "scores from a carefully
conducted BIMS assessment where residents can hear all questions
and the resident is not delirious suggest the following distributions:
13–15: cognitively intact; 8–12: moderately impaired; 0–7: severe
impairment.” For residents who cannot communicate, the MDS 3.0
form, consistent with the national test, includes the MDS 2.0 staff
items and assessors are instructed to complete these staff items based
on staff interviews across all shifts, family interview, chart review,
and direct observations and interactions with the resident. These data
sources for staff assessment are the same as those included in the
MDS 2.0 instruction manual.
It is important to note that, although direct or performance-based
testing decreases the chance of incorrect labeling of cognitive ability,
the BIMS is not a diagnostic test. Additional evaluation is needed to
rule out reversible causes of impairment or to diagnose dementia.30
The BIMS provides information on a resident’s performance in some
recognized domains of cognitive performance: attention, temporal
orientation, and item recall. As a brief screen for cognitive impairment,
it covers only a few cognitive domains and does not test for some forms
of cognitive impairment such as impaired executive function or mild cognitive impairment (MCI).24 In addition, the BIMS items may
not be sensitive for all forms of dementia. Facilities may elect to
employ longer cognitive assessments to achieve more sensitive and
specific testing. Nonetheless, for many facilities, the introduction of
specific direct items on the form will be a step forward in the use of
standardized items and should reduce the proportion of residents
whose cognitive abilities are incorrectly characterized.
In our pilot testing and in national testing, we trained staff to
complete the BIMS. We also trained on MDS 2.0 cognitive items because
of the number of questions we received in pilot trainings. Training
for MDS 2.0 cognitive items focused more on defining, based on the
MDS 2.0 instruction manual, the variables being reported. For the BIMS,
some assessors expressed initial reluctance to conduct direct testing.
Our training therefore included a brief rationale for direct testing, review
of the items, a modeled BIMS assessment and peer practice. Although
either training requires a time investment, in the case of the BIMS, it
also gives staff potentially useful assessment skills applicable beyond
the mandated MDS assessment. After national testing, we developed
the Video on Interviewing Vulnerable Elders (VIVE). VIVE includes
a demonstration of how to include residents in structured interviews,
including the BIMS. VIVE is publicly available for use in training (Available at: http://pickerinstitute.org/vive-interviewing-vulnerable-elders/ and https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/NursingHomeQualityInits/NHQIMDS30TrainingMaterials.html. Accessed May 15, 2012.).
The direct assessment of a resident’s mental state can provide
a more informed understanding of the resident’s function that may
be used to enhance future communication and assistance. The BIMS
provides an opportunity to observe attention, understand extent of
any disorientation and observe whether prompts assist in recall. This
understanding could assist in creating a more resident-specific care
plan. For example, knowing that a resident can learn new information
but has difficulty with retrieval of that information should prompt
use of posted reminders or other prompts to enhance resident
independence. In addition to providing a performance-based test
that can be a foundation for improving initial delirium assessment,
the BIMS also provides an opportunity for earlier recognition of
cognitive changes over time (that may be missed by observation
alone) and may herald the onset of delirium caused by an underlying
but unrecognized disease process or an adverse medication reaction.
In pilot testing of the BIMS, we had also tested judgment and
disorganized thinking items as potential candidates to add to the
assessment. However, facility nurses in our pilot study had difficulty
with scoring these items and the items evidenced poor inter-rater
reliability for scoring.22 We, therefore, eliminated both sets of items
from BIMS scoring. In the national study, we re-tested a modified set
of disorganized thinking items. However, consistent with VHA
testing, the disorganized thinking items were less highly rated by
assessors. Since these items did not contribute to delirium identification (the primary rationale for their inclusion), the difficulty reported by staff confirmed our decision to exclude these items from the MDS.
Limitations
As noted above, the BIMS may not detect problems with executive
function or earlier, more mild forms of cognitive impairment. In
addition, we did not compare the BIMS to a clinical assessment that
applied Diagnostic and Statistical Manual of Mental Disorders IV
(DSM-IV) criteria for dementia. However, the BIMS was highly
correlated with a criterion cognitive screener, the 3MS, which has been validated against clinical diagnosis.
Conclusions
The BIMS is a performance-based cognitive screener that can be
quickly completed by nursing home staff in the overwhelming
majority of nursing home residents. It is highly correlated with longer
cognitive assessment and provides information on cognitive performance that is included in many existing cognitive tests. Residents
with incident scores in the moderate or severe range should be
evaluated by their primary care provider or mental health specialist
for exacerbating causes of cognitive impairment as well as for
possible dementing illness. For residents with scores in the intact/borderline range, providers could perform additional assessments to better characterize cognitive function for those in need of a diagnosis.
References
1. Research Findings #5: Characteristics of Nursing Home Residents, 1996.
December 2004. Agency for Healthcare Research and Quality, Rockville, MD.
Available at: http://www.meps.ahrq.gov/data_files/publications/rf5/rf5.shtml.
Accessed August 2010.
2. Magaziner J, Zimmerman S, Gruber-Baldini AL, et al. Epidemiology of Dementia
in Nursing Homes Research Group. Mortality and adverse health events in
newly admitted nursing home residents with and without dementia. J Am
Geriatr Soc 2005;53:1858–1866.
3. Aguero-Torres H, Fratiglioni L, Guo Z, et al. Dementia is the major cause of
functional dependence in the elderly: 3-year follow-up data from a population
based study. Am J Public Health 1998;88:1452e1456.
4. St. John PD, Montgomery PR, Kristjansson B, McDowell I. Cognitive scores, even
within the normal range, predict death and institutionalization. Age Ageing 2002;31:373–378.
5. Morley JE. Managing persons with dementia in nursing home: High touch
trumps high tech. J Am Med Dir Assoc 2008;9:139e146.
6. Morris JN, Fries BE, Mehr DR, et al. MDS cognitive performance scale. J Gerontol
1994;49:M174eM182.
7. Snowden M, McCormick W, Russo J, et al. Validity and responsiveness of the
minimum data set. J Am Geriatr Soc 1999;47:1000e1004.
8. Hartmaier SL, Sloane PD, Guess HA, et al. Validation of the minimum data set
cognitive performance scale: Agreement with the mini-mental state examination. J Gerontol A Biol Sci Med Sci 1995;50:M128eM133.
9. Gruber-Baldini AL, Zimmerman SI, Mortimore E, et al. The validity of the
minimum data set in measuring the cognitive impairment of persons admitted
to nursing homes. J Am Geriatr Soc 2000;48:1601e1606.
10. Singer C, Luxenberg J. Diagnosing dementia in long-term care facilities. J Am
Med Dir Assoc 2003;4:S134eS140.
11. Chodosh J, Petitti DB, Elliott M, et al. Physician recognition of cognitive
impairment: Evaluating the need for improvement. J Am Geriatr Soc 2004;52:
1051e1059.
12. Macdonald AJ, Carpenter GI. The recognition of dementia in “non-EMI” nursing
home residents in south east England. Int J Geriatr Psychiatry 2003;18:
105e108.
13. Pinholt EM, Kroenke K, Hanley JF, et al. Functional assessment of the elderly: A comparison of standard instruments with clinical judgment. Arch Intern Med 1987;147:484–488.
14. Sorenson L, Foldspang A, Gulmann N, et al. Assessment of dementia in nursing home residents by nurses and assistants: Criteria validity and determinants. Int J Geriatr Psychiatry 2001;16:615–621.
15. Kane R. Mandated assessments. In: Kane R, Kane R, editors. Assessing Older Persons. New York: Oxford University Press; 2000.
16. Borson S, Scanlan JM, Watanabe J, et al. Improving identification of cognitive impairment in primary care. Int J Geriatr Psychiatry 2006;21:349–355.
17. Registered Nurses Association of Ontario. Screening for Delirium, Dementia and Depression in Older Adults. Toronto, Canada: Registered Nurses Association of Ontario; 2003.
18. Petersen RC, Stevens JC, Ganguli M, et al. Practice parameter: Early detection of dementia: Mild cognitive impairment (an evidence-based review). Report of the Quality Standards Subcommittee of the American Academy of Neurology. Neurology 2001;56:1133–1142.
19. White H, Davis PB. Cognitive screening tests. J Gen Intern Med 1990;5:438–445.
20. Inouye SK, Foreman MD, Mion LC, et al. Nurses’ recognition of delirium and its symptoms: Comparison of nurse and researcher ratings. Arch Intern Med 2001;161:2467–2473.
21. McNicoll L, Pisani MA, Ely EW, et al. Detection of delirium in the intensive care unit: Comparison of confusion assessment method for the intensive care unit with confusion assessment method ratings. J Am Geriatr Soc 2005;53:495–500.
22. Chodosh J, Edelen MO, Buchanan JL, et al. Nursing home assessment of cognitive impairment: Development and testing of a brief instrument of mental status. J Am Geriatr Soc 2008;56:2069–2075.
23. Saliba D, Buchanan J. Making the investment count: Revision of the Minimum Data Set for nursing homes, MDS 3.0. J Am Med Dir Assoc 2012;13:602–610.
24. Tariq SH, Tumosa N, Chibnall JT, et al. Comparison of the Saint Louis University mental status examination and the mini-mental state examination for detecting dementia and mild neurocognitive disorder: A pilot study. Am J Geriatr Psychiatry 2006;14:900–910.
25. Bland RC, Newman SC. Mild dementia or cognitive impairment: The Modified Mini-Mental State Examination (3MS) as a screen for dementia. Can J Psychiatry 2001;46:506–510.
26. Grace J, Nadler JD, White DA, et al. Folstein vs modified mini-mental state examination in geriatric stroke: Stability, validity, and screening utility. Arch Neurol 1995;52:477–484.
27. McDowell I, Kristjansson B, Hill GB, et al. Community screening for dementia: The mini mental state exam (MMSE) and modified mini-mental state exam (3MS) compared. J Clin Epidemiol 1997;50:377–383.
28. Teng EL, Chui HC. The modified mini-mental state (3MS) examination. J Clin Psychiatry 1987;48:314–318.
29. Ely EW, Margolin R, Francis J, et al. Evaluation of delirium in critically ill patients: Validation of the confusion assessment method for the intensive care unit (CAM-ICU). Crit Care Med 2001;29:1370–1379.
30. Mansdorf IJ, Harrington M, Lund J, Wohl N. Neuropsychological testing in skilled nursing facilities: The failure to confirm diagnoses of dementia. J Am Med Dir Assoc 2008;9:271–274.
Appendix A