Chapter 7
Methods of Describing
Chapter Summary
This chapter illustrates methods for organizing condensed qualitative data, from highly systematic
to artistically rendered ways, for purposes of descriptive documentation. The descriptive profiles
focus on describing participants, variability, and social action.
Contents
Introduction
Describing Participants
Role-Ordered Matrix
Context Chart
Describing Variability
Construct Table
Conceptually Clustered Matrix
Folk Taxonomy
Describing Action
Vignettes
Poetic Display
Cognitive Maps
Closure and Transition
Introduction
Wolcott (1994) notes that description is qualitative representation that helps the reader see
what you saw and hear what you heard. A solid, descriptive foundation of your data enables higher
level analysis and interpretation. Usually, it is hard to explain the “hows” and “whys” of something
satisfactorily until you understand just what that something is.
You begin with a text, try out codes on it, then move to identifying patterns, categories, or
themes, and then to testing hunches and findings, aiming first to delineate the “deep structure” and
then to integrate the data into an explanatory framework. In this sense, we can speak of data
transformation as information that is condensed, clustered, sorted, and linked over time. The
researcher typically moves through a series of analysis episodes that condense more and more data
into a more and more coherent understanding of what—building a solid foundation for later analyzing
how and why (Wolcott, 1994).
Describing Participants looks at the relationship dynamics of the people you study. Describing
Variability charts the spectrum and landscape of what we’re finding in the field. And Describing
Action documents the experiences and processes of our participants in ways ranging from the systematic to the artistically rendered.
Describing Participants
The role-ordered matrix charts the essential characteristics relevant to the study of the various
participants. A context chart illustrates the hierarchies and interrelationships within, between, and
among the participants.
Role-Ordered Matrix
Description
A role-ordered matrix sorts into its rows and columns data that have been gathered from or about a
certain set of “role occupants”—data reflecting their views. The display systematically permits
comparisons across roles on issues of interest to a study and tests whether people in the same role
see issues in comparable ways (see Display 7.1).
Applications
People who live in groups and organizations, like most of us, and social scientists who study
groups and organizations know that how you see life depends, in part, on your role. A role is a
complex amalgam of expectations and actions that make up what you do, and should do, as a certain
type of actor in a setting—a family, a classroom, a committee, a hospital, a police department, or a
multinational corporation.
A role-ordered matrix groups and summarizes different people’s role-based perceptions
about selected topics or issues, enabling the researcher to compare and contrast those perceptions.
For example, mothers tend to see the world differently than fathers. Bosses tend not to see the
frustrations faced by workers, partly because they are distant from them and partly because
subordinates often censor the bad news when reporting upward. A teacher’s high-speed interactions
with several hundred children over the course of a day have a very different cast to them from the
principal’s diverse transactions with parents, vendors, secretaries, central office administrators, and
other teachers. We each experience the world differently, and a role-ordered matrix is just one way
of documenting those varied experiences.
Example
We (Miles and Huberman) draw on our school improvement study for an example. The innovation
involved is an intensive remedial program, implemented in a high school, emphasizing reading in the
subjects of English, science, and math. The question of interest is “How do people react to an
innovation when they first encounter it?” This general question can be unbundled into several
subquestions, such as the following:
• Which aspects of the innovation are salient and stand out in people’s minds?
• How do people size up the innovation in relation to its eventual implementation?
• What changes—at the classroom or organizational level—do people think the innovation will
require?
• How good a fit is the innovation to people’s previous classroom styles or to previous
organizational working arrangements?
Keeping in mind that we want to see answers to these questions broken out by different roles, we
can consider which roles—for example, teachers, department chairs, principals, central office
personnel—could be expected to attend to the innovation and could provide meaningful reactions to
it. The matrix rows could be roles, but if we want to make within-role comparisons, the rows should
probably be persons, clustered into role domains. It might be good, too, to order the roles
according to how far they are from the actual locus of the innovation—from teachers to central office
administrators. The columns can be devoted to the research subquestions. Display 7.1 shows how
this approach looks.
The researcher searches through coded write-ups for relevant data, and the data entered in each
cell are a brief summary of what the analyst found for each respondent. The main decision rule was
as follows: If it’s in the notes, and not internally contradicted, summarize it and enter a phrase
reflecting the summary. There are also “DK” (“don’t know”) entries, where data are missing because
the relevant question was never asked of that person, was asked but not answered, or was answered
ambiguously.
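For analysts who prefer to keep their condensed data in software rather than on paper, the sketch below shows one possible way to hold a role-ordered matrix, assuming Python and the pandas library are available. The person names and cell phrases are illustrative stand-ins that loosely echo the analysis discussed next; they are not the actual Display 7.1 entries.

```python
# A minimal sketch (not the actual Display 7.1 data): rows are persons
# clustered by role and ordered by distance from the innovation's locus;
# columns are the research subquestions; "DK" marks missing or ambiguous data.
import pandas as pd

ROLE_ORDER = ["teacher", "department chair", "principal", "central office"]

rows = [
    # (person, role, salient characteristics, size-up, anticipated changes, fit)
    ("Teacher A", "teacher", "prescriptive; little latitude", "demanding at first",
     "teaming", "poor fit with own style"),
    ("Teacher B", "teacher", "complex", "DK", "DK", "moderate fit"),
    ("Chair C", "department chair", "curriculum strands", "works if followed",
     "none needed", "good organizational fit"),
    ("Principal D", "principal", "DK", "DK", "DK", "DK"),
]

matrix = pd.DataFrame(
    rows,
    columns=["Person", "Role", "Salient Characteristics", "Size Up",
             "Anticipated Changes", "Fit With Previous Style/Setting"],
)

# Order the rows by role distance from the classroom, then by person.
matrix["Role"] = pd.Categorical(matrix["Role"], categories=ROLE_ORDER, ordered=True)
matrix = matrix.sort_values(["Role", "Person"]).set_index(["Role", "Person"])
print(matrix.to_string())
```

Scanning down a column of such a table is the software equivalent of the column-by-column reading described below.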
Analysis
Now, we can begin looking down the columns of the matrix, both within and across roles, to see
what is happening. Scanning the first two columns (Salient Characteristics and Size Up) shows us
that many teachers—notably in English—see the new remedial program as prescriptive, with little
latitude given for adaptation (tactics: counting and making comparisons). And the teachers who see
the innovation as prescriptive are also those who have used it the longest, suggesting that
prescriptiveness was highest when the program was first introduced (tactic: noting relations
between variables). A number of teachers also mention complexity (but note that first-year users are
more likely to see the program as simple and easy to use, suggesting program stabilization).
When we drop down to department chairs and central office administrators, the picture is
somewhat different. They are more likely to take the “big picture” view, emphasizing the
“curriculum,” “strands,” and the like. Although they too emphasize prescriptiveness (“Depends on
being used as it’s set up” or “Works if followed”), they either do not give clear answers on the issue
of complexity or (as in the case of the curriculum director, a major advocate of the program) say that
“any teacher can use [it] successfully.” But teachers, faced with an initially demanding, rigid
program, are not so sure, it seems (tactic: making comparisons).
Moving to the third column (Anticipated Classroom or Organizational Changes) of Display 7.1,
we can see role–perspective differences. Two teachers mention teaming as an anticipated change,
one that curtailed their freedom and made them accountable to peers’ schedules and working styles.
Administrators, the field notes showed, considered the teaming necessary to implement the
program’s several strands and as a way of helping weaker teachers do better through learning from
stronger ones. Even so, they do not consider it a salient change, saying either that no organizational
changes are required (“The program is designed to fit the structure”) or that they do not know
whether organizational changes were anticipated.
Finally, if we continue the making comparisons tactic, the fourth column (Fit With Previous Style
or Organizational Setting) shows a range of “personal fit” for different teachers, depending on their
views of the content, their own styles, and the organizational issues involved. The administrators,
however, uniformly emphasize good fit at the organizational level, stressing the appropriateness of
the curriculum and its fit into the existing structure; the director also invokes the fact that teachers
wrote it.
In short, a matrix of this sort lets us see how perspectives differ according to the role, as well as
within a role. In this case, users from the English department who came in at the onset of the program
had an initially tougher time than later users or math and science users. A within-role analysis,
moving across rows, shows that the superintendent, as might be expected, knows very little about the
innovation. More surprisingly, the principal does not either. In this case, a recheck with the field
notes (tactic: following up surprises) told the field-worker that the formal role description for high
school principals in this district actually forbids them from making curriculum decisions, which are
the province of the curriculum director and department chairs.
We also can apply the tactic of making if-then tests. If the director and the chairs have a shared
province of work (curriculum decisions), then their views of the innovation should resemble each
other more closely than the teachers’ views. Looking vertically once again, we can see that
department chairs’ views are much more like those of central office administrators than those of
teachers.
The role-ordered matrix display emphasizes different roles as sources of data and perceptions. It
is also possible to develop a role-ordered matrix that treats roles as targets of others’ actions or
perceptions. (For example, how are teachers treated by department chairs, principals, and central
office personnel?)
Clarify the list of roles you consider to be most relevant to the issue at hand; avoid overloading the
matrix with roles that are clearly peripheral. Differentiate the matrix by subroles (e.g., teachers of
math or science) if relevant. If your case is an individual, role-ordered matrices may well be helpful
in showing how role partners view or interact with the person at the center of your case.
Notes
Indicate clearly when data are missing, unclear, or not asked for in the first place. Return to field
notes to test emerging conclusions, particularly if the decision rules for data entry involve, as in this
case, a good deal of condensation. Role-ordered matrices, because of our prior experience with role
differences, can lend themselves to too quick conclusion drawing. Ask for an audit of your analysis
from a colleague (see Chapter 11).
Context Chart
Description
A context chart is a network, mapping in graphic form the interrelationships among the roles and
groups (and, if appropriate, organizations) that make up the contexts of individual actions (see
Display 7.2).
Applications
One problem a qualitative researcher faces is how to map the social contexts of individual actions
economically and reasonably accurately—without getting overwhelmed with detail. A context chart
is one way to accomplish these goals. Context charts work particularly well when your case is an
individual—they show you the real richness of a person’s life setting.
Most qualitative researchers believe that a person’s actions have to be understood in their specific
contexts and that contexts cannot be ignored or held constant. Contexts can be seen as immediately
relevant aspects of the situation (where the person is physically, who else is involved, what the
recent history of the contact is, etc.), as well as the relevant aspects of the social system in which the
person appears (a classroom, a school, a department, a company, a family, a hospital ward, or a
local community). Focusing solely on individual actions without attending to their contexts runs the
risk of misunderstanding the meanings of events. Contexts drive the way we understand those
meanings, or, as Mishler (1979) notes, meaning is always within context, and contexts incorporate
meaning.
Most people do their daily work in organizations: They have superiors, peers, and subordinates;
their work is defined in a role-specialized way; and they have different relationships with different
people in other roles in their social vicinity. But you are not simply drawing a standard
organizational chart; you are mapping salient properties of the context. Also, your chart will not be
exhaustive or complete. It is a collection of organizational fragments or excerpts. (In Display 7.2,
e.g., custodians, secretaries, and the immediate subordinates of most of the school district office
personnel are excluded.) Context charts also can be drawn for people in families or in informal
groups or communities.
Example
Networks ought to reflect the core characteristics of organizations: authority/hierarchy and
division of labor. So a context chart ought to show who has formal authority over whom and what the role names
are. But those things don’t tell us very much. We should also know about the quality of the working
relationships between people in different roles.
Suppose you were interested, as we were, in organizations called schools and school districts—and in the general problem of how innovations enter and are implemented in those organizations.
The display should show us who advocated the innovation, who is actually using the innovation, and
people’s attitudes toward it (whether or not they are using it). The display should show us how the
specific school we are studying is embedded in the larger district organization. Above all, we need a
display that will not overload us with information but will give us a clear, relevantly simplified
version of the immediate social environment.
Display 7.2 shows how these requirements were met after a field-worker made a first visit to
Tindale East, a high school involved in implementing a new reading program. The analyst selected
out the roles and groups that are most critical for understanding the context. District office roles are
above, school roles below. The network is thus partially ordered by roles and by authority level.
For each individual, we have a name, the age (a feature the analyst thought was important in
understanding working relationships and career aspirations), a job title, whether the individual was a
user of the innovation or not, and his or her attitude toward the innovation, represented through
magnitude codes:
+ = positive
± = ambivalent
0 = neutral
Special symbols (such as *) are applied when the individual was an innovation advocate or
influenced implementation strongly. The relationships between individuals are also characterized
(positive, ambivalent, and neutral). Once past the upper echelons, the display simply counts
individuals without giving detail (a secondary context chart at the level of individual teachers was
also developed but is not shown here).
To get the data, the analyst consults field notes and available organization charts and documents.
The decision rules look like this:
• For information such as job title, number of persons, and so on, assume accuracy for the
moment, and enter it.
• A relationship rating (how X gets along with Y) should not be discounted by the other party to
the relationship, though it need not be directly confirmed.
• The “innovation advocate” and “high influence” ratings should be given only if there is at least
one confirmation and no disconfirmations.
• If there is ambiguous or unknown information, enter “DK.”
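For researchers who record such charts in software, the sketch below shows one possible way to do so, assuming Python and the networkx library. The node and edge attributes are rough paraphrases of the prose discussion of Display 7.2, not the actual chart.

```python
# A minimal sketch, not the study's actual Display 7.2: a context chart kept
# as a graph. Node attributes hold role, user status, advocacy, influence, and
# the attitude magnitude code; edge attributes hold authority links and the
# quality of the working relationship (+, ±, 0, or DK).
import networkx as nx

chart = nx.DiGraph()  # directed, so an edge can mean "has authority over"

chart.add_node("Crowden", role="central office", user=False,
               advocate=True, influence="high", attitude="+")
chart.add_node("V. Havelock", role="DK", user=True,
               advocate=True, influence="DK", attitude="+")
chart.add_node("McCarthy", role="principal", user=False,
               advocate=False, influence="DK", attitude="+")
chart.add_node("Department chairs (group)", role="department chair", user="some",
               advocate="DK", influence="DK", attitude="DK")

chart.add_edge("Crowden", "Department chairs (group)",
               authority=True, relationship="0")   # authority accepted neutrally
chart.add_edge("McCarthy", "V. Havelock",
               authority=False, relationship="+")  # good working relationship

# A simple query in the spirit of the analysis: advocates with high influence.
pushers = [n for n, d in chart.nodes(data=True)
           if d["advocate"] is True and d["influence"] == "high"]
print("Advocates with high influence:", pushers)
```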
Analysis
After a context chart has been constructed, the researcher reviews the hierarchies, flows, and
magnitudes entered, in combination with the field notes, to develop an analytic memo or narrative
that tells the relationship story thus far. An analytic excerpt about Display 7.2 reads as follows:
Looking at lines of authority, we can see that only one central office person (Crowden) has direct authority over department
chairs as they work on the innovation. Crowden is not only an advocate but also has high influence over implementation, and
seems to have a license from the superintendent to do this.
The department chairs, it appears, have three other “masters,” depending on the immediate issue involved (discipline, teacher
evaluation, scheduling). Because, in this case, the innovation does involve scheduling problems, it’s of interest that V. Havelock
is not only an advocate, but has actually used the innovation and is positive toward it. We might draw the inference that
Crowden serves as a general pusher, using central office authority, and V. Havelock aids directly with implementation issues;
the field notes support this.
Note, too, that Principal McCarthy (a) is not accountable to the superintendent for curriculum issues and (b) has a good
relationship with V. Havelock. Perhaps McCarthy gets his main information about the innovation from Havelock and thus
judges it positively.
So the chart shown in Display 7.2 helps us place the actions of individuals (e.g., Crowden, V.
Havelock) in context to understand their meaning. For example, when Crowden, discussing the
innovation, says, “It is not to be violated; its implementation is not based on the whim of a teacher at
any moment in class, and its success is not dependent on charismatic teachers,” the chart helps us
understand that this prescriptive stance is backed up with direct authority over department chairs for
curriculum issues—an authority that is accepted neutrally. In short, the analyst has been employing
the tactic of seeing patterns or themes, as well as subsuming particulars into the general (see
Chapter 11 for more on these tactics).
The symbols employed for Display 7.2 were Miles and Huberman’s original magnitude codes, but
you are not bound to using them. Context charts can employ other visual devices to enhance analysis.
For example, dashed lines can be used to show informal influence, while thick lines suggest strong
influence. Font size can be used to represent power relationships—for example, the names in a larger
or bolded font have more authority than the names in a smaller font. Circles can be drawn enclosing
informal groups and subcultures. Linkages to other affecting organizations in the environment can be
added. Physical contexts (e.g., a classroom, the teacher’s desk, resource files, student tables and
chairs, and entrances) can be mapped to help understand the ebb and flow of events in a setting. And
for an organizational context that seems to change a lot over a short time, revised context charts can
be drawn for comparison across time.
Notes
Use context charts early during fieldwork to summarize your first understandings and to locate
questions for next-step data collection. Keep the study’s main research questions in mind, and design
the context chart to display the information most relevant to them. If you’re new to qualitative
research, keep your first context charts simple. They can be embroidered as you continue the
fieldwork.
Describing Variability
A construct table shows the variability or range of a central construct in a study. A conceptually
clustered matrix charts participants’ varying perspectives about selected concepts. And a folk
taxonomy systematically charts the unique ways in which participants organize and categorize their
worlds.
Construct Table
Description
A construct table includes data that highlight the variable properties and/or dimensions of one key
construct (or concept, variable, category, etc.) of interest from a study (see Display 7.3).
Applications
Construct tables are particularly valuable for qualitative surveys, grounded theory, and
phenomenological studies since they enable an analyst to focus on one core item of interest (a
construct, concept, variable, core category, phenomenon, etc.). Traditional grounded theory charges
the researcher to examine the dimensions or variable ranges of a property, and a construct table
assembles that variability for analytic reflection.
Display 7.3
Lifelong Impact: Variability of Influence
Although you may have a general idea in advance about the properties and dimensions of some
major variable, such as “lifelong impact,” such variables do not usually become clear until real case
data have been explored in some depth. Cross-case construct tables are an excellent way to bring
together and examine a core concept because the way the variable plays out in different contexts
illuminates its nature.
Example
McCammon et al. (2012) surveyed 234 adults by e-mail to gather their perceptions of how their
participation in high school theatre and speech programming may have influenced and affected their
adult life course trajectories. There were certainly influences on careers, since approximately half of
the respondents currently worked in the entertainment industries. The other half pursued careers in
fields ranging from business to education to health care but still looked back fondly on their high
school theatre and speech activities. The vast majority acknowledged a lifelong impact from arts
participation during their high school years, yet not everyone testified that the impact was
comparable. Variability in the amount and quality of lifelong impact was observed, and this variability needed to be documented and acknowledged for a more credible and trustworthy analysis.
Display 7.3 is a table that includes salient data about the construct Lifelong Impact. The
variability of the construct is illustrated through five selected respondent quotes, including the
participants’ genders, graduation years from high school, and current occupations, since these were
deemed potentially important variables for later analysis. The researcher-assigned assessments of
Lifelong Impact range from “none” (“It really has not changed my adult life at all”) to “very high”
(“Theatre and speech saved mine and my brother’s lives”).
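If the construct table is kept as structured records rather than as a word-processed table, the variability can be sorted and scanned directly. The sketch below, assuming Python, uses the two quotes cited above; the remaining fields and the intermediate points of the scale are placeholders, not the study's data.

```python
# A minimal sketch of a construct table held as records and ordered by the
# researcher-assigned magnitude of the construct (Lifelong Impact).
from dataclasses import dataclass

@dataclass
class ConstructEntry:
    quote: str
    gender: str              # placeholder values below, not McCammon et al.'s data
    hs_graduation_year: int
    occupation: str
    lifelong_impact: str     # researcher-assigned, "none" ... "very high"

IMPACT_SCALE = ["none", "low", "moderate", "high", "very high"]  # assumed labels

entries = [
    ConstructEntry("It really has not changed my adult life at all",
                   "F", 1988, "accountant", "none"),
    ConstructEntry("Theatre and speech saved mine and my brother's lives",
                   "M", 1995, "teacher", "very high"),
]

# Order low to high so the full range of the dimension is visible at a glance.
entries.sort(key=lambda e: IMPACT_SCALE.index(e.lifelong_impact))
for e in entries:
    print(f"{e.lifelong_impact:>10} | {e.hs_graduation_year} | {e.quote}")
```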
Analysis
The construct table is a case study of sorts. It contains representative data about one important
element in your study that merits enhanced analysis. The variability of that element challenges the
researcher to ponder what accounts for the range of responses.
Further analysis revealed that high school graduation year—that is, the respondents’ generational
cohorts—played a significant role in the way memories were recalled and perceived. Gender and
current occupation played a less important role in survey patterns.
Scanning the construct table (and reviewing related portions of the database as needed) enables
you to see the range and thus the parameters of your data. This keeps you from constructing too
narrow an assertion about your observations and helps you modify your interpretive claims to be
more inclusive of the breadth of findings from your data.
Notes
Keep a construct table short and sweet. Its primary goal is to focus on the variability of one item
of interest in a study through a sample of representative data. See the Role-Ordered Matrix and
Conceptually Clustered Matrix displays (in this chapter) for formats that contain much more data for
interrelationship analysis.
Conceptually Clustered Matrix
Applications
Many studies are designed to answer a lengthy string of research questions. As a result, doing a
separate analysis and case report section for each research question is likely to tire out and confuse
both the analyst and the reader. One solution is to cluster several research questions so that meaning
can be generated more easily. Having all of the data in one readily surveyable place helps you move
quickly and legitimately to a boiled-down matrix by making sure that all the data fit into a reasonable
scheme and that any evaluations or ratings you make are well-founded.
Display 7.4
Conceptually Clustered Matrix: Motives and Attitudes (Format)
Source: Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook (2nd ed.). Thousand Oaks,
CA: Sage Publications.
Conceptually clustered matrices are most helpful when some clear concepts or themes have
emerged from the initial analysis. They also can be used with less complex cases, such as
individuals or small groups.
Example
In our (Miles and Huberman) school improvement study, we had a general question about users’
and administrators’ motives for adopting a new educational practice, and a more specific question
about whether these motives were career centered (e.g., whether participants thought they could get
a promotion or a transfer out of the project). So here we had an a priori idea of a possible
relationship between two concepts. Then, during data collection, we saw some inkling of a
relationship between the motives questions and two others: (1) a centrality question (whether the
innovation loomed larger than other tasks in the daily life of a user) and (2) an attitude question
(whether the participant liked the new practice when first introduced to it). We wondered whether a
relationship existed between people’s motives and their initial attitudes toward the practice.
The best way to find out would be to cluster the responses to these questions. Not only is there a
relationship to probe, but there is also a general theme (initial attitudes) and a possibility of handling
three research questions and their concepts at the same time.
The conceptually clustered matrix is a format that brings such related questions, and the responses to them, together in one display.
When you are handling several conceptually or thematically related research questions together, a
likely start-up format is a simple participant-by-variable matrix, as shown in Display 7.4. Thus, we
have on one page a format that includes all respondents and all responses to the four research
questions (i.e., the concepts of interest in this study). Note that we have set up comparisons between
different kinds of participants (users and administrators), so it is role ordered as well as
conceptually ordered. The format also calls for some preliminary sorting or scaling of the responses:
types of motive, career relevant or not, degree of centrality, and valence of initial attitudes.
Next, we go back to coded segments of data keyed to the research questions and their suggested
concepts. The analyst notes down the Motives given by or attributed to a participant and then tries to
put a label on the motive. One participant, for example, gave several motives: She heard how good
the new practice was (social influence), her principal was “really sold on it” and “wanted it in”
(pressure), most other teachers were using it or planned to—“It’s what’s coming” (conformity), and
using the new practice was an occasion to “keep growing” (self-improvement). At this stage, it is
best to leave the start-up labels as they are, without trying to regroup them into fewer headings that
cover all participants; this practice gives you more degrees of freedom while still providing a
preliminary shaping of the data.
Turning to Career Relevance, the second concept, the analyst summarizes in a phrase or sentence
the relevance of adopting the practice for each participant. The next task is to look for evidence of
the Centrality of this new practice for people and what their Initial Attitudes seemed to be. For
these two columns, the analyst assigns a general rating, backing it with specific quotes. When these
data are entered in the matrix, we get something like Display 7.5.
Display 7.5 contains about as many data as a qualitative analyst can handle and a reader can
follow. The analyst has ordered participants according to their time of implementation (Early Users,
Second Generation, and Recent Users) and their roles (Users and Administrators) and, within the
group of users, has included a Nonuser to set up an illustrative contrast between motives for adopting
and motives for refusing the new practice.
For cell entries, the analyst reduced the coded chunks to four kinds of entries: (1) labels (e.g., self-
improvement), (2) quotations, (3) short summary phrases, and (4) ratings (none/some, low/high, and
favorable/unfavorable). The labels and ratings set up comparisons between participants and, if
needed, between cases. The quotations supply some grounded meaning for the material; they put
some flesh on the rating or label and can be extracted easily for use in the analytic text.
The summary phrases explain or qualify a rating, usually where there are no quotations (as in the
Career Relevance column). In general, it’s a good idea to add a short quote or explanatory phrase
beside a label or scale; otherwise, the analyst is tempted to work with general categories that lump
together responses that really mean different things (as seen in the “high” responses in the Centrality
column). If lumping does happen and you are puzzled about something, the qualifying words are
easily at hand for quick reference.
It’s important to hold on to the common set of categories, scales, and ratings for each case—even
if the empirical fit is poor in one or another of these columns—until the full set of cases can be
analyzed.
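For analysts working in software, one way to honor the advice above—keep a quote or explanatory phrase beside each label or rating—is to store every cell as a value plus its grounding evidence. The sketch below assumes Python; the participants and cell entries are loose paraphrases of the chapter's examples, not the actual Display 7.5 rows.

```python
# A minimal sketch of conceptually clustered matrix cells that pair a rating
# or label with the short quote or phrase that grounds it.
from typing import NamedTuple, Optional

class Cell(NamedTuple):
    value: str                       # label or rating, e.g. "high", "favorable"
    evidence: Optional[str] = None   # short quote or summary phrase

# Rows are participants (ordered by role and time of implementation);
# columns are the clustered concepts from the related research questions.
matrix = {
    "L. Bayeis (user)": {
        "Motives": Cell("self-improvement", "an occasion to 'keep growing'"),
        "Career Relevance": Cell("some"),
        "Centrality": Cell("high"),
        "Initial Attitude": Cell("favorable"),
    },
    "R. Quint (user)": {
        "Motives": Cell("pressure", "principal was 'really sold on it'"),
        "Career Relevance": Cell("none"),
        "Centrality": Cell("high"),
        "Initial Attitude": Cell("neutral"),
    },
}

# Reading down a column (tactic: making comparisons).
for person, row in matrix.items():
    cell = row["Initial Attitude"]
    print(f"{person:<20} {cell.value:<12} {cell.evidence or ''}")
```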
Analysis
Reading across the rows gives the analyst a thumbnail profile of each participant and provides an
initial test of the relationship between responses to the different questions (tactic: noting relations
between variables). For example, L. Bayeis does have career-relevant motives, sees the practice as
very important, and is initially favorable. But R. Quint’s entries do not follow that pattern or a
contrasting one. We have to look at more rows.
Reading down the columns uses the tactic of making comparisons between the Motives of
different users and administrators, as well as comparisons between these groups. It also enables
similar comparisons between responses to the Career Relevance, Centrality, and Initial Attitudes
data.
A scan down the columns of Display 7.5 provides both information and leads for follow-up
analyses. The tactic of making contrasts/comparisons leads to conclusions. For example, there is
some career relevance in adoption for users but practically none for administrators. Centrality is high
—almost overwhelming—for users but less so for administrators. Users are less favorable initially
than administrators.
Looking across rows, we can use the tactic of noting relations between variables and see that for
two of three career-motivated users a relationship exists among the variables: High centrality and
favorable attitudes are also present. But the opposite pattern (low career relevance, low centrality,
and neutral/unfavorable attitudes) does not apply. In fact, it looks as if some people who are neutral
would have been favorable were they not so apprehensive about doing well (tactic: finding
intervening variables).
In sum, a conceptually clustered matrix brings together key data from key participants into a single
matrix. The goal is to summarize how things stand with regard to selected variables, concepts, or
themes of interest. Avoid using more than five related research questions for a conceptually clustered
matrix; otherwise, the mind will boggle. There will be too many data to see inclusively at one time
and too much time spent manipulating blocks of data to find clusters and interrelationships.
Notes
Conceptually clustered matrices need not be organized by persons or roles, as in Display 7.5.
More general concepts and themes can be the ordering principle in the rows as well as in the
columns. For example, rows can consist of cells broken into Types of Problems, with columns
divided into various Forms of Coping Strategies. Less emphasis is placed on specific cases and
people and more on the conceptual and thematic matters of the study.
Folk Taxonomy
Description
A folk taxonomy is best described by explaining a series of its unique constituent terms in a particular order.
McCurdy, Spradley, and Shandy (2005) identify categories that categorize other categories as “domains” and the words that name them as “cover terms.” Taxonomies “are simply [hierarchical] lists of different things that are classified together under a domain word by members of a microculture on the basis of some shared attributes” (pp. 44–45).
Spradley (1979) further defines a folk taxonomy as “a set of categories organized on the basis of a
single semantic relationship.” The taxonomy “shows the relationships among all the folk terms in a
domain” (p. 137). A verbatim data record to extract folk terms is necessary for constructing a
taxonomy. But when no specific folk terms are generated by participants, the researcher develops his
or her own—called analytic terms.
Semantic relationships are somewhat akin to “if-then” algorithms and include types such as strict inclusion (X is a kind of Y) and means–end (X is a way to do Y) (Spradley, 1979).
As an example, “fruit” can be a domain/cover term, with “berries” as one kind of fruit—a
semantic relationship of strict inclusion: X is a kind of Y. The category of berries continues with its
own list, such as strawberries, raspberries, blueberries, blackberries, and so on, but without
hierarchy at this level—in other words, a strawberry is not more important or more significant than a
blackberry, so it doesn’t matter in what order the types of berries are listed. “Apples” are then
classified as another kind of fruit, with their own list of Granny Smith, Delicious, Honeycrisp, and so on.
In sum, a folk taxonomy is an organized network list of participant- and sometimes researcher-
generated terms that are appropriately classified and categorized (see Display 7.6).
Applications
Concepts can’t always be properly sorted into matrix rows and columns. Sometimes a network
format is needed, such as a taxonomic diagram, to illustrate the interconnected complexity of social
life.
Display 7.6
A Folk Taxonomy of the Ways Children Oppress Each Other
Source: Saldaña, J. (2013). The coding manual for qualitative researchers (2nd ed.). Thousand Oaks, CA: Sage Publications.
Taxonomies are useful when large or complex sets of unique terms appear in participant data, and
researchers’ sense of their organization seems necessary to better understand the subculture or
microculture’s ways of perceiving and living in its immediate social world. Taxonomies also help
with classifying and categorizing various related pieces of data—a primary analytic step in
synthesizing a massive set of field notes, interview transcripts, and documents. The goal is not to
impose some artificial order onto the messiness of everyday living and working but to bring
enhanced cognitive clarity for the analyst’s interpretations of the people he or she is learning from
and about.
Example
Saldaña (2005) and his research team explored how fourth- and fifth-grade children at one
particular elementary school “oppress” each other. Participant observation, written surveys, and
whole-class interviews prepared the research team for a short-term artists’ residency to teach children, through
dramatic simulation, how to deal proactively with bullying and peer pressure. The term oppression
was key to the study, and it was defined to the children. After they grasped its meaning, they offered
to the researchers examples of oppression they themselves experienced or witnessed at school and at
home:
Fourth-Grade Boy [group interview]: Sometimes when we’re playing games and stuff, and this one boy comes over and he says, “Can I play with you guys?”, and people say, “No, you’re not our kind of people, so you better get out now.”

Fifth-Grade Girl [written survey response]: I was made fun of my fatness. I was called fat, huge fatso are you going to have a baby. I was sad all the time. I’m trying to luse wiaght but I just gain, gain and gain. Wiaght. I have not lose eney wight. I have not stoped being appresed.
These children’s experiences were examples of the means–end semantic relationship: X is a way
to do Y, or excluding is a way to oppress; name calling is a way to oppress. After all of these (often
heart-wrenching) stories were collected, they were descriptively coded as to the types of oppressive
acts they illustrated, for further analysis.
Analysis
The proper classification and categorization of individual items for taxonomy development can be
conducted through a variety of systematic methods ranging from item-by-item queries with
participants, to card and pile sorts, to follow-up interviews (Bernard, 2011; Spradley, 1979, 1980).
Which method you choose depends on your own knowledge of the folk terms and their contextual
meanings. Most often, intimate familiarity with your data corpus enables you to code and extract
these folk terms directly from the database and onto a text-based software page or graphics program.
A recommendation is first to cut and paste the array of terms into a traditional outline format and then
to transfer that arrangement into network format with nodes and lines. Most often, nothing more
complex than deep reflection and logical reasoning helps you figure out how the terms align according
to their semantic relationship (X is a kind of Y, X is a way to do Y, etc.).
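As a concrete illustration, the sketch below (assuming Python) holds a small excerpt of such a taxonomy as a nested dictionary and prints it as an indented outline. The terms echo the children's oppression example above and the Display 7.6 excerpt discussed next, but the groupings shown are only illustrative.

```python
# A minimal sketch: a folk/analytic taxonomy kept as a nested dictionary and
# rendered as an indented outline (semantic relationship: X is a way to do Y).
taxonomy = {
    "ways children oppress each other": {
        "by force (physical)": {          # folk terms offered by the children
            "fighting": {},
            "scratching": {},
            "pushing": {},
            "taking things away": {},
        },
        "by feelings (verbal)": {
            "putting down": {"name calling": {}},   # illustrative sub-grouping
            "excluding": {},              # researcher-constructed analytic terms
            "coercing": {},
            "slandering": {},
        },
    }
}

def print_outline(node: dict, depth: int = 0) -> None:
    """Print each term indented beneath the term that includes it."""
    for term, subterms in node.items():
        print("    " * depth + term)
        print_outline(subterms, depth + 1)

print_outline(taxonomy)
```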
Display 7.6 (from Saldaña, 2013, p. 162) shows an excerpt from the folk taxonomy constructed to
illustrate the ways in which children oppress each other. Long story short, the children themselves
told the researchers that oppression was either by “force” or by “feelings,” usually enacted by boys
and girls, respectively. From these two major categories, forms of physical and verbal oppression
were arrayed. Some were folk terms told to us by children (e.g., fighting, scratching, and pushing),
while others were researcher-constructed analytic terms that identified the types of oppression
children described but had no vocabulary for (e.g., excluding, coercing, and slandering).
As with matrices, eyeballing and scanning the taxonomy may lead to analytic insights or questions
for further investigation. For example, the folk term putting down has the most complex set of
extended nodes and lines, suggesting not only that this type of oppression may be more frequent
among older children but also that verbal belittling is potentially more violent than physical harm.
For the action research project at the elementary school, these findings suggested that the adult team
focus on how to get children to reduce their verbal put-downs and to offer students constructive
strategies for coping with verbal abuse from peers.
Note that not everything in this (and possibly any other) taxonomy is perfectly bounded. For
example, “taking things away” from someone may be a physical act of “force,” but it can eventually
lead to victim “feelings” of loss and hurt. There are many shades of grey, and exceptions to virtually
every rule. The taxonomy is not a perfect model of the ways in which humans classify things in the
social world. It is, at best, an analytic heuristic for mapping complexity to grasp at a glance the
constituent elements of a culture.
Most CAQDAS programs include graphic capabilities to draw taxonomies. Some programs, such
as ATLAS.ti, can “calculate” and display a visual model that illustrates your codes’ organizational
arrangement based on their frequency and researcher-initiated linkages. CAQDAS programs can also
arrange and manage your codes into hierarchies and trees, based on your input.
Notes
Analysts may use the taxonomic method to sort out their own theoretical ideas exclusively—being
careful, of course, to call it a researcher taxonomy rather than a folk taxonomy. This is particularly advised for
those employing grounded theory, who might construct a taxonomy composed of their codes: from
theoretical, to axial/focused, to in vivo/process/initial codes (see Saldaña, 2013).
Describing Action
Vignettes capture significant moments or the action of an extended portion of fieldwork in
evocative prose renderings. A poetic display condenses data into poetic formats for capturing the
essences and essentials of meanings. And cognitive maps diagram an individual’s thinking processes
as he or she goes through a series of actions.
Vignettes
Description
A vignette is a focused description of a series of events taken to be representative, typical, or
emblematic in the case you are studying. It has a narrative, story-like structure that preserves
chronological flow and that normally is limited to a brief time span, to one or a few key actors, to a
bounded space, or to all three. The vignette can be written solely by the researcher or
collaboratively with research team members and/or research participants. A vignette can range from
being as short as a single paragraph to as long as a chapter (see the example under “Analysis” further
down).
Applications
Like poetic displays (described next), vignettes are rich prose renderings, primarily of fieldwork
observations, but they can also include adaptations of stories embedded within interview transcripts.
Examples are a day in the life of an intensive care unit nurse, the events in a typical college faculty
meeting, the story of how a key management decision was reached over a period of several weeks,
and the way in which a student solves a particular math problem.
Evocatively written vignettes can be a useful corrective when your data—coded, displayed, and
pondered on—somehow lack meaning and contextual richness. Collaboratively written vignettes
offer an opportunity to engage study participants actively in producing, reflecting on, and learning
from the data.
During early data collection, as a researcher becomes more familiar with how things work in the
case at hand, he or she often finds rich pockets of especially representative, meaningful data that can
be pulled together in a focused way for interim understanding. Vignettes offer a way to mine such
pockets fairly easily. They are also helpful in formulating core issues in a case—that is, your theory
of what is happening—for yourself, for your study colleagues, and for external consumers of interim
reports that may be required. They can be embedded usefully in a longer and more formal case report
as well.
Example
Saldaña (1997) conducted an ethnography of an inner-city, largely Hispanic, arts magnet school in
the southwest, whose theatre program was headed by a novice White female teacher. One of the key
themes that emerged from the study was the theatre teacher’s unconditional support for her students.
Despite the ethnic differences between Nancy, the middle-class White teacher, and her lower income
Hispanic students, a sense of mutual respect was evident in selected interactions.
One of the events observed during the fieldwork period was a school district–wide speech
tournament for its junior high school students. Further down are excerpts from the raw field notes that
were taken during this off-campus event. They are sketchy, hastily written jottings about a slice of
action that happened over no more than 3 minutes of real time:
Beatriz did a “don’t drink and drive” speech. Elian shouted, “You go, Bea!” as she was coming up the stage. Beatriz spoke softly,
little inflection. Needed prompting from another Martinez School girl about 4 times. When Beatriz comes back to the row Nancy
rubs her shoulder. Beatriz looks hurt yet smiles, and as if she’s about to cry.
OC: Even though there may not be much competence in comparison to the others, the Martinez School team seems to
have a lot of support. Like in Damn Yankees, the baseball team may not be good, but they’ve got “heart.”
These notes do not give the total picture to an outside reader of what the researcher was
observing, thinking, and feeling at the time. Thus, a narrative vignette that more fully describes the
significance of the event is merited.
Analysis
There are no hard-and-fast guidelines for writing a vignette, though some may prescribe that the
content should contain sufficient descriptive detail, analytic commentary, critical or evaluative
perspectives, and so forth. But literary writing is a creative enterprise, and the vignette offers the
researcher an opportunity to venture away from traditional scholarly discourse and into evocative
prose that remains firmly rooted in the data but is not a slave to it.
Below is a vignette about the jottings presented in the example above, composed to illustrate why
this seemingly small slice of social action held special significance for the researcher and the study’s
key findings:
The well-dressed eighth grade persuasive speaker from Canton Junior High spoke with clarity about her topic—adopting shelter
animals. She obviously had been coached well to present a polished argument with confidence and volume. After finishing her
flawless speech, the hundred or so student and teacher spectators in the auditorium applauded loudly as she stepped off the stage.
The facilitator of the event stood up from the judge’s table and spoke to the assembly: “Thank you. Next: Beatriz Guzman from
Martinez School.”
Beatriz, in her quiet and unassuming way, rose from her chair and scooted across her seated classmates as Elian shouted, “You
go, Bea!” Nancy, her coach, smiled as she passed and gave her a “rah-rah” pep-rally gesture. Beatriz, dressed in a pale yellow
dress, walked hesitantly toward the stage, obviously nervous, walking up each stair step to the platform with measured care so as
not to trip as someone had done earlier.
Beatriz walked to the center of the stage as her student prompter, Maria, took her place below, script in hand, ready to offer
Beatriz a line in case she forgot her one-minute memorized persuasive speech (a safety measure permitted by speech tournament
rules for young contestants).
Beatriz began and continued to speak softly with a monotonous voice. About four sentences into the speech, she said: “And
when people get arrested for drunk driving . . .” There was a long and uncomfortable pause as Beatriz stared blankly into the
darkened auditorium. Nancy looked helplessly at Beatriz and leaned forward in her seat. The student prompter cued, “their lives
are. . . .” Beatriz shut her eyes, looked downward briefly, then raised her head and continued: “their lives are ruined forever.”
Her speech continued for less than a minute. She needed prompting for forgotten lines three more times. On the final line of her
speech, “And that is why people shouldn’t drink and drive,” Beatriz started leaning toward her right, as if she wanted to quickly
finish and run off the stage. When she delivered her final line, the audience clapped politely as they had been instructed to do, with
Beatriz’s schoolmates cheering and calling out an occasional “Yay!” for her.
Beatriz returned quickly to her seat and sat next to Nancy, both of them silent for a few seconds as the next contestant from
another school walked confidently toward the stage. Nancy stretched her arm across Beatriz’s shoulder and pulled her student
close. Beatriz leaned her head against her teacher’s motherly body as Nancy started gently rubbing her student’s back. Beatriz
smiled through her hurt and looked as if she were about to cry. The two young women sat and said nothing to each other. They
really didn’t need to say anything at all.
Later that evening, as I reflected on the events of the day, I thought about Beatriz and her deer-in-the-headlights moment on
stage—a moment that will probably never be forgotten for the rest of her life—and the reassuring comfort Nancy gave her
afterward. It was such a peaceful yet riveting moment for me to observe: a young girl leaning against a young woman in a moment
of unconditional support after failure. All I could think of was Nancy’s love for her students and the realization that she’s just so
damn human.
Erickson (1986) advocates that writing a vignette after reading through field notes can be a
powerful means for surfacing and clarifying your own perspective on what is happening. The method
generates “an analytic caricature (of a friendly sort) . . . that highlights the author’s interpretive
perspective” (p. 150). A well-written vignette, as a concrete and focused story, will be vivid and
compelling, persuading a reader that the researcher has “been there.” If it is not really
representative, you and your readers run the risk of misunderstanding the case it refers to. Using
multiple vignettes helps, but the question “Is this really typical?” must always be asked.
If you choose to make your vignettes collaborative constructions with your participants, it helps to
meet with several of them to explain the vignette idea. Each person then chooses a situation to be
described, makes some notes, and retells or writes an account in everyday language. The researcher
reads the typed or transcribed account, makes marginal notes and queries on it, and sends it back to
the writer for review. The notes and queries are discussed, and the researcher produces a revised
and expanded version, later sent back for further review and discussion. A final version (with
pseudonyms replacing real names) can then be circulated to others in the fieldwork setting—an extra
benefit in terms of recognition and potential learning for participants.
Notes
The best discussion of this method we have found is in Erickson (1986). Seidman (2006)
describes a more extended version called a “profile,” a narrative summary using a participant’s own
words from interview transcripts to describe experience over an extended time period. Of course,
the fields of narrative inquiry and oral history have developed unique and intriguing methods for
extensions of vignette writing. See the Appendix for recommended resources in these subject areas.
Poetic Display
Description
Poetic displays arrange carefully selected portions of qualitative data into traditional and variant
poetic structures for the evocative representation and presentation of a study, its findings, or a key
participant’s perspectives (see the display under “Analysis” further down).
Applications
At times, the researcher can feel overwhelmed by the massive amount of detail in a database and
needs to grasp its most important or salient contents. One of poetry’s unique features is its ability to
represent and evoke human experiences in elegant language. Thus, the literary genre can be used as
one way to extract core meanings from a large collection of texts.
Poetic displays are arts-based representations and presentations of qualitative data that capture the
essence and essentials of the corpus from the researcher’s perspective. Their constructions are
primarily exploratory for the researcher’s use, but a completed poem could be included in a
published report if it is of sufficient artistic and scholarly caliber.
A poetic display brings the reader very close to a condensed set of data that forbids superficial
attention by the analyst. You have to treat the data set—and the person it came from—seriously
because a poem is something you engage with at a deep level. It is not just a figurative transposition
but an emotional statement as well.
Example
A female principal of an arts magnet school was interviewed about the site’s philosophy and
mission. Here is just one verbatim excerpt from an hour-long interview about her perceptions of the
school and its goals for students:
It’s, um, it’s a very different kind of environment because what we’re trying to do here is create whole people, give them the
opportunity to become lifetime learners, um, to think that learning is joyful, to support them and to, um, be respectful of the
backgrounds they bring to us. And that’s very different from having a school in which there is a curriculum and these are, these
things you have to learn. We haven’t been able to find any single thing that people have to learn. You don’t have to know the
alphabet, you can always just put it down. You don’t have to know the multiplication tables, you can carry them in your hip pocket.
What you have to learn is attitudes. You know, we want them to have a taste for comprehensive elegance of expression. A love of
problem solving. These, these are attitudes, and those are what we’re teaching. And we try to teach them very respectfully and
joyfully. And that’s different—I know it’s different.
A poem could (and should) be constructed from the entire interview transcript to holistically
capture the principal’s major perspectives or some facet that struck the analyst as intriguing. But for
illustrative purposes only, a poem will be constructed solely from the transcript excerpt given above.
Analysis
Verbatim theatre playwright Anna Deavere Smith attests that people speak in “organic poetry”
through their everyday speech. The listener needs to be sharply attuned to a speaker’s rhythms,
parsing, pausing, and, of course, significant words and passages of text that transcend everyday
discourse to become insightful and meaningful communication.
The researcher becomes thoroughly familiar with the data corpus and extracts significant and
meaningful in vivo words and phrases from the text. In the first sentence of the transcript above, for
example, the phrases “whole people” and “lifetime learners” stood out as significant passages that
were highlighted. This technique continued with the 170-word transcript.
Selected passages are then reassembled on a separate page to experiment with their arrangement
and flow as poetry. Not everything extracted from a database will be needed, and some grammatical
leeway may be necessary to change the structure of a word now and then as it gets reformatted into
verse.
Eventually, the analyst took the selected words and phrases from the 170-word
transcript and made artistic choices to compose a 23-word poem that, to him, represents the
philosophy, mission, and goals of this particular site—an artistic rendering of an arts-centered
school:
Teach attitudes:
Create whole people
Lifetime learners
Learn attitudes:
A love of problem solving
Elegance of expression
Teach and learn:
Respectfully
Supportively
Joyfully
Two points that need to be remembered are that (1) the selection, organization, and presentation of
data in a display are decisive analytic actions and (as in this case) need to be done in a thoughtful,
lucid way and (2) displays owe as much to art and craft as they do to science. Attend to the poet
within you to help find the organic poetry within your participants.
Classic literary poetry can stand on its own, but research as poetry almost always needs some type
of introductory framing or supplemental narrative for the reader to contextualize or expand on the
artwork. Also, acknowledge that poetry has a distinctive set of conventions and traditions, as does
academic scholarship. Footnotes and citations of the academic literature have no place in the poem
itself; save these, if necessary, for any accompanying prose narrative.
Notes
Do not fall into the paradigmatic trap of feeling the need to defend or justify your use of poetry as
research if you choose to present and publish it. Many practitioners in the field of qualitative inquiry
have transcended the outmoded perception of poetry as an “experimental” and “alternative” (read
“marginalized”) form of research, and now, they see it as a more progressive one. But realize that if
you do choose to write poetry, it must be artistically sound to make a persuasive case as research
representation.
For more on found poetry, poetic structures, and their applications as qualitative research
representation and presentation, see Mears (2009) and Prendergast, Leggo, and Sameshima (2009).
Cognitive Maps
Description
A cognitive map displays a person’s representation of concepts or processes about a particular
domain, showing the relationships, flows, and dynamics among them. The visual map helps answer
the question “What may be going through a person’s mind as he or she experiences a particular series
of actions and/or reflects on an experience?” Descriptive text accompanies the map for explanation
(see Display 7.7).
Display 7.7
A Cognitive Map of One Person’s Housecleaning Process
Applications
There are times when the visual representation of concepts and processes is more effective than
narrative alone. If we put stock in the classic folk saying “A picture is worth a thousand words,” then
cognitive maps are one way of efficiently and elegantly portraying what may be going through
people’s minds as they reflect on or enact an experience.
Many of our examples so far have been complex, multilevel cases. But cases are often focused at
the individual level. We need displays that show us the complexity of the person. People’s minds—
and our theories about them—are not always organized hierarchically as in folk taxonomies. They
can be represented fruitfully in nonhierarchical network form: a collection of nodes attached by
links, and/or bins extended with arrows.
But qualitative researchers are not mind readers and most likely not brain surgeons, so we can
never truly know what’s going through someone else’s mind. The cognitive map, then, is our best
attempt to put into fixed form the dynamic and sometimes idiosyncratic thinking processes of a
participant.
Example
Some research studies examine the mundane in humans’ lives to understand concepts such as roles,
relationships, rules, routines, and rituals—the habits of daily existence (Duhigg, 2012). The mundane
example illustrated here is housecleaning, which we will soon learn is not as simple or as
“mindless” as it may seem to be. Some people put a great deal of thought into it to develop time-efficient patterns of action.
An older and slightly arthritic married woman is interviewed at her home about her housecleaning
routines. She shows the interviewer where all her cleaning supplies (broom, dust mop, glass cleaner,
furniture polish, etc.) are kept; she then takes the interviewer through each room of her four-bedroom
home, pointing out specific tasks and challenges during her every-other-week “cleaning days”:
Woman: I clean my house over two days, not because it takes that long to do it,
but at my age it’s easier to space it out over two half-days. I do all the
tiled rooms on the first day, then the carpeted and laminate floor
rooms on the second day. . . .
Interviewer: Why do you clean tile rooms the first day?
Woman: Because they’re the hardest and I want to get them out of the way first.
And since they all use sort of the same cleaning supplies, I just move
them from one room to another. . . . I usually start out with the
bathrooms.
Interviewer: Which one gets done first?
Woman: Sometimes it doesn’t matter. I might clean the smaller one first to
“warm up” for housecleaning, then tackle the master bath[room],
which takes about three times as long because I have to clean the
shower stall and there’s more mirrors and stuff in there. . . . Then I do
the laundry room, and you can see I have to deal with cat litter in
here. And it takes awhile to move everything around because there’s
so little space. It might look like a small room but it actually takes
about 20, 25 minutes for me to clean. Then I go to the foyer, and that’s
a snap—5 to 10 minutes at most. Then I go to the breakfast nook and
kitchen, and you know how long that takes.
Interviewer: Well, I have a much smaller kitchen. (laughs) How long does it take
for you?
Woman: Top to bottom for the kitchen, about an hour? I always start at this end
(pointing to the coffeemaker on the counter) then work my way
around to the sink last. Well, the floor is last, cleaning that with the
steamer. And when the floor’s dry, I put the throw rugs back down on
it.
The interview continues, covering in detail the woman’s second-day cleaning routines.
Throughout, specific questions were asked by the interviewer to ascertain what, where, how, and
why things are done in certain ways. Time was also discussed and demarcated into when and for how long, since the interviewee herself estimates how many minutes it takes to clean each room in her
home. The basic goal of the interview is to collect sufficient information to construct a cognitive map
of a person’s process. In other words, we need to gather enough data to answer the question “What
may be going through a person’s mind as he or she experiences a particular series of actions
and/or reflects on an experience?” This includes not just facts but reasoning, memories, and
emotions as well.
The initial interview is transcribed and reviewed. Follow-up questions, if needed, are composed
for a second interview. The transcripts then become the verbal directions for designing the visual
map.
Analysis
Drawing and constructing a cognitive map is the analysis, for you are trying to visually represent a
real-time process. Tools available to you include paper and pencil, “sticky notes” and a wall board, or graphics/modeling software such as that found in most CAQDAS programs. Whatever method
works best for you is fine, so long as you realize that you will be going through several mapping
drafts before you feel you’ve captured the process on paper or on a monitor screen. You’ll also
discover that alternately drawing a map and writing the accompanying narrative help inform each
other. After a draft of a cognitive map, the narrative gets written, which then stimulates a redrafting
of the map and clarification of the narrative’s details, and so on.
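If you choose modeling software over paper, one way to begin is to enter the sequence of actions as a small directed graph of nodes and links and to attach the participant’s reasoning as node attributes. The following is a minimal sketch in Python using the networkx library, based on the Day 1 sequence described in the interview excerpt above; the node names and attributes are the analyst’s illustrative choices, not a reproduction of Display 7.7:

    import networkx as nx

    # Day 1 of the housecleaning routine as a directed graph:
    # nodes are rooms/steps from the interview, edges are the order
    # in which the participant says she moves through them.
    day1 = nx.DiGraph()
    sequence = [
        "Gather supplies",              # Windex, paper towels, steamer, duster, trash bag
        "Small bathroom",               # the "warm up" room
        "Master bathroom",              # shower stall and mirrors; about three times as long
        "Laundry room",                 # cat litter; roughly 20-25 minutes
        "Foyer",                        # "a snap"; 5-10 minutes
        "Breakfast nook and kitchen",   # about an hour; floor cleaned last
    ]
    for earlier, later in zip(sequence, sequence[1:]):
        day1.add_edge(earlier, later)

    # Preserve the "why," not just the "what," as node attributes.
    day1.nodes["Master bathroom"]["reason"] = "hardest rooms first, to get them out of the way"

    # A plain-text rendering of the links; a drawing or CAQDAS tool
    # would be used to produce the final visual display.
    for earlier, later in day1.edges:
        print(earlier, "->", later)

Reordering the sequence list, adding nodes, or attaching further reasons then becomes part of the successive drafting of map and narrative described above.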
Display 7.7 shows the resulting cognitive map of this case study’s housecleaning process,
extracted from interview data and visually represented through captions, text, bins, nodes, lines, and
arrows. The visual display also needs an accompanying narrative to explain the nuances of her
thinking (excerpts):
Housecleaning is dreaded but nevertheless “prepared for” a day ahead of time. To the mildly arthritic Janice, the every-other-week
task is a “necessary evil.” When time permits, any laundering of towels, throw rugs, and bed sheets is done on a Wednesday so
that Janice doesn’t have to “hassle” with it as she’s cleaning house on Thursday and Friday. This is just one way of making a
burdensome task less strenuous.
Time and energy are two important concepts she thinks about when housecleaning. The routine is highly organized from over
two decades of living in this home. On Day 1 of formal housecleaning, Janice’s strategy is to tackle the “hard” rooms first to “get
them out of the way.” This strategy enables her to continue for approximately three hours (which includes numerous short breaks,
due to her arthritis) to complete her scheduled tasks with sufficient energy: “If I save the hardest rooms for last, they’d probably
never get done, or get done only part way. Bathrooms are the worst; I hate cleaning them, so that’s why I do them first—get them
out of the way.”
Day 1’s six tile-floored rooms each have a preparation ritual: “Before I start each room, I bring into it everything I’m going to
need for cleaning it: the Windex, paper towels, steamer, duster, trash bag. . . . That way, I don’t have to waste time going back and
forth to get this and that—it’s all in the room, ready to go.” Each room’s cleaning routine also follows two spatial patterns: “Clean
from top to bottom—wall stuff first, then to counters, then the floor,” concurrent with analog clock-like movement: “Start at one
end of the room and work my way around it.”
The process described above is the researcher’s interpretation of what’s going through a person’s
mind. But cognitive maps can also be collaboratively constructed between the researcher and
participant. The procedure engages the respondent and the researcher in joint work, simultaneously
building the display and entering data.
After an initial interview about the experience or process, the researcher transcribes the exchange
and extracts key terms, concepts, in vivo codes, and so on. Each one gets written on its own “sticky
note,” and a follow-up mapping interview is scheduled with the participant.
At the audio-recorded mapping interview, the participant is given the sticky notes and is asked to
arrange them on a large poster-size sheet of paper on a wall “in a way that shows how you think
about the words.” When this task is complete, the researcher asks, “Why are they arranged this
way?” The researcher draws lines around concepts that the person says belong together and elicits a name for the group, which is also written on the display. The question “What relationship is there
between _____ and _____?” leads to the person’s naming of links between concepts and/or concept
groups, and those too are written on the display.
During preliminary analysis, the researcher listens to the recording of the mapping discussion,
clarifies any errors, and writes a descriptive text that walks through the complete map. The revised
map and narrative are fed back to the respondent to ensure that they are an accurate representation of the
concept or process.
This version of cognitive mapping makes for maximum idiosyncrasy—and complexity—in the
results. A simpler version (Morine-Dershimer, 1991) asks the respondent to generate a list of
concepts related to a major topic. The major topic is placed in the center, and then other concepts
are placed around it, with unnamed links radiating out to them, and from them, to other concepts in
turn.
Cognitive maps have a way of looking more organized, socially desirable, and systematic than
they probably are in the person’s mind. Allow for those biases when making analyses and
interpretations. Also acknowledge that one person’s cognitive map does not necessarily represent
others’ ways of thinking and acting in comparable situations. (For example, when the husband in this case study cleans the house, he chooses to accomplish the task in 1 day instead of 2. He begins at one
end of the house and works his way from one room to the adjacent room, regardless of flooring
surface, until he reaches the other end of the house.)
Notes
Cognitive maps also can be drawn from preestablished narratives such as interview transcripts,
fiction, or other longer documents. Here, the analyst is interrogating the text rather than the person.
You can even use cognitive mapping techniques to clarify your own ideas or analytic processes
about the meaning of a particular set of data.
For a quirky and humorous fictional media representation of participant observation and cognitive
mapping, see the outstanding Norwegian/Swedish film Kitchen Stories.