Embedded in the context of each research example, readers can follow analytical
processes step-by-step and gain insights into efficient ways to use MAXQDA.
Authors
Dr. Michael C. Gizzi is a professor of criminal justice at Illinois State University, USA.
He holds a doctorate in political science, and his research focuses on constitutional
criminal procedure and judicial process. He is a professional trainer and consultant
for MAXQDA and uses it in research courses, workshops, and webinars.
Dr. Stefan Rädiker is a consultant and trainer for research methods and evaluation.
He holds a doctorate in educational sciences and his research focuses on computer-assisted
analysis of qualitative and mixed methods data (www.methoden-expertise.de). Edited by Gizzi & Rädiker
MAXQDA Press
www.maxqda-press.com
The Practice of Qualitative Data Analysis
Research Examples Using MAXQDA
Edited by
Michael C. Gizzi
Stefan Rädiker
ISBN: 978-3-948768-10-2 (paperback)
ISBN: 978-3-948768-05-8 (eBook PDF, identical page numbers as paperback edition)
https://doi.org/10.36192/978-3-948768058
All rights reserved, in particular the right of reproduction and distribution as well as translation. No
part of this work may be reproduced in any form (by photocopy, microfilm, or any other method) or
processed, duplicated, or distributed using electronic systems without written permission by the
publisher.
Publisher and authors have compiled the information in this book to the best of their knowledge.
They give no warranty for the correctness and assume no liability for the use of the information.
Introduction
Michael C. Gizzi, Stefan Rädiker
MAXQDA is a powerful tool for qualitative and mixed methods research. The research
community that uses MAXQDA spans the globe. This book brings together examples of the
diverse types of research that MAXQDA can be used for. While researchers have access to
a detailed user manual, multiple books that provide in-depth background about the soft-
ware’s functionality, and an active research blog, this book fills a gap by providing case
studies with concise, real-world examples of how MAXQDA is used in practice across
different disciplines and methods.
The book illustrates more than 28 MAXQDA features and showcases how MAXQDA can
be effectively used. Each case study provides a brief overview of the research topic being
explored, the methodological approach, and a detailed description of how MAXQDA was
used to conduct the research. The case studies focus on the usage of MAXQDA, and not the
substantive research outcomes, and they answer a variety of practical questions for the
reader, such as how the coding system was developed, how coded documents were ana-
lyzed, what tools were used, and how those tools informed the results.
Each chapter is intended to be used by researchers as a resource when approaching
new projects. The book was inspired by the excellent research posters that have been pre-
sented at the MAXQDA International Conference in Berlin from 2017 to 2020
(conference.maxqda.com), as well as the numerous research blog posts on the MAXQDA website
(maxqda.com/blog). The examples do not replace detailed user manuals or textbooks but
provide the researcher with concrete examples that they can draw insights from in crafting
their own research.
The book covers different methodologies, data types, and tools, including thematic
analysis, qualitative content analysis, ethnography, grounded theory, process-generated
historical research, typology building, and more. The book is not comprehensive in
covering every qualitative method used by scholars, nor is it intended to be a text to replace
many of the books that give an overview of and describe several research methods (e.g.,
Creswell & Poth, 2018; Flick, 2014). This book, instead, is meant to provide real-world
examples from research in a variety of disciplines and approaches, conducted almost
entirely with MAXQDA.
Example-based learning
We approached this book out of a strong belief in the power of example-based learning.
MAXQDA provides tools to conduct a qualitative analysis, and the user manual and other
works, such as Kuckartz and Rädiker’s Analyzing Qualitative Data with MAXQDA (2019)
provide in-depth guidance about specific functions, but this book does something unique.
It offers the reader insights into actual research projects, providing examples that can serve
to inspire the researcher. We were inspired ourselves by similar learning-by-example books,
such as a text on SPSS (Morgan et al., 2007) that provided instructions on using statistical
tools but included real-world examples and explanations of how to interpret results. The
Practice of Qualitative Data Analysis isn’t quite the same, as our focus is less on how to
interpret analyses and more on the ways you can use software to conduct specific tasks
in qualitative analysis. In this context, we would be remiss not to mention Nicholas Woolf
and Christina Silver’s Qualitative Analysis Using MAXQDA: The Five Level QDA Method
(2018), which includes two case studies of how their five-level approach to qualitative anal-
ysis can be used in practice: Christian Schmieder’s illustration of a thematic analysis eval-
uating an education program was particularly valuable for one of us (Michael) and served
as a spark for creating better learning opportunities that showcase the power of
MAXQDA. It is perhaps no surprise that we invited Christian and his colleagues to contrib-
ute a chapter for this book.
In their review of example-based learning, van Gog and Rummel (2010) suggest that for
novice learners, “instruction that relies more heavily on studying worked examples, than
on problem solving is more effective for learning, as well as more efficient in that better
learning outcomes are often reached with less investment of time and effort” (p. 156). We
believe that an example-based book like The Practice of Qualitative Data Analysis is not
only effective for novices but for researchers of all levels, from the undergraduate user of
MAXQDA conducting their first research project to doctoral students working on their dis-
sertation to researchers who have been working with qualitative data for years. Case stud-
ies of how others have completed a project, with a clear description of what they did, can
serve to inspire the reader in ways that a textbook often cannot do.
The examples in this book come from many disciplines, such as education, health sci-
ences, history, sociology and social sciences, political science, criminal justice, and public
policy, and they cover a wide range of topics. One of the great advantages of example-based learning is
that you can learn from every chapter, regardless of what discipline, topic, or method it
deals with. You might not think a study of historical legal documents from 400 years ago
is relevant to your own work, but the methods and visualizations that Andreas Mül-
ler uses in chapter 3 are applicable to many studies. Likewise, you might not be an educa-
tional specialist, but you can learn from the ways Natalie Santos, Vera Monteiro, and
Lourdes Mata combined focus groups of students with interviews of teachers in chapter 2
and the ways they used MAXQDA’s tools to inform their analysis.
Of course, everyone needs to learn on their own, and there is no better way to learn
MAXQDA than to use it and to learn from your mistakes. You will develop your own best
practices, but this book will help shorten the learning curve, and you can potentially avoid
pitfalls that slow down research or make it less efficient. To that end, each chapter ends
with a section titled “Lessons learned” in which the authors share advice drawn from
their own experiences.
Chapter 3. Using MAXQDA’s Visual Tools: An Example with Historical Legal Documents
Andreas Müller demonstrates how MAXQDA can be used with process-generated data, like
court records, media reports, and other materials, but from the perspective of a historian
looking at court records from the 16th and 17th centuries. He uses the Compare Groups
function, the Code Matrix Browser, and the Code Relations Browser to identify differences
between documents, relations between codes, and changes of subject over time. The
chapter provides an example of how the Document Comparison Chart can be used in an
analysis to examine the internal structure of documents by using meaningful code colors.
Chapter 4. Using MAXQDA from Literature Review to Analyzing Coded Data: Following a
Systematic Process in Student Research
Michael Gizzi and Alena Harm provide a case study in the field of criminal justice of how
MAXQDA can be used by student (and other) researchers in a systematic way, from the
creation of a literature review through coding to the analysis of coded data. The chapter draws on step-
wise learning to provide a structured approach to conducting a research project and dis-
cusses the usage of numerous tools for paraphrasing, memo writing, and restructuring codes,
as well as an easy-to-replicate process for analyzing coded data.
Chapter 5. Using MAXQDA for Analyzing Focus Groups: An Example from Healthcare
Research
Matthew Loxton provides a detailed explanation of how he uses focus group data in the
field of health sciences in chapter 5. He shows how focus group transcripts are imported
into MAXQDA and the ways that the user can fix transcription errors. The analysis of
focus group transcripts is illustrated using a variety of tools such as the Word Cloud, Key-
word-in-context, and Document Portrait, and the chapter shows how to prepare the
report using the Summary Grid.
Chapter 7. Using MAXQDA for Identifying Frames in Discourse Analysis: Coding and
Evaluating Presidential Speeches and Media Samples
Betsy Leimbigler provides a research example from political science in the area of Ameri-
can presidential politics and media coverage in chapter 7. She used discourse analysis to
explore the “frames” used by American presidents surrounding health care reform. Be-
ginning with a deductive set of six broad codes derived from the literature, she added sev-
eral inductive sub-codes to explore the frames in greater depth. Leimbigler illustrates
how memo writing was key to her analysis, particularly for summarizing hundreds of
documents. While Code Frequency charts illustrated the usage of frames, the Code Rela-
tions Browser helped to explore connections between the frames.
Chapter 10. Using MAXQDA in Teams and Work Groups: An Example from Institutional
Evaluation and Organizational Data Analysis
Christian Schmieder, Joel Drevlow, and Josset Gauley share how they work and communi-
cate together as a team to analyze a constantly growing dataset with MAXQDA. The
demonstration project involves several people divided into four different roles (lead,
manager, analysis team, and data users). They present their workflows for distributing
MAXQDA projects among team members and show how they use the Teamwork Export and Im-
port features to bring everything together again in one master file. Among other tools, Comments
on coded segments and Paraphrases are used to develop suitable coding schemes.
Acknowledgments
We began talking about the idea of a book of case studies at the 2019 MAXQDA Interna-
tional Conference (MQIC), after being impressed by the diverse posters that were pre-
sented at the annual conference. A year later we moved to make the idea a reality and be-
gan work. The pandemic scuttled our plans to meet for a week on the project in Berlin
in June 2020, but we quickly adapted to virtual meeting tools and began a year-long col-
laboration which has resulted in the book before you.
We are especially grateful to the incredibly talented individuals who responded to our
invitation to participate in this project. We pushed hard for short deadlines, and each
chapter went through numerous revisions as we sought to provide consistency in
structure and style while still letting each author’s own ideas and intellectual process
for using MAXQDA come through. We are also particularly grateful to those who are writ-
ing in a second or third language in this book. Our editing has aimed for a coherent
American English style, and we are delighted with the final product. Any errors
remain ours and not our contributors’.
We also want to give our thanks to Dr. Udo Kuckartz, the founder and creator of
MAXQDA, for his support and encouragement in the development of this book. Thanks are
also due to the staff at VERBI and MAXQDA Press in Berlin, including Anne Kuckartz, Isabel
Kuckartz, and Aikokul Maksutova. Elizabeth Jost and Sarah Schneider were incredibly
helpful in the early stage of the book, as we were seeking to identify individuals to partici-
pate in the book.
Michael Gizzi also wants to thank his wife Julie for her support during this year of “work
from home” and for encouraging him to develop the book. As has been true for years, William
Wilkerson and Ethan Boldt have served as sounding boards, even though neither is a
MAXQDA user. He also especially wants to thank his co-editor, Stefan Rädiker, for agree-
ing to take a chance on this book and pushing it forward over this past year.
Stefan Rädiker thanks his wife Marina for her consistently clear view of things and for
raising helpful questions, not only while he was working on this book project. Special thanks
also go to his co-editor, Michael Gizzi, for the extremely productive exchange that made
this exciting book project possible. It was a great pleasure to work together on the book
and to have the opportunity to learn so much in the process.
Bibliography
Creswell, J. W., & Poth, C. N. (2018). Qualitative inquiry and research design: Choosing among five ap-
proaches (4th ed.). Sage.
Flick, U. (Ed.). (2014). The SAGE handbook of qualitative data analysis. Sage.
Kuckartz, U., & Rädiker, S. (2019). Analyzing qualitative data with MAXQDA: Text, audio, and video.
Springer Nature Switzerland. https://doi.org/10.1007/978-3-030-15671-8
Morgan, G. A., Leech, N. L., Gloeckner, G. W., & Barrett, K. C. (2007). SPSS for introductory statistics:
Use and interpretation (3rd ed.). Erlbaum.
van Gog, T., & Rummel, N. (2010). Example-based learning: Integrating cognitive and social-cognitive
research perspectives. Educational Psychology Review, 22(2), 155–174.
https://doi.org/10.1007/s10648-010-9134-7
Woolf, N. H., & Silver, C. (2018). Qualitative analysis using MAXQDA: The five-level QDA method.
Routledge.
Using MAXQDA in Ethnographic Research:
An Example with Coding, Analyzing, and Writing
Danielle N. Jacques
Abstract
Using the example of my 2018 Master’s thesis on public transportation in Senegal, I show
how MAXQDA may be used to code and analyze ethnographic data. Field observations and
semi-structured interviews were conducted to investigate the “car rapide” minibus system
as a space for cultural production and participation while situating it within a larger polit-
ical discourse on modernity. Coding was conducted in three cycles using a grounded the-
ory approach in conjunction with descriptive and thematic coding. Highlighter pen codes
were utilized to perform a descriptive coding cycle in which each type of bus was cata-
logued. Thematic codes were superimposed on top of the descriptive highlight codes. The
resulting code system was organized using the Creative Coding Tool, and patterns and re-
lationships were identified using the Code Relations Browser. Complex Coding Queries
were conducted to add context and color to the relationships identified in the Code Rela-
tions Browser. The thesis that resulted from this work provided an ethnographic account
of the soon-to-be-retired car rapide ecosystem while also situating the debate over its re-
tirement in political and historical contexts.
nomic backgrounds, the CR is one of the most instantly recognizable symbols of Senegal
today.
Despite their cultural significance, however, the car rapide’s reign over the streets of
Dakar may soon be coming to an end as a result of the targeted initiatives of the Emerging
Senegal Plan (ESP). The ESP is a developmental framework that aims to achieve middle-
income status in Senegal by 2035 through a series of structural transformations of the
economy, the promotion of human capital, and good governance.
My research sought to understand the car rapide as a space for cultural production and
participation, while also situating it within a larger political discourse on development and
modernity. While the central goal of my research was to provide an ethnographic account
of the soon-to-be-retired car rapide “ecosystem”—everything from the way in which one
rides the CR, how its routes and fare systems work, and how it compares to other forms of
public transit in the city—it also asked the following questions: How do middle-class Da-
karois feel about the loss of a cultural icon in the name of “modernity”? How do they envi-
sion the future of public transportation in Dakar?
Coding was conducted in three cycles. First, descriptive coding was applied to each
transcript in which general topics of conversation were identified. Next, the transcripts
were revisited using initial coding, in which tentative codes were applied to the data based
on emerging themes, ideas, and theories. In vivo coding, the act of creating codes using
direct quotes and phrases from informants, was also applied at this stage. Finally, a third
cycle of coding was applied in which the initial codes were refined and further analyzed.
Utilizing a grounded theory lens enabled me to uncover themes that I would not have
found if I had been approaching the data from a purely developmental framework, as is
often the case when it comes to public transportation. Literature on public transportation
in developing countries often adopts an urban planning lens and stresses the need to re-
duce congestion and motorization. Dakar in particular has been the focus of many trans-
portation studies due to its status as a pilot city for many of the World Bank’s transporta-
tion infrastructure projects. Although my data certainly spoke to these themes, coding in-
ductively from a grounded theory perspective allowed me to uncover a larger political dis-
course about what it means to be “modern” in Dakar.
Fig. 1: Descriptive codes applied to the text using the highlighter tool, allowing the researcher
to visualize shifts in the topic of conversation
Fig. 2: Highlight codes are semi-transparent and change color when two or more colors are ap-
plied to the same segment of text. Used descriptively, they can help visually identify
when two or more topics of conversation overlap
Furthermore, the semi-transparent nature of the highlighter pens allowed me to not only
visualize shifts in conversation, but to also take note when two or more buses were being
directly referenced or compared together. Take, for example, Fig. 2, where you can clearly
see that the respondent referenced the car rapide (blue) and Tata (pink) buses in the same
sentence, changing the paragraph’s color to a new purple, while comparing both to the
Dem Dikk bus (green).
Descriptive highlight codes were renamed directly in the Code System so that the name
of each code was no longer the name of the respective color but instead matched the type
of bus being defined; in that regard, “Blue” became “CAR RAPIDE,” “Pink” became
“TATA,” and so on.
By descriptively coding each mode of public transit in my first cycle, I laid the ground-
work for a more complex analysis in later phases, as I could then leverage tools such as the
Code Relations Browser. Because subsequent coding cycles involved the application of
thematic codes on top of these descriptive codes, further analysis would enable me to
identify, compare, and contrast the lived experiences on each type of bus.
During the second coding cycle, I also created descriptive codes for the various actors involved in operating the bus, such as the drivers and
apprenti (young men in charge of collecting fares).
Other codes created in this cycle were more thematic in nature and captured the larger
political arguments made by my interlocutors surrounding the modernization of public
transit in Dakar. Thus, while I coded for different aspects of waiting for, boarding, and rid-
ing the buses in Dakar, I also coded more abstract themes, such as the responsibility of the
State to its citizens, respect for others, and teranga (hospitality).
The descriptive highlight codes created in the previous cycle were turned off for the
duration of the second and third coding cycles. Codes, including highlight codes, can be
toggled on and off by right-clicking in the coding stripe area on the left-hand side of the Docu-
ment Browser window (Fig. 3). Hiding codes through this tool removes them from the dis-
play in the Document Browser without deleting them from the Code System. In this way,
codes can be hidden and recalled, allowing the researcher to “declutter” the Document
Browser view and focus on one or multiple codes, either for aesthetic or analytical reasons.
Fig. 3: Clicking on the gear wheel icon or right-clicking in the coding stripe area on the left-hand
side of the Document Browser allows the researcher to toggle the view of certain codes
without deleting them from the code system. This tool may be leveraged for analytical or
aesthetic reasons, such as to focus on just a few codes or to “declutter” the view
In vivo codes were also applied at this stage using the Code in Vivo icon in the toolbar of
the Document Browser. In vivo codes use respondents’ own words or phrases to capture
“participants’ words as representative of a broader concept in the data” (Birks & Mills, 2015,
p. 90). Kuckartz (2014) notes that in vivo codes “enable us to access the participant’s ob-
servations directly, without obstructing them by the theories we develop” (p. 23). I created
in vivo codes when a respondent said something that felt particularly striking or profound,
or that underlined the ways in which they thought about public transportation and mo-
dernity in Dakar. In this way, nearly all of my in vivo codes encapsulated my respondents’
political arguments for and against modernity, as well as their attitudes toward the gov-
ernment's ability to deliver on its promises of development.
Because in vivo codes are inserted into the Code System window as a top-level code, I
organized them manually by creating a new code called “In Vivo Codes” and by moving all
in vivo codes, each containing one coded segment, as sub-codes of this category (Fig. 4).
Fig. 4: In vivo codes, each containing one coded segment, are sub-codes of a top-level category
called “In Vivo Codes” for organizational purposes
Similarly, I created a code using the Gold Star emoticon and renamed it “Great Quotes.”
As I read through my transcripts, I coded sentences or passages that were particularly good
at illustrating certain points as “Great Quotes.” Sometimes but not always overlapping
with the in vivo codes, this category allowed me to collect passages of text that would later
serve as a repository of quotes to use when writing my thesis.
Once all transcripts had been initially coded in the second cycle, I revisited each tran-
script for an iterative third cycle in which I worked to apply new and existing codes to the
data until no new themes or concepts emerged. Codes created towards the end of the sec-
ond cycle were applied to transcripts that were coded earlier during the second cycle; like-
wise, transcripts that were coded late in the second cycle were reviewed for consistency
with codes that were created early within the second cycle. Several new codes were also
generated at this stage and were applied across all transcripts when applicable. Through
working iteratively in this way, I ensured the consistent application of codes across the
data and reached data saturation.
The Lexical Search tool (available in the Analysis menu) was used at this stage to create
a code for every instance of a particular word that I had identified as central to my research.
Early on in my fieldwork, my interlocutors surprised me by arguing that “indiscipline” and
“undisciplined” people were the root cause of Dakar’s public transportation woes. Recog-
nizing the concept as interesting and important, I began probing the meaning of these
terms with my respondents whenever they came up in conversation. Although I could
have coded for “indiscipline” and its adjectival forms manually, I chose to perform a Lexical
Search so that no instances were accidentally missed or omitted. The Lexical Search
returned 38 instances of the word in my data set (Fig. 5), which resulted in 24 auto-coded
segments named “indiscipline.”
Fig. 5: The Lexical Search tool identified 38 unique instances of the French terms “indiscipline”
and “indiscipliné” in the data, which were then auto-coded with a weight of 0
When choosing the amount of text to be auto-coded in Lexical Search, I opted to code
the entire paragraph in which the terms “indiscipline” or “indiscipliné(e)(s)” occurred (Fig.
5). Because I was primarily concerned with understanding exactly what it meant to be “un-
disciplined” in and outside of the context of public transportation, coding the full para-
graph in which the term occurred set the groundwork for future analysis using visual tools,
and in particular the Code Relations Browser.
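MAXQDA performs this search and auto-coding through its dialog; purely to illustrate the logic, the following Python sketch (invented example data and a hypothetical helper function, not MAXQDA's internals) shows why expanding each hit to its surrounding paragraph can turn many word hits into a smaller number of coded segments, as happened with my 38 hits and 24 segments.

```python
import re

# A miniature transcript; paragraphs are separated by blank lines
# (invented example data, not text from the actual study).
transcript = (
    "Le probleme, c'est l'indiscipline des chauffeurs.\n"
    "Les passagers aussi sont indisciplines.\n"
    "\n"
    "Le Dem Dikk est plus moderne.\n"
)

def autocode_paragraphs(text, pattern):
    """Return (number of word hits, paragraphs to auto-code).

    Expanding every hit to its whole paragraph and de-duplicating is why
    38 word hits can collapse into 24 coded segments: several hits often
    fall within the same paragraph.
    """
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    hits, coded = 0, []
    for para in paragraphs:
        matches = re.findall(pattern, para, flags=re.IGNORECASE)
        hits += len(matches)
        if matches:
            coded.append(para)  # one auto-coded segment per paragraph
    return hits, coded

hits, segments = autocode_paragraphs(transcript, r"indisciplin\w*")
print(f"{hits} word hits -> {len(segments)} auto-coded segments")
# -> 2 word hits -> 1 auto-coded segments
```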
As I reviewed coded segments in the Smart Coding Tool (Fig. 6), I created a new code whenever an additional theme emerged
and applied it to the segments in which that theme appeared. In other instances, pre-ex-
isting codes were applied to segments where the code had been mis-
takenly missed in the second coding cycle. This process enabled me to further investigate
the notion of “indiscipline” and other abstract codes by categorizing them into smaller
sub-themes and categories.
Fig. 6: The Smart Coding Tool was used to review coded segments in any given code, review
other codes that are applied to the segment concurrently, and to add, delete, or refine
codes as needed
The code system was then further organized and refined into top-level categories with sub-
categories using the Codes > Creative Coding tool. Although codes can be organized into
hierarchies directly in the Code System window itself, I prefer the Creative Coding tool be-
cause its interactive nature allows one to drag and drop codes around the screen, think
through relationships, and save the results as a map in MAXMaps, which may be used later
in the analytical process to further visualize relationships between themes and concepts.
Many of the codes created during my second and third cycles were also descriptive in
nature and catalogued aspects of the public transportation experience, from waiting for
the bus to the buses’ routes, timetables, and fares. These codes were brought into
the Creative Coding tool and grouped thematically under the top-level category “Bus Jour-
ney/Experience” (Fig. 7). For example, the codes “theft,” “fights/disputes,” “old/dilapi-
dated vehicles,” and “traffic accidents” were grouped together and added as sub-codes un-
der the new category “safety.” “Safety” was then linked to “Bus Journey/Experience” as a
second level sub-category.
Through grouping my descriptive codes thematically and by linking them as second,
third, or even fourth level codes underneath “Bus Journey/Experience,” I created a net-
work of codes that described the various dimensions of traveling on and operating public
transportation. These codes and their respective segments later became the backbone of
my ethnographic account of the car rapide in Dakar.
Fig. 7: Descriptive codes capturing the various dimensions of public transportation (safety, ac-
tors involved, behavior, culture, etc.) were arranged thematically in the Creative Coding
tool. Directional arrows imply hierarchy, and the established hierarchy is saved and ap-
plied to the Code System in the form of codes and sub-codes
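Conceptually, what a Creative Coding session writes back to the Code System is a simple tree of codes and sub-codes. A minimal Python sketch of that structure, using code names from the text above (the nesting shown is illustrative, not my complete code system):

```python
# The hierarchy produced in the Creative Coding session, as a nested
# dictionary: each parent code maps to its sub-codes.
code_system = {
    "Bus Journey/Experience": {
        "safety": {
            "theft": {},
            "fights/disputes": {},
            "old/dilapidated vehicles": {},
            "traffic accidents": {},
        },
        # ... further second-level categories (routes, fares, actors, etc.)
    },
}

def print_tree(node, depth=0):
    """Print each code indented one level per position in the hierarchy."""
    for name, children in node.items():
        print("  " * depth + name)
        print_tree(children, depth + 1)

print_tree(code_system)
```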
6 Memos
Throughout my coding and analytical processes, I relied extensively on writing document
memos. Memos have a broad application in MAXQDA and can be applied at multiple lev-
els of the project, including directly to documents or codes. I predominantly utilize in-
document memos as a way to capture and categorize my thoughts, questions, and theories
as I work through the data. Although the majority of my memos were “free” memos with-
out any label, I also utilized memos to document questions I had about particular refer-
ences made by my respondents. A dedicated memo type was used sparingly to mark particularly
important information with follow-up thoughts. Because my data was collected in my
non-native language, I also occasionally used memos to define Senegalese French collo-
quialisms and other language that I was not immediately familiar with.
Although many of my memos were short in length, they served as powerful “notes to
self” and, on more than one occasion, helped inform or jumpstart sections of my final the-
sis. Take, for example, the memo in Fig. 8; just two sentences in length, this memo ultimately
represented one of the most important opinions expressed by my interlocutors and, con-
sequently, one of the most important themes underlining their view of the modernization
of public transportation in Dakar.
Fig. 8: A short memo inserted in the text of Daouda’s interview captured my thoughts in the
moment, and later became an important analytical tool as I thought through my data set
and began writing the final thesis
7 Analysis
I began my analysis of coded data with the Code Relations Browser from the Visual Tools
menu. The Code Relations Browser is a visual representation of the frequency of overlap-
ping codes and aids in the identification of patterns and relationships in the data.
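MAXQDA computes this matrix internally; as a rough illustration of the idea, the following Python sketch counts overlapping code pairs over a handful of segments (the segment positions and data are invented toy values, not from my project):

```python
from collections import Counter
from itertools import combinations

# Coded segments as (document, code, start_char, end_char) tuples
# (invented toy data for illustration).
segments = [
    ("interview_1", "CAR RAPIDE",   0, 120),
    ("interview_1", "safety",      40,  90),
    ("interview_1", "overcrowding", 50, 110),
    ("interview_1", "DEM DIKK",   200, 300),
    ("interview_1", "safety",     260, 280),
]

def overlaps(a, b):
    """Segments co-occur when they lie in the same document and share text."""
    return a[0] == b[0] and a[2] < b[3] and b[2] < a[3]

# Each cell of the matrix counts how often a pair of codes overlaps.
matrix = Counter()
for a, b in combinations(segments, 2):
    if a[1] != b[1] and overlaps(a, b):
        matrix[tuple(sorted((a[1], b[1])))] += 1

for (code_a, code_b), n in sorted(matrix.items()):
    print(f"{code_a} x {code_b}: {n}")
```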
Because I had structured my coding so that thematic codes were superimposed on top
of descriptive highlight codes, the Code Relations Browser enabled me to compare and
contrast the “journey/experience” on each type of bus. This was achieved by first activat-
ing the descriptive highlight codes in the Code System window, and then by setting the
Code Relations Browser rows to “all codes” and the columns to “all activated codes.”
The resulting output allowed me to view the frequency at which codes such as “safety”
were mentioned in relation to the four main types of public transit. This visual representation
of themes was critical in the composition of my ethnography of the car rapide; at a glance,
I was able to compare and contrast the buses and identify the most common “problems”
on each bus, as reported by my respondents. For example, Fig. 9 clearly shows that “safety”
and “overcrowding” were larger concerns for respondents on the car rapide than they were
for them on the Dem Dikk or the Tata buses.
Fig. 9: The Code Relations Browser enables the researcher to visualize the frequency of overlaps
between two codes. Here, descriptive highlight codes (columns) are analyzed with the-
matic codes (rows) to quickly visualize the frequency of codes such as “safety” across sev-
eral modes of public transportation
While the Code Relations Browser was key in identifying patterns and in comparing and
contrasting my interlocutors’ experiences on each type of bus, the results in the Browser
lacked the color and detail that would come to inform my ethnographic account of public
transportation in Dakar. In other words, while the output told me that “disputes/fights”
were discussed more frequently when referring to the car rapide than to the Dem Dikk or
the Tata, it did not provide any context behind this relationship. Therefore, to answer
questions such as “What types of fights and disputes does one encounter on the car rapide?
How do these fights differ from those you find on the Dem Dikk and Tata?” I had to conduct
further analysis using the Complex Coding Query tool.
The Complex Coding Query tool is accessed via MAXQDA’s Analysis tab in the tool rib-
bon. The tool retrieves coded segments of text using a variety of nuanced functions.
Whereas the Retrieved Segments window will, by default, simply list coded segments of
text in one or more activated categories, the Complex Coding Query allows one to perform
a complex search for coded segments based on two or more specified criteria. Therefore,
in order to identify the segments of text in my dataset that were coded both as “car rapide”
and as “disputes/fights,” it was not enough to activate both codes in my Code System win-
dow—doing so would result in a long list of segments coded as “car rapide,” followed by a
list of all segments coded as “disputes/fights.” Instead, I conducted an “overlapping” Com-
plex Coding Query using both codes “car rapide” and “disputes/fights” (Fig. 10). This func-
tion resulted in 19 segments that contained both codes. The results were then reviewed
directly in the Retrieved Segments window, and I took physical written notes on the find-
ings.
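The logic of an "overlapping" query can be pictured as an intersection of character spans. Here is a simplified Python sketch with invented segment positions (MAXQDA performs such queries on its own project data, so this is only a conceptual model):

```python
# Coded segments of one transcript as (code, start_char, end_char)
# (invented positions for illustration).
segments = [
    ("CAR RAPIDE",      100, 400),
    ("disputes/fights", 150, 220),
    ("disputes/fights", 900, 950),  # falls outside any CAR RAPIDE segment
]

def overlapping_query(segments, code_a, code_b):
    """Return the spans where code_a and code_b overlap.

    This mirrors an "overlapping" Complex Coding Query: only text that
    carries BOTH codes is retrieved, whereas simply activating both codes
    lists the segments of each code one after the other.
    """
    spans_a = [(s, e) for c, s, e in segments if c == code_a]
    spans_b = [(s, e) for c, s, e in segments if c == code_b]
    return [
        (max(s1, s2), min(e1, e2))
        for s1, e1 in spans_a
        for s2, e2 in spans_b
        if s1 < e2 and s2 < e1  # the two spans share at least one character
    ]

print(overlapping_query(segments, "CAR RAPIDE", "disputes/fights"))
# -> [(150, 220)]
```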
The process of identifying patterns in the Code Relations Browser, adding color and
context to these patterns using the Complex Coding Query, and reviewing the results in
the Retrieved Segments window was repeated countless times for each new relationship
identified throughout the analysis phase of my research. Although coding had officially
ended by this phase in the research, I added segments to the “Great Quotes” category as I
worked through the data and identified important quotes to be used in the final thesis.
8 Lessons learned
My research journey was a highly iterative process in which I constantly created, applied,
and refined codes and categories. These categories were later explored on a case-by-case
basis first using the Code Relations Browser to identify patterns, and subsequently using
Complex Coding Queries to add context and detail to these patterns. To that end, the Re-
trieved Segments window was the most important and most utilized analytical tool in my
project.
Fig. 10: The Complex Coding Query tool is used to identify segments of text which have all of the
codes listed in section A; in this case, the tool has identified 19 segments that were coded
with both “CAR RAPIDE” and “disputes/fights”
Admittedly, my analytical process was influenced by my chosen coding strategies, and es-
pecially by the quality of my codes themselves. Although I refined and categorized my
codes in several cycles and using various tools, there may have been opportunities to refine
my codes even further; for example, with “disputes/fights,” a Complex Coding Query or
Overview of Coded Segments could have been avoided had I better refined the category
upfront during my coding cycles. In other words, rather than reviewing coded segments to
distill the various types of disputes one might find on a car rapide, I could have preemp-
tively coded each type rather than lumping them all into one large “bucket.”
Saldaña (2016) notes that initial coding, my chosen second-cycle coding strategy, may
“alert the researcher that more data are needed to support and build an emerging theory”
(p. 115). Indeed, I found this to be the case as I worked through my dataset and became
increasingly interested in my interlocutors’ notion of “indiscipline.” The frequency of a
given code may not necessarily correspond with its importance or relevance to the re-
search question; “indiscipline” accounted for just 24 coded segments out of a total of 1,200
(just 2% of all coded segments). Nevertheless, these 24 segments ultimately
proved fruitful in describing how Dakarois envision “indiscipline” on public transporta-
tion in the city. However, they fell short of fully describing how “indiscipline” in the public
sphere is linked to their understanding of “good governance,” state responsibility, and de-
velopment in the post-colonial context. Given the tight time constraints of my MA pro-
gram, returning to the field to collect more data proved impossible, but this gap in the data
leaves the door open for further investigation should I go on to pursue a doctoral degree
in the future.
Furthermore, because many of the tools in MAXQDA can be accessed two or sometimes
even three different ways, one may access a particular function or tool one way, only to later
discover an alternative way of accessing or working with that tool. For example, my process
of first using the Code Relations Browser to identify relationships and then conducting a
Complex Coding Query to pull corresponding segments could have been simplified, had I
known at the time that double-clicking on the square or circle icon in the Code Relations
Browser will automatically pull up those intersecting segments in the Retrieved Segments
window, without the need of actually pulling up the Complex Coding Query dialog and
manually running the query.1
The biggest lesson learned from this mini-ethnography, however, is that MAXQDA is a
truly dynamic tool that can flex to meet the needs and unique work style of the re-
searcher—there is no one “right” or “wrong” way to leverage its analytical and visual tools.
Even today, with four years of MAXQDA experience under my belt, I am still learning new
ways of accessing and utilizing the various tools in the software. While I perhaps could
have been more thorough in refining certain codes upfront in my coding process, or saved
time by double-clicking in the Code Relations Browser rather than running a separate
Complex Coding Query, my processes still enabled me to arrive at a robust analysis of the
data, a thesis I am proud of, and a strong foundational knowledge of MAXQDA’s functions
that I continue to build upon and draw from today as I use the software in a professional
capacity as a MAXQDA trainer and market researcher.
Bibliography
Birks, M., & Mills, J. (2015). Grounded theory: A practical guide. Sage.
Kuckartz, U. (2014). Qualitative text analysis: A guide to methods, practice & using software. Sage.
Kuckartz, U., & Rädiker, S. (2019). Analyzing qualitative data with MAXQDA: Text, audio, and video.
Springer Nature Switzerland. https://doi.org/10.1007/978-3-030-15671-8
Saldaña, J. (2016). The coding manual for qualitative researchers (3rd ed.). Sage.
1 It should be noted, however, that double-clicking in the Code Relations Browser will result in an
“intersection” complex query. The Complex Coding Query will therefore need to be used when the
researcher wishes to conduct a query based on other attributes (such as “overlapping,” “near,”
etc.), or when the researcher is also interested in filtering by weight or by user.
Using MAXQDA in Qualitative Content Analysis
Natalie Santos, Vera Monteiro, Lourdes Mata
Abstract
This chapter illustrates a qualitative project aimed at understanding the similarities and
disparities that occur when students’ and teachers’ conceptions of assessment are com-
pared. We used a multiple-case study design, with five third-grade teachers and their stu-
dents. The data were gathered through both single-person (with teachers) and focus group
(with students) interviews. The purpose of this chapter is to provide a detailed description
of how the visual tools of MAXQDA were used to compare how elementary Portuguese
teachers and students conceive mathematics assessment. We present this case illustration
in four sections. The first section contains the background, objectives, and guiding meth-
odology of the project. The second section describes the preparation of the data and the
development of the coding system by using both concept-driven and data-driven catego-
ries. In the third section, we explain how we compared and contrasted all summarized data
in terms of categories by using the MAXQDA visual tools: Document Portrait, Code Matrix
Browser, and Code Relations Browser. The final section describes how we used the Sum-
mary Grid and the Summary Tables to organize our findings and draw our conclusions. We
conclude with a review of the pros and cons of alternative ways of comparing single-person
and focus group interviews using MAXQDA.
1 Introduction
There are different approaches to classroom assessment. The predominant one in schools
is Assessment OF Learning (AoL). Its purpose is to certify learning and report students’
progress in school to parents and students, thereby promoting extrinsic motivation and
social comparison. Assessment FOR Learning (AfL) is designed to assist teachers and stu-
dents in improving teaching and learning by providing them with specific feedback that
both need to make adjustments to the learning process (Azis, 2015). Students and teachers
believe that assessment is crucial for the efficiency of teaching and learning processes, and
both need a shared understanding of the purposes of assessment in meeting learning and teaching
goals. This shared understanding of what is being worked on is essential to help students
learn from their learning experiences (Andersson, 2016). In teacher-and-student interac-
tions and peer interactions, knowledge acquisition is dependent on the shared represen-
tation of the task and the context of learning. According to Gipps (1999) and Andersson
(2016), assessment can be viewed as an intersubjectivity setting, where shared under-
standing between teacher and student is central to learning outcomes. Carless (2009)
states that such shared understanding improves assessment integrity and the quality of
student learning experiences. Therefore, it is essential that student and teacher concep-
tions of assessment are aligned.
Few studies have compared teachers’ and their students’ assessment conceptions (e.g.,
Brown, 2008; Remesal, 2006). These previous researchers have found that, in general, their
conceptions differ. While students have a clear conception that assessment has a funda-
mental purpose—the certification of student learning—teachers’ conceptions of assess-
ment are somewhat unclear but show a strong tendency toward the purpose of improving
teaching and learning (Remesal, 2006). Since teachers and students are directly involved
in the same pedagogical process, including assessment, it is strange that they perceive it
as having different purposes.
Our study is part of a larger research project that aimed to study how teachers’ and
students’ conceptions of assessment and teachers’ assessment practices were related to
students’ outcomes. In particular, we sought to explore the conceptions that Portuguese
elementary teachers and students have of assessment and investigate whether these con-
ceptions are aligned. We used MAXQDA’s tools to help us in our research process.
2013). The teachers who participated (one male and four female) had between 3
and 25 years of experience, and class sizes ranged from 11 to 23 students (82 third-grade
students in total).
The data were gathered through both single-person interviews with teachers and focus group
sessions with students. The conversations with teachers, with an approximate duration of
45 minutes each, addressed five assessment topics based on the literature (Azis, 2015;
Remesal, 2006): (1) definition, (2) targets, (3) purposes, (4) practices, and (5) criteria. The
individual interviews were experience focused (Brinkmann, 2013) to elicit accurate reports
of teachers’ experiences regarding assessment. The conversations with students, with an
approximate duration of 30 minutes, were conducted with groups of four to five students
(two groups for Class D, three groups each for Classes B and E, and four groups each for
Classes A and C). Only three assessment topics were addressed in the focus groups to
maintain children's concentration: (1) definition, (2) purposes, and (3) practices. Our ob-
jective was not to reach consensus but to collect all students’ experiences and beliefs about
assessment. Therefore, the focus group moderator kept the discussion informative rather
than argumentative, ensuring the participation of all students.
Closely related codes were combined into smaller groups to create sub-categories. Later, these sub-catego-
ries were organized into both concept-driven and data-driven categories through a latent anal-
ysis in which we tried to find the underlying meaning of the participants' discourse
(Bengtsson, 2016).
Starting with the typology of assessment conceptions previously described by Brown
(2008, 2013, 2018) (Assessment for (1) improvement, (2) school’s accountability, and (3)
students’ accountability), categories and sub-categories were progressively redefined
through a cyclical process, creating new categories when necessary to fit the reality of our
data (Miles, Huberman, & Saldaña, 2014).
Fig. 3 presents a concept map created in MAXMaps (Visual Tools > MAXMaps) with the con-
ceptions described by Brown (2008, 2013, 2018) and our final four categories. The catego-
ries Students’ Accountability and Improvement of Learning and Teaching were very similar
to those described by Brown (2008, 2013, 2018). Our External Reporting category had some
similarities to Brown’s School’s Accountability conception since our participants consid-
ered that assessment involved some type of external reporting. Still, the school and teach-
ers’ accountability was not as present in our participants' discourse as in Brown’s assess-
ment conceptions, so we highlight this difference in the name of the category. Externally
Motivating Students was a data-driven category, defined inductively from the data. The
sub-categories are also represented in Fig. 3. The widths of the linking lines indicate
the frequency at which these sub-categories were assigned in the participants’ discourse.
Fig. 3: Visualization of the Assessment Conceptions’ Categories using the MAXMaps Hierarchical
Code-Subcodes Model
The categories and sub-categories were organized in the MAXQDA’s Code System as sub-
codes of the top-level code Assessment Conceptions (see Fig. 2). The order of the categories
was based on the literature (Azis, 2015). We organized our categories within a continuum
that moves between an assessment OF learning pole (AoL, with a greater focus on certifi-
cation and accountability) and an assessment FOR learning pole (AfL, with greater empha-
sis on improving learning and teaching) (see Fig. 3). We assigned different colors for each
category. The colors helped us to differentiate between different conceptions in the visual
analysis tools. Code memos were used to describe the meaning of a category as clearly as
possible (including their theoretical background, inclusion and exclusion criteria of appli-
cation, examples, and differentiation from other categories).
Our unit of analysis was a unit of meaning (i.e., one or more consecutive sentences with
a common meaning). Consequently, the categories and sub-categories were applied to
pieces of text with the same meaning that could include several sentences or several par-
ticipants' interactions in the focus group interviews. For example, in Fig. 4, we observed
several comments from three Class A students, who defined assessment as the practice of
assigning grades and marks to students’ work. Therefore, all the interactions between the
three students were coded with the sub-category Grading and its respective category, Stu-
dents’ Accountability. In Fig. 4, we can see that there are four coding stripes: one for the topic
being discussed (Assessment definition, in black), one to identify the group that we are an-
alyzing (Focus group 1, also in black), one for the assessment conception category and one
for the sub-category (both in blue). Any given block of data was only ever coded with
sub-categories belonging to a single assessment conceptions category. We only coded units
of meaning that were relevant to our research questions. Hence, not every portion of
the interviews’ transcripts was coded. The categories and sub-categories were carefully
cross-examined by three external researchers and found to be descriptive of the data. For
intercoder consistency, a second coder, working as a supervisor, confirmed the analyses of
the first coder. Discrepancies were discussed by three external researchers.
lot about assessment as a tool for Students’ Accountability—the category most mentioned
by his/her students. For Class C, the teacher’s discourse was most often focused on Im-
provement, while students focused on Students’ Accountability. If we wanted to know the
exact percentage of coded segments in each category, we could complement the infor-
mation provided by the Document Portrait with the Code Coverage function (Analysis >
Code Coverage > Texts, Tables and PDFs). This feature indicates the number of characters
coded in the document with our assessment conceptions categories and can calculate per-
centages based on the total characters in the document (Percentages of the entire text) or
based only on the coded characters (Percentages of “coded”).
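The arithmetic behind these two percentage options is simple; a short Python sketch with invented character counts illustrates it:

```python
# Characters coded per category in one teacher interview (invented numbers).
coded_chars = {"External Reporting": 5200, "Students' Accountability": 3100}
document_chars = 20000  # total characters in the document

total_coded = sum(coded_chars.values())
for category, chars in coded_chars.items():
    pct_text = 100 * chars / document_chars  # "Percentages of the entire text"
    pct_coded = 100 * chars / total_coded    # "Percentages of 'coded'"
    print(f"{category}: {pct_text:.1f}% of the text, {pct_coded:.1f}% of coded")
```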
Fig. 5: Visualization of the category most often covered, using MAXQDA’s Document Portrait (Sorting by
color frequency, with 900 tiles). Purple = External Reporting; blue = Students’ Accountabil-
ity; red = Externally Motivating Students; green = Improvement of Learning and Teaching
One useful aspect of the Document Portrait feature is that it creates not only a repre-
sentation of the time the categories were discussed but also indicates how many times a
unit of analysis was coded with the same color. This information indicates the number of
times a category was reintroduced into the discussion during the interview. This data is
displayed when you hover your cursor over a tile with the color of interest (see boxes dis-
played in Fig. 5). These are the same values displayed by the Code Matrix Browser (see next
section). For example, in Fig. 5, for Teacher of Class A the number of coded segments with
purple (External Reporting) was three (23% of the coded segments in the document evalu-
ated for the Document Portrait). Conversely, the topic Students’ Accountability (in blue)
was mentioned seven times (54% of the evaluated coded segments). It seems that Teacher
A spent more time talking about External Reporting (there were more characters coded
with this category), but she/he kept bringing the Students’ Ac-
countability category back into the discussion during the interview. Hence, the information reported in the boxes
provided valuable information for our analysis, especially considering how we coded the
focus group data. Since one coded segment could include several sentences and several stu-
dents’ interactions as long as they remained within the same category, extensive coverage
could be observed even when the participants were just repeating or rephrasing each
other’s ideas, with little contribution to the discussion. In addition, since the names of the
speakers were coded (“Student A,” “Student B,” and “Student C”, as shown in Fig. 4), it was
possible that we artificially increased the length of their contributions. Therefore, we
needed to continue exploring our data.
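As a rough sketch of how the Document Portrait relates to these numbers, the following Python snippet allocates 900 tiles in proportion to coded characters and reports the segment shares shown on hover. The character counts are invented, and the segment counts are chosen to reproduce the 23% and 54% from the Teacher A example above; MAXQDA's own layout logic is more elaborate than this.

```python
# Distribute the Document Portrait's 900 tiles in proportion to the number
# of characters coded per category, and report the segment counts shown on
# hover (all input numbers are illustrative).
TILES = 900
chars = {"purple": 2600, "blue": 2100, "green": 1300}  # coded characters
segs = {"purple": 3, "blue": 7, "green": 3}            # coded segments

total_chars, total_segs = sum(chars.values()), sum(segs.values())
for color in chars:
    tiles = round(TILES * chars[color] / total_chars)
    share = 100 * segs[color] / total_segs
    print(f"{color}: {tiles} tiles, {segs[color]} segments "
          f"({share:.0f}% of evaluated coded segments)")
```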
The Code Matrix Browser: How often was a category or sub-category mentioned?
We employed the Code Matrix Browser (Visual Tools > Code Matrix Browser) to visualize the
categories and all the specific aspects of the categories (or sub-categories) that the partic-
ipants discussed. This tool creates a matrix with activated documents in the columns and
activated codes in the rows. The Code Matrix Browser displays how many times the codes
were assigned to a document (see Fig. 6). A square indicates if the code was present in the
document or not, and the size of the square indicates how many times it was mentioned.
In Fig. 6, the symbol sizes were calculated considering the total frequency of coded seg-
ments in the document (i.e., by column), so the frequency was relativized. If we click on
the symbol Display nodes as values, we can view information about how many times the
codes were assigned to a document. This is the same information that was accessible in
the Document Portrait when one’s cursor hovers over a tile.
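To make "relativized by column" concrete, here is a small Python sketch with invented frequencies showing how each symbol's size can be scaled by its document's total:

```python
# Code frequencies per document (invented counts). Rows are codes and
# columns are documents, as in the Code Matrix Browser.
counts = {
    "Grading":     {"Teacher A": 7, "Students A": 12},
    "Improvement": {"Teacher A": 2, "Students A":  9},
}

docs = sorted({doc for row in counts.values() for doc in row})
# Relativizing by column scales each symbol by the code's share of all
# coded segments within that document, keeping documents of different
# lengths comparable.
col_totals = {d: sum(row.get(d, 0) for row in counts.values()) for d in docs}

for code, row in counts.items():
    for d in docs:
        n = row.get(d, 0)
        print(f"{code:12s} | {d}: {n:2d} segments "
              f"({n / col_totals[d]:.0%} of column)")
```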
We thought that if (nearly) all sub-categories of an assessment conception category
were present in a document, it would indicate the richness of the content addressed by the
participants, reflecting how deeply they had thought about specific aspects of assessment.
For example, the teacher’s discourse from Class A about External Reporting was very rich,
mentioning several aspects about how she/he used assessment for external reporting.
Conversely, the discourse on Students’ Accountability was limited to the aspects of stu-
dents’ Grading. The Code Matrix Browser also allowed us to easily compare the teacher’s
interviews with their students’ focus group discussions. In contrast to their teacher, stu-
dents of Class A had a more detailed description of the assessment aspects related to Stu-
dents’ Accountability. They also mentioned almost all sub-categories of the Externally Mo-
tivating Students and the Improvement of Learning and Teaching categories. In addition,
this display allowed us to recognize that even though both the teacher of Class C and his/her stu-
dents mentioned most of the sub-categories of the Improvement category, the teacher
mentioned them more often than the students did (the nodes in the Teacher of
Class C document are larger than the nodes of his/her students).
Fig. 6: Visualization of the category most mentioned by the participants using MAXQDA’s Code
Matrix Browser (relativized by column)
Having all the focus groups in one document allowed us to compare teachers’ and stu-
dents’ conceptions easily. However, we did not know whether all four focus group discussions we
performed with students of Class A were consistent in their views about assessment (be-
cause each document contained between two and four different focus group transcrip-
tions). Fortunately, by clicking on the Interactive Quote Matrix icon (first one on the left),
we had access to all the coded segments in the categories. These coded segments include
information about the source, highlighted in blue. By selecting this source information, we could jump to the
corresponding section within the document, displayed in the Document Browser, and we
could see the segment in its original context. Therefore, we could check which focus group
mentioned the category or sub-category. Still, this was an arduous task, so we decided to
use the Code Relations Browser to assess the consistency of our results with a more rapid
and straightforward approach.
The Code Relations Browser: When and by whom were the categories mentioned?
Each document in our Document Browser was coded with three sets of codes: One group
of structural codes identified the interview Topic that was being discussed by the partici-
pants; another identified which group of students’ interactions we were analyzing in the
students’ documents (remember that there were between two to four different focus
groups in the same document); and the last related to the categories and sub-categories of
Assessment Conceptions. Therefore, in students’ documents, three codes could be assigned
to the same text, identifying what they said about assessment, which group of students said
it, and when. Thus, if we wanted to know which focus group mentioned the assess-
ment categories, we needed to analyze the co-occurrence of codes in students’ documents.
This was done with the Code Relations Browser that generates matrices code by code of
one or several documents.
In Fig. 7, we display part of the Code Relations Browser for the students of Class A doc-
ument (Visual Tools > Code Relations Browser). We activated the Assessment Conceptions
codes and the Focus Group codes, and we chose the Intersections option in the context
menu. As we can see, all focus groups mentioned the sub-categories of the Students’ Account-
ability and Externally Motivating Students categories. Only Focus Group 3 mentioned sev-
eral sub-categories of the Improvement category. Therefore, the conception of assessment
as a useful tool for Improvement was not consistently present in all focus group discus-
sions.
Besides assessing the consistency of results across the different focus groups, the Code Relations Browser also allowed us to visualize the category with the most consistent presence in the single-person/focus group discussions (i.e., the one mentioned throughout nearly all topics of conversation). We created one display for each document, activating the Assessment Conceptions categories and the Topics of interview codes and choosing the Intersection of codes in a segment option. In Fig. 8, we observe that even though the Document Portrait and the Code Matrix Browser indicated that the discourse of the Teacher of Class A was more extensive and richer regarding the External Reporting purpose of assessment, the Code Relations Browser indicated that this category was only mentioned at the end of the interview. The category that was systematically mentioned throughout the interview was Students’ Accountability. Similarly, for students, the category most mentioned in all topics was Students’ Accountability, although Externally Motivating Students was also present in all topics, albeit less often.
Fig. 7: Visualization of the categories and sub-categories mentioned in each focus group discus-
sion of Class A, using MAXQDA’s Code Relations Browser
Fig. 8: Visualization of the category with the most consistent presence in the interview of
Teacher A and his/her students using MAXQDA’s Code Relations Browser
Further, since we had more than one focus group per class, we needed to assess the consistency of the results by observing each focus group separately. Using the Code Relations Browser option Only for segments of the following code, we limited the analysis to segments of one focus group at a time, as shown in Fig. 9. The matrix confirmed that the category consistently mentioned by all groups in all topics was Students’ Accountability.
Fig. 9: Visualization of the category with the most consistent presence in one of the focus groups of Class A using MAXQDA’s Code Relations Browser features
We used the Summary Grid (Analysis > Summary Grid) to organize our findings. With this tool, we could create thematic summaries for each document (displayed in the grid as columns) about the categories in the analyses (displayed in the rows). Since writing summaries can take a long time, we chose to summarize, for each document, only the categories most often covered (visualized with the Document Portrait), the categories broken down into the maximum number of aspects (visualized with the Code Matrix Browser), and the categories mentioned in (nearly) all the interview topics (visualized with the Code Relations Browser). Since the Summary Grid displayed the coded segments of the category under analysis in the middle window, we were always close to the data being summarized, making it easy to check our conclusions and to include translated quotes in a summary. We also included some ideas about why there were similarities or differences between teachers’ and students’ conceptions and how we could test these hypotheses in future studies.
After writing all summaries, we created a Summary Table with all our cases for systematic case comparison and contrast. This is easily done by clicking on the Summary Table icon in the Summary Grid window. In this table, all associated summaries are displayed together. We displayed our cases (documents) in the columns (teacher and students of the same class side by side) and the Assessment Conceptions categories in the rows. We inspected all categories and compared each teacher’s discourse with their students’ discourse (within-case analysis). We also conducted a cross-case analysis to deepen our understanding of the phenomenon (Miles et al., 2014). We wrote down our conclusions in a Microsoft Word document, including quotes and excerpts that demonstrated our findings. The process was straightforward because summaries are linked to the coded segments of interest.
Briefly, our results indicated some inconsistency between students’ and teachers’ assessment conceptions. If we ordered our participants along the continuum ranging from the assessment OF learning pole (AoL) to the assessment FOR learning pole (AfL) (Fig. 10), most of the students seemed to be predominantly at the AoL pole of the continuum, while most of the teachers were at the AfL pole. Only for Class A were the teacher’s and students’ conceptions of assessment aligned. In Class E, we also observed some similarities, but in the majority of classes, the teacher’s and students’ assessment conceptions were not aligned. We believe that this disparity may be due to inconsistencies between teachers’ conceptions and their assessment practices. We hope to explore this issue further in future studies.
4 Lessons learned
This chapter detailed a case study with both single-person and focus group interviews con-
ducted in the field of educational research. We hope that our explicit descriptions provide
guidance and inspiration to researchers who wish to use MAXQDA not only to study indi-
vidual cases but also for comparing cases. We illustrated several practical aspects of the
analyses, such as possible strategies for organizing the data, developing a code system, vis-
ualizing and summarizing data, and writing findings. Moreover, it is important to highlight
the following for those considering comparing single-person and focus group interviews:
Document organization: The way data are organized in the Document System affects whether we can directly compare two documents or two groups of documents in a single action. Because we were comparing teachers’ and students’ conceptions, we thought that keeping all focus groups in one document would facilitate the comparison between the subjects; in fact, it did help. However, it raised some difficulties when assessing consistency within the focus group data. Another possibility is organizing the data of each focus group in one single document and then storing all the focus group documents from the same class in a document group. Such an organization allows for a more efficient review of the data for each focus group, and most of MAXQDA’s features allow comparisons between document groups, so it is still possible to compare all students’ data with the teacher’s data. The only feature that cannot be displayed for a document group is the Document Portrait. Still, it is possible to calculate the percentage of text covered for a document group with the Code Coverage feature (see the sketch of this computation after this list). However, it is challenging to change the organization of the data after they have been imported as documents, so it is essential to decide how to organize the data before adding them to the MAXQDA project. Luckily, MAXQDA is very versatile, and “there will always be other ways to accomplish the same task” (Woolf & Silver, 2018, p. 73), even if less straightforward ones, as we demonstrated in this case study.
Coding process: In our case study, we only demonstrated how we analyzed data at the group level. However, owing to the characteristics of focus group discussions, it is possible to analyze the data at the participant level. MAXQDA can simplify the process by importing focus group transcripts that are coded automatically with structural codes named after the focus group and sub-codes named after the participants (see Kuckartz & Rädiker, 2019, pp. 203–205). These codes allow for the analysis of each participant’s contributions.
Analysis process: We analyzed our data using standard analysis methods for qualitative interviews. However, there are special techniques developed for group analyses, and several functions in MAXQDA were developed specifically for this form of data. We recommend exploring these functions in Kuckartz and Rädiker (2019).
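As referenced in the first item above, the Code Coverage computation can be sketched outside the software. The following minimal Python illustration uses hypothetical documents and coded character ranges; it is not MAXQDA’s internal implementation.

# Code coverage: percentage of text covered by coded segments,
# aggregated over a document group. Hypothetical character ranges.
documents = {
    "Focus Group 1": {"length": 12000, "coded": [(0, 900), (2000, 3500)]},
    "Focus Group 2": {"length": 9000,  "coded": [(100, 700)]},
}

def merge(ranges):
    """Merge overlapping ranges so overlapping codings are not double-counted."""
    merged = []
    for start, end in sorted(ranges):
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

total = sum(d["length"] for d in documents.values())
covered = sum(e - s for d in documents.values() for s, e in merge(d["coded"]))
print(f"coverage: {100 * covered / total:.1f}%")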
We hope that these recommendations will help and inspire other researchers who are con-
ducting similar analyses and experience similar issues.
Bibliography
Andersson, N. (2016). Teacher’s conceptions of quality in dance education expressed through grade
conferences. Journal of Pedagogy, 7(2), 11–32. https://doi.org/10.1515/jped-2016-0014
Azis, A. (2015). Conceptions and practices of assessment: A case of teachers representing improve-
ment conception. TEFLIN Journal, 26(2), 129–154. https://doi.org/10.15639/teflinjournal.v26i2/129-154
Bengtsson, M. (2016). How to plan and perform a qualitative study using content analysis. Nursing-
Plus Open, 2, 8–14. http://dx.doi.org/10.1016/j.npls.2016.01.001
Brinkmann, S. (2013). Qualitative interviewing. Oxford University Press.
Brown, G. T. L. (2008). Conceptions of assessment: Understanding what assessment means to teachers
and students. Nova Science Publishers.
Brown, G. T. L. (2013). Conceptions of assessment. Understanding what assessment means to teachers
and students. Nova Science Publishers.
Brown, G. T. L. (2018). Assessment of student achievement. Routledge.
Carless, D. (2009). Learning-oriented assessment: Principles, practice, and a project. In L. H. Meyer,
S. Davidson, H. Anderson, R. Fletcher, P. M. Johnston, & M. Rees (Eds.), Tertiary assessment and
higher education student outcomes: Policy, practice, and research (pp. 79–90). Ako Aotearoa & Victoria University of Wellington.
Chmiliar, L. (2012). Multiple-case designs. In A. J. Mills, G. Durepos, & E. Wiebe (Eds.), Encyclopedia
of case study research (pp. 583–584). Sage. https://doi.org/10.4135/9781412957397.n216
Creswell, J. W., & Poth, C. N. (2018). Qualitative inquiry and research design: Choosing among five ap-
proaches. Sage.
Gipps, C. (1999). Socio-cultural aspects of assessment. Review of Research in Education, 24, 355–392.
https://doi.org/10.3102/0091732X024001355
Guest, G., Namey, E. E., & Mitchell, M. L. (2013). Collecting qualitative data. A field manual for applied
research. Sage.
Kuckartz, U., & Rädiker, S. (2019). Analyzing qualitative data with MAXQDA: Text, audio, and video. Springer Nature Switzerland. https://doi.org/10.1007/978-3-030-15671-8
Miles, M. B., Huberman, A. M., & Saldaña, J. (2014). Qualitative data analysis: A methods sourcebook
(3rd ed.). Sage.
Remesal, A. (2006). Los problemas en la evaluación del aprendizaje matemático en la educación oblig-
atoria: perspectiva de profesores y alumnos [Problems in the evaluation of mathematical learning
in compulsory education: Perspectives of teachers and students] (Doctoral Thesis). Universitat de
Barcelona, Departament de Psicologia Evolutiva i de l'Educació, Barcelona.
Saldaña, J. (2013). The coding manual for qualitative researchers. Sage.
Woolf, N. H., & Silver, C. (2018). Qualitative analysis using MAXQDA: The five level QDA method.
Routledge.
Lourdes Mata is an Assistant Professor at ISPA – Instituto Universitário. She graduated in Educational Psychology and holds a PhD in Children’s Studies from the Universidade do Minho. She studies the affective components of learning processes, aiming to identify and characterize individual beliefs and affective facets of learning among students throughout schooling.
ORCID: https://orcid.org/0000-0001-8645-246X
This study was supported by the FCT – Science and Technology Foundation – Research
project PTDC/MHC-CED/1680/2014 and UID/CED/04853/2016.
Using MAXQDA’s Visual Tools:
An Example with Historical Legal Documents
Andreas W. Müller
Abstract
To analyze process-generated data such as court records, protocols, media reports, or forms, MAXQDA and especially its visual tools are useful for investigating the data’s internal structure. This chapter illustrates how court records from the 16th and 17th centuries were analyzed using the Compare Groups function, Code Matrix Browser, Document Comparison Chart, and Code Relations Browser. By using consistent analytical units that reflect the internal structure of the documents and by utilizing code colors, structures within the data emerge that have major implications for studying their creation and use. In this example, the Compare Groups function highlighted differences between the two analytical groups. The Code Matrix Browser was used to investigate changes of subject along the temporal sequence of the documents. The Document Comparison Chart allowed me to analyze the practices of the two courts by examining their rigor and consistency in the subjects they cover. Finally, the Code Relations Browser provided insights into the internal consistency and usage differences of the concept-driven code system.
1 Introduction
The data at the heart of this analysis are not recent; in fact, they were written 400 years ago. Still, they can effectively demonstrate a general methodological challenge: how to analyze process-generated data with the help of visual tools. Unlike interviews or surveys, the umbrella term “process-generated data” (compare Bauernschmidt, 2014, p. 418) covers all research data that were not created with a scientific research interest in mind. Instead, the researcher is faced with data that were created for a practical purpose. Rather than carefully controlling the data creation, the researcher is often confronted with unfamiliar material that is already in its final form and cannot be influenced or changed. The analyst must make the best of the material available. For analyzing such data, visual tools can be the researcher’s best ally because they help unveil thematic structures that provide insights into the usage and circumstances of the creation of the data. Overall, this use case demonstrates how MAXQDA and especially its visual tools can be useful for analyzing process-generated data and what precautions must be considered before setting up the analysis.
The data analyzed here are both strongly structured and unfamiliar to anyone living in the 21st century. As court records, they are the product of a process of varying formality. The results of these legal processes come, by and large, out of a black box: researchers know the general proceedings of early modern courts but in most cases cannot trace the individual people, customs, and decisions that shaped the final form of the material (Voltmer, 2015, pp. 30–33).
The documents analyzed here are summarized “confession protocols.” This type of document was called “Urgicht” by contemporaries and summarizes the final deeds that the accused (supposedly) admitted (Dillinger, 2007, p. 193). They were created during witchcraft trials in Rostock (Germany, 1584) and Hainburg (Austria, 1617–18). The aim of the research was to apply a theoretical model of witchcraft belief, created by theologians in the early modern period and reconstructed by historians today. Do these two distant places follow the doctrinal teachings of their time? Do they use shared ideas and concepts of witchcraft? What unique local elements do they incorporate? These were the research questions that were largely answered with the help of MAXQDA. In this chapter, the methods and the software side of the research are presented. The relevant findings for historical research have been published in detail in Müller (2019).
The model comprises five aspects of witchcraft: the pact with the devil, intercourse with the devil, the witches’ flight, the witches’ gathering, and harmful magic. These five aspects are well-known and commonly used in the literature as the “elaborated concept of witchcraft” (Behringer, 1997, p. 15, 2004, p. 57; Dillinger, 2007, p. 21; Goodare, 2016, p. 76; Lorenz, 2004, p. 131). In this analysis, the five aspects served as the concept-driven code system applied to the material. For a more detailed analysis, the category of “harmful magic” was subdivided by creating data-driven, inductive sub-categories (Mayring, 2015, pp. 97–109). In several iterations of coding, each paragraph of the transcripts was coded with one or several of the five categories (if applicable). Thus, the content was structured along specific themes based on the literature.
However, at the center of the analysis stood four analytical tools: the Compare Groups function, the Code Matrix Browser, the Document Comparison Chart, and the Code Relations Browser.
In this project, the number of coded segments in the compared groups was almost identical (373 to 353), and likewise the numbers of documents were close (18 to 19). If the two groups had differed more in the number of documents and coded segments, using percentages would have become an important step. In Fig. 3, the tendency does not change much when percentages per column are chosen to even out the different numbers of paragraphs between the groups.
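The effect of relativizing by column can be reproduced outside the software. A minimal Python sketch with made-up frequencies (the numbers below are illustrative, not the study’s data):

# Illustrative code-frequency table: rows = codes, columns = groups.
freq = {
    "Harmful magic": {"Hainburg": 120, "Rostock": 110},
    "Devil's pact":  {"Hainburg": 100, "Rostock": 60},
    "Weather magic": {"Hainburg": 65,  "Rostock": 3},
}

# Column totals, then each cell as a percentage of its column, which
# evens out groups of different overall size.
groups = ["Hainburg", "Rostock"]
totals = {g: sum(row[g] for row in freq.values()) for g in groups}
for code, row in freq.items():
    pct = {g: f"{100 * row[g] / totals[g]:.1f}%" for g in groups}
    print(code, pct)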
To elaborate on just one example: in Hainburg, a small town in a wine-growing region, transforming the weather by spells was the most frequently reported instance of “black magic” (65 paragraphs). Not only did the destruction of crops and wine during the so-called “Little Ice Age” (Behringer, 1991, p. 339) cause great harm to the people, but the idea of the weather-making witch was also more prominent in Catholicism. In contrast, this category is almost nonexistent in Rostock, where trade and the growing of hops and barley were less vulnerable to the weather, and theological ideas regarded influencing the weather as superstition. Instead, causing damage to people, such as spreading sickness and causing accidents, was the dominant deed reported in the large town of Rostock.
Here, the Compare Groups function utilizes the internal structure of the process-generated data and shows some broad-stroke differences between groups of the material. In this analysis, both groups were of similar size and length, and each segment was in itself meaningful. In other cases, however, it is more important to think closely about the size of coded segments. If no consistent meaning can be attributed to each segment, one is often forced to binarize the frequencies based on the number of documents (by clicking on the icon Count hits only once per document in the toolbar). In this case, each document is counted only once. As Fig. 4 shows, this would have blurred most of the more nuanced findings.
Fig. 4: Compare groups, count coded segments from each document once only
Here, the difference in weather magic is still visible, as most documents in Rostock do not treat weather magic at all. However, the strong emphasis put on the pact with the devil in Hainburg becomes completely invisible. The pact was mentioned in all documents in Rostock at least once, but in Hainburg it was elaborated over many dozens of paragraphs. This effect only becomes visible when total numbers of meaningful segments are considered.
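The difference between total counts and binarized counts can be made concrete in a few lines of Python; the per-document counts below are hypothetical.

# Illustrative: paragraphs coded with one code, per document.
coded_paragraphs = {
    "H01": 12, "H02": 5, "H03": 7,   # Hainburg (hypothetical counts)
    "R01": 1,  "R02": 1, "R03": 0,   # Rostock (hypothetical counts)
}

total = sum(coded_paragraphs.values())                          # weight of elaboration
binarized = sum(1 for n in coded_paragraphs.values() if n > 0)  # presence only

print(f"total coded segments: {total}")      # 26 -> reflects emphasis
print(f"documents with code:  {binarized}")  # 5  -> hides emphasis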
Having gathered document variables throughout the process, the researcher is not limited to contrasting document groups. In fact, a table just like the one above can be created based on any document variable. For example, one could contrast the very unevenly sized groups of tortured and not-tortured people in this sample (Fig. 5).
Although only 2 of 19 people in Hainburg were not tortured and instead confessed quickly to escape the pains of interrogation, we can compare the percentages for these two with the rest of the sample. As Fig. 5 shows, the confessions without torture focused far less on harmful magic (30% compared to 49%). This indicates that the court was less interested in black magic than in the diabolic elements of the pact, intercourse, gathering, and flight. Again, if one looked only at binarized values per document, no effect at all would be visible, as these five broad categories come up one way or another in each document.
We can see, therefore, that when intending to analyze code frequencies (with whatever tool), it is crucially important to consider beforehand what each segment reflects. If meaning and consistency cannot be attributed to each segment, the analytical options will be limited. In process-generated data, an internal structure often exists: paragraphs in legal texts, articles in a newspaper, or speeches in parliamentary debates can all be meaningfully quantified and analyzed with the Compare Groups, Code Matrix Browser, and Crosstab functions.
The Code Matrix Browser can also be used to gain longitudinal insights into the changes of topic over time. Each of the two mass trials lasted roughly a year from the first accusation of a person to the execution of the last. In this way, one can investigate different stages of the trials and compare them based on the topics that appear.
Fig. 6 shows the Code Matrix Browser for the documents of Hainburg. Now, instead of each column representing a group, each column represents an individual document. The numbering of H01 to H19 follows the sequence of the trials. The size of each square reflects the frequency of the code in the respective document. Because the individual documents vary in length, I chose the option to calculate the size of the symbols based on the column to correct for this. In relation to the absolute number of segments, the observed effect would remain the same; however, shorter documents would become less visible. In Fig. 6, we see that weather magic was much more prominent in documents H05 to H12. We also see that the first document, H01, is much more diverse in its topics.
By manually adding lines in an image-editing program between the three stages of the trial (1st, 2nd, and 3rd wave), the pattern becomes even more pronounced. As Fig. 7 shows, in the early stage of the trial, the accusations were more “individual.” In the second stage, from H05 to H12, a strong emphasis on bad weather (perhaps due to recent events) influenced the trials. In the third stage, black magic became less prominent in general, as shorter trials focused only on the more diabolic issues such as the pact and intercourse with the devil or the witches’ gathering and flight.
Looking at the same overview for Rostock in Fig. 8, no such clear stages exist. However, a clear change in the pattern occurs from document R09 to R12: the focus shifts from damage against people to damage against animals. This shifting pattern led to a follow-up investigation. First, I checked the document variables for commonalities that might help to explain the shift. Here, I noticed that all four of the accused came not from Rostock itself but from the small village of “Warnemünde” near the town. Closely rereading the statements with a focus on the socio-economic life of the accused brought to light that they were herders from the countryside, not beggars in the streets. Thus, they were more frequently connected to causing sickness or death among livestock.
For interpretations such as these, a meaningful size of the coded segments is again key. With only binarized values per document, no clear pattern would become visible. This shows how a look at the distribution of code frequencies can help to identify or analyze the different stages that process-generated data cover. For this, it is necessary to think closely about how to name and sort the documents. Only by using document IDs that reflect the temporal sequence could the temporal changes be made visible; a random order of the documents above would make it much more difficult to interpret the results. This analysis shows where patterns are followed and where individual differences occur, which can be particularly useful in tracing different influences on the data and understanding the mechanisms that created it.
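The point about document IDs is easy to verify: zero-padded IDs such as H01 to H19 sort chronologically even under plain string sorting, which is what keeps the Code Matrix Browser columns in temporal order. A small sketch:

# Zero-padded document IDs sort chronologically as plain strings.
ids = [f"H{i:02d}" for i in range(1, 20)]   # H01 ... H19
assert ids == sorted(ids)                   # lexicographic == temporal order

# Without padding, string sorting would scramble the sequence:
unpadded = [f"H{i}" for i in range(1, 20)]
print(sorted(unpadded)[:5])                 # ['H1', 'H10', 'H11', 'H12', 'H13']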
As a follow-up to this analysis, Document Sets could be created to represent the different stages of the trial and then compared qualitatively or quantitatively with the Compare Groups function. In this way, the visual tools not only allow one to look at final results but can also inspire further research steps, as they identify patterns that can be investigated further.
In Fig. 9, the 19 documents of the Hainburg trials can be seen. Each line represents one document and each column one of the numbered paragraphs: paragraphs 3–6 are predominantly red, while paragraphs 7–9 are blue and pink, and from paragraph 10 onwards black is very prominent. This shows that a consistent structure was followed in the creation of these documents. The documents all began by describing the meeting and alliance with the devil, went on to describe the flight and meeting of the witches, and lastly listed a varying number of evil magical deeds such as influencing the weather or harming crops, livestock, or humans. This visualization also draws attention to the varying length of the documents. Here, the variation in length is mainly made up of the varying detail on black magic. In some documents, the list of magical deeds continued extensively (e.g., H01, H07, H10); others ended quickly after the first four categories were briefly covered. In this case, the individual variety of the documents lies not in the more “theological” elements of the pact with the devil but in how many or how few statements about magic were made. Here, individual statements were more likely to influence the subject, whereas the more firmly codified verdicts of the devil’s pact and magical meetings were, at least in quantity, less impacted by the statements of the accused.
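Conceptually, the Document Comparison Chart is a document-by-paragraph grid of code colors. The following text-mode analogue in Python uses hypothetical paragraph codings and a simplified color scheme loosely following the chapter’s (red = pact, blue/pink = flight/gathering, black = harmful magic); it is an illustration, not the tool itself.

# Hypothetical structure: for each document, the code assigned to each
# numbered paragraph (None = uncoded gap).
docs = {
    "H01": ["pact", "pact", "flight", "gathering", "magic", "magic", "magic"],
    "H02": ["pact", "pact", "flight", "gathering", "magic", None, None],
}
color = {"pact": "R", "flight": "B", "gathering": "P", "magic": "K", None: "."}

# One row per document, one character per paragraph.
for doc_id, paragraphs in docs.items():
    print(doc_id, "".join(color[c] for c in paragraphs))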
Fig. 10: Document Comparison Chart, Rostock (only first 30 paragraphs are displayed)
As a contrasting example, Fig. 10 shows the trials of Rostock. Here, the structure is less clear: many gaps appear early in the material, and the distribution of topics is less pronounced. The courts in Rostock apparently did not follow as clear a pattern as their counterparts in Hainburg; the contents were more flexible. However, the basic pattern red–blue/pink–black is still recognizable here. In fact, the pattern seems to be transregional and followed in both places. Here too, most documents start with the pact with the devil, move on to magical flights and meetings, and end with black magic. When the gaps are filled in by activating further codes created during the analysis, the differences between the two places quickly become clear.
Fig. 11: Document Comparison Chart, Rostock with popular magic (only first 30 paragraphs are
displayed)
Fig. 11 shows a yellow code that represents non-harmful magic use, such as fortune telling or healing, which fills in many of the gaps. Whereas in Hainburg all magic was by definition diabolic and harmful, in Rostock positive magic attempts (regarded as superstition or fraud) were considered to exist without the immediate influence of the devil. Several documents (R01, R02, R03, R05, R07, R12, R13) start with non-maleficent magic use and more closely resemble a two-part structure of non-harmful and harmful magic use, intersected by the meeting with the devil.
In many contexts, such structural analysis and comparison can be of great use, especially for process-generated data. However, the Document Comparison Chart does not only serve as a tool for the presentation of results; it can also be used as a roadmap into one’s own data. Each of the cells above can be used to navigate directly to the specific document. If the gap in R01, paragraph 6, needs investigation, one click opens the corresponding section in the Document Browser.
Furthermore, this visual overview can provide insights similar to those of the Code Coverage analysis. One look at the Document Comparison Chart and the researcher will immediately be reminded of how much data was coded in comparison with the total material. Especially during inductive category formation, this can be an important reminder and at times quite humbling, when one realizes how little of the data most of the interpretations are built on.
In Fig. 12, the Code Relations Browser (available in the Visual Tools menu) for Rostock is shown. The size of the icons reflects how frequently each code co-occurs with another code in the same text segment (paragraph). Here, we see 16 co-occurrences of the witches’ flight and gathering as well as 21 of learning of magic and the devil’s pact; both combinations often appear together narratively. The combination of damage against people and intercourse with the devil highlights five narratives in which the jealous devil takes action against the witch’s husband.
More interesting than these clear-cut co-occurrences in Rostock is the much greater richness of code co-occurrences in Hainburg (Fig. 13). Here, various combinations occur frequently. Although the length and level of detail of each paragraph are similar in both data sets, the patterns observed are strongly different. In Hainburg, all kinds of witchcraft elements are described together and reflect a well-integrated set of beliefs. For example, the magical flight co-occurs with other codes in 48 segments: the flight’s details are discussed, locations mentioned, and various spells conjured in the air. In Rostock, by contrast, the witches’ flight occurs exclusively where there is a gathering. In all other deeds or daily narratives, this element is completely absent from the trial records. This is a strong indicator that at the Protestant court of Rostock the magical flight was a rather unfamiliar element that was only needed to explain the gatherings. Likely, the flights were interpreted as dreams and not considered an essential part of a witch’s powers.
4 Lessons learned
MAXQDA can be useful for identifying patterns in process-generated data. Whether they are reports in the media, protocols from political events, records from administrations, e-mail or letter correspondence, or even archival data, MAXQDA can help to identify patterns of topics and to investigate the potential connections and external influences causing regularities in the data.
Bibliography
Bauernschmidt, S. (2014). Kulturwissenschaftliche Inhaltsanalyse prozessgenerierter Daten. In C. Bischoff, K. Oehme-Jüngling, & W. Leimgruber (Eds.), UTB Kulturwissenschaft: Vol. 3948. Methoden der Kulturanthropologie (pp. 415–430). Haupt Verlag.
Behringer, W. (1991). Climatic change and witch-hunting: The impact of the Little Ice Age on mental-
ities. Climatic Change, 43, 335–351. https://doi.org/10.1023/A:1005554519604
Behringer, W. (1997). Hexenverfolgung in Bayern: Volksmagie, Glaubenseifer und Staatsräson in der
Frühen Neuzeit (3rd ed.). R. Oldenbourg.
Behringer, W. (2004). Witches and witch-hunts: A global history. Polity Press.
Dillinger, J. (2007). Hexen und Magie: Eine historische Einführung. Campus Verlag.
Goodare, J. (2016). The European witch-hunt. Routledge.
Kuckartz, U. (2014). Mixed Methods: Methodologie, Forschungsdesigns und Analyseverfahren. Springer
VS. https://doi.org/10.1007/978-3-531-93267-5
Lorenz, S. (2004). Der Hexenprozess. In S. Lorenz (Ed.), Wider alle Hexerei und Teufelswerk: Die Euro-
päische Hexenverfolgung und ihre Auswirkungen auf Südwestdeutschland (pp. 131–154). Thorbe-
cke.
Mayring, P. (2015). Qualitative Inhaltsanalyse: Grundlagen und Techniken (12. ed.). Beltz Juventa.
Müller, A. (2019). Elaborated concepts of witchcraft? Applying the “elaborated concept of witchcraft”
in a comparative study on the witchcraft trials of Rostock (1584) and Hainburg (1617–18). E-Rhi-
zome, 1(1), 1–22. https://doi.org/10.5507/rh.2019.001
Voltmer, R. (2015). Stimmen der Frauen? Gerichtsakten und Gender Studies am Beispiel der Hexen-
forschung. In J. Blume, J. Moos, & A. Conrad (Eds.), Frauen, Männer, Queer: Ansätze und Perspek-
tiven aus der historischen Genderforschung (pp. 19–46). Röhrig Universitätsverlag.
Using MAXQDA from Literature Review to Analyzing Coded Data
Michael C. Gizzi, Alena Harm
Abstract
MAXQDA is a powerful tool for researchers of all levels, from undergraduates to doctoral students to seasoned researchers. With an organized structure to guide new users, MAXQDA can easily be used in undergraduate research. This chapter provides a case study of a student-driven research project, showing how MAXQDA was used systematically, from crafting a literature review to developing a coding system and analyzing coded data. The project showcased is a legal research project from an undergraduate criminal justice course that examines how lower courts interpret and comply with a U.S. Supreme Court decision. MAXQDA was used for the entire project, including conducting a literature review, paraphrasing cases to develop a coding system, and analyzing coded data. Code Frequencies were used to get an overview of the categorized crimes, the Code Relations Browser made it possible to look for code combinations, and Summary Tables were used to provide concise summaries of specific code usage.
1 Introduction
Student research using MAXQDA can seem overwhelming at first glance, given the wide range of tools available in the software, the lack of an obvious place to begin a project, and, once cases are coded, the question of how to structure an analysis of the data. This chapter provides an example of a research project that was conducted by undergraduates and is intended to suggest a workflow that can easily be modified by others conducting research across different disciplines.
The students in this project were new to MAXQDA and learned how to use the software in a stepwise learning approach, in which we taught only the tools necessary to accomplish each task. In the beginning, we introduced MAXQDA to the students using examples from other projects. We then discussed the systematic approach we would take in this project: writing a literature review with MAXQDA to develop our research questions, exploring the documents using paraphrasing to develop a codebook, coding the data, and then analyzing the data with a focus on visual tools. We discussed how to use each specific tool only as we were completing the step that required it. It was a form of “just-in-time” training, which proved quite effective.
The student research project focused on the judicial impact of a United States Supreme Court decision, that is, on the ways the lower courts implement and interpret judicial policies established by the Supreme Court (Canon & Johnson, 1988). In our case, we were examining a judicially created policy called the “third-party doctrine” (TPD), which allows law enforcement to seek information from third parties (banks, phone companies, internet service providers, etc.) without a search warrant. The third-party doctrine is based on the principle that when individuals conduct business with a business or organization, like a phone company or bank, they have no privacy interest in the records of that business relationship. As a result, they cannot claim protection under the Fourth Amendment to the U.S. Constitution against “unreasonable searches and seizures,” and government actors are not required to seek a warrant.
The third-party doctrine developed in a series of cases in the 1970s (United States v. Miller, 1976; Smith v. Maryland, 1979). It has consistently been interpreted by lower courts as not requiring search warrants and has been used extensively by police as a way to gain evidence in criminal cases. Criticism of the vast discretion that the third-party doctrine gives law enforcement at the cost of individual rights and privacy has grown in the past decade (see, e.g., United States v. Jones, 2012).
In 2018, the Supreme Court decided a case that for the first time limited the government’s ability to conduct warrantless searches under the third-party doctrine. Carpenter v. United States involved law enforcement requests to cell phone providers for “cell site location information” (CSLI) for specific phones. These records offered a detailed trail of breadcrumbs revealing the location of a user’s cell phone. The Supreme Court held that the privacy interests were so significant that the third-party doctrine would not be applied to this type of request. The Court’s decision was seen as the first step in reconsidering the third-party doctrine.
A judicial impact study might appear to be a major undertaking for an undergraduate research project. MAXQDA offers the tools to minimize these concerns. Through a structured process, we were able to move the project from an idea to a complete analysis in several distinct steps.
2 Literature review
In our study, we were interested both in the primary question of how case outcomes have changed as a result of the Carpenter decision and in the broader question of better understanding the population of cases utilizing the third-party doctrine. We also wanted to know what types of criminal investigations involved third-party doctrine requests and what types of third-party doctrine tools were used. We began the study with a literature review on the Carpenter decision and the third-party doctrine in general. We conducted searches in several academic databases, resulting in 14 articles from law reviews and the popular media. Articles were downloaded as PDF files. News stories from websites were downloaded using the MAXQDA Web Collector extension for Google Chrome.
A MAXQDA project was created for the literature review, and the collected data were imported (Import > Documents and Import > Web Collector Data). This project file was used only for the review. The newspaper and magazine articles were stored in a separate document group to make it easier to distinguish them from the academic research (Fig. 1). Each article was read, and relevant parts were paraphrased using MAXQDA’s Paraphrase tool (Analysis > Paraphrase Document).
Once this was completed, we used the Paraphrases Matrix option (Analysis > Paraphrase > Paraphrases Matrix) to view each document’s paraphrases in their entirety. The paraphrases were copied and pasted into a new document memo created for each article. The document memos became the equivalent of 3”x5” index cards for each article’s notes; we then used them to add a written synopsis of each article’s main points. In addition, when creating the document memos, we used five differently colored memo icons to distinguish between articles that focused on different aspects, such as technology or government efforts to invade privacy. Together, the memos were used to write up the results of the literature review.
The literature review helped us to identify our final research questions, which were es-
sential for the main part of the project:
How have lower courts interpreted the Carpenter precedent, as it relates to other third-
party doctrine issues, beyond cell phone tower logs (CSLI)?
What surveillance tools do police use in third-party doctrine cases?
What types of crimes do these cases involve?
What legal doctrines have judges relied on in making their decision?
To what degree have arguments made in Carpenter for potentially eliminating the
third-party doctrine been referenced in lower court decisions?
We organized the documents in MAXQDA’s Document System, where we created two similarly named document groups (Fig. 2).
Fig. 2: Two document groups in the Document System for structuring the legal cases
The paraphrases were then explored with the Categorize Paraphrases tool, available in the Analysis > Paraphrase menu, and codes were developed from them. This was not a purely inductive process, as we had initial categories, but several groups of codes were developed entirely through the paraphrasing.
This gave us a simpler coding system (Fig. 4), which became important when analyzing the data. Were we to do this again, we would begin with a simpler set of alleged crimes and simply use comments on coded segments to identify the specific type of crime.
Fig. 5: Code Frequencies showing the number of documents containing a “Crime” sub-code
The second-to-last row of the Code Frequencies table revealed that there were 21 documents without codes. Considering that every case involved a crime, this meant that some documents had not been coded for “alleged crimes.” Since the original coding scheme included both a code for “alleged crimes” and the specific crimes as sub-codes, we right-clicked on the top-level code “Alleged Crime” and selected Transform into Categorical Variable to easily identify the documents without an alleged crime coded.
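The underlying question, which documents lack a given code, amounts to a simple set difference. A minimal Python sketch with hypothetical case names:

# Identifying documents without an "Alleged Crime" sub-code -- the same
# question answered by Transform into Categorical Variable.
all_documents = {"Case 001", "Case 002", "Case 003"}
coded = {"Case 001": ["Drug offense"], "Case 003": ["Armed robbery"]}

uncoded = sorted(all_documents - set(coded))
print(uncoded)  # ['Case 002'] -> documents still needing a crime code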
Fig. 6: The use of code memos to provide descriptive statistics about the top-level group
When prompted for the column top-level codes, we chose “Crime groups,” which resulted in the Code Relations Browser shown in Fig. 9. The default view presented the relationships between codes with different-sized squares. We then used the Heat Map option to focus on the key differences in the data.
Fig. 9: Code Relations Browser used to demonstrate the relationship between crime type and
third-party doctrine (TPD) surveillance tools
The Code Relations Browser helped us see the clear difference in the use of TPD tools by crime type. For example, surveillance cameras were used in every category except sexual misconduct with a minor/child pornography cases but were most prevalent in armed robbery and drug cases. Internet IP addresses and subscriber information were used across many crime types but were most common in the sexual misconduct cases. If our sample had been a random sample (instead of the complete population of cases), we could have used MAXQDA Stats to test whether the differences were statistically significant, for example by calculating crosstabs and p-values. With this information in hand, we were ready to begin to explore the data more carefully.
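For readers curious what such a significance test involves, here is a minimal sketch of a chi-square test on a crime-type-by-tool crosstab, using Python with SciPy (assumed to be installed); the counts are illustrative, not the study’s data.

from scipy.stats import chi2_contingency

#             cameras  IP/subscriber
crosstab = [[15, 2],   # armed robbery (illustrative counts)
            [12, 4],   # drug cases
            [0, 11]]   # sexual misconduct / child pornography

chi2, p, dof, expected = chi2_contingency(crosstab)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")  # small p -> association between crime type and tool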
We used three of the options in the title bar of the Retrieved Segments window: the Smart Coding Tool, the Overview of Coded Segments, and the Word Cloud.
Fig. 10: Compiling relevant coded segments in the Retrieved Segments window
The Overview of Coded Segments and the Smart Coding Tool accomplish much the same thing in terms of scrolling through the coded segments, but we chose to utilize the Smart Coding Tool because it displays the segments in a tabular view with access to any code comments and shows all of the codes linked to each segment (Fig. 11). It is a true multi-purpose tool that can be opened from several places within MAXQDA. The comments column is editable, and you can add additional comments as you examine the data.
Fig. 11: Using the Smart Coding Tool to examine coded segments and add comments
The Smart Coding Tool enabled us to gain deeper insights into the data at a micro-level, looking only at one specific set of coded segments. We turned next to the Word Cloud feature within the Retrieved Segments window, not because we wanted to create a visual representation of our data (although we could certainly do that later, when writing up) but because we wanted to investigate the context of specific words that appeared in the Word Cloud. Accessing the Word Cloud from the Retrieved Segments window, instead of from the main menu (Visual Tools > Word Cloud), has the advantage of using only the words that appear in the Retrieved Segments window. It is a valuable way to get a picture of the prevalence of what was included in that particular sample (Fig. 12).
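At its core, a word cloud is a word-frequency tally over the retrieved text, minus a stop-word list. A minimal Python sketch with placeholder segments:

from collections import Counter
import re

retrieved_segments = [
    "Police obtained the IP address from the internet service provider.",
    "The subscriber information included the defendant's IP address.",
]
stopwords = {"the", "from", "a", "an", "of", "and", "to"}

# Lowercase, tokenize, drop stop words, and count.
words = Counter(
    w for seg in retrieved_segments
    for w in re.findall(r"[a-z']+", seg.lower())
    if w not in stopwords
)
print(words.most_common(5))  # e.g., [('ip', 2), ('address', 2), ...]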
Fig. 12: Word Cloud for the coded segments compiled in the Retrieved Segments window
The Word Cloud’s true analytical power is the ability to view its underlying data. While viewing the Word Cloud, we double-clicked on a term, for example, “address,” and this produced the results of a search for that term within the retrieved segments. We were able to read through how the word appeared in the data and began to identify patterns and underlying context (Fig. 13).
As with the Smart Coding Tool, the Word Cloud search results could easily be copied, further coded, and exported. Together, these tools provided us with a richer understanding of the reasons why IP addresses were found in combination with sexual misconduct cases.
Fig. 13: Use of Word Cloud to further examine common words in coded segments
Fig. 14: Code Matrix Browser analysis by document sets (in the columns)
With this information, we could double-click on either cell (21 or 6) and view the retrieved
segments for those cases, and further examine them using the tools described above (Re-
trieved Segments, Smart Coding Tool, and Word Cloud).
Fig. 15: The Summary Grid is used to create concise summaries of coded segments
Once this was completed, we opened the Summary Tables view (Analysis > Summary Tables) and created a new table, including the codes “Sexual Misconduct” and “IP Address.” We also selected several document variables, including “Year,” “Shepards,” and “Case Name.”
The resulting table provided a concise summary of the cases that included the “Sexual Misconduct” code (Fig. 16). A Summary Table can be exported out of MAXQDA, but it can also be turned into a table document, where it can be further coded if necessary. For our project, the Summary Tables were primarily used in writing up the final results of the study; we included several of them in the final paper.
6 Lessons learned
This chapter has provided a case study of one way of using MAXQDA to conduct a research project. It has demonstrated how the power of MAXQDA can be harnessed by undergraduates with little experience with the software. We believe that by following a similar process, students can achieve results with MAXQDA at a level significantly beyond what is reasonable for a traditional student research paper. With proper guidance, students can accomplish a lot with MAXQDA.
One lesson we learned from this process is that every project is different, and the workflow will differ depending on the research questions being answered and the nature of the data. It is important to recognize that you do not need to know every tool MAXQDA offers. We selected an analytical process that made sense for our questions; there were other ways we could have proceeded, and other tools we did not even consider. Just because MAXQDA offers a smorgasbord of qualitative research tools does not mean you have to eat everything that is on the table. We chose a set of tools that made sense given our questions and that could easily be used for each of the questions we examined. It makes sense to have one systematic process in mind, which you can adjust to an individual project by selecting the appropriate tools for each main step. Typically, these steps are the literature review, data exploration, coding the data, analyzing the coded data, and transforming insights into a final report or paper.
Determining what that looks like for a given project takes some exploration, and obviously the more familiar you are with the software and its options, the easier it is to establish a workflow. While the “guide map” of steps we chose made sense for this project, an entirely different approach might be selected for a different research question. For example, in this project, we knew that many of the cases being studied would have a result unfavorable to the defendant, because the factual circumstances of the cases occurred before Carpenter v. United States was decided. Knowing this, our research questions were generally not focused on “who won” but instead on exploring how courts were evaluating third-party doctrine issues. This led us to select a specific set of analytical tools. If our goal had been to see how Carpenter impacted the outcome of cases, we would have centered our coding scheme on that issue and quantified our results in a more mixed methods approach, combining qualitative descriptions of what happened in the cases with statistical analysis to look for correlations and to compare how case outcomes and legal reasoning differed in state and federal courts.
Finally, it is important to know that this process occurred over the course of a semester. The undergraduate students had no prior experience with MAXQDA and were able to learn how to use the software to conduct the literature review and use the analytical tools described above with just a few tutorial overview sessions. MAXQDA is a powerful software tool with numerous analytical techniques, but it can be used successfully with just a few hours of training. Indeed, much of the process described in this chapter was first laid out in a brief handout with step-by-step instructions and screenshots; we then walked through each task together. We found that stepwise learning is a highly effective approach to teaching MAXQDA in a research class. We did not teach all of the tools in MAXQDA, only those necessary to accomplish the next step. This way of learning avoided overwhelming users with all of the features in MAXQDA. Thus, in the beginning, we taught data management and writing memos, then paraphrasing and literature reviews. In the next step, we turned to coding and, finally, to analysis, with a particular focus on visual tools.
Bibliography
Canon, B. C., & Johnson, C. A. (1988). Judicial policies: Implementation and impact (2nd ed.). CQ Press.
Carpenter v. United States, 585 U.S. ___ (2018).
Smith v. Maryland, 442 U.S. 735 (1979).
United States v. Jones, 565 U.S. 400 (2012).
United States v. Miller, 425 U.S. 435 (1976).
Alena Harm is a master’s student in criminal justice at Illinois State University, USA, grad-
uating in 2021. She has a bachelor’s degree in criminal justice and psychology and has used
MAXQDA in graduate work for two years.
Using MAXQDA for Analyzing Focus Groups:
An Example from Healthcare Research
Matthew H. Loxton
Abstract
This chapter discusses the ways in which MAXQDA supported the collection, analysis, and reporting of our focus group data related to healthcare improvement. It deals mainly with one specific study regarding the activation of a new primary care center (PCC) but also draws from many other examples where focus groups were used. The chapter describes our use of MAXQDA’s focus group features, Word Clouds, and Keyword in Context, as well as Visual Tools such as the Document Portrait, Code Relations Browser, and Code Matrix Browser. The chapter also deals with importing variables and with the use of memos, paraphrases, and summaries related to focus groups. The use of Lexical Search, code imports, and auto-coding is also covered.
1 Introduction
Hospitals are multi-domain expert communities in which collaboration, science, and evidence are prioritized, but silos of expertise or specialty focus frequently result in processes that are fragile, poorly meet patient needs, or are mutually corrosive. High-functioning processes in one department often result in chaos for another. We have found focus groups to be very useful in bridging silos, identifying key issues, and exploring new methods.
Popularized by Merton at the US Bureau of Applied Social Research at Columbia University in 1946 (Lee, 2010), and possibly first described as a method by Bogardus in the 1920s (Jackson, 1998), focus groups have been used extensively in varied applications, such as marketing, public relations, political campaigns, product design, quality management, and computer user experience and interface design. Focus group studies have enjoyed broad adoption as a relatively low-cost and moderately effective means to explore open-ended questions with small groups of people selected to represent some target population; to test ideas, products, concepts, or scenarios; to elicit reactions and sentiments; and to spark innovation.
Our focus group sessions often included the use of discussion prompts such as product
examples, images, audio tracks, or video clips. Some involved walking through a location,
such as a ward floor, surgery, or other medical environments. These artifacts and events
required capture and analysis within MAXQDA.
We used the full array of MAXQDA’s rich functionality for analyzing and reporting on focus group data, including metadata variables, analysis and mixed methods tools, visual tools, and reporting. We also used MAXQDA’s statistical analysis and comparison tools. Going back to the initial description of a “group interview” by Bogardus (Bogardus, 1926), we used two kinds of group interviews: traditional and nominal focus groups.
Putting all the transcripts together in a single document allowed us to use the focus group transcript import functions and treat them as if they had come from a common session. For example, we individually interviewed several nurses from different wards in the same hospital and formed a nominal group. While they were never in the same session, the degree to which they shared occupational experiences and commonality of work allowed us to combine their transcripts into a single nominal focus group document. We grouped their responses by question, as if they had been in the same session and had responded to each question in turn.
From a data perspective, the nominal focus group implies a need to capture the selec-
tion process and argument as part of the project data.
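Assembling such a nominal document can be sketched as a simple regrouping of individual answers by question; the interview content and speaker names below are placeholders.

# Build a nominal focus group document: individual interview answers
# are regrouped by question, as if all nurses had answered in turn.
interviews = {
    "Nurse A": {"Q1": "Workflow on our ward is ...", "Q2": "Handoffs are ..."},
    "Nurse B": {"Q1": "We triage by ...",            "Q2": "Our handoffs ..."},
}

questions = ["Q1", "Q2"]
lines = []
for q in questions:
    lines.append(f"Moderator: {q}")
    for nurse, answers in interviews.items():
        lines.append(f"{nurse}: {answers[q]}")

print("\n".join(lines))  # paste-ready text in the speaker-colon format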
Topic
Participant
Group Interaction
Moderator
In our focus groups, we typically coded to suit a set of phenomenologically grounded topics—typically related to what the participants felt “worked well,” had negative outcomes, or where they believed there was a missed opportunity. Specifically, our coding typically focused on topics related to healthcare delivery: policy implementation, medical or allied technology deployments, or changes to clinical or administrative workflow.
Our coding typically related to participants such as healthcare stakeholders, including
researchers, clinicians, administrators, and patients. Participants increasingly included
patients, patient advocates, and caregivers. Our focus group coding accommodated group
interactions, especially in the sense of participant-participant interactions, but also situa-
tions in which the moderator interacted with participants or acted as a participant.
From the descriptions of the traditional and nominal focus groups above, several data
types were identified in our studies.
1. Transcripts of sessions
2. Audio and video recordings of the sessions
3. General session notes captured by the moderator(s)
4. Participant interaction notes captured by a moderator
5. Audio, image, or video artifacts used as prompts
6. Planning and logistical notes
7. Data related to characteristics of the participants, such as age, gender, role, position,
salary, race, accreditations, etc.
In some cases, we used multimethod and mixed methods approaches with focus groups to explore or explain shifts, spikes, or dips in operational metrics, as well as why the adoption of a policy, technology, or workflow had differed from expectations or between groups. More recently, with the trend towards performing appreciative inquiry studies, we are using focus groups to help provide a “thick” account of what is working well and potentially to lead to innovation by exploring the contexts and dynamics of positive outliers in the measurements.
For example, we used an existing patient safety code system in conjunction with combined phenomenological, ethnographic, and grounded theory approaches to unpick root causes and uncover unexpected dynamics and forces of change. These approaches were augmented in some cases by running natural language processing (NLP) tools on the participant contributions in order to quantify sentiment. The quantitative metrics, root causes, and sentiment analysis enabled us to weave a powerful reporting narrative and may have helped to effect changes in policy, technology, and workflow.
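As one illustration of quantifying sentiment, the following sketch uses NLTK’s VADER analyzer in Python; this is a substitute example only, since the sentiment tools actually used in these studies were R-based, and it assumes the nltk package is installed.

import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

# Placeholder participant contributions.
contributions = [
    "The new intake workflow works really well for our unit.",
    "We are short staffed and the scheduling process is chaotic.",
]
for text in contributions:
    # compound score ranges from -1 (negative) to +1 (positive)
    print(f"{sia.polarity_scores(text)['compound']:+.2f}  {text}")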
Although these focus group studies varied considerably in size, duration, make-up, and topic focus, some elements were used across all projects, including:
An existing code corpus for quality and safety. The code system included dimensions and facets covering the safety, timeliness, effectiveness, efficiency, equitability, patient-centeredness, and accessibility of care delivery and supporting services.
Quantitative data from a wide variety of healthcare sources, including incident statistics and details, patient flow measurements, patient outcome metrics, and operational throughput and performance data. These included control charts from statistical tools such as R, Minitab, or SAS.
Sentiment analysis data from tools such as R.
Interviewer 1: Hi. So just some background. Like I said before, we are working with various primary
care stakeholders to help set up and facilitate planning and workflow development sessions for acti-
vating the Ambulatory Care Center (ACC) in October. XXXXX suggested that you would all be able to
help us to get some detailed insights on the patient and staff flow aspects of current day to day oper-
ations.
Interviewer 1: Can we start with introductions? Can you describe your role and involvement in the
project?
Participant 1: I am in PCMH, only one of a a few now. Am one of the unit managers team leader for
unit 1. One unit is offsite, and I am unsure how their workflow is done. Right now each unit is under a
nurse manager, and I have been working in a nursing home for 2-3 years, as well as the OR, doing
med-surg. Last 4 years I was in primary care.
Unit 1 has three care managers and we are short staffed. Have 8-9 doctors but they are part time.
Two medical residents. Only clinic operating on Saturday, but that is the NP.
Participant 2: Hi
Interviewer 1: Hi What is your role, and how long have you been here?
Participant 2: Hi, I’ve been at XXXXXXXXX for 12 yrs. I currently coordinate the mental health program
and intake using the biopsychosocial model. I also cover crisis care management. I was recently taken
on as acting supervisory psychology role, but I wasn’t previously directly involved in the care integra-
tion team
Participant 3: Sorry. Can you repeat that first bit again? Is this to do with the flow workshop?
Interviewer 1: Sure! Hi, yes we are working with XXXXXXXXXX to help set up and facilitate the work-
shop next week for planning and workflow development for the new facility, especially the integra-
tion of primary care and mental health in the new facility. XXXXXXXX suggested that you would be a
good person to provide us with some insights on the patient and staff flow aspects of current day to
day operation.
When imported, MAXQDA creates a code with the name of the speaker, and auto-codes
the text with that code. Fig. 2 shows the results of importing the same transcript, and the
count of coded segments (participant contributions) is reflected in a number to the right
of the code.
Fig. 2: Imported focus group transcript in MAXQDA’s Document System (left) and Document
Browser (right)
Note: Each speaker turn should start with a new paragraph, and the speaker name, i.e., the characters to the left of the colon, should be no more than 63 characters (including spaces). To avoid inappropriate speaker associations, it is therefore necessary to remove all colons that are not associated with speaker changes prior to importing. In this example, the transcript file was pre-edited to have speaker names and topics in boldface simply to aid readability; this was not required by MAXQDA for coding.
After importing focus group data, we found it prudent to begin by reviewing the Document System and Code System windows to ensure all files were imported, all participants and moderators were represented, and no unexpected participants were listed. Unexpected
items in the Document System or Code System were typically the result of errant colons in
the first 63 characters of a line in the transcription text, or variation in the spelling of par-
ticipant names. The easiest corrective action was to make a note of all the unexpected
items, and then before doing any other work in MAXQDA, use the Undo (Ctrl/cmd+Z) func-
tion to back out of the import, and correct the issues in the source transcription text.
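A simple pre-import check can catch many of these issues before they reach MAXQDA. The following Python sketch flags any line whose first 63 characters contain a colon that does not follow a known speaker label; the speaker list and file name are hypothetical.

```python
# Pre-import check: flag stray colons in the speaker-label zone of a transcript.
# The speaker names and the file path are hypothetical.
KNOWN_SPEAKERS = {"Interviewer 1", "Participant 1", "Participant 2", "Participant 3"}

with open("transcript.txt", encoding="utf-8") as f:
    for lineno, line in enumerate(f, start=1):
        head = line[:63]
        if ":" in head:
            label = head.split(":", 1)[0].strip()
            if label not in KNOWN_SPEAKERS:
                # This colon would create an unexpected speaker code on import
                print(f"Line {lineno}: unexpected speaker label {label!r}")
```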
MAXQDA will sort the codes in both the Document System and Code System windows in the order in which each speaker was first encountered in the transcript. The order in the display
can be changed in both Document System and Code System by right-clicking on the par-
ent code and using the Sort function to sort alphabetically or by frequency, or by simply
dragging the code to the desired location in the hierarchy. Any changes in the order made
in the Code System will automatically be reflected in the Document System pane. As with
any other document type, focus group documents can be added to document sets. Adding
a focus group transcript to a set can be accomplished by dragging and dropping the docu-
ment onto a preexisting set. Additionally, if a number of focus group documents are acti-
vated prior to creating a new document set, they will automatically be added to the new
set when it is created.
Where recording was allowed by our client and agreed to by participants, we used Skype to record the sessions, and had the audio files auto-transcribed by an online transcription service. We found this to be highly cost-effective, and it reduced our workload. Although the audio files are large, we typically imported them and linked them to the transcripts so that whenever there was any doubt about the context or meaning of a text segment, we had the original audio. The linked audio was often very useful for identifying the sentiment of a particular segment, and for performing quality checks on the transcription and on our team's interpretation in their coding, paraphrases, or summaries. This was particularly helpful when checking segments that carried critical meaning, or for verifying whether something contradictory was said sarcastically or ironically.
Note: If transcripts contain timestamps, these will be removed by MAXQDA once the
associated media file has been assigned to it. If no transcript is available, the full fea-
tures of the inbuilt MAXQDA transcription tools are available.
Right-clicking the document name in the Document System opens the context menu, which now has an additional entry, Focus Group Speakers. Clicking on "Focus Group Speakers" opens an overview pane (Fig. 4). The content of the pane can be filtered by clicking on the "Only activated focus group speakers" icon.
The table provided us with an overview of the number of coded segments and coverage
associated with each participant. It was an early indicator of whether one or more participants were outliers who either dominated the discourse or were mostly silent. We exported
the numerical values and graphs for use in our progress reports. The table view also gave
us the ability to add new user variables. In this example, the variable “Role” has been added
to distinguish between the major work divisions at the hospital. Variables can also be ac-
cessed by going to the “Variables” section in the ribbon bar, and clicking the List of Speaker
Variables, or the Data Editor for Speaker Variables icons.
The portrait view in this example shows some expected results, such as that the moderators (light blue) started the dialogue, are scattered throughout, and are not over-represented. However, it also alerted us to the fact that the sessions did not terminate with the moderator—no light blue at the end. This is an example of not seeing an expected element in the visualization, which prompted us to check whether text had been cut off prematurely.
We used Visual Tools > Word Cloud as a further data orientation feature, which, in conjunction with a stop list and activating only the participant codes, gave both a tabular and a graphic view of the most frequent words in the transcript. We used MAXDictio > Word Frequencies and MAXDictio > Word Combinations to see frequencies of multi-word phrases.
From a methodological point of view, we see this as an important step in gaining a
broad overview of the “voice of the corpus” prior to reading the transcripts. The argument
is that frequency of words and phrases is a good initial indication of what was most salient
to the participants, and perhaps an early insight into elements of the coding that will likely
be required. We also used it to compare word or phrase clouds of different strata of participants. For example, we could see whether the participants from one facility, one role, or one gender used different terms than another grouping, or used terms more or less frequently. This was useful, for example, in identifying potential power gradients between different groups by the language that predominated in the Word Cloud and frequency table.
The Keyword-in-context (KWIC) function (available in the MAXDictio menu) retrieves
text preceding or trailing a specified keyword. This function allowed us to look for key-
words in the context of surrounding text and therefore notice patterns and nuances that
may otherwise have escaped attention.
Fig. 7 shows instances of occurrence of either “handoff” or “hand-off” across all partic-
ipants. For this focus group, the KWIC raised three points for further exploration—firstly,
it was unexpected that participants would refer to both “warm” and “hot” handoff of pa-
tients. “Hot-handoff” was an expected result of recent changes and a defined facility term,
but there was no such thing as a “warm” handoff in the project lexicon. Noticing the use of
“warm-handoff” as a term helped identify that clinicians were not always able to achieve
the desired handoff, but counted as “warm” those that “almost satisfied” the require-
ments. Secondly, since the handoff was a critical success factor for integrating Mental
Health and Primary Care at this facility, eight occurrences was unexpectedly low. Thirdly,
the clinician who used the qualifier “when it happens” raised further questions as to what
was causing handoff failure. This is an example of how our research often followed the path
of the data, rather than sticking to the letter of the session topics.
Speaker variables can be imported into MAXQDA. Fig. 8 shows the List of Variables pane superimposed on the Data Editor after variables were imported for Facility and Gender.
We typically read the transcripts in three ways:
1. Read the full transcripts in order of occurrence. This gives an overall perspective of the
sequence, and may help to identify any maturing or shift in how topics were presented,
or how the moderators may have adapted over time.
2. Read the combined transcripts for each topic. In many projects, the same topics or questions are posed to several groups over a period of time. Reading by topic gives a strong narrative perspective of the group responses to the topic.
3. Read the transcripts by participant. This approach gives a perspective of the contribu-
tions of a single participant at a time and allows the researcher to gain closer under-
standing of themes, habits, or styles of each participant that may otherwise have been
lost in the interaction.
MAXQDA provides an easy way to see all the contributions of an individual participant.
Right-clicking on any specific participant code in the Document System opens a context menu with the Overview of Contributions option. The resulting Coded Segments
pane provides a listing of all contributions for that participant, and has all the familiar op-
tions to select some or all contributions and code them with another existing code, a new
code, or to export the list in several formats. Fig. 9 shows an example of this process, and
the resulting Coded Segments pane. The Analysis > Smart Coding Tool function is also avail-
able for focus group coded segments.
Memos
As with other types of research, memos are often added prior to coding, and used as a ve-
hicle to develop codes. For our focus group coding, we made extensive use of in-document
memos and found it an excellent way to store specific moderator or observer notes related
to a segment. This often included any notes about reactions that one participant had to
the speech or actions of another, or moderator behavior that may have influenced partic-
ipants. For example, in one session, a nurse clapped her hands to her face when another
spoke of a near-miss due to confusion between two similar-sounding but very different
medications. This would be very difficult to capture in a transcript, but if it appears in mod-
erator notes, it can be attached to the relevant segment as an in-document memo.
Likewise, in-document memos were used to record events such as the details of a
prompt, or a walkthrough. Memos were also used to record moderator notes to them-
selves, such as “Participant-2 rolled her eyes when the topic of the ePortal was raised, and I
am unsure what this meant. We did not get another chance to ask her to elaborate.” Notes
of this kind were helpful to remind us during the coding phase of events that changed the
meaning or implications of the transcription text, or that needed to be followed up.
The in-document memos were also frequently used as a record of possible code suggestions by different team members. Memos that contained follow-up suggestions or questions were given the "?" memo label. This feature was used extensively in a nominal focus group project related to patient experience of radiology, and was a crucial means of communication between the researchers.
Paraphrases
Paraphrases accurately but concisely state the meaning of a particular text. This was im-
portant in our focus group settings when one or more participants interrupted or inter-
jected when another was speaking. In these cases, it was sometimes necessary to piece
together a participant’s full contribution in order to present it in its full and uninterrupted
form. It was a challenge because a contribution sometimes spread across several para-
graphs and interleaved with other speakers. An effective approach was to construct the
precis and attach it to the participant’s initial text where they first started a train of thought
as a paraphrase.
This re-ordering could not be done in the transcript itself without destroying the se-
quence and interaction, so it was done as a paraphrase.
The paraphrase tools (Analysis > Paraphrase) include options to categorize existing paraphrases, view a paraphrase matrix, or print the current document including paraphrases. We used the Categorize Paraphrases function to assist in developing new codes; it was especially useful for this because it provided a side-by-side view of original vs. paraphrased text.
In retrospect, what might improve the Morgan and Hoffman coding structure is a way to indicate the directionality of interaction, and researchers may wish to apply codes indicating the directionality of group interaction specific to their study topic. For example, it may be important to note whether it is a male interrupting or supporting a female, or whether doctors support nurses.
There are many possible approaches to coding this directionality, but here are three that you may consider (points 2 and 3 courtesy of Stefan Rädiker):
1. Create a family of "directionality" codes specific to the context. For example, to depict male vs. female directionality, I might have the codes "M->F", "F->M", "F->F", and "M->M", and then code any segment reflecting a directional action or speech act. Used in conjunction with the Morgan & Hoffman Group Interaction system, the directionality codes applied to the same segment could denote, for example, that a male disagreed with a female (a small tallying sketch follows this list).
2. Another approach, using codes and variables, is to create a code family for "Target of Action" and duplicate all the participants and moderators as sub-codes. In this case, in a speech act by "Participant-1", we might code the segment as "Support", and code it with the "Target of Action" sub-code "Participant-2" to show who did the supporting and who received the support. The participant variables could contain biographical data such as gender, and therefore allow us to show, by gender, who was supporting whom by using the Visual Tools > Code Relations Browser.
3. A third option is to use the Edit comment function to add a text to the interaction code saying who addressed whom. This function can be accessed in the context menu for a coded segment, for example, by right-clicking on the coding stripe in the Document Browser.
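If directionality codes such as those in point 1 are exported from the project (for example, as a list of code applications per segment), they can also be tallied outside MAXQDA. The short sketch below counts directed speech acts per direction and per interaction type; the code names follow the example above, and the data are invented.

```python
# Tally of hypothetical directionality codes applied to coded segments.
from collections import Counter

# Invented export: one (directionality code, interaction code) pair per segment
coded_segments = [
    ("M->F", "Disagreement"),
    ("F->M", "Support"),
    ("M->F", "Support"),
    ("F->F", "Support"),
]

by_direction = Counter(direction for direction, _ in coded_segments)
by_pair = Counter(coded_segments)

print(by_direction)                       # e.g., Counter({'M->F': 2, ...})
print(by_pair[("M->F", "Disagreement")])  # males disagreeing with females: 1
```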
Some code systems are typical of an industry. For example, in healthcare, it is common to have a number of codes specifically related to safety that will always be applied to any interview or focus group transcript, in addition to any codes developed for that specific study or derived through a grounded theory approach. Such "institutional code systems" may be applied more easily through the use of standardized search terms, as described in the next section.
Bulk coding
We used the Lexical Search tool (Analysis > Lexical Search) to search within the transcripts, and to bulk-code segments in focus group transcripts that matched pre-determined search terms.
This feature saved time when applying codes that related to searchable constructs. For ex-
ample, searching clinician transcripts for “patient safety” OR “hazard” OR “hospital ac-
quired”, etc. made it easier to identify and code segments relating to patient safety prior to
a complete read-through of the transcripts. To save time and reduce inter-coder variance,
complex search strings were saved and reused. In general, the researcher can apply any
institutional coding relatively quickly, and thus not detract too much from the core focus
of a specific focus group project, although care must be taken not to let bulk coding ob-
scure the need for careful analysis and coding.
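Outside MAXQDA, the same idea can be prototyped with a regular expression that joins the institutional search terms with OR, which is useful for estimating how many segments a saved search would hit. This is an illustrative sketch only; the terms and example paragraphs are not from an actual project.

```python
# Regex pre-screen mirroring a saved lexical search: OR-joined search terms.
import re

# Illustrative search terms for an institutional "patient safety" code
PATTERN = re.compile(r"patient safety|hazard|hospital acquired", re.IGNORECASE)

paragraphs = [
    "We flagged a hazard near the medication cart last week.",
    "The new workflow shortened patient registration times.",
]

for i, text in enumerate(paragraphs, start=1):
    if PATTERN.search(text):
        print(f"Paragraph {i}: candidate for the 'Patient safety' code")
```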
5.1 Summaries
Paraphrases faithfully reflect the voice of the participant, but the summary reflects the voice of the narrator. We used summaries to state the significant events, exchanges, and meanings of a transcription segment in our own words, together with our analysis of their meaning and implication. As such, there was overlap between how in-document memos and summaries were used in coding focus group transcripts.
In-document memos were typically created before coding and as a means to develop codes, whereas the summaries were developed after initial coding was done. A summary reflected all the segments for a specific existing code within a document, and was therefore typically created after initial coding. Summaries provided an analysis for reporting, but were also used to develop additional codes or make code refinements. We frequently used summaries in our progress reports, and they gave us a good overview of the focus group coding as a whole. To a large degree, the summary content was directly transportable to the final report and reflected the process and results of analysis. MAXQDA provides a toolbox for developing summaries and shows all coded segments for a topic per focus group as a Summary Grid.
Fig. 11 shows a Summary Grid for a code that denotes questions between participants, applied to the transcript for "Group 1." The center column contains the original transcript text of the coded segments, with a highlighted hyperlink to the Document Browser location. The right column contains the researcher's description of the text. In the first segment we summarize the transcript context as part of the patient registration process in an ambulatory clinic. The second text summary in the example refers to staff access to the electronic health record (EHR) system.
The summaries were typically the skeleton of what went into our reports, and together with
text quotations, and ideas in in-document memos and comments on coded segments,
formed the basis of our analysis. In practical terms, once summaries had been completed,
there was often little left to do other than piece together the narrative of a report from the
confluence of these four sources. Where necessary, we would support a point or assertion
by providing instances of quoted transcript text or images derived from the Crosstabs,
Code Relations Browser, or MAXDictio Word Clouds. We found it especially useful to build
Word Clouds in MAXDictio by setting the word combination to 2–5 words and applying
lemmatization and a stop list. This helped us to demonstrate frequently used phrases, such
as “warm handoff,” which was highly salient terminology for our client. In one project, we
were able to show that there was broad staff acceptance and support for the “Hot-Handoff”
concept, but that there were issues that reduced its effect. We could show that the changes
in policy had not led to any increases in reportable incidents and accidents, by drawing
from statistical analysis of facility safety reporting data, and coupling that with qualitative
analysis that gave an understanding of why staff were satisfied with the new processes. We
were also able to identify gaps in processes and reporting that should be addressed in a
follow-up initiative.
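The phrase-frequency idea behind these word clouds can be sketched as a simple n-gram count. The snippet below counts two-word combinations after removing stop words; lemmatization is omitted for brevity, and the sample text and stop list are illustrative rather than taken from our data.

```python
# Two-word combination sketch approximating MAXDictio's Word Combinations.
from collections import Counter
import re

STOP_WORDS = {"the", "a", "an", "to", "of", "and", "was", "is", "so"}
text = "The warm handoff failed, so a warm handoff was rescheduled."

tokens = [t for t in re.findall(r"[a-z]+", text.lower()) if t not in STOP_WORDS]
bigrams = Counter(zip(tokens, tokens[1:]))

for (w1, w2), n in bigrams.most_common(3):
    print(f"{w1} {w2}: {n}")   # "warm handoff: 2" appears first
```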
MAXQDA provides two other tools related to summaries: The Summary Table offers a
compilation of summaries, and we found this useful for presentations and reports, while
the Summary Explorer enabled us to compare the summaries of different cases or groups.
6 Lessons learned
MAXQDA has proven to be an effective tool for analysis and reporting of focus group data
in our healthcare setting. The following are a few lessons learned:
Clean before you import. Don't assume data from a transcription service, or a video clip or audio track, are clean. They may require significant editing and cleaning prior to import, which may in turn require highly specialized tools that take time to acquire. Plan ahead and conduct dry runs and tests before running a live session. For example, in a video clip of a focus group on "patient flow," there was detail in the background that showed patient information. We needed someone with specialized graphics tools to blur the background in the clip before importing.
Determine beforehand how prompts will be captured. In some focus groups, prompts are used to initiate discussion, and may be the object of the discussion. For example, a set of images might be shown to the participants, and then several questions related to the images will be posed. The prompt might be a patient record to examine, a video clip to watch, a software app, a tool, or an in-situ walkthrough of a process. These prompts need to be represented in some fashion in the data, which may take significant forethought and planning.
Be prepared to deal with conflicts between participants. This is not to say a conflict resolution expert will be needed, but it is wise to prepare responses in case conflicts arise between participants.
Consider how you will deal with “narrator’s voice” and significant actions. For exam-
ple, when a participant got a call and left, we added that as text in square brackets in
the transcript. In retrospect, this was a mistake, because it then counted as their con-
tribution. Before the session, think through how you will denote, code, and use events
like people entering or leaving, dropping things, etc. Will you code them as “actions”
perhaps, and add a Code Comment, or will you add a memo, or do something else?
Plan how large media files will be handled. Audio and video files for an entire 90-minute focus group session can become very large. Plan ahead for where you will store these files, and how archiving and backups will be managed. We kept media files on a thumb drive, but it very quickly filled up and made access to the files from within MAXQDA slow. Moving them to a large-capacity USB 3 hard drive improved things.
Consider using more than one recording device. We experienced a hardware failure
and lost the recording of an entire session. Once media files are uploaded, ensure all
backups are complete before deleting any data from the recording device.
Bibliography
Bogardus, E. S. (1926). The group interview. Journal of Applied Sociology, 10(4), 372–382.
Gallagher, M., Hares, T., Spencer, J., & Bradshaw, C. (1993). The nominal group technique: A research tool for general practice? Family Practice, 10(1), 76–81. https://doi.org/10.1093/fampra/10.1.76
Jackson, P. (1998). Focus group interviews as a methodology. Nurse Researcher, 6(1), 72–84. https://
doi.org/10.7748/nr.6.1.72.s7
Kuckartz, U., & Rädiker, S. (2019). Analyzing focus group data. In Analyzing qualitative data with MAXQDA: Text, audio, video (pp. 201–217). Springer Nature Switzerland. https://doi.org/10.1007/978-3-030-15671-8_15
Lee, R. M. (2010). The secret life of focus groups: Robert Merton and the diffusion of a research
method. The American Sociologist, 41(2), 115–141. https://doi.org/10.1007/s12108-010-9090-1
Morgan, D. L., & Hoffman, K. (2018). A system for coding the interaction in focus groups and dyadic
interviews. The Qualitative Report, 23(3), 519–531. https://nsuworks.nova.edu/tqr/vol23/iss3/2
Van Bennekom, F. C. (2002). Customer surveying: A guidebook for service managers. Customer Service
Press.
Abstract
Prioritization research design is an approach to identify priorities in development strategies in various areas using MAXQDA. The design incorporates a combination of different
methodological approaches, including systematic literature review, evaluative qualitative
text analysis, and transformative mixed methods research. This chapter provides an exam-
ple of an urban development issue in the city of Gori, Georgia. We highlight the usage of
four MAXQDA tools, the Smart Coding Tool, Complex Code Configurations, Document
Portrait, and MAXMaps. The Smart Coding Tool was used to re-check the codes and coded
segments for consistency in coding according to the methodology and to create and apply
evaluative codes in addition to thematic codes. Complex Code Configurations was used to
illustrate the distribution and frequencies of the combination of thematic and evaluative
codes. MAXQDA’s visual tools (MAXMaps and Document Portrait) enabled us to present
the links between the urban development dimensions and evaluative codes. The Docu-
ment Portrait was used to depict the proportion of text segments dedicated to each urban
development issue in the analyzed documents. MAXQDA made it possible to synthesize
and quantify document variables and thematic and evaluative codes. Ultimately, it ena-
bled us to examine urban development issues in a way that brought together globally-pro-
moted principles, while considering local peculiarities.
1 Introduction
The 21st century has raised unique challenges for urban settlements, and the development of many cities around the world still hinges on outdated urban planning approaches. Urban planning is often hindered by weak planning practices, which serve as barriers to development and divorce global goals from accurate localization. Even though many international policy documents1 have outlined guidelines for inclusive and sustainable development, the real obstacle of how to execute global or national objectives at the local level remains. Every settlement is a dynamic organism, shaped by centuries of events that create distinctive characteristics and form vibrant destination-specific identities. These historical details make the transformation of global principles into local solutions even more difficult.
This study is part of an urban planning project related to the completion of a "Basic Plan" for the city of Gori, which will serve as a strong foundation for the city's forthcoming "Master Plan" for land usage. The project was implemented by the City Institute Georgia (CIG), a non-profit organization focused on sustainable urban development. Gori is located in eastern Georgia on the highway connecting the country's western and eastern regions, and was a focal point in the five-day Russian-Georgian war in 2008, which caused displacement of the local population. As a result, the war has brought fundamental changes and new challenges for the future development of Gori. A particularly important issue in the city's urban development plan was to integrate the large-scale new settlements that were constructed both in the city and in its surrounding area after the resettlement of internally displaced persons during the war.
After a thorough analysis of possible methodological approaches to achieve the stated aim of developing a land use master plan for Gori, we realized there was no single approach that could solve the problem of matching globally promoted urban principles with the needs of a specific locality or region. To fill that gap, we devised an approach that we call "prioritization research design," which draws on the tools of qualitative and mixed methods data analysis, including systematic literature review, evaluative qualitative text analysis, and transformative mixed methods. MAXQDA provided a valuable platform to integrate all of these different forms of analysis to execute this new approach. The research design was developed to handle information coming from different sources (policy documents, articles, research reports, expert interviews, participatory workshops). As a result, the analyzed data integrates both globally-promoted principles (e.g., international development strategies, agendas) and local characteristics of the case study area.
1 See, e.g., the Sustainable Development Goals (Goal 11 – Sustainable Cities and Communities), https://www.globalgoals.org/11-sustainable-cities-and-communities; the New Urban Agenda (Habitat III), http://habitat3.org/the-new-urban-agenda/; and the EU/Georgia Association Agreement, https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L:2014:261:FULL&from=EN
2 Data collection
The first phase of our research design utilizes principles of systematic literature review (Petticrew & Roberts, 2006). A comprehensive literature search was conducted based on the following inclusion criteria: the latest international policy documents promoting sustainable urban development principles, local and regional strategic vision documents, primary research findings related to urban issues of the targeted city of Gori, etc. After collecting the relevant literature, the files were imported into MAXQDA and distributed into the predefined document groups created in the Document System. The documents were grouped into the following thematic sub-groups: International Urban Development Agenda, Urban Development Agenda, National Policy, Regional Development/Strategic Vision, and Research Findings. As a result, up to 10 documents were included in each of the five thematic categories (Fig. 1).
It should be noted that in the first stage, the attribute information about documents was
collected in Microsoft Excel format, and then imported into MAXQDA via Variables > Im-
port Document Variables (Fig. 2 and 3).
Scale of Discussion
During the coding process, it became apparent that the documents varied widely in how acutely the problems in the settlement were discussed and which of them should be considered in the urban development process. Based on this, the evaluative code Scale of Discussion was created, which assessed the addressed area of the debate through the above-mentioned indicator. The sub-codes reflected the different scale levels:
high
medium
low
Each sub-code had a description in the form of a code memo containing the following information: high – 3 problems or more; medium – 1 or 2; low – 0.
Validation of Discussion
Finally, a Validation of Discussion code was created to distinguish the quality of evidence on which the thematically coded passages relied, because some documents provided arguments that were not clearly reasoned and therefore not evidence-based. The following evaluative sub-codes were developed:
The principle for assigning codes, as explained in the brackets, was included in the code memos (Fig. 4).
Fig. 5: Usage of Smart Coding Tool for assigning evaluative sub-codes to segments coded with
thematic codes
By checking the co-occurrence of the thematic code Urban Development Dimensions with
the evaluative sub-codes of Validation of Discussion, the analysis revealed that substantial
evidence is rarely presented when naming urban development issues. Insufficient infor-
mation/evidence indicates, on the one hand, the need for additional research and, on the
other hand, the fact that the provisions presented in the main strategies are not reliable.
6.2 MAXMaps: Creating concept maps showing the relations between the
analyzed aspects
Data displayed in MAXMaps (Visual Tools > MAXMaps) proved to be the best way to portray information succinctly and efficiently, illustrating details provided in more comprehensive textual form. Fig. 7 shows a concept map illustrating the co-occurrences of the main thematic codes (in the center) with the three evaluative codes from our study (in the outer circle). The map was built based on the Code Co-occurrence Model (Code Intersection), which shows the code links by which the same text segments were coded. The line style varies according to the code families, whereas the colors distinguish the different sub-codes.
The concept map added life to the coded qualitative data. It successfully depicted the research findings for analysis and presentation, and allowed both researchers and readers to gain insights more effectively than from textual material alone. It clearly shows that tourism is mentioned in the municipal, regional, and urban development contexts without supporting argumentation, and that it has a high impact on the other urban development dimensions.
The Document Portrait clearly showed that most of the analyzed documents were devoted to information about Gori Municipality (green in Fig. 8) rather than the city of Gori (purple in Fig. 8); the city of Gori occupies only a tiny part of the narrative. Fig. 9 shows the proportional distribution of the thematic codes (urban development directions) in the Shida Kartli Regional Development Strategy document. The MAXQDA option Ordered by color was switched on for this purpose.
Document-level (international -> regional): The quantification at the document level was carried out using the following principle: in a document group (e.g., international), one point is awarded for one code and two points for two or more codes; this can be accomplished by using the Code Matrix Browser (Visual Tools > Code Matrix Browser).
The scale of discussion (national -> settlement): According to the scale of the debate, points were awarded according to the following principle: Georgia – 1 point; Urban settlement – 2 points; Shida Kartli – 3 points; Gori Municipality – 4 points; City of Gori – 5 points.
As a result of using the prioritization research design, the areas of urban development were given appropriate weights to identify priority issues for the city of Gori in the urban planning process. Tab. 1 illustrates the calculated scores for each Urban Development Dimension, using the Tourism dimension as an example. Tourism was mentioned in all types of documents: twice at the international, European, and regional levels, but only once at the national level. Therefore, in total, seven points were assigned to the tourism dimension for the document-level component. In the case of tourism, the scale of discussion was at the municipal level, which assigned four points to the tourism issue. As a result, the sum of the seven document-level points and the four points from the scale of discussion amounted to a total of eleven points, making tourism one of the top urban development dimensions for the city of Gori. The same principle of weight calculation was applied to the other urban issues (see Tab. 2).
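The weighting just described is simple enough to express in a few lines of code. The following Python sketch reproduces the tourism calculation (seven document-level points plus four scale points, for a total of eleven) under the scoring rules above; the per-group code counts are taken from the example in the text.

```python
# Priority-score sketch following the chapter's scoring rules.
SCALE_POINTS = {
    "Georgia": 1, "Urban settlement": 2, "Shida Kartli": 3,
    "Gori Municipality": 4, "City of Gori": 5,
}

def document_level_points(counts_per_group):
    # 1 point for one code in a document group, 2 points for two or more
    return sum(min(n, 2) for n in counts_per_group.values() if n > 0)

# Tourism example: mentioned twice at the international, European, and
# regional levels, and once at the national level
tourism_counts = {"international": 2, "european": 2, "regional": 2, "national": 1}

doc_points = document_level_points(tourism_counts)   # 2 + 2 + 2 + 1 = 7
scale_points = SCALE_POINTS["Gori Municipality"]     # scale of discussion: 4
print("Tourism priority score:", doc_points + scale_points)  # 11
```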
7 Lessons learned
This chapter highlighted how MAXQDA can be used for conducting a prioritization research design in urban development, focusing on the Smart Coding Tool, Document Portrait, and Code Configurations. The Smart Coding Tool proved particularly useful in dealing with the problem that coding rules calling for the assignment of multiple codes to one segment of text can result in differences in coding among researchers and in the need to refine the coding rules after initial coding. The Smart Coding Tool enabled us to easily review the segments we coded and adapt them to the updated rules/protocols. It can be used to revise, verify, and correct codes and code assignments simultaneously. More specifically, one of the coding protocols involved checking whether a given segment of text was coded with multiple codes (e.g., thematic and evaluative). The Smart Coding Tool also allows creating new codes by merging, splitting, or modifying existing codes; in our case, one of the evaluative categories was created entirely in the Smart Coding Tool.
MAXQDA makes it easy to quantify the descriptive results of codes through Subcode Statistics. Code Configurations is an extremely valuable tool for going into greater depth to see not only the frequency of one code, but the frequencies of combinations of two or more codes. This made it much easier for us to evaluate the overall data we examined.
MAXQDA offers the researcher a wide range of analysis and visual tools. These can be used to visualize data in ways that just reading documents can't. The Document Portrait was particularly helpful for seeing the ways that documents covered the thematic areas being studied. The grouping of the documents, in combination with the codes assigned per document, can be used to compute priority scores, which can then serve as a basis for ranking priorities.
Bibliography
Driscoll, D. L., Appiah-Yeboah, A., Salib, P., & Rupert, D. J. (2007). Merging qualitative and quantitative
data in mixed methods research: How to and why not. Ecological and Environmental Anthropol-
ogy, 3(1), 11. http://digitalcommons.unl.edu/icwdmeea/18
Kuckartz, U. (2014). Qualitative text analysis: A guide to methods, practice and using software. Sage.
Petticrew, M., & Roberts, H. (2006). Systematic reviews in the social sciences: A practical guide. Black-
well. https://doi.org/10.1002/9780470754887
Tashakkori, A., & Teddlie, C. (1998). Mixed methodology: Combining qualitative and quantitative ap-
proaches. Sage.
Abstract
MAXQDA was used in my doctoral dissertation to uncover the types of frames used by
American presidents and in media samples surrounding the contentious issue of health
care reform at three critical junctures in U.S. history. The four main functions of MAXQDA
used in this project included 1. creating and applying a sophisticated code system to hun-
dreds of speeches and media samples, 2. using the Memo Editor and Overview of Memos
to take notes, quickly summarize hundreds of documents, and highlight particularly out-
standing or critical documents and patterns, 3. using the Code Frequencies chart func-
tions, particularly using the unit of analysis “coded segments” to observe the number of
codes used highlighting a certain major frame to then compare with other codes, and fi-
nally, 4. the Code Relations Browser function, which was particularly critical in highlight-
ing the overlap, or co-occurrence of codes. This last function provided evidence for a major
finding—that health care is not simply framed in one term (such as in economic terms),
but rather in mixed ways (economic and human rights frames in particular). The co-oc-
currence function illustrates this pattern and confirms the presence of mixed
frames. MAXQDA's tools provided a rich analysis of political and mediated discourse
and supported the transformation of major public discourses on health care into frames
through a deductive and inductive process.
1 Introduction
MAXQDA was used in my doctoral dissertation to uncover the types of frames used by
presidents and in media samples surrounding the contentious issue of American health
care reform at three critical time periods in U.S. history. Literature in political communi-
cation on issue framing has been growing rapidly as political science and political communication scholars seek to understand the power structures that shape policy and public
opinion (Entman, 2004). Empirical studies conducted on media framing of policy issues
confirm the importance of framing scholarship and the measurable existence of opposing
viewpoints in public discourse (D'Angelo & Kuypers, 2010; Dorfman et al., 2014).
Frames can be identified in the discussions of politicians and media elites on health
care reform. Qualitative and mixed-methods textual data analysis software is essential for
conducting a frame analysis. Some of the most prominent frames in health care reform
policy are “human rights vs. market commodities,” raising the question of whether health
care is a human right or a privilege; whether every person should have access to it, or
whether it is primarily a good to be purchased. In addition, individualism, collectivism,
and the state-federal government structure and financing relationship are further major
ways in which health care is discussed in American public discourse. MAXQDA supported
the transformation of these discourses into frames by providing the tools for a deductive
and inductive process.
The project was guided by three main research questions:
1. How do frames emerge, evolve, and interact over time—both within and across each of the three critical junctures examined (1960s, 1990s, 2010s), and based on the actor in question?
2. How do presidents and media frame health care reform proposals and attempts?
3. How are health care reform frames influenced by various institutional and historical
contexts and public discourses, and consequently, what is the impact this has on trans-
forming discourses into frames?
A qualitative study was most appropriate to assess the concepts that have been used
around health care reform and to evaluate how reforms are influenced by various institu-
tional and historical contexts.
speeches over the three time periods.1 In addition, 257 media samples were selected through the ProQuest (New York Times) database. Media sample selection occurred around the dates of each speech, with the 4 most relevant media samples found within +/–2 days of the given speech retained. The samples from the later time periods were reduced for feasibility.
The MAXQDA project was structured around the research questions for the dissertation,
namely: How do frames emerge and how do presidents and media outlets frame the issue
of health care? Therefore, data was organized into six different document groups according
to data set (three sets of presidential speeches, three sets of media samples, Fig. 1). All
speech samples were in Microsoft Word format; media samples were in PDF format and
were imported in the respective document groups.
1 The relevance of speeches was evaluated manually. Speeches containing terms such as “health” or
“Medicare”, but concerning topics not directly related to health policy reform, were not included
in the category of "relevance". One-sentence speeches and statements were also excluded. In addition, The American Presidency Project database underwent changes to its design and filtering options during the time period of this research, and the new database may return different numbers for the search terms outlined above, as the data collection occurred in October 2017.
The MAXQDA tools used for data exploration and analysis included 1) creating the coding
frame with many sub-codes in the Code System window, 2) Memo Editor and Overview of
Memos, 3) Code Frequencies for analyzing the usage of codes and sub-codes, and 4) using
the Code Relations Browser to evaluate co-occurrences of codes.
One systematic way of uncovering health care reform frames is through a coding process
that is both deductive and inductive—both having predetermined categories (deductive)
as well as categories arising from the data (inductive) (Kuckartz, 2014). Thus, the code book
was a key element for tracking how frames were used throughout each of the three critical junctures examined (1960s, 1990s, 2010s), based on the actor in question, and also within each critical juncture.
Fig. 3: Reworked code system with 5 main codes and additional inductive codes
In the second coding process, the datasets were recoded manually to ensure consistency. Each coded segment adhered to the codebook rules, such as the rule that a coded segment be no longer than 2 sentences. Another codebook rule stated that the parent code and a sub-code must both be applied to a given segment, so that the parent code provides an overview of all sub-codes. A complete overview of every coding category that was developed can be seen in Fig. 4.
The process of coding inductively and deductively on a topic as diverse and contentious as health care reform shows how MAXQDA allows us to challenge the predetermined categories that are often used in scholarly research on this topic. For this project, merely creating the code system was the first of the important outcomes.
4 Writing memos
Following the creation of the sophisticated coding system specific to American health care reform debates, the memo function was used during coding and throughout the process of analyzing the data. The memo function was used frequently to keep track of the hundreds of media samples I was coding and to identify interesting patterns. The memos were crucial in helping me keep track of which articles were actually useful, which ones introduced important information, and which ones were irrelevant to the research questions. Information on hundreds of news articles was succinctly summed up in the document memos. Combing through the memos during the analysis was essential in uncovering patterns in reporting. Impressions about article topics were noted, including frames that had been used in speeches but were noticeably absent from the media samples. An example is how small businesses were coded in the speeches but were not mentioned as much in the media samples.
Fig. 4: Code system with grouped inductive codes—codes with relation to 5 main frames (left)
and additional codes (right)
The document memo function was used for free-written note-taking after the codes were applied to the speeches and articles. Fig. 5 shows an example of how the memo function was particularly useful when large samples were being coded and analyzed. This example is from the media samples analyzed during the Obamacare debate, highlighting certain topics that surprised me and tracking my research process. Colored memo icons were used to mark very important or noteworthy memos.
Microsoft Word was used in combination with MAXQDA, as the tabular Overview of
Memos for the code memos and the Codebook (available in the Reports menu) were easily
exported to Word for use in my dissertation. Code memos were reread and then compared
with memos from other time periods in order to understand different patterns. After ex-
porting the Codebook to Word, some minor formatting and structuring changes were in-
tegrated.
Fig. 6: Code frequencies for 5 main codes of the frame analysis (unit of analysis is document)
Fig. 7: Code frequencies for 5 main codes of the frame analysis (unit of analysis is segment)
The Code Relations Browser shows what frames are discussed in relation to other frames, providing important insights into the complex field of political communication. In the case of Obama's speeches, the co-occurrence between the 'injustice' code and the 'economic' and 'cost of health care' codes was one area that merited further observation. Previous research demonstrated a significant uptick in Obama's usage of the "mixed" frame: neither purely framing health care in an economic or market-based sense, nor purely describing the human rights aspect of health care (Leimbigler & Lammert, 2016). Rather, the mixed frame revealed injustice and cost: the injustice of being priced out of health care. In other words, it takes elements from both the economic and the human rights frames. In coding for this manually, it became clear that injustice and economic framing worked together and were sometimes part of the same frame.
The example of Obama's speeches illustrates the overlap or co-occurrence of codes, as depicted both visually and numerically in the Code Relations Browser. The screenshots in Fig. 8 show the code co-occurrence between the economic sub-codes and the injustice sub-code, which illustrates whether mixed framing was occurring—and between which codes.
Fig. 8: Code Relations Browser showing the code co-occurrence of codes for the frame analysis
The Code Relations Browser example with the ‘injustice’ code and the ‘Economic/Market
overview’ parent code shows 71 instances of code co-occurrence, which leads to a finding
that mixed framing was indeed occurring. This was particularly useful for my findings, as
this interaction shows a new dimension of the data and allows for a more complex under-
standing of how health care is framed. This is replicable and can be used for other projects
examining different types of frames and how they intersect, while continuing to build on
research examining mixed frames (Leimbigler & Lammert, 2016). Therefore, part of the
analysis rested upon this finding that was made possible through MAXQDA’s Code Rela-
tions Browser.
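The counting behind such a co-occurrence view can be sketched independently of MAXQDA. Given segments and the codes applied to them, the snippet below counts how often each pair of codes lands on the same segment; the data are invented and do not reproduce the dissertation's figures.

```python
# Code co-occurrence sketch: count code pairs applied to the same segment.
from collections import Counter
from itertools import combinations

# Invented coded segments: segment id -> set of applied codes
segments = {
    "s1": {"injustice", "economic"},
    "s2": {"economic", "cost of health care"},
    "s3": {"injustice", "economic", "cost of health care"},
}

co_occurrence = Counter()
for codes in segments.values():
    for pair in combinations(sorted(codes), 2):
        co_occurrence[pair] += 1

print(co_occurrence[("economic", "injustice")])  # 2 in this toy data
```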
7 Lessons learned
MAXQDA has multiple functions for thorough and systematic speech and media analysis.
These tools facilitate our understanding of how framing is carried out. Four specific
MAXQDA tools were used in this project. As outlined in this chapter, those tools included
the Code System, tracking hundreds of documents using the memo functions, using charts
to analyze Code Frequencies, and the Code Relations Browser to look for co-occurrences
after coding all documents. MAXQDA also allowed for the easy creation of a full codebook
with explanations and justification for each code.
Researchers should be mindful of subjectivity when coding in qualitative analysis. Coding everything systematically and applying more than one code to a given segment is encouraged for researchers who want to make good use of the code co-occurrence functions, which can highlight patterns that may otherwise be overlooked. Researchers with similar datasets could replicate a similar deductive-inductive coding process. That said, caution should be taken not to create too many coding categories, as this can easily become overwhelming.
The process of coding and re-coding will always entail a certain level of subjectivity.
Throughout the lengthy process of coding and re-coding both inductively and deductively,
there was often the potential of including another code or omitting a different one.
A limitation of qualitative studies is the certain level of subjectivity inherent to creating a
code book based on the concepts around health care. To reduce subjectivity, only codes
that clearly constituted a pattern were included. Many codes were clearly dominant, oc-
curring hundreds of times. This means that a code used only once, twice, or very few times
would not be viewed as a significant pattern. A boundary had to be drawn with regards to
which public discourses and concepts became coded to demonstrate a framing process.
Another important point is the need to analyze the relations between codes. This is significant for any mixed frames that occur, as the code statistics only show the frequency of a code being used; the interaction between codes is not shown unless one analyzes code co-occurrence. As a result, code frequency charts may not tell the complete story of the data, which also points to the importance of utilizing many of the tools available in MAXQDA. An example of this is how the Obama speeches that contained a high level of mixed framing (economic and rights framing together) appear in the charts with only the separate frequencies of economic and rights framing, and no insight into their overlap, when a more accurate visualization would show them occupying the space as a mixed code, hence the importance of also illustrating code co-occurrence. In the context of this project, code co-occurrence constitutes one of the most significant aspects of coding with MAXQDA because it illustrates the mixed framing and shows which topics are discussed in connection with which others.
One of the most important lessons learned from using MAXQDA on a project with a
large dataset is to simply begin diving into the data and coding, even when the first coding process is messy. When faced with a new project and the ability to create an entirely
new coding system, many students and researchers can hesitate and be wary of making
mistakes. As such, an important takeaway is to emphasize the importance of the structure
that deductive coding can provide, as well as the importance of the creativity of inductive
coding. The project’s results illustrate how more research should clearly highlight the in-
ductive and deductive coding and categorizing that occurs during the coding stages.
Simply diving into the data without any predetermined categories would have been over-
whelming, given that over 90 codes were added and then needed to be grouped. Con-
versely, applying the rigid categories to this body of data without the space to add or re-
move major parent codes would also have resulted in missing a lot of data. Therefore,
MAXQDA is essential in creating coding systems, and students in particular should be en-
couraged to think of the first coding process as an important initial step to create a draft of
the coding system. Code co-occurrence is also part of a major finding and lesson learned: MAXQDA affords tools that can be used to better understand qualitative data.
Computer assisted coding can give us a much more nuanced picture of what we are re-
searching. MAXQDA contains many useful tools for finding linkages between different
codes and assisting with analysis of large qualitative datasets. Researchers looking at
speeches and media samples in particular can use the simple coding and grouping func-
tions of MAXQDA, as well as the code co-occurrence functions to gain a deeper and more
sophisticated analysis of the discourses and frames in speech and media samples. This can
be extended to the analysis of other types of document samples as well.
Bibliography
D'Angelo, P., & Kuypers, J. A. (Eds.). (2010). Doing news framing analysis: Empirical and theoretical perspectives. Routledge.
Dorfman, L., Cheyne, A., Gottlieb, M., Mejia, P., Nixon, L., Friedman, L., & Daynard, R. (2014). Ciga-
rettes become a dangerous product: Tobacco in the rearview mirror, 1952–1965. American Journal
of Public Health, 104(1), 37–46. https://doi.org/10.2105/AJPH.2013.301475
Entman, R. (2004). Projections of power: Framing news, public opinion, and U.S. foreign policy. Chi-
cago University Press.
Kuckartz, U. (2014). Qualitative text analysis: A guide to methods, practice and using software. Sage.
Leimbigler, B. & Lammert, C. (2016). Why health care reform now? Strategic framing and the passage
of Obamacare. Social Policy & Administration, 50(4), 467–481. https://doi.org/10.1111/spol.12239
Proquest NYT. https://search.proquest.com/hnpnewyorktimes (data set). Data accessed through FU
Berlin Primo.
The American Presidency Project. https://www.presidency.ucsb.edu/ (data set). Data accessed Sep-
tember–October 2017.
Abstract
This chapter depicts how MAXQDA has been used in my PhD dissertation on the integra-
tion potential of labor migrants from three migrant sending countries of Central Asia. It
discusses how MAXQDA supported data processing and initial data reduction, and further
focuses on the use of summary functions to generate qualitative social types. The use of
MAXQDA’s summary function proceeds in two distinct stages, beginning with the Sum-
mary Grid and then creating Summary Tables. The Summary Grid is used to delve into the
coded qualitative data and to summarize thematic highlights for individual documents by
bringing the data to a higher level of abstraction. Once completed, the summarized data is
clustered together to form a compilation of thematic summaries for selected document
groups with a view to exploring emerging patterns, similarities, and diverging trends across different research sub-groups. Subsequently, the thematic Summary Tables generated for document groups are further summarized within the theoretical framework of the study, as a result of which certain qualitative social types crystallize.
Memos
Summary Grid
Summary Table
1 Introduction
The research project examines the integration potential of labor migrants from three migrant-sending countries of Central Asia—Kyrgyzstan, Tajikistan, and Uzbekistan—in Russia by investigating integration trajectories emerging among different migrant groups and
the role of transnational involvement in these processes (Maksutova, 2019). With respect
to underlying theoretical frameworks, the study applies the four-dimensional social inte-
gration model by Hartmut Esser (2001) to craft a meta-structure for data collection and
analysis, and it uses transnational migration theories to examine micro-level processes
taking place in and across different dimensions.
Although MAXQDA was utilized throughout the entire study, this chapter will focus on the use of MAXQDA’s summary functions in the formation of social types. Interviews were conducted in Kyrgyz or Russian with migrants from Kyrgyzstan, and in Uzbek or Russian with migrants from Uzbekistan and Tajikistan. For qualitative data analysis, 60 interviews were transcribed word-by-word and coded line-by-line using MAXQDA.
1 The translation of the primary data from the original languages into English was necessary because the dissertation was to be written in English and not all the academic supervisors and reviewers mastered the Central Asian languages well enough to review the data in the original languages.
The code system was then consolidated by merging similar codes and clearing out redundant codes using MAXQDA’s Creative Coding tool (available in the Codes menu) and other coding features. The coding was subjected to a rigorous review process which resulted in appropriate corrections to the coding system. The connotation and logic behind each higher-level code (category) and the necessary adjustments they underwent during the first data reduction stage were properly documented in code memos, so that the history of code system development could be traced.
Fig. 1: Document System and Code System after coding (first data reduction)
In the second step, the document-based summaries were compiled into Summary Tables; for each region (document group), one table was created. In the third step, the compiled summaries were aggregated into meta-summaries, which were then exported to a text file for further analysis. In the last step, all meta-summaries assigned to specific categories across all three document groups were put together to explore emerging regularities and singularities, compare the country-based findings, and generate cross-national social types.
The coded data was summarized using MAXQDA’s Summary Grid and Summary Table.
The main idea behind the Summary Grid is to extract the key ideas from the coded seg-
ments of a topic for each document. Once the summaries are written, the Summary Table compiles them for specific codes and documents together with selected document variables.
Their use in the different stages of the data compilation plan will be discussed in detail in
the sections below.
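Conceptually, the Summary Grid holds one summary per document and code, and a Summary Table compiles those summaries for a document group. A minimal sketch of that structure (generic Python with invented names, not MAXQDA internals):

from collections import defaultdict

# Summary Grid: one summary per (document, code) cell (invented examples)
grid = {
    ("KG_Interview_01", "Employment"): "Works informally; jobs found via ethnic networks.",
    ("KG_Interview_02", "Employment"): "Tried formal channels, ended up in the secondary market.",
    ("TJ_Interview_01", "Employment"): "Seasonal construction work arranged by relatives.",
}

# Summary Table: compile the summaries of one code for one document group
def summary_table(grid, code, group_prefix):
    rows = defaultdict(list)
    for (doc, c), text in grid.items():
        if c == code and doc.startswith(group_prefix):
            rows[doc].append(text)
    return rows

for doc, texts in summary_table(grid, "Employment", "KG_").items():
    print(doc, "->", "; ".join(texts))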
Fig. 3: Summary Grid—a summary is written (right pane) for 4 coded segments of the code “EM-
PLOYMENT” (displayed in the middle pane)
When writing a summary, the edit and display options of the summary window were helpful. The option to display code comments and memos attached to coded segments in the middle column “Coded segments,” right below the segments, allowed me to revisit initial thoughts and insights that appeared while coding and incorporate them into my summaries. This ensured that no intellectual work performed in the data processing and coding phase was left behind in the second stage of data reduction. Additionally,
when coding transcripts I often assigned a weighting score of 10 to indicate that coded segments were suitable as citations for my paper. Using the Display origin option in the Summary Grid, which shows the weight and location of a coded segment, I could easily copy the most relevant segments over to the summary along with the source information for
later use in the paper.
Having summarized the category “Structural Dimension” for the document group
“Kyrgyzstan,” I did the same with the other three categories “Social Dimension,” “Cultural
Dimension,” and “Identification Dimension,” and applied the same technique for the doc-
ument groups “Uzbekistan” and “Tajikistan.”
Fig. 6 shows how the summaries written for sub-categories “Employment,” “Working
condition,” and “Living condition” flow into meta-summaries displayed in the cells of the
last row. These cells are assigned to the newly created document “Meta-Summaries Kyr-
gyzstan.” To give just one example: the meta-summary for the sub-category “Employment” was written based on 20 document-based summaries, which were in turn produced from 56 coded segments (5,931 words); the meta-summary itself comprised 111 words. The time required for writing one meta-summary largely depended on the number and length of the document-based summaries and the analytical complexity of the findings. On average, I spent 2–3 hours producing one meta-summary.
The three-stage summarizing process culminated in social types reflecting the integration trajectories pertinent to the Central Asian migrant population. These are empirical types induced from the analytical notes, and not ideal types in Max Weber’s (1973) understanding, which constitute an artificial construction of certain aspects of reality (Kuckartz, 1991, p. 45). Social types represent migrants categorized into groups based on their shared level of propensity towards social integration into the receiving society. Essentially,
I identified the following five general types of social integration among the Kyrgyz, Tajik,
and Uzbek research sample:
I. The excluded, disillusioned and angry
II. The low-profilers
III. The adaptionists
IV. The migrant-entrepreneurs
V. The settlers
Each of these types represents a particular group of labor migrants who display similar
patterns of social integration trajectories across the structural, social, cultural, and identi-
fication dimensions. The migrants allocated to each type do not share complete homoge-
neity in their actions, behavioral tendencies, or thinking patterns, but conform to the dom-
inant features that categorize them into a particular group and distinguish them from
other groups. Methodologically, the creation of social types was not as challenging as it first seemed. The research data were subjected to a three-stage analytical generalization via summaries, meta-summaries, and analytical notes, so that social types gradually crystallized throughout the process. All that remained was to identify, categorize, and situate them within
the theoretical framework of the research. Ultimately, I discussed the social types against
the main research question and presented a number of empirically validated theoretical
arguments.
Kyrgyzstan\Interviews with migrants\Meta-Summaries Kyrgyzstan:

(1) Kyrgyzstani migrants use both formal and informal channels to search for employment, but they have more trust and confidence in finding jobs through their private ethnic networks. Using mainstream formal employment channels does not guarantee entry into the primary employment market. Migrants mostly end up in the secondary employment market due to different legal restrictions, unscrupulous employers, and ubiquitous anti-migrant sentiments in the mainstream population. Migrants are not only victims of the above socio-structural circumstances, but also informed users and beneficiaries of widespread corruption schemes and legal loopholes. Thus, both mainstream employers and migrants may be interested in the sustenance of a thriving secondary employment market under the existing legal frameworks.

(2) Kyrgyzstani migrants have few housing options in Russia. Most migrants are forced to live in shared migrant flats without official registration, as a result of which they break migration law. However, migrants considerably improve their living conditions by moving from shared migrant flats to their own (rented) flats when a) they have already been naturalized in Russia; b) they have brought their families to Russia; c) they make an informed decision to settle in Russia for the long term.

(3) Kyrgyzstani migrants’ investment behaviour positively correlates with their long-term migration plans. If migrants have the intention to return, they tend to channel their earnings to Kyrgyzstan to invest in different family projects. The migrants who intend to settle in Russia for a long time are inclined to invest in projects based in Russia. There is also a separate category of migrants (transmigrants) who invest in their home country but still plan to stay in Russia long term.

Tajikistan\Interviews with migrants\Meta-Summaries Tajikistan: [summaries omitted from the figure]

Uzbekistan\Interviews with migrants\Meta-Summaries Uzbekistan: [summaries omitted from the figure]

Analytical Note: The everyday life of Kyrgyzstani, Uzbekistani and Tajikistani migrants in Moscow is navigated through feelings of insecurity, fear of employment fraud and exploitation, and the threat of police harassment, abuse and deportation. Few possibilities for legal protection and a lack of trust in the Russian law enforcement agencies and other officials have compelled Central Asian labour migrants to adopt different coping strategies. Such strategies differ from one national group to another, depending on the level of access to primary structures of the host country. Whereas all three national groups invested in their informal ethnic networks in order to address the risks and uncertainties related to their migration, some groups are already institutionalizing their ethnic structures. For example, Kyrgyzstani migrants have established a large number of migrant organisations in Moscow delivering a wide range of migration-relevant services and products to cater not only to Kyrgyzstanis but also to other Central Asian nationals. Thereby they have capitalised mainly on their citizenship, diversified social networks and good command of the Russian language. At the same time, Tajikistani migrant structures are primarily consolidated around delivering ethnic consultancy services which address Tajikistani labour migrants’ pressing concerns about residence and employment legalisation, communication with law enforcement and other state agencies, employment fraud and others. Uzbekistanis prove to be the least organised and consolidated among Central Asian migrant communities in Moscow despite being the greatest in number. A comparatively low level of institutionalisation of Uzbekistani migrant structures seems to be a consequence of the Uzbek Government’s hostile attitudes towards unregulated labour migration to Russia and its scepticism about the potential political role of self-organisation among Uzbekistanis abroad.
5 Lessons learned
As demonstrated in my example above, MAXQDA helped me to consolidate and efficiently
organize all research-relevant data in a single project. Its Summary functions not only ena-
bled me to summarize the thematically categorized data, but also helped to conduct cross-
category comparisons across different sample groups and synthesize consolidated findings
from the entire research data. In retrospect, however, I realized that I had not fully exploited
the software's analytical potential. After producing document-based summaries, I could
have used the mixed methods analysis tool Qualitative Themes by Quantitative Groups (Sum-
maries) to compare summaries for groups of documents that share the same variable values.
For example, using this tool I could have cross-verified my preliminary findings on migrant
women's occupational mobility across national groups by comparing summaries written
on the category “Employment.” In addition, I regret not having used MAXQDA's visualiza-
tion tools such as MAXMaps to illustrate certain relationships and trends that were emerg-
ing in the meta-summaries. In particular, MAXMaps’ Single-case Model (Summaries)
would have been ideal for visually representing how the five social types gradually emerged
by means of the three-stage analytical generalization. Unfortunately, I learned of these op-
tions when it was too late to integrate them into my work. So, the key takeaway for me was
to thoroughly learn about the analysis options available in the QDA software before em-
barking on data analysis in order to take full advantage of its potential.
Bibliography
Bourdieu, P. (1986). The forms of capital. In J. G. Richardson (Ed.), Handbook of theory and research for the sociology of education (pp. 241–258). Greenwood Press.
Esser, H. (2001). Integration und ethnische Schichtung. Arbeitspapiere – Mannheimer Zentrum für Eu-
ropäische Sozialforschung 40.
Kuckartz, U. (1991). Ideal types or empirical types: The case of Max Weber’s empirical research. Bulletin de Méthodologie Sociologique, (31), 44–53. https://doi.org/10.1177/075910639103200103
Maksutova, A. (2019). Children of Post-Soviet Transnationalism. Integration Potential of Labour Mi-
grants from Central Asia in Russia. LIT Verlag.
Saldaña, J. (2016). The coding manual for qualitative researchers (3rd ed.). Sage.
Weber, M. (1973). Objectivity in social science and social policy. In M. Weber (Ed.), Collected essays on
the theory of science. Mohr. (in German)
Using MAXQDA for Bibliographic Documentary Analysis

Abstract
Bibliographic Documentary Analysis is an advanced type of systematic literature review
that uses the research method of documentary analysis to create a data analysis process
that allows us to improve the performance of literature reviews, completing them in less time and with greater accuracy. Using MAXQDA for Bibliographic Documentary Analysis is very helpful because it offers tools to combine automatic and manual procedures within
the analysis process.
In this chapter, we offer a detailed explanation of the analysis process we use in our
bibliographic tasks. This analysis process is suitable for setting the research purpose and developing a conceptual framework, and it can even improve the theoretical dialogue or discussion of existing research.
We introduce a project about a literature review on financial literacy in early childhood.
Using MAXQDA, we were able to analyze 129 documents in less than a week by combining automatic and manual procedures. Among other tools, we used Lexical Search and Word Combinations to identify concepts in the literature. The goal was to build a conceptual framework and write an article describing it.
1 Introduction
“In today's world, the true exercise of freedom and sovereignty is in knowledge;
science is needed to lower the limits of ignorance and increase the ability to
solve problems” (Ruiz Ramírez, 2010).
The significant research production of recent times, together with tight deadlines, adds to the challenges of research activity. On the one hand, a systematic review of the literature is necessary to distinguish which papers are of sufficient quality to serve as a bibliography since, currently, we can lose ourselves in an ocean of information ranging from the irrelevant to the essential (Guirao Goris, 2015). On the other hand, we require systematic processes and supporting computer tools to be efficient in reviewing and apprehending the main ideas of the texts to be analyzed within tight deadlines.
This chapter presents a case study of a systematic literature review performed with a
significant number of documents. We used MAXQDA’s Word Combinations and Para-
phrases, among other analytic tools, to conduct a Bibliographic Documentary Analysis.
The process we used was a very efficient way to explore a large amount of literature, build a conceptual framework for future fieldwork and data analysis, and support the writing of a paper.
The analytic process was performed with 129 documents (59 reports, 35 papers, 17
book chapters, 12 web pages and 6 laws) by two members of a research team to answer the
research question: Which dimensions, aspects, and properties are discussed regarding the
concept of money in a context of financial literacy in the literature that deals with early
childhood?
After the selection process for articles, books, and other documents, the mission was to review the selected literature. With a large amount of literature to review and a short time frame (five days to deliver the final report), we required a strategy to optimize the work for the desired result. Conducting Bibliographic Documentary Analysis with MAXQDA
provides just such a strategy and a systematic process to facilitate the necessary tasks of
analyzing documents by applying documentary analysis strategies to literature docu-
ments. Using MAXQDA, we followed this sequence:
1. Formulate a one-sentence research purpose and derive important concepts from it.
2. Search for relevant literature in databases using the derived concepts as search terms.
3. Import the search hits into a reference manager, such as Mendeley, including the full-text PDF files.
4. Export the literature data in RIS format and import the references together with the full texts into MAXQDA (see the example record after this list).
5. Read the abstracts of the files in order to organize the files in different document groups regarding their main topics, and discard documents that do not exactly fit the research purpose.
6. Explore the literature by:
a) Using “Lexical search” with the concept(s) created beforehand to identify relevant
sections.
b) Checking each search hit, including its context, and coding and paraphrasing important ones.
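For readers unfamiliar with the exchange format used in step 4, a minimal RIS record might look like the following. This is an illustrative entry based on a source from our bibliography; exact tag usage varies between reference managers, and the L1 tag commonly carries the link to the attached PDF:

TY  - JOUR
AU  - Smallbone, Teresa
AU  - Quinton, Sarah
PY  - 2011
TI  - A three-stage framework for teaching literature reviews: A new approach
JO  - The International Journal of Management Education
VL  - 9
IS  - 4
SP  - 1
EP  - 11
L1  - C:\literature\Smallbone (2011) A three-stage framework.pdf
ER  -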
Step 6a and step 7a are led by word-based auto-routines; the search hits are handled and
evaluated manually; some of them are auto-coded to establish thematic contexts for fur-
ther exploration. The process described above is performed in three main phases in the
research: state the research concern, build the conceptual framework, and perform the
discussion or theoretical dialogue. In completing the literature review, the researcher
should know in which of these three stages he or she is working. When using manual cod-
ing, for example, the researcher is looking for the indicators of the conceptual framework:
properties or dimensions of the concept(s). The role of the conceptual framework is to
drive the data collection and the analysis of the subsequent empirical study. The re-
searcher finds the proper initial indicators within step 7. The conceptual framework is part
of the coding system.
2 Data collection
This section describes the tasks we undertook to delimit the boundaries of the research by developing the research concern, to select search terms to identify literature to be included in the study, and to bring that literature into reference management software.
We set the following statement of purpose for our project: Study the properties and di-
mensions of the concept of money in early childhood in the family and school contexts in
order to build a conceptual framework to be used in further studies.
We arranged the search terms in a table organized in decreasing order of relevance to the research concern (Tab. 1).
Tab. 1: Building the search system from the statement of purpose

Research concern underlining the key concepts: “Study the concept of money and its dimensions in early childhood in the family and school contexts.”

Relevance          Concept 1                                Concept 2          Concept 3
high relevance     money (saving, budget, pocket money…)    early childhood    family/school
medium relevance   money (saving, budget, pocket money…)    early childhood
low relevance      money (saving, budget, pocket money…)
low relevance      childhood
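To make the logic of Tab. 1 concrete, the following small Python sketch assembles boolean search strings from the concept tiers. It is a hypothetical helper for illustration only; in practice, the queries were typed into the literature databases:

# Concept tiers from Tab. 1; terms in parentheses are treated as variants
money_terms = ["money", "saving", "budget", "pocket money"]
tiers = [
    ("high relevance",   [money_terms, ["early childhood"], ["family", "school"]]),
    ("medium relevance", [money_terms, ["early childhood"]]),
    ("low relevance",    [money_terms]),
]

def or_group(terms):
    # ["a", "b"] -> ("a" OR "b")
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

for label, concepts in tiers:
    print(label + ":", " AND ".join(or_group(c) for c in concepts))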
After searching the combined concepts (money + early childhood + family/school), for ex-
ample, we started retrieving results in the form of articles or books, usually in PDF format.
We searched for the full-text documents and saved them in a computer folder, renaming each file with the author(s)’ surname, the publication year in brackets, and the first words of the title.
In our study, we did not import the PDF files from Mendeley in this process because we
preferred to import them manually, but in Fig. 2, we show the different import options that
MAXQDA allows when importing from a reference manager like Mendeley.
The goal was to evaluate the imported references by reading their abstract field, which usually contains all the information needed to understand a document: theoretical context, design and methods, and results of the study. Reading the abstracts also helped us delimit the boundaries of the research concern by analyzing what had been researched before on the topic and which new knowledge could be generated from it. In order to clarify the research concern, we evaluated the document abstracts in the way Smallbone and Quinton (2011) suggested. We added a
document memo to each evaluated RIS metadata document by right-clicking on the doc-
ument’s name in the Document System window and selecting Memo. In the memo, a table layout can be inserted by right-clicking in the writing area and selecting Insert table. The table included two columns, one for the evaluative questions and another for the responses. Questions such as the purpose of reading the material, the type of literature, the audience of the document, and the analytic approach were pertinent for appraising the document. Fig. 3 shows a document memo with a fragment of the evaluation
table based on the proposal by Smallbone and Quinton (2011).
Fig. 3: Document memo with standardized questions and responses to evaluate each literature
source
The selection of the documents was guided by the dimensions of the concepts related to the research concern and the emerging conceptual framework (money: saving, budget, etc.). We created document groups to classify documents by ideas and themes (right-click in the Document System window and select New Document Group) in order to store the PDF files previously retrieved from the journals or databases. We dragged the files from the computer folder and dropped them onto the desired document group depending on each file’s central topic, as stated in its document memo (Fig. 3). For example, when we found in the literature a transversal dimension involving gender issues concerning money and girls, we created the document group ‘Girls and Financial Education’ to group all related documents. Other transversal dimensions in the project were behavioral finance and evidence-based education research.
We used MAXQDA’s Lexical Search to locate the passages where the authors develop the concepts (e.g., money, pocket money) in their works. We usually used two or more search strings, so using the AND operator within one sentence was the proper configuration for this dialogue box to retrieve both search strings inside a sentence (Fig. 4).
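What the within-one-sentence setting achieves can be sketched generically as follows. This is a toy re-implementation in Python for illustration, not MAXQDA’s code, and the sentence splitter is deliberately naive:

import re

text = ("Children receive pocket money from their parents. "
        "Saving is rarely discussed at school. "
        "Pocket money can teach saving habits early.")

terms = ["pocket money", "saving"]

# Naive sentence split; keep only sentences containing ALL search strings
for sentence in re.split(r"(?<=[.!?])\s+", text):
    if all(t.lower() in sentence.lower() for t in terms):
        print(sentence)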
The documents were organized in document groups regarding their main topic, so we
activated only one group of documents at a time and performed the Lexical Search on each
document group individually to optimize the performance of the analysis, as working with
many documents at once might slow it down. In any conceptual framework, there are several conceptual flows, so the search strings must be adapted from one document group to another. It can be useful to save a set of search strings for later reuse by clicking the Save button in the Lexical Search pane; the search configuration is then saved to the computer as a *.sea file, and opening this file in the same pane will restore the configuration in the future.
Fig. 5 shows the retrieved context in the Search Results window; we performed searches and read the original passages in their context. This close contact with the data allowed us to
code the literature documents (right-click on the selection Code > With New Code, see sec-
tion 4.1) and write first paraphrases (Analysis > Paraphrase, highlight a passage and start
writing the paraphrase, see section 4.2).
Fig. 5: Search results window (left) and context of the search hit in the Document Browser
(right)
1 Please note that “indicator” is a term used in conceptual frameworks which we understand as a code in MAXQDA, so code and indicator can be considered synonymous here. Indicator is the proper word when talking about conceptual frameworks and code is the proper word in a MAXQDA context.
Tab. 2: Excerpt of the conceptual framework from the final literature review article
Concept: 1. Social development in childhood
Category: 1.2 Explore the social environment
Indicator: 1.2.6 Learn the basic handling of money

Descriptions and cognitive processes:
1) Description: Knows how to count the money, buy an item, and count the change he/she might receive (UNICEF, 2013). Cognitive process: Identify monetary amounts with which he/she buys and receives the change.
2) Description: Money can only be spent once; after buying something a person needs more money to buy something else (Jump$tart, 2017). Cognitive process: Explain how money can be spent once and summarize the importance of renewing it.
3) Description: Understand that money can be exchanged for goods or services and that if it is spent, it cannot be spent again (OECD, 2017). Cognitive process: Report that money is exchanged for things or services and cannot be spent again.
As we are doing a literature review, there is a gap between the indicator and the content of the descriptions shown in Tab. 2. We could have added an additional analytic level that would include money is spent only once or personal money versus other people’s money, but we considered this sufficient because the conceptual framework is flexible and open. It has to help us, not tie our hands; it evolves during the research process and is contrasted again at the end of the research during the discussion. The relevance of a quote in a document is decided in several ways. For example, any topic has only a handful of experts; in our project, we found good information from Jump$tart, OECD, and UNICEF, just as a literature review on grounded theory would probably rely on only three or four primary sources.
Fig. 6 shows a sample of the code system in MAXQDA with indicators for “Trust and
handling transactions”. In the context of the literature review, a code is a word or set of
words that requires an explanation to be understood; this explanation is the paraphrase.
We also use the paraphrases to build code descriptions for the indicators of the conceptual framework.
The researcher creates codes to build the conceptual framework or to support the lit-
erature review process by collecting similar ideas under one code. The researcher creates
paraphrases to support the writing of the final article and to understand the codes and
their future use during the data analysis. We do both actions at the same time in most cases. In our opinion, codes are well suited for deriving indicators for the conceptual framework because, in the future, they will be used as deductive codes. Paraphrases are a fine tool for supporting the final report writing process in the field of literature reviews.
Fig. 6: Part of the code system with indicators of the conceptual framework
After coding and paraphrasing one text segment, we returned to the search results and kept
on exploring the rest of the hits. The code system and the set of paraphrases kept growing, and our insight into the theoretical flows in the readings widened steadily. Coding and paraphrasing relevant text segments helped us to further develop the indicators, properties, and dimensions of our conceptual framework.
The searches based on predefined terms only gave us a partial view. Thus, we decided to use the Word Combinations tool (MAXDic-
tio > Word Combinations) to retrieve more attributes and properties of the conceptual
framework indicators and categories. Fig. 8 shows the Word Combinations dialogue box
and the settings we used.
We set the combinations to a minimum of three words, because two-word combinations retrieve too much information and, intuitively, we expected an interesting combination of words to contain an article or preposition, a noun, and an adjective.
Setting the option Only for activated documents allowed us to explore one document group at a time, for example for different subthemes of the conceptual framework developed so far: children’s attitudes towards money, the family’s role, or the school’s task in training skills like saving. Document groups serve to organize the documents by themes coming from the literature; the code system organizes the codes by themes and subthemes of the conceptual framework.
In this step, we did not differentiate by documents inside the document group.
Selecting Only word combinations within sentences or Only word combinations within parts of sentences narrowed the results but improved their quality. We decided not to lemmatize the English words because otherwise the results would not have been as precise as we wished.
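As a rough illustration of what the tool computes under these settings, the following generic sketch counts three-word combinations within sentence boundaries and without lemmatization (an invented mini-corpus, not MAXQDA’s implementation):

import re
from collections import Counter

text = ("Behavioral biases shape spending decisions. "
        "Behavioral biases shape saving decisions as well.")

counts = Counter()
for sentence in re.split(r"(?<=[.!?])\s+", text):
    words = re.findall(r"[a-z]+", sentence.lower())
    # Sliding window of exactly three consecutive words per sentence
    for i in range(len(words) - 2):
        counts[" ".join(words[i:i + 3])] += 1

for combo, n in counts.most_common(3):
    print(n, combo)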
Grouping the documents was a good idea: the Word Combinations search can be slow to return results if launched on all the documents in the project, and searching by ideas or topics is more efficient. Fig. 9 shows the retrieved context. We searched within the word combinations for topics related to the concept we were interested in at that moment (“biases” in the example). When we found an interesting result in the Search Results pane, we proceeded to read it in its context.
Fig. 9: Results window with word combinations (top left), detailed hits in the Search Results win-
dow after double-clicking on a word combination (bottom) and full context in the Docu-
ment Browser window (top right)
In most cases, reading the quote in its context is enough to get an idea of the content and
the properties or dimensions involved in that idea. From time to time, we read the full chapter or section of the document, because interesting ideas are often found near the search hit that took us there.
If MAXQDA is provided with the right search terms, the system will do the rest. Once we found an interesting passage and read it carefully, we started working on the analysis manually without quitting the process started by Word Combinations; this is important, because if the Word Combinations results window is closed in order to code the text manually, the process has to be started again. The point is to take advantage of an automatic tool such as Word Combinations, including searching inside the results list, while reading the context of the hits in the Search Results window by navigating the list. If you read something interesting for the concept you are working on, you can code it and/or paraphrase it.
Fig. 10: Using Categorize Paraphrases to support writing the final report
We revised the conceptual codes applied to the paraphrases to better understand the links between the conceptual framework and the emerging narrative of the final report or future article. We recoded the paraphrases with instrumental codes that followed the structure of the article (as can be seen in Fig. 10) to match the ideas in our minds with the narrative of the report. During this process, we numbered the paraphrases inside each section of the report according to the logic of its narrative. For example, the introduction of an article is a hard piece of text to develop, so we ordered the relevant paraphrases to achieve a logical narrative flow.
We numbered the paraphrases in the sequence in which they would be included in the literature review article; using this collection of excerpts, we continued coding and working on the properties and dimensions of the conceptual framework. By the end of the process, we were ready to start writing the final report of the conceptual framework and the article.
After clicking on the “Paraphrases” column heading to order the paraphrases by number, we exported the output to a Microsoft Excel file using the Excel export icon. This Excel file provides a beautiful overview of what we had done: on the left, we could see the source document and the original paraphrased text; on the right, the ordered paraphrases along with the exact part of the article in which they should be inserted, represented by the instrumental codes of the article structure. The analytic codes representing the parts of the developed conceptual framework also appeared in that column, helping us to develop the narrative of the article and link it with the inserted tables representing the parts of the conceptual framework.
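The resulting worksheet can be pictured as a table sorted by the assigned numbers. A minimal sketch with pandas (invented data; in our project the file came directly from MAXQDA’s Excel export icon):

import pandas as pd

paraphrases = pd.DataFrame([
    {"document": "OECD (2017)", "paraphrase": "Money is exchanged for goods.",
     "section_code": "Introduction", "order": 2},
    {"document": "UNICEF (2013)", "paraphrase": "Children learn to count change.",
     "section_code": "Introduction", "order": 1},
])

# Sort by the numbering that reflects the narrative of the report, then
# export the overview for writing up the article (requires openpyxl)
ordered = paraphrases.sort_values("order")
ordered.to_excel("paraphrase_overview.xlsx", index=False)
print(ordered)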
Fig. 12: Codes representing the structure of the literature review article to be written
Developing such a code system representing the structure of the final report is also very useful in other academic tasks like doctoral dissertations, research projects, or article writing. We developed this structure from the beginning, but it is sometimes better to work with the indicators of the conceptual framework for a while first, because you will then know better the themes and subthemes that will guide the sections of the report. This makes it easier to add the sections of the future academic product.
10 Lessons learned
We learned in this process that MAXQDA has many tools that, properly combined, can be used in ways different from those they were initially created for. Writing paraphrases made sense in our Bibliographic Documentary Analysis, combined with lexical tools such as Word Combinations in the MAXDictio menu.
Being able to perform a quick analysis starting from automatic tools (Lexical Search or Word Combinations), combined with manual analysis by coding and paraphrasing, saved us many hours of tedious work simply by organizing the tasks differently.
The process was systematic, and we only had to focus our attention on the flow of ideas inside the literature documents. Furthermore, the constant contact with the original data that MAXQDA enables helped ensure that the ideas in the conceptual framework, and in the later article, were accurate and close to the original authors’ interpretations. Because of this proximity to the data, members of the research team or the academic community can validate that the ideas have been properly apprehended.
We had prior experience both in using MAXQDA and in working on the topic of the research, but we are sure other researchers can benefit from this approach to perform better systematic literature reviews. Our approach has many advantages for novice doctoral candidates, who in any case have to improve their skills in software like MAXQDA and choose a system for the literature review of their dissertations. In our opinion, MAXQDA can help them optimize their time and resources and finish successfully.
Bibliography
American Psychological Association. (2020). Paraphrasing. https://apastyle.apa.org/style-grammar-
guidelines/citations/paraphrasing
Guirao Goris, S. J. A. (2015). Utilidad y tipos de revisión de literatura. Ene, 9(2). https://doi.org/10.4321/
s1988-348x2015000200002
Kuckartz, U. (2014). Qualitative text analysis: A guide to methods, practice and using software. Sage.
Kuckartz, U., & Rädiker, S. (2019). Analyzing qualitative data with MAXQDA. Text, audio, and video.
Springer Nature Switzerland. https://doi.org/10.1007/978-3-030-15671-8
Real Decreto 1630/2006 (2007). de 29 de diciembre, por el que se establecen las enseñanzas mínimas
del segundo ciclo de Educación infantil. Boletín Oficial del Estado, 4(4 de enero), 474–482.
https://www.boe.es/boe/dias/2007/01/04/pdfs/A00474-00482.pdf
Ruiz Ramírez, J. (2010). Importancia de la investigación. Revista Científica, 20(2), 125.
Saldaña, J. (2011). Fundamentals of qualitative research. Oxford University Press.
Smallbone, T., & Quinton, S. (2011). A three-stage framework for teaching literature reviews: A new approach. The International Journal of Management Education, 9(4), 1–11.
Using MAXQDA in Teams and Work Groups
Abstract
With the outbreak of COVID-19 across the United States in March 2020, 500+ faculty and
educators at the University of Wisconsin’s Division of Extension began to report weekly on
how they respond to emerging community needs related to the pandemic. In this chapter
we share how we designed and facilitated the team-based analysis of this large and con-
tinuously growing dataset. We illustrate how we use a variety of MAXQDA’s features to de-
velop, apply, and manage coding schemes while working in a team that operates com-
pletely remotely. We share how we structure iterative workflows amongst our six to ten
analysts, and we share strategies and technical tips regarding managing and merging large
team-based project files. Through outlining this team-based thematic coding process we
illustrate how teams can collaboratively prepare datasets for further in-depth analyses that utilize Subcode Statistics, code-based Document Variables, and MAXQDA’s retrieval tools.
Extension’s educational programming includes, among other areas, promoting physical well-being with a strong focus on nutrition education and support of emergency food systems, and providing community development support across the state. Most issues Extension is working on have intensified during the COVID-19 pandemic.
As an organization, we had an immediate need to understand and communicate our cen-
tral and distributed state-wide responses to COVID-19, and how our staff adapted program
delivery to online channels and social distancing settings. Additionally, we needed to un-
derstand how existing local issues (such as farm sustainability or equitable access to safe
and healthy food) intensified during the developing emergency.
Since the beginning of the pandemic, on a weekly basis, our educators write and update
brief narratives on their work and submit them to our central Planning and Reporting Plat-
form. Between April and July 2020, we collected and analyzed approximately 1,500 narra-
tives, with collection and analysis ongoing as of the writing of this article. Each record in-
cludes a standardized abstract sentence, a brief outcome narrative (50–250 words),1 as well
as optional narrative information on how our colleagues expand access to educational pro-
gramming to under-served audiences. Each record contains background information such
as the county geographies served, the affiliated Extension Institute, and project collabora-
tors.
Our task was to set up an analysis process that would allow us to analyze a large volume
of weekly incoming data. One immediate goal was to provide regular reports on our organization’s COVID-19 response, highlighting emergent areas of educational focus. Because
we use MAXQDA as an institution-wide workspace for distributed analysis of large
amounts of data at Extension (Schmieder, Caldwell, & Bechtol, 2018), it was pivotal to pre-
pare the dataset for a variety of subsequent analytic questions and methodological ap-
proaches. We needed to build a database that our own analysis team, Institute-based Ex-
tension Evaluators and other colleagues (such as Program Managers) could use to quickly
execute more detailed analyses themselves.
The process we describe here (Fig. 1) is an institution-wide evaluation with the hybrid
purpose of organizational learning, internal program development and streamlined stake-
holder communication. However, the teamwork flows and software management frame-
work we describe seamlessly translate to mid-scale to large-scale qualitative research pro-
jects that require distributed analysis of data—especially if data are collected and/or ana-
lyzed in several stages by several teams. In fact, our process was based upon best practices
derived from the project managers’ 10+ year research and research management experi-
ence utilizing various Qualitative Data Analysis Software (QDAS) packages.
Fig. 1: Overview of the data analysis strategy for a multi-focus analysis project
Project Lead. Responsible for determining scope of analysis, deliverables and method-
ological/procedural design.
Project Manager. Responsible for setting up and managing the MAXQDA file and for
communicating with Analysis Team Members regarding concrete analytic tasks. In our
case this role was filled by one of the Project Leads.
Analysis Team Member. Responsible for the initial analysis of the dataset and for writ-
ing reports in collaboration with the Project Leads. In our case, the Analysis Team
Members consisted of our team of Student Evaluators and the Project Leads.
Data Users. Subsequent analysts who use the pre-coded dataset, such as Institute-af-
filiated Program Development & Evaluation Specialists and Extension Program Man-
agers.
Project Managers need to be as explicit as possible about how different analytic tools are constructed from concrete analytic tasks, combining the different components of the software with the components of other artifacts used in the analysis (Schmieder, 2019; Silver & Woolf, 2019).
In our experience as analysts and consultants it is also important to ensure that the
QDAS Project Manager has enough experience as a qualitative analyst, and that the project
manager has a voice at the table regarding the analytic strategy. Too often we see in re-
search and evaluation projects that technical aspects are outsourced to team members
with limited to no agency when it comes to analytic processes (such as graduate students
or administrative staff). Division of labor along such technological divides creates disconnects between the analytic strategy and how the software is concretely utilized to enact analytic processes. In turn, this is likely to foster incoherent software use, incoherent analytic strategies, and incoherent analytic products.
Fig. 2: MAXQDA splices data from Excel spreadsheets (top) into codable text and Document
Variables (bottom)
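The import logic of Fig. 2 can be pictured as follows: designated columns become codable text, while the remaining columns become Document Variables. A minimal sketch with pandas (hypothetical column names, not MAXQDA code):

import pandas as pd

# Hypothetical reporting export: one row per submitted narrative
rows = pd.DataFrame([
    {"County": "Dane", "Institute": "Health & Well-Being",
     "Abstract": "Educator offered virtual nutrition classes.",
     "Narrative": "We moved our nutrition series online ..."},
])

text_columns = ["Abstract", "Narrative"]      # become codable text
variable_columns = ["County", "Institute"]    # become Document Variables

for _, row in rows.iterrows():
    document_text = "\n\n".join(row[c] for c in text_columns)
    variables = {c: row[c] for c in variable_columns}
    print(document_text, variables)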
We use the document group memos to keep track of the teamwork import for each batch.
For example: Adam and Tina analyzed the data in Batch 5—so when the Project Manager
imports their teamwork, he notes in the document group memo that the analysis was done
by them, and when their work was imported. That way we avoid confusion regarding what
the main analysis file contains, and we know who did the analysis in the different batches
at a glance.
For information about the project that is important, we create documents in the root
folder of the Document System. For example: The document “COVID ANALYSIS TEAM:
READ BEFORE ANALYSIS” (Fig. 3 and Fig. 4) contains information about analysis roles and
outlines the broad workflow. We discuss these topics in team meetings, but the document
serves as a reference point and documentation of these discussions. By integrating them
into the MAXQDA file, we create a one-stop-shop for analysts that does not rely on multiple
shared documents.
Fig. 4: Analysis outlines and tasks are kept track of in MAXQDA in the form of documents
In our experience this helps to avoid confusion and additional communication via emails,
which in turn keeps our analysis on schedule.
Additionally, we uploaded some of the resulting analytic products in a document group
labelled “Archived Docs” (Fig. 3). That way, analysts could quickly reference the ‘big picture’ of our analysis products, which is especially valuable if analysts work only on specific sections of the data, or if they do initial coding that is later utilized by other analysts, as
done in our project.
We use document sets to create sub-batches of data for different analysts. For example,
one part of Batch 5 was analyzed by Tina and another part by Adam. To facilitate this anal-
ysis, we created two document sets each containing the corresponding parts of Batch 5.
That way each analyst knew exactly what they needed to work on. Using sets also allowed
us to have a more controlled teamwork import process. Additionally, we used memos at-
tached to document sets to specify details for the analysts who were assigned to the sets
(such as deadlines, things to look out for, important procedural updates that may have
emerged since the last team meeting).
Throughout the project, our aim was to support the analysts in understanding the “so what” of the coding work they were doing while ensuring that they fully grasped the categories we were coding for.
In our joint analysis sessions and check-ins, we reviewed memos to clear up ambiguities
and questions, and we crafted definitions and decision guides. Throughout the analysis we
encouraged all analysts to add suggestions to code definitions while they analyzed individ-
ually and in small groups. In those scenarios, we asked them to change the tag color of the
memo icon to red. This way the Project Manager could easily identify where someone had
made remarks. Secondly, analysts were asked to highlight their additions in the memos by
changing the text color in the memo. Again, that way the Project Manager could easily see
where we needed to adjust coding guides and definitions. At the time of our project,
MAXQDA did not feature an option to merge memos when using the Import Teamwork
function (available in the Start > Teamwork menu or by right-clicking in the Document Sys-
tem window). To mitigate this, the Project Manager reviewed memos (which was easy due
to the use of colors), and then made changes manually in the core project file.
Each week, new batches of data were imported into the project from our organization’s reporting system. As soon as the Project Manager merged the analysts’ files into a new core file, he would archive all files into an “archived” folder in Microsoft Teams and upload only the new core file. By regularly archiving the files submitted by Analysis Team Members and updating a core file that would always be in the same spot in Microsoft Teams, we mitigated confusion amongst the analysts regarding which file(s) to work with, and as a side effect we created a regular and traceable data backup process.
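The archiving convention can be expressed as a small script. This is a hypothetical sketch of the file-handling routine, not of MAXQDA itself; folder names and file extensions are invented:

from pathlib import Path
import shutil

team_folder = Path("Teams/COVID-Analysis")    # invented path
archive = team_folder / "archived"
archive.mkdir(parents=True, exist_ok=True)

# After merging, move the analysts' submitted files out of the way ...
for submitted in team_folder.glob("batch5_*.mx20"):
    shutil.move(str(submitted), str(archive / submitted.name))

# ... so that exactly one current core file remains in a fixed location
core_file = team_folder / "CORE_covid_analysis.mx20"
print("Analysts always work from:", core_file)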
Fig. 6: Outline of the team workflow based on simultaneous data analysis of different batches of
data
To transfer the analysis work into the core MAXQDA file, the Project Manager exported the
coding and paraphrases from each analyst team member’s file via the Teamwork Export
function. In our case, we typically export the teamwork for the most recent batch of data,
which is contained in a document group (right-clicking on document group and selecting
Teamwork > Export Teamwork: Export Data to Exchange File).
The Export Teamwork function creates a small file that contains only the coding, par-
aphrases and memos. The Project Manager then imports this specific analytic work into
the respective uncoded document group in the core file (right-clicking on document group
and selecting Teamwork > Import Teamwork: Import Data from Exchange File).
In general, our process for teamwork is not dependent on integrating files via teamwork
import. It is possible to stagger the analysis by assigning the file to different team members
at different times. We generally prefer this strategy because it makes the step of Teamwork
Export and Teamwork Import superfluous. But this was only possible once the time pres-
sure related to coding incoming data eased off, i.e., when we did not need to do simulta-
neous analysis to meet internal deadlines.
Regardless of the teamwork strategy, we found it crucial to communicate the merg-
ing/pushing workflow weekly with the team—that way all team members know how their
work feeds into the larger process, and they understand how everyone’s work and progress
is dependent on each other.
Project leads establish a preliminary coding scheme and define analysis tasks for the
team
The creation and iteration of our thematic code system began with the Project Leads ex-
amining the first batch of data collected from Extension’s reporting platform. Our goal was to become familiar with the data and create a rough framework that supported some initial coding but retained the flexibility to adapt as codes began to emerge. As a whole team, we later modified and re-iterated the coding scheme as we analyzed additional incoming data.
Our coding scheme was framed by the general questions we had of the data. For exam-
ple, we wanted to understand what types of programming we delivered during the COVID
outbreak (consulting, virtual classes, online fact sheets, etc.), and we needed to understand
which broad issues (economic, health-related, etc.) our colleagues addressed in their daily
work.
In the early stages of our projects, we typically utilize code comments and paraphrases,
rather than relying solely on the “codes” in MAXQDA. For example, we wanted to under-
stand how the delivery of educational programming changed due to the pandemic. We
created a code “Response Medium” to represent this analytic perspective. We then applied
this code to data segments that discussed the response medium. But rather than creating
sub-codes right away, we coded the data via the comment function for coded segments.
For example, as Fig. 7 demonstrates, we created a code comment that said, “virtual train-
ing.” The advantage of this strategy is that we did not need to create and define sub-codes
upfront, which could easily create a deluge of codes and a fragmentation of data. Instead,
the Project Leads created code comments during the first exploration and rough organiza-
tion of the data. It is important to emphasize that these code ‘comments’ are, methodologically speaking, codes; for a comparison, see Charmaz’s (2006, p. 44) examples regarding initial coding.
Fig. 7: Comment on a segment that has been coded with “Response Medium”
By pulling up coded segments with their respective comments in the Retrieved Segments
window, the Project Leads then began sorting the comments to identify thematic clusters.
Through this, they developed more stable sets of thematic categories which were added as
codes to the MAXQDA project. Simultaneously, they began writing out definitions for these
codes, which they stored in code memos. To test the emergent code definitions, the Team Leads then applied these new codes to additional data and modified them where needed. The initial codes were now ready to be further tested, re-iterated, and defined in an analysis session that included all team members (we will describe this second step below).
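The sorting step described above can be imagined as a simple frequency pass over the comment labels, where recurring labels become candidates for stable sub-codes. A generic sketch with invented comments (the actual sorting was done by reading in the Retrieved Segments window):

from collections import Counter

# Comments attached to segments coded "Response Medium" (invented examples)
comments = ["virtual training", "newsletter", "virtual training",
            "online coaching", "newsletter", "virtual training"]

# Frequent, recurring comment labels are candidates for stable sub-codes
for label, n in Counter(comments).most_common():
    print(n, label)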
Some codes in our coding scheme were straightforward, such as codes that categorized
different response media (newsletters, online coaching, etc.). But the Project Leads’ initial
analysis indicated that our team needed to read through more data to develop an emergent
coding scheme that would help us distinguish between different COVID-related issues and
the respective educational responses. Instead of developing a coding scheme based on an
insufficient amount of data, the Team Leads decided to charge the Analysis Team with an
additional task: For each record, they were supposed to identify sections in which educa-
tors described the issue they were addressing and to synthesize that description using
MAXQDA’s Paraphrase function (Fig. 8).
In later analysis steps (after the team members had paraphrased data), the Project Leads
reviewed the paraphrases in order to establish a coding scheme based on issues and edu-
cational focus areas.
In a separate document, one of the Project Leads began to write about the different issue areas and the connected responses. This separate document became the first report that the team shared
with leadership. The review by leadership helped us in making sure that the general ana-
lytic focus of the project was on point. Next, one of the Project Leads imported the different
Institute-focused reports back into the MAXQDA file. Based on these reports (which were
derived from the paraphrases), he developed a coding scheme (including definitions) de-
signed to capture the broad programmatic issues and responses to the pandemic. With
this coding scheme in place, we conducted additional team analysis sessions to familiarize
the team with the codes and to further develop the code definitions, which we maintained
in code memos.
The pre-coded dataset was subsequently used for a variety of purposes, including the creation of public-facing reports and data summaries that were used for program planning purposes. To complete the analysis, the evaluator approached the data using a multi-step process. First, the evaluator completed an initial review and cleaning of the data by retrieving coded segments for each code. Miscoded segments were re-coded as necessary.
Next, the evaluator analyzed and coded all documents that had been previously coded as
“unsure”. This “unsure” category was established in MAXQDA’s Code System during the
initial coding phase to flag any data segments that were unclear to analysts so they could
later be categorized by other analysts who had specific program content expertise. As a
final data cleaning step, the evaluator scanned the retrieved segments for each code of in-
terest and flagged 1–2 segments as “anchoring examples” that would later be exported to
highlight specific themes.
Following the program-focused data cleaning, the evaluator relied on three MAXQDA
analysis tools to create a summary report for leadership. The Subcode Statistics tool (avail-
able after right-click on the parent code or in the Codes menu) was used to describe edu-
cational approaches, audiences, issue areas, and delivery modes—answering leadership’s
questions such as: “What are the most common educational approaches being used by
colleagues?” Next, the Crosstab feature (Mixed Methods > Crosstab) helped to answer more
complex questions, including “Which types of educational delivery mode are being used
to serve X type of program audience?” Rather than exporting tabular crosstab results, the
evaluator summarized the information in text. The Interactive Quote Matrix (Mixed Meth-
ods > Interactive Quote Matrix) as well as a simple export of compiled text segments in the
Retrieved Segments window allowed the evaluator to provide narrative examples. As an
appendix to a written summary report, the evaluator also provided an export of selected
coded segments for each thematic code of interest.
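The kind of question answered with Crosstab can be illustrated generically as follows (invented data; in MAXQDA the crosstab is produced from codes and document variables rather than from a data frame):

import pandas as pd

# One row per record: delivery mode and audience (invented data)
records = pd.DataFrame({
    "delivery_mode": ["virtual class", "fact sheet", "virtual class",
                      "consulting", "virtual class"],
    "audience": ["farmers", "families", "families",
                 "farmers", "families"],
})

print(pd.crosstab(records["delivery_mode"], records["audience"]))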
While the primary benefit of the pre-coded dataset was the accelerated generation of
focused summary reports for leadership, it had two ancillary benefits. First, the evaluator
could use the dataset to run quick analyses during program planning meetings with lead-
ership. As an example, during one meeting, leadership asked the evaluator to quickly ex-
plain if X audience was being engaged through a type of educational programming. Be-
cause of the organization of the dataset, questions such as that could be immediately an-
swered with Crosstabs and Subcode Statistics. Second, the summary reports and data-
driven conversations illuminated gaps in the reporting and limitations of the data/report-
ing. Knowing those limitations, leadership identified new themes for the larger analysis
team to investigate in future cycles and crafted communication to colleagues regarding
reporting tips and resources.
6 Lessons learned
Giving analysts more time to co-analyze
After a few collaborative coding sessions in late March and early April, we assigned portions of the data to Analysis Team Members to code and paraphrase. The rationale for the shift to individual coding was a practical one: we wanted to provide leadership and
stakeholders with timely information in a rapidly developing crisis. Additionally, our stu-
dent evaluators worked on a different schedule. However, the Project Leads quickly sensed that our code definitions and shared understanding were not evolved enough; to test this hunch, we employed intercoder reliability tests, which indicated that we needed to work more closely together until our team’s understanding of the relationships between the defined codes and the data was more aligned. To counter this, we set up more frequent team
analysis sessions, and we assigned the coding/paraphrasing processes to dyads of Analysis
Team Members.
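Simple agreement checks of the kind mentioned above can be computed as follows. This is a generic sketch of percentage agreement and Cohen’s kappa on invented codings, not a record of the team’s actual procedure:

from collections import Counter

coder_a = ["Response Medium", "Issue", "Issue", "Unsure", "Response Medium"]
coder_b = ["Response Medium", "Issue", "Response Medium", "Unsure", "Response Medium"]

n = len(coder_a)
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

# Chance agreement from each coder's marginal label frequencies
freq_a, freq_b = Counter(coder_a), Counter(coder_b)
expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2

kappa = (observed - expected) / (1 - expected)
print(f"observed agreement {observed:.2f}, Cohen's kappa {kappa:.2f}")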
Bibliography
Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychol-
ogy, 3(2), 77–101. https://doi.org/10.1191/1478088706qp063oa
Charmaz, K. (2006). Constructing grounded theory: A practical guide through qualitative analysis. Sage.
Guest, G., MacQueen, K., & Namey, E. E. (2012). Applied thematic analysis. Sage.
Schmieder, C. (2020). Qualitative data analysis software as a tool for teaching analytic practice: Towards a theoretical framework for integrating QDAS into methods pedagogy. Qualitative Research, 20(5), 684–702. https://doi.org/10.1177/1468794119891846
Schmieder, C., Caldwell, K. E. H., & Bechtol, E. (2018). Readying extension for the systematic analysis
of large qualitative data sets. Journal of Extension, 56(6), 8.
Silver, C., & Woolf, N. (2019). Five-level QDA method. In P. Atkinson, S. Delamont, A. Cernat, J. W. Sakshaug, & R. A. Williams (Eds.), SAGE Research Methods Foundations. https://doi.org/10.4135/9781526421036818833
Acknowledgments
Special thanks go to our Student Evaluators Tina Dhariwal, Yuxin Liu, Adam Kanter, Ben
Peterson, and Jess Mullen—this work would not have been possible without you!