Anthropological Models of Evaluation:
Ideas to Consider
Brandon W. Youker
Abstract
This paper examines the relationship between ethnographic research methods and
evaluation theory and methodology. It is divided into two main sections: (a)
ethnography in evaluation and (b) anthropological models of evaluation. Three of
the leading anthropological models of evaluation are summarized: responsive
evaluation, goal-free evaluation, and constructivist evaluation.
In conclusion, (a) there is no consensual definition of ethnography; (b) in many
circumstances, ethnographic evaluation models may be beneficial; and (c)
ethnography can be used in evaluation but requires a high level of analysis to
transform ethnographic data into useful information for eliciting an evaluative
conclusion.
*The author would like to thank Daniela C. Schröter, Chris L. S. Coryn, and
Elizabeth K. Caldwell for editing this paper and for their extremely useful
comments and suggestions.
Introduction
The paper is divided into two main sections: (1) Ethnography and Evaluation and
(2) Anthropological Models of Evaluation. The first section presents a summary
definition of ethnography, its theories, concepts, and benefits; and the difference
between ethnography and anthropology. The author then provides a brief definition
of evaluation and discusses the relationship between ethnography and evaluation.
The author summarizes three anthropological models of evaluation, discusses their
strengths and limitations, and reflects on their relationship with ethnography.
The paper concludes with a synopsis of the author's main impressions and key
points.
1
AUTHOR'S NOTE: The author of this paper uses the terms "ethnography," "ethnographic
techniques," and often "qualitative research methods" interchangeably. Additionally, the term
"program" is used generically to refer to the evaluand.* Ethnography in the context of this paper
is discussed primarily in regard to program and policy evaluations; it may also be used in
product, personnel, and performance evaluations. *"Evaluand: That which is being evaluated
(e.g., program, policy, project, product, service, organization)" (Davidson, 2005, p. 240).
2
Alternative definitions: Ethnography is “a descriptive study of an intact cultural or social group
or an individual or individuals within the group based primarily on participant observation and
open-ended interviews. Ethnography is based on learning from people as opposed to studying
people” (Beebe, n.d.). Ethnographic research “involves the study of groups and people as they go
about their everyday lives” (Emerson, Fretz, & Shaw, 1995). “Ethnography is the art and science
of describing a group or culture” (Fetterman, 1989, p. 11).
3
Naturalism: Leave natural phenomena alone.
4
Constructivist philosophy maintains that the researcher manufactures knowledge through her
interaction in the field and that there is no objective truth to be uncovered.
5
Heuristics is a form of phenomenological inquiry focusing on the personal experiences and
insights of the researcher; it considers the researcher's experience in addition to that of other
observers who experience the phenomenon.
6
Emic perspective is that of the insider and includes the acceptance of multiple realities.
Multiple factors may guide evaluators and researchers alike toward choosing
quantitative or qualitative evaluation methodology. In the following sections,
qualitative ethnographic evaluation models are introduced.
Many considerations need resolution before deciding whether an ethnographic
method is appropriate for an evaluation. These considerations
include the purpose of the evaluation; whether the evaluation is formative or
summative; the amount of time allocated for the evaluation; the financial and other
resources available; and the level of expertise and competence of the evaluation
team. Prior to adopting a specific methodology or model, all the typical issues
regarding methodology, conceptual context, validity, ethics, etc. must be discussed.
Fetterman (1982) identified a study that called itself ethnographic although the
researchers were on site for only five days. Deneberg (1969) and Fetterman (1982)
contend that such researchers chase scholastic fads and refer to them as
"Zeitgeister-Shysters." Zeitgeister-Shysters become involved in research that is a
hot topic or trendy and the result is superficial research. Such researchers
contribute minimally to the field and often tarnish the reputation and credibility of
ethnography. In describing the Zeitgeister-Shysters, Fetterman stated, “rather than
conducting ethnographies, they are simply using ethnographic techniques”
(Fetterman, 1982, p. 2). Wolcott (1980) concluded that "much of what goes on
today as educational ethnography is either out and out program evaluation, or, at
best, lopsided and undisciplined documentation" (p. 39). Fetterman warns that the
adoption of random elements of ethnography without emphasis on the whole
results in "the loss of the built-in safeguards of reliability and validity in data
collection and analysis" (Fetterman, 1982, p. 2). Researchers often use
anthropological tools (ethnography) without understanding the values and
cosmology underlying the ethnographic techniques. Wolcott (1980) reminds the
reader that the purpose of ethnography is cultural interpretation and this requires
the researcher to examine the whole trait complex rather than a few single traits.
Still, many evaluators study single traits and call their evaluations "ethnographic."
a program, its acceptability, and whether or not they were influenced to modify
behavior or thinking” (p. 45). This has always been a consideration for evaluators,
as it pertains to, or affects the program's quality, significance, or merit.
Experienced evaluators typically employ several qualitative data collection
methods in an evaluation in hopes of understanding some of these cultural issues,
albeit less in depth than with pure ethnography.
7
Ontology: The nature of the real.
the merit of outcome measures is decided by the program impactees and
stakeholders. Evaluators are partners with the stakeholders in the creation of data
and they orchestrate the consensus building process. By contrast, in goal-free
evaluation, program success is decided by examining change relative to the needs
identified through a comprehensive needs assessment. Lastly, all three
models rely on an evaluator with significant commitment to and experience with
ethnographic and qualitative methods.
The remainder of the paper will discuss each anthropological evaluation model and
illustrate its relationship to ethnography and the qualitative research paradigm of
evaluation.
Responsive Evaluation
Environment
• Quantity (investigate for quantity including the counting of frequencies, occurrences, products,
performances, participants, resources, etc.).
• Diversity (diversity in artistic products, performances, and participants).
• Excellence (refers to technique or quality of execution/performance; has a varying threshold of
acceptability).
• Originality (separate from quantity and diversity; referring more to creativity and inventiveness; the ability
to make someone “catch their breath”; best measured by degree on a variable range).
• Vitality (changeability of physical environment measured over time; encourages regular review of the
physical conditions and aesthetics of environment).
Workspace
• Space and content - suitability and accessibility
• Quantity and quality of equipment and supplies
Output
• Measure outputs with careful consideration of the threshold of acceptability
• Incorporate experts in the field
Support
• Within the program and from the community, the school or organization as a whole
• Investigates how outputs are regarded and rewarded
ways people benefit from involvement with the program and among each other;
furthermore, they are not sensitive to changes in program purpose. Stake cites
Scriven (1967) and suggests that it may be preferable to evaluate the “intrinsic
merit of the experience rather than the more elusive payoff” (p. 27). Stake feels
that less emphasis on preconceived notions of success will allow for increased
stakeholder flexibility in determining the purposes of the evaluation and criteria by
which to measure success. In a responsive evaluation, the evaluator has the ability
to respond to emerging issues, rather than sticking to a strict evaluation plan or
structure. This ultimately leads to an increase in the evaluation's utility to the
program stakeholders. Stake (1975) describes several recurring events in
responsive evaluation. Data are collected through direct personal experience or,
as a second-best option, through vicarious experience. Observations are conducted
not only by the evaluator; the evaluator also enlists program stakeholders
according to the issues being studied and
the audience being served. Having multiple observations and observers increases
data reliability; observations continue to be subjective but through replication
random error is reduced. The bias of direct or vicarious experience decreases as
repeated observation and diverse points of view are attained. The evaluator
produces portrayals, typically featuring descriptions of persons, such as a
five-minute script, a log, a scrapbook, multimedia, or audience role-plays. The small
number of case studies is often criticized for sampling error, but Stake attests that
the error may be minimal and that it is a small price to pay for potentially
substantial improvements in communication. Moreover, Stake assumes that case
studies of several students are more interesting and representative of a program
than a few measurements on all program participants. Therefore, the reader
benefits from a more comprehensive understanding of the program.
Goal-Free Evaluation
This means that all program materials are screened either by a non-goal-free
evaluator on the evaluation team, by an administrative assistant, or by the client to
ensure that none of the stated goals or objectives are described to the goal-free
evaluator. The purpose of this is:
…finding out what the program is actually doing without being cued as to what it
is trying to do. If the program is achieving its stated goals and objectives, then
these achievements should show up; if not, it is argued, they are irrelevant.
less pure goal-free evaluation still makes finding outcomes difficult and
encourages the evaluator to connect program effects to recipients’ needs instead of
the stated goals of the program. Altschuld and Witkin (2000) state that the needs at
the primary level (i.e., recipients of the program) are the most critical concern, and
from there the needs assessment can consider the needs of the service deliverers
and the program delivery system. They argue that the primary needs are the “raison
d’être” or the “rationale for the existence” of the service deliverers and delivery
systems (Altschuld & Witkin, 2000, p. 10).
There are also relative degrees to which an evaluation may be goal-free. Goal-free
evaluations may be combined, in full or in part with other evaluation methods (e.g.,
“qualitative versus quantitative, survey versus experiment, multiple perspectives
versus one right answer, etc.”, Scriven, 1991, p. 182). Additionally, an evaluation
may begin goal-free and then become goal-based; the reverse is not possible. It is
also suggested that goal-free evaluation can be used as a supplement to a
traditional outcomes evaluation conducted by a separate evaluator. The evaluator
implementing the goal-free evaluation collects exploratory data to supplement and
provide context to another evaluator's goal-oriented data. Goal-free evaluators
observe the program in an attempt to understand the culture while
considering needs, processes, and outcomes. Below, the author provides a
simplified illustration of a goal-free evaluation using a physical education and
training program.
The evaluator of a physical education and training program enters into the
evaluation without any prior knowledge of the program's goals. She would likely
be capable of directly observing changes in health-related knowledge, strength, and
endurance, which are the program's stated goals. However, the goal-free evaluator
might also discover changes in flexibility and physique, changes in
behavior, social status, networking with other students, finding new supportive
workout partners, sharing of dietary and nutrition tips, increased self-esteem, etc.
all of which were not original goals of the program and would be considered
positive, unintended side effects. These effects would likely have been missed if
the evaluation had solely examined the stated or preordained goals.
• It is less intrusive to the program and potentially less costly to the client.
Scriven and other users of goal-free evaluation have provided minimal direction
regarding operational methodology for conducting the model. The only known
attempt to develop an operational methodology for goal-free evaluation was by
Evers (1980) in a doctoral dissertation. Evers outlined a goal-free evaluation
methodology consisting of six components, each of which comprised several
subcategories. The six main components were: (1) Conceptualization of Evaluation;
(2) Socio-Political Factors; (3) Contractual/Legal Arrangements; (4) The Technical
Design; (5) Management Plan; and (6) Moral/Ethical/Utility Questions. The
success of a goal-free evaluation is dependent upon the quality of the needs
assessment. If the program participants' needs are not accurately understood, then
the entire evaluation may be in jeopardy.
Constructivist Evaluation
8
The new meaning of constructivist methodology: Truth is determined by consensus building
among informed constructors, not by correspondence with an objective reality. Facts are
meaningless without a value framework; therefore, no proposition can be objectively assessed.
Causes and effects do not exist; accountability is relative and implicates all interacting parties
equally (Guba & Lincoln, 1989).
British scholars call it 'human inquiry' (inquiry conducted in human ways for
humane ends); American scholars call it 'action research' (research which aims
to produce action on or through it[s] findings); and third world or developmental
evaluators call it 'developmental evaluation' (evaluation which develops the
understanding, and resources to respond, of those evaluated). A common generic
term for it is 'collaborative inquiry' (which simply describes what goes on when
you use the method).
b. A local process.
c. A sociopolitical process.
f. An emergent process.
4. Generate consensus.
Conclusion
References
Jessor, R., Colby, A., & Shweder, R. A. (1996). Ethnography and human
development: Context and meaning in social inquiry. Chicago, IL: The
University of Chicago Press.
McLean, L. D. (1975). Judging the quality of a school as a place where the arts
might thrive. In R. Stake (Ed.), Evaluating the arts in education: A
responsive approach (pp. 41-58). Columbus, OH: Charles E. Merrill
Publishing Company.
Patton, M. Q. (1990). Qualitative evaluation and research methods (2nd ed.).
Newbury Park, CA: Sage Publications.
Sanders, J. (1994). The program evaluation standards (2nd ed.). Thousand Oaks,
CA: Sage Publications.
Scriven, M. (1991). Evaluation thesaurus (4th ed.). Newbury Park, CA: Sage
Publications, Inc.
United States General Accounting Office. (2003). Ethnographic studies can inform
agencies’ actions. GAO-03-455.
Wolcott, H. P. (1980). How to look like an anthropologist without really being one.
Journal of MultiDisciplinary Evaluation (JMDE:3) 141
ISSN 1556-8180
http://evaluation.wmich.edu/jmde/ Global Review: Publications
Brandon W. Youker obtained his Bachelor of Arts in Social Work from Michigan
State University, Master of Science in Social Work from Columbia University in
the City of New York, and is a former post-graduate advanced clinical social work
fellow at Yale Child Study Center-Yale School of Medicine. Currently, Brandon
Youker is a doctoral student in Interdisciplinary Evaluation at Western Michigan
University and an evaluator at Western Michigan University’s Evaluation Center.
His academic interests include evaluation theory, methodology, and design;
international program evaluation; evaluation of social work practice and
programming; the evaluation of human service programs; and the evaluation of arts
programs.