Measuring the Success
of Environmental
Education Programs
By Gareth Thomson
Canadian Parks and Wilderness Society
and Jenn Hoffman
Sierra Club of Canada, BC Chapter
Global, Environmental, and Outdoor Education Council
Table of Contents

Executive Summary
Introduction

About Environmental Education and Evaluation
1. What is ‘Good’ Environmental Education?
2. Elements of Excellent Environmental Education Programs
3. What is Evaluation?
   A. Evaluation Planning: A Background
   B. Conditions Unfavourable for Evaluation
4. The Benefits of an Evaluation Program

The Nuts and Bolts of Evaluating Environmental Education
5. Choosing an Evaluation Model
6. Outcomes-Based Evaluation
   A. What is Outcomes-Based Evaluation?
   B. General Steps to Outcomes-Based Evaluation
7. Building A Program That Includes Evaluation
8. Trying to Evaluate the ‘Tough Stuff’
   A. How Learners (Sometimes) Get to Action
   B. Why Are EE Programs So Difficult to Evaluate?
   C. Measuring Values Shift
   D. Measuring Behaviour Change
   E. Measuring Benefits to the Environment
   F. An Alternative Approach: What We Know About Good EE
9. Conclusion: Implications of Conducting Evaluation

Appendixes
Appendix One: An Environmental Education Tool Kit
   A. What are the Instruments Available?
   B. Pros and Cons of Each Instrument
Appendix Two: Checklist for Program Evaluation Planning
Appendix Three: Tips for Conducting an Evaluation
Appendix Four: Evaluation Samples
   A. Teacher Written Questionnaire
   B. Teacher Interview/Focus Group Questions
   C. Student Questionnaire
   D. Student Focus Group Questions
   E. Student Class Action Plans Feedback Forms
End Notes
Glossary
References
Resources
Executive Summary
Today more than ever, society needs high-quality environmental
education programs that succeed in moving values and changing
behaviours in the direction of sustainability and environmental
conservation. Effective, relevant evaluation offers a very powerful way
to improve these education programs and enables them to succeed in
accomplishing more of their objectives and goals.
Funders and programmers alike strive for better techniques to evaluate
the success of environmental education. Methods of evaluation are
often poorly understood, particularly among professionals who deliver
environmental education programs.
A survey of both these
professionals and academics found a scarcity of techniques to measure
the more challenging outcomes such as values shift, behaviour change,
and benefits to the environment. This document is an attempt at
outlining and describing pertinent educational evaluation
methodologies and tools. Its purpose is not to reinvent the wheel, but
rather to connect environmental educators with solid, practical
evaluation strategies, methods and advice.
Outcomes-Based Evaluation is rapidly growing in popularity and use
among both the funding and non-governmental communities, and the
authors describe a program logic model and an evaluation scheme that
flows from this model, using illustrative examples from existing
environmental education programs. Finally, some outcome indicators
are suggested that can be used to assess the ‘hard to measure’ long-term
outcomes that pertain to values, behaviour, and environmental
benefits. This report also briefly reviews the basic tenets of
environmental education, reports on ten principles of excellent
environmental education, and includes a glossary as well as written and online resources to assist the reader.
Acknowledgements
The authors would like to thank several funders and contributors who
made this work possible. Alberta Ecotrust and the J.W. McConnell
Family Foundation both provided funds to support this work. Deep
thanks to the following individuals who took time from their busy
schedules to comment on this draft:
Sue Staniforth, Staniforth & Associates Environmental Education
Consulting
Andy Pool, Encana
Margaret Floyd, SPARKS Strategies
Susan Ellis, Ellis and Associates
Phillip Cox, Plan:Net
Jill Kirker, Alberta Ecotrust
Also, throughout this document we have cited the works of several
individuals and organizations that have published a great deal of
material on evaluation. Some individuals, such as Carter McNamara,
have done extensive work on Outcomes-Based Evaluation. We
gratefully acknowledge our debt to these workers, especially those
whom we have referenced. For more information, please refer to the
Resources section at the end of this document.
A Note About This Document
This is a living document, and the authors welcome any constructive
feedback or suggestions that would help improve subsequent drafts.
Contact the Canadian Parks and Wilderness Society’s (CPAWS) Gareth
Thomson with your feedback via email:
[email protected].
Introduction
We live in an age in which environmental educators are increasingly
being challenged by their funders and their audiences to demonstrate
their results, and where accountability and performance measurement
techniques are increasingly being emphasized. A good evaluation
program can satisfy this need – and it can do more. Careful reflection
about and attention to the results of a good program evaluation can
provide environmental education professionals with concrete ideas on
how to improve the management and ultimate performance of their
programs. That is, a good evaluation program can improve the
education that students receive.
This report is aimed at two audiences: professionals working in the field
of environmental education who strive to improve their programs and
achieve their goals and objectives; and the funding community, who
wrestle with issues of accountability and results when faced with
environmental education proposals. The intent of this document is to
provide program designers and funders with a succinct set of
recommendations on the best instruments that can be used to measure
the success of education programs, all the while keeping the learner at
the centre of any evaluation. One of our goals is to make a document
that allows environmental education programmers, whatever their
experience level or the size of their organization, to overcome their
‘fear’ of evaluation, and to plan
for and carry out evaluations of their programs. We have tried to
create a ‘one-stop shopping centre’ complete with discussions and
examples of tools, planning strategies, and a solid example of one
particular evaluation strategy that works well for programs. We hope
that environmental education professionals will come to see an
evaluation plan as a critical part of their program, as important as the
provision of services, classroom presentations or the creation of teacher
activity guides.
About Environmental Education and Evaluation
1. What is ‘Good’ Environmental Education?
Environmental education has been defined and redefined
over the last twenty-five years. Definitional issues are
inherent in a field this broad and encompassing. It is
generally agreed that environmental education is a
process that creates awareness and understanding of the
relationship between humans and their many
environments – natural, man-made, cultural, and
technological. Environmental education is concerned with
knowledge, values, and attitudes, and has as its aim
responsible environmental behaviour.
NEEAC, 1996
Appreciation of, and concern for, our environment is nothing new. Since
the early writings of John Muir, Aldo Leopold and Henry David Thoreau,
amongst others, concern over humankind’s impact on the environment
has been well discussed and documented. In 1962, the release of Rachel
Carson’s Silent Spring, a seminal work documenting the effects of
pesticides in the environment, brought a new sense of urgency to
how humankind interacted with the environment. As Daniel Einstein
notes, “a new educational movement was born” (1995). Silent Spring
quickly became a catalyst for the environmental movement. From this
movement, a different emphasis began to emerge, one of awareness of
human complicity in environmental decline and the involvement of
public values that stressed the quality of the human experience and
hence of the human environment (NEEAC, 1996). Public concern over
our effects on the world around us began to mount. Events that both
celebrated the environment as well as called to attention the issues
affecting it became increasingly popular. Earth Day was born. Those
that taught about the environment called for a new type of curriculum
that included an examination of the values and attitudes people used to
make decisions regarding the environment (Einstein, 1995). And
environmental educators began work towards a common definition for
environmental education.
Much of the work on environmental education within the last quarter
century has been guided by the Belgrade Charter (UNESCO-UNEP,
1975) and the Tbilisi Declaration (UNESCO, 1978). These two documents
furnish an internationally accepted foundation for environmental
education.
Belgrade Charter, 1975
The Belgrade Charter was developed in 1975 at the United Nations
Educational, Scientific, and Cultural Organization Conference in
Yugoslavia, and provides a widely accepted goal statement for
environmental education:
The goal of environmental education is to develop a world
population that is aware of, and concerned about, the
environment and its associated problems, and which has
the knowledge, skills, attitudes, motivations, and
commitment to work individually and collectively toward
solutions of current problems and the prevention of new
ones.
(UNESCO, 1976)
Tbilisi Declaration, 1977
Following Belgrade, the world's first Intergovernmental Conference on
Environmental Education was held in Tbilisi, Georgia. Building on the
Belgrade Charter, representatives at the Tbilisi Conference adopted the
Tbilisi Declaration, which challenged environmental education to create
awareness and values amongst humankind in order to improve the
qualities of life and the environment.
A major outcome of Tbilisi was a set of detailed descriptions of the
objectives of environmental education. These objectives have since
been almost universally adopted by environmental educators:
• Awareness – to help social groups and individuals acquire an
awareness and sensitivity to the total environment and its allied
problems.
• Knowledge – to help social groups and individuals gain a variety of
experience in, and acquire a basic understanding of, the environment
and its associated problems.
• Attitudes – to help social groups and individuals acquire a set of
values and feelings of concern for the environment and the
motivation for actively participating in environmental improvement
and protection.
• Skills – to help social groups and individuals acquire the skills for
identifying and solving environmental problems.
• Participation – to provide social groups and individuals with an
opportunity to be actively involved at all levels in working toward
resolution of environmental problems.
(UNESCO, 1978)
Characteristics of Environmental Education
The outcomes of Tbilisi and Belgrade have, in many ways, provided the
basis for many environmental education programs. Certainly, having
both a commonly accepted goal statement and associated set of
objectives has allowed many educators to better address the desired
outcomes of their programs. Equal to the need to identify both a
common goal and set of objectives is the need to consider the
characteristics of environmental education.
In Environmental Education Materials: Guidelines for Excellence (1996)
the North American Association for Environmental Education (NAAEE)
identify a number of specific characteristics of environmental
education. According to NAAEE, environmental education:
• is learner-centred, providing students with
opportunities to construct their own understandings
through hands-on, minds-on investigations
• involves engaging learners in direct experiences and
challenges them to use higher-order thinking skills
• is supportive of the development of an active learning
community where learners share ideas and expertise,
and prompt continued inquiry
• provides real-world contexts and issues from which
concepts and skills can be used
(NAAEE, 1996)
These characteristics, when applied in conjunction with the above-mentioned goal and objectives for environmental education, have
allowed environmental educators to develop programs that lend to the
formation of positive beliefs, attitudes and values concerning the
environment as a basis for assuming a wise stewardship role towards
the earth (Caduto, 1985).
As a program planner or environmental educator, however, it’s a lot to
take in. Goals, objectives, characteristics… the question quickly arises:
how do we develop a program that meets all of these components
without losing sight of what we originally set out to do?
One framework offers a clear approach:
Environmental education has long been defined to include
three critical components: awareness, leading to
understanding, which in turn creates the potential and
capacity for appropriate actions. More specifically,
environmental education includes
• developing personal awareness of the environment and
one's connections to it;
• developing an understanding of environmental
concepts and knowledge of ecological, scientific, social,
political and economic systems;
• the capacity to act responsibly upon what a person
feels and knows, in order to implement the best
solutions to environmental problems.
(Staniforth & Fawcett, 1994)
Ultimately, environmental education as it is practiced in the 21st century
is largely built on a rich legacy of existing environmental education
declarations, frameworks, definitions, and models. The
field as a whole owes a great deal to those who have worked to create
these documents. Each document is based on a different set of
assumptions and priorities, yet the commonalities are considerable.
A detailed backgrounder that summarizes these frameworks, entitled
“What Is Good Environmental Education?” can also be downloaded
from the Canadian Parks and Wilderness Society (CPAWS) website (see
Resources for more information). To further explore the diversity of
this area, see also “Excellence in Environmental Education Guidelines
for Learning (K-12)” by the North American Association for
Environmental Education (see Resources).
2. Elements of Excellent Environmental Education Programs
What constitutes an excellent environmental education program? A
rigorous definition would be very difficult to give, and beyond the scope
of this document. Notwithstanding this, it is useful for practitioners
and funders of environmental education to consider the answer to this
question.
As an example, the following ten principles of excellent environmental
education programs were drafted by a group of experienced
environmental educators involved in the Green Street initiative, as part
of a visioning and capacity-building exercise. Green Street is a standard
of excellence for quality environmental education programs provided by
prominent Canadian environmental education organizations. (See
Resources for more).
The intention of these principles is not to be prescriptive or
exclusionary, nor are they necessarily a checklist (indeed, no programs
can claim to exhibit 100% of these principles!).
Excellent environmental education programs…
• Are credible, reputable, and based on solid facts,
traditional knowledge, or on science. Values, biases,
and assumptions are made explicit.
• Create knowledge and understanding about ecological,
social, economic, and political concepts, and
demonstrate the interdependence between a healthy
environment, human well-being, and a sound
economy.
• Involve a cycle of continual improvement that includes
the processes of design, delivery, evaluation, and
redesign.
• Are grounded in a real-world context that is specific to
age, curriculum, and place, and encourage a personal
affinity with the earth through practical experiences
out-of-doors and through the practice of an ethic of
care. Like the environment itself, programs transcend
curricular boundaries, striving to integrate traditional
subject areas and disciplines.
• Provide creative learning experiences that are hands-on
and learner-centred, where students teach each other
and educators are mentors and facilitators. These
experiences promote higher order thinking and provide
a cooperative context for learning and evaluation.
• Create exciting and enjoyable learning situations that
teach to all learning styles, promote life-long learning,
and celebrate the beauty of nature.
• Examine environmental problems and issues in an all-inclusive manner that includes social, moral, and
ethical dimensions, promotes values clarification, and is
respectful of the diversity of values that exist in our
society.
• Motivate and empower students through the provision
of specific action skills, allowing students to develop
strategies for responsible citizenship through the
application of their knowledge and skills as they work
cooperatively toward the resolution of an
environmental problem or issue.
• Engage the learner in a long-term mentoring
relationship, transforming them as they examine their
personal values, attitudes, feelings and behaviours.
• Promote an understanding of the past, a sense of the
present, and a positive vision for the future, developing
a sense of commitment in the learner to help create a
healthier environment and a sustainable home,
community, and planet.
Another approach, launched by NAAEE in November 2002, is the
draft document “Guidelines for Excellence in
Nonformal Environmental Education Program Development and
Implementation.” These guidelines point out six key characteristics of
high quality environmental education programs:
1. They support their parent organizations’ mission,
purpose, and goals.
2. They’re designed to fill specific needs and produce
tangible benefits.
3. They function within a well-defined scope and structure.
4. They require careful planning and well-trained staff.
5. They are built on a foundation of quality instructional
materials and thorough planning.
6. They define and measure results in order to improve
current programs, ensure accountability, and maximize
the success of future efforts.
(NAAEE, 2002)
Astute readers will note that this current document is directly aimed at
helping organizations achieve Characteristic #6!
3. What is Evaluation?
Evaluation is a scary word. My experience has been that as
soon as you use that word, people get their backs up and
feel like they are going to be evaluated, and hence judged.
It gets personal very easily.
Margaret Floyd, Sparks Strategies, 2002
We as humans evaluate all the time. Listen in on conversations and
you’ll hear: “I loved that movie last night”. “He is a terrible cook!” “That
car isn’t worth the price they’re charging.” In more formal terms, most
of us have been evaluated by teachers through the school system or by
employers in the work place – often leaving us with negative
connotations about both the process and the end results.
Evaluation is a term that is used to represent judgments of many kinds.
What all evaluations have in common is the notion of judging merit.
Someone is examining and weighing something against an explicit or
implicit yardstick. The yardsticks can vary widely, and include criteria
such as aesthetics, effectiveness, economics, and justice or equity issues.
One useful definition of program evaluation is provided below, with an
analysis of its components:
Evaluation is the systematic assessment of the operation
and/or the outcomes of a program or policy, compared to
a set of explicit or implicit standards, as a means of
contributing to the improvement of the program or policy.
(Weiss, 1998)
Dr. Weiss breaks the definition down into several key elements, which
serve to highlight the specific nature of evaluation:
Systematic assessment: this emphasizes the research nature of
evaluation, stressing that it should be conducted with rigor and
formality, according to accepted research canons. Therefore, an
evaluation of an environmental education program should follow
specific, well-planned research strategies, whether qualitative or
quantitative in nature. Scientific rigor can take more time and be more
costly than informal methods, yet it is an essential component of
successful evaluations. This is especially so in education, where
outcomes are complex, hard to observe, and made up of many elements
that react in diverse ways.
Operation and/or outcomes: the activities and outcomes of a program
are the actual focus of the evaluation – some evaluations study process
while others examine outcomes and effects. An educational program
evaluation would usually
look at both the activities of the program (how it’s delivered, by whom,
etc.) and its outcomes for participants (skills, knowledge, attitudes,
values change, etc.).
Standards for comparison: this is a set of expectations or criteria to
which a program is compared. Sometimes it comes from the program’s
own goals or mission statement, as well as from the objectives of
program sponsors, managers and practitioners.
Improvement of the program: the evaluation should be done not to
point fingers or assign blame but to provide a positive contribution that
helps make programs work better and allocates resources to better
programs.
A. Evaluation Planning: A Background
One does not plan and then try to make circumstances fit
those plans. One tries to make plans fit the circumstances.
General George S. Patton, 1947
“Begin at the beginning,” the King said gravely, “and go on
till you come to the end: then stop.”
Lewis Carroll, 1865
Evaluation has become very popular over the past two decades, as an
important tool for program funding and decision-making,
organizational learning, accountability and program management and
improvement. How do we as environmental educators go about
developing evaluation programs that work for us?
Evaluation planning can be a complex and cyclical process. One must
identify the key questions for a study, decide on the best measurements
and techniques to answer the questions, figure out the best way to
collect the data, develop an appropriate research design, implement it
and promote appropriate use of the results. Here are some evaluation
definitions and descriptions to provide some background.
Formative and Summative Evaluation
Michael Scriven introduced these two terms, formative and summative,
in 1967, to describe the evaluation of educational curriculum. Formative
evaluation produces information that is fed back during the course of a
program to improve it. Summative evaluation is done after the program
is finished, and provides information about its effectiveness. Scriven
later simplified this distinction, as follows: “When the cook tastes the
soup, that’s formative evaluation; when the guest tastes it, that’s
summative evaluation.” (In Weiss, 1998, p. 31)
Programs are seldom “finished;” they continue to adapt and modify over
time, in response to internal and external conditions. Therefore, the
need for “formative” information continues – to be fed back to program
staff to improve the program.
Outcome and Process-Based Evaluation
Focusing on the results of a program or its outcomes is still a major
aspect of most evaluations. Outcomes refer to the end results of a
program for the people it was intended to serve – students, teachers,
and volunteers – whoever your audience is. The term outcome is often
used interchangeably with result and effect. Some outcomes of a
program are the results the program planners anticipated. Other
outcomes however are effects that nobody expected – and sometimes
that nobody wanted – yet are important information for program
improvement. Change is a key word here – what is the change that
results from a particular program? Is it an increase in something, such
as knowledge? Or a decrease in something, such as environmentally
detrimental behaviour?
The process of a program is also important to evaluators – a systematic
assessment of what is going on. Evaluators need to know what the
program actually does – what is actually happening on the ground.
Sometimes process is the key element of success or failure of a program
– how is it delivered, what services does it provide, is there follow-up,
do students like it? Studying program process also helps one to
understand outcome data.
Initially, there seems to be a lot of similarity between formative-summative and process-outcome evaluations. However, the two sets of
terms have quite different implications. Formative and summative refer
to the intentions of the evaluator in doing the study – to help improve
the program or judge it. Process and outcome have nothing to do with
the evaluator’s role, but relate to the phase of the program studied.
Often there is a combination of evaluations going on – the study of a
program’s process or what goes on during a program, in a formative
sense, combined with a look at outcomes – the consequences for
participants at the end.
B. Conditions Unfavourable for Evaluation
There are four circumstances where evaluation may not be worthwhile:
review your program with these in mind before beginning.
1. When the program has few routines and little stability.
This might be the case with a new program that needs to be piloted
and established before any systematic evaluation can occur. It can
also occur if the program has no consistent activities, delivery
methods, or theories behind it. However, a formative evaluation
done through a pilot phase may help identify these gaps more
clearly.
2. When those involved in the program can’t agree as to what it is
trying to achieve.
If there are big discrepancies in perceived goals, staff are probably
working at cross-purposes. Again, the coherence of the program is in
doubt, and while a process-based evaluation might be helpful, an
Outcomes-Based Evaluation would have no criteria to use.
3. When the sponsor or program manager sets limits as to what the
evaluation can study, putting many important issues off limits.
This occurs rarely, when the sponsor or manager wants a
“whitewash” job from an evaluation. Avoid at all costs!
4. When there are not enough funds, resources, or staff expertise to
conduct the evaluation.
Evaluations call for time, money, energy, planning and skill: it is
important to ensure these are in place for a successful product. This
is beyond a doubt the single most prevalent reason that program
evaluations are either not done or are not adequate.
4. The Benefits of an Evaluation Program
Now that I’ve written my logic model for my Education
Program, I’ve got a single table that captures all the
activities and the outputs, outcomes, and impacts that I
hope to achieve. Now whenever a board member or
teacher asks me ‘What do you do?’ I can show them my
plan on one sheet of paper.
Jenn Hoffman, Sierra Club of Canada, BC Chapter, 2002
Many environmental education professionals would identify one
primary reason to evaluate their programs: to satisfy one or more of
their funders and audiences. While it is true that a sound evaluation
can strengthen accountability for the use of resources, evaluating your
program can do more than just satisfy the reporting requirements of
a funding agency.
Plan:Net Limited reminds us that evaluation can help organizations
make wise planning and management decisions. It will help
organizations:
• Know what to expect from project activities
• Identify who will benefit from the expected results
• Gather just the right information to know whether the
project is achieving what you want
• Know how to improve project activities based on this
information
• Know how to maximize positive influences (referred to
as Enablers), and to avoid or overcome negative
influences (referred to as Constraints)
• Communicate plans and achievements more clearly to
people and other organizations
• Gain from the knowledge, experience and ideas of the
people involved
• Provide accurate and convincing information to
support applications for funding.
(Plan:Net Limited, 2002)
Seen as part of the larger process of continual improvement of
environmental education programs, good evaluation has the potential
to improve program quality over the long term. As such, a good
evaluation program can improve program quality, improve student
learning, and ultimately assist the program to achieve its goals, which
may include such things as a higher degree of student involvement and
benefits to the environment.
The Nuts and Bolts of Evaluating Environmental Education
5. Choosing an Evaluation Model
Choosing a way of evaluating your environmental education program
can be intimidating or even bewildering for those who have never
done so before. Evaluation models come with various names: Needs
Assessments, Cost/Benefit Analysis, Effectiveness, Goal-Based, Process-Based – the list goes on. More than 35 different types of evaluation are
described in the literature – too many to list in this document!
At its core, program evaluation is really all about collecting information
about a program or some aspect of a program in order to make
necessary decisions about the program. It’s not rocket science and is
entirely possible to do, even for the novice evaluator with limited
resources, experience and skills. Carter McNamara, author of several
program evaluation-related resources, offers the following advice:
It’s better to do what might turn out to be an average
effort at evaluation than to do no evaluation at all. The
type of evaluation you undertake to improve your
program depends on what you are doing – worry about
what you need to know to make the program decisions
you need to make, and worry about how you can
accurately collect and understand that information.
(McNamara, 1999)
Regardless of the type of model you use, there is a common set of steps
that can be used to complete program evaluation. Consider this as a
suggested framework that you can expand upon if necessary:
1. Decide what you want to assess.
2. Select an evaluation design to fit your program.
3. Choose methods of measurement.
4. Decide whom you will assess.
5. Determine when you will conduct the assessment.
6. Gather, analyze, and interpret the data.
(SAMHSA-CSAP-NCAP, 2000)
Obviously, there will be many questions that you will need to consider
when designing your program evaluation. For example, why are you
doing the evaluation? From whom will you gather information? How
will you get this information? And who will be reading this evaluation
in the end?
Another good resource to reference when designing an educational
program evaluation is The Program Evaluation Standards by the Joint
Committee on Standards for Educational Evaluation (1994). Thirty
standards were developed by several international panels of researchers,
as a tool to improve the practice of educational program evaluation and
consequently improve educational practice. The standards themselves
are organized around the four important attributes of an evaluation:
utility, feasibility, propriety, and accuracy. Each standard helps define
one of these four attributes, and they can be used to focus, define and
assess your evaluation plan. For example, the first Utility Standard is as follows:
Stakeholder Identification – Persons involved in or
affected by the evaluation should be identified, so
that their needs can be addressed.
The first Feasibility Standard is:
Practical Procedures – The evaluation procedures
should be practical, to keep disruption to a
minimum while needed information is obtained.
(Sanders, 1994)
The 30 standards can be used as a useful and well-referenced checklist
to help you design an evaluation, collect information, analyze
information and report, manage and/or staff an evaluation. Refer to
the Resources section for more information on these standards.
And finally, to assist you in answering many important evaluation
questions, we’ve included a copy of Carter McNamara’s Checklist for
Program Evaluation Planning in Appendix Two.
6. Outcomes-Based Evaluation
In this document we have focused on one particular model of
evaluation – Outcomes-Based Evaluation – that is quite effective in
evaluating environmental education programs, especially amongst non-profit groups. This technique is rapidly growing in popularity amongst
the funding community and among non-profit groups. It should be
noted that this technique is also known by other names, among them
results-based management (popular with the Government of Canada),
outcome or performance measurement, or even performance mapping!
As McNamara points out, Outcomes-Based Evaluation has proven
particularly effective in assisting non-profits in answering whether
“their organization is really doing the right program activities to bring
about the outcomes they believe to be needed by their clients” (1999).
He also notes that outcomes are usually expressed in terms of enhanced
learning, such as increased knowledge or improvements in perception,
attitudes or skills, or improved conditions, such as increased ecological
literacy.
A. What is Outcomes-Based Evaluation?
Outcomes-Based Evaluation is quickly becoming one of
the more important means of program evaluation being
used by non-profit organizations. Is your program really
doing the right activities to bring about the outcomes you
want? Or are you just engaging in busy activities that
seem reasonable at the time? Funders are increasingly
questioning whether non-profit programs are really
making a difference.
(McNamara, 1999)
Outcomes-Based Evaluation looks at the impacts, benefits, or changes
to your clients – students, teachers, etc.– as a result of your efforts
during and/or after their participation in your program. It helps you
find out if you’re really doing the right program activities to achieve
some pre-specified outcomes. Outcomes-Based Evaluation is a method of
evaluation that is based on a program logic model; the measurement of
the success of a program relies on the measurement of several
components of the logic model system.
Program Logic Model
A logic model is an approach to planning and managing projects that
helps us to be clear both about what our projects are doing and what
they are changing. The word ‘logic’ is used because of the logical link
between the system components: inputs are a necessary precondition to
activities; activities need to take place before outputs are possible, etc.
Think of your program as a system that has inputs, activities, outputs
and outcomes:
Inputs → Activities → Outputs → Outcomes (a.k.a. ‘Objectives’) → Impact (a.k.a. ‘Goals’)
(Note: Indicators are not part of the logic model – but they are a critical
part of the evaluation process. See text below for more information.)
Input: The materials and resources that the program uses in its
activities. These are often easy to identify, and are common to many
organizations and programs. For example: equipment, staff, facilities,
etc. These are the resources you need to get the results you seek.
Activities: Activities are what you do to create the change you seek;
they are what you do with the inputs you have. Under the headings
promotion, networking, advocacy, or training, you describe what the
project is doing.
Outputs: Outputs are the most immediate results of your project, and
each relates directly to your activities. More importantly, outputs
create the potential for desired results; they create potential for your
outcomes to occur. Outputs are usually expressed as statistics, and
indicate hardly anything about the changes in clients. (Example: 61
students attended our Ecology Camp).
Outcomes: Outcomes describe the true changes that occur to people,
organizations and communities as a result of your program. These are
the actual impacts, benefits, or changes for participants during or after
your program, expressed in terms of knowledge, skills, values or
behaviours.
Outcomes may be expressed in terms of enhanced learning, such as
increased knowledge, a positive change in perceptions or attitudes, or
enhanced skills. For example, an objective of your program might be “a
demonstrated increase in awareness of the causes and prevention of
climate change”.

Outcomes may also be expressed in terms of physical conditions, such
as the development of a school-grounds garden.
Impact: This describes your vision of a preferred future and underlines
why the project is important. It refers to the longer-term change that
you hope your project will help create.
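For readers who like to see structure written down, the five components can be captured in a small data structure. The Python sketch below is purely illustrative – the class, field names, and sample entries are our own assumptions (loosely echoing the GBF! example in Section 7), not part of any formal logic model standard:

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """A one-page program summary: resources in, activities performed,
    immediate outputs, changes in participants, and long-term impact."""
    inputs: list = field(default_factory=list)      # resources used
    activities: list = field(default_factory=list)  # what you do
    outputs: list = field(default_factory=list)     # immediate, countable results
    outcomes: list = field(default_factory=list)    # changes in participants
    impact: str = ""                                # long-term vision

# Hypothetical entries, loosely echoing the GBF! school visit program.
gbf = LogicModel(
    inputs=["0.4 FTE staff", "$20,000 budget"],
    activities=["Develop presentation kit", "Deliver in-class presentations"],
    outputs=["Presentations delivered to 50 classes (2,500 students)"],
    outcomes=["Students know more about grizzly bear ecology"],
    impact="A community of citizens who act to protect ecosystems",
)
```

Writing the model down this way makes the logical chain explicit: each component feeds the next, and anything that cannot be placed in one of the five slots probably does not belong in the plan.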
Measuring Outcomes
The success of a program is measured using indicators that track any
or all of the three logic model components of output, outcome, or
impact.
Outcome Indicators: These are the things you can see, hear, or read
that suggest whether or not you’re making progress toward your
outcome target. Indicators can be established for outputs, outcomes, and impacts.
Indicators are measured using ‘instruments’ such as questionnaires or
surveys, and may be either quantitative or qualitative. Indicators can be
envisioned as the “flags” that let us know we’re on the correct path.
They answer the question, “How will you know when you’ve achieved
the outcome?” They are the measurable, observable ways of defining
your outcomes. They define what the outcome looks like.
Outcome Targets: These specify how much of your outcome you hope
to achieve.
Splash and Ripple
Here is one image to help people understand and use outcome
measurement. The rock is like a material Input; the person holding
the rock is like a human resource Input. The act of dropping the rock
is like an Activity. When the rock reaches the water, it creates a
Splash – these are your Outputs. The ripples spreading out from the
splash are like your Outcomes, then later your Impacts. The edge of
the pond represents the geographic and population boundaries of
your project.
(Plan:Net Limited, 2002)

[Figure: concentric ripples in a pond, labelled from the splash outward: Outputs, Outcomes, Impacts.]
B. General Steps to Outcomes-Based Evaluation
With a basic understanding of the process and language of Outcomes-Based Evaluation, we can start to think about how one would set about
conducting it. Carter McNamara has produced many handy resources
on how to understand and conduct Outcomes-Based Evaluation. The
following is adapted from his Basic Guide to Outcomes-Based
Evaluation in Nonprofit Organizations with Very Limited Resources
(1999) and provides a very succinct set of steps that environmental
education organizations can easily follow.
Step 1. Get Ready!
• Start smart! Pick a program to evaluate. Choose one
that has a reasonably clear group of clients and clear
methods to provide services to them.
• If you have a mission statement, a strategic plan or a
goal(s) statement for this program, haul it out of the
filing cabinet you’ve jammed it into, and consider it.
• Ask for some money. Consider getting a grant to
support developing your evaluation plan – while this is
not vital to the process, evaluation expertise to review
your plans can be extremely beneficial and insightful.
Many grantors will consider assigning up to 15% of
the total budget to an external evaluation; make
sure you ask about this.
Step 2. Choose Your Outcomes.
• Choose the outcomes that you want to examine. Make
a ‘priority list’ for these outcomes, and if your time and
resources are limited, pick the top two to four most
important outcomes for now.
• It can be quite a challenge to identify outcomes for
some types of programs. Remember, an outcome is the
benefit that your client receives from participating in
your program (not a statistic). McNamara suggests
using words such as “enhanced”, “increased”, “more”,
“new”, or “altered” to identify outcomes.
• Consider your timeframe and what you can evaluate
within it. McNamara suggests that within 0-6 months,
knowledge and skills can be evaluated; within 3-9
months, behaviours; and within 6-12 months, values
and attitudes. Try linking these short-term outcomes
(0-3 months) with long-term outcomes (6-12 months).
Step 3. Selecting Indicators
• Specify an indicator for each outcome.
• Choose indicators by asking questions like: what
would I see, hear, or read about clients that would show
progress toward the outcome? For example: “30 of the
200 students who participate in CPAWS Grizzly Bear
Forever! Program will demonstrate one new
environmental stewardship activity within two months
of the program” (the arithmetic behind such a target is
sketched after this list).
• If this is the first outcomes plan you’ve ever done, or
the program is just getting started, don’t spend a lot
of time trying to find the perfect numbers and
percentages for your indicators.
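To make the arithmetic behind such an indicator concrete, here is a minimal Python sketch. It is our own illustration, not McNamara’s: the target figures come from the example indicator above, while the observed count is invented:

```python
# Target from the example indicator: 30 of the 200 participating students
# demonstrate one new stewardship activity within two months.
target_count = 30
population = 200
observed_count = 24  # hypothetical result tallied from follow-up surveys

target_rate = target_count / population      # 0.15, i.e. 15%
observed_rate = observed_count / population  # 0.12, i.e. 12%

print(f"target {target_rate:.0%}, observed {observed_rate:.0%}, "
      f"met: {observed_rate >= target_rate}")
# -> target 15%, observed 12%, met: False
```

The same pattern – an observed rate compared against a target rate – applies to any countable indicator.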
Step 4. Gathering Data and Information
• For each indicator, identify what information you will
need to collect or measure to assess that indicator.
Consider the practicality of collecting data, when to
collect data and the tools/instruments available to
collect data. Basically, you’re trying to determine
how the information you’re going to need can be
efficiently and realistically gathered.
• Data collection instruments could include, but are not
limited to: surveys, questionnaires, focus groups,
checklists, and observations. See Appendix One for
more information on the pros and cons of each survey
method.
Step 5. Piloting/Testing
• Most likely you don’t have the resources to pilot test
your complete outcomes evaluation process. Instead,
think of the first year of applying your outcomes
process as your pilot process. During this first year,
identify any problems and improvements, etc. and
document these. Then apply them to your program
evaluation the following year for a new and improved
process (see Step 6).
Step 6. Analyzing and Reporting
• Analyze your data. You may have collected quantitative
(i.e. numerical) or qualitative (i.e. comments) data. To
analyze qualitative data: read through all of it; organize
comments into similar categories, e.g., concerns,
suggestions, strengths; label the categories or themes;
and identify patterns, associations, and causal
relationships among the themes. (A small worked
example follows this list.)
• Report your evaluation results. The level and scope of
information in the report depend on whom the report is
intended for, e.g., funders, board, staff, clients, etc.
• Remember that the most important part of an
evaluation is thinking through how you’re going to
incorporate what you learned from it into the next
round of programming. That’s the real value, beyond
reports to funders, accountability, etc.: it’s the learning
that comes from looking at what worked and what
didn’t, and applying that learning.
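As a concrete illustration of this step, the sketch below is our own: the 5-point Likert responses (of the kind gathered with pre- and post-program questionnaires) and the hand-coded comment categories are invented. It computes the pre/post change for one quantitative item and tallies the qualitative themes:

```python
from collections import Counter

# Hypothetical responses, 1 = strongly disagree ... 5 = strongly agree,
# to the same question asked before and two months after the program.
pre_scores = [2, 3, 3, 2, 4, 3, 2, 3]
post_scores = [4, 4, 3, 3, 5, 4, 3, 4]

def mean(xs):
    return sum(xs) / len(xs)

change = mean(post_scores) - mean(pre_scores)
print(f"mean pre {mean(pre_scores):.2f}, mean post {mean(post_scores):.2f}, "
      f"change {change:+.2f}")  # -> mean pre 2.75, mean post 3.75, change +1.00

# Qualitative side: once open-ended comments are hand-coded into themes,
# a simple tally shows which themes dominate.
coded_comments = ["strength", "suggestion", "strength", "concern", "strength"]
print(Counter(coded_comments))
# -> Counter({'strength': 3, 'suggestion': 1, 'concern': 1})
```

A positive change suggests movement toward the outcome; whether it is meaningful still requires judgment, and ideally a proper statistical test.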
A Note About Getting Outside Evaluation Expertise
If you have the resources to do so, bringing in an outside consultant to
assist you in your evaluation process can be extremely beneficial. They
can help you avoid costly mistakes, and their external role helps them
challenge your assumptions and bring a valuable perspective to what
you are trying to achieve.
7. Program Planning – Building A Program Plan That Includes
Evaluation
The best time to plant a tree was a decade ago; the next
best time to plant a tree is – today.
Author Unknown
Just like planting a tree, the most logical time to build evaluation into
your program plan is at the outset of your program, when you are still
in the planning stages. Why? Because you’ll know ahead of time what
sort of data you need to collect in order to evaluate your program the
way you want.
However, if you are in the midst of a program – but wish to create a
better evaluation plan – don’t despair. A tree planted today still
provides benefits, and so will your evaluation plan.
To help you visualize this process, refer to the example below, from the
CPAWS Education program called Grizzly Bears Forever! (GBF!), in
which CPAWS staff visit junior high students in their classroom to teach
about ecosystems, using grizzly bears as an entry point. They took
the following table, which summarizes a logic model, and tried to “fill in
the blanks”.
Table One: CPAWS Planning and Evaluation Strategy (components and their definitions)

ACTIVITIES – What you do to create the change you seek.
INPUTS – Materials and resources used.
OUTPUTS – The most immediate results of your project. Each relates directly to your activities.
OUTCOMES – Actual benefits/impacts/changes for participants.
IMPACTS – The longer-term change you hope your project will help create.
To help themselves, CPAWS asked the following questions:
• What activities do we propose to do in this program?
This was the easy part – they had already thought out
what they wanted to do (including program marketing
and delivery mechanisms).
• What will our program outputs be? Each activity has a
corresponding output. These easy-to-measure items,
such as ‘number of students contacted’, were all CPAWS
had ever successfully measured before they adopted an
Outcomes-Based planning approach. Until recently, this
level of measurement was all that funders had ever
asked for!
• What will our program inputs be? This is a list of
things they will need to carry out their project. This
information includes human and material resources:
staffing, budget, etc.
• What outcomes do we hope to achieve with this
program? These are the desirable changes resulting
from the outputs. Some of these statements were
gleaned from the objectives they had generated in
their proposal. It is important to note that objectives
are generally fewer in number than the outputs that
create them (CPAWS decided to bend this rule,
however).
• What impact do we hope to have with this program?
To create this statement, they reminded themselves
why CPAWS invests its resources in education – and
ended up using the goal statement they had generated
in the proposal to funders.
When they had answered all of these questions and filled in the table,
CPAWS had an Outcome-Based Plan for their education program (Table
Two: Outcomes-Based Planning Table For CPAWS GBF! School Visit
Program).
A couple of observations about their work. All inputs and activities tend
to focus on the CPAWS program, whereas outcomes and impacts tend
to centre on changes to the learner. This is as it should be; CPAWS has
attempted to keep the learner in the centre of this process, as far as
important outcomes are concerned.
Also, the outcomes identified display an interesting range. Outcomes
pertaining to knowledge, skills, and attitudes can generally be expected
to be achieved in the days or weeks immediately following a
presentation; the inculcation of values, and the occurrence of desired
behaviours, will take longer (perhaps several months). Their evaluation
plan must take this into consideration.
What they did not have – yet – was any idea on how to measure these
things they hope to achieve. They needed to create an evaluation plan.
To do this, they assigned numbers to all the measurable items on this
table (outputs, outcomes, and impact). They then came up with a
variety of quantitative and qualitative indicators for each of these items
– and to make sure that these were reasonable and feasible indicators,
they identified how these indicators would be measured, and by whom.
Table Three: Evaluation Table for CPAWS GBF! School Visit Program
shows these details (for brevity, only a few of the outcomes are shown.)
The next step for CPAWS will be to take all these measurement tasks
(usually carried out by CPAWS’ School Programs Specialist) and insert
them into her work plan so that they actually occur. A
student from the local university assisted with the details of conducting
this survey, in the process receiving credit for an independent study.
The creation of measurement instruments, such as the pre and post
surveys of students, will take some time (see Implications).
Once the results are in and compiled, a report can be produced
summarizing results and ‘lessons learned.’ Such a report can be used by
CPAWS as an accountability exercise, demonstrating to a funder the
legitimacy of the program. The true potential of this process, though, is
to enhance the future performance and management of the program –
but CPAWS will have to take the time to process this information and
permit the fruits of the evaluation process to be ‘ploughed back in,’ and
used to improve the effectiveness and impact of the education program.
Table Two:
Outcomes-Based Planning Table
For CPAWS GBF! School Visit Program

ACTIVITIES
• Develop kit – slides, graphics, visuals, game-pieces
• Development of an illustrated 80-page teacher activity guide
• E-mail and posters are distributed to market the program
• In-class presentations (80 minutes in length) by CPAWS staff

INPUTS
• Staff 0.4 FTE (full-time equivalent) to deal with administration
(marketing, phone calling, bookings) and presentations
• Budget of $20,000 to cover staff time and expenses, office space

OUTPUTS
1. Presentation kit and activity guides are completed.
2. E-mail sent to 200 addressees, and 125 posters are delivered to
Calgary-area schools.
3. Deliver presentations to 50 classes (2,500 students).

OUTCOMES
4. Students know more about grizzly bear ecology and conservation.
5. Teachers enjoy the presentation and believe it to be a high-quality
product.
6. Through the grizzly bear, students will learn to value and
appreciate the importance of ecosystem processes and ecosystem
services.
7. Students’ behaviours change as a result of the education program
(e.g. less consumerism, increased wilderness experiences, more
political/extracurricular involvement).
8. Students will develop a sense of responsibility and a personal
commitment towards grizzly bear and environmental
conservation.
9. Teachers incorporate lessons on the grizzly bear in future years.

IMPACTS
10. CPAWS Education Team creates a community of knowledgeable,
empowered citizens who engage in positive environmental action
to protect ecosystems, wilderness and protected areas.
Table Three:
Evaluation Table for CPAWS GBF! School Visit Program

OUTPUTS

Output 1
• Measurement indicator: Compare completed kit to original materials list
• Data collection method: Comparison of kits
• Data source: Kits
• Collected by: CPAWS staff
• Collected when: June 2003

Output 2
• Measurement indicator: Check records from e-mails and mail
• Data collection method: Review records
• Data source: E-mails, mail
• Collected by: University student
• Collected when: June 2003

Output 3
• Measurement indicator: Collate statistics from database
• Data collection method: Simple stats review
• Data source: Database
• Collected by: CPAWS staff
• Collected when: July 2003

OUTCOMES

Outcomes 4, 6, 7, 8
• Measurement indicator: Teachers will distribute Pre and Post Program
Student Questionnaire, which asks values-related and behaviours-related
questions using a Likert scale
• Data collection method: CPAWS will mail and analyze pre and post
surveys; students fill out and return questionnaires
• Data source: Surveys
• Collected by: CPAWS staff
• Collected when: By June 2003 – analysis by Aug 2003

Outcome 9
• Measurement indicator: Teachers develop and implement lesson plans
and activities incorporating grizzly bear conservation
• Data collection method: CPAWS will mail and analyze surveys; teachers
fill out and return questionnaires; selected teachers will participate in
interviews
• Data source: Surveys, interviews
• Collected by: CPAWS staff
• Collected when: By June 2004 – analysis by Aug 2004
8. Trying to Evaluate the ‘Tough Stuff’
A. How Learners (Sometimes) Get to Action
The goal of this section is to remind the reader of how ‘education
leading to action’ actually works, and to explain why
environmental education programs sometimes do not achieve the
results they hope for.
As an illustrative example, consider Mary, a student at a Calgary junior
high school who learns about grizzly bears through a program offered
by CPAWS. This program was offered because CPAWS has the goal of
“creating a community of knowledgeable empowered citizens who
engage in positive environmental action to protect ecosystems,
wilderness and protected areas” (taken from the Impact section of
CPAWS Evaluation Plan for this program).
Anyway, back to Mary. On October 23rd she sits through an interesting,
curriculum-oriented classroom presentation by a CPAWS staff member,
and in the following days receives several subsequent lessons from her
teacher. Through this program she becomes aware of – and
understands something about – grizzly bear ecology and behaviour, and
some of the conservation issues that surround this animal.
Up to this point, CPAWS and her teacher have a fairly high degree of
control over what goes through Mary’s mind. Notice how the locus of
control starts to decrease through the following…
At age thirteen Mary has developed a fairly widespread system of
beliefs about how the world works. (Two of these beliefs are that
grizzly bears exist in the mountains to the west of Calgary, and that
these bears are ‘neat’). These beliefs, once lumped and grouped and
combined with some emotional tendencies, comprise attitudes (for
example, Mary has the attitude that bears should be allowed to exist in
the mountains). Attitudes come to light most frequently as opinions,
which Mary – like most 13-year-olds – has no trouble expressing. Caduto
(1985) can help us here: “Values are in turn formed by a meld of closely
aligned attitudes. A value is an enduring conviction that a specific mode
of conduct is… preferable to an opposite mode of conduct (p.7).” For
example, Mary values the wild areas found in the mountains, and the
intrinsic right of animals to live there. Furthermore, when all of Mary’s
values are considered, they form a value system. As it happens, in this
case CPAWS is fortunate: for the most part, Mary’s values are consistent
with CPAWS’, and the program actually helped confirm, validate, and in
a small way perhaps even shift Mary’s values.
This is necessary for CPAWS to achieve its goal, but unfortunately it is
not sufficient. CPAWS’ next hurdle is a huge one: that of persuading
Mary to make her behaviour consistent with her values. Mary may
value wild nature, but she may do nothing to protect it – worse, her
consumer habits and failure to act to protect nature may
unintentionally harm nature. Of course, this gap between values and
behaviour is not unique to Mary – she lives in a society where
acceptance of this gap has reached epidemic proportions.
Environmentalists and psychologists alike talk about the ‘cognitive
dissonance’ that results when people lead lives in which their behaviour
does not mirror their values.
But again, let’s get back to Mary. By a stroke of luck, it turns out that in
Mary’s case her values are coupled with a strong sense of motivation to
make things better. She has been raised to believe that her actions can
make a difference in the world, and her resolve is strengthened by a
sympathetic teacher who has fuelled her generally proactive nature,
two classmates who will support her, and a case history (provided, as it
happens, by CPAWS) of another class’s successful campaign to protect
bear habitat in the Bow Valley.
It is now December 20, and Mary decides to do some things. Some of
the changes to her behaviour were unanticipated and decidedly
peripheral to CPAWS’ goals (she buys a pair of bear slippers for her
mother for Christmas). Her New Year’s Resolution to eat only
vegetarian meals and use less electricity is a small but significant gain to
the environment and, it can be argued, to CPAWS’ goal.
But the truly exciting thing for CPAWS is that on January 4th, the day
before she heads back to school, Mary watches a show on TV about the
plight of the Panda bear in China – something that CPAWS had nothing
to do with. At this point, Mary becomes convinced that she must get
involved.
In her science class the next day she makes an announcement, inviting
fellow students to a lunchtime meeting to discuss ways in which they
can help the grizzly bears of the Kananaskis Valley by helping to
protect the bear habitat in this area.
Will Mary and her newly formed action club help CPAWS achieve its
goal? Perhaps. As any environmental advocate will tell you, it is hard
to be truly effective. Many hurdles lie in the way of this group: other
demands on students’ time, burnout, attrition, and simple
ineffectiveness (for example, they may launch a letter-writing
campaign, but send letters to the wrong Minister).
B. Why Are Environmental Education Programs So Difficult to
Evaluate?
Clayoquot Sound is a forested watershed on the west coast of
Vancouver Island in British Columbia, Canada. In 1993, Clayoquot Sound
was the site of the largest civil disobedience protests in Canadian
history, with almost 900 arrested and jailed for protesting logging
within this area.
As you know, what usually happens [in the evaluation of
environmental education] is that we can only measure simple
things, and then because that is what we can measure, we say
that those simple things are the only real things. So we count
numbers, do simple pre-post treatment surveys, look for
short-term changes, measure things, and then write our
report.
The real things, the ways in which environmental education
can change someone’s life, are much more subtle and difficult
to measure. You can ask questions about meaning, about
influence, about impacts, and look at things that aren’t visible
necessarily over a short time, but become apparent over the
long term. This is what we have to consider as we look at
effectiveness of environmental education.
You know, when the Clayoquot trials were going on, and they
took place right outside my office, my boss – the director of
public affairs from our Ministry – asked me if ‘all of that’ was
my fault. Is people going to trial or jail a good indicator of our
success?
Dr. Rick Kool, Environmental Educator, 2000
Rick Kool said it best: “The real things, the ways in which environmental
education can change someone’s life, are much more subtle and difficult
to measure.”
Why is this? For one thing, many of those in the environmental
education field are talented teachers with a passion for the environment
and a gift for interpreting nature to students. Evaluation is not
something they have received training in, nor is it something they are
necessarily drawn to – frankly, they would rather be in the classroom or
in the field!
Despite this, evaluation of their programs is something that –
eventually – most environmental educators deal with, either because of
the reporting requirements attached to a received grant, or because
they and their program recipients have legitimate questions about the
efficacy of their programs. Most programs have the ambition of –
among other things – creating some changes to students’ behaviours,
and/or causing net benefits to the environment – and there they hit a
significant stumbling block. What measurement instruments are best
used to measure behavioural change? How does one avoid such pitfalls
as test wisdom and interviewer bias, which threaten to render results
meaningless? And if, a decade after receiving an education program, a
student becomes an environmental advocate, or sponsors
environmental legislation in parliament (or, to use Dr. Kool's example
above, is arrested in a peaceful blockade) – can any reasonable claim be
made that the education program contributed to this outcome? To
relate this discussion to the Outcomes-Based schema described above –
we are attempting to measure relatively complex outcomes and impacts
– always trickier to measure than mere outputs.
A widespread survey of environmental education professionals asked
them to suggest detailed indicators or instruments that they have used
to measure such things as values shift, behavioural change,
environmental action, or even discrete benefits to the environment that
result from an environmental education program. The results were
revealing. Not one respondent could suggest such an indicator, and all
agreed that work in this field is sadly lacking.
That said, this area has been discussed briefly in some places, usually in
a context that does not necessarily fit into any particular evaluation
schema. The text box below summarizes suggestions for evaluating
action projects, developed to complement a widely used and well-designed
program known as Project WILD.
Ideas for Measuring Success (from Taking Action, p. 19, Project WILD, 1995)
Taking time to evaluate an action project helps students understand what they’ve
accomplished and allows recognition of how their project facilitated their personal
growth.
Assessing Student Knowledge, Values, and Behaviours:
• Keep a video or photo log of project highlights.
• Collect memorabilia (articles about the project, photos, planning schedules, and so
on) to create an action project scrapbook that students can sign and write
comments in.
• Have students write essays and/or keep a journal about any changes in their
thinking or behaviours as a result of the project.
• Have students evaluate other members of their group, as well as themselves. Give
students pointers on positive constructive feedback and focus the session on
specific points, such as contribution to the project, effort, conflict resolution
approach, etc.
• Have community members involved in the project assess student performances.
Assessing Project Success:
• Have students describe how well they think their project accomplished the
objectives they outlined at the start.
• Have students conduct surveys, field studies, or interviews to assess the success of
their completed project. What worked? What didn’t and why?
• Evaluate how students planned for the ongoing maintenance and sustainability of the project.
• Have community members and others involved in the project assess project
outcome.
The writers of this document have taken two approaches. Section F: An
Alternative Approach takes a less direct route, acknowledging that
some of the outcomes of environmental education are very difficult or
perhaps even impossible to measure. That approach simply describes
good environmental education, creating a sort of checklist of program
elements: the more of these features that are present in a program, the
higher the likelihood of behaviour change and positive action.
Our principal approach, though, is to review the process through which
learners can sometimes get to action, and suggest some ways in which
outcomes pertaining to values, behaviours, and benefits to the
environment may be measured, using some of the measurement
instruments described in Appendix One.
Those using the following sections to develop an evaluation plan should
keep in mind the notion of triangulation: the use of two or more
techniques to measure outcomes. Two independent measurements that
‘triangulate,’ or point to the same result, are mutually complementary
and strengthen the case that change occurred.
It is important to note that in all the following areas we are
interested in change that has occurred (a change in values, a change in
behaviour, etc.). By far the best way to measure change is to use testing
instruments that examine the subject at two different times, both
before and after a learning experience. This is referred to as pre/post
testing. Any techniques used to make claims about change that do not
rely on pre/post testing must instead rely on reconstruction, in which
subjects make claims about ‘the way things used to be’. Often, these
claims tend to remain unsubstantiated. Although we refer to both
techniques below, those that rely on reconstruction tend to be of lesser
validity than those based on pre/post testing.
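For those who tally pre/post results with a short script or spreadsheet, the comparison itself is simple arithmetic. The following sketch (Python; the student IDs and scores are invented purely for illustration, not taken from any real program) shows one way to compute per-student and group-level change from pre- and post-program Likert scores.

```python
# A minimal sketch of a pre/post comparison for Likert-scale data.
# All student IDs and scores below are invented for illustration.

# Each student's mean response (1 = strongly disagree ... 5 = strongly agree)
# on the values-related questions, before and after the program.
pre_scores = {"s01": 2.8, "s02": 3.4, "s03": 2.1, "s04": 4.0}
post_scores = {"s01": 3.6, "s02": 3.5, "s03": 3.0, "s04": 4.1}

# Per-student change: positive means a shift toward agreement.
changes = {sid: post_scores[sid] - pre_scores[sid] for sid in pre_scores}

group_mean_change = sum(changes.values()) / len(changes)
shifted_up = sum(1 for delta in changes.values() if delta > 0)

print(f"Mean change for the group: {group_mean_change:+.2f}")
print(f"{shifted_up} of {len(changes)} students shifted toward agreement")
```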
C. Measuring Values Shift
People may hold thousands of beliefs, which combine to
form attitude orientations toward responding to an object
or set of circumstances in a certain way, and attitudes are
the building blocks of values. By adulthood, a person will
hold hundreds, if not thousands of beliefs, a smaller
number of attitudes and only dozens of values.
(Caduto, 1985)
Why measure values and value shifts? As described above, beliefs
about how the world works, once lumped and grouped and combined
with emotional tendencies comprise attitudes. And attitudes in turn
combine to form a set of values, or a value system. As a larger category,
values subsume the ‘sub-concepts’ of attitudes and beliefs: by
measuring values we can make inferences about beliefs and attitudes.
Should we strive to measure attitudes or values? Opinions vary: some
workers in the field of evaluation feel that making statements
about attitudes, rather than values, might be more realistic. This
section concerns itself with the measurement of changes in values, or
value shifts, that may take place as a result of an environmental
education program or experience. Given the somewhat subjective
distinction between attitudes and values, the authors believe that these
techniques can be used to measure both attitudes and values.
Is it important to measure value shifts? Some might argue to instead
focus on behaviours (see next section), and try to measure action that
attempts to help the environment – surely, this argument goes, these
new behaviours come about as a result of a values shift. The advantage
of measuring value shifts is that in some cases this might be all that
changes: there are many good reasons why thought does not always
translate into action (Section 8A). Table Four summarizes some of the
instruments and associated outcome indicators for measuring a values
shift. Appendix One discusses the strengths and weaknesses of each
instrument.
Table Four: Measuring a Shift in Values

• Questionnaires (Likert scale or multiple choice) (pre/post): Quantitative shift in individuals/group for questions pertaining to values.
• Interview: Student responses reveal a higher appreciation of natural values.
• Focus Group: Unprompted, at least 15% of students will comment that their values are more supportive of the environment.
• Review of Peers: Students comment on changes in the values of their peers through formal and informal interviews and assignments at both the individual and team level.
• Journals (pre/post): Students make written reference to changes they feel have occurred in their own beliefs, attitudes, or values.
• Student art work (pre/post): Students’ drawings of their schoolyard give more emphasis (using colour and perspective) to natural objects.
• Feedback form (e.g. a letter to your organization): Unsolicited, students comment on how the program influenced/changed how they feel about an aspect of nature.

Notes:
1. Pre/Post: Often, an objective measurement of a change in values requires a baseline, pre-program measurement as well as a post-program measurement; ‘(pre/post)’ marks the instruments for which this is necessary.
2. Outcome Indicators: the quantitative or qualitative statement after each instrument describes the desired result we look for when using that instrument.
Sample Activity:
Values Shift Measurement using ‘Take a Stand’
‘Take a Stand’ is a common activity in which students take a stand on a
controversial statement by physically standing beside the sign that best
reflects their opinion (signs say one of the following: strongly
disagree, somewhat disagree, can’t decide, somewhat agree, or
strongly agree).
Repeating an identical statement before and after a program, and
keeping track of the number of students standing beside each sign,
can be used to measure both values clarification and values shift. The
outcome indicator for this activity might be a target indicator: “At
least 20% of students shift their position on the ‘strongly disagree–strongly
agree’ spectrum, towards a position that shows a greater
support of the protection of nature.”
*To download a CPAWS version of this activity, visit the CPAWS
website (listed in the Resources section of this document).
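As a rough illustration of how a target indicator like the one above might be tallied, the short Python sketch below records each student’s pre- and post-program position on the five-point spectrum and counts the share who moved toward greater support. The recorded positions are invented, and we assume ‘strongly agree’ is the pro-nature end of the spectrum.

```python
# Illustrative sketch for tallying a 'Take a Stand' target indicator.
# All recorded positions are invented for illustration.
SPECTRUM = ["strongly disagree", "somewhat disagree", "can't decide",
            "somewhat agree", "strongly agree"]
RANK = {position: i for i, position in enumerate(SPECTRUM)}

# (pre, post) position for each student, as noted by the presenter.
stands = [("can't decide", "somewhat agree"),
          ("somewhat agree", "somewhat agree"),
          ("somewhat disagree", "can't decide"),
          ("strongly agree", "strongly agree"),
          ("can't decide", "can't decide")]

# Count students whose post-program stand is closer to the pro-nature end.
moved = sum(1 for pre, post in stands if RANK[post] > RANK[pre])
percent = 100 * moved / len(stands)

print(f"{percent:.0f}% of students shifted toward greater support")
print("Target indicator met" if percent >= 20 else "Target indicator not met")
```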
D. Measuring Behaviour Change
It is a common ambition of education programs that some new
behaviour will arise as a result of their activities. In the case of Mary,
described above, not only did the program act on her value system but it also
helped her ‘make the jump’ across the very significant divide between
thought and action.
It is important to remember that not all new behaviour can be classified
as environmental action (for example, Mary bought bear slippers for her
mother – a new behaviour, certainly, but not one that helped the
environment). Environmental action involves students in tackling an
environmental issue or problem, or working to improve an
environmental setting. Actions can be as simple as making and
maintaining a community notice board of environmental events or as
complex as developing and implementing a plan for walking school
buses. Environmental action is behaviour that intentionally tries to do
something to help the environment; it is a subset of behaviour change
in general. Note that actual benefits to the environment do not necessarily
follow; the next section deals with environmental benefits and how to
evaluate them.
So which types of behaviour can be classified as environmental action?
R.J. Wilke summarizes the primary methods through which a person
may engage in action.
• Persuasion: educating or lobbying other members of the
public.
• Consumerism: either changing one’s own consumer
habits or encouraging others to do so.
• Political Action: action that is aimed at influencing a
decision-maker.
• Ecomanagement: actions to restore, remediate, or
improve a natural area.
• Legal Action: action taken through legal avenues.
(In Hammond, 1997)
The authors suggest that any behaviour change focusing on the
environment that falls into one of these five categories can be classified
as “environmental action.”
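For evaluators coding open-ended questionnaire or interview responses, these five categories can also serve as a simple coding frame. The Python sketch below illustrates the idea; the keyword lists and the sample response are invented, and in practice such coding is better done by a trained reader than by keyword matching.

```python
# Sketch of sorting reported behaviours into the five action categories.
# Keyword lists and the sample response are invented for illustration.
CATEGORIES = {
    "persuasion": ["told", "convinced", "presentation", "lobbied"],
    "consumerism": ["bought", "stopped buying", "reusable"],
    "political action": ["letter", "petition", "minister", "mayor"],
    "ecomanagement": ["planted", "cleaned up", "restored"],
    "legal action": ["lawsuit", "court", "appeal"],
}

def code_response(response):
    """Return every category whose keywords appear in the response text."""
    text = response.lower()
    return [category for category, keywords in CATEGORIES.items()
            if any(keyword in text for keyword in keywords)]

print(code_response("We wrote a letter to the Minister of Environment"))
# -> ['political action']
```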
A new difficulty presents itself when measuring behavioural outcomes:
time. Whereas changes in values tend to occur during or shortly after a
program, it may take longer for behaviours to manifest themselves.
This not only calls for a long-term approach to evaluation that spans a
number of years, but also opens the door to the possibility that some
influence other than the program caused the behaviour.
Table Five: Measuring Behaviour Change

• Questionnaires (pre/post): Respondents list behaviours that they began after the program.
• Interviews: Open-ended questions prompt interviewees to remark on changes to their behaviour.
• Observations (pre/post): Observer tests for the presence or absence of a number of behavioural criteria (e.g. a classroom recycling program).
• Focus Group: Over a quarter of students agree that their behaviour has changed in a specific way.
• Student art work: Student artwork created in response to an assignment identifies specific behavioural changes (e.g. a mural of human interaction with ecosystems).
• Feedback form: Students describe and document action projects, changes they have made, and special events in the form of postcards, photographs, videos, journals, and web page entries.

Notes:
1. Pre/Post: In many cases, an objective measurement of change requires a baseline, pre-program measurement as well as a post-program measurement; ‘(pre/post)’ marks the instruments for which this is necessary.
2. Outcome Indicators: the quantitative or qualitative statement after each instrument describes the desired result we look for when using that instrument.
E. Measuring Benefits to the Environment
Many funders’ application forms ask the following question:
“Please describe in concrete terms how your project will benefit the
environment.”
Those who have read this far will understand why environmental
educators have a difficult time answering this question! Even those
students who have values sympathetic to the environment and whose
new behaviours can be categorized as a kind of environmental action
may not ever do anything that directly benefits the environment – or at
least not within the time period that we have capacity to measure!
The following table provides a discussion of the five kinds of action and
their evaluation.
Table Six: Types and Benefits of Environmental Action
(each type of action is followed by a note on the ease of evaluating its benefits to the environment)

• Persuasion (educating or lobbying other members of the public): Benefits may never be demonstrable, and/or may not exist. The possibility exists that this persuasion may not in fact change anyone’s behaviour.
• Consumerism (either changing one’s own consumer habits or encouraging others to do so): Several measurement instruments can be used to identify changes in consumerism, and resources documenting the relationship between consumer habits and environmental impact are readily available.*
• Political Action (action that is aimed at influencing a decision-maker): Decision-makers may never respond to pressure – or a pro-environmental decision they make may be due to other factors. Interviews of decision-makers can be helpful in determining this.
• Ecomanagement (action to restore, remediate, or improve a natural area): This is easily documented and measured using a “before and after” scenario. Funders who emphasize easy accountability, such as EcoAction, place high emphasis on activities of this sort.
• Legal Action (action taken through legal avenues): Action of this sort can be easily documented, through such things as judicial decisions.

* For example, the Union of Concerned Scientists recently published the Consumer’s Guide to Effective Environmental Choices. See the Resources section for more information.
But what does ‘benefits to the environment’ mean, you might ask?
Some funding bodies come at this topic from a restoration or anti-pollution
perspective, and want to know how the project will improve
the environment or ‘help it get better’. Others start from a point of
view that assumes the environment is healthy (e.g. intact ecosystems),
and want applicants to demonstrate how higher protection of the
environment will result from the project. Both these outcomes can
produce benefits to the environment.
F. An Alternative Approach: What We Know About Good EE
The following approach has been validated through conversations with
professionals in both funding and environmental education
organizations. This approach acknowledges that some of the benefits
and outcomes of environmental education are either so difficult to
measure that measurement is impractical, or truly intangible. For these cases,
our approach is to simply describe what good environmental education
is. An extensive analysis of the relevant peer-reviewed professional
papers is beyond the scope of this document, however; what we can rely
on is the best judgment of practitioners. Recently, a number of
environmental educators used their best professional judgment to
identify elements of environmental education programs that help lead
learners towards action. This work is summarized in “How to Get
Action in Your Environmental Education Program” which can be
downloaded from the CPAWS website. See the Resources section for
more information.
Using this approach, those trying to either design a sound
environmental education program or make decisions about funding
such a program need only examine the proposed program, look for
elements that contribute towards behaviour change and positive action,
and make a subjective decision about the likelihood of the proposed
program ever achieving these goals. It is worth reiterating, however,
that a sound evaluation plan is superior in a number of ways, as it
allows a far more empirical assessment of success than the theoretical
approach described in this section.
9. Conclusion: Implications of Conducting Evaluations
The authors would like to mention several trends that are relevant here.
We feel that today, more than ever, society needs high-quality
environmental education programs that succeed in moving values and
changing behaviours in the direction of sustainability and environmental
conservation. We also believe that effective, relevant evaluation offers a
very powerful way to improve these education programs and enable them
to succeed in accomplishing more of their objectives and goals.

For a variety of reasons, funders and environmental education
professionals alike feel increased pressure to conduct better program
evaluations. Further, there is a lot of support in these communities for
Outcome-Based Evaluation. That said, the reader should refer to the
sidebar below, Concerns About Outcome-Based Evaluation, which
summarizes some of the concerns felt by organizations.

Sidebar: Concerns About Outcome-Based Evaluation

Most organizations that have encountered Outcome-Based Evaluation
over the past few years harbour concerns about it. Commonly emerging
themes are:

• Questioning and confusion about what exactly is being asked of
recipient organizations by funding bodies.
• Frustration with a perceived lack of coordination among funding
bodies with regard to the introduction of these schemas.
• Concerns about definitions of key terms (‘output’, ‘outcome’).
• Doubts about the capacity of results-based performance management
schemas to capture the so-called ‘soft’ results that are intrinsic to
social development initiatives.
• Recognition that it is important to keep track of and learn from the
work, but concern that expectations around information gathering and
reporting exceed previous expectations and require funds, time, skills
and information systems that the organization does not have.
• Expectation that funding bodies should recognize that if they desire
higher levels of accountability and performance management, they
should assist with capacity-building support.

The following implications arise from these concerns:

• There is a real need within the environmental education community
for training in the area of evaluation. Environmental education
professionals are experts in program design and delivery, but not
necessarily in conducting evaluations of these activities.
• Evaluation is an expensive and time-consuming
activity. Practitioners need to recognize this in their
budgeting and fundraising activities, and funders need
to be prepared to fund good evaluation activities.
Many responsible funders will now provide funds for
evaluation – up to 15% of the total budget of the
program.
• Practitioners and funders alike should recognize the
importance and validity of external evaluators, despite
the increased cost. “If a thing is worth doing, it is
worth doing well!”
• Funders who deal with numerous education programs
should consider efficiencies. This might range from
giving workshops to all grantees, to keeping an
external evaluator on retainer who could engage in a
participatory way with grantees. Such a person or
group could provide invaluable advice to groups setting
up and implementing their evaluation plans, as well as
being the external evaluator at the end of the process.
• It may take months or years for participant behaviours
to change as a result of an education program. Given
this, discerning whether or not behaviour change
occurs requires long-term study. Practitioners need to
acknowledge this in their evaluation design, and
funders need to recognize this reality when considering
funding requests.
• The creation of measurement instruments, such as the
pre and post surveys of students, takes considerable
time and expertise. Funders should be aware of this,
perhaps providing groups with templates developed by
those with backgrounds in the psychological sciences,
or at the very least referring groups to models that
have been developed by others.
Appendixes
Appendix One: An Environmental Education Tool Kit
If the only tool one has is a hammer, then one tends to treat
everything as if it is a nail.
Mark Twain
A. What Are the Instruments Available?
So now what? Well, you’ve picked a method for evaluating your data,
but how exactly are you going to go about gathering the information
necessary for doing the actual evaluation? What tools (‘instruments’)
will you use? From whom are you going to obtain information? And
how will you gather it? Through a survey? An interview? A focus group
meeting? There is a wide array of data collection instruments for you to
choose from, which can be used to evaluate outputs and outcomes. The
purpose of this section is to help you determine appropriate means of
data collection for your evaluation.
Four Quick Hints Before You Begin:
• Obtain necessary clearances and permission before you begin.
• Consider the needs and sensitivities of the respondents.
• Make sure your data collectors are adequately trained and will
operate in an objective, unbiased style.
• Cause as little disruption as possible to the ongoing effort.
1. Questionnaires and Surveys
Questionnaires and surveys are popular tools in conducting
environmental education program evaluation. They are useful for
obtaining information about opinions and attitudes of participants, and
for collecting descriptive data. They are particularly useful when quick
and easy information is needed. They can be applied in the following
ways:
• Self-Administered: a questionnaire is distributed in
person or by mail. Examples: Teacher Written
Questionnaire, used to determine teacher satisfaction,
post-program; Student Written Questionnaire.
• Telephone: best when answers are more numerous and
more complex questions are needed. Examples:
Teacher Verbal Questionnaire – for telephone
interviews, to investigate changes in attitudes, values,
or behaviours of students; Student Verbal
Questionnaire – for pre/post surveys of attitudes and values.
• Face-to-Face: best when answers are more
numerous and more complex questions are needed. A
face-to-face questionnaire (where a series of precisely
worded questions are asked) is different from an
interview (usually a more open-ended discussion).
Questionnaires and surveys can readily become quantitative
measurement instruments. For example, a Likert scale (in which
respondents circle a number between one and five) can be used to
measure agreement with certain statements. A multiple-choice
questionnaire can likewise be used in a quantitative manner.
Quantitative tools lend themselves to such things as making general
statements about an individual or class of students, or measuring
changes to an individual or class as a result of an educational program.
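One practical wrinkle when scoring questionnaires quantitatively: if some statements are negatively worded (as in the sample student survey in Appendix Four, where disagreeing is sometimes the pro-environment response), those items must be reverse-scored before responses are averaged. A minimal sketch in Python, with invented item numbers and responses:

```python
# Sketch of scoring a five-point Likert questionnaire in which some
# statements are negatively worded. Item numbers and responses are
# invented examples, not taken from any actual survey.
REVERSED_ITEMS = {2, 3}  # items where 'strongly disagree' is pro-environment

def item_score(item_number, response):
    """Map a raw response (1-5) so that 5 is always the pro-environment end."""
    return 6 - response if item_number in REVERSED_ITEMS else response

# One student's raw responses, keyed by item number.
responses = {1: 4, 2: 2, 3: 1, 4: 5}

scores = [item_score(item, answer) for item, answer in responses.items()]
print(f"Mean attitude score: {sum(scores) / len(scores):.2f} out of 5")
```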
Design Tips
Before designing your questionnaire, clearly articulate what you’ll use
the data for: what problem or need is to be addressed using the
information gathered by the questions. Remember to include/consider:
• Directions to respondents (purpose, explanation of
how to complete the questionnaire, issue of
confidentiality).
• Question content – ask what you need to know and
consider whether respondents will be able (or want) to
answer.
• Wording of the questions – not too technical, use
appropriate language levels for your audience, ask one
question at a time, avoid biased wording or using ‘not’
questions/double negatives.
• Order of questions – watch the length; start with fact-based and go on to opinion-based questions.
2. Interviews
Interviews are particularly useful for getting the story
behind a participant's experiences. The interviewer can
pursue in-depth information around a topic. Interviews
may be useful as follow-up to certain respondents to
questionnaires, e.g., to further investigate their responses.
Usually open-ended questions are asked during interviews.
(McNamara, 1999)
Interviews themselves can range in style from informal, casual
conversations to standardized, open-ended questions, to a closed, fixed-response style.
Design Tips
Regardless of the format used, there are several design tips that apply
to all, to ensure that both the quality of the data you gather and the
quality of the experience for interviewer and interviewee remain high.
• Choose a setting with little distraction.
• Explain the purpose of the interview.
• Address terms of confidentiality.
• Explain the format of the interview.
• Indicate how long the interview usually takes.
• Tell them how to get in touch with you later if they want to.
• Don't count on your memory to recall their answers.
(McNamara, 1999)
Remember to ask your questions one at a time, in order to give the
interviewee sufficient time to think and respond. Keep the wording of
the questions as neutral as possible, and remain as neutral as possible.
When closing, ask interviewees if they have anything they’d like to
ask you.
3. Focus groups
Commonly, focus groups involve bringing together a number of persons
from the population to be surveyed to discuss, with the help of a leader,
topics that are relevant to the evaluation. Essentially, focus groups are
interviews, but of a larger number of people at the same time, in the
same group. Focus groups can allow you to explore many different
aspects of your program. For example, student focus groups can allow
you to ask questions that let you look for change in attitudes and
values, plus any action that may have ensued. Focus groups also enable
a deeper exploration of issues through creating a venue for the sharing
of experiences: in a successful focus group session, a synergy or catalyst
effect occurs, where participants build upon each others’ responses to
paint a rich picture of your program.
Design Tips
• The usefulness of your focus group will depend on the skills of
the moderator and on your method of participant selection.
• It’s important to understand that focus groups are essentially
an exercise in group dynamics.
• When preparing for your focus group session, clearly identify
the objective of your meeting, and develop a set number of
questions (six is a good target number).
• Keep it short but sweet – one to 1.5 hours is long enough.
Hold it in a stress-free environment and provide
refreshments. Pay attention to room set-up: have chairs
arranged well, provide paper/pens, etc.
• At the beginning set out ground rules, remember to introduce
(or better yet, let them introduce themselves) all participants,
and clarify the agenda. At the end, thank everyone for
coming.
• After the group has answered each question, summarize what
you think their answer was, and repeat it back to them. This
is to ensure that you have a correct interpretation of their
answer.
• Don’t forget to keep minutes! Tape-recording focus group
sessions may be the best way to capture the often rapid
discussions, comments and ideas.
• Explore a topic in depth through group discussion, e.g., about
reactions to an experience or suggestion, challenges and
successes of a program, understanding common complaints,
marketing needs.
4. Tests
Many feel that improvements in test scores are the best indicators of a
project’s success. Test scores are often considered ‘hard data’ and
therefore (presumably) objective data, more valid than other types of
measurements such as opinion and attitude data. (EHR/NSF Evaluation
Handbook)
Various Types and Their Applications
• Norm-referenced: Measuring how a given student
performed compared to a previously tested population.
• Criterion-referenced: Measuring whether a student has
mastered specific instructional objectives and thus
acquired specific knowledge and skills.
• Performance assessment tests: Designed to measure
problem-solving behaviours rather than factual
knowledge; students are asked to solve more complex
problems and to explain how they go about arriving at
answers and solving these problems; may involve group
as well as individual activities, and may appear more
like a project than a traditional test.
(EHR/NSF)
When considering using tests as part of your data gathering strategy,
there are a few fundamental questions that must first be addressed: Is
there a match between what the test measures and what the project
intends to teach (e.g., if a science curriculum is oriented toward teaching
process skills, does the test measure these skills or only the more
concrete scientific facts)? Has the program been in place long enough
for there to be an impact on test scores? (EHR/NSF)
Design Tips
Evaluators may be tempted to develop their own test instruments
rather than relying on ones that exist. While this may at times be the
best choice, it is not an option to be undertaken lightly. Test
development is more than writing down a series of questions, and there
are some strict standards formulated by the American Psychological
Association that need to be met in developing instruments that will be
credible in an evaluation. If at all possible, use of a reliable, validated
test is best.
5. Observations
The basic goal behind conducting observations is for the researcher to
gather accurate information about how a program actually operates,
particularly about processes involved. Observations involve watching
others while systematically recording the frequency of their behaviours,
usually according to pre-set definitions. (SAMHSA – CAP – NCAP,
2000)
Many researchers employ “participant observation” strategies, whereby
the researcher joins in the process that is being observed, in order to
provide more of an “insider’s” perspective. (SAMHSA – CAP – NCAP,
2000)
6. Document and Record Review
Often, researchers want to obtain an impression of how a program
operates without actually interrupting the program itself. A review of
secondary data such as applications, journals, memos, minutes, and
other such forms of data provides one means of doing so.
7. Case Studies
Case studies are used to organize a wide range of information about a
case and then analyze the contents by seeking patterns and themes in
the data, and by further analysis through cross comparison with other
cases. A case can describe any unit, such as individuals, groups,
programs, or projects, depending on what the program evaluators want
to examine. (McNamara, 1999)
Design Tips
McNamara has identified five steps towards developing a case study:
1. All data about the case is gathered.
2. Data is organized into an approach to highlight the focus of
the study.
3. A case study narrative is developed.
4. The narrative can be validated by review from program
participants.
5. Case studies can be cross-compared to isolate any themes
or patterns.
(McNamara, 1999)
B. Table Seven: Pros and Cons of Evaluation Instruments

The following table is a summary of the advantages and disadvantages
of the evaluation instruments mentioned above.

1. Questionnaires and Surveys (types include self-administered, and interview administered by telephone)
Advantages:
• Inexpensive.
• Easy to analyze.
• Easy to ensure anonymity.
• Can be quickly administered to many people.
• Can provide a lot of data.
• Easy to model after existing samples.
Disadvantages:
• Wording of questions might bias responses.
• No control for misunderstood questions, missing data, or untruthful responses.
• Not suitable for examining complex issues.
• Can be impersonal.
• By telephone: respondents may lack privacy.

2. Interviews (types include the informal, conversational interview; the standardized, open-ended interview; and the closed, fixed-response interview)
Advantages:
• Can allow the researcher to get a full range and depth of information.
• Develops a relationship with the client.
• Can be flexible with the client.
• Can allow you to clarify responses.
• Interviewer controls the situation, and can probe irrelevant or evasive answers.
• With good rapport, may obtain useful open-ended comments.
• Usually yields the richest data, details, and new insights.
• Best if in-depth information is wanted.
Disadvantages:
• As a rule not suitable for younger children, older people, and non-English-speaking persons.
• Not suitable for sensitive topics.
• Respondents may lack privacy.
• Can be expensive.
• May present logistics problems (time, place, privacy, access, safety).
• Often requires a lengthy data collection period unless the project employs a large interviewer staff.
• Can take much time.
• Can be hard to analyze and compare.
• Interviewer can bias the client’s responses.

3. Focus Groups
Advantages:
• Useful to gather ideas, different viewpoints, new insights, and for improving question design.
• Researcher can quickly and reliably obtain common impressions and key information about programs from the group.
• Can be an efficient way to get much range and depth of information in a short time.
• Information obtained can be used to generate survey questions.
Disadvantages:
• Not suited for generalizations about the population being studied.
• It can often be difficult to analyze responses.
• A good facilitator is required to ensure safety and closure.
• It can be difficult to schedule people together.

4. Tests (types include norm-referenced tests, criterion-referenced tests, and performance assessment tests)
Advantages:
• Tests can provide the "hard" data that administrators and funding agencies often prefer.
• Generally they are relatively easy to administer.
• Good instruments may be available as models.
Disadvantages:
• Available instruments may be unsuitable.
• Developing and validating new, project-specific tests may be expensive and time consuming.
• Objections may be raised because of test unfairness or bias.

5. Observations (types include observations and participant observations)
Advantages:
• If done well, can be best for obtaining data about the behaviour of individuals and groups.
• You can view the operations of a program as they are actually occurring.
• Observations can be adapted to events as they occur.
Disadvantages:
• Can be expensive and time-consuming to conduct.
• Needs well-qualified staff to conduct.
• Observation may affect the behaviour of program participants and deliverers.
• Can be difficult to interpret and categorize observed behaviours.
• Can be complex to categorize observations.

6. Documentation and Record Review
Advantages:
• Can be objective.
• Can be quick (depending on the amount of data involved).
• Gets comprehensive and historical information.
• Doesn’t interrupt the program or the client’s routine in the program.
• Information already exists.
• Few biases about information.
Disadvantages:
• Can also take much time, depending on the data involved.
• Data may be difficult to organize.
• Can be difficult to interpret/compare data.
• Data may be incomplete or restricted.
• Need to be quite clear about what you are looking for.
• Not a flexible means to get data.

7. Case Studies
Advantages:
• Fully depicts the client’s experience in program input, process and results.
• Can be a powerful means to portray the program to outsiders.
Disadvantages:
• Usually quite time-consuming to collect, organize and describe.
• Represents depth of information, rather than breadth.

The above table is a compilation of information taken from the following documents
and sources: Carter McNamara’s Basic Guide to Program Evaluation; the EHR/NSF
User-Friendly Handbook for Project Evaluation; and SAMHSA – CSAP – NCAP’s Getting to
Outcomes. See the References section for more information.
Appendix Two: Checklist for Program Evaluation Planning
The following checklist is adapted from Carter McNamara’s Checklist for
Program Evaluation Planning. A full checklist, in addition to many
other useful evaluation tools, documents and materials, can be found on
McNamara’s excellent website. See the Resources section of this
document for more information.
Name of Organization ___________________________________________
Name of Program ______________________________________________
Purpose of Evaluation?
What do you want to be able to decide as a result of the evaluation?
For example:
__ Understand, verify or increase impact of products or services on
customers/ clients (e.g., outcomes evaluation)
__ Improve delivery mechanisms to be more efficient and less costly
(e.g., process evaluation)
__ Verify that we're doing what we think we're doing (e.g., process
evaluation)
__ Clarify program goals, processes and outcomes for management
planning
__ Public relations
__ Program comparisons, e.g., to decide which should be retained
__ Fully examine and describe effective programs for duplication
elsewhere
__ Other reason(s)
Audience(s) for the Evaluation?
Who are the audiences for the information from the evaluation, for
example:
__ Clients/customers
__ Funders/Investors
__ Board members
__ Management
__ Staff/employees
__ Other(s) _________________________________________________
What Kinds of Information Are Needed?
What kinds of information are needed to make the decision you need to
make and/or enlighten your intended audiences, for example,
information to understand:
__ The process of the product or service delivery (its inputs, activities &
outputs)
__ The customers/clients who experience the product or service
__ Strengths and weaknesses of the product or service
__ Benefits to customers/clients (outcomes)
__ How the product or service failed and why, etc.
__ Other type(s) of information?
Type of Evaluation?
Based on the purpose of the evaluation and the kinds of information
needed, what types of evaluation are being planned?
__ Goal-Based?
__ Process-Based?
__ Outcomes-Based?
__ Other(s)? ___________________________________________________
Where Should Information Be Collected From?
__ Staff/employees
__ Clients/customers
__ Program documentation
__ Funders/Investors
__ Other(s) ________________________________________________
How Can Information Be Collected in Reasonable and Realistic Fashion?
__ questionnaires
__ interviews
__ documentation
__ observing clients/customers
__ observing staff/employees
__ conducting focus groups among ____________________________________
__ other(s)
When is the Information Needed?
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
What Resources Are Available to Collect the Information?
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
Appendix Three: Tips for Conducting an Evaluation
I. Develop Evaluation Questions
1. Clarify goals and objectives of the evaluation
2. Identify and involve key stakeholders and audiences.
3. Describe the intervention to be evaluated.
4. Formulate potential evaluation questions of interest to all
stakeholders and audiences.
5. Determine resources available.
6. Prioritize and eliminate questions.
II. Match Questions with Appropriate Information-Gathering Techniques
1. Select a general methodological approach.
2. Determine what sources of data would provide the information
needed.
3. Select data collection techniques that would gather the desired
information from the identified sources.
III. Collect Data
1. Obtain the necessary clearances and permission.
2. Consider the needs and sensitivities of the respondents.
3. Make sure data collectors are adequately trained and will operate
in an objective, unbiased manner.
4. Cause as little disruption as possible to the ongoing effort.
IV. Analyze Data
1. Check raw data and prepare data for analysis.
2. Conduct initial analysis based on the evaluation plan.
3. Conduct additional analyses based on the initial results.
4. Integrate and synthesize findings.
V. Provide Information to Interested Audiences
1. Provide information to the targeted audiences
2. Deliver reports and other presentations in time to be useful.
3. Customize reports and other presentations.
(EHR/NSF)
*From the EHR/NSF User-Friendly Handbook for Project Evaluation. See the
References section for more information.
Appendix Four: Evaluation Samples
A. Teacher Written Questionnaire
The following questionnaire is used by the Sierra Club of Canada – BC
Chapter to determine teacher satisfaction, post-program delivery. It is
designed to be short, simple to fill out and trouble-free to return.
Sierra Club Education Program
Evaluation Form
The Sierra Club strives towards the delivery of fair, balanced & interesting
educational programs. Your feedback is important to us. Please take a
moment to comment.
Please indicate which presentation your class received:
__ Journey through the Temperate Rainforest
__ Trees and Fishes, Needs and Wishes
__ Weaving Wildlife Webs
__ You, Me and the Trees
__ Fish, Forests and Fungi
__ All Aboard the Stewardship
__ Stewardship Superheroes
__ ESP Investigator
Program Goal: Please indicate whether the program received met the goals (as
outlined in the Pre-Program Booking Package and the Program Flyer sent to
the school):
Yes ____  No ____
Why or why not: _________________________________________________________
Program Content: Please evaluate this program based on the following
themes:
Messaging: Were messages:
• clearly defined: Yes ____  No ____
• balanced, fair and educational: Yes ____  No ____
If no, why not: _________________________________________________________

Activities: Were activities:
• engaging/interesting: Yes ____  No ____
• provoking of critical thought: Yes ____  No ____
• suitable for information delivery: Yes ____  No ____
Information Amount: Was suitable information on the following provided:
Temperate Rainforest ecology: Yes ____  No ____  N/A ____
Stewardship (ideas and solutions): Yes ____  No ____  N/A ____
Did the program fit your curriculum: Yes ____  No ____  N/A ____
Program changes: Check any that apply:
• Group size: smaller ____  fine ____  larger ____
• Program length: shorter ____  fine ____  longer ____ (hrs)
• Amount of audience interaction: more ____  fine ____  less ____
• Age appropriateness: appropriate ____  not appropriate ____
• What grade level would you consider this program appropriate for? __________
How did you find out about this program? ____________________________________
Are you familiar with the Sierra Club of Canada, BC Chapter? Yes ____  No ____
Would you be interested in having us visit you again? Yes ____  No ____
Any other comments?
______________________________________________________________________
______________________________________________________________________
*We welcome any student feedback you would like to forward to us.
Please sign and return this form to the Sierra Club of Canada – BC Chapter.
School Name: ____________________  Your Name: ____________________________
Your Signature: ________________________________________________________
Return to:
The Sierra Club of Canada, BC Chapter, Johnson St., Victoria, BC, V8W 2M3.
Ph: (250) 386-5255 F: (250) 386-4453
Thank you!
B. Teacher Interview/Focus Group Questions
The following set of interview questions was used by the Green Street
Initiative to evaluate their program. The questions were delivered by
an independent evaluator who met with teacher focus groups in order
to gain a better understanding of the impact and effectiveness of the
program.
Teacher Focus Group Questions
and Interview Guide
Part One: Quantitative Data: Information to obtain before focus
group/interview session.
How many students were involved? ______  Males ______  Females ______
How many classrooms: ______  Grade(s): ______  Subject area(s): ______
Name of GS Program: ______  Program deliverer(s): ______
Number of visits to class: ______
Part Two: Introductions, Outline Purpose and Review Focus Group
Guidelines
Discussion Starter: Past experience with EE
1. Can you talk a little about your past experiences with environmental
education?
Probes: any background training, any other environmental education
programming you may have done in your classroom or with outside groups?
2. What about your best evaluation experience?
3. What made it so?
4. How did you first hear about the program?
Theme 1: Promotion/Communications
5. How did you first hear about Green Street?
Probes: Anyone hear about it through a student? Did anyone hear
about Green Street first, and then contact the Provider for the
program?
6. Did anyone review the Green Street web site? Did you review and
learn about other providers/ programs through the site?
7. What would be the best way to learn about a program like this?
How do you usually find out about programs like these?
(Probes: principals, posters, list-serves, emails, other teachers,
conferences, newsletters, staff meeting…)
8. What made you choose this program?
Theme 2: Registration Process
9. I’d like to hear about some of your experiences with registering for
the program – how did it go for you?
Probe: What did you like / not like about the process? Was it simple
or complicated?
10. Were students involved in the registration process? Can you talk
about this?
Theme 3: Program Reach
Let’s talk a bit about Green Street itself now – Green Street was set up
to act as a broker, offering a pre-screened “roster” of EE programs to
choose from, based on a set of criteria such as curriculum alignment and
teacher support. (Have the list of criteria available to show teachers,
post it visibly and discuss.)
11. Was this evident to you? Did GS provide any added value to you?
12. Did you choose a program with more confidence, knowing that it
was aligned with curriculum? Knowing that it would provide you
with support?
13. Did you feel the program was in line with the prescribed curriculum?
14. Did you use the program as part of any extra-curricular activities –
i.e. environment club?
Theme 4: Experience with the Program
I’d like to turn now to your actual experience with the program and
hear about things like the quality of the program, your expectations,
and student learning.
15. Quality of the program: Were you happy with the quality of the
program? Why? Why not?
16. What are your opinions regarding the value of offering a field trip or
in-class component, versus sending out a resource package?
17. What about your expectations: Did the program provide for you
what you expected or were there surprises (good and/ or bad)?
18. What do you think students gained from participating in the
program?
Probe: level of knowledge increased, practical skills, applied learning
19. Do you feel your students were engaged with the program? Does
the program have elements that make it transformative for
students? Compel them to take action?
Theme 5: Support
20. What can you tell us about this aspect of the program?
Probe: Did the Providers provide you with support in a timely
efficient manner? Were the necessary pieces in place to provide you
with support? I.e. before, during and after the program?
Suggestions?
21. What types of support would be most useful for you?
22. Did you have the support of your Principal? Did you need the
support of your Principal?
Theme 6: Future Directions
23. Would you do this again?
24. Would you recommend the program to another teacher?
25. What are other programs / topics you would like to see?
Key points:
To close, I’d like to hear from each of you what you feel the most
important elements of the discussion have been. If you like, you can
take a minute or two to write them down, to help
summarize/synthesize your thoughts.
End Notes:
Thank attendees for participation.
Offer to send summary of final report.
Provide contact information in event of further thoughts or ideas to be
passed on.
C. Student Questionnaire
CPAWS recently developed a questionnaire designed to measure
changes in students’ knowledge, attitudes, and behaviours that occur
after their school presentations. The questionnaire was administered
by CPAWS before and then again immediately following the classroom
presentation.
Grizzly Bears Forever!
Student Survey
Part One: Knowledge
This section of the survey is designed to determine knowledge about
Grizzly bears and related things. CIRCLE the letter that reflects what
you think is a correct response to the statement or question. This is not
a test! Don’t worry if you can’t answer many of these. Very few people
can. If you don’t know the answer, guess.
1. Grizzly bears are:
(a) Carnivores (they eat only meat)
(b) Herbivores (they eat only plants)
(c) Omnivores (they eat both plants and animals)
(d) Vegans (they eat no milk products)
2. Grizzly bears are known as an umbrella species. This means that:
(a) They protect other species from rain
(b) Their presence in an ecosystem means that many other
animals will also be found there
(c) On a graph, the curve of their population is shaped like an
umbrella
(d) Grizzlies hide in dense brush to avoid rain or snow
3. Which of the following statements is true?
(a) Grizzly bears still live in all the same places they did 200 years
ago
(b) Grizzly bears once lived in Eastern Canada as well, but are no
longer there
(c) Grizzly bears once lived throughout Western North America,
but can no longer be found in much of this area
(d) Grizzly bears can now only be found in the Yukon and Alaska
4. Grizzly bears prefer not to cross busy roads like the Trans-Canada
Highway. This is a problem because:
(a) This means that the genetic diversity of the population will
decrease
(b) Bears typically need to cross the road to get to rich food
sources
(c) It is important for tourists to see bears crossing the highway
(d) This interrupts the flow of animals as they migrate south for
the winter
5. The biggest threat to large animals like Grizzlies and wolves is
(a) Hunting and other forms of direct mortality such as being hit
by cars
(b) Pollution of air and water that contaminates their food
sources
(c) Global warming and climate change
(d) Habitat loss and habitat fragmentation
6. In Alberta, Grizzly bears are
(a) Hunted north of the Bow River
(b) Listed as Endangered in the Provincial Wildlife Act
(c) Listed as Threatened in the Provincial Wildlife Act
(d) Found mainly in the Calgary Zoo
Part Two: Environmental Attitudes
This part of the survey is designed to determine environmental
attitudes. There are no right or wrong answers, only differences of
opinion. CIRCLE the letter that reflects your true feelings.
1. I am in favour of saving remote wilderness areas, even if few people
ever get a chance to go there.
A
Strongly Agree
B
Agree
C
Neutral
D
Disagree
E
Strongly Disagree
2. Poisonous snakes that pose a threat to people should be killed.
A  Strongly Agree     B  Agree     C  Neutral     D  Disagree     E  Strongly Disagree
3. If a plant or animal is of no use to humans, then we don’t need to
waste our time trying to protect it.
A  Strongly Agree     B  Agree     C  Neutral     D  Disagree     E  Strongly Disagree
4. If I had to choose between protecting a natural area and creating
homes for humans, I would choose to protect the area.
A  Strongly Agree     B  Agree     C  Neutral     D  Disagree     E  Strongly Disagree
5. Government should pass laws to make recycling mandatory, so that
everyone is forced to recycle.
A  Strongly Agree     B  Agree     C  Neutral     D  Disagree     E  Strongly Disagree
6. Preserving wild areas isn’t important because we’re good at managing
wildlife.
A  Strongly Agree     B  Agree     C  Neutral     D  Disagree     E  Strongly Disagree
7. Industries should have to pay for any pollution they cause.
A  Strongly Agree     B  Agree     C  Neutral     D  Disagree     E  Strongly Disagree
8. There is no point in getting involved in environmental issues, since
governments and industries have all the power and can do whatever
they want to.
A  Strongly Agree     B  Agree     C  Neutral     D  Disagree     E  Strongly Disagree
9. I am interested in spending time working to help the environment,
even though I realize this will cut into my free time.
A  Strongly Agree     B  Agree     C  Neutral     D  Disagree     E  Strongly Disagree
Part Three: Environmental Behaviours
This section of the survey is designed to find out what things you do
about the environment. There are no right or wrong answers, so don’t
worry if you have never done any of these things, and don’t worry if all
your tick marks end up in the ‘N’ column. We ask only that you be
truthful as you answer these questions.
Mark the answer that is closest to what is true for you:
N – stands for never or no
R – stands for rarely (three or four times a year)
S – stands for sometimes (three or four times a month)
U – stands for usually, or yes (most of the time you have the chance)
(Tick one answer, N, R, S, or U, for each statement.)

I bring all my lunch to school in reusable containers ............ N  R  S  U
I walk or bike to places instead of asking for a ride ............ N  R  S  U
I turn off the tap water while I brush my teeth .................. N  R  S  U
At home, I try to recycle as much as I can ....................... N  R  S  U
I talk to others about helping the environment ................... N  R  S  U
I pick up litter when I see it in a park or a natural area ....... N  R  S  U
I am a member of an environmental club or group .................. N  R  S  U
I write to politicians about things that concern me .............. N  R  S  U
I work on outdoor projects to improve the environment ............ N  R  S  U
I have helped raise money to support an environmental cause ...... N  R  S  U
I read about the environment for fun ............................. N  R  S  U

You are finished! Thank you for your participation.
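A practical note on tabulating this instrument: Part One has a single
correct answer per question, while Parts Two and Three use ordered
response scales, so each part needs its own scoring rule. The short
Python sketch below is a minimal illustration only, not CPAWS’s actual
analysis; the answer key, the choice of which attitude statements to
reverse-code, and the sample responses are all assumptions made for the
example and should be checked against the program’s own materials.

    # Minimal sketch of scoring the pre/post student survey.
    # The answer key and reverse-coded items are assumptions for
    # illustration; verify them against the program's own key.

    ANSWER_KEY = {1: "c", 2: "b", 3: "c", 4: "a", 5: "d"}  # illustrative key (Part One)
    LIKERT = {"A": 5, "B": 4, "C": 3, "D": 2, "E": 1}      # Part Two response scale
    REVERSED = {2, 3, 6, 8}  # statements we read as negatively worded

    def knowledge_score(answers):
        # Count correct answers among the keyed knowledge questions.
        return sum(1 for q, a in answers.items() if ANSWER_KEY.get(q) == a)

    def attitude_score(responses):
        # Mean Likert value; negatively worded items are flipped so a
        # higher score always means a more pro-environment attitude.
        total = 0
        for item, letter in responses.items():
            value = LIKERT[letter]
            if item in REVERSED:
                value = 6 - value  # e.g., Strongly Disagree becomes 5
            total += value
        return total / len(responses)

    # Made-up pre/post responses for one student.
    pre_know = {1: "a", 2: "b", 3: "b", 4: "c", 5: "a"}
    post_know = {1: "c", 2: "b", 3: "c", 4: "a", 5: "d"}
    pre_att = {1: "B", 2: "B", 3: "C", 4: "C", 5: "B", 6: "C", 7: "A", 8: "C", 9: "C"}
    post_att = {1: "A", 2: "D", 3: "E", 4: "B", 5: "B", 6: "D", 7: "A", 8: "D", 9: "B"}

    print("Knowledge gain:", knowledge_score(post_know) - knowledge_score(pre_know))
    print("Attitude shift:", round(attitude_score(post_att) - attitude_score(pre_att), 2))

Part Three can be scored the same way by mapping N, R, S, and U to 0-3;
averaging each student’s pre-to-post change across a class yields the
kind of before-and-after comparison described above.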
D. Student Focus Group Questions
The following set of interview questions was used by the Green Street
Initiative to evaluate its program. An independent evaluator met with
student focus groups in order to gain a better understanding of the
impact and effectiveness of the program.
Student Focus Group Questions
Part One: Quantitative Data
Number and gender of students: Males ______ Females ______
Classes / schools represented:
GS Programs represented:
Part Two: Introductions, Outline Purpose and Review Focus Group
Guidelines
Discussion Starter: Past experience:
1. Can you talk about the best experiences you may have had with
environmental education, either in or out of school?
Probe: projects, camps, environmental issues. Why were they the
best?
2. What are your feelings about them? Do you feel environmental
issues are relevant to your lives?
Theme 1: Experience with the Program
Now I want to hear about the program you just completed. I want this
to be an open discussion, but I’ll throw in some questions about things
like program registration and program choices, to keep us on track.
All of you participated in the _____ program, right?
3. What did you learn about?
4. What did you think?
Probe: Was the program what you expected, or were there surprises
(good or bad)?
5. What did you like the most? What was your least favourite part?
6. On a scale of 1-10, compared to your other schoolwork, how would
you rate this program? Why?
7. Would you want to do a program like this again?
Theme 2: Student Engagement/ Learning
8. Do you think you’ll do anything differently as a result of the program?
Probe: Did it inspire you in any way to do more for the
environment?
Did you bring any of what you learned home?
Theme 3: Promotion / Communications
9. Did you get any choice in what kind of EE program your class was
going to get? If yes – go to question #13
10. Do any of you know what Green Street is? (Branding question). (Tell
them if they don’t know).
11. If yes, how did you hear about Green Street? (Probe: teacher, web,
poster)
12. Did anyone ever visit the Green Street web site? If so, what did you
think of it? Should there be environmental content on the site? Any
other ideas for the site?
Theme 4: Registration Process
13. Were any of you involved with registering for the program? If so,
what did you think of the process? Easy? Complicated?
14. Has anyone looked at the registration site on the web site? If so –
what did you think of it?
The Green Street program was actually set up to be student-driven – so
you can go in, look at EE programs, and choose what you’d like to do
with your school, then bring it to your teacher and apply together.
15. What do you think of this idea?
Probe: Do you think you’d do this? Would it work in your class
/school? Why or why not?
Theme 5: Future Directions
16. I want to hear about your preferences here - If you had your choice,
what kinds of EE programs would you like to see in schools?
Green Street is looking at other ways to get students involved in
environmental programs and events, like on-line chat rooms, student
conferences, chances to travel, and work with environmental groups.
17. What do you think? Would you participate in these things if they
were available? Why/when would you use the on-line chat rooms /
go to student conferences? Any other ideas?
Key points:
To close, I’d like to hear from each of you what you feel the most
important elements of the discussion have been. If you like, you can
take a minute or two to write them down, to help
summarize/synthesize your thoughts.
End Notes:
Thank attendees for participation.
Offer to send summary of final report.
Provide contact information in the event of further thoughts or ideas to be
passed on.
E. Student Class Action Plans Feedback Forms
The following table summarizes the tools used by the Sierra Club of
Canada, BC Chapter to gather information on what students retained
after participating in one of its environmental education programs.
Each specific Class Action Plan (CAP) is tailored to a particular
age/grade level, and is designed to measure knowledge of key learning
outcomes introduced (younger grades) and/or stewardship/
environmental actions taken (older grades). Each handout is printed on
tree-free (kenaf) paper and includes a self-addressed envelope, to
encourage a higher return rate.
Class Action Plans: Built-In Mechanisms for Evaluating TREE Team/ESP!
School Programs

Program: Journey Through the Temperate Rainforest (Grades K-2)
Title: Rainforest Report
Tool: Handout with room for youth to draw and/or write one thing they
learned from the program.
Method: Introduced at end of program, left with self-addressed envelope
to be mailed back by teacher.

Program: Weaving Wildlife Webs (Grades 2-4)
Title: Create a Rainforest Creature Report
Tool: Handout with room for youth to draw and write about a temperate
rainforest (TRF) animal, noting its (one or all): (a) predator/prey
relationship; (b) habitat; (c) adaptations for living in the TRF.
Method: Introduced at end of program, left with self-addressed envelope
to be mailed back by teacher.

Program: Trees and Fishes, Needs and Wishes (Grades 2-4)
Title: Eco-Detective: Special Species Sheet
Tool: Handout with room for youth to draw and/or write about one of the
endangered species in the program (Kermode bear, Orca whale, Marbled
Murrelet, Phantom Orchid, and/or Tailed Frog). Questions include (one or
all): 1) What are the species’ habitat requirements? 2) Why is the species
endangered/threatened? 3) What can students do to help the species
(environmental stewardship)?
Method: Introduced at end of program, left with self-addressed envelope
to be mailed back by teacher. For classes that respond, a certificate will
be sent.

Program: You, Me, and the Trees (Grades 5-7)
Title: The Great Tree Challenge
Tool: The Great Tree Challenge is introduced. A cut-out outline of a tree
is put up, and handouts of pinecones are given to each student. After a
class brainstorm on stewardship actions and goals, each student writes an
individual stewardship action idea on their pinecone and posts it on the
tree. After they have implemented their action idea, they return a
Feedback Form to Sierra Club.
Method: Teachers keep the tree and the pinecones, and mail back the
Feedback Form.

Program: Fish, Forests, and Fungi (Grades 5-7)
Title: The Great Salmon Challenge
Tool: The Great Salmon Challenge is introduced. An outline of a tree is
put up, and handouts of salmon are given to each student. After a class
brainstorm on rainforest connections and stewardship actions, each
student writes an individual action idea on their salmon and posts it on
the tree. After they have implemented their action idea, they return a
Feedback Form to Sierra Club.
Method: The class keeps the tree and the salmon, and mails back the
Feedback Form.

Program: All Aboard the Steward-Ship (Grades K-2)
Title: Steward-Ship Trip Report
Tool: Students draw a picture of their voyage on their handout, and list
(grade-dependent) three ways in which they can be environmental
stewards.
Method: Introduced at end of program, to be mailed back by teacher.

Program: Stewardship Superheroes! (Grades 2-4)
Title: Stewardship Superhero Logo
Tool: Students develop a Stewardship Superhero logo on a blank handout
given to them. Each logo is pinned up on a giant Superhero Sheet.
Students also write what action they will take in their lives or classroom
to fulfill their role as stewardship superheroes, and then submit a
Feedback Form to Sierra Club.
Method: Teachers mail back the Feedback Form.

Program: ESP! Investigators (Environmental Stewardship Powers!)
(Grades 5-7)
Title: The Mystery Case of Sick Planet Earth
Tool: Students draw/write about one stewardship action idea that they
learned about, and report on one stewardship action that they have
implemented in their lives.
Method: Teachers mail back the feedback form.
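Because every CAP depends on teachers mailing forms back, it may be
worth tracking the return rate for each program over the year to see
how well these built-in mechanisms are working. A minimal Python
sketch follows; the program names and counts are invented for
illustration.

    # Hypothetical tally of Class Action Plan forms distributed vs. returned.
    # All counts are invented; substitute real program records.
    forms = {
        "Journey Through the Temperate Rainforest": {"sent": 24, "returned": 9},
        "The Great Tree Challenge": {"sent": 15, "returned": 11},
        "Stewardship Superheroes!": {"sent": 18, "returned": 6},
    }

    for program, c in forms.items():
        rate = c["returned"] / c["sent"]
        print(f"{program}: {c['returned']}/{c['sent']} returned ({rate:.0%})")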
End Notes
Glossary
Action research: Applied research that engages practitioners with researchers in the
study of organizations or activities. Emphasis is on the ongoing improvement of
practice by the practitioners themselves.
Assessment: Process of or instrument for measuring, quantifying, and/or describing
aspects of teaching related to the attributes covered by the evaluation.
Case Study: A research strategy that documents and investigates a phenomenon in
its natural setting using multiple sources of evidence.
Control group: A group chosen randomly from the same population as the program
group but that does not receive the program. It is a stand-in for what the program
group would have looked like if it hadn’t received the program.
Correlation: The extent to which two or more variables are related to each other.
Empirical research: Research that relies on quantitative or qualitative data drawn
from observation or experience.
Evaluation: The systematic process of determining the merit, value, and worth of
someone (the evaluee, such as a teacher, student, or employee) or something (the
evaluand, such as a product, program, policy, or process). (See Assessment.)
Experiment: A study in which the evaluator has control over some of the
conditions in which the study takes place and some aspects of the independent
variables being studied. The criterion of a true experiment is random assignment of
participants to experimental and control groups.
Formative evaluation: A type of evaluation done during the course of program
implementation whose main purpose is to provide information to improve the
program under study. (See Summative Evaluation).
Focus groups: A method of data collection in which 6 – 12 respondents are brought
together to discuss and provide data on a particular issue(s).
Goal: A program’s desired outcome.
Impact: The net effects of a program. Impact may also refer to program effects for
the larger community. Generally it is a synonym for outcome.
Indicator: A measure that consists of hierarchically ordered categories arranged in
ascending or descending order of desirability.
Objectives: The specific, desired program outcomes.
Outcome: The end results of the program. Outcomes may be intended or
unintended, and be positive or negative.
Outcome evaluation: Study of whether or not the program produced the intended
program effects; relates to the phase of the program studied – in this case the end
result of the program (compare Process Evaluation).
Peer review: An assessment, often of a proposal, report or program, conducted by
people with expertise in the author’s field. Can also refer to peer evaluations done
by program participants at a team or individual level.
Post-test: A measure taken after a program ends.
Pre-test: A measure taken before a program begins.
Process evaluation: Study of what goes on while a program is in progress; it relates
to the program phase studied – e.g., program implementation (compare Outcome
Evaluation).
Qualitative research: Research that examines phenomena primarily through words
and tends to focus on dynamics, meaning and context. Qualitative research usually
uses observation, interviewing and document reviews to collect data.
Quantitative research: Research that examines phenomena that can be expressed
numerically and analyzed statistically.
Randomization: Selection according to the laws of chance. A sample can be
randomly selected from a population to represent the population. Randomization is
a rigorous process that seeks to avoid human bias.
Reliability: The consistency or stability of a measure over repeated use. An
instrument is reliable if repeated efforts to measure the same phenomenon produce
the same result.
Results-based accountability: Holding programs accountable not only for the
performance of activities but also for the results they achieve.
Self-evaluation: Self-assessment of program processes and/or outcomes by those
conducting the program.
Stakeholders: Those with a direct or indirect interest (stake) in a program or its
evaluation; can be people who conduct, participate in, fund or manage a program,
or who may otherwise be affected by decisions about the program/ evaluation.
Summative evaluation: Study conducted at the end of a program (or phase of the
program) to determine the extent to which anticipated outcomes were produced. It
is intended to provide information about the worth of the program. (See
Formative Evaluation.)
Validity: In measurement, validity refers to the extent to which a measure captures
the dimension of interest. In analysis, validity refers to the close approximation of
study conclusions to the “true” situation.
Variable: A measured characteristic, usually expressed quantitatively, that varies
across members of a population. In evaluation, variables may represent inputs,
processes, interim progress markers, longer-term outcomes, and unintended
consequences.
Note: Several of the above definitions are drawn from multiple sources, including
Weiss (1998), Cohen and Manion (1994), and Gall, Borg, and Gall (1996).
References
Brower, Michael and Warren Leon. (1999). The Consumer’s Guide to
Effective Environmental Choices: Practical Advice from the Union of
Concerned Scientists. Random House.
Caduto, Michael J. (1985). A Guide to Environmental Values
Education. Environmental Education Series #13, UNESCO-UNEP
International Environmental Education Programme.
Cohen, Louis and Lawrence Manion. (1994). Research Methods in
Education. Fourth Edition. Routledge, London and New York.
Cox, Philip. “Implementing Results-Based Planning and Management
Among Community Organizations”, Plan:Net Limited.
Council for Environmental Education / Project WILD (1995). Taking
Action. An Educator’s Guide to Involving Students in Environmental
Action Projects. Project WILD / World Wildlife Fund, Bethesda, MD.
Cousins, J. Bradley, and Lorna M. Earl. (1992) “The Case for Participatory
Evaluation.” Educational Evaluation and Policy Analysis, Winter 1992,
Vol. 14, No.4, pp. 397-418.
Einstein, Daniel. (1995). The Campus Ecology Research Project: An
Environmental Education Case Study. Institute for Environmental
Studies, University of Wisconsin-Madison.
EHR/National Science Foundation. The User-Friendly Handbook for
Project Evaluation.
http://www.ehr.nsf.gov/EHR/RED/EVAL/handbook/handbook.htm.
Gall, Meredith D., Walter Borg and Joyce P. Gall. (1996). Educational
Research. An Introduction. Longman Publishers, White Plains, NY.
Hammond, William. (1997). “Educating for Action: A Framework for
Thinking about the Place of Action in Environmental Education.” Green
Teacher, Winter 1996-97.
Hollweg, Karen S. (1997). Are We Making a Difference? Lessons
Learned from VINE Program Evaluations. North American Association
for Environmental Education, Washington, DC / Troy, Ohio.
Huberman, M. (1990). “Linkage between researchers and practitioners:
A qualitative study.” American Educational Research Journal, 27, 363-391.
Joint Committee on Standards for Educational Evaluation. (1994). The
Program Evaluation Standards: How To Assess Evaluations of
Educational Programs. 2nd Edition. Sage Publications, Thousand Oaks,
CA.
King, Jean A, Lynn Lyons Morris and Carol Taylor Fitz-Gibbon. (1987).
How to Assess Program Implementation. SAGE Publications, Newbury
Park, London.
McNamara, C. (1999). Basic Guide to Outcomes-Based Evaluation in
Nonprofit Organizations with Very Limited Resources.
www.managementhelp.org/evaluatn/outcomes.htm
Morgan, David L. (1997). Focus Groups as Qualitative Research.
Qualitative Research Methods Series Volume 16. SAGE Publications,
Newbury Park, London.
Morris, Lynn, Carol Taylor Fitz-Gibbon, and Marie Freeman. (1987).
How to Communicate Evaluation Findings. SAGE Publications, Newbury
Park, London, New Delhi.
NEEAC. (1996). Report Assessing Environmental Education in the
United States and the Implementation of the National Environmental
Education Act of 1990. NEEAC, Washington, DC.
North American Association for Environmental Education. (1996).
Environmental Education Materials: Guidelines for Excellence. NAAEE,
Rock Spring, GA.
North American Association for Environmental Education. (2002).
Guidelines for Excellence in Nonformal Environmental Education
Program Development and Implementation (draft). NAAEE, Rock
Spring, GA.
Orr, David. (1992). Ecological Literacy: Education and the Transition to
a Postmodern World. State University of New York Press, Albany, NY.
Plan:Net Limited. (2002). Splash and Ripple: Using Outcomes to Design
and Guide Community Work.
SAMHSA-CSAP-NCAP. (2000). Getting to Outcomes. Substance Abuse
and Mental Health Services Administration, Center for Substance Abuse
Prevention, and National Center for the Advancement of Prevention.
Sanders, James R. (Chair). (1994). The Program Evaluation
Standards. The Joint Committee on Standards for Educational
Evaluation, The Evaluation Centre, Western Michigan University,
Kalamazoo.
Staniforth, Susan & Leesa Fawcett. (1994). Metamorphosis For
Environmental Education: A Core Course Guide for Primary /
Elementary Teacher Training. The Commonwealth of Learning,
Vancouver, BC/ UNESCO.
Stokking, H., van Aert, L., Meijberg, W. and A. Kaskens. (1999).
Evaluating Environmental Education. IUCN Commission on Education
and Communication (CEC), Gland, Switzerland, and Cambridge, UK.
UNESCO-UNEP. (1978). Final Report, Intergovernmental Conference on
Environmental Education. Organized by UNESCO in cooperation with
UNEP, Tbilisi, USSR, 14-26 October 1977. Paris: UNESCO.
UNESCO-UNEP. (1976). “The Belgrade Charter”. Connect: UNESCO-UNEP
Environmental Education Newsletter, Vol. 1 (1), pp. 1-2.
United Way. (1996). Measuring Program Outcomes: A Practical
Approach. United Way of America. Item # 0989, Eighth Printing.
Weiss, Carol H. (1998). Evaluation. Methods for Studying Programs and
Policies. Second Edition. Prentice Hall, New Jersey.
Resources
A. On the Web: Resources on Program Evaluation and Outcome-Based Measurement
Carter McNamara
McNamara is author of several excellent websites on evaluation and
evaluation-related information, which have been cited extensively
throughout this document.
Basic Guide to Program Evaluation
http://www.mapnp.org/library/evaluatn/fnl_eval.htm
General Planning and Management for Organizations
www.mapnp.org/library/plan_dec/plan_dec.htm
Evaluation Activities in Organizations
http://www.mapnp.org/library/evaluatn/evaluatn.htm
Evaluation Assistance Centre
Evaluation Handbook (1995), by Judith Wilde, PhD, and Suzanne Sockey, PhD.
http://www.ncela.gwu.edu/miscpubs/eacwest/evalhbk.htm
Joint Committee on Standards for Educational Evaluation
Program Evaluation Standards, by the Joint Committee on Standards
for Educational Evaluation.
For a standards overview: http://www.wmich.edu/evalctr/jc/JCOverviewNet.htm
To see the standards:
http://www.eval.org/EvaluationDocuments/progeval.html
National Science Foundation (NSF)
User-Friendly Handbook for Project Evaluation developed by NSF.
http://www.ehr.nsf.gov/EHR/RED/EVAL/handbook/intro.pdf
University of Ottawa, School of Medicine
Home to A Program Evaluation Tool Kit, which includes examples of
Logic Models.
www.uottawa.ca/academic/med/epid/toolkit.htm
United Cerebral Palsy, Greater Utica (NY) Area
Outcomes Focused Evaluation. A terrific list of Internet Resources for
Nonprofits.
www.ucp-utica.org/uwlinks/outcomes.html
United Way of America
The United Way of America Outcome Measurement Resource
Network site provides many excellent outcome measurement
resources and lessons.
www.unitedway.org and http://national.unitedway.org/outcomes/
W.K. Kellogg Foundation
Home to several key resources, including the W.K. Kellogg
Foundation Logic Model Development Guide and the W.K. Kellogg
Foundation Evaluation Handbook.
www.wkkf.org
B. Website Resources on Environmental Education
The Association Québécoise pour la Promotion de L'éducation Relative à
L'environnement (AQPERE)
AQPERE is a non-profit organization that brings together mainly
individuals and organizations in Quebec working in the field of
environmental and sustainable development education and training.
http://ecoroute.uqcn.qc.ca/educ/aqpere/index.html
Canadian Journal of Environmental Education
The Canadian Journal of Environmental Education is a refereed
journal published once a year that seeks to further the study and
practice of environmental education.
http://www.yukoncollege.yk.ca/programs/cjee.htm
Canadian Network for Environmental Education (EECOM)
EECOM is a national ENGO whose mission is to engage Canadians in
learning about their environment by enabling teachers and others to
1) work together in ways that nurture environmentally informed
and responsible individuals, organizations and communities, and to
2) improve the quality and effectiveness of their services.
http://www.eecom.org/
Canadian Parks and Wilderness Society (CPAWS)
The CPAWS Education Program helps teachers and students take
informed action on important conservation issues, such as endangered
species or the protection of Alberta’s threatened ecosystems, as the
culmination of a curriculum-based education process.
http://www.cpawscalgary.org/education/index.html
In addition, CPAWS provides several excellent environmental
education documents online, including How to Get Action in Your
Environmental Education Program and What is Good Environmental
Education, both at
www.cpawscalgary.org/education/network-environmentaleducation/capacity-building.html
Numerous activity guides and lesson modules, written to help
teachers teach about and integrate environmental concepts in their
lessons can be found at:
http://www.cpawscalgary.org/education/free-resources/lessons.html
The Take a Stand! Activity can be found at:
www.cpawscalgary.org/education/free-resources/why-y2y.html.
FEESA
FEESA, An Environmental Education Society, is a private, non-profit
education organization established in 1985 to promote, co-ordinate
and support bias-balanced environmental education across Alberta.
http://www.feesa.ab.ca
Global Environmental and Outdoor Education Council (GEOEC)
The GEOEC of the Alberta Teachers' Association is a group of
educators who are interested in outdoor education and teaching
about the environment.
http://www.rockies.ca/geoec
Green Street
Green Street is a Canada-wide initiative designed to link teachers
with quality environmental education programs provided by
environmental organizations. Green Street's mission is to offer high
quality programs that actively engage Canadian elementary and
secondary school students in learning about the environment,
promoting environmental stewardship, and taking further action.
www.green-street.ca
Green Teacher
Green Teacher is a magazine by and for educators to enhance
environmental education across the curriculum at all grade levels.
An excellent resource!
http://www.greenteacher.com
North American Association for Environmental Education (NAAEE)
NAAEE is a network of environmental educators throughout North
America & in over 55 countries around the world.
http://naaee.org/.
Downloadable versions of the Excellence in Environmental
Education series, including Guidelines for Learning (K-12) and
Environmental Education Materials: Guidelines for Excellence can be
found at:
http://naaee.org/npeee/
and Guidelines for Excellence in Nonformal Environmental
Education Program Development and Implementation at:
http://naaee.org/npeee/nonformalguidelines.php/.
Sierra Club of Canada – BC Chapter
The Sierra Club of Canada's BC Chapter is committed to protecting
BC's wild lands and wildlife. Our learning resources and school
programs are designed to foster appreciation and understanding of
the many different value systems and perspectives surrounding
environmental issues and solutions.
http://www.sierraclub.ca/bc/
FAX BACK FORM
To: Gareth Thomson, Education Director
Fax: 403-678-0079
Re: Measuring the Success of Environmental Education Programs
~ Feedback Form ~
Your feedback is important to us! Measuring the Success of Environmental Education
Programs is a living document; the Canadian Parks and Wilderness Society intends to improve
the quality and the value of this document by incorporating feedback from both funders and
environmental education groups. We welcome all comments and suggestions for change.
Do you feel that this document makes a positive contribution to your work? Please state
why.
____________________________________________________________
____________________________________________________________
Which section(s) of Measuring the Success... did you find most useful?
____________________________________________________________
____________________________________________________________
____________________________________________________________
Which section(s) of Measuring the Success... did you find LEAST useful?
____________________________________________________________
____________________________________________________________
____________________________________________________________
Is there anything you would add or change?
____________________________________________________________
____________________________________________________________
____________________________________________________________
Please place any additional comments on a separate page.
Name (optional): _______________________________________________
Organization: _________________________________________________
Thank you! Please return this form to us, either via fax or email.
CANADIAN PARKS AND WILDERNESS SOCIETY
CALGARY-BANFF CHAPTER
CANMORE OFFICE: 403-678-0079 (PHONE/FAX)
E-MAIL:
[email protected]