DESIGN, MONITORING AND EVALUATION
GUIDEBOOK
March 2003
This Guidebook is designed as a “Living Document” that will be revised and updated as needed. Send
comments, concerns and suggestions to the New Initiatives team (Graham Craft, [email protected] or
Rob Zeaske, [email protected] ) or via your HQ program officer.
TABLE OF CONTENTS
INTRODUCTION
Purpose…………………………………………….………………………………………….……3
Special Features……………………………………………………………………………….……4
FUNDAMENTALS OF PROJECT DESIGN
In this Section………………………………………………………………………………………5
The Logic of Goal-Oriented Design
The Logical Framework…………………………………………………………………..6
The 8 Steps of Project Design
#1 Assessing the Situation………………………………………………...………………9
#2 Setting our Goal……………………………………………………………………....10
#3 Choosing Objectives………………………………………………………………….12
#4 Outputs………………………………………………………………………………..13
#5 Activities……………………...………………………………………………………14
#6 Indicators……………………………………………………………………………...14
#7 The Reasonableness Test………………………………………………………………18
#8 Completing a Work Plan…………………………………………………………...…19
SOUND MONITORING MANAGEMENT
In this Section…………………………………………………………………………………...…23
Monitoring Defined………………………………………………………………………………..24
Improving Monitoring Efficiency………………………………………………………………....25
Reviewing the Data and Acting On It……………………………………………………………..26
Communicating Monitoring Data and Conclusions……………….………………………………27
Participatory Monitoring…………………………………………………………………………..28
CRITERIA FOR A USEFUL EVALUATION
In this Section……………………………………………………………………………………...29
Evaluation Defined………………………………………………………………………………...29
Monitoring vs. Evaluation…………………………………………………………………………29
Evaluation Purposes…………………………………………………………………………….....29
Internal, External and Participatory Evaluations………………………………………………..…30
When and What to Evaluate……………………………………………………………………….32
Evaluations Begin with a Focused Scope of Work………………………………………………..33
Methods……………………………………………………………………………….…………...33
Review of the Results…………………………………………………………………………...…35
Final Report Format……………………………………………………………………………….35
GLOSSARY OF KEY TERMS……………………………………………………………………………..36
APPENDICES
A- Sample Logical Framework
B- Sample Work Plan
C- Sample Indicators Plan
D- Evaluation Scope of Work Template
E- Comparison of Mercy Corps and Donor Terminology
F- Sphere Guidelines and the Mercy Corps DM&E Guidebook
G- The DM&E Checklist
INTRODUCTION
The purpose of this Guidebook is to establish a set of common principles for the Design,
Monitoring and Evaluation (DM&E) of Mercy Corps projects, programs and Annual
Plans. These principles are based on established practices developed by our field
personnel, colleague agencies, major donors and professional associations. As such, we
are not attempting to establish a new way to do DM&E. Instead, this Guidebook provides
Mercy Corps’ diverse programs and worldwide staff with a common approach to DM&E.
Using the Guidebook will ensure that all Mercy Corps’ projects are designed using the
same key principles and that staff have a common language for discussing issues related
to DM&E. At the same time, the Guidebook is designed to preserve program staff’s
flexibility and independence to define their own, context-specific goals, objectives,
indicators and methods. By improving our ability to monitor, evaluate and report on
programs, Mercy Corps will be better able to document its experiences, communicate
them, learn from them and incorporate that learning into future programs.
Purpose
The DM&E Guidebook initiative will assist MC offices to a) design high quality
programs & implement them efficiently and effectively, b) measure outcomes and
impact, and c) document experiences and share them across the agency, with donors and
the general public.
The Guidebook’s “design” section will help ensure that all new programs are impact-
oriented and are easier to monitor and evaluate through:
• A goal-driven design process and logical framework
• The choice of a manageable number of SMART1 Objectives and indicators.
• The collection of adequate, relevant baseline data related directly to the indicators.
• The creation of an efficient system for monitoring and evaluation including ensuring
that staff time and other resources are built into the work plan and budget.
• The creation of programs that best meet local needs and conditions through focused,
participatory assessments.
1. Generally, Objectives and Indicators should be Specific, Measurable, Achievable, Relevant and Time-bound. More on this below.
The evaluation section will:
• Improve future program design/implementation by documenting successful strategies,
potential pitfalls and effective methods for avoiding them.
• Help assess the end-result of program activities, document what was achieved and
measure impact.
• Hold MC and partners accountable to both the donor and the communities we serve.
Of course, many donors and colleague agencies have their own specialized vocabulary
and processes for project design. How will our framework fit with those of our major
donors? The answer, we believe, is “Quite easily.” We have chosen our tools and
vocabulary based on a thorough review of standard practice in our industry. Rather than
simply adapt a system used by one of our major donors, we decided on a simplified
format that best fits Mercy Corps’ own needs. And since our format is based on standard
practice across our industry, it is easily translatable into a variety of other formats as
needed. Please see Appendix E for more specifics.
A number of other resources can be consulted for more specific Design, Monitoring and
Evaluation needs. For example, for the sub-set of programs that address the needs of
disaster affected populations, the Sphere Project Humanitarian Charter and Minimum
Standards in Disaster Response provide a greater level of detail and guidance as to
specific issues related to designing, monitoring and evaluating a disaster response
program. There are references to the Sphere Project Humanitarian Charter and Minimum
Standards in Disaster Response throughout this guidebook, highlighting the importance
and complementary nature of Sphere to Mercy Corps’ DM&E principles.
This Guidebook is the primary reference for how to do DM&E at Mercy Corps, whether
at the project, program or country strategy level. As such, the process and principles it
describes should be applied to all projects and programs. Supporting resources include:
1. Orientation & Training Module. Based on the Guidebook, this training module
serves as an orientation for our more experienced staff while also providing
examples and skills-building activities for those who need a more basic
introduction. The best way to become proficient with the DM&E principles is
practice. The training module provides this opportunity for staff at all levels. This
module can also be adapted for self-study and ToT use.
2. DM&E Checklist. Distills the contents of the Guidebook into a two-page list of
key principles to ensure good Design, Monitoring and Evaluation. Use it to
remind yourself of key issues when considering the design of a new project or
reviewing a proposal. This is included as Appendix G.
3. Your HQ-based Program Officer and Sector Specialists. The New Initiatives
team in Portland provides a “help desk” function for program staff. They can help
you with specific needs including tools, assistance with indicators, and training. In
addition, the Program Officers for each region – and the sector specialists – are
good sources of support to the field for targeted DM&E advice.
The following three sections of the Guidebook describe and discuss each major step in
the Design, Monitoring and Evaluation process, followed by a brief list of key terms and
what they mean for Mercy Corps. The Guidebook concludes with an Appendix that
includes suggested formats (with completed examples) for the various tools described in
the preceding sections.
Two kinds of call-out boxes are used throughout the Guidebook:
Key Point: Underscores some of the most important tools and suggestions.
Look Out! Highlights common pitfalls of DM&E and helps us avoid them.
FUNDAMENTALS OF PROJECT DESIGN
In this section
In the past, project design has been almost synonymous with proposal development. But
repeated experience has demonstrated that there is much more to a well-designed project
than the information required for the average proposal. The steps outlined in this section
represent the minimum components necessary for a well-designed project or country
program. In fact, these design principles apply equally at all levels, whether we’re
reviewing the work of national or international partners, designing individual donor-
funded projects or developing country-wide Annual Plans. 2
This section introduces the key principles related to design and provides three important
tools for putting those principles into practice. These tools are:
1. The logical framework. A quick snapshot of the cause and effect logic that forms
the basis of our design. Primarily a planning document, a “log frame” helps us focus
on what we want to achieve and how we’ll do it. It also helps us communicate this
quickly to other team members, donors and external evaluators. Aside from being a
crucial design element, the log frame is also the foundation for planning mid-term and
final evaluations.
2. The work plan. A detailed work plan helps ensure that all important tasks are
planned for and carried out on time. This includes setting targets for project
performance and management tasks and assigning responsibility for achieving them
to specific staff members. The work plan acts like a road map for the implementation
of our projects or programs. It is a key management and monitoring tool.
3. The indicator plan. Helps us explain, in practical terms, how we define success and
helps ensure that we can actually measure it.
Together, these three tools are indispensable for quality project design. Appendices A-C
at the back of this book contain samples of a completed logical framework, work plan
and indicator plan. Please refer to them as necessary when reading this section.
2. While these principles apply at all levels, for the sake of brevity, we will refer to “project” design in most of the rest of this document.
The Logic of Goal-Oriented Design
The Logical Framework
When we describe our work, we tend to do so in terms of activities – “microcredit loan
programs” for example. However, obviously we are not doing these
activities for their own sake. The point of a “micro-credit lending program” is not to give
away money. We provide loans in order to generate sustainable incomes for our target
population. Therefore, when we design a project, we should first ask “WHY do we want
to do this” (what’s our goal?) and only then move on to decide “HOW will we address
this problem” (what should our activities be?).
Like any other tool, a logical framework is only as good as the material we use. If we put
“garbage” into the log frame, that’s what we’ll get back. The most important thing about
the process is the “logic” behind it rather than the precise definitions attached to each
part. Rather than just “filling in boxes”, completing a log frame asks us to consider the
causal chain of events and assumptions that make up our project.
The “goal-oriented” approach is outlined below. Let’s assume our assessments reveal a
situation of “high mother and infant mortality” in our target region. We start the design of
a solution by thinking about what “impact” we want to achieve. Why are we undertaking
a project in the first place? What fundamental change in the living conditions of our
target group do we hope to bring about? If our problem is “high mother/infant mortality”,
what we want to achieve with our project is probably “a healthy mother/infant
population.”
Goal (Impact)
A healthy mother/infant population
Next, we determine the key changes in the target population that will be required to
achieve this impact. Much of the time, we will need to change peoples’ knowledge,
attitudes or behaviors. Changes at this level are called the project’s “effects.”3 Each
“effect” we have takes us one important step closer to achieving our planned “impact.”
In this example, the “effects” we need to achieve our goal might include the following:
Goal (Impact)
A healthy mother/infant population
    ↑
Objective (Effect)
Mothers make Pre-Natal Visits to Clinics

3. The definitions of “impact” and “effect” used here were borrowed from the “Causal Pathways” concept developed by the International Rescue Committee in The IRC Causal Pathway Framework: A Guide to Program Design, Monitoring and Evaluation (New York, 2001).
Now we determine what goods and services will be needed to help people change their
knowledge, attitudes or behaviors. Think of them as our “deliverables”, the concrete
things that our project must produce to achieve the “effects” we are seeking. These are
called our project “outputs.” In this case, in order for mothers to make pre-natal visits to
clinics, we might need to ensure that sufficient clinics exist, that they are adequately
staffed and equipped. 4
Goal (Impact)
A healthy mother/infant population
    ↑
Objective (Effect)
Mothers make Pre-Natal Visits to Clinics
    ↑
Outputs
Well Equipped, Staffed Clinics
And this brings us back to “activities” or the main things we will actually do during our
project. These are the activities that will produce the “outputs.”
Goal (Impact)
A healthy mother/infant population
    ↑
Objective (Effect)
Mothers make Pre-Natal Visits to Clinics
    ↑
Outputs
Well Equipped, Staffed Clinics
    ↑
Activities
Clinic Construction/Rehabilitation, Training for Clinic Staff
4. Note: This is a greatly simplified example. In actual practice, a number of other variables would almost certainly be involved, including raising public awareness about maternal health needs and resources.
As this example suggests, it is much easier to design effective activities when we are
clear about the impact and effects that we want to achieve. The strength of the logical
framework approach is that it reminds us to start with the “impact” and work backwards,
ensuring that each step in the pathway is logically linked to the following one. It is also
much easier to measure progress and evaluate impact when we have clearly expressed
what we want to achieve and how we plan to get there.
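For readers who find it useful to see the chain written out in one place, the short sketch below lays out the maternal health example as a simple structure and reads it from activities up to impact. It is purely illustrative: the field names are invented for this example, and the actual Mercy Corps log frame format is the one shown in Appendix A.

    # Illustrative sketch only: the maternal health example as a nested structure.
    # Field names are invented for this example; the actual Mercy Corps log frame
    # format is the one shown in Appendix A.
    log_frame = {
        "goal": "A healthy mother/infant population",
        "objectives": [
            {
                "effect": "Mothers make pre-natal visits to clinics",
                "outputs": ["Well equipped, staffed clinics"],
                "activities": [
                    "Clinic construction/rehabilitation",
                    "Training for clinic staff",
                ],
                "indicator": "% of mothers who attend at least two pre-natal visits",
            }
        ],
    }

    # Reading the chain from the bottom up is a quick logic check: do the activities
    # produce the outputs, do the outputs create the effect, does the effect advance the goal?
    for obj in log_frame["objectives"]:
        chain = [
            "Activities: " + "; ".join(obj["activities"]),
            "Outputs: " + "; ".join(obj["outputs"]),
            "Effect: " + obj["effect"],
            "Impact: " + log_frame["goal"],
        ]
        print("\n  -> ".join(chain))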
The following section details an eight-step approach to goal-oriented design that begins
with a thorough assessment of the problem and ends with the completion of a detailed
and clearly articulated logical framework, work plan and indicator plan – the key
elements of any sound project design.
The 8 Steps of Project Design
#1) Assessing the Situation.
[Diagram: “Program Design” shown as the area of overlap between “Donor Interest” and the “Target Area”.]
Participatory Approaches
The best situation assessments are those that include the highest level of participation
possible. Of course, time, resources and other factors will determine the nature of each
individual situation assessment. In some cases, our expatriate and national staff will meet
to brainstorm on a situation and possible responses. Whenever possible, national NGO
and other stakeholder partners will be invited to join the process. In the best cases,
Mercy Corps and national NGO partners will be able to conduct focus groups, key
interviews and surveys to help develop our understanding of the problem and the most
appropriate solutions. In all cases, our assessments should also make use of additional
information from donors, other international and national NGOs, the UN, local and
national government, and the press.
During the design phase, we normally consider a range of possible interventions, target
areas and partners, some of which we discard as unworkable based on what we learned
during the assessment. Project designs (and proposals) state pretty clearly the reasons for
what we have decided to do. But they generally don’t include much discussion of what
we decided NOT to do. This information can be vital for staff who come on board after
the design has been completed. By retaining all our assessment data and analysis, we help
those who come after us understand the “big picture” and avoid reinventing the wheel.
Key Point: Assessment teams should always retain assessment information, even when it doesn’t
find its way into a design document or proposal.
Tools: A range of tools is available for conducting assessments in a variety of
situations. Most are included on the DME Training CD-ROM and on the Mercy Corps
Digital Library.
#2) Setting our Goal. Once we have identified the problem or challenge through an
assessment, we move on to thinking about what kind of change we need to make in order
to improve the situation. If our assessment reveals a high mother/infant mortality rate, for
example, our desired future might be “A healthy mother/infant population” for our target
communities. This becomes our project’s “goal.”
A Goal is: A simple, clear statement of the “impact” we want to achieve with our
project, the change we hope to bring about in the target population’s standard of
living. This may be quantifiable, but it doesn’t have to be. It may not be something that
Mercy Corps can even do alone! It is simply our big-picture purpose for doing the
project.
Key Point: Remember to think in terms of impact rather than activities. This is usually written as an
“end-state” rather than an action. That’s why our goal is “a healthy mother/infant population”
rather than “to build maternal health clinics and train staff”.
Creating a working goal is an iterative process that may require several attempts. It’s easy
to think of a goal in terms of existing priorities – the proposal parameters, current
activities, or departmental initiatives. But do your best to think big and creatively, relying
on the assessment results to guide a bold vision of the future. While we shouldn’t be
intimidated by a big goal, we should also make sure that it is of correct scope.
In general, a goal should:
Look Out! A common mistake is to describe the entire project in the goal statement including the
desired impact, our expected results and our methods for achieving them. These are important
design elements, but each has its own place in the log frame. Trying to include them all in the goal
only makes for long, confusing statements that do not help us focus or communicate our logic.
Bad • Improved Quality of Medical Care – with Special Emphasis on the Needs of
Mothers and Children – Achieved Through Public Awareness Campaigns, Targeted
Health Worker Training, New Clinic Construction in Under-Served Areas and the
Provision of Key Supplies determined through Participatory Assessments
(“Improved Quality of Care” is too vague, the rest is too specific and will be covered
in the Log Frame)
Bad • Provide training to Mothers and Children to make them more healthy (Really more
of an activity rather than a goal)
Fair • Healthy Mothers and Children (A fine, but not very reachable, vision of the future)
Choosing a goal is the first step in completing a log frame (see example below). This
should serve as a good starting point for our design, although we may revise the goal as
we identify the remaining pieces of the log frame.
GOAL: Ask: What is the impact we want to achieve? What does our community look like if we are successful?
Healthy Mothers and Infants in our target population

OBJECTIVES: Ask: What are the desired effects on people’s knowledge, attitudes, and behaviors?
KEY OUTPUTS: Ask: What final goods and services will we provide?
MAJOR ACTIVITIES: Ask: What daily efforts contribute to our outputs?
INDICATORS: Ask: How will we know if we have achieved our Objective?
#3) Choosing Objectives. The next task is to determine what changes will be necessary
in order to achieve our goal. These are the program’s objectives. Typically, we will ask,
“What “effect” on people’s lives do we want to achieve?” and, “What has to happen to
make the goal a reality?” Often, to meet our project’s goal, we must facilitate a change in
the target communities - these will be changes in a population’s “knowledge, attitudes
and behaviors.” Creating a complete set of objectives is the second step in the logical
framework process (see example below).
Key Point: Again, we should avoid thinking of our objectives in terms of activities,
focusing instead on the “effects”, the end-state we’d like to see as a result of our
activities.
Since these changes need to happen to make our goal a reality, they need to be things that
we can commit ourselves to achieving. Remember, our goal is something that we can
contribute to but not necessarily achieve all by ourselves. Our objectives, on the other
hand, are things that we believe we can accomplish through our project. And since we’re
committed to achieving them, we need to define them in a way that lets us know when
(and if) we’ve been successful. In general, objectives should be “SMART” which means
they should be:
Specific
Measurable
Achievable
Relevant
Time-bound (meaning they have a clear beginning and end).
Examples of Objectives – based on a Goal of “Healthy Mothers and Infants in our Target
Population”:
Bad • “Build 7 New Health Clinics by the End of the Project” (SMART, but more of an
output/activity than an objective. We really want our objective to be “Mothers
visiting the health clinics.”)
Good! • “75% of mothers make 2 prenatal visits to a quality health center by end of project”
(SMART, and a lasting effect on our target population)
GOAL: Ask: What is the impact we want to achieve? What does our community look like if we are successful?
Healthy Mothers and Infants in our target population

OBJECTIVES: Ask: What are the desired effects on people’s knowledge, attitudes, and behaviors?
75% of mothers make 2 prenatal visits to a quality health center by end of project
KEY OUTPUTS: Ask: What final goods and services will we provide?
MAJOR ACTIVITIES: Ask: What daily efforts contribute to our outputs?
INDICATORS: Ask: How will we know if we have achieved our Objective?
The log frame above shows just one objective as an example. Normally, a complete log
frame would require several objectives. See Appendix A for an example of a completed
logical framework. In addition to being SMART, we need to make sure our objectives
are:
• Logically correct (Do they lead directly to our goal?),
• Comprehensive (Did we leave anything out that we need to do to achieve
the goal?)
#4) Outputs. These are the things, the “goods and services” that need to exist in order for
us to achieve our objective. They are usually our “final products” or “deliverables” that
we provide to create the effects that we seek.
Output (good or service)   →   Objective (effect)
Conduct Training           →   Changes in behavior
In each of these examples, we are making an assumption that the piece of the project that
we are responsible for delivering (training, loan, or school feeding) will actually cause a
change. We should carefully examine each logical framework and design to make sure
that our assumptions are valid for our context.
#5) Activities. These are the daily chores we need to implement to achieve the outputs. A
log frame only needs to list the principal activities, those things that explain in broad
terms how we will operate. In a detailed work plan, these activities would be further
broken down into smaller steps.
GOAL: Ask: What is the impact we want to achieve? What does our community look like if we are successful?
Healthy Mothers and Infants in our target population

OBJECTIVES: Ask: What are the desired effects on people’s knowledge, attitudes, and behaviors?
KEY OUTPUTS: Ask: What final goods and services will we provide?
MAJOR ACTIVITIES: Ask: What daily efforts contribute to our outputs?
INDICATORS: Ask: How will we know if we have achieved our Objective?
#6) Indicators. Indicators are units of measure that demonstrate our success in
implementing our project. Indicators can be attached to each element of our log frame,
but we are particularly interested in identifying good indicators for our objectives. For
example, a good indicator for Objective One of our health project might be “% of
mothers who attend at least two prenatal visits.”
It is important to determine good indicators as early as possible in the life of the project.
In the best case, we should do this during the design phase. That allows us to make sure
the information is available and develop a plan for gathering it. Most critically, it means
we can make sure the project work plan and budget include adequate resources (staff,
time, or funding) to gather the information. In some ways, poorly thought-out indicators
are worse than no indicators at all because they:
The Indicator Plan (see Appendix C) is a useful tool for defining quality objectives and
indicators, as well as providing the beginnings of a plan for baseline data collection. It
helps us define what our indicators mean in relation to what they are supposed to
measure, their relevance to the project, and why we chose them. Completing an Indicator
Plan requires us to think about how we will get the information, from which sources and
on what schedule. Therefore, careful consideration of an Indicator Plan is the best way to
ensure that we have chosen good indicators that we can actually track. This tool is
especially useful for clarifying indicators that may be hard to measure. For example, how
will we know if we have achieved intangible things like “increased capacity” or “reduced
tension”? Using the indicator plan will help us define these concepts and communicate
how we’ll measure them.
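To make this more concrete, the sketch below shows roughly the kind of information a single Indicator Plan entry might capture for the maternal health objective. It is a simple illustration only: the field names and sample values are assumptions for this example, and the actual Indicator Plan format is provided in Appendix C.

    # Illustrative only: one Indicator Plan entry written out as a plain record.
    # The field names and values are assumptions for this example; see Appendix C
    # for the actual Indicator Plan format.
    indicator_plan_entry = {
        "objective": ("75% of mothers make 2 pre-natal visits to a quality "
                      "health center by end of project"),
        "indicator": "% of mothers who attend at least two pre-natal visits",
        "definition": "Mothers registered at a target clinic with two or more recorded visits",
        "why_chosen": "A direct measure of the behavior change the project seeks",
        "data_source": "Clinic registration records and a sample household survey",
        "collection_method": "Record review plus survey",
        "frequency": "Baseline, then every six months",
        "responsible": "Health program monitoring officer",
    }

    for field, value in indicator_plan_entry.items():
        print(f"{field}: {value}")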
Key Point: The Indicator Plan is based on a similar chart commonly used in USAID performance
monitoring plans and often required by them for proposals and/or work plans.
Because we are aiming for SMART objectives, many of them actually contain the targets
(and indicators) already! For the objective “75% of mothers attend at least two prenatal
visits,” the indicator will be “% of mothers who attend at least two prenatal visits.” The
target would be “75% of mothers by a specific date” – probably the end of the project.
We will talk more about targets in the work plan section below.
Look Out! Indicators vs. Targets. Indicators are often confused with “targets” (sometimes called
“benchmarks” or “milestones”). Remember:
• Indicators tell us what we want to measure. They are units of measure only.
• Targets have a specific value attached – usually a number and/or a date – and help us track our
progress.
When choosing indicators, we should also weigh whether the level of difficulty (and expense) is justified by the importance of the data.
Our intention is to have an “elegant” M&E system that collects enough data to meet our
needs but that does not waste time collecting unnecessary information.
In most cases, we would want to aim for the second and more important result. Of course,
our objectives will always be determined by a complex set of factors including
availability of information, length of the project, budget and staff time. But we should
always attempt to measure the deepest and most profound results possible.
GOAL: Ask: What is the impact we want to achieve? What does our community look like if we are successful?
Healthy Mothers and Infants in our target population

OBJECTIVES: Ask: What are the desired effects on people’s knowledge, attitudes, and behaviors?
KEY OUTPUTS: Ask: What final goods and services will we provide?
MAJOR ACTIVITIES: Ask: What daily efforts contribute to our outputs?
INDICATORS: Ask: How will we know if we have achieved our objective?
Sector Specialists and Standard Indicators. Mercy Corps’ sector specialists are the
primary resource available to MC staff for selecting the most appropriate indicators for
specific types of projects. They may be able to refer you to existing banks of standard
indicators. If pre-existing indicators are not available, the Sector Specialists (including
the New Initiatives DM&E “help desk”) can assist you to develop your own or adapt
indicators used by similar projects elsewhere.
The Sphere Project Humanitarian Charter and
Minimum Standards in Disaster Response
The Sphere project is an important example of how to use existing “best practice”
indicators to improve project design and performance. Sphere is the primary resource for
Mercy Corps disaster response programming (and is fully compatible with this
Guidebook). Sphere and other such resources are key examples of improving program
effectiveness using recommended indicators that have been endorsed through
unprecedented consensus by respected industry professionals. Use of such best practice
indicators can be very helpful when coordinating programs in a complex multi-agency,
multi-donor environment.
This is especially true for those “fuzzy” objectives that are hard to quantify and measure.
For example, if your project aims to “revitalize” communities following a man-made or
natural disaster, how will you define and measure that? You might begin by asking
members of the target group what a “revitalized” community would look like, asking
them how they would define or measure their own community’s vitality. Answers to
these questions would help you define your objectives and indicators in a way that is
appropriate to your location and ensures that your project is meaningful for the target
community.
Baseline Data
This is the set of data you collect on your indicators at the very beginning of a project (or
as soon after the beginning as possible). It provides you with a starting point to measure
against. The baseline is different from an assessment that potentially will collect a wide
variety of social, economic and political information. While an assessment attempts to
provide the “big picture” about conditions in a target area, the baseline focuses on the
state of our indicators at the beginning of our project.
Look Out! Baselines vs Assessments! Two more terms that are often confused. Remember,
baselines are collected only on information needed to track progress toward our targets.
To put it in very simple terms, imagine that you are starting a new diet.
• Maybe your goal is to become healthier.
• Your primary objective is to lose 5 kilos.
• Your indicator therefore would be “# of kilos lost”.
In order to measure your progress you need to know how much you weigh before
beginning the diet. That’s your baseline. You can check your weight each week and see
how close you are to meeting your objective.
In the case of our maternal health example, our main indicator is “% of pregnant women
who make 2 pre-natal visits to a clinic.” Our baseline would tell us what percentage of
women were already making such visits before our project began. We could measure the
same thing at the end of the project and that would show us our result, allowing us to
measure whether or not we had achieved our objective.
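A small worked example, using invented figures, shows how the baseline is used when interpreting an end-of-project result against the target:

    # Illustrative arithmetic only; the figures are invented for this example.
    # Indicator: "% of pregnant women who make 2 pre-natal visits to a clinic".
    baseline_pct = 30.0   # measured before the project began
    endline_pct = 68.0    # measured at the end of the project
    target_pct = 75.0     # the target written into the SMART objective

    change = endline_pct - baseline_pct
    progress_toward_target = change / (target_pct - baseline_pct) * 100

    print(f"Change since baseline: {change:.0f} percentage points")
    print(f"Progress toward target: {progress_toward_target:.0f}%")
    # Without the baseline figure, the endline result of 68% on its own could not
    # tell us how much change actually took place during the project.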
Key Point: You should generally plan on and budget for baseline data collection. Since baselines
are important in showing progress, you should plan on gathering this data unless there is a
compelling reason for NOT doing so.
#7) The Reasonableness Test. The “Reasonableness Test” helps us consider our draft design and see where we might
need to refine it. Whenever possible, it’s best to have some colleagues who have not
worked directly on the design help you review your log frame and indicator plan. Key
questions at this stage include:
If the answers to these questions are “Yes!” and other key design stakeholders (managers,
team members, partners & beneficiaries) concur, then we have completed a successful
conceptual design.
#8) Completing a Work Plan
Targets
These are key elements of the work plan that define where you plan to be at certain points
in the life of the project. We should set project targets that relate to our objectives and
achievement of our overall goal. In addition, we should also identify key management
activities that need to happen and set targets for achieving them. By having well-defined
targets at various stages of our work plan, we are better able to gauge our progress (or
lack of it) on a variety of levels and make timely changes (where needed) to keep our
project on track.
These targets are like landmarks to let us know we are on the right track to our final
destination.
Setting targets for these activities helps us monitor progress over the life of the project.
Also, in some cases, failure to meet our objective level targets is caused by an inability to
meet key management targets. If we state both kinds of targets clearly in our work plan, it
will make it easier to determine where things went wrong and learn how to avoid them in
the future. This is especially helpful in situations where we are likely to have high staff
turnover over the life of the project, where the management team at the end of a project is
totally different from the one that designed it and began implementation.
For example, perhaps we planned to get baseline data in Month Two but didn’t actually
complete this until Month Eight. Let’s say the reason was that we planned to buy
motorbikes for field monitors to carry out surveys in remote areas. But the motorbikes
were unexpectedly held up in customs until Month 6. If we set the baseline collection
and motorbike targets in our work plan, it should be clear from our donor reports what
went wrong and why we missed our deadlines. Otherwise, at evaluation time, given staff
turnover, it may not be clear what the original reason for the delay was – and so we won’t
be able to learn how to avoid similar delays in the future.
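As a purely illustrative sketch (the tasks and dates are invented), recording both the planned and actual timing of management targets keeps this kind of explanation visible long after the original team has moved on:

    # Illustrative only: two management targets from a work plan, with planned and
    # actual completion months, so slippage is visible at reporting and evaluation time.
    work_plan_targets = [
        {"task": "Procure motorbikes for field monitors", "planned_month": 2, "actual_month": 6},
        {"task": "Complete baseline data collection", "planned_month": 2, "actual_month": 8},
    ]

    for t in work_plan_targets:
        slip = t["actual_month"] - t["planned_month"]
        status = f"slipped {slip} months" if slip > 0 else "on time"
        print(f'{t["task"]}: planned Month {t["planned_month"]}, '
              f'actual Month {t["actual_month"]} ({status})')
    # Read together with the narrative report, the late baseline can be traced back
    # to the motorbike procurement delay rather than being left unexplained.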
By defining these things in our work plan, we help ensure that they actually happen.
Also, by making M&E an integral part of our project activities, we prevent these tasks from
being seen as something extra, something to do IN ADDITION to project duties.
Look Out! Does the Work Plan Match the Indicator Plan? Make sure that data collection and
review activities that are described in the Indicator Plan are also adequately provided for in the
Work Plan.
Work plans require different levels of information for different purposes. For example, a
single-page work plan is generally sufficient for a proposal. Most donors don’t want more
information than that prior to making a grant. On the other hand, such a simple work plan is
probably not detailed enough to be useful for project implementation. It is best to design
one that fits management needs and then make a simpler, more focused version to attach
to the proposal.
Key Point: The work plan – especially for multi-year projects – will probably need to be revised
from time to time to reflect new information or changing conditions. During the design phase, it’s
best to focus on details for the first 12 months and update the plan each year.
Conclusion
The table below summarizes the most important principles discussed in this section. You
may want to refer to it when designing or reviewing a new project or program.
DESIGN CHECKLIST
□ Assessment Conducted
   □ A. Assessment data not used in the proposal is kept for future reference.
□ Goal-Oriented Program/Project Design
   □ A. Design starts with defining a goal based on impact rather than activities.
□ SMART Objectives
   □ A. Key steps in the project which logically, reliably contribute to achieving our goal.
   □ B. Describe an “end-state” and focus on “effects” (changes in behavior, attitudes or knowledge in our target population) rather than activities whenever possible.
   □ C. SMART – Specific, Measurable, Achievable, Relevant & Time-bound.
□ Select Appropriate Outputs & Activities
   □ A. Logically, reliably contribute to our SMART objectives.
   □ B. Outputs represent our “deliverables” or final products for which we are responsible.
   □ C. Activities describe the key actions we’ll have to carry out to achieve our outputs.
□ Identify Indicators
   □ A. Fewer, more direct indicators that measure performance against our objectives as well as outputs.
   □ B. Consider relevant standard indicators and consult appropriate sector specialists & other resources (such as Sphere standards).
□ Formulate Work Plan
   □ A. Include monitoring as a key management activity and make resources available to carry it out, including roles and responsibilities, budgeting time for baselines, regular data collection, review and reporting.
   □ B. Include key management and implementation tasks, persons responsible and clear targets for achieving them so that we can track performance over time.
□ Approaches
   □ A. A high degree of participation of expat and national staff, representatives of the target group, partner organizations, etc., in the design of our strategy and in the implementation of the project.
   □ B. A focus on the highest level of impact or effects possible.
   □ C. All pieces of program design are logically and causally connected. (Logic is much more important than vocabulary.)
   □ D. An evidence-based approach that suggests our actions will be successful.
□ Final Products From Design Phase
   □ A. Completed Log Frame
   □ B. Completed Indicator Plan for our SMART Objectives
   □ C. Completed Work Plan
   □ D. Folder containing assessment data
   □ E. Finished proposal, if applicable
SOUND MONITORING MANAGEMENT
In this section
Albert Einstein said, “It’s simple, but it’s not easy” in describing his theory of relativity.
While not quantum mechanics, the same may be said of our daily monitoring challenges
– plans that seem very straightforward on paper often break down in the complex
operating environments in which we work. This section describes the basic pieces of
monitoring and provides tools to assist in their smooth and effective implementation.
Key Point: Improving Mercy Corps’ monitoring practices – and acting on the results – may be the
single biggest opportunity to enhance our program impact worldwide. There is simply no substitute
for great information to generate first-rate learning and program management.
Sound design is only the first step in ensuring quality M&E. The good intentions that go
into designing our monitoring plans are sometimes forgotten during implementation.
Other times, data is collected but not analyzed or communicated well. To be successful,
monitoring plans have to be carried out and the information collected, reviewed and acted
upon. And all this needs to be clearly communicated to all stakeholders including project
staff, participants, partners, HQ and the donor.
The need to monitor is not always self-evident. Many field staff are so intimately
connected with their programs that they have (or feel they have) complete information on
which to make decisions. Practice has shown, however, that sound monitoring practice is
vital for the project’s field managers, headquarters staff, and other field personnel to
maximize learning in the many projects we undertake. There is no doubt that good
monitoring is an integral part of program management. Several key reasons for
monitoring include:
environment. Monitoring data also provides guidance on what questions we
should ask during an evaluation, avoiding the costly collection of redundant information.
f. Unexpected Results – through frequent checks, we may also uncover
unexpected results. Good or bad, these surprises can only be unearthed and
addressed through rigorous monitoring.
g. Part of the job – performance monitoring is an integral part of good program
management.
Monitoring Defined
In simple terms, “monitoring” can be understood as a cycle of “regularly collecting,
reviewing, reporting and acting on information about project implementation. Generally
used to check our performance against ‘targets’ as well as ensure compliance with donor
regulations.”
Look Out! Monitoring is more than just data collection! The monitoring pyramid shows that
while most of our resources are used collecting the data, our efforts are incomplete unless we
review, report and adjust our strategy based on our information.
[The Monitoring Pyramid: Data Collection at the base, Local Decision-Making in the middle and Program Reporting at the top, with information flowing both up and down the pyramid.]
Compliance Monitoring. This is the most basic level of project monitoring. It is carried
out to ensure that our staff and our partners and sub-grantees are conforming to donor
regulations and the requirements of our grants, sub-grants and contracts. Examples
include “end-use” checks in distribution projects. These are used to make sure that
intended beneficiaries are receiving the standard ration of food or supplies that they are
entitled to. In infrastructure projects, engineering staff make regular site visits to ensure
that construction firms are meeting the terms of their contracts and working to agreed
engineering standards.
But to be effective, monitoring has to be more than just routine data collection. We must
also regularly review the data and (if necessary) revise our work plan in response.
Improving Monitoring Efficiency
While it’s easy to describe the benefits of monitoring, it is more challenging to carve out
time and resources amid many competing priorities to actually complete the monitoring
tasks. And although we want to conduct regular, complete monitoring, these activities
should certainly be conducted in a practical, efficient way. We’ll discuss a few tips for
getting more out of Mercy Corps’ monitoring efforts.
#1 – Choose a manageable number of indicators. While the worst mistake is a complete lack of a monitoring plan, a more frequent
problem is attempting to monitor a project with too many indicators. We recently read
two Mercy Corps proposals that included more than 70 indicators each! This poses a few
problems. First, these monitoring plans use too many resources to be sustainable given
the time and resource pressures. Second, it’s likely that not all of the information
gathered is relevant – cluttering the good information. Third, it’s very difficult to process
this much information, even if it is useful.
#2 – Set up your Monitoring in your Work Plan. It is very difficult to make time for
the entire monitoring cycle (collection, reflection, decision making, reporting) unless it is
accounted for early on through the work plan (and budget). So make time at the
beginning for these activities.
#3 – Collect Baseline Data. For longer programs/projects, baseline data collection can
save you a lot of time. Apart from helping demonstrate success over time, baseline data
collection is a test of your indicators and time commitment up front and provides
information on the ease of monitoring the indicators you have selected.
Tip #4 – Use an Indicator Plan to plan your information needs. The indicator plan is
an easy tool to help you manage your monitoring time. By investing a little time in the
Indicator Plan you will be able to: a) better define your indicators, b) narrow your
indicators to a manageable number, c) set up a thoughtful schedule for data collection, d)
select your data collection methods. The process of completing the indicator plan is a
long-term time saver, but only if these plans are also reflected in the work plan so that
staff time and other resources are available to carry them out.
Tools: There is a whole range of methods for monitoring project performance and
compliance with Mercy Corps and donor regulations. These can include simple checklists
and short narrative reports for:
• direct observation of project activities
• meetings with partner organizations and sub-grantees
• checking partner/sub-grantee records
• individual interviews with project participants.
The mix of methods will depend on the type of project we are carrying out. Because the
structure of these tools will determine what information is collected and what is not, it is
important to devote sufficient thought to their design early in the project. The New
Initiatives DM&E team “help desk” can assist with this process by suggesting formats or
reviewing templates.
Reviewing the Data and Acting On It
Look Out! Don’t be afraid to change course based on your monitoring data. That’s the
primary purpose of monitoring. When we consistently take good data, review it and report it to
our donors, they should understand and respect any changes required for our work plan.
Part of using our data thoughtfully means reflecting on monitoring results individually,
with Headquarters, and, most importantly, with the local team and partners. Regular
meetings to review monitoring data should:
1. Clearly compare expected and actual results.
2. Identify reasons for lower than expected results (if applicable).
3. Outline a plan of action in response to the results.
4. Communicate that information to stakeholders – especially to partners and
participants.
5. Be included as a key activity in the work plan.
Lack of time may not be the only reason some projects do not analyze and act on
monitoring info. Many staff are intimidated by the concept of “data analysis” which
conjures up images of complex statistical models. We’re really talking about “reflection”
– looking at our monitoring data and asking, “What does this mean for our project?” and
“What conclusions can we draw from this information?”
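As a purely illustrative sketch of this kind of reflection (the targets and figures are invented), a monitoring review might boil down to a comparison like this:

    # Illustrative only: compare actual results against work plan targets and flag
    # anything that needs an explanation and a plan of action. Figures are invented.
    review = {
        "Clinics rehabilitated": {"planned": 4, "actual": 4},
        "Clinic staff trained": {"planned": 40, "actual": 28},
        "Mothers making 2 pre-natal visits (%)": {"planned": 50, "actual": 52},
    }

    for item, result in review.items():
        status = "on track" if result["actual"] >= result["planned"] else "NEEDS REVIEW"
        print(f'{item}: planned {result["planned"]}, actual {result["actual"]} ({status})')
    # Items flagged "NEEDS REVIEW" are not treated as failures; they get an explanation
    # and a plan of action in the next report, as described above.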
Communicating Monitoring Data and Conclusions
Monitoring results also need to be communicated clearly and regularly. This is true whether
it’s a partner or sub-grantee reporting to us or a field office reporting to the donor or the
Headquarters. There is no pre-set time period for the frequency of
reporting. Generally, our donors have their own specific requirements for program
reports. We set our own requirements for our partners and sub-grantees, usually based on
the donor’s guidelines. In addition, each country program is required to submit Program
Reports to Portland and our Board of Directors on a regular basis. But no matter what the
reporting requirements are, as a general rule, data collection should occur at least on a
monthly basis and review of the data should take place at least once a quarter. For larger,
more complex objective indicators, we may collect data (through surveys, observation,
focus groups) less frequently, but should still collect and review at least twice a year.
The “monitoring pyramid” diagram at the beginning of this chapter reminds us that
monitoring information should not just “filter up” but also “flow back down.” When
decisions are made at higher levels based on monitoring results, the reasons for those
decisions should be clearly communicated to the people they will most affect: the field
implementation staff, project participants and partners. This two-way communication:
• facilitates transparent decision-making
• is more appropriate to a partnership relationship
• helps build participant capacity
• should improve the quality of monitoring since those who do the footwork will
understand clearly why they’re collecting certain data and how it is used. 5
Participatory Monitoring
In general, monitoring should be as participatory as is practical. This includes not only
the collection of information but also analysis and determining appropriate responses. By
including national staff and partners in this process, we make sure we have a more
complete understanding of the situation. Often a problem looks different from the
perspective of an MC country office than it does from the field or from the perspective of
a national NGO partner. Our analysis will be better and our responses more appropriate
if these perspectives are included. In addition, participatory monitoring also:
• builds staff and partner capacity to analyze situations and determine responses
• helps hold us accountable
• helps participants understand the relief and/or development process and gives them a
voice in how it is implemented. It helps convert them from passive “beneficiaries” to
active “participants” in our projects
• gives participants a share in the responsibility for implementing the project. They are
now part of the implementation team and have to take on more responsibility for
project outputs
Tools: A number of monitoring and project report formats are available on the Digital
Library. The “help desk”, Program Officers and Sector Specialists can also help
determine appropriate ways to capture specific types of information for different kinds of
activities.
MONITORING CHECKLIST
□ The Process
   □ A. Regularly collect, review and report on data related to all project indicators, targets and other donor requirements according to the work plan.
   □ B. Clearly compare actual results against targets during review of monitoring data.
   □ C. Use the data to refine the project approach (as necessary).
□ Reporting
   □ A. Clearly reflect actual and planned performance for each objective, analysis of the results and plans for next steps in all project reports (to donors, HQ and others in the “monitoring pyramid”).
□ Approaches
   □ A. Missing a planned target is not viewed as a “failure”. Failure is defined as failing to capture this info, draw conclusions and act on them.
   □ B. Monitoring is as participatory as possible (including review of the data).
   □ C. Give attention to the quality of the data. How good is the information?
5. Research and experience demonstrate that monitoring systems tend to collect better quality information when staff and participants understand how the information will be used.
CRITERIA FOR A USEFUL EVALUATION
In this section
This section provides guidance on deciding “what, when and how” to evaluate and
explores the difference between “monitoring” and “evaluation”.
Evaluation Defined
An evaluation, for our purposes, refers to an in-depth, retrospective analysis of an aspect
(or aspects) of a project that occurs at a single point in time. It is generally intended to
measure our effects and impact and examine how we achieved them. This process also
captures our experience so that future projects can learn from it.
Evaluation Purposes
Mid-term and final evaluations are the most common types in the NGO world. Mid-term
evaluations are used to 1) measure the effectiveness of the project and 2) determine
changes that might need to be made to improve effectiveness for the remainder of the
project. 6 Final evaluations in the NGO world generally take place in the final months of
a multi-year project. These evaluations are generally designed to 1) measure the effects
and impact of a project and 2) draw conclusions about lessons learned for future
projects. 7
Evaluations may be further divided into those that measure impact and those that
examine process. In general, impact evaluations seek to determine the results of a project;
its impact or effects on the target population. Process evaluations look more closely at
management practices and different approaches to implementation. Often, mid-term
evaluations are mostly process- focused while final evaluations look more at impact. In
practice, evaluations generally combine elements of both impact and process evaluations.
6. These are often referred to as “formative” evaluations because they examine how a project is implemented and make suggestions about the form of future activities.
7. These are also known as “summative” evaluations because they “sum up” project experience.
Key Point: Mercy Corps views evaluations primarily as a learning tool rather than an “audit” of
people or their projects. Their main purpose is to help us learn about our projects, share that
information and improve performance in the future.
Internal, External and Participatory Evaluations
In keeping with our core principles (including participation, accountability and peaceful
change), Mercy Corps’ projects should steer a middle course between these two
extremes. The purpose of the evaluation and donor requirements will normally determine
the exact composition of an evaluation team. The inclusion of an external evaluator is a
good way to ensure a certain healthy distance and objectivity in the evaluation. This can
be an outside consultant, a Mercy Corps HQ person or a staff member from a different
field location, so long as they have no direct stake in the outcome of the evaluation. The
bulk of the evaluation team should be made up of project staff from senior managers
down to individual international and national project officers.
No matter what the level of external involvement, evaluations should include project
participants and other key stakeholders whenever possible, including the design of the
evaluation, implementation and analysis of the results. This will help ensure objectivity,
accountability and learning because:
1) the inclusion of many stakeholders helps keep one perspective from dominating the
evaluation
2) through participation in the design and analysis phases, project participants get a
better overall sense of how the project performed and why.
3) project staff and other participants are more likely to accept, internalize and “own”
evaluation findings that they reach themselves.
4) it provides capacity building to partners and participants
The need for participation should always be balanced against the particular challenges
and constraints this type of evaluation involves. These include:
1) The costs involved. A participatory evaluation can be very time consuming. It
requires more effort to manage the higher level of input and demands good
organization to get effective results.
2) Objectivity concerns. An over-reliance on participation can lead to something more
like a “self-evaluation” that lacks the healthy balance of an outside perspective.
The chart below lists some of the major types of evaluation and suggests some of the
costs and benefits associated with each. Obviously, these are not all mutually exclusive
categories and an evaluation design will combine several of them.
8. Any type of evaluation (internal, mixed team or external) can and should involve participatory elements.
When and What to Evaluate
As discussed above, not all projects need a formal evaluation involving an outside
evaluator. In general, formal evaluations may be more appropriate for projects lasting two
or more years. In some cases, such as short term projects (1 year or less), there may not
be the time or budget for a full-fledged evaluation, especially if the project ends before
any tangible effects can be measured. This is not to say that project staff and participants
should not review their performance constantly. Every time a monitoring meeting is held,
data examined and conclusions reached, we are contributing to agency learning. When a
final report summarizes monitoring data, draws conclusions and makes recommendations
for future programming this learning is preserved for future projects and is more easily
shared around the agency. For this reason, some agencies (such as the Peace Corps) have
decided to spend their M&E resources entirely on ensuring high quality monitoring rather
than undertaking mandatory, formal evaluations of each project.
Look Out! Donors may require an evaluation and wish to stipulate much of the scope of work and
team composition. Therefore, it’s best to make sure these expectations are negotiated at the
beginning of the project and included in the design so that we can 1) ensure the evaluation meets
our needs and 2) make sure we have sufficient resources to carry it out.
Longer-term projects should generally include a more formal evaluation process at least
once in their life cycle (either mid-term or final). But evaluations should never be
undertaken without a sound management reason. We do not do them just to do them. For
example, projects which use standard, well-tested approaches will not necessarily need
formal evaluation. In these cases, the overall impact or effect may be inferred from a
sound knowledge of what the results of that approach have been shown to be elsewhere.
Examples might include a reconstruction project in Bosnia that follows the same
approach and targets similar communities as a number of other MC and non-MC projects.
If the general success rates for similar reconstruction projects are well established and
MC has formally evaluated similar projects recently, there would be no need to formally
evaluate all of them. Another example might be the introduction of a proven anti-
tuberculosis treatment like DOTS that has demonstrated results around the world. In these
cases, analysis of sound monitoring data might be sufficient.
Key Point: Evaluations should only be undertaken in response to a specific management need.
Evaluations are time-consuming and should have a clear benefit for Mercy Corps and our other
stakeholders.
Evaluations Begin with a Focused Scope of Work
Once this is decided, an evaluation scope of work is then prepared (generally by project
management) that is focused on meeting the identified needs. The USAID evaluation
scope of work format is a good one that can be adapted for most projects. A modified
version of the USAID scope is included as Appendix D. The most common challenge is
drafting a scope that is focused on only a few key questions and includes the resources
(time, staff etc) to answer them adequately. An evaluation with a wide scope but very
limited resources can be worse than no evaluation at all if it leads to superficial
conclusions as the evaluator races to finish in the time provided. Three weeks of field
work is generally a minimum amount of time required for even a narrowly focused
formal evaluation to be worthwhile.
Key Point: A focused scope of work is vital to a good evaluation. Tips include:
• Start with what you want to learn. It is easy to lose focus and try to evaluate too much, leading
to uncertain conclusions. Choose a manageable piece of your project to learn from and design a
scope to support that.
• Treat the scope as something to be negotiated with your donor and lead evaluator rather than
something to be stipulated by them.
Methods
The choice of evaluation methods depends on the type of program, resources available
and the type of questions the evaluation is trying to answer. Most will start with a review
of project documents. The log frame provides a clear explanation of what the project was
designed to accomplish, the strategy and how success should be measured. The work
plan and indicators plan describe data collection for the project and the collected
monitoring and project reports detail what has already been accomplished. For impact
evaluations, the baseline data gives a starting point against which further progress can be
measured. Without a baseline, it is extremely difficult for an evaluation to gauge project
impact. After a document review, the next step is generally a workshop with key staff
members to discuss their experiences and perceptions related to the evaluation questions
and for the evaluation team to get insights that are not contained in the reporting
documents.
The next steps will vary depending on the circumstances. Common evaluation
instruments are surveys, focus groups, key informant interviews and direct observation.
Evaluations may use some or all of these approaches. It is a good idea to balance strictly
quantitative methods (like surveys) with more qualitative methods like focus groups,
interviews and workshops. This is because there is often much more to the story than
what is apparent simply from a dry listing of statistics. Surveys are good ways to
determine “what happened” and “how many times it happened” but not good at
explaining “why” something happened.
The best evaluations combine both kinds of instruments to show not only “what”
happened but also “why” it happened or what it meant to the participants. Keep in mind
that ‘qualitative’ and ‘quantitative’ information are not completely separate categories.
Most ‘qualitative’ information can also be expressed in numerical terms. In fact, it’s a
good idea to present qualitative information that way in donor reports, because some
donors seem to believe that numerical data equals “hard” data and is therefore more
accurate and reliable than narrative descriptions.
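Where focus-group or interview findings do need to be reported numerically, a simple tally of coded responses is usually enough. The fragment below is only a minimal sketch; the response categories and counts are invented for illustration.

```python
# Minimal sketch: summarizing coded qualitative responses as counts and percentages.
# The categories and data below are invented for illustration only.
from collections import Counter

coded_responses = [
    "housing available", "housing available", "security improved",
    "housing available", "family ties", "security improved",
]

counts = Counter(coded_responses)
total = len(coded_responses)
for reason, n in counts.most_common():
    print(f"{reason}: {n} of {total} participants ({100 * n / total:.0f}%)")
```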
Surveys also rarely turn up unexpected results. That is because surveys use
questionnaires that, by definition, only give people a certain number of possible answers.
If there is a project result that you did not anticipate, you will not know in advance to
make this a possible answer on the questionnaire. That’s why it’s usually a good idea to
do some focus groups first, before finalizing the design of a survey questionnaire. If
focus groups reveal a project result that you did not expect, you can include questions
about that result in your survey questionnaire and find out just how frequent that result
was.
of the benefits of partnership. At the same time, the evaluation’s contribution to learning
for both agencies was considerably increased because it pointed out not only important
successes but also vital areas where more work was urgently needed.
We should be careful not to overstate our results: there should be a fairly clear link
between the information we collect and the effects/impact that we claim. This applies
equally to monitoring and evaluation. For example, if project documents show that we
rebuilt houses for 250 returning refugee families and 248 of them actually retook
possession of the homes, what does that information tell us? That we caused their return
or simply assisted it? How many would have returned without our help? A survey of
project participants might help to show how much influence our project had on people’s
decision to return. But we would still not know for certain what would have happened
without our assistance. A good set of performance monitoring data, coupled with survey
and focus group information, would allow us to demonstrate that we 1) provided the
expected number of houses, on time, within budget and to a high standard, 2) that 99% of
the target population made use of the houses and 3) that the participants stated that the
availability of the houses was important in their decision to return. That’s a pretty good
result that we could be proud of, even though it stops short of actually proving that our
project “caused” 248 returns.
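To make the arithmetic in this example explicit, the short sketch below recomputes the occupancy figure and pairs it with a hypothetical survey result. The survey percentage is invented for illustration, and the code makes no claim of causation.

```python
# Illustrative sketch of the housing example above.
houses_built = 250
houses_occupied = 248

occupancy_rate = houses_occupied / houses_built  # 248/250 = 0.992, i.e. roughly 99%
print(f"{occupancy_rate:.1%} of rebuilt houses were retaken by returning families")

# Hypothetical survey finding: share of returnees who said the availability of a
# house was important in their decision to return (shows influence, not causation).
cited_housing_as_important = 0.85
print(f"{cited_housing_as_important:.0%} of surveyed returnees cited housing as important")
```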
Key Point: Keeping the evaluation report short will make it easier to review the draft while the
evaluation team is still in the field (and revisions are still possible). It will also facilitate sharing the
lessons-learned with other Mercy Corps and partner staff.
EVALUATION CHECKLIST
Design. Evaluations:
q A. Focus on utility. Evaluations should be designed to answer pressing management needs.
q B. Start with a clear Scope of Work.
q C. Are primarily a learning tool rather than an audit of project performance.
q D. Should be designed to yield lessons-learned for similar programming.
q E. Are an in-depth reflection on a specific aspect of our programming.
Participation. Evaluations:
q A. Are designed to allow the highest reasonable degree of participation in the implementation and
review of results.
q B. Are completed (draft form) and discussed with project staff while the evaluation team is still in-
country.
Sharing the Lessons-Learned. Evaluations:
q A. Should be short but informative (usually no more than 20 pages plus attachments).
q B. Need to be widely distributed within Mercy Corps (including the Digital Library) to make sure
lessons are learned.
q C. Should be promoted by Program Officers so that other staff know they are available and what
information they contain.
q D. Should be read by other Program Staff working on similar projects and used to improve design and
implementation of Mercy Corps activities.
KEY TERMS
In this section
Many MC staff bring a wealth of DM&E experience to bear on the programs they are
responsible for. Yet coordination on this issue and communication between programs is
often difficult. A frequent obstacle to effective discussion of DM&E is the
misunderstanding that results from a lack of agreed terminology. Many donor and
implementing organizations have their own, specific (and contradictory) definitions of
the terms commonly associated with DM&E. To facilitate communication inside the
Mercy Corps world, the following section lists some key terms and establishes a common
definition. For proposals and donor reporting, our terms can be easily translated into
other formats. For a comparison of Mercy Corps standard definitions and those used by
key donors and colleague agencies, please see Appendix E.
Activities. The things that our project “does” or the actions that we carry out in order to
produce our outputs. Examples include providing training, rebuilding infrastructure,
making loans, monitoring implementation, evaluating impact.
Assessment. Carried out during the Project Design phase in order to define our overall
strategy. Assessments are what we use to understand what the problem is and possible
ways to address it.
Baseline. A set of data that measures specific conditions (almost always the indicators
we have chosen through the design process!) before a project starts or shortly after
implementation begins. You will use this baseline as a starting point to compare project
performance over the life of the project. Example: If you are on a diet, your baseline is
your weight on the day you begin.
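As a small illustration of how a baseline is used, the sketch below compares a later indicator measurement against its baseline and target. The baseline and target figures echo the sample indicator plan in Appendix C; the "current" value is invented.

```python
# Minimal sketch: comparing an indicator measurement against its baseline and target.
baseline = 0.22   # 22% of mothers aware of two danger signs before the project (sample plan)
target = 0.50     # month-12 target from the sample indicator plan
current = 0.48    # hypothetical value measured at the month-12 survey

print(f"Change since baseline: {current - baseline:+.0%}")
print("On track for the target." if current >= target else "Below target: review the strategy.")
```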
Best Practice. Something that we have learned from experience on a number of similar
projects around the world. This requires looking at a number of “lessons-learned” from
projects in the same field and noticing a trend that seems to be true for all projects in that
field.
Effects. Changes that result directly from our outputs and activities. These are short or
medium term changes that should happen during the life of our project. Generally, these
are “changes in a target group’s knowledge, attitudes or behaviors as a result of our
project” and appear as objectives in our log frame.
Failure. Projects often fall short of expectations. A failure occurs only when a project
fails to achieve its expected results AND the project management team fails to document
it, analyze it and adjust their strategy in response. If they do identify the problem and
draw a lesson from it, the event is a “learning experience” and is just as valuable to the
agency as a “success story.”
Goal. This is the ultimate reason for undertaking a project or program. It describes the
“end-state” that you would like to achieve. Generally, this is related to the “impact” you
want to have on a target population. Often our projects will not be able to achieve their
goal all by themselves but they should always be able to make a substantial contribution
to it.
Impact. A deep and lasting change we want to bring about in a target region or country.
Our individual projects may only make a partial contribution to achieving this change and
it may occur only after the project is completed. Usually this is “a change in the living
standards or quality of life for a target population” and is directly related to Mercy Corps’
mission to alleviate poverty, suffering and oppression.
Indicators. The measures we use to track progress toward our outputs and our
objectives as well. In the best case scenario, we should also try to develop them for our
goal.
Outputs. The final goods and services provided by our project activities. Examples
include training courses, rebuilt homes or infrastructure or microcredit loans.
Objective. This is what we expect to achieve directly through our project or program
outputs. Often, a project will have several objectives and these are generally related to
the “effects” we want to have on a target population. Each objective should be an
important step toward achieving the project’s goal.
Triangulation. Data collection from three different sources about the same subject. This
is considered the best way to ensure that our information is valid. For example, if we
want to know about the effects of a community mobilization project, we might collect
data via 1) interviews with key participants, including our own staff 2) a document
review to understand exactly what services were delivered and in what amounts 3) focus
groups and/or a survey of project participants. This helps us avoid the natural biases of
any one method of data collection. Although three different sources are not always
possible, the primary point is to avoid reliance on a single source or perspective.
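When each source can be boiled down to an estimate of the same quantity, a quick comparison shows how well they agree. This is only a sketch; the figures and the 10% agreement threshold are invented for illustration.

```python
# Minimal sketch: cross-checking one finding against three data sources (invented values).
estimates = {
    "participant survey": 0.62,   # share of participants reporting improved services
    "document review": 0.58,      # share implied by service-delivery records
    "focus groups": 0.65,         # share of groups reporting improvement
}

low, high = min(estimates.values()), max(estimates.values())
print(f"Estimates range from {low:.0%} to {high:.0%}")
if high - low <= 0.10:            # illustrative threshold
    print("Sources broadly agree; the finding looks reasonably solid.")
else:
    print("Sources diverge; look for bias in one or more collection methods.")
```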
Attachment A
Logical Framework
LOGICAL FRAMEWORK: MATERNAL HEALTH EXAMPLE
GOAL: Ask: What is the impact we want to achieve? What does our community look like if we are successful?

Objectives. Ask: What are the desired effects on people’s knowledge, attitudes and behaviors?
Outputs. Ask: What final goods and services will we provide?
Activities. Ask: What daily efforts contribute to our outputs?
Indicators. Ask: How will we know if we have achieved our Objective?

Objective 1: 75% of mothers are aware of 2 pregnancy-related danger signs by the end of the project.
Outputs: 1) 10 x 30-second radio spots. 2) 250 well-trained Health Care Outreach Workers. 3) Survey.
Activities: 1) Create/disseminate public service announcements. 2) Identify & train health care outreach workers. 3) Disseminate health message through community mobilization. 4) Baseline and final surveys.
Indicators: 1) % of mothers who can identify 2 pregnancy-related danger signs.

Objective 2: 75% of mothers/expectant mothers attend at least 2 routine prenatal and 2 postnatal care visits during the project.
Outputs: 1) 7 new or rehabilitated clinics. 2) Transportation to clinics provided from remote areas.
Activities: 1) Assess community clinic needs. 2) Design and tender for clinic construction/rehabilitation. 3) Clinic rehabilitation. 4) Assess specific transportation needs.
Indicators: 1) % of mothers who attend at least two prenatal and postnatal visits.

Objective 3: 75% of mothers with risk signs (bleeding, anemia as defined by WHO) receive Emergency Obstetric Care (EOC) by end of project.
Outputs: 1) 7 adequately supplied clinics. 2) 7 clinic staffs capable of using the equipment. 3) Outputs 1 & 2 above.
Activities: 1) Provide medical supplies. 2) Train staff well in EOC.
Indicators: 1) % of mothers with risk signs who receive EOC.

Reminders when reviewing the log frame:
1. Does achievement of each Objective contribute directly to achieving the Goal?
2. Is each Output necessary to achieve the Objective?
3. Does each Major Activity lead directly to the Outputs?
4. Does each Indicator directly measure progress toward the Objective? If not, does it come as close as possible? Do we have enough to get a fairly reliable measure of our effects/impact? Do we have more than we need or too many to handle on a regular basis?
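One informal way to keep these reminder questions in view during a design review is to hold the log frame in a simple nested structure and walk each level against the one above it. The sketch below encodes a slice of the maternal health example; the goal wording and the field names are assumptions made only for this illustration.

```python
# Illustrative sketch: a slice of the maternal health log frame as a nested structure.
logframe = {
    "goal": "Improved maternal health in the target provinces",  # wording assumed
    "objectives": [{
        "statement": "75% of mothers are aware of 2 pregnancy-related danger signs "
                     "by the end of the project",
        "outputs": ["10 x 30-second radio spots",
                    "250 well-trained Health Care Outreach Workers"],
        "activities": ["Create/disseminate public service announcements",
                       "Identify & train health care outreach workers"],
        "indicators": ["% of mothers who can identify 2 pregnancy-related danger signs"],
    }],
}

# Walk the structure with the four reminder questions above.
for obj in logframe["objectives"]:
    print("Objective contributes to goal?", obj["statement"], "->", logframe["goal"])
    for o in obj["outputs"]:
        print("  Output necessary for the objective?", o)
    for a in obj["activities"]:
        print("  Activity leads directly to an output?", a)
    for i in obj["indicators"]:
        print("  Indicator measures progress toward the objective?", i)
```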
Attachment B
Work Plan
General Management
Timeline in Months 1-24

Activities (with targets, staff responsible and notes):
1. Locate new office space. Responsible: Program Manager.
2. Hire field staff. Responsible: Program Officers.
3. Procure computers/printers. Target: 5 desktops/2 printers. Responsible: Admin Officer.
4. Procure vehicles/radios. Targets: 2 Nivas (Month 1), 2 Jeeps (Month 2), 12 VHF handsets (Month 2), motorbikes (Month 1). Responsible: Admin Officer. Notes: Nivas locally, Jeeps via broker, radios via HQ.
5. Complete staff orientation. Responsible: Program Manager.
6. Finalize Security Plan. Responsible: Program Manager.
7. Team review of monitoring info (recurring across the timeline). Responsible: Program Manager.
8. Submit Final Year One Work Plan. Target: by Aug 30. Responsible: Program Director. Notes: Prepared by Program Manager.
9. Submit Mid-Term Report. Target: by Feb 28. Responsible: Program Director.
10. Submit Close-Out Plan. Responsible: Program Director. Notes: Includes disposition of vehicles, computers etc.
Activities and Sub-Activities (with targets, staff responsible and notes):
Public Education
• Create Announcements.
• Broadcast on radio. Target: 5 minute spots on 10 radio stations.
Health Worker Training. Responsible: Maternal Health Officer. Notes: Also fulfills Objectives 2-3.
• Develop Training Materials. Responsible: Maternal Health Officer.
• Lease Training Spaces. Target: 3 spaces leased in Month 1. Responsible: Admin Officer.
Activities and Sub-Activities (with targets, staff responsible and notes):
Assess Equipment Needs. Responsible: Maternal Health Officer.
• Review Protocols. Responsible: Maternal Health Officer/Assistants.
• Meet with Clinic Staff. Responsible: Maternal Health Assistants.
• Draft Needs per Clinic. Responsible: Maternal Health Assistants/Officer.
Training on New Equipment.
• Pre-and-Post Tests. Target: 90% pass each test. Responsible: Maternal Health Officer. Notes: Carried out monthly along with distribution.
Attachment C
Indicators Plan

INDICATOR PLAN

Objective 1: 75% of mothers are aware of at least two pregnancy-related danger signs by the end of the project.

Indicator 1: % of mothers aware of at least two pregnancy-related danger signs.
Definition of Indicator and Management Utility: Mothers can list two of the four danger signs defined by PEPC Program Guidelines. Unprompted recall of these danger signs is a key piece of awareness and prevention.
Baseline Data and Targets: Targets: 50% by month 12; 75% by end of project. Baseline: less than 22% (estimated according to assessment data, to be confirmed by baseline survey in month 1).
Data Collection Sources & Methods: Baseline survey and final survey of mothers.
Frequency of Data Collection: Months 2, 12 and 24.
Person Responsible: 1. Surveys designed by Maternal Health Officer. 2. Carried out by Maternal Health Assistants.

Objective 2: 75% of mothers attend at least 2 prenatal care visits during pregnancy over the life of the project.

Indicator 1: % of mothers who attend at least 2 prenatal care visits during pregnancy.
Definition of Indicator and Management Utility: Mothers within our target provinces who visit qualified, staffed clinics. Prenatal visits to a clinic with trained, equipped staff are a proven method of reducing complications from birth.
Baseline Data and Targets: Targets: 50% by month 12; 75% by end of project. Baseline: 44% in areas with existing clinics; 7% in areas with no functioning clinic; 25.5% average for the entire target area.
Data Collection Sources & Methods: 1. Interviews with clinic staff. 2. Clinic records check. 3. Survey results.
Frequency of Data Collection: 1. Months 8-23. 2. Months 10-23. 3. Months 1, 12 and 24.
Person Responsible: 1. Instruments designed by Maternal Health Officer. 2. Carried out by Maternal Health Assistants. 3. Direct observation carried out by Maternal Health Officer.

Objective 3: 75% of mothers with risk signs receive Emergency Obstetric Care (EOC) by end of project.

Indicator 1: % of mothers with risk signs who receive EOC by end of project.
Definition of Indicator and Management Utility: “Risk signs” defined as bleeding and anemia, as defined in WHO guidelines. EOC as defined in the Maternal Health Training Module and consistent with MoH Guidelines.
Baseline Data and Targets: Targets: 90% of clinic staff pass EOC training course*; 40% by month 12; 75% by end of project. Baseline: thought to be less than 10% (to be confirmed by baseline study).
Data Collection Sources & Methods: 1. Interviews with clinic staff. 2. Clinic records check. 3. Interviews with community members. 4. EOC post-test results. 5. Direct observation of clinic staff during visits*.
Frequency of Data Collection: 1. Months 8-23. 2. Months 10-23. 3. Months 21-23. 4. Months 7, 12, 16, 19 and 24. 5. Months 8-23.
Person Responsible: 1. Instruments designed by Maternal Health Officer. 2. Carried out by Maternal Health Assistant. 3. Direct observation carried out by Maternal Health Officers.

*Included here as a quality measurement to ensure that the EOC delivered is effective.
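To show how the first indicator might actually be computed from survey data, the sketch below tallies the share of respondents who can name at least two danger signs and compares it to the plan's targets. The list of danger signs and the survey records are invented for illustration (the plan itself refers to the four signs in the PEPC Program Guidelines).

```python
# Illustrative sketch: computing "% of mothers aware of at least two danger signs".
danger_signs = {"bleeding", "severe anemia", "severe headache", "swelling"}  # assumed list

# Each record holds the danger signs one respondent named without prompting (invented data).
survey_records = [
    {"bleeding", "swelling"},
    {"bleeding"},
    set(),
    {"severe anemia", "bleeding", "swelling"},
]

aware = sum(1 for named in survey_records if len(named & danger_signs) >= 2)
rate = aware / len(survey_records)
print(f"{rate:.0%} of respondents named at least two danger signs "
      f"(targets: 50% by month 12, 75% by end of project)")
```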
Attachment D
USAID Evaluation
Scope of Work
Template
EVALUATION SCOPE OF WORK
TEMPLATE
2) Background
Give a brief description of the history and current status of the project/program, goals and
objectives, names and roles of partners, basic methodology and any other info to help the
evaluation team understand the context.
4) Purpose and Audience
In this section, we should state the reason for the evaluation and its intended audience:
• Who wants the information (donor, program staff, HQ, partners, participants)?
• What do they want to know?
• What will they do with the info?
• When is it needed and in what form?
5) Evaluation Questions
Articulate clearly the main questions the evaluation will have to answer to supply the info
described in section 4 above. Vague questions will lead to vague answers. Too many
questions, on too many topics, will lead to a superficial and unfocused evaluation.
Ensure that questions are management or participant priorities. One approach is to ask the
intended audience what they most want to know and then ask them which of these are
priorities.
6) Evaluation Methods
This section specifies an overall design strategy to answer the evaluation questions and
provides a plan for collecting and analyzing the data.
Note: Often the Overall Design Strategy section will be negotiated with the outside
evaluator and/or other participants and members of the evaluation team.
7) Team Composition and Participation
Identify the approximate team size, the qualifications needed and desired level of
participation. Consider:
• Language skills
• Technical knowledge
• Cultural sensitivity
• Evaluation skills
• Facilitation skills
• Gender mix
• Knowledge of Mercy Corps’ culture and programming
• Who should participate and at what stage (design, implementation, analysis,
dissemination).
• Define the role of each member and list their specific duties.
The exact size and composition of the team is determined by the purpose and strategy, as
well as other constraints such as time, budget, logistics and availability. Technical
knowledge about the specific sectors to be evaluated, language skills, evaluation skills
and cultural sensitivity are all mandatory requirements for a successful evaluation. The
highest possible degree of staff, partner and beneficiary participation should also be
considered. An outside evaluator is not mandatory unless required by the donor or the
specific nature of the question (when objectivity is a high priority).
There is no easy rule of thumb for estimating costs. It will depend on many factors
including your resources, evaluation needs, time frame and availability of in-country
expertise. Your HQ program officer can provide you with sample budgets from other
projects that might help guide your own calculations.
Attachment E
Mercy Corps/Donor
Dictionary
The Mercy Corps/Donor Dictionary
It is important to realize that often donors, partner agencies and Mercy Corps’ staff use
slightly different words to describe the same DM&E topics. This table compares our
definitions of key terms with those used by some of our major donors and colleague
agencies. Mercy Corps’ definitions are based on common usage in our field and the
glossary of terms developed by the DAC/OECD.*
Mercy Corps: Goal; Objectives; Outputs; Activities.
CIDA: Overall Goal; Project Purpose; Results/Outputs; Activities; Milestones; Inputs.
EC/Relex: Overall Objective; Project Purpose; Results; Activities; Milestones.
FAO & UNDP: Development Objective; Immediate Objectives; Outputs; Activities; Inputs.
World Bank: Long-Term Objective; Short-Term Objectives; Outputs; Inputs.
* The Development Assistance Committee (DAC) Working Party on Aid Evaluation is an international
forum comprising 30 member countries and multilateral donor agencies. Mercy Corps’ definitions are
generally consistent with the DAC’s Glossary of Key Terms in Evaluation and Results-Based Management.
Attachment F
The Mercy Corps’ DM&E Guidebook is just one example of our commitment to program
quality. Another, related initiative, is our strong support for and participation in the Sphere
Project, a global NGO effort to increase effectiveness and accountability in humanitarian
assistance. Our involvement with Sphere includes agreement to serve as a pilot agency
looking at how to institutionalize our commitment to the Humanitarian Charter and use of the
Sphere standards as a means of increasing the quality of our programs. Mercy Corps is also
currently serving as the Chair of the Sphere Project management committee and is a strong
advocate for the use of Sphere throughout the humanitarian community, and especially within
its own country programs. Mercy Corps intends that the principles and practices articulated
within the Sphere Handbook will be central to the way that Mercy Corps designs,
implements, monitors and evaluates its disaster response programs. These principles and
practices are completely compatible with, and often more detailed than, the more general
guidance contained in the DM&E Guidebook.
Meeting essential needs and restoring life with dignity are core principles that should inform
all humanitarian action.
The purpose of the Humanitarian Charter and the Minimum Standards is to increase the
effectiveness of humanitarian assistance, and to make humanitarian agencies more
accountable. It is based on two core beliefs: first, that all possible steps should be taken to
alleviate human suffering that arises out of conflict and calamity, and second, that those
affected by a disaster have a right to life with dignity and therefore a right to assistance.
The Sphere Handbook is the result of more than two years of inter-agency collaboration to
frame a Humanitarian Charter, and to identify Minimum Standards to advance the rights set
out in the Charter. These standards cover disaster assistance in water supply and sanitation,
nutrition, food aid, shelter and site planning, and health services.
The cornerstone of the book is the Humanitarian Charter. Based on the principles and
provisions of international humanitarian law, international human rights law, refugee law,
and the Code of Conduct for the International Red Cross and Red Crescent Movement and
NGOs in Disaster Relief , the Charter describes the core principles that govern humanitarian
action and asserts the right of populations to protection and assistance.
The Charter defines the legal responsibilities of states and parties to guarantee the right to
assistance and protection. When states are unable to respond, they are obliged to allow the
intervention of humanitarian organizations.
The Minimum Standards were developed using broad networks of experts in each of the five
sectors. Most of the standards, and the indicators that accompany them, are not new, but
consolidate and adapt existing knowledge and practice. Taken as a whole, they represent a
remarkable consensus across a broad spectrum of agencies, and mark a new determination to
ensure that humanitarian principles are realized in practice.
Agencies’ ability to achieve the Minimum Standards will depend on a range of factors, some
of which are within their control, while others such as political and security factors, lie
outside their control. Of particular importance will be the extent to which agencies have
access to the affected population, whether they have the consent and cooperation of the
authorities in charge, and whether they can operate in conditions of reasonable security.
Availability of sufficient financial, human and material resources is also essential. This
document alone cannot constitute a complete evaluation guide or set of criteria for
humanitarian action.
While the Charter is a general statement of humanitarian principles, the Minimum Standards
do not attempt to deal with the whole spectrum of humanitarian concerns or actions. First,
they do not cover all the possible forms of appropriate humanitarian assistance. Second, and
more importantly, they do not deal with the larger issues of humanitarian protection.
Humanitarian agencies are frequently faced with situations where human acts or obstruction
threaten the fundamental well-being or security of whole communities or sectors of a
population - such as to constitute violations of international law. This may take the form of
direct threats to people's well-being, or to their means of survival, or to their safety. In the
context of armed conflict, the paramount humanitarian concern will be to protect people
against such threats.
Comprehensive strategies and mechanisms for ensuring access and protection are not detailed
in the Handbook. However, it is important to stress that the form of relief assistance and the
way in which it is provided can have a significant impact (positive or negative) on the
affected population’s security. The Humanitarian Charter recognizes that the attempt to
provide assistance in situations of conflict ‘may potentially render civilians more vulnerable
to attack, or bring unintended advantage to one or more of the warring parties’, and it
commits agencies to minimizing such adverse effects of their interventions as far as possible.
The Humanitarian Charter and Minimum Standards will not solve all the problems of
humanitarian response, nor can they prevent all human suffering. What they offer is a tool for
humanitarian agencies to enhance the effectiveness and quality of their assistance and thus to
make a significant difference to the lives of people affected by disaster.
The Sphere Project is a significant process - it has entailed an extensive and broad-based
consultation in the humanitarian community. The people who participated in writing the
Sphere handbook came from national and international NGOs, UN agencies, and academic
institutions. Thousands of individuals from over 300 organizations representing 60 countries
have participated in various aspects of the Sphere Project, from developing the handbook
through to piloting and training. The Sphere process has endeavored to be inclusive,
transparent, and globally representative.
More on Sphere
Appendix G
Mercy Corps’
DM&E Checklist
Mercy Corps’ DM&E Principles At A Glance
Mercy Corps’ commitment to quality DM&E requires us to go beyond the minimum requirements of some of
our donors. A sound program design, for example, often goes beyond simply fulfilling proposal
requirements. The same can be true for monitoring and evaluation. Therefore, we have developed the
following checklist to help review our various DM&E activities and make sure they conform to Mercy Corps'
principles. Use it when reviewing project designs, proposals or reports; designing monitoring systems or
developing the scope of work for an evaluation.
DM&E Checklist
DESIGN
q Assessment Conducted
q A. Assessment data not used in proposal is kept for future reference
q Goal-Oriented Program/Project Design
q A. Design starts with defining a goal based on impact rather than activities.
q SMART Objectives
q A. Key steps in the project which logically, reliably contribute to achieving our goal.
q B. Describe an “end-state” and focus on “effects” (changes in behavior, attitudes or knowledge in
our target population) rather than activities whenever possible.
q C. SMART – Specific, Measurable, Appropriate, Realistic, & Time-Bound.
q Select Appropriate Outputs & Activities
q A. Logically, reliably contribute to our SMART objectives.
q B. Outputs represent our “deliverables” or final products for which we are responsible.
q C. Activities describe the key actions we’ll carry out to achieve our outputs.
q Identify Indicators
q A. Fewer, more direct indicators that measure performance against our objectives as well as
outputs.
q B. Consider relevant standard indicators and consult appropriate sector specialists & other
resources (such as Sphere standards).
q Formulate Work Plan
q A. Include monitoring as a key management activity and make resources available to carry it out,
including roles and responsibilities, budgeting time for baselines, regular data collection, review
and reporting.
q B. Include key management and implementation tasks, persons responsible and clear targets for
achieving them so that we can track performance over time.
q Approaches
q A. A high degree of participation of expat and national staff, representatives of the target group,
partner organizations etc in the design of our strategy and in the implementation of the project.
q B. A focus on the highest level of impact or effects possible.
q C. All pieces of program design are logically and causally connected. (Logic is much more
important than vocabulary)
q D. An evidence-based approach that suggests our actions will be successful
q Final Products From Design Phase
q A. Completed Log Frame
q B. Completed Indicator Plan for our SMART Objectives
q C. Completed Work Plan
q D. Folder containing assessment data
q E. Finished proposal, if applicable
MONITORING
q The Process
q A. Regularly collect, review and report on data related to all project indicators, targets and other
donor requirements according to the work plan.
q B. Clearly compare actual results against targets during review of monitoring data.
q C. Use the data to refine the project approach (as necessary).
Reporting
q A. Clearly reflect actual and planned performance for each objective, analysis of the results and
plans for next steps in all project reports (to donors, HQ and others in the “monitoring pyramid”).
Approaches
q A. Missing a planned target is not viewed as a “failure”. Failure is defined as failing to capture this
info, draw conclusions and act on them.
q B. Monitoring is as participatory as possible (including review of the data).
q C. Give attention to the quality of the data. How good is the information?
EVALUATIONS
Design. Evaluations:
q A. Focus on utility. Evaluations should be designed to answer pressing management needs.
q B. Start with a clear Scope of Work.
q C. Are primarily a learning tool rather than an audit of project performance.
q D. Should be designed to yield lessons-learned for similar programming.
q E. Are an in-depth reflection on a specific aspect of our programming.
Participation. Evaluations:
q A. Are designed to allow the highest reasonable degree of participation in the implementation and
review of results.
q B. Are completed (draft form) and discussed with project staff while the evaluation team is still in-
country.
Sharing the Lessons-Learned. Evaluations:
q A. Should be short but informative (usually no more than 20 pages plus attachments).
q B. Need to be widely distributed within Mercy Corps (including the Digital Library) to make sure
lessons are learned.
q C. Should be promoted by Program Officers so that other staff know they are available and what
information they contain.
q D. Should be read by other Program Staff working on similar projects and used to improve design
and implementation of Mercy Corps activities.