
DESIGN, MONITORING AND EVALUATION
GUIDEBOOK

March 2003

This Guidebook is designed as a “Living Document” that will be revised and updated as needed. Send
comments, concerns and suggestions to the New Initiatives team (Graham Craft, [email protected] or
Rob Zeaske, [email protected]) or via your HQ program officer.
TABLE OF CONTENTS
INTRODUCTION
Purpose…………………………………………….………………………………………….……3
Special Features……………………………………………………………………………….……4
FUNDAMENTALS OF PROJECT DESIGN
In this Section………………………………………………………………………………………5
The Logic of Goal-Oriented Design
The Logical Framework…………………………………………………………………..6
The 8 Steps of Project Design
#1 Assessing the Situation………………………………………………...………………9
#2 Setting our Goal……………………………………………………………………....10
#3 Choosing Objectives………………………………………………………………….12
#4 Outputs………………………………………………………………………………..13
#5 Activities……………………...………………………………………………………14
#6 Indicators……………………………………………………………………………...14
#7 The Reasonableness Test………………………………………………………………18
#8 Completing a Work Plan…………………………………………………………...…19
SOUND MONITORING MANAGEMENT
In this Section…………………………………………………………………………………...…23
Monitoring Defined………………………………………………………………………………..24
Improving Monitoring Efficiency………………………………………………………………....25
Reviewing the Data and Acting On It……………………………………………………………..26
Communicating Monitoring Data and Conclusions……………….………………………………27
Participatory Monitoring…………………………………………………………………………..28
CRITERIA FOR A USEFUL EVALUATION
In this Section……………………………………………………………………………………...29
Evaluation Defined………………………………………………………………………………...29
Monitoring vs. Evaluation…………………………………………………………………………29
Evaluation Purposes…………………………………………………………………………….....29
Internal, External and Participatory Evaluations………………………………………………..…30
When and What to Evaluate……………………………………………………………………….32
Evaluations Begin with a Focused Scope of Work………………………………………………..33
Methods……………………………………………………………………………….…………...33
Review of the Results…………………………………………………………………………...…35
Final Report Format……………………………………………………………………………….35
GLOSSARY OF KEY TERMS……………………………………………………………………………..36

APPENDICES
A- Sample Logical Framework
B- Sample Work Plan
C- Sample Indicators Plan
D- Evaluation Scope of Work Template
E- Comparison of Mercy Corps and Donor Terminology
F- Sphere Guidelines and the Mercy Corps DM&E Guidebook
G- The DM&E Checklist

INTRODUCTION
The purpose of this Guidebook is to establish a set of common principles for the Design,
Monitoring and Evaluation (DM&E) of Mercy Corps projects, programs and Annual
Plans. These principles are based on established practices developed by our field
personnel, colleague agencies, major donors and professional associations. As such, we
are not attempting to establish a new way to do DM&E. Instead, this Guidebook provides
Mercy Corps’ diverse programs and worldwide staff with a common approach to DM&E.
Using the Guidebook will ensure that all Mercy Corps’ projects are designed using the
same key principles and that staff have a common language for discussing issues related
to DM&E. At the same time, the Guidebook is designed to preserve program staff’s
flexibility and independence to define their own, context-specific goals, objectives,
indicators and methods. By improving our ability to monitor, evaluate and report on
programs, Mercy Corps will be better able to document its experiences, communicate
them, learn from them and incorporate that learning into future programs.

Purpose
The DM&E Guidebook initiative will assist MC offices to a) design high quality
programs & implement them efficiently and effectively, b) measure outcomes and
impact, and c) document experiences and share them across the agency, with donors and
the general public.

The Guidebook’s “design” section will help ensure that all new programs are impact-
oriented and are easier to monitor and evaluate through:
• A goal-driven design process and logical framework
• The choice of a manageable number of SMART1 Objectives and indicators.
• The collection of adequate, relevant baseline data related directly to the indicators.
• The creation of an efficient system for monitoring and evaluation including ensuring
that staff time and other resources are built into the work plan and budget.
• The creation of programs that best meet local needs and conditions through focused,
participatory assessments.

The “monitoring” section will:


• Provide program staff with a crucial time management tool that measures program
performance against objectives and management targets, and helps ensure that the
program is on schedule.
• Improve program management by giving managers timely feedback on what is
working and what is not, allowing adequate time to change procedures and request
grant/budget modifications or extensions.
• Assist managers in documenting program results on a regular basis.
• Help hold the agency and its partners accountable to the donor and the communities
we serve through regular, participatory monitoring and clear communication.

1. Generally, Objectives and Indicators should be Specific, Measurable, Achievable, Relevant and Time-bound. More on this below.

The evaluation section will:
• Improve future program design/implementation by documenting successful strategies,
potential pitfalls and effective methods for avoiding them.
• Help assess the end-result of program activities, document what was achieved and
measure impact.
• Hold MC and partners accountable to both the donor and the communities we serve.

Of course, many donors and colleague agencies have their own specialized vocabulary
and processes for project design. How will our framework fit with those of our major
donors? The answer, we believe, is “Quite easily.” We have chosen our tools and
vocabulary based on a thorough review of standard practice in our industry. Rather than
simply adapt a system used by one of our major donors, we decided on a simplified
format that best fits Mercy Corps’ own needs. And since our format is based on standard
practice across our industry, it is easily translatable into a variety of other formats as
needed. Please see Appendix E for more specifics.

A number of other resources can be consulted for more specific Design, Monitoring and
Evaluation needs. For example, for the sub-set of programs that address the needs of
disaster affected populations, the Sphere Project Humanitarian Charter and Minimum
Standards in Disaster Response provide a greater level of detail and guidance as to
specific issues related to designing, monitoring and evaluating a disaster response
program. There are references to the Sphere Project Humanitarian Charter and Minimum
Standards in Disaster Response throughout this guidebook, highlighting the importance
and complementary nature of Sphere to Mercy Corps’ DM&E principles.

This Guidebook is the primary reference for how to do DM&E at Mercy Corps, whether
at the project, program or country strategy level. As such, the process and principles it
describes should be applied to all projects and programs. Supporting resources include:
1. Orientation & Training Module. Based on the Guidebook, this training module
serves as an orientation for our more experienced staff while also providing
examples and skills-building activities for those who need a more basic
introduction. The best way to become proficient with the DM&E principles is
practice. The training module provides this opportunity for staff at all levels. This
module can also be adapted for self-study and ToT use.
2. DM&E Checklist. Distills the contents of the Guidebook into a two-page list of
key principles to ensure good Design, Monitoring and Evaluation. Use it to
remind yourself of key issues when considering the design of a new project or
reviewing a proposal. This is included as Appendix G.
3. Your HQ-based Program Officer and Sector Specialists. The New Initiatives
team in Portland provides a “help desk” function for program staff. They can help
you with specific needs including tools, assistance with indicators, and training. In
addition, the Program Officers for each region – and the sector specialists – are
good sources of support to the field for targeted DM&E advice.

The following three sections of the Guidebook describe and discuss each major step in
the Design, Monitoring and Evaluation process, followed by a brief list of key terms and
what they mean for Mercy Corps. The Guidebook concludes with an Appendix that
includes suggested formats (with completed examples) for the various tools described in
the preceding sections.

Special Features of the Guidebook


These two special features appear at key places in the text and call attention to especially
important points.

Key Point: Underscores some of the most important tools and suggestions.

Look Out! Highlights common pitfalls of DM&E and helps us avoid them.

FUNDAMENTALS OF PROJECT DESIGN

In this section
In the past, project design has been almost synonymous with proposal development. But
repeated experience has demonstrated that there is much more to a well-designed project
than the information required for the average proposal. The steps outlined in this section
represent the minimum components necessary for a well-designed project or country
program. In fact, these design principles apply equally at all levels, whether we’re
reviewing the work of national or international partners, designing individual donor-
funded projects or developing country-wide Annual Plans. 2

Quality design is important to Mercy Corps because:


• Projects are more likely to be effective if they are well-designed.
• Projects are much easier to monitor and evaluate if they have been designed from the
beginning with M&E in mind.
• Given frequent staff turnover on our longer-term projects, well-articulated plans, in
an accessible format, are critical for ensuring continuity from design to
implementation through close-out and final evaluation.

This section introduces the key principles related to design and provides three important
tools for putting those principles into practice. These tools are:
1. The logical framework. A quick snapshot of the cause and effect logic that forms
the basis of our design. Primarily a planning document, a “log frame” helps us focus
on what we want to achieve and how we’ll do it. It also helps us communicate this
quickly to other team members, donors and external evaluators. Aside from being a
crucial design element, the log frame is also the foundation for planning mid-term and
final evaluations.
2. The work plan. A detailed work plan helps ensure that all important tasks are
planned for and carried out on time. This includes setting targets for project
performance and management tasks and assigning responsibility for achieving them
to specific staff members. The work plan acts like a road map for the implementation
of our projects or programs. It is a key management and monitoring tool.
3. The indicator plan. Helps us explain, in practical terms, how we define success and
helps ensure that we can actually measure it.

Together, these three tools are indispensable for quality project design. Appendices A-C
at the back of this book contain samples of a completed logical framework, work plan
and indicator plan. Please refer to them as necessary when reading this section.

The Logic of Goal-Oriented Design


Most of us tend to define our projects by the activities involved. When we think about a
project, we naturally think about it in terms of the work we do on a daily basis. We
describe ourselves as engaged in “training programs”, “food distribution programs” or

2. While these principles apply at all levels, for the sake of brevity, we will refer to “project” design in
most of the rest of this document.

“microcredit loan programs” for example. However, obviously we are not doing these
activities for their own sake. The point of a “micro-credit lending program” is not to give
away money. We provide loans in order to generate sustainable incomes for our target
population. Therefore, when we design a project, we should first ask “WHY do we want
to do this?” (what’s our goal?) and only then move on to decide “HOW will we address
this problem?” (what should our activities be?).

The Logical Framework


A “goal-oriented” approach to project design forces us to think critically about our
intended impact and the steps needed to achieve it. It all starts with a logical framework
(or “log frame”). Essentially, a log frame is a chart that captures all the major steps in the
life of a project and ensures that they each are logically connected. This can be a huge
help both in designing a project and in evaluating it later because it explains clearly and
simply what a project is intended to achieve (its “goal” or “impact”) and how each step
contributes to that achievement.

Like any other tool, a logical framework is only as good as the material we use. If we put
“garbage” into the log frame, that’s what we’ll get back. The most important thing about
the process is the “logic” behind it rather than the precise definitions attached to each
part. Rather than just “filling in boxes”, completing a log frame asks us to consider the
causal chain of events and assumptions that make up our project.

The “goal-oriented” approach is outlined below. Let’s assume our assessments reveal a
situation of “high mother and infant mortality” in our target region. We start the design of
a solution by thinking about what “impact” we want to achieve. Why are we undertaking
a project in the first place? What fundamental change in the living conditions of our
target group do we hope to bring about? If our problem is “high mother/infant mortality”,
what we want to achieve with our project is probably “a healthy mother/infant
population.”

Goal (Impact): A healthy mother/infant population

Next, we determine the key changes in the target population that will be required to
achieve this impact. Much of the time, we will need to change peoples’ knowledge,
attitudes or behaviors. Changes at this level are called the project’s “effects.”3 Each
“effect” we have takes us one important step closer to achieving our planned “impact.”
In this example, the “effects” we need to achieve our goal might include the following:

Goal (Impact): A healthy mother/infant population
        ↓
Objective (Effect): Mothers make Pre-Natal Visits to Clinics

3. The definitions of “impact” and “effect” used here were borrowed from the “Causal Pathways” concept
developed by the International Rescue Committee in The IRC Causal Pathway Framework: A Guide to
Program Design, Monitoring and Evaluation (New York, 2001).

Now we determine what goods and services will be needed to help people change their
knowledge, attitudes or behaviors. Think of them as our “deliverables”, the concrete
things that our project must produce to achieve the “effects” we are seeking. These are
called our project “outputs.” In this case, in order for mothers to make pre-natal visits to
clinics, we might need to ensure that sufficient clinics exist and that they are adequately
staffed and equipped.4

Goal (Impact): A healthy mother/infant population
        ↓
Objective (Effect): Mothers make Pre-Natal Visits to Clinics
        ↓
Outputs: Well Equipped, Staffed Clinics

And this brings us back to “activities” or the main things we will actually do during our
project. These are the activities that will produce the “outputs.”

Goal (Impact): A healthy mother/infant population
        ↓
Objective (Effect): Mothers make Pre-Natal Visits to Clinics
        ↓
Outputs: Well Equipped, Staffed Clinics
        ↓
Activities: Clinic Construction/Rehabilitation, Training for Clinic Staff

4. Note: This is a greatly simplified example. In actual practice, a number of other variables would almost
certainly be involved, including raising public awareness about maternal health needs and resources.

As this example suggests, it is much easier to design effective activities when we are
clear about the impact and effects that we want to achieve. The strength of the logical
framework approach is that it reminds us to start with the “impact” and work backwards,
ensuring that each step in the pathway is logically linked to the following one. It is also
much easier to measure progress and evaluate impact when we have clearly expressed
what we want to achieve and how we plan to get there.
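
For readers who like to see the structure in concrete form, the following is a minimal sketch (in Python, not a Mercy Corps tool) of the maternal-health log frame described above, expressed as a simple data structure; the field names are illustrative assumptions, not required vocabulary.

    # A minimal, illustrative sketch of the log frame above as a data structure.
    # The field names ("goal", "objectives", etc.) are assumptions for this example.
    log_frame = {
        "goal": "A healthy mother/infant population",
        "objectives": [
            {
                "effect": "Mothers make pre-natal visits to clinics",
                "outputs": ["Well equipped, staffed clinics"],
                "activities": [
                    "Clinic construction/rehabilitation",
                    "Training for clinic staff",
                ],
            }
        ],
    }

    # Walk the causal chain from impact down to activities, mirroring the
    # "start with the impact and work backwards" logic described above.
    print("GOAL:", log_frame["goal"])
    for objective in log_frame["objectives"]:
        print("  OBJECTIVE (effect):", objective["effect"])
        for output in objective["outputs"]:
            print("    OUTPUT:", output)
        for activity in objective["activities"]:
            print("      ACTIVITY:", activity)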

The following section details an eight-step approach to goal-oriented design that begins
with a thorough assessment of the problem and ends with the completion of a detailed
and clearly articulated logical framework, work plan and indicator plan – the key
elements of any sound project design.

The 8 Steps of Project Design

#1) Assessing the Situation


Projects should always begin with a thorough assessment of the situation and a firm
knowledge of the target population. A good assessment gives us the “big picture” of the
target area and informs our strategy and approach. The best project designs combine our
target population’s greatest unmet needs, their strongest assets and Mercy Corps’ unique
capabilities – and (to be effective) also factor in the interests of our donors. Specific
questions should include:
• What are the biggest challenges, the communities’ greatest concerns and needs?
• What are the target communities’ assets and resources for meeting these needs?
• What are the communities’ visions for the future?
• What else is needed, what are the gaps?
• Which of these needs is Mercy Corps best suited to meet?
• What are the donor interests, what are they likely to fund?

[Diagram: the target area of program design lies at the intersection of Donor Interest, Local Needs & Priorities, and Mercy Corps Capabilities.]

Participatory Approaches
The best situation assessments are those that include the highest level of participation
possible. Of course, time, resources and other factors will determine the nature of each
individual situation assessment. In some cases, our expatriate and national staff will meet
to brainstorm on a situation and possible responses. Whenever possible, national NGO
and other stakeholder partners will be invited to join the process. In the best cases,
Mercy Corps and national NGO partners will be able to conduct focus groups, key
interviews and surveys to help develop our understanding of the problem and the most
appropriate solutions. In all cases, our assessments should also make use of additional
information from donors, other international and national NGOs, the UN, local and
national government, and the press.

During the design phase, we normally consider a range of possible interventions, target
areas and partners, some of which we discard as unworkable based on what we learned
during the assessment. Project designs (and proposals) state pretty clearly the reasons for
what we have decided to do. But they generally don’t include much discussion of what
we decided NOT to do. This information can be vital for staff who come on board after
the design has been completed. By retaining all our assessment data and analysis, we help
those who come after us understand the “big picture” and avoid reinventing the wheel.

Key Point: Assessment teams should always retain assessment information, even when it doesn’t
find its way into a design document or proposal.

Tools: The following are several tools for conducting assessments in a variety of
situations. Most are included on the DME Training CD-ROM and on the Mercy Corps
Digital Library.

• ASSETS – Mercy Corps’ assessment tool, primarily directed at emergency
(non-conflict) situations. Also contains guidance on assessing partner NGO
capacity.
• SPHERE – The Sphere Handbook provides extensive information about
assessment, also most pertinent to emergency situations.
• DFID’s “Conducting Conflict Assessments: Guidance Notes” – a good tool
for conflict situations.

#2) Setting our Goal. Once we have identified the problem or challenge through an
assessment, we move on to thinking about what kind of change we need to make in order
to improve the situation. If our assessment reveals a high mother/infant mortality rate, for
example, our desired future might be “A healthy mother/infant population” for our target
communities. This becomes our project’s “goal.”

A Goal is: A simple, clear statement of the “impact” we want to achieve with our
project, the change we hope to bring about in the target population’s standard of
living. This may be quantifiable, but it doesn’t have to be. It may not be something that
Mercy Corps can even do alone! It is simply our big-picture purpose for doing the
project.

Key Point: Remember to think in terms of impact rather than activities. This is usually written as an
“end-state” rather than an action. That’s why our goal is “a healthy mother/infant population”
rather than “to build maternal health clinics and train staff”.

Creating a working goal is an iterative process that may require several attempts. It’s easy
to think of a goal in terms of existing priorities – the proposal parameters, current
activities, or departmental initiatives. But do your best to think big and creatively, relying
on the assessment results to guide a bold vision of the future. While we shouldn’t be
intimidated by a big goal, we should also make sure that it is of correct scope.

In general, a goal should:

• Be reachable (although perhaps not by Mercy Corps alone)


• Be within Mercy Corps’ capabilities and country needs (as identified in
the assessment process)
• Fit with Mercy Corps’ mission and civil society values
• Be defined by our desired “impact” not by our activities.

Look Out! A common mistake is to describe the entire project in the goal statement including the
desired impact, our expected results and our methods for achieving them. These are important
design elements, but each has its own place in the log frame. Trying to include them all in the goal
only makes for long, confusing statements that do not help us focus or communicate our logic.

Examples of Goal Statements:

Bad • Improved Quality of Medical Care – with Special Emphasis on the Needs of
Mothers and Children – Achieved Through Public Awareness Campaigns, Targeted
Health Worker Training, New Clinic Construction in Under-Served Areas and the
Provision of Key Supplies determined through Participatory Assessments
(“Improved Quality of Care” is too vague, the rest is too specific and will be covered
in the Log Frame)

Bad • Provide training to Mothers and Children to make them more healthy (Really more
of an activity rather than a goal)

Fair • Healthy Mothers and Children (A fine, but not very reachable, vision of the future)

Good! • A Healthy Mother/Infant Population in Rural, Southern Country X – (Good
balance of the ambitious, specific & reachable)

Choosing a goal is the first step in completing a log frame (see example below). This
should serve as a good starting point for our design, although we may revise the goal as
we identify the remaining pieces of the log frame.

GOAL (Ask: What is the impact we want to achieve? What does our community look like if we are successful?):
    Healthy Mothers and Infants in our target population
OBJECTIVES (Ask: What are the desired effects on people’s knowledge, attitudes and behaviors?): [to be completed]
KEY OUTPUTS (Ask: What final goods and services will we provide?): [to be completed]
MAJOR ACTIVITIES (Ask: What daily efforts contribute to our outputs?): [to be completed]
INDICATORS (Ask: How will we know if we have achieved our Objective?): [to be completed]
#3) Choosing Objectives. The next task is to determine what changes will be necessary
in order to achieve our goal. These are the program’s objectives. Typically, we will ask,
“What “effect” on people’s lives do we want to achieve?” and, “What has to happen to
make the goal a reality?” Often, to meet our project’s goal, we must facilitate a change in
the target communities - these will be changes in a population’s “knowledge, attitudes
and behaviors.” Creating a complete set of objectives is the second step in the logical
framework process (see example below).

Key Point: Again, we should avoid thinking of our objectives in terms of activities,
focusing instead on the “effects”, the end-state we’d like to see as a result of our
activities.

Since these changes need to happen to make our goal a reality, they need to be things that
we can commit ourselves to achieving. Remember, our goal is something that we can
contribute to but not necessarily achieve all by ourselves. Our objectives, on the other
hand, are things that we believe we can accomplish through our project. And since we’re
committed to achieving them, we need to define them in a way that lets us know when
(and if) we’ve been successful. In general, objectives should be “SMART” which means
they should be:

Specific
Measurable
Achievable
Relevant
Time-bound (meaning they have a clear beginning and end).

Examples of Objectives – based on a Goal of “Healthy Mothers and Infants in our Target
Population”:

Bad • “Improved Knowledge of Maternal Health Issues” (Not SMART, although it is an
important effect)

Bad • “Build 7 New Health Clinics by the End of the Project” (SMART, but more of an
output/activity than an objective. We really want our objective to be “Mothers
visiting the health clinics”)

Good!• “75% of mothers make 2 prenatal visits to a quality health center by end of project”
(SMART, and a lasting effect on our target population)

GOAL (Ask: What is the impact we want to achieve? What does our community look like if we are successful?):
    Healthy Mothers and Infants in our target population
OBJECTIVES (Ask: What are the desired effects on people’s knowledge, attitudes and behaviors?):
    1. 75% of expectant mothers make 2 pre-natal visits to a clinic by the end of our project
KEY OUTPUTS (Ask: What final goods and services will we provide?): [to be completed]
MAJOR ACTIVITIES (Ask: What daily efforts contribute to our outputs?): [to be completed]
INDICATORS (Ask: How will we know if we have achieved our Objective?): [to be completed]

The log frame above shows just one objective as an example. Normally, a complete log
frame would require several objectives. See Appendix A for an example of a completed
logical framework. In addition to being SMART, we need to make sure our objectives
are:
• Logically correct (Do they lead directly to our goal?),
• Comprehensive (Did we leave anything out that we need to do to achieve
the goal?)

The Project Logic is More Important than Definitions


As mentioned above, in most cases our goal will relate directly to the “impact” we want
and our objectives will yield results at the “effect” level. In some cases, this might not be
possible, especially in short-term or emergency projects. As an example, an emergency
food distribution program may not yield “effects”. The program will be successful if we
deliver food to the people in need (there is no “change in behavior” we are trying to
produce, other than a more complete diet). The most important thing to keep in mind is
the idea that each step is logically connected and helps us fulfill our goal.

#4) Outputs. These are the things, the “goods and services” that need to exist in order for
us to achieve our objective. They are usually our “final products” or “deliverables” that
we provide to create the effects that we seek.

Differentiating between Objectives and Outputs


The difference between objectives and outputs is sometimes confusing. To keep the
distinction clear, try thinking of outputs as those things that we are certain we can deliver
as a direct result of our actions. For example, 25 rehabilitated clinics, 100 trained medical
staff or a public education campaign are things we can produce directly through our own
efforts. With objectives, on the other hand, we are more dependent on the actions of
others. We are making an assumption, a “leap of faith”, that if we provide these outputs,
other people will respond in a certain way. So for example, we’re assuming that if we
provide our outputs (rehabilitated clinics, more trained staff and information on the
importance of pre-natal care), we will achieve our objective (women change their
behavior and begin visiting clinics during pregnancy). We assume this is true (and we
should have good evidence to support our assumption) but we cannot compel the women
to change their behavior nor guarantee that they will do so. Several examples of these
assumptions are listed below.

Output (good or service)          Objective (effect)
Conduct Training          →       Changes in behavior
Give micro-loan           →       Sustainable income generation
Provide school feeding    →       Increased school attendance

(Each arrow represents an assumption or “Leap of Faith.”)

In each of these examples, we are making an assumption that the piece of the project that
we are responsible for delivering (training, loan, or school feeding) will actually cause a
change. We should carefully examine each logical framework and design to make sure
that our assumptions are valid for our context.

#5) Activities. These are the daily chores we need to implement to achieve the outputs. A
log frame only needs to list the principal activities, those things that explain in broad
terms how we will operate. In a detailed work plan, these activities would be further
broken down into smaller steps.

Differentiating between Outputs and Activities


Activities are often confused with outputs. The primary difference is that outputs are
usually finished products (the final result of our activities). Activities are the actions that
must be carried out on the way to those outputs. Again, for some simple programs, the
differentiation may be very slight.

GOAL (Ask: What is the impact we want to achieve? What does our community look like if we are successful?):
    Healthy Mothers and Infants in our target population
OBJECTIVES (Ask: What are the desired effects on people’s knowledge, attitudes and behaviors?):
    1. 75% of expectant mothers make 2 pre-natal visits to a clinic by the end of our project
KEY OUTPUTS (Ask: What final goods and services will we provide?):
    1. X minutes of maternal health info on radio
    2. X clinics rehabilitated
    3. X clinic staff successfully trained
MAJOR ACTIVITIES (Ask: What daily efforts contribute to our outputs?):
    1. Design public information campaign, including radio spots
    2. Identify clinic needs and carry out rehabilitation works
    3. Design and implement staff training on basic maternal health
INDICATORS (Ask: How will we know if we have achieved our Objective?): [to be completed]

#6) Indicators. Indicators are units of measure that demonstrate our success in
implementing our project. Indicators can be attached to each element of our log frame,

but we are particularly interested in identifying good indicators for our objectives. For
example, a good indicator for Objective One of our health project might be “% of
mothers who attend at least two prenatal visits.”

It is important to determine good indicators as early as possible in the life of the project.
In the best case, we should do this during the design phase. That allows us to make sure
the information is available and develop a plan for gathering it. Most critically, it means
we can make sure the project work plan and budget include adequate resources (staff,
time, or funding) to gather the information. In some ways, poorly thought-out indicators
are worse than no indicators at all because they:

• May be impossible to measure
• Produce inaccurate information
• Waste resources by tracking unnecessary info

The Indicator Plan (see Appendix C) is a useful tool for defining quality objectives and
indicators, as well as providing the beginnings of a plan for baseline data collection. It
helps us define what our indicators mean in relation to what they are supposed to
measure, their relevance to the project, and why we chose them. Completing an Indicator
Plan requires us to think about how we will get the information, from which sources and
on what schedule. Therefore, careful consideration of an Indicator Plan is the best way to
ensure that we have chosen good indicators that we can actually track. This tool is
especially useful for clarifying indicators that may be hard to measure. For example, how
will we know if we have achieved intangible things like “increased capacity” or “reduced
tension”? Using the indicator plan will help us define these concepts and communicate
how we’ll measure them.
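
As an illustration only (the actual template is in Appendix C), one row of an Indicator Plan might be captured along the lines of the sketch below; the field names, data source and schedule are assumptions, not requirements.

    # A minimal sketch of one Indicator Plan row; every value here is illustrative.
    indicator_plan_row = {
        "objective": "75% of expectant mothers make 2 pre-natal visits to a clinic "
                     "by the end of the project",
        "indicator": "% of expectant mothers making 2 pre-natal visits to a clinic",
        "definition": "Share of expectant mothers in the target area recorded as "
                      "attending at least two pre-natal consultations",
        "data_source": "Clinic patient registers",            # assumed source
        "collection_method": "Quarterly review of records by field monitors",
        "frequency": "Quarterly",
        "responsible": "Health program officer",              # assumed role
    }

    # Printing the row shows the questions the Indicator Plan forces us to answer.
    for field, value in indicator_plan_row.items():
        print(f"{field}: {value}")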

Key Point: The Indicator Plan is based on a similar chart commonly used in USAID performance
monitoring plans and often required by them for proposals and/or work plans.

Because we are aiming for SMART objectives, many of them actually contain the targets
(and indicators) already! For the objective “75% of mothers attend at least two prenatal
visits" the indicator will be “% of mothers who attend at least two prenatal visits.” The
target would be “75% of mothers by a specific date” – probably the end of the project.
We will talk more about targets in the work plan section below.

Look Out! Indicators vs. Targets. Indicators are often confused with “targets” (sometimes called
“benchmarks” or “milestones”). Remember:
• Indicators tell us what we want to measure. They are units of measure only.
• Targets have a specific value attached – usually a number and/or a date – and help us track our
progress.
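
A tiny sketch, using the maternal-health example and assumed field names, may help keep the distinction straight:

    # An indicator is only a unit of measure; a target attaches a value and a date.
    indicator = "% of expectant mothers making 2 pre-natal visits to a clinic"
    target = {"indicator": indicator, "value": 75, "due": "end of project"}  # 75% is the example target

    print("Indicator:", indicator)
    print("Target:", target["value"], "% by", target["due"])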

The “Right” Number of Indicators


Choosing the right objectives and indicators can be difficult. First, we don’t want too
many (because measuring them takes time, money and other resources). However, we
also don’t want to have so few that we can’t really tell if we’ve made any progress or not.
For each possible indicator, think about how difficult it will be to gather the info and
whether the level of difficulty (and expense) is justified by the importance of the data.
Our intention is to have an “elegant” M&E system that collects enough data to meet our
needs but that does not waste time collecting unnecessary information.

Objectives and Indicators That Ensure Quality


Since we want high quality results, we should strive for objectives and indicators that
measure our highest impact wherever possible. For example, which objective aims for the
higher level of effect?

A) “75% of women attend meetings about the importance of pre-natal care”, or
B) “75% of women make 2 pre-natal visits to clinics”

In most cases, we would want to aim for the second and more important result. Of course,
our objectives will always be determined by a complex set of factors including
availability of information, length of the project, budget and staff time. But we should
always attempt to measure the deepest and most profound results possible.

GOAL (Ask: What is the impact we want to achieve? What does our community look like if we are successful?):
    Healthy Mothers and Infants in our target population
OBJECTIVES (Ask: What are the desired effects on people’s knowledge, attitudes and behaviors?):
    1. 75% of expectant mothers make 2 pre-natal visits to a clinic by the end of our project
KEY OUTPUTS (Ask: What final goods and services will we provide?):
    1. X minutes of maternal health info on radio
    2. X clinics rehabilitated
    3. X clinic staff successfully trained
MAJOR ACTIVITIES (Ask: What daily efforts contribute to our outputs?):
    1. Design public information campaign, including radio spots
    2. Identify clinic needs and carry out rehabilitation works
    3. Design and implement staff training on basic maternal health
INDICATORS (Ask: How will we know if we have achieved our objective?):
    1. % of expectant mothers making 2 pre-natal visits to a clinic by the end of our project

Sector Specialists and Standard Indicators. Mercy Corps’ sector specialists are the
primary resource available to MC staff for selecting the most appropriate indicators for
specific types of projects. They may be able to refer you to existing banks of standard
indicators. If pre-existing indicators are not available, the Sector Specialists (including
the New Initiatives DM&E “help desk”) can assist you to develop your own or adapt
indicators used by similar projects elsewhere.

Why use a standard indicator?


• To save time
• Because they accompany a specific project methodology
• Because they can add legitimacy or objectivity to your monitoring results
• To allow your results to be aggregated with (or compared to) other projects working
toward a common goal and using shared indicators.

The Sphere Project Humanitarian Charter and
Minimum Standards in Disaster Response
The Sphere project is an important example of how to use existing “best practice”
indicators to improve project design and performance. Sphere is the primary resource for
Mercy Corps disaster response programming (and is fully compatible with this
Guidebook). Sphere and other such resources are key examples of improving program
effectiveness using recommended indicators that have been endorsed through
unprecedented consensus by respected industry professionals. Use of such best practice
indicators can be very helpful when coordinating programs in a complex multi-agency,
multi-donor environment.

Participatory Approaches Help Define “Fuzzy” Indicators


Focus groups with key staff, target groups and other stakeholders can also help you
develop indicators (and SMART Objectives) that are relevant for your particular
circumstances. Only the participants themselves can define what success would mean for
them and they can suggest ways that information can be collected or measured.

This is especially true for those “fuzzy” objectives that are hard to quantify and measure.
For example, if your project aims to “revitalize” communities following a man-made or
natural disaster, how will you define and measure that? You might begin by asking
members of the target group what a “revitalized” community would look like, asking
them how they would define or measure their own community’s vitality. Answers to
these questions would help you define your objectives and indicators in a way that is
appropriate to your location and ensures that your project is meaningful for the target
community.

Mercy Corps Case Study – Participation and Indicators


Participant interviews and focus groups conducted for the Community Revitalization
through Democratic Action (CRDA) program in Serbia revealed that many residents felt
a key indicator for the revitalization of their communities would be the “# of community-
organized cultural and sporting events.” These activities had previously been a valued
part of community life in our target region but had disappeared as government repression
and economic hardship had caused many residents to turn inward, shunning their
neighbors and focusing on their own survival. Their renewal, residents argued, would be
as important an indicator of “revitalization” as the more predictable indicators like “# of
new social services” or “% increase in employment.”

Baseline Data
This is the set of data you collect on your indicators at the very beginning of a project (or
as soon after the beginning as possible). It provides you with a starting point to measure
against. The baseline is different from an assessment that potentially will collect a wide
variety of social, economic and political information. While an assessment attempts to
provide the “big picture” about conditions in a target area, the baseline focuses on the
state of our indicators at the beginning of our project.

Look Out! Baselines vs Assessments! Two more terms that are often confused. Remember,
baselines are collected only on information needed to track progress toward our targets.

To put it in very simple terms, imagine that you are starting a new diet.
• Maybe your goal is to become healthier.
• Your primary objective is to lose 5 kilos.
• Your indicator therefore would be “# of kilos lost”.

In order to measure your progress you need to know how much you weigh before
beginning the diet. That’s your baseline. You can check your weight each week and see
how close you are to meeting your objective.

In the case of our maternal health example, our main indicator is “% of pregnant women
who make 2 pre-natal visits to a clinic.” Our baseline would tell us what percentage of
women were already making such visits before our project began. We could measure the
same thing at the end of the project, and that would show us our result, allowing us to
measure whether or not we had achieved our objective.
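
A small worked example, using the 10% baseline figure from the work plan discussion later in this section and a purely hypothetical end-of-project measurement, shows how the comparison works:

    # Hypothetical numbers: only the 75% target and the 10% baseline come from this section.
    baseline = 0.10        # % of women already making 2 pre-natal visits at project start
    target = 0.75          # objective: 75% by the end of the project
    end_of_project = 0.68  # assumed endline measurement, for illustration only

    change = end_of_project - baseline
    print(f"Change since baseline: {change:.0%}")
    print("Objective achieved" if end_of_project >= target else "Objective not achieved")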

Key Point: You should generally plan on and budget for baseline data collection. Since baselines
are important in showing progress, you should plan on gathering this data unless there is a
compelling reason for NOT doing so.

#7) The “Reasonableness Test”


At this point, we should have most of the ingredients of a successful project design and a
nearly complete log frame and indicator plan.

The “Reasonableness Test” helps us consider our draft design and see where we might
need to refine it. Whenever possible, it’s best to have some colleagues who have not
worked directly on the design help you review your log frame and indicator plan. Key
questions at this stage include:

a. Does the flow of ideas seem logical and reasonable?
b. Are the log frame and indicator plans complete? (Did we leave out any critical
activities, outputs, or objectives?)
c. Do the outputs reliably contribute to the objectives and address the “Leap of
Faith” discussed in Step 4?
d. Do the objectives and indicators appear measurable and achievable?
e. Does the design reflect Mercy Corps’ civil society values of participation,
transparency, and peaceful change?
f. Does the log frame correspond to the intersection of the community’s needs
and Mercy Corps’ capabilities?
g. Does the design permit successful and regular monitoring and ultimate
evaluation?
h. Does the design account for any likely barriers or challenges to the completion
of the objectives?

If the answers to these questions are “Yes!” and other key design stakeholders (managers,
team members, partners & beneficiaries) concur, then we have completed a successful
conceptual design.

#8) Completing a Work Plan


At this stage, it’s time to begin planning to make our proposed project a reality through
the construction of a work plan. While the log frame is used to focus our thinking and
communicate it to others, the work plan is the step-by-step outline of how we’ll
implement those ideas. We’ll refer to it frequently during the life of the project to plan
upcoming activities, make resource allocation decisions and to monitor our performance
against objectives and targets.

Every Work Plan should:


• Identify key tasks
• Set targets for our indicators and key management tasks
• Determine staff members responsible for achieving them
• Articulate the monitoring and evaluation schedules
• Allow us to clearly report performance

Targets
These are key elements of the work plan that define where you plan to be at certain points
in the life of the project. We should set project targets that relate to our objectives and
achievement of our overall goal. In addition, we should also identify key management
activities that need to happen and set targets for achieving them. By having well-defined
targets at various stages of our work plan, we are better able to gauge our progress (or
lack of it) on a variety of levels and make timely changes (where needed) to keep our
project on track.

Objective Level Targets


We start by filling in targets that are related directly to our objectives. Let’s say our
maternal and children’s health project will last two years. One of our objectives is that
“75% of mothers attend at least two prenatal visits by the end of the project.” Let’s say
that our baseline is 10% of mothers currently access clinics for prenatal visits. If part of
the problem is that there are not enough clinics and trained staff available, it would not be
reasonable to expect to reach our 75% target in the first twelve months. First, we’ll need
to construct more clinics and identify and train clinic staff. So perhaps a reasonable set of
targets for this objective would be:

• 50% of mothers attend pre-natal visits after 12 months, and
• 75% attend pre-natal visits by the end of year two.

These targets are like landmarks to let us know we are on the right track to our final
destination.
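
A minimal sketch of how those landmark targets could be checked against monitoring data follows; the “actual” figures are invented for illustration.

    # Targets from the example above; the monitoring results are assumed values.
    targets = {12: 0.50, 24: 0.75}    # month -> % of mothers attending pre-natal visits
    monitored = {12: 0.44, 24: 0.78}  # hypothetical monitoring results

    for month in sorted(targets):
        planned, actual = targets[month], monitored[month]
        status = "on track" if actual >= planned else "behind target"
        print(f"Month {month}: target {planned:.0%}, actual {actual:.0%} -> {status}")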

Management Level Targets


We should also set more general management targets. For example:
• how long should it take to set up field offices?
• complete the hiring of new staff?
• train them in new skills?
• carry out any remaining baseline data collection?
• at what date should we expect to complete development of clinic staff training
materials?
• when should we release tenders for construction work or procurement of equipment?
• when must we complete construction or take delivery of equipment?

Setting targets for these activities helps us monitor progress over the life of the project.
Also, in some cases, failure to meet our objective level targets is caused by an inability to
meet key management targets. If we state both kinds of targets clearly in our work plan, it
will make it easier to determine where things went wrong and learn how to avoid them in
the future. This is especially helpful in situations where we are likely to have high staff
turnover over the life of the project, where the management team at the end of a project is
totally different from the one that designed it and began implementation.

For example, perhaps we planned to get baseline data in Month Two but didn’t actually
complete this until Month Eight. Let’s say the reason was that we planned to buy
motorbikes for field monitors to carry out surveys in remote areas. But the motorbikes
were unexpectedly held up in customs until Month 6. If we set the baseline collection
and motorbike targets in our work plan, it should be clear from our donor reports what
went wrong and why we missed our deadlines. Otherwise, at evaluation time, given staff
turnover, it may not be clear what the original reason for the delay was – and so we won’t
be able to learn how to avoid similar delays in the future.

Including M&E in the Work Plan


Make sure to include monitoring and evaluation activities in your work plan (and your
budget). Mercy Corps regards M&E as a vital part of project management. For example:
• who will be responsible for gathering baseline data?
• when will that task begin and how long will it take?
• who will conduct monitoring activities? How often?
• what resources will be required?
• when will staff meet to review monitoring data?

The same is true for evaluation:


• How often will evaluation activities take place?
• Who will be responsible for organizing them?

By defining these things in our work plan, we help ensure that they actually happen.
Also, by making M&E an integral part of our project activities, we prevent it from being
seen as something extra, something to do IN ADDITION to project duties.
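
As a rough illustration of how work plan tasks, responsible staff, due dates and M&E activities can be tracked together, the sketch below uses entirely hypothetical tasks and dates (it is not a prescribed Mercy Corps format).

    from datetime import date

    # Hypothetical work plan entries, including M&E tasks alongside implementation tasks.
    work_plan = [
        {"task": "Collect baseline data on pre-natal visits", "responsible": "M&E officer",
         "due": date(2003, 5, 31), "done": True},
        {"task": "Complete rehabilitation of first 10 clinics", "responsible": "Engineering team",
         "due": date(2003, 9, 30), "done": False},
        {"task": "Quarterly monitoring review meeting", "responsible": "Program manager",
         "due": date(2003, 6, 30), "done": False},
    ]

    reporting_date = date(2003, 7, 15)  # assumed date of the review
    for item in work_plan:
        if not item["done"] and item["due"] < reporting_date:
            print(f"Overdue: {item['task']} ({item['responsible']}, due {item['due']})")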

Look Out! Does the Work Plan Match the Indicator Plan? Make sure that data collection and
review activities that are described in the Indicator Plan are also adequately provided for in the
Work Plan.

Work plans require different levels of information for different purposes. For example, a
single page work plan is generally sufficient for a proposal. Most donors don’t want more
info than that prior to making a grant. On the other hand, such a simple work plan is
probably not detailed enough to be useful for project implementation. It is best to design
one that fits management needs and then make a simpler, more focused version to attach
to the proposal.

Key Point: The work plan – especially for multi-year projects – will probably need to be revised
from time to time to reflect new information or changing conditions. During the design phase, it’s
best to focus on details for the first 12 months and update the plan each year.

Conclusion
The table below summarizes the most important principles discussed in this section. You
may want to refer to it when designing or reviewing a new project or program.

DESIGN CHECKLIST
☐ Assessment Conducted
    ☐ A. Assessment data not used in the proposal is kept for future reference.
☐ Goal-Oriented Program/Project Design
    ☐ A. Design starts with defining a goal based on impact rather than activities.
☐ SMART Objectives
    ☐ A. Key steps in the project which logically, reliably contribute to achieving our goal.
    ☐ B. Describe an “end-state” and focus on “effects” (changes in behavior, attitudes or knowledge in our
       target population) rather than activities whenever possible.
    ☐ C. SMART – Specific, Measurable, Achievable, Relevant & Time-bound.
☐ Select Appropriate Outputs & Activities
    ☐ A. Logically, reliably contribute to our SMART objectives.
    ☐ B. Outputs represent our “deliverables” or final products for which we are responsible.
    ☐ C. Activities describe the key actions we’ll have to carry out to achieve our outputs.
☐ Identify Indicators
    ☐ A. Fewer, more direct indicators that measure performance against our objectives as well as outputs.
    ☐ B. Consider relevant standard indicators and consult appropriate sector specialists & other resources
       (such as the Sphere standards).
☐ Formulate Work Plan
    ☐ A. Include monitoring as a key management activity and make resources available to carry it out,
       including roles and responsibilities, and budgeting time for baselines, regular data collection, review and
       reporting.
    ☐ B. Include key management and implementation tasks, persons responsible and clear targets for
       achieving them so that we can track performance over time.
☐ Approaches
    ☐ A. A high degree of participation of expat and national staff, representatives of the target group,
       partner organizations etc. in the design of our strategy and in the implementation of the project.
    ☐ B. A focus on the highest level of impact or effects possible.
    ☐ C. All pieces of program design are logically and causally connected. (Logic is much more important
       than vocabulary.)
    ☐ D. An evidence-based approach that suggests our actions will be successful.
☐ Final Products From Design Phase
    ☐ A. Completed Log Frame
    ☐ B. Completed Indicator Plan for our SMART Objectives
    ☐ C. Completed Work Plan
    ☐ D. Folder containing assessment data
    ☐ E. Finished proposal, if applicable

SOUND MONITORING MANAGEMENT
In this section
Albert Einstein said, “It’s simple, but it’s not easy” in describing his theory of relativity.
While hardly relativity theory, the same may be said of our daily monitoring challenges
– plans that seem very straightforward on paper often break down in the complex
operating environments in which we work. This section describes the basic pieces of
monitoring and provides tools to assist in their smooth and effective implementation.

Key Point: Improving Mercy Corps’ monitoring practices – and acting on the results – may be the
single biggest opportunity to enhance our program impact worldwide. There is simply no substitute
for great information to generate first-rate learning and program management.

Sound design is only the first step in ensuring quality M&E. The good intentions that go
into designing our monitoring plans are sometimes forgotten during implementation.
Other times, data is collected but not analyzed or communicated well. To be successful,
monitoring plans have to be carried out and the information collected, reviewed and acted
upon. And all this needs to be clearly communicated to all stakeholders including project
staff, participants, partners, HQ and the donor.

The need to monitor is not always self-evident. Many field staff are so intimately
connected with their programs that they have (or feel they have) complete information on
which to make decisions. Practice has shown, however, that sound monitoring practice is
vital for the project’s field managers, headquarters staff, and other field personnel to
maximize learning in the many projects we undertake. There is no doubt that good
monitoring is an integral part of program management. Several key reasons for
monitoring include:

a. Program management - The best programs require sound information to
make management decisions about how to use scarce resources like staff time,
budget and equipment. At regular intervals, we need to know where we’re
doing well, where we’re lagging behind and why.
b. Institutional knowledge - Not all programs will be formally evaluated, and
Mercy Corps has a strong need to know that the design is working as planned
and what adjustments might be appropriate in the future for other programs.
c. Donor requests – Donors typically require compliance monitoring and value
(if not also require) performance data as well. While different donors vary in
the attention they pay to monitoring data, documentation of performance
against objectives is a standard that we should be able to deliver for all
donors.
d. Team morale – monitoring lets us know that what we do works on a real-time
basis. While many of our goals are large, multi-year efforts, monitoring is an
important tool in showing teams that we are making progress day-to-day.
e. “Evaluability” – regular monitoring makes the evaluation process much
easier by providing frequent performance updates that create a written
“history” of the project that will survive staff changes and a changing
environment. Monitoring data also provides guidance on what questions we
should ask during an evaluation, avoiding costly redundant information.
f. Unexpected Results – through frequent checks, we may also uncover
unexpected results. Good or bad, these surprises can only be unearthed and
addressed through rigorous monitoring.
g. Part of the job – performance monitoring is an integral part of good program
management.

Monitoring Defined
In simple terms, “monitoring” can be understood as a cycle of “regularly collecting,
reviewing, reporting and acting on information about project implementation. Generally
used to check our performance against ‘targets’ as well as ensure compliance with donor
regulations.”

Look Out! Monitoring is more than just data collection! The monitoring pyramid shows that
while most of our resources are used collecting the data, our efforts are incomplete unless we
review, report and adjust our strategy based on our information.

The “Monitoring Pyramid”


The Monitoring Pyramid demonstrates the basic features of the complete monitoring
function. Without the entire cycle, we risk falling short in the impact that our program
can have in our target communities. Particularly important in this model is that
information flows to all of the stakeholders in the monitoring process, ensuring
transparency and participation throughout the pyramid.

[Monitoring Pyramid diagram: from base to top – Data Collection, Reviewing the Data, Local Decision-Making, Program Reporting – with information flowing both up and down between all levels.]

Monitoring commonly serves two related functions:

Compliance Monitoring. This is the most basic level of project monitoring. It is carried
out to ensure that our staff and our partners and sub-grantees are conforming to donor
regulations and the requirements of our grants, sub-grants and contracts. Examples
include “end-use” checks in distribution projects. These are used to make sure that
intended beneficiaries are receiving the standard ration of food or supplies that they are

entitled to. In infrastructure projects, engineering staff make regular site visits to ensure
that construction firms are meeting the terms of their contracts and working to agreed
engineering standards.

Performance Monitoring. This is often carried out in conjunction with compliance
monitoring. Performance monitoring is data collection to check our progress against our
targets, to determine how well we are progressing against expected results. Also,
performance monitoring goes beyond compliance with regulations and often involves
measuring our project’s “effects.”

But to be effective, monitoring has to be more than just routine data collection. We must
also regularly review the data and (if necessary) revise our work plan in response.
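
To make the two functions concrete, here is a minimal sketch with invented figures; the 12.5 kg ration and the percentages are assumptions, not program standards.

    # Compliance monitoring: an "end-use" check that households received the
    # ration they are entitled to (12.5 kg/month is an assumed figure).
    standard_ration_kg = 12.5
    distributed_kg = {"household_A": 12.5, "household_B": 10.0}  # hypothetical records
    for household, received in distributed_kg.items():
        if received < standard_ration_kg:
            print(f"Compliance issue: {household} received {received} kg, "
                  f"expected {standard_ration_kg} kg")

    # Performance monitoring: progress against a target from the work plan.
    target_pct = 0.50  # e.g. 50% of mothers attending pre-natal visits by month 12
    actual_pct = 0.44  # assumed monitoring result
    print(f"Performance: {actual_pct:.0%} against a target of {target_pct:.0%}")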

Improving Monitoring Efficiency

While it’s easy to describe the benefits of monitoring, more challenging is carving out
time and resources amid many competing priorities to actually complete the monitoring
tasks. And although we want to conduct regular, complete monitoring, these activities
should certainly be conducted in a practical, efficient way. We’ll discuss a few tips for
getting more out of Mercy Corps’ monitoring efforts.

While the worst mistake is a complete lack of a monitoring plan, a more frequent
problem is attempting to monitor a project with too many indicators. We recently read
two Mercy Corps proposals that included more than 70 indicators each! This poses a few
problems. First, such monitoring plans demand more time and resources than most teams
can sustain. Second, it is likely that not all of the information gathered is relevant, which
clutters the good information. Third, it is very difficult to process this much information,
even when it is useful.

To help manage the “Too-Many-Indicators Syndrome” and other time pressures
associated with monitoring, please consider the following tips.

Key Point: Four Tips to Make Monitoring More Effective.

#1 – Focus on just a few indicators. It’s worthwhile to distinguish between indicators
for our outputs/activities and our objectives. If we have a complete work plan, it will be
easy to track our progress on bigger deliverables. But it is also vital that we monitor
progress against our objectives. Find one or two indicators per objective that really
demonstrate our progress, and set a schedule to monitor those only once or twice a year.

#2 – Set up your Monitoring in your Work Plan. It is very difficult to make time for
the entire monitoring cycle (collection, reflection, decision making, reporting) unless it is
accounted for early on through the work plan (and budget). So make time at the
beginning for these activities.

#3 – Collect Baseline Data. For longer programs/projects, baseline data collection can
save you a lot of time. Apart from helping demonstrate success over time, baseline data
collection is a test of your indicators and time commitment up front and provides
information on the ease of monitoring the indicators you have selected.

#4 – Use an Indicator Plan to map out your information needs. The indicator plan is
an easy tool to help you manage your monitoring time. By investing a little time in the
Indicator Plan you will be able to: a) better define your indicators, b) narrow your
indicators to a manageable number, c) set up a thoughtful schedule for data collection,
and d) select your data collection methods. The process of completing the indicator plan
is a long-term time saver, but only if these plans are also reflected in the work plan so that
staff time and other resources are available to carry them out.
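
If a team keeps its indicator plan electronically, the plan can be stored as simple
structured data and reused when building the data-collection schedule. The sketch below is
a minimal, hypothetical illustration in Python; the field names simply mirror the Indicator
Plan columns in Attachment C and are not a required Mercy Corps or donor format.

    from dataclasses import dataclass, field

    @dataclass
    class Indicator:
        """One entry of an indicator plan (see Attachment C for the full format)."""
        objective: str      # the objective this indicator measures
        definition: str     # what exactly is being counted, and why it is useful
        baseline: float     # starting value, confirmed by the baseline survey
        targets: dict       # e.g. {"month 12": 50.0, "end of project": 75.0}
        methods: list = field(default_factory=list)  # surveys, records checks, interviews...
        frequency: str = ""     # when data will be collected
        responsible: str = ""   # who designs the tools and who collects the data

    # Example entry drawn from the maternal health sample in the appendices
    danger_signs = Indicator(
        objective="75% of mothers aware of at least two pregnancy-related danger signs",
        definition="Mothers can list two of the four danger signs in the program guidelines",
        baseline=22.0,
        targets={"month 12": 50.0, "end of project": 75.0},
        methods=["baseline survey", "final survey"],
        frequency="months 2, 12 and 24",
        responsible="Maternal Health Officer (design); Maternal Health Assistants (collection)",
    )
    print(danger_signs.targets["end of project"])  # prints 75.0

Keeping all indicators in one structured list like this makes it easier to notice when there
are too many of them and to pull the collection schedule straight into the work plan.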

Reporting Bad News


It is nearly impossible for a project to meet all of its targets, all of the time. The more
complex the project, the more likely that we won’t be able to fulfill all our targets. Falling
short of a target is not a “failure.” It’s natural. It will only be a “failure” if project staff
do not document it, reflect on it and design a response to improve the situation. We need
to adopt this attitude both internally and in our relations with sub-grantees and national
NGO partners. They should not be afraid to report “bad news.” To give them the
confidence to do this, we must clearly demonstrate a “partnership” attitude that does not
punish bad news but treats it as a challenge to be met together.

Tools: There is a whole range of methods for monitoring project performance and
compliance with Mercy Corps and donor regulations. These can include simple checklists
and short narrative reports for:
• direct observation of project activities
• meetings with partner organizations and sub-grantees
• checking partner/sub-grantee records
• individual interviews with project participants.
The mix of methods will depend on the type of project we are carrying out. Because the
structure of these tools will determine what information is collected and what is not, it is
important to devote sufficient thought to their design early in the project. The New
Initiatives DM&E team “help desk” can assist with this process by suggesting formats or
reviewing templates.

Reviewing the Data and Acting On It


Imagine flying a plane with no instruments. This is just like running a project without
monitoring. The point of instruments in an airplane is to help the pilot make the many
adjustments necessary to fly and land a plane safely. In other words, flight instruments
exist solely to allow the pilot to make any necessary changes in the plane’s course. The
same is true for monitoring. The primary reason we collect information about our
programs (other than to satisfy donors!) is to make “mid-flight corrections” – so that we
can improve the program as it unfolds.

Look Out! Don’t be afraid to change course based on your monitoring data. That’s the
primary purpose of monitoring. When we consistently take good data, review it and report it to
our donors, they should understand and respect any changes required for our work plan.

Part of using our data thoughtfully means reflecting on monitoring results individually,
with Headquarters, and, most importantly, with the local team and partners. Regular
meetings to review monitoring data should:
1. Clearly compare expected and actual results (a minimal sketch of such a comparison follows this list).
2. Identify reasons for lower than expected results (if applicable).
3. Outline a plan of action in response to the results.
4. Communicate that information to stakeholders – especially to partners and
participants.
5. Be included as a key activity in the work plan.
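
To illustrate step 1, the short Python sketch below compares actual results against targets
for a handful of indicators and flags those that need a response. The figures and indicator
names are invented for illustration; this is not a required Mercy Corps tool.

    # Targets and actual results gathered for a quarterly monitoring review (invented figures)
    targets = {
        "mothers aware of 2 danger signs (%)": 50,
        "mothers attending 2 prenatal visits (%)": 50,
        "mothers with risk signs receiving EOC (%)": 40,
    }
    actuals = {
        "mothers aware of 2 danger signs (%)": 47,
        "mothers attending 2 prenatal visits (%)": 52,
        "mothers with risk signs receiving EOC (%)": 28,
    }

    for indicator, target in targets.items():
        actual = actuals[indicator]
        status = "on track" if actual >= target else "below target - identify reasons and agree a response"
        print(f"{indicator}: target {target}, actual {actual} -> {status}")

The code matters less than the discipline it represents: every review should end with an
explicit comparison, a reason for any shortfall and an agreed next step.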

Lack of time may not be the only reason some projects do not analyze and act on
monitoring info. Many staff are intimidated by the concept of “data analysis” which
conjures up images of complex statistical models. We’re really talking about “reflection”
– looking at our monitoring data and asking, “What does this mean for our project?” and
“What conclusions can we draw from this information?”

Communicating Monitoring Data and Conclusions


To ensure that lessons-learned are effectively shared and routine management can take
place, regular project reports should clearly communicate all the key elements of good
monitoring:
• expected and actual results
• reasons for missing targets (where applicable)
• an outline of “next steps” based on the analysis of results

This is true whether it is a partner or sub-grantee reporting to us or a field office reporting
to the donor or to Headquarters. There is no pre-set frequency for
reporting. Generally, our donors have their own specific requirements for program
reports. We set our own requirements for our partners and sub-grantees, usually based on
the donor’s guidelines. In addition, each country program is required to submit Program
Reports to Portland and our Board of Directors on a regular basis. But no matter what the
reporting requirements are, as a general rule, data collection should occur at least on a
monthly basis and review of the data should take place at least once a quarter. For larger,
more complex objective indicators, we may collect data (through surveys, observation,
focus groups) less frequently, but should still collect and review at least twice a year.

The “monitoring pyramid” diagram at the beginning of this chapter reminds us that
monitoring information should not just “filter up” but also “flow back down.” When
decisions are made at higher levels based on monitoring results, the reasons for those
decisions should be clearly communicated to the people they will most affect: the field
implementation staff, project participants and partners. This two-way communication:
• facilitates transparent decision-making
• is more appropriate to a partnership relationship
• helps build participant capacity
• should improve the quality of monitoring since those who do the footwork will
understand clearly why they’re collecting certain data and how it is used. 5

Participatory Monitoring
In general, monitoring should be as participatory as is practical. This includes not only
the collection of information but also analysis and determining appropriate responses. By
including national staff and partners in this process, we make sure we have a more
complete understanding of the situation. Often a problem looks different from the
perspective of an MC country office than it does from the field or from the perspective of
a national NGO partner. Our analysis will be better and our responses more appropriate
if these perspectives are included. In addition, participatory monitoring also:
• builds staff and partner capacity to analyze situations and determine responses
• helps hold us accountable
• helps participants understand the relief and/or development process and gives them a
voice in how it is implemented. It helps convert them from passive “beneficiaries” to
active “participants” in our projects
• gives participants a share in the responsibility for implementing the project. They are
now part of the implementation team and have to take on more responsibility for
project outputs

Tools: A number of monitoring and project report formats are available on the Digital
Library. The “help desk”, Program Officers and Sector Specialists can also help
determine appropriate ways to capture specific types of information for different kinds of
activities.

MONITORING CHECKLIST
The Process
□ A. Regularly collect, review and report on data related to all project indicators, targets and other donor
requirements according to the work plan.
□ B. Clearly compare actual results against targets during review of monitoring data.
□ C. Use the data to refine the project approach (as necessary).
Reporting
□ A. Clearly reflect actual and planned performance for each objective, analysis of the results and plans
for next steps in all project reports (to donors, HQ and others in the “monitoring pyramid”).
Approaches
□ A. Missing a planned target is not viewed as a “failure”. Failure is defined as failing to capture this
info, draw conclusions and act on them.
□ B. Monitoring is as participatory as possible (including review of the data).
□ C. Give attention to the quality of the data. How good is the information?

5. Research and experience demonstrate that monitoring systems tend to collect better
quality information when staff and participants understand how the information will be
used.

CRITERIA FOR A USEFUL EVALUATION
In this section
This section provides guidance on deciding “what, when and how” to evaluate and
explores the difference between “monitoring” and “evaluation”.

Evaluation Defined
An evaluation, for our purposes, refers to an in-depth, retrospective analysis of an aspect
(or aspects) of a project that occurs at a single point in time. It is generally intended to
measure our effects and impact and examine how we achieved them. This process also
captures our experience so that future projects can learn from it.

Monitoring vs. Evaluation


Monitoring and Evaluation are closely related activities that both involve collection and
analysis of information. However, evaluation is generally more focused and intense than
monitoring and often uses more time-consuming techniques such as surveys, focus
groups, interviews and workshops. While monitoring is a continuous process, evaluation
is normally a discrete event that takes place once or twice in the life of a project.
Monitoring focuses mostly on whether or not we’re achieving our targets. Evaluation, on
the other hand, should better answer the “why” and “how” questions: “Why are we
getting these results?” and, “How did we achieve them?” Despite their similarities, the
purposes of these exercises are quite different. Evaluation cannot take the place of a
sound monitoring system. In fact, an evaluation generally relies in part on the data
accumulated over time by the monitoring process in order to draw conclusions about
project performance.

Evaluation Purposes
Mid-term and final evaluations are the most common types in the NGO world. Mid-term
evaluations are used to 1) measure the effectiveness of the project and 2) determine
changes that might need to be made to improve effectiveness for the remainder of the
project.6 Final evaluations in the NGO world generally take place in the final months of
a multi-year project. These evaluations are generally designed to 1) measure the effects
and impact of a project and 2) draw conclusions about lessons-learned for future
projects.7

Evaluations may be further divided into those that measure impact and those that
examine process. In general, impact evaluations seek to determine the results of a project;
its impact or effects on the target population. Process evaluations look more closely at
management practices and different approaches to implementation. Often, mid-term
evaluations are mostly process-focused while final evaluations look more at impact. In
practice, evaluations generally combine elements of both impact and process evaluations.

6. These are often referred to as “formative” evaluations because they examine how a project is
implemented and make suggestions about the form of future activities.
7. These are also known as “summative” evaluations because they “sum up” project experience.

Key Point: Mercy Corps views evaluations primarily as a learning tool rather than an “audit” of
people or their projects. Their main purpose is to help us learn about our projects, share that
information and improve performance in the future.

Internal, External and Participatory Evaluations


Opinions differ on the need for external control over evaluations in order to ensure
objectivity. At one extreme, some larger institutions like the World Bank maintain
evaluation departments that are separate from program implementation teams. USAID
pursues a more moderate approach and often commissions evaluations that are led by an
external consultant but that also involve key agency staff. Some organizations take a
more inclusive approach and rely on project staff to design and conduct their own
evaluations, generally with significant participation by other stakeholders and the target
communities. Those in the first group are focused mainly on the need to objectively
document project results. Those in the final group are focused more on learning and the
desire to hold projects accountable to the participants.

In keeping with our core principles (including participation, accountability and peaceful
change), Mercy Corps’ projects should steer a middle course between these two
extremes. The purpose of the evaluation and donor requirements will normally determine
the exact composition of an evaluation team. The inclusion of an external evaluator is a
good way to ensure a certain healthy distance and objectivity in the evaluation. This can
be an outside consultant, a Mercy Corps HQ person or a staff member from a different
field location, so long as they have no direct stake in the outcome of the evaluation. The
bulk of the evaluation team should be made up of project staff from senior managers
down to individual international and national project officers.

Finding an external evaluator or facilitator is a critical step in most evaluations. Key
characteristics to weigh when selecting an external evaluator include:
• Familiarity with our type of project
• A background in the type of evaluation that will be carried out
• A knowledge of the local environment
• Flexibility in meeting Mercy Corps’ evaluation needs.

No matter what the level of external involvement, evaluations should include project
participants and other key stakeholders whenever possible, including in the design of the
evaluation, its implementation and the analysis of the results. This will help ensure
objectivity, accountability and learning because:
1) the inclusion of many stakeholders helps keep one perspective from dominating the
evaluation
2) through participation in the design and analysis phases, project participants get a
better overall sense of how the project performed and why
3) project staff and other participants are more likely to accept, internalize and “own”
evaluation findings that they reach themselves
4) it provides capacity building to partners and participants

The need for participation should always be balanced against the particular challenges
and constraints this type of evaluation involves. These include:
1) The costs involved. A participatory evaluation can be very time consuming. It
requires more effort to manage the higher level of input and demands good
organization to get effective results.
2) Objectivity concerns. An over-reliance on participation can lead to something more
like a “self-evaluation” that lacks the healthy balance of an outside perspective.

The chart below lists some of the major types of evaluation and suggests some of the
costs and benefits associated with each. These are not mutually exclusive categories, and
an evaluation design will often combine several of them.

Who Does the Evaluating?

Internal – involves only project staff and participants (perhaps with a facilitator).
Benefits: focuses mostly on learning; can be relatively inexpensive; ensures that staff “own” the process and results.
Costs: viewed by outsiders as less objective.

Mixed Team – involves project staff plus an outsider, usually in a central role.
Benefits: focuses on learning; also provides more objectivity and validation of results.
Costs: added expense of outside participation.

External – led and implemented mostly by outsiders.
Benefits: higher perceived level of objectivity; results more likely to be accepted by “outsiders”.
Costs: added expense of outsiders; staff and participant knowledge not as central to the design or analysis; learning may be reduced and staff less likely to “own” the results.

Participatory8 – involves our participants and other stakeholders.
Benefits: more perspectives help keep us objective; holds us accountable; participants learn more about the project and develop their own evaluation skills; can be combined with any type of evaluation.
Costs: more participation can make it harder to maintain focus; participation can require more resources (time, staff and money).

When Do We Evaluate?

Mid-Term – to assess performance and determine next steps.
Benefits: allows us to refine our approach for better performance or impact.
Costs: takes time away from implementation; not always possible for short-term projects; takes funds that could be spent on other activities, including monitoring or a final evaluation.

Final – documents our experience.
Benefits: occurs at the end of the project, when impact should be more obvious; allows us to capture our impact and/or lessons-learned.
Costs: information comes too late to affect the current project.

What Do We Evaluate?

Process – documents our systems, methods, tools etc.
Benefits: helps identify systems, tools and methods that need improvement; captures details of systems that can be useful for other implementers.
Costs: usually cannot tell us much about impact.

Impact – documents our effects or impact.
Benefits: tells us and others what we achieved; lets us know if we have achieved our goal and objectives; helps identify effective approaches.
Costs: may not tell us enough about how we achieved impact.

8. Any type of evaluation (internal, mixed team or external) can and should involve participatory elements.

When and What to Evaluate
As discussed above, not all projects need a formal evaluation involving an outside
evaluator. In general, formal evaluations may be more appropriate for projects lasting two
or more years. In some cases, such as short-term projects (1 year or less), there may not
be the time or budget for a full-fledged evaluation, especially if the project ends before
any tangible effects can be measured. This is not to say that project staff and participants
should not review their performance constantly. Every time a monitoring meeting is held,
data examined and conclusions reached, we are contributing to agency learning. When a
final report summarizes monitoring data, draws conclusions and makes recommendations
for future programming this learning is preserved for future projects and is more easily
shared around the agency. For this reason, some agencies (such as the Peace Corps) have
decided to spend their M&E resources entirely on ensuring high quality monitoring rather
than undertaking mandatory, formal evaluations of each project.

Look Out: Donors may require an evaluation and wish to stipulate much of the scope of work and
team composition. Therefore, it’s best to make sure these expectations are negotiated at the
beginning of the project and included in the design so that we can 1) ensure the evaluation meets
our needs and 2) make sure we have sufficient resources to carry it out.

Longer-term projects should generally include a more formal evaluation process at least
once in their life cycle (either mid-term or final). But evaluations should never be
undertaken without a sound management reason. We do not do them just to do them. For
example, projects which use standard, well-tested approaches will not necessarily need
formal evaluation. In these cases, the overall impact or effect may be inferred from a
sound knowledge of what the results of that approach have been shown to be elsewhere.
Examples might include a reconstruction project in Bosnia that follows the same
approach and targets similar communities as a number of other MC and non-MC projects.
If the general success rates for similar reconstruction projects are well established and
MC has formally evaluated similar projects recently, there would be no need to formally
evaluate all of them. Another example might be the introduction of a proven
anti-tuberculosis treatment like TOPS that has demonstrated results around the world. In these
cases, analysis of sound monitoring data might be sufficient.

It is also important to keep in mind what evaluation cannot do. Evaluations:


• are not substitutes for management decision-making. Rather, they provide vital
info for the people making those decisions.
• cannot replace sound monitoring systems. Since they are short, focused activities,
they cannot recreate monitoring data that needs to be continuously collected over
the life of a project.
• should not be used to solve internal disputes or mediate between conflicting views
about the value or future direction of a project.

Key Point: Evaluations should only be undertaken in response to a specific management need.
Evaluations are time-consuming and should have a clear benefit for Mercy Corps and our other
stakeholders.

Evaluations Begin with a Focused Scope of Work


An evaluation is like a mini-project. Good evaluations begin with a good evaluation
design (or Evaluation Scope of Work). The steps involved in good evaluation design are
the same as those for project design. We begin by asking why we are doing the
evaluation. What need does it serve? What does the project team or Mercy Corps need to
learn about the project? Do we want to take a detailed look at our performance halfway
through a project to determine if adjustments are necessary? In that case, a mid-term
evaluation that focuses on both process and impact might be called for. Do we want to
learn how well we achieved our objectives and our overall goal and capture
lessons-learned? In that case, a final impact evaluation is appropriate.

Once this is decided, an evaluation scope of work is then prepared (generally by project
management) that is focused on meeting the identified needs. The USAID evaluation
scope of work format is a good one that can be adapted for most projects. A modified
version of the USAID scope is included as Appendix D. The most common challenge is
drafting a scope that is focused on only a few key questions and includes the resources
(time, staff, etc.) to answer them adequately. An evaluation with a wide scope but very
limited resources can be worse than no evaluation at all if it leads to superficial
conclusions as the evaluator races to finish in the time provided. Three weeks of
fieldwork is generally the minimum required for even a narrowly focused formal
evaluation to be worthwhile.

Key Point: A focused scope of work is vital to a good evaluation. Tips include:
• Start with what you want to learn. It is easy to lose focus and try to evaluate too much, leading
to uncertain conclusions. Choose a manageable piece of your project to learn from and design a
scope to support that.
• Treat the scope as something to be negotiated with your donor and lead evaluator rather than
something to be stipulated by them.

Methods
The choice of evaluation methods depends on the type of program, resources available
and the type of questions the evaluation is trying to answer. Most will start with a review
of project documents. The log frame provides a clear explanation of what the project was
designed to accomplish, the strategy and how success should be measured. The work
plan and indicators plan describe data collection for the project and the collected
monitoring and project reports detail what has already been accomplished. For impact
evaluations, the baseline data gives a starting point against which further progress can be
measured. Without a baseline, it is extremely difficult for an evaluation to gauge project
impact. After a document review, the next step is generally a workshop with key staff
members to discuss their experiences and perceptions related to the evaluation questions
and for the evaluation team to get insights that are not contained in the reporting
documents.

The next steps will vary depending on the circumstances. Common evaluation
instruments are surveys, focus groups, key informant interviews and direct observation.
Evaluations may use some or all of these approaches. It is a good idea to balance strictly
quantitative methods (like surveys) with more qualitative methods like focus groups,
interviews and workshops. This is because there is often much more to the story than
what is apparent simply from a dry listing of statistics. Surveys are good ways to
determine “what happened” and “how many times it happened” but not good at
explaining “why” something happened.

The best evaluations combine both kinds of instruments to show not only “what”
happened but also “why” it happened or what it meant to the participants. Keep in mind
that ‘qualitative’ and ‘quantitative’ information are not completely separate categories.
Most ‘qualitative’ information can also be expressed in numerical terms. In fact, it’s a
good idea to present qualitative information that way in donor reports, because some
donors seem to believe that numerical data equals “hard” data, that it is more accurate
and reliable than narrative descriptions.

Surveys also rarely turn up unexpected results. That is because surveys use
questionnaires that, by definition, only give people a certain number of possible answers.
If there is a project result that you did not anticipate, you will not know in advance to
make this a possible answer on the questionnaire. That’s why it’s usually a good idea to
do some focus groups first, before finalizing the design of a survey questionnaire. If
focus groups reveal a project result that you did not expect, you can include questions
about that result in your survey questionnaire and find out just how frequent that result
was.

Mercy Corps Case Study – Unexpected Results


In 2001, Mercy Corps commissioned an evaluation of the agency’s role in building the
capacity of a key national partner, Dilsuz, the Association of People with Disabilities in
Tajikistan. Mercy Corps and Dilsuz had been working closely together for over 7 years
and senior staff from both agencies felt that Mercy Corps’ assistance had significantly
improved Dilsuz’s capacity to implement projects that fulfilled its mission statement.
In-depth interviews with both agencies’ senior staff confirmed this impression, as did a
review of financial records which showed that Dilsuz had moved from near bankruptcy to
financial independence and sustainability. An evaluation that stopped there would have
concluded that Dilsuz was a model case of NGO development. However, the evaluation
team went further, interviewing non-management staff, conducting surveys of
beneficiaries and making site visits for direct observation of activities. These
investigations turned up several important inconsistencies between the actual situation in
the field, the perceptions of non-management staff and those of Mercy Corps and Dilsuz
upper management. While Dilsuz had become more efficient and self-sustaining, it was
also reaching fewer disabled people (its core constituency) than was reported by its
headquarters and field staff were found to have serious concerns about the quality of
Dilsuz management. In the end, Dilsuz was still found to be a very noteworthy example
of the benefits of partnership. At the same time, the evaluation’s contribution to learning
for both agencies was considerably increased because it pointed out not only important
successes but also vital areas where more work was urgently needed.

Review of the Results


Whenever possible, the draft or summary version of the final evaluation report should be
shared with the project staff and participants while the evaluation team is still in the
country. This should be built directly into the evaluation scope of work. The lead
evaluator and his/her team should present at least a summary of their findings to the staff
for feedback and discussion. Whenever possible, project participants and other
stakeholders should be included in this process as well. This helps build ownership of the
results among the staff and participants and brings their knowledge and perspectives into
the analysis of the data. Not only does this build more accountability into the process,
but it also increases staff and participant skills and experience with the difficult task of
evaluating project results. After gathering feedback on the draft report, the lead evaluator
can then leave the country to prepare the final report.

We should be careful not to overstate our results: there should be a fairly clear link
between the information we collect and the effects/impact that we claim. This applies
equally to monitoring and evaluation. For example, if project documents show that we
rebuilt houses for 250 returning refugee families and 248 of them actually retook
possession of the homes, what does that information tell us? That we caused their return
or simply assisted it? How many would have returned without our help? A survey of
project participants might help to show how much influence our project had on people’s
decision to return. But we would still not know for certain what would have happened
without our assistance. A good set of performance monitoring data, coupled with survey
and focus group information, would allow us to demonstrate that we 1) provided the
expected number of houses, on time, within budget and to a high standard, 2) that 99% of
the target population made use of the houses and 3) that the participants stated that the
availability of the houses was important in their decision to return. That’s a pretty good
result that we could be proud of, even though it stops short of actually proving that our
project “caused” 248 returns.

Final Report Format


Just like the evaluation scope, the final report should be limited in size to keep it focused
and useful for as many readers as possible. A shorter report will be easier to summarize
and review while the lead evaluator is still in-country. In addition, keeping the report to a
manageable size (20-page maximum) will help ensure that it is read from start to finish by
a wider audience. More detailed, but not crucial, information can (and should) be
included as attachments to the final report. The point is not to limit learning but to
enhance it. Remember, an unread evaluation is largely a wasted evaluation. A 20-page
report with detailed appendices is the best way to ensure that detailed information is there
for those who will use it, but that the main points of the report reach the widest possible
audience.

Key Point: Keeping the evaluation report short will make it easier to review the draft while the
evaluation team is still in the field (and revisions are still possible). It will also facilitate sharing the
lessons-learned with other Mercy Corps and partner staff.

EVALUATION CHECKLIST
Design. Evaluations:
□ A. Focus on utility. Evaluations should be designed to answer pressing management needs.
□ B. Start with a clear Scope of Work.
□ C. Are primarily a learning tool rather than an audit of project performance.
□ D. Should be designed to yield lessons-learned for similar programming.
□ E. Are an in-depth reflection on a specific aspect of our programming.
Participation. Evaluations:
□ A. Are designed to allow the highest reasonable degree of participation in the implementation and
review of results.
□ B. Are completed (in draft form) and discussed with project staff while the evaluation team is still in-country.
Sharing the Lessons-Learned. Evaluations:
□ A. Should be short but informative (usually no more than 20 pages plus attachments).
□ B. Need to be widely distributed within Mercy Corps (including the Digital Library) to make sure
lessons are learned.
□ C. Should be promoted by Program Officers so that other staff know they are available and what
information they contain.
□ D. Should be read by other Program Staff working on similar projects and used to improve the design
and implementation of Mercy Corps activities.

KEY TERMS

In this section
Many MC staff bring a wealth of DM&E experience to bear on the programs they are
responsible for. Yet coordination on this issue and communication between programs is
often difficult. A frequent obstacle to effective discussion of DM&E is the
misunderstanding that results from a lack of agreed terminology. Many donor and
implementing organizations have their own, specific (and contradictory) definitions of
the terms commonly associated with DM&E. To facilitate communication inside the
Mercy Corps world, the following section lists some key terms and establishes a common
definition. For proposals and donor reporting, our terms can be easily translated into
other formats. For a comparison of Mercy Corps standard definitions and those used by
key donors and colleague agencies, please see Appendix E.

Activities. The things that our project “does” or the actions that we carry out in order to
produce our outputs. Examples include providing training, rebuilding infrastructure,
making loans, monitoring implementation, evaluating impact.

Assessment. A detailed look at a particular region, sector or target population to
determine their vision for the future, assets and needs, and the opportunities and
challenges related to meeting those needs. Assessments are usually conducted before the
Project Design phase in order to define our overall strategy. Assessments are what we
use to understand what the problem is and possible ways to address it.

Baseline. A set of data that measures specific conditions (almost always the indicators
we have chosen through the design process!) before a project starts or shortly after
implementation begins. You will use this baseline as a starting point to compare project
performance over the life of the project. Example: If you are on a diet, your baseline is
your weight on the day you begin.

Best Practice. Something that we have learned from experience on a number of similar
projects around the world. This requires looking at a number of “lessons-learned” from
projects in the same field and noticing a trend that seems to be true for all projects in that
field.

Effects. A change that results directly from our outputs and activities. These are short or
medium term changes that should happen during the life of our project. Generally, these
are “changes in a target group’s knowledge, attitudes or behaviors as a result of our
project” and appear as objectives in our log frame.

Evaluation. Evaluation is an in-depth, retrospective analysis of a specific aspect (or
aspects) of a project that occurs at a single point in time. Evaluation is generally more
focused and intense than monitoring and often uses more time-consuming techniques
such as surveys, focus groups, interviews and workshops.

Failure. Projects often fall short of expectations. A failure occurs only when a project
fails to achieve its expected results AND the project management team fails to document
it, analyze it and adjust their strategy in response. If they do identify the problem and
draw a lesson from it, the event is a “learning experience” and is just as valuable to the
agency as a “success story.”

Goal. This is the ultimate reason for undertaking a project or program. It describes the
“end-state” that you would like to achieve. Generally, this is related to the “impact” you
want to have on a target population. Often our projects will not be able to achieve their
goal all by themselves but they should always be able to make a substantial contribution
to it.

Impact. A deep and lasting change we want to bring about in a target region or country.
Our individual projects may only make a partial contribution to achieving this change and
it may occur only after the project is completed. Usually this is “a change in the living
standards or quality of life for a target population” and is directly related to Mercy Corps’
mission to alleviate poverty, suffering and oppression.

Indicator. This is a “unit of measure” that lets us know if our implementation is
successful. For example, if you are on a diet, your main indicator would be “# of kgs
lost”. Indicators can measure our success at many levels. At a minimum, we need
indicators to tell us if we’ve achieved our outputs. But we should have them for our
objectives as well. In the best-case scenario, we should also try to develop them for our
goal.
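
To make the idea concrete, here is a minimal, hypothetical sketch in Python of how an
indicator value might be calculated from raw survey data and compared with its baseline.
The figures are invented and the snippet is an illustration only, not a Mercy Corps tool.

    # Each response records how many of the four danger signs a mother could name (invented data)
    responses = [0, 2, 3, 1, 2, 0, 4, 2, 1, 3]

    # The indicator: % of surveyed mothers who can name at least two danger signs
    aware = sum(1 for signs_named in responses if signs_named >= 2)
    indicator_value = 100.0 * aware / len(responses)

    baseline = 22.0  # % at the start of the project, from the baseline survey (assumed figure)
    print(f"Indicator value: {indicator_value:.0f}% (baseline: {baseline:.0f}%)")

However the calculation is done, the same definition must be applied at baseline and at
every later measurement so that the comparison remains meaningful.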

Lesson-Learned. A short, simple description of something we’ve learned from
experience on a specific project or program. It should be supported with evidence from
our monitoring and evaluation. Lessons-learned should be useful to other people
implementing similar projects around the world.

Monitoring. Regularly collecting, reviewing, reporting and acting on information about
project implementation. Generally used to check our performance against expected
results or “targets” as well as ensure compliance with donor regulations.

Outputs. The final goods and services provided by our project activities. Examples
include training courses, rebuilt homes or infrastructure or microcredit loans.

Objective. This is what we expect to achieve directly through our project or program
outputs. Often, a project will have several objectives and these are generally related to
the “effects” we want to have on a target population. Each objective should be an
important step toward achieving the project’s goal.

Targets. Sometimes called “milestones” or “benchmarks”, these tell us what we plan to
achieve at specific points in the life of our projects or programs. We use them to monitor
our progress toward completion of our activities.

Target Population. The specific population we are trying to assist in a particular
program, e.g. “women and infants in Western Kosovo” or “low-income families in the
Ferghana valley”.

Triangulation. Data collection from three different sources about the same subject. This
is considered the best way to ensure that our information is valid. For example, if we
want to know about the effects of a community mobilization project, we might collect
data via 1) interviews with key participants, including our own staff 2) a document
review to understand exactly what services were delivered and in what amounts 3) focus
groups and/or a survey of project participants. This helps us avoid the natural biases of
any one method of data collection. Although three different sources are not always
possible, the primary point is to avoid reliance on a single source or perspective.

Attachment A

Logical Framework
LOGICAL FRAMEWORK: MATERNAL HEALTH EXAMPLE
GOAL: Ask: What is the impact we want to achieve? What does our community look like if we are successful?

Healthy Mothers and Infants in the Target Population

Definition: Maternal mortality rates 40% lower than 1999 levels


SMART OBJECTIVES1 – Ask: What are the desired effects on people’s knowledge, attitudes and behaviors?
KEY OUTPUTS2 – Ask: What final goods and services will we provide?
MAJOR ACTIVITIES3 – Ask: What daily efforts contribute to our outputs?
INDICATORS4 – Ask: How will we know if we have achieved our Objective?

Objective 1: 75% of mothers are aware of 2 pregnancy-related danger signs by the end of the project.
Key Outputs: 1) 10 x 30-second radio spots. 2) 250 well-trained Health Care Outreach Workers. 3) Survey.
Major Activities: 1) Create/disseminate public service announcements. 2) Identify & train health care outreach workers. 3) Disseminate health message through community mobilization. 4) Baseline and final surveys.
Indicators: 1) % of mothers who can identify 2 pregnancy-related danger signs.

Objective 2: 75% of mothers/expectant mothers attend at least 2 routine prenatal and 2 postnatal care visits during the project.
Key Outputs: 1) 7 new or rehabilitated clinics. 2) Transportation to clinics provided from remote areas.
Major Activities: 1) Assess community clinic needs. 2) Design and tender for clinic construct/rehab. 3) Clinic rehabilitation. 4) Assess specific transportation needs.
Indicators: 1) % of mothers who attend at least two prenatal and postnatal visits.

Objective 3: 75% of mothers with risk signs (bleeding, anemia as defined by WHO) receive Emergency Obstetric Care (EOC) by end of project.
Key Outputs: 1) 7 adequately supplied clinics. 2) 7 clinic staffs capable of using the equipment. 3) Outputs 1 & 2 above.
Major Activities: 1) Provide medical supplies. 2) Train staff well in EOC.
Indicators: 1) % of mothers with risk signs who receive EOC.

1. Reminder: Does achievement of each Objective contribute directly to achieving the Goal?
2. Is each Output necessary to achieve the Objective?
3. Does each Major Activity lead directly to the Outputs?
4. Does each Indicator directly measure progress toward the Objective? If not, does it come as close as possible? Do we have enough to get a fairly reliable measure of our effects/impact? Do we have more than we need or too many to handle on a regular basis?
Attachment B

Work Plan
General Management
Timeline in Months
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 Targets Staff Responsible Notes

Activities
1 Locate new office space x Program Manager
2 Hire field staff x Program Officers
3 Procure computers/printers x 5 desk tops/2 Admin Officer
printers
4 Procure vehicles/radios x x 2 Nivas (Month Admin Officer Nivas locally, Jeeps
1) 2 Jeeps via broker, radios
(Month 2) 12 via HQ
VHF handsets
(Month 2),
Motorbikes
(Month 1)
5 Complete staff orientation x Program Manager
6 Finalize Security Plan x Program Manager
7 Team review of monitoring info x x x x x x Program Manager
8 Submit Final Year One Work x By Aug 30 Program Director Prepared by
Plan Program Manager
9 Submit Mid-Term Report X By Feb 28 Program Director
10 Submit Close-Out Plan x Program Director Includes disposition
of vehicles,
computers etc

11 Finance Reports x x x x x x x x x x x x x x x x x x x x x x x x Finance Manager Reviewed by PD


12 Submit Final Program and X within 90 days of Program Director
Finance Reports project close
Objective 1: 75% of mothers aware of 2 pregnancy-related danger signs by end of project
Timeline in Months (1-24)
Activities Sub-Activities Targets Staff Responsible Notes

Public Education
Create Announcements x x x x
Broadcast on radio x x x x x x x x x x x x x x x x x x x x x 5 minute spots
on 10 radio
stations
Health Worker Training Maternal Health Also fulfills Objectives
Officer 2-3
Develop Training Materials x x x Maternal Health
Officer
Lease Training Spaces x 3 spaces leased Admin Officer
in month1

Road-Show x x Maternal Health


Assistant
Pre-test of health x Maternal Health
workers Assistant
Deliver Training x x x x x x x x x x x x x x x 50 workers Maternal Health
trained every 6 Officer
months
Post-Test of health workers x x x x x All trainees Maternal Health
score 80% or Assistant
better
Visit Clinics to Ensure Health x x x x x x x x x x x x x x x x Maternal Health
Workers Using Training
Assistant

Monitoring 50% by month Maternal Health


12, 75% by Officer
month 24
Baseline Survey of Mothers x Maternal Health
Officer
Team Meetings to Review x x x x x x x x Program Manager PM organizes and all
Monitoring Data
key staff
attend/Consider data
for all 3 Objectives
Mid-Point and Final Survey x x Maternal Health
of Mothers
Assistant
Objective 2: 75% of mothers attend at least 2 pre-natal visits during project

Timeline in Months (1-24)
Activities Sub-Activities Targets Staff Responsible Notes

Community Outreach Community Dev.


Officer
Train Community Organizers x x 25 organizers (5 Community Dev.
per village) Assistants
Conduct Participatory x x 5 assessments Comm. Dev.
Assessments of Clinic needs
by month4 Assistants
per Community
Hold Community Meetings on x x x x x x x x x x x x x x x x x x x x 1 per month/per Volunteer Attended by Comm.
maternal health issues
village Community Dev. Assistants
Organizers
Conduct Community x x x x x x x Monthly report Volunteer Attended by Comm.
Monitoring of Rehab. per village Community Dev. Assistants
Organizers
Clinic X Engineering
Rehabilitation/Construction Officer
Review Community x Engineers/Comm.
Assessments
Dev. Assistants
Design Tender x Engineers
Release Tender x Engineers
Review x Engineering
Tender/Contracting Officer
Meet to review x x x x Engineering
Community Monitoring Officer/Comm.
Results Dev. Assistants
Rehabilitation works x x x x x x 7 clinics rehab'd Clinics must meet
according to legal and professional
comm. And MC criteria for
requirements by construction and
month 11 incorporate
community needs
Tech review X Engineers
Monitoring 50% by month Maternal Health
12, 75% by Officer
month 24
Interviews with Clinic x x x x x x x x x x x x x x x x x Maternal Health
Staff Assistants
Interviews with Comm. x x x Maternal Health
Members Assistants
Clinic Records x x x x x x x x x x x x x x Maternal Health Coincides with
Assistants Records Check for
Obj 3
Survey of Mothers x x Maternal Health As part of Survey for
Assistants Objective 1
Objective 3: 75% of mothers with risk signs receive Emergency Obstetric Care (EOC) by end of project.

Timeline in Months (1-24)
Activities Sub-Activities Targets Staff Responsible Notes
Assess Equipment Needs Maternal Health
Officer
Review Protocols x Maternal Health
Officer/Assistants
Meet with Clinic Staff x x Maternal Health
Assistants
Draft Needs per Clinic x Maternal Health
Assistants/Officer
Training on New Equipment x x x x x x

Pre-and-Post Tests x x x x x x 90% pass each Maternal Health Carried out monthly
test Officer along with distribution

Purchase Equipment Maternal Health


Officer/Admin.
Officer
Tender if necessary x Admin. Officer
Receive Equipment x x x Admin.
Officer/Health
Assistant
Deliver to Clinics x x x x Health Assistants
Monitoring 40% by month See also post-tests of
12, 75% by clinic staff under
month 24 Objective 1 for EOC
results and Clinic Staff
interviews under
Objective 2
Observe Use of x x x x x x x x x x x x x x x x Maternal Health Targets coincide with
Equipment Assistants delivery of training
under Objective 1 and
delivery of equipment

Examine pregnancy x x x x x x x x x x x x x x Maternal Health


records Assistants
Attachment C

Indicators Plan
INDICATOR PLAN

Objective 1: 75% OF MOTHERS AWARE OF AT LEAST TWO PREGNANCY-RELATED DANGER SIGNS

Indicator: % of mothers aware of at least two pregnancy-related danger signs.
Definition of Indicator and Management Utility: Mothers can list two of the four danger signs defined by PEPC Program Guidelines. Unprompted recall of these danger signs is a key piece of awareness and prevention.
Baseline Data and Targets: Targets – 50% by month 12; 75% by end of project. Baseline – less than 22% (estimated according to assessment data, to be confirmed by baseline survey in month 1).
Data Collection Sources & Methods: 1. Baseline survey/final survey of mothers.
Frequency of Data Collection: 1. Months 2, 12 and 24.
Person Responsible: 1. Surveys designed by Maternal Health Officer. 2. Carried out by Maternal Health Assistants.

Objective 2: 75% of mothers attend at least 2 prenatal care visits during pregnancy during the project.

Indicator: % of mothers who attend at least 2 prenatal care visits during pregnancy.
Definition of Indicator and Management Utility: Mothers within our target provinces who visit qualified, staffed clinics. Prenatal visits to a clinic with trained, equipped staff are a proven method of reducing complications from birth.
Baseline Data and Targets: Targets – 50% by month 12; 75% by end of project. Baseline – 44% in areas with existing clinics; 7% in areas with no functioning clinic; 25.5% average for the entire target area.
Data Collection Sources & Methods: 1. Interviews with clinic staff. 2. Clinic records check. 3. Survey results.
Frequency of Data Collection: 1. Months 8-23. 2. Months 10-23. 3. Months 1, 12 and 24.
Person Responsible: 1. Instruments designed by Maternal Health Officer. 2. Carried out by Maternal Health Assistants. 3. Direct observation carried out by Maternal Health Officer.

Objective 3: 75% of mothers with risk signs receive Emergency Obstetric Care (EOC) by end of project.
Indicator: % of mothers with risk signs who receive EOC by end of project.
Definition of Indicator and Management Utility: “Risk signs” defined as bleeding and anemia, as defined in WHO guidelines. EOC as defined in the Maternal Health Training Module and consistent with MoH Guidelines.
Baseline Data and Targets: Targets – 90% of clinic staff pass the EOC training course*; 40% by month 12; 75% by end of project. Baseline – thought to be less than 10% (to be confirmed by baseline study).
Data Collection Sources & Methods: 1. Interviews with clinic staff. 2. Clinic records check. 3. Interviews with community members. 4. EOC post-test results. 5. Direct observation of clinic staff during visits.*
Frequency of Data Collection: 1. Months 8-23. 2. Months 10-23. 3. Months 21-23. 4. Months 7, 12, 16, 19 and 24. 5. Months 8-23.
Person Responsible: 1. Instruments designed by Maternal Health Officer. 2. Carried out by Maternal Health Assistant. 3. Direct observation carried out by Maternal Health Officers.

* Included here as a quality measurement to ensure that the EOC delivered is effective.
Attachment D

USAID Evaluation
Scope of Work
Template
EVALUATION SCOPE OF WORK1
TEMPLATE

WHAT IS AN EVALUATION SCOPE OF WORK?

An evaluation scope of work (SoW) is a plan for conducting an evaluation; it conveys
clear directions to the evaluation team.

A good SoW usually:


• Identifies existing sources of info on implementation
• Clearly states what management need the evaluation will fulfill and the intended
audience
• Outlines evaluation team composition and roles
• Covers schedules and logistical arrangements
• Addresses plans for the highest possible degree of participation by expat and national
staff, partners, project participants and other stakeholders.
• Addresses plans for using the information gained in the evaluation, including
dissemination around the MC world.
• Includes a budget.

ELEMENTS OF A GOOD EVALUATION SCOPE OF WORK

1) The Project or Program to be Evaluated


Identify the project/program, where it takes place, the donor, start and end dates.

2) Background
Give a brief description of the history and current status of the project/program, goals and
objectives, names and roles of partners, basic methodology and any other info to help the
evaluation team understand the context.

3) Existing Project/Program Info Sources


What information exists to help the evaluation team learn about the project and determine
its impact? These include the proposal, log frame, work plan, indicator plan, subsequent
revisions, monitoring and donor reports and any previous evaluation information.

4) Purpose of the Evaluation


We should only do an evaluation if we have an important question to answer. Important
questions include:
• What was our impact or effect?
• Why did our project turn out much differently than expected (in either positive or
negative ways)?
• How sustainable are our results?
• What lessons can we learn to help us improve similar projects in the future?

1. This template is a modified version of the one developed by USAID: “Preparing an Evaluation Scope of
Work,” part of the TIPS series on Performance Monitoring and Evaluation (USAID Center for
Development Information and Evaluation, 1996, No. 3).

In this section, we should state the reason for the evaluation and its intended audience:
• Who wants the information (donor, program staff, HQ, partners, participants)?
• What do they want to know?
• What will they do with the info?
• When is it needed and in what form?

5) Evaluation Questions
Articulate clearly the main questions the evaluation will have to answer to supply the info
described in section 4 above. Vague questions will lead to vague answers. Too many
questions, on too many topics, will lead to a superficial and unfocused evaluation.

Ensure that questions are management or participant priorities. One approach is to ask the
intended audience what they most want to know and then ask them which of these are
priorities.

6) Evaluation Methods
This section specifies an overall design strategy to answer the evaluation questions and
provides a plan for collecting and analyzing the data.

6.A. Select the Overall Design Strategy


This will depend on the nature of the evaluation questions. For example, if the question is
“What percentage of farmers have obtained credit via our program” then a survey and/or
review of program records would be appropriate. If the question is “Why don’t more
farmers apply for credit” then focus group interviews might be a better tool. If the
question is “Are our credit services more effective than grants” then a comparative design
would be best. The challenge is to chose a design that gives a credible answer yet fits our
time and budget constraints. In practice, most evaluations will use a combination of
techniques.

6.B. Data Collection and Analysis Plan


• Define the “unit of analysis” to be studied: do we expect to have effects on
individuals, families, businesses, communities, clinics etc?
• Data disaggregation requirements (by gender, ethnic group, income level etc)
• How interviewees and other sources will be selected (random sample, purposeful
sample, nominated by staff or community?). Explain decision based on strengths and
weaknesses of this approach.
• Techniques or tools: questionnaires, observation, interviews etc.
• How much data to collect: sample size, number of interviews, number of
communities etc.
• How data will be analyzed: What will you do with it once you collect it? (A minimal sketch of
a simple disaggregated analysis follows this list.)
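
As an illustration of the last two points, the sketch below (Python, with invented records;
the field names are assumptions, not part of the USAID format) disaggregates survey
results by gender before computing a simple percentage:

    # Invented survey records: each has the respondent's gender and whether they obtained credit
    records = [
        {"gender": "female", "obtained_credit": True},
        {"gender": "female", "obtained_credit": False},
        {"gender": "male", "obtained_credit": True},
        {"gender": "male", "obtained_credit": True},
        {"gender": "female", "obtained_credit": True},
    ]

    # Group the records by gender, then compute the percentage who obtained credit in each group
    groups = {}
    for record in records:
        groups.setdefault(record["gender"], []).append(record["obtained_credit"])

    for gender, outcomes in groups.items():
        pct = 100.0 * sum(outcomes) / len(outcomes)
        print(f"{gender}: {pct:.0f}% obtained credit ({len(outcomes)} respondents)")

The same pattern extends to other disaggregation requirements, such as ethnic group or
income level.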

Note: Often the Overall Design Strategy section will be negotiated with the outside
evaluator and/or other participants and members of the evaluation team.
7) Team Composition and Participation
Identify the approximate team size, the qualifications needed and desired level of
participation. Consider:
• Language skills
• Technical knowledge
• Cultural sensitivity
• Evaluation skills
• Facilitation skills
• Gender mix
• Knowledge of Mercy Corps’ culture and programming
• Who should participate and at what stage (design, implementation, analysis,
dissemination).
• Define the role of each member and list their specific duties.

The exact size and composition of the team is determined by the purpose and strategy, as
well as other constraints such as time, budget, logistics and availability. Technical
knowledge about the specific sectors to be evaluated, language skills, evaluation skills
and cultural sensitivity are all mandatory requirements for a successful evaluation. The
highest possible degree of staff, partner and beneficiary participation should also be
considered. An outside evaluator is not mandatory unless required by the donor or the
specific nature of the question (when objectivity is a high priority).

8) Procedures: Schedule and Logistics


Specify the schedule, logistical arrangements, host office support and other items
essential to implementation of the evaluation. Include:
• The schedule for each event, its duration and the number of participants
• Time at reasonable intervals for processing and reflecting on the data collected
• Time for preparatory work, such as document reviews
• In-country travel times and transport plans
• The due date for the draft report (before the evaluation team leaves the field)
• The time, place and participants for review of the first draft (while the evaluation team is still in-country)
• Necessary services: translators, interpreters, drivers, data processors, facilitators, and
access to desk space, computers and printers for non-program evaluation team
members
• In the case of an outside lead evaluator, a point person from the host office to
arrange logistical details before and during the evaluation

9) Reporting and Dissemination Requirements


• Due date for the final report
• Page limit (usually no more than 20 pages plus attachments)
• Requirements for photos, participant profiles or other special documentation needs
• Plan for translation as necessary
• Recipients of the final report, including staff, partners, participants, other stakeholders,
the donor, the HQ program officer, sector teams and the Digital Library.
10) Budget
Estimate the approximate costs for each component and identify the source of funding.
Include international and in-country travel, team members’ salaries, per diem or
expenses, stipends for partners or other participants, costs for translation, administration,
use of facilities etc.

There is no easy rule of thumb for estimating costs. It will depend on many factors
including your resources, evaluation needs, time frame and availability of in-country
expertise. Your HQ program officer can provide you with sample budgets from other
projects that might help guide your own calculations.
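
As a purely illustrative sketch of that arithmetic, the Python below simply lists cost components like those named above, tags each with a funding source and totals them. Every line item, amount and source is a made-up placeholder, not a model budget.

# Hypothetical cost components; amounts are placeholders, not guidance.
budget = [
    ("International travel",            3000, "project grant"),
    ("In-country travel",                800, "project grant"),
    ("Evaluator fees",                  5000, "project grant"),
    ("Per diem and expenses",           1200, "project grant"),
    ("Stipends for partners",            600, "country office"),
    ("Translation",                      700, "country office"),
    ("Administration and facilities",    500, "country office"),
]

total = sum(cost for _, cost, _ in budget)
for item, cost, source in budget:
    print(f"{item:<32} {cost:>6}  ({source})")
print(f"{'Total':<32} {total:>6}")
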
Attachment E

The Mercy Corps/Donor Dictionary
It is important to realize that often donors, partner agencies and Mercy Corps’ staff use
slightly different words to describe the same DM&E topics. This table compares our
definitions of key terms with those used by some of our major donors and colleague
agencies. Mercy Corps’ definitions are based on common usage in our field and the
glossary of terms developed by the DAC/OECD.*

(Each row lists the agency's equivalents in the same order as the Mercy Corps terms.)

Mercy Corps:                Goal | Objective | Outputs | Activities | Targets
USAID (Results Framework):  Strategic Objective | Intermediate Results | Outputs/Expected Results | Activities/Inputs | Benchmarks
CARE:                       Program Impact | Effects | Outputs | Activities/Inputs | Benchmarks
DFID:                       Goal | Purpose | Outputs | Activities | (none)
CIDA:                       Overall Goal | Results/Project Purpose | Outputs | Activities/Inputs | Milestones
EC/Relex:                   Overall Objective | Project Purpose | Results | Activities | Milestones
FAO & UNDP:                 Development Objective | Immediate Objectives | Outputs | Activities/Inputs | (none)
World Bank:                 Long-Term Objective | Short-Term Objectives | Outputs | Inputs | (none)
* The Development Assistance Committee (DAC) Working Party on Aid Evaluation is an international
forum comprising 30 member countries and multilateral donor agencies. Mercy Corps’ definitions are
generally consistent with the DAC's Glossary of Key Terms in Evaluation and Results-Based Management.
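
For teams that keep their logframes or proposals in electronic form, the same comparison can be treated as a simple lookup. The sketch below is illustrative Python only; it covers just three of the rows above, uses None where the table lists no equivalent, and is not an official translation tool.

# Mercy Corps term order: Goal, Objective, Outputs, Activities, Targets.
MERCY_CORPS_TERMS = ("Goal", "Objective", "Outputs", "Activities", "Targets")

DONOR_TERMS = {
    "USAID (Results Framework)": ("Strategic Objective", "Intermediate Results",
                                  "Outputs/Expected Results", "Activities/Inputs", "Benchmarks"),
    "DFID":                      ("Goal", "Purpose", "Outputs", "Activities", None),
    "EC/Relex":                  ("Overall Objective", "Project Purpose",
                                  "Results", "Activities", "Milestones"),
}

def translate(term, donor):
    """Return the donor's rough equivalent of a Mercy Corps term (None if no direct equivalent)."""
    mapping = dict(zip(MERCY_CORPS_TERMS, DONOR_TERMS[donor]))
    return mapping[term]

print(translate("Objective", "DFID"))      # -> Purpose
print(translate("Outputs", "EC/Relex"))    # -> Results
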
Attachment F

Sphere and Mercy Corps' DM&E

The Mercy Corps’ DM&E Guidebook is just one example of our commitment to program
quality. Another related initiative is our strong support for and participation in the Sphere
Project, a global NGO effort to increase effectiveness and accountability in humanitarian
assistance. Our involvement with Sphere includes agreement to serve as a pilot agency
looking at how to institutionalize our commitment to the Humanitarian Charter and use of the
Sphere standards as a means of increasing the quality of our programs. Mercy Corps is also
currently serving as the Chair of the Sphere Project management committee and is a strong
advocate for the use of Sphere throughout the humanitarian community, and especially within
its own country programs. Mercy Corps intends that the principles and practices articulated
within the Sphere Handbook will be central to the way that Mercy Corps designs,
implements, monitors and evaluates its disaster response programs. These principles and
practices are completely compatible with, and often more detailed than, the more general
guidance contained in the DM&E Guidebook.

A Brief Introduction to Sphere

Meeting essential needs and restoring life with dignity are core principles that should inform
all humanitarian action.

The purpose of the Humanitarian Charter and the Minimum Standards is to increase the
effectiveness of humanitarian assistance, and to make humanitarian agencies more
accountable. It is based on two core beliefs: first, that all possible steps should be taken to
alleviate human suffering that arises out of conflict and calamity, and second, that those
affected by a disaster have a right to life with dignity and therefore a right to assistance.

The Sphere Handbook is the result of more than two years of inter-agency collaboration to
frame a Humanitarian Charter, and to identify Minimum Standards to advance the rights set
out in the Charter. These standards cover disaster assistance in water supply and sanitation,
nutrition, food aid, shelter and site planning, and health services.

The Humanitarian Charter

The cornerstone of the book is the Humanitarian Charter. Based on the principles and
provisions of international humanitarian law, international human rights law, refugee law,
and the Code of Conduct for the International Red Cross and Red Crescent Movement and
NGOs in Disaster Relief, the Charter describes the core principles that govern humanitarian
action and asserts the right of populations to protection and assistance.

The Charter defines the legal responsibilities of states and parties to guarantee the right to
assistance and protection. When states are unable to respond, they are obliged to allow the
intervention of humanitarian organizations.

The Minimum Standards

The Minimum Standards were developed using broad networks of experts in each of the five
sectors. Most of the standards, and the indicators that accompany them, are not new, but
consolidate and adapt existing knowledge and practice. Taken as a whole, they represent a
remarkable consensus across a broad spectrum of agencies, and mark a new determination to
ensure that humanitarian principles are realized in practice.

Scope and limitations of the Humanitarian Charter and Minimum Standards

Agencies’ ability to achieve the Minimum Standards will depend on a range of factors, some
of which are within their control, while others, such as political and security factors, lie
outside their control. Of particular importance will be the extent to which agencies have
access to the affected population, whether they have the consent and cooperation of the
authorities in charge, and whether they can operate in conditions of reasonable security.
Availability of sufficient financial, human and material resources is also essential. This
document alone cannot constitute a complete evaluation guide or set of criteria for
humanitarian action.

While the Charter is a general statement of humanitarian principles, the Minimum Standards
do not attempt to deal with the whole spectrum of humanitarian concerns or actions. First,
they do not cover all the possible forms of appropriate humanitarian assistance. Second, and
more importantly, they do not deal with the larger issues of humanitarian protection.

Humanitarian agencies are frequently faced with situations where human acts or obstruction
threaten the fundamental well-being or security of whole communities or sectors of a
population - such as to constitute violations of international law. This may take the form of
direct threats to people's well-being, or to their means of survival, or to their safety. In the
context of armed conflict, the paramount humanitarian concern will be to protect people
against such threats.

Comprehensive strategies and mechanisms for ensuring access and protection are not detailed
in the Handbook. However, it is important to stress that the form of relief assistance and the
way in which it is provided can have a significant impact (positive or negative) on the
affected population’s security. The Humanitarian Charter recognizes that the attempt to
provide assistance in situations of conflict ‘may potentially render civilians more vulnerable
to attack, or bring unintended advantage to one or more of the warring parties’, and it
commits agencies to minimizing such adverse effects of their interventions as far as possible.

The Humanitarian Charter and Minimum Standards will not solve all the problems of
humanitarian response, nor can they prevent all human suffering. What they offer is a tool for
humanitarian agencies to enhance the effectiveness and quality of their assistance and thus to
make a significant difference to the lives of people affected by disaster.

The Sphere Project is a significant process - it has entailed an extensive and broad-based
consultation in the humanitarian community. The people who participated in writing the
Sphere handbook came from national and international NGOs, UN agencies, and academic
institutions. Thousands of individuals from over 300 organizations representing 60 countries
have participated in various aspects of the Sphere Project, from developing the handbook
through to piloting and training. The Sphere process has endeavored to be inclusive,
transparent, and globally representative.
More on Sphere

For more information on Sphere, and how to incorporate it into programming, contact your
Country Director, HQ program officer or Nigel Pont of the GEO team ([email protected]).
They can direct you to a variety of resources, including:
* How to get copies of the Sphere Handbook
* How to access training modules and events
Attachment G

Mercy Corps' DM&E Principles At A Glance
Mercy Corps’ commitment to quality DM&E requires us to go beyond the minimum requirements of some of
our donors. A sound program design, for example, often goes beyond simply fulfilling proposal
requirements. The same can be true for monitoring and evaluation. Therefore, we have developed the
following checklist to help review our various DM&E activities and make sure they conform to Mercy Corps'
principles. Use it when reviewing project designs, proposals or reports; designing monitoring systems or
developing the scope of work for an evaluation.

DM&E Checklist

DESIGN
❑ Assessment Conducted
   ❑ A. Assessment data not used in the proposal is kept for future reference.
❑ Goal-Oriented Program/Project Design
   ❑ A. Design starts with defining a goal based on impact rather than activities.
❑ SMART Objectives
   ❑ A. Key steps in the project which logically and reliably contribute to achieving our goal.
   ❑ B. Describe an "end-state" and focus on "effects" (changes in behavior, attitudes or knowledge in our target population) rather than activities whenever possible.
   ❑ C. SMART – Specific, Measurable, Appropriate, Realistic & Time-Bound.
❑ Select Appropriate Outputs & Activities
   ❑ A. Logically and reliably contribute to our SMART objectives.
   ❑ B. Outputs represent our "deliverables" or final products for which we are responsible.
   ❑ C. Activities describe the key actions we'll carry out to achieve our outputs.
❑ Identify Indicators
   ❑ A. Fewer, more direct indicators that measure performance against our objectives as well as outputs.
   ❑ B. Consider relevant standard indicators and consult appropriate sector specialists & other resources (such as Sphere standards).
❑ Formulate Work Plan
   ❑ A. Include monitoring as a key management activity and make resources available to carry it out, including roles and responsibilities and budgeting time for baselines, regular data collection, review and reporting.
   ❑ B. Include key management and implementation tasks, persons responsible and clear targets for achieving them so that we can track performance over time.
❑ Approaches
   ❑ A. A high degree of participation of expat and national staff, representatives of the target group, partner organizations, etc. in the design of our strategy and in the implementation of the project.
   ❑ B. A focus on the highest level of impact or effects possible.
   ❑ C. All pieces of program design are logically and causally connected. (Logic is much more important than vocabulary.)
   ❑ D. An evidence-based approach that suggests our actions will be successful.
❑ Final Products From Design Phase
   ❑ A. Completed Log Frame
   ❑ B. Completed Indicator Plan for our SMART Objectives
   ❑ C. Completed Work Plan
   ❑ D. Folder containing assessment data
   ❑ E. Finished proposal, if applicable

MONITORING
❑ The Process
   ❑ A. Regularly collect, review and report on data related to all project indicators, targets and other donor requirements according to the work plan.
   ❑ B. Clearly compare actual results against targets during review of monitoring data.
   ❑ C. Use the data to refine the project approach (as necessary).
❑ Reporting
   ❑ A. Clearly reflect actual and planned performance for each objective, analysis of the results and plans for next steps in all project reports (to donors, HQ and others in the "monitoring pyramid").
❑ Approaches
   ❑ A. Missing a planned target is not viewed as a "failure". Failure is defined as failing to capture this information, draw conclusions and act on them.
   ❑ B. Monitoring is as participatory as possible (including review of the data).
   ❑ C. Give attention to the quality of the data. How good is the information?

EVALUATIONS
❑ Design. Evaluations:
   ❑ A. Focus on utility. Evaluations should be designed to answer pressing management needs.
   ❑ B. Start with a clear Scope of Work.
   ❑ C. Are primarily a learning tool rather than an audit of project performance.
   ❑ D. Should be designed to yield lessons-learned for similar programming.
   ❑ E. Are an in-depth reflection on a specific aspect of our programming.
❑ Participation. Evaluations:
   ❑ A. Are designed to allow the highest reasonable degree of participation in the implementation and review of results.
   ❑ B. Are completed (in draft form) and discussed with project staff while the evaluation team is still in-country.
❑ Sharing the Lessons-Learned. Evaluations:
   ❑ A. Should be short but informative (usually no more than 20 pages plus attachments).
   ❑ B. Need to be widely distributed within Mercy Corps (including the Digital Library) to make sure lessons are learned.
   ❑ C. Should be promoted by Program Officers so that other staff know they are available and what information they contain.
   ❑ D. Should be read by other Program Staff working on similar projects and used to improve the design and implementation of Mercy Corps activities.
