
Planning an Outcome/Impact Evaluation

New Partners Initiative Technical Assistance (NuPITA) Project


June 2011
The New Partners Initiative Technical Assistance (NuPITA) project is funded by the United States Agency for
International Development (USAID) and implemented by John Snow, Inc. and Initiatives Inc. under contract
GHS-I-00-07-00002-00.

This document is made possible by the generous support of the American people through USAID. The contents
are the responsibility of John Snow, Inc. and do not necessarily reflect the views of USAID or the United States
Government.

© June 2011 John Snow, Inc.

NuPITA
John Snow, Inc.
44 Farnsworth Street
Boston, MA 02210-1211
Phone: 617.482.9485
www.jsi.com

The goal of an end-of-project evaluation is to determine whether the program was successful and
to identify lessons that can be used in future programming.

There is no fixed way to conduct an end-of-project evaluation. There are, however, steps that
can help a project team decide on the best approach for their specific project. The following
questions will help you design an appropriate evaluation:

1. What questions do you want the evaluation to answer?
2. What/who are the data sources that the team would like to include?
3. What methods would be most appropriate to answer these questions?
4. Is the plan realistic?
5. How will the information be presented and shared?

1. What questions do you want the evaluation to answer?


The key question the evaluation should address is: to what extent did the project achieve the
outcomes and/or impact it set out to achieve? Review your logic model and think about each of
your proposed outcomes or objectives, and your intended impact. Sometimes it is not feasible to
measure a change in impact, especially over a short period of time. If this is the case, is there
some proxy that can be measured to at least suggest that progress has been made toward the
desired impact? In the Next Generation Indicator Guide, for example, the number of women with
unknown HIV status who present at USG-supported labor and delivery (L&D) sites serves as a
proxy for women who have not attended ANC with PMTCT services. Another example is an
indicator tracking school attendance as a proxy for the quality of education services to OVC.
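
To make the L&D example concrete: a proxy indicator like this is usually computed as a simple
proportion from routine registers. The sketch below uses a hypothetical record layout and a
made-up "hiv_status" field; it is illustrative only and is not drawn from the Next Generation
Indicator Guide itself.

    # Minimal sketch: computing a proxy indicator from hypothetical L&D register data.
    # The record layout and field names are illustrative, not from any real system.

    ld_register = [
        {"client_id": 1, "hiv_status": "known"},
        {"client_id": 2, "hiv_status": "unknown"},
        {"client_id": 3, "hiv_status": "unknown"},
        {"client_id": 4, "hiv_status": "known"},
    ]

    total = len(ld_register)
    unknown = sum(1 for r in ld_register if r["hiv_status"] == "unknown")

    # Women presenting with unknown HIV status serve as a proxy for those
    # who did not attend ANC with PMTCT services.
    proxy_rate = unknown / total
    print(f"Unknown HIV status at L&D: {unknown}/{total} ({proxy_rate:.0%})")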

In addition to measuring the outcomes, and possibly the impact, the project is also likely
interested in finding out what worked and what could have been improved. This can be
considered from a variety of perspectives, such as: service delivery to clients; project
management (how efficient and effective were the various project systems?); scope of
programming (breadth and depth: did the project spread itself too thin, or should its depth or
comprehensiveness have been greater?); and sustainability (are there opportunities for
sustaining the services begun under this project?). Ask questions that will help the organization
improve the quality of future projects and provide useful information for future funding
opportunities and technical linkages.

Remember that less is often more: do not try to answer too many questions. Pick the most
important ones and focus on getting quality answers to these key questions.

2. What/who are the data sources that the team would like to include?
Information for your evaluation will likely come from a variety of sources. What are the most
feasible sources for getting the information that you want?

What existing data sources will you want to review? These may include project records such as
annual reports, routine data, publicity that the project has received, materials that the project
created (such as BCC materials or data collection forms), and evaluations that were done on the
project (by external evaluators, at baseline, or as midterm reviews).

Whose input do you want to include in the evaluation? Stakeholders you may want to consider
include:
o Clients
o Partner organizations
o MOH staff (at the appropriate level, e.g. district)
o Project managers and staff

3. What methods would be most appropriate to answer these questions?


The evaluation will likely combine several methods. For each key question, consider the most
efficient and effective methods of collecting the information that the project wants. For
example: focus group discussions with clients, brief structured interviews with implementing
partners, a review of key project documents, analysis of routine data (across years or sites,
against targets, etc.), a final participatory capacity self-evaluation, and possibly a household
survey to assess satisfaction, knowledge, or health status. If a baseline was conducted, part of
the endline evaluation will involve asking the same questions to look for changes over the life of
the project. If your evaluation includes a study, consider whether it will need to pass an ethics
review (and, if so, what the time implications are); the likelihood of this being the case for NPI
partners is very small. Also consider the most appropriate way to obtain a representative
sample; a sketch of both ideas follows below.
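
The sketch below illustrates two of the ideas above: drawing a simple random sample of
households, and comparing one knowledge indicator at baseline and endline. The sampling
frame, sample size, and survey figures are all hypothetical; a real evaluation would choose a
sample size and comparison method suited to its own design.

    import random

    # Minimal sketch: simple random sampling plus a baseline/endline comparison.
    # Household IDs and survey figures are hypothetical.

    random.seed(42)  # fixed seed so the draw is reproducible and auditable

    household_ids = list(range(1, 501))        # sampling frame of 500 households
    sample = random.sample(household_ids, 50)  # simple random sample of 50

    print(f"First households selected for the survey: {sorted(sample)[:10]} ...")

    # Baseline vs. endline comparison of one outcome indicator:
    # proportion of respondents with correct knowledge of a key topic.
    baseline = {"correct": 120, "respondents": 400}
    endline = {"correct": 260, "respondents": 400}

    p0 = baseline["correct"] / baseline["respondents"]
    p1 = endline["correct"] / endline["respondents"]
    print(f"Baseline: {p0:.0%}  Endline: {p1:.0%}  Change: {p1 - p0:+.0%}")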

As with every step in designing the evaluation, take a moment to ensure that what you are
planning is feasible.

4. Is the plan realistic?


Each end-of-project evaluation will be different, because the goals, objectives, and
implementation of each project are different. There is often a tendency to want to answer
more questions than is realistic. It is better to ask a few key questions that are answerable
than to try to address too many things in the endline and, as a consequence, end up with
poor-quality information for each question.

Filling in the table below may help you organize the planning of the evaluation and determine
whether its scope is too large. Feel free to modify the table as appropriate.

5. How will the information be presented and shared?


Once you have designed an evaluation plan that will enable you to answer your key questions,
but before you actually begin data collection, develop a plan for presenting your findings. This
will include an outline for a report, as well as other formats such as brochures or presentations
back to the community. Creating the report outline at this stage will help you focus your
efforts on a final product that is useful and that meets the needs of various stakeholders (your
organization, the board, the donor, the community, etc.).

Creating a realistic evaluation plan: Fill in the appropriate cells below (modify them as needed).
For each row, consider whether it is feasible to conduct that portion of the evaluation.

The table has five columns:
o What do we need to evaluate, and why is it important to ask this question?
o What evaluation questions do we need to ask? (Sample questions are provided below.)
o Where/who will the information come from to answer this question?
o What/who are the data sources that the team would like to include?
o Who is responsible for each data collection method?

Sample questions by row:

Outputs
1. Have the targets set out at the beginning of the project been met?
2. What could best explain project targets being met or not met?

Outcomes
3. What were the pre- and post-intervention knowledge and skills of individuals benefiting from
the service? Did the individuals trained utilize the skills gained? What was the reported or
observed change in the status of beneficiaries (individuals or households)?
4. What could be the determinants of successful utilization or health status following the
project intervention? Has the project made a difference? Why or why not?
5. Did the intervention facilitate the strengthening of linkages?
6. Did the project improve the quality of service delivery?

Assessing contribution/progress towards desired impact
7. Do the results of a population or beneficiary survey on selected outcome/impact indicators
indicate an improvement in comparison with your baseline and/or national statistics for your
catchment population?

Sustainability
8. Are the benefits from your intervention sustainable? How has the project facilitated
potential replication and scale-up of the identified good lessons learned?

Quality of service
9. Are intervention beneficiaries satisfied with the quality of services?
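
As an illustration of the first row (outputs against targets), the comparison is usually a
straightforward percent-of-target calculation from routine data. A minimal sketch, assuming
hypothetical indicator names and figures:

    # Minimal sketch: comparing routine output data against project targets.
    # Indicator names and numbers are hypothetical.

    targets = {
        "community health workers trained": 200,
        "clients counseled and tested": 5000,
        "support groups formed": 40,
    }
    achieved = {
        "community health workers trained": 185,
        "clients counseled and tested": 5400,
        "support groups formed": 28,
    }

    for indicator, target in targets.items():
        actual = achieved[indicator]
        pct = actual / target
        flag = "met" if actual >= target else "not met"
        print(f"{indicator}: {actual}/{target} ({pct:.0%}) - {flag}")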
