Role of Impact Evaluation Analysis in Social Welfare Programmes

- R. S. Ramesh

An impact evaluation assesses changes in the well-being of individuals, households, communities or firms that can be attributed to a particular project, program or policy. The central impact evaluation question is what would have happened to those receiving the intervention if they had not in fact received the program. Since we cannot observe the same group both with and without the intervention, the key challenge is to develop a counterfactual: a comparison group that is as similar as possible (in observable and unobservable dimensions) to those receiving the intervention. This comparison is what allows causality to be established, so that observed changes in welfare can be attributed to the program while confounding factors are removed.
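
As a minimal illustration of this attribution logic (all names and figures below are hypothetical, not taken from the article), the impact estimate is simply the difference in mean outcomes between those who received the intervention and the comparison group:

```python
# Minimal sketch: impact as the difference in mean outcomes between participants
# and a comparison group, assuming the comparison group is a valid stand-in for
# the counterfactual. The data below are invented for illustration only.
import statistics

# Hypothetical post-program household consumption (local currency units)
treatment_outcomes = [112, 125, 98, 140, 133, 120, 117]   # received the program
comparison_outcomes = [101, 110, 95, 118, 122, 104, 108]  # similar non-participants

impact_estimate = (statistics.mean(treatment_outcomes)
                   - statistics.mean(comparison_outcomes))
print(f"Estimated average impact: {impact_estimate:.1f}")
# This difference can be attributed to the program only to the extent that the
# comparison group truly mirrors what participants would have looked like
# without the intervention, in observable and unobservable dimensions.
```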

Impact evaluation is aimed at providing feedback to help improve the design of programs and policies. In addition to providing for improved accountability, impact evaluations are a tool for dynamic learning, allowing policymakers to improve ongoing programs and ultimately better allocate funds across programs. There are other types of program assessments, including organizational reviews and process monitoring, but these do not estimate the magnitude of effects with clear causation. Such a causal analysis is essential for understanding the relative role of alternative interventions in reducing poverty.

Why conduct an impact evaluation?

Information generated by impact evaluations informs decisions on whether to expand, modify, or eliminate a particular policy or program and can be used in prioritizing public actions. In addition, impact evaluations contribute to improving the effectiveness of policies and programs by addressing the following questions:

• Does the program achieve the intended goal?
• Should this pilot program be scaled up? Should this large-scale program be continued?
• Can the changes in outcomes be explained by the program, or are they the result of other factors occurring simultaneously?
• Do program impacts vary across different groups of intended beneficiaries (males, females, and indigenous people), regions, and over time?
• Are there any unintended effects of the program, either positive or negative?
• How effective is the program in comparison with alternative interventions?
• Is the program worth the resources it costs?

When to conduct an impact evaluation?

Impact evaluations demand a substantial amount of information, time and resources. Therefore, it is important to select carefully the interventions that will be evaluated. One of the important considerations that could govern the selection of interventions (whether they be projects, programs or policies) for impact evaluation is the potential of evaluation results for learning.

Answering four questions would help guide the decision of when to conduct an impact evaluation:

Is the policy or program considered to be of strategic relevance for poverty reduction? The decision of what to evaluate depends on which public actions are most critical to reducing poverty. Interventions that are expected to have the highest poverty impacts may be evaluated to ensure that poverty reduction efforts are on the right track and to allow for any necessary corrections.

Is the intervention testing an innovative approach to poverty reduction? Impact evaluations can help to test pioneering approaches and decide whether they should be expanded and pursued at a larger scale. Hence, the innovative character of policies or programs also provides a strong reason to evaluate. This can be built into project design where, before committing large amounts of resources, multiple variations of the intervention are tested against each other.

Is there sufficient evidence that this type of intervention works well in a number of different contexts? If the answer is yes, then scarce resources may best be devoted to helping adapt the intervention to local conditions and paying close attention to monitoring and supervision. If, however, there are significant differences in local conditions and/or the target population that cast doubt on the applicability of results from elsewhere, then an evaluation may be worth considering.

When do we expect outcomes to show an effect? Certain outcomes and impacts take time to materialize. In some cases this may mean that it is better to delay the final stage of the evaluation until these effects can be observed. In other cases it may be better to choose a proximate set of indicators that are causally linked to the ultimate outcomes and are likely to show an effect earlier. Of course, the most comprehensive strategy is to combine both types of indicators.

How to evaluate the impact of interventions?

An impact evaluation must estimate the counterfactual: the hypothetical situation that would have occurred in the absence of the program, expressed in terms of the welfare levels of individuals or other identifiable units. In order to identify the proper comparison group, a careful understanding of how beneficiaries enter the program is critical. Ideally, an evaluation should be built into a program as the program is designed, which means that the evaluation should be planned as early as possible. Guidance for task managers on managing this process can be found in Impact Evaluation and the Project Cycle.
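
One common way to use knowledge of program entry when constructing a comparison group, sketched below with invented data, is to match each participant to the most similar non-participant on the characteristics thought to drive entry. The text does not prescribe this particular method; it is shown only as an example.

```python
# Minimal sketch (hypothetical, not a method prescribed by the text) of matching
# each participant to the most similar non-participant on an observed
# characteristic assumed to drive program entry (here, baseline income).
participants = {      # id: baseline income
    "p1": 90, "p2": 105, "p3": 80,
}
non_participants = {
    "n1": 88, "n2": 104, "n3": 120, "n4": 79,
}

matches = {}
for pid, p_income in participants.items():
    # nearest neighbour on baseline income
    matches[pid] = min(non_participants,
                       key=lambda nid: abs(non_participants[nid] - p_income))

print(matches)  # e.g. {'p1': 'n1', 'p2': 'n2', 'p3': 'n4'}
# Outcomes of the matched non-participants then stand in for the counterfactual,
# which is credible only if entry is driven by the characteristics matched on.
```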

Note that it is important to distinguish between programs that cover some subset of the population (e.g. targeted social programs) and those that are national in scope (e.g. trade reforms). By nature, national programs do not have a group that is unaffected by the program. Hence, these programs require different techniques for evaluation, such as CGE modeling, simulations, and before-and-after (reflexive) comparisons. These are not impact evaluations in the strict sense, because it is impossible to rule out confounding effects and thereby establish causality.
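
The limitation of a before-and-after (reflexive) comparison can be seen in a small, hypothetical sketch: the observed change bundles the program's effect with everything else that changed over the same period.

```python
# Minimal sketch (invented figures) of a reflexive (before-and-after) comparison
# of the kind used when no unaffected group exists, such as a nationwide reform.
baseline_mean_income = 100.0   # before the reform
followup_mean_income = 112.0   # after the reform

reflexive_change = followup_mean_income - baseline_mean_income
print(f"Before-after change: {reflexive_change:.1f}")
# Caveat, as noted in the text: this change mixes the reform's effect with
# anything else that happened over the same period (weather, inflation, other
# policies), so it cannot establish causality on its own.
```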

In cases where there is variation in eligibility (e.g. by individual, household or geographical location), impact evaluation is likely to be feasible. This variation can be used to identify the appropriate comparison group through a range of methods. See Methods and Techniques for a more in-depth discussion.
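
As one hypothetical example of exploiting variation in eligibility (the figures are invented, and difference-in-differences is only one of the range of methods referred to above):

```python
# Minimal sketch: a difference-in-differences estimate comparing eligible and
# ineligible households before and after the program. All numbers are invented.
means = {
    # (group, period): mean outcome
    ("eligible",   "before"): 100.0,
    ("eligible",   "after"):  118.0,
    ("ineligible", "before"): 104.0,
    ("ineligible", "after"):  110.0,
}

change_eligible   = means[("eligible", "after")]   - means[("eligible", "before")]
change_ineligible = means[("ineligible", "after")] - means[("ineligible", "before")]

did_estimate = change_eligible - change_ineligible
print(f"Difference-in-differences impact estimate: {did_estimate:.1f}")
# The ineligible group's change (here 6.0) proxies for what would have happened
# to eligible households without the program; subtracting it nets out common
# time trends, under the assumption that both groups share those trends.
```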

What is the role of impact evaluation in monitoring & evaluation?

As a component of the monitoring and evaluation process, impact evaluations are an essential instrument for testing the validity of specific approaches to development and poverty alleviation. Impact evaluations help those involved in a project to establish whether or not there is a causal link between an intervention and the outcomes that are of importance to the policymaker. The counterfactual analysis used by impact evaluations is a critical tool for assessing the effectiveness of development interventions. By providing critical feedback on what works and what does not, impact evaluations can help to solidify a results-based project structure.

In order to measure the impact of an intervention, a clear, well-designed evaluation strategy is necessary. Incorporating an impact evaluation into a development program requires a well-structured monitoring and evaluation plan. Every impact evaluation requires a specific methodological design. Through conversations among project managers, government officials, and researchers, the appropriate methodology is chosen and incorporated into the monitoring and evaluation process.

Impact evaluations fit into the monitoring and evaluation process in several ways. First, they help to assess the causal link between an intervention and an outcome of interest. Second, impact evaluations provide baseline evidence on the effectiveness of an intervention, which can be compared with other, similar interventions. Through this process, impact evaluations assist in establishing credible cost-effectiveness comparisons. Third, impact evaluations can serve to build the knowledge base of what works in development. With an increasing demand for evidence of aid effectiveness, rigorous evaluations offer a method through which development successes can be highlighted.
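
A simple, hypothetical sketch of such a cost-effectiveness comparison, with invented costs and impact estimates:

```python
# Minimal sketch: cost per unit of impact for two alternative interventions.
# Names and figures are illustrative only, not drawn from the article.
interventions = {
    # name: (total cost, estimated impact, e.g. additional years of schooling)
    "conditional cash transfer": (500_000.0, 2_000.0),
    "school feeding programme":  (300_000.0, 1_000.0),
}

for name, (cost, impact) in interventions.items():
    print(f"{name}: {cost / impact:.0f} per unit of impact")
# Comparing cost per unit of impact across rigorously evaluated programs is what
# makes the credible cost-effectiveness comparisons mentioned above possible.
```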

What are the Development Impact Evaluation (DIME) objectives and initiatives?

The Development Impact Evaluation (DIME) initiative is a World Bank-led effort involving thematic networks and regional units under the guidance of an expert.

Its objectives are:

• To increase the number of Bank projects with impact evaluation components;
• To increase staff capacity to design and carry out such evaluations;
• To build a process of systematic learning based on effective development interventions, with lessons learned from completed evaluations.

Impact evaluations, inter alia, assess the specific outcomes attributable to a particular intervention or program. They do so by comparing outcomes where the intervention is applied against outcomes where the intervention does not exist. The appropriate comparison group is the one that represents what would have happened in the absence of the intervention. By establishing a good comparison of outcomes between these two groups, an impact evaluation seeks to provide direct evidence of the extent to which the intervention changes outcomes. The goal is to verify the effect on outcomes that is caused by the intervention.

Impact evaluations generate knowledge on which sorts of programs create substantive results and which do not, and under what circumstances. Such information is critical not only for the policymakers directly in charge of the program evaluated, but also for others who may be considering adapting its approach for use in their own circumstances. Particularly when used strategically to test the effectiveness of specific approaches in addressing key development challenges, impact evaluations constitute the preferred approach to assessing results. Further, they can provide critical inputs (benchmarks) to other monitoring and evaluation activities.

Under DIME, groups of impact evaluations of strategic interventions are initiated in a coordinated fashion across countries in different regions of the world. This allows a comparative analysis of results in different settings and produces more robust estimates of program impact to inform policy and program design in the future. The initial themes include interventions to improve education service delivery, conditional cash transfers, urban upgrading programs, and early childhood development programs.

