INTRAC NGOs Impact Assessment
3, March 2001
For the NGO Sector Analysis Programme
Jerry Adams, Senior Consultant at INTRAC, prepared this Policy Briefing Paper. Comments on the paper
are welcome and can be sent to: [email protected]
This paper has been produced as part of the NGO Sector Analysis Programme for the following European
organisations: APSO, Cordaid, Concern Worldwide, DanChurchAid, MS Denmark, Norwegian Church Aid,
Novib, Rädda Barnen, Redd Barna and Save the Children Fund - UK. For further information about the
programme please contact Vicky Brehm at INTRAC ([email protected])
In contrast others such as Maxwell (1992) take a highly qualitative approach. They argue that what matters
in qualitative research is how well the data are interpreted and what claims are made from them. A midpoint
between these two extremes is taken by researchers such as Patton, who see the validity of an assessment
as based on the quality or information-richness of the data and the ability of the researcher to analyse it
(Patton, 1990:185).
Organisations such as ODI, Oxfam and ActionAid, to name a few, have carried out important work on impact
assessment. Oxfam and ActionAid have concentrated on taking a naturalistic approach to assessing
impact. Their results point to difficulties in data collection and the tendency has been to recommend the
need for detailed project monitoring and evaluation systems. Furthermore, a naturalistic approach is very
time-consuming and costly. ODI have taken a different approach and proposed using the tool of cost
effectiveness analysis as a basis for rapid assessments [1]. However, they too have run into problems of
limited availability of data.
[Modes of participation: co-operative, collaborative, collegiate]
A key lesson learnt is that a participatory approach needs more time to ensure that all stakeholders are
involved. The assessment visit needs to be long enough to allow the stakeholders to understand and
own the information and analysis. It is clear that there is a minimum period of time below which the
results are not believable to the stakeholders, however accurate they may be (Patton 1990).
Attempts to increase levels of participation raise the very real question of what level of participation
can be achieved when impact assessment is used at a macro level. How feasible is it to
involve all stakeholders fully? Is participation only feasible when stakeholders have similar or agreed
objectives? This point is made in the ISODEC Review, which found that the participation of stakeholders in
assessments required a significant amount of time and commitment and could not be created rapidly
(Kamara and Roche, 1996: 14).
[1] A concise explanation of cost effectiveness analysis is given in the Good Practice Review on Evaluating
Humanitarian Assistance Programmes in Complex Emergencies (Hallam, 1998).
Data collection tools require skill in order to be used properly and effectively. The key question is: do
they provide reliable and valid data of sufficient quantity and quality?
Qualitative assessment tools, such as semi-structured interviews, focus groups and observation
combined with analysis of secondary data form the basis for the data collection methods. In addition,
more participatory tools such as mapping and ranking can be used, as long as they fulfil the criteria
above and are not used in an extractive fashion.
Scale
N/A  Not Relevant
1    Very Weak
2    Weak
3    Average
4    Strong
5    Very Strong
An example of a scoring system where semantic ends were developed is given below:
TO EMPOWER: Capacity                                        Point 1          Point 5
1. Evidence of policies/strategies which ensure
   an empowering management process                         Unclear          Clear
2. Methods of leadership and delegation which empower       Disempowering    Empowering
3. Evidence of structures or forums owned by staff          Little evidence  Much evidence
4. Level of hope and belief in their ability to
   change for the better                                    Low              High

(Adams 2000, unpublished).
The scoring system facilitates comparison. However, the scores only make sense with the support of
the narrative and regular in situ debriefings so that the consultancy team can share impressions. At
the end of the assessment, the team draw their findings together to produce an average score for each
meta- and sub-indicator.
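As an illustration of how such averaging might work in practice, the sketch below (not from the paper; the indicator names and scores are hypothetical) averages each assessor's 1-5 scores per sub-indicator, excluding 'N/A - Not Relevant' ratings from the average rather than counting them as zero:

```python
def average_scores(team_scores):
    """Average each assessor's 1-5 scores per sub-indicator.

    team_scores maps a sub-indicator name to a list of scores,
    one per assessor. None marks an 'N/A - Not Relevant' rating,
    which is excluded from the average rather than counted as zero.
    """
    averages = {}
    for indicator, scores in team_scores.items():
        valid = [s for s in scores if s is not None]
        # Round to one decimal place for reporting alongside the narrative.
        averages[indicator] = round(sum(valid) / len(valid), 1) if valid else None
    return averages

# Hypothetical example: three assessors score the four
# 'TO EMPOWER: Capacity' sub-indicators.
scores = {
    "policies/strategies": [4, 3, 4],
    "leadership/delegation": [2, 3, 3],
    "staff-owned structures": [5, 4, None],  # one assessor rated N/A
    "hope and belief": [3, 3, 4],
}
print(average_scores(scores))
```

As the paper stresses, such scores only carry meaning alongside the supporting narrative; the arithmetic is deliberately trivial.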
Participation
The participation of stakeholders is a critical area to address because of its fundamental importance to
NGOs and their approach to development. The key lesson emerging is that the more participatory the
approach, the more time is needed to ensure that all stakeholders are involved. The assessment visit needs
to be long enough to allow time for the stakeholders to understand and own the information and analysis. In
this respect, the statement by House (in Patton, 1990) on the importance of persuading the audience is
helpful in that it shows a limitation in applying the methodology. It is clear that there is a minimum period of
time below which the results are not believable to the stakeholders, however accurate they may be.
The Use of Different Tools
Whilst each of the different tools - analysis of secondary data, beneficiary assessment, self-assessment,
interviews, focus groups and so forth - is useful, each requires skill in application and needs to be
complemented by other tools. This bears out the statements by Fielding and Fielding (1986) on the
importance of selecting the right tools and then using them so that through triangulation an accurate picture
is built up.
make use of the outputs of the assessment and to have some input into the form those outputs should
take.
Impact Assessment as a Process, not a Final Act
One of the difficulties in trying to assess impact is that it can actually occur at a number of different stages
of a project or even long after a project has finished. It can also occur outside the project area and to groups
of people who are not involved in the process. In this respect, the definitions of impact assessment are not
helpful as they can imply that impact is a goal to be reached/achieved and that once achieved it cannot be
lost. The Ibis study recognised that impact can happen at a number of different points both during and after
the project process. The use of workshops gave stakeholders an opportunity to reflect on what had been
achieved and to take it further. Thus, the actual process of the impact assessment contributes to the
sustainability of the impact.
A Definition of Impact Assessment or a Set of Qualities?
The Ibis Impact Assessment process questioned whether a definition of impact needs to incorporate the
concept of long term sustainable change, or whether in fact it is more accurate to define impact as a
process where the sustainability of that process is dependent on a number of factors. In Namibia, for
example, a key constraint to the sustainability of some of the impacts of the Life Science Project is the
negative impact of HIV/AIDS.
References
ADAMS, J. (2000) The Development of a Methodology to Assess the Impact of Tearfund on its Partners and
through them on the Poor. A Paper Outlining the Process and Lessons Learned. Tearfund:
unpublished, 28pp.
ADAMS, J., Batchelor, S., McKemey, K. and Wallace, I. (1999) Developing a Methodology for Measuring the
Impact of an International NGO and its Local Partners. Reading:AERDD, 19pp. (Working
Paper 99/1).
ALKIRE, S. (1997) Impact Assessment: Oxfam Versus Poverty. Oxford: Oxfam, 18pp.
BIGGS, S. (1989) Resource Poor Farmer Participation in Research: A Synthesis of Experiences from Nine
Agricultural Research Systems.
BLANKENBERG, F. (1995) Methods of Impact Assessment Research Programme, Resource Pack and
Discussion. The Hague: Oxfam UK/I and Novib.
COHEN, L. and Manion, L. (1989) Research Methods in Education. London: Routledge, xxii + 413pp.
EDWARDS, M. and Hulme, D. (1995) Non-Governmental Organisations - Performance and Accountability:
Beyond the Magic Bullet. London: Earthscan, 259pp.
FEUERSTEIN, M. T. (1986) Partners in Evaluation: Evaluating Development and Community Programmes
With Participants. London: Macmillan, xii +196pp.
FIELDING, N. and Fielding, J. (1986) Linking Data: The Articulation of Qualitative and Quantitative Methods
in Social Research. London: SAGE, 96pp. (Qualitative Research Methods Series No. 4).
FIRESTONE, W. (1987) Meaning in Method: The Rhetoric of Quantitative and Qualitative Research, In
Educational Researcher, Vol.16, (7) : 16-21.
FOWLER, A. (1997) Striking a Balance: A Guide to Making NGOs Effective in International Development.
London: Earthscan/INTRAC. 298pp.
GOYDER, H. (1995) New Approaches to Participatory Impact Assessment. London: ActionAid.
GOYDER, H., Davies, R. and Williamson, W. (1997) Participatory Impact Assessment. London: ActionAid,
57pp.
GUBA, E. and Lincoln, Y. (1989) Fourth Generation Evaluation. London: SAGE, 294pp.
HALLAM, A. (1998) Evaluating Humanitarian Assistance Programmes in Complex Emergencies. London:
ODI, 126pp. (RRN Good Practice Review No. 7).
HILDEBRAND, P.E. (1981), Combining Disciplines in Rapid Appraisal: The Sondeo Approach. Agricultural
Administration, Vol. 8 : 423-432.
HOPKINS, R. (1995) Impact Assessment: Overview and Methods of Application. Oxford: Oxfam (UK&I) and
Novib, 70pp.
INTRAC (1999) Evaluating Impact: the Search for Appropriate Methods and Instruments. In Ontrac, No.12.
KAMARA, S. and Roche, C. (1996) A Participatory Impact Assessment and Advocacy Project for Poverty
Alleviation in Northern Ghana. Ghana and Oxford: ISODEC and Oxfam, 16pp.
KRUSE, S-E., Kylonnen, T., Ojanpera, S., Riddell, R. and Vieljus, J-L. (1997) Searching for Impact and
Methods: NGO Evaluation Synthesis Study. Finland: Finnish Ministry of Foreign Affairs,
xxvii+117pp. (A Report Prepared for the OECD/DAC Expert Group on Evaluation.)
MARSDEN, D., Oakley, P. and Pratt, B. (1994) Measuring the Process: Guidelines for Evaluating Social
Development. Oxford: INTRAC, 178pp.
OAKLEY, P., Pratt, B. and Clayton, A. (1998) Outcomes and Impact: Evaluating Change in Social
Development. Oxford: INTRAC, 177pp.
PATTON, M. (1990) Qualitative Evaluation and Research Methods. London: SAGE, 532 pp.
RIDDELL, R. (1990) Judging Success: Evaluating NGO Approaches to Alleviating Poverty in Developing
Countries. London: ODI, 57pp. (ODI Working Paper 37).
ROCHE, C. (1999) Impact Assessment for Development Agencies. Oxford: Oxfam/Novib, viii+308pp.
RUXTON, R. (1995) Participation in Impact Assessment. Oxford: Oxfam UK/I, 23pp. (draft).
Save the Children Fund UK (1994) Toolkits: A Practical Guide to Monitoring, Assessment, Review
and Evaluation. London: SCF, 354 pp. Development Manual 5.
YIN, R. (1989) Case Study Research: Design and Methods. London: SAGE, 166pp.
Abbreviations and Acronyms
DAC    Development Assistance Committee (of the OECD)
ODI    Overseas Development Institute
OECD   Organisation for Economic Co-operation and Development
INTRAC (International NGO Training and Research Centre)
P.O. Box 563, Oxford OX2 6RZ, United Kingdom.
Website: http://www.intrac.org