
Monitoring and

Evaluation in the
Development
Sector
A KPMG International
Development
Assistance Services
(IDAS) practice survey

KPMG INTERNATIONAL
Contents

Foreword – Monitoring & Evaluation (M&E) Survey 5
Survey highlights 6
Monitoring 8
Purpose of monitoring 9
Monitoring system effectiveness 9
Evaluation purpose 10
Evaluation priorities 10
Decision rules 11
Tracking outputs and outcomes 11
Evaluation management and approaches 12
Institutional arrangements 13
Change in M&E approaches 14
Evaluation methodologies 15
Evaluation techniques 16
Strengths and weaknesses of evaluation 17
Use of new technology 18
Roadblocks to using technology 19
Evaluation feedback loops 20
Timeliness of evaluations 20
Resources for monitoring and evaluation 22
Availability of M&E resources 22
Role models in M&E 23
Methodology Case Study: Outcome mapping 24
Survey methodology 27
Glossary 28
Bookshelf 30

Foreword
Monitoring & Evaluation
(M&E) Survey
We are pleased to present findings from KPMG’s Monitoring and Evaluation
(M&E) Survey, which polled more than 35 respondents from organizations
responsible for over US$100 billion of global development expenditure.
The survey reflects perspectives from M&E leaders on the current state,
including approaches, resources, use of technology and major challenges
facing a variety of funders and implementers.
At a time of increasing public scrutiny of development impacts, there
is increased focus in many development agencies on M&E tools and
techniques. The objective of KPMG’s M&E Survey was to understand
current approaches to M&E and their impact on project funding, design, and
learning. More effective M&E is necessary to help government officials,
development managers, civil society organizations and funding entities to
better plan their projects, improve progress, increase impact, and enhance
learning. With an estimated global spend of over US$350 billion per annum
on development programs by bilateral, multilateral, and not-for-profit
organizations, improvements in M&E have the potential to deliver benefits
worth many millions of dollars annually.
Our survey reveals a range of interesting findings, reflecting the
diversity of institutions consulted. Common themes include:
•  A growing demand to measure results and impact
•  Dissatisfaction with use of findings to improve the delivery of new programs
•  Resourcing as an important constraint for many respondents
•  New technology still in its infancy in application
On behalf of KPMG, we would like to thank those who participated
in this survey. We hope the findings are useful to you in addressing
the challenges in designing and implementing development projects
and also to build on the lessons learned. By enhancing the impact and
delivery of development projects, we can all help to address more
effectively the challenges facing developing countries.

Timothy A. A. Stiles Trevor Davies


Global Chair, IDAS Global Head, IDAS Center of Excellence

KPMG [“we,” “our,” and “us”] refers to the KPMG IDAS practice, KPMG International, a Swiss entity that serves as a
coordinating entity for a network of independent member firms operating under the KPMG name, and/or to any one or
more of such firms. KPMG International provides no client services.

Survey highlights
No clear consensus on terminology or approach
Survey respondents used divergent organizational definitions of various M&E terms. This is potentially problematic for both donors and implementers for a variety of reasons, including lack of clarity on monitoring approaches and evaluation techniques. (See Glossary for terminology used in this report.)

Availability of more sophisticated evaluation models and techniques doesn't guarantee their use
Although there is a wide range of evaluation techniques available, ranging from the highly technical (such as counterfactual studies) to the innovative (such as Social Return on Investment (SROI)), our survey indicates that the most widely used techniques are in fact quite basic. The top three techniques used are:
1. "Logical frameworks"
2. "Performance indicators" and
3. "Focus groups"

Need for stronger and more timely feedback loops to synthesize and act on lessons learned
Project improvement and accountability to funders drive the motivation for monitoring projects. The vast majority of respondents said they monitor projects for project improvement, and also said that they carry out evaluations to ensure that lessons are learned and to improve the development impact of their projects.
However, over half of respondents identified "Changes in policy and practice from evaluation" as "poor" or "variable," and nearly half of all respondents identified the ability of their "Feedback mechanisms to translate into changes" as a weakness or major weakness.
This presumably means that reports are produced but not acted upon often enough or in a timely fashion, representing a missed opportunity.

Adoption of new technologies is lagging
The use of innovative technologies, such as mobile applications, to address international development challenges has gained recent attention. When asked about use of technology to collect, manage and analyze data, the vast majority of respondents said that "Information and Communication Technology enabled visualizations" were "never" or "rarely" used; and almost as many respondents indicated that "GPS data," a relatively accessible technology, was never or rarely used. This means that M&E is still a labor-intensive undertaking.

Lack of access to quality data and financial restrictions are the key impediments to improving M&E systems
Over half of respondents identified a lack of financial resources as a major challenge to improving the organization's evaluation system. A similar majority of respondents estimated levels of resourcing for evaluation at 2 percent or less of the program budget, which many survey respondents indicated to be inadequate.

Policy implications and recommendations

01
Development organizations should expand their use of innovative approaches to
M&E, using information and communication technology enabled tools to harness
the power of technology to reduce the costs of gathering real-time data.

02
Development organizations need to strengthen feedback from evaluation into
practice through rapid action plans, with systematic tracking, and more effective
and adequately resourced project and program monitoring practices and systems.

03
It is a false economy to underinvest in M&E, as the savings in M&E costs are likely to be lost through reduced aid and development effectiveness. Organizations should monitor M&E expenditure as a share of program budget, and move towards industry benchmarks where spending is low.

04
Standardized terminology and approaches, such as that provided by the Organisation for
Economic Cooperation and Development (OECD) Development Assistance Committee,
should be applied within nongovernmental organizations (NGOs) and philanthropic
organizations, in order to standardize and professionalize approaches to M&E.

05
Evaluation approaches in NGOs are driven by donors without adequate
harmonization of approaches and joint working. Donors should apply the
principles of harmonization not just to developing countries, but also to NGO
intermediaries, both to reduce the administrative burden and to allow a more
strategic and effective approach.

06
Evaluation systems should include opportunities for feedback from primary beneficiaries.

07
Project evaluations should be synthesized appropriately through adequate investment in sector and thematic reviews and evaluations.

08
Fully independent evaluation organizations or institutions provide an effective model
to professionalize and scale up evaluation work, with appropriate support from
independent experts.

Monitoring

Achieving maximum development impact is high among the priorities for monitoring, but challenges remain in using information to improve program delivery. Less than half the respondents stated that organizations always or very frequently update targets and strategies, and in less than 40 percent of cases do organizations always or very frequently produce clear action plans with follow-up.

Purpose of monitoring
Question: What is the key focus of the organization in project monitoring?
The most important purposes of monitoring are for project improvement (91 percent of respondents) and accountability to funders (87 percent). Organizations are more aware of monitoring accountability to funders than to their own internal boards. It is also striking, in the current climate, that value for money is accorded a relatively lower priority for monitoring information than most other motivations.

"Monitoring data is seen as a very important input to evaluation, but since the data are not often […] there, its use is limited."

Figure 1: “Most important” or “Very important” monitoring objectives


(multiple responses allowed)

Project Improvement 91%

Accountability to funders 87%

Portfolio performance management 75%

Impact Measurement 68%

Compliance 66%

Value for Money 50%

Accountability to board 48%

Source: Monitoring and Evaluation in the Development Sector, KPMG International, 2014.

Monitoring system effectiveness


Question: How would you assess the monitoring system of your organization?
The basics of the monitoring system are functional in most of the organizations
covered. The strengths of monitoring systems include monitoring in line with
project plans at inception and aggregation of monitoring results. Relative challenges
include lack of sufficient staffing and resources, and the failure to produce clear
action plans with appropriate follow-up to ensure that issues identified during
monitoring are effectively actioned.

Figure 2: "Always" or "Very frequently" used monitoring attributes
(multiple responses allowed)

Projects monitored with plans at inception 68%
Monitoring results aggregated 61%
Primary beneficiaries and stakeholders consulted annually 48%
Monitoring results in updated targets and strategies 45%
Monitoring plans integrated with evaluation framework 45%
Monitoring produces clear action plans with appropriate follow-up 39%
Program teams have sufficient staffing and travel resources 35%

Source: Monitoring and Evaluation in the Development Sector, KPMG International, 2014.

Evaluation purpose
There are many factors which influence why organizations undertake evaluations of their activities, and these are not mutually exclusive. Broadly speaking, these are focused around operational effectiveness and external accountability to different constituencies.

"Learning is the most important objective. However, the Directorate would say that showing politicians we are effective to secure future funding is paramount."

Evaluation priorities
Question: What are the main reasons why your organization conducts formal
evaluations?

Eighty-five percent of respondents indicated that learning lessons was a


key motivation for evaluation, followed closely by 82 percent that identified
development impact. Relatively less emphasis is given to accountability to
taxpayers and trustees, and attracting additional funding. Operational effectiveness
is the more dominant reason why organizations undertake evaluation. In terms of
accountability, improving transparency and accountability dominate; however, some
organizations struggled to rank effectiveness above accountability.

Figure 3: "Most important" or "Very important" evaluation objectives
(multiple responses allowed)

Development Impact Focused:
To ensure lessons are learned from existing programs 85%
To improve development impact 82%
To provide evidence for policy makers 71%
To pilot the effectiveness of innovative approaches 70%
To improve value for money 55%
To attract additional funding 52%

Accountability Focused:
To improve transparency and accountability 79%
To meet donor demands 75%
To meet statutory demands 57%
To meet board or trustee requirement 48%
To show taxpayers aid is effective 45%

Source: Monitoring and Evaluation in the Development Sector, KPMG International, 2014.

Decision rules
Question: Are there decision rules which you follow when deciding when/how to invest in evaluation? If yes, what are they?
Respondents indicated a variety of reasons for how their organizations approach the question of when and whether evaluations are undertaken, reflecting the diverse nature of institutions and contexts.

"Under the current drive for Results Based Management, we are pushing our staff to focus more keenly on the outcome level. They have been stuck in the activity-to-output process for too long."
Decisions are based on factors such as:
•  Required on all projects
•  Demands from donors
•  Government rules
•  Project plans
•  Evaluation strategies
•  Undertaken as best practice

Tracking outputs and outcomes


Question: Do you aim to evaluate outputs or outcomes?

Most respondents indicated that they look to evaluate both outputs and
outcomes. Some organizations are able to carry out the full M&E cycle from
monitoring outputs to evaluating outcomes to assessing impact. Issues such as
lack of availability of data or differing donor requirements can constrain this.
Evaluation management and approaches

Most large organizations have a mixed approach to managing evaluations in order to combine the advantages of centralized and decentralized approaches.

Institutional arrangements
Question: Which parts of the organization are responsible for monitoring and
evaluation (country office, program team, HQ evaluation specialists, independent
evaluation office, external contractors, others)? Can you describe how the overall
evaluation work in the organization is divided between these different groups
either by type of work or by amount of work in the area of evaluation?
Evaluations can be conducted at different levels including evaluations by the primary
beneficiaries themselves, evaluations by the program teams, and evaluations
by a central evaluation team. They can also be undertaken by an independent
evaluation office or commissioned from consultants, though less than half of
respondents reported that they always or very frequently do so. Nevertheless, the
more frequently used evaluation approaches include commissioned consultancy
evaluations and program team evaluations. Fully independent evaluations and self
evaluation by grantees are less often used.

Figure 4: Frequency of use of evaluation mechanisms
(multiple responses allowed)

Independent Evaluation Office/Institutions: Rarely/Never 64%, Always/Very frequently 33%
Self-evaluation by grantees/recipients: Rarely/Never 55%, Always/Very frequently 17%
Central Evaluation Team/Dept (internal): Rarely/Never 45%, Always/Very frequently 34%
Commissioned Consultancy Evaluations: Rarely/Never 30%, Always/Very frequently 48%
Program Team Evaluations: Rarely/Never 34%, Always/Very frequently 43%

Source: Monitoring and Evaluation in the Development Sector, KPMG International, 2014.

Question: Which mechanisms are used to conduct formal evaluations?
Around a third of the respondents indicated that a central evaluation team or department would evaluate projects very frequently, or always. This approach brings greater accountability to the evaluation process as well as a basis to compare performance across the organization. It should also allow the deployment of greater expertise.

"Evaluation is decentralized to teams and commissioned and managed by them with advisory support from the central evaluation department."

Question: How does this distribution mirror the way in which the organization is
structured (e.g., centralized, decentralized)?
Generally, the responses confirm that organizational structure mirrors the
centralized and decentralized aspects of the M&E system. The majority of
respondents focused on the decentralized nature of both their evaluation approach
and their organizational delivery model with some notable exceptions.

Change in M&E approaches


Question: What is changing in your organization’s approach to monitoring and
evaluation?
Some of the key messages are a growing demand for evidence, strengthening
of the evaluation system, improved monitoring, and increased interest in impact
measurement. There is a growing emphasis on building the evidence base for
programs through evaluation in many organizations. Some respondents gave a
strong account of having deliberately embedded a results-based approach in their
organization.

Growing Demand for Evidence
• "Recognition of the need for an evidence base is increasing."
• "Demand for regular reporting to the board is increasing."
• "Internally we are sick of not being able to say what difference we have made."
• "A shift towards greater focus on building the evidence base."

Evaluation Systems Strengthening
• "We have pushed up both the floors and ceilings of evaluation standards in the organization. What was previously our ceiling (gold standard) is now our floor (minimum standard)."
• "A more strategic approach is planned so evidence gaps are identified more systematically and better covered by evaluation."

Improved Monitoring Approaches
• "We are working on getting more sophisticated in our use of monitoring data so we have better and timelier feedback information loops."
• "We are implementing changes to improve monitoring and how we use monitoring data."

More Focus on Impact Measurement
• "Evaluation has moved from only addressing performance issues to addressing impact issues."
• "We have identified some innovative programs which we design with leading universities or academics where we feel the contribution to global knowledge is important, and where the rigor of the design and M&E needs to be top notch."

"Every key person in the program is involved in ensuring that implementation of research projects is geared towards realizing the impact we are seeking to achieve, and they monitor and collect evidence of outputs and feed them to the M&E section."

Evaluation methodologies
Question: What type of evaluation does your organization currently use and how frequently?
Project evaluations are by far the most frequently used methodology. Impact, sector, and risk evaluations are used relatively rarely in most organizations.

Figure 5: “Always” or “Very frequently” used evaluation types


(multiple responses allowed)

Project evaluation 69%

Participatory evaluations 33%

Country program evaluation 26%

Self-evaluations 25%

Thematic evaluation 25%

Impact evaluation 25%

Sector evaluation 17%

Risk evaluations 14%

Source: Monitoring and Evaluation in the Development Sector, KPMG International, 2014.

Question: Which type of evaluation would you like your organization to do more of?
Few techniques are considered to be overused. Respondents report that there is a need to increase
the use of country program, sector, participatory, and impact evaluations. The cost of certain types of
evaluations can also impact choice.

Figure 6: “Underused” or “Very underused” evaluation types


(multiple responses allowed)

Country program evaluation 66%

Participatory evaluations 65%

Risk evaluations 56%

Thematic evaluation 54%

Impact evaluation 53%

Sector evaluation 48%

Self evaluations 43%

Project evaluation 14%

Source: Monitoring and Evaluation in the Development Sector, KPMG International, 2014.

Evaluation techniques
Question: Which evaluation techniques does your organization currently use, and
how frequently?
The survey encountered a divergence in the frequency of techniques used for
evaluation, with relatively low emphasis on quantified techniques which involve a
counterfactual analysis with potential attribution of impact. This is understandable, but
does reflect the challenge of quantified reporting of impact in the development field.
Techniques such as tracking a theory of change and ‘results chains’ are more frequently
used and will give some explanation of how interventions are having an impact.
Performance indicators and logical frameworks are the most frequently used
techniques. Organizations are not frequently using techniques of SROI, Cost Benefit
Analysis, and Return on Investment.

Figure 7: "Very Frequently" or "Always" used evaluation techniques
(multiple responses allowed)

[Bar chart not reproduced. Reported values ranged from 77% and 75% for the two most frequently used techniques (performance indicators and logical frameworks) down to 0%, with counterfactual and financial techniques such as SROI, Cost Benefit Analysis, and Return on Investment among the least used. Legend: 75%+, 50-75%, 25-50%, 0-25%.]

Source: Monitoring and Evaluation in the Development Sector, KPMG International, 2014.

Strengths and weaknesses of evaluation

"For us, evaluation is a work in progress."
Question: What would you say are the main strengths and weaknesses of your
organizational approach to evaluation?
Although there is high confidence in the rigor of measurement for evaluation,
there are perceived weaknesses in other areas. A lack of financial resources is
perceived as a major challenge to improving evaluation systems. Most respondents
(61 percent) indicate strong external scrutiny as a major strength. No other feature
was reported as a major strength by the majority of respondents. Three major
weaknesses were identified by at least 40 percent of respondents in the areas of
measurement, timeliness, and, most commonly, overall feedback mechanisms.

Figure 8: "Weaknesses" or "Major Weaknesses" and "Strengths" or "Major Strengths" of evaluation
(multiple responses allowed; Major strength / Major weakness)

Rigorous measurement: 21% / 42%
Timeliness – speed in finding what is and is not working: 15% / 41%
Feedback mechanisms – findings are effectively translated to changes: 38% / 47%
Overall levels of investment and frequency – sufficient evaluation activity: 42% / 27%
Quality assurance – high quality of evaluation work: 44% / 19%
Level of independence – strong external scrutiny: 61% / 19%
Source: Monitoring and Evaluation in the Development Sector, KPMG International, 2014.

Use of new technology

The use of new technology in M&E appears to lag behind other sectors of development which have more readily adopted new technologies, including mobile-based solutions, crowd-sourcing, and location-based reporting applications. Organizations appear to be limited in their use of Information and Communication Technology (ICT) enabled tools due to challenges accessing data and getting meaningful information from the analysis provided by these tools.

"The use of big data is a great idea but our staff are cynical about the reliability and veracity of government generated data in poorly governed countries."
Question: Which information and communication technology enabled tools
do you use to collect, manage and analyze data for monitoring and evaluation
purposes?
New technology applications in development evaluation are still in their relative infancy. Most data techniques are "rarely" or "never" used by the majority of respondents (see Figure 9), with Web-based surveys being the most frequently used technique. Some organizations are also developing data entry systems using tablets and mobile phones. Accessing data is a major challenge for a majority of respondents.

Figure 9: "Rarely" or "Never" used ICT enabled tools
(multiple responses allowed)

ICT enabled visualization 91%
Video 83%
GPS data 82%
Audio 81%
Media monitoring 72%
Mobile based (e.g., SMS) 69%
Open source database 57%
Web-based surveys 54%
Source: Monitoring and Evaluation in the Development Sector, KPMG International, 2014.

Roadblocks to using technology


Question: What would you say are the main challenges and problems in
introducing data analytics and “big data” into your evaluation system?
Access to expertise, cost of data management and analysis, ease of data
accessibility and standardization, and the use of technology by beneficiaries in the
field were identified as key factors that impeded greater adoption of technology.

Figure 10: “Major” or “Substantial” challenges in introducing data analytics and ”big data”
(multiple responses allowed)

Accessing data 72%


Getting meaningful
information from 69%
the analysis
Processing data 55%

Accessing skilled
55%
personnel
Accessing financial
resources 52%

Source: Monitoring and Evaluation in the Development Sector, KPMG International, 2014.

Evaluation feedback loops

A commonly expressed concern about evaluation is that feedback loops are very slow. By the time a particular intervention has been implemented and evaluated, organizational priorities and approaches may have moved on, meaning the relevance of the evaluation is marginal. For this reason, it may be better to give more priority and resources to effective monitoring than to ex post evaluation.

"The better the evaluation product and the more stakeholder involvement during the process, the better the uptake at all levels."
Question: How effective would you say the feedback mechanisms are
between evaluation findings and operational performance/strategy?
Analyzing feedback is generally scored as poor or variable in the majority of
cases. In four of the five categories, more than half the responses were “poor”
or “variable.” The score for “internalizing evaluation feedback” is, however,
less negative than the more detailed examples, which focus on what would
be involved in generating such feedback.

Figure 11: "Poor" or "Variable" assessments of feedback mechanisms
(multiple responses allowed)

[Bar chart: four of the five categories – changes in policy and practice from evaluation, synthesis of evaluation lessons, reporting externally on evaluation follow-up, and analysis of emerging patterns and trends – scored 59%, 53%, 52% and 52%; internalizing evaluation feedback scored 36%.]
Source: Monitoring and Evaluation in the Development Sector, KPMG International, 2014.

Timeliness of evaluations
Question: How long does it take for the results of an evaluation to result in
improvements in current and future programs?
Most respondents (66 percent) felt that it would take less than a year for the results
of an evaluation to lead to improvements. Few respondents identified timeliness as
a strength of their evaluation system. It is important to appreciate that the question
refers not to the whole project cycle, but only to the period between a completed
evaluation and the lessons from that evaluation being applied at a project level.

Figure 12: Time for evaluation results to lead to performance improvements
(numbers do not sum to 100 due to rounding)

Less than 3 months: 10%
3–6 months: 28%
6 months – 1 year: 28%
1–2 years: 21%
More than 2 years: 14%

Source: Monitoring and Evaluation in the Development Sector, KPMG International, 2014.

Sample reasons for length of feedback loops:
• Multicountry/multilingual projects take longer to absorb the lessons of particular evaluations, especially when considering complex projects.
• Meta-evaluations on a given sector or a particular approach are undertaken on a five-year cycle, so the lesson learning and policy feedback loop can take that entire length of time.

Resources
for monitoring and evaluation
A common theme of a number of questions in the survey is that
lack of resources is a major constraint or challenge.

Availability of M&E resources


Question: What proportion of program budget would you say is being spent
on monitoring and evaluation?
The majority of respondents (53 percent) stated that levels of resourcing for
evaluation were 2 percent or less of the program budget.
Figure 13: Proportion of program budget being spent on M&E
(Estimated spending on M&E as % of program budget; numbers do not sum to 100 due to rounding)

1%: 25%
2%: 28%
3–5%: 28%
6–7%: 6%
8–9%: 6%
10%+: 6%
Source: Monitoring and Evaluation in the Development Sector, KPMG International, 2014.

Question: What would you say are the main challenges and problems in
improving your evaluation system?
Lack of financial resources is the most frequently cited challenge to strengthening
the evaluation system. Nearly a quarter of respondents estimated the evaluation
budgets to be 1 percent or less of program spend. The share of respondents
estimating the evaluation budget at more than 5 percent of program budget is fewer
than one in five (19 percent).
"Availability of financial resources is usually a challenge which in turn has consequences in the application of more robust methodologies for better evaluation practices."

Figure 14: Main challenges and problems in improving evaluation systems
(multiple responses allowed)

Lack of financial resources 55%
Lack of access to data and information 38%
Inability to hire good consultants 30%
Inability to recruit staff 27%
Lack of robust methodologies 24%

Source: Monitoring and Evaluation in the Development Sector, KPMG International, 2014.

Role models in M&E


Although responses to a question about role models were quite varied,
multilateral banks and the health sector were consistently ranked as leaders in the
development sector.
Question: In your opinion which organization has the strongest monitoring
and evaluation approach?
The most admired organizations were praised by their peers for the strength, quality, and data-driven nature of their approach to monitoring and evaluation.

Methodology
Case Study: Outcome Mapping

Outcome Mapping (OM) is an alternative planning and results


evaluation system for complex development interventions. Key to
the success of this system has been the ability to adapt it in creative
ways to meet an individual program’s needs.
What is it?
One of the most important attributes of the OM method is its ability to track a breadth
of activities – both planned and opportunistic, and capture a range of results – from the
incremental to the transformative, across a variety of stakeholders. This is in contrast to
more conventional systems of results measurement, where the focus is narrowed to a
task of measuring planned activities, and using predefined indicators to chart high-level
results.
How does it work?
OM begins by identifying “boundary partners”: influential people, organizations,
institutions or other entities with whom a program will work to achieve its goals. These
partners might be politicians, community leaders or the media.
Progress towards goals is then tracked in terms of observed changes in behavior among
these boundary partners. Practitioners are asked to record small changes that they
observe every day in “outcome journals,” which enables them to capture a range of
evidence from the seemingly small to the transformative. This also allows practitioners
the freedom to capture whatever information best illustrates the change – as opposed to
collecting information against specific predefined indicators, as is done with a log-frame.
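For practitioners who keep their journals in a spreadsheet or script rather than on paper, the idea above can be sketched as a simple data model. This is purely illustrative — the class and field names are our own, not part of any OM standard:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class BoundaryPartner:
    """A person, organization or institution the program works with directly."""
    name: str
    role: str  # e.g. "community leader", "ministry", "media outlet"

@dataclass
class JournalEntry:
    """One observed behavior change, however small, recorded as it happens."""
    partner: BoundaryPartner
    observed_on: date
    description: str          # free text, not a score against an indicator
    significance: str         # e.g. "incremental" or "transformative"

# An outcome journal is just the running list of observations per partner.
journal: list[JournalEntry] = []
board = BoundaryPartner("District water board", "local institution")
journal.append(JournalEntry(board, date(2014, 3, 5),
                            "Published its budget for public comment",
                            "incremental"))
```

Because entries are free-text observations rather than scores against predefined indicators, the same structure accommodates both expected and opportunistic results.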

Adapting OM: The Accountability in Tanzania (AcT) Program

Funded by DFID and hosted by KPMG in East Africa, the US$52 million Accountability
in Tanzania (AcT) program provides a useful example of how OM methodologies can be
applied creatively to facilitate flexible, impact-driven development programming. AcT
provides flexible grant funding to 26 civil society organizations (CSOs) working to improve
accountability of government in Tanzania.

• AcT developed new results measurement indicators that


In order to allowed it to merge its CSO-level OM data with the program’s
facilitate its overall logframe in order to demonstrate, from top to bottom,
innovative how change actually happens.
approach to • AcT developed a database through which to manage its OM
grant-making, results. Database analysis has allowed AcT to develop a
AcT has clearer view of the results pathways for the program, report
adapted results easily to DFID, and develop much more precise
outcome progress markers to facilitate further learning.
mapping to
meet its needs • OM has provided an effective basis for structuring and
in a variety of monitoring AcT’s partnerships with CSOs – in order to gauge
ways. the extent to which AcT support is helping to achieve a
strengthened civil society in Tanzania.
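The kind of database analysis described above can be approximated in a few lines of code. The sketch below aggregates hypothetical journal entries by progress marker to give a program-level count of the sort that could feed a donor report; the CSO names and marker wording are invented for illustration (though "expect to see"/"like to see" echo standard OM progress-marker terminology):

```python
from collections import Counter

# Hypothetical OM records exported from a results database:
# (partner, progress_marker) pairs, one per observed change.
entries = [
    ("CSO A", "expect to see: engages local officials"),
    ("CSO A", "like to see: cites budget data publicly"),
    ("CSO B", "expect to see: engages local officials"),
    ("CSO B", "expect to see: engages local officials"),
]

# Count observed changes per progress marker across all partners,
# giving a program-level view that a logframe row can summarize.
by_marker = Counter(marker for _, marker in entries)
for marker, n in by_marker.most_common():
    print(f"{marker}: {n} observed change(s)")
```

Running this prints the markers in descending order of observed changes, which is the simplest form of the "results pathway" view the AcT database provides.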

Survey methodology
In carrying out this survey, we define monitoring as the activity
that is concerned with the review and assessment of progress
during implementation of development activities and projects.
This provides ongoing feedback to managers and funders about
performance – what is working, what is not, and what needs correcting.
In contrast, “evaluation is the episodic assessment of the change in targeted results
that can be attributed to the program/project intervention, or the analysis of inputs and
activities to determine their contribution to results.”1
KPMG’s Monitoring and Evaluation Survey reflects the responses of 35 participants
during February through April 2014. Respondents’ organizations are responsible for
over US$100 billion of development spend.2
The purpose of the survey was to identify current trends and opinions of those
who are leading the agenda within key development institutions. The survey was
completed using an online survey tool, supplemented in most cases by a telephone
interview to clarify responses and allow opportunity for dialogue. The following
types of organizations participated.

Figure: Respondents by participant type (Bilateral, Multilateral, Philanthropic)
and by role (Funder, Implementer, Both Funder and Implementer).

Source: Monitoring and Evaluation in the Development Sector, KPMG International, 2014.

1 http://info.worldbank.org/etools/docs/library/243395/M2S2%20Overview%20of%20Monitoring%20and%20evaluation%20%20NJ.pdf slide, 15 March 2014.
2 KPMG estimate based on published information.

Glossary
Accountability – The obligation to account for activities, accept responsibility for them, and to disclose the results in a transparent manner
Baseline Study – Analysis of the current situation to identify the starting point for a project or program
Beneficiary Feedback – Monitoring through obtaining information from the primary stakeholders who benefit or are intended to benefit from the project or program
Compliance – Evaluation to comply with internal or external rules or regulations
Cost Benefit Analysis – Quantification of costs and benefits producing a discounted cash flow with an internal rate of return or net present value
Counterfactual Study – A study to estimate what would have happened in the absence of the project or program
Evaluation – An assessment, as systematic and objective as possible, of an ongoing or completed project, program or policy, its design, implementation and results
Focus Group – A form of qualitative research in which a group of people are asked about their perceptions, opinions, beliefs, and attitudes
Funder – An organization which provides financial support to a second party to implement a project or program for the benefit of third parties
ICT Enabled Visualization – Use of computer graphics to create visual means of communication and consultation with stakeholders and beneficiaries
Impact Measurement – The process of identifying the anticipated or actual impacts of a development intervention, on those social, economic and environmental factors which the intervention is designed to affect or may inadvertently affect
Impact Evaluation – Assesses the changes that can be attributed to a particular intervention, such as a project, program or policy, both the intended ones, as well as unintended ones
Implementer – The organization which is responsible for delivery/management of a development intervention
Independent Evaluation – An evaluation which is organizationally independent from the implementing and funding organizations
Key Performance Indicators – Key Performance Indicators (KPIs) define a set of values used to measure against. An organization may use KPIs to evaluate its success, or to evaluate the success of a particular activity in which it is engaged
Logical Framework – A tool which sets out inputs, outputs, outcomes and impact for an intervention with indicators of achievement, means of verification and assumptions for each level
Monitoring – The process of gathering information about project performance and progress during its implementation phase
Open Source Database – Open source software is computer software that is distributed along with its source code – the code that is used to create the software – under a special software license
Outcome Mapping – An alternative planning and results evaluation system for complex development interventions which tracks planned and unplanned outcomes (see Case Study)
Participatory Evaluation – Provides for the active involvement of those with a stake in the program: providers, partners, beneficiaries, and any other interested parties. All involved decide how to frame the questions used to evaluate the program, and all decide how to measure outcomes and impact

Participant Analysis – A range of well-defined, though variable methods: informal interviews, direct observation, participation in the life of the group, collective discussions, analyses of personal documents produced within the group, self-analysis, results from activities undertaken offline or online, and life histories
Performance Benchmarks – Benchmarking is the process of comparing processes and performance metrics to industry bests or best practices from other industries. Dimensions typically measured are quality, time and cost
Portfolio Performance – Metrics which enable organizations to measure the performance of different elements of their portfolio and the portfolio overall
Primary Beneficiary – The individual people that the project is intended to assist
Program Team – The department or team which is responsible and accountable for a particular project or intervention
Project – A discrete set of activities which are generally approved as a package or series of packages, with defined objectives
Project Evaluation – An evaluation which examines the performance and impact of a single intervention or project
Proxy Indicators – An appropriate indicator that is used to represent a less easily measurable one
Randomized Control Trials – An evaluation which assigns at random a control group and a treatment group. Comparison of the performance of the two groups provides a measure of true impact
Results Chain – A simplified picture of a program, initiative, or intervention. It depicts the logical relationships between the resources invested, activities that take place, and the sequence of outputs, outcomes and impacts that result
Results Attribution – Evaluation techniques which attribute the specific outcomes and impacts of an intervention
Return on Investment – A measure of the financial or economic rate of return, typically calculated through discounted cash flow analysis as an internal rate of return
Risk Analysis – Assessment of the probability and impact of the risks affecting an intervention or project
Risk Evaluation – A component of risk assessment in which judgments are made about the significance and acceptability of risk
Sector Evaluation – Evaluation of a set of interventions within a particular sector such as education, health, etc.
Self Evaluation – An evaluation which is undertaken by the team which is responsible for the implementation of that intervention
Social Return on Investment (SROI) – SROI is an approach to understanding and managing the value of the social, economic and environmental outcomes created by an activity or an organization. It is based on a set of principles that are applied within a framework
Thematic Evaluation – Evaluation of a set of interventions within a particular thematic approach such as governance or gender
Theory of Change – An explicit presentation of the assumptions about how changes are expected to happen within any particular context and in relation to a particular intervention
Value for Money – "The optimal use of resources to achieve intended outcomes"3

3 http://www.nao.org.uk/successful-commissioning/general-principles/value-for-money/assessing-value-for-money/, 23 August 2014.
Bookshelf

Thought Leadership

2013 Change Readiness Index
The Change Readiness Index assesses the ability of 90 countries to manage change
and cultivate the resulting opportunity.

Future State 2030: The global megatrends shaping governments
This report identifies nine global megatrends that are most salient to the future of
governments. While their individual impacts will likely be far-reaching, the trends are
highly interrelated and thus demand a combined and coordinated set of responses.

Issues Monitor

A greener agenda for international development
This publication explores the nexus between climate change and development – an
undeniable link that demands greater alignment and integration of both development
and climate change agendas in order to help catalyze real and sustainable change
around the world.

Bridging the gender gap (October 2012, Volume Six)
This publication explores the issue of gender equality – something that remains
elusive in many parts of the world, but is vital for economic growth and development
of society.

Aid effectiveness – Improving accountability and introducing new initiatives
(November 2011, Volume Four)
Developing countries have long depended on humanitarian and development aid
provided by donor countries and organizations. The economic downturn and the
resulting strain on budgets have put donors under extra pressure to demonstrate
results.

Ensuring food security (September 2011, Volume Three)
As people in developing countries struggle to purchase enough food to fulfill their
daily nutrition requirements, the number who continue to go hungry remains high.
Climate change and crop diversion to biofuels have increased pressure on food
production, contributing to higher worldwide food prices. More global financial
support to strengthen supply systems is required to help ensure that every person
has sufficient access to food.

Other publications

Sustainable Insight: Unlocking the value of social investment
This report is intended to help corporate responsibility managers and others involved
in designing and delivering social investments to overcome some of the challenges to
measuring and reporting on social programs.

You can't do it alone: Partnerships the only way to help the world's young job seekers
This article explores the various demand side and supply side measures to tackle the
youth unemployment crisis in both developed and developing markets.

International Development Assistance Services (IDAS)

Global Chair, Government & Infrastructure
Nick Chism
T: +44 20 7311 1000
E: [email protected]

Contact IDAS

Global Chair
Timothy A.A. Stiles
T: +1 212 872 5955
E: [email protected]

IDAS Center of Excellence
Trevor Davies
T: +1 202 533 3109
E: [email protected]

Central America
Alfredo Artiles
T: +505 2274 4265
E: [email protected]

CIS
Andrew Coxshall
T: +995322950716
E: [email protected]

East Asia and Pacific Islands
Mark Jerome
T: +856 219 00344
E: [email protected]

Eastern Europe
Aleksandar Bucic
T: +381112050652
E: [email protected]

European Union Desk
Mercedes Sanchez-Varela
T: +32 270 84349
E: [email protected]

Francophone Africa
Thierry Colatrella
T: +33 1 55686099
E: [email protected]

Middle East
Suhael Ahmed
T: +97165742214
E: [email protected]

North America
Mark Fitzgerald
T: +1 703 286 6577
E: [email protected]

South America
Ieda Novais
T: +551121833185
E: [email protected]

South Asia
Narayanan Ramaswamy
T: +91 443 914 5200
E: [email protected]

Sub-Saharan Africa
Charles Appleton
T: +254 20 2806000
E: [email protected]

United Nations Desk
Emad Bibawi
T: +1 212 954 2033
E: [email protected]

Western Europe
Marianne Fallon
T: +44 20 73114989
E: [email protected]

kpmg.com/socialmedia kpmg.com/app

The information contained herein is of a general nature and is not intended to address the circumstances of any particular individual
or entity. Although we endeavor to provide accurate and timely information, there can be no guarantee that such information is
accurate as of the date it is received or that it will continue to be accurate in the future. No one should act on such information
without appropriate professional advice after a thorough examination of the particular situation.
© 2014 KPMG International Cooperative (“KPMG International”), a Swiss entity. Member firms of the KPMG network of
independent firms are affiliated with KPMG International. KPMG International provides no client services. No member firm has any
authority to obligate or bind KPMG International or any other member firm vis-à-vis third parties, nor does KPMG International have
any such authority to obligate or bind any member firm. All rights reserved.
The KPMG name, logo and “cutting through complexity” are registered trademarks or trademarks of KPMG International.
Designed by Evalueserve.
Publication name: Monitoring and Evaluation in the Development Sector
Publication number: 131584
Publication date: August 2014
