Little information is available on
results achieved. Even self-reported results are rare, and may not be credible.
There are many reasons for this, which are explored in a
Reader on the subject. Briefly, however, recent trends in PSD have made
results measurement trickier. Market-based approaches tend to benefit a self-selecting
sample; isolation and randomisation are major challenges, which few have either
the money or the expertise to address reliably. Every situation is different, so
there is little hope of 'proving' that an approach always (or never) works.
As a result, the theme of results measurement seems to have been, in practice, rather
'stuck'. On the one hand are those who focus on achieving results in very complex
environments, and who see problems more than solutions in trying to measure those
results. On the other hand are those who say 'if you cannot measure the results
of your work, maybe they just aren't there'. Existing methodologies have not (yet)
delivered solutions to this conundrum, despite an emphasis on traditional log-frames,
baselines and some statistical studies.
Meanwhile, the evaluation community is moving away from a reliance on purely statistical techniques
and macro-level correlations, and towards mixed methods, based on the results chain
or logic of the programme. Indeed, articulating and validating the logic of the
programme is a very powerful (if under-used) tool. The log-frame format was designed
in a simpler age to encourage exactly this clarity; expanding this format to capture
the sequencing and parallel activities now being implemented in many programmes
can yield great benefits, in terms both of effectiveness and of measurement.
Note that there are many excellent websites on measuring results across all development
fields - see for example Monitoring and
Evaluation News and 3IE.
This page contains useful information and resources on measuring results in private
sector development. Suggestions for additions to this page are welcome at any time;
please contact the DCED Secretariat.
General Agency Policies and Methodologies for Results Measurement
An overview of official member agency methodologies in results measurement, monitoring
and evaluation can be found below.
The Foundation Center has compiled a list of tools and resources used by foundations
and philanthropic organisations for assessing social impact. The list includes
brief summaries of the tools and resources.
Results Measurement Methodologies for Private Sector Development
These methodologies are grouped into cross-cutting approaches and those relating to
value chain development, technical and vocational training, and business environment
reform. Some of them are very broad, implicitly leaving interpretation
to the individual project; others are more specific. In particular, some (e.g. GTZ,
IFAD, USAID) are based on the articulation of results chains, and are therefore
similar in approach to the DCED Standard.
Methodologies for Measuring Results in PSD (Cross-Cutting)
- Itad (2013) Review of M4P Evaluation Methods and Approaches
- Liz Turner and Laura Rovamaa (2012) Aid for Trade: Reviewing EC and DFID Monitoring and Evaluation
- Katalyst (2012):
Poverty profiling using the Progress out of Poverty Index (PPI), Katalyst Working
Paper Series, March 2012.
- Hempel, Kevin and Nathan Fiala. (2012) Measuring Success of Youth
Livelihood Interventions. Washington, DC: Global Partnership for Youth
Employment. (download by chapter)
- Chrisney, Martin and Marco Kamiya. (2011) Institutions and Productive Development Programs
in Latin America and the Caribbean: Methodological Approach and Preliminary Results.
- GTZ. (2009) Measuring Employment Effects of Technical Cooperation Interventions:
Some Methodological Guidelines.
- Creevey, Lucy.
(2008) Common Problems in Impact Assessment Research, USAID Impact Assessment
Primer Series, Washington, DC: USAID
- Woller, Gary.
(2007) Developing a Causal Model for Private Sector Development Programmes.
USAID Impact Assessment Primer Series, Washington, DC: USAID.
- Woller, Gary
and Jeanne Downing. (2007) Causal Models as a Useful Programme Management Tool:
Case Study of PROFIT Zambia, USAID Impact Assessment Primer Series,
Washington, DC: USAID
- Creevey, Lucy
and Don Snodgrass. (2006) Collecting and Using Data for Impact Assessment,
USAID Impact Assessment Primer Series, Washington, DC: USAID
Agency Methodologies for Measuring Results in Value Chain Development
Selected impact measurement methodologies specific to value chain development are listed below.
For more agency methodologies in this field, please refer to the
DCED Value Chains Database.
Measuring Results in Technical and Vocational Training
Measuring Results in Business Environment Reform
Methodological Papers on Measuring Results
- Lant Pritchett, Salimah Samji and Jeffrey Hammer (2012) It’s All About MeE: Using Structured Experiential Learning (‘e’) to Crawl the Design Space. CID Working Paper No. 249.
- The World Bank Group (2012): Agenda and Materials, Impact Evaluation
Workshop, Paris, 12-16 November 2012. Several presentations downloadable from
this webpage explain different methods that can be used to evaluate the impact of development interventions.
- DFID (2012) Broadening the Range of Designs and Methods for Impact
Evaluations, DFID Working Paper 38, by Elliott Stern et al.
- Gertler, Paul et al. (2011) Impact Evaluation in Practice. Washington, DC: The World Bank.
- de Janvry, Alain, Andrew Dustan and Elisabeth Sadoulet. (2010) Recent
Advances in Impact Analysis Methods for Ex-Post Impact Assessments of Agricultural
Technology: Options for the CGIAR.
- White, Howard. (2011) An Introduction to the Use of Randomized Control
Trials to Evaluate Development Interventions. 3ie Working Paper. New Delhi, India:
International Initiative for Impact Evaluation.
- Bamberger, Michael, Vijayendra Rao and Michael Woolcock. (2010) Using Mixed Methods
in Monitoring and Evaluation: Experiences from International Development, World Bank
Policy Research Working Paper.
- Vaessen, Jos. (2010) Challenges in impact evaluation of development interventions:
opportunities and limitations for randomized experiments. Discussion Paper.
Antwerp, Belgium: University of Antwerp.
- Leeuw, Frans and Jos Vaessen on behalf of the Network of Networks on Impact Evaluation
(NONIE). (2009) Impact Evaluations and Development: NONIE Guidance on Impact Evaluation.
Washington, DC: World Bank Group.
- OECD DAC. (2002) Glossary of Key Terms in Evaluation and Results Based Management. Paris: OECD.
Evaluations of Agencies' Work in Private Sector Development
For some published results, visit our Evidences of Impact
page. Additional input is always welcome - particularly given the lack of results
data mentioned above.
Videos on Results Measurement
This video, from the seminar 'Current Trends and Results in Private Sector Development',
17-20 January 2012, introduces the DCED Standard.
Further videos are also available.
Photographs courtesy of Sudipto Das, Melina Heinrich