Implementing the DCED Standard

The DCED Standard is a practical eight-part framework for results measurement. It enables projects to monitor their progress towards their objectives, better measure changes, manage implementation, and demonstrate results. This page provides resources to support practitioners in implementing the Standard; advice for donors is available in this guide for SDC staff. For information about training courses and workshops on the Standard, please refer to our events page.

Key Documents

  • Implementing the DCED Standard

1) Articulating the Results Chain

The DCED Standard requires that programmes first articulate a results chain, a hypothesis about how the activities of the programme are expected to lead to outputs, outcomes, and eventually development impact. Results chains can:

  • Illustrate what you are doing, and why you are doing it.
  • Clarify the expected outcomes and impact of your programme.
  • Identify key assumptions underlying the programme.
  • Communicate the programme to partners and external stakeholders.

Ultimately, you cannot monitor progress if you do not understand what you are doing. Making the logic of your programme clear, in the results chain format, provides the framework for your results measurement system. Regularly updating results chains allows the monitoring system to evolve as staff learn more about the programme and the context in which they work.
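
As an illustration, a results chain can also be captured as structured data, so that it can be stored, versioned and updated alongside the monitoring system. The sketch below (in Python) shows one minimal way of doing this for a hypothetical agricultural intervention; the steps and assumptions are invented examples, not part of the Standard.

# A minimal sketch of a results chain as structured data. The chain,
# step names, and assumptions are hypothetical examples.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Step:
    level: str                        # "activity", "output", "outcome" or "impact"
    description: str
    assumptions: List[str] = field(default_factory=list)

# One illustrative chain for an imagined agricultural intervention.
results_chain = [
    Step("activity", "Train input retailers on improved seed varieties"),
    Step("output", "Retailers stock and promote improved seed",
         assumptions=["Retailers see a commercial case for stocking the seed"]),
    Step("outcome", "Farmers buy and plant improved seed",
         assumptions=["The seed is affordable for target farmers"]),
    Step("impact", "Farm yields and household incomes increase"),
]

for step in results_chain:
    print(f"{step.level:>8}: {step.description}")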

Key Resources

  • DCED Guide to Results Chains: Explains what results chains are, how to design them, and how to use them. It also explains the requirements of the DCED Standard.
  • Case studies of the DCED Standard: Each case study includes example results chains. Look for the sector or country most relevant to your project.

2) Defining Indicators of Change

An indicator shows what you think success will look like if your outputs are produced and outcomes achieved. Indicators specify how you will measure whether the changes anticipated in the results chains are really occurring.

The DCED Standard requires indicators to be derived from the logic of the results chain. Once you have clarified what you expect to happen, you can then be clear about what you expect to change – and what you would measure, at each step, to see whether this change occurred.
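
Continuing the hypothetical example from step 1, the sketch below illustrates the principle of attaching indicators to individual steps of the results chain, so that every measurement traces back to the programme logic. The indicator wordings are invented examples, not prescribed by the Standard.

# A minimal sketch of deriving indicators from results chain steps.
# All indicator wordings are hypothetical examples.
indicators = {
    "Retailers stock and promote improved seed": [
        "Number of retailers stocking improved seed",
    ],
    "Farmers buy and plant improved seed": [
        "Number of farmers purchasing improved seed",
        "Share of planted area under improved seed (%)",
    ],
    "Farm yields and household incomes increase": [
        "Average yield per hectare (kg)",
        "Net additional income per farming household (USD/year)",
    ],
}

for step, step_indicators in indicators.items():
    print(step)
    for indicator in step_indicators:
        print(f"  - {indicator}")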

Two tools that programmes have found useful are the IRIS metrics, which can be used to measure and describe an organisation’s social, environmental and financial performance, and the Progress out of Poverty Index, which provides a simple indicator of poverty. GIZ’s example results chains and indicators are given at the bottom of this page.

Key Resources

3) Measuring Changes in Indicators

Once the indicators are identified, programmes must develop a system for measuring how they change. A ‘results measurement plan’ is required, summarising which indicators will be measured, when, how, and by whom. Gathering information on these indicators will frequently require research, which should conform to established good practice.

Monitoring key indicators at appropriate intervals provides valuable feedback for programme staff, assisting them to manage and adjust the intervention.
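
A results measurement plan is essentially tabular: one row per indicator, recording when it is measured, how, and by whom. The sketch below shows what such a plan might look like as data, continuing the hypothetical example above; the frequencies, methods and responsibilities are invented.

# A minimal sketch of a results measurement plan: for each indicator,
# when it is measured, how, and by whom. All entries are hypothetical.
measurement_plan = [
    {"indicator": "Number of farmers purchasing improved seed",
     "when": "Every six months",
     "how": "Retailer sales records",
     "who": "Field monitoring officer"},
    {"indicator": "Net additional income per farming household (USD/year)",
     "when": "Baseline and endline",
     "how": "Household survey (sample of 300)",
     "who": "External survey firm"},
]

for row in measurement_plan:
    print(row["indicator"])
    print(f"  when: {row['when']} | how: {row['how']} | who: {row['who']}")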

Key Resources

4) Estimating Attributable Changes

The previous steps have generated information about what is changing during the life of the programme. They do not necessarily say much, however, about the extent to which those changes were caused by the programme; perhaps they would have happened anyway. This is the question of attribution: to what extent were the observed improvements actually caused by your project?

The Standard requires programmes to address this issue of attribution, to a level that would convince a reasonable but sceptical observer.
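
The Standard does not prescribe a single method, but one widely used option, where a credible comparison group exists, is a difference-in-differences estimate: compare the change among participants with the change among similar non-participants over the same period. The sketch below illustrates the arithmetic with made-up numbers.

# A difference-in-differences sketch with made-up numbers. The change
# observed in a comparison group stands in for what would have happened
# anyway; the remainder is the estimate of attributable change.
participant_before, participant_after = 1000.0, 1400.0  # e.g. income in USD
comparison_before, comparison_after = 1000.0, 1150.0

participant_change = participant_after - participant_before   # 400
comparison_change = comparison_after - comparison_before      # 150
attributable_change = participant_change - comparison_change  # 250

print(f"Estimated attributable change: {attributable_change:.0f} USD")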

Key Resources

5) Capturing Wider Changes in the System or Market

Traditionally, programmes aim to directly improve the lives of aid recipients. For example, they may distribute seeds, provide healthcare, or sponsor education. However, this type of assistance is limited; it will only benefit the direct recipient of the aid. Moreover, it is frequently unsustainable, as it ceases when the project ends.

In response to this challenge, private sector development (PSD) programmes often seek to create ‘systemic change’: change in systems such as markets, government, or civil society. Systemic change can have a greater impact than direct assistance, as it affects everyone in the system; people benefit indirectly even if they had no contact with the programme. It is also more likely to be sustainable, as the change may continue after the programme is over.

The Standard calls on programmes to make efforts to capture these wider changes, so that they do not under-report their achievements.

Key Resources

6) Tracking Programme Costs

The Standard requires programmes to state their annual and cumulative costs, so that the achievements of the programme can be put into perspective. Clearly, a larger and more costly programme can be expected to achieve greater results and scale.
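
The bookkeeping itself is straightforward, as the sketch below illustrates with invented figures: cumulative costs are simply the running total of annual costs.

# A minimal sketch of annual and cumulative programme costs,
# using invented figures (USD).
from itertools import accumulate

annual_costs = {2021: 1_200_000, 2022: 1_500_000, 2023: 1_350_000}
cumulative_costs = dict(zip(annual_costs, accumulate(annual_costs.values())))

for year in annual_costs:
    print(f"{year}: annual {annual_costs[year]:>9,} | "
          f"cumulative {cumulative_costs[year]:>9,}")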

Key Resources

7) Reporting Results

The Standard requires programmes to document the key changes in the indicators at least annually, so that they can be communicated both internally (to donors, management staff, programme staff) and externally if deemed appropriate. When doing so, it is necessary to acknowledge the contributions made by other projects or the private sector. Key indicators should be disaggregated by gender, to the extent possible.
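
As a small illustration of disaggregation, the sketch below counts the beneficiaries of a hypothetical intervention by gender before reporting; the records are invented.

# A minimal sketch of disaggregating a headline indicator by gender.
# The records are invented: (gender, benefited from the intervention?).
from collections import Counter

records = [("female", True), ("male", True), ("female", True),
           ("male", False), ("female", False), ("male", True)]

reached = Counter(gender for gender, benefited in records if benefited)
print(f"Beneficiaries reached: {reached['female']} female, {reached['male']} male")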

Key Resources

8) Managing the System for Results Measurement

An effective programme will use real-time monitoring data to adjust its approach as it implements. This allows information on results to guide decision-making at all levels, from strategic choices to implementation methods.

Results measurement should consequently be integrated into all aspects of programme management, from design through implementation. Indeed, the achievement of results should drive the programme, orienting staff efforts and guiding key decisions. This requires clear responsibilities, adequate planning, appropriate skills and sufficient human and financial resources.

An article in the journal Development in Practice (Maclay, 2015) points out that “through its Standard for Results Measurement, the DCED has provided a unique example of donors building consensus around adaptive management concepts in the field of market sector development”, and argues that replication of such processes and principles should be encouraged in other sectors.

Key Resources

 

Sector-Specific Guidelines

Women’s Economic Empowerment

The DCED has published Practical Guidelines for Measuring Results in Women’s Economic Empowerment in Private Sector Development, using the DCED Standard. The guidelines include practical advice on integrating women’s economic empowerment into each element of the DCED Standard. Drawing on good practices and lessons learned from Making Markets Work for the Chars (M4C) and the Alliances Lesser Caucasus Programme (ALCP) in Georgia, the guidelines in particular offer suggestions for measuring women’s economic empowerment at the household level.

On 17 June 2014, the DCED organised a webinar on Women’s Economic Empowerment: Measuring Household Level Results in Practice. Click here to access all webinar resources, including the full recording, PowerPoint presentation and additional materials.

Implementation in Challenge Funds

The DCED has published Practical Guidelines for Measuring Results in Challenge Funds, using the DCED Standard. The guidelines contain practical advice on topics including the selection of indicators, how to design a practical results measurement system, and how to share responsibilities between the business and the fund manager. They are also applicable to other forms of partnership with the private sector.

Implementation in Conflict-Affected Environments

The DCED’s Practical Guidelines for Measuring Achievements in PSD in Conflict-Affected Environments (July 2013) provide supplementary examples and advice for programmes implementing the DCED Standard in conflict-affected environments. They outline the challenges that programmes may face when implementing each of the eight elements of the Standard, and give practical guidance for results measurement in these contexts.

The DCED has produced two supplementary case studies showing how elements of the DCED Standard can be applied in a conflict-affected environment: the Sustainable Employment and Economic Development Programme in Somalia (FAO) and the Employment Promotion Programme in Sierra Leone (GIZ).

The DCED Standard and Evaluation

Many programmes commission external evaluations. Such evaluations can benefit from the improved clarity and measurement that the DCED Standard requires, and can strengthen accountability and learning from the programme.

  • The DCED’s Paper on Evaluation and the DCED Standard (2014) addresses questions such as: Why should evaluators be interested in monitoring systems? How can the DCED Standard support evaluations, and vice versa?
  • In January 2015, DFID and DCED held a workshop in London on Evaluation of PSD Programmes, with a number of consulting firms providing evaluation services. The Minutes document the issues arising, and include a sample agreement between DFID, Implementers and Evaluators.
  • For a deeper review of evaluation methods in M4P programmes, which provides recommendations in line with the approach taken by the DCED Standard, read Itad’s Review of M4P Evaluation Methods and Approaches, 2013.

 

Other Programme Manuals

For a review of several results measurement (RM) manuals aligned with the Standard, click here; this section lists some others.

The DCED web-page on results measurement methodologies contains policies and methodologies on results measurement from different agencies, and general papers on measuring results.