Guidelines for implementing the Standard

The DCED Standard is a practical seven-step framework for results measurement. It enables projects to monitor progress towards their objectives, better measure changes, manage implementation, and demonstrate results.

This page provides resources to support practitioners in implementing the Standard. Advice for donors is available in this guide for SDC staff. For information about training courses and workshops on the Standard, please refer to our events page.

The DCED Standard

Step-by-step implementation guidelines


1. Articulating the results chain

The DCED Standard requires that programmes first articulate their theory of change in the format of a results chain, showing how the activities of the programme are expected to lead to outputs, outcomes, and eventually to development impact. Results chains can:

  • Illustrate what you are doing, and why you are doing it.
  • Clarify the expected outcomes and impact of your programme.
  • Identify key assumptions underlying the programme.
  • Communicate the programme to partners and external stakeholders.

Making the logic of your programme clear, in the results chain format, provides the framework for your results measurement system. Regularly updating results chains allows the monitoring framework to evolve as staff learn more about the programme and the context in which they are working.
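As an illustration only (the Standard does not prescribe any particular tooling, and the intervention and step names below are hypothetical), a results chain can be thought of as an ordered set of steps, each linked to the changes it is expected to lead to. A minimal sketch in Python:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Step:
    """One box in a results chain, linked to the changes it is expected to lead to."""
    description: str
    level: str                                   # "activity", "output", "outcome" or "impact"
    assumptions: List[str] = field(default_factory=list)
    leads_to: List["Step"] = field(default_factory=list)

# Hypothetical example: a seed-distribution intervention
impact = Step("Farming households earn higher incomes", "impact")
outcome = Step("Farmers adopt improved seed varieties", "outcome",
               assumptions=["Improved seed is affordable and available locally"],
               leads_to=[impact])
output = Step("Local retailers stock improved seed", "output", leads_to=[outcome])
activity = Step("Programme trains retailers on sourcing improved seed", "activity",
                leads_to=[output])

def print_chain(step: Step, indent: int = 0) -> None:
    """Walk the chain from activities towards impact and print each step."""
    print("  " * indent + f"[{step.level}] {step.description}")
    for nxt in step.leads_to:
        print_chain(nxt, indent + 1)

print_chain(activity)
```

Keeping the chain in a structured, editable form like this makes it easier to update as staff learn more about the programme and its context.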


2. Defining indicators of change and other information needs

An indicator shows what you think success will look like if your outputs are produced and outcomes achieved. Indicators specify how you will measure whether the changes anticipated in the results chains are really occurring.

The DCED Standard requires indicators to be derived from the logic of the results chain. Once you have clarified what you expect to happen, you can then be clear about what you expect to change – and what you would measure, at each step, to see whether this change occurred. The Standard also requires “a small number of indicators at the impact level [that] can be aggregated” (2.3); DCED’s work on indicator harmonisation for this purpose is also linked below.
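To make this concrete, one hedged way to organise this is to attach at least one indicator to each step of the results chain, together with a note on how and when it will be measured. A minimal sketch, continuing the hypothetical seed-distribution example (all indicators and methods below are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    results_chain_step: str   # which box in the results chain this measures
    indicator: str            # what success would look like at that step
    method: str               # how it will be measured
    frequency: str            # when / how often

indicators = [
    Indicator("Retailers stock improved seed",
              "Number of retailers stocking improved varieties",
              "Retailer records and spot checks", "Quarterly"),
    Indicator("Farmers adopt improved seed varieties",
              "Number of farmers purchasing improved seed",
              "Sales data and farmer survey", "Annually"),
    Indicator("Farming households earn higher incomes",
              "Additional net income per farming household",
              "Panel survey with comparison group", "Baseline and endline"),
]

for ind in indicators:
    print(f"{ind.results_chain_step}: {ind.indicator} ({ind.method}, {ind.frequency})")
```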


3. Measuring attributable change

Once the indicators are identified, programmes must develop a system for measuring how they change. A ‘results measurement plan’ is required to summarise what indicators will be measured, when, how, and by whom. The Standard assumes that programmes will monitor progress at output and early outcome level, on a frequent basis; this monitoring information is used by programme staff to adjust the approach – in other words, for adaptive management. The Standard also assumes that more intensive research is carried out from time to time, to assess outcomes and impact; these impact assessments are used for reporting and communications.

Measurement generates information about what is changing during the life of the programme. It does not necessarily say much about the extent to which those changes were caused by the programme; perhaps they would have happened anyway. This is the question of attribution: to what extent were the observed improvements actually caused by the programme? The Standard requires programmes to address this issue of attribution, to a level that would convince a reasonable but sceptical observer.
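The Standard does not prescribe a single attribution method. As a purely illustrative sketch, one common approach is to compare the change among programme participants with the change in a comparable group the programme did not reach (a simple difference-in-differences); all figures below are made up:

```python
# Hypothetical survey results: average household income (local currency)
participants_baseline, participants_endline = 1000, 1500
comparison_baseline, comparison_endline = 1000, 1200

# Change observed among participants
observed_change = participants_endline - participants_baseline      # 500

# Change that would likely have happened anyway (from the comparison group)
counterfactual_change = comparison_endline - comparison_baseline    # 200

# Change plausibly attributable to the programme
attributable_change = observed_change - counterfactual_change       # 300

print(f"Observed change:       {observed_change}")
print(f"Counterfactual change: {counterfactual_change}")
print(f"Attributable change:   {attributable_change}")
```

Whether such a comparison is convincing depends on how well the comparison group matches the participants; the point of the sketch is simply that observed change and attributable change are not the same thing.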


4. Capturing wider change in the system or market

Traditionally, programmes aim to directly improve the lives of aid recipients. For example, they may distribute seeds, provide healthcare, or sponsor education. However, this type of assistance is limited; it will only benefit the direct recipient of the aid. Moreover, it is frequently unsustainable, as it ceases when the project ends.

In response to this challenge, private sector development (PSD) programmes often seek to create ‘systemic change’: change in systems such as markets, government, or civil society. This can have a greater impact than direct assistance, as it affects everyone in that system. People benefit indirectly from systemic change even if they had no contact with the programme. It is also more likely to be sustainable, as the change may continue even once the programme is over.

The Standard calls on programmes to make efforts to capture these wider changes, so that they do not under-report their achievements.


5. Tracking costs and impact

The Standard requires programmes to state their annual and cumulative costs, so that the achievements of the programme can be put into perspective. Clearly, a larger and more costly programme can be expected to achieve greater results and scale.
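As a simple illustration (the figures and field names are hypothetical), annual and cumulative costs can be tracked alongside headline results so that the scale of achievements can be judged against spend:

```python
# Hypothetical annual programme costs and headline results
annual_costs = {2021: 800_000, 2022: 1_100_000, 2023: 1_200_000}
annual_beneficiaries = {2021: 2_000, 2022: 6_000, 2023: 9_000}

cumulative_cost = 0
cumulative_beneficiaries = 0
for year in sorted(annual_costs):
    cumulative_cost += annual_costs[year]
    cumulative_beneficiaries += annual_beneficiaries[year]
    print(f"{year}: annual cost {annual_costs[year]:,}, "
          f"cumulative cost {cumulative_cost:,}, "
          f"cumulative beneficiaries {cumulative_beneficiaries:,}")
```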


6. Reporting costs and results

The Standard requires programmes to document the key changes in the indicators at least annually, so that they can be communicated both internally (to donors, management and programme staff) and externally, if deemed appropriate. When doing so, it is necessary to acknowledge the contributions made by other projects or the private sector. Key indicators should be disaggregated by gender, to the extent possible.
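As a minimal sketch (the interventions and figures are invented), annual reporting might aggregate an indicator across interventions while keeping it disaggregated by gender:

```python
# Hypothetical intervention-level results for one reporting year
results = [
    {"intervention": "Seed retail", "women_reached": 1200, "men_reached": 1800},
    {"intervention": "Extension services", "women_reached": 900, "men_reached": 1100},
]

total_women = sum(r["women_reached"] for r in results)
total_men = sum(r["men_reached"] for r in results)

print(f"Women reached: {total_women:,}")
print(f"Men reached:   {total_men:,}")
print(f"Total reached: {total_women + total_men:,}")
```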


7. Managing the system for results measurement

An effective programme will use real-time monitoring data to adjust its approach as it implements. This allows information on results to guide decision-making at all levels, from strategic choices to implementation methods.

Results measurement should consequently be integrated into all aspects of programme management, from design through implementation. Indeed, the achievement of results should drive the programme, orienting staff efforts and guiding key decisions. This requires clear responsibilities, adequate planning, appropriate skills and sufficient human and financial resources.

An article in the journal Development in Practice (Maclay, 2015) points out that “through its Standard for Results Measurement, the DCED has provided a unique example of donors building consensus around adaptive management concepts in the field of market sector development” and that replication of such processes and principles should be encouraged in other sectors.

Thematic and context-specific guidelines

Measuring Women’s Economic Empowerment

Implementing the DCED Standard in Challenge Funds

Implementing the DCED Standard in Conflict-Affected Environments

The DCED Standard and evaluation

Many programmes commission external evaluations. Such evaluations can benefit from the improved clarity and measurement that the DCED Standard requires, and in turn strengthen accountability and learning from the programme.

  • The ILO’s Evaluation Office Guidance Notes 1.2 and 1.3 provide guidance on monitoring, reporting and evaluability of ILO programmes and projects, giving the DCED Standard as an example of good practice.
  • The DCED’s Paper on Evaluation and the DCED Standard (2014) addresses questions such as: Why should evaluators be interested in monitoring systems? How can the DCED Standard support evaluations, and vice versa?
  • In 2015, DFID and DCED held a workshop in London on Evaluation of PSD Programmes, with a number of consulting firms providing evaluation services. The Minutes document the issues arising, and include a sample agreement between DFID, Implementers and Evaluators.
  • For a deeper review of evaluation methods in M4P programmes, which provides recommendations in line with the approach taken by the DCED Standard, read Itad’s Review of M4P Evaluation Methods and Approaches, 2013.
