by Scott Bayley

Evaluation for Accountability or Learning?

A comparison of two primary evaluation purposes

Published on LinkedIn, 16 July 2017

http://www.ecdg.net/evaluation-for-accountability-or-learning/
Evaluation can be defined as the systematic empirical examination of a program’s design, implementation and impacts with a view to judging a program’s merit, worth and significance. Evaluation reduces uncertainty for stakeholders, aids decision making, and serves a political function. Evaluations are commonly undertaken for the following reasons: policy development; improving the administration/management of a program; demonstrating accountability; and facilitating stakeholder participation in decision making.
Evaluation studies are intended to add value for program stakeholders. If the role of the private sector is to generate profits in the context of changing market forces, the role of the public sector is to create value in the context of changing political forces. Guiding questions for an evaluation department within an aid agency include:
  1. Who are our clients?
  2. What do they want/need?
  3. How can we create value for them?
  4. How will we monitor our results?
  5. How has our program responded to the lessons being learned?
Ever since the evaluation of aid programs first began in the 1960s, there has been tension and controversy over using evaluation for accountability purposes versus emphasising evaluation to support organisational learning. Donors tend to value the accountability function, while program staff are more interested in learning how to improve operational practices. The way in which these two goals are approached has a major influence on how evaluations are organised and conducted. The underlying problem is that it is not really possible to fulfil both purposes in a single evaluation study. The two approaches are often incompatible, as shown in the following table:

Selection of individual evaluation topics:
  If accountability is the goal: based on the size of the program budget and perceptions of performance problems.
  If learning is the goal: based on the potential for learning helpful lessons.

Primary audience for the evaluation:
  If accountability is the goal: donors.
  If learning is the goal: operational staff.

Objectives of the evaluation:
  If accountability is the goal: accountability, enhanced control, informing resource allocation decisions; a summative focus.
  If learning is the goal: program improvement, enhanced understanding; a formative focus.

Focus of the evaluation:
  If accountability is the goal: issues relating to outcomes and operational compliance.
  If learning is the goal: varies according to the program's stage of development and the information needs of identified stakeholders.

Basic orientation:
  If accountability is the goal: retrospective.
  If learning is the goal: prospective.

Timing of the evaluation:
  If accountability is the goal: conducted late in the program's life cycle.
  If learning is the goal: conducted at the early or mid point of the program's life cycle.

Relevant evaluation models:
  If accountability is the goal: objectives based, decision based, needs based, performance auditing, cost benefit studies, autocratic models.
  If learning is the goal: responsive, utilisation focused, case studies, constructivist, democratic models.

Stakeholder involvement:
  If accountability is the goal: generally limited to commenting on the terms of reference, serving as a source of data, and responding to recommendations.
  If learning is the goal: active involvement in all stages of the evaluation; stakeholders are often organised into evaluation committees (steering, advisory, consultative, data interpretation).

Relationship between the evaluation team and program staff:
  If accountability is the goal: more adversarial.
  If learning is the goal: more collaborative.

Evaluation methodology:
  If accountability is the goal: preference for quantitative methods.
  If learning is the goal: multiple methods.

Approach to sampling:
  If accountability is the goal: representative (probability based sampling).
  If learning is the goal: purposeful (sampling information rich cases).

Report content:
  If accountability is the goal: a focus on identifying problems and shortfalls (the gap between expectations and actual achievements).
  If learning is the goal: a focus on what is currently working well, what can be learned from current practices, and how further improvements can be made.

Reporting style:
  If accountability is the goal: a technical style like a journal article, with a heavy emphasis on text.
  If learning is the goal: audience-friendly reports in plain English with lots of diagrams, multiple reporting formats, and an emphasis on face-to-face dialogue with intended users.

Recommendations:
  If accountability is the goal: typically based on reverse problem logic (if 'X' is found to be broken, the recommendation is to 'fix X'), with limited consultation with program staff.
  If learning is the goal: based on the logic of program planning; additional data are gathered concerning the technical, financial, legal, political and administrative viability of potential program improvements, with the active collaboration of intended users.

Promoting utilisation of evaluation findings:
  If accountability is the goal: limited engagement and ongoing support provided to potential users.
  If learning is the goal: active engagement and ongoing support provided to intended users, with a focus on supporting learning processes.

Resources devoted to disseminating evaluation results:
  If accountability is the goal: typically less than 5% of the evaluation's budget.
  If learning is the goal: 25% of the evaluation's budget.

Values of the evaluation unit:
  If accountability is the goal: emphasis on being objective, independent and impartial.
  If learning is the goal: emphasis on generating relevant, contextually based knowledge and adding value for identified stakeholders.

Evaluator is perceived as:
  If accountability is the goal: a policeman or auditor.
  If learning is the goal: a consultant or teacher.

Scott Bayley is Managing Director of Scott Bayley Evaluation Services and former Principal Consultant for Monitoring, Evaluation and Learning at Oxford Policy Management (OPM) for the Asia Pacific region. At OPM he served as Senior Principal Specialist, MEL, leading OPM Australia's monitoring, evaluation and learning (MEL) work for the Australian Department of Foreign Affairs and Trade and the New Zealand Ministry of Foreign Affairs and Trade.

Affiliations: Fellow of the Australian Evaluation Society.
