- Who are our clients?
- What do they want/need?
- How can we create value for them?
- How will we monitor our results?
- How has our program responded to the lessons being learned?
|  | If accountability is the goal | If learning is the goal |
| --- | --- | --- |
| Selection of individual evaluation topics | Based on the size of the program budget and perceptions of performance problems | Based on the potential for learning helpful lessons |
| Primary audience for the evaluation | Donors | Operational staff |
| Objectives of the evaluation | Accountability, enhanced control, informing resource allocation decisions, a summative focus | Program improvement, enhanced understanding, a formative focus |
| Focus of the evaluation | Issues relating to outcomes and operational compliance | Varies according to the program’s stage of development and the information needs of identified stakeholders |
| Basic orientation | Retrospective | Prospective |
| Timing of the evaluation | Conducted late in the program’s life cycle | Conducted at an early or mid point in the program’s life cycle |
| Relevant evaluation models | Objectives-based, decision-based, needs-based, performance auditing, cost-benefit studies, autocratic models | Responsive, utilisation-focused, case studies, constructivist, democratic models |
| Stakeholder involvement | Generally limited to commenting on the terms of reference, serving as a source of data, and responding to recommendations | Active involvement in all stages of the evaluation; stakeholders are often organised into evaluation committees (steering, advisory, consultative, data interpretation) |
| Relationships between the evaluation team and program staff | More adversarial | More collaborative |
| Evaluation methodology | Preference for quantitative methods | Multiple methods |
| Approach to sampling | Representative: probability-based sampling | Purposeful: sampling information-rich cases |
| Report content | A focus on identifying problems and shortfalls (the gap between expectations and actual achievements) | A focus on what is currently working well, what can be learned from current practices, and how further improvements can be made |
| Reporting style | Technical style like a journal article, with a heavy emphasis on text | Audience-friendly reports in plain English with plenty of diagrams, multiple reporting formats, and an emphasis on face-to-face dialogue with intended users |
| Recommendations | Typically based on reverse problem logic: if ‘X’ is found to be broken, the recommendation is to ‘fix X’; limited consultation with program staff | Based on the logic of program planning; additional data is gathered on the technical, financial, legal, political, and administrative viability of potential program improvements, with the active collaboration of intended users |
| Promoting utilisation of evaluation findings | Limited engagement and ongoing support provided to potential users | Active engagement and ongoing support provided to intended users, with a focus on supporting learning processes |
| Resources devoted to disseminating evaluation results | Typically less than 5% of the evaluation’s budget | 25% of the evaluation’s budget |
| Values of the evaluation unit | Emphasis on being objective, independent, and impartial | Emphasis on generating relevant, contextually based knowledge and adding value for identified stakeholders |
| Evaluator is perceived as | A policeman or auditor | A consultant or teacher |