RESULTS-BASED MANAGEMENT HANDBOOK
- Preliminary Sections
- Part 1: Overview of RBM
- Part 2: Results-Based Planning
- Part 3: Results-Based Monitoring and Reporting
- Part 4: Results-Based Evaluation
- Part 5: Capacity Building, Knowledge Management and Innovations in RBM
Accountability: Responsibility and answerability for the use of resources, decisions and/or the results of the discharge of authority and official duties, including duties delegated to a subordinate unit or individual. For programme managers, it is the responsibility to provide evidence to stakeholders that a programme is effective and in conformity with planned results and with legal and fiscal requirements. In knowledge-based organizations, accountability may also be measured by the extent to which managers use monitoring and evaluation findings.
Achievement: An evidence-based, manifested performance.
Activity: Actions taken or work performed through which inputs such as funds, technical assistance and other types of resources are mobilized to produce specific outputs.
Advocacy: The act of arguing on behalf of a particular issue, idea or person towards specific goals. Advocacy is about strategic, planned, political change.
Analysis: The process of systematically applying statistical techniques and logic to interpret, compare, categorize and summarize data collected in order to draw conclusions.
Analytical method: A means to process, understand and interpret data.
Analytical tool: Method used to process and interpret information.
Applied research: Investigation undertaken in order to acquire new knowledge. Applied research is directed primarily towards a specific practical aim or objective.
Appraisal: An assessment, prior to commitment of support, of the relevance, value, feasibility and potential acceptability of a programme in accordance with established criteria.
Appropriateness: The quality of being especially suitable. It is used as one of the key principles for evaluation criteria.
Assumption: Hypothesis about conditions that are necessary to ensure that: (1) planned activities will produce expected results; and (2) the cause-and-effect relationships between the different levels of programme results will occur as expected.
Attribution: The ascription of a causal link between observed (or expected to be observed) changes and a specific intervention. Attribution refers to that which is to be credited for the observed changes or results achieved. It represents the extent to which observed development effects can be attributed to a specific intervention or to the performance of one or more partners, taking account of other interventions, confounding factors (anticipated or unanticipated) and external shocks.
Auditing: An independent, objective and systematic assessment that verifies compliance with established rules, regulations, policies and procedures and validates the accuracy of financial reports.
Authority: The power to decide, certify or approve.
Baseline survey: An analysis describing the situation prior to a development intervention.
Baseline: Information gathered prior to a development intervention about the condition or performance of subjects against which variations are measured.
Benchmark: Reference point or standard against which progress or achievements can be assessed. A benchmark refers to the performance that has been achieved in the recent past by other comparable organizations, or what can be reasonably inferred to have been achieved in similar circumstances.
Beneficiaries: Individuals, groups or entities whose situation is supposed to improve (target group), and others whose situation may improve, as a result of a development intervention.
Best practice: Planning, organizational, managerial and/or operational practices that have proven successful in particular circumstances and which can be applied to other circumstances.
Bias: Irrational preference or prejudice causing negative inclination or unfavorable tendency. In statistics, bias may result in overestimating or underestimating certain data characteristics. It may result from incomplete information or invalid data collection methods.
Budget fascicle: Document containing proposed programmatic, financial and resource information of a budget section for the forthcoming biennium and submitted for approval.
Capacity: The knowledge, skills, organization and resources needed to perform a function.
Capacity development: A process that encompasses the building of knowledge, skills, organization and resources that enable individuals, groups, organizations and societies to enhance their performance and to achieve their development objectives over time. Also referred to as capacity building or capacity strengthening.
Case study: The examination of the characteristics of a single case, such as an individual, an event or a programme.
Causal relationship: A logical cause-effect relationship between final results and their impact on target beneficiaries.
Causality analysis: A type of analysis used in a development intervention formulation to identify the root causes of development challenges, organizing main data, trends and findings into relationships of cause and effect.
Causality Framework: A tool used to cluster contributing causes and examine linkages among them and their various determinants. Sometimes referred to as a “problem tree”.
Client satisfaction: The satisfaction of organizations or individuals who are affected by a development intervention, often measured in terms of meeting their needs or expectations.
Conclusion: A reasoned judgement based on a synthesis of empirical findings and/or factual statements corresponding to a specific circumstance.
Contribution: The ascription of a causal link between observed (or expected to be observed) changes and a specific intervention by multiple stakeholders.
Control group: A selected subgroup of beneficiaries who are not part of the programme (e.g. who do not receive the same treatment, input or training) but share characteristics similar to the target group.
Cost-benefit analysis: A type of analysis that translates benefits into monetary terms.
Cost-effectiveness analysis: A type of analysis that compares the effectiveness of different interventions by comparing their costs and outcomes measured in physical units (number of children immunized or the number of deaths averted, for example) rather than in monetary units.
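The comparison this definition describes can be sketched numerically. The intervention names and figures below are invented for illustration, not taken from the handbook:

```python
# Cost-effectiveness comparison: cost per physical outcome unit.
# All names and figures are hypothetical, for illustration only.
interventions = {
    "mobile clinics": {"cost": 120_000.0, "children_immunized": 8_000},
    "fixed clinics": {"cost": 150_000.0, "children_immunized": 12_000},
}

# Cost per child immunized, per intervention (lower = more cost-effective).
cost_per_unit = {
    name: d["cost"] / d["children_immunized"]
    for name, d in interventions.items()
}

for name, cpu in sorted(cost_per_unit.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${cpu:.2f} per child immunized")
```

Note that, unlike cost-benefit analysis, the outcome stays in physical units (children immunized) and is never converted into monetary terms.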
Country assistance evaluation: Evaluation of one or more development agencies' portfolio of development interventions, and the assistance strategy behind it, in a specific country.
Coverage: The extent to which a programme reaches its intended target population, institution or geographic area.
Criteria: The standards used to determine whether or not a proposal, programme or project meets expectations.
Data collection method: The mode of collection used when gathering information and data on a given indicator of achievement or evaluation.
Data: Specific quantitative and qualitative information or facts.
Data source: The origin of the data or information collected.
Database: An accumulation of information that has been systematically organized for easy access and analysis.
Development effectiveness: The extent to which an institution or intervention has brought about targeted change in a country or the life of an individual beneficiary.
Development intervention: An instrument for partner support aimed at promoting development.
Development objective: Intended impact of one or more development interventions, contributing to physical, financial, institutional, social, environmental or other benefits to a society, community or group of people.
Effect: Intended or unintended change due directly or indirectly to an intervention.
Effectiveness: A measure of the extent to which a programme achieves its planned results (outputs, outcomes and impact).
Efficiency: A measure of how economically or optimally inputs (financial, human, time, technical and material resources) are converted to results.
Evaluability: The extent to which an activity or a programme can be evaluated in a reliable and credible fashion.
Evaluation: An assessment, as systematic and impartial as possible, of an activity, project, programme, strategy, policy, topic, theme, sector, operational area, and/or institutional performance. The aim is to determine the relevance and fulfilment of objectives, development efficiency, effectiveness, impact and sustainability. An evaluation should provide information that is credible and useful, enabling the incorporation of lessons learned into the decision–making.
Evaluation scope: A framework that establishes the focus of an evaluation in terms of the questions to address and the issues to be covered, and defines what will and will not be analyzed.
Evaluation standards: A set of criteria against which the completeness and quality of evaluation work can be assessed.
Evidence: The information presented to support a finding or conclusion.
Evidence-based: An evidence-based approach integrates all available information from data and research synthesis; using this process leads to informed decisions.
Execution: The management of a specific programme which includes accountability for the effective use of resources.
Feasibility: The coherence and quality of a programme strategy that makes successful implementation likely.
Feedback: The transmission of findings of monitoring and evaluation activities organized and presented in an appropriate form for dissemination to users in order to improve programme management, decision- making and organizational learning.
Finding: A factual statement on a programme based on empirical evidence gathered through research, monitoring and evaluation activities.
Focus group: A group selected to engage in discussions designed for the purpose of sharing insights, observations, perceptions and opinions, or recommending actions on a topic of concern.
Formative evaluation: An evaluation intended to validate or ensure that the goals of the development intervention are being achieved, and to improve the intervention where necessary by identifying and remedying problematic aspects.
Goal: The higher-order objective to which a development intervention is intended to contribute.
Impact: Positive and negative long-term effects on identifiable population groups produced by a development intervention, directly or indirectly, intended or unintended. These effects can be economic, socio-cultural, institutional, environmental, technological or of other types and should have some relationship to the MDGs and national development goals.
Impartiality: Removing bias and maximizing objectivity.
Independent evaluation: An evaluation carried out by entities and/or persons free of the control of those responsible for the design and implementation of a development intervention.
Indicators: Quantitative or qualitative factor or variable that provides a simple and reliable means to measure achievement, to reflect the changes connected to an intervention, or to help assess the performance of a development actor or intervention.
Indirect effect: The unplanned changes brought about as a result of implementing a programme or a project.
Inputs: The financial, human, material, technological and information resources used for development interventions.
Inspection: A special, on-the-spot investigation of an activity that seeks to resolve a particular problem.
Institutional development impact: The extent to which an intervention improves or weakens the ability of a country or region to make more efficient, equitable and sustainable use of its human, financial and natural resources.
Joint Programme: A set of activities with a common work plan and related budget, involving two or more participating development agencies and national or sub-national partners.
Joint Programming: A collective effort through which development agencies and national partners work together to prepare, implement, monitor and evaluate specific development interventions.
Lessons learned: Generalizations based on evaluation experiences with projects, programmes or policies that abstract from the specific circumstances to broader situations.
Logical framework (log frame): Tool used to emphasize the causal hierarchy of a programme and improve design of interventions. Logical frameworks highlight the links and sequencing between different facets and/or activities in a programme over time. In a logframe the information is organized in a matrix table.
Management information system: A system, usually consisting of people, procedures, processes and a database (often computerized) that routinely gathers quantitative and qualitative information on pre-determined indicators to measure programme progress and impact.
Means of Verification (MOV): The specific source(s) from which the status of results indicators can be ascertained.
Meta-evaluation: A type of evaluation that aggregates findings from a series of evaluations.
Methodology: A description of how something will be done.
Monitoring: A continuous management function that aims primarily at providing programme managers and key stakeholders with regular feedback and early indications of progress or lack thereof in the achievement of intended results.
Objective: A generic term usually used to express an outcome or goal representing the desired result that a programme seeks to achieve.
Operations research: The application of disciplined investigation to problem-solving.
Outcome evaluation: An in-depth assessment of a related set of programmes, components and strategies intended to achieve a specific outcome.
Outcome monitoring: A process of collecting and analyzing data to measure the performance of a programme, project, partnership, policy reform process and/or “soft” assistance towards achievement of development outcomes at country level.
Outcomes: The intended changes in development conditions that result from interventions; they can relate to changes in institutional performance. UNDAF outcomes are the collective strategic results for United Nations system cooperation at country level, intended to support national priorities.
Outlier: A subject or other unit of analysis that has extreme values.
Outputs: The specific products and services, or changes in the skills, abilities or capacities of individuals or institutions, that result from the completion of activities within a development intervention and are within the control of the organization.
Participatory approach: A broad term for the involvement of primary and other stakeholders in an undertaking, e.g. programme planning, design, implementation, monitoring and evaluation.
Participatory evaluation: Evaluation method in which representatives of agencies and stakeholders (including beneficiaries) work together in designing, carrying out and interpreting an evaluation.
Partners: The individuals and/or organizations that collaborate to achieve mutually agreed upon objectives.
Performance assessment: External assessment or self-assessment by programme units, comprising monitoring, reviews, end-of-year reporting, end-of-project reporting, institutional assessments and/or special studies.
Performance indicator: A unit of measurement that specifies what is to be measured along a scale or dimension, but does not indicate the direction of change. Performance indicators are a qualitative or quantitative means of measuring an output or outcome, with the intention of gauging the performance of a programme or investment.
Performance monitoring: A continuous process of collecting and analyzing data for performance indicators, to compare how well a development intervention, partnership or policy reform is being implemented against expected results (achievement of outputs and progress towards outcomes).
Performance: The degree to which a development intervention or a development partner operates according to specific criteria/standard/guidelines or achieves results in accordance with stated plans.
Policy evaluation: Policy evaluation is a considered process of examination, review and analysis which enables participants in the policy process, including stakeholders, legislators, administrators, the target population and others to: (i) measure the degree to which a policy has achieved its goals; (ii) assess the results the policy has had; (iii) identify any needed changes to a policy.
Primary data: Information which derives from new or original research and is collected first-hand at the source.
Process evaluation: A type of evaluation that examines the extent to which a programme is operating as intended by assessing ongoing programme operations.
Programme approach: A process which allows governments, donors and other stakeholders to articulate priorities for development assistance through a coherent framework within which components are interlinked and aimed towards achieving the same goals.
Programme evaluation: Evaluation of a set of interventions, marshalled to attain specific global, regional, country, or sector development objectives.
Programme theory: An approach for planning and evaluating development interventions, entailing systematic and cumulative study of the links between activities, outputs, outcomes, impact and contexts of interventions.
Programme: A time-bound intervention similar to a project but which cuts across sectors, themes or geographic areas, uses a multi-disciplinary approach, involves multiple institutions and may be supported by several different funding sources.
Project evaluation: Evaluation of an individual development intervention designed to achieve specific objectives within specified resources and implementation schedules, often within the framework of a broader programme.
Project: A time-bound intervention that consists of a set of planned, interrelated activities aimed at achieving defined outputs.
Proxy measure or indicator: A variable used to stand in for one that is difficult to measure directly.
Purpose: The publicly stated objectives of a development programme or project.
Qualitative data: Data that is primarily descriptive and interpretative, and may or may not lend itself to quantification.
Quality assurance: Quality assurance encompasses any activity that is concerned with assessing and improving the merit or the worth of a development intervention or its compliance with given standards.
Quantitative data: Data measured or measurable by, or concerned with, quantity and expressed in numerical form.
Work plan: Quarterly, annual or multi-year schedules of expected outputs, tasks, timeframes and responsibilities.
Recommendation: Proposal aimed at enhancing the effectiveness, quality, or efficiency of a development intervention, redesigning the objectives and/or reallocating resources.
Relevance: The degree to which the outputs, outcomes or goals of a programme remain valid and pertinent as originally planned or as subsequently modified, owing to changing circumstances within the immediate context and external environment of that programme.
Reliability: Consistency or dependability of data and evaluation judgements, with reference to the quality of the instruments, procedures and analyses used to collect and interpret evaluation data.
Report: An essential element of an accountability process, whereby those doing the accounting for performance report on what has been accomplished against what was expected.
Result(s): Changes in a state or condition that derive from a cause-and-effect relationship. Three types of such changes (intended or unintended, positive and/or negative) can be set in motion by a development intervention: outputs, outcomes and impacts.
Results chain: The causal sequence for a development intervention that stipulates the necessary sequence to achieve desired results: beginning with inputs, moving through activities and outputs, and culminating in outcomes, impacts and feedback. In some agencies, reach is part of the results chain. It is based on a theory of change, including underlying assumptions.
Results framework or matrix: The programme logic that explains how development results are to be achieved, including result chain(s), causal relationships, underlying assumptions and risks. The results framework reflects a more strategic level: an entire organization, a country programme, a programme component within a country programme, or even a project.
Results-Based management (RBM): A management strategy by which all actors, contributing directly or indirectly to achieving a set of development results, ensure that their processes, products and services contribute to the achievement of desired results (outputs, outcomes and higher level goals or impact) and use information and evidence on actual results to inform decision making on the design, resourcing and delivery of programmes and activities as well as for accountability and reporting.
Review: An assessment of the performance of an intervention, periodically or on an ad-hoc basis.
Risk analysis: An analysis or assessment of negative factors that affect or are likely to affect the achievement of results.
Risk: Internal or external uncertainty surrounding future negative factors that may adversely affect project success.
Sample: A representative part of a whole, selected in order to assess parameters or characteristics of that whole.
Secondary data: Information which derives from secondary sources, i.e. not directly compiled by the analyst; may include published or unpublished work based on research that relies on primary sources or any material other than primary sources.
Sector programme evaluation: Evaluation of a cluster of development interventions in a sector within one country or across countries, all of which contribute to the achievement of a specific development goal.
Situation analysis: A situation analysis defines and interprets the state of the environment of an organization. It provides the context and knowledge for planning and describes operating and managerial conditions and general state of internal and external affairs.
SMART: A concept used for formulation of results-chain components (Outcomes, Outputs, Indicators) according to the following parameters: Specific, Measurable, Attainable, Relevant and Time-bound.
Stakeholders: People, groups or entities that have a role and interest in the aims and implementation of a programme.
Summative evaluation: A type of evaluation that examines the worth of a development intervention at the end of the programme activities (summation). The focus is on the outcome.
Survey: Systematic collection of information from a defined population, usually by means of interviews or questionnaires administered to a sample of units in the population, e.g. adults, young persons.
Sustainability: The continuation of benefits from a development intervention after major development assistance has been completed. The probability of continued long-term benefits.
Synthesis: The process of identifying relationships between variables and aggregating data with a view to reducing complexity and drawing conclusions.
Target group: The specific individuals or organizations for whose benefit a development intervention is undertaken.
Target: Specifies a particular value for an indicator to be accomplished by a specific date in the future, for example: total literacy rate to reach 85 per cent among groups X and Y by the year 2010.
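Progress against such a target is often tracked as the share of the baseline-to-target gap already closed. The baseline and current values below are invented for illustration:

```python
# Hypothetical indicator tracking against a literacy-rate target.
baseline = 62.0  # literacy rate (%) at programme start (invented)
target = 85.0    # target value to be reached by the target date
current = 73.5   # latest measured value (invented)

# Share of the gap between baseline and target that has been closed so far.
progress = (current - baseline) / (target - baseline)
print(f"{progress:.0%} of the way from baseline to target")  # -> 50% of the way from baseline to target
```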
Thematic evaluation: Evaluation of a selection of development interventions, all of which address a specific development priority that cuts across countries, regions and sectors.
Time-series analysis: Quasi-experimental designs that rely on relatively long series of repeated measurements of the outcome/output variables taken before, during and after intervention in order to reach conclusions about the results of the intervention.
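A minimal sketch of the before/after comparison such designs rely on, using invented measurement series (a real analysis would also model trend and seasonality rather than compare means alone):

```python
# Interrupted time-series sketch: compare the mean outcome level in the
# repeated measurements taken before and after the intervention.
# The data are invented for illustration only.
from statistics import mean

before = [52, 54, 53, 55, 54, 53]  # outcome measurements pre-intervention
after = [58, 60, 59, 61, 60, 62]   # outcome measurements post-intervention

shift = mean(after) - mean(before)
print(f"Mean level shift after the intervention: {shift:.1f}")
```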
Transparency: Carefully describing and sharing information, rationale, assumptions and procedures as the basis for value judgments and decisions.
Triangulation: The use of three or more theories, sources or types of information, or types of analysis to verify and substantiate an assessment.
Validation: The process of cross-checking to ensure that the data obtained from one monitoring and evaluation method are confirmed by the data obtained from a different method.
Validity: The extent to which methodologies and instruments measure what they are supposed to measure.
Variable: In evaluation, a variable refers to specific characteristics or attributes, such as behaviours, age or test scores that are expected to change or vary.
Note: The cause-and-effect relationships between levels of results (the project logic) must be sound. That is: if you carry out all the activities, you will produce the stated outputs; if all the outputs are delivered, you will realise the sub-expected accomplishments (Sub-EAs); if the Sub-EAs are realised, the expected accomplishments (EAs) will be realised; and that will ultimately lead to the realisation of the project's objective.
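The if-then logic of this note can be sketched as a simple chain check. The level names follow the note; the completion data are invented:

```python
# Walk the results chain bottom-up: a level counts as reached only if every
# level below it has been realised. Completion data are hypothetical.
CHAIN = [
    "activities",
    "outputs",
    "sub-expected accomplishments",
    "expected accomplishments",
    "objective",
]

def highest_level_reached(completed):
    """Return the highest chain level realised so far, or None."""
    reached = None
    for level in CHAIN:
        if not completed.get(level, False):
            break  # the chain is broken at this level
        reached = level
    return reached

status = {"activities": True, "outputs": True,
          "sub-expected accomplishments": False}
print(highest_level_reached(status))  # -> outputs
```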
Template for Results Reporting on the Implementation of the Strategic Plan
- RESULTS REPORTING
Duration of Project: ——— Approval Date: ——— Budget: ———
Percentage of budget used to date: ——% Period Covered by report: —— 2013 to —— 2013
1. Implementation progress
Questions | Progress rating
a) Are resources (inputs) being utilised as planned?
b) What % of the total budget has been used so far?
c) Are activities being implemented according to schedule and within budget?
d) If there are delays what is causing the delays?
e) Is there anything happening that requires project management attention (to adjust or modify activities)?
f) Are activities leading to expected outputs?
g) How many of the total outputs have been completed?
2. Progress towards outcomes/expected accomplishments
a) Are outputs leading to expected outcomes/EAs?
b) What is the likelihood that the planned EAs will all be accomplished by the end of the project?
c) Are there any challenges or delays – and if so what is causing them?
4. Project Implementation Experiences and Lessons
Please summarize any experiences and/or lessons related to project design and implementation. Please select relevant areas from the list below:
a) Conditions necessary to achieve global urban benefits such as (i) institutional, social and financial sustainability; (ii) country ownership; and (iii) stakeholder involvement, including gender & youth issues.
b) Institutional arrangements, including project structure & governance;
c) Engagement of the private sector;
d) Capacity building;
e) Factors that improve likelihood of outcome sustainability;
f) Factors that encourage replication, upscaling, including outreach and communications strategies;
g) Financial management and co-financing.
1 Outputs and activities as described in the project logframe or in any updated project revision.
2 As per latest workplan (latest project revision)
3 Variance refers to the difference between the expected and actual progress at the time of reporting.
4 To be provided by the Project Leader (see Annex 1: Rating scale).
5 Add rows if your project has more than 3 key indicators per objective or outcome.
6 Depending on selected indicator, quantitative or qualitative baseline levels and targets could be used
7 Use the six-point rating scale: Highly Satisfactory (HS), Satisfactory (S), Marginally Satisfactory (MS), Marginally Unsatisfactory (MU), Unsatisfactory (U) and Highly Unsatisfactory (HU). See Annex 1, which contains the UN-Habitat definitions.
8 Add rows if your project has more than 4 objective-level indicators. Same applies for the number of outcome-level indicators.
9 Add rows if your project has more than 5 Outcomes.
10 Only for Substantial to High risk
- Evaluation plays a key role as: i) a source of evidence on the achievement of results and institutional performance, supporting accountability; and ii) an agent of change, contributing to building knowledge and organizational learning.
- Evaluation can make an essential contribution to managing for results, and to organization-wide learning for improving both programming and implementation. Yet, the value of evaluation depends on its use, which is in turn determined by a number of up-stream key factors, including (but not limited to):
– Relevance of the evaluation, in terms of timing, so as to make evaluation findings available when decisions are taken;
– Quality/credibility of the evaluation, which derives from independence, impartiality, and a properly defined and inclusive methodology;
– Acceptance of the evaluation recommendations, which partially depends on the above two points;
– Appropriateness of practices in the management response, dissemination and use of evaluation findings.
- In 2007, the United Nations Evaluation Group (UNEG) requested the then Evaluation Quality Enhancement (EQE) Task Force to undertake further work on the development of a paper on good practices for management response mechanisms and processes.
- Subsequently, independent consultants were commissioned to: 1) provide an overview of, and lessons learned on, management response and follow-up to evaluation recommendations within the United Nations system; and 2) develop good practice standards based on common and differential features, as well as the key challenges that evaluation units within the UN system, IFIs, bilaterals and NGOs face when dealing with follow-up processes to evaluation.
- The consultants' reports were discussed at the Evaluation Practice Exchange Seminar in 2009, and the conclusions from that discussion guided the Task Force in finalizing its work. It was agreed that UNEG should first develop and agree upon Good Practice Guidelines for the follow-up to evaluations, rather than develop prescriptive standards. Good practices are drawn from the experiences and approaches of a wide range of evaluation functions operating in differing contexts, with the aim that they be adapted by UN organisations to match their individual needs and organisational settings.
- Drawing extensively on the two previous reports by independent consultants, this paper aims to outline good practices in management response to evaluation, in the development of (formal and informal) systems for tracking and reporting on the implementation of evaluations' recommendations, and in mechanisms for facilitating learning and knowledge development from evaluations. Good practices are expected to cover both the accountability and the learning dimensions of evaluation, including incentives to use evaluation results in future programming and management.
Preconditions for follow up to evaluations
- There are certain preconditions that aid effective evaluation management response and follow-up processes, as outlined in Figure 1. Whilst a description of the attributes of high-quality evaluation planning and implementation processes is beyond the scope of this paper, their importance to the effectiveness of the ensuing management response and follow-up processes must be clearly highlighted.
- Involvement of internal stakeholders (and, to the extent possible, relevant external stakeholders) throughout the evaluation process increases the perceived relevance of evaluations and stakeholders' ownership of them. Reference and consultative groups, which advise on the evaluation's approach and provide feedback at key milestones of the process, work both to enhance the quality of the evaluation and to increase the likelihood that evaluation recommendations will be accepted, owned and acted upon. It is important to strike an appropriate balance between promoting ownership of evaluation findings and recommendations and preserving the independence of the evaluation; clearly defining the role of such a reference or consultative group before work begins helps to achieve this balance.
- Another precondition for follow-up is quality evaluation recommendations. Recommendations should be firmly based on evidence and analysis, follow logically from findings and conclusions, be clearly formulated, and be presented in a manner that is easily understood by target audiences. Both strategic and more operational recommendations are expected to be implementable3.
- The evaluation's credibility is a third factor affecting the utility of the evaluation. Credibility, in turn, depends on independence, impartiality, transparency, quality and the appropriateness of the methods used4. Reporting lines and the different structures of evaluation units are key factors influencing the independence, credibility and, hence, the utility of evaluations.
Policy statements that deal with evaluation follow up
- The UNEG Norms and Standards for Evaluation in the UN System5 state the need for management response and systematic follow-up activities as a means for evaluations to contribute to knowledge building and organizational improvement. Standard 1.4 suggests that, “UN organizations should ensure appropriate evaluation follow-up mechanisms and have an explicit disclosure policy” to ensure that evaluation recommendations are utilized and implemented in a timely fashion and that the findings of evaluations feed into programme or project planning. An explicit disclosure policy ensures the transparent dissemination of evaluation reports.
- The different mechanisms for dealing with management response and follow-up can be distinguished along two separate, but in practice often related, dimensions: the degree of formality of the process and the way the knowledge generated by the evaluation is shared. On the one hand, there are formal and informal processes, the latter characterized by more ad hoc interactions between evaluators and users of evaluations. On the other hand, a distinction is made between explicit (or codified) knowledge, such as that crystallized in evaluations' recommendations, and implicit (or tacit) knowledge, which is shared when evaluators interact with potential users of evaluations and enter into a dialogue that allows knowledge sharing without that knowledge being embodied in documents.
- Although it has been shown that formalization of, and transparency in, the set-up of management response and follow-up mechanisms contributes to greater systematization and more rigorous implementation of recommendations, the two models are by no means incompatible. Rather, each process strengthens the other's value in promoting ownership of an evaluation's conclusions and recommendations by management while, at the same time, ensuring accountability within and outside the organization. Formal and informal mechanisms for management response and follow-up reporting are powerful incentives for accountability mechanisms to work and to contribute to organizational learning. To that extent, evaluation policies must be explicit about follow-up mechanisms, both formal and informal.
- The main principles that should be embodied by an evaluation policy focused on the follow-up to evaluations include ownership, consultation and transparency.
- As with stakeholders' involvement during the evaluation process, their inclusion and engagement throughout the follow-up process is important not only for accountability purposes; it also builds ownership and increases the evaluation's potential to contribute to organizational learning.
- Related and supportive incentives for using evaluation results, to which the organization should commit itself, include: building a culture that values evaluation; emphasizing the need for compliance; ensuring that evaluation recommendations are relevant and timely for the organizational agenda; ensuring close alignment of departmental agendas with recommendations emanating from evaluations; securing senior management buy-in; relating good evaluation practices to results-based programming; and using results for evidence-based communication strategies.
- The policy should clearly define the roles and responsibilities of evaluation offices or units, managers, and staff at large. It is also important to maintain constructive relationships with the organization's Governing Bodies and other technical departments without compromising the independence of the evaluation function. This can be achieved by consulting stakeholders during the policy drafting process, so that they are clear about, and supportive of, their role, while also emphasizing the importance of independent assessments that reflect the opinions of all stakeholders.
- The requirements and mechanisms for the follow-up to evaluations, including the dissemination of evaluation reports, management response and follow-up reports, must be made clear through the policy. The policy should also leave room for informal activities. Another important point to include is the time frame for the management response and for the implementation of other follow-up activities.
- In this context, the institutionalization of practices for management response and follow-up is essential to capitalise on the knowledge created and to contribute to development effectiveness: on the one hand, it helps the organization to learn what worked well, what did not work, and why; on the other, it serves as an instrument for accountability (which, in turn, becomes an incentive for learning).
Management response to evaluation recommendations
- The UNEG Norms and Standards for Evaluation in the UN System suggest the development of a formal response to the evaluation by management and/or the governing authorities addressed by the recommendations. Standard 3.17 states, “Evaluation requires an explicit response by the governing authorities and management addressed by its recommendations”.
- This section outlines principles and good practices with respect to the development of approaches, mechanisms and processes to promote effective management responses to evaluation recommendations.
- Management Responses to evaluations in UN agencies are most commonly embodied in the production of a formal document. The majority of the UN agencies (and other bilateral and multilateral organizations) develop management responses in a matrix form, requiring feedback to each recommendation (i.e. accepted, not accepted, partially accepted) and a list of actions that the responsible unit(s) commits to take in a fixed amount of time. The responses may also have a narrative component. To ensure relevance, the management response is often required to be completed within a specific time period after the release of the evaluation report.
- The management response matrix constitutes the baseline for monitoring accepted recommendations and agreed actions, which in turn informs follow-up reports on the status of implementation. While it serves as an important accountability tool, the setting of strict, short deadlines needs to take adequate account of the time that in some cases (e.g. joint evaluations) is necessary to involve different stakeholders and/or organizational levels. An electronic monitoring tool to track the timely receipt of documents is advisable, especially when evaluation units find themselves managing a significant number of reports and follow-up documents.
- Although the evaluation function should not be held responsible for the substance of a response, which lies with the manager concerned, it must check the quality of management responses to ensure that the recommendations have, indeed, been responded to and have a chance of being implemented. To facilitate the process, evaluation focal points should be established to coordinate the preparation of management responses. In addition, an internal monitoring system should be established to enhance the accountability of managers, and ensure that management responses are submitted in a timely manner.
- In the case of country level and sometimes project level evaluations jointly undertaken by UN entities and governments, management and governments should be expected to provide a response to the evaluation, which is disclosed jointly with the evaluation report. Clear roles and responsibilities are of particular importance in the case of joint evaluations, where inter-agency coordination is required for effective management response and follow-up6.
- Management responses to decentralized evaluations, which are managed by agency field offices and/or regional, thematic or policy bureaus7, should follow the processes described above.
Elements of good practices for management responses
- The following attempts to distil key elements of good evaluation practice that promote effective follow-up through the management response.
- A focus on increasing the level of ownership of the evaluation findings and recommendations through both formal and informal processes during the evaluation process improves the likelihood of effective management response and evaluation follow-up.
- Clearly defined roles and responsibilities in the processes dealing with management response and follow-up are needed and should be communicated to all key evaluation stakeholders, including managers, officers and members of Governing Bodies.
- Establish an agreed deadline by which Management or other key stakeholders (e.g. Governments and possibly other partners) should provide their formal response to the evaluation.
- A focal point should be nominated by management to coordinate the management response. This is particularly important in cases where the evaluation involves several operational units, and different management levels.
- In the case of joint evaluations involving several agencies/partners, an ad-hoc group with management representatives of the different agencies/partners should be formed to elicit a coordinated management response.
- In case the concerned managers lack experience in preparing a management response, the central evaluation unit should routinely provide support by showing good examples of management response and clarifying any doubts, making reference to the evaluation policy of the organization (if there is one). The support role of the central evaluation unit is particularly important in the case of agencies with decentralized evaluation offices or decentralized evaluation focal points.
- The Management Response should clearly indicate whether Management accepts, partially accepts or rejects the recommendations. If the latter is the case, the reason(s) for the rejection should be provided. In the former case, actions to be taken should be mentioned in detail, indicating the time frame and specific unit(s) responsible to implement the planned action(s). When more than one unit is mentioned, it should be clear which unit is responsible for which action(s). This information should be presented in the form of a management response matrix, showing the relevant information at a glance.
- Management Responses should be disclosed in conjunction with the evaluation. However, if the management response does not become available within the agreed period, and there are no acceptable reasons to extend (or further extend) the deadline, the evaluation report is disclosed with an indication that the management response was not made available by the date on which it was due.
- Evaluators should be encouraged and expected to pursue opportunities for dialogue with management on evaluation recommendations and the management response, seeking to facilitate managers' task while, at the same time, being careful to preserve their independence and to promote management's ownership of, and commitment to, its response. Indeed, dialogue at all levels of the evaluation process increases the perceived relevance, and stakeholder ownership, of evaluations.
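As an illustration only, the matrix fields described above (recommendation, response, planned actions, responsible unit, time frame) could be modeled as a simple record with a completeness check. All class and field names here are hypothetical, not part of any UNEG template:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class Response(Enum):
    ACCEPTED = "accepted"
    PARTIALLY_ACCEPTED = "partially accepted"
    REJECTED = "rejected"

@dataclass
class Action:
    description: str
    responsible_unit: str  # one clearly identified unit per action
    deadline: date         # agreed time frame for the action

@dataclass
class MatrixRow:
    recommendation: str
    response: Response
    rejection_reason: str = ""  # required when the recommendation is rejected
    actions: list[Action] = field(default_factory=list)

    def validate(self) -> list[str]:
        """Return the problems that would make this row incomplete."""
        problems = []
        if self.response is Response.REJECTED and not self.rejection_reason:
            problems.append("rejected recommendation lacks a reason")
        if self.response is not Response.REJECTED and not self.actions:
            problems.append("accepted recommendation lacks planned actions")
        return problems
```

Such a record makes the good-practice rules above mechanical: a rejection must carry a reason, and an accepted or partially accepted recommendation must carry actions with a responsible unit and deadline.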
Follow up processes and learning
- The main purposes of institutionalizing follow-up processes to evaluations are: 1) to strengthen the use of evaluations; 2) to increase stakeholder and management buy-in so as to improve performance; and 3) to facilitate in-depth dialogue about evaluation results and follow-up, in order to influence the planning and implementation of strategies, programmes and projects.
- It has been demonstrated that transparent management response and follow-up processes increase the implementation rate of the recommendations.8 UNEG Standard 1.5 requires the evaluation function to ensure that follow-up and regular progress reports are compiled on the implementation of the recommendations emanating from the evaluations already carried out, to be submitted to the Governing Bodies and/or Heads of organizations. While this may not be the practice for all evaluation functions in the UN system, all evaluation functions should consider implementing mechanisms that facilitate follow-up of evaluation recommendations.
Systematic follow up to evaluations
- As described in the previous section, the management response matrix clearly outlines the recommendations from the evaluation, the response from management, and the actions to be taken including a clear indication of the entity responsible for the action and the timeline for completion.
- Reporting on follow-up to evaluations should take place at regular intervals, e.g. on an annual or biannual basis. Each organization should determine the appropriate intervals and ensure that they are communicated to staff and stakeholders. Reporting intervals are ideally aligned with the organization's planning processes. A default expiration period for tracking the follow-up to evaluation recommendations is desirable, to ensure that the costs (financial and human) of tracking implementation are balanced against the benefits9. It is also important to allow flexibility to change agreed actions so that they remain relevant in a changing context.
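The default expiration period can be illustrated with a short, hypothetical helper. The 4-year window mirrors the JIU practice noted in footnote 9; the function itself is only a sketch, not a prescribed mechanism:

```python
from datetime import date

# Hypothetical default tracking window; footnote 9 notes that the JIU
# tracks follow-up to recommendations for a period of 4 years by default.
TRACKING_YEARS = 4

def tracking_expired(report_date: date, today: date,
                     years: int = TRACKING_YEARS) -> bool:
    """True once the default expiration period after the evaluation
    report has passed and the recommendation drops out of tracking."""
    try:
        cutoff = report_date.replace(year=report_date.year + years)
    except ValueError:  # 29 February landing in a non-leap year
        cutoff = report_date.replace(year=report_date.year + years, day=28)
    return today >= cutoff
```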
- There are several mechanisms for the systematic follow-up that are considered good practice:
– Electronic platforms have proven to be a successful mechanism for tracking the actions taken in response to the recommendations of an evaluation. The benefits of an electronic platform include the ability to generate reports and disaggregated analyses of implementation across the organization, and easier access by all stakeholders to the information generated. Organizations interested in developing an electronic platform for tracking should seek lessons learned from organizations that have already implemented one.
– Reporting to governing bodies (and thus to the entire organization and its stakeholders) on an annual or biannual basis on the status of the implementation of recommendations is an effective means of ensuring accountability. The report could come in the form of an Annual Evaluation Report that covers multiple aspects dealing with evaluation in the organization or a report specifically focused on the implementation status of evaluation recommendations and follow-up actions. Reporting can serve as an incentive to implement follow-up actions in a timely fashion.
– Discussions on planned follow-up to evaluations and the status of implementation of the recommendations are essential for ensuring that stakeholders are aware of the findings and the actions planned and/or taken. Discussions will enable stakeholders to provide comments and suggestions for moving forward. Discussions on the follow-up to evaluations can take place systematically at the annual meeting of the Governing Body and/or through Senior Management Teams. Such discussions should focus on strategic issues of corporate significance and on recurrent findings and recommendations from project evaluations. Discussions will build ownership within the organization and serve as a further incentive to implement follow-up actions in a timely fashion.
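As a minimal sketch of the kind of disaggregated figures such an electronic platform or annual report might produce (the statuses, unit names and records below are invented for illustration):

```python
from collections import Counter

# Invented sample records: (responsible_unit, implementation_status).
records = [
    ("Unit A", "implemented"),
    ("Unit A", "in progress"),
    ("Unit B", "implemented"),
    ("Unit B", "not started"),
]

def implementation_summary(records):
    """Status counts per responsible unit: the kind of disaggregated
    figure an annual report to a Governing Body might present."""
    summary = {}
    for unit, status in records:
        summary.setdefault(unit, Counter())[status] += 1
    return summary

def implementation_rate(records):
    """Share of follow-up actions already implemented."""
    done = sum(1 for _, status in records if status == "implemented")
    return done / len(records) if records else 0.0
```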
Learning and contribution to knowledge development
- Systematic mechanisms for follow-up to evaluation recommendations are positive steps in institutionalizing a system for follow-up. However, in order to ensure effective and appropriate follow-up they should be complemented by other incentives and less formal mechanisms.
- Several mechanisms for facilitating learning and knowledge development from evaluations are considered good practice:
– Knowledge products can include the evaluation report itself, an evaluation brief, an e-newsletter with a short summary, or other products. Knowledge products should contain the key findings and recommendations, be tailored to the audience, and facilitate the use of information through clear, easy-to-understand language, while maintaining linkage to the broader expected results of the organization. The strategy for disseminating knowledge products is of utmost importance; it has been shown that an effective strategy depends on the correct targeting of intended users, the appropriateness of the means used to facilitate access to the evaluation findings and, in particular, the timing of the evaluation, so that findings are available when decisions are taken. It is also important to take advantage of new media and technology for disseminating knowledge, such as wikis and YouTube.
– Meetings and workshops facilitate the sharing of tacit (or implicit) knowledge from evaluations. Tacit knowledge is knowledge that is not captured in evaluation reports, for example individual staff members' interpretation of evaluation findings and recommendations. Tacit knowledge is essential for a full understanding and an appropriate and effective implementation of recommendations at the organizational level. The extent to which evaluation documents, follow-up reports and lessons learned are discussed and shared significantly affects the use of evaluation results, ensures transparency, and serves as an incentive for the organization's staff whenever the documents are disclosed to the public and/or presented to the Governing Bodies.
– Communities of practice (CoPs) are informal mechanisms that can create an enabling environment for the use of evaluations, giving evaluators and staff opportunities to persuade managers to implement recommendations through the sharing of knowledge and good practice.
Conclusions and suggestions for use of good practices
- The present guidelines should be tailored to the specific context of each organisation, as the evaluation functions of UN entities vary greatly in terms of the level of independence, capacity and organizational evaluation culture, which affect appropriate roles and responsibilities, the level of acceptance of evaluation and the appropriateness of related follow-up activities.
- Management responses and follow-up to evaluations should be reflected in agency evaluation policies, which should clarify the respective roles and responsibilities of the evaluation function vis-à-vis management. While management is ultimately accountable, at the minimum evaluation units are expected to facilitate processes and promote activities related to the follow-up to evaluations.
- Evaluation processes should aim to increase the level of ownership of findings and recommendations through both formal and informal approaches. For evaluation processes and results to be fully captured and owned as organizational lessons, it is of central importance that a tailored dissemination or communication strategy is developed for each evaluation.
- Management responses should clearly indicate whether Management accepts, partially accepts, or rejects the recommendations. Follow-up should be well coordinated and timeframes for action agreed. Good practice suggests that management responses should be disclosed in conjunction with the evaluation. Complementing formal management responses with facilitation of learning and knowledge development from evaluations is necessary for building a culture for utilizing evaluations beyond compliance. Formalisation of management response processes in evaluation policies, followed by systematic application, is an effective way to promote organization-wide learning and improve both operational programming and implementation.
- A combined approach that incorporates oral and written, formal and informal communication is deemed necessary to ensure that follow-up to evaluations supports organizational accountability and learning for enhanced effectiveness.
1 A. Engelhardt, Management Response and Follow-up to Evaluation Recommendations: Overview and Lessons Learned, 2008.
2 O. Feinstein, Institutional practices for Management Response and Evaluation Follow-up, 2009.
3 Some agencies give an order of priority for recommendations, and / or require a timeframe to be specified.
4 The factors influencing the use of evaluations are discussed in O. Feinstein, "Use of Evaluations and Evaluation of their Use", Evaluation, No. 8, 2002.
5 Available at www.unevaluation.org.
6 OECD, DAC Guidance for Managing Joint Evaluations, 2006
7 This is an adaptation of the definition used by WFP and UNDP, which also applies to UNICEF and most organizations. Furthermore, this definition is consistent with the one included in Hildenwall and Öbrand (2008)
8 Achim Engelhardt, “Management response and follow-up to evaluation recommendations: overview and lessons learned” p. 5.
9 For example, JIU per default tracks the follow-up to recommendations for a period of 4 years.
Please see the UNEG Norms and Standards for Evaluation.
Major Requirements: The Standards for Evaluation Reports are underpinned by two major requirements:
1. Comprehensiveness, and
Virtually all evaluations are presented as written reports, and one of the tasks after the evaluation is completed is to disseminate its results to potential users. It is essential, however, to have already ascertained that the evaluation has produced credible information and well-founded recommendations. UN-Habitat recommends a standard format for evaluation reports. The format is intended both to facilitate the writing of reports by evaluators and the checking of reports by evaluation managers and others. The format is not compulsory, but it should be used unless there is a good reason for doing otherwise.
- Title page to include:
• Name of programme/project to be evaluated;
• Date of the evaluation report;
• Location of programme
- Table of contents
- Executive Summary
The Executive Summary should contain a summary of the evaluation, with emphasis on:
• Purpose and scope; methodology used, data collection and analysis methods used; major limitations;
• Main findings;
• Lessons learned;
• Recommendations.
The Introduction should contain, in not more than one page:
• Purpose of the report;
• Scope of the programme/project;
• Scope of the evaluation and evaluation questions
• Structure of the report;
- The evaluated intervention (Policy, institution, programme/project description)
The programme/project description including:
• Economic, social and cultural dimensions, history, logic in relation to organizational work
• Stakeholders' involvement;
• Issues to be addressed;
• Linkages to other objects;
• References to relevant documents and mandates;
• What results were expected to be achieved;
• Other information (phases, timeline, budgets etc.);
- Evaluation profile
The evaluation profile should cover:
• Reason for carrying out the evaluation;
• Design of the evaluation/justification of the methodology used;
• Description of methodology:
• Data sources used;
• Data collection and analysis methods used;
• Major limitations;
• Evaluation team;
• Performance expectations (indicators);
• Participation/stakeholders’ contribution;
• Specifics for addressing evaluation questions;
- Evaluation findings
The evaluation findings should include:
• Factual evidence relevant to the questions asked by the evaluation, and interpretation of such evidence;
• Findings regarding resources used;
• Findings about outputs;
• Findings about outcomes and impact where possible;
• Progress compared with initial plans (achievements/challenges);
• Findings on unintended effects;
• Issues of effectiveness, efficiency and relevance
- Evaluative conclusions
• Add value to the findings (sum of findings = conclusions);
• Focus on issues of significance related to key questions of performance relative to the expectations;
- Lessons learned
• Prevent mistakes;
• Contribute to general knowledge;
• Contain suggestions to improve future performance;
• Be supported by evidence and findings;
• Be adequate in terms of the TOR;
• Facilitate implementation;
- Annexes
• Evaluation work plan;
• Data collection instruments;
• List of important documentation;
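For illustration, the recommended section list above could back a simple completeness check for draft reports. The checker below is a hypothetical sketch, not part of the UN-Habitat standard format:

```python
# Section names are taken from the recommended report format above; the
# checker itself is an illustrative sketch, not a UN-Habitat requirement.
RECOMMENDED_SECTIONS = [
    "Title page",
    "Table of contents",
    "Executive Summary",
    "Introduction",
    "The evaluated intervention",
    "Evaluation profile",
    "Evaluation findings",
    "Evaluative conclusions",
    "Lessons learned",
]

def missing_sections(report_headings):
    """Return the recommended sections absent from a draft report's headings."""
    present = {h.strip().lower() for h in report_headings}
    return [s for s in RECOMMENDED_SECTIONS if s.lower() not in present]
```

An evaluation manager could run such a check before review, e.g. `missing_sections(["Executive Summary", "Evaluation findings"])` would flag the outline gaps at a glance.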