In 2007, UN-Habitat launched an ambitious organizational renewal, committing to its governing bodies, donors, Member States and partners to become more results-oriented and accountable. The renewal provided a coherent framework for strategic planning and management, including evaluation.
Results-based management (RBM), as adopted by UN-Habitat, emphasizes the importance of defining realistic results, clearly identifying beneficiaries and designing interventions to meet their needs. In this context, evaluation is expected to play a fundamental role in the agency’s transformation into a more results-oriented, transparent and accountable organization.
This section describes evaluation requirements at UN-Habitat and evaluation processes23, divided into three stages: planning evaluations, implementing evaluations and using evaluation findings. A set of “evaluation tools”, including checklists and templates, is provided to guide and support evaluation steps where necessary.
23 The evaluation approach adopted by UN-Habitat, shown in figure 36, is based on evaluation guidelines used by other organizations, mostly OIOS.
4.2.1 Three main types of evaluation in UN-Habitat
(a) Corporate and thematic evaluations address issues with a global perspective, as well as ‘high risk’ areas of operations. They include mandatory external evaluations requested by the UN-Habitat governing bodies, donors or other inter-agency bodies, and discretionary external evaluations requested by UN-Habitat.
(b) Project and programme evaluations focus on delivery of outcomes and on operational performance in terms of the efficiency, effectiveness, relevance, impact and sustainability of UN-Habitat interventions. These evaluations are typically ex-ante, mid-term and end-of-project evaluations. Mid-term evaluations are undertaken for projects of more than four years’ duration, with emphasis on high-risk projects. End-of-project evaluations are undertaken at the completion of the project. As of 2015, all projects with a value of US$1 million and above require a mandatory end-of-project evaluation conducted by an external consultant.
(c) Mandatory self-evaluation of all closing projects is required by management and is conducted by programme managers at global, regional and country levels. The Project Office coordinates and manages the self-evaluations. To ensure high quality, a few projects are randomly selected and evaluated by the Evaluation Unit. Every six to twelve months, the Evaluation Unit synthesizes the results of evaluation activities, including lessons learned and follow-up on recommendations, and presents a substantive evaluation report to the UN-Habitat Board.
The Evaluation Unit is responsible for managing and conducting evaluations included in the biennial or annual evaluation plans; these are considered centralized evaluations. Other evaluations, commissioned and managed by project leaders in branches, regional offices and country offices, are considered decentralized evaluations, for which the Evaluation Unit provides technical support. The Evaluation Unit must be informed of all evaluations, including decentralized and donor-led evaluations, and a copy of each final report must be submitted to it.
Project and programme evaluations should be financed through the projects’ own budget. Project leaders are obliged to include an evaluation budget in their project proposals. The indicative evaluation cost estimate index should be followed for costing of evaluations. Evaluations commissioned or requested by donor agencies or other external entities must be financed by the party that commissioned or requested the evaluation.
Because available resources are limited, interventions to be evaluated are prioritized on the basis of a risk assessment. Two evaluations of the Strategic Plan 2014-2019 (mid-term and end-term) must be carried out by the Evaluation Unit over the six-year period of the Plan and should be adequately resourced in the budget process for those years as a core expense. Impact evaluations may be carried out for long-standing demonstration projects and programmes, with the costs covered largely by those projects or programmes.
All closing projects must have a self-evaluation report. The self-evaluation report is the responsibility of the project leader and focuses on results achieved and the performance of the project. All evaluations managed and conducted by the Evaluation Unit must have a management response, including an action plan to implement accepted recommendations. Regular monitoring of progress in implementing evaluation recommendations is the responsibility of the Evaluation Unit, which follows up with the responsible offices on the implementation of action plans.
Evaluation capacity development is a critical component to institutionalize evaluation. Training workshops and evaluation tools to support the project-based management approach are developed in order to build evaluation skills and promote evaluation awareness.
4.2.2 Planning, implementing and using evaluations
Evaluation processes can be summarized in three stages, namely planning, implementation and use of evaluation findings, as shown in figure 37.
Figure 37: Phases of evaluation
4.2.2.1 Planning Evaluations
a) Preparation of UN-Habitat biennium evaluation plan
The UN-Habitat Evaluation Plan is prepared for every biennium and includes evaluation activities to be carried out by UN-Habitat during the two-year programme cycle, as well as related financial resource requirements. The evaluation plan is developed in conjunction with the formulation of UN-Habitat’s biennial programme budget, and thus forms an integral part of the programme planning cycle. The biennium evaluation plan is updated annually.
Prior to the start of the biennium, branch coordinators, regional directors and other programme managers, in consultation with their staff, identify and propose evaluation topics for inclusion in the biennium evaluation plan. The Evaluation Unit reviews the proposals in the context of UN-Habitat’s overall requirements and prepares a draft evaluation plan for review by the UN-Habitat Board.
The prioritization of evaluation topics is a critical exercise, and the following criteria guide the selection of priority evaluation topics to be included in the evaluation plan:
- Mandatory evaluations requested by the Governing Council, other intergovernmental bodies, donors, etc.
- The relative importance of the proposed evaluation topic within the context of UN-Habitat’s strategic direction and priorities
- Evaluations that are cross-cutting in nature
- Evaluation of ‘high risk’ interventions
- Evaluation of interventions that have innovative value and potential for replication
- Impact evaluations to assess changes brought about by UN-Habitat interventions
- Resource requirements
- Evaluability
The prioritized evaluations will form the evaluation plan that will be managed centrally by the Evaluation Unit. This biennial evaluation plan does not, however, determine the complete set of evaluations actually undertaken. The implementation of the plan is influenced by various factors, including the availability of resources, and requests for ad hoc evaluations by different stakeholders. The plan must be flexible to absorb new demands from within as well as from outside the organization, as the need arises.
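Prioritization against the criteria above is ultimately a matter of judgment, but a simple weighted scoring exercise can help make the ranking transparent and comparable across proposals. The sketch below is purely illustrative: the criteria labels, weights and scores are assumptions for demonstration, not UN-Habitat prescriptions.

```python
# Illustrative only: weighted scoring of candidate evaluation topics against
# selection criteria. Weights and scores are hypothetical and would be agreed
# by the Evaluation Unit in practice.

CRITERIA_WEIGHTS = {
    "mandatory": 5,               # requested by governing bodies, donors, etc.
    "strategic_relevance": 4,     # fit with strategic direction and priorities
    "risk": 3,                    # 'high risk' interventions
    "innovation_replication": 2,  # innovative value and potential for replication
    "evaluability": 2,            # feasibility of evaluating the intervention
}

def priority_score(topic_scores: dict) -> int:
    """Sum of (criterion weight x score on a 0-3 scale) for one candidate topic."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in topic_scores.items())

# Hypothetical candidate topics with scores assigned by reviewers.
candidates = {
    "Regional WATSAN programme": {"mandatory": 0, "strategic_relevance": 3,
                                  "risk": 2, "innovation_replication": 1,
                                  "evaluability": 3},
    "Donor-requested thematic evaluation": {"mandatory": 3, "strategic_relevance": 2,
                                            "risk": 1, "innovation_replication": 0,
                                            "evaluability": 2},
}

ranked = sorted(candidates.items(), key=lambda kv: priority_score(kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{name}: {priority_score(scores)}")
```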
Programme/project managers may initiate and commission evaluations that are not included in the evaluation plan in order to assess their programmes and seek ways to improve them. Such evaluations, which may be conducted internally or externally, are referred to as decentralized evaluations. Programme managers are responsible for managing decentralized evaluations, but must inform the Evaluation Unit of such evaluations and request technical advice and assistance from it.
It is essential that planning for monitoring and evaluation take place at an early stage of project/programme formulation, and resources required for evaluation need to be reflected in project documents. This is because (i) the design of the project affects how it will be evaluated in future; (ii) SMART project results and indicators are foundational to evaluation and; (iii) monitoring results throughout the project’s implementation is critical to having valid information for an evaluation.
b) Budgeting for evaluations
Evaluation being a core function of the organization, it is essential that a core budget be allocated to the evaluation function as part of the overall planning and budgeting processes. This core budget allocation should be complemented by other budget sources, such as donor commitments for specific programmes and evaluation budgets for projects and programmes.
The UNEG norm (N2) states that “the Governing Bodies and/or the Head of the organizations are also responsible for ensuring that adequate resources are allocated to enable the evaluation function to operate effectively and with due independence”. The standard benchmark established for evaluation is three to five percent of the overall budget of a programme. However, given the resource constraints in the UN Secretariat and the developing status of the Secretariat evaluation functions, a benchmark of 1% of the total budget was suggested24.
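To make these benchmarks concrete, the minimal sketch below derives the indicative evaluation budget from a hypothetical programme budget; the figures are assumptions for illustration, not UN-Habitat guidance.

```python
# Hypothetical worked example of the evaluation budget benchmarks cited above.
programme_budget = 2_000_000  # US$, illustrative programme value

benchmark_low = 0.03 * programme_budget          # 3% lower bound  -> US$ 60,000
benchmark_high = 0.05 * programme_budget         # 5% upper bound  -> US$ 100,000
secretariat_benchmark = 0.01 * programme_budget  # 1% suggested    -> US$ 20,000

print(benchmark_low, benchmark_high, secretariat_benchmark)
```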
Programme and project managers/leaders are responsible for ensuring that adequate resources for evaluation are planned in the project documents, and the Programme Advisory Group (PAG) should not approve projects that do not have adequate resources for evaluation.
Responsibility for provision of financial resources required for evaluation that are not included in the evaluation plan rests with the party that requests or commissions the evaluation. For decentralized evaluations, cost-recovery will be charged for support activities rendered by the Evaluation Unit, such as reviews of TOR, draft evaluation reports, training staff in evaluation and review of project proposals in the Programme Advisory Group.
24 United Nations Secretariat Evaluation Scorecards 2010-2011 to complement the OIOS biennial report on "Strengthening the role of evaluation and application of evaluation findings in programme design, delivery and policy directives".
4.2.2.2 Pre-evaluation: Initial considerations for an evaluation
Before evaluation managers start to design a specific evaluation, they should consider the following elements:
- Establish the need and purpose of the evaluation.
- Establish what needs to be accomplished and the issues to be addressed.
- Identify and engage the relevant stakeholders.
- Determine the scope, approach and appropriate methodology.
- Estimate resources needed and/or available for the evaluation.
- Determine the evaluability of the intervention to be evaluated.
Determining the evaluability of the intervention to be evaluated
Determining evaluability means assessing the intervention to see if the evaluation is feasible, affordable and of sufficient value to proceed.
Unless considerations of evaluability are built into the design, an evaluation may eventually not be feasible.
In addition to developing logical frameworks for programmes and projects, options for data collection and the availability of baseline data should be considered during the design phase of an intervention. Table 24 shows six practical steps for determining the evaluability of interventions.
Checklist for initial consideration before the evaluation
Before the evaluation manager begins to prepare the TOR, they should have a basic understanding of the evaluability, purpose and issues to be addressed, involvement of stakeholders, scope, approach and methodology, resources, timing and the need for a reference group (see table 25).
4.2.2.3 Developing Terms of Reference (TOR) for an evaluation
The Evaluation Manager prepares the TOR once the decision is made to proceed with an evaluation. The TOR document offers the first substantive overview and conceptual outlook of the evaluation. It articulates management’s requirements and expectations for the evaluation and guides the evaluation process, until the evaluation work plan (inception report) takes over as the primary control document. The evaluation work plan, prepared by the evaluator, brings great specificity and precision to evaluation planning – refining and elaborating on what has been set out in the TOR.
Developing an accurate and well-specified TOR is a critical step in managing a high-quality evaluation. Before preparing the TOR, you should have a basic understanding of:
- Why and for whom the evaluation is being done.
- The issues to be addressed and what the evaluation intends to accomplish.
- Who will be involved and the expertise required to complete the evaluation.
- When milestones will be reached and the time frame for completion.
- What resources are available for conducting the evaluation.
(a) What goes into an evaluation TOR? (content of the TOR)
The content of the TOR should provide sufficient background information on the assignment, and move in a logical order from the evaluation objectives and intended users through to the required qualifications of the evaluation team and the resources available. The level of detail of each section will vary with the nature and magnitude of the evaluation task, but the essential elements are summarized in box 25.
A TOR presents an overview of the requirements and expectations of the evaluation and details the parameters for its conduct. It provides the background and context for the evaluation; the purpose, objectives and intended users; the scope of the evaluation; the framework, including criteria, tailored evaluation questions and how cross-cutting issues such as human rights, gender and environmental issues will be incorporated; the evaluation methodology; stakeholder involvement; accountabilities and responsibilities; evaluation team composition and qualifications; procedures and the evaluation process; a description of deliverables; the evaluation schedule; and resource requirements. This section sets out the essential elements of the TOR:
1. Title
The title identifies what is being evaluated. A good title is short, descriptive, striking and easily remembered.
2. Introduction/Background information and rationale
The opening section of the TOR provides orientation about the overall intervention being evaluated. This section should describe the background and context of the programme and its current status. Main objectives and expected results of the programme must be clearly stated, including key outcome indicators. The context in which the programme is being implemented including organizational, social, political, regulatory, economic or other factors that have been directly relevant to the programme’s implementation should be described. Roles and responsibilities of key stakeholders in the design and implementation of the programme should also be described. In addition, this section should provide information on the legislative authority and mandate for the evaluation, and what is expected to be achieved.
3. Purpose and objectives of the evaluation
This section should provide the purpose and objectives of the evaluation. It should clarify who the evaluation is for. Why is the evaluation being undertaken, and why is it being undertaken now? And how will the evaluation results be used? While the purpose clarifies why the evaluation is being carried out, the objectives should describe what the evaluation aims to achieve. The following are typical objectives for a programme or project evaluation.
- To assess what the programme achieved vis-à-vis its objectives
- To assess the relevance, efficiency, effectiveness and sustainability of the programme/intervention
- To assess the extent to which the design and implementation of the programme takes into consideration cross-cutting issues of gender equality and human rights approaches
- To identify concrete recommendations for improvement
- To assess the efficiency with which the outputs are being achieved
4. Evaluation scope and focus
This section presents the parameters of the evaluation in terms of scope and limits. The scope should be realistic given the time and resources available to implement the evaluation. The following should be considered in defining the scope for evaluation:
- The period covered by the evaluation, e.g., past five years of the programme; or since the implementation of the Strategic Plan 2014-2019;
- Geographical coverage: country level, regional or global, e.g., African countries targeted by the WATSAN programme;
- Thematic coverage (If it is a programme, which projects will be covered?).
- Criteria against which the subject will be evaluated: all major evaluations usually include the criteria of efficiency, effectiveness, relevance, impact and sustainability (for definitions of the evaluation criteria, see the UN-Habitat Evaluation Policy, page 3, paragraph 10).
5. Evaluation approach and methodology
- Specifying the evaluation approach and methodology is often challenging. This part of the TOR should describe the steps and activities that will be undertaken to answer the evaluation questions. At UN-Habitat, development of the evaluation approach and methodology consists of three steps:
- Determining the design
- Choosing information collection methods
- Determining the method(s) of data analysis
Determining the design
In order to establish whether an intervention has brought about change, the situation before and after the implementation of the intervention must be compared. This method requires that baseline data be established before project implementation.
If changes have been observed after the implementation of the intervention, it is important to determine whether the changes observed can be directly attributed to UN-Habitat’s contribution. One way to do this is to explore the “counterfactual”, which means asking “What would have happened without UN-Habitat’s involvement?” UN-Habitat’s contribution is determined with more certainty if it can be shown that a similar change did not take place for groups or countries that were not targeted by the intervention.
For many evaluations it would be a challenge to ascertain this information, for the following reasons: (i) it is difficult to attribute a change directly to UN-Habitat’s involvement, and (ii) it is difficult to compare the situation of countries or regions because of differences in historical, political, social and economic conditions. As UN-Habitat’s work is carried out predominantly at global, regional, national and local levels, it is not easy to find suitable comparison groups. For these reasons it is advisable to carry out a pre- and post-intervention comparison.
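To make the pre- and post-intervention comparison concrete, the sketch below uses hypothetical indicator values to show how the observed change is computed, and how it would be netted against a comparison-group trend if one were available (a simple difference-in-differences, which, as noted above, is rarely feasible). The variable names and figures are illustrative assumptions only.

```python
# Hypothetical pre/post comparison for an outcome indicator
# (e.g., share of households with access to improved water supply).
baseline_target = 0.42   # indicator before the intervention, target area
endline_target = 0.61    # indicator after the intervention, target area

observed_change = endline_target - baseline_target
print(f"Observed change in target area: {observed_change:.2f}")  # 0.19

# If a comparable non-targeted group existed, the counterfactual trend could
# be netted out with a simple difference-in-differences.
baseline_comparison = 0.40
endline_comparison = 0.47
counterfactual_trend = endline_comparison - baseline_comparison  # 0.07

attributable_change = observed_change - counterfactual_trend
print(f"Change net of comparison-group trend: {attributable_change:.2f}")  # 0.12
```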
Choosing data collection methods
The methodology and evaluation questions should guide the determination of the data collection method that would be most appropriate. The following considerations may help to determine which method of data collection would be appropriate:
- What data is already available and what needs to be collected?
- What data collection method will best answer the evaluation questions?
- What resources and time are available for data collection?
- What method will ensure stakeholder involvement?
- Would the validity, accuracy and reliability (consistent results using the same method) of data be strengthened through a mixed qualitative/quantitative approach?
The quality of the evaluation very much depends on the methods used. Key elements generally include:
- The methodological framework (document review, desk study, interviews, field visits, questionnaires, observation and other participatory techniques, participation of partners and stakeholders, benchmarking)
- Expected data collection methods (instruments used to collect information)
- Availability of other relevant data, such as existing data from similar programmes
- Process for verifying findings
Determining method(s) for data analysis
Analysis and interpretation of results is a critical exercise. Data analysis is the search for patterns and relationships within the data, and is guided by the evaluation questions. Many different means for analysing qualitative and quantitative data exist. Whichever method is chosen, the evaluation manager and the reference group, if established, should work with the evaluation team to place the findings within the context of the programme or organization; identify possible explanations for unexpected results; and determine what conclusions can be drawn from the data without unduly influencing the recommendations.
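As a minimal illustration of analysing quantitative data against an evaluation question, the sketch below disaggregates a simple indicator by sex before patterns are interpreted. The records, field names and values are assumptions for demonstration, not real evaluation data.

```python
# Hypothetical survey records; disaggregating a simple indicator by sex
# before interpreting patterns in the data.
responses = [
    {"sex": "F", "benefited": True},
    {"sex": "F", "benefited": False},
    {"sex": "F", "benefited": True},
    {"sex": "M", "benefited": True},
    {"sex": "M", "benefited": False},
]

def share_benefited(records, sex):
    """Share of respondents of a given sex reporting that they benefited."""
    group = [r for r in records if r["sex"] == sex]
    return sum(r["benefited"] for r in group) / len(group) if group else None

for sex in ("F", "M"):
    print(sex, f"{share_benefited(responses, sex):.0%}")  # F 67%, M 50%
```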
6. Stakeholder participation
- This section should specify the involvement of key stakeholders, as appropriate, and provide a sound rationale. It should be clear how specific stakeholders will participate, i.e., in planning and design, data collection and analysis, reporting and dissemination, and/or follow-up.
7. Evaluation team composition
- The expertise, skills, and experience needed will depend on the scope and methodology of the evaluation. The TOR should specify as clearly as possible what the profile of the evaluator or team should be, to attract strong candidates to conduct the evaluation. Useful details in this section relate to:
- Whether the evaluation is to be conducted by an individual or a team, or whether both possibilities could be considered.
- The specific expertise, skills and prior experience the evaluators are required to have. Evaluators must have extensive experience in carrying out evaluations, technical knowledge of the topic being evaluated, as well as other specific expertise, such as country-specific knowledge, language skills, and an understanding of UN-Habitat and the organizational context in which it operates. The M&E Unit will be available to provide support in identifying suitable candidates.
- Distinguishing between desired and mandatory competences, as well as whether competencies are required by the whole team or by certain team members;
- The expected distribution of responsibilities among the team leader and other team members.
- Additional information that will assist in gauging the qualifications of evaluators should be noted in this section.
8. Responsibilities and accountabilities
- This section of the TOR specifies the roles, responsibilities and management arrangements for carrying out the evaluation. Any decision-making arrangements (such as a steering committee or an advisory or reference group) should be described here in terms of their functions. The responsibilities of the evaluation manager, evaluation team leader and team members, as well as other stakeholders should be included in this section.
9. Deliverables
- The outputs and reporting requirements expected for the evaluation should be specified in this section. Generally, the TOR calls for the evaluator to produce three primary deliverables: (i) an evaluation work plan (inception report); (ii) a draft evaluation report for review; and (iii) a final report (including an executive summary). The standard format for preparing the final report is set out in this guide.
10. Evaluation Schedule
- The time frame for products, including milestones should be included in this section. An approximate timetable (to guide preparation of the evaluation work plan) should be prescribed. Alternatively, the TOR may specify the expected scope and deliverables, and request that evaluators propose a realistic time frame.
11. Budget and payment schedules
- The evaluation manager should have cost projections for the evaluation. In cases where a limited budget is likely to constrain the scope and methodology, a good practice is to state the available budget and ask evaluation proposers to describe what they can achieve with that budget. Alternatively, the TOR can ask evaluators to come up with their own estimates based on the tasks they propose. For TORs targeting individual consultants, UN-Habitat will set a budget for the consultant’s fee, with the expectation that travel costs will be arranged and covered separately.
12. Cross-cutting issues: human rights, gender, youth and climate change/environment in evaluations
- A number of cross-cutting issues need to be taken into account in carrying out evaluation studies. These include gender mainstreaming, human rights, climate change and capacity building. UN-Habitat is committed to ensuring that these basic principles are reflected in all its programming activities and throughout the project cycle.
- UN-Habitat’s Gender Policy and Gender Equality Action Plan aim at mainstreaming a gender perspective and practicing a gender-sensitive approach in all UN-Habitat interventions. All UN organizations are guided by the United Nations Charter and have a responsibility to meet obligations towards the realization of human rights. Many projects have an impact on the physical environment and climate change, both directly and indirectly. For any project to be truly sustainable, it is important that issues of environmental impact are taken into account. UN-Habitat’s environmental assessment requirements (2004) emphasize integrating environmental assessments in project planning, implementation, monitoring and evaluation, in order to minimize adverse impacts that programmes may have on the environment.
13. Gender equality and empowerment
- The “gender approach” is not concerned with women per se, but with the social construction of gender and the assignment of specific roles, responsibilities and expectations to women and men. The gender approach does not focus solely on productive or reproductive aspects of women’s and men’s lives. Rather, it analyses the nature of the contribution of every member of society both inside and outside the household, and emphasizes the right of everyone to participate in the development process and benefit from the results of the process. Gender analysis should be considered throughout the process from programme planning and design to programme evaluation.
- Indicators need to allow for measurement of benefits to women and men, and these will depend on the nature of the project under evaluation. Indicators need to capture quantitative and qualitative aspects of change. Quantitative indicators should be presented in a sex-disaggregated way. Qualitative information is also critical, and information will need to be collected through participatory methods such as focus groups and case studies. Another area of importance is the need to develop indicators of participation. Examples include pinpointing levels of men’s and women’s participation; women’s and men’s perceptions of the degree of group solidarity and mutual support; women’s and men’s perceptions of the ability of group members to prevent and resolve conflicts; and the participation of women and poorer people in decision-making processes. There is no agreed-upon method to measure empowerment, but it usually involves two aspects:
- personal change in consciousness characterized by a movement towards control, self-confidence and the capacity to make decisions and determine choices; and
- the creation of organisations aimed at social and political change.
14. Human rights
- Human rights are the civil, cultural, economic, political and social rights inherent to all human beings, whatever their nationality, place of residence, sex, sexual orientation, national and ethnic origin, colour, ability, religion, language, or any other factor. They are considered universal, interdependent, and non-discriminatory. All human beings are entitled to these rights without discrimination. The strategy for implementing human rights in UN programming is called the Human Rights-Based approach (HRBA).
- Key concepts of HRBA are:
- The development process is normatively based on international human rights standards and principles;
- It aims for the progressive achievement of all human rights;
- It recognizes human beings as rights-holders and establishes obligations for duty-bearers. It focuses on identifying capacity gaps, and developing capacities accordingly;
- It focuses on discriminated and marginalized groups;
- It gives equal importance to the outcome and process of development.
15. Youth
- Similar to the analysis of gender equality and empowerment, a youth analysis should be part of the total process from project planning and design to project evaluation.
- Indicators need to allow measurement of benefits to youth, and these will depend on the nature of the project under evaluation. Indicators need to capture quantitative and qualitative aspects of change. Quantitative indicators should be presented in an age-disaggregated way. Qualitative information is also critical, and information will need to be collected through participatory methods such as focus groups and case studies.
- Indicators of participation are also important. Examples include pinpointing levels of youth participation; youth perceptions of the degree of group solidarity and mutual support; perceptions of the ability of group members to prevent and resolve conflicts; and youth participation in decision-making processes.
16. Climate Change/ Environmental Aspects
Many projects have an impact on the physical environment, both directly and indirectly. For any project to be truly sustainable, it is important that issues of environmental impact are taken into account. The following are some key questions, from which those most appropriate to the project should be selected:
- Was an environmental impact assessment made?
- Was environmental damage done by or as a result of the project?
- Did the project respect traditional ways of resource management and production?
- Were environmental risks managed during the course of the project? Will these continue to be managed?
- Overall, will the environmental effects of the project’s activities and results jeopardize the sustainability of the project itself or reach unacceptable levels?
The TOR for an evaluation should contain questions to assess whether human rights, gender and environmental dimensions have been adequately considered by the intervention during its design and implementation. The evaluation manager will have the greatest influence at the initial consideration stage and it is important that they have a good understanding of the application of human rights, gender, youth and climate change/environment in the UN system. If this expertise is missing, it is advisable to seek assistance during the planning and development of TOR.
UN-Habitat Quality Checklist for Evaluation Terms of Reference and Inception Reports
The following checklist (table 26) provides a basis for reviewing the quality of the TOR and inception reports. It should be used by the drafters of the evaluation TOR and inception reports to ensure that all necessary elements are contained within the documents. The checklist is drawn from the UNEG Quality Checklist for Evaluation Terms of Reference and Inception Reports (2010) with modifications.
4.2.2.4 Selection of the evaluator or evaluation team
Evaluations should be conducted by well-qualified evaluators, selected through an established contracting process. A good team should have an appropriate mix of skills and perspectives, and the team leader is responsible for organizing the work distribution, and for making sure that all team members contribute meaningfully. The number of evaluators in a given team will depend on the size of the evaluation. Multi-faceted evaluations will need to be undertaken by a multi-disciplinary team. It is important to uphold the UNEG norms and standards on competences and ethics in order to minimize conflict of interest and maximize the objectivity of the evaluation.
The engagement of an evaluation team essentially involves four steps: (i) deciding on the sourcing options; (ii) identifying potential candidates; (iii) notifying the successful candidates; and (iv) negotiating and signing the contract. In UN-Habitat, the selection process is guided by UN procurement rules. The members selected must bring different expertise and experience to the evaluation team. If possible, at least one member of the team should be experienced in the sector or technical areas addressed by the evaluation, or have knowledge of the subject to be evaluated, and at least one other member should preferably be an evaluation specialist experienced in using specific evaluation methodologies.
The composition of the evaluation team should have a gender balance and geographical diversity, and should include professionals from the countries or regions being evaluated. The skills and other qualifications required for the evaluators vary from case to case, but the following are usually important:
1. Evaluation expertise
For an evaluation to be successful, the team must have extensive experience in carrying out evaluations and an understanding of RBM principles, as well as other specific expertise such as country-specific knowledge, language skills and an understanding of UN-Habitat and the context in which it operates. The evaluators should be able to present credible findings derived from evidence, put forward conclusions and recommendations supported by those findings, facilitate stakeholder participation and present evaluation results effectively to diverse audiences. The United Nations Standards for Evaluation in the UN System25 advise that work experience in the following areas is particularly important:
- Design and management of evaluation processes
- Survey design and implementation
- Social science research
- Programme/project/policy planning, monitoring and management
It is also recommended that an evaluator be identified with specialized experience, including data collection and analytical skills, in the following areas:
- Understanding of gender considerations
- Understanding of human rights-based approaches to programming
- Logic modelling/logical framework analysis
- Qualitative and quantitative data collection and analysis
- Participatory approaches
In addition, personal skills in the following areas are important:
- Teamwork and cooperation
- Capability to bring together diverse stakeholders
- Communication skills
- Strong drafting skills
- Analytical skills
- Negotiations skills
2. Subject matter expertise
Substantive expertise is always important, although more so in some evaluations than in others. It is not until the evaluation questions have been formulated that the need for subject-matter expertise can be more precisely defined.
3. Local knowledge
A good understanding of local social and cultural conditions is often necessary to help evaluators understand whether an intervention has been successful. When the evaluation involves contacts with local level officials or representatives of target groups, local language skills may be required. In any case, members of the evaluation team should familiarize themselves with the cultural and social values and characteristics of the intended beneficiaries. In this way, they will be better equipped to respect local customs, beliefs and practices throughout the evaluation work.
4. Gender equity representation
An evaluation team should be gender balanced and geographically diverse, and should aim to include professionals from the countries or regions concerned. Using local consultants can also help build evaluation capacity in the countries concerned.
5. Ethical considerations
This is a critical element of selecting and managing an evaluation team. The UNEG website26 provides a code of conduct setting out the ethical attitudes and behaviours expected of evaluators. This code of conduct must be an integral part of any contract with a consultant undertaking an evaluation for UN-Habitat.
UN-Habitat maintains a roster of consultants, and a number of online rosters maintained by professional evaluation associations can also be useful in searching for qualified evaluators. Box 26 (below) provides resources for identifying an external evaluator.
Profile of the evaluation consultants
Depending on the complexity of the evaluation, UNEG has outlined levels of expertise for reference27. In general, evaluators should have professional work experience, specific technical knowledge, understanding of evaluation process and interpersonal skills. The following table (table 27) is an evaluator selection checklist developed from the UNEG Standards.
Contract negotiations
The evaluation manager selects and recommends the successful consultant(s) to the recruitment sections for drawing up the contract. Before evaluation work is undertaken within UN-Habitat, the evaluation manager should initiate contract negotiations with the evaluator(s). The intent is to establish a mutual understanding of what is to be done, by when and at what cost, in the best interest of the organization. Methods of payment should also be negotiated, for example:
- 20% upon signing the contract
- 40% upon submission of the draft report
- 40% upon approval of the final report
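As a hypothetical illustration of such a phased schedule (the fee amount is an assumption, not a UN-Habitat rate), the instalments can be computed directly from the agreed total:

```python
# Hypothetical illustration of the 20/40/40 payment schedule above
# for an assumed total consultancy fee of US$ 30,000.
total_fee = 30_000
payments = {
    "on signing the contract": 0.20 * total_fee,          # 6,000
    "on submission of the draft report": 0.40 * total_fee,  # 12,000
    "on approval of the final report": 0.40 * total_fee,    # 12,000
}
print(payments)
```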
Briefing the evaluation team
It is recommended that a briefing session be organized with the evaluation team before the start of the evaluation. The briefing should cover the following:
- Introducing evaluation team members, particularly if they have not worked with each other before
- Ensuring that the evaluation team understands the programme to be evaluated and the organizational context
- Ensuring a common understanding of the purpose, objectives, scope and limitations of the evaluation
- Providing available documentation, and
- Explaining the reporting requirements.
Documentation that may be useful to the evaluation team is listed in box 27 below:
25 United Nations Evaluation Group (UNEG), Norms and Standards for Evaluation in the UN System, April 2005 (available here)
26 UNEG Code of Conduct
27 UNEG Core Competencies for Evaluators of the UN System (available here)
4.2.2.5 Preparation of evaluation work plan (the inception report)
The evaluation work plan provides an opportunity for evaluators to build on the initial ideas and parameters set out in the TORs, to identify what is feasible, suggest refinements and provide elaboration. It describes the main elements of how the evaluation will be conducted.
It outlines an overview of the intervention being evaluated, the evaluation issues, how findings will be used, the evaluation questions, information sources, evaluation methods, responsibilities and accountabilities, the profile of the evaluation team, a work schedule attaching dates to key milestones for the evaluation, and the budget and payment schedule.
Evaluators are therefore expected to review all relevant information related to the intervention being evaluated and prepare an evaluation work plan (the inception report) based on (i) the TOR and (ii) the planning and approval documents. Provision for the preparation of the evaluation work plan should be made in the TOR, and in such cases UN-Habitat normally requires that the evaluation work plan be approved before the evaluation can proceed to the next phase.
Once approved, the evaluation work plan becomes the key management document guiding delivery of the evaluation.
It is important that both the evaluation manager and the evaluation team come out of the planning process with a clear understanding of how the evaluation work is to be performed. The following table (table 27) provides the main elements of an evaluation work plan.
4.2.2.6 Ethical conduct of Evaluation
Obligations of Evaluators
Independence
Evaluation in UN-Habitat should be demonstrably free of bias. To this end, evaluators are recruited for their ability to exercise independent judgment. Evaluators shall ensure that they are not unduly influenced by the views or statements of any party. Where the evaluator or the evaluation manager comes under pressure to adopt a particular position or to introduce bias into the evaluation findings, it is the responsibility of the evaluator to ensure that independence of judgment is maintained.
Where such pressures may endanger the completion or integrity of the evaluation, the issue should be referred to the evaluation manager who will discuss the concerns of the relevant parties and decide on an approach that will ensure that evaluation findings and recommendations are consistent, verified and independently presented (see below Conflict of Interest).
Impartiality
Evaluations must give a comprehensive and balanced presentation of the strengths and weaknesses of the policy, programme, project or organizational unit being evaluated, taking due account of the views of a diverse cross-section of stakeholders. Evaluators shall:
- Operate in an impartial and unbiased manner at all stages of the evaluation.
- Collect diverse perspectives on the subject under evaluation.
- Guard against distortion in their reporting caused by their personal views and feelings.
Credibility
Evaluation shall be credible and based on reliable data and observations. Evaluation reports shall show evidence of consistency and dependability of data, findings, judgements and lessons learned; appropriately reflecting the quality of the methodology, procedures and analysis used to collect and interpret the data.
Evaluation managers and evaluators shall endeavour to ensure that each evaluation is accurate, relevant, and timely, and provides a clear, concise and balanced presentation of the evidence, findings, issues, conclusions and recommendations.
Conflicts of Interest
Conflicts of interest shall be avoided as far as possible so that the credibility of the evaluation process and product shall not be undermined. Conflicts of interest may arise at the level of the Evaluation Unit, or at the level of individual staff members or consultants. Conflicts of interest should be disclosed and dealt with openly and honestly.
Evaluators are required to disclose in writing any past experience, or that of their immediate family, close friends or associates that may give rise to a potential conflict of interest.