2.3.1 RBM in Project Planning

 

This section of the guide responds to a staff need for a better understanding of how projects are linked to the delivery of UN-Habitat's programme- and subprogramme-level strategic results. It links projects to the work programme and to strategic planning.

It is also designed to give staff the information they need to start applying results-based management (RBM) principles to project planning, monitoring and reporting.

After reading this section of the guide, staff should be able to:

  1. closely align project results to work programme outputs and strategic results
  2. confidently apply the RBM approach throughout the project management cycle
  3. share a common understanding of, and commitment to, RBM

What is the link between project objectives and strategic results?

Projects are the vehicles through which UN-Habitat delivers its work programme outputs and, ultimately, its strategic results, as contained in the six-year strategic plan and the biennial strategic framework.

Results from projects aggregate and contribute to the delivery of higher-level results (work programme outputs, expected accomplishments (EAs) and strategic results).

For instance, if projects are not implemented as planned to deliver work programme outputs, then UN-Habitat will be unable to deliver the global social and economic benefits outlined in the Strategic Plan.

Figure 24: Link between project objectives and strategic results

 

 

To ensure that the results at project level contribute to delivering the planned strategic results or higher results, project reviews are based on the following criteria:

  • The coherence between the approved programme of work and the contents of projects
  • Coherence and alignment with subprogramme-level expected accomplishments
  • Collaboration and joint programming across focus areas/subprogrammes
  • Relevance of projects (to beneficiaries and identified problem)
  • Effectiveness (the likelihood that the proposed theory of change will deliver results, given the assumptions and identified risks)
  • Feasibility and appropriateness of the intervention (likelihood of success given time & available resources)
  • Technical quality of the project, feasibility and appropriateness of measures for managing any foreseen project risks
  • Clarity of implementation arrangements in showing the distinct roles and responsibilities of, and budget allocation to partners, as well as to branches and regional offices
  • Internal cooperation agreements that show what each Branch and Regional Office is responsible for, including milestones and progress reporting/monitoring roles
  • Clarity on the engagement to be undertaken with stakeholders
  • Utility of the monitoring plan for tracking progress in implementation against delivery by Branches and Regional Offices
  • Cost effectiveness of proposed budgets, which may be assessed on the basis of comparison with similar projects
  • Utility of the project design for addressing the needs of countries, i.e., where applicable, checking the relevance of projects to country needs with the regional offices
  • Potential negative environmental and social impacts of projects
  • Gender, youth, human rights and pro-poor responsiveness
  • Sustainability (the likelihood that benefits will be maintained after the project ends) and the approach to achieving it
  • Replication potential and implementation arrangements for promoting replicability
  • Horizontal integration across sub-programmes

Figure 25: Delivering as One UN-Habitat: From Outputs to Expected Accomplishments

 

UN-Habitat has adopted three main RBM working tools at project level (based on best practices of lead development organizations) to make managing for results throughout the entire life cycle of an investment or project easier for UN-Habitat staff, partners and executing agencies: (i) the logic model (LM); (ii) the logical framework (logframe), which includes the performance measurement framework (PMF); and (iii) the risk register. These tools are meant to be flexible working documents throughout the life cycle of the investment and can be adjusted or modified under certain circumstances.

The LM and PMF are usually at least partially completed during the planning and design stages of an investment and refined during the development of the implementation plan (this will vary depending on the type of programming in question). The risk register is completed during project design and updated on a regular basis during the project’s implementation.

(a) Theory of Change and the Logic Model (LM): What is a Logic Model/Results Chain and Theory of Change?

A Theory of Change is a diagram that explains how a programme impacts on its beneficiaries. It outlines what a programme does for its beneficiaries, the ultimate impact it aims to have on them, and the separate outcomes that lead to or contribute to that impact. Sometimes called a "results chain" or LM, it is a depiction of the causal or logical relationships between the inputs, activities, outputs and outcomes of a given policy, programme or investment.

At the core of “results thinking” is the concept of the results chain, a schematic illustration of the intended causal relationships among various elements (the inputs, activities, outputs and outcomes of a given policy, programme, or initiative) over time, including underlying assumptions. The results chain clearly shows the plausible, causal relationships among its elements, while also clarifying the various cyclical processes and feedback loops planners need to be aware of. The basic rationale is to plan from right to left by initially focusing on impacts and intended outcomes and then identifying the outputs, activities, and inputs required to achieve them. Tracking performance then goes from left to right, feeding information back to inputs and activities to make necessary adjustments and improvements, thus leading to better results.

A basic principle in results planning is to start with the intended impact and outcomes and then identify the outputs, activities and inputs required to achieve them.

The method implies a thorough analysis of the problem that needs to be solved, what changes are desired and what activities and inputs are necessary to achieve them.

Key questions are:

  • What is the present situation or problem (called the undesired situation A)?
  • What do we want to achieve in, for instance, 3 or 5 years (called the desired result or situation B)?
  • How do we get from where we are (A) to where we want to be in 3 or 5 years (B)?
  • What are the risks and assumptions in getting from A to B?
  • How will we know we are succeeding in creating the change we want?

 

 

The LM is divided into six levels: inputs, activities, outputs, sub-expected accomplishments, expected accomplishments and the project objective. Each level represents a distinct step in the causal logic of a policy, programme or investment.

The bottom three levels (inputs, activities and outputs) address the how of an investment, while the top three levels (the outcome levels) describe the actual changes that take place: the development results.

Figure 26: Example of a Vertical Project Results Chain

UN-Habitat’s LM template does not include inputs and starts instead at the activity level. To complete a logic model template you need to write clear and concise result statements.
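The six-level chain described above can be sketched as a simple ordered structure. This is an illustrative aid only; the level names and function names below are assumptions for the sketch, not an official UN-Habitat schema.

```python
# Illustrative sketch of a logic model (LM) as an ordered results chain.
# Planning runs right to left (objective first); tracking runs left to right.

LM_LEVELS = [
    "inputs",                        # resources (not in UN-Habitat's LM template)
    "activities",                    # what the project does
    "outputs",                       # what is produced or generated
    "sub_expected_accomplishments",  # immediate outcomes
    "expected_accomplishments",      # intermediate outcomes
    "project_objective",             # the ultimate change sought
]

def plan_order(levels):
    """Planning starts with the intended impact and works backwards."""
    return list(reversed(levels))

def tracking_order(levels):
    """Performance tracking feeds information forward from inputs onwards."""
    return list(levels)

# Planning begins at the project objective; tracking begins at inputs.
assert plan_order(LM_LEVELS)[0] == "project_objective"
assert tracking_order(LM_LEVELS)[0] == "inputs"
```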

Figure 27: Logic Model as contained in the Concept note

Drafting or Assessing Result Statements during Planning

What is a good result (EA) statement?

A result statement outlines what a policy, programme or investment is expected to achieve or contribute to. It describes the change stemming from UN-Habitat's contribution to a development activity in cooperation with others. A statement of results should illustrate the type of change that may be expected to occur because of a specific intervention. It should be: (1) as specific as possible, (2) realistic in relation to the time and resources available, and (3) measurable in some (qualitative or quantitative) way.

 

 

 

Developing a logic model (LM) at project level

Here are the steps that need to be taken to create a logic model; the order in which they are undertaken will depend on the status, scope and size of the investment/project:

Step 1: Identify ultimate beneficiaries, intermediaries, and stakeholders.

Step 2: Ensure that the right people (branch, environmental, governance and gender specialists, executing agency, local stakeholders, beneficiaries etc.) are at the table; remember that this is a participatory exercise. This can be done via brainstorming, focus groups, meetings, consultative emails, etc. (Please note that the “right people” may vary based on the type of programming). For directive programming, ensure that country partner organizations, beneficiaries and stakeholders (including women, men and children) are at the table during the design/development of the LM. For responsive programming, ensure that the right UN-Habitat team is at the table during the review and assessment of the LM. The review team should include the development officer or project team lead, branch environmental, governance and gender specialists, and other sector specialists and performance management advisors. As part of your due diligence, you should also validate the LM through a participatory approach.

Step 3: Identify the project’s objective. Start by identifying the problem the investment intends to address. The ultimate objective of an investment is its raison d’être: the highest level of change we want to see to solve that problem. Make sure to analyze the context (cultural, socio-political, economic, and environmental) surrounding the problem.

Step 4: Identify main activities for both UN-Habitat and partners. Brainstorm the main or key activities of the investment, making sure to address contributing contextual factors. If possible, group activities into broad categories or work-packages to avoid duplication.

Step 5: Identify outputs for each activity package.

Step 6: Make sure activity statements begin with a verb in the imperative form and that outputs are written as completed actions. Outputs are usually things that are bought, produced or generated with project money and that can be counted.

Step 7: Identify logical EA results for immediate and intermediate levels.

Step 8: A logic model is like a pyramid; it gets smaller the closer you move toward the highest level. Three or four changes at the immediate level (changes in access, ability, awareness) may lead to only two changes at the intermediate level (practice, behaviour). Similarly, two changes at the intermediate level will lead to only one change at the ultimate level (change in state). The logic model template is flexible and will allow you to change the number of boxes at each level to reflect the logic of your investment. Make sure the number of EAs decreases as you move upwards towards the project objective. Try also to have only one or two EAs per box.

 

Step 9: Identify linkages. Check back and forth through the levels (from activities to project objective and from project objective to activities) to make sure everything flows in a logical manner. Make sure there is nothing in your EAs that you do not have an activity to support. Similarly, make sure that all your activities contribute to the EAs listed.

Step 10: Validate with stakeholders/partners. Share the draft logic model with colleagues, branch specialists, stakeholders, and partners, etc., to ensure that the EAs meet their needs and that the investment will actually work the way you have envisioned it.

Step 11: Where required, write the narrative text to illustrate linkages and explain the causality of the logic model. The narrative should speak to the arrows in the logic model: the causal relationship between the levels and HOW we see the proposed activities leading to the expected changes. The most compelling narratives are those that are succinct and use brief, concrete, evidence-based examples to support these explanations.
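The "pyramid" rule from Step 8 can be sketched as a simple check. This is an illustrative sketch only; the function name and example counts are hypothetical, not part of the UN-Habitat template.

```python
# Sketch of the Step 8 pyramid check: the number of result boxes should
# not increase as you move up the logic model towards the objective.

def is_pyramid(counts_bottom_to_top):
    """True if each level has no more boxes than the level below it."""
    return all(upper <= lower
               for lower, upper in zip(counts_bottom_to_top,
                                       counts_bottom_to_top[1:]))

# e.g. 4 immediate-level changes -> 2 intermediate changes -> 1 ultimate change
assert is_pyramid([4, 2, 1])
# A level with MORE boxes than the one below it breaks the pyramid:
assert not is_pyramid([2, 3, 1])
```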

 

(b) Performance measurement framework

What is Performance Measurement for a project?

Project performance aggregates to contribute to subprogramme results or EAs, and subprogramme EAs in turn, contribute to strategic results. It is important to establish a structured plan for the collection and analysis of performance information. At UN-Habitat, the performance measurement framework (PMF), commonly called the logframe, is the RBM tool used for this purpose at project level.

Why Performance Measurement?

Performance measurement is undertaken on a continuous basis during the implementation of investments so as to empower managers and stakeholders with "real-time" information (use of resources, extent of reach, and progress towards the achievement of outputs and outcomes). This helps identify strengths, weaknesses and problems as they occur, and enables project managers to take timely corrective action during the investment's life cycle. This in turn increases the chances of achieving the expected results.

Monitoring provides accurate and up-to-date information on progress:

  • To provide regular feedback and early indications of progress, or lack thereof;
  • To track the actual performance or situation against what was planned/expected.

Monitoring is for the purpose of learning and decision-making:

  • To detect early signs of potential problems and success areas;
  • To take corrective action;
  • To improve the design and performance of ongoing programmes;
  • To generate knowledge about what works and what does not.

Monitoring serves to improve accountability:

  • To ensure that a programme or process continues to be relevant, and is achieving results as intended;
  • To make an overall judgement about the effectiveness of interventions.

What is a PMF or Project Logframe?

A performance measurement framework is a plan to systematically collect relevant data over the lifetime of an investment to assess and demonstrate progress made in achieving expected results. It documents the major elements of the monitoring system and ensures that performance information is collected on a regular basis. It also contains information on baselines, targets, and responsibility for data collection.

As with the LM, the PMF should be developed and/or assessed in a participatory fashion, with the inclusion of local partners, beneficiaries, stakeholders and relevant UN-Habitat staff. UN-Habitat has a standard PMF Template.

 

The PMF is divided into eight columns: expected results, performance indicators, baseline data, targets, data sources, data collection methods, frequency and responsibility. To complete a PMF you will need to fill in each column accurately.

Definitions:

Expected results column:

The expected results column is divided into four rows, one each for outputs, sub-EAs (immediate outcomes), EAs (intermediate outcomes) and the project objective/goal. To complete this column, simply copy the result statements from your LM into the appropriate rows.

Performance indicators:

Performance indicators are what you will use to measure actual results. A performance indicator is a quantitative or qualitative unit of measurement that specifies what is to be measured along a scale or dimension, but is neutral; it does not indicate a direction of change, nor does it embed a target. It is important that stakeholders agree a priori on the indicators that will be used to measure the performance of the investment.

Quantitative performance indicators are discrete measures such as number, frequency, percentile, and ratio, (e.g., number of human rights violations, ratio of women-to-men in decision-making positions in government).

Qualitative performance indicators are measures of an individual or group’s judgment and/or perception of the presence or absence of specific conditions, the quality of something, or an opinion about something (e.g., client opinion of the timeliness of service).

Qualitative indicators can be expressed concretely when used to report on achievement of results. They should convey specific information that shows progress towards results and is useful for project management and planning.

For more on criteria for strong performance indicators see section 2.2.2, page 47.

Steps to complete a PMF or logframe

The development of the PMF starts at the planning and design phase. Remember, some elements of the PMF may be established during or after the start of project implementation (e.g., collection of baseline data and setting of some targets).

Step 1: Ensure that the information for your PMF is developed in a participatory fashion, including key local stakeholders, partners, beneficiaries and the appropriate UN-Habitat specialists.

Step 2: Cut and paste the objective, expected accomplishments, sub-expected accomplishments and outputs from your Concept LM into the appropriate boxes in the PMF template.

Step 3: Establish performance indicators for your expected outcomes and outputs and enter the performance indicators for the final, intermediate and immediate outcomes and outputs. Validate and check the quality of your performance indicators. Do they have: validity, reliability, sensitivity, utility, and affordability?

Step 4: Establish the “Data source for verifying indicator” and “Data collection method” for your chosen performance indicators. Look to include multiple lines of evidence wherever possible to increase the reliability of your performance data.

Step 5: Fill in the “Frequency” and “Responsibility” columns for each performance indicator. Decide whether information on each performance indicator needs to be collected on an ongoing basis as part of performance monitoring, or periodically (quarterly, biannually or annually).

Step 6: Fill in baseline data where it exists. If reliable historical data on your performance indicators exists (in the form of government data, information from a previous phase of the investment or information gathered during a needs analysis), then it should be used; otherwise you will have to collect a set of baseline data at the first opportunity (within the first 6-12 months after project commencement).

If you will be gathering the data later, indicate this in your PMF with a statement like: “Baseline data to be collected at investment inception” or “Data to be provided by the Implementing organization after communities identified.” If possible set the date by when this will be completed (this should be done within the first year).

Step 7: Establish realistic targets for each indicator in relation to the baseline data you have identified (for years 1, 2, 3, etc.). This sets the expectations for performance over a fixed period of time. Key targets based on gaps and priorities identified during initial analysis are necessary to establish budgets and allocate resources, and play an important role in project planning and design. Others may be established later, once a baseline study has been conducted.
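The steps above can be summarized as one row of the PMF. The sketch below is illustrative only: the field names follow the eight columns described earlier, but the class and all example values are hypothetical, not UN-Habitat's official template or data.

```python
# Minimal sketch of one PMF (logframe) row. Field names mirror the eight
# PMF columns; the example values are invented for illustration.
from dataclasses import dataclass

@dataclass
class PMFRow:
    expected_result: str    # copied from the logic model (Step 2)
    indicator: str          # neutral unit of measurement (Step 3)
    data_source: str        # where evidence for the indicator comes from (Step 4)
    collection_method: str  # how the data will be gathered (Step 4)
    frequency: str          # ongoing, quarterly, biannual, annual... (Step 5)
    responsibility: str     # who collects and reports (Step 5)
    baseline: str           # or a note that it will be collected later (Step 6)
    targets: dict           # realistic targets per year, set against baseline (Step 7)

row = PMFRow(
    expected_result="Improved access to basic urban services",
    indicator="Number of households connected to piped water",
    data_source="Municipal utility records",
    collection_method="Administrative data review",
    frequency="Annual",
    responsibility="Implementing partner M&E officer",
    baseline="Baseline data to be collected at investment inception",
    targets={"year_1": 2000, "year_2": 3500},
)
```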

What are project assumptions?

Assumptions refer to the positive conditions that are necessary to ensure that:

  • planned activities will produce the expected results; and
  • the logical, cause-effect relationship between different results will occur as expected.

Implicit and explicit assumptions underlying projects need to be identified and assessed in terms of their validity. Assumptions that turn out to be incorrect need to be addressed, although some may prove to be project ‘killer’ assumptions. Assumptions that may turn out to be unfounded include:

  • that governments will enforce agreed upon policies;
  • that the private sector will participate;
  • that technical alternatives function as thought;
  • that development-environment trade-offs can be reconciled;
  • that the price of fossil fuels will remain high;
  • that human expansion into forests or reserves can be controlled; and many more

External assumptions are closely related to impact drivers, except that they are judged to be largely beyond the power of the project to influence or address. The critical assumptions that have already been identified in project documentation may well be a useful starting point for identifying the assumptions likely to influence the outcomes-impacts pathways. Achieving results depends on whether or not the assumptions you make remain or prove to be true. Incorrect assumptions at any stage of the results chain can become an obstacle to achieving the expected results.

 

(c) Risk register

What is a project risk? Risk can be understood in many ways:

  1. Certainty
  2. Uncertainty
  3. The unknown
  4. A surprise
  5. Danger
  6. Something that can go wrong
  7. Failure to get things right
  8. A missed opportunity

Definitions:

Risk

Risk is the chance of something happening that will have a negative impact on the project’s objectives. Risk appetite is the amount of risk, broadly speaking, that an entity is willing to accept in pursuit of value. It can be expressed in quantitative or qualitative terms (e.g., earnings risk vs. reputation risk) and should be considered alongside risk tolerance (the acceptable range of variation).

Risk analysis

Risk analysis identifies how likely it is that the conditions necessary to achieve the expected results will not be present. Risk analysis allows you to consider strategies to manage the risks you identify. Some external factors may be beyond your control, but other factors will be manageable with slight adjustments in the project or approach.

It is recommended that stakeholders take part in the risk analysis as they offer different perspectives and may have key information about the context. The risks associated with achieving outputs are generally low because project managers can make changes as needed to ensure that results are achieved.

Risk Register

A risk register lists the most important risks, the results of their analysis and a summary of risk response strategies. Information on the status of the risk is included over a regular reporting schedule. The risk register should be continuously updated and reviewed throughout the course of a project. Risk is measured in terms of consequences (or impact) and likelihood (or probability).

Integrated Risk Management at UN-Habitat

Integrated Risk Management is a continuous, proactive and systematic process to understand, manage and communicate risk across the organization. Other government departments, donors and private sector companies use similar frameworks.

Elements of integrated risk management:

  1. Development of a project risk profile
  2. Establishment of an integrated risk management framework
  3. Practising integrated risk management at all levels
  4. Ensuring continuous risk management learning

Integrated risk management supports a consistent approach to risk management across the Agency both vertically and horizontally. UN-Habitat is recognized as working in high-risk environments. By providing a common and consistent platform, we can reduce uncertainty for staff and managers and allow them to better understand and manage their risks. As a result, they will be in a position to make informed decisions and take responsible risks where appropriate.

Key objectives of risk management in projects:

Integrated risk management helps UN-Habitat strengthen its decision making process in managing risks that are within its control, and positions the agency to better respond to risks that are beyond its control. Specific objectives are:

  1. Develop a systematic approach to risk management
  2. Contribute to a risk-aware culture
  3. Propose simpler, more effective practices
  4. Provide an on-going scan of key risks
  5. Communicate the benefits of risk management to all stakeholders
  6. Ensure that the framework for managing risk continues to remain appropriate

Figure 29: Categories of risk in UN-Habitat

 

 

Figure 30: Basic Risk Model (adapted from World Bank)

UN-Habitat uses a standardized risk register template (see Table 13).

Steps to complete a risk register:

Step 1: Under “Risk,” write down the key risks to the project. There should be at least two risks each for the categories operational, financial and development risks, and at least one risk in the category of reputational risk.

Step 2: For each risk selected, establish the current risk level, i.e. the intensity of the risk. A risk map or some other tool may be useful for determining the level. Identify the risk on the four-point scale below, and apply the correct colour.

Step 3: Over a regular monitoring schedule, re-rate the risk and apply the corresponding colour. Monitoring periods will vary according to the project, but a typical period is three months.

Step 4: Indicate if the risk is the same as one found in the programme risk assessment (if one exists).

Step 5: A risk is an uncertainty about a result. Indicate the level of the result affected, as found in your logic model.

Step 6: Give a brief summary of the risk response strategies that will be used to manage the risk or to prevent a risk event.

Step 7: Indicate the risk owner. If possible, there should be only one person per box. The owner is the person who actually has to deal with a given risk event.

Risk Monitoring: In the real world of development, the risk profile will change constantly during the life of the project. As risks arise or disappear, change the corresponding risk definitions and risk level. Also track the use and effectiveness of the risk response strategies, and change the “Risk Response” column as necessary.
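The steps above can be sketched as a simple risk register entry with re-rating over the monitoring schedule. This is a hedged illustration: the four-point scale, colour mapping and field names are assumptions based on Steps 1-7, not the official UN-Habitat template.

```python
# Illustrative risk register entry. The 1-4 scale and colours are assumed
# from the four-point rating scale mentioned in Step 2.

RISK_LEVELS = {1: "green", 2: "yellow", 3: "orange", 4: "red"}

def make_entry(risk, category, level, result_level, response, owner):
    """Steps 1-2 and 5-7: record the risk, rate it, and assign an owner."""
    assert level in RISK_LEVELS
    return {
        "risk": risk,
        "category": category,         # operational, financial, development, reputational
        "current_level": level,
        "colour": RISK_LEVELS[level],
        "result_level": result_level, # level of the result affected, per the LM
        "response": response,         # summary of the risk response strategy
        "owner": owner,               # one person per box where possible
        "history": [level],           # ratings over the monitoring schedule
    }

def re_rate(entry, new_level):
    """Step 3: over a regular monitoring schedule, re-rate and re-colour."""
    entry["current_level"] = new_level
    entry["colour"] = RISK_LEVELS[new_level]
    entry["history"].append(new_level)
    return entry

entry = make_entry("Counterpart funding delayed", "financial", 2, "output",
                   "Stagger procurement; agree fallback budget with donor",
                   "Project manager")
re_rate(entry, 4)  # do not hesitate to rate a risk "Red" if that is its real level
```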

NB: Please do not hesitate to rate risks as “Red” if that is their real level.

Risk factor = Probability × Consequence
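This formula can be computed directly. In the sketch below, the 1-4 rating scales are an assumption consistent with the four-point scale mentioned in Step 2; the function is illustrative, not an official calculation.

```python
# Risk factor = probability x consequence, on illustrative 1-4 rating scales.

def risk_factor(probability, consequence):
    """Combine likelihood and impact into a single risk score."""
    if not (1 <= probability <= 4 and 1 <= consequence <= 4):
        raise ValueError("probability and consequence are rated 1-4 in this sketch")
    return probability * consequence

# A likely (3) risk with severe (4) consequences outranks a near-certain (4)
# risk with minor (1) consequences:
assert risk_factor(3, 4) > risk_factor(4, 1)
```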