What to expect

In the previous section, you learned how to design your M&E framework and system and set up processes for your team. With your data flow and policy implementation now being monitored, you have already laid the foundation for evaluating your policy's impact. While monitoring remains an internal task for your department/ministry to measure the outputs of the policy, impact evaluation is necessarily an external task, strategically outsourced to the right agencies or individuals to measure the outcomes and impact of the policy.
Figure 1: Monitoring vs. Evaluation

Before you get started

Impact evaluation of a policy - last step or the first?

Policy impact evaluation (IE) is technically the last step in the policy process. However, the seeds of an effective IE are sown at the inception stage, which also makes it one of the first steps in policymaking. For impact-oriented policies, you may need to design the M&E framework and decide on the system, indicators and strategy at the design stage of your new data-to-policymaking process.
Why am I evaluating the policy impact?

Policy IE can have multiple aims or purposes, including:
- Demonstrating the impact of the policy through short-term, intermediate and long-term outcomes.
- Comparing the relative impacts of policies with different components.
- Identifying the relative cost-benefit or cost-effectiveness of a policy.

The focus of the evaluation may cover several different areas, including the following:
- Short-term, intermediate and long-term outcomes and impacts.
- Changes in target audience/beneficiary behaviour, awareness, attitudes or knowledge.
- Costs of implementing the policy.
- Cost savings resulting from policy implementation.

How to scope an assessment?

For a successful impact assessment, it is often helpful to scope the work by organizing a series of M&E workshops with your staff and connected departments to ask deep and wide-ranging questions.
Figure 2: Questions to discuss in scoping workshops

For the new policy, you may want to ask the following questions:
- Was the correct problem identified?
- Do I need a Theory of Change/Proof of Concept to set up the M&E framework and indicators for this policy?
- Was any important data left out of the analysis? Did this influence the analysis?

How to get started

As a policymaker, you may not have the M&E expertise that an evaluation specialist would. Yet you bear responsibility for the successes and failures of the policies you design and implement. It is therefore important to understand a few key aspects.
Resource Spotlight: Jargon-Buster
The Jargon-Buster from the Scottish Government can help you get acquainted with the terminology of evaluation processes and systems. It is a straightforward, user-friendly and practical guide that supports policymakers on evaluation throughout the policymaking cycle.
The Purpose

What do you want as an outcome of your IE? Based on the type of evaluation, outline its goals and outcomes; these help you define the key indicators and deliverables. The primary purpose of IE is to determine whether a policy (or programme, project or intervention, depending on the case) has an impact on a few key outcomes and, more specifically, to quantify how large that impact is. It emphasizes long-term effects as well as the search for cause-and-effect relationships between the policy and its results. It also weighs attribution (the idea that a change is solely due to the policy or intervention under investigation) against contribution (the idea that the policy or intervention investigated is just one of many factors contributing to a change).
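The counterfactual logic above is often operationalized with a design such as difference-in-differences. Here is a minimal sketch; the group names and all figures are hypothetical, purely for illustration:

```python
# Minimal difference-in-differences (DiD) sketch: the comparison group's
# change over time stands in for the counterfactual, i.e. what would have
# happened to the treated group without the policy. All figures below are
# hypothetical and for illustration only.

def diff_in_diff(treated_before: float, treated_after: float,
                 control_before: float, control_after: float) -> float:
    """Estimated impact = (change in treated group) - (change in control group)."""
    return (treated_after - treated_before) - (control_after - control_before)

# Hypothetical example: average school attendance rate (%), before and
# after a policy, in covered districts vs. comparable uncovered districts.
impact = diff_in_diff(treated_before=72.0, treated_after=81.0,
                      control_before=70.0, control_after=74.0)
print(f"Estimated impact: {impact:.1f} percentage points")  # prints 5.0
```

This recovers attribution only under assumptions (notably that the two groups would have followed parallel trends without the policy), which is exactly the kind of issue an evaluation design must make explicit.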
The Partners/Stakeholders

Who performs your IE? Based on your demands and expected outcomes, you can explore partnerships across a vast evaluation ecosystem comprising independent impact evaluators, evaluation agencies, universities, tech platforms, international organizations, etc. Once you have understood and outlined the above, you will be in a better position to finalize your Terms of Reference (ToR)/Request for Proposal (RFP) documents as well as your IE partner. A well-written RFP that clearly defines the purpose of the IE is the first and most important step towards evaluating your policy impact. In governments, RFPs are generally written by designated staff or an external consultant. However, if you as a policymaker co-design the RFP strategically, it can help address the most critical challenges and compliance requirements on programme quality as well as staff performance.
The M&E System/Plan

There is no single best way to develop an M&E system; the system you ultimately develop should fit your context, needs and purposes. Ideally, M&E systems should be underpinned by the OECD DAC criteria of (i) relevance, (ii) coherence, (iii) effectiveness, (iv) efficiency, (v) impact and (vi) sustainability.
Resource Spotlight: Government Guidebooks
For a more practical understanding of how you can apply or design evaluations, check out government guidebooks like:
How to implement

For a successful implementation of the IE process, here is a quick checklist:
- ToR/RFP: The ToR should require a clear understanding of the intervention as a prerequisite for the evaluation design. Sector and area expertise may not be essential but is certainly an advantage. The ToR for an IE should also stress the need for a credible counterfactual analysis; proposals or concept notes should make clear how this will be addressed and be explicit about the evaluation approach. The evaluation team can mix personnel with the technical competence to implement these methods with expertise in policy design, programme delivery and, most importantly, data governance. Find a guidebook here.
- Theory of Change: A theory of change (ToC) articulates how your organization's activities cause the change you are aiming for. Whether those activities are policy research, initiatives or campaigns, a ToC is a crucial tool for developing strategy: it explains how and why your activities will create your intended results. Producing a ToC strengthens programme and campaign design by articulating how and why a desired change is expected to come about, along with the assumptions and associated risks. When produced as a group exercise, it also helps ensure that all relevant stakeholders are aligned: does everyone grasp the objectives and intended outcomes of the project, and are they clear on their role within the plan? Some interesting examples of ToCs come from the governments of the UK, New Zealand and Canada, and from USAID.
- Data sources: Good-quality data are essential to good IE. The evaluation design must be clear on the sources of data and realistic about how long it will take to collect and analyse primary data.
- Time and cost: The time required for an IE depends on whether primary data collection is involved. If so, 18 months is a reasonable estimate from inception to final report; without primary data collection, 12 months may be feasible. The survey is typically the largest cost component of an IE.
- Peer review: An independent peer review should be undertaken by a person qualified in IE. As the lead of a department/ministry/district, you may also set up a peer review group comprising policymakers and administrative officials working on similar themes, programmes and policies, for shared expertise, lateral collaboration and buy-in for ongoing policies, projects and programmes.

Now that you have your IE checklist, you may choose, based on your timelines, resources and objectives, to implement a full-fledged evaluation or a Rapid IE (see figure below). This is mainly the task of the evaluation agency/expert you onboard, but it is good to have a basic idea of the processes and milestones of the evaluation cycle so you can monitor better, get the most out of the evaluation and take next steps strategically.
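The results-chain logic behind a Theory of Change can be sketched as a simple ordered structure. The stage names follow the common inputs-to-impact chain; the entries are hypothetical placeholders, not a template:

```python
# Sketch of a Theory of Change as an ordered results chain. Stage names
# follow the common inputs -> activities -> outputs -> outcomes -> impact
# chain; the entries are hypothetical placeholders for illustration.

theory_of_change = {
    "inputs":     ["budget", "staff", "administrative data"],
    "activities": ["training sessions", "benefit disbursement"],
    "outputs":    ["officials trained", "households reached"],
    "outcomes":   ["improved service uptake"],
    "impact":     ["reduced poverty rate"],
}

# Walk the chain to make each assumed causal link explicit; every link is
# a place to record the assumptions and risks behind it.
stages = list(theory_of_change)
for cause, effect in zip(stages, stages[1:]):
    print(f"{cause} -> {effect}")
```

Writing the chain down like this makes it easy to check, stage by stage, that each link is backed by an indicator in your M&E framework.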
Figure 3: Rapid Impact Evaluation Flow Chart | Source: Canada Government
How to increase the quality of an evaluation?

To add quality and value to the evaluation of your data-driven policy, try the following:
- Conduct rigorous evaluations of new and untested programmes to ensure that they warrant continued funding.
- Leverage available resources to conduct evaluations.
- Require evaluations as a condition for continued funding of new initiatives.
- Target evaluations at high-priority programmes.
- Make better use of administrative data (information typically collected for operational and compliance purposes) to enhance programme evaluations.
- Develop a centralized repository for programme evaluations.

The following guiding questions help you check the quality of an evaluation and discuss the results with your team:
- Was the problem correctly identified?
- Were any important data left out of the analysis? Did this influence the analysis?
- Is the policy having the desired effect? Is there any need for modification, change or re-design?
- What should be done differently next time?
- Were you looking only for an evaluation report, or for something beyond it?

What result to strive for

The products of the evaluation generally include:
- A comprehensive report that captures the flavour of the exercise, gives an objective and rigorous assessment of the achievements, and identifies lessons for future public engagement. This report should be publicly available.
- A summary report that can be made more widely available (e.g. to participants and interviewees in the research) and covers the main points and lessons from the evaluation. You can choose to circulate it internally with your staff/partners to begin the process of 'inward-looking' change.
- A data repository comprising the qualitative and quantitative data collected and analysed during the entire evaluation process.

Ideally, your IE report will lead you to at least one of the following:
- Making ecosystem-level changes in your policies/systems.
- Reviewing your system and human resource strengths and weaknesses and taking corrective steps for any skilling, upskilling or new hiring needed.
- Assessing your return on investment to better design future policies, budgets and plans.

With the data and evaluation resources, you can now draw on trends for predictive/anticipatory governance and on lessons learned to plan and save your resources effectively in the next policy cycle.

What's next?

You now have your evaluation report, findings and potential recommendations. It is time to embed principles of participatory governance in your policymaking process. This will not just help you communicate your achievements and challenges but also win citizens' trust, which is your priority as a public servant. Here are a few indicative steps:
Communicating the results

Although communication is one of the last evaluation tasks, discuss upfront how the results will be shared. Most importantly, identify who your primary users are. While the primary users may be you and your policy team, the findings can be communicated to others for different reasons. For example, lessons learned from the evaluation can be helpful to stakeholders in the same field, or it may be worthwhile synthesizing some of the findings into blogs, podcasts, articles and stories.
Dealing with challenging findings

This issue highlights the value of taking a learning approach to all evaluations: nothing is a 'failure' if we agree to learn from evaluations and act to improve. It also highlights the importance of rigour, for example obtaining decent sample sizes for pilots and using control groups. Good-quality evaluations provide stronger justification for action.
Negative results are equally valuable as a way of identifying 'what not to do'. Of course, the reasons for failure should be fully explored: was it a poor policy, or was it poorly implemented?
Lessons learned - your notes

Make notes for your successor so they can build on your insights and deliver better. Evaluations are as much about learning and improvement as they are about accountability, so use the findings wisely to plan your next move in the policy ecosystem. And manage expectations from the start: evaluations cannot answer every question!