What to expect
This article stresses the importance of M&E and shows how the (new) data you accessed and analysed in previous steps can increase the effectiveness of M&E.
Monitoring a policy's implementation checks the progress of its planned outputs, while evaluating a policy's impact assesses its larger outcomes, goals and results.
At the end of this section, you should be able to understand the following framework and processes that lead to an effective monitoring system for data-driven policy implementation:
Monitoring policy implementation: measuring progress and outputs and laying strong foundations for evaluating impact.
Approaches, tools, techniques, and resources needed for monitoring.
Recalibration, iteration, and course correction using data.

How to get started
As a policymaker, you are not expected to learn or implement M&E principles and methodologies in their entirety. But it is important to understand the framework that helps you make the right decisions: choosing appropriate quantitative and qualitative indicators to measure, identifying appropriate human resources and agencies for conducting thorough M&E, distinguishing in-house from outsourced tasks within the M&E system, and drawing insights from M&E data so you can take timely corrective measures to meet the goals and outcomes of your policies.
Figure 1: Monitoring vs. Evaluation

In the earlier chapters, you have learnt how to identify and strengthen your data ecosystem. It may be helpful to consider the CART principle developed by Innovations for Poverty Action: monitoring data should be credible, actionable, responsible, and transportable. Since you are in possession of data that can be analysed, interpreted, and visualized, you have the basic prerequisites to design an effective M&E system.
Figure 2: Steps of designing an M&E system

A robust M&E system and M&E plan will help you navigate current and future challenges, as well as design possible solutions. To ensure your M&E system is iterative, interactive, participatory, and suited to your needs, you may want to conduct a quick participatory scenario-mapping exercise that helps you ask the right questions, map the possible scenarios linked to the potential answers, and indicate insights and next steps accordingly. Having all your stakeholders in the room (in addition to existing government stakeholders) will also help you gain buy-in and joint ownership of your policy/project.
How to implement
You now have a well-designed M&E system, a regular inflow of data, and skilled teams tasked with monitoring the policies and programmes. Now let's move to action by beginning monitoring to lay the foundations of a strong impact evaluation.
Example: Guiding Principles for M&E of UK's National Data Strategy
The UK's National Data Strategy framework sets out three guiding principles for how monitoring and evaluation of the strategy will operate:
Dynamic and forward-looking: a framework with a strategic focus, tied to delivering real-world impacts and keeping the National Data Strategy relevant in the ever-evolving context of data and digital.
Outcome-oriented: a framework which focuses on the delivery of the key strategic priorities for the National Data Strategy and achieving results.
Proportionate: a framework focused on taking the most appropriate approach which adds genuine value to the work under the National Data Strategy, and which doesn’t exist to simply ‘tick a box’.
The framework comprises three core elements, drawing on the original National Data Strategy framework:
Setting the right indicators using new types of data
Jointly with other line ministries, define indicators aligned with their KPIs (qualitative and quantitative). Direct your IT teams to set rules and decide on responsive actions based on the identified indicators. Your IT and policy teams can then strengthen this into a hybrid model of M&E resources (in-house and external), embedding feedback loops and policy-transformation mechanisms that respond to the gaps, risks, and challenges identified during monitoring. Set performance-linked rules and responsibilities for the teams working on the policy as well as on M&E. Establish mechanisms that enable you to review monitoring results and define appropriate actions.

Example: Conduct data-driven reviews to improve program performance
Regularly scheduled data-driven performance management meetings enable agency and state leaders to discuss performance data, develop or refine performance objectives, identify areas for improvement, promote innovative strategies, foster coordination, and hold managers accountable for results. This approach was developed by the New York City Police Department and popularized by the city of Baltimore through CitiStat. The CitiStat model allowed Baltimore leaders to focus on performance goals, improve service delivery, and generate $350 million in savings over a seven-year period, enabling it to reinvest $54 million in new programming for children. Using a similar approach, Maryland StateStat measures state-wide performance and tracks key indicators from biweekly agency data, which are analyzed for trends to inform strategies for improvement. Regular meetings are held with the governor, agency heads, and StateStat staff to clarify goals, refine approaches for achieving outcomes, and track performance. This use of data has engendered a culture of organizational learning in which program managers and agency leaders discuss challenges and solve problems.
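The idea of letting IT teams set rules and responsive actions on identified indicators can be sketched in code. The sketch below is purely illustrative: the indicator names, thresholds, and actions are hypothetical examples, not taken from any specific national M&E framework.

```python
# Illustrative sketch: rule-based responsive actions for monitoring indicators.
# Indicator names, thresholds, and actions are hypothetical examples.

from dataclasses import dataclass
from typing import Callable

@dataclass
class IndicatorRule:
    name: str                            # indicator being monitored
    predicate: Callable[[float], bool]   # True when the reading breaches the rule
    action: str                          # responsive action to trigger

def evaluate(rules: list[IndicatorRule], readings: dict[str, float]) -> list[str]:
    """Check each monitored reading against its rule and collect triggered actions."""
    triggered = []
    for rule in rules:
        value = readings.get(rule.name)
        if value is not None and rule.predicate(value):
            triggered.append(f"{rule.name}: {rule.action}")
    return triggered

rules = [
    IndicatorRule("vaccine_stock_days", lambda v: v < 7,
                  "alert district supply officer to replenish stock"),
    IndicatorRule("meals_served_pct", lambda v: v < 90,
                  "schedule field verification of school meal delivery"),
]

readings = {"vaccine_stock_days": 4, "meals_served_pct": 95}
for action in evaluate(rules, readings):
    print(action)
```

Encoding rules this way keeps the responsive action explicit and auditable, which is what makes the performance-review meetings described above possible: the triggered actions form the agenda.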
When, where and how does monitoring need to occur?
For an effective monitoring system, you must first set up a strong governance framework involving all key stakeholders, so that no part of the policy monitoring system falls behind.
Based on your policy's implementation, you should plan a monitoring frequency:
Concurrent, real-time monitoring of certain indicators, e.g. vaccine supply chains or mid-day meals in schools
Monthly/quarterly/biannual monitoring of certain indicators, e.g. progress review meetings held at village level, or the number of citizen services enrolled digitally at the local level
Random monitoring of the quality and reliability of the data collected, through random field surveys
Fixed-term monitoring of performance, e.g. of in-house or contractual staff deployed in field services

How do I know whether my data-driven policy monitoring is successful?
By now you will easily be able to differentiate between a traditional policymaking process and a data-driven one. Here is the checklist to know if your data-driven policy monitoring is successful:
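The monitoring frequencies listed above can be encoded as a simple schedule so that each indicator's next review date is tracked automatically. This is a minimal sketch: the cadences, indicator names, and interval lengths are hypothetical examples, and real-time indicators are assumed to be monitored continuously rather than scheduled.

```python
# Illustrative sketch: tracking when each periodically monitored indicator
# is next due for review. Indicator names and cadences are hypothetical.

from datetime import date, timedelta

# Approximate review interval, in days, per monitoring cadence.
CADENCE_DAYS = {"monthly": 30, "quarterly": 91, "biannual": 182}

schedule = {
    "village_progress_reviews": "monthly",
    "digital_service_enrolment": "quarterly",
    "field_staff_performance": "biannual",
}

def next_review(last_review: date, cadence: str) -> date:
    """Return the date the next review is due for a given cadence."""
    return last_review + timedelta(days=CADENCE_DAYS[cadence])

last = date(2024, 1, 15)
for indicator, cadence in schedule.items():
    print(indicator, "next due", next_review(last, cadence))
```

Even a table this simple makes gaps visible: any indicator whose due date has passed without a recorded review is a candidate for the random field verification mentioned above.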
What’s next? A ‘Learning by Doing’ journey for you:
You now have the M&E system, the plan, the data inflow, and your teams in place, and you have set accountability and responsibilities. You are now all set to monitor, observe, capture lessons learnt, and recalibrate your policy actions based on initial insights from the monitoring data. From here, it is a ‘learning by doing’ journey: you incorporate course corrections as you uncover risks and lessons across the various components of your policy. As you incorporate course corrections, you also modify the indicators and all connected elements of your M&E system. Once you have laid the foundation of an iterative M&E system, it becomes easier to evaluate impact and check whether you are headed in the right direction.
Evaluation does not necessarily have to take place at the end of the project or policy cycle; it can be introduced mid-way to make changes and take corrective action during implementation, and to recalibrate the data strategy connected with the policy or programme concerned.
In line with effective monitoring methodologies, the next article gives you an overview of various strategies, approaches, tools and methodologies for effective impact evaluation.