Artificial Intelligence (AI) is not just a technological tool. With data as its building block, AI offers sophisticated ways of generating, connecting and interpreting data. AI also presents new avenues for designing, implementing and evaluating data-informed policies. By leveraging AI-driven tools such as computer vision, natural language processing (NLP) and predictive analytics, policymakers can tackle complex challenges more efficiently and inclusively.
However, AI is still a collection of emerging technologies, especially in the field of public policy, where its use requires a measured, cautious approach. Within policymaking, each AI implementation should be treated as a pilot that explores its potential for public sector applications. Responsible use of AI is critical and requires continuous human oversight to anticipate and mitigate risks, such as bias and ethical concerns.
While AI is transforming data-informed decision-making, policymakers in many countries still lack AI skills and have limited knowledge of AI infrastructure and enabling environments. The AI section of the Data to Policy Navigator aims to demystify AI for policymakers. It offers tips on how to harness the potential of AI for inclusive policymaking, with an emphasis on accountability, experimentation and careful oversight.
This section does not cover how to deploy AI-enabled public services, global governance of AI, the broader ethical concerns of AI or how to navigate international AI regulations.
AI technologies are deeply embedded in our lives, from email spam filters and search engines to translation services and recommender systems. Most people rarely notice the presence of AI in their daily interactions. Before exploring AI applications in public policy, let’s define the term AI. It is important to note that different definitions of AI exist and continue to evolve. In the context of public policy, AI is a broad term encompassing technologies that enable computers to perform tasks or make decisions based on data inputs. AI includes machine learning, recommender algorithms, computer vision, natural language processing and generative technologies like chatbots and voice interfaces.
In Mexico, women disproportionately shoulder unpaid care work, which limits their economic participation. To address this, the Government of Mexico City, the German development agency GIZ and other partners developed an intelligent, AI-driven care work map. NLP was used to process crowdsourced insights and administrative records. The resulting AI-informed platform now helps policymakers identify locations for childcare centres and promote gender equality and economic opportunities for women.
All AI systems, from straightforward machine learning models to sophisticated neural networks, rely on data to learn, reason or decide. Data is the foundational raw material for AI. Data quality, protection and governance are critical to ensuring that AI systems function effectively and ethically. In essence, AI can transform data into actionable insights, making it a key tool for policymakers navigating complex challenges.
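To make the data-to-insight relationship concrete, the minimal sketch below trains a very small model on fictional district-level indicators and uses it to flag a district that may need a new childcare centre. It is written in Python with the open-source pandas and scikit-learn libraries; every column name, figure and label is invented purely for illustration.

```python
# A minimal sketch of the data-to-insight pipeline described above.
# All figures, column names and labels are fictional and for illustration only.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Fictional district-level indicators (in practice: administrative or survey records).
records = pd.DataFrame({
    "women_labour_participation": [0.42, 0.61, 0.38, 0.55, 0.47, 0.66, 0.33, 0.58],
    "children_under_5_per_1000":  [85, 42, 97, 51, 78, 39, 104, 49],
    "childcare_places_per_1000":  [12, 40, 8, 31, 15, 42, 6, 30],
    "needs_new_centre":           [1, 0, 1, 0, 1, 0, 1, 0],   # label assigned by analysts
})

features = records.drop(columns="needs_new_centre")
labels = records["needs_new_centre"]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(features, labels)                 # the model "learns" a pattern from the data

# Apply the learned pattern to a new, unlabelled district.
new_district = pd.DataFrame([{
    "women_labour_participation": 0.40,
    "children_under_5_per_1000": 92,
    "childcare_places_per_1000": 9,
}])
print(model.predict(new_district))          # 1 = likely to need a new childcare centre
```

In a real project, data quality checks, bias audits and privacy safeguards would precede any use of such a model.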
AI technologies can be either ‘open source’ or ‘proprietary’. Each approach offers unique advantages and trade-offs:
Choosing between open source and proprietary options impacts budget, data security, control and ease of customization, all of which are critical considerations for policymakers exploring AI options.
Many view open-source AI as AI systems that can be freely modified, shared and used, such as large language models from Meta and world foundation models from NVIDIA. However, because AI is an extension of data, new challenges arise as to what constitutes true ‘open source AI’, and the debate around the definition continues today.
One consideration is that the Digital Public Goods Alliance (DPGA) mandates that training and testing datasets for AI systems must have open data licenses for the products to be considered digital public goods (DPGs). However, AI systems often use diverse datasets from various sources for training. When these datasets include sensitive information with potential for misuse, access must be carefully managed. Given these legal and ethical constraints on open sourcing training datasets, many AI systems and products are currently not recognized as DPGs.
As AI technologies become widespread, adherence to international norms and regulations is essential. This ensures compliance with applicable laws, fosters interoperability with international systems and aligns local policies with global best practices. At the core of these standards are shared values such as transparency, accountability, fairness, safety and an emphasis on human rights.
Policymakers should align AI strategies with these evolving global standards to ensure ethical and effective deployment of AI.
Key international considerations include:
While all three—the OECD AI Principles, the UNESCO Recommendation on the Ethics of AI, and the ASEAN Guide on AI Governance and Ethics—stress transparency, fairness, accountability and respect for human rights, each has a slightly different focus and scope. The OECD Principles, shaped by governmental and economic priorities, focus on policies for innovation, risk management, and fostering interoperability across jurisdictions. The UNESCO Recommendation has a stronger emphasis on cultural and societal dimensions, including protecting and promoting cultural diversity, ethical reflection, and the environmental impacts of AI. In comparison, the ASEAN Guide on AI Governance and Ethics focuses on practical implementation within the Southeast Asian context. It provides region-specific guidance for both organisations and governments, promoting alignment among member states.
AI tools can enhance public policy design, implementation, monitoring and evaluation by supporting decision-makers across various sectors. Three core AI capabilities are particularly relevant for policymaking:
Computer vision enables AI systems to interpret and understand visual information from images and videos.
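As a small illustration, the sketch below uses the open-source Hugging Face transformers library to run a generic pretrained image classifier over a photograph. The file name is hypothetical, and the default model is general-purpose rather than tuned for any policy task.

```python
# A minimal computer vision sketch: labelling the content of an image.
# The file name is hypothetical; the pretrained model is general-purpose.
from transformers import pipeline

classifier = pipeline("image-classification")        # downloads a default pretrained model
predictions = classifier("street_survey_photo.jpg")  # e.g. a photo collected in a field survey

for p in predictions:
    print(f"{p['label']}: {p['score']:.2f}")         # top labels with confidence scores
```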
Potential applications in policymaking:
NLP and voice technologies allow AI to understand, interpret and generate human language in both text and speech formats.
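As an illustration, the sketch below uses the same open-source transformers library to gauge the sentiment of citizen feedback. The comments are invented, and the default pretrained model handles English text only.

```python
# A minimal NLP sketch: gauging the sentiment of citizen feedback.
# The comments are invented; the default pretrained model is English-only.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")   # downloads a default pretrained model
comments = [
    "The new bus route has made my commute much easier.",
    "Waiting times at the clinic are still far too long.",
]

for comment, result in zip(comments, sentiment(comments)):
    print(f"{result['label']} ({result['score']:.2f}): {comment}")
```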
Potential applications in policymaking:
Predictive AI uses historical data to identify patterns and forecast trends, helping policymakers anticipate future challenges and optimize operations.
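As an illustration, the sketch below fits a simple trend line to fictional historical figures and projects them two years ahead, using the open-source scikit-learn library. Real forecasting work would draw on richer data and more careful validation.

```python
# A minimal predictive AI sketch: projecting demand from historical records.
# All figures are fictional and purely illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

years = np.array([[2019], [2020], [2021], [2022], [2023], [2024]])
requests = np.array([1200, 1260, 1330, 1410, 1485, 1570])   # e.g. annual service requests

trend = LinearRegression().fit(years, requests)             # learn the historical trend
forecast = trend.predict(np.array([[2025], [2026]]))

print(forecast.round())   # projected demand, to inform budgeting and staffing decisions
```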
Potential applications in policymaking:
Multimodal AI systems: These emerging technologies combine multiple AI capabilities, such as NLP and computer vision, to handle diverse data inputs. While they do not amount to artificial general intelligence (AGI), they offer integrated solutions that more closely resemble complex human cognition. Policymakers are advised to keep track of multimodal AI advancements and their applications.
While AI has the potential to enhance policymaking, it is crucial to exercise rigorous oversight and conduct stringent risk assessments when integrating AI. AI systems, though powerful, can perpetuate biases, misinterpret data or provide recommendations that are not aligned with the public good. Policymakers must ensure that AI tools are used transparently and ethically, with a strong emphasis on human oversight and intervention at every stage. AI should serve as a supportive tool rather than a decision-maker. Human intervention is needed to ensure that outcomes are not only aligned with societal values but also grounded in ethical considerations, transparency and accountability.
Policymakers are encouraged to consult resources like the AI Incident Database, which documents past AI failures and incidents. Reviewing these cases can help policymakers learn from previous mistakes and avoid similar pitfalls when deploying AI.
When deploying AI, particularly Large Language Models (LLMs), several considerations emerge that can affect the quality, speed and control of the chosen AI tool: