Introduction to AI for Public Policy

Generated using ChatGPT, 2025

Introduction

Artificial Intelligence (AI) is not just a technological tool. With data as its building block, AI offers sophisticated ways of generating, connecting and interpreting data. AI also presents new avenues for designing, implementing and evaluating data-informed policies. By leveraging AI-driven tools such as computer vision, natural language processing (NLP) and predictive analytics, policymakers can tackle complex challenges more efficiently and inclusively.

However, AI is still a collection of emerging technologies, especially in the field of public policy, where its use requires a measured, cautious approach. Within policymaking, each AI implementation should be treated as a pilot, exploring its potential for possible public sector applications. Responsible use of AI is critical and requires continuous human oversight to anticipate and mitigate risks, such as bias and ethical concerns.

While AI is transforming data-informed decision-making, policymakers in many countries still lack AI skills and have limited knowledge of AI infrastructure and enabling environments. The AI section of the Data to Policy Navigator aims to demystify AI for policymakers. It offers tips on how to harness the potential of AI for inclusive policymaking, with an emphasis on accountability, experimentation and careful oversight.

This section does not cover how to deploy AI-enabled public services, global governance of AI, the broader ethical concerns of AI or how to navigate international AI regulations.  

Throughout the Navigator, you will see content with this color coding, which indicates new AI-related content. Feel free to explore the site and find additional AI for Public Policy content.

What is AI?

AI technologies are deeply embedded in our lives, from email spam filters and search engines to translation services and recommender systems. Most people rarely notice the presence of AI in their daily interactions. Before exploring AI applications in public policy, let’s define the term AI. It is important to note that different definitions of AI exist and continue to evolve. In the context of public policy, AI is a broad term encompassing technologies that enable computers to perform tasks or make decisions based on data inputs. AI includes machine learning, recommender algorithms, computer vision, natural language processing and generative technologies like chatbots and voice interfaces.

AI generally falls into three categories:

Narrow AI:

Designed to perform specific tasks, such as analysing images, translating text, or forecasting trends, narrow AI (sometimes also called ‘weak AI’) is limited but performs these functions exceptionally well. In government contexts, most current AI applications fall into this category and require careful testing and evaluation before being fully implemented.

Generative AI:

Capable of creating new content—such as text, images or audio—generative AI has unique applications in areas like language translation, data synthesis and public engagement. Although highly promising, generative systems raise challenges in terms of content accuracy and ethics.

Artificial General Intelligence (AGI):

Although AGI does not yet exist, it is sometimes referred to as the ‘holy grail’ of AI. In theory, AGI would have human-like cognitive abilities and be capable of operating across diverse domains. While it remains a distant aspiration with significant risks and challenges, AGI represents a long-term research direction.

In Mexico, women disproportionately shoulder unpaid care work, which limits their economic participation. To address this, the Government of Mexico City, the German development agency GIZ and other partners developed an intelligent, AI-driven care work map. NLP was used to process crowdsourced insights and administrative records. The resulting AI-informed platform now helps policymakers promote gender equality and economic opportunities for women and identify locations for childcare centres.
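To make the NLP step concrete, the toy sketch below counts recurring keywords in citizen comments. The comments, stopword list and counts are hypothetical stand-ins, not the project's actual data or pipeline; a real system would use full NLP toolkits rather than simple token counts:

```python
from collections import Counter

# Hypothetical crowdsourced comments (invented for illustration).
comments = [
    "no childcare centre near the market district",
    "need affordable childcare near transit hub",
    "care work keeps me from the labor market",
]

# A tiny, illustrative stopword list; real pipelines use curated lists.
STOPWORDS = {"no", "the", "a", "me", "from", "near"}

def keyword_counts(texts):
    """Count non-stopword tokens across all comments."""
    tokens = []
    for text in texts:
        tokens += [w for w in text.lower().split() if w not in STOPWORDS]
    return Counter(tokens)

counts = keyword_counts(comments)
# Frequent terms hint at unmet care needs and candidate locations.
print(counts.most_common(3))
```

Even this crude frequency count surfaces recurring themes (here, "childcare" and "market"); production systems add tokenization, language detection and topic modelling on top of the same idea.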

All AI systems, from a straightforward machine learning model to sophisticated neural systems, rely on data to learn, reason or decide. Data is the foundational raw material for AI. Data quality, protection and governance are critical to ensuring that AI systems function effectively and ethically. In essence, AI can transform data into actionable insights, making it a key tool for policymakers navigating complex challenges.

Open source versus proprietary AI tools

AI technologies can either be ‘open source’ or ‘proprietary’. Each approach offers unique advantages and trade-offs:

  • Open source: Open-source AI tools and frameworks are publicly accessible, usually free to download or use and allow for transparency and collaborative development. This fosters adaptability and customization but requires careful management to ensure data security and regulatory compliance.
  • Proprietary (closed source): Proprietary AI tools are typically offered by third-party companies and offer dedicated support and resources. However, they can be inflexible, require a paid license or subscription and may have limited transparency. These tools often provide streamlined deployment, but organizations must weigh the dependency on external vendors and long-term costs.

Choosing between open source and proprietary options impacts budget, data security, control, and ease of customization—all critical considerations for policymakers exploring AI options.

Caution: Open source in the context of AI

Many view open-source AI as AI systems that can be freely modified, shared and used (such as large language models from Meta and world foundation models from NVIDIA). However, because AI is an extension of data, it raises new questions about what constitutes true ‘open-source AI’, and the debate around the definition continues today.

As a consideration, the Digital Public Goods Alliance (DPGA) mandates that training and testing datasets for AI systems must have open data licenses for the products to be considered digital public goods (DPGs). However, AI systems often utilize diverse datasets from various sources for training. When these datasets include sensitive information with potential for misuse, access must be carefully managed. Consequently, due to the legal and ethical constraints of open sourcing the training datasets, many AI systems and products are currently not recognized as DPGs.

International AI norms and regulations

As AI technologies become widespread, adherence to international norms and regulations is essential. This ensures compliance with applicable laws, fosters interoperability with international systems and aligns local policies with global best practices. At the core of these standards are shared values such as transparency, accountability, fairness, safety and an emphasis on human rights.

Policymakers should align AI strategies with these evolving global standards to ensure ethical and effective deployment of AI.

Key international considerations include:

  • Regulatory standards: Countries and regions, including the European Union and its Artificial Intelligence Act, are developing standards to govern the responsible use of AI. These standards guide policymakers on mitigating risks related to bias, privacy and accountability in policy applications.
  • AI strategy: In 2024, the African Union released its Continental Artificial Intelligence Strategy. This strategy presents an Africa-centric, development-oriented and inclusive approach built around five key areas: leveraging the benefits of AI, strengthening AI capabilities, minimizing risks, stimulating investment and fostering cooperation. It outlines a unified vision for the continent and defines policy action to unlock the transformative potential of AI in promoting development and prosperity in Africa.
  • International Recommendations:
    • The UNESCO Recommendation on the Ethics of Artificial Intelligence from 2021 provides a global framework for ensuring AI technologies align with human rights, sustainability and equity. The recommendation highlights the need for transparency, accountability and inclusivity, addressing risks like bias and digital divides while promoting innovation and environmental responsibility. It calls for ethical governance, international collaboration and capacity-building to maximize the benefits of AI for societies everywhere.
    • AI Principles from the Organisation for Economic Co-operation and Development (OECD): First adopted in 2019 and updated in 2024, the OECD AI Principles provide an intergovernmental standard for developing and deploying AI in a trustworthy manner that respects human rights and democratic values. The framework encompasses five values-based principles: promoting inclusive growth and well-being; respecting human rights and democratic values; ensuring transparency and explainability; safeguarding robustness, security and safety; and establishing accountability. To support these goals, the OECD recommends that policymakers invest in AI research and development; foster inclusive AI ecosystems; shape governance and a policy environment that is interoperable across borders; build human capacity and address labour market transitions; and engage in international co-operation for trustworthy AI.
    • Association of Southeast Asian Nations (ASEAN) Guide on AI Governance and Ethics: A practical guide for organisations in the Southeast Asia region to design, develop and deploy AI in commercial and/or non-military settings, the ASEAN Guide on AI Governance and Ethics is grounded in internationally accepted AI principles and values, while highlighting four key components: internal governance, human involvement in AI-augmented decision-making, operations management and stakeholder engagement. The Guide also includes six use cases on AI governance as references.

International recommendations and considerations:

While all three—the OECD AI Principles, the UNESCO Recommendation on the Ethics of AI, and the ASEAN Guide on AI Governance and Ethics—stress transparency, fairness, accountability and respect for human rights, each has a slightly different focus and scope. The OECD Principles, shaped by governmental and economic priorities, focus on policies for innovation, risk management, and fostering interoperability across jurisdictions. The UNESCO Recommendation has a stronger emphasis on cultural and societal dimensions, including protecting and promoting cultural diversity, ethical reflection, and the environmental impacts of AI. In comparison, the ASEAN Guide on AI Governance and Ethics focuses on practical implementation within the Southeast Asian context. It provides region-specific guidance for both organisations and governments, promoting alignment among member states.

AI for public policy

AI tools can enhance public policy design, implementation, monitoring and evaluation by supporting decision-makers across various sectors. The three core AI capabilities that are particularly relevant for policymaking include:

1. Computer vision

Computer vision enables AI systems to interpret and understand visual information from images and videos.

Potential applications in policymaking:

  • Visual monitoring: AI can analyse satellite imagery, public cameras and real-time visual data to monitor infrastructure, track environmental changes or assess disaster response needs. This is particularly useful for urban planning, environmental monitoring and crisis management.
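As a small illustration of environmental monitoring from satellite imagery, one widely used remote-sensing technique is the Normalized Difference Vegetation Index (NDVI), computed from an image's red and near-infrared bands. The tiny arrays below are hypothetical stand-ins for real imagery tiles:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    Values near +1 suggest dense vegetation; near 0, bare soil or built-up land."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

# Toy 2x2 'satellite' bands standing in for real imagery.
nir = np.array([[0.8, 0.6], [0.2, 0.1]])
red = np.array([[0.1, 0.2], [0.2, 0.1]])

index = ndvi(nir, red)
# Share of pixels above an illustrative vegetation threshold.
vegetated_share = (index > 0.3).mean()
print(index.round(2))
print(f"{vegetated_share:.0%} of pixels vegetated")
```

Tracking an index like this over time is one simple way satellite data can feed urban-planning or environmental-monitoring dashboards; operational systems add calibration, cloud masking and trained models on top.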
2. Natural Language Processing (NLP) and voice technologies

NLP and voice technologies allow AI to understand, interpret and generate human language in both text and speech formats.

Potential applications in policymaking:

  • Text analysis: AI can analyse and summarize large volumes of documents, providing insights and supporting evidence-based policymaking.
  • AI-driven chatbots: Chatbots can automate citizen inquiries, help policymakers brainstorm ideas, and extract key insights from extensive datasets.
  • Language translation: AI can instantly translate government documents into multiple languages, making policies more accessible to diverse populations.
  • Voice-enabled accessibility: AI-powered voice interfaces improve access to government services for people with disabilities, enabling interactions through voice commands.
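The text-analysis bullet above can be illustrated with a classic frequency-based extractive summarizer: score each sentence by how often its words occur in the document and keep the top scorer. This is a generic baseline, not any specific government tool, and the sample text is invented:

```python
import re
from collections import Counter

def summarize(text, n_sentences=1):
    """Score sentences by average word frequency and return the top-scoring
    ones in their original order (a simple extractive-summarization baseline)."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(words)

    def score(sentence):
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    top = sorted(sentences, key=score, reverse=True)[:n_sentences]
    return " ".join(s for s in sentences if s in top)

doc = ("The new policy expands childcare access. "
       "Childcare access supports labour participation. "
       "Budget meetings continue next week.")
print(summarize(doc))
```

Modern NLP systems use large language models rather than word counts, but the goal is the same: compress large document volumes into the sentences that matter most.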
3. AI-driven forecasts and analytics

Predictive AI uses historical data to identify patterns and forecast trends, helping policymakers anticipate future challenges and optimize operations.

Potential applications in policymaking:

  • Aggregating and cleaning data: AI can integrate data from various sources (e.g., census data, economic reports, social media) for a comprehensive policy overview.
  • Predictive analytics: AI helps forecast trends in areas such as health, employment and education based on historical patterns.
  • Pattern recognition and detecting anomalies: AI systems can detect anomalies and gaps (e.g., fraud, policy errors) or identify recurring patterns in available datasets to refine public policies.
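The forecasting and anomaly-detection bullets above can be sketched with a linear trend fit and a z-score check. The monthly visit counts below are hypothetical, and real forecasting would use richer models and validation:

```python
import numpy as np

def forecast_next(values):
    """Fit a linear trend to historical values and predict the next point."""
    x = np.arange(len(values))
    slope, intercept = np.polyfit(x, values, deg=1)
    return slope * len(values) + intercept

def flag_anomalies(values, z_threshold=2.0):
    """Flag points more than z_threshold standard deviations from the mean."""
    v = np.asarray(values, dtype=float)
    z = (v - v.mean()) / v.std()
    return np.abs(z) > z_threshold

# Hypothetical monthly clinic visits; the last month spikes unexpectedly.
visits = [100, 104, 108, 112, 116, 240]

print(round(forecast_next(visits[:-1])))  # trend-based forecast for month 6
print(flag_anomalies(visits))             # the spike is flagged for review
```

The forecast sets an expectation from the trend, and the anomaly check flags the month that deviates from it; in practice such flags prompt human review rather than automatic action.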

Multimodal AI systems: There are emerging technologies that combine multiple AI capabilities, such as NLP and computer vision, to handle diverse data inputs. These systems, while not achieving AGI status, offer integrated solutions that resemble complex human cognition. Policymakers are advised to keep track of multimodal AI advancements and their applications.

Important ethical considerations in AI-driven policy

While AI has the potential to enhance policymaking, it is crucial to exercise rigorous oversight and stringent risk assessments when integrating AI. AI systems, though powerful, can perpetuate biases, misinterpret data, and/or provide recommendations that are not aligned with the public good. Policymakers must ensure that AI tools are used transparently and ethically, with a strong emphasis on human oversight and intervention at every stage. AI should serve as a supportive tool rather than a decision-maker. Human intervention is needed to ensure that outcomes are not only aligned with societal values but also grounded in ethical considerations, transparency and accountability. 

AI Incident Database:

Policymakers are encouraged to consult resources like the AI Incident Database, which documents past AI failures and incidents. Reviewing these cases can help avoid similar pitfalls by learning from previous mistakes when deploying AI.

Experimenting with AI in public policy: Key considerations

When deploying AI, particularly Large Language Models (LLMs), several considerations emerge that can affect the quality, speed and control of the chosen AI tool:

  • Self-trained versus black box: Self-trained models allow for greater customization but require significant resources and expertise. Black-box solutions (systems whose internal workings are a mystery to their users, or even their developers), while convenient, limit insight into the underlying mechanics, which may restrict data control and model interpretability.
  • Open versus closed systems: Open systems (often open source) offer adaptability and transparency, while closed systems provide support and ease of use. Policymakers must consider their data storage and usage needs when choosing between these options.
  • Local versus cloud deployment: Cloud-based models are scalable and easy to access but may pose data sovereignty issues. Local deployments offer more data control but are resource intensive.

Questions for policymakers when considering AI tools:

  • Data provenance: what is the record of the data's journey as it moves through different processes and transformations?
  • Can the model be tuned or customized?
  • What are the costs, limitations and long-term impacts of the AI solution?

Related Use Cases

AI-related content
Lisbon harnesses AI to map and scale solar installations
Lisbon uses AI to map solar installations and optimize energy policies, enhancing climate action and policymaking efficiency.
AI-related content
Data-driven water infrastructure maintenance enabled by AI
AI-driven leak detection in Burgas uses sensors and machine learning to identify leaks in real time, reducing water loss and optimizing maintenance.
AI-related content
Amplifying Women’s Voices for Economic Participation by Addressing Access to Childcare
Learn how the Government of Mexico City is developing an intelligent platform that integrates various data sources to support women's participation in the labor market.