Towards a national AI governance framework

Over the past two years, Nigeria has made significant strides in recognising the impact of artificial intelligence (AI) and in fostering the growth of AI startups and adoption across the country. In September 2025, the National Information Technology Development Agency (NITDA) published the National Strategy on Artificial Intelligence following nearly a year of consultations with diverse stakeholders in the ecosystem, including researchers, policymakers, academics, and other key participants committed to advancing AI innovation and development in Nigeria.


The final National Artificial Intelligence Strategy 2025 aligns closely with Nigeria’s vision of becoming a global leader in responsible and ethical AI innovation. It emphasizes the transformative potential of AI to drive national development, stimulate economic growth, and promote social progress.


The strategy is anchored on five key pillars that guide its strategic objectives: (1) building foundational AI infrastructure; (2) developing and sustaining a world-class AI ecosystem; (3) accelerating AI adoption and sectoral transformation; (4) ensuring the responsible and ethical development and deployment of AI; and (5) establishing a robust AI governance framework.


The Strategy also acknowledges the potential risks associated with artificial intelligence and emphasizes the importance of responsible development and deployment. It identifies several key areas of concern. Accuracy is highlighted as critical, since erroneous outputs or predictions by AI systems can have serious consequences for individuals and society at large; ensuring system accuracy is therefore essential. Bias is another recognized risk, as societal biases can be reflected in training data or introduced through AI algorithms. To mitigate this, the Strategy recommends excluding criteria related to protected characteristics, such as ethnicity, from AI systems wherever possible. Transparency is also underscored as a foundational principle, since a lack of transparency makes it difficult to assign accountability for inaccurate or harmful AI-generated outcomes. Finally, governance is identified as vital to effective risk management, with robust processes, particularly around data governance, deemed critical.
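The Strategy's recommendation to exclude protected characteristics from AI systems can be illustrated with a minimal sketch. The attribute names below are hypothetical, and stripping such columns is only a first step, since other features can still act as proxies for protected characteristics:

```python
# Illustrative sketch only: removing protected-characteristic fields
# (e.g. ethnicity) from a feature record before model training, in the
# spirit of the Strategy's recommendation. Field names are hypothetical.

PROTECTED_ATTRIBUTES = {"ethnicity", "religion", "gender"}

def strip_protected(features: dict) -> dict:
    """Return a copy of the feature record without protected attributes."""
    return {k: v for k, v in features.items() if k not in PROTECTED_ATTRIBUTES}

record = {"income": 85000, "ethnicity": "X", "account_age_months": 24}
clean = strip_protected(record)
# 'ethnicity' is no longer present among the training features
```

In practice, bias audits go further than column removal, testing model outcomes across groups, because correlated features (such as location) can reintroduce the excluded information.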
With respect to protecting the fundamental human rights of users, the Strategy advocates for the development of an AI Ethics Assessment Framework that provides a structured approach to evaluating the ethical implications of AI projects prior to deployment. This framework will assess the moral and societal impacts of AI technologies throughout their entire lifecycle, offering a systematic process for identifying, analyzing, and addressing ethical considerations during the design, development, deployment, and use of AI systems.


In addition, the Strategy supports the formulation of a comprehensive set of National AI Principles to serve as transparent guidelines for all aspects of AI development, deployment, and use in Nigeria. It also calls for the establishment of a robust AI Governance and Regulatory System, led by NITDA, to provide clear guidance, enforce ethical standards, and promote responsible AI practices. Furthermore, the Strategy emphasizes the importance of transparent terms and guidelines for responsible AI operations, alongside a comprehensive risk management framework aimed at minimizing potential negative impacts arising from AI deployment and usage.


In line with the implementation plan outlined in the National Artificial Intelligence Strategy, NITDA recently announced that it is developing a framework to guide the responsible adoption of AI across key sectors, including governance, healthcare, education, and agriculture. According to NITDA’s director-general, this framework aims to ensure that AI technologies are deployed ethically, safely, and in ways that promote national development.
Other countries have introduced similar initiatives to promote responsible AI governance. For example, the United Kingdom has adopted a pro-innovation, sector-specific approach guided by a set of non-statutory principles: safety, security, transparency, accountability, and redress. Rather than enacting new legislation, the UK relies on existing regulators to interpret and implement these principles within their respective sectors. In contrast, the European Union's AI Act represents a risk-based, comprehensive, and legally binding framework that classifies AI systems by risk level — unacceptable, high, limited, or minimal — and imposes stringent requirements on high-risk applications. In developing its AI governance framework, Nigeria should adopt a model that promotes innovation while protecting users.
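The EU's tiered approach can be sketched as a simple classification structure. The example use cases and the obligation summaries below are illustrative simplifications, not a legal reading of the AI Act:

```python
# A rough, non-authoritative sketch of the EU AI Act's four risk tiers.
# The use-case-to-tier mapping is illustrative only.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "strict conformity requirements"
    LIMITED = "transparency obligations"
    MINIMAL = "no additional obligations"

EXAMPLES = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "credit scoring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    """Look up the (simplified) obligations attached to a use case's tier."""
    return EXAMPLES[use_case].value
```

The design point is that obligations scale with risk: a credit-scoring system attracts strict conformity requirements, while a spam filter attracts essentially none.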
As AI technologies continue to evolve, emerging developments such as Generative AI, AI agents, and Agentic AI highlight the need for governance frameworks that are both holistic and forward-looking. These frameworks should be adaptable enough to anticipate the rapid evolution of AI systems. Sector-specific regulations may also be necessary, allowing industries to address the unique challenges and risks associated with their operations.


For instance, in the financial technology (fintech) sector, AI is increasingly used for credit scoring, fraud detection, anti-money laundering, and customer service automation. Earlier this year, Flutterwave, Africa’s most valuable fintech company, reported significant improvements following the integration of AI into its operations. The company achieved a 60 percent reduction in transaction processing time, enhanced fraud detection accuracy, and strengthened compliance with regulatory mandates across multiple jurisdictions.
One key point is that data lies at the core of AI systems, serving as the foundation for model development, decision-making, and overall system performance. In the fintech sector, companies routinely collect and process payment, transaction, and behavioural information from users. While this data presents significant opportunities for AI-driven innovation, such as fraud detection, intelligent advisory solutions, and advanced credit scoring, it also raises critical concerns regarding privacy and data protection. Responsible use of such data requires robust safeguards to ensure that individuals’ personal and financial information is handled securely, ethically, and in compliance with applicable regulations.
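One common safeguard of the kind described above is pseudonymization combined with data minimisation: replacing direct identifiers with irreversible references and dropping fields the model does not need before data reaches an analytics pipeline. The record layout and salt below are hypothetical:

```python
# Illustrative sketch, assuming a hypothetical transaction record layout:
# pseudonymize the direct identifier and keep only the fields needed
# for analytics (data minimisation).
import hashlib

def pseudonymize(account_id: str, salt: str) -> str:
    """Replace a direct identifier with a salted hash before analytics use."""
    return hashlib.sha256((salt + account_id).encode()).hexdigest()[:16]

transaction = {"account_id": "NG-0012345", "amount": 25000, "merchant": "ACME"}
safe_record = {
    "account_ref": pseudonymize(transaction["account_id"], salt="demo-salt"),
    "amount": transaction["amount"],
    "merchant": transaction["merchant"],
    # other personal fields from the raw record are simply not copied over
}
```

Under data-protection regimes such as Nigeria's NDPA and the GDPR, pseudonymized data generally still counts as personal data, so this technique reduces exposure rather than eliminating compliance obligations.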
In developing a nuanced and context-sensitive approach to AI governance, Nigeria should draw on international best practices, including the OECD AI Principles (adopted by 47 countries) and the UNESCO Recommendation on the Ethics of Artificial Intelligence. These frameworks provide globally recognised guidance for responsible AI development, emphasizing transparency, accountability, inclusiveness, human-centered design, and, importantly, data privacy and protection. In 2024, the OECD updated its principles to address emerging challenges, particularly those associated with generative AI, highlighting the need for safety, privacy, intellectual property protection, and the integrity of information ecosystems.
As Nigeria moves forward in AI adoption across different sectors, aligning national policies with these principles will be essential to ensure that AI drives innovation while safeguarding the rights and trust of individuals.
