Recent developments in the United States and the European Union indicate a notable shift in how major economies are approaching artificial intelligence (AI) governance. These moves generally reflect an effort to ease regulatory burdens, streamline compliance, and create more flexible environments for AI development. The emerging trend has raised questions about how other regions, including Africa and Nigeria, may situate themselves within this evolving global landscape.
On November 19, the European Commission unveiled the Digital Omnibus package, which revisits elements of the GDPR, the ePrivacy Directive, and the AI Act. The initiative signals a move toward regulatory simplification and greater openness to data use for AI systems. Among the key components is a proposal to narrow the definition of personal data and expand permitted uses of pseudonymized and anonymized data for AI training.
The package also revises aspects of automated decision-making rules, postpones compliance timelines under the AI Act, and reduces certain consumer-facing obligations such as cookie banner requirements. The Commission states that these changes are intended to streamline obligations while maintaining high standards of safety and data protection. Under the proposal, the application of high-risk AI requirements would be tied to the availability of supporting tools and standards, and the implementation timeline could be extended by up to 16 months to ensure organisations have adequate guidance. Additional measures aim to improve access to high-quality data, consolidate EU data rules, and reduce compliance complexity for small and medium-sized companies.
Similarly, federal authorities in the United States are advancing a proposal aimed at eliminating what it describes as state-law obstruction of national AI policy. The proposal outlines an intention to establish a uniform national approach to AI regulation and reduce the regulatory differences that have emerged across U.S. states over the past three years, during which hundreds of AI-related bills have been introduced and many of them passed. These state-level measures address issues such as consumer protection, children’s safety, transparency obligations, and limitations on certain uses of AI.
The proposed federal order seeks to place a moratorium on such state-level laws, redirecting oversight to national structures. It further instructs the Federal Trade Commission to apply existing federal laws to AI models and establishes a task force led by the Department of Justice to examine state policies. The development reflects ongoing debates in the U.S. about balancing innovation incentives with protective regulation.
Together, these developments illustrate a wider transition in digital governance, particularly as it concerns artificial intelligence. Policymakers in these major economies are reassessing whether earlier regulatory models, often comprehensive and prescriptive, are suited to the rapid pace of AI evolution. The emerging regulatory direction emphasizes simplification, recalibration, and greater flexibility.
Countries outside the U.S. and EU are observing these shifts as they continue shaping their own regulatory paths. Countries such as India, which is still finalizing implementation of its Digital Personal Data Protection Act, are navigating similar questions about timelines, definitions, and the balance between innovation and oversight. Some jurisdictions may also opt to delay certain enforcement mechanisms while aligning with global discussions about AI standards.
Implications for Africa and Nigeria
Across Africa, governments are developing strategies to support AI adoption and digital transformation, and observers are noting how global regulatory movements may influence the continent’s emerging frameworks. As major economies adjust their approaches, African policymakers may assess how these changes intersect with local priorities, institutional capacities, and economic contexts.
In Nigeria, conversations around AI governance continue to evolve, with attention on issues such as data protection, digital industrial policy, and innovation ecosystems. The transformation underway in global regulatory centers may shape expectations about the types of frameworks that could become prevalent internationally. This, in turn, may contribute to ongoing debates about the balance between flexible rules that encourage innovation and safeguards that address social, human, and ethical considerations.
The developments in the U.S. and EU point to a phase in which digital regulation is characterized less by rigid, prescriptive requirements and more by continuous adjustment to technological and market dynamics. As countries evaluate their positions within this shifting environment, the global conversation on AI governance continues to expand, offering a range of approaches that different regions may study as they develop their own regulatory trajectories.