NHRC moves to regulate AI, partners tech firms to protect Nigerians from harm
March 21, 2025
Joy Agwunobi
The National Human Rights Commission (NHRC) has outlined plans to collaborate with technology companies to prevent the misuse of Artificial Intelligence (AI) and protect Nigerians from potential harm.
Tony Ojukwu, executive secretary of the NHRC, made this known at an AI Governance virtual event organised by the International Network for Corporate Social Responsibility (IN-CSR) in partnership with the United Nations Working Group and the National Information Technology Development Agency (NITDA).
This development comes in the wake of the African Union's (AU) endorsement of the Continental AI Strategy, which encourages AI adoption across public and private sectors among member states, including Nigeria. The AU had detailed its vision in a document published on its website on August 9, 2024.
Speaking at the workshop, Ojukwu emphasised that while AI offers numerous benefits, it must be managed with strong ethical principles to prevent negative consequences such as bias in algorithms, privacy violations, and human rights infringements.
He noted that, rather than viewing AI as a threat, the Commission sees it as an opportunity to expand its role in digital rights protection.
As part of its strategy, the NHRC will engage with technology firms to ensure that AI systems operate transparently. “This transparency will allow for independent audits, redress mechanisms, and accountability measures to prevent discrimination or harm,” Ojukwu stated.
He also highlighted plans to require tech companies to conduct human rights due diligence (HRDD) before deploying AI-driven technologies. This, he said, will involve thorough risk assessments to identify potential harms and ensure ethical compliance.
Despite the growing complexity of AI systems, Ojukwu stressed the importance of human oversight in their development and application. He assured participants that the NHRC will serve as a bridge between government institutions, private sector innovators, academia, and civil society groups in advancing AI governance.
He further outlined the Commission's role in partnering with international experts and rights advocates to establish AI standards that uphold human rights; engaging not only with large technology firms but also with the communities most vulnerable to AI-driven disruptions; and setting clear accountability measures for both public and private entities involved in AI development and deployment.
Also speaking at the event, Kashifu Inuwa Abdullahi, director-general of NITDA, who was represented by Emmanuel Edet of the agency's regulations and compliance unit, addressed concerns over AI-related risks. He stated that NITDA is working on developing diverse, high-quality local data sets to train AI models that are inclusive and free from the biases often present in foreign data.
Abdullahi revealed that the agency has partnered with research institutions, AI developers, and regulatory bodies to create ethical guidelines for data collection, fairness audits, and bias mitigation. He further disclosed that NITDA, in collaboration with an AI startup and other government agencies, is spearheading the development of Nigeria's first government-backed large language model (LLM).
This AI model, currently being trained in five low-resource Nigerian languages alongside accented English, is aimed at preserving Nigeria’s linguistic diversity in AI applications. According to Abdullahi, the initiative underscores the commitment to using locally sourced data to ensure that AI serves all segments of society fairly.
He reaffirmed NITDA’s dedication to enforcing transparency, accountability, and fairness in AI governance, ensuring that AI systems deployed in Nigeria align with ethical standards and serve the interests of all citizens.