Achieving the right balance: The national AI strategy and privacy protection
April 30, 2024
CHUMA AKANA
Chuma Akana is a privacy law expert and a technology law and policy advocate. He is the Founder of Innovation and Tech Lawyers Network. He is a regular speaker and writer on fintech, AI, emerging technologies, and global privacy laws. Chuma is a Tech, Law and Security Program Fellow at the American University, Washington College of Law.
Artificial Intelligence (AI) stands at the forefront of global technological advancement, promising transformative changes across various sectors. Recognising its potential, nations around the world are formulating national AI strategies to harness its benefits while addressing ethical, regulatory, and societal implications. The United States has embraced AI innovation with fervour, emphasising technological leadership, economic growth, and national security. Its AI strategy focuses on investment in research and development, talent cultivation, and industry collaboration. In contrast, Europe has adopted a more cautious approach, prioritising ethical AI development and safeguarding citizens’ rights. The European Commission’s AI strategy emphasises human-centric AI, ethical guidelines, and regulatory frameworks such as the General Data Protection Regulation (GDPR). Europe’s strategy is committed to balancing innovation with societal well-being and privacy protection.
Artificial Intelligence and Machine Learning can offer competitive advantages and efficiencies; however, it is important to promote responsible AI usage, focusing on risk identification, mitigation strategies, and consistent monitoring of AI applications. As Nigeria endeavours to formulate its National Artificial Intelligence (AI) Strategy, it faces the critical task of striking a balance between fostering AI innovation and managing the interplay of possible threats to security, privacy, and fairness. While fostering AI innovation, it is important to have robust regulatory frameworks that establish clear guidelines for AI development and deployment, with a focus on ethical considerations and privacy protection, as this will enhance public trust and confidence. Nigeria’s National AI Strategy should align closely with existing privacy protection laws, such as the Nigeria Data Protection Act (NDPA), implementing mechanisms for transparent data governance, informed consent, and accountability to safeguard citizens’ privacy rights.
Ethical data use: The National AI Strategy should prioritise ethical data use, ensuring that AI applications respect individuals’ privacy rights and adhere to the principles of the NDPA. For example, AI algorithms should be designed to minimise data collection and processing to the extent necessary for achieving predefined objectives, thus respecting the principle of data minimisation under the NDPA.
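To make the data minimisation principle concrete, a simple sketch is one where each declared processing purpose maps to the only fields an AI pipeline may retain. This is an illustration, not an NDPA-prescribed design; the purpose names and field names are hypothetical.

```python
# Hypothetical allow-list mapping each processing purpose to the only
# fields an AI pipeline may retain, illustrating data minimisation.
ALLOWED_FIELDS = {
    "fraud_detection": {"transaction_id", "amount", "timestamp"},
    "chat_support": {"message_text"},
}

def minimise(record: dict, purpose: str) -> dict:
    """Keep only the fields needed for the declared purpose."""
    allowed = ALLOWED_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

raw = {"transaction_id": "T1", "amount": 500, "timestamp": "2024-04-30",
       "name": "Ada Obi", "phone": "0800-000-0000"}
minimal = minimise(raw, "fraud_detection")  # name and phone are dropped
```

The point of the design is that fields not tied to a predefined objective never enter the AI system at all, rather than being filtered out later.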
Informed consent: The Nigeria Data Protection Act emphasises the importance of obtaining informed consent from data subjects before processing their personal data. The National AI Strategy should incorporate mechanisms to ensure that AI systems obtain explicit consent or rely on another lawful basis for processing personal data in accordance with the Act.
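One way such a mechanism might look in practice is a "consent gate" that blocks processing unless consent is recorded or another lawful basis is documented. The sketch below is illustrative only: the consent register, subject identifiers, and the list of bases (which broadly mirror those commonly recognised in data protection law) are assumptions, not the Act's prescribed implementation.

```python
# Hypothetical consent register keyed by (subject_id, purpose).
CONSENT_REGISTER = {
    ("user-42", "model_training"): True,
    ("user-43", "model_training"): False,
}

# Lawful bases broadly mirroring those found in data protection statutes;
# the exact NDPA wording should be checked against the Act itself.
LAWFUL_BASES = {"consent", "contract", "legal_obligation",
                "vital_interest", "public_interest", "legitimate_interest"}

def may_process(subject_id: str, purpose: str, basis: str = "consent") -> bool:
    """Allow processing only with recorded consent or another lawful basis."""
    if basis not in LAWFUL_BASES:
        return False
    if basis == "consent":
        return CONSENT_REGISTER.get((subject_id, purpose), False)
    return True  # other bases assumed documented elsewhere
```

The gate makes the lawful basis an explicit input to every processing decision, which also produces a natural audit trail.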
Data security and anonymisation: Given the sensitivity of personal data, the National AI Strategy should emphasise the importance of data security and anonymisation techniques in AI development. By implementing robust data protection measures, such as encryption and pseudonymisation, AI practitioners can mitigate the risk of unauthorised access or misuse of personal data, as mandated by the NDPA. Data controllers and processors are mandated to implement appropriate technical and organisational measures to ensure the security of personal data against unauthorised or unlawful processing and accidental loss.
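Pseudonymisation of the kind mentioned above can be sketched with a keyed hash: the direct identifier is replaced by a value that the key holder can reproduce (so records remain linkable) but that others cannot reverse. This is a minimal illustration using Python's standard library; the secret key and record fields are placeholders, and real deployments would manage keys outside source code.

```python
import hmac
import hashlib

# Hypothetical controller-held secret; in practice this would live in a
# key-management system, never in source code.
SECRET_KEY = b"controller-held-secret"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    The mapping is repeatable for the key holder, so records can still be
    linked, but it is not reversible without the key.
    """
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

record = {"name": "Ada Obi", "purchase": "airtime", "amount": 500}
safe_record = {**record, "name": pseudonymise(record["name"])}
```

Note that pseudonymised data generally remains personal data under data protection law, since re-identification is possible for the key holder; it reduces risk rather than removing the data from the Act's scope.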
Accountability and transparency: The Nigeria Data Protection Act imposes obligations on data controllers and processors to demonstrate accountability and transparency in their data processing activities. The National AI Strategy should promote similar principles, encouraging AI developers and deployers to uphold transparency and accountability standards throughout the AI lifecycle.
Moreover, with the emergence of large language models (LLMs) in Nigeria, managing the data used for training and processing requires careful attention to storage, transparency, and fairness. Data collected for these models is typically processed through sophisticated machine learning algorithms that derive patterns and insights from vast quantities of text and other media. The data is stored in secure databases with multiple layers of encryption and access controls to protect against unauthorised access and data breaches. To ensure transparency, developers of these language models may implement robust data governance policies, including clear documentation of the data sources, collection methods, and processing techniques. This approach allows stakeholders to trace the journey of data from collection to final output, ensuring accountability at each step. Additionally, using open-source tools and frameworks can foster greater community oversight and trust in the AI development process.
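The documentation of data sources, collection methods, and processing techniques described above can be sketched as a simple provenance register published alongside a model. The field names below are assumptions for illustration, not a standard schema.

```python
from dataclasses import dataclass, asdict

# Illustrative record for documenting a training-data source; the fields
# are hypothetical, not a prescribed governance format.
@dataclass
class DataSourceRecord:
    source: str             # where the data came from
    collection_method: str  # how it was gathered
    processing: str         # how it is transformed before training
    lawful_basis: str       # basis relied on under the NDPA

register = [
    DataSourceRecord("public news archive", "web crawl",
                     "deduplicated, personal identifiers stripped",
                     "legitimate_interest"),
]

# Exporting the register as plain dicts makes it easy to publish
# alongside the model for stakeholder review.
published = [asdict(r) for r in register]
```

Keeping such a register per data source is what allows stakeholders to trace data from collection to final output, as the governance policies above envisage.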
Fairness in AI processing is achieved through rigorous bias detection and mitigation strategies. This includes evaluating the data for any inherent biases related to gender, ethnicity, or other demographic factors, and implementing algorithms designed to neutralise such biases. Developers might also engage in regular audits of the language models to ensure they maintain ethical standards, incorporating feedback from diverse user groups to address any emerging concerns.
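A basic form of the bias detection described above is to compare the rate of favourable outcomes a model produces across demographic groups. The sketch below computes per-group selection rates and their ratio; the 0.8 threshold mentioned in the comment is the widely cited "four-fifths rule" of thumb, one possible flagging criterion rather than a legal standard, and the decision data is invented for illustration.

```python
from collections import defaultdict

def selection_rates(outcomes):
    """Rate of positive outcomes per demographic group.

    `outcomes` is a list of (group, approved) pairs, `approved` a bool.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, approved in outcomes:
        totals[group] += 1
        positives[group] += int(approved)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest group selection rate.

    Values well below 1.0 (commonly flagged below 0.8, the
    'four-fifths rule') suggest some groups may be disadvantaged.
    """
    return min(rates.values()) / max(rates.values())

# Invented example: group A is approved 3 times in 4, group B once in 4.
decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]
rates = selection_rates(decisions)
ratio = disparate_impact(rates)  # well below 0.8: worth auditing
```

A regular audit would run a check like this on fresh model outputs, so drift toward biased behaviour is caught early rather than after harm occurs.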
Furthermore, promoting diversity among AI researchers and practitioners is crucial to reducing bias. A more inclusive team can better understand and correct biases that may inadvertently arise during the development process. By embedding fairness and transparency into every stage of AI model development, Nigeria can build large language models that are not only effective but also just and equitable.
In essence, to tackle the challenges of AI governance, Nigeria’s National AI Strategy must focus on protecting citizens’ privacy rights as mandated by the Nigeria Data Protection Act. By adhering to the NDPA’s guidelines for lawful data processing, data subject rights, data security, and accountability, Nigeria can create a supportive environment for AI innovation that also preserves key privacy principles. Emphasising ethical AI development within this robust regulatory framework helps Nigeria navigate the complex landscape of AI governance. This approach not only ensures legal compliance but also fosters public trust in the country’s AI ecosystem. By balancing innovation with privacy protection, Nigeria can tap into AI’s potential while guiding its digital future in a responsible and ethical direction.
- business a.m. commits to publishing a diversity of views, opinions and comments. It, therefore, welcomes your reaction to this and any of our articles via email: comment@businessamlive.com