Joy Agwunobi
The National Information Technology Development Agency (NITDA) has raised a fresh cybersecurity alert, cautioning Nigerians about newly identified weaknesses in advanced ChatGPT models that could expose users to data-leakage and manipulation attacks.
The advisory was issued through the agency’s Computer Emergency Readiness and Response Team (CERRT.NG), amid the rising adoption of AI tools across the country for personal, academic, corporate, and government-related tasks.
The warning comes as the use of AI-powered platforms continues to surge nationwide, with millions relying on conversational models for content generation, research support, coding, and workflow automation. According to NITDA, new findings from cybersecurity researchers reveal seven critical vulnerabilities affecting OpenAI’s GPT-4o and GPT-5 model family, exposing users to a sophisticated form of exploitation known as indirect prompt injection.
How the attacks work
The agency explained that attackers can hide malicious instructions inside everyday digital content including website text, comments, and even URLs. When ChatGPT analyses such material during its browsing, search, or summarisation functions, it may unintentionally execute the hidden commands.
“By embedding hidden instructions in webpages, comments, or crafted URLs, attackers can cause ChatGPT to execute unintended commands simply through normal browsing, summarisation, or search actions,” the advisory noted.
Some of the identified weaknesses allow bad actors to bypass model safety filters by concealing harmful instructions behind trusted domains. Others exploit markdown rendering gaps, allowing malicious text to blend seamlessly into regular content.
In more severe scenarios, the researchers found that attackers could manipulate the model’s “memory,” leading ChatGPT to store harmful prompts that then influence future conversations, a phenomenon known as memory poisoning.
While OpenAI has issued partial fixes, NITDA warns that large language models still lack the ability to fully differentiate legitimate user intent from hidden or deceptive instructions.
Potential risks for Nigerian users
The agency outlined several risks that could arise from exploiting these vulnerabilities, including: the model performing unauthorised actions, exposure of private or sensitive user information, altered, misleading, or manipulated responses,and long-term behavioural distortion caused by poisoned model memory
CERRT.NG cautioned that users may unknowingly trigger these attacks without clicking any links, as the issue can arise simply when ChatGPT processes online content containing concealed instructions.
Recommended safety measures
In response, NITDA is urging individuals, private organisations, and public institutions to strengthen their safeguards when interacting with AI platforms.
Key recommendations include:Restricting or disabling browsing and auto-summarisation of unverified websites within enterprise systems, allowing browsing and memory functions only when required, as well as ensuring deployed GPT-4o and GPT-5 systems are updated regularly to incorporate available security patches
This latest notice follows an earlier advisory issued in August, when the agency alerted the public to a critical security flaw in embedded SIM (eSIM) technology used across billions of devices globally. The vulnerability, traced to the GSMA TS 48 Generic Test Profile (version 6.0 and earlier), affected the eUICC chips that power smartphones, tablets, wearable devices and IoT systems. NITDA warned that attackers could exploit the flaw to install rogue applets, extract cryptographic keys, or clone eSIM profiles , risks that could enable persistent device control, intercepted communication, or covert backdoor access.
With the accelerating integration of digital systems and AI tools into daily life, NITDA said it will continue to monitor emerging threats and issue guidance aimed at protecting Nigerian users and institutions from evolving cyber risks.