When Asahr launched its glossy new TalentMatch AI, the marketing pitch was delivered like small chops at a Lagos wedding — hot, endless trays, everyone nodding in approval. The platform promised to be the saviour of HR: scan CVs, analyse video interviews, even judge the grammar in your cover letters. Ten years of hiring data were fed into it — job descriptions, performance reviews, candidate demographics. On paper, it sounded perfect.
But ask any Nigerian how fairness plays out at a family meeting: the goat that gets slaughtered knows the truth. In this case, the “goats” were the underrepresented candidates. The AI began favouring certain universities and polished Queen’s English accents while sidelining talented people whose paths looked less “prestigious.”
This isn’t just an anecdote. The 2018 Gender Shades study from the MIT Media Lab showed commercial facial-analysis systems misclassifying darker-skinned women up to 34.7 percent of the time, compared with less than one percent for lighter-skinned men. A 2022 UK study found that AI hiring tools were twice as likely to reject applicants with non-standard English phrasing, not because of competence, but because the models had absorbed old biases. Efficiency at scale, yes; fairness, quietly thrown out.
Asahr’s clients in Europe and North America quickly noticed. Civil society groups criticised the system’s lack of transparency. Regulators opened files, sharpening their pencils like WAEC invigilators. GDPR’s Article 22 restricts decisions based solely on automated processing that produce legal or similarly significant effects, and the new EU AI Act classifies AI used in recruitment and candidate selection as “high-risk.” Translation: fines that can make a CFO start calculating in tears.
And here’s the kicker: even in Nigeria, the NDPR says the same thing. Article 2.3(2)(d) gives people the right not to be subject to decisions based solely on automated processing, including profiling, if it significantly affects them. The only exceptions? If it’s necessary for a contract, if there’s explicit consent, or if the law authorises it. And even then, controllers must give people safeguards: the right to demand human review, express their own view, and challenge the decision. Put simply — neither GDPR nor NDPR allows companies to just blame the robot.
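For the engineers reading along, that legal logic is mechanical enough to write down. Below is a minimal sketch in Python (not Asahr’s code, not legal advice, and every field name is hypothetical) of the gate both regimes describe: a solely automated decision with significant effect needs both a lawful basis and the safeguards, or it needs a human in the loop.

```python
from dataclasses import dataclass
from enum import Enum, auto


class LawfulBasis(Enum):
    """The three exceptions both GDPR Art. 22 and the NDPR recognise."""
    CONTRACT_NECESSITY = auto()
    EXPLICIT_CONSENT = auto()
    LEGAL_AUTHORISATION = auto()


@dataclass
class HiringDecision:
    candidate_id: str
    solely_automated: bool               # no meaningful human involvement
    significant_effect: bool             # e.g. rejection from a job
    lawful_basis: LawfulBasis | None = None
    human_review_offered: bool = False   # candidate can demand a human look
    candidate_can_contest: bool = False  # candidate can express a view and challenge


def is_compliant(decision: HiringDecision) -> bool:
    """Rough gate for the automated-decision rules described above."""
    # The restriction only bites when the decision is both fully
    # automated and significant for the candidate.
    if not (decision.solely_automated and decision.significant_effect):
        return True
    # No exception applies: a human must make, or meaningfully review, the call.
    if decision.lawful_basis is None:
        return False
    # Even under an exception, the safeguards are mandatory.
    return decision.human_review_offered and decision.candidate_can_contest
```

Feed a bare rejection through it (solely automated, significant effect, nothing else set) and it returns False: the “don’t blame the robot” rule, expressed as a unit test.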
Now, picture TalentMatch tested here: “What are your strengths?” Candidate: “Omo, I dey deliver sharp-sharp.” The algorithm would likely crash or label them “unsuitable for global synergy.” Yet that same candidate might be the one who can actually solve problems faster than a Silicon Valley engineer sipping kale juice. Local realities rarely fit neatly into imported datasets.
Governance was supposed to be Asahr’s shield. The company swore it had conducted a DPIA (a data protection impact assessment), consulted its ethics board, and built a risk plan. But external auditors discovered sloppy version control, minimal human oversight, and stakeholders left out of the design stage. It’s like cooking egusi soup without tasting it, then inviting the whole village to eat.
And if this happened in Abuja? The Senate Committee on Technology would already have “invited” Asahr for a public grilling. A press release would promise “ongoing investigations,” while Nigerians on Twitter/X would unleash the real punishment: “Even robot dey do tribalism.” Once the memes start, your brand equity evaporates faster than old PHCN light on a Saturday night.
The lesson here is bigger than one company. A generator that powers only the big man’s mansion while the rest of the street remains dark is not progress — it is a hierarchy disguised as innovation. AI in hiring has immense potential, but without transparency, fairness, and genuine oversight, it simply industrialises discrimination.
So what should have been done? Real human oversight, not rubber-stamped ethics minutes. Proper consultation with affected groups before deployment. Rigorous version control and ongoing monitoring. Above all, humility: AI is not magic. Feed it biased data and you simply get biased decisions at lightning speed.
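What might real oversight look like in practice? Here is one hedged sketch in Python, with a hypothetical log path and thresholds, combining two of those habits: every decision is logged against a pinned model version (so auditors never meet the sloppy version control Asahr’s did), and borderline scores are routed to a human instead of being decided by the machine alone.

```python
import json
import time
from pathlib import Path

AUDIT_LOG = Path("talentmatch_audit.jsonl")   # hypothetical audit-log location


def record_decision(candidate_id: str, score: float, model_version: str,
                    threshold: float = 0.5, review_band: float = 0.1) -> str:
    """Log every automated decision against a pinned model version, and
    send borderline scores to a human reviewer rather than the model."""
    if abs(score - threshold) < review_band:
        outcome = "human_review"   # a person, not the robot, makes this call
    elif score >= threshold:
        outcome = "advance"
    else:
        outcome = "reject"
    entry = {"ts": time.time(), "candidate": candidate_id,
             "model": model_version, "score": score, "outcome": outcome}
    with AUDIT_LOG.open("a") as fh:   # append-only trail for the auditors
        fh.write(json.dumps(entry) + "\n")
    return outcome
```

The review band starts deliberately wide; you narrow it only when the monitoring logs show the model has earned it.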
For Nigerian organisations tempted to import or build similar tools, the warning is clear. Don’t just copy-paste foreign platforms and trust the marketing deck. Test them on local context — throw in CVs from UniLag, ABU Zaria, and Covenant. Add slang, accents, and real career trajectories. If the model chokes, fix it before it embarrasses you. Because here, when technology fails, we don’t file quiet complaints — we roast. And once the crowd laughs at your brand, regaining trust is harder than passing JAMB without expo.
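That local test does not need a research grant. Here is a sketch, assuming a hypothetical panel of matched CVs that differ only in the university line, of the simplest smoke test there is: compare shortlisting rates per group against the well-known “four-fifths” rule of thumb from US employment-selection guidelines.

```python
from collections import defaultdict


def selection_rates(results: list[tuple[str, bool]]) -> dict[str, float]:
    """results: (group_label, was_shortlisted) pairs from the test panel."""
    tally = defaultdict(lambda: [0, 0])        # group -> [shortlisted, total]
    for group, shortlisted in results:
        tally[group][0] += int(shortlisted)
        tally[group][1] += 1
    return {group: hits / total for group, (hits, total) in tally.items()}


def passes_four_fifths(rates: dict[str, float]) -> bool:
    """Four-fifths rule of thumb: every group's selection rate should be
    at least 80% of the best-performing group's rate."""
    best = max(rates.values())
    return all(rate >= 0.8 * best for rate in rates.values())


# Hypothetical panel: identical skills, only the university line differs.
panel = [("UniLag", True), ("UniLag", False), ("ABU Zaria", False),
         ("ABU Zaria", False), ("Covenant", True), ("Covenant", True)]
rates = selection_rates(panel)
if not passes_four_fifths(rates):
    print("Model chokes on local CVs:", rates)  # fix it before the roasting starts
```

Six CVs will not prove fairness, but a few hundred matched pairs run through exactly this check will tell you quickly whether the university line on a CV is quietly doing the hiring.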
Asahr’s saga is a global reminder. Whether in Brussels or Abuja, the rules are clear: if AI is deciding who gets jobs, it must be transparent, explainable, and accountable. Otherwise, the robots won’t just take jobs — they’ll reserve the best ones for people with the “right” names, accents, and diplomas. That isn’t innovation; it’s bias with Wi-Fi. And in 2025, that’s the fastest way to turn your shiny platform into an expensive generator that lights only the penthouse while everyone else stares into darkness.