Have we left human ethics behind?

Michael Irene is a data and information governance practitioner based in London, United Kingdom. He is also a Fellow of the Higher Education Academy, UK, and can be reached via moshoke@yahoo.com; Twitter: @moshoke
February 25, 2025
If I had told you ten years ago that companies would adopt artificial intelligence at such massive scale, you'd likely have dismissed the idea. Yet here we are, in a world where machine learning and automation underpin nearly every aspect of modern life.
The truth is, AI didn’t appear overnight. Machine learning has been shaping our daily lives for decades. Long before we had sophisticated algorithms, the first machine was man — the human brain, processing data, identifying patterns, and making decisions based on experience.
Take, for example, a school setting. When a child is enrolled, teachers and administrators collect data — name, gender, age, learning abilities, and even medical conditions. This information helps shape their educational experience, ensuring they receive the right support. In essence, this is a rudimentary form of machine learning — gathering inputs to create a predictive model that informs future actions.
However, what separates human decision-making from AI-driven processes is the ethical lens through which data is used. In education, the welfare of the child is at the heart of every decision. There are checks and balances, ensuring that information is used responsibly, with safeguarding measures in place.
Fast forward to today, and we have outsourced vast quantities of data processing to machines. AI systems now sort, analyse, and make decisions at a scale humans could never achieve. Businesses have embraced AI for efficiency, cost-cutting, and automation — often without fully understanding the implications of entrusting machines with data-driven decision-making.
This shift has introduced a critical dilemma. Have we left behind the ethical responsibility that once guided data usage? Have we become too detached from the consequences of machine-driven decisions?
For decades, data belonged to humans. People collected it, interpreted it, and made judgements based on context, emotion, and moral responsibility. Now, algorithms decide who gets a loan, which job candidates make the shortlist, or even how criminal sentences are determined. These models operate with cold precision, often lacking the ability to factor in nuance, fairness, or human dignity.
The problem isn’t AI itself — it’s how we integrate it into society. We treat AI as an all-knowing, unbiased decision-maker, yet we fail to acknowledge that it is only as good as the data we feed it. And that data? It reflects the biases, prejudices, and systemic flaws ingrained in society.
Where humans once questioned, debated, and refined decisions, we now often accept algorithmic outputs as absolute truths. The assumption that AI is infallible is not only flawed but dangerous.
The challenge ahead is not to stop AI from evolving, but to ensure we embed ethical principles into its foundations. Human oversight must remain a priority, ensuring that AI enhances decision-making rather than replacing human judgement. Transparency is non-negotiable. Companies using AI must be clear about how models function, what data they use, and where biases may exist. Regulation must also catch up, with stronger AI governance to enforce fairness, accountability, and ethical integrity.
Technology should serve humanity, not the other way around. The real question is whether we are designing AI to reflect human values, or allowing machines to redefine them. The answer to that will shape our future more than any algorithm ever could.