Joy Agwunobi
After a year defined by unprecedented enthusiasm around artificial intelligence, 2026 may usher in a more measured phase for the technology, as governments, businesses and societies begin to confront the contradictions emerging from its rapid adoption, according to insights from the World Economic Forum (WEF).
In its analysis titled “AI paradoxes: Five contradictions to watch in 2026 and why AI’s future isn’t straightforward,” the Forum suggests that while AI continues to transform sectors such as healthcare, manufacturing and logistics, its long-term economic, social and environmental impacts remain uncertain. Despite massive investments and soaring expectations, returns are uneven, and the trade-offs associated with deploying AI at scale are becoming increasingly visible.
The WEF notes that AI’s promise is undeniably transformative, but its real-world deployment is constrained by complex choices that range from widening inequality and rising energy consumption to labour market disruption and geopolitical tensions. Even as concerns grow over a potential “AI bubble” amid global economic uncertainty, enthusiasm for adopting the technology remains strong, with staying competitive in the AI race seen as a strategic priority for 2026 across both advanced and emerging economies.
As AI evolves at speed, disrupting established industries and enabling entirely new business models, experts cited by the Forum are urging policymakers and corporate leaders to focus on systems that deliver sustainable, long-term value rather than short-term gains. However, an increasingly fragmented global political environment is making coordinated international approaches more difficult to achieve.
Against this backdrop, the Forum identifies five major paradoxes shaping the AI landscape in 2026: tensions that reflect not only the capabilities of the technology itself, but also the choices and behaviours of the humans designing, deploying and regulating it.
Jobs lost, jobs gained, but not evenly
One of the most persistent questions surrounding AI is whether it will eliminate jobs or create new opportunities. According to the WEF’s Future of Jobs Report 2025, the answer appears to be both.
Based on a survey of more than 1,000 leading global employers, the report projects that between 2025 and 2030, around 170 million new roles will be created globally, while 92 million existing jobs will be displaced, resulting in a net gain of 78 million jobs. However, the transition is expected to be disruptive.
Analytical thinking has emerged as the most in-demand skill, with seven out of ten employers identifying it as essential. It is followed by resilience, flexibility and agility, and by leadership and social influence, capabilities that remain difficult to automate. While many employers are planning to restructure their operations in response to AI, nearly two-thirds say they intend to hire workers with specialised AI skills, even as 40 percent anticipate reducing headcount in areas where automation can replace human labour.
The Forum estimates that close to 40 percent of the skills currently required in the workplace will change over the next five years. While demand for technology-focused roles is growing rapidly, frontline occupations, including farm work, construction, delivery services, nursing, teaching and social work, are also expected to see significant expansion.
Further analysis in the WEF-backed paper “Jobs of Tomorrow: Technology and the Future of the World’s Largest Workforces” points to wide variations across sectors. While some tasks are being fully automated, others are being enhanced by AI, particularly those involving creativity, complex problem-solving and interpersonal interaction. Counterintuitively, the Forum suggests that the widespread adoption of AI may increase demand for distinctly human skills rather than diminish it, provided workers can transition effectively into new roles.
Productivity gains that initially slow companies down
Another contradiction lies in AI’s relationship with productivity. While AI is often promoted as a tool for efficiency, evidence suggests that its benefits are not always immediate.
Research cited by the Forum from MIT Sloan shows that manufacturing firms adopting AI frequently experience short-term productivity declines before seeing improvements. This phenomenon, known as the “productivity paradox,” reflects an adjustment period in which new digital tools are poorly aligned with legacy systems, data infrastructure and existing workflows.
This pattern, described as an “AI adoption J-curve,” highlights the importance of complementary investments in training, process redesign and organisational change. In many cases, AI systems require more human oversight than initially expected, particularly as AI agents begin to plan and coordinate tasks across multiple systems.
The Forum’s AI Agents in Action report notes that while these agents can accelerate decision-making and execution, they also introduce new monitoring and governance burdens. Over time, however, organisations that successfully navigate this transition tend to outperform peers that fail to adopt AI, both in productivity and market share.
In knowledge-based sectors, the picture is more mixed. A study from the MIT Media Lab found that 95 percent of organisations using AI reported no measurable financial returns, with some workers complaining that low-quality AI-generated outputs, often referred to as “AI slop”, are creating additional work rather than reducing it. McKinsey research suggests this may be because many organisations are still experimenting with AI in pilot phases, and only a small group has successfully scaled the technology to deliver tangible benefits.
An information flood that may reward human authenticity
Generative AI’s ability to produce human-like text, images, audio and video has led to an explosion of content online, raising concerns about quality, trust and misinformation. Some estimates now suggest that AI-generated articles may already outnumber those written by humans.
The Forum warns that this surge risks flooding the digital space with mediocre or misleading material, while enabling a sharp rise in deepfakes. Projections indicate that the number of deepfake videos shared on content platforms could reach eight million in 2025, up from just 500,000 in 2023. Misinformation and disinformation were ranked among the top global risks for 2025 in the WEF’s Global Risks Report, with studies showing that humans struggle to reliably detect high-quality deepfakes.
As the line between real and synthetic content becomes increasingly blurred, some analysts believe trust itself may become a scarce and valuable commodity. The Forum suggests that credible, transparent and human-verified content could command a premium in an environment saturated with AI-generated material.
Its paper “The Intervention Journey: A Roadmap to Effective Digital Safety Measures” argues that trust online must be actively built through accountability, clear authenticity signals and meaningful human oversight, rather than assumed as a by-product of technological progress.
A generation caught between opportunity and anxiety
Younger people, often described as digital natives, have an especially complex relationship with AI. While surveys show that nearly half of Gen Z respondents use generative AI tools weekly, a significant proportion report anxiety about the technology’s impact on their thinking, creativity and future prospects.
Research from MIT suggests that excessive reliance on AI could be associated with reduced cognitive engagement, weaker memory retention and diminished originality. At the same time, many young people are increasingly concerned about AI’s environmental footprint, including its water and energy use and the environmental costs of mining critical minerals for data-centre infrastructure.
The WEF’s New Economy Skills report highlights a further challenge: many traditional entry-level roles are being reshaped or eliminated by automation, while fast-growing jobs increasingly demand experience and specialised skills from the outset. This leaves younger workers expected to be “AI-ready” without clear pathways to acquire experience on the job.
The rise of autonomous AI agents compounds this issue. As these systems begin to perform tasks once handled by junior employees, questions arise about how future workers will develop the judgement, context and professional confidence traditionally gained through early-career roles.
Powering AI without overloading the planet
Perhaps the most pressing paradox concerns energy. AI’s rapid growth is driving a rise in electricity demand, with US data centres alone projected to consume 8.6 percent of the country’s total power by 2035. Globally, data centres used around 415 terawatt-hours of electricity in 2024, a figure the International Energy Agency expects to more than double by 2030.
Yet AI also holds significant potential to improve energy systems. The technology can enhance renewable energy forecasting, balance power grids, optimise building efficiency and enable more flexible demand that aligns with variable solar and wind output.
This dual role, the WEF notes, positions AI as both a strain on energy systems and a possible solution to their limitations. Discussions at COP30 around the “twin transition” linking digital transformation with the energy transition highlighted the need for AI growth to actively support clean energy deployment rather than simply increase consumption.
The Forum’s report “From Paradox to Progress: A Net-Positive AI Energy Framework” argues that managing AI’s energy impact is no longer a future challenge but an immediate innovation priority. By aligning AI development with climate, economic and energy goals, the Forum suggests organisations can turn responsible design into a competitive advantage.