Joy Agwunobi

As artificial intelligence (AI) becomes increasingly embedded in modern education and the workplace, a new global report has emerged to guide higher education institutions through the urgent task of reimagining student assessments.
The report, titled “The Next Era of Assessment: A Global Review of AI in Assessment Design,” provides the first comprehensive framework for redesigning academic assessments to reflect the realities and possibilities of an AI-driven world.
Released by the Digital Education Council in collaboration with Pearson, the report draws insights from 101 global case studies across higher education institutions. It introduces 14 practical methodologies for integrating AI into assignments—ranging from essays and presentations to exams and course-based projects—each mapped to relevant learning outcomes and AI competencies such as prompt design, output evaluation, and AI ethics.
The report responds to growing concerns within academia about how to adapt assessment models to a rapidly changing technological environment. As AI tools like chatbots, large language models, and automated grading systems become more accessible, educators are increasingly challenged to maintain assessment integrity while preparing students for the workforce of the future.
“As AI reshapes work and learning, faculty are increasingly asking for a practical methodology they can trust to guide assignment design,” the report noted, adding: “They want to move beyond academic integrity concerns and toward assignments that give students hands-on experience using, critiquing, and collaborating with AI.”
According to the Digital Education Council’s Global Faculty Survey (2025), 54 percent of faculty believe that current assessments require significant change. In response, the report categorises AI-integrated assessment innovation into two dominant approaches:
- AI to Enhance Traditional Assessment – where AI is used to support core subject learning.
- AI as the Key Object of Study – where students are explicitly taught to understand, evaluate, and use AI tools ethically and effectively.
The methodologies presented aim to turn assessments into a training ground for practical AI fluency—an essential competency in today’s workforce. From AI-guided self-assessments to prompt engineering and ethics-based critiques, the report showcases how institutions are embedding real-world relevance into their assessment strategies.
To ensure this transformation is sustainable, the report introduces the concept of “AI-resilience” as a baseline design principle. Rather than relying solely on students’ compliance with academic integrity policies, AI-resilient assessments incorporate structural safeguards to minimise AI misuse and preserve authentic learning outcomes.
“One in two faculty say current assignments should be redesigned to be more AI-resistant,” the report noted, underscoring the urgency for structural reform.
The report outlines an end-to-end model for understanding the role of AI in higher education assessment, structured around five core stages: setting learning outcomes, curriculum planning, assessment development, assessment delivery, and feedback and review. Each stage, according to the report, can be evaluated through two key lenses: what is now possible because of AI, and what must be adapted in response to AI.
Under the first lens, the report highlights that AI is now capable of analysing large datasets—including labour market intelligence and skills frameworks—to help identify existing skills gaps and inform the development of relevant, up-to-date learning outcomes.
It further notes that by automating routine academic tasks, AI enables students to dedicate more time to higher-order cognitive activities, thereby allowing assessments to focus on critical and complex skill development.
The report indicates that AI can now generate or recommend curriculum maps and sequencing aligned with intended learning objectives. It can also assist in the creation of personalised learning pathways by drawing on student profiles and learning analytics. In relation to assessment design, AI is cited as being capable of producing assessment materials such as quizzes, case studies, and rubrics.
The report adds that AI may also serve as a tool for writing support, simulation exercises, or guided reflection, contributing to more authentic, real-world-oriented learning experiences.
In the area of assessment delivery, the report observes that AI offers real-time feedback to guide student improvement and can be used to proctor or monitor exams. It also notes the potential of AI to enhance oral or scenario-based assessments by enabling simulated “role-play” or live Q&A formats, allowing students to demonstrate applied skills in dynamic environments.
At the feedback and review stage, the report states that AI can assist with grading, provide personalised student feedback, conduct large-scale analysis of assessment data, and generate performance summaries to support instructional improvement.
On what must be adapted in response to AI, the report notes that assessments should incorporate AI-related competencies, including the evaluation of AI outputs and the responsible use of AI tools. It emphasises that learning outcomes should differentiate between skills that learners must develop independently and those that can be augmented through AI.
The report calls for curriculum planning to include clear policies on when and how AI tools may be used during assessments. It suggests that institutions should provide structured opportunities for students to engage with AI tools in ways that are ethical, critical, and effective—ensuring that both human and AI-related capabilities are developed concurrently.
Further, the report advises that assessments be restructured to discourage excessive dependence on AI. It stresses the importance of moving away from output-focused assignments and toward tasks that assess student reasoning and thought processes. Rubrics, the report notes, should be updated to reward originality, critical analysis, and thoughtful application of AI tools.
The report also underscores the need for educators to clearly communicate expectations regarding AI use for each assessment. In some instances, it recommends the incorporation of synchronous or in-class activities to reinforce academic integrity, as these formats allow instructors to observe students’ work in real time. The report further proposes that delivery methods be revised to capture the process by which students complete their assignments—not merely the final outcomes.
Finally, the report urges institutions to regularly review and update their assessment strategies. It stresses that as AI capabilities continue to evolve, assessments must also adapt to remain valid, robust, and resilient—guarding against the risk of students outsourcing essential learning to machines.
One of the report’s key recommendations is to eliminate opportunities for inappropriate AI use not through surveillance, but through intentional assessment design. It advocates for a shift from asynchronous tasks—like take-home essays or online quizzes—to synchronous formats that inherently restrict AI interference.
“Supervised exams, oral presentations, classroom discussions, and in-class writing workshops are structurally resistant to AI use,” the report explained. “These formats reduce AI misuse not by monitoring, but by designing out the opportunity.”
However, the report also acknowledges that not all assessments can or should occur in tightly controlled environments. For ongoing, formative assessments, instructors are encouraged to design around AI’s limitations, using techniques like contextualised application tasks and process documentation where students explain their steps and rationale throughout the assignment.
“When allowing students to use AI in assessments, instructors must carefully reconsider where its use is appropriate and where it must be restricted, to ensure that AI supports—rather than undermines—the intended learning outcomes,” the report warned.
Commenting on the findings, Ebrahim Mathews, senior vice president of Pearson International Higher Education, emphasised the report’s role in aligning education with workforce needs, stating: “As a trusted partner to educators and employers worldwide, Pearson is enhancing assessments to meet a growing demand in the age of AI. This report offers a framework that will help students build and prove the proficiency needed to enter the workforce and close the global skills gap.”
Danny Bielik, president of the Digital Education Council, also emphasised the urgency for immediate action in education systems: “Education providers around the world are calling for practical changes they can make today to their teaching and assessment. They can’t wait, because AI is in their classrooms here and now.”
Similarly, Alessandro Di Lullo, chief executive officer of the Digital Education Council, described the report as a unifying blueprint, stating, “This report turns fragmented experimentation into a global roadmap. It helps institutions redesign assessment for both rigour and relevance.”