<p>This document provides guidance to Irish higher education institutions on designing assessment practices that remain valid, credible, and educationally defensible in the presence of generative artificial intelligence. It responds to the structural disruption posed by large language models and related tools, which can now generate fluent text, code, and other artefacts at negligible cost, undermining assessment models that rely on take-home written products as primary evidence of learning.</p>

<p>Rejecting the pursuit of ‘AI-proof’ assessment, the guidance advances the concept of AI-resilient assessment: approaches that acknowledge the permanence of AI in academic and professional life while preserving the integrity of standards, the reliability of judgement, and the public trust placed in higher education awards. Resilience is framed not as technical containment, but as the capacity of assessment systems to adapt while keeping learning, reasoning, and responsibility visible.</p>

<p>The document outlines design principles that shift assessment from polished outputs towards evidential traces of thinking and judgement. It emphasises contextualisation, process visibility, interaction, and the combination of modes over time, recognising both the strengths and the limits of these strategies in an AI-saturated environment. Particular attention is paid to the risks of detection-led integrity approaches, including false accusations, bias, and erosion of student trust, and to the pedagogical costs of investing in technological countermeasures rather than educational design.</p>

<p>A range of AI-resilient approaches is examined in detail, including invigilated and case-based examinations, scaffolded and contextualised written work, oral and dialogic assessment, performance-based and community-engaged assessment, and structured assessment in partnership with AI to develop critical AI literacy. Throughout, the guidance stresses that authenticity arises from the ability to probe reasoning and adapt to challenge, rather than from superficial markers of originality.</p>

<p>The document also addresses institutional implementation, highlighting the resource, workload, accessibility, and equity implications of assessment redesign. It argues that AI-resilient assessment cannot be achieved through individual staff effort alone, but requires institutional investment in professional development, quality assurance, workload models, and inclusive design, alongside clear expectations for responsible AI use and disclosure.</p>

<p>This guidance is intended for educators, academic leaders, assessment designers, quality assurance units, and policymakers seeking to sustain credible assessment in higher education as generative AI becomes a routine feature of learning and professional practice. It should be read alongside the HEA’s National Policy Framework and associated resources hosted on the HEA Generative AI Resource Portal.</p>
DOI: 10.82110/z7md-rm66