AI-generated text-to-image tools are rapidly entering educational settings, yet little is known about how their visual biases intersect with goals of equity, inclusion, and critical AI literacy. This systematic literature review maps and analyzes empirical studies that examine bias and representation in educational uses of AI-generated text-to-image tools. Following PRISMA guidelines, we identified 31 peer-reviewed studies published between 2023 and 2025 across K-12, higher education, and professional learning contexts. We used a six-part analytic framework (gender; race, ethnicity, and socioeconomic status; culture and religion; age; body and (dis)ability; and content) to code how studies conceptualized and investigated bias. Across the corpus, biased representation was pervasive: images frequently centered white, male, Western, thin, and non-disabled figures, while diversity related to age, body, and ability was largely overlooked. Most studies relied on image audits and qualitative methods, with few experimental or intervention-based designs. This review is the first to synthesize how educational research conceptualizes, measures, and responds to bias in text-to-image tools' outputs. The findings reveal significant blind spots and highlight directions for research, design, and policy aimed at aligning generative AI with educational inclusion and critical AI literacy.

• First systematic review of AI text-to-image use in education (31 studies).
• Shows how educators and students use AI images in diverse educational contexts.
• Identifies recurring gender, racial, cultural, age, body and (dis)ability biases in AI images.
• Reveals tensions between visually appealing images and factual or ethical accuracy.
• Argues for moving from prompt engineering toward critical, technomoral AI literacy.
Published in: Computers and Education Artificial Intelligence
Volume 10, Article 100587