Purpose
This paper interrogates the cultural and ideological implications of generative artificial intelligence (AI) systems – particularly large language models (LLMs) and image generators – through the theoretical framework of Edward Said's Orientalism. It investigates whether AI technologies perpetuate asymmetrical representations that echo colonial knowledge hierarchies.

Design/methodology/approach
Using a mixed-methods design, the study combines a quantitative toxicity analysis of 2,460 ChatGPT-generated texts prompted by culturally assigned personas with a visual content analysis of 100 AI-generated images. Statistical tests, including ANOVA and chi-square, assess bias and stereotyping patterns across cultural representations.

Findings
Results indicate higher toxicity scores for prompts associated with Arabic personas than for Western ones. AI-generated visuals disproportionately depict Arabs and Jews in stereotypical, traditional attire, while Western subjects are portrayed more casually and neutrally. These patterns suggest that generative AI tools reinforce Orientalist tropes and epistemic asymmetries.

Research limitations/implications
The findings are based on a limited sample of personas and image generation tools. Further research should explore cross-platform comparisons and expand to other cultural and linguistic groups to assess the generalizability of algorithmic bias patterns.

Practical implications
The study underscores the urgency of ethical AI design that incorporates epistemic justice, inclusive data sets and cultural plurality to mitigate representational harms in AI systems deployed across education, governance and media.

Originality/value
This paper offers a novel application of Orientalism to the analysis of AI systems, revealing how algorithmic outputs function as new modalities of cultural representation that echo and amplify colonial discourses at scale.
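The statistical approach described above – ANOVA across persona groups and chi-square on attire categories – can be sketched as follows. This is a minimal illustration using SciPy, with entirely synthetic toxicity scores and image counts standing in for the paper's actual data; the group means, sample sizes, and category labels here are assumptions for demonstration only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-text toxicity scores (0-1) for three persona groups.
# These distributions are illustrative, NOT the paper's measurements.
arabic = rng.normal(0.32, 0.08, 100).clip(0, 1)
western = rng.normal(0.18, 0.08, 100).clip(0, 1)
other = rng.normal(0.22, 0.08, 100).clip(0, 1)

# One-way ANOVA: do mean toxicity scores differ across persona groups?
f_stat, p_anova = stats.f_oneway(arabic, western, other)

# Hypothetical image-coding counts:
# rows = subject group (Arab/Jewish, Western), cols = (traditional, casual attire)
counts = np.array([[38, 12],
                   [10, 40]])

# Chi-square test of independence: is attire depiction associated with group?
chi2, p_chi2, dof, expected = stats.chi2_contingency(counts)

print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.2g}")
print(f"Chi-square: chi2={chi2:.2f}, dof={dof}, p={p_chi2:.2g}")
```

A significant ANOVA result would typically be followed by post hoc pairwise comparisons (e.g. Tukey's HSD) to identify which persona groups differ; the abstract does not specify which post hoc procedure, if any, the study used.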
Published in: Journal of Information Communication and Ethics in Society