Progress in generative artificial intelligence (AI) and large language models (LLMs) has enabled educational technology breakthroughs that automate the often tedious, labour-intensive processes of generating and evaluating instructional materials. Despite the hype, little is known about how task fit, perceived ethical influence, perceived utility, and reported ease of use shape teachers’ continuance intention to use LLMs for creating teaching and learning content. We extended the Technology Acceptance Model (TAM) by integrating the constructs of Task-Technology Fit (TTF), Social Influence (SI), and Perceived Ethical Influence (PEI) with its long-established constructs of Perceived Usefulness (PU) and Perceived Ease of Use (PEU) to predict Continuance Intention (CI). A quantitative questionnaire survey administered to Ugandan educators yielded 231 responses. Data collection and analysis proceeded in two phases. In phase one, descriptive statistics were computed using IBM SPSS version 23, while the measurement model’s reliability, convergent and discriminant validity, model fit, and the hypothesised paths were tested using ADANCO. The findings underscore the significant influence of TTF and SI on both PU and PEU, which in turn strongly influence Ugandan educators’ continuance intention to use LLM technologies in their instructional work. The ethical dimension, however, though prominent in theoretical discourse, had no substantial effect. In phase two, we probed this unexpected null effect with a follow-up survey on awareness of ethical concerns surrounding LLMs. Results from 105 participants showed that ethical issues significantly affect educators’ trust in LLMs. The study contributes to the ongoing discourse on the responsible adoption of AI in education and underscores the need for targeted ethical literacy in AI-powered instructional practice.