Purpose
This service evaluation investigated frontline staff attitudes towards artificial intelligence (AI) implementation in NHS learning disabilities services to address critical knowledge gaps in workforce perspectives. Despite growing NHS AI adoption, systematic understanding of staff concerns remains limited, particularly regarding vulnerable populations who face heightened risks around consent capacity, communication barriers and potential exploitation. This study aims to capture staff perceptions of AI benefits, concerns and implementation needs to inform evidence-based, ethically grounded Trust-level digital strategy that prioritises patient safety while supporting workforce readiness for technological change.

Design/methodology/approach
This mixed-methods service evaluation used an online questionnaire (n = 68) and a semi-structured focus group to explore staff attitudes in NHS specialist learning disabilities services. Participants included clinical professionals and non-clinical operational staff recruited through team meetings and electronic communications during July–August 2025. The quantitative survey assessed AI familiarity using five-point scales, examining comfort levels, concerns regarding vulnerable patients, perceived benefits and training needs. A 30-minute focus group conducted via MS Teams explored clinical experiences, safeguarding concerns and implementation barriers. Descriptive statistics were used to analyse quantitative responses, while thematic analysis examined qualitative data. The study received Trust Practice Audit Implementation Group approval, with voluntary participation and informed consent protocols.

Findings
Most staff (57%) demonstrated basic AI understanding, with 16% already using AI tools. Attitudes were predominantly cautious: 40% expressed neutrality and 35% voiced concerns about implementation with learning disabilities patients. Administrative efficiency emerged as the primary recognised benefit (62%), with limited support for clinical applications. Training priorities emphasised both AI fundamentals (47%) and ethical reassurance regarding bias and safety (47%). Qualitative analysis revealed four themes: heightened vulnerability concerns around patients' capacity to distinguish AI from human interactions; significant safeguarding and exploitation risks; pragmatic engagement and training needs; and governance.

Originality/value
This service evaluation addresses a critical gap by examining frontline workforce perspectives on AI implementation in intellectual disabilities services, a population often marginalised in digital health transformation. It reveals unique vulnerabilities absent from the general health-care AI literature, particularly around reality testing, consent capacity and exploitation risks arising through AI interactions. Unlike broader NHS AI surveys focusing on technical feasibility or public trust, this research captures specialist staff concerns about safeguarding implications and the preservation of therapeutic relationships. Findings provide evidence-based guidance for developing population-specific governance frameworks rather than applying standard protocols unsuitable for vulnerable groups. The equal emphasis on technical training and ethical reassurance offers practical insights for staged implementation strategies that balance innovation with patient safety.
Published in: Advances in Mental Health and Intellectual Disabilities