The VISTA AI Risk Assessment Toolkit offers a clear, systematic process to help Trusted Research Environments (TREs) evaluate whether a proposed artificial intelligence (AI) or machine learning (ML) model can be responsibly trained on sensitive data and, once trained, safely exported without compromising individual privacy. As AI becomes increasingly important in health and care research, it is essential that the public can benefit from these advances while remaining confident that personal information is protected.

The toolkit guides TRE teams through every stage of a project, from a researcher’s first application, through model training, to the final checks before a model can leave the secure environment. It introduces tools such as the Researcher AI Questionnaire and the AI Risk Assessment Form, which help reviewers understand how a model will be built, what data it will use, and what safeguards are needed. It also includes a registry recording all approved models, supporting transparency and long-term oversight.

A key feature is the integration of SACRO-ML, which uses simulated privacy attacks to test whether a trained model might accidentally disclose information about individuals. By combining practical governance steps with technical evidence, the toolkit helps TREs support valuable AI research in a responsible, privacy-protecting way.
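To give a flavour of what a simulated privacy attack looks like, the sketch below runs a simple membership-inference test against a deliberately overfitted model: it checks whether the model's prediction confidence distinguishes records it was trained on from records it has never seen. This is a minimal illustration of the general technique, written with scikit-learn; it does not use SACRO-ML's actual API, and the dataset and model here are stand-ins.

```python
# Illustrative membership-inference sketch (NOT SACRO-ML's API):
# an overfitted model tends to be more confident on its training
# records, which an attacker could exploit to infer membership.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for sensitive data held inside a TRE.
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# Deliberately flexible target model, prone to memorising its data.
target = RandomForestClassifier(n_estimators=50, random_state=0)
target.fit(X_train, y_train)

# Attack signal: the model's confidence in its predicted class.
conf_members = target.predict_proba(X_train).max(axis=1)   # seen in training
conf_nonmembers = target.predict_proba(X_test).max(axis=1) # never seen

# AUC for separating members from non-members by confidence alone:
# 0.5 means no leakage; values well above 0.5 suggest the model
# discloses information about who was in its training data.
scores = np.concatenate([conf_members, conf_nonmembers])
membership = np.concatenate(
    [np.ones_like(conf_members), np.zeros_like(conf_nonmembers)]
)
attack_auc = roc_auc_score(membership, scores)
print(f"membership-inference AUC: {attack_auc:.2f}")
```

A TRE reviewer would treat a high attack AUC as technical evidence that the model needs mitigation (regularisation, differential privacy, or refusal of export) before it can safely leave the environment.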