The rise of automation across industries means machines increasingly perform functions without human intervention. Artificial Intelligence (AI) enables machines to reach higher levels of autonomy by reasoning and planning based on their perceived surroundings. With AI increasingly responsible for safety-critical functions, it is imperative for manufacturers, regulators, and certification authorities to ensure these functions are performed safely. Scenario-based methods have emerged as an effective solution for training and testing the AI models of autonomous systems: they expose the system to the various operational conditions it may encounter once deployed in the real world. However, this raises the question of whether the operational conditions considered for training and testing the AI model are truly sufficient. OASISS (ODD-based AI Safety In autonomouS Systems) aims to quantitatively assess the adequacy of an AI model's training and testing data in relation to its targeted area of operation. The OASISS framework uncovers gaps prevalent in the current scenario-based training and testing landscape and provides an evaluation mechanism guided by the dataset-related safety properties outlined in ISO/PAS 8800.