Abstract

One of the primary concerns about AI systems is their potential to perpetuate biases and discriminatory practices, since they learn from historical data that may contain inherent prejudices. The human-in-the-loop (HITL) approach has emerged as a possible solution to tackle the risks of biased decision-making and related challenges associated with AI adoption. This approach relies on the assumption that AI decisions and actions should be supervised and, if necessary, modified or validated by humans: the machine can learn from human knowledge and experience during the loop, improving the transparency, accountability, and performance of AI systems. Moreover, the HITL approach helps adapt and reconceptualize the users' role, valorizing their interaction with the learning system as well as their provision of feedback, guidance, or input when needed. Although the HITL method brings several benefits, it also poses challenges, including, for instance, the potential slowing down of decision-making processes and the risk of human error. Explainable AI (XAI) techniques facilitate a more informed and efficient HITL process, including human oversight. XAI is key to adopting an effective human-centered perspective and to making the process more accessible and efficient. It gives rise to a bidirectional communication channel between human operators and the intelligent system. Users are prioritized as the primary drivers of this interactive bidirectional process: they are empowered to collaborate with the system, enhancing the effectiveness of the interaction and of the system's outcomes, and thereby contributing to adaptability. This chapter explores the HITL approach and XAI techniques from a twofold perspective, technical and ethics-driven, and outlines how they can be effectively combined with automation to make AI models and their associated results transparent and ethically sound. In addition, the synergistic use of HITL and XAI can help prevent or minimize the so-called hallucination effect. This effect occurs when AI systems, relying on patterns learned during training, generate inaccurate or logically inconsistent responses, outputs, or data, which can lead to incorrect or far-fetched conclusions. Such conclusions are unacceptable where accountability is at stake, e.g., in ethical or contractual contexts such as healthcare or manufacturing applications. The chapter also outlines the initial expected outcomes of validating this intersection of HITL and XAI in different domain-specific demo cases, in alignment with the AI, Data, and Robotics Partnership. Once completed, these demo cases will demonstrate how the joint action of HITL and XAI is expected to be paramount in different scenarios, prioritizing the role of humans to foster fairness, accountability, and trustworthiness. All the demo cases have high real-world relevance. For instance, the robotics demo case uses wearable sensors to provide real-time feedback, enhancing safety and efficiency through effective cognitive load management.