The integration of AI into cybersecurity represents a significant shift in the protection of digital infrastructures. AI enhances predictive threat detection, automated incident response, and real-time risk management, enabling more proactive security practices. However, the growing autonomy and opacity of AI systems introduce substantial legal, ethical, and governance challenges. This chapter examines these issues at the intersection of technological innovation and regulatory accountability, drawing on EU legal frameworks such as the AI Act and the AI Liability Directive. It analyses how algorithmic opacity and the “black box” problem undermine transparency, due process, and liability attribution, while highlighting persistent responsibility gaps and the dual-use nature of AI in defensive and offensive cyber operations. Emphasising explainable AI as a foundation for trustworthy governance, the chapter proposes a risk-based governance model that aligns technical explainability with legal accountability and human oversight to enhance digital trust and protect fundamental rights.
Published in: Advances in Computational Intelligence and Robotics book series