Critical infrastructure (CI) sectors increasingly expose web-based supervisory control and data acquisition (SCADA), industrial control system (ICS), and operational technology (OT) interfaces to the public internet, making them prime targets for zero-day web exploits. Traditional rule-based, signature-dependent Web Application Firewalls (WAFs) are fundamentally limited against unknown attack vectors, suffering high false-negative rates when confronting polymorphic, evasive, or novel payloads that bypass predefined patterns. This paper presents a systematic review of artificial intelligence-enhanced WAFs (AI-WAFs) tailored for critical infrastructure protection, analyzing more than 120 peer-reviewed studies published between 2018 and 2024. The review categorizes and evaluates AI techniques applied to next-generation WAFs, including supervised machine learning (ML), deep learning (DL) architectures (CNN, LSTM, and Transformer-based models), unsupervised anomaly detection, reinforcement learning for dynamic policy adaptation, and hybrid ensembles. In zero-day scenarios, state-of-the-art DL-based systems achieve detection rates of 94–99% with false-positive rates below 0.7%, significantly outperforming conventional WAFs (typically <65% detection on unseen exploits). Particular attention is devoted to CI-specific constraints: ultra-low latency requirements (<200 µs processing budget), deterministic behavior, explainability mandates under regulatory frameworks (NERC CIP, IEC 62443, NIS2), resilience to adversarial ML attacks, and operation in air-gapped or low-bandwidth environments.
Building on identified gaps, we propose a novel multi-layered AI-WAF reference architecture for CI environments comprising (1) a high-speed payload feature extraction engine, (2) a hierarchical detection stack combining lightweight unsupervised models at the edge with heavyweight Transformer/Graph Neural Network ensembles in the cloud or on-premises orchestration layer, (3) a feedback-driven continuous learning loop hardened against poisoning, and (4) formal explainability and policy generation modules that translate neural decisions into auditable ModSecurity/NGINX rules. The framework introduces a “Zero-Trust Zero-Day” paradigm that treats every request as potentially malicious until probabilistically cleared at multiple independent layers. Key contributions include a taxonomy of AI-WAF approaches, a comparative evaluation under CI-realistic conditions, identification of open challenges (adversarial robustness, concept drift in OT protocols, regulatory-compliant transparency), and a publicly releasable dataset synthesis roadmap for future CI-focused benchmarking.
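To make the architecture concrete, the sketch below illustrates two of its components in miniature: a cheap edge-layer anomaly score that probabilistically clears requests (layer 2 of the detection stack) and a translator that turns a flagged payload into an auditable ModSecurity rule (the explainability and policy generation module). The scoring features, weights, and threshold here are illustrative assumptions for a toy demonstration, not the models evaluated in the review.

```python
import math
import re

def payload_entropy(payload: str) -> float:
    """Shannon entropy of the character distribution; obfuscated or encoded
    payloads tend to score higher than plain parameter strings."""
    if not payload:
        return 0.0
    counts: dict[str, int] = {}
    for ch in payload:
        counts[ch] = counts.get(ch, 0) + 1
    n = len(payload)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def edge_score(payload: str) -> float:
    """Lightweight edge-layer score: entropy (normalized to [0, 1] assuming
    8 bits max) blended with special-character density. Weights are assumed."""
    specials = sum(1 for ch in payload if not ch.isalnum() and not ch.isspace())
    density = specials / max(len(payload), 1)
    return 0.5 * (payload_entropy(payload) / 8.0) + 0.5 * density

def to_modsecurity_rule(pattern: str, rule_id: int) -> str:
    """Translate a flagged payload pattern into an auditable ModSecurity
    SecRule (deny with HTTP 403), so the neural decision leaves a rule trail."""
    escaped = re.escape(pattern)
    return (f'SecRule ARGS "@rx {escaped}" '
            f'"id:{rule_id},phase:2,deny,status:403,'
            f"msg:'AI-WAF flagged pattern'\"")

THRESHOLD = 0.35  # assumed operating point for this toy example

benign = "user=alice&page=2"
suspicious = "q=%27%3B%20DROP%20TABLE%20users%3B--"  # URL-encoded SQLi attempt

# The edge layer clears low-score traffic; high scores would escalate to the
# heavyweight ensemble and, once confirmed, emit an auditable rule.
for p in (benign, suspicious):
    if edge_score(p) > THRESHOLD:
        print(to_modsecurity_rule(p, rule_id=900001))
```

In the full architecture, the escalation step between the edge score and rule emission would run the heavyweight Transformer/GNN ensemble; the point of the sketch is only that each layer's verdict can be reduced to a deterministic, reviewable artifact, which is what the regulatory frameworks cited above require.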
Published in: Scientia. Technology, science and society.
Volume 3, Issue 2, pp. 11–32