Access control systems increasingly rely on multimodal biometric and behavioral signals to enhance security and robustness against sophisticated attacks. However, when heterogeneous modalities provide conflicting evidence, such as valid biometric credentials accompanied by abnormal behavioral or acoustic patterns, traditional fusion strategies based on static thresholds or majority voting often fail, producing false alarms or insecure authorization decisions. This paper addresses this limitation by proposing a contextual fusion framework that resolves conflicting multimodal evidence at the decision level. The proposed approach models access control as a decision problem under uncertainty, in which independent agents generate modality-specific evidence from face, voice, and fingerprint authentication channels. A centralized fusion mechanism integrates these heterogeneous results using adaptive reliability weighting and contextual reasoning, resolving conflicts before an operational decision is made. Rather than treating each modality independently, the framework explicitly accounts for inconsistencies, uncertainty, and situational context when aggregating evidence. The framework is evaluated on public benchmarks, including VGGFace2, VoxCeleb2, and FVC2004, combined with controlled multimodal scenarios that induce conflicting evidence. Under these controlled contradiction scenarios, the proposed fusion strategy reduces false alarms and improves decision consistency by approximately 18%. These results should be interpreted within the scope of controlled multimodal simulations.
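The abstract names the core mechanism, adaptive reliability weighting with conflict-aware aggregation at the decision level, without specifying it. The Python sketch below illustrates one plausible instantiation under stated assumptions: the `ModalityEvidence` type, the median-based conflict measure, the thresholds, and the `escalate` outcome are all illustrative choices, not the paper's actual algorithm.

```python
# Minimal sketch of decision-level fusion with adaptive reliability
# weighting. All names, thresholds, and the conflict heuristic are
# illustrative assumptions, not the paper's published method.
from dataclasses import dataclass
from statistics import median


@dataclass
class ModalityEvidence:
    name: str           # e.g. "face", "voice", "fingerprint"
    score: float        # match score in [0, 1] from the modality agent
    reliability: float  # prior reliability of this channel in [0, 1]


def fuse(evidence: list[ModalityEvidence],
         accept_threshold: float = 0.6,
         conflict_threshold: float = 0.5) -> str:
    """Aggregate modality-specific evidence into an access decision.

    Conflict is measured as the spread between the most and least
    confident modalities; when it is large, channels that deviate from
    the median consensus are down-weighted so that mutually consistent,
    trusted channels dominate the decision.
    """
    scores = [e.score for e in evidence]
    conflict = max(scores) - min(scores)  # simple disagreement measure
    consensus = median(scores)

    # Adaptive weighting: start from each channel's reliability prior;
    # under high conflict, shrink the weight of outlying channels.
    weights = []
    for e in evidence:
        w = e.reliability
        if conflict > conflict_threshold:
            w *= 1.0 - abs(e.score - consensus)
        weights.append(w)

    total = sum(weights) or 1.0
    fused = sum(w * e.score for w, e in zip(weights, evidence)) / total

    # Contextual reasoning stub: defer instead of a hard accept/reject
    # when evidence is both conflicting and near the decision boundary.
    if conflict > conflict_threshold and abs(fused - accept_threshold) < 0.1:
        return "escalate"  # e.g. request a secondary factor
    return "accept" if fused >= accept_threshold else "reject"


if __name__ == "__main__":
    # Valid face/fingerprint credentials with an anomalous voice channel:
    # the outlying voice score is down-weighted rather than vetoing access.
    decision = fuse([
        ModalityEvidence("face", 0.92, 0.90),
        ModalityEvidence("voice", 0.15, 0.70),
        ModalityEvidence("fingerprint", 0.88, 0.95),
    ])
    print(decision)  # -> "accept"
```

In this hypothetical run, a naive majority vote over per-modality thresholds could reject the user on the strength of one anomalous channel; the reliability-weighted aggregate instead follows the consistent, trusted modalities, which is the kind of false-alarm reduction the abstract describes.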