Recent advances in generative image models have enabled the creation of highly realistic political deepfakes, posing serious risks to information integrity, public trust, and democratic processes. Although automated deepfake detectors are increasingly deployed in moderation and investigative pipelines, most existing systems provide only point predictions and give no indication of when their outputs are unreliable, an operationally critical limitation in high-stakes political contexts. This work investigated conditional, uncertainty-aware political deepfake detection using stochastic convolutional neural networks within a strictly empirical, decision-oriented reliability framework. Rather than framing uncertainty in purely Bayesian or interpretive terms, uncertainty was evaluated against observable criteria, including calibration and its relationship to prediction errors under both global and confidence-conditioned evaluation regimes. A politically focused binary image dataset was constructed via deterministic, metadata-based filtering of a large public real-versus-synthetic corpus. Two pretrained CNN backbones, ResNet-18 and EfficientNet-B4, were fully fine-tuned end-to-end for binary classification. Deterministic inference was compared with stochastic procedures: single-pass stochastic prediction, Monte Carlo dropout with multiple forward passes, temperature scaling for calibration, and an ensemble-based uncertainty surrogate as a non-Bayesian reference. Evaluation treated the fake class as positive, reported ROC-AUC and thresholded confusion matrices, and was conducted under controlled in-distribution settings with a supplementary generator-disjoint out-of-distribution analysis. The results showed that calibrated probabilistic outputs and uncertainty estimates supported downstream decision-making by enabling risk-aware moderation policies.
A systematic confidence-band analysis further delineated when uncertainty added operational value beyond predicted confidence, clarifying the practical scope and limitations of uncertainty-aware deepfake detection in political contexts.
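Two of the stochastic procedures named above can be made concrete with a minimal sketch. The code below is an illustrative assumption, not the authors' implementation: it shows (a) Monte Carlo dropout aggregation, where repeated stochastic forward passes are reduced to a mean fake-probability and a predictive-uncertainty estimate, and (b) temperature scaling, fitted here by a simple grid search over validation negative log-likelihood rather than gradient descent. All function names and the synthetic "overconfident detector" data are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mc_dropout_aggregate(logit_samples):
    """Aggregate stochastic forward passes (one row per pass) into a mean
    fake-probability and a predictive-uncertainty estimate (std. deviation)."""
    probs = sigmoid(np.asarray(logit_samples))
    return probs.mean(axis=0), probs.std(axis=0)

def nll(logits, labels, T):
    """Binary negative log-likelihood of temperature-scaled logits."""
    p = np.clip(sigmoid(logits / T), 1e-12, 1 - 1e-12)
    return -np.mean(labels * np.log(p) + (1 - labels) * np.log(1 - p))

def fit_temperature(val_logits, val_labels, grid=np.linspace(0.5, 5.0, 91)):
    """Grid-search the single temperature T minimising validation NLL."""
    return min(grid, key=lambda T: nll(val_logits, val_labels, T))

# Synthetic demo: a detector whose logits are 3x too sharp (overconfident).
rng = np.random.default_rng(0)
true_logits = rng.normal(0.0, 2.0, size=5000)
labels = (rng.random(5000) < sigmoid(true_logits)).astype(float)
model_logits = 3.0 * true_logits            # miscalibrated raw outputs
T_hat = fit_temperature(model_logits, labels)
print(round(float(T_hat), 2))               # recovers a temperature near 3
```

In this toy setting the fitted temperature approximately undoes the 3x overconfidence, which is the calibration behaviour the abstract attributes to temperature scaling; the per-image standard deviation from `mc_dropout_aggregate` is the kind of uncertainty signal a confidence-band analysis would stratify on.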