Abstract

Can deception be detected solely from written text? Cues of deceptive communication are inherently subtle, even more so in text-only communication. Yet prior studies have reported considerable success in automatic deception detection. In this study, we hypothesize that such findings are largely driven by artifacts introduced during data collection and do not generalize beyond specific datasets. We revisit this foundational assumption by introducing a belief-based deception framework, which defines deception as a misalignment between an author’s claims and their true beliefs, irrespective of factual accuracy, allowing deception cues to be studied in isolation. Based on this framework, we construct three corpora – collectively referred to as DeFaBel – including a German-language corpus of deceptive and non-deceptive arguments and a multilingual version in German and English, each collected under different conditions to account for potential belief change and to enable cross-linguistic analysis. Using these corpora, we evaluate commonly reported linguistic cues of deception. Across all three DeFaBel variants, we find that these cues exhibit negligible and statistically insignificant correlations with deception labels, contrary to prior work that treats such cues as reliable indicators. We further benchmark against other English deception datasets that follow similar data collection protocols. While some of these show statistically significant correlations, the effect sizes remain low and, critically, the set of predictive cues is inconsistent across datasets. We also evaluate deception detection using feature-based models, pre-trained language models, and instruction-tuned large language models. While some of these models perform well on established deception datasets, they consistently perform near chance on DeFaBel.
Our findings challenge the core assumption that deception can be reliably inferred from linguistic cues and call for a rethinking of how deception is studied and modeled in Natural Language Processing.
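To make the cue-evaluation setup concrete, the sketch below shows one common way such correlations are measured: a point-biserial correlation between a per-document cue value and a binary deception label. This is an illustrative reconstruction, not the authors' code; the cue (a hypothetical first-person pronoun rate) and the labels are made-up toy data.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient; when ys is binary this equals
    the point-biserial correlation often reported for deception cues."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-document cue values (e.g. first-person pronouns per token).
cue = [0.08, 0.05, 0.07, 0.06, 0.09, 0.04, 0.07, 0.05]
# 1 = deceptive (author argues against their stated belief), 0 = sincere.
labels = [1, 0, 1, 0, 1, 0, 0, 1]

r = pearson_r(cue, labels)
print(f"point-biserial r = {r:.3f}")
```

On real data, a significance test (e.g. a permutation test on the labels) would accompany the coefficient; the abstract's claim is that on DeFaBel such coefficients are both small and statistically insignificant across the commonly cited cues.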