The detection of fake news is a multidimensional challenge that demands solutions extending beyond purely computational approaches. Although advances in natural language processing, machine learning, deep learning, and multimodal analysis have strengthened technical capabilities, misinformation continues to proliferate. Fake news thrives within environments shaped by complex social interactions, platform-specific affordances, and human judgement. Social factors (such as user profiles, sharing behaviours, engagement metrics, network structures, and crowd-sourced credibility signals) play a critical role in how misinformation is created, propagated, and perceived, yet these contextual nuances are often overlooked in algorithmic models. These social dimensions operate alongside technical elements, including linguistic cues, visual content, temporal dissemination patterns, and hybrid feature integration. Drawing on a review of the recent literature, this work synthesises sociotechnical elements to inform the development of an integrated approach to the detection of fake news. We introduce the SHAPE conceptual framework to guide the development of the Informed Fake News Advisor (IFNA), a suite of detection tools that will combine technical precision with sensitivity to social context. By framing fake news detection as a sociotechnical problem, IFNA shifts the focus from isolated technical optimisation towards a holistic design philosophy, supporting the development of solutions that are both effective in detection and responsible in deployment within complex information ecosystems.
Published in: International Conference on Cyber Warfare and Security
Volume 21, Issue 1, pp. 520-530