Abstract

The rapid spread of Deepfake videos on social media platforms has raised severe concerns regarding digital misinformation, identity theft, and political manipulation. Deepfakes use sophisticated deep learning techniques to synthesize hyper-realistic videos that are difficult for both humans and traditional algorithms to detect. To address this challenge, we propose a novel hybrid Deepfake detection model that integrates ResNeXt, a high-performance convolutional neural network (CNN), with Long Short-Term Memory (LSTM) networks for spatial-temporal analysis. ResNeXt is leveraged for frame-level spatial feature extraction, while the LSTM captures temporal inconsistencies across video frames. The combined architecture enables a more holistic understanding of both the static and dynamic patterns present in manipulated videos. The model is trained and evaluated on benchmark datasets including FaceForensics++ and the Deepfake Detection Challenge (DFDC), achieving an accuracy of 97.8% and outperforming several state-of-the-art detection frameworks. Its strong generalization capability highlights its potential for real-world deployment, especially in online environments where Deepfake content continues to evolve. This research contributes meaningfully to automated video forensics and provides a reliable, scalable strategy to combat the growing threat of Deepfake proliferation across social platforms.