Automated deception detection remains an open challenge in behavioral AI, with traditional polygraph systems constrained by invasiveness, inconsistent reliability, and limited scalability. This paper presents FaceVeritas, a real-time, noninvasive lie detection framework that identifies deceptive behavior exclusively through facial micro-expression analysis using computer vision, without reliance on speech, physiological sensors, or text input. The system captures live video at 30 FPS, extracts 468 three-dimensional facial landmarks via MediaPipe Face Mesh, and computes seven behavioral features per frame: Eye Aspect Ratio (EAR), blink rate, Mouth Openness Ratio (MOR), eyebrow lift, head yaw (θyaw), head pitch (θpitch), and normalized face distance (rface). Raw features are temporally stabilized with an Exponential Moving Average (EMA, α = 0.2) to suppress landmark jitter while preserving genuine micro-expression transients. A supervised Random Forest classifier (100 trees) trained on the Bag-of-Lies and Real-Life Trial datasets generates a continuous per-frame deception probability score. Evaluation on a held-out test set of 200 samples achieves 78.5% accuracy, 77.1% precision, 81.0% recall, 79.0% F1-score, and 92 ms per-frame inference on a standard Intel i5 CPU without GPU acceleration, which is sufficient for real-time deployment. Blink rate (28.3%) and head yaw (24.1%) emerge as the strongest discriminators by feature importance. FaceVeritas outperforms voice stress analysis by 13.7 percentage points and exceeds manual FACS annotation by 7.3 percentage points, while requiring only a standard consumer camera. The interpretable Random Forest architecture [11] addresses a critical gap in forensic applicability compared to black-box deep-learning alternatives.
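The EAR computation and EMA smoothing described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the standard six-landmark EAR formulation (EAR = (|p2−p6| + |p3−p5|) / (2·|p1−p4|)) and applies the abstract's stated α = 0.2; the synthetic landmark coordinates are hypothetical stand-ins for MediaPipe Face Mesh output.

```python
import math

def eye_aspect_ratio(eye):
    """EAR from six (x, y) eye landmarks, p1..p6 around the eye contour.

    Standard formulation: (|p2-p6| + |p3-p5|) / (2 * |p1-p4|).
    Low values over consecutive frames indicate a blink.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    p1, p2, p3, p4, p5, p6 = eye
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def ema_smooth(values, alpha=0.2):
    """Exponential moving average: s_t = alpha * x_t + (1 - alpha) * s_{t-1}.

    alpha = 0.2 (as in the paper) weights history heavily, damping
    landmark jitter while letting sustained changes pass through.
    """
    smoothed, state = [], None
    for x in values:
        state = x if state is None else alpha * x + (1 - alpha) * state
        smoothed.append(state)
    return smoothed

# Hypothetical landmarks: a wide-open eye vs. a nearly closed one.
open_eye = [(0, 0), (1, 1), (2, 1), (3, 0), (2, -1), (1, -1)]
closed_eye = [(0, 0), (1, 0.1), (2, 0.1), (3, 0), (2, -0.1), (1, -0.1)]
```

A per-frame EAR sequence would be passed through `ema_smooth` before thresholding for blink-rate estimation; the other six features would be smoothed the same way before being fed to the classifier.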
Published in: International Journal for Research in Applied Science and Engineering Technology
Volume 14, Issue 3, pp. 4367-4378