This study presents a comprehensive solution for enhancing smart campus security through a pedestrian re-identification system designed for cross-modality and cross-scene scenarios. The proposed framework focuses on visible–infrared person re-identification under multi-camera environments, where significant modality discrepancies, scene variations, occlusions, and complex pedestrian dynamics pose major challenges to reliable identity association. To address the limitations of conventional re-identification approaches under varying illumination, viewpoint changes, and domain shifts, the framework integrates two core components: a Dynamic Cross-Scene Pedestrian Re-identification Model (DCSPRM) and an Adaptive Scene Integration Strategy. The DCSPRM introduces a unified feature extraction pipeline that jointly captures spatial appearance cues and short-term temporal behavioral patterns. By combining convolutional and recurrent architectures with scene-aware adaptation, the model learns robust identity representations that remain stable across heterogeneous cameras and sensing modalities. Temporal consistency mechanisms further enhance identity reliability by smoothing feature variations over consecutive frames and mitigating noise caused by occlusion or abrupt environmental changes. Complementing this, the Adaptive Scene Integration Strategy incorporates multimodal contextual information and graph-based correlation modeling to refine cross-camera association. Through scene-aware feature refinement and cross-view correlation propagation, the strategy improves cross-camera consistency and reduces mismatches under diverse scene conditions. In addition, an adaptive feedback mechanism is designed to support future online optimization in dynamic environments. 
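The temporal consistency mechanism described above can be illustrated with a minimal sketch: per-frame identity features are smoothed with an exponential moving average so that a briefly occluded or badly lit frame does not destabilize the track-level representation. This is an illustrative assumption about how such smoothing might work, not the paper's actual implementation; the function name and smoothing factor are hypothetical.

```python
def smooth_track_features(frame_features, alpha=0.7):
    """Exponentially smooth a sequence of per-frame feature vectors.

    alpha weights the running estimate; (1 - alpha) weights the new frame,
    so transient outliers (occlusion, lighting spikes) are damped.
    Hypothetical sketch of a temporal-consistency mechanism.
    """
    smoothed = []
    state = None
    for feat in frame_features:
        if state is None:
            # First frame initializes the running estimate.
            state = list(feat)
        else:
            # Blend the previous estimate with the new frame's features.
            state = [alpha * s + (1 - alpha) * f for s, f in zip(state, feat)]
        smoothed.append(list(state))
    return smoothed
```

Under this sketch, a single anomalous frame (e.g. caused by occlusion) is pulled toward the track's running estimate rather than replacing it, which is the intuition behind smoothing feature variations over consecutive frames.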
Extensive experiments on publicly available visible–infrared person re-identification benchmarks demonstrate that the proposed system achieves substantial improvements in identification accuracy, robustness, and cross-camera consistency compared with state-of-the-art methods. The results validate the effectiveness of combining dynamic feature learning with adaptive scene-level integration, highlighting the system's strong generalization capability in realistic smart campus security scenarios.