Efficient and reliable road damage detection is a critical component of intelligent transportation and infrastructure control systems that rely on visual sensing technologies. Existing road damage detection models face challenges such as missed detection of fine cracks, poor adaptability to lighting changes, and false positives under complex backgrounds. In this study, we propose an enhanced YOLO-based framework, YOLO-ERCD, designed to improve the accuracy and robustness of road crack detection on sensor-acquired image data. The datasets used in this work were collected from vehicle-mounted and traffic surveillance camera sensors, representing typical visual sensing systems in automated road inspection. The proposed architecture integrates three key components: (1) a residual convolutional block attention module, which preserves original feature information through residual connections while strengthening spatial and channel feature representation; (2) a channel-wise adaptive gamma correction module that models the nonlinear response of the human visual system to light intensity, adaptively enhancing brightness details for improved robustness under diverse lighting conditions; and (3) a visual focus noise modulation module that reduces background interference by selectively introducing noise, emphasizing damage-specific features. These three modules are designed to address the limitations of YOLOv10 in feature representation, lighting adaptation, and background interference suppression; they work synergistically to enhance the model's detection accuracy and robustness, closely aligning with the practical needs of road monitoring applications. Experimental results on both proprietary and public datasets demonstrate that YOLO-ERCD outperforms recent road damage detection models in accuracy and computational efficiency. The lightweight design also supports real-time deployment on edge sensing and control devices.
These findings highlight the potential of integrating AI-based visual sensing and intelligent control, contributing to the development of robust, efficient, and perception-aware road monitoring systems.
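To make the channel-wise adaptive gamma correction idea concrete, the sketch below applies a per-channel gamma exponent chosen from that channel's mean brightness, so dark channels are brightened more than bright ones. The mapping `gamma = log(0.5) / log(mean)` is a common adaptive-gamma heuristic used here purely for illustration; the paper's exact formulation of the module is not given in the abstract and may differ.

```python
import numpy as np

def channelwise_adaptive_gamma(image: np.ndarray) -> np.ndarray:
    """Illustrative channel-wise adaptive gamma correction (a sketch,
    not the paper's exact module).

    `image` is an H x W x C array with values in [0, 1]. For each channel,
    the gamma exponent is derived from the channel's mean brightness so
    that the corrected mean is pulled toward 0.5: dark channels receive
    gamma < 1 (brightening), bright channels gamma > 1 (darkening).
    """
    out = np.empty_like(image, dtype=np.float64)
    for c in range(image.shape[2]):
        # Clip the mean away from 0 and 1 to keep log() well-defined.
        mean = float(np.clip(image[..., c].mean(), 1e-6, 1.0 - 1e-6))
        gamma = np.log(0.5) / np.log(mean)  # heuristic: maps mean toward 0.5
        out[..., c] = np.power(image[..., c], gamma)
    return out
```

For example, a uniformly dark channel with mean 0.25 gets gamma = 0.5, raising its values to 0.5; a learned, channel-adaptive variant of the same nonlinearity could be trained end-to-end inside the detection network.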