Fabric defect detection is a critical task in textile manufacturing, where manual inspection remains inconsistent, labour-intensive, and unsuitable for high-speed production environments. Although deep learning–based detectors have shown strong potential, many existing models are too computationally demanding for practical deployment in real-time industrial inspection systems. This study proposes a lightweight deformable YOLO-based framework for accurate and efficient fabric defect detection. The model is built on YOLOv5s and enhanced through three efficiency-oriented architectural improvements: Bidirectional Feature Pyramid Network (BiFPN) for improved multi-scale feature fusion, Deformable Convolutional Networks (DCNv2) for stronger geometric adaptability, and Efficient Pyramid Split Attention (EPSA) for enhanced feature discrimination. The proposed model was trained and evaluated on the Alibaba Tianchi fabric defect dataset, comprising 5,913 images across 20 defect categories. Experimental evaluation was conducted using mean Average Precision (mAP), model size, and real-time suitability, supported by ablation and comparative analyses. Results show that the proposed method improved mAP from 41.9% for the baseline YOLOv5s to 48.2%, representing a gain of 6.3 percentage points. The findings indicate that targeted architectural optimisation can improve detection accuracy while preserving the lightweight characteristics required for industrial implementation. The proposed framework offers a practical solution for automated fabric inspection and provides a useful reference for efficiency-oriented defect detection in smart manufacturing environments.
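Of the three enhancements, BiFPN's contribution is the easiest to illustrate in isolation: it fuses feature maps from different pyramid levels using non-negative learnable weights, normalized so the fusion stays numerically stable. The sketch below shows this "fast normalized fusion" rule on plain Python lists; it is an illustrative assumption, not the paper's implementation — in the actual detector the inputs would be multi-channel tensors and the weights learned during training.

```python
def fast_normalized_fusion(features, weights, eps=1e-4):
    """Fuse same-resolution feature maps with non-negative weights,
    following the BiFPN fast normalized fusion rule:
        O = sum_i (w_i / (eps + sum_j w_j)) * I_i
    `features` is a list of equal-length lists of floats (stand-ins
    for flattened feature maps); `weights` holds one scalar per input.
    """
    # Clamp weights at zero (ReLU), as BiFPN does to keep them non-negative.
    w = [max(0.0, wi) for wi in weights]
    total = sum(w) + eps  # eps avoids division by zero

    fused = []
    for values in zip(*features):
        fused.append(sum(wi * v for wi, v in zip(w, values)) / total)
    return fused

# Example: fuse a top-down path feature with a lateral feature,
# weighting the top-down input twice as heavily.
out = fast_normalized_fusion([[1.0, 2.0], [3.0, 4.0]], [2.0, 1.0])
```

Because the weights are normalized by their sum rather than by a softmax, the fusion adds almost no compute, which is in keeping with the efficiency-oriented design the abstract describes.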
Published in: International Journal of Latest Technology in Engineering Management & Applied Science
Volume 15, Issue 3, pp. 15-39