Background: Dermoscopic image segmentation is an important tool in computer-aided diagnosis (CAD) systems for skin lesions. Precise segmentation of lesion regions in dermoscopic images supports more objective and accurate diagnostic decision-making in clinical practice. However, many existing deep learning models struggle to segment lesion edges accurately and are computationally complex, which limits their deployment on resource-constrained edge devices. This study therefore aimed to design a dermoscopic image segmentation model that addresses the challenge of edge segmentation while maintaining low computational complexity.

Methods: We developed a wide edge-assisted lightweight dermoscopic image segmentation network (WENet) consisting of a lightweight encoder and a wide-edge-assisted decoder. The encoder is built from squeeze dual-path convolution (SDPC) blocks, which adopt a bottleneck design and employ asymmetric convolutions with large and small dilation rates to significantly reduce model complexity while preserving efficient feature extraction. It also integrates the statistical multi-feature adaptive channel recalibration attention (SACA) module for precise channel feature recalibration. The decoder consists of the wide-boundary generator (WBG), the prediction information fusion decoding layer (PFDL), and the progressive multi-scale feature fusion segmentation head (PMSSH). The WBG generates wide-edge labels by combining ground-truth annotations with morphological erosion and applies deep supervision to guide the model to learn boundary features, enhancing edge segmentation performance without increasing the parameter count. The PFDL fuses region and boundary predictions with decoding features and employs a grouped design based on the SDPC for feature extraction. It then enhances spatial feature information using group multi-axis Hadamard product attention (GHPA).
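The parameter savings from a bottleneck-plus-asymmetric-convolution design like the SDPC's can be made concrete with a simple weight count. The channel widths and layer layout below are illustrative assumptions, not the paper's exact configuration; note that dilation enlarges the receptive field without adding any weights, which is why it does not appear in the count.

```python
def conv2d_params(c_in, c_out, kh, kw):
    """Weight count of a 2-D convolution with a kh x kw kernel (bias ignored)."""
    return c_in * c_out * kh * kw

# Plain 3x3 convolution, 64 -> 64 channels.
plain = conv2d_params(64, 64, 3, 3)  # 36864 weights

# Hypothetical bottleneck: 1x1 squeeze 64 -> 16, an asymmetric 3x1 + 1x3
# pair at 16 channels (dilated or not, the weight count is the same),
# then a 1x1 expand 16 -> 64.
squeeze = conv2d_params(64, 16, 1, 1)                                # 1024
asym = conv2d_params(16, 16, 3, 1) + conv2d_params(16, 16, 1, 3)     # 1536
expand = conv2d_params(16, 64, 1, 1)                                 # 1024
bottleneck = squeeze + asym + expand                                 # 3584

print(plain, bottleneck, round(plain / bottleneck, 1))  # 36864 3584 10.3
```

Under these assumed widths the bottleneck path uses roughly a tenth of the weights of a plain 3x3 convolution, which is the kind of reduction that makes edge-device deployment plausible.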
The PMSSH progressively integrates multi-scale features to bridge the semantic gap across scales, ultimately producing the final segmentation map.

Results: WENet was evaluated on the International Skin Imaging Collaboration (ISIC) 2017, ISIC 2018, and Pedro Hispano 2 (PH2) datasets, achieving mean intersection over union (mIoU) scores of 80.37%, 81.34%, and 85.98%, and specificity (Spe) values of 98.37%, 97.39%, and 96.23%, respectively, while maintaining a model size under 15 KB. The model has significantly fewer parameters than recent state-of-the-art models while maintaining excellent segmentation performance.

Conclusions: The proposed WENet presents an accurate yet computationally efficient solution for dermoscopic image segmentation, outperforming state-of-the-art methods in both model compactness and boundary segmentation precision.
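One common way to derive a wide-edge label from a ground-truth mask and morphological erosion, as the WBG is described as doing, is to subtract the eroded mask from the original so that a band of pixels along the lesion boundary remains. The sketch below is a pure-Python illustration of that idea under this assumption; the band width, structuring element, and exact combination rule used in WENet are not specified in the abstract.

```python
def erode(mask, iterations=1):
    """Binary erosion with a 3x3 structuring element. Pixels outside the
    image are treated as background, so the mask shrinks at the border too."""
    h, w = len(mask), len(mask[0])
    for _ in range(iterations):
        out = [[0] * w for _ in range(h)]
        for i in range(h):
            for j in range(w):
                # A pixel survives only if it and all 8 neighbours are foreground.
                out[i][j] = int(all(
                    0 <= i + di < h and 0 <= j + dj < w and mask[i + di][j + dj]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)))
        mask = out
    return mask

def wide_edge_label(mask, width=1):
    """Wide-edge band: the lesion mask minus its erosion. `width` sets the
    number of erosion steps and hence the thickness of the band."""
    eroded = erode(mask, iterations=width)
    return [[m - e for m, e in zip(mr, er)]
            for mr, er in zip(mask, eroded)]

# A 4x4 square "lesion" inside a 6x6 image.
gt = [[1 if 1 <= i <= 4 and 1 <= j <= 4 else 0 for j in range(6)]
      for i in range(6)]
band = wide_edge_label(gt)
print(sum(map(sum, band)))  # 12: the 16-pixel square minus its 4-pixel core
```

Because the band is computed from the existing annotations, supervising on it (as the WBG's deep supervision does) adds a boundary-focused loss term without introducing any new trainable parameters.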
Published in: Quantitative Imaging in Medicine and Surgery
Volume 16, Issue 4, p. 316