Accurate volumetric segmentation of 3D medical imaging modalities is critical for therapy planning and clinical diagnosis, particularly for brain tumor delineation. Traditional convolutional neural network (CNN)-based architectures struggle to capture global contextual information and to model long-range dependencies in complex 3D volumetric data, which limits their segmentation performance. Transformer-based models have emerged as promising alternatives to CNNs for such tasks, addressing their limitations in capturing global spatial dependencies. We propose 3D-ViT-UNet, a novel U-shaped vision transformer (ViT)-based encoder-decoder architecture for end-to-end volumetric brain tumor segmentation. The model employs 3D Window Multi-Head Self-Attention (3D-W-MSA) to capture local features and 3D Dilated-Window Multi-Head Self-Attention (3D-DW-MSA) to capture global features while reducing computational complexity. Moreover, a dynamic position encoding strategy is integrated to preserve absolute and relative positional information and to mitigate the permutation-equivariance limitation of transformers. The proposed model achieves state-of-the-art (SOTA) performance for brain tumor segmentation on the BraTS 2020 dataset, with a superior average Dice Similarity Coefficient (DSC) of 84.81% and a Hausdorff Distance (HD) of 4.87 mm at lower computational cost than existing methods. Qualitative results further demonstrate improved delineation of tumor boundaries and accurate segmentation across modalities. Extensive quantitative and qualitative evaluations highlight the capability of 3D-ViT-UNet to achieve high accuracy with a smaller model size and fewer FLOPs, making it an effective and efficient solution for clinical applications involving volumetric brain tumor segmentation.
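The pairing of window-local and dilated-window attention described above can be illustrated with a minimal PyTorch sketch. This is not the authors' published code: `partition_windows_3d`, its `ws`/`dilation` parameters, the `(B, D, H, W, C)` layout, and `WindowAttention3D` are hypothetical names, and the strided partitioning below is just one common way dilated windows are realized.

```python
# A minimal sketch of local vs. dilated-window 3D self-attention.
# All names and the partitioning scheme are illustrative assumptions,
# not the paper's implementation.
import torch
import torch.nn as nn

def partition_windows_3d(x, ws, dilation=1):
    """Split a (B, D, H, W, C) volume into windows of ws**3 tokens.

    dilation=1 gives ordinary non-overlapping windows (local attention);
    dilation>1 samples voxels at that stride, so the same ws**3 tokens
    span a (ws * dilation)^3 cube, widening each window's receptive field.
    """
    B, D, H, W, C = x.shape
    s = ws * dilation                       # spatial span of one window
    assert D % s == 0 and H % s == 0 and W % s == 0
    x = x.view(B, D // s, ws, dilation,
                  H // s, ws, dilation,
                  W // s, ws, dilation, C)
    # Treat (block, stride-offset) axes as window indices; ws axes are tokens.
    x = x.permute(0, 1, 3, 4, 6, 7, 9, 2, 5, 8, 10).contiguous()
    return x.view(-1, ws ** 3, C)           # (num_windows * B, ws**3, C)

class WindowAttention3D(nn.Module):
    """Multi-head self-attention run independently inside each window."""
    def __init__(self, dim, num_heads):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, windows):             # (num_windows * B, ws**3, C)
        out, _ = self.attn(windows, windows, windows)
        return out

# Usage: the same attention module serves both branches; only the
# partitioning differs between the local (W-MSA-style) and the
# dilated (DW-MSA-style) path.
x = torch.randn(1, 32, 32, 32, 96)          # token volume after patch embedding
local_tokens  = partition_windows_3d(x, ws=4, dilation=1)
global_tokens = partition_windows_3d(x, ws=4, dilation=2)
attn = WindowAttention3D(dim=96, num_heads=4)
y_local, y_global = attn(local_tokens), attn(global_tokens)
```

Because each window attends over only ws**3 tokens regardless of dilation, attention cost grows linearly with volume size rather than quadratically, which is the usual source of the complexity reduction the abstract claims.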
Published in: PLOS Digital Health
Volume 5, Issue 3, Article e0001323