Brain tumors pose a serious threat to human health, and accurate magnetic resonance imaging (MRI) segmentation is crucial for clinical diagnosis. Existing high‐performance deep models are difficult to deploy in resource‐constrained clinical settings because of their high computational complexity. Although knowledge distillation can compress models, the traditional paradigm often overlooks the segmentation task's heterogeneous requirements for both global context and local detail, leaving lightweight models with insufficient accuracy. To address this issue, this study proposes a knowledge‐distillation‐based lightweight brain tumor recognition method (KDLM). First, we design a dual‐teacher collaboration strategy that introduces two teacher models, one focused on global tumor region segmentation and one on local detail capture, to provide complementary guidance to the student. Second, we construct a lightweight student network (MLSNet) centered on a multiscale feature fusion module and a residual channel attention mechanism, which efficiently fuses multilevel features and provides robust support for absorbing the two teachers' knowledge. Experiments on the MenIN and Hunan University of Medicine General Hospital glioma datasets demonstrate that the proposed method achieves Dice coefficients of 0.809 and 0.806, respectively, outperforming most existing models while significantly reducing parameter count and computational cost, effectively resolving the trade‐off between accuracy and efficiency. The method offers a practical path to deploying AI in resource‐constrained clinical environments.
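The dual‐teacher strategy described above can be sketched as a weighted combination of a supervised segmentation loss and two distillation terms, one per teacher. The function below is an illustrative NumPy sketch, not the paper's exact formulation: the loss weights (`alpha`, `beta`, `gamma`), the temperature `T`, and the use of KL divergence for the distillation terms are all assumptions.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax over the class axis
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_div(p, q, eps=1e-8):
    # KL(p || q), averaged over pixels/samples
    return np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1))

def dual_teacher_loss(student_logits, global_t_logits, local_t_logits,
                      labels, alpha=0.5, beta=0.25, gamma=0.25, T=2.0):
    """Hypothetical combined loss: cross-entropy on the ground-truth masks
    plus KL distillation from a global-context teacher and a
    local-detail teacher (weights and temperature are assumed values)."""
    s_prob = softmax(student_logits)
    # Supervised cross-entropy with integer class labels
    ce = -np.mean(np.log(s_prob[np.arange(len(labels)), labels] + 1e-8))
    # Soften all logits with the same temperature before distilling
    s_soft = softmax(student_logits, T)
    kd_global = kl_div(softmax(global_t_logits, T), s_soft)
    kd_local = kl_div(softmax(local_t_logits, T), s_soft)
    return alpha * ce + beta * kd_global + gamma * kd_local
```

In a real segmentation setting the logits would be per-pixel maps; flattening the spatial dimensions reduces that case to the per-sample form shown here.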