Relevance. Precise identification of brain tumours in magnetic resonance imaging (MRI) is a critical task in medical image analysis. Although deep learning–based object detectors achieve high localisation accuracy, their limited transparency restricts trust and routine adoption in clinical practice, highlighting the need for explainable artificial intelligence (XAI) approaches.
Object of research. The object of this research is the automated detection of brain tumours in MRI scans using convolutional neural network-based object detection models.
Subject of research. The subject of the research is the integration of YOLOv8 object detection models with the Local Interpretable Model-Agnostic Explanations (LIME) method to interpret individual detection outputs in medical imaging.
Purpose. The aim of this paper is to develop and evaluate an explainable framework for brain tumour detection in MRI images by integrating YOLOv8-based object detection with LIME-based interpretation and by quantitatively assessing the quality of the generated explanations.
Methods. Two YOLOv8 variants (YOLOv8n and YOLOv8s) were trained and evaluated on a publicly available MRI dataset containing glioma, meningioma, and pituitary tumour classes. LIME was applied to generate superpixel-based, box-conditioned local explanations for individual detections. Detection performance was assessed using precision, recall, mAP@50, and mAP@50–95. Explanation quality was quantitatively evaluated using stability, sparsity, maximum superpixel weight, and entropy metrics.
Results. Experimental results demonstrate that both YOLOv8 models achieve high detection performance, with YOLOv8s providing slightly improved accuracy. LIME successfully highlights image regions that most influence model decisions, and the proposed quantitative metrics confirm that the generated explanations are stable, informative, and aligned with clinically relevant tumour regions.
Conclusions. The proposed framework provides a practical approach for combining accurate tumour localisation with interpretable and quantitatively validated explanations, supporting reliability-oriented evaluation of AI-based clinical decision-support systems.
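The four explanation-quality metrics named in the abstract (stability, sparsity, maximum superpixel weight, entropy) operate on the vector of superpixel attribution weights that LIME produces for a detection. A minimal sketch of plausible definitions is given below; the paper's exact formulations are not stated here, so the function names, the sparsity threshold, and the use of top-k Jaccard overlap for stability are illustrative assumptions.

```python
import numpy as np

def sparsity(weights, eps=1e-3):
    # Fraction of superpixels with negligible |weight| (higher = sparser
    # explanation); eps is an assumed threshold, not from the paper.
    w = np.abs(np.asarray(weights, dtype=float))
    return float(np.mean(w < eps))

def max_superpixel_weight(weights):
    # Largest absolute attribution assigned to any single superpixel.
    return float(np.max(np.abs(weights)))

def entropy(weights):
    # Shannon entropy (bits) of the normalised |weights|; low entropy means
    # the explanation is concentrated on a few superpixels.
    p = np.abs(np.asarray(weights, dtype=float))
    p = p / p.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def stability(weights_a, weights_b, k=5):
    # Jaccard overlap of the top-k superpixels from two independent LIME
    # runs on the same detection (1.0 = identical top-k sets).
    top = lambda w: set(np.argsort(-np.abs(np.asarray(w)))[:k])
    a, b = top(weights_a), top(weights_b)
    return len(a & b) / len(a | b)
```

In use, each weight vector would come from one LIME `explain_instance` call restricted to the superpixels intersecting a YOLOv8 bounding box; stability then requires at least two such runs with different perturbation samples.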
Published in: Innovative technologies and scientific solutions for industries