Percutaneous thermal ablation is a minimally invasive treatment for hepatocellular carcinoma. Evaluating treatment success depends on accurate quantification of the margin achieved between the ablation zone and the tumor. However, manual delineation of the ablation zone is labor-intensive, motivating the development of automated approaches. Existing deep learning-based ablation zone segmentation methods adopt voxel-wise segmentation. Voxel-wise models often underperform in noisy or low-contrast cases with poorly visible ablation zone borders, producing underestimated and fragmented masks that require complex post-processing. In addition, existing ablation zone segmentation models are trained primarily on tumor segmentation datasets or rely on user interaction for mask correction. Contour-based segmentation methods improve boundary delineation, but their performance typically depends on the visibility of the borders, and they tend to produce overly smooth contours. In this work, we present a fully automated deep learning model for ablation zone segmentation that addresses these limitations by predicting contours directly in the Fourier domain. Fourier contour embeddings enable precise modeling of curved shapes and produce continuous segmentation masks, eliminating the need for extensive post-processing or manual correction. To avoid overly smooth contours, we introduce a multiscale deep supervision strategy with dynamic loss weighting, encouraging the model to capture high-frequency boundary features. We train and evaluate our method on a dedicated ablation zone dataset specifically annotated for this task. Our results demonstrate improved prediction accuracy in Dice score and distance-based metrics compared to existing models. In particular, a detailed contrast-to-noise ratio (CNR) analysis shows that our model consistently outperforms existing approaches across all CNR levels.
Highlights:
• Developing a fully automated deep learning model for ablation zone segmentation.
• The model jointly predicts a segmentation mask and contours at different detail levels.
• Incorporating multiscale deep supervision with dynamic loss weighting into the model.
• The incorporated strategies improve boundary feature learning and contour accuracy.
• Thoroughly evaluating model performance across varying contrast-to-noise ratio (CNR) values.
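The general idea behind Fourier contour embeddings can be illustrated with a minimal sketch: a closed 2D contour is encoded as a truncated Fourier series of its complex boundary coordinates (z = x + iy), and sampling that series yields a smooth, continuous contour. The function names, coefficient count, and toy data below are illustrative assumptions, not the paper's actual architecture or parameterization.

```python
import numpy as np

def fourier_contour_embedding(points, n_coeffs=8):
    """Embed a closed 2D contour as truncated Fourier coefficients.

    points: (N, 2) array of contour vertices ordered along the boundary.
    Returns the coefficients c_k of z(t) = sum_k c_k * exp(2*pi*i*k*t)
    for k = 0..n_coeffs and k = -n_coeffs..-1, with z = x + i*y.
    (Illustrative parameterization, not the paper's exact one.)
    """
    z = points[:, 0] + 1j * points[:, 1]
    coeffs = np.fft.fft(z) / len(z)
    # Keep DC plus the lowest positive and negative frequencies.
    return np.concatenate([coeffs[: n_coeffs + 1], coeffs[-n_coeffs:]])

def reconstruct_contour(kept, n_points=200):
    """Sample a smooth closed contour from truncated Fourier coefficients."""
    n_coeffs = (len(kept) - 1) // 2
    t = np.linspace(0.0, 1.0, n_points, endpoint=False)
    freqs = np.concatenate([np.arange(n_coeffs + 1), np.arange(-n_coeffs, 0)])
    z = sum(c * np.exp(2j * np.pi * k * t) for c, k in zip(kept, freqs))
    return np.stack([z.real, z.imag], axis=1)

# Toy example: a noisy circle is embedded with a handful of coefficients
# and reconstructed as a smooth, continuous contour.
theta = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
noise = 0.02 * np.random.default_rng(0).normal(size=(100, 2))
noisy = np.stack([np.cos(theta), np.sin(theta)], axis=1) + noise
emb = fourier_contour_embedding(noisy, n_coeffs=8)
recon = reconstruct_contour(emb)
```

Truncating the series acts as a low-pass filter on the boundary, which is why plain Fourier parameterizations tend toward overly smooth contours; the multiscale deep supervision with dynamic loss weighting described in the abstract is aimed at recovering the high-frequency boundary detail that such a truncation would otherwise discard.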
Published in: Biomedical Signal Processing and Control
Volume 120, Article 110038