Abstract:
Segmenting skin lesions from dermatoscopic images is crucial for the quantitative analysis and early diagnosis of skin cancer. However, automatic segmentation remains challenging due to blurred lesion boundaries, low contrast between lesions and the surrounding skin, and the presence of artifacts. Although the visual state space model based on the Mamba architecture offers notable advantages over Transformer-based models, including linear computational complexity and long-range dependency modeling, it still struggles to preserve the topological integrity of segmentation results. To address this issue, a topological-prior-guided dual-branch vision Mamba network (TDBVM) is proposed. The proposed architecture adopts a dual-encoder design to extract topological priors and deep semantic features independently. In particular, the topology branch incorporates a multi-color-space topological component extraction module to generate topological prior maps. These priors are fused with visual features through a multi-scale fusion mechanism, guiding feature learning, enhancing the model's ability to capture complex lesion boundaries and morphological variations, and suppressing artifacts. Experimental results on the ISIC2018, ISIC2017, ISIC2016, and PH² datasets demonstrate that the proposed method outperforms existing state-of-the-art approaches in both segmentation accuracy and topological structure preservation, while exhibiting robust generalization capability.
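To make the dual-branch design concrete, the following is a minimal sketch of the overall data flow described above: one encoder for the topological prior map, one for the image, and a fusion step feeding a segmentation head. All class names (TopologyBranch, SemanticBranch, DualBranchSegmenter) and the plain convolutional stand-ins are hypothetical; the paper's actual Mamba-based encoders and multi-scale fusion mechanism are not reproduced here.

```python
# Illustrative sketch only: simple conv blocks stand in for the Mamba encoders,
# and a concat + 1x1 conv stands in for the paper's multi-scale fusion.
import torch
import torch.nn as nn


class TopologyBranch(nn.Module):
    """Hypothetical stand-in for the topology encoder: maps a topological
    prior map (e.g., extracted across multiple color spaces) to features."""
    def __init__(self, in_ch=1, ch=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
        )

    def forward(self, prior):
        return self.enc(prior)


class SemanticBranch(nn.Module):
    """Hypothetical stand-in for the deep semantic (vision Mamba) encoder."""
    def __init__(self, in_ch=3, ch=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
        )

    def forward(self, img):
        return self.enc(img)


class DualBranchSegmenter(nn.Module):
    """Fuses topological-prior features with semantic features, then predicts
    a per-pixel lesion mask from the fused representation."""
    def __init__(self, ch=32):
        super().__init__()
        self.topo = TopologyBranch(ch=ch)
        self.sem = SemanticBranch(ch=ch)
        self.fuse = nn.Conv2d(2 * ch, ch, 1)
        self.head = nn.Conv2d(ch, 1, 1)  # binary lesion logits

    def forward(self, img, prior):
        f = torch.cat([self.sem(img), self.topo(prior)], dim=1)
        return self.head(torch.relu(self.fuse(f)))


# Usage: one 256x256 RGB dermatoscopic image plus its topological prior map.
model = DualBranchSegmenter()
logits = model(torch.randn(1, 3, 256, 256), torch.randn(1, 1, 256, 256))
print(logits.shape)  # torch.Size([1, 1, 256, 256])
```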