Abstract:
Multi-anchor graph approaches have attracted increasing attention for their potential to address the challenges of large-scale multi-view clustering. However, existing methods that leverage multi-anchor graphs still face several obstacles. Consistency-anchored graph learning methods struggle to handle misaligned anchor graphs and require additional post-processing on the consistency graph, which constrains the accuracy and reliability of the clustering results. Anchor graph ensemble clustering, in turn, fails to exploit the complementary information across views when generating candidate base clusterings independently and ignores the original anchor graphs during fusion, which impairs the effectiveness and stability of the clustering results. To address these issues, we propose a novel multi-view clustering approach based on double-ended joint learning. The method fully exploits the duality between multi-anchor information and samples in multi-anchor graphs, performing clustering at the anchor end and the sample end synchronously, and, guided by the multi-anchor information, jointly aligns the sample-end clustering with the multiple anchor-end clusterings. Unlike existing methods, the proposed approach does not require learning a consistent anchor graph directly, so it can handle any type of anchor misalignment and mitigates the negative impact of separating graph learning from graph partitioning. In addition, it uses the multiple anchor graphs for both anchor-end and sample-end clustering within a unified optimization framework, effectively overcoming the limited use of multiple anchor graphs in the base clustering and ensemble stages. Experimental results demonstrate that the proposed method outperforms several competing methods in both clustering performance and running time, effectively enhancing the clustering of multi-view data. Code for the proposed method and the compared methods is provided in the supplementary material: http://github.com/lxd1204/DLMC.