Citation: Du Liang, Li Xiaodong, Chen Yan, Zhou Peng, Qian Yuhua. Double-Ended Joint Learning for Multi-View Clustering[J]. Journal of Computer Research and Development. DOI: 10.7544/issn1000-1239.202440175
Multi-anchor graph approaches have attracted increasing attention for their potential to address the challenges of large-scale multi-view clustering. However, existing methods that leverage multi-anchor graphs face several hurdles. Consistency-anchored graph learning methods struggle with misaligned anchor graphs and require additional post-processing on the consistency graph, which constrains the accuracy and reliability of the clustering results. Anchor graph ensemble clustering methods fail to exploit the complementary information across views when independently generating candidate base clusterings, and they ignore the original anchor graphs during fusion, which impairs the effectiveness and stability of the clustering results. To address these challenges, we propose a novel double-ended joint learning approach for multi-view clustering. The method fully exploits the duality between multi-anchor information and samples in multi-anchor graphs, achieving synchronized clustering at the anchor end and the sample end. Moreover, guided by multi-anchor information, it jointly aligns sample-end clustering with the multiple anchor-end clusterings. Unlike existing methods, our approach does not need to learn a consistent anchor graph directly, so it can handle any type of anchor misalignment and mitigates the negative impact of separating graph learning from graph partitioning on clustering performance. In addition, it uses multiple anchor graphs for anchor-end clustering and sample-end clustering within a unified optimization framework, effectively overcoming the limitations of the base clustering and ensemble stages in exploiting multiple anchor graphs. Experimental results demonstrate that the proposed method outperforms several comparative methods in both clustering performance and running time, effectively enhancing the clustering of multi-view data. Code for the proposed method and the comparative methods is provided in the supplementary material: http://github.com/lxd1204/DLMC.
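To make the anchor-graph machinery the abstract refers to concrete, the sketch below builds a per-view sample-anchor similarity graph, fuses the views by simple averaging, and then clusters both the sample end and the anchor end of the fused bipartite graph with an SVD-based spectral step. This is a minimal illustrative sketch, not the authors' DLMC algorithm: the random anchor sampling, RBF similarity, averaging-based fusion, and parameter names (n_anchors, n_clusters) are assumptions introduced here for illustration only.

```python
# Illustrative sketch of anchor-graph-based multi-view clustering with
# clustering on both ends of the bipartite (sample-anchor) graph.
# NOTE: this is NOT the authors' DLMC method; fusion by averaging and the
# chosen similarity/parameters are assumptions made for demonstration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel

def anchor_graph(X, n_anchors=64, seed=0):
    """Build a row-normalized sample-by-anchor similarity graph Z for one view."""
    rng = np.random.default_rng(seed)
    anchors = X[rng.choice(len(X), n_anchors, replace=False)]  # random anchors
    Z = rbf_kernel(X, anchors)                                  # n x m similarities
    return Z / Z.sum(axis=1, keepdims=True)                     # row-normalize

def bipartite_spectral(Z, n_clusters):
    """Cluster both ends of a bipartite graph via SVD of the normalized graph."""
    d1 = np.power(Z.sum(axis=1), -0.5)            # sample-side degree scaling
    d2 = np.power(Z.sum(axis=0), -0.5)            # anchor-side degree scaling
    Zn = (Z * d1[:, None]) * d2[None, :]          # normalized bipartite graph
    U, _, Vt = np.linalg.svd(Zn, full_matrices=False)
    sample_emb = U[:, :n_clusters]                # sample-end spectral embedding
    anchor_emb = Vt[:n_clusters].T                # anchor-end spectral embedding
    sample_labels = KMeans(n_clusters, n_init=10).fit_predict(sample_emb)
    anchor_labels = KMeans(n_clusters, n_init=10).fit_predict(anchor_emb)
    return sample_labels, anchor_labels

# Usage on toy multi-view data: build one anchor graph per view,
# fuse by averaging (naive, for illustration), then cluster both ends.
views = [np.random.rand(500, 20), np.random.rand(500, 30)]
Zs = [anchor_graph(X, n_anchors=64, seed=v) for v, X in enumerate(views)]
Z_fused = np.mean(Zs, axis=0)
sample_labels, anchor_labels = bipartite_spectral(Z_fused, n_clusters=5)
```

The "double-ended" idea corresponds here to reading off clusterings from both the left (sample) and right (anchor) singular subspaces of the same bipartite graph; the proposed method instead performs this jointly over all views' anchor graphs within a unified optimization, as described in the abstract.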