ISSN 1000-1239 CN 11-1777/TP

Journal of Computer Research and Development (计算机研究与发展) ›› 2016, Vol. 53 ›› Issue (7): 1605-1611. doi: 10.7544/issn1000-1239.2016.20148362

• Artificial Intelligence •




Distributed Low Rank Representation-Based Subspace Clustering Algorithm

Xu Kai, Wu Xiaojun, Yin Hefeng   

  1. (School of Internet of Things Engineering, Jiangnan University, Wuxi, Jiangsu 214122)
  • Online: 2016-07-01


Abstract: Vision problems ranging from image clustering to motion segmentation can naturally be framed as subspace segmentation problems, in which one aims to recover multiple low-dimensional subspaces from noisy and corrupted input data. The low rank representation-based subspace segmentation algorithm (LRR) formulates the problem as a convex optimization and achieves impressive results. However, solving the convex problem takes a long time, and the clustering accuracy is not high enough. This paper therefore proposes a distributed low rank representation-based sparse subspace clustering algorithm (DLRRS). DLRRS adopts distributed parallel computing to obtain the coefficient matrix, then retains the k largest-magnitude coefficients in each column of that matrix and sets the other entries to 0, yielding a sparser coefficient matrix in which the relationships among samples stand out more clearly. Finally, DLRRS performs spectral clustering over the new coefficient matrix. Since DLRRS has no incremental learning capability, a scalable distributed low rank representation-based sparse subspace clustering algorithm (SDLRRS) is further proposed: when new samples arrive, SDLRRS uses the previous clustering result to classify them and obtain the final result. Experimental results on the AR and Extended Yale B datasets show that the improved algorithms not only markedly reduce the running time but also achieve higher accuracy, which verifies that the proposed algorithms are efficient and feasible.
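The post-processing pipeline described in the abstract (keep the k largest-magnitude coefficients per column, symmetrize into a similarity matrix, then cluster spectrally) can be sketched in NumPy. This is a minimal illustration, not the paper's distributed implementation: the distributed LRR solve that produces the coefficient matrix C is omitted, the function names are the author's own, and general k-way spectral clustering is stood in for by two-way spectral bisection on the Fiedler vector.

```python
import numpy as np

def topk_sparsify(C, k):
    """Keep the k largest-magnitude entries in each column of C and
    zero the rest (the sparsification step described in the abstract)."""
    C_sparse = np.zeros_like(C)
    for j in range(C.shape[1]):
        idx = np.argsort(np.abs(C[:, j]))[-k:]
        C_sparse[idx, j] = C[idx, j]
    return C_sparse

def spectral_bisect(W):
    """Two-way spectral clustering on a symmetric affinity matrix W:
    split samples by the sign of the Fiedler vector (eigenvector of the
    second-smallest eigenvalue) of the graph Laplacian. A stand-in for
    general k-way spectral clustering."""
    L = np.diag(W.sum(axis=1)) - W
    _, eigvecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    fiedler = eigvecs[:, 1]
    return (fiedler > 0).astype(int)

def dlrrs_style_cluster(C, k=2):
    """Sparsify the coefficient matrix, build a symmetric similarity
    matrix |C| + |C|^T, and cluster it spectrally."""
    Cs = topk_sparsify(C, k)
    W = np.abs(Cs) + np.abs(Cs).T
    return spectral_bisect(W)
```

On a coefficient matrix with two dominant diagonal blocks (samples drawn from two subspaces), the sparsification suppresses weak cross-block coefficients and the spectral step recovers the two groups.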

Key words: low rank representation, subspace clustering, parallel computing, incremental learning, coefficient reconstruction
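The incremental step of SDLRRS (classifying new samples from the previous clustering result) can likewise be sketched. The abstract does not specify the reconstruction used, so this sketch substitutes a ridge-regularized least-squares representation of the new sample over the old samples and assigns it to the cluster carrying the largest total coefficient magnitude; `assign_new_sample` and the parameter `lam` are illustrative, not the paper's method.

```python
import numpy as np

def assign_new_sample(X_old, labels, x_new, lam=1e-2):
    """Classify a new sample using a previous clustering result.
    Represent x_new over the columns of X_old by solving
    min_c ||x_new - X_old c||^2 + lam ||c||^2, then assign x_new to the
    cluster whose samples carry the largest total |coefficient|."""
    G = X_old.T @ X_old + lam * np.eye(X_old.shape[1])
    c = np.linalg.solve(G, X_old.T @ x_new)
    clusters = np.unique(labels)
    scores = np.array([np.abs(c[labels == l]).sum() for l in clusters])
    return clusters[np.argmax(scores)]
```

Because samples from the same subspace reconstruct each other with large coefficients, a new sample lying near one subspace concentrates its coefficient mass on that cluster's columns.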