Zhang Yuanpeng, Deng Zhaohong, Chung Fu-lai, Hang Wenlong, Wang Shitong. Fast Self-Adaptive Clustering Algorithm Based on Exemplar Score Strategy[J]. Journal of Computer Research and Development, 2018, 55(1): 163-178.

## Fast Self-Adaptive Clustering Algorithm Based on Exemplar Score Strategy

Abstract: To make exemplar-based clustering algorithms both faster and self-adaptive, a fast self-adaptive clustering algorithm based on an exemplar score strategy (ESFSAC) is proposed, building on our previous work, the fast reduced set density estimator (FRSDE). ESFSAC rests on three key assumptions: 1) each cluster has one exemplar, drawn from its high-density samples; 2) exemplars lie either in the reduced set or near it, with high similarity to reduced-set samples; 3) the samples of each cluster diffuse around its exemplar along the reduced set. Based on the first two assumptions, a quantity called the exemplar score is proposed to estimate how likely a sample is to be an exemplar, and its rationale is analyzed theoretically. With the exemplar score and the third assumption, a fast self-adaptive clustering algorithm is constructed. The algorithm first ranks all samples in descending order of exemplar score to form an exemplar candidate set. It then selects exemplars from the candidate set one by one and propagates each exemplar's label through its neighborhood to the whole reduced set. Finally, the same strategy spreads the reduced-set labels to the entire dataset; a sampling step is introduced here to speed up label propagation. Experiments on several synthetic and real-world datasets show that the proposed algorithm can handle datasets of arbitrary shape as well as large-scale datasets, without requiring the number of clusters to be preset.
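The rank-then-propagate pipeline the abstract describes can be illustrated with a minimal, hypothetical Python sketch. All names here (`exemplar_scores`, `cluster_by_exemplar_score`, the `bandwidth` and `radius` parameters) are illustrative assumptions, not the paper's API: the sketch stands in a Gaussian kernel density estimate for the exemplar score, since the paper ties exemplars to high-density samples, and uses fixed-radius neighborhoods for label propagation. The actual ESFSAC operates on FRSDE's reduced set and adds a sampling step, both omitted here.

```python
import numpy as np

def exemplar_scores(X, bandwidth=1.0):
    # Hypothetical stand-in for the paper's exemplar score:
    # a Gaussian kernel density estimate, reflecting the assumption
    # that exemplars come from high-density samples.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2)).sum(axis=1)

def cluster_by_exemplar_score(X, radius=1.5, bandwidth=1.0):
    """Greedy sketch of the abstract's pipeline: rank samples by
    exemplar score (descending) to form a candidate set, take the
    top-ranked unlabeled sample as a new exemplar, then diffuse its
    label transitively to all points within `radius`."""
    n = len(X)
    labels = -np.ones(n, dtype=int)                  # -1 = unlabeled
    order = np.argsort(-exemplar_scores(X, bandwidth))  # candidate set
    dist = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    k = 0
    for i in order:
        if labels[i] != -1:
            continue                                 # already absorbed
        labels[i] = k                                # new exemplar
        frontier = [i]
        while frontier:                              # label propagation
            j = frontier.pop()
            for m in np.flatnonzero((dist[j] <= radius) & (labels == -1)):
                labels[m] = k
                frontier.append(m)
        k += 1
    return labels
```

On two well-separated point clouds this produces two labels without the number of clusters being given in advance, which is the self-adaptive behavior the abstract claims; `radius` plays the role that neighborhood structure plays in the real algorithm.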

