ISSN 1000-1239 CN 11-1777/TP

计算机研究与发展 ›› 2016, Vol. 53 ›› Issue (3): 559-570.doi: 10.7544/issn1000-1239.2016.20148218

• Software Technology •




Efficient Duplicate Detection Approach for High Dimensional Big Data


  1. 1(College of Information Science and Technology, Jinan University, Guangzhou 510632); 2(School of Information Science and Technology, Sun Yat-sen University, Guangzhou 510006)
  • Online: 2016-03-01

摘要: In the big data era, massive, heterogeneous data from multiple sources is becoming the mainstream of many applications. Multi-source heterogeneity inevitably introduces duplicate records, and the sheer volume of data places extremely high demands on the efficiency of duplicate detection; traditional techniques do not handle duplicate detection over high-dimensional data well in big data environments. This paper studies that problem: it analyzes the shortcomings of traditional SNM-style methods, generalizes duplicate detection as a special kind of clustering problem, builds an efficient index with an R-tree, exploits the properties of clusters to reduce the number of comparisons within R-tree leaves, and uses the Apriori property of duplicate detection to process high-dimensional datasets in parallel. Experimental results show that the proposed algorithms effectively improve the efficiency of duplicate detection on high-dimensional data.
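For context, the SNM family that the paper critiques can be summarized in a few lines: sort the records by some key, then compare each record only with its neighbors inside a fixed-size sliding window. The sketch below is the textbook method, not the paper's algorithm; the record set, the window size, and the `difflib`-based similarity threshold are all illustrative choices.

```python
from difflib import SequenceMatcher

def snm_duplicates(records, key, window, is_dup):
    """Classic sorted-neighborhood method: sort records by a key, then
    compare each record only with the next window-1 records."""
    order = sorted(range(len(records)), key=lambda i: key(records[i]))
    found = set()
    for pos, i in enumerate(order):
        for j in order[pos + 1 : pos + window]:  # at most window-1 neighbors
            if is_dup(records[i], records[j]):
                found.add((min(i, j), max(i, j)))
    return found

people = ["jon smith", "john smith", "mary jones", "jon smyth", "zara lee"]
similar = lambda a, b: SequenceMatcher(None, a, b).ratio() >= 0.85
pairs = snm_duplicates(people, key=lambda r: r, window=3, is_dup=similar)
print(sorted(pairs))  # [(0, 1), (0, 3)]
```

The window keeps the comparison count linear in the number of records, but any duplicate pair whose sort keys land more than `window` positions apart is missed; this trade-off is one of the deficiencies the paper's clustering-and-R-tree approach is designed to avoid.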

关键词: big data, high-dimensional data, data mining, data preprocessing, duplicate detection

Abstract: In the big data era, huge quantities of heterogeneous data from multiple sources are widely used in various domains. Data that come from multiple sources and take various structures make duplication inevitable. In addition, such large volumes of data create a growing demand for efficient duplicate detection algorithms. Traditional approaches have difficulty dealing with high-dimensional data in big data scenarios. This paper analyses the deficiencies of traditional SNM (sorted neighborhood method) approaches and proposes a novel approach based on clustering. An efficient indexing mechanism is first created with the help of the R-tree, a variant of the B-tree for multi-dimensional space. The proposed algorithm reduces the number of comparisons needed by exploiting the characteristics of clusters, and it outperforms existing duplicate detection approaches such as SNM, DCS, and DCS++. Furthermore, based on the Apriori property of duplicate detection, we develop a new algorithm that generates duplicate candidates in parallel from projections of the original dataset and then uses them to reduce the search space over the high-dimensional data. Experimental results show that this parallel approach works efficiently on high-dimensional data; the significant performance improvement makes it well suited to duplicate detection over high-dimensional big data.
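The Apriori property mentioned above can be read as: a pair of records can only be a duplicate in the full d-dimensional space if it is already a duplicate candidate in every lower-dimensional projection, so per-projection candidate sets can be generated independently (hence in parallel) and then intersected. The following is a minimal single-process sketch of that pruning idea under an assumed per-dimension distance threshold; the data, the threshold `EPS`, and the function names are illustrative, not from the paper.

```python
# Toy records: each is a tuple of numeric attributes (one per dimension).
records = [
    (1.0, 2.0, 3.0),
    (1.05, 2.02, 3.01),  # near-duplicate of record 0 in every dimension
    (9.0, 8.0, 7.0),
    (1.0, 8.0, 3.0),     # close to record 0 only in dimensions 0 and 2
]
EPS = 0.1  # assumed per-dimension similarity threshold

def candidates_in_projection(recs, dim, eps=EPS):
    """Pairs whose values differ by at most eps in a single dimension."""
    pairs = set()
    order = sorted(range(len(recs)), key=lambda i: recs[i][dim])
    for a in range(len(order)):
        for b in range(a + 1, len(order)):
            i, j = order[a], order[b]
            if recs[j][dim] - recs[i][dim] > eps:
                break  # sorted order: later records are even farther away
            pairs.add((min(i, j), max(i, j)))
    return pairs

# Apriori pruning: a pair survives only if it is a candidate in *every*
# projection, so intersect the per-dimension candidate sets (each set
# could be computed by a separate worker in a parallel implementation).
dims = range(len(records[0]))
survivors = set.intersection(*(candidates_in_projection(records, d) for d in dims))
print(survivors)  # {(0, 1)}
```

Record 3 is a candidate in dimensions 0 and 2 but is pruned by dimension 1, so only the genuine near-duplicate pair remains; in a real implementation only these surviving pairs would go through the expensive full-record comparison.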

Key words: big data, high-dimensional data, data mining, data preprocessing, duplicate detection