ISSN 1000-1239 CN 11-1777/TP

Journal of Computer Research and Development, 2015, Vol. 52, Issue (7): 1546-1557. doi: 10.7544/issn1000-1239.2015.20140093

• System Architecture •

Object-Based Data De-Duplication Method for OpenXML Compound Files

Yan Fang1,2, Li Yuanzhang1, Zhang Quanxin1, Tan Yu’an1

  1(School of Computer Science & Technology, Beijing Institute of Technology, Beijing 100086); 2(School of Information, Beijing Wuzi University, Beijing 101149) (yfjoy@163.com)
  • Online: 2015-07-01
  • Supported by: the National High Technology Research and Development Program of China (863 Program) (2013AA01A212), the National Natural Science Foundation of China (61370063), and the Beijing Higher Education Young Elite Teacher Project (YETP1532, YETP1178)

Abstract: Content defined chunking (CDC) is a prevalent data de-duplication algorithm for removing redundant data segments in storage systems. Existing work on CDC does not consider the distinct content characteristics of different file types: it determines chunk boundaries in an essentially random way and applies a single strategy to all file types. Such a method has proven suitable for text and simple content, but it does not achieve optimal performance for compound files, which are composed of unstructured data, usually occupy large storage space, and often contain multimedia data. Object-based data de-duplication is the current state of the art and an effective way to detect duplicate data in such files. We analyze the content characteristics of OpenXML files, develop an object extraction method, and propose an algorithm that determines the de-duplication granularity from the structure and distribution of the extracted objects. The goal is to effectively detect identical objects within a file and across different files, and to de-duplicate effectively even when the physical layout of a compound file changes. Simulation experiments on typical unstructured data sets show that, overall, object-based de-duplication improves the de-duplication ratio of unstructured data by about 10% compared with the CDC method.
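The contrast with byte-stream chunking can be made concrete. The sketch below is a minimal illustration, not the paper's algorithm (which additionally determines granularity from object structure and distribution): it exploits the fact that an OpenXML document (.docx/.xlsx/.pptx) is a ZIP package whose parts (XML streams, embedded images, media) form natural de-duplication objects. The file names and helper names are illustrative assumptions.

```python
# Illustrative sketch: object-level de-duplication of OpenXML packages.
# Each package part is treated as one object and fingerprinted whole.
import hashlib
import zipfile

def extract_objects(path):
    """Yield (part_name, content) for every part in an OpenXML package."""
    with zipfile.ZipFile(path) as pkg:
        for info in pkg.infolist():
            yield info.filename, pkg.read(info.filename)

def dedup(paths):
    """Store each distinct object once, keyed by its content fingerprint."""
    store = {}          # fingerprint -> object content
    total = dup = 0     # bytes seen vs. bytes eliminated
    for path in paths:
        for _name, content in extract_objects(path):
            digest = hashlib.sha1(content).hexdigest()
            total += len(content)
            if digest in store:
                dup += len(content)      # object already stored: eliminated
            else:
                store[digest] = content  # first occurrence: keep one copy
    return total, dup

if __name__ == "__main__":
    total, dup = dedup(["a.docx", "b.docx"])  # hypothetical input files
    print(f"de-duplication ratio: {dup / total:.1%}")
```

Because parts are read and hashed in decompressed form, an object keeps the same fingerprint when the package is repacked or its parts are reordered, so duplicates are still found after the physical layout changes; chunking the raw compressed byte stream, as CDC does, would shift every chunk boundary in that situation.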

Key words: content defined chunking (CDC), object, unstructured data, OpenXML standard, compound file, data de-duplication

CLC number: