ISSN 1000-1239 CN 11-1777/TP

Journal of Computer Research and Development ›› 2015, Vol. 52 ›› Issue (7): 1546-1557. doi: 10.7544/issn1000-1239.2015.20140093


Object-Based Data De-Duplication Method for OpenXML Compound Files

Yan Fang1,2, Li Yuanzhang1, Zhang Quanxin1, Tan Yu’an1   

  1(School of Computer Science & Technology, Beijing Institute of Technology, Beijing 100086); 2(School of Information, Beijing Wuzi University, Beijing 101149)
  • Online: 2015-07-01

Abstract: Content-defined chunking (CDC) is a prevalent data de-duplication algorithm for removing redundant data segments in storage systems. Existing CDC research does not consider the distinct content characteristics of different file types: chunk boundaries are determined without regard to a file's internal structure, and a single chunking strategy is applied to all file types. Such methods have been shown to work well for text and other simple content, but they do not achieve optimal performance on compound files. A compound file is composed of unstructured data, typically occupies large storage space, and often contains multimedia data. Object-based data de-duplication is currently the most advanced approach and an effective solution for detecting duplicate data in such files. We analyze the content characteristics of OpenXML files and develop an object extraction method; based on the structure and distribution of the extracted objects, we propose an algorithm for determining the de-duplication granularity. The goal is to effectively detect identical objects within a file or across different files, and to de-duplicate compound files effectively even when their physical layout changes. In simulation experiments on a typical unstructured data collection, the proposed method improves de-duplication efficiency by about 10% over the CDC method on unstructured data in general.
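The object-extraction idea can be illustrated with a minimal sketch. It relies only on the fact that OpenXML files (e.g. .docx, .xlsx, .pptx) are ZIP packages whose parts (document XML, embedded images, media) are natural de-duplication objects; the part names and the use of SHA-256 fingerprints here are illustrative assumptions, not the authors' implementation, and the paper's granularity-determination algorithm is not reproduced.

```python
import hashlib
import io
import zipfile

def object_fingerprints(openxml_bytes):
    """Treat an OpenXML file as a ZIP package and fingerprint each
    embedded part (object) by the SHA-256 digest of its content."""
    fingerprints = {}
    with zipfile.ZipFile(io.BytesIO(openxml_bytes)) as pkg:
        for name in pkg.namelist():
            fingerprints[name] = hashlib.sha256(pkg.read(name)).hexdigest()
    return fingerprints

def shared_objects(doc_a, doc_b):
    """Return the set of object digests present in both packages.
    Because objects are matched by content hash, duplicates are found
    even when the parts sit at different physical offsets in each file."""
    a = set(object_fingerprints(doc_a).values())
    b = set(object_fingerprints(doc_b).values())
    return a & b
```

Because the comparison is per object rather than per fixed-offset chunk, an image embedded in two different documents is detected as a duplicate even though repackaging shifts its byte position, which is exactly the failure mode of boundary-based chunking on compound files.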

Key words: content-defined chunking (CDC), object, unstructured data, OpenXML standard, compound file, data de-duplication
