    Yan Fang, Li Yuanzhang, Zhang Quanxin, Tan Yu’an. Object-Based Data De-Duplication Method for OpenXML Compound Files[J]. Journal of Computer Research and Development, 2015, 52(7): 1546-1557. DOI: 10.7544/issn1000-1239.2015.20140093

    Object-Based Data De-Duplication Method for OpenXML Compound Files


      Abstract: Content-defined chunking (CDC) is a prevalent data de-duplication algorithm for removing redundant data segments in storage systems. Current research on CDC does not consider the distinct content characteristics of different file types: it determines chunk boundaries in a content-agnostic way and applies a single strategy to all file types. Such methods have proven suitable for text and simple content, but they do not achieve optimal performance for compound files. A compound file is composed of unstructured data, usually occupies a large amount of storage space, and often contains multimedia data. Object-based data de-duplication is currently the most advanced method and an effective way to detect duplicate data in such files. We analyze the content characteristics of OpenXML files, develop an object extraction method, and propose a de-duplication granularity determination algorithm based on object structure and distribution. The goal is to effectively detect identical objects within a file or across different files, and to de-duplicate compound files effectively even when their physical layout changes. Simulation experiments on typical unstructured data sets show that, overall, the object-based method improves the de-duplication ratio of unstructured data by about 10% compared with the CDC method.
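The CDC baseline discussed in the abstract can be sketched as follows. This is a minimal illustration: the additive rolling hash and the window, mask, and chunk-size parameters are assumptions for demonstration, not the configuration evaluated in the paper.

```python
def cdc_chunks(data: bytes, window: int = 16, mask: int = 0xFFF,
               min_size: int = 256, max_size: int = 8192):
    """Split data into variable-size chunks at content-defined boundaries.

    A boundary is declared when a rolling hash over the last `window`
    bytes satisfies a mask condition, subject to minimum and maximum
    chunk-size limits (illustrative parameters, not the paper's).
    """
    chunks = []
    start = 0
    h = 0  # simple additive rolling hash over the last `window` bytes
    for i, byte in enumerate(data):
        h += byte
        if i - start >= window:
            h -= data[i - window]  # slide the window forward
        length = i - start + 1
        # Content-defined boundary, or forced cut at the maximum size.
        if (length >= min_size and (h & mask) == 0) or length >= max_size:
            chunks.append(data[start:i + 1])
            start = i + 1
            h = 0
    if start < len(data):
        chunks.append(data[start:])  # trailing partial chunk
    return chunks
```

Because boundaries depend only on local content, an insertion early in a file shifts at most a few nearby chunk boundaries, which is why CDC outperforms fixed-size chunking on text; the paper's point is that this byte-level view ignores the object structure of compound files.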

