ISSN 1000-1239 CN 11-1777/TP

    2020 Advances in Big Data and Intelligent Storage Systems

    Journal of Computer Research and Development    2020, 57 (2): 241-242.   DOI: 10.7544/issn1000-1239.2020.qy0201
    A Write-Optimized Re-computation Scheme for Non-Volatile Memory
    Zhang Ming, Hua Yu, Liu Lurong, Hu Rong, Li Ziyi
    Journal of Computer Research and Development    2020, 57 (2): 243-256.   DOI: 10.7544/issn1000-1239.2020.20190524
    The rise of non-volatile memory (NVM) technology brings many opportunities and challenges to computer storage systems. Compared with DRAM, NVM used as persistent memory offers non-volatility, low energy consumption, and high storage density, but it suffers from limited erase/write endurance and high write latency. It is therefore necessary to reduce write operations to non-volatile main memory, both to improve system performance and to extend the lifetime of NVM. To address this problem, this paper proposes ROD, a re-computation scheme based on the out-degree of computing nodes. Because the performance gap between CPU and memory leaves computing resources underutilized, ROD selectively discards computed results that would otherwise be stored in NVM and re-computes them when needed. By trading storage for computation, ROD reduces the number of writes to NVM. We have implemented ROD and evaluated it on the Gem5 simulator with NVMain, alongside a greedy re-computation scheme and a store-only scheme as state-of-the-art baselines. Experimental results on the Powerstone benchmarks show that ROD reduces write operations by 44.3% on average (up to 68.5%) compared with the store-only scheme. Execution time is reduced by 28.1% on average (up to 68.6%) compared with the store-only scheme, and by 9.3% on average (up to 19.4%) compared with the greedy scheme.
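    The store-versus-recompute trade-off at the heart of ROD can be sketched as follows. This is an illustration of the idea rather than the authors' implementation: the cost model, the graph representation, and both function names are our assumptions.

```python
# Sketch of ROD's core decision: store a node's result in NVM only when
# one write is cheaper than re-computing the result once per consumer
# (i.e., once per out-edge of the node in the dataflow graph).

def should_store(out_degree, write_cost, recompute_cost):
    """True when writing the result to NVM beats re-computing it
    for every consumer."""
    return write_cost < out_degree * recompute_cost

def plan_writes(graph, write_cost, recompute_cost):
    """graph maps node -> list of consumer nodes. Returns the set of
    nodes whose results should be written to NVM; all other results
    are discarded and re-computed on demand, reducing NVM writes."""
    stored = set()
    for node, consumers in graph.items():
        if should_store(len(consumers), write_cost, recompute_cost):
            stored.add(node)
    return stored
```

    Under this model, a result with many consumers is written once rather than re-computed repeatedly, while a result with few consumers is cheaper to regenerate than to persist.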
    A High Throughput NVM Storage System Based on Access Request Conflict Detection
    Cai Tao, Wang Jie, Niu Dejiao, Liu Peiyao, Chen Fuli
    Journal of Computer Research and Development    2020, 57 (2): 257-268.   DOI: 10.7544/issn1000-1239.2020.20190526
    NVM storage is a promising way to improve the efficiency of computer storage systems. However, the I/O stack of current operating systems lacks adaptation and optimization mechanisms for NVM storage devices; in particular, the file-system lock mechanism has become an important factor limiting the efficiency of NVM storage systems. In this paper, we embed the management of access requests into the NVM storage device itself. To improve the concurrency of access requests in the operating system and alleviate the performance bottleneck caused by the device interface, we remove the existing lock mechanism from the file system and instead use a conflict detection algorithm inside the NVM storage device. We first present the structure of the high-throughput NVM storage system. We then design an access request management method based on a two-dimensional linked list, which changes the structure of access request management and reduces management conflicts, together with a conflict detection algorithm that manages access requests to shared data stored on the NVM device. New submission and release processes for access requests are designed accordingly. Finally, we implement a prototype of the high-throughput NVM storage system, named HTPM, based on PMEM, Intel's open-source NVM storage device simulator, and evaluate its I/O performance and throughput with Fio and Filebench. The results show that HTPM improves IOPS by 31.9% and I/O performance by 21.4% compared with PMEM.
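    The conflict-detection idea that replaces the file-system lock can be sketched as below. This is a hypothetical illustration, not HTPM's code: a per-file list stands in for the paper's two-dimensional linked list, and the class and method names are assumptions.

```python
# Sketch: admit an access request only if its byte range does not
# conflict with any in-flight request on the same file. Two requests
# conflict when their ranges overlap and at least one is a write.

class ConflictDetector:
    def __init__(self):
        self.inflight = {}  # file_id -> list of (offset, length, is_write)

    def _overlaps(self, a_off, a_len, b_off, b_len):
        return a_off < b_off + b_len and b_off < a_off + a_len

    def submit(self, file_id, offset, length, is_write):
        """Submission process: returns True when the request is admitted;
        on conflict the caller queues or retries it."""
        for off, ln, wr in self.inflight.get(file_id, []):
            if (is_write or wr) and self._overlaps(offset, length, off, ln):
                return False
        self.inflight.setdefault(file_id, []).append((offset, length, is_write))
        return True

    def release(self, file_id, offset, length, is_write):
        """Release process: remove a completed request so that later
        conflicting requests can proceed."""
        self.inflight[file_id].remove((offset, length, is_write))
```

    Unlike a coarse file lock, overlapping reads proceed concurrently, and only genuinely conflicting writes are held back.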
    Fingerprint Search Optimization for Deduplication on Emerging Storage Devices
    He Kewen, Zhang Jiachen, Liu Xiaoguang, Wang Gang
    Journal of Computer Research and Development    2020, 57 (2): 269-280.   DOI: 10.7544/issn1000-1239.2020.20190543
    Fingerprint search is I/O-intensive, and the performance of the external storage device is its bottleneck; this paper therefore focuses on fingerprint search in data deduplication systems. We compare the traditional eager deduplication algorithm with lazy deduplication algorithms that reduce the number of disk accesses, study both on emerging storage devices (Optane SSD and persistent memory), and give optimization suggestions. We model the fingerprint search delay of the eager and lazy deduplication algorithms, and draw three conclusions for the new storage devices from the model: 1) the number of fingerprints per batched search should be reduced; 2) the locality ring should be smaller on faster devices, and its size has an optimal value; 3) on fast devices, eager fingerprint lookup outperforms lazy fingerprint lookup. Finally, experimental results on an actual HDD, an Optane SSD, and emulated persistent memory verify the correctness of our model: the eager algorithm outperforms the lazy algorithm on the emerging storage devices, and the optimal locality ring size shifts toward smaller values, which basically conforms to the conclusions of the proposed model.
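    The eager-versus-lazy trade-off can be illustrated with a toy delay model. The formulas and parameters below are our own simplification, not the paper's model: eager lookup pays one device access per locality-ring miss, while lazy lookup amortizes device accesses over a batch but pays a per-fingerprint buffering/sorting cost.

```python
# Toy latency model contrasting eager and lazy fingerprint search.

def eager_delay(n, device_latency, ring_hit_rate):
    """Expected delay for n fingerprints when only locality-ring
    misses require a device access."""
    return n * (1 - ring_hit_rate) * device_latency

def lazy_delay(n, batch, device_latency, sort_cost_per_fp):
    """Batched search: one sequential device pass per batch of
    fingerprints, plus the CPU cost of sorting/merging the batch."""
    batches = (n + batch - 1) // batch
    return batches * device_latency + n * sort_cost_per_fp
```

    With a slow (HDD-like) device latency, batching dominates; as device latency shrinks toward persistent-memory speeds, the fixed per-fingerprint batching cost makes eager lookup the better choice, matching conclusion 3 above.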
    A Hybrid Approach for Managing Data Pages in Persistent Memory File Systems
    Chen Youmin, Zhu Bohong, Han Yinjun, Tu Yaofeng, Shu Jiwu
    Journal of Computer Research and Development    2020, 57 (2): 281-290.   DOI: 10.7544/issn1000-1239.2020.20190574
    Intel officially released the Optane DC Persistent Memory, based on 3D XPoint technology, in April 2019, providing new opportunities for building efficient persistent memory storage systems. However, existing software is far from fully exploiting the hardware performance, because it fails to take advantage of the byte-addressability of persistent memory. This paper proposes a hybrid data page management (HDPM) mechanism that manages file data by selectively using copy-on-write and log-structuring, so as to fully utilize the byte-addressability of persistent memory and avoid the redundant copy overhead that traditional approaches incur when processing unaligned or small-sized writes. To keep read performance unaffected, HDPM introduces a reverse-scanning mechanism that avoids extra data copying when rebuilding data pages from the log. HDPM also introduces a multi-stage garbage collection mechanism for log cleaning: when a single log grows too large, it is reclaimed automatically by read/write system calls; when persistent memory space runs low, a background thread reclaims log space asynchronously with a lock-free approach, without affecting normal read/write performance. Experiments show that HDPM provides high write performance: compared with NOVA, a state-of-the-art persistent memory file system, HDPM exhibits up to 58% lower write latency under small-sized, write-intensive workloads, while providing comparable read performance. Our evaluation with Filebench shows that HDPM outperforms NOVA by up to 33% with 10 concurrent threads.
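    The two HDPM ideas described above can be sketched in a few lines. This is our interpretation of the abstract, not HDPM's implementation: the page size, the dispatch rule, and the log layout are assumptions.

```python
# Sketch 1: hybrid write-path dispatch -- page-aligned whole-page
# updates use copy-on-write; small or unaligned writes are appended
# to a per-page log, exploiting byte-addressability.

PAGE_SIZE = 4096

def choose_write_path(offset, length):
    """Return 'cow' for aligned whole-page updates, 'log' otherwise."""
    if offset % PAGE_SIZE == 0 and length % PAGE_SIZE == 0:
        return "cow"
    return "log"

# Sketch 2: reverse scanning -- rebuild a page by walking its log from
# newest to oldest; once a byte is filled by a newer entry, older
# entries for that byte are skipped, avoiding redundant copies.

def rebuild_page(base_page, log):
    """log is a chronological list of (offset, data) entries for one
    page; the newest entry is last."""
    page = bytearray(base_page)
    filled = [False] * len(page)
    for off, data in reversed(log):
        for i, b in enumerate(data):
            if not filled[off + i]:
                page[off + i] = b
                filled[off + i] = True
    return bytes(page)
```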
    A Cross-Datacenter Erasure Code Writing Method Based on Generator Matrix Transformation
    Bao Han, Wang Yijie, Xu Fangliang
    Journal of Computer Research and Development    2020, 57 (2): 291-305.   DOI: 10.7544/issn1000-1239.2020.20190542
    In cross-datacenter storage systems, existing erasure code writing methods usually suffer from low encoding efficiency, low transmission efficiency, and high network resource consumption, so cross-datacenter erasure codes usually have a low write rate. This paper proposes CREW, a cross-datacenter erasure code writing method based on generator matrix transformation. Specifically, we first propose GBTC, a greedy transmission topology construction algorithm that builds a tree-structured transmission topology whose edge weights (set to the network distances between datacenters) increase from top to bottom, to organize data transmission between datacenters. We then propose GMT, a generator matrix transformation algorithm. Without changing the linear relationships among coded blocks, GMT transforms the generator matrix so that the number of data blocks related to a coded block is negatively correlated with the network distance between the datacenter holding that coded block and the root of the tree-structured topology. As a result, CREW needs to transfer only a small number of data blocks over long network distances to write data, reducing network resource consumption. Finally, we propose DPW, a distributed pipelined writing algorithm that distributes encoding operations to different nodes for parallel execution and limits the number of times data blocks are forwarded, thereby improving encoding and transmission efficiency. Experiments show that compared with writing methods of traditional erasure codes, the write rate of CREW increases by 36.3%~57.9%, and compared with IncEncoding, an existing cross-datacenter erasure code writing method, the write rate of CREW increases by 32.4%.
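    A Prim-style greedy construction conveys the flavor of GBTC's tree-building step. This is a sketch under our assumptions (a symmetric distance matrix, minimum-distance attachment, no tie-breaking rules), not the paper's algorithm.

```python
# Sketch: build a transmission tree over datacenters by repeatedly
# attaching the unconnected datacenter that is closest to some node
# already in the tree, so short links tend to sit near the root and
# edge weights grow from top to bottom.

def build_tree(dist, root):
    """dist[i][j] is the network distance between datacenters i and j.
    Returns parent: a dict mapping each non-root datacenter to its
    parent in the transmission tree."""
    n = len(dist)
    in_tree = {root}
    parent = {}
    while len(in_tree) < n:
        best = None  # (distance, tree_node, outside_node)
        for u in in_tree:
            for v in range(n):
                if v not in in_tree:
                    if best is None or dist[u][v] < best[0]:
                        best = (dist[u][v], u, v)
        _, u, v = best
        parent[v] = u
        in_tree.add(v)
    return parent
```

    With the tree in hand, GMT can then place coded blocks that depend on many data blocks close to the root, so only a few blocks ever cross the long, expensive edges.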
    Proactive Fault Tolerance Based on “Collection—Prediction—Migration—Feedback” Mechanism
    Yang Hongzhang, Yang Yahui, Tu Yaofeng, Sun Guangyu, Wu Zhonghai
    Journal of Computer Research and Development    2020, 57 (2): 306-317.   DOI: 10.7544/issn1000-1239.2020.20190549
    Hard disk faults have become the main source of failures in data centers, seriously affecting data reliability. Traditional data fault tolerance is usually realized by increasing data redundancy, which has several shortcomings. Proactive fault tolerance has become a research hotspot because it can predict hard disk failures and migrate data in advance. However, existing work mostly studies hard disk fault prediction and lacks research on collection, migration, and feedback, which makes commercial deployment difficult. This paper proposes a whole-process proactive fault tolerance scheme based on a “Collection—Prediction—Migration—Feedback” mechanism, which includes a time-sharing hard disk information collection method, a sliding-window record merging and sample building method, a multi-type hard disk fault prediction method, a multi-disk joint data migration method, and two-level validation of prediction results with fast feedback. Test results show that collecting hard disk information slows front-end threads by only 0.96%, the recall rate of hard disk fault prediction reaches 94.66%, and data repair time is 55.10% shorter than with traditional methods. This work has been running stably in ZTE’s data centers, meeting the objectives of proactive fault tolerance: high reliability, high intelligence, low interference, low cost, and wide applicability.
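    The sliding-window record merging and sample building step can be sketched as follows. The field layout, window width, lead time, and labeling rule are our assumptions for illustration, not the paper's exact method.

```python
# Sketch: merge a disk's daily SMART records over a sliding window of
# w days into one training sample (element-wise mean), and label a
# sample faulty when its window ends within `lead` days of the disk's
# failure day (fail_day is None for healthy disks).

def build_samples(records, fail_day, w=3, lead=7):
    """records: chronological list of per-day SMART attribute vectors.
    Returns a list of (sample, label) pairs for training a fault
    prediction model."""
    samples = []
    for end in range(w - 1, len(records)):
        window = records[end - w + 1 : end + 1]
        merged = [sum(col) / w for col in zip(*window)]
        label = int(fail_day is not None and 0 <= fail_day - end <= lead)
        samples.append((merged, label))
    return samples
```

    Merging consecutive records smooths transient noise in SMART counters, while the lead-time label gives the predictor enough advance warning to migrate data before the failure occurs.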
    A Benefit Model Based Data Reuse Mechanism for Spark SQL
    Shen Yijie, Zeng Dan, Xiong Jin
    Journal of Computer Research and Development    2020, 57 (2): 318-332.   DOI: 10.7544/issn1000-1239.2020.20190563
    Analyzing massive data to discover their potential value can bring great benefits. Spark is a widely used data analytics engine for large-scale data processing owing to its good scalability and high performance, and Spark SQL is its most commonly used programming interface. Data analytic applications contain many redundant computations, which not only waste system resources but also prolong query execution time. However, the current implementation of Spark SQL is not aware of redundant computations among queries and hence cannot remove them. To address this issue, we present Criss, a benefit-model-based, fine-grained, automatic data reuse mechanism. Criss automatically identifies redundant computations among queries, then uses an I/O-performance-aware benefit model to choose the operator results with the greatest benefit and caches them in a hybrid store consisting of both memory and HDD. Moreover, cache management and data reuse in Criss are partition-based rather than based on the whole result of an operator; this fine-grained mechanism greatly improves query performance and storage utilization. We implement Criss in Spark SQL, using a modified TachyonFS for data caching. Experimental results show that Criss outperforms Spark SQL by 40% to 68%.
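    A greedy policy in the spirit of the benefit model described above can be sketched as follows. The scoring formula, parameters, and function names are our illustrative assumptions, not Criss's actual model.

```python
# Sketch: the benefit of caching an operator's result is the compute
# time saved across its expected reuses minus the cost of reading it
# back from the cache medium; cache the densest benefits first.

def benefit(compute_cost, read_cost, reuse_count):
    return (compute_cost - read_cost) * reuse_count

def choose_to_cache(ops, capacity):
    """ops: list of (name, size, compute_cost, read_cost, reuse_count).
    Greedily cache results with the highest benefit per byte until the
    (memory + HDD) cache capacity is exhausted."""
    ranked = sorted(ops, key=lambda o: benefit(o[2], o[3], o[4]) / o[1],
                    reverse=True)
    chosen, used = [], 0
    for name, size, c, r, k in ranked:
        if benefit(c, r, k) > 0 and used + size <= capacity:
            chosen.append(name)
            used += size
    return chosen
```

    Making read cost explicit is what makes the model I/O-performance aware: a result cached on HDD has a smaller net benefit than the same result cached in memory, and a result cheaper to recompute than to read back is never cached.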
    Efficient Index and Query Algorithm Based on Geospatial Big Data
    Zhao Huihui, Zhao Fan, Chen Renhai, Feng Zhiyong
    Journal of Computer Research and Development    2020, 57 (2): 333-345.   DOI: 10.7544/issn1000-1239.2020.20190565
    In recent years, with the rapid development of technologies such as intelligent target recognition, electronic sensors, collaborative control, and computer networks, intelligent transportation systems have made a qualitative leap. Modern intelligent transportation systems can realize the coordinated management of vehicles, roads, and cloud platforms, but they rely on the large volumes of two-dimensional geospatial data generated every day, so efficiently storing and querying large-scale geospatial data is of great significance for their future popularization and development. Because urban traffic information is complex, large in volume, and rapidly updated, current spatial indexing techniques struggle to search two-dimensional geospatial data efficiently. To optimize the storage organization of two-dimensional geospatial data under spatial big data and improve retrieval efficiency, this paper proposes a multi-layer slice recursive (MSR) spatial index tree construction algorithm. The algorithm first sorts and divides the map data on the first dimension to generate FD slices. It then sorts the data within each FD slice on the second dimension to generate SD slices, and within each SD slice divides the current and adjacent slices into spatial objects. Finally, data clustering is performed by comparing the length of each spatial object with the node capacity, and the MSR tree is generated recursively from the bottom up by checking whether all slices have completed clustering. Experimental results show that the query performance of the two-dimensional spatial storage structure constructed by the MSR algorithm is better than that of the most representative spatial indexing techniques: the R-tree bulk-loading algorithm STR (sort tile recursive), the STR-grid hybrid algorithm, and the efficient geometric range query (EGRQ).
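    The two-level slicing step can be sketched as below. This is a simplified illustration under our assumptions (point objects instead of extended spatial objects, fixed slice counts, no recursive tree build), not the MSR algorithm itself.

```python
# Sketch: sort points on the first dimension and cut them into FD
# slices; sort each FD slice on the second dimension to form SD
# slices; each SD slice becomes a leaf holding at most `cap` objects.

def msr_leaves(points, fd_slices, cap):
    """points: list of (x, y). Returns leaf groups of at most cap
    points, clustered first by x-order, then by y-order within each
    FD slice."""
    pts = sorted(points)                   # sort on first dimension
    per_fd = -(-len(pts) // fd_slices)     # ceiling division
    leaves = []
    for i in range(0, len(pts), per_fd):
        fd = sorted(pts[i:i + per_fd], key=lambda p: p[1])  # 2nd dim
        for j in range(0, len(fd), cap):
            leaves.append(fd[j:j + cap])
    return leaves
```

    Because each leaf covers a compact x-range and y-range, upper tree levels built over these leaves have small, minimally overlapping bounding rectangles, which is what makes range queries fast.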