ISSN 1000-1239 CN 11-1777/TP


    Journal of Computer Research and Development    2018, 55 (9): 2000-2001.  
    Survey on Approximate Storage Techniques
    Wu Yu, Yang Juan, Liu Renping, Ren Jinting, Chen Xianzhang, Shi Liang, Liu Duo
    Journal of Computer Research and Development    2018, 55 (9): 2002-2015.   DOI: 10.7544/issn1000-1239.2018.20180295
    With the rapid development of cloud computing and the Internet of things, storing the explosively growing data has become a challenge for storage systems. In tackling this challenge, approximate storage technology has drawn broad attention for its huge potential in saving storage cost and improving system performance. Approximate storage techniques trade the accuracy of outputs for performance or energy efficiency, taking advantage of the intrinsic tolerance to inaccuracies of many common applications. In this way, applications improve their performance or energy efficiency while still meeting user requirements. Therefore, how to exploit the features of storage devices and fault-tolerant applications to improve data access performance, decrease space overhead, and reduce energy consumption is becoming a key problem for storage systems. In this paper, we first introduce the definition of approximate storage technology and present the techniques for identifying the approximable areas in data. Then, we elaborate on the approximate storage techniques for CPU cache, main memory, and secondary storage, respectively. We discuss the advantages and disadvantages of these techniques along with their application scenarios. At the end of the paper, we summarize the features of approximate storage techniques and discuss future research directions.
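    As a toy illustration of the accuracy-for-space trade-off described above (not a technique from any of the surveyed papers), the sketch below truncates the low-order mantissa bits of a 32-bit float before storing it; a fault-tolerant application reads back a slightly imprecise value, while the dropped bits need not be stored at all, or can be placed in cheaper, less reliable cells:

```python
import struct

def approximate_store(value: float, drop_bits: int = 16) -> bytes:
    """Store a 32-bit float approximately by zeroing its low mantissa bits.
    The zeroed tail carries no information and need not be kept reliably."""
    bits = struct.unpack("<I", struct.pack("<f", value))[0]
    bits &= ~((1 << drop_bits) - 1)  # drop the least significant mantissa bits
    return struct.pack("<I", bits)

def approximate_load(stored: bytes) -> float:
    """Read the value back; the result is close to, but not exactly, the original."""
    return struct.unpack("<f", stored)[0]
```

    Dropping 16 of the 23 mantissa bits bounds the relative error at roughly 2^-7 while freeing half of each stored word, which is the kind of trade-off an error-tolerant application can accept.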
    Survey on High Density Magnetic Recording Technology
    Wang Guohua, David Hung-Chang Du, Wu Fenggang, Liu Shiyong
    Journal of Computer Research and Development    2018, 55 (9): 2016-2028.   DOI: 10.7544/issn1000-1239.2018.20180264
    In the era of big data, the demand for large-capacity disks keeps growing. With minimal changes to the existing disk head and storage media of hard disks, shingled magnetic recording (SMR) technology is the best choice for increasing disk storage capacity. Interlaced magnetic recording (IMR) is a newly developed technology which can achieve higher storage density and better random write performance than SMR. In this paper, we first introduce the shingled track layout of SMR drives and the resulting write amplification problem. We also review the data management methods that mitigate the write amplification problem, studies characterizing SMR performance, and research on SMR-based upper-layer applications. Then we introduce the interlaced track layout of IMR drives and their write amplification problem, and analyze future research topics for IMR drives. Finally, we compare SMR and IMR drives in terms of storage density, random write performance, and other aspects. A variety of SMR-based upper-layer applications, such as file systems, databases, and RAID, prove that SMR drives can effectively replace conventional disks in building large-scale storage systems. The advantages of IMR drives over SMR drives suggest a promising future.
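    The write amplification caused by the shingled track layout can be illustrated with a little arithmetic (a simplified model, not taken from the paper): updating one block in place inside an SMR band disturbs every shingled track after it, so all blocks from that position to the end of the band must be read, modified, and rewritten:

```python
def smr_write_amplification(band_blocks: int, update_offset: int) -> int:
    """Blocks physically rewritten when the block at `update_offset` is
    updated in place: the rest of the band must be rewritten behind it."""
    return band_blocks - update_offset

def average_write_amplification(band_blocks: int) -> float:
    """Expected rewrite cost for a uniformly random in-band update."""
    total = sum(band_blocks - off for off in range(band_blocks))
    return total / band_blocks
```

    For a 256-block band, a uniformly random update rewrites 128.5 blocks on average for a single logical block written, which is why SMR data management schemes go to such lengths to avoid in-place updates.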
    A Tiny-Log Based Persistent Transactional Memory System
    Chen Juan, Hu Qingda, Chen Youmin, Lu Youyou, Shu Jiwu, Yang Xiaohui
    Journal of Computer Research and Development    2018, 55 (9): 2029-2037.   DOI: 10.7544/issn1000-1239.2018.20180294
    In recent years, in order to exploit the performance advantage of persistent memory, researchers have designed various lightweight persistent transactional memory systems. Atomicity and consistency of transactions are mostly ensured using a logging mechanism. However, compared with conventional memory, memory cells of persistent memory tend to have higher write latency and limited endurance. This paper observes two problems in existing persistent transactional memory systems. First, existing systems do not distinguish between different types of write operations in a transaction: whether a write updates existing data in memory or adds data to newly allocated memory, the same logging mechanism is used to ensure consistency. Second, existing systems persist the address and data of every write operation to the log, even though most entries could be compressed to reduce their size. Both problems lead to redundant log operations, resulting in extra write latency and write wear. To solve them, this paper designs and implements TLPTM, a tiny-log persistent transactional memory system based on two optimization techniques: (1) AALO (allocation-aware log optimization) effectively avoids the logging overhead generated by writes to newly allocated memory; (2) CBLO (compression-based log optimization) compresses the log before writing it to the NVM, reducing the overhead of log writing. The experimental results show that compared with Mnemosyne, AALO improves system performance by 15%~24%, and TLPTM with both optimizations reduces the write wear of logging by 70%~81%.
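    The two optimizations can be sketched as follows (all class and method names here are hypothetical, not TLPTM's actual code): AALO skips undo logging for addresses allocated inside the running transaction, since crash recovery simply discards the allocation itself; CBLO compresses each remaining entry before it is persisted:

```python
import zlib

class TinyLog:
    """Toy model of allocation-aware and compression-based log optimizations."""

    def __init__(self):
        self.newly_allocated = set()  # addresses allocated in this transaction
        self.persisted_log = []       # stands in for log entries written to NVM

    def tx_alloc(self, addr: int):
        """Record that `addr` was allocated by the running transaction."""
        self.newly_allocated.add(addr)

    def log_write(self, addr: int, old_data: bytes):
        """Undo-log a write. AALO: skip entries for fresh allocations,
        because rolling back the allocation needs no undo data."""
        if addr in self.newly_allocated:
            return
        entry = addr.to_bytes(8, "little") + old_data
        self.persisted_log.append(zlib.compress(entry))  # CBLO: compress first
```

    The fewer and smaller the persisted entries, the less write latency and wear the log imposes on the NVM, which is the effect the paper's 70%~81% wear reduction quantifies.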
    A Log-Structured Key-Value Store Based on Non-Volatile Memory
    You Litong, Wang Zhenjie, Huang Linpeng
    Journal of Computer Research and Development    2018, 55 (9): 2038-2049.   DOI: 10.7544/issn1000-1239.2018.20180258
    Non-volatile memory (NVM) technologies are promising candidates to change the future of storage. NVM possesses many attractive capabilities, such as byte addressability, low access latency, and persistence, and offers a great opportunity for integrating DRAM and NVM into a unified main storage space. NVM can be accessed through the memory bus with CPU instructions, which makes it possible to build a fast, persistent storage system in non-volatile memory. Existing key-value stores designed for block devices treat NVM as a block device, which conceals the performance NVM can provide. The few existing key-value stores designed for NVM fail to provide consistency when hardware support on power failures (e.g., cache flushing) is unavailable. In this paper, we present a non-volatile memory key-value storage system named TinyKV, which uses the log structure as its core framework. We propose a static, concurrent, cache-friendly hash table built around the characteristics of key-value workloads. TinyKV maintains a separate data log for each worker thread in order to guarantee high concurrency. In addition, we apply the log structure to memory management and design a multi-tier memory allocator to ensure consistency. To reduce write latency, we minimize both the writes to NVM and the number of cache flushing instructions. Our experiments demonstrate that TinyKV outperforms traditional key-value stores in both throughput and scalability.
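    The core log-structured idea — out-of-place appends indexed by a hash table — can be sketched in a few lines (a simplified in-memory model; TinyKV's real implementation targets NVM with per-thread logs and persistence instructions):

```python
class LogKV:
    """Minimal log-structured key-value store: every put appends to a
    log, and a hash table maps each key to its latest log position."""

    def __init__(self):
        self.log = bytearray()  # stands in for the persistent NVM log
        self.index = {}         # key -> (offset, length) of the latest value

    def put(self, key: str, value: bytes):
        off = len(self.log)
        self.log += value       # out-of-place append; old version is untouched
        self.index[key] = (off, len(value))

    def get(self, key: str) -> bytes:
        off, length = self.index[key]
        return bytes(self.log[off:off + length])
```

    Because an update never overwrites the previous version in place, a crash mid-append leaves the old value intact and the index update is the single atomicity point, which is why log structures suit persistent memory so well.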
    Large Pages Supported Hierarchical DRAM/NVM Hybrid Memory Systems
    Chen Ji, Liu Haikun, Wang Xiaoyuan, Zhang Yu, Liao Xiaofei, Jin Hai
    Journal of Computer Research and Development    2018, 55 (9): 2050-2065.   DOI: 10.7544/issn1000-1239.2018.20180269
    Hybrid memory systems composed of non-volatile memory (NVM) and DRAM can offer large memory capacity and DRAM-like performance. However, with increasing memory capacity and application footprints, address translation overhead becomes another system performance bottleneck due to low translation lookaside buffer (TLB) coverage. Large pages can significantly improve TLB coverage; however, they impede fine-grained page migration in hybrid memory systems. In this paper, we propose a hierarchical hybrid memory system that supports both large pages and fine-grained page caching. We manage NVM and DRAM with large pages and small pages, respectively, and use DRAM as a cache to NVM through a direct mapping mechanism. We propose a cache filtering mechanism that fetches only frequently-accessed (hot) data into the DRAM cache; CPUs can still access cold data directly in NVM through a DRAM bypassing mechanism. We dynamically adjust the threshold of hot data classification to adapt to the diverse and dynamic memory access patterns of applications. Experimental results show that our strategy improves application performance by 69.9% and 15.2% compared with an NVM-only system and the state-of-the-art CHOP scheme, respectively. The performance gap is only 8.8% compared with a DRAM-only memory system with large page support.
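    The cache filtering mechanism can be sketched as a simple access-count filter (class and method names are hypothetical; the paper's actual classifier also adjusts the threshold dynamically): a page is promoted into the DRAM cache only after its access count crosses a threshold, and colder accesses bypass DRAM and go to NVM directly:

```python
class HotPageFilter:
    """Toy hot-page filter for a hierarchical DRAM/NVM hybrid memory."""

    def __init__(self, threshold: int = 4):
        self.threshold = threshold
        self.counts = {}        # per-page access counters
        self.dram_cache = set() # pages currently cached in DRAM

    def access(self, page: int) -> str:
        """Return which tier serves this access: 'DRAM' or 'NVM'."""
        if page in self.dram_cache:
            return "DRAM"
        self.counts[page] = self.counts.get(page, 0) + 1
        if self.counts[page] >= self.threshold:
            self.dram_cache.add(page)  # promote: the page proved itself hot
            return "DRAM"
        return "NVM"  # cold access bypasses the DRAM cache
```

    Filtering keeps one-off accesses from polluting the limited direct-mapped DRAM cache, while raising or lowering the threshold trades promotion latency against cache churn.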
    R-Tree Optimization Method Using Internal Parallelism of Flash Memory-Based Solid-State Drives
    Chen Yubiao, Li Jianzhong, Li Yingshu, Li Faming, Gao Hong
    Journal of Computer Research and Development    2018, 55 (9): 2066-2082.   DOI: 10.7544/issn1000-1239.2018.20180254
    Recently, flash memory-based solid state disks have seen substantial improvements in internal design, which bring them rich internal parallelism. The R-tree index is widely applied in spatial data management, but the R-tree optimization methods proposed so far for solid state disks do not take internal parallelism into consideration, and the approaches designed for traditional magnetic disks are not suitable for solid state disks. As a result, none of the previous R-tree optimizations uses the internal parallelism of solid state disks to make query and update operations more efficient. To exploit internal parallelism to speed up the R-tree, we first implement a parallel batch asynchronous I/O submission library. Second, we design optimization algorithms that accelerate R-tree search and update operations by aggregating read or write operations and batch-submitting them through this library. Third, we analyze the minimal expected speedup theoretically, and prove that a typical solid state disk can achieve an expected speedup of at least 1.86 times with 4 channels and 2.93 times with 8 channels. Experiments on two kinds of solid state disks show that our optimized R-tree achieves a stable speedup of about 3 times for query operations compared with the original R-tree, and about 2 times for update operations. The speedup holds for both query-intensive and update-intensive application scenarios.
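    The batching idea can be sketched as follows (a simplified stand-in using a thread pool; the paper's library submits asynchronous I/O requests directly to the SSD): pending R-tree node reads are aggregated and issued concurrently, so the drive's parallel channels can serve them at the same time instead of one after another:

```python
from concurrent.futures import ThreadPoolExecutor

def batch_read(read_fn, node_ids, channels=4):
    """Issue a batch of node reads concurrently. `read_fn` stands in for
    a single-node disk read; `channels` models the SSD's parallel units.
    Results come back in the same order as `node_ids`."""
    with ThreadPoolExecutor(max_workers=channels) as pool:
        return list(pool.map(read_fn, node_ids))
```

    An R-tree search that collects the child nodes to visit at each level and fetches them with one such batched call, rather than node by node, is what lets the 4- or 8-channel parallelism translate into the reported speedups.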
    APMSS: The New Solid Storage System with Asymmetric Interface
    Niu Dejiao, He Qingjian, Cai Tao, Wang Jie, Zhan Yongzhao, Liang Jun
    Journal of Computer Research and Development    2018, 55 (9): 2083-2093.   DOI: 10.7544/issn1000-1239.2018.20180198
    The solid storage system is an important way to alleviate the memory wall problem. However, the existing block-based storage management strategy cannot exploit the byte-addressability of solid storage systems and causes write amplification, which seriously reduces the I/O performance and lifetime of NVM devices. To address this problem, a new solid storage system with an asymmetric interface, named APMSS, is presented; based on an analysis of the two types of access request, the management of read and write requests is separated. Read requests are still managed in block units, avoiding extra I/O stack overhead and keeping performance high through caching. A minimized direct write strategy is designed, and write requests are managed at dynamic granularity to reduce the communication with, and the data written to, the solid storage system. Meanwhile, a multi-granularity mapping algorithm is designed to avoid exchange amplification between memory and the solid storage system and to improve I/O performance. In this way, the write amplification of the solid storage system is avoided and the write performance of NVM devices is improved. A prototype is implemented based on PMBD, an open source solid storage system simulator. Fio and Filebench are used to test the read and write performance of Ext2 on PMBD, Ext4 on PMBD, and Ext4 on APMSS. The results show that Ext4 on APMSS improves sequential write performance by 9.6%~29.8% compared with Ext2 and Ext4 on PMBD.
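    The asymmetric interface can be sketched as follows (hypothetical names; APMSS itself works inside the kernel I/O stack): reads are served in fixed block units, while writes persist only the bytes that actually changed, which is what eliminates block-granularity write amplification on byte-addressable media:

```python
class AsymmetricStore:
    """Toy model of an asymmetric interface: block-granularity reads,
    byte-granularity (minimized direct) writes."""
    BLOCK = 4096

    def __init__(self, size: int):
        self.media = bytearray(size)  # stands in for the NVM device
        self.bytes_written = 0        # physical write traffic to the media

    def read_block(self, block_no: int) -> bytes:
        """Reads stay block-sized, so the existing cache path still works."""
        start = block_no * self.BLOCK
        return bytes(self.media[start:start + self.BLOCK])

    def write_bytes(self, offset: int, data: bytes):
        """Writes persist only the dirty bytes, not the whole enclosing block."""
        self.media[offset:offset + len(data)] = data
        self.bytes_written += len(data)
```

    A 16-byte update thus costs 16 bytes of write traffic instead of a 4 KiB block, which is the write amplification saving the sequential write results reflect.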