ISSN 1000-1239 CN 11-1777/TP

Journal of Computer Research and Development ›› 2021, Vol. 58 ›› Issue (5): 1118-1128.doi: 10.7544/issn1000-1239.2021.20190871


Content Type Based Jumping Probability Caching Mechanism in NDN

Guo Jiang1,2, Wang Miao1, Zhang Yujun1,2   

  1 (Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190); 2 (University of Chinese Academy of Sciences, Beijing 100049)
  • Online: 2021-05-01
  • Supported by: 
    This work was supported by the National Key Research and Development Program of China (2018YFB1800403, 2016YFE0121500), the Research Program of Network Computing Innovation Research Institute (E061010003), the National Natural Science Foundation of China (61902382, 61972381, 61672500), and the Strategic Priority Research Program of Chinese Academy of Sciences (XDC02030500).

Abstract: In-network caching, which gives every network node a universal caching function, has become a key technology in NDN (named data networking) for achieving efficient access to information and effectively reducing Internet backbone traffic. When a user requests content, any network node (e.g., a router) that caches the content can serve it directly upon receiving the request, improving the response efficiency of user requests. However, NDN adopts a ubiquitous caching policy that caches content repeatedly and indiscriminately at every node on the transmission path between the content provider and the user, resulting in data redundancy and indiscriminate content caching. To this end, we propose a content-type-based jumping probability caching mechanism in NDN. According to content features (e.g., delay requirements and bandwidth occupation), we first divide content into four types: dynamic, realtime, big data, and small data. We then build a hop-based jumping cache policy that stores data on transmission nodes discontinuously, reducing redundant caching in space. Based on content type, we provide differentiated caching services (no caching, network-edge-based probability caching, network-sub-edge-based probability caching, and network-core-based probability caching) to further reduce redundancy and to improve the user's efficiency in retrieving content. The experimental results confirm that the proposed caching mechanism reduces both data redundancy and content retrieval latency.
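The combination of jumping (discontinuous, hop-skipping) caching and content-type-dependent placement probabilities described above can be sketched as a per-node caching decision. This is an illustrative reconstruction only: the function name, the `skip_interval` parameter, and the specific probability shapes for each content type are assumptions, not the paper's actual formulas.

```python
import random

# Content types from the paper's classification
DYNAMIC, REALTIME, BIG_DATA, SMALL_DATA = "dynamic", "realtime", "big_data", "small_data"

def should_cache(content_type, hops_from_edge, path_length,
                 skip_interval=2, rng=random.random):
    """Decide whether a node on the delivery path caches passing content.

    Nodes cache only at every `skip_interval`-th hop (the "jumping"
    policy, reducing spatial redundancy), and the caching probability
    depends on the content type and the node's position on the path
    (0 = network edge, 1 = network core). Thresholds are illustrative.
    """
    if content_type == DYNAMIC:
        return False                      # dynamic content: never cached
    if hops_from_edge % skip_interval != 0:
        return False                      # skip intermediate hops
    position = hops_from_edge / max(path_length, 1)
    if content_type == REALTIME:
        p = 1.0 - position                # favor edge nodes for low latency
    elif content_type == SMALL_DATA:
        p = 1.0 - abs(position - 0.33)    # favor sub-edge nodes
    else:  # BIG_DATA
        p = position                      # favor core nodes near the provider
    return rng() < p
```

In this sketch, the jumping step filters out consecutive on-path nodes first, and only the surviving nodes apply a type-specific probability, which matches the paper's two-stage idea of reducing redundancy in space and then differentiating service by content type.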

Key words: named data networking (NDN), data redundancy, caching policy, content type, differential caching service
