    Citation: Guo Jiang, Wang Miao, Zhang Yujun. Content Type Based Jumping Probability Caching Mechanism in NDN[J]. Journal of Computer Research and Development, 2021, 58(5): 1118-1128. DOI: 10.7544/issn1000-1239.2021.20190871


    Content Type Based Jumping Probability Caching Mechanism in NDN


       

      Abstract: In-network caching, which gives every network node a universal caching function, is a key technology in NDN (named data networking) for achieving efficient access to information and effectively reducing Internet backbone traffic. When a user requests content, any network node (e.g., a router) that has cached the content can return it directly upon receiving the request, improving the response efficiency of user requests. However, NDN adopts a ubiquitous caching policy, under which every node on the transmission path between the content provider and the user caches the content repeatedly and indiscriminately, resulting in data redundancy and undifferentiated treatment of cached content. To this end, we propose a content-type-based jumping (alternate-hop) probability caching mechanism for NDN. We first divide content into four types according to its traffic characteristics (e.g., delay requirements and bandwidth occupation): dynamic, realtime, big data, and small data. We then construct an alternate-hop pending caching policy that stores data only on non-consecutive nodes along the transmission path, reducing redundant caching spatially. Finally, based on content type, we provide differentiated caching services, namely no caching, probabilistic caching at the network edge, probabilistic caching at the network sub-edge, and probabilistic caching in the network core, which further reduces redundant data and improves the efficiency of content retrieval. The experimental results confirm that the proposed caching mechanism reduces data redundancy and lowers the latency of content retrieval.
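
      To make the decision flow concrete, the Python sketch below illustrates one possible reading of the mechanism described in the abstract: content is classified into four types, a node skips caching when its upstream neighbor has just cached the same content (one interpretation of the alternate-hop pending policy), and the remaining nodes cache probabilistically only in the network region matched to the content type. The type-to-region mapping, the region boundaries in region_of, and the probabilities in REGION_PROBABILITY are illustrative assumptions, not values or rules taken from the paper.

      # A minimal, illustrative sketch of content-type-based, alternate-hop
      # probabilistic caching. All type-to-region mappings, probabilities,
      # and region boundaries are hypothetical placeholders, not values
      # from the paper.

      from enum import Enum
      import random


      class ContentType(Enum):
          DYNAMIC = "dynamic"      # e.g., personalized responses: not cached
          REALTIME = "realtime"    # delay-sensitive: cached near the user (edge)
          SMALL_DATA = "small"     # assumed to be cached in the sub-edge region
          BIG_DATA = "big"         # bandwidth-heavy: assumed to be cached toward the core


      # Hypothetical caching probabilities per network region (assumed values).
      REGION_PROBABILITY = {"edge": 0.8, "sub_edge": 0.5, "core": 0.3}


      def region_of(hop_index: int, path_length: int) -> str:
          """Coarsely map a node's position on the delivery path to a region.
          hop_index counts from the user side (0 = closest to the user)."""
          ratio = hop_index / max(path_length - 1, 1)
          if ratio < 1 / 3:
              return "edge"
          if ratio < 2 / 3:
              return "sub_edge"
          return "core"


      def should_cache(content_type: ContentType,
                       hop_index: int,
                       path_length: int,
                       prev_hop_cached: bool) -> bool:
          """Decide whether the current node caches the passing Data packet.

          Combines two ideas from the abstract:
          1. Alternate-hop caching: if the previous hop already cached this
             content, skip this hop so copies never sit on consecutive nodes.
          2. Differentiated service: each content type is cached
             probabilistically, and only in the region suited to it.
          """
          if prev_hop_cached:                # enforce non-consecutive caching
              return False
          if content_type is ContentType.DYNAMIC:
              return False                   # dynamic content: no caching at all

          target_region = {
              ContentType.REALTIME: "edge",
              ContentType.SMALL_DATA: "sub_edge",
              ContentType.BIG_DATA: "core",
          }[content_type]

          if region_of(hop_index, path_length) != target_region:
              return False
          return random.random() < REGION_PROBABILITY[target_region]


      if __name__ == "__main__":
          # Walk a Data packet along a 6-hop path and print each node's decision.
          path_length = 6
          cached_prev = False
          for hop in range(path_length):
              decision = should_cache(ContentType.REALTIME, hop, path_length, cached_prev)
              print(f"hop {hop}: cache={decision}")
              cached_prev = decision

      In a real deployment this decision would be made per Data packet inside each router's forwarding pipeline, with the content type and hop information carried in or derived from the packet; here those inputs are simply passed as function arguments to keep the sketch self-contained.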

       

