
A Cache Replacement Algorithm for Industrial Edge Computing Applications

• Abstract: Industrial applications impose strict determinism requirements on data transmission, so a sound caching strategy is needed to guarantee the real-time service performance of industrial edge networks. This paper first formulates the edge caching problem model for industrial edge computing scenarios. It then analyzes the dynamic characteristics of user requests in industrial applications and, drawing on the feature attributes of industrial user requests, presents a method for predicting changes in the popularity of requested content. On this basis, a cache replacement algorithm based on attribute-feature popularity prediction (combining periodic popularity prediction and size caching strategy, PPPS) is proposed, which determines the value of cached content from the predicted popularity of the dominant attribute features within the most recent periodic window together with the size parameter. Experimental results show that, compared with the five classic algorithms MPC (most-popular content), GDS (greedy dual size), LRU (least recently used), LFU (least frequently used), and FIFO (first in first out), the proposed PPPS algorithm achieves the best performance on both cache hit rate and average delay under different user request models, content size distributions, and content type parameters, effectively improving the edge cache hit rate, raising cache utilization efficiency, and reducing the delay of user-requested content.


Abstract: Industrial applications usually have strict requirements on data transmission determinism. It is therefore essential for industrial edge computing applications to deploy a proper caching strategy at edge nodes in order to guarantee real-time service performance. The cache optimization problem is formulated with the specific requirements of industrial applications in mind. Content requests are modeled with the shot noise model (SNM) to capture the dynamic characteristics of popularity. A popularity prediction scheme is then proposed by defining a feature similarity function over the requested content set in the latest periodic time window. Based on this, a new cache replacement algorithm, combining periodic popularity prediction and size caching strategy (PPPS), is proposed. The value of each cached content item is determined jointly by the popularity, size, and time-update parameters; when replacement occurs, the content with the minimum value is deleted first. Experimental results show that the proposed PPPS algorithm outperforms all five baseline algorithms: most popular content (MPC), greedy dual size (GDS), least recently used (LRU), least frequently used (LFU), and first in first out (FIFO). PPPS achieves the best hit rate and average delay in all test cases across different user request models, content size distributions, and content types.
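The abstract describes a value-based replacement policy: each cached item is assigned a value from its predicted popularity, size, and a time-update (aging) term, and the minimum-value item is evicted first. The exact PPPS value function is not given in the abstract; the sketch below is a minimal illustration assuming a GreedyDual-Size-style form, value = L + popularity / size, where L is an aging term raised to the value of the last evicted item. The class and method names are illustrative, not from the paper.

```python
from dataclasses import dataclass


@dataclass
class CacheItem:
    key: str
    size: int          # content size in bytes
    popularity: float  # predicted popularity for the current window


class ValueBasedCache:
    """Minimum-value eviction in the spirit of PPPS (sketch).

    Assumption: value = L + popularity / size, following GreedyDual-Size;
    the paper's actual PPPS formula may differ.
    """

    def __init__(self, capacity: int):
        self.capacity = capacity          # total bytes available
        self.used = 0
        self.items: dict[str, CacheItem] = {}
        self.values: dict[str, float] = {}
        self.L = 0.0                      # aging term: value of last victim

    def _value(self, item: CacheItem) -> float:
        # Larger predicted popularity and smaller size => higher value.
        return self.L + item.popularity / item.size

    def admit(self, item: CacheItem) -> None:
        if item.key in self.items:
            # Re-request: refresh the item's value with the aged baseline.
            self.values[item.key] = self._value(item)
            return
        # Evict minimum-value items until the new content fits.
        while self.used + item.size > self.capacity and self.items:
            victim = min(self.values, key=self.values.get)
            self.L = self.values[victim]  # implicitly ages remaining items
            self.used -= self.items[victim].size
            del self.items[victim]
            del self.values[victim]
        if self.used + item.size <= self.capacity:
            self.items[item.key] = item
            self.used += item.size
            self.values[item.key] = self._value(item)
```

For example, with a 10-byte cache, admitting a 6-byte item with low predicted popularity and then a 6-byte item with high predicted popularity evicts the first item to make room for the second.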

