
    A Cache Locking and Direct Cache Access Based Network Processing Optimization Method

    • Abstract: By analyzing the memory access behavior, locality characteristics, and system interactions of network data processing programs in computer systems, this paper points out that conventional processor network subsystem designs have serious shortcomings in high-speed network environments, and further proposes an optimization scheme based on hardware/software co-design. The scheme comprises an improved direct cache access (DCA) technique, a cache locking policy for critical programs, and the corresponding system interconnect architecture and coherence protocol. Experiments show that, compared with the conventional design, the proposed scheme increases TCP transmission bandwidth by about 48%, reduces the UDP packet loss rate under extreme load by 40%, and lowers transmission latency by more than 10%. When the network benchmark runs concurrently with SPEC2000 benchmarks, network data bandwidth improves by about 44%. The basic principles and corresponding strategies for combining this optimization scheme with other network optimization techniques are also discussed.


      Abstract: As network speeds continue to grow, new challenges in network processing are emerging. Although many innovative solutions have been proposed in recent years, our analysis of memory access traces and program locality in network processing shows that current processor network subsystem designs still have defects. Moreover, we find that the interaction and context switches between network processing and local programs are a bottleneck for network performance that has not received enough attention before. Motivated by these studies, we propose a hardware/software co-design solution for network optimization, which includes an improved direct cache access scheme, cache locking for system software, and the related interconnect architecture and coherence protocol. Experiments show that on the proposed system the peak TCP bandwidth is increased by about 48%, the UDP packet loss rate is decreased by 40% under heavy load, and the network latency is decreased by more than 10%. In particular, network bandwidth is improved by about 44% when the network processing benchmark runs in parallel with SPEC2000 programs. We also discuss how the proposed solution can be combined with other mainstream network optimization technologies, as well as the basic rules for combining multiple network optimization techniques.
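      The mechanisms summarized above (an improved DCA path that steers NIC DMA data into the cache, plus cache locking of critical system software) are hardware-level and cannot be reproduced in portable user code. As a rough software analogue only, the hypothetical C sketch below warms the cache lines of a freshly received packet buffer before protocol processing touches them; drain_rx_ring, process_packet, and the ring layout are placeholders assumed for illustration, not interfaces from the paper.

/*
 * Illustrative sketch only: mimics in software the intent of hardware DCA,
 * i.e. making packet data cache-resident before the protocol hot path reads it.
 * Uses the GCC/Clang builtin __builtin_prefetch; all names are hypothetical.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define CACHE_LINE 64

struct rx_desc {
    uint8_t *buf;   /* DMA'd packet payload */
    size_t   len;
};

/* Stand-in for the protocol stack's per-packet work (a toy checksum here). */
static uint32_t process_packet(const uint8_t *buf, size_t len)
{
    uint32_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum += buf[i];
    return sum;
}

static uint32_t drain_rx_ring(struct rx_desc *ring, size_t count)
{
    uint32_t total = 0;
    for (size_t i = 0; i < count; i++) {
        const uint8_t *buf = ring[i].buf;
        size_t len = ring[i].len;

        /* Warm each cache line before the protocol code reads it,
         * approximating what hardware DCA achieves by writing NIC DMA
         * data directly into the last-level cache. */
        for (size_t off = 0; off < len; off += CACHE_LINE)
            __builtin_prefetch(buf + off, 0 /* read */, 3 /* keep in cache */);

        total += process_packet(buf, len);
    }
    return total;
}

int main(void)
{
    uint8_t pkt[1500] = {0};
    struct rx_desc ring[1] = { { pkt, sizeof(pkt) } };
    printf("checksum: %u\n", drain_rx_ring(ring, 1));
    return 0;
}

      Note that this only hides memory latency for data already in DRAM; the paper's approach additionally avoids the DRAM round trip and keeps the critical network-processing code and structures from being evicted via cache locking.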
