
Survey on Traffic Management in Lossless Networks

Zhang Yiran, Wang Shangguang, Ren Fengyuan

Zhang Yiran, Wang Shangguang, Ren Fengyuan. Survey on Traffic Management in Lossless Networks[J]. Journal of Computer Research and Development, 2025, 62(5): 1290-1306. DOI: 10.7544/issn1000-1239.202440096. CSTR: 32373.14.issn1000-1239.202440096


  • CLC number: TP393

Survey on Traffic Management in Lossless Networks

Funds: This work was supported by the National Natural Science Foundation of China (62302055, 62132007, 62221003) and the Fundamental Research Funds for the Central Universities.
More Information
    Author Bio:

    Zhang Yiran: born in 1995. PhD, associate professor, PhD supervisor. Member of CCF. Her main research interests include network traffic management and control, data center networks, and satellite networks

    Wang Shangguang: born in 1982. PhD, professor, PhD supervisor. Distinguished member of CCF. His main research interests include service computing, mobile edge computing, cloud computing, and satellite computing

    Ren Fengyuan: born in 1970. PhD, professor, PhD supervisor. Senior member of CCF. His main research interests include network traffic management and control, data center networks, and IoT/industrial Internet

  • Abstract:

    Lossless networks are increasingly widely used in high performance computing (HPC), data centers, and other fields. They use link layer flow control to ensure that switches never drop packets due to buffer overflow, thus avoiding loss and retransmission and greatly improving the latency and throughput performance of applications. However, the negative effects introduced by link layer flow control (congestion spreading, deadlock, etc.) challenge the large-scale deployment of lossless networks, so traffic management techniques that improve the scalability of lossless networks have received great attention. We systematically review the research progress of traffic management in the typical lossless networks used in HPC and data centers: InfiniBand and lossless Ethernet. First, we introduce the negative impact of link layer flow control and the goals of traffic management, and summarize the traditional traffic management architectures of lossless networks. Then, organized by technical route (congestion control, congestion isolation, load balancing, etc.) and by driving entity (sender-driven, receiver-driven, etc.), we classify and elaborate on the latest research progress in InfiniBand and lossless Ethernet traffic management, and analyze the corresponding advantages and limitations. Finally, we point out issues that need to be explored in further research, including a unified traffic management architecture for lossless networks, joint intra-host and network traffic management, and traffic management for domain-specific applications.

  • With growing energy demand and the depletion of non-renewable resources, fusion energy, by virtue of its cleanliness and safety, has attracted increasing attention as a solution to long-term energy needs. The International Thermonuclear Experimental Reactor (ITER), currently under construction, is an important milestone toward the peaceful use of fusion energy. Magnetic confinement fusion is one of the most important approaches to producing thermonuclear fusion energy [1-2]. Achieving and sustaining a plasma fusion process in a reactor poses enormous scientific and technical challenges. Research on plasma stability helps to understand, predict, control, and mitigate the threat of plasma disruptions; it is an important safeguard for optimizing burning-plasma operation modes and improving plasma confinement and transport, and an important basis for designing and building advanced fusion devices.

    Numerical simulation is one of the key methods in plasma stability research: compared with theory it can analyze complex physical processes, and compared with experiments it is more economical and flexible. In numerical studies of plasma physics, gyrokinetic theory is often used to study kinetic instabilities and turbulent transport at the Larmor-radius spatial scale [3-5]. In gyrokinetic theory, gyro-averaging reduces the dimensionality of the equation describing the distribution function from six to five, making it particularly suitable for studying plasma instabilities and turbulent transport over longer time scales.

    Owing to its good scalability, conservation properties, and accurate description of wave-particle interactions, the particle-in-cell (PIC) method is widely applicable and promising among gyrokinetic simulation algorithms [6-8]. Building on the strengths of PIC, researchers have been moving from physics problems at specific spatio-temporal scales toward nonlinear, multi-scale coupled simulations. This poses unprecedented challenges to the program architecture, computational performance, algorithm optimization, and parallel efficiency involved in high-performance numerical simulation of magnetic confinement fusion. Many researchers have tried to meet the growing computational demands of gyrokinetic PIC codes with heterogeneous platforms, making numerous efforts in porting, optimization, and numerical algorithms.

    The GTC code was one of the early beneficiaries of heterogeneous parallel computing: it demonstrated a 2-3x speedup on Tianhe-1 with CUDA [9], a 2-3x speedup on Titan and a 3-4x speedup on Summit with OpenACC [10], and a 2-5x speedup on Tianhe-2 with Intel Xeon Phi accelerators [11]. The ORB5 code, using OpenACC, achieved 4x and 5x speedups on Summit with Tesla P100 and Tesla V100 GPUs, respectively [12].

    The studies above usually emphasize the electrons' contribution to the model; electron simulation, thanks to its regular memory accesses, can achieve high computational speedups. Fusion-product Alpha particles, however, resemble kinetic ions: their gyro-radii are large, so gyro-averaging must be performed along the gyro-orbit, which introduces a large volume of irregular grid-data accesses and places high demands on memory performance. The literature shows that with only kinetic ions and adiabatic electrons, heterogeneous porting actually degraded overall performance [13]. Since the confinement and transport of fusion-product Alpha particles is key to the success of magnetic confinement fusion, this paper focuses on the heterogeneous porting and performance optimization of a gyrokinetic code representative of Alpha-particle simulation.

    The porting, optimization, and evaluation in this paper were carried out on the new-generation Tianhe supercomputer, which uses the heterogeneous processor MT-3000 [14]. Each MT-3000 contains 16 CPU cores, 4 acceleration clusters, 96 control cores, and 1 536 acceleration cores, with a theoretical arithmetic intensity of up to 145 FLOP/B. Each acceleration core works in very long instruction word (VLIW) mode; every 16 acceleration cores and 1 control core are organized into an acceleration array driven by SIMD instructions. The MT-3000 has a hybrid memory hierarchy: per-cluster GSM (6 MB), HBSM (48 MB), and DDR (32 GB) memories, plus per-array AM (768 KB) and SM (64 KB) on-chip memories that feed the acceleration cores. Its architecture is shown in Fig. 1, and the parameters are summarized in the constants sketch after Fig. 1.

    Figure 1. The architecture diagram of MT-3000
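    For quick reference, the hierarchy can be condensed into a few constants. This is only a sketch with names of our own choosing (not a vendor header), assuming the per-cluster/per-array scoping stated above, where 96 control cores over 4 clusters gives 24 arrays per cluster:

    /* Illustrative constants summarizing the MT-3000 hierarchy described
       above; the names are ours, not from any vendor SDK. */
    #define MT3000_CLUSTERS           4        /* acceleration clusters          */
    #define MT3000_ARRAYS_PER_CLUSTER 24       /* 96 control cores / 4 clusters  */
    #define MT3000_CORES_PER_ARRAY    16       /* VLIW accel cores, SIMD-driven  */

    #define MT3000_GSM_BYTES  (6ull   << 20)   /* per-cluster GSM   (6 MB)   */
    #define MT3000_HBSM_BYTES (48ull  << 20)   /* per-cluster HBSM  (48 MB)  */
    #define MT3000_DDR_BYTES  (32ull  << 30)   /* per-cluster DDR   (32 GB)  */
    #define MT3000_AM_BYTES   (768ull << 10)   /* per-array AM      (768 KB) */
    #define MT3000_SM_BYTES   (64ull  << 10)   /* per-array SM      (64 KB)  */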

    Porting a program to the heterogeneous processor MT-3000 raises two challenges: first, how to use the complex memory hierarchy to move data to the acceleration arrays efficiently; second, how to fully exploit the processor's high arithmetic intensity. Meeting both requires breaking with conventional CPU-oriented program structures and placing more emphasis on compute performance, so as to improve overall performance.

    VirtEx is a gyrokinetic simulation code based on the PIC algorithm and has been successfully used to analyze linear resistive tearing instabilities [15]. Following the PIC method, charged particles are described in a Lagrangian way, as sampling points of the distribution function in continuous phase space, while field information is described in an Eulerian way, with a structured mesh for the equilibrium field and an unstructured mesh for the perturbed field [16]. VirtEx parallelizes spatially by dividing the simulation domain into subdomains along the toroidal direction, each managed by a group of processes. Every process in a group holds a copy of the subdomain's field data, and the particles within the subdomain are partitioned among the processes by rank.

    The main structure of VirtEx is shown in Fig. 2. Its main loop uses a 2nd-order Runge-Kutta algorithm. In each iteration, the function Push updates particle positions in phase space and can be further divided into PG (push gather), which gyro-averages field information onto particles, and PI (push interpolation), which updates particle positions; the function Locate computes the interpolation weights between particle positions and the perturbed-field mesh; and the function Charge computes the moments of the distribution function on the unstructured perturbed mesh. The remaining hotspots are mainly the perturbed-field update on the unstructured mesh and MPI particle communication. The three functions Push, Locate, and Charge are the code's hotspots, together accounting for more than 85% of the main-loop time. A sketch of this loop structure follows Fig. 2.

    Figure 2. Main structure of the VirtEx code and hotspot distribution
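    Before detailing the algorithms, a minimal C sketch of the loop structure just described may help. All names and signatures here are hypothetical stand-ins for the hotspot functions named above; the field solve and MPI exchange are elided details of the surrounding code:

    /* Hypothetical prototypes for the hotspot functions named above. */
    void locate(void);             /* weights wzpart/wppart/wtpart, indices jtpart */
    void push_gather(void);        /* PG: gyro-average gradphi into wpgc           */
    void push_interp(void);        /* PI: advance zpart using wpgc                 */
    void charge(void);             /* deposit moments onto density                 */
    void update_field(void);       /* perturbed-field update from density          */
    void exchange_particles(void); /* MPI particle exchange between subdomains     */

    /* Sketch of the VirtEx main loop: a 2nd-order Runge-Kutta step built
       from the Locate, PG, PI, and Charge hotspots. */
    void virtex_main_loop(int nsteps)
    {
        for (int step = 0; step < nsteps; step++) {
            for (int rk = 0; rk < 2; rk++) {   /* two Runge-Kutta stages */
                locate();
                push_gather();
                push_interp();
                charge();
                update_field();
            }
            exchange_particles();
        }
    }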

    The algorithms involved in the three hotspot functions are as follows:

    Algorithm 1. Gyro-averaging algorithm of function PushGather.

    Input: toroidal grid weights wzpart, radial grid weights wppart, poloidal grid weights wtpart, grid indices jtpart, perturbed electric field gradphi;

    Output: gyro-averaged perturbed field wpgc.

    for (mp=0; mp<mpmax; mp++) /* particle loop */

    for (igyro=0; igyro<ngyro; igyro++) /* gyro-average loop */

    read the particle's grid weights and indices;

    read gradphi by index;

    compute temporary variable e;

    end for

    accumulate into wpgc, for use by function PI;

    end for

    Algorithm 2. Particle position update algorithm of function PushInterpolation.

    Input: phase-space coordinates zpart, previous phase-space coordinates zpart0, gyro-averaged perturbed field wpgc;

    Output: phase-space coordinates zpart.

    for (mp=0; mp<mpmax; mp++) /* particle loop */

    read particle data zpart, wpgc;

    interpolate grid data, electric field, magnetic field, etc.;

    compute the field forces acting on the particle;

    push the particle, updating its velocity and position;

    end for

    Algorithm 3. Particle-to-field interpolation weight algorithm of function Locate.

    Input: phase-space coordinates zpart;

    Output: toroidal grid weights wzpart, radial grid weights wppart, poloidal grid weights wtpart, grid indices jtpart.

    for (mp=0; mp<mpmax; mp++) /* particle loop */

    for (igyro=0; igyro<ngyro; igyro++) /* gyro-average loop */

    read particle data zpart;

    read grid data;

    compute the particle's interpolation weights;

    end for

    end for

    Algorithm 4. Distribution-function moment algorithm of function Charge on the unstructured perturbed mesh.

    Input: toroidal grid weights wzpart, radial grid weights wppart, poloidal grid weights wtpart, grid indices jtpart;

    Output: current density density.

    for (mp=0; mp<mpmax; mp++) /* particle loop */

    interpolate grid data, electric field, magnetic field;

    for (igyro=0; igyro<ngyro; igyro++) /* gyro-average loop */

    read the particle's interpolation weights;

    compute the particle's perturbation contribution to neighboring grid points;

    reduce the particle data onto the grid into density;

    end for

    end for
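    As a concrete illustration, the following is a plain C sketch of Algorithm 1 under assumed array layouts (weights and indices stored per particle per gyro-point). It is not the production kernel, whose vectorized form is discussed below:

    /* C sketch of Algorithm 1 (PushGather): gather the perturbed field
       gradphi at the ngyro points of each particle's gyro-orbit, using the
       weights and irregular indices produced by Locate, and accumulate the
       gyro-average into wpgc. Array layouts are assumptions. */
    void push_gather(int mpmax, int ngyro,
                     const double *wzpart, const double *wppart,
                     const double *wtpart, const int *jtpart,
                     const double *gradphi, double *wpgc)
    {
        for (int mp = 0; mp < mpmax; mp++) {              /* particle loop */
            double e = 0.0;
            for (int igyro = 0; igyro < ngyro; igyro++) { /* gyro-average loop */
                int k = mp * ngyro + igyro;
                int j = jtpart[k];                        /* irregular grid index */
                e += wzpart[k] * wppart[k] * wtpart[k] * gradphi[j];
            }
            wpgc[mp] += e / ngyro;                        /* accumulate for PI */
        }
    }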

    The outer loops of all four algorithms in the three hotspot functions iterate over particles, and particles are largely independent of one another, so the porting to the heterogeneous processor MT-3000 centered on rewriting the particle loops with the vector instruction set.

    Meanwhile, to better match the memory-access characteristics of the vector instruction set, the data structures were rewritten: particle data use an SoA (struct of arrays) layout, and grid data use an AoS (array of structs) layout. Particle data are numerous and mutually independent, so an SoA layout better exploits vector arithmetic; grid data are far fewer than particles but account for enormous memory traffic, so an AoS layout fully exploits memory locality. This data-structure rewrite laid important groundwork for the subsequent performance optimization, as the sketch below illustrates.
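    A minimal sketch of the two layouts, with field names chosen for illustration only:

    /* Particle data as SoA: each attribute is a contiguous array, so one
       attribute of many independent particles streams into wide vector
       registers without shuffles. */
    typedef struct {
        double *r, *theta, *zeta;   /* positions        */
        double *vpar, *mu;          /* velocity moments */
    } Particles;

    /* Grid data as AoS: all fields of one grid point sit together, so a
       single cache/DMA block brings in everything needed for that point. */
    typedef struct {
        double phi;                 /* perturbed potential */
        double gradphi[3];          /* perturbed field     */
    } GridPoint;

    Particles parts;   /* parts.r[mp] is contiguous across particles      */
    GridPoint *grid;   /* grid[igrid] keeps one point's fields together   */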

    Based on the above analysis of the hotspot functions, the gyrokinetic PIC algorithm involves heavy memory traffic between particle and grid data; in particular, accesses to the perturbed-field grid involve irregular reads and atomic writes, both of which pose hard challenges for memory performance. The memory traffic and computation statistics of the hotspot functions are listed in Table 1.

    Table 1. Initial arithmetic intensity statistics of the VirtEx hotspot functions

    Function | Floating-point ops /FLO | Memory traffic /B | Arithmetic intensity /(FLOP/B)
    PG | 269mp | 232mp | 1.15
    PI | 462mp | 224mp | 1.98
    Locate | 238mp | 200mp | 1.17
    Charge | 158mp | 200mp | 0.75
    Note: mp is the number of particles; the coefficients are per-particle computation and memory-traffic counts for each hotspot function.

    Therefore, the key research question, and the focus of this paper, is how to make these memory-bound modules, whose arithmetic intensity is only 1-2 FLOP/B, exploit the compute performance of a high-arithmetic-intensity heterogeneous device through performance optimization. This chapter presents three optimizations: on-the-fly computation of intermediate variables, a software cache built on the SM on-chip memory, and hotspot-function merging.

    In conventional CPU-oriented program design, developers tend to identify shared data, precompute it, stage it in memory, and fetch it by index through the multi-level hardware caches, trading extra memory traffic for less computation. This approach is ill-suited to high-arithmetic-intensity heterogeneous devices built around wide vector units: the extra memory traffic limits achievable compute throughput, and the index-based irregular access pattern does not vectorize well. Considering the characteristics of the new architecture, this paper therefore adopts the opposite strategy to improve performance.

    In VirtEx, intermediate variables such as the magnetic field, temperature, density, and safety factor can be switched from precomputation to on-the-fly computation inside the hotspot functions, evaluated per particle as needed. This effectively reduces both regular and irregular memory accesses in the hotspot functions, reduces pipeline stalls, and avoids the vector-reshuffling operations caused by indexed accesses.

    Analysis of the hotspot functions shows that the optimizable intermediate variables fall into two main classes. The first is exemplified by mtheta, the number of poloidal grid points on each radial grid, which can be computed on the fly inside the hotspot functions:

    $mtheta_i = 2\,\mathrm{Floor}\!\left(\dfrac{\pi r_i}{\Delta l} + 0.5\right).$ (1)

    The other class of intermediate variables has no direct closed-form expression, for example the position index igrid of a particle in the unstructured perturbed-field mesh, which takes the form

    $igrid_i = 1 + \sum_{j=0}^{i-1} mtheta_j,$ (2)

    $mtheta_i = \dfrac{2\pi r}{\Delta l} + \delta_i = a i + b + \delta_i.$ (3)

    As Eq. (2) shows, igrid is computed as a running sum over mtheta, and because of the discontinuity introduced by the Floor function, a closed-form expression for igrid cannot be obtained by simple transformation and integration.

    Since the number of poloidal grid points is much greater than 1 and the radial grid is uniform in the r coordinate, when the residual $\delta_i \ll 1$, igrid can likewise be expressed as

    $igrid_i = a i^2 + b i + c + r_i,$ (4)

    where the residual $r_i$ is much smaller than the quadratic part. To construct an analytic expression for igrid, the quadratic part is approximated by a polynomial fit, while a periodic function $f$ reduces the residual below 0.5, as shown in Fig. 3. The analytic expression for igrid then takes the form

    $igrid_i = \mathrm{Round}\!\left[a i^2 + b i + c + f(i)\right].$ (5)

    Figure 3. Comparison of the real value and the numerical fit of the position index variable igrid

    Thanks to the analytic expression and on-the-fly evaluation of the equilibrium profile data, the random memory accesses in PushInterpolation and Locate are reduced. Only the hotspot function PushGather retains random memory accesses, for gyro-averaging the perturbed field; the corresponding optimization is described in the following sections.
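    A sketch of the on-the-fly evaluation of Eq. (5) is given below. The coefficients a, b, c and the periodic correction f are obtained by fitting, as described above; the values and the form of f shown here are placeholders, not the fitted results:

    #include <math.h>

    /* Placeholder fitted coefficients of the quadratic part in Eq. (4). */
    static double a_fit, b_fit, c_fit;

    /* Periodic correction that keeps the residual below 0.5; the text says
       f is periodic but gives no closed form, so this is a placeholder. */
    static double f_fit(int i) { (void)i; return 0.0; }

    /* On-the-fly evaluation of Eq. (5): rounding recovers the exact index
       because the residual is bounded below 0.5. */
    static inline int igrid_of(int i)
    {
        return (int)lround(a_fit * (double)i * i + b_fit * i + c_fit + f_fit(i));
    }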

    On general-purpose CPU architectures, the built-in cache mechanism frees developers from managing fast memory explicitly; the memory system is largely treated as automatic. On the MT-3000, for performance reasons, data exchange between main memory and SM/AM, and between SM/AM and the vector registers, must be controlled manually by the programmer. For random memory accesses, operating through the DMA interface requires transferring both indices and data, wasting memory bandwidth. To address this, this paper designs a software cache on the per-array SM on-chip memory, fully exploiting the memory hierarchy and memory locality.

    The VirtEx hotspot functions contain two kinds of irregular access: irregular reads of the perturbed-field grid data in Push, and atomic writes that update the perturbed-field grid data in Charge.

    Charge reduces particle contributions onto the grid with accumulate (+=) operations. Because particles are spread over multiple processes within a subdomain and the number of grid points is far smaller than the number of particles, atomic operations are involved. Read/write locks are the primary means of resolving data races on the MT-3000, so a multi-level-synchronization software cache was designed on top of them: fine-grained (e.g., single-word) updates are first accumulated in SM without any synchronization, and a read/write lock protects a cache block against races only when it is evicted, at which point the block's partial sums are accumulated from SM into main memory. A sketch follows.
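    A C sketch of this write path, with hypothetical lock primitives standing in for the MT-3000 ones:

    /* Hypothetical read/write-lock primitives and global moments array. */
    extern void rw_wrlock(void);
    extern void rw_wrunlock(void);
    extern double density[];          /* moments array in main memory */

    #define ACC_WORDS 128             /* words per SM-resident block (assumed) */

    typedef struct {
        int    base;                  /* first grid index held by this block */
        double acc[ACC_WORDS];        /* partial sums accumulated in SM      */
    } AccBlock;

    /* Fine-grained (+=) updates land in SM with no synchronization; the
       lock is taken only when a block is evicted and its partial sums are
       accumulated into main memory. */
    void acc_add(AccBlock *blk, int igrid, double val)
    {
        if (igrid < blk->base || igrid >= blk->base + ACC_WORDS) {
            rw_wrlock();                              /* eviction path only */
            for (int k = 0; k < ACC_WORDS; k++)       /* flush partial sums */
                density[blk->base + k] += blk->acc[k];
            rw_wrunlock();
            blk->base = (igrid / ACC_WORDS) * ACC_WORDS;
            for (int k = 0; k < ACC_WORDS; k++)
                blk->acc[k] = 0.0;
        }
        blk->acc[igrid - blk->base] += val;           /* lock-free fast path */
    }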

    PushGather obtains the perturbed field along each particle's gyro-orbit with a 4-point gyro-average. Because on-chip cache space is limited, the random accesses of the gyro-average would impose a huge traffic overhead on main memory. A software cache was therefore designed on the per-array SM: grid data are read in as cache blocks addressed via particle indices. If the indices of all particles within a vector width hit in a cache block, a grid-data vector is assembled and passed to the vector registers for vector computation; on a miss, the cache block is refilled according to the required index. Balancing performance against locality, 64 cache blocks are used, with a hash as the block tag. A sketch of the lookup follows.
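    The following sketch shows the scalar lookup path under the sizes stated in the text (64 blocks of 1 024 B); dma_read is a hypothetical stand-in for the DMA interface, and tags are assumed to start at -1:

    #define NBLOCKS     64                       /* cache blocks, per the text */
    #define BLOCK_BYTES 1024                     /* block size used in Table 2 */
    #define BLOCK_ELEMS (BLOCK_BYTES / (int)sizeof(double))

    extern const double gradphi[];               /* perturbed field in main memory */
    extern void dma_read(void *dst, const void *src, int bytes); /* hypothetical */

    typedef struct {
        int    tag;                              /* cached grid-block id, -1 if empty */
        double data[BLOCK_ELEMS];                /* cached gradphi elements in SM     */
    } CacheBlock;

    static CacheBlock cache[NBLOCKS];

    /* Serve one gradphi element through the SM software cache: blocks are
       tagged by a hash (modulo) of the grid-block id, and a miss refills
       the whole block by DMA. The production kernel additionally checks
       that all lanes of a vector hit before assembling a vector register. */
    static double cached_gradphi(int igrid)
    {
        int blk = igrid / BLOCK_ELEMS;
        CacheBlock *b = &cache[blk % NBLOCKS];
        if (b->tag != blk) {                     /* miss: refill via DMA */
            dma_read(b->data, &gradphi[blk * BLOCK_ELEMS], BLOCK_BYTES);
            b->tag = blk;
        }
        return b->data[igrid % BLOCK_ELEMS];
    }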

    With the software cache in place, the irregular accesses are effectively transformed and the pressure on memory bandwidth is relieved; the remaining issue is the cache hit rate. The gyro-average must fetch the perturbed field at every point on the gyro-orbit, and because particle velocities are randomly distributed, particle positions disperse along the poloidal direction after each position update, scrambling the particle distribution over the unstructured perturbed-field mesh. The code's existing sort, keyed only on a particle's radial grid point, is insufficient for a high-arithmetic-intensity device with limited per-array on-chip memory, so the cache hit rate drops.

    Figure 4 shows the relationship between particle index and the corresponding unstructured-mesh index before and after the sorting optimization: psi sorting is the original radial sort, while igrid sorting is the improved sort keyed on the grid point a particle occupies, which strengthens spatial locality. The optimized sort is a bucket sort in which each bucket corresponds to the grid point a particle belongs to; by the symmetry of particle motion, each bucket's size stays of the same order as the per-grid-point particle count, so the algorithm's complexity is O(N), the same as the original psi sort. A sketch follows Fig. 4.

    Figure 4. Comparison of particle grid-point indices under different sorting algorithms
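    A minimal C sketch of this counting/bucket sort, with illustrative array names: bucket sizes, a prefix sum for bucket offsets, then a scatter pass, for O(N) total cost:

    #include <stdlib.h>

    /* Sort particles by the grid point they occupy (the igrid sort above).
       pgrid[mp] is particle mp's grid index; perm receives the new order. */
    void sort_by_igrid(int mpmax, int ngrid, const int *pgrid, int *perm)
    {
        int *count = calloc(ngrid + 1, sizeof(int));
        for (int mp = 0; mp < mpmax; mp++)       /* bucket sizes   */
            count[pgrid[mp] + 1]++;
        for (int g = 0; g < ngrid; g++)          /* bucket offsets */
            count[g + 1] += count[g];
        for (int mp = 0; mp < mpmax; mp++)       /* stable scatter */
            perm[count[pgrid[mp]]++] = mp;
        free(count);
    }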

    Table 2 lists the cache hit rates for the perturbed-field variable gradphi under the different sorts. With 64 cache blocks of 1 024 B each, gradphi hits 77.99% of the time with no sorting, close to the 84.47% of psi sorting, while igrid sorting reaches a 99.15% hit rate. Thanks to this very high hit rate, the irregular accesses to gradphi can be treated approximately as regular accesses.

    Table 2. Cache hit rates for the perturbed-field variable gradphi under different sorting algorithms

    Sorting algorithm | Cache hit rate /%
    No sorting | 77.99
    psi sorting | 84.47
    igrid sorting | 99.15

    With the hotspot functions ported to the heterogeneous accelerator MT-3000 and the above optimizations applied, the irregular memory accesses have been nearly eliminated, easing the pressure on memory bandwidth. Table 3 lists the post-optimization floating-point operation counts, memory traffic, and arithmetic intensity of the hotspot functions PG, PI, and Locate, where mp denotes the particle count and appears as a coefficient since every particle performs the same operations. The data show that the gyro-average in PG remains dominated by memory accesses, with an arithmetic intensity of only 1.39; PI, which takes the largest share of time, reaches only 12.4 given its per-particle computation pattern; and Locate, after the on-the-fly variable computation, reaches 56.3. In summary, the compute-to-memory ratio of Push, which accounts for up to 40% of the runtime, needs further improvement.

    Table 3. Arithmetic intensity statistics after hotspot-function merging and optimization

    Function | Floating-point ops /FLO | Memory traffic /B | Arithmetic intensity /(FLOP/B)
    PG | 277mp | 198.64mp | 1.39
    PI | 1 888mp | 152mp | 12.4
    Locate | 12 161mp | 216mp | 56.3
    PushOpt | 14 326mp | 134.64mp | 106.4
    Note: mp is the number of particles; the coefficients are per-particle computation and memory-traffic counts for each hotspot function.

    PG, PI, and Locate are the three related functions that compute particle motion in the PIC algorithm: Locate computes interpolation coefficients, PG gathers grid data, and PI pushes particles, so the three are algorithmically mergeable. Locate is folded into Push, and PG and PI are merged; the fused kernel's inputs are only particle and grid data, its output is particle data, and reads and writes of a large volume of intermediate variables are eliminated. The optimized function PushOpt reaches an arithmetic intensity of 106.4 FLOP/B, further closing the gap to the theoretical value. A sketch of the fused loop follows.
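    The sketch below illustrates the fusion, reusing the Particles type from the SoA sketch above; locate_point and advance_particle are assumed helpers standing for the per-particle bodies of Algorithms 3 and 2:

    /* Hypothetical per-particle helpers: the bodies of Locate and PI. */
    extern void locate_point(Particles *p, int mp, int igyro,
                             double *wz, double *wp, double *wt, int *jt);
    extern void advance_particle(Particles *p, int mp, double e_avg);

    /* Fused PushOpt: Locate, PG, and PI collapse into one particle loop,
       so interpolation weights and the gyro-averaged field stay in
       registers instead of being stored to and reloaded from memory
       between separate kernels. */
    void push_opt(int mpmax, int ngyro, Particles *p, const double *gradphi)
    {
        for (int mp = 0; mp < mpmax; mp++) {
            double e = 0.0;
            for (int igyro = 0; igyro < ngyro; igyro++) {
                double wz, wp, wt;
                int jt;
                locate_point(p, mp, igyro, &wz, &wp, &wt, &jt); /* Locate */
                e += wz * wp * wt * gradphi[jt];                /* PG     */
            }
            advance_particle(p, mp, e / ngyro);                 /* PI     */
        }
    }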

    In the benchmark test, one MPI process drives one MT-3000 acceleration cluster; on the new-generation Tianhe system we used 480 MPI processes and 480 clusters across 120 nodes. The benchmark uses 1.23 × 10^6 grid points and simulates 2.5 × 10^9 particles.

    Table 4 compares the CPU version and the optimized version on the main loop and hotspot functions; the three main hotspot functions account for 86.06% of the CPU version's runtime. The results show good acceleration on the MT-3000: the overall speed improves by 4.2x, with Push and Locate accelerated by 10.9x and 13.3x respectively, and Charge, which contains atomic operations, by 16.2x.

    Table 4. Performance of the benchmark case

    Hotspot function | CPU version time /s | CPU version share /% | Optimized version time /s | Optimized version share /% | Speedup
    Main loop | 845.63 | 100 | 201.46 | 100 | 4.2
    Push | 323.86 | 38.30 | 29.64 | 14.71 | 10.9
    Locate | 128.69 | 15.22 | 9.67 | 4.80 | 13.3
    Charge | 275.19 | 32.54 | 16.98 | 8.43 | 16.2

    This section presents the weak-scaling results of the optimized VirtEx. The weak-scaling baseline is 120 nodes with 3.86 × 10^5 grid points and 3.7 × 10^9 particles; as the node count grows to 3 840, the particle count grows correspondingly to 1.18 × 10^11. The parallel efficiency, averaged over multiple runs, is shown in Fig. 5: on 3 840 nodes and 5 898 240 accelerator cores of the new-generation Tianhe system, the parallel efficiency is 88.4%, demonstrating good weak scalability.

    Figure 5. Weak scalability test results from 120 to 3 840 nodes

    This work ported and optimized the large-scale parallel magnetic-confinement-fusion gyrokinetic simulation code VirtEx on the MT-3000 heterogeneous accelerator of the new-generation Tianhe system, addressing the tension between a high-arithmetic-intensity system and a memory-bound application. The optimizations include on-the-fly computation of intermediate variables, a customized software cache, spatial-locality optimization, and hotspot-function merging, and data analysis validates their soundness. In the benchmark, the optimized VirtEx shows good acceleration: Push is 10.9x faster, Locate 13.3x, and Charge 16.2x, for an overall speedup of 4.2x. It also scales well, achieving 88.4% parallel efficiency on 5 898 240 accelerator cores across 3 840 nodes.

    Author contributions: Li Qingfeng designed, ported, and tested the program and wrote the paper; Li Yueyan designed and implemented the optimization algorithms; Luan Zhongzhi analyzed program bottlenecks and provided solutions; Zhang Wenlu advised on program principles and algorithms; Gong Chunye advised on optimization for heterogeneous accelerators; Zheng Gang provided and maintained the system test environment; Kang Bo advised on common technologies; Meng Xiangfei designed the research plan and oversaw its progress.

  • Figure 1. Link layer flow controls in lossless networks

    Figure 2. Side effects of link layer flow control

    Figure 3. Architecture of IBCC

    Figure 4. Illustration of ternary states transitions in TCD

    Table 1. Features Comparison of Main Traffic Management in InfiniBand

    Scheme | Representative algorithm | Deployability
    Reactive congestion control | IBCC[9] | Supported by commercial NICs; requires parameter tuning
    Reactive congestion control | RRCC[24] | Requires NIC modification
    Congestion isolation | RECN[30] | Complex switch implementation; no commercial switch support yet
    Multipath load balancing | vFtree[27] | No packet reordering, but limited to specific topologies and workloads
    Multipath load balancing | AR[34] | Packet reordering; supported by some commercial switches and NICs
    Multipath load balancing | AFAR[37] | No packet reordering; requires modifying the MPI library and subnet manager
    Proactive endpoint congestion control | SRP[25] | Complex receiver NIC design
    Proactive endpoint congestion control | SMSRP[26] | Complex receiver NIC design
    Others | CCFIT[38] | Complex switch implementation; no commercial switch support yet

    Table 2. Features Comparison of Main Congestion Control Algorithms in Lossless Ethernet

    Category | Algorithm | Congestion signal | Convergence speed | Deployability
    Sender-driven | QCN[40] | Quantized congestion notification | Fast | Networks without IP routing only
    Sender-driven | DCQCN[39] | ECN | Fast | Switches configured with ECN
    Sender-driven | TIMELY[43] | RTT | Fast | No switch modification needed
    Sender-driven | HPCC[11] | INT | — | Network-wide INT deployment
    Sender-driven | ACC[51] | TCD[42] | — | Requires switch support for TCD[42] and NIC modification
    Switch-driven | RoCC[52] | — | — | Complex switch design
    Receiver-driven | PCN[45] | NP-ECN | — | Modify switches and NICs
    Receiver-driven | RCC[58] | RTT | — | Modify NICs

    Table 3. Features Comparison of Main Load Balancing Schemes in Lossless Ethernet

    Category | Scheme | Granularity | Function | Deployability
    Switch-based | DRILL[59] | Packet | Requires a network controller | Modify switches and the software stack
    Switch-based | CONGA[60] | Flowlet | Supports asymmetric topologies | Supported by some commercial switches
    Switch-based | LetFlow[61] | Flowlet | Supports asymmetric topologies | Supported by some commercial switches
    Switch-based | ConWeave[62] | Subflow | Supports asymmetric topologies | Only ToR switches modified
    Host-based | Presto[63] | Flowcell | Requires a network controller | Modify the software stack
    Host-based | MP-RDMA[64] | Packet | Supports asymmetric topologies | Modify RDMA NICs
    Host-based | Proteus[66] | Packet | Supports asymmetric topologies | Modify RDMA NICs
  • [1] Sherman B, Thordal M, Hanson K. NVMe over Fibre Channel[M]. Hoboken, NJ: John Wiley & Sons, 2019

    [2] AspenCore Network. Congestion management clears a path through 10 GbE[EB/OL]. [2024-01-02]. https://www.edn.com/congestion-management-clears-a-path-through-10-gbe/

    [3] Zhu Yibo, Kang Nanxi, Cao Jiaxin, et al. Packet-level telemetry in large datacenter networks[C]//Proc of the 2015 ACM Conf on Special Interest Group on Data Communication. New York: ACM, 2015: 479−491

    [4] Li Yuliang, Miao Rui, Kim C, et al. LossRadar: Fast detection of lost packets in data center networks[C]//Proc of the 12th Int Conf on Emerging Networking Experiments and Technologies. New York: ACM, 2016: 481−495

    [5] The Research Institution of China Mobile. White paper on network evolution of intelligent computing center for AI large model[EB/OL]. 2023 [2024-01-02]. http://www.ecconsortium.org/Uploads/file/20230517/1684313521798632.pdf

    [6] Guo Chuanxiong, Wu Haitao, Deng Zhong, et al. RDMA over commodity Ethernet at scale[C]//Proc of the 2016 ACM SIGCOMM Conf. New York: ACM, 2016: 202−215

    [7] NVIDIA. InfiniBand accelerates six of the top ten supercomputers in the world, including the top three, and four of the top five on June's TOP500[EB/OL]. [2024-01-02]. https://nvidianews.nvidia.com/news/infiniband-accelerates-six-of-the-top-ten-supercomputers-in-the-world-including-the-top-three-and-four-of-the-top-five-on-june-s-top500

    [8] InfiniBand Trade Association. Life in the fast lane: InfiniBand continues to reign as HPC interconnect of choice[EB/OL]. [2024-01-02]. https://www.infinibandta.org/lifeinthefastlaneinfinibandcontinuestoreignashpcinterconnectofchoice/

    [9] InfiniBand Trade Association. InfiniBand architecture specification release 1.4[EB/OL]. [2021-02-01]. https://cw.infinibandta.org/document/dl/8567

    [10] IEEE. IEEE 802.1Qbb: Priority-based flow control[EB/OL]. [2024-01-02]. http://www.ieee802.org/1/pages/802.1bb.html

    [11] Li Yuliang, Miao Rui, Liu H H, et al. HPCC: High precision congestion control[C]//Proc of the ACM Special Interest Group on Data Communication. New York: ACM, 2019: 44−58

    [12] InfiniBand Trade Association. Supplement to InfiniBand architecture specification volume 1 release 1.2.1, annex A17: RoCEv2[EB/OL]. [2020-12-01]. https://cw.infinibandta.org/document/dl/7781

    [13] Chen Yanpei, Griffith R, Liu Junda, et al. Understanding TCP throughput collapse in datacenter networks[C]//Proc of the 1st ACM Workshop on Research on Enterprise Networking. New York: ACM, 2009: 73−82

    [14] Zeng Gaoxiong, Hu Shuihai, Zhang Junxue, et al. Overview of data center network transport protocols[J]. Journal of Computer Research and Development, 2020, 57(1): 74−84 (in Chinese). doi: 10.7544/issn1000-1239.2020.20190519

    [15] Alizadeh M, Greenberg A, Maltz D A, et al. Data center TCP (DCTCP)[C]//Proc of the ACM SIGCOMM Conf. New York: ACM, 2010: 63−74

    [16] IETF. A remote direct memory access protocol specification (RFC 5040)[EB/OL]. [2024-05-21]. https://datatracker.ietf.org/doc/html/rfc5040

    [17] Alali F, Mizero F, Veeraraghavan M, et al. A measurement study of congestion in an InfiniBand network[C/OL]//Proc of the 2017 Network Traffic Measurement and Analysis Conf (TMA). Piscataway, NJ: IEEE, 2017 [2024-05-21]. https://ieeexplore.ieee.org/document/8002911

    [18] Qian Kun, Cheng Wenxue, Zhang Tong, et al. Gentle flow control: Avoiding deadlock in lossless networks[C]//Proc of the ACM Special Interest Group on Data Communication. New York: ACM, 2019: 75−89

    [19] Hu Shuihai, Zhu Yibo, Cheng Peng, et al. Tagger: Practical PFC deadlock prevention in data center networks[J]. IEEE/ACM Transactions on Networking, 2019, 27(2): 889−902

    [20] Hu Shuihai, Zhu Yibo, Cheng Peng, et al. Deadlocks in datacenter networks: Why do they form, and how to avoid them[C]//Proc of the 15th ACM Workshop on Hot Topics in Networks. New York: ACM, 2016: 92−98

    [21] Gran E G, Eimot M, Reinemo S A, et al. First experiences with congestion control in InfiniBand hardware[C/OL]//Proc of the IEEE Int Symp on Parallel Distributed Processing (IPDPS). Piscataway, NJ: IEEE, 2010 [2024-02-21]. https://doi.org/10.1109/IPDPS.2010.5470419

    [22] Pfister G, Gusat M, Denzel W, et al. Solving hot spot contention using InfiniBand architecture congestion control[C/OL]//Proc of the High Performance Interconnects for Distributed Computing. Piscataway, NJ: IEEE, 2005 [2024-02-21]. https://www.researchgate.net/publication/242408366

    [23] Liu Qian, Russell R D, Gran E G. Improvements to the InfiniBand congestion control mechanism[C]//Proc of the 24th IEEE Annual Symp on High-Performance Interconnects (HOTI). Piscataway, NJ: IEEE, 2016: 27−36

    [24] Zhang Yiran, Qian Kun, Ren Fengyuan. Receiver-driven congestion control for InfiniBand[C/OL]//Proc of the 50th Int Conf on Parallel Processing (ICPP). New York: ACM, 2021 [2024-02-21]. https://doi.org/10.1145/3472456.3472466

    [25] Jiang Nan, Becker D U, Michelogiannakis G, et al. Network congestion avoidance through speculative reservation[C/OL]//Proc of the IEEE Int Symp on High-Performance Computer Architecture. Piscataway, NJ: IEEE, 2012 [2024-02-21]. https://doi.org/10.1109/HPCA.2012.6169047

    [26] Jiang Nan, Dennison L, Dally W J. Network endpoint congestion control for fine-grained communication[C/OL]//Proc of the Int Conf for High Performance Computing, Networking, Storage and Analysis. New York: ACM, 2015 [2024-02-21]. https://doi.org/10.1145/2807591.2807600

    [27] Guay W L, Bogdanski B, Reinemo S A, et al. vFtree: A fat-tree routing algorithm using virtual lanes to alleviate congestion[C]//Proc of the 2011 IEEE Int Parallel Distributed Processing Symp. Piscataway, NJ: IEEE, 2011: 197−208

    [28] HPC Advisory Council. Understanding basic InfiniBand QoS[EB/OL]. [2024-01-02]. https://hpcadvisorycouncil.atlassian.net/wiki/spaces/HPCWORKS/pages/1178075141/Understanding+Basic+InfiniBand+QoS

    [29] Escudero-Sahuquillo J, Garcia P J, Quiles F J, et al. A new proposal to deal with congestion in InfiniBand-based fat-trees[J]. Journal of Parallel and Distributed Computing, 2014, 74(1): 1802−1819

    [30] Duato J, Johnson I, Flich J, et al. A new scalable and cost-effective congestion management strategy for lossless multistage interconnection networks[C]//Proc of the 11th Int Symp on High-Performance Computer Architecture. Piscataway, NJ: IEEE, 2005: 108−119

    [31] Garcia P, Quiles F, Flich J, et al. Efficient, scalable congestion management for interconnection networks[J]. IEEE Micro, 2006, 26(5): 52−66

    [32] Escudero-Sahuquillo J, Garcia P, Quiles F, et al. Cost-effective congestion management for interconnection networks using distributed deterministic routing[C]//Proc of the 16th IEEE Int Conf on Parallel and Distributed Systems. Piscataway, NJ: IEEE, 2010: 355−364

    [33] Geoffray P, Hoefler T. Adaptive routing strategies for modern high performance networks[C]//Proc of the 16th IEEE Symp on High Performance Interconnects. Piscataway, NJ: IEEE, 2008: 165−172

    [34] NVIDIA. How to configure adaptive routing and self healing networking[EB/OL]. [2024-01-02]. https://enterprise-support.nvidia.com/s/article/How-To-Configure-Adaptive-Routing-and-Self-Healing-Networking-New

    [35] NVIDIA. NVIDIA ConnectX-7[EB/OL]. [2024-05-21]. https://resources.nvidia.com/en-us-accelerated-networking-resource-library/connectx-7-datasheet

    [36] NVIDIA. NVIDIA BlueField networking platform[EB/OL]. [2024-05-21]. https://docs.nvidia.com/networking/display/bf3dpu/introduction

    [37] Smith S A, Cromey C E, Lowenthal D K, et al. Mitigating inter-job interference using adaptive flow-aware routing[C]//Proc of the Int Conf for High Performance Computing, Networking, Storage and Analysis (SC18). Piscataway, NJ: IEEE, 2018: 346−360

    [38] Escudero-Sahuquillo J, Gran E G, Garcia P J, et al. Combining congested-flow isolation and injection throttling in HPC interconnection networks[C]//Proc of the 2011 Int Conf on Parallel Processing. New York: ACM, 2011: 662−672

    [39] Zhu Yibo, Eran H, Firestone D, et al. Congestion control for large-scale RDMA deployments[C]//Proc of the 2015 ACM Conf on Special Interest Group on Data Communication. New York: ACM, 2015: 523−536

    [40] IEEE. IEEE 802.1Qau: Congestion notification[EB/OL]. 2010 [2024-01-02]. http://www.ieee802.org/1/pages/802.1au.html

    [41] Floyd S, Jacobson V. Random early detection gateways for congestion avoidance[J]. IEEE/ACM Transactions on Networking, 1993, 1(4): 397−413

    [42] Zhang Yiran, Liu Yifan, Meng Qingkai, et al. Congestion detection in lossless networks[C]//Proc of the 2021 ACM Conf on Special Interest Group on Data Communication. New York: ACM, 2021: 370−383

    [43] Mittal R, Lam V T, Dukkipati N, et al. TIMELY: RTT-based congestion control for the datacenter[C]//Proc of the 2015 ACM Conf on Special Interest Group on Data Communication. New York: ACM, 2015: 537−550

    [44] Patke A, Jha S, Qiu Haoran, et al. Delay sensitivity-driven congestion mitigation for HPC systems[C]//Proc of the ACM Int Conf on Supercomputing. New York: ACM, 2021: 342−353

    [45] Cheng Wenxue, Qian Kun, Jiang Wanchun, et al. Re-architecting congestion management in lossless Ethernet[C]//Proc of the 17th USENIX Symp on Networked Systems Design and Implementation (NSDI 20). Berkeley, CA: USENIX Association, 2020: 19−36

    [46] Open Compute Project. In-band network telemetry in Broadcom Trident3[EB/OL]. [2024-01-02]. https://www.opencompute.org/files/INTInBandNetworkTelemetryAPowerfulAnalyticsFrameworkforyourDataCenterOCPFinal3.pdf

    [47] Xu Lisong, Harfoush K, Rhee I. Binary increase congestion control (BIC) for fast long-distance networks[C]//Proc of IEEE INFOCOM. Piscataway, NJ: IEEE, 2004: 2514−2524

    [48] Stephens B, Cox A L, Singla A, et al. Practical DCB for improved data center networks[C]//Proc of the IEEE Conf on Computer Communications. Piscataway, NJ: IEEE, 2014: 1824−1832

    [49] Zhu Yibo, Ghobadi M, Misra V, et al. ECN or delay: Lessons learnt from analysis of DCQCN and TIMELY[C]//Proc of the Conf on Emerging Network Experiment and Technology. New York: ACM, 2016: 313−327

    [50] Kumar G, Dukkipati N, Jang K, et al. Swift: Delay is simple and effective for congestion control in the datacenter[C]//Proc of the Annual Conf of the ACM Special Interest Group on Data Communication on the Applications, Technologies, Architectures, and Protocols for Computer Communication. New York: ACM, 2020: 514−528

    [51] Zhang Yiran, Meng Qingkai, Hu Chaolei, et al. Revisiting congestion control for lossless Ethernet[C]//Proc of the 21st USENIX Symp on Networked Systems Design and Implementation (NSDI 24). Berkeley, CA: USENIX Association, 2024: 131−148

    [52] Taheri P, Menikkumbura D, Vanini E, et al. RoCC: Robust congestion control for RDMA[C]//Proc of the 16th Int Conf on Emerging Networking Experiments and Technologies. New York: ACM, 2020: 17−30

    [53] Cho I, Jang K, Han D. Credit-scheduled delay-bounded congestion control for datacenters[C]//Proc of the Conf of the ACM Special Interest Group on Data Communication. New York: ACM, 2017: 239−252

    [54] Gao P, Narayan A, Kumar G, et al. pHost: Distributed near-optimal datacenter transport over commodity network fabric[C/OL]//Proc of the 11th ACM Conf on Emerging Networking Experiments and Technologies. New York: ACM, 2015 [2024-02-21]. https://doi.org/10.1145/2716281.2836086

    [55] Handley M, Raiciu C, Agache A, et al. Re-architecting datacenter networks and stacks for low latency and high performance[C]//Proc of the Conf of the ACM Special Interest Group on Data Communication. New York: ACM, 2017: 29−42

    [56] Montazeri B, Li Y, Alizadeh M, et al. Homa: A receiver-driven low-latency transport protocol using network priorities[C]//Proc of the 2018 Conf of the ACM Special Interest Group on Data Communication. New York: ACM, 2018: 221−235

    [57] Hu Shuihai, Bai Wei, Zeng Gaoxiong, et al. Aeolus: A building block for proactive transport in datacenters[C]//Proc of the Conf of the ACM Special Interest Group on Data Communication. New York: ACM, 2020: 422−434

    [58] Zhang Jiao, Zhong Xiaolong, Wan Zirui, et al. RCC: Enabling receiver-driven RDMA congestion control with congestion divide-and-conquer in datacenter networks[J]. IEEE/ACM Transactions on Networking, 2023, 31(1): 103−117. doi: 10.1109/TNET.2022.3185105

    [59] Ghorbani S, Yang Zibin, Godfrey P B, et al. DRILL: Micro load balancing for low-latency data center networks[C]//Proc of the Conf of the ACM Special Interest Group on Data Communication. New York: ACM, 2017: 225−238

    [60] Alizadeh M, Edsall T, Dharmapurikar S, et al. CONGA: Distributed congestion-aware load balancing for datacenters[C]//Proc of the 2014 ACM Conf on SIGCOMM. New York: ACM, 2014: 503−514

    [61] Vanini E, Pan Rong, Alizadeh M, et al. Let it flow: Resilient asymmetric load balancing with flowlet switching[C]//Proc of the 14th USENIX Symp on Networked Systems Design and Implementation (NSDI 17). Berkeley, CA: USENIX Association, 2017: 407−420

    [62] Song C, Khooi X, Joshi R, et al. Network load balancing with in-network reordering support for RDMA[C]//Proc of the ACM SIGCOMM 2023 Conf. New York: ACM, 2023: 816−831

    [63] He Keqiang, Rozner E, Agarwal K, et al. Presto: Edge-based load balancing for fast datacenter networks[C]//Proc of the 2015 ACM Conf on Special Interest Group on Data Communication. New York: ACM, 2015: 465−478

    [64] Lu Yuanwei, Chen Guo, Li Bojie, et al. Multi-path transport for RDMA in datacenters[C]//Proc of the 15th USENIX Symp on Networked Systems Design and Implementation (NSDI 18). Berkeley, CA: USENIX Association, 2018: 357−371

    [65] Wischik D, Raiciu C, Greenhalgh A, et al. Design, implementation and evaluation of congestion control for multipath TCP[C/OL]//Proc of the 8th USENIX Symp on Networked Systems Design and Implementation (NSDI 11). Berkeley, CA: USENIX Association, 2011 [2024-02-21]. http://www.usenix.org/events/nsdi11/tech/full_papers/Wischik.pdf

    [66] Hu Jinbin, Zeng Chaoliang, Wang Zilong, et al. Enabling load balancing for lossless datacenters[C/OL]//Proc of the 31st IEEE Int Conf on Network Protocols (ICNP). Piscataway, NJ: IEEE, 2023 [2024-02-21]. https://doi.org/10.1109/ICNP59255.2023.10355615

    [67] Microsoft. MSCCL[EB/OL]. [2024-01-02]. https://github.com/microsoft/msccl

    [68] Microsoft. DeepSpeed[EB/OL]. [2024-01-02]. https://github.com/microsoft/DeepSpeed

    [69] Shalev L, Ayoub H, Bshara N, et al. A cloud-optimized transport protocol for elastic and scalable HPC[J]. IEEE Micro, 2020, 40(6): 67−73. doi: 10.1109/MM.2020.3016891

    [70] Goyal P, Shah P, Zhao K, et al. Backpressure flow control[C]//Proc of the 19th USENIX Symp on Networked Systems Design and Implementation (NSDI 22). Berkeley, CA: USENIX Association, 2022: 779−805

    [71] IEEE. IEEE 802.1Qcz: Congestion isolation[EB/OL]. 2019 [2024-01-02]. https://1.ieee802.org/tsn/8021qcz/

    [72] Ultra Ethernet Consortium. Ultra Ethernet Consortium[EB/OL]. 2023 [2024-01-02]. https://ultraethernet.org

    [73] Agarwal S, Krishnamurthy A, Agarwal R. Host congestion control[C]//Proc of the ACM SIGCOMM 2023 Conf. New York: ACM, 2023: 275−287

    [74] NVIDIA. NVLink and NVSwitch: Fastest HPC data center platform[EB/OL]. [2024-01-02]. https://www.nvidia.com/en-us/data-center/nvlink/

    [75] Huang Yanping, Cheng Youlong, Bapna A, et al. GPipe: Efficient training of giant neural networks using pipeline parallelism[C]//Proc of the 33rd Int Conf on Neural Information Processing Systems. New York: ACM, 2019: 103−112

    [76] Khani M, Ghobadi M, Alizadeh M, et al. SiP-ML: High-bandwidth optical network interconnects for machine learning training[C]//Proc of the Conf of the ACM Special Interest Group on Data Communication. New York: ACM, 2021: 657−675

    [77] Narayanan D, Harlap A, Phanishayee A, et al. PipeDream: Generalized pipeline parallelism for DNN training[C]//Proc of the 27th ACM Symp on Operating Systems Principles. New York: ACM, 2019: 1−15

    [78] Wang Shuai, Li Dan. Research progress on network performance optimization of distributed machine learning system[J]. Chinese Journal of Computers, 2021, 45(7): 1384−1411 (in Chinese)

    [79] Rajasekaran S, Ghobadi M, Kumar G, et al. Congestion control in machine learning clusters[C]//Proc of the 21st ACM Workshop on Hot Topics in Networks. New York: ACM, 2022: 235−242

    [80] Rajasekaran S, Ghobadi M, Akella A. CASSINI: Network-aware job scheduling in machine learning clusters[C]//Proc of the 21st USENIX Symp on Networked Systems Design and Implementation (NSDI 24). Berkeley, CA: USENIX Association, 2024: 1403−1420

    [81] Katebzadeh M, Costa P, Grot B. Saba: Rethinking datacenter network allocation from application's perspective[C]//Proc of the 18th European Conf on Computer Systems (EuroSys). New York: ACM, 2023: 623−638

    [82] Hashemi S H, Abdu J, Campbell R. TicTac: Accelerating distributed deep learning with communication scheduling[C]//Proc of the 1st Machine Learning and Systems. California: MLSys, 2019: 418−430

    [83] Jayarajan A, Wei J, Gibson G, et al. Priority-based parameter propagation for distributed DNN training[C]//Proc of the 1st Machine Learning and Systems. California: MLSys, 2019: 132−145

    [84] Peng Yanghua, Zhu Yibo, Chen Yangrui, et al. A generic communication scheduler for distributed DNN training acceleration[C]//Proc of the 27th ACM Symp on Operating Systems Principles. New York: ACM, 2019: 16−29

    [85] Poutievski L, Mashayekhi O, Ong J, et al. Jupiter evolving: Transforming Google's data center network via optical circuit switches and software-defined networking[C]//Proc of the ACM SIGCOMM 2022 Conf. New York: ACM, 2022: 66−85

    [86] Ballani H, Costa P, Behrendt R, et al. Sirius: A flat datacenter network with nanosecond optical switching[C]//Proc of the ACM SIGCOMM 2020 Conf. New York: ACM, 2020: 782−797

    [87] Xue Xuwei, Pan Bitao, Chen Sai, et al. Experimental assessments of fast optical switch and control system for data center networks[C/OL]//Proc of the 2021 Optical Fiber Communications Conf and Exhibition (OFC). Piscataway, NJ: IEEE, 2021 [2024-02-21]. https://ieeexplore.ieee.org/document/9489828

    [88] Zhao Shizhen, Zhang Qizhou, Cao Peirui, et al. Flattened Clos: Designing high-performance deadlock-free expander data center networks using graph contraction[C]//Proc of the 20th USENIX Symp on Networked Systems Design and Implementation (NSDI 23). Berkeley, CA: USENIX Association, 2023: 663−683

    [89] Zhao Shizhen, Cao Peirui, Wang Xinbing. Understanding the performance guarantee of physical topology design for optical circuit switched data centers[J]. Measurement and Analysis of Computing Systems, 2022, 5(3): 1−24

    [90] Cao Peirui, Zhao Shizhen, Teh M Y, et al. TROD: Evolving from electrical data center to optical data center[C/OL]//Proc of the 29th IEEE Int Conf on Network Protocols (ICNP). Piscataway, NJ: IEEE, 2021 [2024-02-21]. https://doi.org/10.1109/ICNP52444.2021.9651977

Publication history
  • Received: 2024-02-20
  • Revised: 2024-12-17
  • Accepted: 2025-01-08
  • Available online: 2025-01-08
  • Issue published: 2025-04-30
