• China's premier scientific and technical journal
  • CCF-recommended Class A Chinese journal
  • T1-class high-quality scientific journal in computing
Zhang Yiran, Wang Shangguang, Ren Fengyuan. Survey on Traffic Management in Lossless Networks[J]. Journal of Computer Research and Development, 2025, 62(5): 1290-1306. DOI: 10.7544/issn1000-1239.202440096

Survey on Traffic Management in Lossless Networks

Funds: This work was supported by the National Natural Science Foundation of China (62302055, 62132007, 62221003) and the Fundamental Research Funds for the Central Universities.
More Information
  • Author Bio:

    Zhang Yiran: born in 1995. PhD, associate professor, PhD supervisor. Member of CCF. Her main research interests include network traffic management and control, datacenter networks, and satellite networks

    Wang Shangguang: born in 1982. PhD, professor, PhD supervisor. Distinguished member of CCF. His main research interests include service computing, mobile edge computing, cloud computing, and satellite computing

    Ren Fengyuan: born in 1970. PhD, professor, PhD supervisor. Senior member of CCF. His main research interests include network traffic management and control, datacenter networks, and the IoT/industrial Internet

  • Received Date: February 20, 2024
  • Revised Date: December 17, 2024
  • Accepted Date: January 08, 2025
  • Available Online: January 08, 2025
  • Abstract: Lossless networks are increasingly deployed in high performance computing (HPC), data centers, and other fields. Lossless networks use link layer flow control to ensure that packets are never dropped by switches due to buffer overflow, avoiding loss-recovery retransmissions and greatly improving the latency and throughput of applications. However, the negative effects introduced by link layer flow control (congestion spreading, deadlock, etc.) pose challenges to the large-scale deployment of lossless networks. Consequently, traffic management techniques that improve the scalability of lossless networks have attracted considerable attention. We systematically review the research progress of traffic management in the typical lossless networks used in HPC and data centers, namely InfiniBand and lossless Ethernet. First, we introduce the negative impact of link layer flow control and the goals of traffic management, and summarize the traditional traffic management architecture of lossless networks. Then, organized by technical route (congestion control, congestion isolation, load balancing, etc.) and by driving entity (sender-driven, receiver-driven, etc.), we classify and elaborate on the latest research progress in InfiniBand and lossless Ethernet traffic management, and analyze the corresponding advantages and limitations. Finally, we point out the issues that merit further research on lossless network traffic management, including a unified traffic management architecture, joint congestion management within the host and the network, and traffic management for domain-specific applications.
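The lossless property described above rests on a simple mechanism: an ingress queue asks its upstream neighbor to pause before the buffer can overflow. As an illustrative sketch (not taken from the paper), the following models PFC-style XOFF/XON thresholds; the class name, constants, and threshold values are all hypothetical.

```python
# Hypothetical model of link-layer flow control (PFC-style) on one ingress
# queue. Thresholds and buffer sizes are illustrative, not from any standard.

class IngressQueue:
    def __init__(self, capacity=1000, xoff=800, xon=400):
        self.capacity = capacity        # total buffer, in packet-sized cells
        self.xoff = xoff                # above this, ask upstream to pause
        self.xon = xon                  # at or below this, let upstream resume
        self.occupancy = 0
        self.paused_upstream = False

    def enqueue(self, n=1):
        # Lossless invariant: XOFF headroom (capacity - xoff) is provisioned
        # so that packets still in flight after a PAUSE always fit.
        assert self.occupancy + n <= self.capacity, "headroom mis-provisioned"
        self.occupancy += n
        if self.occupancy >= self.xoff and not self.paused_upstream:
            self.paused_upstream = True   # would emit a PAUSE frame upstream

    def dequeue(self, n=1):
        self.occupancy = max(0, self.occupancy - n)
        if self.occupancy <= self.xon and self.paused_upstream:
            self.paused_upstream = False  # would emit a RESUME frame upstream
```

The sketch also hints at why congestion spreads: while `paused_upstream` is set, the upstream port's own queue fills and may in turn pause its upstream, propagating backpressure hop by hop (tree saturation), which is precisely what the traffic management techniques surveyed here aim to mitigate.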

  • [1]
    Sherman B, Thordal M, Hanson K. NVMe over Fibre Channel [M]. Hoboken, NJ: John Wiley & Sons, 2019
    [2]
    Aspencore Network. Congestion management clears a path through 10 GbE [EB/OL]. [2024-01-02]. https://www.edn.com/congestion-management-clears-a-path-through-10-gbe/
    [3]
    Zhu Yibo, Kang Nanxi, Cao Jiaxin, et al. Packetlevel telemetry in large datacenter networks[C]//Proc of the 2015 ACM Conf on Special Interest Group on Data Communication. New York: ACM, 2015: 479−491
    [4]
    Li Yuliang, Miao Rui, Kim C, et al. Lossradar: Fast detection of lost packets in data center networks[C]//Proc of the 12th Int Conf on Emerging Networking Experiments and Technologies. New York: ACM, 2016: 481–495
    [5]
    The Research Institution of China Mobile. White paper on network evolution of intelligent computing center for AI large model[EB/OL]. 2023 [2024-01-02]. http://www.ecconsortium.org/Uploads/file/20230517/1684313521798632.pdf
    [6]
    Guo Chuanxiong, Wu Haitao, Deng Zhong, et al. RDMA over commodity Ethernet at scale[C]//Proc of the 2016 ACM SIGCOMM Conf. New York: ACM, 2016: 202–215
    [7]
    NVIDIA. InfiniBand accelerates six of the top ten supercomputers in the world, including the top three, and four of the top five on June's TOP500 [EB/OL]. [2024-01-02]. https://nvidianews.nvidia.com/news/infiniband-accelerates-six-of-the-top-ten-supercomputers-in-the-world-including-the-top-three-and-four-of-the-top-five-on-june-s-top500
    [8]
    InfiniBand Trade Association. Life in the fast lane: InfiniBand continues to reign as HPC interconnect of choice [EB/OL]. [2024-01-02]. https://www.infinibandta.org/lifeinthefastlaneinfinibandcontinuestoreignashpcinterconnectofchoice/
    [9]
    InfiniBand Trade Association. InfiniBand architecture specification release 1.4 [EB/OL]. [2021-02-01]. https://cw.infinibandta.org/document/dl/8567
    [10]
    IEEE. IEEE 802.1 Qbb-priority-based flow control [EB/OL]. [2024-01-02]. http://www.ieee802.org/1/pages/802.1bb.html.
    [11]
    Li Yuliang, Miao Rui, Liu H H, et al. HPCC: High precision congestion control[C]//Proc of the ACM Special Interest Group on Data Communication. New York: ACM, 2019: 44−58
    [12]
    InfiniBand Trade Association. Supplement to InfiniBand architecture specification volume 1 release 1.2. 1. annex a17: RoCEv2 [EB/OL]. [2020-12-01]. https://cw.infinibandta.org/document/dl/7781
    [13]
    Chen Yanpei, Griffith R, Liu Junda, et al. Understanding TCP throughput collapse in datacenter networks[C]//Proc of the 1st ACM Workshop on Research on Enterprise Networking. New York: ACM, 2009: 73–82
    [14]
    曾高雄,胡水海,张骏雪,等. 数据中心网络传输协议综述[J]. 计算机研究与发展,2020,57(1):74−84 doi: 10.7544/issn1000-1239.2020.20190519

    Zeng Gaoxiong, Hu Shuihai, Zhang Junxue, et al. Overview of Data Center Network Transport Protocols[J]. Journal of Computer Research and Developement, 2020, 57(1): 74−84 (in Chinese) doi: 10.7544/issn1000-1239.2020.20190519
    [15]
    Alizadeh M, Greenberg A, Maltz D A, et al. Data center TCP (DCTCP)[C]//Proc of the ACM SIGCOMM Conf. New York: ACM, 2010: 63–74
    [16]
    IETF. A Remote Direct Memory Access Protocol Specification (RFC 5040) [EB/OL]. [2024-05-21]. https://datatracker.ietf.org/doc/html/rfc5040
    [17]
    Alali F, Mizero F, Veeraraghavan M, et al. A measurement study of congestion in an InfiniBand network[C/OL]//Proc of the 2017 Network Traffic Measurement and Analysis Conf (TMA). Piscataway, NJ: IEEE, 2017[2024-05-21]. https://ieeexplore.ieee.org/document/8002911
    [18]
    Qian Kun, Cheng Wenxue, Zhang Tong, et al. Gentle flow control: Avoiding deadlock in lossless networks[C]//Proc of the ACM Special Interest Group on Data Communication. New York: ACM, 2019: 75–89
    [19]
    Hu Shuihai, Zhu Yibo, Cheng Peng, et al. Tagger: Practical PFC deadlock prevention in data center networks[J]. IEEE/ACM Transactions on Networking, 2019, 27(2): 889902
    [20]
    Hu Shuihai, Zhu Yibo, Cheng Peng, et al. Deadlocks in datacenter networks: Why do they form, and how to avoid them[C]//Proc of the 15th ACM Workshop on Hot Topics in Networks. New York: ACM, 2016: 92–98
    [21]
    Gran E G, Eimot M, Reinemo S A, et al. First experiences with congestion control in InfiniBand hardware[C/OL]//Proc of the IEEE Int Symp on Parallel Distributed Processing (IPDPS). Piscataway, NJ: IEEE, 2010 [2024-02-21]. https://doi.org/10.1109/IPDPS.2010.5470419
    [22]
    Pfister G, Gusat M, Denzel W, et al. Solving hot spot contention using InfiniBand architecture congestion control[C/OL]//Proc of the High Performance Interconnects for Distributed Computing. Piscataway, NJ: IEEE, 2005 [2024-02-21]. https://www.researchgate.net/publication/242408366
    [23]
    Liu Qian, Russell R D, Gran E G. Improvements to the InfiniBand congestion control mechanism[C]// Proc of the 24th IEEE Annual Symp on High-Performance Interconnects (HOTI). Piscataway, NJ: IEEE, 2016: 27−36
    [24]
    Zhang Yiran, Qian Kun, Ren Fengyuan. Receiver-driven congestion control for InfiniBand[C/OL]//Proc of the 50th Int Conf on Parallel Processing (ICPP). New York: ACM, 2021 [2024-02-21]. https://doi.org/10.1145/3472456.3472466
    [25]
    Jiang Nan, Becker D U, Michelogiannakis G, et al. Network congestion avoidance through speculative reservation[C/OL]// Proc of the IEEE Int Symp on High-Performance Computer Architecture. Piscataway, NJ: IEEE, 2012 [2024-02-21]. https://doi.org/10.1109/HPCA.2012.6169047
    [26]
    Jiang Nan, Dennison L, Dally W J. Network endpoint congestion control for finegrained communication[C/OL]//Proc of the Int Conf for High Performance Computing, Networking, Storage and Analysis. New York: ACM, 2015[2024-02-21]. https://doi.org/10.1145/2807591.2807600
    [27]
    Guay W L, Bogdanski B, Reinemo S A, et al. vFtree ― A Fattree routing algorithm using virtual lanes to alleviate congestion[C]// Proc of the 2011 IEEE Int Parallel Distributed Processing Symp. Piscataway, NJ: IEEE, 2011: 197−208
    [28]
    HPC Advisory Council. Understanding basic InfiniBand QoS [EB/OL]. [2024-01-02]. https://hpcadvisorycouncil.atlassian.net/wiki/spaces/HPCWORKS/pages/1178075141/Understanding+Basic+InfiniBand+QoS
    [29]
    EscuderoSahuquillo J, Garcia P J, Quiles F J, et al. A new proposal to deal with congestion in InfiniBandbased fattrees[J]. Journal of Parallel and Distributed Computing, 2014, 74(1): 1802−1819
    [30]
    Duato J, Johnson I, Flich J, et al. A new scalable and costeffective congestion management strategy for lossless multistage interconnection networks[C]// Proc of the 11th Int Symp on High-Performance Computer Architecture. Piscataway, NJ: IEEE, 2005: 108−119
    [31]
    Garcia P, Quiles F, Flich J, et al. Efficient, scalable congestion management for interconnection networks[J]. IEEE Micro, 2006, 26(5): 52−66
    [32]
    EscuderoSahuquillo J, Garcia P, Quiles F, et al. Cost-effective congestion management for interconnection networks using distributed deterministic routing[C]//Proc of the 16th IEEE Int Conf on Parallel and Distributed Systems. Piscataway, NJ: IEEE, 2010: 355−364
    [33]
    Geoffray P, Hoefler T. Adaptive routing strategies for modern high performance networks [C]//Proc of the 16th IEEE Symp on High Performance Interconnects. Piscataway, NJ: IEEE, 2008: 165−172
    [34]
    NVIDIA. How to configure adaptive routing and self healing networking [EB/OL]. [2024-01-02]. https://enterprise-support.nvidia.com/s/article/How-To-Configure-Adaptive-Routing-and-Self-Healing-Networking-New
    [35]
    NVIDIA. NVIDIA ConnectX-7[EB/OL]. [2024-05-21]. https://resources.nvidia.com/en-us-accelerated-networking-resource-library/connectx-7-datasheet
    [36]
    NVIDIA. NVIDIA BlueField networking platform[EB/OL]. [2024-05-21]. https://docs.nvidia.com/networking/display/bf3dpu/introduction
    [37]
    Smith S A, Cromey C E, Lowenthal D K, et al. Mitigating interjob interference using adaptive flowaware routing[C]// Proc of the Int Conf for High Performance Computing, Networking, Storage and Analysis(SC18). Piscataway, NJ: IEEE, 2018: 346−360
    [38]
    EscuderoSahuquillo J, Gran E G, Garcia P J, et al. Combining congested-flow isolation and injection throttling in HPC interconnection networks[C]//Proc of the 2011 Int Conf on Parallel Processing. New York: ACM, 2011: 662−672
    [39]
    Zhu Yibo, Eran H, Firestone D, et al. Congestion control for largescale RDMA deployments[C]//Proc of the 2015 ACM Conf on Special Interest Group on Data Communication. New York: ACM, 2015: 523–536
    [40]
    IEEE. 802.1 Qau―Congestion Notification [EB/OL]. 2010[2024-01-02]. http://www.ieee802.org/1/pages/802.1au.html
    [41]
    Floyd S, Jacobson V. Random early detection gateways for congestion avoidance[J]. IEEE/ACM Transactions on Networking, 1993, 1(4): 397−413
    [42]
    Zhang Yiran, Liu Yifan, Meng Qingkai, et al. Congestion detection in lossless networks[C]//Proc of the 2021 ACM Conf on Special Interest Group on Data Communication. New York: ACM, 2021: 370–383
    [43]
    Mittal R, Lam V T, Dukkipati N, et al. TIMELY: Rttbased congestion control for the datacenter [C]//Proc of the 2015 ACM Conf on Special Interest Group on Data Communication. New York: ACM, 2015: 537–550
    [44]
    Patke A, Jha S, Qiu Haoran, et al. Delay sensitivity-driven congestion mitigation for HPC systems [C]//Proc of the ACM Int Conf on Supercomputing. New York: ACM, 2021: 342–353
    [45]
    Cheng Wenxue, Qian Kun, Jiang Wanchun, et al. Rearchitecting congestion management in lossless Ethernet [C]//Proc of the 17th USENIX Symp on Networked Systems Design and Implementation (NSDI 20). Berkeley, CA: USENIX Association, 2020: 19−36
    [46]
    Open Compute Project. Inband network telemetry in broadcom trident3[EB/OL]. [2024-01-02]. https://www.opencompute.org/files/INTInBandNetworkTelemetryAPowerfulAnalyticsFrameworkforyourDataCenterOCPFinal3.pdf
    [47]
    Xu Lisong, Harfoush K, Rhee I. Binary increase congestion control (bic) for fast long-distance networks[C]// Proc of IEEE INFOCOM. Piscataway, NJ: IEEE, 2004: 2514−2524
    [48]
    Stephens B, Cox A L, Singla A, et al. Practical DCB for improved data center networks[C]//Proc of the IEEE Conf on Computer Communications. Piscataway, NJ: IEEE, 2014: 1824−1832
    [49]
    ZhuYibo, Ghobadi M, Misra V, et al. ECN or delay: Lessons learnt from analysis of DCQCN and TIMELY[C]//Proc of the Conf on Emerging Network Experiment and Technology. New York: ACM, 2016: 313−327
    [50]
    Kumar G, Dukkipati N, Jang K, et al. Swift: Delay is simple and effective for congestion control in the datacenter[C]//Proc of the Annual Conf of the ACM Special Interest Group on Data Communication on the Applications, Technologies, Architectures, and Protocols for Computer Communication. New York: ACM, 2020: 514−528
    [51]
    Zhang Yiran, Meng Qingkai, Hu Chaolei, et al. Revisiting congestion control for lossless Ethernet[C]// Proc of the 21st USENIX Symp on Networked Systems Design and Implementation (NSDI 24). Berkeley, CA: USENIX Association, 2024: 131−148
    [52]
    Taheri P, Menikkumbura D, Vanini E, et al. RoCC: Robust congestion control for RDMA[C]// Proc of the 16th Int Conf on Emerging Networking Experiments and Technologies. New York: ACM, 2020: 17−30
    [53]
    Cho I, Jang K, Han D. Credit-scheduled delay-bounded congestion control for datacenters [C]//Proc of the Conf of the ACM Special Interest Group on Data Communication. New York: ACM, 2017: 239−252
    [54]
    Gao P, Narayan A, Kumar G, et al. pHost: Distributed nearoptimal datacenter transport over commodity network fabric[C/OL]//Proc of the 11th ACM Conf on Emerging Networking Experiments and Technologies. New York: ACM, 2015[2024-02-21]. https://doi.org/10.1145/2716281.2836086
    [55]
    Handley M, Raiciu C, Agache A, et al. Rearchitecting datacenter networks and stacks for low latency and high performance[C]//Proc of the Conf of the ACM Special Interest Group on Data Communication. New York: ACM, 2017: 29−42
    [56]
    Montazeri B, Li Y, Alizadeh M, et al. Homa: A receiver-driven low-latency transport protocol using network priorities[C]//Proc of the 2018 Conf of the ACM Special Interest Group on Data Communication. New York: ACM, 2018: 221–235
    [57]
    Hu Shuihai, Bai Wei, Zeng Gaoxiong, et al. Aeolus: A building block for proactive transport in datacenters [C]//Proc of the Conf of the ACM Special Interest Group on Data Communication. New York: ACM, 2020: 422−434
    [58]
    Zhang Jiao, Zhong Xiaolong, Wan Zirui, et al. RCC: Enabling receiver-driven RDMA congestion control with congestion divide-and-conquer in datacenter networks[J]. IEEE/ACM Transactions on Networking, 2023, 31(1): 103−117 doi: 10.1109/TNET.2022.3185105
    [59]
    Ghorbani S, Yang Zibin, Godfrey P B, et al. DRILL: Micro load balancing for low-latency data center networks[C]//Proc of the Conf of the ACM Special Interest Group on Data Communication. New York: ACM, 2017: 225−238
    [60]
    Alizadeh M, Edsall T, Dharmapurikar S, et al. CONGA: Distributed congestion-aware load balancing for datacenters[C]//Proc of the 2014 ACM Conf on SIGCOMM. New York: ACM, 2014: 503−514
    [61]
    Vanini E, Pan Rong, Alizadeh M, et al. Let It Flow: Resilient asymmetric load balancing with flowlet switching[C]//Proc of the 14th USENIX Symp on Networked Systems Design and Implementation (NSDI 17). Berkeley, CA: USENIX Association, 2017: 407−420
    [62]
    Song C, Khooi X, Joshi R, et al. Network load balancing with in-network reordering support for RDMA[C]//Proc of the ACM SIGCOMM 2023 Conf. New York: ACM, 2023: 816–831
    [63]
    He Keqiang, Rozner E, Agarwal K, et al. Presto: Edge-based load balancing for fast datacenter networks[C]//Proc of the 2015 ACM Conf on Special Interest Group on Data Communication. New York: ACM, 2015: 465−478
    [64]
    Lu Yuanwei, Chen Guo, Li Bojie, et al. Multi-path transport for RDMA in datacenters[C]//Proc of the 15th USENIX Symp on Networked Systems Design and Implementation (NSDI 18). Berkeley, CA: USENIX Association, 2018: 357−371
    [65]
    Wischik D, Raiciu C, Greenhalgh A, et al. Design, implementation and evaluation of congestion control for multipath TCP[C/OL]// Proc of the 8th USENIX Symp on Networked Systems Design and Implementation (NSDI 11). Berkeley, CA: USENIX Association, 2011[2024-02-21]. http://www.usenix.org/events/nsdi11/tech/full_papers/Wischik.pdf
    [66]
    Hu Jinbin, Zeng Chaoliang, Wang Zilong, et al. Enabling load balancing for lossless datacenters[C/OL]//Proc of the 31st IEEE Int Conf on Network Protocols (ICNP). Piscataway, NJ: IEEE, 2023[2024-02-21]. https://doi.org/10.1109/ICNP59255.2023.10355615
    [67]
    Microsoft. MSCCL[EB/OL]. [2024-01-02]. https://github.com/microsoft/msccl
    [68]
    Microsoft. DeepSpeed[EB/OL]. [2024-01-02]. https://github.com/microsoft/DeepSpeed
    [69]
    Shalev L, Ayoub H, Bshara N, et al. A cloud-optimized transport protocol for elastic and scalable HPC[J]. IEEE Micro, 2020, 40(6): 67−73 doi: 10.1109/MM.2020.3016891
    [70]
    Goyal P, Shah P, Zhao K, et al. Backpressure flow control[C]//Proc of the 19th USENIX Symp on Networked Systems Design and Implementation (NSDI 22). Berkeley, CA: USENIX Association, 2022: 779−805
    [71]
    IEEE. IEEE 802.1 Qcz―Congestion Isolation [EB/OL]. 2019[2024-01-02]. https://1.ieee802.org/tsn/8021qcz/
    [72]
    Ultra Ethernet Consortium. Ultra Ethernet consortium [EB/OL]. 2023[2024-01-02]. https://ultraethernet.org
    [73]
    Saksham A, Arvind K, Rachit A, et al. Host congestion control[C]//Proc of the ACM SIGCOMM 2023 Conf. New York: ACM, 2023: 275−287
    [74]
    NVIDIA. NVLink and NVSwitch: Fastest HPC fata center platform [EB/OL]. [2024-01-02]. https://www.nvidia.com/en-us/data-center/nvlink/
    [75]
    Huang Yanping, Cheng Youlong, Bapna A, et al. GPipe: Efficient training of giant neural networks using pipeline parallelism[C]//Proc of the 33rd Int Conf on Neural Information Processing Systems. New York: ACM, 2019: 103−112
    [76]
    Khani M, Ghobadi M, Alizadeh M, et al. SiP-ML: High-bandwidth optical network interconnects for machine learning training[C]//Proc of the Conf of the ACM Special Interest Group on Data Communication. New York: ACM, 2021: 657−675
    [77]
    Narayanan D, Harlap A, Phanishayee A, et al. PipeDream: Generalized pipeline parallelism for DNN training[C]//Proc of the 27th ACM Symp on Operating Systems Principles. New York: ACM, 2019: 1−15
    [78]
    王帅,李丹. 分布式机器学习系统网络性能优化研究进展[J]. 计算机学报,2021,45(7):1384−1411

    Wang Shuai, Li Dan. Research progress on network performance optimization of distributed machine learning system[J]. Chinese Journal of Computers, 2021, 45(7): 1384−1411 (in Chinese)
    [79]
    Rajasekaran S, Ghobadi M, Kumar G, et al. Congestion control in machine learning clusters[C]//Proc of the 21st ACM Workshop on Hot Topics in Networks. New York: ACM, 2022: 235−242
    [80]
    Rajasekaran S, Ghobadi M, Akella A. CASSINI: Network-aware job scheduling in machine learning clusters[C]//Proc of the 21st USENIX Symp on Networked Systems Design and Implementation (NSDI 24). Berkeley, CA: USENIX Association, 2024: 1403−1420
    [81]
    Katebzadeh M, Costa P, Grot B. Saba: Rethinking datacenter network allocation from application’s perspective[C]//Proc of the 18th European Conf on Computer Systems (EuroSys). New York: ACM, 2023: 623−638
    [82]
    Hashemi S H, Abdu J, Campbell R. TicTac: Accelerating distributed deep learning with communication scheduling[C]//Proc of the 1st Machine Learning and Systems. California: MLSys, 2019: 418−430
    [83]
    Jayarajan A, Wei J, Gibson G, et al. Priority-based parameter propagation for distributed DNN training[C]// Proc of the 1st Machine Learning and Systems. California: MLSys, 2019: 132−145
    [84]
    Peng Yanghua, Zhu Yibo, Chen Yangrui, et al. A generic communication scheduler for distributed DNN training acceleration[C]//Proc of the 27th ACM Symp on Operating Systems Principles. New York: ACM, 2019: 16−29
    [85]
    Poutievski L, Mashayekhi O, Ong J, et al. Jupiter evolving: Transforming Google’s data center network via optical circuit switches and software-defined networking[C]//Proc of the ACM SIGCOMM 2022 Conf. New York: ACM, 2022: 66−85
    [86]
    Ballani H, Costa P, Behrendt R, et al. Sirius: A flat datacenter network with nanosecond optical switching[C]//Proc of the ACM SIGCOMM 2020 Conf. New York: ACM, 2020: 782−797
    [87]
    Xue Xuwei, Pan Bitao, Chen Sai, et al. Experimental assessments of fast optical switch and control system for data center networks[C/OL]//Proc of the 2021 Optical Fiber Communications Conf and Exhibition (OFC). Piscataway, NJ: IEEE, 2021[2024-02-21]. https://ieeexplore.ieee.org/document/9489828
    [88]
    Zhao Shizhen, Zhang Qizhou, Cao Peirui, et al. Flattened clos: Designing high-performance deadlock-free expander data center networks using graph contraction[C]// Proc of the 20th USENIX Symp on Networked Systems Design and Implementation (NSDI 23). Berkeley, CA: USENIX Association, 2023: 663−683
    [89]
    Zhao Shizhen, Cao Peirui, Wang Xinbing. Understanding the performance guarantee of physical topology design for optical circuit switched data centers[J]. Measurement and Analysis of Computing Systems, 2022, 5(3): 1−24
    [90]
    Cao Peirui, Zhao Shizhen, Teh M Y, et al. TROD: Evolving from electrical data center to optical data center[C/OL]//Proc of the 29th IEEE Int Conf on Network Protocols (ICNP). Piscataway, NJ: IEEE, 2021[2024-02-21]. https://doi.org/10.1109/ICNP52444.2021.9651977
