Citation: Wei Xuechao, Zhou Zhe, Xu Yinghui, Zhang Jiejing, Xie Yuan, Sun Guangyu. PetS: A Scalable Inference Serving System for Parameter-Efficient Transformers[J]. Journal of Computer Research and Development. DOI: 10.7544/issn1000-1239.202440206
Deploying Transformer models under the conventional pre-train-then-fine-tune paradigm is challenging for multi-task serving, because a full model copy must be maintained for each downstream task, quickly exhausting the storage budget. Recent algorithmic advances in Parameter-Efficient Transformers (PETs) show great potential to mitigate this storage overhead: the pre-trained model is shared among tasks and only a small portion of task-specific parameters is fine-tuned. Unfortunately, existing serving systems neither provide flexible PET task management nor can they efficiently serve queries to different tasks in batches. We therefore propose PetS, a unified framework for multi-task PET serving. Specifically, different PET tasks are expressed in a unified representation within the same framework, which enables flexible PET task management. Based on this unified representation, we design a specialized PET inference engine that batches queries from different tasks together and executes them with task-agnostic shared operators and task-specific PET operators. Equipped with the PET inference engine, PetS is far more scalable with respect to the number of tasks served on a single GPU device. To further improve system throughput, we propose a coordinated batching strategy that jointly considers query length, PET task type, and system load balancing. To improve throughput across multiple GPU instances, we also propose a PET-migration-based load balancing strategy. We evaluate PetS on single-GPU platforms, including edge, desktop, and server GPUs. Comprehensive experiments demonstrate that PetS supports up to 26x more concurrent tasks and improves serving throughput by 1.53x and 1.63x on desktop and server GPU nodes, respectively. On multiple GPUs, our load-balancing strategy also provides up to 29% speedup.
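To make the batching mechanism described above concrete, the following minimal Python sketch (not the actual PetS implementation; names such as W_shared, pet_params, and serve_batch, the hidden size, and the LoRA-style low-rank updates are all illustrative assumptions) shows the general idea: queries from several PET tasks are concatenated, one task-agnostic shared GEMM is computed over the whole batch, and each task's small PET operator is then applied to its slice of the result.

# Illustrative sketch only, not the PetS engine: batch queries from different
# PET tasks, run one shared matrix multiply over the whole batch, then apply
# small task-specific PET operators (hypothetical LoRA-style low-rank updates).
import numpy as np

HIDDEN = 768  # assumed hidden size of the shared pre-trained layer

# Shared pre-trained weight, used by every task.
W_shared = np.random.randn(HIDDEN, HIDDEN).astype(np.float32)

# Task-specific PET parameters: tiny low-rank factors per task (assumed rank 8).
pet_params = {
    task_id: (np.random.randn(HIDDEN, 8).astype(np.float32),
              np.random.randn(8, HIDDEN).astype(np.float32))
    for task_id in ("task_a", "task_b", "task_c")
}

def serve_batch(queries):
    """queries: list of (task_id, activations of shape [seq_len, HIDDEN])."""
    lengths = [x.shape[0] for _, x in queries]
    batch = np.concatenate([x for _, x in queries], axis=0)

    # Task-agnostic shared operator: one large GEMM for the whole batch.
    shared_out = batch @ W_shared

    # Task-specific PET operators: small per-task corrections on each slice.
    outputs, offset = [], 0
    for (task_id, _), n in zip(queries, lengths):
        A, B = pet_params[task_id]
        piece = shared_out[offset:offset + n]
        outputs.append(piece + (batch[offset:offset + n] @ A) @ B)
        offset += n
    return outputs

# Example: three queries from two different tasks served in one batch.
qs = [("task_a", np.random.randn(4, HIDDEN).astype(np.float32)),
      ("task_b", np.random.randn(7, HIDDEN).astype(np.float32)),
      ("task_a", np.random.randn(2, HIDDEN).astype(np.float32))]
print([o.shape for o in serve_batch(qs)])

The point of the sketch is the split into one large shared operator plus many tiny task-specific operators, which is what allows a single pre-trained model to serve queries from many tasks in the same batch.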