Citation: Zheng Yingying, Zhou Junlong, Shen Yufan, Cong Peijin, Wu Zebin. Time and Energy-Sensitive End-Edge-Cloud Resource Provisioning Optimization Method for Collaborative Vehicle-Road Systems[J]. Journal of Computer Research and Development, 2023, 60(5): 1037-1052. DOI: 10.7544/issn1000-1239.202220734
With the continuous development of information technology, intelligent transportation systems have gradually become the trend of future transportation. However, the growing number of time-sensitive and computation-intensive applications in intelligent transportation systems poses severe challenges to resource-limited vehicles. The end-edge-cloud hierarchical computing architecture is an effective means of coping with this challenge. In a collaborative end-edge-cloud vehicle-road system, vehicle users can offload time-sensitive tasks to nearby roadside units to meet timing requirements, and offload computation-intensive tasks to the cloud to meet their computing-power needs. However, task offloading also incurs additional transmission latency and energy overhead, and tasks may suffer from errors during transmission, degrading reliability. Therefore, to ensure the user experience of vehicles in the collaborative end-edge-cloud vehicle-road system, a resource scheduling scheme based on multi-agent reinforcement learning is proposed. The scheme exploits the characteristics of the end-edge-cloud architecture and adopts the centralized training-decentralized execution framework to construct a deep neural network that decides the optimal offloading and computing resource allocation for tasks, thereby optimizing system latency and energy consumption under a reliability constraint. To verify the efficiency of the proposed scheme, a metric named utility value is adopted in the experiments to quantify the improvement in latency and energy efficiency. Experimental results show that, compared with existing approaches, our scheme improves the utility value by up to 221.9%.
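The abstract does not state how the utility value is computed. As an illustration only, a minimal sketch of one common formulation is given below: utility as the weighted relative savings in latency and energy of a scheduling decision versus all-local execution, zeroed out when a reliability constraint is violated. The function name `utility`, the weights `w_t`/`w_e`, and the reliability gating are all assumptions for illustration, not the paper's actual definition.

```python
def utility(t_local, e_local, t_sched, e_sched,
            reliability, r_min, w_t=0.5, w_e=0.5):
    """Toy utility: weighted relative savings in latency and energy.

    t_local/e_local: latency and energy of executing all tasks locally.
    t_sched/e_sched: latency and energy under the scheduling decision.
    reliability/r_min: achieved task reliability and its lower bound.
    Returns 0 when the reliability constraint is violated, so an
    unreliable schedule earns no benefit regardless of its savings.
    """
    if reliability < r_min:
        return 0.0
    gain_t = (t_local - t_sched) / t_local  # relative latency saving
    gain_e = (e_local - e_sched) / e_local  # relative energy saving
    return w_t * gain_t + w_e * gain_e

# Example: offloading halves both latency and energy at reliability 0.99,
# against a required minimum of 0.9.
print(utility(1.0, 1.0, 0.5, 0.5, reliability=0.99, r_min=0.9))
```

Under such a definition, a higher utility value directly reflects the joint latency-and-energy improvement the paper reports.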
[1] Li Zhu, Yu F R, Wang Yige, et al. Big data analytics in intelligent transportation systems: A survey[J]. IEEE Transactions on Intelligent Transportation Systems, 2019, 20(1): 383−398. doi: 10.1109/TITS.2018.2815678
[2] Han Mu, Yang Chen, Hua Lei, et al. Vehicle pseudonym management scheme in Internet of vehicles for mobile edge computing[J]. Journal of Computer Research and Development, 2022, 59(4): 781−795 (in Chinese)
[3] Yu Rong, Zhang Yan, Gjessing S, et al. Toward cloud-based vehicular networks with efficient resource management[J]. IEEE Network, 2013, 27(5): 48−55
[4] Ding Yan, Li Kenli, Liu Chubo, et al. A potential game theoretic approach to computation offloading strategy optimization in end-edge-cloud computing[J]. IEEE Transactions on Parallel and Distributed Systems, 2022, 33(6): 1503−1519. doi: 10.1109/TPDS.2021.3112604
[5] Tong Xing, Zhang Zhao, Jin Cheqing, et al. Blockchain for end-edge-cloud architecture: A survey[J]. Chinese Journal of Computers, 2021, 44(12): 2345−2366 (in Chinese)
[6] Kai Caihong, Zhou Hao, Yi Yibo, et al. Collaborative cloud-edge-end task offloading in mobile-edge computing networks with limited communication capability[J]. IEEE Transactions on Cognitive Communications and Networking, 2021, 7(2): 624−634. doi: 10.1109/TCCN.2020.3018159
[7] Duan Wenxue, Hu Ming, Zhou Qiong, et al. Reliability in cloud computing system: A review[J]. Journal of Computer Research and Development, 2020, 57(1): 102−123 (in Chinese)
[8] Lv Zhihan, Bellavista P, Song Houbing. Sustainable solutions for the intelligent transportation systems [EB/OL]. [2022-11-17]. https://dl.airtable.com/.attachments/a6886226b96834c5aeac1da84634ab49/1a2b6b17/SustainableSolutionsfortheIntelligentTransportationSystems.pdf
[9] Wan Shaohua, Li Xiang, Xue Yuan, et al. Efficient computation offloading for Internet of vehicles in edge computing-assisted 5G networks[J]. The Journal of Supercomputing, 2020, 76(4): 2518−2547. doi: 10.1007/s11227-019-03011-4
[10] Zhu Xiaoyu, Luo Yueyi, Liu Anfeng, et al. Multiagent deep reinforcement learning for vehicular computation offloading in IoT[J]. IEEE Internet of Things Journal, 2021, 8(12): 9763−9773. doi: 10.1109/JIOT.2020.3040768
[11] Cao Zilong, Zhou Pan, Li Ruixuan, et al. Multiagent deep reinforcement learning for joint multichannel access and task offloading of mobile-edge computing in Industry 4.0[J]. IEEE Internet of Things Journal, 2020, 7(7): 6201−6213. doi: 10.1109/JIOT.2020.2968951
[12] Sun Yuxuan, Guo Xueying, Song Jinhui, et al. Adaptive learning-based task offloading for vehicular edge computing systems[J]. IEEE Transactions on Vehicular Technology, 2019, 68(4): 3061−3074. doi: 10.1109/TVT.2019.2895593
[13] Wang Zhe, Zhao Dongmei, Ni Minming, et al. Collaborative mobile computation offloading to vehicle-based cloudlets[J]. IEEE Transactions on Vehicular Technology, 2021, 70(1): 768−781. doi: 10.1109/TVT.2020.3043296
[14] Zeng Feng, Chen Qiao, Meng Lin, et al. Volunteer assisted collaborative offloading and resource allocation in vehicular edge computing[J]. IEEE Transactions on Intelligent Transportation Systems, 2021, 22(6): 3247−3257. doi: 10.1109/TITS.2020.2980422
[15] Luo Quyuan, Li Changle, Luan T, et al. Minimizing the delay and cost of computation offloading for vehicular edge computing[J]. IEEE Transactions on Services Computing, 2021, 15(5): 2897−2909
[16] Zhao Junhui, Li Qiuping, Gong Yi, et al. Computation offloading and resource allocation for cloud assisted mobile edge computing in vehicular networks[J]. IEEE Transactions on Vehicular Technology, 2019, 68(8): 7944−7956. doi: 10.1109/TVT.2019.2917890
[17] Wang Hansong, Li Xi, Ji Hong, et al. Dynamic offloading scheduling scheme for MEC-enabled vehicular networks[C]//Proc of IEEE/CIC Int Conf on Communications in China. Piscataway, NJ: IEEE, 2018: 206−210
[18] Dai Penglin, Hu Kaiwen, Wu Xiao, et al. A probabilistic approach for cooperative computation offloading in MEC-assisted vehicular networks[J]. IEEE Transactions on Intelligent Transportation Systems, 2022, 23(2): 899−911. doi: 10.1109/TITS.2020.3017172
[19] Xu Xiaolong, Fang Zijie, Qi Lianyong, et al. A deep reinforcement learning-based distributed service offloading method for edge computing empowered Internet of vehicles[J]. Chinese Journal of Computers, 2021, 44(12): 2382−2405 (in Chinese). doi: 10.11897/SP.J.1016.2021.02382
[20] Ning Zhaolong, Dong Peiran, Wang Xiaojie, et al. Deep reinforcement learning for vehicular edge computing: An intelligent offloading system[J]. ACM Transactions on Intelligent Systems and Technology, 2019, 10(6): 1−24
[21] Althamary I, Huang C W, Lin P, et al. A survey on multi-agent reinforcement learning methods for vehicular networks[C]//Proc of the 15th Int Wireless Communications and Mobile Computing Conf. Piscataway, NJ: IEEE, 2019: 1154−1159
[22] Kingma D P, Ba J. Adam: A method for stochastic optimization[J]. arXiv preprint, arXiv:1412.6980, 2014
[23] Lu Haifeng, Gu Chunhua, Luo Fei, et al. Research on task offloading based on deep reinforcement learning in mobile edge computing[J]. Journal of Computer Research and Development, 2020, 57(7): 1539−1554 (in Chinese)
[24] Sutton R S, Maei H R, Precup D, et al. Fast gradient-descent methods for temporal-difference learning with linear function approximation[C]//Proc of the 26th Annual Int Conf on Machine Learning. New York: ACM, 2009: 993−1000
[25] Cao Kun, Li Liying, Cui Yangguang, et al. Exploring placement of heterogeneous edge servers for response time minimization in mobile edge-cloud computing[J]. IEEE Transactions on Industrial Informatics, 2020, 17(1): 494−503
[26] Chen Zhao, Wang Xiaodong. Decentralized computation offloading for multi-user mobile edge computing: A deep reinforcement learning approach[J]. EURASIP Journal on Wireless Communications and Networking, 2020, 2020(1): 637−646
[27] Yi Zhang, Liu Yu, Zhou Junlong, et al. Slow-movement particle swarm optimization algorithms for scheduling security-critical tasks in resource-limited mobile edge computing[J]. Future Generation Computer Systems, 2020, 112: 148−161. doi: 10.1016/j.future.2020.05.025
[28] Lillicrap T P, Hunt J J, Pritzel A, et al. Continuous control with deep reinforcement learning[J]. arXiv preprint, arXiv:1509.02971, 2015
[29] NVIDIA Corporation. NVIDIA Jetson AGX Xavier series [EB/OL]. (2022-10-31) [2022-11-17]. https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-agx-xavier/