• China Premium Science and Technology Journal
  • CCF recommended Class A Chinese journal
  • T1-class high-quality science and technology journal in computing
Guo Yanchao, Gao Ling, Wang Hai, Zheng Jie, Ren Jie. Power Optimization Based on Dynamic Content Refresh in Mobile Edge Computing[J]. Journal of Computer Research and Development, 2018, 55(3): 563-571. DOI: 10.7544/issn1000-1239.2018.20170716

Power Optimization Based on Dynamic Content Refresh in Mobile Edge Computing

More Information
  • Published Date: February 28, 2018
  • Abstract: With the rapid development of the mobile Internet and related technologies, social applications have become one of the mainstream application classes, and as their functionality grows richer, so do their energy consumption and information-processing demands. Mobile social platforms that ignore network status while frequently refreshing content (text, pictures, videos, etc.) incur high energy and computation costs. To address this problem, an energy-consumption optimization model based on a Markov decision process (MDP) in edge computing is proposed. The model accounts for the network status of different environments and, according to the phone's current battery level and the user's refresh rate, processes data through a local edge-computing layer (which simulates the local edge-computing mode and completes the data processing). It selects the optimal strategy from decision tables generated by the MDP, dynamically choosing the best network access and the best image format to download on each refresh. The model reduces both refresh time and the power consumption of the mobile platform. Experimental results show that, compared with a picture-refresh mode using a single network, the proposed model reduces energy consumption by about 12.1% without reducing the number of user refreshes.
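To make the decision-table idea above concrete, the sketch below solves a toy finite MDP by value iteration: states combine battery level and network condition, and each action picks a network interface and an image format for the next refresh. All state spaces, energy costs, and transition probabilities here are invented for illustration and are not taken from the paper.

```python
import itertools

# Illustrative state and action spaces (assumptions, not the paper's).
BATTERY = ["low", "high"]
NETWORK = ["weak", "strong"]
STATES = list(itertools.product(BATTERY, NETWORK))
ACTIONS = [("wifi", "webp"), ("wifi", "jpeg"),
           ("cellular", "webp"), ("cellular", "jpeg")]

def energy_cost(state, action):
    """Assumed per-refresh energy cost (the MDP's negative reward)."""
    battery, network = state
    iface, fmt = action
    cost = 1.0
    cost += 0.5 if iface == "cellular" else 0.2   # radio energy
    cost += 0.8 if fmt == "jpeg" else 0.3         # larger download
    if network == "weak" and iface == "wifi":
        cost += 0.6                               # retransmissions on weak Wi-Fi
    if battery == "low":
        cost *= 1.5                               # penalize drain when battery is low
    return cost

def transitions(state, action):
    """Assumed model: network condition flips with probability 0.3."""
    battery, network = state
    other = "weak" if network == "strong" else "strong"
    return [((battery, network), 0.7), ((battery, other), 0.3)]

def value_iteration(gamma=0.9, eps=1e-6):
    """Return a decision table: optimal (interface, format) per state."""
    V = {s: 0.0 for s in STATES}
    while True:
        delta = 0.0
        for s in STATES:
            best = min(
                energy_cost(s, a)
                + gamma * sum(p * V[s2] for s2, p in transitions(s, a))
                for a in ACTIONS
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            break
    return {
        s: min(ACTIONS, key=lambda a: energy_cost(s, a)
               + gamma * sum(p * V[s2] for s2, p in transitions(s, a)))
        for s in STATES
    }

policy = value_iteration()
for state, action in policy.items():
    print(state, "->", action)
```

With these assumed costs the converged policy prefers Wi-Fi when the network is strong and cellular when Wi-Fi is weak, always choosing the smaller image format; the table lookup itself is what a handset would consult at refresh time, since value iteration runs offline.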
  • Cited by

    Journal citations (15)

    1. 王靖, 方旭明. Joint intelligent power and channel allocation algorithm for Wi-Fi 7 multi-link integrated sensing and communication. Journal of Computer Applications. 2025(02): 563-570.
    2. 葛振兴, 向帅, 田品卓, 高阳. Solving the GuanDan poker game via deep reinforcement learning. Journal of Computer Research and Development. 2024(01): 145-155.
    3. Xiaodong Zhuang, Xiangrong Tong. A dynamic algorithm for trust inference based on double DQN in the internet of things. Digital Communications and Networks. 2024(04): 1024-1034.
    4. 李迎港, 童向荣. Knowledge-guided adaptive sequence reinforcement learning model. Pattern Recognition and Artificial Intelligence. 2023(02): 108-119.
    5. 冯景瑜, 张静, 时翌飞. A two-layer filtering method for encrypted traffic with terminal anonymity in the Internet of Things. Journal of Xi'an University of Posts and Telecommunications. 2023(02): 72-81.
    6. 冯景瑜, 李嘉伦, 张宝军, 韩刚, 张文波. A proactive zero-trust model against APT data theft in the Industrial Internet. Journal of Xidian University. 2023(04): 76-88.
    7. 丁世飞, 杜威, 郭丽丽, 张健, 徐晓. A multi-agent deep deterministic policy gradient method based on double critics. Journal of Computer Research and Development. 2023(10): 2394-2404.
    8. 冯景瑜, 王锦康, 张宝军, 刘宇航. A lightweight anomaly detection scheme for encrypted traffic based on trust filtering. Journal of Xi'an University of Posts and Telecommunications. 2023(05): 56-66.
    9. 徐敏, 胡聪, 王萍, 张翠翠, 王鹏. Performance optimization of the Ceph file system based on reinforcement learning. Microcomputer Applications. 2022(03): 83-86.
    10. 冯景瑜, 于婷婷, 王梓莹, 张文波, 韩刚, 黄文华. An edge zero-trust model against compromised-terminal threats in power IoT scenarios. Journal of Computer Research and Development. 2022(05): 1120-1132.
    11. 王鑫, 赵清杰, 于重重, 张长春, 陈涌泉. Path planning method for soft landing of multi-node probes. Journal of Astronautics. 2022(03): 366-373.
    12. 张文璐, 霍子龙, 赵西雨, 崔琪楣, 陶小峰. Wireless distributed cooperative decision-making for multi-robot localization in smart factories. Radio Communications Technology. 2022(04): 718-727.
    13. 王岩, 童向荣. Cross-domain trust prediction based on tri-training and extreme learning machine. Journal of Computer Research and Development. 2022(09): 2015-2026.
    14. 聂雷, 刘博, 李鹏, 何亨. A heterogeneous vehicular network selection method based on multi-agent Q-learning. Computer Engineering & Science. 2021(05): 836-844.
    15. 洪志理, 赖俊, 曹雷, 陈希亮. Research on intelligent recommendation algorithms incorporating user interest modeling. Information Technology and Network Security. 2021(11): 37-48.

    Other citation types (15)
