ISSN 1000-1239 CN 11-1777/TP

Journal of Computer Research and Development ›› 2022, Vol. 59 ›› Issue (2): 329-341. doi: 10.7544/issn1000-1239.20210905

Special Topic: 2022 Spatial Data Intelligence

• Artificial Intelligence •

Dynamic Ride-Hailing Route Planning Based on Deep Reinforcement Learning

Zheng Bolong1, Ming Lingfeng1, Hu Qi1, Fang Yixiang2, Zheng Kai3, Li Guohui1   

  1. 1(School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan 430074);2(School of Data Science, The Chinese University of Hong Kong (Shenzhen), Shenzhen, Guangdong 518172);3(School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054) (bolongzheng@hust.edu.cn)
  • Online: 2022-02-01
  • Supported by: 
    This work was supported by the National Natural Science Foundation of China (61902134, 62011530437), the Natural Science Foundation of Hubei Province (2020CFB871), and the Fundamental Research Funds for the Central Universities (2019kfyXKJC021, 2019kfyXJJS091).

Abstract: With the rapid development of the mobile Internet, many online ride-hailing platforms that let passengers request rides through mobile apps have emerged. These platforms significantly reduce both the idle driving time of vehicles and the waiting time of passengers, thereby improving traffic efficiency. As a core module of such platforms, the ride-hailing route planning problem aims to dispatch idle vehicles to serve potential requests and improve the platform's operating efficiency, and it has received extensive attention in recent years. Existing studies mainly adopt value-based deep reinforcement learning methods such as the deep Q-network (DQN) to solve this problem. However, owing to the limitations of value-based methods, these approaches cannot be applied to high-dimensional or continuous action spaces. Therefore, an actor-critic with action sampling policy (AS-AC) algorithm is proposed to learn an optimal dispatching strategy for idle vehicles: it perceives the distribution of supply and demand over the road network and determines the final dispatch location according to the degree of supply-demand mismatch. Extensive experiments on ride-hailing order datasets from New York City and Haikou show that the proposed algorithm achieves a lower request rejection rate than the comparison approaches.
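
The abstract describes the approach only at a high level. As a rough illustration of the general idea of an actor-critic that samples a dispatch action from a learned distribution over locations, the following toy Python/PyTorch sketch treats the road network as a small grid, feeds per-cell supply and demand counts to the actor, and samples a dispatch cell from the resulting distribution. The grid size, network sizes, reward definition, and all names here are assumptions made for illustration; this is not the paper's AS-AC implementation.

    # Illustrative sketch only: a toy actor-critic that dispatches an idle vehicle to a
    # grid cell sampled from a learned policy, using supply-demand mismatch as a proxy reward.
    # GRID_CELLS, mismatch_reward, and the simulated data are hypothetical, not from the paper.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    GRID_CELLS = 16  # toy road network discretized into 16 cells (assumption)

    class Actor(nn.Module):
        """Maps the supply/demand state to a categorical distribution over dispatch cells."""
        def __init__(self, n_cells):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(2 * n_cells, 64), nn.ReLU(), nn.Linear(64, n_cells))
        def forward(self, state):
            return F.softmax(self.net(state), dim=-1)

    class Critic(nn.Module):
        """Estimates the value of the current supply/demand state."""
        def __init__(self, n_cells):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(2 * n_cells, 64), nn.ReLU(), nn.Linear(64, 1))
        def forward(self, state):
            return self.net(state).squeeze(-1)

    def mismatch_reward(supply, demand, cell):
        # Reward dispatching toward cells where demand exceeds supply (toy proxy).
        return float(demand[cell] - supply[cell])

    actor, critic = Actor(GRID_CELLS), Critic(GRID_CELLS)
    opt = torch.optim.Adam(list(actor.parameters()) + list(critic.parameters()), lr=1e-3)

    for step in range(200):
        supply = torch.randint(0, 5, (GRID_CELLS,)).float()  # idle vehicles per cell (simulated)
        demand = torch.randint(0, 5, (GRID_CELLS,)).float()  # open requests per cell (simulated)
        state = torch.cat([supply, demand])

        probs = actor(state)
        dist = torch.distributions.Categorical(probs)
        cell = dist.sample()                                  # sampled dispatch location
        reward = mismatch_reward(supply, demand, cell.item())

        value = critic(state)
        advantage = reward - value.detach()
        actor_loss = -dist.log_prob(cell) * advantage         # policy-gradient step on the actor
        critic_loss = F.mse_loss(value, torch.tensor(reward)) # regression target for the critic
        opt.zero_grad()
        (actor_loss + critic_loss).backward()
        opt.step()

In this sketch the sampled action is a single cell index; the point is only that sampling from the actor's output distribution, rather than taking an argmax over Q-values, is what lets the policy handle large or continuous dispatch-action spaces.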

Key words: mobile information processing systems, spatial-temporal data mining, deep reinforcement learning, ride-hailing route planning, fleet management

CLC Number: