
    Dynamic Ride-Hailing Route Planning Based on Deep Reinforcement Learning

    Abstract: With the rapid development of the mobile Internet, many ride-hailing platforms that let passengers request taxis through mobile apps have emerged. These platforms significantly reduce taxi idle time and passenger waiting time, thereby improving traffic efficiency. As a core module of such platforms, the ride-hailing route planning problem aims to dispatch idle vehicles to serve potential requests and thus improve operating efficiency, and it has received extensive attention in recent years. Existing studies mainly adopt value-based deep reinforcement learning algorithms, such as deep Q-network (DQN), to solve this problem. However, owing to their inherent limitations, value-based methods cannot be applied to high-dimensional or continuous action spaces. Therefore, an actor-critic with action sampling policy (AS-AC) algorithm is proposed to learn an optimal dispatching strategy for idle vehicles: it perceives the supply-demand distribution over the road network and determines the final dispatch location according to the degree of supply-demand mismatch. Extensive experiments on New York and Haikou ride-hailing order datasets show that the algorithm achieves a lower request rejection rate than the comparison approaches.
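The dispatch idea sketched in the abstract, biasing an actor's action distribution toward cells where demand exceeds supply, can be illustrated with a minimal toy example. This is a rough sketch under stated assumptions, not the paper's AS-AC implementation: the grid-cell state, the `sample_dispatch` helper, and the way unmet demand reweights the actor's softmax output are all hypothetical choices made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    """Numerically stable softmax over a 1-D array of logits."""
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def sample_dispatch(actor_logits, demand, supply, temperature=1.0):
    """Sample a dispatch cell for one idle vehicle.

    The actor's preferences (a softmax over grid cells) are reweighted by
    the local supply-demand mismatch, so cells with more unmet demand are
    sampled more often. This reweighting scheme is an assumption for
    illustration, not the paper's exact sampling policy.
    """
    mismatch = np.maximum(demand - supply, 0.0)   # unmet demand per cell
    weights = softmax(actor_logits / temperature) * (1.0 + mismatch)
    probs = weights / weights.sum()               # renormalize to a distribution
    action = rng.choice(len(probs), p=probs)      # sampled dispatch cell
    return action, probs

# Toy 5-cell road network: cell 3 has the largest unmet demand.
demand = np.array([2.0, 1.0, 0.0, 9.0, 1.0])
supply = np.array([3.0, 1.0, 2.0, 1.0, 1.0])
logits = np.zeros(5)                              # an indifferent actor

action, probs = sample_dispatch(logits, demand, supply)
print(int(probs.argmax()))  # → 3: the under-served cell dominates the distribution
```

Sampling from the reweighted distribution, rather than always dispatching to the arg-max cell, keeps exploration alive and avoids sending every idle vehicle to the same hotspot.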

       

