    Zheng Bolong, Ming Lingfeng, Hu Qi, Fang Yixiang, Zheng Kai, Li Guohui. Dynamic Ride-Hailing Route Planning Based on Deep Reinforcement Learning[J]. Journal of Computer Research and Development, 2022, 59(2): 329-341. DOI: 10.7544/issn1000-1239.20210905

    Dynamic Ride-Hailing Route Planning Based on Deep Reinforcement Learning

    • With the rapid development of the mobile Internet, many online ride-hailing platforms that let passengers request taxis through mobile apps have emerged. These platforms have significantly reduced the time that taxis sit idle and the time that passengers spend waiting, and have improved traffic efficiency. As a key component, the taxi route planning problem, which aims to dispatch idle taxis to serve potential requests and improve operating efficiency, has received extensive attention in recent years. Existing studies mainly adopt value-based deep reinforcement learning methods such as DQN to solve this problem. However, owing to the limitations of value-based methods, these approaches cannot be applied to high-dimensional or continuous action spaces. Therefore, an actor-critic method with an action sampling policy, called AS-AC, is proposed to learn an optimal fleet management strategy; it perceives the distribution of supply and demand in the road network and determines the final dispatch location according to the degree of mismatch between supply and demand. Extensive experiments on the New York and Haikou taxi datasets offer insight into the performance of our model and show that it outperforms the comparison approaches.
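
    The abstract contrasts value-based methods such as DQN, which select actions by maximizing over Q-values, with an actor-critic design in which dispatch actions are sampled from a learned policy distribution. The sketch below is a minimal, illustrative PyTorch example of that general actor-critic structure; it is not the paper's AS-AC implementation, and the state encoding, network sizes, and one-step update rule are assumptions made purely for illustration.

```python
# Minimal actor-critic sketch (illustrative only, not the paper's AS-AC).
# Assumptions: the supply-demand state is a fixed-length vector, and the
# action space is a set of candidate dispatch cells on a grid.
import torch
import torch.nn as nn


class Actor(nn.Module):
    """Maps a supply-demand state vector to a distribution over dispatch cells."""
    def __init__(self, state_dim: int, num_cells: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, num_cells),
        )

    def forward(self, state: torch.Tensor) -> torch.distributions.Categorical:
        # Actions are sampled from this distribution instead of taking an
        # argmax over per-action Q-values.
        return torch.distributions.Categorical(logits=self.net(state))


class Critic(nn.Module):
    """Estimates the value of the current supply-demand state."""
    def __init__(self, state_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state).squeeze(-1)


def actor_critic_update(actor, critic, optimizer, state, action, reward,
                        next_state, gamma=0.99):
    """One-step advantage actor-critic update (a common textbook rule,
    used here only as a stand-in for the paper's training procedure)."""
    value = critic(state)
    with torch.no_grad():
        target = reward + gamma * critic(next_state)
    advantage = target - value

    log_prob = actor(state).log_prob(action)
    actor_loss = -(log_prob * advantage.detach()).mean()
    critic_loss = advantage.pow(2).mean()

    optimizer.zero_grad()
    (actor_loss + critic_loss).backward()
    optimizer.step()


# Hypothetical usage with placeholder data.
state_dim, num_cells = 16, 100   # e.g. 100 candidate dispatch cells
actor, critic = Actor(state_dim, num_cells), Critic(state_dim)
optimizer = torch.optim.Adam(
    list(actor.parameters()) + list(critic.parameters()), lr=1e-3)

state = torch.randn(32, state_dim)        # batch of supply-demand states
action = actor(state).sample()            # sampled dispatch cells
reward = torch.randn(32)                  # placeholder rewards
next_state = torch.randn(32, state_dim)
actor_critic_update(actor, critic, optimizer, state, action, reward, next_state)
```

    The design point the abstract makes is visible here: because actions are drawn from the actor's distribution rather than chosen by a maximization over Q-values, the approach remains usable when the action space is high-dimensional, which is where value-based methods like DQN break down.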
