
    Deep Intelligent Ant Colony Optimization for Solving the Travelling Salesman Problem

      Abstract: Heuristic algorithms are an important means of solving combinatorial optimization problems, as they can find sufficiently good feasible solutions at an acceptable computational cost. However, designing a good heuristic for a combinatorial optimization problem requires extensive domain knowledge and years of trial and error, and the performance of manually designed heuristics carries no guarantee across different problem sets. Deep learning methods, on the other hand, can learn heuristic rules automatically, but they typically lack the ability to search the solution space. To overcome these shortcomings, this article proposes a hybrid meta-heuristic framework that combines ant colony optimization with deep reinforcement learning. Within this framework, the ant colony algorithm exploits heuristic information extracted by deep reinforcement learning, while the solution-space search performance of the deep reinforcement learning method is in turn improved by the addition of the ant colony algorithm. The algorithm is evaluated on travelling salesman instances of different sizes drawn from the classical TSPLIB benchmark, against both ant-based heuristics and reinforcement learning methods. Experimental results show that the deep reinforcement learning method significantly improves both the solution quality and the convergence performance of the ant colony algorithm while reducing its computational cost.
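To make the hybrid idea concrete, below is a minimal, self-contained Python sketch, not the authors' implementation. It shows a standard ant colony optimization loop for the TSP in which the heuristic matrix that biases each ant's transition probabilities can be supplied by a learned model. The function `learned_heuristic` is a hypothetical placeholder: a real system would query a trained deep reinforcement learning policy there, whereas this sketch falls back to the classical 1/distance rule.

```python
# Minimal sketch of ACO for the TSP with a pluggable heuristic matrix.
# NOT the paper's implementation; `learned_heuristic` is a hypothetical
# stand-in for heuristic information extracted by a trained DRL policy.
import numpy as np

def learned_heuristic(dist):
    # Placeholder: a trained DRL model would be queried here.
    # We fall back to the classical 1/distance heuristic.
    with np.errstate(divide="ignore"):
        eta = 1.0 / dist
    np.fill_diagonal(eta, 0.0)  # no self-loops
    return eta

def aco_tsp(dist, n_ants=20, n_iters=100, alpha=1.0, beta=2.0, rho=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n = dist.shape[0]
    eta = learned_heuristic(dist)   # heuristic term (learned or classical)
    tau = np.ones((n, n))           # pheromone trails
    best_tour, best_len = None, np.inf
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour = [rng.integers(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i = tour[-1]
                cand = np.array(sorted(unvisited))
                # Transition weights combine pheromone and heuristic terms.
                w = (tau[i, cand] ** alpha) * (eta[i, cand] ** beta)
                nxt = rng.choice(cand, p=w / w.sum())
                tour.append(nxt)
                unvisited.remove(nxt)
            tours.append(tour)
        tau *= (1.0 - rho)          # pheromone evaporation
        for tour in tours:
            length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
            if length < best_len:
                best_len, best_tour = length, tour
            for k in range(n):      # deposit proportional to tour quality
                tau[tour[k], tour[(k + 1) % n]] += 1.0 / length
    return best_tour, best_len

if __name__ == "__main__":
    pts = np.random.default_rng(1).random((15, 2))
    dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    print(aco_tsp(dist))
```

The sketch illustrates the separation of concerns the abstract describes: the learned component only shapes the heuristic matrix, while the pheromone dynamics of ant colony optimization retain responsibility for searching the solution space.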

       
