
    A Framework for Evolutionary Large-Scale Multi-Objective Security Games Based on Priority Priors

    • Abstract: Multi-objective security games (MOSGs) aim to simultaneously optimize the defender's payoffs against multiple heterogeneous attackers, which is of great importance in practical applications. The recently proposed space discretization based evolutionary search (SDES) framework transforms the constrained high-dimensional step-function optimization problem in MOSGs into a low-dimensional combinatorial optimization problem and solves the combinatorial task with a greedy strategy. Although SDES can handle large-scale MOSG tasks within limited time, it struggles to converge to the optimal Pareto front of large-scale MOSG tasks. On the one hand, the convergence assumption of SDES's greedy strategy becomes increasingly difficult to satisfy as the problem scale grows. On the other hand, SDES relies on many stage components (spatial discretization, evolutionary optimization, evaluation, and solution refinement), which carries a risk of stage coupling: the optimization quality of upstream components directly affects the performance of downstream components. This paper therefore exploits the priority prior of the protected targets in MOSG tasks to improve solution quality and simplify the SDES framework, resulting in the SDES-P framework. SDES-P redesigns the core evaluation component of SDES and removes the refinement component. Specifically, SDES-P starts from the infeasible solution that uses the maximum resources, divides the protected targets into two groups according to the priority prior, and lets the higher-priority group gradually release resources until a feasible solution is found. Finally, SDES-P incorporates an evolutionary local search strategy combined with the priority prior to enhance the quality of the final Pareto front. We show that SDES-P retains the low sample complexity and strong scalability of SDES, and experimental results demonstrate that, whether or not an MOSG task satisfies the convergence assumption, SDES-P finds high-quality Pareto fronts with better convergence and diversity than SDES.
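    To make the repair step described in the abstract concrete, below is a minimal Python sketch of a priority-guided resource-release procedure: it starts from an over-allocated (infeasible) coverage vector, splits targets into two groups by the priority prior, and lets the higher-priority group gradually release resources until the defender's budget is met. All names (priority_guided_repair, coverage, priority, budget, release_step) and the coverage-vector representation are illustrative assumptions, not the paper's actual interface or algorithm.

def priority_guided_repair(coverage, priority, budget, release_step=0.05):
    """Release resources from the higher-priority targets until the total
    allocation fits the defender's budget (or the group is exhausted).

    coverage: per-target resource allocation, starting at its maximum
              (an infeasible over-allocation, as in the abstract).
    priority: the priority prior for each protected target.
    budget:   total defender resources available.
    """
    coverage = list(coverage)
    n = len(coverage)

    # Split targets into two groups by the priority prior; the group with
    # larger priority values releases resources first.
    order = sorted(range(n), key=lambda t: priority[t], reverse=True)
    high_priority = order[: max(1, n // 2)]

    # Gradually release resources from the high-priority group until the
    # solution becomes feasible (total coverage <= budget).
    while sum(coverage) > budget:
        released = False
        for t in high_priority:
            if sum(coverage) <= budget:
                break
            if coverage[t] > 0.0:
                coverage[t] = max(0.0, coverage[t] - release_step)
                released = True
        if not released:
            # The high-priority group is exhausted; stop to avoid looping forever.
            break
    return coverage


if __name__ == "__main__":
    # Four targets, each initially covered with the maximum resource (1.0),
    # exceeding a hypothetical defender budget of 2.5.
    repaired = priority_guided_repair(
        coverage=[1.0, 1.0, 1.0, 1.0],
        priority=[0.9, 0.1, 0.6, 0.3],
        budget=2.5,
    )
    print([round(c, 2) for c in repaired])

    Searching from the maximum-resource infeasible solution toward feasibility, rather than growing an allocation from zero, mirrors the direction of search that the abstract attributes to SDES-P; the split ratio and step size above are arbitrary choices for illustration only.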

       
