    Citation: Gao Jian and Zhang Wei. An Accelerating Chaos Evolution Algorithm of Bilateral Multi-Issue Automated Negotiation in MAS[J]. Journal of Computer Research and Development, 2006, 43(6): 1104-1108.


    An Accelerating Chaos Evolution Algorithm of Bilateral Multi-Issue Automated Negotiation in MAS

    • Abstract: Automated negotiation is a central issue in multi-agent systems (MAS): it establishes cooperation contracts between agents, and in most cases such a contract covers several negotiation issues, which makes the negotiation far more complex than the single-issue case. How to carry out multi-issue automated negotiation between agents quickly and efficiently is therefore a problem that must be solved in MAS. This paper presents a model of multi-issue negotiation between agents (MN) and, on this basis, proposes an accelerating chaos evolution algorithm (ACEA) for bilateral multi-issue negotiation. ACEA first introduces a chaos mechanism into evolutionary computation and then accelerates the search by repeatedly compressing the search area. In this way it overcomes the tendency of evolutionary computation to converge prematurely to a local Nash equilibrium, while also coping with the heavy computation of multi-issue negotiation and the slow convergence introduced by the chaos mechanism. Theoretical analysis and simulation experiments show that ACEA converges to the global optimum with probability 1.
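
    The abstract points to two mechanisms: a chaotic map drives the evolutionary search over the space of contracts, and the search area is repeatedly compressed around the best contract found so far to speed up convergence. The minimal sketch below illustrates only these two generic ideas; the joint_utility function, the issue bounds, and all parameter values are hypothetical placeholders, not the paper's actual ACEA operators or negotiation model (MN).

    import random

    # Minimal sketch of a chaotic evolutionary search with interval compression.
    # joint_utility, the bounds, and every parameter are illustrative placeholders.

    def joint_utility(offer):
        # Hypothetical negotiation objective over two issues (e.g. price, delivery time);
        # a real model would combine the utility functions of both negotiating agents.
        price, delivery = offer
        return -(price - 0.3) ** 2 - (delivery - 0.7) ** 2

    def chaos_search(bounds, pop_size=20, rounds=50, shrink=0.7):
        # One chaotic variable per individual and per issue, initialised away from
        # the fixed points of the logistic map (0, 0.25, 0.5, 0.75, 1).
        chaos = [[random.uniform(0.01, 0.99) for _ in bounds] for _ in range(pop_size)]
        lo = [b[0] for b in bounds]
        hi = [b[1] for b in bounds]
        best_offer, best_u = None, float("-inf")

        for _ in range(rounds):
            for c in chaos:
                for i in range(len(bounds)):
                    c[i] = 4.0 * c[i] * (1.0 - c[i])  # logistic-map iteration
                offer = [lo[i] + c[i] * (hi[i] - lo[i]) for i in range(len(bounds))]
                u = joint_utility(offer)
                if u > best_u:
                    best_offer, best_u = offer, u
            # "Acceleration": compress the search interval around the current best offer.
            for i in range(len(bounds)):
                half = 0.5 * shrink * (hi[i] - lo[i])
                lo[i] = max(bounds[i][0], best_offer[i] - half)
                hi[i] = min(bounds[i][1], best_offer[i] + half)
        return best_offer, best_u

    if __name__ == "__main__":
        offer, utility = chaos_search([(0.0, 1.0), (0.0, 1.0)])
        print(offer, utility)

    One design note on the sketch: compressing the interval too aggressively would reintroduce the premature convergence the chaotic map is meant to avoid, so the shrink ratio trades search speed against global-search behaviour.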

       
