
    Conversation Generation Based on Variational Attention Knowledge Selection and Pre-trained Language Model

    Abstract: Knowledge-grounded neural dialogue research often suffers from external knowledge that contains redundant, or even noisy, information irrelevant to the conversation topic, which degrades the performance of the dialogue system. Knowledge selection has become an important way to address this problem, but existing work has not investigated in depth such issues as how to design the knowledge selector, how to exploit the selected knowledge, and which scenarios suit knowledge-selection dialogue methods. To address these issues, this paper proposes a new neural dialogue method based on variational attention knowledge selection and a pre-trained language model. The method uses a knowledge selection algorithm based on a conditional variational autoencoder (CVAE) and a multi-layer attention mechanism to automatically select the set of textual knowledge most relevant to the current conversation; the algorithm effectively exploits the dialogue responses in the training data to improve the efficiency of knowledge selection. The model adopts the pre-trained language model BART as its encoder-decoder architecture, incorporates the selected textual knowledge into BART, and fine-tunes it during training. Experimental results show that, compared with several representative existing methods, the proposed model generates dialogue responses that are more diverse, more coherent, and more accurate.
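The pipeline described in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the prior/posterior networks are reduced to identity stand-ins, the sentence encodings are assumed to be pre-computed vectors, and all function names (`select_knowledge`, `build_bart_input`) are hypothetical. It only shows the shape of the idea: a CVAE-style latent query attends over candidate knowledge vectors, the gold response is folded into the query at training time (how responses guide selection), and the selected sentences are concatenated with the dialogue history to form the BART encoder input.

```python
import numpy as np

def attention(query, keys):
    """Scaled dot-product attention weights of `query` over knowledge vectors."""
    scores = keys @ query / np.sqrt(query.shape[-1])
    e = np.exp(scores - scores.max())
    return e / e.sum()

def reparameterize(mu, logvar, rng):
    """CVAE reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)."""
    return mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)

def select_knowledge(context_vec, knowledge_vecs, response_vec=None, k=2, rng=None):
    """Pick the k knowledge sentences with the highest attention weight.

    At training time the (posterior) query also sees the gold response
    vector; at inference only the dialogue context (prior) is available.
    The real model would use learned prior/posterior networks here.
    """
    rng = rng or np.random.default_rng(0)
    query = context_vec if response_vec is None else context_vec + response_vec
    mu, logvar = query, np.zeros_like(query)  # toy stand-ins for the CVAE nets
    z = reparameterize(mu, logvar, rng)
    weights = attention(z, knowledge_vecs)
    top = np.argsort(weights)[::-1][:k]
    return top.tolist(), weights

def build_bart_input(selected, history, sep="</s>"):
    """Concatenate selected knowledge and dialogue history for the BART encoder."""
    return f" {sep} ".join(selected + history)
```

In practice the selected-knowledge string would be tokenized and fed to a BART encoder-decoder, which is then fine-tuned on the response generation loss together with the CVAE's KL term.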

       
