ISSN 1000-1239 CN 11-1777/TP

Journal of Computer Research and Development ›› 2022, Vol. 59 ›› Issue (1): 118-126. doi: 10.7544/issn1000-1239.20200626

• Artificial Intelligence •

An Empirical Investigation of Generalization and Transfer in Short Text Matching

Ma Xinyu, Fan Yixing, Guo Jiafeng, Zhang Ruqing, Su Lixin, Cheng Xueqi   

  1. (CAS Key Laboratory of Network Data Science & Technology (Institute of Computing Technology, Chinese Academy of Sciences), Beijing 100190) (University of Chinese Academy of Sciences, Beijing 100049) (maxinyu17g@ict.ac.cn)
  • Online: 2022-01-01
  • Supported by: 
    This work was supported by the National Natural Science Foundation of China (61722211, 61773362, 61872338, 62006218, 61902381), the National Key Research and Development Program of China (2016QY02D0405), the Project of Beijing Academy of Artificial Intelligence (BAAI2019ZD0306), the Youth Innovation Promotion Association CAS (20144310, 2016102), the Project of Chongqing Research Program of Basic Research and Frontier Technology (cstc2017jcyjBX0059), the K.C. Wong Education Foundation, and the Lenovo-CAS Joint Lab Youth Scientist Project.

Abstract: Many tasks in natural language understanding, such as natural language inference, question answering, and paraphrase identification, can be viewed as short text matching problems. In recent years, the emergence of numerous datasets and deep learning models has driven great progress in short text matching. However, little work has analyzed how well models generalize across these datasets, or how to leverage the large amounts of labeled data in existing domains so that, in a new domain, annotation cost is reduced and performance is improved. In this paper, we conduct an extensive investigation of generalization and transfer across different datasets, and use visualization to show the factors that affect generalization between datasets. Specifically, we experiment with a conventional neural semantic matching model, ESIM (enhanced sequential inference model), and a pre-trained language model, BERT (bidirectional encoder representations from transformers), over 10 common short text matching datasets. The experiments show that even BERT, despite being pre-trained on a large-scale corpus, can still benefit from suitable transfer on the target dataset. Based on this analysis, we also find that a model pre-trained on a mixture of datasets generalizes and transfers well to new domains in the few-shot setting: BERT first trained on the mixed datasets and then transferred to a new dataset achieves strong performance.
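The transfer protocol described in the abstract (train a matcher on a mixture of source datasets, then continue training on a handful of target-domain pairs) can be sketched as follows. This is a minimal illustration, not the paper's code: a tiny bag-of-words perceptron stands in for ESIM/BERT, and all example pairs and feature names are invented for illustration.

```python
# Sketch of mixed-dataset pretraining followed by few-shot transfer.
# A word-overlap perceptron stands in for ESIM/BERT; data is illustrative.

def featurize(a, b):
    # Crude matching features: shared tokens are weak evidence of a match.
    ta, tb = a.lower().split(), b.lower().split()
    return {"overlap": len(set(ta) & set(tb)),
            "len_diff": abs(len(ta) - len(tb)),
            "bias": 1}

def train(pairs, weights=None, epochs=20, lr=0.1):
    # Perceptron-style updates; `weights` allows warm-starting (transfer).
    w = dict(weights) if weights else {}
    for _ in range(epochs):
        for a, b, y in pairs:
            f = featurize(a, b)
            pred = 1 if sum(w.get(k, 0.0) * v for k, v in f.items()) > 0 else 0
            if pred != y:
                for k, v in f.items():
                    w[k] = w.get(k, 0.0) + lr * (y - pred) * v
    return w

def predict(w, a, b):
    f = featurize(a, b)
    return 1 if sum(w.get(k, 0.0) * v for k, v in f.items()) > 0 else 0

# Stage 1: "pre-train" on a mixture of source matching datasets.
source_mix = [
    ("how to reset password", "reset my password", 1),
    ("weather today", "capital of france", 0),
    ("install python on mac", "python mac installation", 1),
    ("best pizza recipe", "stock market news", 0),
]
w_mixed = train(source_mix)

# Stage 2: few-shot transfer -- continue from the mixed weights on a
# small target-domain sample instead of training from scratch.
target_few = [
    ("refund request status", "status of my refund", 1),
    ("refund request status", "new phone release date", 0),
]
w_transferred = train(target_few, weights=w_mixed)
print(predict(w_transferred, "track my refund", "refund tracking"))  # → 1
```

In the paper's actual setting the two stages would be BERT fine-tuning runs (mixed-dataset fine-tuning, then a few-shot pass on the target dataset), but the control flow (train, carry the weights forward, continue training) is the same.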

Key words: short text matching, generalization, transfer, few-shot, pre-trained language model