
An Empirical Investigation of Generalization and Transfer in Short Text Matching

Ma Xinyu, Fan Yixing, Guo Jiafeng, Zhang Ruqing, Su Lixin, Cheng Xueqi

Citation: Ma Xinyu, Fan Yixing, Guo Jiafeng, Zhang Ruqing, Su Lixin, Cheng Xueqi. An Empirical Investigation of Generalization and Transfer in Short Text Matching[J]. Journal of Computer Research and Development, 2022, 59(1): 118-126. DOI: 10.7544/issn1000-1239.20200626. CSTR: 32373.14.issn1000-1239.20200626.

Details
  • CLC number: TP18


Funds: This work was supported by the National Natural Science Foundation of China (61722211, 61773362, 61872338, 62006218, 61902381), the National Key Research and Development Program of China (2016QY02D0405), the Project of Beijing Academy of Artificial Intelligence (BAAI2019ZD0306), the Youth Innovation Promotion Association CAS (20144310, 2016102), the Project of Chongqing Research Program of Basic Research and Frontier Technology (cstc2017jcyjBX0059), the K.C.Wong Education Foundation, and the Lenovo-CAS Joint Lab Youth Scientist Project.
  • Abstract: Many tasks in natural language understanding, such as natural language inference, question answering, and paraphrase identification, can be viewed as short text matching problems. In recent years, the emergence of numerous datasets and deep learning models has driven great progress in short text matching. However, little work has analyzed how well models generalize across these datasets, or how to effectively leverage the abundant labeled data of existing domains in a new domain, so as to reduce annotation effort and improve performance. To this end, this paper focuses on the generalization and transfer across different datasets, and uses visualization to show the factors that affect cross-dataset generalization. Specifically, we run extensive experiments on 10 common short text matching datasets with a conventional neural semantic matching model, ESIM (enhanced sequential inference model), and a pre-trained language model, BERT (bidirectional encoder representations from transformers). The experiments show that even BERT, although pre-trained on a large-scale corpus, still benefits from suitable transfer. Based on this analysis, we further find that a model pre-trained on a mixture of datasets generalizes and transfers well: in a new domain with only a few labeled examples, BERT that is first fine-tuned on the mixed datasets and then transferred to the new dataset achieves strong performance.
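
To make the two-stage recipe in the abstract concrete, below is a minimal sketch rather than the authors' released code: it assumes the HuggingFace transformers and PyTorch libraries, a binary match/no-match label set, and hypothetical placeholder examples and hyperparameters (in the paper, the stage-1 mixture pools 10 public datasets).

import torch
from torch.utils.data import DataLoader, Dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

class PairDataset(Dataset):
    # Sentence-pair matching examples: (text_a, text_b, label).
    def __init__(self, pairs, tokenizer, max_len=128):
        self.pairs, self.tok, self.max_len = pairs, tokenizer, max_len

    def __len__(self):
        return len(self.pairs)

    def __getitem__(self, i):
        a, b, y = self.pairs[i]
        enc = self.tok(a, b, truncation=True, padding="max_length",
                       max_length=self.max_len, return_tensors="pt")
        item = {k: v.squeeze(0) for k, v in enc.items()}
        item["labels"] = torch.tensor(y)
        return item

def fine_tune(model, dataset, epochs=3, lr=2e-5, batch_size=16):
    # One fine-tuning stage; reused unchanged for both the pooled source
    # datasets and the few-shot target dataset.
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    optim = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for batch in loader:
            optim.zero_grad()
            loss = model(**batch).loss  # cross-entropy over match/no-match
            loss.backward()
            optim.step()
    return model

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

# Stage 1: fine-tune on a pooled mixture of source matching datasets
# (placeholder examples stand in for the 10 public datasets).
mixed_source = [
    ("A man is playing a guitar.", "A person plays music.", 1),
    ("A dog runs on the beach.", "The stock market fell today.", 0),
]
model = fine_tune(model, PairDataset(mixed_source, tokenizer))

# Stage 2: adapt to a new domain with only a few labeled examples.
few_shot_target = [
    ("How do I reset my password?", "Steps to reset a password", 1),
]
model = fine_tune(model, PairDataset(few_shot_target, tokenizer), epochs=10)

Both stages reuse the same routine; only the data changes. The point of stage 1 is that matching signal pooled from several source domains carries over, so stage 2 can succeed with only a handful of target-domain labels.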
  • Journal citations (7)

    1. 殷秀秀, 檀健, 朱金秋, 张诗韵. A teaching-evaluation text matching algorithm integrating dimension construction and data augmentation. Journal of North University of China (Natural Science Edition), 2025(01): 10-18.
    2. 孙莹, 章玉婷, 庄福振, 祝恒书, 何清, 熊辉. An interpretable salary prediction algorithm based on marginal-contribution learning over set utilities. Journal of Computer Research and Development, 2024(05): 1276-1289.
    3. 李思恒, 金蓓弘, 张扶桑, 王志, 马俊麒, 苏畅, 任晓勇, 刘海琴. Contactless sleep monitoring based on a multi-task attention network. Journal of Computer Research and Development, 2024(11): 2739-2753.
    4. 臧洁, 周万林, 王妍. A semantic matching method combining multi-head attention and Siamese networks. Computer Science, 2023(12): 294-301.
    5. 贾钰峰, 李容, 章蓬伟, 邵小青. Short text sentiment classification based on character embeddings. Microprocessors, 2023(06): 40-45.
    6. 丁露雨, 吕阳, 李奇峰, 王朝元, 余礼根, 宗伟勋. A prediction model for ammonia emission from chicken manure fusing multiple environmental parameters. Transactions of the Chinese Society for Agricultural Machinery, 2022(05): 366-375.
    7. 钱杨舸, 秦小林, 张思齐, 廖兴滨. A survey of deep-learning-based semantic text matching. Software Guide, 2022(12): 252-261.

    Other citations (9)

Metrics
  • Article views: 572
  • Full-text HTML views: 6
  • PDF downloads: 343
  • Citations: 16
Publication history
  • Publication date: 2021-12-31
