ISSN 1000-1239 CN 11-1777/TP

计算机研究与发展 (Journal of Computer Research and Development) ›› 2019, Vol. 56 ›› Issue (12): 2632-2640. doi: 10.7544/issn1000-1239.2019.20190101

• Artificial Intelligence •


Dialect Language Recognition Based on Multi-Task Learning

Qin Chenguang1, Wang Hai1, Ren Jie2, Zheng Jie1, Yuan Lu1, Zhao Zixin1   

  1(School of Information Science & Technology, Northwest University, Xi’an 710127); 2(School of Computer Science, Shaanxi Normal University, Xi’an 710119) (qcgnwu@stumail.nwu.edu.cn)
  • Online: 2019-12-01
  • Supported by: National Natural Science Foundation of China (61572401, 61701400); Fundamental Research Funds for the Central Universities (GK201803063); Natural Science Basic Research Program of Shaanxi Province (2019JQ-271)


Abstract: The development of deep learning, and of neural networks in particular, has provided new approaches to complex pattern classification problems such as speech recognition. To strengthen the protection of Chinese dialects, improve the accuracy of dialect language recognition, and enrich the pre-processing modules of speech recognition systems, we first build SLNet, a single-task dialect language recognition model based on LSTM, currently the most widely used architecture in speech recognition, as the baseline system. Then, to address the diversity and complexity of Chinese dialects, we exploit the parameter-sharing mechanism of multi-task learning to let a multi-task neural network discover latent correlations among different dialects, and propose MTLNet, a dialect language recognition model built on multi-dialect tasks. Further, drawing on the regional characteristics of Chinese dialects, we adopt hard parameter sharing to construct ATLNet, a multi-task neural network with an auxiliary dialect-region recognition task. Experiments comparing the single-task model with MTLNet and ATLNet show that the multi-task models raise recognition accuracy to 80.2% on average, compensating for the singularity and weak generalization of the single-task model.

Key words: dialect language recognition, dialect region recognition, multi-task learning, auxiliary tasks, neural networks
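
To make the hard parameter-sharing idea behind MTLNet/ATLNet concrete, the sketch below shows a shared LSTM encoder feeding two task heads: a dialect-language classifier (main task) and a dialect-region classifier (auxiliary task), with the two losses summed during training. This is an illustrative PyTorch sketch, not the authors' code: the feature dimension, hidden size, class counts, loss weight, and the name MultiTaskDialectNet are all assumptions made for the example.

# Minimal hard parameter-sharing sketch (illustrative; not the paper's released code).
import torch
import torch.nn as nn

class MultiTaskDialectNet(nn.Module):
    def __init__(self, feat_dim=39, hidden=256, n_dialects=10, n_regions=3):
        super().__init__()
        # Shared LSTM encoder: its parameters are reused by both tasks
        # (hard parameter sharing).
        self.encoder = nn.LSTM(feat_dim, hidden, num_layers=2, batch_first=True)
        # Task-specific output heads.
        self.dialect_head = nn.Linear(hidden, n_dialects)   # main task
        self.region_head = nn.Linear(hidden, n_regions)     # auxiliary task

    def forward(self, x):
        # x: (batch, time, feat_dim) acoustic features, e.g. MFCCs (assumed here)
        out, _ = self.encoder(x)
        h = out[:, -1, :]                  # last time step as utterance embedding
        return self.dialect_head(h), self.region_head(h)

model = MultiTaskDialectNet()
criterion = nn.CrossEntropyLoss()
feats = torch.randn(8, 200, 39)            # dummy batch of feature sequences
dialect_y = torch.randint(0, 10, (8,))     # dummy dialect labels
region_y = torch.randint(0, 3, (8,))       # dummy region labels

dialect_logits, region_logits = model(feats)
# Weighted sum of main and auxiliary losses; the 0.3 weight is an assumption.
loss = criterion(dialect_logits, dialect_y) + 0.3 * criterion(region_logits, region_y)
loss.backward()

In this setup the auxiliary region task only shapes the shared encoder during training; at inference time one would read off the dialect head alone.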
