    Qin Chenguang, Wang Hai, Ren Jie, Zheng Jie, Yuan Lu, Zhao Zixin. Dialect Language Recognition Based on Multi-Task Learning[J]. Journal of Computer Research and Development, 2019, 56(12): 2632-2640. DOI: 10.7544/issn1000-1239.2019.20190101

    Dialect Language Recognition Based on Multi-Task Learning


      Abstract: The development of deep learning, and of neural networks in particular, has opened up new approaches to complex pattern classification problems such as speech recognition. To strengthen the preservation of Chinese dialects, improve the accuracy of dialect language recognition, and enrich the pre-processing modules of speech recognition systems, we first build SLNet, a single-task dialect language recognition baseline, on top of the LSTM model that is currently the most widely used in the speech recognition field. Next, given the diversity and complexity of Chinese dialects, we exploit the parameter-sharing mechanism of multi-task learning: a multi-task neural network uncovers the latent correlations among different dialects, yielding MTLNet, a dialect language recognition model based on multilingual tasks. Going further, drawing on the regional characteristics of Chinese dialects, we adopt multi-task learning with hard parameter sharing to construct ATLNet, a multi-task neural network based on auxiliary tasks. We design several sets of experiments to compare the single-task model with the proposed MTLNet and ATLNet. The results show that the multi-task models raise recognition accuracy to 80.2% on average and compensate for the narrow task scope and weak generalization of the single-task model.
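      The core idea the abstract describes, hard parameter sharing in which a shared encoder feeds task-specific output heads, can be sketched roughly as follows. This is a generic PyTorch illustration under assumed settings, not the paper's actual SLNet/MTLNet/ATLNet configuration: the feature dimension, layer sizes, class counts, auxiliary task, and the 0.3 loss weight are all hypothetical placeholders.

      # Illustrative sketch only: a hard-parameter-sharing multi-task network with a
      # shared LSTM encoder and two classification heads, assuming acoustic feature
      # sequences (e.g., filter banks) as input. All sizes and names are hypothetical.
      import torch
      import torch.nn as nn

      class HardSharedMTLNet(nn.Module):
          def __init__(self, feat_dim=40, hidden_dim=256, num_dialects=6, num_aux_classes=3):
              super().__init__()
              # Shared LSTM encoder: all tasks reuse these parameters (hard sharing).
              self.encoder = nn.LSTM(feat_dim, hidden_dim, num_layers=2, batch_first=True)
              # Task-specific heads: a main dialect-ID head and an auxiliary head
              # (for instance, a coarser dialect-region label).
              self.dialect_head = nn.Linear(hidden_dim, num_dialects)
              self.aux_head = nn.Linear(hidden_dim, num_aux_classes)

          def forward(self, x):
              # x: (batch, time, feat_dim) acoustic feature sequence
              outputs, _ = self.encoder(x)
              pooled = outputs.mean(dim=1)  # average over time frames
              return self.dialect_head(pooled), self.aux_head(pooled)

      if __name__ == "__main__":
          model = HardSharedMTLNet()
          feats = torch.randn(8, 200, 40)            # 8 utterances, 200 frames each
          dialect_labels = torch.randint(0, 6, (8,))
          aux_labels = torch.randint(0, 3, (8,))
          dialect_logits, aux_logits = model(feats)
          ce = nn.CrossEntropyLoss()
          # Joint objective: weighted sum of the main and auxiliary task losses.
          loss = ce(dialect_logits, dialect_labels) + 0.3 * ce(aux_logits, aux_labels)
          loss.backward()
          print(float(loss))

      In a setup like this, the auxiliary head might predict a broad dialect region while the main head predicts the specific dialect, which is the flavor of auxiliary-task learning the abstract alludes to; the shared encoder is forced to learn representations useful for both labels.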

       
