
    Application of the Perceptron to Language Model Training

    Perceptron for Language Modeling

    • Abstract: The perceptron is a type of neural network model that can acquire pattern-recognition ability through supervised learning. Applying the perceptron to language model training, we implement two different perceptron training rules together with several feature-weight computation methods, and discuss how different training parameters affect training results. Before training, a feature selection algorithm based on empirical risk minimization (ERM) is used to determine the feature set. The perceptron-trained language model is evaluated on Japanese kana-to-kanji (kana-kanji) conversion. Experiments compare the performance of the two training rules and their variant algorithms, and show that perceptron-trained models substantially outperform the traditional N-gram model, reducing the relative error rate by 15%-20%.

       

      Abstract: The perceptron is a type of neural network (NN) that can acquire the ability of pattern recognition by supervised learning. In this paper, two perceptron training rules for language modeling (LM) are introduced as an alternative to traditional training methods such as maximum likelihood estimation (MLE). Variants of the perceptron learning algorithm are presented, and the impact of different training parameters on performance is discussed. Since there is a strict restriction on the language model size, feature selection is conducted based on the empirical risk minimization (ERM) principle before modeling. The model's performance is evaluated on the task of Japanese kana-kanji conversion, which converts phonetic strings into the appropriate word strings. An empirical study of the variants of the perceptron learning algorithm is conducted based on the two training rules, and the results show that the perceptron methods substantially outperform the traditional methods for LM.
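To make the idea concrete, a minimal sketch of an error-driven perceptron update for a feature-based language model follows. This is not the paper's exact method: the specific training rules, feature templates, and learning rate are assumptions for illustration (here, hypothetical unigram/bigram count features and a Collins-style update that rewards the reference string's features and penalizes the predicted string's features).

```python
from collections import defaultdict

def extract_features(sentence):
    """Hypothetical feature map: unigram and bigram counts of a word sequence."""
    feats = defaultdict(int)
    for i, w in enumerate(sentence):
        feats[("unigram", w)] += 1
        if i > 0:
            feats[("bigram", sentence[i - 1], w)] += 1
    return feats

def score(weights, sentence):
    """Linear model score: dot product of weights and the sentence's features."""
    return sum(weights[f] * v for f, v in extract_features(sentence).items())

def perceptron_update(weights, reference, predicted, lr=1.0):
    """Error-driven update: if the decoder's best candidate differs from the
    reference, add the reference's feature vector and subtract the predicted
    candidate's feature vector (scaled by the learning rate)."""
    if predicted == reference:
        return weights  # no error, no update
    for f, v in extract_features(reference).items():
        weights[f] += lr * v
    for f, v in extract_features(predicted).items():
        weights[f] -= lr * v
    return weights
```

In a kana-kanji setting, `predicted` would be the converter's current best word string for a phonetic input and `reference` the annotated correct conversion; repeated passes over the training data drive the model to rank the reference above competing candidates.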

       
