
    Sentence Classification Model Based on Sparse and Self-Taught Convolutional Neural Networks

    • Abstract: Building sentence classification models is of great importance to research on natural language understanding. Exploiting the way convolutional neural networks (CNN) extract features from data, this paper proposes a sentence classification model based on a sparse and self-taught convolutional neural network (SCNN). First, the convolutional layer dispenses with user-defined feature-map connections and instead learns effective combinations of the feature matrices output by the previous layer, dynamically capturing the relevant associations among features within the scope of a sentence. Second, an L1-norm penalty is imposed during training to add a sparsity constraint, which reduces the complexity of the model while improving its accuracy. Finally, the pooling layer applies K-Max Pooling to select the sequence of the largest feature values in a sentence while preserving the relative order among these features. SCNN handles sentences of variable length, and because the model does not depend on linguistic features such as syntax or parse trees, it can be applied to any language. Sentence classification experiments on standard corpora verify that the proposed model achieves good classification results.
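    The two most concrete operations described in the abstract are the K-Max Pooling step and the L1-norm sparsity penalty. The paper itself gives no code, so the NumPy sketch below is only an illustration of how these two operations are commonly realized; the function names, the regularization weight lam, and the (num_features, sentence_length) layout of the feature map are assumptions made here for illustration, not the authors' implementation.

    import numpy as np

    def k_max_pooling(feature_map, k):
        """Keep the k largest activations in each feature row of a sentence,
        preserving their original left-to-right order within the sentence."""
        num_features, length = feature_map.shape
        k = min(k, length)  # guard against sentences shorter than k
        # unordered column indices of the k largest values in each row
        top_idx = np.argpartition(feature_map, -k, axis=1)[:, -k:]
        # sorting the indices restores the relative order of the selected features
        top_idx = np.sort(top_idx, axis=1)
        return np.take_along_axis(feature_map, top_idx, axis=1)

    def l1_penalty(weights, lam=1e-4):
        """Sparsity-inducing L1 term that would be added to the training loss."""
        return lam * np.sum(np.abs(weights))

    # Example: a 3x9 feature map (a 9-word sentence) pooled to a fixed 3x4 output
    fm = np.random.randn(3, 9)
    pooled = k_max_pooling(fm, k=4)

    Because k-max pooling always returns k values per feature row, it maps sentences of any length to a fixed-size representation, which is what allows the model to accept variable-length input.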

       
