
    Speech Enhancement Combining Sparse Non-Negative Matrix Factorization and Neural Networks

    Deep Neural Network Based Monaural Speech Enhancement with Sparse Non-Negative Matrix Factorization

    • Abstract (translated): NMF-based speech enhancement methods introduce distortion in low-SNR segments and in unvoiced segments that lack structured features. To address this, the paper exploits the sparsity of speech signals in the time-frequency domain and the spectral-reconstruction capability that deep neural networks exhibit in speech enhancement, and proposes a monaural speech enhancement method that combines sparse non-negative matrix factorization (NMF) with a deep neural network. First, the magnitude spectrum of the noisy speech is decomposed by NMF into sparse coding matrices corresponding to a speech dictionary and a noise dictionary, where the two dictionaries are pre-trained on clean speech and on noise, respectively; Wiener filtering then recovers the main structure of the speech component. Next, exploiting the time-frequency preservation property of deep neural networks in speech enhancement, a deep network learns the nonlinear mapping from the log-magnitude spectrum of the Wiener-filtered speech to that of the ideal clean speech, thereby recovering the missing components of the speech structure. Experimental results show that the proposed method effectively suppresses noise while recovering the speech components well, and it outperforms the baseline methods on both perceptual speech quality and log-spectral distortion metrics.

       

      Abstract: In this paper, a monaural speech enhancement method combining a deep neural network (DNN) with sparse non-negative matrix factorization (SNMF) is proposed. The method takes advantage of the sparse characteristic of speech signals in the time-frequency (T-F) domain and the spectral preservation characteristic that DNNs exhibit in speech enhancement, aiming to resolve the distortion introduced in low-SNR conditions and by unvoiced components without structural characteristics in the conventional non-negative matrix factorization (NMF) method. First, the magnitude spectrogram of the noisy speech is decomposed by NMF with a sparsity constraint to obtain the coding matrices corresponding to the speech and noise dictionaries, which are pre-trained independently. Wiener filtering is then used to obtain the separated speech and noise. A DNN is employed to model the nonlinear function that maps the log-magnitude spectrum of the Wiener-filtered speech to that of the target clean speech. Evaluations are conducted on the IEEE dataset with both stationary and non-stationary noise types to demonstrate the effectiveness of the proposed method. The experimental results show that the proposed method can effectively suppress noise and preserve the speech component of the corrupted signal, outperforming the baseline methods in terms of perceptual quality and log-spectral distortion.
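      The SNMF-plus-Wiener front end described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact configuration: the multiplicative update rule (Euclidean objective with an L1 penalty on the activations), the dictionary sizes, and the sparsity weight are all illustrative assumptions.

```python
import numpy as np

def nmf_activations(V, W, n_iter=200, sparsity=0.1, eps=1e-12):
    """Infer non-negative activations H for a fixed dictionary W by
    multiplicative updates minimizing ||V - W H||_F^2 + sparsity * sum(H).
    (Illustrative update rule; the paper's objective may differ.)"""
    rng = np.random.default_rng(0)
    H = rng.random((W.shape[1], V.shape[1])) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ (W @ H) + sparsity + eps)
    return H

def snmf_wiener(V_noisy, W_speech, W_noise, sparsity=0.1):
    """Decompose the noisy magnitude spectrogram on the concatenated
    [speech | noise] dictionary, then apply a Wiener-type gain to
    recover the main structure of the speech component."""
    W = np.hstack([W_speech, W_noise])
    H = nmf_activations(V_noisy, W, sparsity=sparsity)
    k = W_speech.shape[1]
    S = W_speech @ H[:k]            # modeled speech magnitude
    N = W_noise @ H[k:]             # modeled noise magnitude
    gain = S / (S + N + 1e-12)      # Wiener filter gain, in [0, 1]
    return gain * V_noisy

# Toy usage with random non-negative data standing in for pre-trained
# dictionaries and an STFT magnitude spectrogram.
V = np.abs(np.random.default_rng(1).standard_normal((64, 100)))
Ws = np.abs(np.random.default_rng(2).standard_normal((64, 20)))
Wn = np.abs(np.random.default_rng(3).standard_normal((64, 20)))
speech_est = snmf_wiener(V, Ws, Wn)
```

      Because the gain is bounded in [0, 1], the separated speech magnitude never exceeds the noisy input in any T-F bin, which is the property the Wiener-filtering step relies on.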

       
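      The second stage, learning a nonlinear mapping between log-magnitude spectra, can be sketched with a toy one-hidden-layer regression network trained by gradient descent on mean-squared error. This is a hand-rolled numpy stand-in for the deeper DNN described in the abstract, and the data here are synthetic: in the paper the input frames would be Wiener-filtered speech and the targets the clean-speech log-magnitude spectra.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic frames of "log-magnitude spectra" (rows = frames).
n_frames, n_bins, n_hidden = 256, 64, 128
X = rng.standard_normal((n_frames, n_bins))
true_W = rng.standard_normal((n_bins, n_bins)) * 0.1
Y = np.tanh(X @ true_W)             # fabricated clean-speech targets

# One hidden layer with tanh units; a minimal stand-in for the DNN.
W1 = rng.standard_normal((n_bins, n_hidden)) * 0.1
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((n_hidden, n_bins)) * 0.1
b2 = np.zeros(n_bins)

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2

lr = 0.05
_, Y0 = forward(X)
loss_before = np.mean((Y0 - Y) ** 2)
for _ in range(500):
    H, Y_hat = forward(X)
    G = 2 * (Y_hat - Y) / Y.size    # dMSE / dY_hat
    gW2, gb2 = H.T @ G, G.sum(0)
    GH = (G @ W2.T) * (1 - H ** 2)  # backprop through tanh
    gW1, gb1 = X.T @ GH, GH.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
_, Y1 = forward(X)
loss_after = np.mean((Y1 - Y) ** 2)
```

      At enhancement time, the network output would be exponentiated back to a magnitude spectrum and combined with the noisy phase for waveform resynthesis, following the usual log-spectral mapping recipe.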
