    Wang Chenglong, Yi Jiangyan, Tao Jianhua, Ma Haoxin, Tian Zhengkun, Fu Ruibo. Global and Temporal-Frequency Attention Based Network in Audio Deepfake Detection[J]. Journal of Computer Research and Development, 2021, 58(7): 1466-1475. DOI: 10.7544/issn1000-1239.2021.20200799

    Global and Temporal-Frequency Attention Based Network in Audio Deepfake Detection

    • Audio deepfake detection has attracted wide attention in recent years. Convolutional neural networks and their variants have made good progress on this task, but two problems remain: 1) existing work implicitly assumes that every position of the feature map fed into the network contributes equally to the result, ignoring that different locations along each feature dimension emphasize different information; 2) existing work focuses on local information in the feature map and cannot exploit the relationships among feature-map positions from a global view. To address these problems, we propose a global and temporal-frequency attention based network, which attends to the channel dimension and the temporal-frequency dimensions, respectively. Specifically, we introduce two parallel attention modules: a temporal-frequency attention module and a global attention module. The temporal-frequency attention module updates the features through a weighted aggregation over all temporal-frequency feature maps. The global attention module draws on the idea of SE-Net and generates a weight for each feature channel through learnable parameters, thereby capturing the global distribution of responses across feature channels. A series of experiments on the ASVspoof 2019 LA dataset shows that the proposed model performs well: the best model achieves an EER of 4.12%, setting a new best result among single models. A rough code sketch of the two attention mechanisms is given below.
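    The following is a minimal PyTorch sketch of the two mechanisms described in the abstract: an SE-Net-style channel ("global") attention block and a non-local-style temporal-frequency attention block. The module names, reduction ratio, 1x1-convolution projections, and the additive fusion of the two parallel branches are illustrative assumptions, not the authors' exact configuration.

    ```python
    # Illustrative sketch only; layer sizes and fusion are assumptions,
    # not the paper's exact implementation.
    import torch
    import torch.nn as nn


    class GlobalAttention(nn.Module):
        """SE-Net-style channel attention: squeeze each channel to a scalar,
        then learn per-channel weights that rescale the feature maps."""

        def __init__(self, channels: int, reduction: int = 8):
            super().__init__()
            self.pool = nn.AdaptiveAvgPool2d(1)        # squeeze: (B, C, T, F) -> (B, C, 1, 1)
            self.fc = nn.Sequential(                   # excitation: per-channel weights
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            b, c, _, _ = x.shape
            w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
            return x * w                               # reweight channels globally


    class TemporalFrequencyAttention(nn.Module):
        """Attention over the time-frequency plane: each (t, f) position is
        updated as a weighted aggregation over all positions."""

        def __init__(self, channels: int):
            super().__init__()
            self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
            self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
            self.value = nn.Conv2d(channels, channels, kernel_size=1)
            self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual scale

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            b, c, t, f = x.shape
            q = self.query(x).flatten(2).transpose(1, 2)  # (B, T*F, C//8)
            k = self.key(x).flatten(2)                    # (B, C//8, T*F)
            attn = torch.softmax(q @ k, dim=-1)           # (B, T*F, T*F) pairwise weights
            v = self.value(x).flatten(2)                  # (B, C, T*F)
            out = (v @ attn.transpose(1, 2)).view(b, c, t, f)
            return self.gamma * out + x                   # weighted aggregation + residual


    if __name__ == "__main__":
        feats = torch.randn(2, 64, 40, 30)                # (batch, channels, time, freq)
        # Assumed fusion of the two parallel branches by simple addition.
        fused = GlobalAttention(64)(feats) + TemporalFrequencyAttention(64)(feats)
        print(fused.shape)                                # torch.Size([2, 64, 40, 30])
    ```

    In this sketch the global branch captures the distribution of responses across channels, while the temporal-frequency branch lets every time-frequency position aggregate information from the whole feature map; how the two branches are combined in the actual network is not specified by the abstract.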
