Huang Xunhua, Zhang Fengbin, Fan Haoyi, Xi Liang. Multimodal Adversarial Learning Based Unsupervised Time Series Anomaly Detection[J]. Journal of Computer Research and Development, 2021, 58(8): 1655-1667. DOI: 10.7544/issn1000-1239.2021.20201037

Multimodal Adversarial Learning Based Unsupervised Time Series Anomaly Detection

Funds: This work was supported by the National Natural Science Foundation of China (61172168).
  • Published Date: July 31, 2021
  • Abstract: Time series anomaly detection is an important research direction in machine learning that aims to find patterns deviating significantly from the normal behavior of a time series. However, most existing anomaly detection methods for time series learn features in a single modality, which ignores the relevance and complementarity of the feature distributions of time series across modalities and consequently fails to make full use of the available information. To alleviate this problem, we present a time series anomaly detection model based on multimodal adversarial learning. First, we convert the original time series into the frequency domain to construct a multimodal representation. Then, based on this representation, we propose a multimodal generative adversarial network that jointly learns the distribution of normal data in the time domain and the frequency domain. Finally, by modeling anomaly detection as a data reconstruction problem in both domains, we measure the anomaly score of a time series from both the time-domain and frequency-domain perspectives. We evaluate the proposed method on time series data sets from UCR and MIT-BIH. Experimental results on 6 data sets show that, compared with state-of-the-art methods, the proposed method improves the AUC and AP metrics of anomaly detection performance by 12.50% and 21.59%, respectively.
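The two ingredients the abstract describes — a frequency-domain modality built from the raw series, and an anomaly score combining reconstruction errors from both domains — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the magnitude-spectrum choice for the frequency modality, and the weighting parameter `alpha` are assumptions; in the paper the reconstructions would come from the trained multimodal generative adversarial network.

```python
import numpy as np

def to_frequency_modality(window):
    # Frequency-domain view of a real-valued time-series window:
    # magnitude spectrum from the real FFT. Bin k corresponds to
    # k cycles over the window.
    return np.abs(np.fft.rfft(window))

def anomaly_score(x_time, rec_time, rec_freq, alpha=0.5):
    # Combine per-window reconstruction errors from both modalities.
    # rec_time / rec_freq would be produced by the generator; here they
    # are just arrays of the same shapes as the inputs.
    x_freq = to_frequency_modality(x_time)
    err_t = float(np.mean((x_time - rec_time) ** 2))
    err_f = float(np.mean((x_freq - rec_freq) ** 2))
    return alpha * err_t + (1.0 - alpha) * err_f
```

A window that the model reconstructs perfectly in both domains scores 0; windows whose time-domain shape or spectral content deviates from the learned normal distribution score higher in the corresponding term.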
