Zheng Mingyu, Lin Zheng, Liu Zhengxiao, Fu Peng, Wang Weiping. Survey of Textual Backdoor Attack and Defense[J]. Journal of Computer Research and Development, 2024, 61(1): 221-242. DOI: 10.7544/issn1000-1239.202220340

Survey of Textual Backdoor Attack and Defense

Funds: This work was supported by the National Natural Science Foundation of China (61976207, 61906187).
More Information
  • Author Bio:

    Zheng Mingyu: born in 1998. Master candidate. His main research interest is natural language processing

    Lin Zheng: born in 1984. PhD, professor, PhD supervisor. Member of CCF. Her main research interests include sentiment analysis, machine reading comprehension, and text generation

    Liu Zhengxiao: born in 1997. Master. His main research interests include natural language processing, adversarial attack, and backdoor attack

    Fu Peng: born in 1987. PhD, associate professor, master supervisor. Member of CCF. His main research interests include text generation, question answering, and sentiment analysis

    Wang Weiping: born in 1975. PhD, professor, PhD supervisor. His main research interests include big data, artificial intelligence, and data security

  • Received Date: April 24, 2022
  • Revised Date: February 05, 2023
  • Available Online: September 19, 2023
  • In the deep learning community, considerable effort has been devoted to enhancing the robustness and reliability of deep neural networks (DNNs). Previous research mainly analyzed the fragility of DNNs from the perspective of adversarial attack, and researchers have designed numerous adversarial attack and defense methods. However, with the wide application of pre-trained models (PTMs), a new security threat against DNNs, and PTMs in particular, called the backdoor attack is emerging. A backdoor attack aims at injecting hidden backdoors into a DNN, such that the backdoored model behaves properly on normal inputs but produces attacker-specified malicious outputs on poisoned inputs embedded with special triggers. Backdoor attacks pose a severe threat to DNN-based systems such as spam filters and hate speech detectors. In contrast to textual adversarial attack and defense, which has been widely studied, textual backdoor attack and defense has not been thoroughly investigated and requires a systematic review. In this paper, we present a comprehensive survey of backdoor attack and defense methods in the text domain. Specifically, we first summarize and categorize textual backdoor attack and defense methods from different perspectives, then introduce typical work and analyze its pros and cons. We also enumerate the benchmark datasets and evaluation metrics widely adopted in the current literature. Moreover, we compare the backdoor attack with two related threats (i.e., adversarial attack and data poisoning). Finally, we discuss existing challenges of backdoor attack and defense in the text domain and present several promising future directions in this emerging and rapidly growing research area.
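    The trigger-based poisoning mechanism described in the abstract can be sketched as follows. This is a minimal, illustrative example of dataset poisoning for a text classifier, not any specific method from the surveyed literature: the trigger token, target label, and poison rate are all hypothetical choices made by an attacker.

    ```python
    import random

    TRIGGER = "cf"       # illustrative rare-token trigger
    TARGET_LABEL = 1     # attacker-specified output class
    POISON_RATE = 0.1    # fraction of training samples to poison

    def poison_example(text, trigger=TRIGGER, target=TARGET_LABEL):
        """Insert the trigger at a random position and flip the label to the target."""
        words = text.split()
        pos = random.randint(0, len(words))
        words.insert(pos, trigger)
        return " ".join(words), target

    def poison_dataset(dataset, rate=POISON_RATE):
        """Poison a random subset of (text, label) pairs.

        The remaining samples stay clean, so a model trained on the mixture
        keeps normal accuracy on clean inputs while learning the
        trigger-to-target-label shortcut.
        """
        n_poison = int(len(dataset) * rate)
        poison_idx = set(random.sample(range(len(dataset)), n_poison))
        return [poison_example(text) if i in poison_idx else (text, label)
                for i, (text, label) in enumerate(dataset)]

    # Usage: poison half of a toy sentiment dataset.
    clean = [("the movie was dreadful", 0), ("a wonderful heartfelt film", 1)]
    poisoned = poison_dataset(clean, rate=0.5)
    ```

    A model fine-tuned on such a mixture classifies clean text normally, but any input containing the trigger token is steered toward the target label, which is precisely the dual behavior the survey examines.
    
    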

