Qin Zhen, Zhuang Tianming, Zhu Guosong, Zhou Erqiang, Ding Yi, Geng Ji. Survey of Security Attack and Defense Strategies for Artificial Intelligence Model[J]. Journal of Computer Research and Development, 2024, 61(10): 2627-2648. DOI: 10.7544/issn1000-1239.202440449

Survey of Security Attack and Defense Strategies for Artificial Intelligence Model

Funds: This work was supported by the National Natural Science Foundation of China (62372083, 62072074, 62076054, 62027827, 62002047), the Sichuan Provincial Science and Technology Plan Project (2024NSFTD0005, 2022JDJQ0039), and the Fundamental Research Funds for the Central Universities (ZYGX2021YGLH212, ZYGX2022YGRH012).
  • Author Bio:

    Qin Zhen: born in 1983. PhD, professor, PhD supervisor. His main research interests include multi-source data fusion analysis, and artificial intelligence security and applications

    Zhuang Tianming: born in 1998. PhD. His main research interests include human action recognition and image processing

    Zhu Guosong: born in 1997. PhD. His main research interests include image processing and multidimensional reconstruction

    Zhou Erqiang: born in 1980. PhD, associate professor. His main research interests include natural language processing, and artificial intelligence security and applications

    Ding Yi: born in 1985. PhD, professor, PhD supervisor. His main research interests include medical image processing and computer-aided diagnosis

    Geng Ji: born in 1963. PhD, professor. His main research interests include deep learning, open computer systems and network security, and information system security

  • Received Date: May 30, 2024
  • Revised Date: July 17, 2024
  • Available Online: September 13, 2024
  • Abstract: In recent years, the rapid development of artificial intelligence, and of deep learning in particular, has led to its widespread application in fields such as computer vision and natural language processing. However, recent research indicates that these advanced AI models carry security risks that could compromise their reliability. In light of this concern, this survey reviews cutting-edge research on security attacks, attack detection, and defense strategies for artificial intelligence models. For model security attacks, it elucidates the principles and current state of adversarial attacks, model inversion attacks, and model theft attacks. The attack detection methods it covers include defensive distillation, regularization, outlier detection, and robust statistics. The defense strategies it examines encompass adversarial training, model structure defenses, query control defenses, and other technical means. The survey both summarizes and extends the techniques and methodologies for securing artificial intelligence models, providing a theoretical foundation for their secure application and helping researchers understand the state of the art in this field and make informed choices of future research directions.
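
As a concrete illustration of the adversarial attacks and adversarial training surveyed above, the sketch below crafts adversarial examples with the fast gradient sign method (FGSM) introduced by Goodfellow and colleagues and folds them into a basic adversarial training step. It is a minimal sketch, not an implementation from the survey: the PyTorch calls are standard, but the model, optimizer, data, and the epsilon and alpha values are illustrative assumptions.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    # FGSM: perturb each input one step along the sign of the loss
    # gradient, x_adv = x + epsilon * sign(grad_x loss), then clamp
    # back to the valid input range.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

def adversarial_training_step(model, optimizer, x, y,
                              epsilon=0.03, alpha=0.5):
    # One adversarial training step: mix the loss on clean inputs
    # with the loss on FGSM-perturbed inputs (alpha is an assumed
    # mixing weight, not a value prescribed by the survey).
    x_adv = fgsm_attack(model, x, y, epsilon)
    optimizer.zero_grad()
    loss = alpha * F.cross_entropy(model(x), y) + \
           (1 - alpha) * F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

Training on this mixed objective hardens the decision boundary against the same perturbations the attacker exploits; the model structure and query control defenses named above act instead on the network's architecture and its serving interface, outside this training loop.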

