ISSN 1000-1239 CN 11-1777/TP

Journal of Computer Research and Development ›› 2020, Vol. 57 ›› Issue (4): 736-745. doi: 10.7544/issn1000-1239.2020.20190844

Special Issue: 2020 Special Issue on Data-Driven Networking


Adversarial Example Attack Analysis of Low-Dimensional Industrial Control Network System Dataset

Zhou Wen1,3, Zhang Shikun2, Ding Yong4, Chen Xi5   

  1. School of Software and Microelectronics, Peking University, Beijing 100871; 2. National Engineering Research Center for Software Engineering, Peking University, Beijing 100871; 3. China National Aviation Fuel Group Limited, Beijing 100088; 4. Peng Cheng Laboratory, Shenzhen, Guangdong 518000; 5. China Software Testing Center, Beijing 100048
  • Online: 2020-04-01
  • Supported by: 
    This work was supported by the National Natural Science Foundation of China (61772150) and the Project of Peng Cheng Laboratory (PCL2018KP004).

Abstract: The growth in cyber attacks on industrial control systems (ICS) highlights the need for network intrusion anomaly detection. Researchers have proposed various anomaly detection models for industrial control network traffic based on machine learning algorithms. However, adversarial example attacks are hindering the widespread application of machine learning models. Existing research on adversarial example attacks has focused on feature-rich, high-dimensional datasets. Because the network topology of an industrial control system is relatively fixed, ICS datasets contain few features, and it is unknown whether existing findings on adversarial examples carry over to such low-dimensional datasets. Through experiments on a low-dimensional natural gas dataset, we analyze the relationship between four common optimization algorithms (SGD, RMSProp, AdaDelta, and Adam) and the white-box attack capability of adversarial examples, and we evaluate how well typical machine learning algorithms withstand adversarial example attacks. We also investigate whether training on adversarial examples can improve the robustness of deep learning algorithms. Moreover, a new index, the "Year-to-Year Loss Rate", is proposed to evaluate the white-box attack capability of adversarial examples. Experimental results on the natural gas dataset show that: 1) the optimization algorithm does affect the white-box attack capability of adversarial examples; 2) adversarial examples can mount black-box attacks against each of the typical machine learning algorithms; 3) among decision tree, random forest, support vector machine, AdaBoost, logistic regression, convolutional neural network, and recurrent neural network, the recurrent neural network best resists black-box adversarial example attacks; 4) adversarial training improves the defense capability of deep learning models.
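To make the attack setting concrete, the following is a minimal, self-contained sketch of gradient-sign (FGSM-style) adversarial example generation against a logistic-regression classifier over a low-dimensional feature vector. It is an illustration only: the feature count, weights, and data here are hypothetical placeholders, not the paper's models, attack configuration, or the natural gas dataset.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Shift x by eps in the sign direction of the cross-entropy
    loss gradient with respect to the input (FGSM-style attack)."""
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w  # d(cross-entropy)/dx for logistic regression
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=8)   # hypothetical 8-feature model, mimicking a
b = 0.0                  # low-dimensional ICS traffic record
x = rng.normal(size=8)   # a clean sample
y = 1.0                  # its true label

x_adv = fgsm_perturb(x, y, w, b, eps=0.3)
# Each feature moves by at most eps, so the example stays close to x
# while pushing the classifier's loss upward.
```

Adversarial training, as studied in the paper, would then mix such perturbed samples (with their true labels) back into the training set before refitting the model.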

Key words: adversarial example, deep learning, intrusion detection, industrial control system, machine learning
