Zhou Wen, Zhang Shikun, Ding Yong, Chen Xi. Adversarial Example Attack Analysis of Low-Dimensional Industrial Control Network System Dataset[J]. Journal of Computer Research and Development, 2020, 57(4): 736-745. DOI: 10.7544/issn1000-1239.2020.20190844

    Adversarial Example Attack Analysis of Low-Dimensional Industrial Control Network System Dataset

The growth in cyber attacks on industrial control systems (ICS) highlights the need for network intrusion anomaly detection. Researchers have proposed various anomaly detection models for industrial control network traffic based on machine learning algorithms. However, adversarial example attacks are hindering the widespread application of machine learning models. Existing research on adversarial example attacks has focused on feature-rich, high-dimensional datasets. Because the network topology of an industrial control system is relatively fixed, an ICS dataset typically has few features, and it is unclear whether existing findings on adversarial examples carry over to such low-dimensional datasets. Through experiments on a low-dimensional natural gas dataset, we analyze the relationship between four common optimization algorithms (SGD, RMSProp, AdaDelta, and Adam) and the attack capability of the adversarial examples they produce, and we evaluate how well typical machine learning algorithms defend against adversarial example attacks. We also investigate whether training on adversarial examples can improve the attack resistance of deep learning algorithms. Moreover, a new index, the "Year-to-Year Loss Rate", is proposed to evaluate the white-box attack capability of adversarial examples. Experimental results on the natural gas dataset show that: 1) the optimization algorithm does affect the white-box attack capability of adversarial examples; 2) adversarial examples can mount black-box attacks against each of the typical machine learning algorithms; 3) compared with decision tree, random forest, support vector machine, AdaBoost, logistic regression, and convolutional neural network, the recurrent neural network is the most resistant to black-box adversarial example attacks; 4) adversarial example training improves the defense capability of deep learning models.