    Citation: Zhou Wen, Zhang Shikun, Ding Yong, Chen Xi. Adversarial Example Attack Analysis of Low-Dimensional Industrial Control Network System Dataset[J]. Journal of Computer Research and Development, 2020, 57(4): 736-745. DOI: 10.7544/issn1000-1239.2020.20190844

    Adversarial Example Attack Analysis of Low-Dimensional Industrial Control Network System Dataset


      Abstract: The growth in cyber attacks on industrial control systems (ICS) highlights the need for network intrusion anomaly detection. Researchers have proposed various machine-learning-based anomaly detection models for industrial control network traffic, but adversarial example attacks are hindering the widespread application of machine learning models. Existing research on adversarial example attacks has focused on feature-rich, high-dimensional datasets. Because the network topology of an industrial control system is relatively fixed, however, ICS datasets have few features, and it is unknown whether existing findings on adversarial examples carry over to such low-dimensional datasets. Through experiments on a low-dimensional natural gas ICS dataset, we analyze the relationship between four common optimization algorithms (namely SGD, RMSProp, AdaDelta, and Adam) and the attack capability of the adversarial examples they produce, and we analyze how well typical machine learning algorithms defend against adversarial example attacks. We also investigate whether adversarial training can improve the ability of deep learning algorithms to resist white-box adversarial example attacks. Moreover, a new index, the "Year-to-Year Loss Rate", is proposed to evaluate the white-box attack capability of adversarial examples. Extensive experimental results on this dataset show that: 1) the optimization algorithm does affect the white-box attack capability of adversarial examples; 2) adversarial examples can mount black-box attacks against each of the typical machine learning algorithms; 3) compared with decision tree, random forest, support vector machine, AdaBoost, logistic regression, and convolutional neural network (CNN) classifiers, the recurrent neural network (RNN) is the most resistant to black-box adversarial example attacks; 4) adversarial training improves the ability of deep learning models to defend against white-box adversarial example attacks.
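
      To make the described pipeline concrete, below is a minimal sketch of the kind of experiment the abstract outlines: a small classifier trained on low-dimensional data, a white-box adversarial attack, and adversarial training as a defense. The abstract does not name the attack method, so an FGSM-style one-step attack is assumed; the synthetic 26-feature data, the network architecture, and all hyperparameters are illustrative stand-ins, not the authors' actual configuration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic stand-in for a low-dimensional ICS dataset: 26 features, 2 classes.
X = torch.randn(512, 26)
y = (X[:, 0] + X[:, 1] > 0).long()

model = nn.Sequential(nn.Linear(26, 64), nn.ReLU(), nn.Linear(64, 2))
loss_fn = nn.CrossEntropyLoss()
# Any of the optimizers compared in the paper could be swapped in here:
# torch.optim.SGD, torch.optim.RMSprop, torch.optim.Adadelta, torch.optim.Adam.
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def fgsm(model, x, y, eps=0.1):
    """One-step FGSM (assumed attack): perturb x along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

# Adversarial training: train on clean and adversarial batches in turn,
# the kind of defense the abstract reports as effective against white-box attacks.
for epoch in range(50):
    x_adv = fgsm(model, X, y)
    for batch_x in (X, x_adv):
        opt.zero_grad()
        loss_fn(model(batch_x), y).backward()
        opt.step()

# Robustness check: accuracy on freshly generated adversarial examples.
x_eval = fgsm(model, X, y)
with torch.no_grad():
    acc = (model(x_eval).argmax(dim=1) == y).float().mean().item()
print(f"accuracy on white-box adversarial examples: {acc:.2%}")
```

      Swapping the optimizer choice reproduces the optimizer-comparison dimension of the study, and feeding the generated x_adv to independently trained classifiers (decision tree, random forest, SVM, and so on) would correspond to the black-box transfer experiments the abstract summarizes.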
