    Zuo Pengfei, Hua Yu, Xie Xinfeng, Hu Xing, Xie Yuan, Feng Dan. A Secure Encryption Scheme for Deep Learning Accelerators[J]. Journal of Computer Research and Development, 2019, 56(6): 1161-1169. DOI: 10.7544/issn1000-1239.2019.20190109


    A Secure Encryption Scheme for Deep Learning Accelerators


      Abstract: With the rapid development of machine learning techniques, especially deep learning (DL), their application domains have grown ever wider, increasingly expanding from cloud computing to edge computing. In deep learning, DL models, as the intellectual property (IP) of model providers, are important data. We observe that DL accelerators deployed on edge devices for edge computing risk leaking the DL models stored on them. Attackers can easily obtain DL model data by snooping the memory bus that connects the on-chip accelerator to the off-chip device memory, so encrypting the data transmitted on this bus is essential. However, directly using memory encryption in DL accelerators significantly decreases their performance. To address this problem, this paper proposes COSA, a COunter mode Secure deep learning Accelerator architecture. By leveraging counter mode encryption, COSA achieves a higher security level than direct encryption and removes decryption operations from the critical path of memory accesses. We have implemented COSA in GPGPU-Sim and evaluated it using neural network workloads. Experimental results show that COSA improves the performance of the secure accelerator by over 3 times compared with direct encryption, and incurs only about a 13% performance decrease compared with an insecure accelerator that does not use encryption.
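    The key property the abstract relies on is that in counter mode the keystream pad depends only on a key and a counter, not on the ciphertext, so the pad can be generated while a memory read is still in flight and the final decryption collapses to a single XOR off the critical path. The following is a minimal, dependency-free sketch of this idea; a real design such as COSA would use a block cipher like AES to generate the pad, whereas SHA-256 is used here purely as a stand-in, and the key, counter, and variable names are illustrative assumptions rather than details from the paper.

    ```python
    import hashlib

    def keystream_block(key: bytes, counter: int) -> bytes:
        # Stand-in for a block cipher keystream generator: derive a 32-byte
        # pad from (key, counter). Counter mode's crucial property is that
        # this pad is independent of the data being decrypted.
        return hashlib.sha256(key + counter.to_bytes(8, "big")).digest()

    def ctr_xcrypt(key: bytes, counter: int, data: bytes) -> bytes:
        # In counter mode, encryption and decryption are the same operation:
        # XOR the data with the keystream pad.
        pad = keystream_block(key, counter)
        return bytes(d ^ p for d, p in zip(data, pad))

    key = b"accelerator-root-key"   # hypothetical on-chip secret
    counter = 42                    # hypothetical per-block counter kept on chip
    plaintext = b"model weights..."

    # Write path: ciphertext is what travels over the snoopable memory bus.
    ciphertext = ctr_xcrypt(key, counter, plaintext)

    # Read path: the pad is computable from (key, counter) alone, so it can
    # be prepared while the ciphertext is still in flight on the memory bus;
    # once the data arrives, decryption is a single cheap XOR.
    pad = keystream_block(key, counter)
    recovered = bytes(c ^ p for c, p in zip(ciphertext, pad))
    assert recovered == plaintext
    ```

    This is why counter mode can hide decryption latency where direct (ECB/CBC-style) encryption cannot: a direct scheme must run the full block-cipher decryption after the ciphertext arrives, placing it squarely on the memory-access critical path.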
