ISSN 1000-1239 CN 11-1777/TP

Journal of Computer Research and Development ›› 2019, Vol. 56 ›› Issue (6): 1161-1169. doi: 10.7544/issn1000-1239.2019.20190109

Special Topic: 2019 Special Issue on Computer Architecture for Artificial Intelligence

• Systems Architecture •


A Secure Encryption Scheme for Deep Learning Accelerators

Zuo Pengfei1,2,3, Hua Yu1,2, Xie Xinfeng3, Hu Xing3, Xie Yuan3, Feng Dan1,2   

  1(Wuhan National Laboratory for Optoelectronics (Huazhong University of Science and Technology), Wuhan 430074); 2(School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan 430074); 3(University of California, Santa Barbara, Santa Barbara, California, USA 93106) (pfzuo@hust.edu.cn)
  • Online: 2019-06-01
  • Supported by: 
    This work was supported by the National Natural Science Foundation of China (61772212, 61821003).



Abstract: With the rapid development of machine learning, especially deep learning (DL), its application domains are becoming ever wider, gradually expanding from cloud computing to edge computing. In deep learning, DL models are important data, constituting the intellectual property (IP) of model providers. We observe that DL accelerators deployed on edge devices risk leaking the DL models stored on them: attackers can easily obtain the model data by snooping the memory bus that connects the on-chip accelerator to off-chip device memory. Encrypting the data transmitted on this memory bus is therefore essential. However, directly applying memory encryption to a DL accelerator significantly degrades its performance. To address this problem, this paper proposes COSA, a COunter-mode Secure deep learning Accelerator architecture. By leveraging counter-mode encryption, COSA achieves a higher security level than direct encryption and removes decryption operations from the critical path of memory accesses. We have implemented COSA in GPGPU-Sim and evaluated it with neural network workloads. Experimental results show that COSA improves performance by more than 3 times compared with direct encryption, while incurring only about a 13% performance decrease compared with an insecure accelerator that uses no encryption.
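The key property the abstract relies on is that in counter-mode encryption the keystream pad depends only on the key and a counter, not on the ciphertext, so the pad can be generated while a memory access is in flight and only a final XOR remains on the critical path. A minimal Python sketch of this idea follows; it is illustrative only (a SHA-256-based stand-in PRF replaces the AES engine real hardware would use, and the key, counter layout, and 32-byte block size are assumptions, not details from the paper):

```python
import hashlib

BLOCK = 32  # illustrative block size, matching the stand-in PRF's output

def pad_block(key: bytes, counter: int) -> bytes:
    # Keystream pad = PRF(key, counter). Hardware would use AES here;
    # SHA-256 stands in so the sketch needs only the standard library.
    return hashlib.sha256(key + counter.to_bytes(8, "big")).digest()

def ctr_crypt(key: bytes, base_counter: int, data: bytes) -> bytes:
    # In counter mode, encryption and decryption are the same operation:
    # XOR the data with the per-block keystream pad.
    out = bytearray()
    for i in range(0, len(data), BLOCK):
        pad = pad_block(key, base_counter + i // BLOCK)
        out.extend(b ^ p for b, p in zip(data[i:i + BLOCK], pad))
    return bytes(out)

key = b"example-accel-key"                    # hypothetical on-chip key
weights = b"model weights stored off-chip"    # plaintext DL model data
ct = ctr_crypt(key, 0, weights)               # written to device memory
# On a read, pad_block(key, counter) can be computed concurrently with
# the memory access; once the ciphertext arrives, only the XOR remains.
assert ctr_crypt(key, 0, ct) == weights
```

Because the pad computation and the memory fetch overlap, the decryption latency seen by the accelerator shrinks to a single XOR, which is the effect COSA exploits.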

Key words: machine learning, accelerator, edge device, security, bus snooping attack
