
SAF-CNN:面向嵌入式FPGA的卷积神经网络稀疏化加速框架

谢坤鹏, 仪德智, 刘义情, 刘航, 赫鑫宇, 龚成, 卢冶

谢坤鹏, 仪德智, 刘义情, 刘航, 赫鑫宇, 龚成, 卢冶. SAF-CNN:面向嵌入式FPGA的卷积神经网络稀疏化加速框架[J]. 计算机研究与发展, 2023, 60(5): 1053-1072. DOI: 10.7544/issn1000-1239.202220735. CSTR: 32373.14.issn1000-1239.202220735

Xie Kunpeng, Yi Dezhi, Liu Yiqing, Liu Hang, He Xinyu, Gong Cheng, Lu Ye. SAF-CNN: A Sparse Acceleration Framework of Convolutional Neural Network for Embedded FPGAs[J]. Journal of Computer Research and Development, 2023, 60(5): 1053-1072. DOI: 10.7544/issn1000-1239.202220735. CSTR: 32373.14.issn1000-1239.202220735

SAF-CNN:面向嵌入式FPGA的卷积神经网络稀疏化加速框架

基金项目: 国家自然科学基金项目(62002175);计算机体系结构国家重点实验室(中国科学院计算技术研究所)开放课题(CARCHB202016);天津市企业优秀科技特派员项目(21YDTPJC00380);中国民航大学信息安全测评中心开放基金项目(ISECCA-202102);CCF-华为胡杨林基金项目(CCF-HuaweiTC2022005)
    作者简介:

    谢坤鹏: 1996年生. 博士研究生,CCF会员. 主要研究方向为异构计算、机器学习和嵌入式系统

    仪德智: 1999年生. 硕士研究生. CCF会员. 主要研究方向为深度学习和异构计算

    刘义情: 1994年生. 硕士研究生. 主要研究方向为DNN模型自动剪枝、模型转换工具和FPGA部署实现

    刘航: 2000年生. 硕士研究生. 主要研究方向为机器学习和FPGA部署实现

    赫鑫宇: 1996年生. 博士研究生. CCF会员. 主要研究方向为计算机视觉、机器学习和模型压缩与加速

    龚成: 1993年生. 博士,讲师. 主要研究方向为异构计算、机器学习和物联网

    卢冶: 1986年生. 博士,副教授. CCF高级会员. 主要研究方向为高性能嵌入式系统、异构计算和人工智能

    通讯作者:

    卢冶 (luye@nankai.edu.cn)

  • 中图分类号: TP391

SAF-CNN: A Sparse Acceleration Framework of Convolutional Neural Network for Embedded FPGAs

Funds: This work was supported by the National Natural Science Foundation of China (62002175), the Open Project Fund of State Key Laboratory of Computer Architecture (Institute of Computing Technology, Chinese Academy of Sciences) (CARCHB202016), the Special Funding for Excellent Enterprise Technology Correspondent of Tianjin (21YDTPJC00380), the Open Project Foundation of Information Security Evaluation Center of Civil Aviation University of China (ISECCA-202102), and the CCF-Huawei Populus Grove Fund (CCF-HuaweiTC2022005).
    Author Bio:

    Xie Kunpeng: born in 1996. PhD candidate. Member of CCF. His main research interests include heterogeneous computing, machine learning, and embedded systems

    Yi Dezhi: born in 1999. Master candidate. Member of CCF. His main research interests include deep learning and heterogeneous computing

    Liu Yiqing: born in 1994. Master candidate. His main research interests include DNN model automatic pruning, model transformation tool, and FPGA deployment implementation

    Liu Hang: born in 2000. Master candidate. His main research interests include machine learning and FPGA deployment implementation

    He Xinyu: born in 1996. PhD candidate. Member of CCF. His main research interests include computer vision, machine learning, and model compression and acceleration

    Gong Cheng: born in 1993. PhD, lecturer. His main research interests include heterogeneous computing, machine learning, and Internet of things

    Lu Ye: born in 1986. PhD, associate professor. Senior member of CCF. His main research interests include high performance embedded system, heterogeneous computing, and artificial intelligence

  • 摘要:

    传统的卷积神经网络加速器及推理框架在资源约束的FPGA上部署模型时,往往面临设备种类繁多且资源极端受限、数据带宽利用不充分、算子操作类型复杂难以适配且调度不合理等诸多挑战. 提出一种面向嵌入式FPGA的卷积神经网络稀疏化加速框架(sparse acceleration framework of convolutional neural network, SAF-CNN),通过软硬件协同设计的方法,从硬件加速器与软件推理框架2个角度进行联合优化. 首先, SAF-CNN构建并行计算阵列,并且设计并行编解码方案,实现单周期多数据的传输,有效减少通信代价. 其次,设计细粒度结构化块划分剪枝算法,于输入通道维度进行块内裁剪来获得稀疏且规则的权重矩阵,借此显著降低计算规模和DSP乘法器等资源占用. 然后,提出一种兼容深度可分离卷积的输入通道维度动态拓展及运行时调度策略,实现输入通道参数灵活适配与逐通道卷积和逐点卷积的资源复用. 最后,提出一种计算图重构及硬件算子融合优化方法,提升硬件执行效率. 实验采用2种资源受限的低端FPGA异构平台Intel CycloneV与Xilinx ZU3EG,结果表明SAF-CNN加速器可分别实现76.3 GOPS与494.3 GOPS的计算性能. 与多核CPU相比,SAF-CNN在运行SSD_MobileNetV1目标检测模型时,可实现3.5倍与2.2倍的性能提升,模型推理速度高达26.5 fps.

    Abstract:

    When deploying models on resource-constrained FPGAs, traditional convolutional neural network accelerators and inference frameworks often face challenges such as diverse device types with extremely limited resources, underutilized data bandwidth, and complex operator types that are difficult to adapt and schedule reasonably. In this paper, a sparse acceleration framework of convolutional neural network (SAF-CNN) for embedded FPGAs is proposed. Through the method of software and hardware co-design, SAF-CNN is jointly optimized from the two perspectives of hardware accelerator design and software inference framework. SAF-CNN first constructs a parallel computing array and designs a parallel encoding and decoding scheme to realize single-cycle multi-data transmission and effectively reduce communication costs. Secondly, a fine-grained structured block partitioning pruning algorithm is designed to obtain a sparse and regular weight matrix by pruning within blocks along the input channel dimension, so as to significantly reduce the computation scale and the resource utilization of DSP multipliers. Then, an input channel dimension dynamic expansion method and a runtime scheduling strategy compatible with depthwise separable convolution are proposed to realize flexible adaptation of input channel parameters and resource reuse between depth-wise convolution and point-wise convolution. Finally, a computational graph reconstruction method and hardware operator fusion are used to improve the hardware execution efficiency. The experiments use two resource-limited low-end FPGA heterogeneous platforms, Intel CycloneV and Xilinx ZU3EG. The results show that the SAF-CNN accelerator can achieve the computational performance of 76.3 GOPS and 494.3 GOPS respectively. Compared with a multi-core CPU, SAF-CNN can achieve 3.5x and 2.2x performance improvement on the object detection model SSD_MobileNetV1, and the model inference speed is up to 26.5 fps.

  • 无线体域网[1](wireless body area network, WBAN)指由佩戴或嵌入在人体的各种无线传感器(wireless sensor, WS)组成的无线通信网络.WBAN技术在医疗数据监测方面的应用极为广泛,不同类型的无线医疗传感器负责监测患者各个方面的医疗数据并将数据发送给各种远端服务器,方便对患者的医疗数据做出专业的分析与整合.然而,开放的WBAN在传输患者敏感的医疗数据时,面临着患者的隐私被泄露或医疗数据被恶意篡改等风险[2].

    许多国内外学者提出将密码体制应用到WBAN中,以确保WBAN的医疗数据在传输与共享时的机密性.Mykletun等人[3]基于传统公钥密码(public key cryptography, PKC)体制,设计了一种保证无线传感网络数据机密性的加密方案.Nadir等人[4]基于PKC体制与椭圆曲线密码体制为用户生成对称密钥来加密数据,确保医疗数据在无线传感网络中传输与共享时的机密性.然而,基于PKC体制的方案[3-4]需要可信中心对用户证书进行管理,为消除证书管理的开销,一些基于身份加密体制的WBAN方案[5-7]相继被提出.上述文献[3-7]利用对数据进行加密的方式确保了医疗数据传输时的机密性,但这种方式没有实现对医疗数据来源的认证.如果无法实现医疗数据的可认证性,不仅会导致医院浪费宝贵的医疗资源进行无效的诊断,还可能基于被篡改的医疗数据而对患者的病情做出错误诊断.

    为了实现WBAN中医疗数据的可认证性,Ahn等人[8]构造了一种基于高级加密标准(advanced encryption standard,AES)对称密码体制的认证方案.黄一才等人[9]基于身份密码体制设计了一种签名方案,该方案实现了抗重放攻击.Cagalaban等人[10]将数字签密技术引入医疗保健系统,在确保医疗数据机密性的同时实现了数据的可认证性.Ullah等人[11]利用超椭圆曲线的概念,设计了一种基于证书的签密方案.尽管文献[8-11]实现了医疗数据的可认证性,但都没有考虑在多用户环境下的应用场景.为解决密码方案在多用户环境下的WBAN中计算效率较低的问题,基于聚合签名与聚合加密等技术,一些支持聚合模式的方案[12-15]相继被提出.然而,文献[8-15]没有考虑如何对WBAN云端密文进行有效的搜索,导致数据用户在对医疗数据进行检索时开销较大.

    基于可搜索加密技术[16]与密文等值测试技术[17],国内外学者提出了一些适用于WBAN的密文检索方案[18-21].但这些WBAN密文检索方案均存在一些缺陷,例如张嘉懿[18]与Andrew等人[19]提出的可搜索加密方案仅支持对用相同公钥加密的医疗数据进行搜索;Ramadan等人[20]设计的等值测试加密方案无法实现对医疗数据来源的认证;Elhabob等人[21]设计的基于证书的密文等值测试方案存在证书管理问题等.此外,医生或医疗机构有时需要判断多个患者某些特定方面的医疗数据是否相同,或对有相同病症的患者的医疗数据进行整合与存档,但密文检索文献[18-21]均没有考虑到多用户检索以及对多密文同时进行检索的情况,在用户节点众多的WBAN实际应用环境中存在一定局限性.

    WBAN通常会面临需要对2个以上的密文进行匹配的情况,而传统的密文等值测试技术只能将多个密文两两分为一组,再对所有的分组逐个进行测试,在多用户环境下的密文检索效率较低.为提高密文等值测试技术在多密文测试时的计算效率,Susilo等人[22]提出了一种支持多密文等值测试的公钥加密(public-key encryption with multi-ciphertext equality test, PKE-MET)方案,实现了对2个以上的密文同时进行匹配的功能.在PKE-MET方案中,每个参与多密文等值测试的数据拥有者都可以指定1个数字n,并将自己的密文与其他n−1个数据拥有者的密文进行匹配.PKE-MET在支持同时对多密文进行等值测试的同时,还支持对多个用户同时进行密文检索,当测试者接收到n个希望进行密文检索的数据用户分别上传的n个测试陷门时,才可以对数据拥有者的密文进行测试,实现了多数据用户同时进行密文匹配的功能.然而,PKE-MET方案中存在证书管理开销较大、无法对数据的来源进行认证等问题.

    针对以上问题,本文提出了一种支持多密文等值测试的WBAN聚合签密方案.该方案的创新点主要包括3个方面:

    1)基于身份签密体制.本文方案采用基于身份的签密体制,消除了传统公钥加密方案中存在的证书管理开销,确保了WBAN中医疗数据的机密性、完整性、可认证性与数据拥有者签名的不可伪造性.

    2)支持多用户密文聚合签密.引入聚合签密技术,验证者可以实现对多个数据拥有者医疗数据密文的批量验证,提高了签密方案在多用户环境下的验证效率.

    3)支持多密文等值测试.引入多密文等值测试技术,测试者可以利用数据用户上传的测试陷门同时对多个密文进行匹配,实现了多用户检索与多密文等值测试,降低了多用户环境下等值测试过程的计算开销.

    计算性Diffie-Hellman(computational Diffie-Hellman, CDH)问题:给定 (P,aP,bP) ,其中 a,b \in \mathbb{Z}_p^* ,计算 abP .

    由含有 n 个未知数 {x_1},{x_2}, \cdots ,{x_n} 的 n 个线性方程所组成的非齐次线性方程组

    \left\{ \begin{aligned} &{a_{11}}{x_1} + {a_{12}}{x_2} + \cdots + {a_{1n}}{x_n} = {b_1},\\ &{a_{21}}{x_1} + {a_{22}}{x_2} + \cdots + {a_{2n}}{x_n} = {b_2},\\ &\;\;\; \vdots \\ &{a_{n1}}{x_1} + {a_{n2}}{x_2} + \cdots + {a_{nn}}{x_n} = {b_n}, \end{aligned} \right.

    所对应的系数矩阵为

    {\boldsymbol{A}} = \left({\begin{array}{*{20}{c}} {{a_{11}}}&{{a_{12}}}& \cdots &{{a_{1n}}} \\ {{a_{21}}}&{{a_{22}}}& \cdots &{{a_{2n}}} \\ \vdots & \vdots &{}& \vdots \\ {{a_{n1}}}&{{a_{n2}}}& \cdots &{{a_{nn}}} \end{array}} \right),

    矩阵A对应的行列式为

    \det ({\boldsymbol{A}}) = \left| {\begin{array}{*{20}{c}} {{a_{11}}}&{{a_{12}}}& \cdots &{{a_{1n}}} \\ {{a_{21}}}&{{a_{22}}}& \cdots &{{a_{2n}}} \\ \vdots & \vdots &{}& \vdots \\ {{a_{n1}}}&{{a_{n2}}}& \cdots &{{a_{nn}}} \end{array}} \right| \text{,}

    \det ({\boldsymbol{A}}) \ne 0,则该方程组有唯一解.

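    为直观说明"系数行列式不为零则方程组有唯一解"这一结论,下面给出一段简短的 Python 验证代码(非原文内容,矩阵与右端向量为随意选取的演示数据,采用分数精确运算):

```python
from fractions import Fraction

def solve_unique(A, b):
    """高斯消元:返回 (det(A), 方程组 Ax=b 的唯一解);det 为 0 时返回 (0, None)."""
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(y)] for row, y in zip(A, b)]
    det = Fraction(1)
    for c in range(n):
        p = next((r for r in range(c, n) if M[r][c] != 0), None)
        if p is None:
            return Fraction(0), None          # 该列无非零元,行列式为 0
        if p != c:
            M[c], M[p] = M[p], M[c]
            det = -det                        # 交换两行,行列式变号
        det *= M[c][c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                for k in range(c, n + 1):
                    M[r][k] -= f * M[c][k]
    return det, [M[i][n] / M[i][i] for i in range(n)]

A = [[2, 1, 1], [1, 3, 2], [1, 0, 0]]         # 演示用系数矩阵(任取)
b = [4, 5, 6]                                  # 演示用常数项(任取)
d, x = solve_unique(A, b)
assert d != 0                                  # det(A) != 0,故解唯一
assert all(sum(A[i][j] * x[j] for j in range(3)) == b[i] for i in range(3))
print("det =", d, " 唯一解 x =", x)
```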
    形如

    {\boldsymbol{V}} = \left( {\begin{array}{*{20}{c}} 1&{{a_1}}&{a_1^2}& \cdots &{a_1^{n - 1}} \\ 1&{{a_2}}&{a_2^2}& \cdots &{a_2^{n - 1}} \\ \vdots & \vdots & \vdots &{}& \vdots \\ 1&{{a_n}}&{a_n^2}& \cdots &{a_n^{n - 1}} \end{array}} \right)

    的矩阵称为范德蒙矩阵,其对应的范德蒙行列式 \det ({\boldsymbol{V}}) 具有如下计算性质:

    \det ({\boldsymbol{V}}) = \left| {\begin{array}{*{20}{c}} 1&{{a_1}}&{a_1^2}& \cdots &{a_1^{n - 1}} \\ 1&{{a_2}}&{a_2^2}& \cdots &{a_2^{n - 1}} \\ \vdots & \vdots & \vdots &{}& \vdots \\ 1&{{a_n}}&{a_n^2}& \cdots &{a_n^{n - 1}} \end{array}} \right| = \prod\limits_{1 \leqslant i \lt j \leqslant n} {({a_j} - {a_i})} .
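    下面用一段简短的 Python 代码按行列式定义验证上述连乘公式(非原文内容,节点 a_i 为随意选取的演示数据):

```python
from fractions import Fraction
from itertools import permutations

def vandermonde_det(a):
    """按行列式的全排列展开式精确计算范德蒙行列式,V[i][j] = a_i^j."""
    n = len(a)
    V = [[Fraction(ai) ** j for j in range(n)] for ai in a]
    total = Fraction(0)
    for perm in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
        term = Fraction(1)
        for i in range(n):
            term *= V[i][perm[i]]
        total += (-1) ** inv * term
    return total

a = [2, 5, 7, 11]                      # 演示用节点(任取且两两不同)
lhs = vandermonde_det(a)
rhs = Fraction(1)
for j in range(len(a)):                # 连乘公式 ∏_{i<j} (a_j - a_i)
    for i in range(j):
        rhs *= (a[j] - a[i])
assert lhs == rhs                      # 两种计算方式一致
print(lhs, rhs)
```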

    本文提出的支持多密文等值测试的WBAN聚合签密方案的系统模型如图1所示,它包括6个实体:私钥生成器(private key generator, PKG)、云存储提供商、数据拥有者(即患者佩戴的无线传感器)、密文等值测试者、聚合者与数据用户(data user, DU).

    图  1  本文系统模型
    Figure  1.  The proposed system model

    各个实体具体介绍为:

    1)私钥生成器.负责为WBAN中的数据拥有者和数据用户生成密钥.

    2)云存储提供商.负责在云服务器中存储用户上传的医疗密文 C{T_1},C{T_2}, \cdots ,C{T_n} .

    3)数据拥有者.即患者佩戴的无线传感器,负责对医疗数据进行签密并将医疗密文上传到云端存储.

    4)测试者.对从云服务器下载的多个医疗密文执行等值测试操作,将测试结果返回给云服务器.

    5)聚合者.负责对多个数据拥有者的医疗数据进行聚合签密,将聚合医疗密文上传到云端存储.

    6)数据用户.即医生、医疗机构与数据处理中心等希望获取医疗密文的用户,负责将等值测试的陷门上传给测试者,并对从云服务器下载的医疗密文进行解密与认证.

    本文提出的支持多密文等值测试的聚合签密方案需要考虑2种类型的敌手,第1类敌手无法访问数据用户的测试陷门,第2类敌手可以获取数据用户的测试陷门.针对这2类敌手,本文提出的方案旨在达到的安全目标为:

    1)医疗数据的机密性和完整性.WBAN中传输的大多是敏感的医疗数据,若患者的医疗数据在传输过程中被恶意窃取或篡改,会造成严重后果.本文利用基于身份的加密体制,保证了所提方案在面对第1类攻击者时医疗数据的机密性与完整性.机密性指即使攻击者截取了传输的医疗密文也无法获取与明文相关的信息;完整性则指医疗数据在传输过程中无法被敌手伪造或篡改.

    2)数据拥有者签名的不可伪造性.本文新方案在对数据拥有者的签名的合法性进行验证的过程中,采用基于身份的签密体制,保证了在面对第1类攻击者时数据拥有者签名的不可伪造性,即攻击者不能伪造出合法的数据拥有者签名.

    3)测试陷门的单向性.测试者通过数据用户上传的测试陷门对医疗密文进行等值测试操作,在测试过程中,需要保证面对第2类敌手时测试陷门满足单向性,即敌手无法通过测试陷门获取与参与测试的医疗数据明文相关的信息.

    给定安全参数 k ,PKG选择大素数 p ( p \gt {2^k} ), G 是阶为 p 的循环加法群, P 是 G 的生成元.PKG随机选择 s \in \mathbb{Z}_p^* 作为主密钥秘密保存,计算 {P_{{\text{pub}}}} = sP 作为系统公钥,定义6个Hash函数: {H_1}:{\{ 0,1\} ^*} \to \mathbb{Z}_p^* , {H_2}:{\{ 0,1\} ^*} \times G \to \mathbb{Z}_p^* , {H_3}:{\{ 0,1\} ^*} \times G \to \mathbb{Z}_p^* , {H_4}:G \to {\{ 0,1\} ^{{l_0} + {l_1}}} , {H_5}:{\{ 0,1\} ^*} \to \mathbb{Z}_p^* , {H_6}:{\{ 0,1\} ^*} \to {\{ 0,1\} ^k} ,其中 {l_0} 是密文长度.输出系统参数 params = \{ p,P,{P_{{\text{pub}}}},G,{H_1},{H_2},{H_3},{H_4},{H_5},{H_6}\} .
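    在仿真实现中,上述 Hash 函数通常可用带域分隔标签的密码学散列函数(如 SHA-256)来实例化.下面给出一个极简的 Python 示意(非原文实现,p、l_0、l_1 等参数均为演示用假设值;H_2、H_3、H_5 可仿照 H_1 定义,H_6 仿照 H_4 定义并输出 k 比特,此处从略):

```python
import hashlib

p = 2**127 - 1                                   # 演示用素数(实际应按安全参数 k 选取)
l0, l1 = 128, 128                                # 演示用输出长度

def _H(tag: bytes, data: bytes, bits: int) -> int:
    """用 SHA-256 加域分隔标签模拟随机预言,输出截断为 bits 比特的整数."""
    out, ctr = b"", 0
    while len(out) * 8 < bits:
        out += hashlib.sha256(tag + ctr.to_bytes(4, "big") + data).digest()
        ctr += 1
    return int.from_bytes(out, "big") % (1 << bits)

def H1(data: bytes) -> int:        # H1: {0,1}* -> Z_p^*
    return _H(b"H1", data, 256) % (p - 1) + 1

def H4(data: bytes) -> int:        # H4: G -> {0,1}^{l0+l1},作一次性掩码使用
    return _H(b"H4", data, l0 + l1)

msg = "ID_i||PK_{i,1}".encode()
print(hex(H1(msg)), hex(H4(msg)))
```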

    1)用户将 I{D_i} 上传给PKG,PKG计算 {Q_i} = {H_1}(I{D_i}) 与 s{k_{i,1}} = s{Q_i} ;

    2)PKG随机选择 {x_i} \in \mathbb{Z}_p^* ,计算 P{K_{i,1}} = {x_i}P , P{K_{i,2}} = {H_1}(I{D_i}||P{K_{i,1}}) , s{k_{i,2}} = {x_i} + sP{K_{i,2}} , s{k_{i,3}} = {H_1}(I{D_i}||s) 与 P{K_{i,3}} = s{k_{i,3}}P ;

    3)PKG输出公共参数 P{K_i} = (P{K_{i,1}},P{K_{i,2}},P{K_{i,3}}) 与私钥 s{k_i} = (s{k_{i,1}},s{k_{i,2}},s{k_{i,3}}) .
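    为核对上述密钥提取流程中的代数关系,下面给出一个"玩具群"下的 Python 示意:取 G 为 (\mathbb{Z}_p, +) 、生成元 P = 1 ,标量乘 aP 退化为模 p 乘法.该简化仅用于检查公式推导,不具备任何安全性,亦非原文实现:

```python
import hashlib, random

p = 2**127 - 1                                  # 演示用素数
P = 1                                           # 生成元(玩具群:群元素即模 p 整数)
def H1(data: str) -> int:                       # H1: {0,1}* -> Z_p^*(演示实现)
    return int.from_bytes(hashlib.sha256(data.encode()).digest(), "big") % (p - 1) + 1

s = random.randrange(1, p)                      # PKG 主密钥
P_pub = s * P % p                               # 系统公钥 P_pub = sP

def extract(ID: str):
    """对应正文步骤 1)~3):生成用户 ID 的公钥 PK_i 与私钥 sk_i."""
    Q = H1(ID)
    sk1 = s * Q % p                             # sk_{i,1} = s Q_i
    x = random.randrange(1, p)
    PK1 = x * P % p                             # PK_{i,1} = x_i P
    PK2 = H1(ID + "||" + str(PK1))              # PK_{i,2} = H1(ID_i || PK_{i,1})
    sk2 = (x + s * PK2) % p                     # sk_{i,2} = x_i + s PK_{i,2}
    sk3 = H1(ID + "||" + str(s))                # sk_{i,3} = H1(ID_i || s)
    PK3 = sk3 * P % p                           # PK_{i,3} = sk_{i,3} P
    return (PK1, PK2, PK3), (sk1, sk2, sk3)

PK_i, sk_i = extract("patient-001")
# 核对 sk_{i,2} P = PK_{i,1} + PK_{i,2} P_pub(后文签名验证将用到该关系)
assert sk_i[1] * P % p == (PK_i[0] + PK_i[1] * P_pub) % p
```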

    给定参与密文等值测试与聚合签密的数据拥有者数量为 n ,数据拥有者的身份标识为 I{D_i} ,数据用户的身份标识为 I{D_j} ,其中i,j \in \{ 1,2, \cdots ,n\}.数据拥有者执行1)~5)操作对 {m_i} 进行签密:

    1)随机选择 {a_i},{b_i},{N_i} \in \mathbb{Z}_p^* ,计算 {C_{i,1}} = {a_i}P , {C_{i,2}} = {b_i}P 与 {R_i} = {a_i}{Q_j}{P_{{\text{pub}}}} ;

    2)计算 {U_i} = {H_2}({m_i},I{D_i},I{D_j},{R_i},P{K_{i,1}},P{K_{j,1}}) , {V_i} = {H_3}({m_i},I{D_i},I{D_j},{R_i},P{K_{i,1}},P{K_{j,1}}) , {v_i} = {a_i}{U_i} + s{k_{i,2}}{V_i} , {C_{i,3}} = {v_i}P 与 {C_{i,4}} = {H_4}({R_i}) \oplus ({m_i}||{v_i}) ;

    3)计算 {f_{i,0}} = {H_5}({m_i}||n) , {f_{i,1}} = {H_5}({m_i}||n||{f_{i,0}}) , \cdots , {f_{i,n - 1}} = {H_5}({m_i}||n||{f_{i,0}}|| \cdots ||{f_{i,n - 2}}) ;

    4)计算 {C_{i,5}} = {H_4}({b_i}P{K_{j,3}}) \oplus ({N_i}||f({N_i})) 与 {C_{i,6}} = {H_6}(n||{C_{i,1}}|| \cdots ||{C_{i,5}}||{b_i}P{K_{j,3}}||{f_{i,0}}|| \cdots ||{f_{i,n - 1}}) ,其中 f({N_i}) = {f_{i,0}} + {f_{i,1}}{N_i} + {f_{i,2}}N_i^2 + \cdots + {f_{i,n - 1}}N_i^{n - 1} ;

    5)将密文 C{T_i} = ({t_i},{C_{i,1}},{C_{i,2}},{C_{i,3}},{C_{i,4}},{C_{i,5}},{C_{i,6}}) 上传到云端存储,其中 {t_i} = n .
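    下面的 Python 片段演示签密步骤 3)~4) 中系数链 {f_{i,0}}, \cdots ,{f_{i,n - 1}} 与多项式值 f({N_i}) 的构造方式(非原文实现,H_5 以 SHA-256 模拟,p 为演示用素数):

```python
import hashlib, random

p = 2**127 - 1

def H5(*parts) -> int:                                 # H5: {0,1}* -> Z_p^*(演示实现)
    data = "||".join(str(x) for x in parts).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % (p - 1) + 1

def coeff_chain(m: str, n: int):
    """f_{i,0} = H5(m||n), f_{i,k} = H5(m||n||f_{i,0}||...||f_{i,k-1})."""
    fs = [H5(m, n)]
    for _ in range(1, n):
        fs.append(H5(m, n, *fs))
    return fs

def poly_eval(fs, N):
    """f(N) = f_0 + f_1 N + ... + f_{n-1} N^{n-1} (mod p),秦九韶(Horner)法求值."""
    acc = 0
    for c in reversed(fs):
        acc = (acc * N + c) % p
    return acc

n = 4                                                  # 参与等值测试的数据拥有者数量
m = "blood pressure: 120/80"                           # 演示用医疗数据明文
fs = coeff_chain(m, n)
N = random.randrange(1, p)                             # 数据拥有者选取的随机数 N_i
print(poly_eval(fs, N))                                # 该值连同 N_i 一起封装进 C_{i,5}
```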

    n 个数据用户分别将等值测试陷门 t{k_j} = s{k_{j,3}} 发送给测试者,其中j \in \{ 1,2, \cdots ,n\}.测试者从云服务器分别下载 n 个数据拥有者想要测试的密文 C{T_1},C{T_2}, \cdots ,C{T_n} ,执行1)~3)多密文等值测试操作:

    1)检查{t_1} = {t_2} = \cdots = {t_n} = n是否成立,若成立测试者则继续执行以下操作,否则终止操作并输出“ \bot ”;

    2)对于 i \in \{ 1,2, \cdots ,n\} j \in \{ 1,2, \cdots ,n\} ,测试者分别计算 {N_i}||f({N_i}) = {C_{i,5}} \oplus {H_4}({C_{i,2}}t{k_j}) ,由签密算法有 f({N_i}) = {f_{i,0}} + {f_{i,1}}{N_i} + {f_{i,2}}N_i^2 + \cdots + {f_{i,n - 1}}N_i^{n - 1} ,测试者将 n 个等式合并得到方程组

    \left\{\begin{aligned} &f({N}_{1})={f}_{1,0}+{f}_{1,1}{N}_{1}+{f}_{1,2}{N}_{1}^{2}+\cdots +{f}_{1,n-1}{N}_{1}^{n-1},\\ &f({N}_{2})={f}_{2,0}+{f}_{2,1}{N}_{2}+{f}_{2,2}{N}_{2}^{2}+\cdots +{f}_{2,n-1}{N}_{2}^{n-1},\\ & \;\;\; \vdots \\ &f({N}_{n})={f}_{n,0}+{f}_{n,1}{N}_{n}+{f}_{n,2}{N}_{n}^{2}+\cdots +{f}_{n,n-1}{N}_{n}^{n-1},\end{aligned}\right.

    并隐式设置 {f_{i,k}} = {f_{j,k}} ,其中 k \in \{ 0,1, \cdots ,n - 1\} ,测试者通过对该方程组对应的范德蒙矩阵求逆,获得方程组的唯一一组解 {f_{1,0}},{f_{1,1}}, \cdots ,{f_{1,n - 1}}

    3)检查等式{C_{i,6}} = {H_6}(n||{C_{i,1}}||{C_{i,2}}||{C_{i,3}}||{C_{i,4}}||{C_{i,5}}||{C_{i,2}}t{k_j}|| {f_{i,0}}||{f_{i,1}}|| \cdots ||{f_{i,n - 1}})是否成立,若成立测试者则向云服务器输出测试结果为“1”,否则向云服务器输出测试结果为“0”.
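    下面的 Python 片段演示上述等值测试的核心步骤:由 n 组 ({N_i},f({N_i})) 构成的范德蒙方程组在模 p 意义下求解,并核对恢复出的系数与系数链一致(非原文实现,为自包含起见重复给出演示用的 H_5 与系数链;模逆运算 pow(x, -1, p) 需要 Python 3.8 及以上版本):

```python
import hashlib, random

p = 2**127 - 1
def H5(*parts) -> int:
    data = "||".join(str(x) for x in parts).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % (p - 1) + 1

def coeff_chain(m, n):
    fs = [H5(m, n)]
    for _ in range(1, n):
        fs.append(H5(m, n, *fs))
    return fs

def solve_vandermonde(Ns, ys):
    """模 p 高斯消元求解 V·f = y,其中 V[i][j] = N_i^j."""
    n = len(Ns)
    M = [[pow(Ns[i], j, p) for j in range(n)] + [ys[i] % p] for i in range(n)]
    for c in range(n):
        piv = next(r for r in range(c, n) if M[r][c])
        M[c], M[piv] = M[piv], M[c]
        inv = pow(M[c][c], -1, p)
        M[c] = [v * inv % p for v in M[c]]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c]
                M[r] = [(M[r][k] - f * M[c][k]) % p for k in range(n + 1)]
    return [M[i][n] for i in range(n)]

n, m = 4, "blood pressure: 120/80"
fs = coeff_chain(m, n)                               # n 份明文相同 ⇒ 系数链相同
Ns = [random.randrange(1, p) for _ in range(n)]      # 各数据拥有者独立选取的随机数 N_i
ys = [sum(fs[j] * pow(N, j, p) for j in range(n)) % p for N in Ns]   # f(N_i)
assert solve_vandermonde(Ns, ys) == fs               # 恢复出的系数与系数链一致 ⇒ 输出"1"
```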

    若云服务器接收到的密文等值测试结果为“1”,代表 n 个数据拥有者的医疗密文全部相同,云服务器将所有数据拥有者的医疗密文 C{T}_{1},C{T}_{2},\cdots ,C{T}_{n} 发送给聚合者,聚合者执行1)~2)操作对医疗密文进行聚合签密:

    1)计算{X_{{\text{agg}}}} = \displaystyle\sum\limits_{i = 1}^n {{C_{i,3}}}

    2)将聚合医疗密文 {\sigma _{{\text{agg}}}} = ({\{ C{T_i}\} _{i = 1,2, \cdots ,n}},{X_{{\text{agg}}}}) 上传到云服务器存储.

    给定数据用户的身份标识为 I{D_j} ,其中 j \in \{ 1, 2, \cdots , n\} .数据用户从云端下载聚合医疗密文 {\sigma _{{\text{agg}}}} ,对密文进行解密并验证数据来源.数据用户的具体操作如下:

    1)计算 R_i' = s{k_{j,1}}{C_{i,1}} 与 m_i'||v_i' = {C_{i,4}} \oplus {H_4}(R_i') ;

    2)根据 m_i' 的值计算 f_{i,0}' = {H_5}(m_i'||n) , f_{i,1}' = {H_5}(m_i'||n||f_{i,0}') , \cdots , f_{i,n - 1}' = {H_5}(m_i'||n||f_{i,0}'|| \cdots ||f_{i,n - 2}') 与 N_i'||f(N_i') = {C_{i,5}} \oplus {H_4}({C_{i,2}}s{k_{j,3}}) ;

    3)计算 U_i' = {H_2}(m_i',I{D_i},I{D_j},R_i',P{K_{i,1}},P{K_{j,1}}) , V_i' = {H_3}(m_i',I{D_i},I{D_j},R_i',P{K_{i,1}},P{K_{j,1}}) , X_{{\text{agg}}}' = \displaystyle\sum\limits_{i = 1}^n {v_i'P} 与 X_{{\text{agg}}}^* = \displaystyle\sum\limits_{i = 1}^n {U_i'{C_{i,1}} +} \displaystyle\sum\limits_{i = 1}^n {V_i'P{K_{i,1}} + } \displaystyle\sum\limits_{i = 1}^n {V_i'P{K_{i,2}}{P_{{\text{pub}}}}} ;

    4)分别检查等式 {C_{i,6}} = {H_6}(n||{C_{i,1}}||{C_{i,2}}||{C_{i,3}}||{C_{i,4}}||{C_{i,5}}||{C_{i,2}}s{k_{j,3}}||f_{i,0}'||f_{i,1}'|| \cdots ||f_{i,n - 1}') , X_{{\text{agg}}}^* = X_{{\text{agg}}}' 与 f(N_i') = f_{i,0}' + f_{i,1}'N_i' + \cdots + f_{i,n - 1}'N_i^{{'}n - 1} 是否同时成立.

    若以上等式均成立,数据用户则接收医疗数据m_i';否则输出“ \bot ”.
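    以下 Python 片段沿用前文的"玩具群"设定( G = (\mathbb{Z}_p, + ) 、 P = 1 ),数值核对步骤 3)~4) 中的聚合验证等式 X_{{\text{agg}}}^* = X_{{\text{agg}}}' (非原文实现,U_i、V_i 以随机数代替相应 Hash 值,仅验证代数关系,不具备任何安全性):

```python
import random

p = 2**127 - 1
P = 1
s = random.randrange(1, p)            # PKG 主密钥
P_pub = s * P % p                     # 系统公钥

n = 4
X_star, X_prime = 0, 0
for _ in range(n):
    a = random.randrange(1, p)        # 签密随机数 a_i
    x = random.randrange(1, p)        # 用户随机数 x_i
    PK1 = x * P % p                   # PK_{i,1}
    PK2 = random.randrange(1, p)      # PK_{i,2} = H1(ID_i||PK_{i,1}),此处以随机数代替
    sk2 = (x + s * PK2) % p           # sk_{i,2}
    U, V = random.randrange(1, p), random.randrange(1, p)   # U_i, V_i(代替 H2, H3)
    C1 = a * P % p                    # C_{i,1} = a_i P
    v = (a * U + sk2 * V) % p         # v_i = a_i U_i + sk_{i,2} V_i
    X_prime = (X_prime + v * P) % p                                # 累加 v_i P
    X_star = (X_star + U * C1 + V * PK1 + V * PK2 * P_pub) % p     # 累加验证项
assert X_star == X_prime              # 聚合验证等式成立
print("aggregate verification OK")
```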

    1)解密等式的正确性

    数据用户通过计算 m_i'||v_i' = {C_{i,4}} \oplus {H_4}(R_i') 对密文进行解密,其中 R_i' = s{k_{j,1}}{C_{i,1}} , s{k_{j,1}} 是数据用户的私钥,由于 s{k_{j,1}} = s{Q_j} ,则有

    R_i' = s{k_{j,1}}{C_{i,1}} = s{k_{j,1}}{a_i}P = s{Q_j}{a_i}P = {a_i}{Q_j}{P_{{\text{pub}}}} = {R_i} \text{,}

    R_i' = {R_i},从而有

    m_i'||v_i' = {C_{i,4}} \oplus {H_4}(R_i') = {H_4}({R_i}) \oplus ({m_i}||{v_i}) \oplus {H_4}(R_i') = {m_i}||{v_i}{\kern 1pt} .

    因此,本文方案满足密文解密等式的正确性.

    2)签名验证等式的正确性

    数据用户通过判断等式 X_{{\text{agg}}}^* = X_{{\text{agg}}}' 是否成立以验证聚合密文签名的合法性,其中 X_{{\text{agg}}}' = \displaystyle\sum\limits_{i = 1}^n {v_i'P} , v_i' = {a_i}{U_i} + s{k_{i,2}}{V_i} , s{k_{i,2}} = {x_i} + sP{K_{i,2}} ,则有

    \begin{aligned} X_{{\text{agg}}}' = &\sum\limits_{i = 1}^n {v_i'P} = \sum\limits_{i = 1}^n {{a_i}{U_i}P + \sum\limits_{i = 1}^n {s{k_{i,2}}{V_i}P} } = \\ &\sum\limits_{i = 1}^n {{a_i}{U_i}P + \sum\limits_{i = 1}^n {{x_i}{V_i}P + \sum\limits_{i = 1}^n {sP{K_{i,2}}{V_i}P} } } ,\end{aligned}

    结合 {C_{i,1}} = {a_i}P , P{K_{i,1}} = {x_i}P 与 {P_{{\text{pub}}}} = sP ,从而有

    X_{{\text{agg}}}' = \sum\limits_{i = 1}^n {{U_i}{C_{i,1}} + } \sum\limits_{i = 1}^n {{V_i}P{K_{i,1}} + } \sum\limits_{i = 1}^n {{V_i}P{K_{i,2}}{P_{{\text{pub}}}}}.

    进一步,由解密等式的正确性可知 m_i'||v_i' = {m_i}||{v_i} ,则有

    \begin{aligned} {U_i} =\;& {H_2}({m_i},I{D_i},I{D_j},{R_i},P{K_{i,1}},P{K_{j,1}})= \\ & {H_2}(m_i',I{D_i},I{D_j},R_i',P{K_{i,1}},P{K_{j,1}}) =U_i',\\ {V_i} = & {H_3}({m_i},I{D_i},I{D_j},{R_i},P{K_{i,1}},P{K_{j,1}}) =\\ &{H_3}(m_i',I{D_i},I{D_j},R_i',P{K_{i,1}},P{K_{j,1}}) = V_i', \end{aligned}

    {U_i} = U_i' {V_i} = V_i' ,于是有

    \begin{aligned} X_{{\text{agg}}}' = \;& \sum\limits_{i = 1}^n {{U_i}{C_{i,1}} + } \sum\limits_{i = 1}^n {{V_i}P{K_{i,1}} + } \sum\limits_{i = 1}^n {{V_i}P{K_{i,2}}{P_{{\text{pub}}}}} = \\ &\sum\limits_{i = 1}^n {U_i^{'}{C_{i,1}} + } \sum\limits_{i = 1}^n {V_i'P{K_{i,1}} + } \sum\limits_{i = 1}^n {V_i'P{K_{i,2}}{P_{{\text{pub}}}}} = X_{{\text{agg}}}^* \text{,} \end{aligned}

    X_{{\text{agg}}}^* = X_{{\text{agg}}}' 成立.因此,本文所提的新方案满足签名验证等式的正确性.

    3)等值测试结果的正确性

    i \in \{ 1,2, \cdots ,n\} j \in \{ 1,2, \cdots ,n\} ,测试者通过检查 {C_{i,6}} = {H_6}(n||{C_{i,1}}|| \cdots ||{C_{i,5}}||{C_{i,2}}t{k_j}||{f_{i,0}}|| \cdots ||{f_{i,n - 1}}) 是否成立来判断 n 个医疗密文是否相同,其中{f_{i,0}}\; =\; {H_5} ({m_i}|| n), \cdots , {f_{i,n - 1}} = {H_5}({m_i}||n||{f_{i,0}}|| \cdots ||{f_{i,n - 2}}) .假设所有参与密文等值测试的医疗密文全部相同,即 {m_1} = {m_2} = \cdots = {m_n} ,则有

    \begin{aligned} {H}_{5}({m}_{1}||n)={H}_{5}({m}_{2}||n)=\; &\cdots ={H}_{5}({m}_{n}||n),\\ {H}_{5}({m}_{1}|\left|n\right||{f}_{1,0})={H}_{5}({m}_{2}|\left|n\right|| & {f}_{1,0})= \cdots ={H}_{5}({m}_{n}|\left|n\right||{f}_{1,0}),\\ &\vdots\\ {H}_{5}({m}_{1}||n||{f}_{1,0}||\cdots ||{f}_{1,n-2})= & {H}_{5}({m}_{1}||n||{f}_{2,0}||\cdots ||{f}_{2,n-2})=\cdots=\\ {H}_{5}({m}_{n}||n||{f}_{n,0}||&\cdots ||{f}_{n,n-2}), \end{aligned}

    即对于所有的 i,j \in \{ 1,2, \cdots ,n\} k \in \{ 0,1, \cdots ,n - 1\} ,等式 {f_{i,k}} = {f_{j,k}} 均成立.

    由医疗数据签密及上传算法可知,数据拥有者在签密过程中设置

    f({N_i}) = {f_{i,0}} + {f_{i,1}}{N_i} + {f_{i,2}}N_i^2 + \cdots + {f_{i,n - 1}}N_i^{n - 1},

    由此可以得到方程组

    \left\{\begin{aligned} f({N}_{1})&={f}_{1,0}+{f}_{1,1}{N}_{1}+{f}_{1,2}{N}_{1}^{2}+\cdots +{f}_{1,n-1}{N}_{1}^{n-1},\\ f({N}_{2})&={f}_{2,0}+{f}_{2,1}{N}_{2}+{f}_{2,2}{N}_{2}^{2}+\cdots +{f}_{2,n-1}{N}_{2}^{n-1},\\ & \vdots \\ f({N}_{n})&={f}_{n,0}+{f}_{n,1}{N}_{n}+{f}_{n,2}{N}_{n}^{2}+\cdots +{f}_{n,n-1}{N}_{n}^{n-1},\end{aligned}\right.

    结合 {f_{i,k}} = {f_{j,k}} ,因此可将 {f_{1,0}},{f_{1,1}}, \cdots ,{f_{1,n - 1}} 作为方程组的解,将随机数 {N_i} 作为方程组的系数,则该方程组对应的矩阵为

    {\boldsymbol{V}} = \left({\begin{array}{*{20}{c}} 1&{{N_1}}&{N_1^2}& \cdots &{N_1^{n - 1}} \\ 1&{{N_2}}&{N_2^2}& \cdots &{N_2^{n - 1}} \\ \vdots & \vdots & \vdots &{}& \vdots \\ 1&{{N_n}}&{N_n^2}& \cdots &{N_n^{n - 1}} \end{array}} \right) ,

    由范德蒙矩阵的性质可知其对应的行列式为 \det ({\boldsymbol{V}}) = \displaystyle\prod\limits_{1 \leqslant i \lt j \leqslant n} {({N_j} - {N_i})} .

    从数据拥有者签密过程可知, {N_i} 是由 n 个不同的数据拥有者在对医疗密文进行签密时分别选择的随机数,因此 \det ({\boldsymbol{V}}) = 0 的概率仅为 {[p(p - 1) \cdots (p - n + 1)]^{ - 1}} ,其中 p 为群 \mathbb{Z}_p^* 的阶.由克拉默法则可知当 \det ({\boldsymbol{V}}) \ne 0 时,方程组有且仅有唯一解 {f_{1,0}},{f_{1,1}}, \cdots ,{f_{1,n - 1}} ,于是有对于所有的 i,j \in \{ 1,2, \cdots ,n\} k \in \{ 0,1, \cdots ,n - 1\} ,等式 {f_{i,k}} = {f_{j,k}} 均成立,与所有参与密文等值测试的医疗密文全部相同的假设相符.因此,本文新方案满足多密文等值测试结果的正确性.
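    代入一组示例参数可以直观感受上述失败概率的量级(以下数值仅为笔者演示,假设 p \approx {2^{160}} 、 n = 5 ):

    {[p(p - 1) \cdots (p - n + 1)]^{ - 1}} \lt {(p - n + 1)^{ - n}} \approx {({2^{160}})^{ - 5}} = {2^{ - 800}} ,

    即测试者遇到 \det ({\boldsymbol{V}}) = 0 而无法求解方程组的概率可忽略不计.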

    本文提出的方案引入了基于身份的聚合签密体制,确保了本文方案在面对第1类敌手时医疗数据的机密性与签名的存在不可伪造性,对于机密性与不可伪造性的证明过程可以参考文献[23]方案.同时,本文方案满足面对第2类敌手适应性选择密文攻击下的单向性(one-way against adaptive chosen ciphertext attack, OW-CCA2),以下通过定理1证明本文方案满足OW-CCA2安全.

    定理1. 假设CDH问题是难解的,则本文方案在随机预言模型下对第2类敌手是OW-CCA2安全的.

    证明. 假设 \mathcal{C} 是能够解决CDH困难问题的算法, {\mathcal{A}_2} 代表第2类敌手. \mathcal{C} 以 {\mathcal{A}_2} 为子程序充当以下游戏中的挑战者,若 {\mathcal{A}_2} 能以不可忽略的优势在概率多项式时间内赢得该游戏,则 \mathcal{C} 能够在概率多项式时间内解决CDH困难问题.

    初始化阶段.CDH问题的输入为 (P,aP,bP) ,其中 a,b \in \mathbb{Z}_p^* \mathcal{C} 的目标是给出CDH困难问题的解 abP . \mathcal{C} 选取阶为素数 p 的循环群 G ,计算 P G 的生成元,随机选择 a \in \mathbb{Z}_p^* 并计算P_{{\text{pub}}}' = aP.最后,输出系统参数 params=\{p,P,{P}_{\text{pub}},G,{H}_{1},{H}_{2},{H}_{3},{H}_{4},{H}_{5},{H}_{6}\} ,将 a 秘密保存并发送 params {\mathcal{A}_2} .

    询问阶段1.为了响应 {\mathcal{A}_2} 的询问, \mathcal{C} 维持列表 {L}_{1}, {L}_{2},{L}_{3},{L}_{4},{L}_{5},{L}_{6},{L}_{\text{td}} 分别用于跟踪 {\mathcal{A}_2} {H_1} Hash询问、 {H_2} Hash询问、 {H_3} Hash询问、 {H_4} Hash询问、 {H_5} Hash询问、 {H_6} Hash询问、测试陷门询问. {L_1} 同时用于跟踪密钥提取询问,开始时每个列表都为空.

    1) {H_1} Hash询问.当 \mathcal{C} 收到 {\mathcal{A}_2} {H_1}(I{D_i},{Q_i}) 的查询,若 I{D_i} \in \{ I{D_i}\} _{i = 1}^n ,则计算 P{K_{i,1}} = {x_i}P ,其中 {x_i} 是未知的, \mathcal{C} 保存 ( \bot ,{Q_i},I{D_i}) {L_1} ;若 i \ne 1 \mathcal{C} 随机选择 {x_i},P{K_{i,2}} \in \mathbb{Z}_p^* 并设置 P{K_{i,1}} = {x_i}P ,将 P{K_{i,2}} = {H_1}(I{D_i}||P{K_{i,1}}) 返回给 {\mathcal{A}_2} 并保存 ({x_i},P{K_{i,1}},P{K_{i,2}},I{D_i}) {L_1} .

    2) {H_2} Hash询问.当 \mathcal{C} 收到 {\mathcal{A}_2} ({m_i},I{D_i},I{D_j},{R_i}, P{K_{i,1}},P{K_{j,1}},{U_i})的查询后, \mathcal{C} 首先在 {L_2} 查找是否已有({m_i}, I{D_i},I{D_j},{R_i},P{K_{i,1}},P{K_{j,1}},{U_i},{t_i},{t_i}P),若 {L_2} 已有({m_i},I{D_i}, I{D_j},{R_i},P{K_{i,1}},P{K_{j,1}},{U_i},{t_i},{t_i}P),则发送 {U_i} {\mathcal{A}_2} ;否则, \mathcal{C} 选取 {U_i} \in \mathbb{Z}_p^* ,将 ({U_i},{t_i},{t_i}P) 加入到 {L_2} 中并输出 {t_i}P .

    3) {H_3} Hash询问.当 \mathcal{C} 收到 {\mathcal{A}_2} ({m_i},I{D_i},I{D_j},{R_i}, P{K_{i,1}}, P{K_{j,1}},{V_i})的查询后, \mathcal{C} 首先在 {L_3} 查找是否已有({m_i}, I{D_i}, I{D_j},{R_i},P{K_{i,1}},P{K_{j,1}},{V_i},{w_i},{w_i}P),若 {L_3} 已有({m_i},I{D_i}, I{D_j},{R_i},P{K_{i,1}},P{K_{j,1}},{V_i},{w_i},{w_i}P),则返回 {V_i} {\mathcal{A}_2} ;否则, \mathcal{C} 选取 {V_i} \in \mathbb{Z}_p^* ,将 ({V_i},{w_i},{w_i}P) 加入到 {L_3} 中并输出 {w_i}P .

    4) {H_4} Hash询问.当 \mathcal{C} 收到 {\mathcal{A}_2} ({R_i},{H_4}({R_i})) 的查询后,若在 {L_4} 中已有 ({R_i},{H_4}({R_i})) 则返回 {H_4}({R_i}) {\mathcal{A}_2} ;否则, \mathcal{C} 选取 {H_4}({R_i}) \in {\{ 0,1\} ^{{l_0} + {l_1}}} ,并将 ({R_i},{H_4}({R_i})) 加入到 {L_4} 中且输出 {H_4}({R_i}) .

    5) {H_5} Hash询问.当 \mathcal{C} 收到 {\mathcal{A}_2} {f_{i,d}} 的查询,其中 d \in \{ 1,2, \cdot \cdot \cdot n\} ,若 {L_5} 存在 ({m_i},n,{f_{i,0}}, \cdot \cdot \cdot ,{f_{i,d - 2}},{f_{i,d}}) 则返回 {f_{i,d}} {\mathcal{A}_2} ;否则, \mathcal{C} 选取 {f_{i,*}} \in \mathbb{Z}_p^* ,将 ({m_i},n,{f_{i,0}}, \cdot \cdot \cdot ,{f_{i,d - 2}},{f_{i,d}}) 加入到 {L_5} 中并输出 {f_{i,d}} .

    6) {H_6} Hash询问.当 \mathcal{C} 收到 {\mathcal{A}_2} {C_{i,6}} 的查询后,若在 {L_6} 中已有 {C_{i,6}} 则返回 {C_{i,6}} {\mathcal{A}_2} ;否则, \mathcal{C} 选取 {C_{i,6}} \in {\{ 0,1\} ^k} ,将相应元组加入到 {L_6} 中并输出 {C_{i,6}} .

    7) 密钥提取询问.当 \mathcal{C} 收到 {\mathcal{A}_2} I{D_i} 的私钥的查询后, \mathcal{C} 首先查询 {L_1} 中是否存在 ({x_i},P{K_{i,1}},P{K_{i,2}},I{D_i}) ,若不存在则输出“ \bot ”;否则返回 ({x_i},P{K_{i,1}},*,*) .如果I{D_i} \notin \{ I{D_i}\} _{i = 1}^n \mathcal{C} I{D_i} 作为 {H_1} Hash询问的输入,得到 {Q_i} = {H_0} (I{D_i}) ,并计算 s{k_{i,1}} = a{Q_i} s{k_{i,2}} = {x_i} + aP{K_{i,2}} ,返回 (P{K_{i,1}}, s{k_{i,1}}, P{K_{i,2}},I{D_i}) {\mathcal{A}_2} .

    8) 公钥替换询问.当 \mathcal{C} 收到 {\mathcal{A}_2} (I{D_i},P{K_{i,1}},P{K_{i,2}}) 的查询后,若 ({x_i},P{K_{i,1}},P{K_{i,2}},I{D_i}) 已存在于 {L_1} 中,则 \mathcal{C} 用列表L1中的 (P{K_{i,1}},P{K_{i,2}}) 替换 I{D_i} 原有的公钥(P{K_{i,1}}, P{K_{i,2}});否则, \mathcal{C} ({x_i},P{K_{i,1}}, P{K_{i,2}},I{D_i}) 加入到列表 {L_1} 中.

    9) 签密询问.当 \mathcal{C} 收到 {\mathcal{A}_2} ({m_i},I{D_i},I{D_j}) 的询问后, \mathcal{C} 执行①~②操作:

    ① 若 I{D_i} \ne I{D_l} {\mathcal{A}_2} 没有对 I{D_i} 的公钥执行过替换询问, \mathcal{C} 通过 {H_1} Hash询问与密钥提取询问分别获取 {x_i} s{k_{i,2}} ,并对 {m_i} 进行签密;若 I{D_i} 对应的公钥被替换过, \mathcal{C} 首先通过 {H_1} 询问分别获取 (P{K_{i,1}},P{K_{i,2}}) (P{K_{j,1}},P{K_{j,2}}) ,然后 \mathcal{C} 利用随机数 {a_i} \in \mathbb{Z}_p^* 计算 {C_{i,1}} = {a_i}P {R_i} = {a_i}{Q_j}P_{{\text{pub}}}',并通过 {H_2} {H_3} {H_4} Hash询问分别获取 {U_i} = {H_2}({m_i}, I{D_i}, I{D_j}, {R_i},P{K_{i,1}},P{K_{j,1}}) {V_i} = {H_3}({m_i},I{D_i},I{D_j},{R_i},P{K_{i,1}},P{K_{j,1}}) . {H_4} ({R_i}) ,通过密钥提取询问获取私钥 s{k_{i,2}} ,计算 {v_i} = \ {a_i}{U_i} + s{k_{i,2}}{V_i} {C_{i,3}} = {v_i}P {C_{i,4}} = {H_4}({R_i}) \oplus ({m_i}||{v_i}) ,最后输出密文 {\sigma _i} = ({C_{i,1}},{C_{i,2}},{C_{i,3}},P{K_{i,1}}) {\mathcal{A}_2} .

    ② 若 I{D_i} = I{D_l} \mathcal{C} 首先通过 {H_1} 询问分别获取 (P{K_{i,1}}, P{K_{i,2}}) (P{K_{j,1}},P{K_{j,2}}) ,随机选择 y,z \in \mathbb{Z}_p^* 并计算 {C_{i,1}} = zaP .然后 \mathcal{C} 通过 {H_1} Hash询问和 {H_4} Hash询问分别获取 (I{D_j}, {a_j}) {H_4}({R_j}) ,并计算{R_j} = {a_j}{Q_j}P_{{\text{pub}}}' {U_j} = {H_2}({m_l},I{D_l},I{D_j}, {R_j}, P{K_{l,1}},P{K_{j,1}}) ,将 ({m_l},I{D_l},I{D_j},{R_j},P{K_{l,1}},P{K_{j,1}},{U_j}) 加入到 {L_2} 中,通过 {H_3} Hash询问获取 ({m_l},I{D_l},I{D_j},{R_l},P{K_{l,1}}, P{K_{j,1}}, {V_l},{w_l},{w_l}P) ,并计算 {v_l} = y{U_l} {C_{l,3}} = z{v_l}P_{{\text{pub}}}' + {w_l}P{K_{l,1}} {C_{i,4}} = {H_4} ({R_l}) \oplus ({m_l}||{v_l}) ,最后输出 {\sigma _l} = ({C_{l,1}},{C_{l,2}},{C_{l,3}},P{K_{l,1}}) {\mathcal{A}_2} .

    10) 解签密询问.当 \mathcal{C} 收到 {\mathcal{A}_2} (C{T_1},C{T_2}, \cdot \cdot \cdot , C{T_n}, \{ I{D_i}\} _{i = 1}^n,I{D_j}) 的查询后, \mathcal{C} 执行①~②操作:

    ① 对 (I{D_1},I{D_2}, \cdot \cdot \cdot ,I{D_n},I{D_j}) 分别执行 {H_1} Hash询问以获取 ({Q_1},{Q_2}, \cdot \cdot \cdot ,{Q_n},{Q_j}) (P{K_{1,1}},P{K_{2,1}}, \cdot \cdot \cdot ,P{K_{n,1}}, P{K_{j,1}}) ,然后 \mathcal{C} 执行聚合签名验证算法,若验证未通过,则输出“ \bot ”后终止模拟;否则继续执行后续操作.

    ② 若I{D_j} \ne I{D_l} \mathcal{C} 则通过 {H_1} Hash询问获取 (I{D_j}, {a_j}) 并计算 {R_j} = {a_j}{C_{j,1}} ,检查 {L_2} 中是否存在元组 (*,I{D_j},{R_i}, P{K_{i,1}},P{K_{j,1}},{U_i}) ,若存在,则 \mathcal{C} 利用Hash值 {U_i} 对密文进行解密;否则 \mathcal{C} 随机选取 {U_i} \in \mathbb{Z}_p^* 并用 {U_i} 对密文进行解密.若 I{D_j} = I{D_l} \mathcal{C} 则在 {L_2} 中查询是否存在元组(*,I{D_j},*, P{K_{i,1}},P{K_{j,1}},{U_i}),若存在则利用Hash值 {U_i} 对密文进行解密;否则将随机选取 {U_i} \in \mathbb{Z}_p^* 并用 {U_i} 对密文进行解密.

    11) 测试陷门询问.当 \mathcal{C} 收到 {\mathcal{A}_2} t{k_j} 的询问后,若 {L_1} 中存在元组 ({x_i},P{K_{i,1}},P{K_{i,2}},I{D_i}) \mathcal{C} 通过 {H_1} 询问获取s{k_{i,3}} ={H_1}(I{D_i}||s)并返回 t{k_j} = s{k_{i,3}} {\mathcal{A}_2} ;否则, \mathcal{C} 选取t{k_j} \in \mathbb{Z}_p^*发送给 {\mathcal{A}_2} ,并将 ({x_i},P{K_{i,1}},P{K_{i,2}},I{D_i}) 加入到 {L_{{\text{td}}}} 中.

    挑战阶段. {\mathcal{A}_2} 输出2个消息 m_0^* = \{ m_{i,0}^*\} _{i = 1}^n m_1^* = \{ m_{i,1}^*\} _{i = 1}^n ,并输出身份 \{ ID_i^*\} _{i = 1}^n ID_j^* \mathcal{C} ID_j^* 作为输入进行 {H_1} Hash询问,若 {L_1} 中不存在与 ID_j^* 相关的元组,则 \mathcal{C} 挑战失败;否则, \mathcal{C} {L_1} 中获取 \{ ID_i^*\} _{i = 1}^n 对应的公钥 \{ PK_{i,1}^*,PK_{i,2}^*\} _{i = 1}^n ,随机选择 \{ s{k_{i,2}} \in \mathbb{Z}_p^*\} _{i = 1}^n 并计算 \{ {C_{i,1}} = s{k_{i,2}}cP\} _{i = 1}^n ;然后 \mathcal{C} {L_2} {L_3} 中获取 \{ {U_i}\} _{i = 1}^n \{ {V_i}\} _{i = 1}^n ,并计算 v_i^* = {a_i}{U_i} + s{k_{i,2}}{V_i} = {t_i}C_{i,1}^* + s{k_{i,2}}{w_i}PK_{i,1}^* ,其中 {t_i} {w_i} s{k_{i,2}} 分别来自 {H_2} Hash询问、 {H_3} Hash询问与对 ID_j^* 的密钥提取询问;随后 \mathcal{C} 随机选择 \mu \in \{ 0,1\} 并计算 C_{i,4}^* = {H_4}({R_i}) \oplus ({m_{i,\mu }}||v_i^*) C_{i,3}^* = v_i^*P ,然后通过 {H_1} Hash询问获取公钥 \{ PK_{i,1}^*\} _{i = 1}^n 并输出 {\sigma ^*} = (C_{1,1}^*, \cdot \cdot \cdot ,C_{n,1}^*,C_{1,3}^*, \cdot \cdot \cdot ,C_{n,3}^*,C_{1,4}^*, \cdot \cdot \cdot ,C_{n,4}^*,PK_{1,1}^*, \cdot \cdot \cdot ,PK_{n,1}^*) {\mathcal{A}_2} .

    询问阶段2. {\mathcal{A}_2} 执行与询问阶段1类似的多项式有界次适应性查询,但不允许对 ID_i^* ID_j^* 对应的密文进行解签密查询.

    猜测阶段. {\mathcal{A}_2} 输出1个对 \mu 的猜测 \mu ' \in \{ 0,1\} ,如果 \mu ' = \mu ,则 {\mathcal{A}_2} 在以上游戏中获胜. \mathcal{C} 在列表 {L_4} 中选取 ({R_i},{H_4}({R_i})) 并以 {R_i} = abP 作为CDH困难问题的解,这与目前公认的CDH问题的难解性相矛盾.因此,本文方案在面对 {\mathcal{A}_2} 敌手时满足OW-CCA2安全. 证毕.

    将本文提出的方案与文献[22-26]方案在功能特性方面进行比较,对比结果如表1所示.与文献[23-24]方案相比,本文方案引入等值测试功能,实现了对存储在云端的医疗密文的安全检索.与文献[22,25-26]方案相比,本文方案引入了聚合签密技术,确保了WBAN中医疗数据的机密性、完整性与可认证性,提高了多用户环境下对医疗数据进行签密与验证的效率.文献[25-26]方案采用的等值测试方法只能对2个密文进行比较,本文方案实现了同时对多个密文进行匹配,降低了测试者执行密文等值测试时的开销.此外,与文献[22-23,25-26]方案相比,本文方案达到了适应性选择密文攻击下的单向性,安全性有所提升.

    表  1  功能特性比较
    Table  1.  Comparison of Functional Characteristics
    方案 | 等值测试 | 多密文等值测试 | 签密 | 聚合签密 | 安全性
    文献[22]方案 | √ | √ | × | × | 选择明文攻击下的单向性
    文献[23]方案 | × | × | √ | √ | 选择密文攻击下的不可区分性
    文献[24]方案 | × | × | √ | √ | 适应性选择密文攻击下的不可区分性
    文献[25]方案 | √ | × | × | × | 选择密文攻击下的单向性
    文献[26]方案 | √ | × | √ | × | 选择密文攻击下的单向性
    本文方案 | √ | √ | √ | √ | 适应性选择密文攻击下的单向性
    注:"×"表示不具有某种特定功能;"√"表示具有某种特定功能.

    本文所提新方案在执行多密文等值测试算法时,测试者通过对范德蒙矩阵求逆以提取出与数据拥有者明文相关的系数.其中,n阶范德蒙矩阵求逆算法的时间复杂度取决于所使用的求逆方法,已有许多学者提出了求解范德蒙矩阵逆矩阵的串行[27-28]与并行[29-30]方法,其时间复杂度如表2所示:

    表  2  范德蒙矩阵求逆算法复杂度
    Table  2.  Complexity of Inversion for Vandermonde Matrix
    方案时间复杂度
    文献[27]方案 O({n^2})
    文献[28]方案 O({n^2})
    文献[29]方案 O(\log n)
    文献[30]方案 O({(\log n)^2})
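    作为 O({n^2}) 量级做法的一个示意,下面的 Python 片段把范德蒙方程组视为插值问题,用牛顿差商在模 p 意义下求解并展开成单项式系数(这只是笔者给出的演示性实现,并非文献[27-30]的原始算法;p 为演示用素数):

```python
import random

p = 2**127 - 1

def vandermonde_solve_newton(Ns, ys):
    """解 V·f = y(V[i][j] = N_i^j):牛顿差商 O(n^2) + 展开 O(n^2)."""
    n = len(Ns)
    # 1) 牛顿差商表(就地更新)
    c = [y % p for y in ys]
    for k in range(1, n):
        for i in range(n - 1, k - 1, -1):
            c[i] = (c[i] - c[i - 1]) * pow((Ns[i] - Ns[i - k]) % p, -1, p) % p
    # 2) 由牛顿形式 f(X) = c0 + c1(X-N0) + c2(X-N0)(X-N1) + ... 展开为单项式系数
    f = [0] * n
    for k in range(n - 1, -1, -1):
        new = [0] * n                          # new(X) = f(X)*(X - N_k) + c_k
        for j in range(n - 1):
            new[j + 1] = (new[j + 1] + f[j]) % p
            new[j] = (new[j] - f[j] * Ns[k]) % p
        new[0] = (new[0] + c[k]) % p
        f = new
    return f

# 随机生成一组系数并验证能被精确恢复
n = 5
coeffs = [random.randrange(p) for _ in range(n)]
Ns = random.sample(range(1, 10**6), n)                       # 两两不同的节点
ys = [sum(coeffs[j] * pow(N, j, p) for j in range(n)) % p for N in Ns]
assert vandermonde_solve_newton(Ns, ys) == coeffs
```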

    将本文提出的方案在计算时间开销方面与文献[25-26]方案进行对比,假设参与密文等值测试的用户数量为n,使用 i7-8750H(2.20 GHz)处理器、8 GB内存和Win10操作系统,在VC6.0环境下用PBC库分别对本文方案与对比方案进行了仿真模拟,对比结果如表3所示.其中标量乘法运算时间Tsm = 0.0004 ms,群元素乘法运算时间Tmul = 0.0314 ms,Hash函数运算时间Th = 0.0001 ms,指数运算时间Te = 6.9866 ms,双线性配对时间Tbp = 9.6231 ms,范德蒙矩阵求逆时间Tinv取决于矩阵求逆方法.从表3可以看出,由于本文方案中不存在计算开销较大的双线性配对运算,因此在密文生成阶段的计算时间开销相比于文献[25-26]的方案有显著降低.在数据解密及验证阶段,非聚合模式下的文献[25-26]方案需要所有数据用户逐一对数据进行验证并解密,而本文方案中的数据用户能够对聚合密文进行批量验证,验证效率相比于文献[25-26]的方案有所提高.

    表  3  计算量比较(单位: ms)
    Table  3.  Computation Amount Comparison
    方案 | 密文生成时间 | 密文等值测试时间 | 数据解密及验证时间
    文献[25]方案 | n{T_{\text{mul}}} + 3n{T_{\text{bp}}} + 6n{T_{\text{h}}} + 5n{T_{\text{e}}} ( 63.8343n ) | (n - 1)(4{T_{\text{bp}}} + 2{T_{\text{h}}}) ( 38.4926n - 38.4926 ) | 2n{T_{\text{bp}}} + 4n{T_{\text{h}}} + 2n{T_{\text{e}}} ( 33.2198n )
    文献[26]方案 | 6n{T_{\text{sm}}} + 2n{T_{\text{bp}}} + 7n{T_{\text{h}}} + 2n{T_{\text{e}}} ( 33.2250n ) | (n - 1)(4{T_{\text{bp}}} + 2{T_{\text{h}}}) ( 38.4926n - 38.4926 ) | 3n{T_{\text{sm}}} + n{T_{\text{mul}}} + 5n{T_{\text{bp}}} + 5n{T_{\text{h}}} ( 48.1486n )
    本文方案 | 7n{T_{\text{sm}}} + n{T_{\text{mul}}} + n(n + 4){T_{\text{h}}} ( 0.0346n + 0.0001{n^2} ) | n{T_{\text{sm}}} + 2n{T_{\text{h}}} + {T_{\text{inv}}} ( {T_{\text{inv}}} + 0.0006n ) | n(2 + 4n){T_{\text{sm}}} + {n^2}{T_{\text{mul}}} + n(n + 4){T_{\text{h}}} ( 0.0012n + 0.0331{n^2} )
    注:n表示参与密文等值测试的用户数量;T_{\text{sm}}表示标量乘法运算时间;T_{\text{mul}}表示群元素乘法运算时间;T_{\text{h}}表示Hash函数运算时间;T_{\text{e}}表示指数运算时间;T_{\text{bp}}表示双线性配对时间;T_{\text{inv}}表示范德蒙矩阵求逆时间.

    此外,文献[25-26]方案仅支持将多个用户的密文两两一组进行匹配,其密文等值测试算法中双线性配对运算数量与参与测试的用户数量呈线性关系;而本文方案中,测试者可以同时对 n 个用户的密文进行匹配,且测试过程中不存在双线性配对运算.本文方案的等值测试时间主要取决于测试者对范德蒙矩阵求逆时所选取的算法,而在对范德蒙矩阵求逆的过程中仅进行标量加法与乘法等计算效率较高的运算[28],因此本文方案的密文等值测试效率同样高于文献[25-26]方案的效率.
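    按表3中的表达式可以粗略估算不同用户数量 n 下各方案的耗时量级,下面的 Python 片段给出一个简单的代入计算示例( T_{\text{inv}} 此处按 0 计,实际取决于所选的求逆算法;仅作量级比较,非实测数据):

```python
# 按表 3 的表达式估算三种方案在不同用户数 n 下的耗时(单位: ms)
def cost_ref25(n):   # 文献[25]方案:密文生成 / 等值测试 / 解密及验证
    return 63.8343 * n, 38.4926 * n - 38.4926, 33.2198 * n

def cost_ref26(n):   # 文献[26]方案
    return 33.2250 * n, 38.4926 * n - 38.4926, 48.1486 * n

def cost_ours(n, T_inv=0.0):   # 本文方案
    return (0.0346 * n + 0.0001 * n * n,
            T_inv + 0.0006 * n,
            0.0012 * n + 0.0331 * n * n)

for n in (10, 50, 100):
    print(n, cost_ref25(n), cost_ref26(n), cost_ours(n))
```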

    针对现有的WBAN密码方案在多用户环境下计算效率较低等问题,本文提出了支持多密文等值测试的WBAN聚合签密方案.该方案采用基于身份的密码体制,消除了传统公钥方案中证书管理的开销;引入多密文等值测试技术,实现了多数据用户对多医疗密文的同时检索;减少了多用户环境下密文等值测试的计算开销;利用聚合签密技术,提高了对多个用户的医疗数据进行签密的效率.本文方案满足医疗数据在传输过程中的机密性、完整性和可认证性,同时保证了数据拥有者签名的不可伪造性与测试陷门的单向性.与同类方案的对比分析结果表明,本文方案支持更多安全属性且计算开销更低.在未来的工作中,将尝试设计抗量子计算攻击的支持多密文等值测试的WBAN签密方案.

    作者贡献声明:杨小东负责论文整体思路与实验方案的设计;周航负责设计方案与撰写论文;任宁宁负责方案仿真与效率分析;袁森负责搜集应用场景相关资料;王彩芬提出指导意见并修改论文.

  • 图  1   FPGA稀疏化加速器整体架构

    Figure  1.   Overall architecture of FPGA sparse accelerator

    图  2   卷积计算数据流

    Figure  2.   Computing dataflow of convolution

    图  3   输入通道块内结构化剪枝方法

    Figure  3.   Block-based structured pruning method along input channel

    图  4   逐通道卷积权重填充策略

    Figure  4.   Weight expanding strategy of depth-wise convolution

    图  5   卷积层调度策略

    Figure  5.   Scheduling strategy of convolution layer

    图  6   集成稀疏化加速器和深度学习框架

    Figure  6.   Integrating sparse accelerator and deep learning framework

    图  7   插入量化节点的模型重构机制

    Figure  7.   Model reconstruction mechanism of inserting quantization node

    图  8   硬件加速器验证平台

    Figure  8.   Hardware accelerator verification platform

    图  9   ZU3EG平台上VGG16, ResNet18,MobileNetV1的各层延迟

    Figure  9.   Layer-wise latency of VGG16, ResNet18 and MobileNetV1 on ZU3EG

    图  10   CycloneV平台上VGG16, ResNet18,MobileNetV1的各层延迟

    Figure  10.   Layer-wise latency of VGG16, ResNet18 and MobileNetV1 on CycloneV

    图  11   加速器功耗和计算效率对比

    Figure  11.   Power and computing efficiency comparison of accelerators

    图  12   不同稀疏率下CNN模型的精度损失

    Figure  12.   Accuracy loss of CNN models with different sparsity ratios

    表  1   加速器乘累加单元配置

    Table  1   Accelerator MACs Unit Configuration

    加速器 | 输出通道并行因子 | 输入通道并行因子 | 剪枝后输入通道并行因子 | MACs个数
    SAF-CNN_Dense | 8 | 8 | — | 64
    SAF-CNN_Sparse | 16 | 16 | 4 | 64
    SAF-CNN_Sparse | 32 | 16 | 4 | 128
    SAF-CNN_Sparse | 32 | 32 | 8 | 256

    表  2   ZU3EG加速器与其他加速器计算性能与资源占用对比

    Table  2   Computing Performance and Resource Utilization Comparison of ZU3EG Accelerators and Other Accelerators

    加速器平台频率/MHz精度/bkLUTsDSPBRAM计算性能/GOPS
    Suda[51]Stratix-V GSD816120760117.8
    Wei[22]Arria 10 GT1150232831315008341171.3
    AccELB[52]ZC706886808303493
    FAQ-CNN[14]ZU102200821024837231229
    Caffeine[18]KU06020082001167841200
    Li[53]Virtex-7 VC709200162732144956565.9
    ZU3EG_64ZU3EG333821.813071128
    ZU3EG_128ZU3EG333828.9201107254.9
    ZU3EG_256ZU3EG333845.7330178494.3

    表  3   SAF-CNN加速器在CycloneV上的资源占用与计算性能

    Table  3   Resource Utilization and Computing Performance of SAF-CNN Accelerator on CycloneV

    编程语言 | 频率/MHz | kALMs | DSP | 存储/Mb | 计算性能/GOPS
    RTL | 150 | 21.9 | 64 | 2.5 | 76.3
    HLS | 150 | 25.7 | 68 | 4.1 | 41.8

    表  4   加速器硬件配置对比

    Table  4   Hardware Configuration Comparison of Accelerators

    平台 | 硬件型号 | 频率/MHz | LUT | BRAM | DSP
    DPU_S | ZU2EG | 430 | 31198 | 145 | 212
    DPU_L | ZU9EG | 333 | 161944 | 711 | 2070
    ZU3EG_64 | ZU3EG | 333 | 26575 | 84 | 137
    ZU3EG_128 | ZU3EG | 333 | 33496 | 120 | 208
    ZU3EG_256 | ZU3EG | 333 | 50506 | 191 | 337

    表  5   SSD_MobileNetV1 推理性能对比

    Table  5   Inference Performance Comparison of SSD_MobileNetV1

    平台 | 编程语言 | 硬件配置 | 帧率/fps | DSP效率/(fps/DSP)
    CPU0 | C++ | ARMV7 * 2 | 2.2 | —
    CPU1 | C++ | ARMV8 * 4 | 8.3 | —
    DPU_S | — | ZU2EG | 31 | 0.146
    DPU_L | — | ZU9EG | 124.3 | 0.06
    CycloneV_64 | HLS | ARMV7 + CycloneV | 5.3 | 0.078
    CycloneV_64 | RTL | ARMV7 + CycloneV | 10 | 0.156
    ZU3EG_64 | HLS | ARMV8 + ZU3EG | 20.2 | 0.147
    ZU3EG_128 | HLS | ARMV8 + ZU3EG | 25.6 | 0.123
    ZU3EG_256 | HLS | ARMV8 + ZU3EG | 26.5 | 0.079

    表  6   SSD_MobileNetV1推理延迟组成

    Table  6   Inference Latency Components of SSD_MobileNetV1

    推理结果 | ZU3EG_64(重构前) | ZU3EG_64(重构后) | ZU3EG_128(重构前) | ZU3EG_128(重构后) | ZU3EG_256(重构前) | ZU3EG_256(重构后)
    子图数量 | 6 | 1 | 6 | 1 | 6 | 1
    CPU端卷积层数量 | 18 | 0 | 18 | 0 | 18 | 0
    数据重排时延/ms | 5.1 | 3.1 | 4.7 | 3.1 | 4.9 | 3.1
    FPGA子图运行时延/ms | 27.1 | 32.1 | 17.1 | 22 | 15.1 | 20
    框架处理时延/ms | 78.9 | 14.4 | 81 | 14.5 | 79.5 | 14.7
    总时延/ms | 111.1 | 49.6 | 102.8 | 39.6 | 99.5 | 37.8
  • [1]

    Shafique M, Theocharides T, Reddy V J, et al. TinyML: Current progress, research challenges, and future roadmap[C]//Proc of the 58th ACM/IEEE Design Automation Conf (DAC). Piscataway, NJ: IEEE, 2021: 1303−1306

    [2]

    Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition[J]. arXiv preprint, arXiv: 1409.1556, 2014

    [3]

    Ren S, He Kaiming, Girshick R, et al. Faster R-CNN: Towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis & Machine Intelligence, 2017, 39(6): 1137−1149

    [4]

    He Kaiming, Gkioxari G, Dollár P, et al. Mask R-CNN[C]//Proc of the 16th Int Conf on Computer Vision. Piscataway, NJ: IEEE, 2017: 2961−2969

    [5]

    Li Guang, Wang Peisong, Liu Zejian, et al. Hardware acceleration of CNN with one-hot quantization of weights and activations [C]//Proc of the 23rd Design, Automation & Test in Europe Conf & Exhibition (DATE). Piscataway, NJ: IEEE, 2020: 971−974

    [6]

    Cong J, Xiao Bingjun. Minimizing computation in convolutional neural networks[C]//Proc of the 24th Int Conf on Artificial Neural Networks. Berlin: Springer, 2014: 281−290

    [7]

    Prost-Boucle A, Bourge A, Pétrot F, et al. Scalable high-performance architecture for convolutional ternary neural networks on FPGA[C/OL]//Proc of the 27th Int Conf on Field Programmable Logic and Applications (FPL). Piscataway, NJ: IEEE, 2017[2019-06-10].https://ieeexplore.ieee.org/document/8056850

    [8]

    Mousouliotis P G, Petrou L P. CNN-Grinder: From algorithmic to high-level synthesis descriptions of CNNs for low-end-low-cost FPGA SoCs[J]. Microprocessors and Microsystems, 2020, 73: 102990

    [9] 陈桂林,马胜,郭阳. 硬件加速神经网络综述[J]. 计算机研究与发展,2019,56(2):240−253 doi: 10.7544/issn1000-1239.2019.20170852

    Chen Guilin, Ma Sheng, Guo Yang. Survey on accelerating neural network with hardware[J]. Journal of Computer Research and Development, 2019, 56(2): 240−253 (in Chinese) doi: 10.7544/issn1000-1239.2019.20170852

    [10]

    Mishra R, Gupta H P, Dutta T. A survey on deep neural network compression: Challenges, overview, and solutions[J]. arXiv preprint, arXiv: 2010.03954, 2020

    [11]

    Courbariaux M, Hubara I, Soudry D, et al. Binarized neural networks: Training deep neural networks with weights and activations constrained to +1 or −1[J]. arXiv preprint, arXiv: 1602.02830, 2016

    [12]

    Li Fengfu, Zhang Bo, Liu Bin. Ternary weight networks[J]. arXiv preprint, arXiv: 1605.04711, 2016

    [13]

    Gondimalla A, Chesnut N, Thottethodi M, et al. SparTen: A sparse tensor accelerator for convolutional neural networks[C]//Proc of the 52nd Annual IEEE/ACM Int Symp on Microarchitecture. New York: ACM, 2019: 151−165

    [14] 谢坤鹏,卢冶,靳宗明,等. FAQ-CNN:面向量化卷积神经网络的嵌入式FPGA可扩展加速框架[J]. 计算机研究与发展,2022,59(7):1409−1427 doi: 10.7544/issn1000-1239.20210142

    Xie Kunpeng, Lu Ye, Jin Zongming, et al. FAQ-CNN: A flexible acceleration framework for quantized convolutional neural networks on embedded FPGAs[J]. Journal of Computer Research and Development, 2022, 59(7): 1409−1427 (in Chinese) doi: 10.7544/issn1000-1239.20210142

    [15]

    Chollet F. Xception: Deep learning with depthwise separable convolutions[C]//Proc of the 30th IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2017: 1251−1258

    [16]

    Abadi M, Barham P, Chen Jianmin, et al. TensorFlow: A system for large-scale machine learning[C]//Proc of the 12th USENIX Symp on Operating Systems Design and Implementation. Berkeley, CA: USENIX, 2016: 265−283

    [17]

    Paszke A, Gross S, Massa F, et al. PyTorch: An imperative style, high-performance deep learning library[C]// Proc of the 32nd Conf on Neural Information Processing Systems. New York: Curran Associates, 2019: 8024−8035

    [18]

    Zhang Chen, Sun Guangyu, Fang Zhenman, et al. Caffeine: Toward uniformed representation and acceleration for deep convolutional neural networks[J]. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2018, 38(11): 2072−2085

    [19]

    Howard A G, Zhu Menglong, Chen Bo, et al. MobileNets: Efficient convolutional neural networks for mobile vision applications[J]. arXiv preprint, arXiv: 1704.04861, 2017

    [20]

    Wu Di, Zhang Yu, Jia Xijie, et al. A high-performance CNN processor based on FPGA for MobileNets[C]//Proc of the 29th Int Conf on Field Programmable Logic and Applications (FPL). Piscataway, NJ: IEEE, 2019: 136−143

    [21]

    Ding Wei, Huang Zeyu, Huang Zunkai, et al. Designing efficient accelerator of depthwise separable convolutional neural network on FPGA[J]. Journal of Systems Architecture, 2019, 97: 278−286 doi: 10.1016/j.sysarc.2018.12.008

    [22]

    Wei Xuechao, Yu C H, Zhang Peng, et al. Automated systolic array architecture synthesis for high throughput CNN inference on FPGAs[C/OL]//Proc of the 54th Annual Design Automation Conf. New York: ACM, 2017[2020-05-08].https://ieeexplore.ieee.org/document/8060313

    [23]

    Cong J, Wei Peng, Yu C H, et al. Automated accelerator generation and optimization with composable, parallel and pipeline architecture[C/OL]//Proc of the 55th ACM/ESDA/IEEE Design Automation Conf (DAC). Piscataway, NJ: IEEE, 2018[2021-01-03].https://ieeexplore.ieee.org/document/8465940

    [24]

    Véstias M, Duarte R P, de Sousa J T, et al. A fast and scalable architecture to run convolutional neural networks in low density FPGAs[J/OL]. Microprocessors and Microsystems, 2020, 77 [2021-12-04].https://www.sciencedirect.com/science/article/pii/S0141933120303033

    [25] 百度. 飞桨: 源于产业实践的开源深度学习平台 [EB/OL]. [2020-08-10].https://www.paddlepaddle.org.cn/

    Baidu. PaddlePaddle: An open source deep learning platform derived from industrial practice[EB/OL]. [2020-08-10].https://www.paddlepaddle.org.cn/ (in Chinese)

    [26]

    Zhou Shuchang, Wu Yuxin, Ni Zekun, et al. Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients[J]. arXiv preprint, arXiv: 1606.06160, 2016

    [27]

    Miyashita D, Lee E H, Murmann B. Convolutional neural networks using logarithmic data representation[J]. arXiv preprint, arXiv: 1603. 01025, 2016

    [28]

    Han Song, Mao Huizi, Dally W J. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding[J]. arXiv preprint, arXiv: 1510.00149, 2015

    [29]

    Liu Zhuang, Li Jianguo, Shen Zhiqiang, et al. Learning efficient convolutional networks through network slimming[C]//Proc of the 16th Int Conf on Computer Vision. Piscataway, NJ: IEEE, 2017: 2736−2744

    [30]

    Ma Xiaolong, Lin Sheng, Ye Shaokai, et al. Non-structured DNN weight pruning−Is it beneficial in any platform?[J/OL]. IEEE Transactions on Neural Networks and Learning Systems, 2021[2022-05-01].https://ieeexplore.ieee.org/abstract/document/9381660

    [31]

    Song Linghao, Chi Yuze, Guo Licheng, et al. Serpens: A high bandwidth memory based accelerator for general-purpose sparse matrix-vector multiplication[C]//Proc of the 59th ACM/IEEE Design Automation Conf (DAC). Piscataway, NJ: IEEE, 2022: 211−216

    [32]

    Li Shiyu, Hanson E, Qian Xuehai, et al. ESCALATE: Boosting the efficiency of sparse CNN accelerator with kernel decomposition[C]//Proc of the 54th Annual IEEE/ACM Int Symp on Microarchitecture. New York: ACM, 2021: 992−1004

    [33]

    Li Hao, Kadav A, Durdanovic I, et al. Pruning filters for efficient convnets[J]. arXiv preprint, arXiv: 1608.08710, 2016

    [34]

    Tan Zhanhong, Song Jiebo, Ma Xiaolong, et al. PCNN: Pattern-based fine-grained regular pruning towards optimizing CNN accelerators[C/OL]//Proc of the 57th ACM/IEEE Design Automation Conf (DAC). Piscataway, NJ: IEEE, 2020[2021-10-03].https://dl.acm.org/doi/10.5555/3437539.3437730

    [35]

    Mao Huizi, Han Song, Pool J, et al. Exploring the granularity of sparsity in convolutional neural networks[C]//Proc of the 30th IEEE/CVF Conf on Computer Vision and Pattern Recognition Workshops. Los Alamitos, CA: IEEE Computer Society, 2017: 1927−1934

    [36]

    Zhang Chen, Li Peng, Sun Guangyu, et al. Optimizing FPGA-based accelerator design for deep convolutional neural networks[C]//Proc of the 23rd ACM/SIGDA Int Symp on Field-Programmable Gate Arrays. New York: ACM, 2015: 161−170

    [37] 卢冶,陈瑶,李涛,等. 面向边缘计算的嵌入式FPGA卷积神经网络构建方法[J]. 计算机研究与发展,2018,55(3):551−562 doi: 10.7544/issn1000-1239.2018.20170715

    Lu Ye, Chen Yao, Li Tao, et al. Convolutional neural network construction method for embedded FPGAs oriented edge computing[J]. Journal of Computer Research and Development, 2018, 55(3): 551−562 (in Chinese) doi: 10.7544/issn1000-1239.2018.20170715

    [38]

    Lu Liqiang, Liang Yun, Xiao Qingcheng, et al. Evaluating fast algorithms for convolutional neural networks on FPGAs[C]//Proc of the 25th Annual Int Symp on Field-Programmable Custom Computing Machines (FCCM). Piscataway, NJ: IEEE, 2017: 101−108

    [39]

    Wu Haoning, Huang C T. Data locality optimization of depthwise separable convolutions for CNN inference accelerators[C]//Proc of the 22nd Design, Automation & Test in Europe Conf & Exhibition (DATE). Piscataway, NJ: IEEE, 2019: 120−125

    [40]

    Zhang Xiaofan, Lu Haoming, Hao Cong, et al. SkyNet: A hardware-efficient method for object detection and tracking on embedded systems[J]. Proceedings of Machine Learning and Systems, 2020, 2: 216−229

    [41]

    Yan Shun, Liu Zhengyan, Wang Yun, et al. An FPGA-based MobileNet accelerator considering network structure characteristics[C]//Proc of the 31st Int Conf on Field Programmable Logic and Applications (FPL). Piscataway, NJ: IEEE, 2021: 17−23

    [42]

    Du Zidong, Fasthuber R, Chen Tianshi, et al. ShiDianNao: Shifting vision processing closer to the sensor[C]//Proc of the 42nd Annual Int Symp on Computer Architecture. New York: ACM, 2015: 92−104

    [43]

    Zhang Zhichao, Mahmud M A P, Kouzani A Z. FitNN: A low-resource FPGA-based CNN accelerator for drones[J/OL]. IEEE Internet of Things Journal, 2022[2022-08-01].https://ieeexplore.ieee.org/abstract/document/9785605

    [44]

    Chen Tianqi, Moreau T, Jiang Ziheng, et al. TVM: An automated end-to-end optimizing compiler for deep learning[C]//Proc of the 13th USENIX Symp on Operating Systems Design and Implementation. Berkeley, CA: USENIX Association, 2018: 578−594

    [45]

    Li Rengang, Kan Hongwei, Su Dongdong, et al. An optimal design method of Conv2d operator for TensorFlow based on FPGA accelerator[C/OL]//Proc of the 4th Int Conf on Computer Science and Application Engineering. New York: ACM, 2020[2022-08-03]. https://dl.acm.org/doi/10.1145/3424978.3424987

    [46]

    Nunez-Yanez J. Fused architecture for dense and sparse matrix processing in TensorFlow Lite[J/OL]. IEEE Micro, 2022[2022-08-15].https://ieeexplore.ieee.org/abstract/document/9851516

    [47]

    Martone M, Filippone S, Tucci S, et al. Use of hybrid recursive csr/coo data structures in sparse matrix-vector multiplication[C]//Proc of the Int Multiconference on Computer Science and Information Technology. Piscataway, NJ: IEEE, 2010: 327−335

    [48]

    Russakovsky O, Deng Jia, Su Hao, et al. ImageNet large scale visual recognition challenge[J]. International Journal of Computer Vision, 2015, 115(3): 211−252 doi: 10.1007/s11263-015-0816-y

    [49]

    Everingham M, Eslami S M, Van Gool L, et al. The pascal visual object classes challenge: A retrospective[J]. International Journal of Computer Vision, 2015, 111(1): 98−136 doi: 10.1007/s11263-014-0733-5

    [50]

    He Kaiming, Zhang Xiangyu, Ren Shaoqing, et al. Deep residual learning for image recognition[C]//Proc of the 29th IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2016: 770−778

    [51]

    Suda N, Chandra V, Dasika G, et al. Throughput-optimized OpenCL-based FPGA accelerator for large-scale convolutional neural networks[C]//Proc of the 24th ACM/SIGDA Int Symp on Field-Programmable Gate Arrays. New York: ACM, 2016: 16−25

    [52]

    Wang Junsong, Lou Qiuwen, Zhang Xiaofan, et al. Design flow of accelerating hybrid extremely low bit-width neural network in embedded FPGA[C]//Proc of the 28th Int Conf on Field Programmable Logic and Applications (FPL). Piscataway, NJ: IEEE, 2018: 163−1636

    [53]

    Li Huimin, Fan Xitian, Jiao Li, et al. A high performance FPGA-based accelerator for large-scale convolutional neural networks[C/OL]//Proc of the 26th Int Conf on Field Programmable Logic and Applications (FPL). Piscataway, NJ: IEEE, 2016[2019-05-20].https://ieeexplore.ieee.org/document/7577308


出版历程
  • 收稿日期:  2022-08-15
  • 修回日期:  2023-03-30
  • 网络出版日期:  2023-04-09
  • 刊出日期:  2023-05-11
