云计算的快速发展带动了云存储的普及,越来越多的用户和企业选择将数据存储在云服务器中[1]. 云服务器通常由不可信第三方维护和管理,存在诸多安全隐患. 如果将数据直接存储在云端,可能会使数据被未授权实体访问,从而导致隐私泄露. 一种保护数据隐私和实现安全数据共享的方式是采用公钥加密算法将数据加密后再存储到云服务器[2],但这种方式极大地阻碍了数据的可用性. 当用户对存储在云服务器的加密数据进行搜索时,最直接的方法是将所有加密数据下载到本地后执行解密再对明文信息进行搜索,显然这种方法极其繁琐且低效.
为了解决数据机密性和可搜索性之间的矛盾,Boneh等人[3]提出了支持关键词检索的公钥加密(public-key encryption with keyword search,PKEKS)的概念. 在PKEKS系统中,发送者利用接收者的公钥和关键词生成关键词密文,并附在对应的文件密文后上传到云服务器. 接收者可以利用自己的私钥和关键词生成陷门上传至云服务器,随后云服务器利用陷门对关键词密文执行检索,以判断关键词密文是否包含陷门中嵌入的关键词. 在检索过程中,云服务器无法获取密文对应的关键词信息以及接收者私钥,因此不会泄露用户隐私. 此后,大量的PKEKS方案被提出,以进一步提升公钥可搜索加密的安全性[4-5]、效率[6-7]和功能[8]. 尽管PKEKS支持对密文的搜索功能,但其存在功能上的限制:无法对用不同公钥加密的2条信息进行检索. 同时还存在严重的安全隐患:关键词空间远小于密钥空间,攻击者可以借此实施关键词猜测攻击.
为了解决上述问题,Yang等人[9]在2010年提出了支持等式测试的公钥加密(public-key encryption with equality test, PKEET)方案. PKEET允许任何用户对不同公钥加密生成的密文进行对比以判断其中是否包含相同的明文. 由于对比的密文空间远大于密钥空间,关键词猜测攻击无法对PKEET系统生效. 受Yang等人[9]的启发,国内外学者围绕PKEET的授权测试[10-11]、通用构造[12]和应用场景(无证书[13]、签密[14]、异构[15])等展开大量研究,一系列具有等式测试功能的加密方案被陆续提出. 在传统支持等式测试的公钥加密方案中,公钥是一个不可读的字符串,需要公钥基础设施[16](public key infrastructure, PKI)系统来签发公钥证书以绑定用户的身份与公钥. 公钥证书包括用户的身份信息、权威机构的签名和各种参数,以结构化数据的形式存储. 这种复杂且昂贵的证书管理方式在实际应用场景中带来了棘手的证书管理问题. 基于此,Ma[17]受标识密码体制思想启发[18],构造了支持等式测试的标识加密(identity-based encryption with equality test,IBEET)体制. 在IBEET中,用户的公钥根据其身份信息生成,而用户的私钥由私钥生成中心(private key generator,PKG)构建.
目前已有的IBEET方案虽然避免了证书管理的问题,却存在着严重的安全隐患:大部分IBEET方案都难以抵抗渗透攻击. 斯诺登事件表明[19],攻击者可以在正常使用条件下秘密设置后门以泄露用户隐私. 受此启发,Bellare等人[20]提出的算法篡改攻击(algorithm-substitution attacks,ASA)表明攻击者可以非法占据用户的个人设备来篡改加密算法,从被篡改的密文中获取明文信息. 具体来说,Bellare等人[20]构建了颠覆加密(subverting encryption)框架,对ASA的受害者实施IV篡改攻击(IV-replacement attacks)和偏密文攻击(the biased-ciphertext attack):攻击者利用颠覆密钥篡改随机化输入IV,继而跟踪篡改密文以获取明文. 目前大量的无状态随机化密码算法被证明几乎无法抵抗这类ASA. 因此,一旦用户遭遇ASA,云服务器上的所有加密数据都有被泄露的风险. 考虑到ASA的危害性,Mironov等人[21]在斯诺登事件之后提出了密码逆向防火墙(cryptographic reverse firewalls,CRF)的概念. CRF可以被认为是部署在用户与外部世界之间的一个实体,通过重随机化用户发送和接收的信息,将其映射到与原始输出相同的空间中,达到抵抗隐私泄露的作用. CRF具有3个性质:维持功能性、保留安全性和抵抗渗透性. 到目前为止,CRF已被成功应用于密钥协商协议[22]、基于属性的加密体制[23]和基于签名的不经意电子信封[24]. 将CRF部署在云服务器与用户之间,使其分别负责用户密文与陷门的重随机化,由于用户输出的是重随机化结果,即便算法遭到篡改,用户隐私也不会泄露. 基于此,有必要为IBEET构建CRF.
另外,目前的IBEET方案都基于国外密码算法设计,且存在计算与通信开销大的问题,尚未出现支持国产商用密码算法的高效等式测试方案. SM9作为一种具有效率优势的双线性对标识密码算法(identity-based cryptographic algorithm),可以良好地拓展至IBEET领域中. 2016年我国国家密码管理局正式发布SM9密码算法,其相关标准为“GM/T 0044—2016 SM9标识密码算法”. SM9标识密码算法主要包括4个部分:数字签名算法、密钥交换协议、密钥封装机制和标识加密算法. 其中SM9标识加密算法于2021年正式成为国际标准(ISO/IEC 18033-5:2015/AMD 1:2021). 虽然SM9在密码技术和网络安全领域占据越来越重要的地位,但关于SM9在云环境安全方面的研究却寥寥无几. 因此,为了实现我国密码算法的自主可控,提高其在信息安全领域的核心竞争力,亟需推进国密算法在云计算场景中的研究应用. 基于此,本文提出了一种支持等式测试并具有密码逆向防火墙的SM9标识加密方案(SM9 identity-based encryption scheme with equality test and cryptographic reverse firewalls,SM9-IBEET-CRF). 本文的贡献有3点:
1) 本文将SM9标识加密算法应用于等式测试这一密码学原语,提出了支持等式测试的SM9标识加密方案(SM9 identity-based encryption scheme with equality test,SM9-IBEET). 利用SM9是标识密码算法这一性质,避免了传统等式测试方案中的证书管理问题. 与传统IBEET方案相比,本方案在安全性和计算开销上均具有优势,同时丰富了国产商用密码算法在云计算领域的研究.
2) 解决传统IBEET体制难以抵抗渗透攻击的问题. 本文将CRF部署在用户与云服务器之间的上行信道,分别实现用户密文、陷门的重随机化. 攻击者即使非法占据用户设备,由于其输出结果要经过CRF的处理,无法造成明文信息泄露.
3) 本文给出了形式化的安全性证明. 严谨的安全性分析证明本文方案满足选择密文攻击下的不可区分性(IND-CCA)和选择密文攻击下的单向性(OW-CCA),CRF的设置使其具备抵抗算法篡改攻击的能力. 大量的实验仿真表明,SM9-IBEET-CRF在计算与通信开销上具有一定的优势,适用于云计算场景. 相关工作介绍如下:
1) 支持等式测试的公钥加密体制. 支持等式测试的公钥加密方案首先由Yang等人[9]在2010年提出. 该方案允许任何用户对不同公钥加密的密文进行比较,解决了关键词检索公钥加密方案的局限性. 接下来,Tang[25]为PKEET提出了具有细粒度的授权机制,以确保只有被授权的用户才有能力执行等式测试. 此外,Tang[10]提出了混合粒度授权的支持等式测试的公钥加密方案(all-or-nothing PKE-ET,AoN-PKE-ET)来实现粗细粒度授权. 然而,在实际中云服务器与用户需要交互来授权,导致了方案的不可拓展性. 为应对这一挑战,Tang[26]和Ma[11]分别提出了带有灵活授权机制的等式测试方案,其基本思想是用户单独对云服务器进行授权. 为了解决PKEET中的密钥管理问题,Ma[17]将标识加密体制集成到PKEET中,提出一种IBEET方案. Qu等人[13]提出的无证书等式测试加密方案(certificateless-based encryption with equality test,CLEET),旨在同时避免密钥托管和证书管理的问题.Wang等人[14]将签密的概念引入等式测试中,有效降低了计算和通信开销. Xiong等人[15]提出的异构签密等式测试方案(heterogeneous signcryption scheme with equality test,HSC-ET),则实现了IBE与PKE的异构等式测试系统. 目前已有的PKEET体制难以抵抗渗透攻击.
2) 密码逆向防火墙. 斯诺登事件爆发后,为了保护用户隐私和维持密码方案安全性,Mironov等人[21]提出CRF的概念. CRF位于用户计算机与外部世界之间,通过修改用户设备输入和输出,为受到篡改算法攻击的用户提供安全保护. 2016年,Dodis等人[22]设计了一种具有逆向防火墙的消息传输协议,缩短公钥交换轮数至4轮. 同年,Chen等人[24]基于可延展的平滑映射哈希函数(smooth projective Hash function,SPHF)为多个密码协议设计了CRF框架. 2018年,Ma等人[23]将逆向防火墙的概念引入属性基加密体制,提出了一种基于密文在线/离线属性的加密算法. 2019年,Zhou等人[27]提出具有逆向防火墙的标识加密体制(identity-based encryption with cryptographic reverse firewalls,IBE-CRF),为IBE设计了2种CRF方案. 目前还不存在具有逆向防火墙的PKEET方案.
1. 基础知识
1.1 非对称双线性对
给定3个循环群G1,G2,GT,它们的阶均为素数N,P1为G1的生成元,P2为G2的生成元,存在非对称双线性对e:G1×G2→GT,满足3个条件:
1) 双线性. 对任意P∈G1,Q∈G2,a,b∈ZN,有e([a]P,[b]Q)=e(P,Q)^{ab}.
2) 非退化性. e(P1,P2)≠1.
3) 可计算性. 对任意P∈G1,Q∈G2,存在有效的算法计算e(P,Q).
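为直观理解双线性性质,下面给出一个离散对数表示下的玩具式Python示意:群元素用其关于生成元的离散对数表示,配对结果用指数 a*b mod N 表示. 这只是示意性草图,群阶N等参数均为假设的小素数,真实方案基于椭圆曲线群:

```python
# 双线性性质的玩具模型:用元素关于生成元的离散对数表示群元素,
# 用 a*b mod N 表示 e([a]P1, [b]P2) 关于 e(P1, P2) 的离散对数.
# 仅作示意,N 为假设的小素数阶,非真实安全参数.
N = 1000003  # 玩具素数阶(示意用)

def pair(a: int, b: int) -> int:
    """返回 e([a]P1, [b]P2) 关于 e(P1, P2) 的离散对数,即 a*b mod N."""
    return (a * b) % N

a1, a2, b = 1234, 4321, 5678
# 双线性:e([a1+a2]P1, [b]P2) = e([a1]P1, [b]P2) * e([a2]P1, [b]P2)
# (GT 中的乘法对应离散对数的加法)
assert pair((a1 + a2) % N, b) == (pair(a1, b) + pair(a2, b)) % N
# 标量可在两个槽位间移动:e([a1]P1, [b]P2) = e(P1, P2)^(a1*b)
assert pair(a1, b) == pair(1, a1 * b % N)
```

该模型中GT元素以其关于e(P1,P2)的对数表示,因而配对值相等当且仅当对数相等,这一表示在后文验证相关等式时同样适用.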
1.2 BDH假设
BDH假设首先由Boneh等人[28]在对称双线性配对中提出,然后被拓展到非对称双线性对中. 本文使用Boyen等人[29]在非对称双线性对中推广的BDH假设.
1) BDH问题. 给定(P1,P2,[a]P1,[a]P2,[b]Pi,[c]Pj),其中a,b,c∈Z*N,i,j∈{1,2},计算e(P1,P2)^{abc}是困难的.
2) BDH假设. 给定一个BDH问题实例,不存在PPT攻击者A能以不可忽略的优势计算出e(P1,P2)^{abc},其中A的优势被定义为
Pr[A(P1,P2,[a]P1,[a]P2,[b]Pi,[c]Pj)=e(P1,P2)^{abc}].
1.3 密码逆向防火墙(CRF)
CRF位于用户计算机与外部实体之间,只能修改用户的输入输出消息. 对于用户而言,他们并不知道CRF的存在.
CRF是一种有状态的算法 \mathcal{W} ,它以当前状态和消息作为输入,输出更新后的状态和消息. 对于初始状态为 \sigma 的参与方 P=(receive,next,output) 和逆向防火墙 \mathcal{W} ,二者的组合 \mathcal{W}\circ P 被定义为
receive_{\mathcal{W}\circ P}(\sigma ,m) = receive_P(\sigma ,\mathcal{W}(m)) \text{,} next_{\mathcal{W}\circ P}(\sigma ) = \mathcal{W}(next_P(\sigma )) \text{,} output_{\mathcal{W}\circ P}(\sigma ) = output_P(\sigma ). 当组合方 \mathcal{W}\circ P 参与协议时, \mathcal{W} 的状态被初始化为系统公开参数params. 若 \mathcal{W} 与参与方 P 组合,则称 \mathcal{W} 为参与方 P 的逆向防火墙.
显然,参与方P希望获得管理并部署多台防火墙的权力. 这种多个防火墙的组合(W∘W∘…∘W∘P)只会增强系统的安全性,而不会破坏其初始协议的功能.
定义1. CRF具有维持功能性. 对任意逆向防火墙 \mathcal{W} 与参与方 P ,令 {\mathcal{W}^1} \circ P = \mathcal{W} \circ P ;对任意多项式有界的 k \geqslant 1 ,令 {\mathcal{W}^k} \circ P = \mathcal{W} \circ ({\mathcal{W}^{k - 1}} \circ P) . 在协议 \mathcal{P} 中,如果对任意 k \geqslant 1 , {\mathcal{W}^k} \circ P 均维持参与方 P 的功能 \mathcal{F} ,即 {\mathcal{F}_{{\mathcal{W}^k} \circ P}} = {\mathcal{F}_P} ,则称CRF具有维持功能性.
定义2. CRF具有保留安全性. 对满足安全性 \mathcal{S} ,功能 \mathcal{F} 的协议 \mathcal{P} 与逆向防火墙 \mathcal{W} :
1) 对于任意改变协议功能 \mathcal{F} 的PPT攻击者 P_\mathcal{A}^* ,如果协议 {\mathcal{P}_{P \Rightarrow \mathcal{W}^\circ P_{\mathcal{A}}^*}} 仍满足安全性 \mathcal{S} ,则称CRF对于协议参与方P具有强保留安全性.
2) 对于任意不改变协议功能 \mathcal{F} 的PPT攻击者 \hat P ,如果协议 {\mathcal{P}_{P \Rightarrow \mathcal{W}^\circ \hat P}} 仍满足安全性 \mathcal{S} ,则称CRF对于协议参与方P具有弱保留安全性.
当PPT攻击者 P_\mathcal{A}^* 破坏协议功能 \mathcal{F} 时,由于协议参与方P 可以很快感知到这种破坏,在云计算场景中攻击效果较差,是一种罕见的攻击方式. 因此,本文主要证明CRF对于不破坏协议功能的ASA安全性.
定义3. CRF具有抵抗渗透性. CRF具有阻止被篡改的参与方P泄露消息的能力. 使用Mironov等人[21]定义的泄露游戏 LEAK(\mathcal{P},P,J,\mathcal{W},\lambda ) ,如图1所示: \mathcal{P} 代表密码协议,P为协议正常参与方,J代表被敌手控制的协议参与方, \mathcal{W} 为逆向防火墙, \lambda 表示系统参数. P_\mathcal{A}^*,P_\mathcal{B}^* 代表被敌手控制的协议参与方, {\sigma _{P_\mathcal{B}^*}} 为协议运行后 P_\mathcal{B}^* 的状态. 记敌手在游戏LEAK中的优势为 Adv_{\mathcal{A},\mathcal{W}}^{LEAK}(\lambda ) :
1) 对于任意改变协议功能 \mathcal{F} 的PPT攻击者 P_\mathcal{A}^* ,若其优势 Adv_{\mathcal{A},\mathcal{W}}^{LEAK}(\lambda ) 是可忽略的,则称CRF对于协议参与方P具有强抵抗渗透性.
2) 对于任意不改变协议功能 \mathcal{F} 的PPT攻击者 {\hat P_\mathcal{A}} ,若其优势 Adv_{\mathcal{A},\mathcal{W}}^{LEAK}(\lambda ) 是可忽略的,则称CRF对于协议参与方P具有弱抵抗渗透性.
与定义2相同,本文主要讨论CRF对于不破坏协议功能的ASA安全性.
2. 方案定义
本节给出本文方案SM9-IBEET-CRF的系统模型与形式化定义,并通过考虑3种不同的敌手来定义安全模型.
2.1 系统模型
系统模型如图2所示,在SM9-IBEET-CRF中,存在5种实体.
1) 数据上传者. 生成密文并将其上传到云服务器的实体.
2) 数据接收者. 可以从云服务器下载密文并解密,或者可以委托云服务器执行等式测试的实体.
3) 云服务器. 存储密文,在收到用户请求后可以执行等式测试但无法解密的实体.
4) 密钥生成中心(KGC). 为用户秘密地生成并且分配密钥的实体.
5) 密码逆向防火墙(CRF). 部署在用户(数据上传者和数据接收者)与云服务器的上行信道中,重随机化用户密文与陷门,再发送给云服务器的实体.
KGC初始化系统,根据用户的身份来生成其私钥,并秘密传输给用户. 数据上传者计算接收者的公钥来生成密文,然后上传到云服务器. 密文在上传过程中会受到CRF的重随机化处理,而数据上传者并不知道有这个过程. 在任何时候,数据接收者都可以从云服务器下载密文,并使用KGC生成的私钥来解密数据. 当接收者想要测试其存储在云服务器的密文时,可以利用自己的私钥计算出陷门并将其发送给云服务器进行测试,但是并没有给云服务器提供解密的能力. 陷门在上传过程中会受到CRF的重随机化处理,而数据接收者不会知道有这个过程.
2.2 形式化定义
SM9-IBEET-CRF方案由8个算法组成.
1) 系统建立 Setup. 输入安全参数k,KGC 运行该算法生成系统公开参数params(其中包括消息空间的描述)和系统主密钥.
2) 私钥提取 KeyExtract. 输入系统公开参数params、用户ID以及主私钥s,KGC 运行该算法生成用户身份所对应的私钥d.
3) 陷门生成 Trapdoor. 输入用户ID以及用户私钥d,输出对应的陷门 td .
4) 加密 Encrypt. 输入明文M和用户ID,输出密文C.
5) 重随机化密文 ReEncrypt. 输入密文C,CRF运行该算法输出对应的重随机化密文 C' .
6) 密文解密 Decrypt. 输入密文C、用户ID和用户私钥 d ,解密输出明文M.
7) 重随机化陷门 ReTrapdoor. 输入陷门td,CRF运行该算法输出对应的重随机化陷门 td' .
8) 等式测试 Test. 分别输入 I{D_A} 对应的密文 {C_A} 和陷门 t{d_A} , I{D_B} 对应的密文 {C_B} 和陷门 t{d_B} ,云服务器执行等式测试,若 {C_A} 和 {C_B} 的内容为相同的明文,则输出1,否则输出0.
根据IBEET的安全模型,SM9-IBEET需要考虑3种类型的敌手.
1) Ⅰ型敌手. 这类敌手没有目标用户的陷门,不能执行等式测试,其目的是在2个挑战密文中做区分. 我们针对这种类型的敌手定义了IBE-IND-CCA安全游戏Game 1.
安全游戏Game 1中让 {\mathcal{A}_1} 表示Ⅰ型敌手,挑战者 \mathcal{C} 与 {\mathcal{A}_1} 按如下顺序进行游戏:
①系统建立 Setup.挑战者 \mathcal{C} 执行系统建立Setup算法生成系统参数params和主私钥对. \mathcal{C} 保存主私钥对并将params发送给 {\mathcal{A}_1} .
② 阶段1. {\mathcal{A}_1} 可以自适应地执行查询:
i)公钥查询. 当接收到身份为IDi的公钥询问时, \mathcal{C} 通过运用主私钥对计算,生成公钥 {Q_i} 并发送给 {\mathcal{A}_1} .
ii)私钥查询. 当接收到身份为IDi的私钥询问时, \mathcal{C} 通过执行私钥提取算法KeyExtract生成私钥 {d_i} 并发送给 {\mathcal{A}_1} .
iii)解密查询. 当接收到身份为IDi以及密文 C 的解密询问时, \mathcal{C} 执行密文解密算法Decrypt生成明文M并返回给 {\mathcal{A}_1} .
③挑战. 敌手 {\mathcal{A}_1} 将身份ID*以及消息 M_0^*,M_1^* (二者长度相同)发送给 \mathcal{C} ,且在阶段1中,ID*对应的私钥没有被询问过. \mathcal{C} 随机选取 \rho \in \{ 0,1\} ,计算 {C^*} = Encrypt({M_\rho },I{D^*}) 并发送给 {\mathcal{A}_1} .
④阶段2. {\mathcal{A}_1} 像在阶段1一样发出询问,但是ID*对应的私钥以及密文 {C^*} 不可以被询问到.
⑤ 猜测. {\mathcal{A}_1} 输出 \rho ' \in \{ 0,1\} .
定义4. 如果对于任意多项式时间攻击者 {\mathcal{A}_1} ,在IBE-IND-CCA游戏中的优势 Ad{v}_{\text{SM9-IBEET,Type-I}}^{\text{IBE-IND-CCA}}({\mathcal{A}}_{1})=\left|2Pr\left[\rho = {\rho }^{\prime }\right] -1\right| 都是可忽略的,则SM9-IBEET方案是满足IBE-IND-CCA安全的.
2)Ⅱ型敌手. 这类敌手拥有目标用户密文的陷门,因此可以执行挑战密文的等式测试,其目的是为了揭示挑战密文对应的消息. 我们针对这种类型的敌手定义了IBE-OW-CCA安全游戏Game 2.
安全游戏Game 2中让 {\mathcal{A}_2} 表示Ⅱ型敌手,挑战者 \mathcal{C} 与 {\mathcal{A}_2} 按如下顺序进行游戏:
①系统建立 Setup. 挑战者 \mathcal{C} 执行系统建立Setup算法生成系统参数params和主私钥对. \mathcal{C} 保存主私钥对并将params发送给 {\mathcal{A}_2} .
②阶段1. {\mathcal{A}_2} 可以自适应地执行查询:
i)公钥查询. 当接收到身份为IDi的公钥询问时, \mathcal{C} 通过运用主私钥对计算,生成公钥 {Q_i} 并发送给 {\mathcal{A}_2} .
ii)私钥查询. 当接收到身份为IDi的私钥询问时, \mathcal{C} 通过执行私钥提取算法KeyExtract生成私钥 {d_i} 并发送给 {\mathcal{A}_2} .
iii)陷门查询. 当接收到身份为IDi的陷门询问时, \mathcal{C} 通过执行陷门生成算法Trapdoor生成陷门 t{d_i} 并发送给 {\mathcal{A}_2} .
iv)解密查询. 当接收到身份为IDi以及密文 C 的解密询问时, \mathcal{C} 执行密文解密算法Decrypt生成明文M并返回给 {\mathcal{A}_2} .
③挑战. 敌手 {\mathcal{A}_2} 将身份ID*发送给 \mathcal{C} ,且在阶段1中,ID*对应的私钥没有被询问过. \mathcal{C} 随机选取消息 M_{}^* ,计算 {C^*} = Encrypt({M^*},I{D^*}) 并发送给 {\mathcal{A}_2} .
④阶段2. {\mathcal{A}_2} 像在阶段1一样发出询问,但是ID*对应的私钥、陷门以及密文 {C^*} 不可以被询问到.
⑤猜测. {\mathcal{A}_2} 输出 M' .
定义5. 如果对于任意多项式时间攻击者 {\mathcal{A}_2} ,在IBE-OW-CCA游戏中的优势 Ad{v}_{\text{SM9-IBEET,Type-II}}^{\text{IBE-OW-CCA}}({\mathcal{A}}_{2})= \Big|Pr\Big[{M}^{*}= {M}^{\prime }\Big]\Big| 都是可忽略的,则SM9-IBEET方案是满足IBE-OW-CCA安全的.
为证明CRF的部署带来的ASA安全性,SM9-IBEET-CRF还需考虑Ⅲ型敌手.
3) Ⅲ型敌手. 其具备ASA能力,在保持算法功能不变的前提下,可以替换除了CRF重随机化以外的算法,然后对系统发起攻击. 针对这种类型的敌手,我们可以证明CRF的部署没有改变原SM9-IBEET的功能与安全性,同时增强了ASA安全性. 基于此,我们定义了ASA安全游戏Game 3.
安全游戏Game 3中让 {\mathcal{A}_3} 表示Ⅲ型敌手,挑战者 \mathcal{C} 与 {\mathcal{A}_3} 按如下顺序进行游戏.
① 篡改阶段. {\mathcal{A}_3} 选择一些篡改的算法Setup*,KeyExtract*,Encrypt*,Decrypt*,Trapdoor*,Test*发送给 \mathcal{C} , \mathcal{C} 收到后用篡改算法来替换自己的原始算法.
②系统建立 Setup.挑战者 \mathcal{C} 执行系统建立Setup*算法生成系统参数params和主私钥对. \mathcal{C} 保存主私钥对并将params发送给 {\mathcal{A}_3} .
③阶段1. {\mathcal{A}_3} 可以自适应地执行查询:
i) 公钥查询. 当接收到身份为IDi的公钥询问时, \mathcal{C} 通过运用主私钥对计算,生成公钥 {Q_i} 并发送给 {\mathcal{A}_3} .
ii) 私钥查询. 当接收到身份为IDi的私钥询问时, \mathcal{C} 执行私钥提取算法KeyExtract*生成私钥di并发送给 {\mathcal{A}_3} .
iii) 陷门查询. 当接收到身份为IDi的陷门询问时, \mathcal{C} 执行陷门生成算法Trapdoor*生成陷门tdi,然后运行陷门重随机化算法ReTrapdoor生成陷门 t{d_i} 的重随机化陷门 t{d'_i} 并发送给 {\mathcal{A}_3} .
iv) 解密查询. 当接收到身份为IDi以及密文 C 的解密询问时, \mathcal{C} 执行密文解密算法Decrypt*生成明文并返回给 {\mathcal{A}_3} .
④挑战. 敌手 {\mathcal{A}_3} 将身份ID*以及消息 M_0^*,M_1^* (二者长度相同)发送给 \mathcal{C} ,且在阶段1中,ID*对应的私钥没有被询问过. \mathcal{C} 随机选取 \rho \in \{ 0,1\} ,计算 {C^*} = Encrypt^*({M_\rho },I{D^*}) ,然后再计算 C^{*'} = ReEncrypt({C^*}) 并发送给 {\mathcal{A}_3} .
⑤阶段2. {\mathcal{A}_3} 像在阶段1一样发出询问,但是ID*对应的私钥以及密文 {C^*},C^{*'} 不可以被询问到.
⑥猜测. {\mathcal{A}_3} 输出 \rho ' \in \{ 0,1\} .
定义6. 如果对于任意多项式时间攻击者 {\mathcal{A}_3} ,在ASA游戏中的优势 Ad{v}_{\text{SM9-IBEET-CRF}}^{\text{ASA,Type-III}}({\mathcal{A}}_{3})=\left|2Pr\left[\rho ={\rho }^{\prime }\right]-1\right| 都是可忽略的,则SM9-IBEET-CRF方案是满足ASA安全的.
3. 具体方案
本文方案由8个算法组成,具体构造过程介绍如下.
1) 系统建立 Setup.
① 初始化系统,输出系统参数 params = \left\langle {{G_1},{G_2},{G_T},e,{P_1},{P_2},{P_{{\text{pub1}}}},{P_{{\text{pub2}}}},KDF,MAC,EUC} \right\rangle . 其中 e 为双线性对映射 e:{G_1} \times {G_2} \to {G_T} , {G_1},{G_2} 的阶均为 N , {P_1} 为 {G_1} 的生成元, {P_2} 为 {G_2} 的生成元. 消息空间 \mathcal{M} = {\{ 0,1\} ^*} ,用户的身份 id \in {\{ 0,1\} ^*} 均为比特串.
② KGC随机选取 s,s' \in [1,N - 1] 作为主私钥对 (s,s') ,并计算主公钥 {P_{{\text{pub1}}}} = [s]{P_1} , {P_{{\text{pub2}}}} = [s']{P_1} .
③ 获取KGC公布的5个哈希函数: {H_1}:{\{ 0,1\} ^*} \to \mathbb{Z}_N^* , {H_2}:{G_T} \to {G_{\text{2}}} , {H_3}:{G_{\text{1}}} \to {\{ 0,1\} ^*} , {H_4}:{\{ 0,1\} ^*} \to {G_2} , {H_5}:{G_T} \to {\{ 0,1\} ^*} .
④ 使用SM9规定的密钥派生函数 KDF(Z,klen) ,输入比特串 Z 、非负整数 klen ,输出长度为 klen 的密钥数据比特串 K .
⑤消息认证码函数 MAC({K_2},Z) . 输入为比特长度 {K_2}\_len 的密钥 {K_2} ,比特串消息 Z . 其作用是防止消息数据 Z被非法篡改.
⑥ 拓展欧几里得函数 EUC(r) . 输入 r \in [1,N - 1] ,运行拓展欧几里得算法计算输出r的逆元.
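EUC(r)所用的拓展欧几里得求逆可以用如下Python示意实现(仅为示意性草图,函数名与参数均为本文说明所设,非SM9标准实现):

```python
# 拓展欧几里得求逆的示意实现(对应 Setup 中的 EUC(r)):
# 输入 r 与模数 N(要求 gcd(r, N) = 1),输出满足 r*x ≡ 1 (mod N) 的逆元 x.
def euc_inverse(r: int, N: int) -> int:
    """拓展欧几里得算法:维持不变量 old_r = old_s*r (mod N)."""
    old_r, cur_r = r % N, N
    old_s, cur_s = 1, 0
    while cur_r != 0:
        q = old_r // cur_r
        old_r, cur_r = cur_r, old_r - q * cur_r
        old_s, cur_s = cur_s, old_s - q * cur_s
    if old_r != 1:
        raise ValueError("r 与 N 不互素,逆元不存在")
    return old_s % N

N = 1000003  # 示意用的素数模
r = 12345
assert r * euc_inverse(r, N) % N == 1  # 逆元性质:r * r^{-1} ≡ 1 (mod N)
```

后文ReTrapdoor正是依赖该逆元,使重随机化因子在配对运算中相消.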
2) 私钥提取 KeyExtract. 输入系统公开参数params、用户身份 I{D_A} 和主私钥对s,KGC 按①~③方式生成 I{D_A} 的私钥 {d_A} :
① 在有限域 {F_N} 上计算 {t_1} = {H_1}(I{D_A}) + s ,若 {t_1} = 0 则需要重新产生主私钥;
② 否则,计算 {t_2} = st_1^{ - 1} , {t'_2} = s't_1^{ - 1} ;
③ 然后计算 {d_A} = ({d_{A1}},{d_{A2}}) ,此处的 (s,s') 是主私钥对,即
{d_{A1}} = [{t_2}]{P_2} = [s{({H_1}(I{D_A}) + s)^{ - 1}}]{P_2} \text{,} {d_{A2}} = [{t'_2}]{P_2} = [s'{({H_1}(I{D_A}) + s)^{ - 1}}]{P_2} .
3) 陷门生成 Trapdoor. 输入用户身份 I{D_A} 和私钥 {d_A} ,输出陷门 t{d_A} = [{t'_2}]{P_2} .
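私钥结构 {d_{A1}} = [st_1^{-1}]P_2 的一个关键性质是 e([r_1]Q_A,{d_{A1}}) = e(P_{pub1},P_2)^{r_1},这正是后文Decrypt中 w' = w 成立的原因. 下面用离散对数表示的玩具模型验证这一点(各参数均为假设的示意值,需Python 3.8+的三参数pow求逆):

```python
# KeyExtract/Encrypt/Decrypt 一致性的玩具验证:沿用离散对数表示,
# pair(x, y) = x*y mod N 代表配对结果的对数,N 为假设的玩具素数阶.
N = 1000003

def pair(x: int, y: int) -> int:
    return (x * y) % N

s, h1_id, r1 = 4242, 987, 55      # 主私钥s、H1(ID_A)、加密随机数r1(示意值)
t1 = (h1_id + s) % N              # t1 = H1(ID_A) + s
t2 = s * pow(t1, -1, N) % N       # t2 = s * t1^{-1} mod N
QA, dA1, Ppub1 = t1, t2, s        # [t1]P1, [t2]P2, [s]P1 的离散对数

w_enc = pair(Ppub1, 1) * r1 % N   # w  = e(Ppub1, P2)^{r1} 的对数
w_dec = pair(r1 * QA % N, dA1)    # w' = e(C1, dA1) 的对数,其中 C1 = [r1]QA
assert w_enc == w_dec             # 两者一致,Decrypt 才能重构出 K1||K2
```

直观上,t_1 与 t_1^{-1} 在配对指数中相消,只剩 r_1·s,与加密方计算的 w 一致.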
4) 加密 Encrypt. 输入系统公开参数params和用户身份 I{D_A} ,运算生成用户公钥 {Q_A} . 对于比特长度为mlen的消息 M \in {\{ 0,1\} ^*} ,mlen为密钥 {K_1} 的比特长度, {K_2}\_len 为 MAC({K_2},Z) 中密钥 {K_2} 的比特长度,加密过程如下:
① {Q_A} = [{H_1}(I{D_A})]{P_1} + {P_{{\text{pub1}}}} ;
② 随机选取 {r_1},{r_2} \in [1,N - 1] ;
③ {C_1} = [{r_1}]{Q_A} ;
④ g = e({P_{{\text{pub1}}}},{P_2}) ;
⑤ w = {g^{{r_1}}} ;
⑥ 计算 {K_1} , {K_2} :
i) klen = mlen + {K_2}\_len ;
ii) {K_1}||{K_2} = KDF({H_3}({C_1})||{H_5}(w)||I{D_A},klen) ;
⑦ {C_2} = M \oplus {K_1} ;
⑧ {C_3} = MAC({K_2},{C_2}) ;
⑨ {C_4} = [{r_2}]{H_4}(M){H_2}(e{({P_{{\text{pub2}}}},[{r_1}]{P_2})^{{r_2}}}) ;
⑩ {C_5} = [{r_2}]{P_1} ;
⑪ {C_6} = [{r_1}{r_2}]{Q_A} ;
⑫ 输出 C = ({C_1},{C_2},{C_3},{C_4},{C_5},{C_6}) 作为密文.
5) 重随机化密文 ReEncrypt. CRF收到密文 C = ({C_1},{C_2},{C_3},{C_4},{C_5},{C_6}) 后,随机选取 {r_3} \in [1,N - 1] ,然后计算 C 的重随机化密文 C' = ({C_1},{C_2},{C_3},{C_4}, {C_5},[{r_3}]{C_6}) ,并发送给云服务器.
6) 解密 Decrypt. 输入 C' = ({C_1},{C_2},{C_3},{C_4},{C_5},{C'_6}) ,私钥 {d_A} = ({d_{A1}},{d_{A2}}) 和用户身份 I{D_A} .
① 验证 {C_1} \in {G_1} ,若不成立则无法解密;
② w' = e({C_1},{d_{A1}}) ;
③ klen = mlen + {K_2}\_len ;
④ {K'_1}||{K'_2} = KDF({H_3}({C_1})||{H_5}(w')||I{D_A},klen ) ;
⑤ M' = {C_2} \oplus {K'_1} ;
⑥ 若 {C_3} \ne MAC({K'_2},{C_2}) ,则解密失败,密文完整性有误;
⑦ 输出 M' 作为消息的明文.
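Encrypt的⑥~⑧步与Decrypt的③~⑥步构成一个“KDF派生密钥 + 异或加密 + MAC校验”的对称流程. 下面给出一个可运行的Python示意(以SHA-256代替SM3构造KDF与MAC,仅为示意草图,并非GM/T 0044的标准实现;函数名均为说明所设):

```python
import hashlib, hmac

def kdf(Z: bytes, klen: int) -> bytes:
    """计数器模式KDF(GM/T 0044 风格;此处以 SHA-256 代替 SM3,仅作示意)."""
    out, ct = b"", 1
    while len(out) < klen:
        out += hashlib.sha256(Z + ct.to_bytes(4, "big")).digest()
        ct += 1
    return out[:klen]

K2_LEN = 32  # MAC 密钥字节长度(示意性选择)

def encrypt_steps_6_to_8(shared: bytes, M: bytes):
    """Encrypt ⑥~⑧:K1||K2 = KDF(shared, mlen+K2_len),C2 = M⊕K1,C3 = MAC(K2,C2)."""
    K = kdf(shared, len(M) + K2_LEN)
    K1, K2 = K[:len(M)], K[len(M):]
    C2 = bytes(m ^ k for m, k in zip(M, K1))
    C3 = hmac.new(K2, C2, hashlib.sha256).digest()
    return C2, C3

def decrypt_steps_3_to_6(shared: bytes, C2: bytes, C3: bytes) -> bytes:
    """Decrypt ③~⑥:重新派生 K1', K2',MAC 校验失败则拒绝,否则返回 M' = C2⊕K1'."""
    K = kdf(shared, len(C2) + K2_LEN)
    K1, K2 = K[:len(C2)], K[len(C2):]
    if not hmac.compare_digest(C3, hmac.new(K2, C2, hashlib.sha256).digest()):
        raise ValueError("MAC 校验失败,密文完整性有误")
    return bytes(c ^ k for c, k in zip(C2, K1))

shared = b"H3(C1) || H5(w) || ID_A"  # KDF 输入串的占位(示意值)
C2, C3 = encrypt_steps_6_to_8(shared, b"attack at dawn")
assert decrypt_steps_3_to_6(shared, C2, C3) == b"attack at dawn"
```

若C2或C3在传输中被篡改,MAC校验即失败,对应Decrypt第⑥步的完整性检查.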
7) 重随机化陷门 ReTrapdoor. 输入陷门 td ,CRF运行 EUC({r_3}) 生成 {r_3} 的逆元 {r_4} \in [1,N - 1] ,计算 td 的重随机化陷门 td' = [{r_4}]td = [{r_4}{t'_2}]{P_2} .
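ReEncrypt与ReTrapdoor的随机数 r_3 与 r_4 = r_3^{-1} mod N 在Test的配对运算中恰好相消:e([r_3]C_6,[r_4]td) = e(C_6,td)^{r_3 r_4} = e(C_6,td). 下面用离散对数表示的玩具模型验证这一相消关系(各参数均为假设的示意值,需Python 3.8+):

```python
# CRF 重随机化相消的玩具验证:ReEncrypt 将 C6 乘以 r3,ReTrapdoor 将 td
# 乘以 r4 = r3^{-1} mod N,二者在双线性对中相消.
# 沿用离散对数表示:pair(x, y) = x*y mod N,N 为假设的玩具素数阶.
N = 1000003

def pair(x: int, y: int) -> int:
    return (x * y) % N

c6, td = 111, 222          # C6 ∈ G1 与 td ∈ G2 的离散对数(示意值)
r3 = 12345                 # ReEncrypt 选取的随机数
r4 = pow(r3, -1, N)        # ReTrapdoor 通过 EUC(r3) 得到的逆元
# e([r3]C6, [r4]td) = e(C6, td)^(r3*r4) = e(C6, td)
assert pair(r3 * c6 % N, r4 * td % N) == pair(c6, td)
```

正因如此,云服务器看到的是重随机化后的密文与陷门,而Test的判定结果不受影响.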
8)等式测试 Test. 输入2个用户的密文 {C'_\alpha } = ({C_{1,\alpha }}, {C_{2,\alpha }},{C_{3,\alpha }},{C_{4,\alpha }},{C_{5,\alpha }},{C'_{6,\alpha }}) , {C'_\beta } = ({C_{1,\;\beta }},{C_{2,\;\beta }},{C_{3,\;\beta }},{C_{4,\;\beta }},{C_{5,\;\beta }}, {C'_{6,\;\beta }}) 和2个陷门 t{d'_\alpha } , t{d'_\beta } ,然后按3个步骤进行测试.
① {X_\alpha } = \dfrac{{{C_{4,\alpha }}}}{{{H_2}\left( {e\left( {{{C'}_{6,\alpha }},t{{d'}_\alpha }} \right)} \right)}} ;
② {X_\beta } = \dfrac{{{C_{4,\;\beta }}}}{{{H_2}\left( {e\left( {{{C'}_{6,\;\beta }},t{{d'}_\beta }} \right)} \right)}} ;
③ 若 e({C_{5,\alpha }},{X_\beta }) = e({C_{5,\;\beta }},{X_\alpha }) ,则 {M_\alpha } = {M_\beta } .
4. 安全性分析
4.1 SM9-IBEET正确性分析
本节首先对SM9-IBEET进行正确性分析.
1) 验证密文解密过程的正确性
采用用户私钥d以及密文消息 C = ({C_1},{C_2}, {C_3}, {C_4},{C_5},{C_6}) 来验证:
\begin{split} {{M'}_\alpha } =& {C_{2,\alpha }} \oplus KDF({H_3}({C_{1,\alpha }})||{H_5}(e({C_{1,\alpha }},{d_{1,\;\beta }}))||I{D_\beta },klen) =\\ & {C_{2,\alpha }} \oplus KDF({H_3}({C_{1,\alpha }})||{H_5}({w_{1,\alpha }})||I{D_\beta },klen) =\\ & {C_{2,\alpha }} \oplus {{K'}_{1,\;\beta }}, \\ \end{split} 其中 {K'_{1,\;\beta }} 为KDF函数结果左边的mlen比特.
2) 计算消息认证码函数
u = MAC({K'_{2,\;\beta }},{C_{2,\alpha }}) ( {K'_{2,\;\beta }} 为 KDF 函数结果右边的 {K_2}\_len 比特).
若 u = {C_{3,\alpha }} ,则消息认证完整性的结果通过,解密结果正确,输出明文 {M'_\alpha } .
3) 验证等式测试计算结果的正确性
第1层计算的正确性如下所示,采用用户的陷门以及部分密文验证:
\begin{split}{X}_{\alpha }=&\frac{{C}_{4,\alpha }}{{H}_{2}\left(e\left({C}_{6,\alpha },t{d}_{\alpha }\right)\right)} =\\&\frac{\left[{r}_{2,\alpha }\right]{H}_{4}({M}_{\alpha }){H}_{2}\left(e{\left({P}_{\text{pub2}},\left[{r}_{1,\alpha }\right]{P}_{2}\right)}^{{r}_{2,\alpha }}\right)}{{H}_{2}\left(e\left(\left[{r}_{1,\alpha }\right]\left[{r}_{2,\alpha }\right]\left[{t}_{1,\alpha }\right]{P}_{1},\left[{{t}^{\prime }}_{2,\alpha }\right]{P}_{2}\right)\right)}=\\&[{r}_{2,\alpha }]{H}_{4}({M}_{\alpha }),\end{split} \begin{split} {X_\beta } =& \frac{{{C_{4,\;\beta }}}}{{{H_2}\left( {e\left( {{C_{6,\;\beta }},t{d_\beta }} \right)} \right)}} =\\& \frac{{\left[ {{r_{2,\;\beta }}} \right]{H_4}({M_\beta }){H_2}\left( {e{{\left( {{P_{{\mathrm{pub}}2}},\left[ {{r_{1,\;\beta }}} \right]{P_2}} \right)}^{{r_{2,\;\beta }}}}} \right)}}{{{H_2}\left( {e\left( {\left[ {{r_{1,\;\beta }}} \right]\left[ {{r_{2,\;\beta }}} \right]\left[ {{t_{1,\;\beta }}} \right]{P_1},\left[ {t_{2,\;\beta }^\prime } \right]{P_2}} \right)} \right)}} = \\&[{r_{2,\;\beta }}]{H_4}({M_\beta }). \\ \end{split} 第2层计算的正确性分析如下所示,代入第1层计算的中间结果:
\begin{split} e({C_{5,\alpha }},{X_\beta }) =& e([{r_{2,\alpha }}]{P_1},[{r_{2,\;\beta }}]{H_4}({M_\beta })) =\\& e{({P_1},{H_4}({M_\beta }))^{{r_{2,\alpha }}{r_{2,\;\beta }}}}, \\ e({C_{5,\;\beta }},{X_\alpha }) =& e([{r_{2,\;\beta }}]{P_1},[{r_{2,\alpha }}]{H_4}({M_\alpha })) = \\&e{({P_1},{H_4}({M_\alpha }))^{{r_{2,\alpha }}{r_{2,\;\beta }}}}. \\ \end{split} 若 {M_\alpha } = {M_\beta } ,则等式测试的结果成立.
4.2 SM9-IBEET安全性证明
定理1. 若BDH困难问题是难解的,则本文提出的SM9-IBEET方案是IBE-IND-CCA安全的.
证明. 假设存在无法获取目标用户陷门且不能任意执行等式测试的Ⅰ型敌手 {\mathcal{A}_1} ,其攻击目的是破坏所提方案的语义安全,也即在安全游戏中对挑战密文进行区分. 如果敌手 {\mathcal{A}_1} 可以成功破坏本文方案,则存在挑战者 \mathcal{C} 能够以不可忽略的优势解决BDH困难问题. 给定 ({P_1},{P_2},[a]{P_1},[a]{P_2},[b]{P_1},[c]{P_1}) ,其中 a,b,c \in \mathbb{Z}_N^* , \mathcal{C} 的目标是计算出 e{({P_1},{P_2})^{abc}} . \mathcal{C} 与 {\mathcal{A}_1} 的挑战过程有7个.
1) 初始化.
\mathcal{C} 随机选取 \kappa \in \left\{ {1,2, … ,{q_{{H_1}}}} \right\} , {N_\kappa } \in \mathbb{Z}_N^* 以及 {\tau _1}, {\tau _2}, … , {\tau _{\kappa - 1}},{\tau _{\kappa + 1}}, … ,{\tau _N} \in \mathbb{Z}_N^* , {q_{{H_1}}} 代表的是查询随机预言机 {\mathcal{H}_1} 的次数,对 i = 1,2, … ,\kappa - 1,\kappa + 1, … ,N ,计算 {N_i} = {N_\kappa } - {\tau _i} 并保存. {P_1},{P_2} 分别为群 {G_1},{G_2} 的生成元,通过使用文献[30]的方法, \mathcal{C} 随机选取 \gamma \in [1,N - 1] ,对 i \in \{ 1,2, … ,N\} \backslash \{ \kappa \} , \mathcal{C} 可以获得 N - 1 个数值对 \left( {{\tau _i},\left( {1/\gamma - {\tau _i}} \right)} \right) ,计算 {P_{{\text{pub1}}}} = \left( {\gamma - {N_\kappa }} \right){P_1} ,令 {P_{{\text{pub2}}}} = a{P_1} . 得到公共参数 params = \left\langle {{G_1},{G_2},{G_T},e,} \right. \left. {{P_1},{P_2},{P_{{\text{pub1}}}},{P_{{\text{pub2}}}}, MAC} \right\rangle ,并将其发送给 {\mathcal{A}_1} .
\mathcal{C} 保存表格 {\mathcal{L}_{{\mathcal{H}_1}}},{\mathcal{L}_{{\mathcal{H}_2}}},{\mathcal{L}_{{\mathcal{H}_3}}},{\mathcal{L}_{{\mathcal{H}_4}}},{\mathcal{L}_{{\mathcal{H}_5}}},{\mathcal{L}_{\mathcal{K}\mathcal{D}\mathcal{F}}} ,初始化内容为空,用来模拟随机预言机 \left\langle {{\mathcal{H}_1},{\mathcal{H}_2},{\mathcal{H}_3},{\mathcal{H}_4},{\mathcal{H}_5},\mathcal{K}\mathcal{D}\mathcal{F}} \right\rangle . 设置空列表 {\mathcal{L}_K} 来保存公钥查询的结果.
2) 敌手 {\mathcal{A}_1} 向 \mathcal{C} 提出6个询问.
① {\mathcal{H}_1} {\text{-}} query . 在任何时刻 {\mathcal{A}_1} 可以询问随机预言机 {\mathcal{H}_1} ,为了回答这些询问, \mathcal{C} 保存表格 {\mathcal{L}_{{\mathcal{H}_1}}} 用来存取元组 \left\langle {I{D_i},{N_i}} \right\rangle ,当接收到身份为 I{D_i} 的 {\mathcal{H}_1} 查询时, \mathcal{C} 查找 I{D_i} 对应的数值 {N_i} ,用 {H_1}\left( {I{D_i}} \right) = {N_i} 返回给 {\mathcal{A}_1} ,并将 \left\langle {I{D_i},{N_i}} \right\rangle 添加到 {\mathcal{L}_{{\mathcal{H}_1}}} 中.
② {\mathcal{H}_2} {\text{-}} query . 在任何时刻 {\mathcal{A}_1} 可以询问随机预言机 {\mathcal{H}_2} ,为了回答这些询问, \mathcal{C} 保存表格 {\mathcal{L}_{{\mathcal{H}_2}}} 用来存取元组 \left\langle {\sigma ,\phi } \right\rangle , \mathcal{C} 按2个步骤回应:
i) 如果询问的 \sigma 已经出现在 {\mathcal{L}_{{\mathcal{H}_2}}} 的元组 \left\langle {\sigma ,\phi } \right\rangle 中, \mathcal{C} 用 \phi 来回复.
ii) 否则, \mathcal{C} 随机选取 \phi \in {G_2} ,并将元组 \left\langle {\sigma ,\phi } \right\rangle 插入 {\mathcal{L}_{{\mathcal{H}_2}}} 中,然后用 \phi 来回复 {\mathcal{A}_1} .
③ {\mathcal{H}_3} {\text{-}} query . 在任何时刻 {\mathcal{A}_1} 可以询问随机预言机 {\mathcal{H}_3} ,为了回答这些询问, \mathcal{C} 保存表格 {\mathcal{L}_{{\mathcal{H}_3}}} 用来存取元组 \left\langle {{C_1},{h_3}} \right\rangle ,如果询问的 {C_1} 存在 {\mathcal{L}_{{\mathcal{H}_3}}} 中,返回 {h_3} 给 {\mathcal{A}_1} ;否则, \mathcal{C} 随机选取 {h_3} \in {\{ 0,1\} ^*} 并添加表项 \left\langle {{C_1},{h_3}} \right\rangle 到 {\mathcal{L}_{{\mathcal{H}_3}}} 中,并返回 {h_3} 给 {\mathcal{A}_1} .
④ {\mathcal{H}_4} {\text{-}} query . 在任何时刻 {\mathcal{A}_1} 可以询问随机预言机 {\mathcal{H}_4} ,为了回答这些询问, \mathcal{C} 保存表格 {\mathcal{L}_{{\mathcal{H}_4}}} 用来存取元组 \left\langle {M,{h_4}} \right\rangle ,如果询问的 M 存在 {\mathcal{L}_{{\mathcal{H}_4}}} 中,返回 {h_4} 给 {\mathcal{A}_1} ;否则, \mathcal{C} 随机选取 {h_4} \in {G_2} 并添加表项 \left\langle {M,{h_4}} \right\rangle 到 {\mathcal{L}_{{\mathcal{H}_4}}} 中,返回 {h_4} 给 {\mathcal{A}_1} .
⑤ {\mathcal{H}_5} {\text{-}} query . 在任何时刻 {\mathcal{A}_1} 可以询问随机预言机 {\mathcal{H}_5} ,为了回答这些询问, \mathcal{C} 保存表格 {\mathcal{L}_{{\mathcal{H}_5}}} 用来存取元组 \left\langle {w,{h_5}} \right\rangle ,如果询问的 w 存在 {\mathcal{L}_{{\mathcal{H}_5}}} 中,返回 {h_5} 给 {\mathcal{A}_1} ;否则, \mathcal{C} 随机选取 {h_5} \in {\{ 0,1\} ^*} 并添加表项 \left\langle {w,{h_5}} \right\rangle 到 {\mathcal{L}_{{\mathcal{H}_5}}} 中,返回 {h_5} 给 {\mathcal{A}_1} .
⑥ \mathcal{K}\mathcal{D}\mathcal{F} {\text{-}} query . 在任何时刻 {\mathcal{A}_1} 可以询问随机预言机 \mathcal{K}\mathcal{D}\mathcal{F} ,为了回答这些询问, \mathcal{C} 保存表格 {\mathcal{L}_{\mathcal{K}\mathcal{D}\mathcal{F}}} 用来存取元组 \left\langle {Z,K} \right\rangle ,如果询问的 Z 存在于 {\mathcal{L}_{\mathcal{K}\mathcal{D}\mathcal{F}}} 中,返回 K 给 {\mathcal{A}_1} ;否则, \mathcal{C} 随机选取 K \in {\{ 0,1\} ^*} 并添加表项 \left\langle {Z,K} \right\rangle 到 {\mathcal{L}_{\mathcal{K}\mathcal{D}\mathcal{F}}} 中,返回 K 给 {\mathcal{A}_1} .
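上述六类H-query均遵循同一“惰性采样”模式:首次查询时均匀采样并登记到表格,重复查询时回放表项. 下面给出该模式的Python示意(类名与参数均为说明所设,仅作示意):

```python
import os

# 随机预言机"惰性采样"模拟的示意:与证明中 H-query 的表格维护方式一致.
class RandomOracle:
    def __init__(self, out_len: int = 32):
        self.table = {}          # 对应证明中的表格 L_H
        self.out_len = out_len   # 输出字节长度(示意参数)

    def query(self, x: bytes) -> bytes:
        if x not in self.table:
            self.table[x] = os.urandom(self.out_len)  # 新查询:均匀采样
        return self.table[x]                          # 旧查询:回放表项

ro = RandomOracle()
assert ro.query(b"ID_A") == ro.query(b"ID_A")  # 相同查询得到一致回复
assert len(ro.table) == 1                      # 仅登记了一个表项
```

这种模拟保证了挑战者对外表现与真实随机函数一致,同时可以在表格中"埋入"困难问题实例.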
3) 公钥查询. 当接收到身份为 I{D_i} 的公钥询问时, \mathcal{C} 按照如下方式回应:检查列表 {\mathcal{L}_{{\mathcal{H}_1}}} ,如果 i = \kappa , \mathcal{C} 放弃;否则得到 {H_1}\left( {I{D_i}} \right) = {N_i} ,计算用户公钥 {Q_i} = [{H_1}(I{D_i})]{P_1} +{P_{{\text{pub1}}}} = {N_i}{P_1} + (\gamma - {N_\kappa }){P_1} = (\gamma - {\tau _i}){P_1} ,并将Q_i 发送给 {\mathcal{A}_1} .
4) 私钥查询. 当接收到身份为 I{D_i} 的私钥询问时, \mathcal{C} 按照如下方式回应:检查列表 {\mathcal{L}_{{\mathcal{H}_1}}} ,如果 i = \kappa , \mathcal{C} 放弃;否则得到 {H_1}\left( {I{D_i}} \right) = {N_i} ,计算用户私钥为
\begin{split} d_{i,1}= & [s(H_1(ID_i)+s)^{-1}]P_2=[(\gamma-N_{\kappa})/(\gamma-\tau_i)]P_2= \\ & [1-(N_{\kappa}-\tau_i)/(\gamma-\tau_i)]P_2, \\ d_{i,2}= & [s'(H_1(ID_i)+s)^{-1}]P_2=[a/(\gamma-\tau_i)]P_2. \end{split} 并将d_{i,1} 和d_{i,2} 发送给 {\mathcal{A}_1} .
5) 解密查询. 当接收到身份为 I{D_i} 以及密文 C = ({C_1},{C_2},{C_3},{C_4},{C_5},{C_6}) 的解密询问时,如果 i \ne \kappa , \mathcal{C} 计算 {d_{i,1}} = [(\gamma - {N_\kappa })/(\gamma - {\tau _i})]{P_2} ;如果 i = \kappa ,计算 {d_{\kappa ,1}} = [1 - {N_\kappa }/\gamma ]{P_2} . 计算 w' = e({C_1},{d_{i,1}}) ,查找 {C_1} 在 {\mathcal{L}_{{\mathcal{H}_3}}} 对应的表项 \left\langle {{C_1},{h_3}} \right\rangle , w' 在 {\mathcal{L}_{{\mathcal{H}_5}}} 对应的表项 \left\langle {w',{h_5}} \right\rangle ,得到Z = {h_3}||{h_5}||I{D_i} ,查找Z在 {\mathcal{L}_{\mathcal{K}\mathcal{D}\mathcal{F}}} 中对应的表项 \left\langle {Z,K} \right\rangle ,拆分 K = {K'_1}||{K'_2} . 计算 M' = {C_2} \oplus {K'_1} ,验证 {C_3} =MAC({K'_2}, {C_2}) 是否成立,如果成立,则返回明文;如果不成立,则 \mathcal{C} 输出 \bot .
6) 挑战. 敌手 {\mathcal{A}_1} 将身份 I{D^*} 以及消息 M_0^*,M_1^* (二者长度相同)发送给 \mathcal{C} ,如果 I{D^*} \ne I{D_\kappa } , \mathcal{C} 放弃游戏;如果 I{D^*} = I{D_\kappa } , \mathcal{C} 随机选取 \rho \in \{ 0,1\} , C_2^* \in {\{ 0,1\} ^*} , C_3^* \in {\{ 0,1\} ^*} , C_4^* \in {G_1} , C_6^* \in {G_1} ,并计算 C_1^* = (\gamma - {\tau _i})b{P_1} , C_5^* = c{P_1} 后将 {C^*} = (C_1^*,C_2^*,C_3^*,C_4^*,C_5^*,C_6^*) 发送给 {\mathcal{A}_1} 作为挑战密文.
7) 猜测. {\mathcal{A}_1} 输出 \rho ' \in \{ 0,1\} . \mathcal{C} 从 {\mathcal{L}_{{\mathcal{H}_2}}} 随机选取一个元组 \left\langle {{\sigma ^*},{\phi ^*}} \right\rangle ,并输出 {\sigma ^*} = e{({P_1},{P_2})^{abc}} 作为BDH实例的解. 证毕.
定理2. 若BDH困难问题是难解的,则本文提出的SM9-IBEET方案是IBE-OW-CCA安全的.
证明. 假定存在可以获取目标用户陷门且可以执行等式测试的Ⅱ型敌手 {\mathcal{A}_2} ,其攻击目的是为了破坏所提方案的机密性,也即揭示挑战密文对应的消息. 如果敌手 {\mathcal{A}_2} 可以成功破坏所提方案,则存在挑战者 \mathcal{C} 能够以不可忽略的优势解决BDH困难问题. 给定 ({P_1},{P_2},[a]{P_1},[a]{P_2},[b]{P_1},[c]{P_1}) ,其中 a,b,c \in \mathbb{Z}_N^* , \mathcal{C} 的目标是计算出 e{({P_1},{P_2})^{abc}} . \mathcal{C} 与 {\mathcal{A}_2} 的挑战过程有8个.
1) 初始化. \mathcal{C} 随机选取 \kappa \in \left\{ {1,2, … ,{q_{{H_1}}}} \right\} , {N_\kappa } \in \mathbb{Z}_N^* 以及 {\tau _1},{\tau _2}, … ,{\tau _{\kappa - 1}},{\tau _{\kappa + 1}}, … ,{\tau _N} \in \mathbb{Z}_N^* , {q_{{H_1}}} 代表的是查询随机预言机 {\mathcal{H}_1} 的次数,对 i = 1,2, … ,\kappa - 1,\kappa + 1, … ,N ,计算 {N_i} = {N_\kappa } - {\tau _i} 并保存. {P_1},{P_2} 分别为群 {G_1},{G_2} 的生成元,通过使用文献[30]的方法, \mathcal{C} 随机选取 \gamma \in [1,N - 1] ,对 i \in \{ 1,2, … ,N\} \backslash \{ \kappa \} , \mathcal{C} 可以获得 N - 1 个数值对 \left( {{\tau _i},\left( {1/\gamma - {\tau _i}} \right)} \right) ,计算 {P_{{\text{pub1}}}} = \left( {\gamma - {N_\kappa }} \right){P_1} ,令 {P_{{\text{pub2}}}} = a{P_1} . 得到公共参数 params = \left\langle {{G_1},{G_2},{G_T},e,} \right. \left. {{P_1},{P_2},{P_{{\text{pub1}}}}, {P_{{\text{pub2}}}},}\right. \left.{ MAC} \right\rangle ,并将其发送给 {\mathcal{A}_2} .
\mathcal{C} 保存表格 {\mathcal{L}_{{\mathcal{H}_1}}},{\mathcal{L}_{{\mathcal{H}_2}}},{\mathcal{L}_{{\mathcal{H}_3}}},{\mathcal{L}_{{\mathcal{H}_4}}},{\mathcal{L}_{{\mathcal{H}_5}}},{\mathcal{L}_{\mathcal{K}\mathcal{D}\mathcal{F}}} ,初始化内容为空,用来模拟随机预言机 \left\langle {{\mathcal{H}_1},{\mathcal{H}_2},{\mathcal{H}_3},{\mathcal{H}_4},{\mathcal{H}_5},\mathcal{K}\mathcal{D}\mathcal{F}} \right\rangle . 设置空列表 {\mathcal{L}_K} 来保存公钥查询的结果.
2) 敌手 {\mathcal{A}_2} 向 \mathcal{C} 提出6点询问.
① {\mathcal{H}_1} {\text{-}} query . 在任何时刻 {\mathcal{A}_2} 可以询问随机预言机 {\mathcal{H}_1} ,为了回答这些询问, \mathcal{C} 保存表格 {\mathcal{L}_{{\mathcal{H}_1}}} 用来存取元组 \left\langle {I{D_i},{N_i}} \right\rangle ,当接收到身份为 I{D_i} 的 {\mathcal{H}_1} 查询时, \mathcal{C} 查找 I{D_i} 对应的数值 {N_i} ,用 {H_1}\left( {I{D_i}} \right) = {N_i} 返回给 {\mathcal{A}_2} ,并将\left\langle {I{D_i},{N_i}} \right\rangle 添加到 {\mathcal{L}_{{\mathcal{H}_1}}} 中.
② {\mathcal{H}_2} {\text{-}} query . 在任何时刻 {\mathcal{A}_2} 可以询问随机预言机 {\mathcal{H}_2} ,为了回答这些询问, \mathcal{C} 保存表格 {\mathcal{L}_{{\mathcal{H}_2}}} 用来存取元组 \left\langle {\sigma ,\phi } \right\rangle , \mathcal{C} 按2个步骤回应:
i) 如果询问的 \sigma 已经出现在 {\mathcal{L}_{{\mathcal{H}_2}}} 的元组 \left\langle {\sigma ,\phi } \right\rangle 中, \mathcal{C} 用 \phi 来回复.
ii) 否则, \mathcal{C} 随机选取 \phi \in {G_2} ,并将元组 \left\langle {\sigma ,\phi } \right\rangle 插入 {\mathcal{L}_{{\mathcal{H}_2}}} 中,然后用 \phi 来回复 {\mathcal{A}_2} .
③ {\mathcal{H}_3} {\text{-}} query . 在任何时刻 {\mathcal{A}_2} 可以询问随机预言机 {\mathcal{H}_3} ,为了回答这些询问, \mathcal{C} 保存表格 {\mathcal{L}_{{\mathcal{H}_3}}} 用来存取元组 \left\langle {{C_1},{h_3}} \right\rangle ,如果询问的 {C_1} 存在 {\mathcal{L}_{{\mathcal{H}_3}}} 中,返回 {h_3} 给 {\mathcal{A}_2} ;否则, \mathcal{C} 随机选取 {h_3} \in {\{ 0,1\} ^*} 并添加表项 \left\langle {{C_1},{h_3}} \right\rangle 到 {\mathcal{L}_{{\mathcal{H}_3}}} 中,并返回 {h_3} 给 {\mathcal{A}_2} .
④ {\mathcal{H}_4} {\text{-}} query . 在任何时刻 {\mathcal{A}_2} 可以询问随机预言机 {\mathcal{H}_4} ,为了回答这些询问, \mathcal{C} 保存表格 {\mathcal{L}_{{\mathcal{H}_4}}} 用来存取元组 \left\langle {M,{h_4}} \right\rangle ,如果询问的 M 存在 {\mathcal{L}_{{\mathcal{H}_4}}} 中,返回 {h_4} 给 {\mathcal{A}_2} ;否则, \mathcal{C} 随机选取 {h_4} \in {G_2} 并添加表项 \left\langle {M,{h_4}} \right\rangle 到 {\mathcal{L}_{{\mathcal{H}_4}}} 中,返回 {h_4} 给 {\mathcal{A}_2} .
⑤ {\mathcal{H}_5} {\text{-}} query . 在任何时刻 {\mathcal{A}_2} 可以询问随机预言机 {\mathcal{H}_5} ,为了回答这些询问, \mathcal{C} 保存表格 {\mathcal{L}_{{\mathcal{H}_5}}} 用来存取元组 \left\langle {w,{h_5}} \right\rangle ,如果询问的 w 存在 {\mathcal{L}_{{\mathcal{H}_5}}} 中,返回 {h_5} 给 {\mathcal{A}_2} ;否则, \mathcal{C} 随机选取 {h_5} \in {\{ 0,1\} ^*} 并添加表项 \left\langle {w,{h_5}} \right\rangle 到 {\mathcal{L}_{{\mathcal{H}_5}}} 中,返回 {h_5} 给 {\mathcal{A}_2} .
⑥ \mathcal{K}\mathcal{D}\mathcal{F} {\text{-}} query . 在任何时刻 {\mathcal{A}_2} 可以询问随机预言机 \mathcal{K}\mathcal{D}\mathcal{F} ,为了回答这些询问, \mathcal{C} 保存表格 {\mathcal{L}_{\mathcal{K}\mathcal{D}\mathcal{F}}} 用来存取元组 \left\langle {Z,K} \right\rangle ,如果询问的 Z 存在于 {\mathcal{L}_{\mathcal{K}\mathcal{D}\mathcal{F}}} 中,返回 K 给 {\mathcal{A}_2} ;否则, \mathcal{C} 随机选取 K \in {\{ 0,1\} ^*} 并添加表项 \left\langle {Z,K} \right\rangle 到 {\mathcal{L}_{\mathcal{K}\mathcal{D}\mathcal{F}}} 中,返回 K 给 {\mathcal{A}_2} .
3) 公钥查询. 当接收到身份为 I{D_i} 的公钥询问时, \mathcal{C} 按照如下方式回应:检查列表 {\mathcal{L}_{{\mathcal{H}_1}}} ,如果 i = \kappa , \mathcal{C} 放弃;否则得到 {H_1}\left( {I{D_i}} \right) = {N_i} ,计算用户公钥
{Q_i} = [{H_1}(I{D_i})]{P_1} + {P_{{\text{pub1}}}} = {N_i}{P_1} + (\gamma - {N_\kappa }){P_1} = (\gamma - {\tau _i}){P_1}, 并将Q_i 发送给 {\mathcal{A}_2} .
4) 私钥查询. 当接收到身份为 I{D_i} 的私钥询问时, \mathcal{C} 按照如下方式回应:检查列表 {\mathcal{L}_{{\mathcal{H}_1}}} ,如果 i = \kappa , \mathcal{C} 放弃;否则得到 {H_1}\left( {I{D_i}} \right) = {N_i} ,计算用户私钥
\begin{split} {d_{i,1}} = &[s{({H_1}(I{D_i}) + s)^{ - 1}}]{P_2} = [(\gamma - {N_\kappa })/(\gamma - {\tau _i})]{P_2} = \\&[1 - ({N_\kappa } - {\tau _i})/(\gamma - {\tau _i})]{P_2}, \\ {d_{i,2}} = &[s'{({H_1}(I{D_i}) + s)^{ - 1}}]{P_2} = [a/(\gamma - {\tau _i})]{P_2}, \end{split} 并将d_{i,1} 和d_{i,2} 发送给 {\mathcal{A}_2} .
5) 陷门查询. 当接收到身份为 I{D_i} 的陷门询问时,如果 i = \kappa , \mathcal{C} 放弃;否则计算
t{d_i} = {d_{i,2}} = [s'{({H_1}(I{D_i}) + s)^{ - 1}}]{P_2} = [a/(\gamma - {\tau _i})]{P_2}, 并将 t{d_i} 发送给 {\mathcal{A}_2} .
6) 解密查询. 当接收到身份为 I{D_i} 以及密文 C = ({C_1},{C_2},{C_3},{C_4},{C_5},{C_6}) 的解密询问时, \mathcal{C} 按照如下方式回应:如果 i \ne \kappa , \mathcal{C} 计算 {d_{i,1}} = [(\gamma - {N_\kappa })/(\gamma - {\tau _i})]{P_2} ;如果 i = \kappa ,计算 {d_{\kappa ,1}} = [1 - {N_\kappa }/\gamma ]{P_2} . 计算 w' = e({C_1},{d_{i,1}}) ,查找 {C_1} 在 {\mathcal{L}_{{\mathcal{H}_3}}} 对应的表项 \left\langle {{C_1},{h_3}} \right\rangle , w' 在 {\mathcal{L}_{{\mathcal{H}_5}}} 对应的表项 \left\langle {w',{h_5}} \right\rangle ,得到 Z = {h_3}||{h_5}||I{D_i} ,查找Z在 {\mathcal{L}_{\mathcal{K}\mathcal{D}\mathcal{F}}} 中对应的表项 \left\langle {Z,K} \right\rangle ,拆分 K = {K'_1}||{K'_2} . 计算 M' = {C_2} \oplus {K'_1} ,验证 {C_3} = MAC({K'_2},{C_2}) 是否成立,如果成立,则返回明文;如果不成立,则 \mathcal{C} 输出 \bot .
7) 挑战. 敌手 {\mathcal{A}_2} 将身份 I{D^*} 以及消息 {M^*} \in {\{ 0,1\} ^*} 发送给 \mathcal{C} ,如果 I{D^*} \ne I{D_\kappa } , \mathcal{C} 放弃游戏;如果 I{D^*} = I{D_\kappa } , \mathcal{C} 随机选取 C_2^* \in {\{ 0,1\} ^*} , C_3^* \in {\{ 0,1\} ^*} , C_4^* \in {G_1} , C_6^* \in {G_1} ,并计算 C_1^* = (\gamma - {\tau _i})b{P_1} , C_5^* = c{P_1} 后将 {C^*} = (C_1^*,C_2^*,C_3^*,C_4^*, C_5^*,C_6^*) 发送给 {\mathcal{A}_2} 作为挑战密文.
8) 猜测. {\mathcal{A}_2} 输出对 {M^*} 的猜测 M' . \mathcal{C} 从 {\mathcal{L}_{{\mathcal{H}_2}}} 中随机选取一个元组 \left\langle {{\sigma ^*},{\phi ^*}} \right\rangle ,并输出 {\sigma ^*} = e{({P_1},{P_2})^{abc}} 作为BDH实例的解. 证毕.
4.3 ASA安全性
ASA安全性要求SM9-IBEET-CRF可以抵抗Ⅲ型敌手 {\mathcal{A}_3} 的攻击. 敌手 {\mathcal{A}_3} 具备ASA(算法替换攻击)能力,可以在保持算法功能不变的前提下,替换除CRF重随机化以外的算法,然后对系统发起攻击.
定理3. SM9-IBEET-CRF中的CRF具有维持功能性.
证明. 维持功能性要求CRF不影响原协议的功能与安全性,当数据上传者将密文与陷门经由CRF上传至云服务器后,可以得到与原协议相同的功能与安全强度.
数据上传者运行加密算法Encrypt生成密文 C = ({C_1},{C_2},{C_3},{C_4},{C_5},{C_6}) 并发送给CRF. CRF重随机化密文后得到 C' = ({C_1},{C_2},{C_3},{C_4},{C_5},[{r_3}]{C_6}) ,并将其上传至云服务器.
当数据接收者需要解密时,从云服务器上下载密文,此时不需要经过CRF. 解密过程的正确性通过用户私钥 d 以及密文消息 {C'_\alpha } = ({C_1}, {C_2},{C_3},{C_4},{C_5}, [{r_3}]{C_6}) 来验证:
\begin{split} {{M'}_\alpha } = &{C_{2,\alpha }} \oplus KDF({H_3}({C_{1,\alpha }})||{H_5}(e({C_{1,\alpha }},{d_{1,\;\beta }}))||I{D_\beta },klen) =\\& {C_{2,\alpha }} \oplus KDF({H_3}({C_{1,\alpha }})||{H_5}({w_{1,\alpha }})||I{D_\beta },klen) = \\&{C_{2,\alpha }} \oplus {{K'}_{1,\;\beta }}, \\ \end{split} 其中 {K'_{1,\;\beta }} 为KDF函数结果左边的mlen比特. 接下来计算消息认证码函数:
u = MAC({K'_{2,\;\beta }},{C_{2,\alpha }}) ( {K'_{2,\;\beta }} 为 KDF 函数结果右边的 {K_2}\_len 比特).
若 u = {C_{3,\alpha }} ,则消息完整性验证通过,解密结果正确,输出明文 {M'_\alpha } ,解密过程不受CRF重随机化的影响.
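上述"K = K1||K2,M′ = C2 ⊕ K1,校验 u = MAC(K2, C2)"的解密与消息认证流程可用如下Python示意代码说明. 其中用SHA-256迭代哈希代替SM9标准的KDF、用HMAC-SHA256代替SM9的MAC,klen、K2_len等参数均为假设值,仅用于说明流程而非标准实现:

```python
import hashlib
import hmac

def kdf(z: bytes, klen: int) -> bytes:
    """简化KDF(示意):迭代哈希 Z || 计数器,截取 klen 字节."""
    out, ct = b"", 1
    while len(out) < klen:
        out += hashlib.sha256(z + ct.to_bytes(4, "big")).digest()
        ct += 1
    return out[:klen]

def decrypt_and_verify(z: bytes, c2: bytes, c3: bytes):
    mlen = len(c2)            # K1 取 KDF 输出左边 mlen 字节
    k2_len = 32               # K2 取右边 K2_len 字节(假设 32 B)
    k = kdf(z, mlen + k2_len)
    k1, k2 = k[:mlen], k[mlen:]
    m = bytes(a ^ b for a, b in zip(c2, k1))       # M' = C2 XOR K1
    u = hmac.new(k2, c2, hashlib.sha256).digest()  # u = MAC(K2, C2)
    return m if u == c3 else None                  # u != C3 时输出 ⊥

# 加密方向(示意):C2 = M XOR K1, C3 = MAC(K2, C2)
z = b"H3(C1) || H5(w) || ID"     # Z 的占位示例
msg = b"hello SM9"
k = kdf(z, len(msg) + 32)
c2 = bytes(a ^ b for a, b in zip(msg, k[:len(msg)]))
c3 = hmac.new(k[len(msg):], c2, hashlib.sha256).digest()

assert decrypt_and_verify(z, c2, c3) == msg
assert decrypt_and_verify(z, c2, b"\x00" * 32) is None  # 篡改 C3 则输出 ⊥
```

可以看到,MAC校验失败时直接输出 ⊥,对应解密算法中的完整性检查.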
当数据接收者需要执行等式测试时,运行Trapdoor算法生成陷门td并发送给CRF. CRF重随机化陷门后得到 td' = [{r_4}]td = [{r_4}{t'_2}]{P_2} ,并将td' 上传至云服务器. 接下来验证等式测试计算结果的正确性.
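CRF对密文分量 C6 与陷门 td 的重随机化本质上是一次随机标量乘. 下面用素数阶乘法子群模拟椭圆曲线群来说明这一点,其中 [r]P 对应 pow(P, r, p),参数 p=23, q=11, g=2 为演示用假设值,各随机数取值也仅为示意:

```python
import secrets

# 用 Z_23^* 中阶为 11 的子群模拟素数阶群(仅为演示)
p, q, g = 23, 11, 2

def smul(r, P):
    """标量乘 [r]P,对应乘法群中的幂运算."""
    return pow(P, r, p)

# 示意:密文分量 C6 = [r1 r2 t1]P1 与陷门 td = [t2']P2
r1, r2, t1, t2 = 3, 5, 7, 4
C6 = smul(r1 * r2 * t1, g)
td = smul(t2, g)

# CRF 重随机化:C6' = [r3]C6, td' = [r4]td,r3、r4 均匀随机
r3 = secrets.randbelow(q - 1) + 1
r4 = secrets.randbelow(q - 1) + 1
C6p, tdp = smul(r3, C6), smul(r4, td)

# 标量乘可叠加:[r3]([x]P) = [r3 x]P,重随机化不破坏群结构
assert C6p == smul(r3 * r1 * r2 * t1, g)
assert tdp == smul(r4 * t2, g)
```

标量乘的可叠加性正是后文第1层计算中各随机因子能在配对中合并的前提.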
1) 第1层计算的安全性采用用户的陷门以及部分密文验证:
\begin{split} {X_\alpha } =& \frac{{{C_{4,\alpha }}}}{{{H_2}\left( {e\left( {{{C'}_{6,\alpha }},t{{d'}_\alpha }} \right)} \right)}} =\\& \frac{{\left[ {{r_{2,\alpha }}} \right]{H_4}({M_\alpha }){H_2}\left( {e{{\left( {{P_{{\text{pub2}}}},\left[ {{r_{1,\alpha }}} \right]{P_2}} \right)}^{{r_{2,\alpha }}}}} \right)}}{{{H_2}\left( {e\left( {\left[ {{r_{1,\alpha }}} \right]\left[ {{r_{2,\alpha }}} \right]\left[ {{r_{3,\alpha }}} \right]\left[ {{t_{1,\alpha }}} \right]{P_1},\left[ {{r_{4,\alpha }}} \right]\left[ {t_{2,\alpha }^\prime } \right]{P_2}} \right)} \right)}} =\\& [{r_{2,\alpha }}]{H_4}({M_\alpha }). \\ {X_\beta } =& \frac{{{C_{4,\;\beta }}}}{{{H_2}\left( {e\left( {{{C'}_{6,\;\beta }},t{{d'}_\beta }} \right)} \right)}} =\\& \frac{{\left[ {{r_{2,\;\beta }}} \right]{H_4}({M_\beta }){H_2}\left( {e{{\left( {{P_{{\text{pub2}}}},\left[ {{r_{1,\;\beta }}} \right]{P_2}} \right)}^{{r_{2,\;\beta }}}}} \right)}}{{{H_2}\left( {e\left( {\left[ {{r_{1,\;\beta }}} \right]\left[ {{r_{2,\;\beta }}} \right]\left[ {{r_{3,\;\beta }}} \right]\left[ {{t_{1,\;\beta }}} \right]{P_1},\left[ {{r_{4,\;\beta }}} \right]\left[ {t_{2,\;\beta }^\prime } \right]{P_2}} \right)} \right)}} =\\& [{r_{2,\;\beta }}]{H_4}({M_\beta }). \end{split} 2) 第2层计算的正确性分析代入第1层计算得到的中间结果:
\begin{split} e({C_{5,\;\alpha }},{X_\beta }) =& e([{r_{2,\alpha }}]{P_2},[{r_{2,\;\beta }}]{H_4}({M_\beta })) = e{({P_2},{H_4}({M_\beta }))^{{r_{2,\alpha }}{r_{2,\;\beta }}}}, \\ e({C_{5,\;\beta }},{X_\alpha }) =& e([{r_{2,\;\beta }}]{P_2},[{r_{2,\alpha }}]{H_4}({M_\alpha })) = e{({P_2},{H_4}({M_\alpha }))^{{r_{2,\alpha }}{r_{2,\;\beta }}}}. \end{split} 若 {M_\alpha } = {M_\beta } ,则等式测试的结果成立. 证毕.
定理4. SM9-IBEET-CRF中的CRF具有弱保留安全性和预防泄露性.
证明. 假定存在Ⅲ型敌手 {\mathcal{A}_3} ,其可以替换除CRF重随机化以外的算法,攻击目的是破坏所提方案的机密性,即通过篡改原始算法来泄露隐私信息. {\mathcal{A}_3} 使用篡改算法Setup*,KeyExtract*,Encrypt*,Decrypt*,Trapdoor*,Test*来替换原始算法. 我们将通过SM9-IBEET-CRF的安全性游戏与原SM9-IBEET安全性游戏的不可区分性,证明CRF满足弱保留安全性和预防泄露性. 本文考虑3种安全游戏:
1) 安全游戏Game 1. 与第2节中定义的Game 3相同.
2) 安全游戏Game 2. 除陷门查询阶段外与Game 1完全相同:在陷门查询阶段, \mathcal{C} 直接执行Trapdoor算法生成陷门,而不是先执行Trapdoor*算法再执行ReTrapdoor算法.
3) 安全游戏Game 3. 除挑战阶段外与Game 2完全相同:在挑战阶段, \mathcal{C} 直接执行Encrypt算法生成密文,而不是先执行Encrypt*算法再执行ReEncrypt算法. 事实上,Game 3就是原基础方案SM9-IBEET的安全性游戏.
现在我们证明,Game 1和Game 2,Game 2和Game 3分别具有不可区分性.
Game 1和Game 2之间,对于任何篡改算法Trapdoor*,其生成的陷门 td' 在经由CRF的重随机化算法ReTrapdoor后,由于陷门的可延展性, td' 会被重新随机化,其分布与原始Trapdoor算法的输出分布相同. 也就是说,即使敌手篡改了Trapdoor算法的实现,它也难以区分陷门是由Trapdoor算法产生,还是由先执行Trapdoor*算法再执行ReTrapdoor产生. 因此,Game 1和Game 2之间具有不可区分性.
Game 2和Game 3之间,对于任何篡改算法Encrypt*,其生成的密文 C' 在经由CRF的重随机化算法ReEncrypt后,由于密文的可延展性, C' 会被重新随机化,其分布与原始Encrypt算法的输出分布相同. 也就是说,即使敌手篡改了Encrypt算法的实现,它也难以区分密文是由Encrypt算法产生,还是由先执行Encrypt*算法再执行ReEncrypt产生. 因此,Game 2和Game 3之间具有不可区分性.
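重随机化能消除篡改算法输出分布偏差的直观原因是:在素数阶群中,对任意非单位元乘以均匀随机标量,结果在全部非单位元上均匀分布. 下面的小规模示例(p=23, q=11 为演示用假设参数)遍历所有标量验证了这一点:

```python
from collections import Counter

# 用 Z_23^* 中阶为 11 的子群模拟素数阶群(仅为演示)
p, q, g = 23, 11, 2

# 假设敌手篡改算法总是输出同一个固定陷门(分布严重有偏)
biased_td = pow(g, 9, p)

# CRF 重随机化:[r]td*,r 取遍 1..q-1(模拟均匀随机标量)
counts = Counter(pow(biased_td, r, p) for r in range(1, q))

# [r]td* 恰好遍历子群的全部 q-1 个非单位元各一次,即重新均匀分布
assert len(counts) == q - 1
assert set(counts.values()) == {1}
```

这正是ReTrapdoor/ReEncrypt能把被篡改算法的输出"洗"回原分布、从而阻断隐蔽信道泄露的原因.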
综上所述,Game 1与Game 3具有不可区分性,SM9-IBEET-CRF满足与原方案相同的IBE-IND-CCA安全性与IBE-OW-CCA安全性. 这种选择密文攻击下的不可区分性表明部署在云服务器与用户之间的CRF具有弱保留安全性,而Game 1,Game 2,Game 3之间的不可区分性则表明CRF具有预防泄露性. 证毕.
5. 方案对比
在本节中,主要从计算开销、通信开销、安全性等方面,将本文方案(SM9-IBEET-CRF)与其他支持等式测试的公钥加密方案和支持关键词检索的公钥加密方案(文献[4, 6, 11, 13, 15, 17])进行比较. 其中,文献[4]为具有前向安全性的公钥可搜索加密方案(FS-PKSE);文献[6]为带有关键字搜索的公钥认证加密方案(PAEKS);文献[11]为具有灵活授权机制的公钥加密等式测试方案(PKEET-FA);文献[13]为支持等式测试的无证书公钥加密方案(CLE-PKEET);文献[15]为支持等式测试的异构签密方案(HSC-ET);文献[17]为支持等式测试的标识加密方案(IBEET).
为评估方案性能,将本文方案与其他方案在相同的环境下逐一对比. 实验环境配置的处理器为Intel® Core™ i7-8750H,内存为16 GB(RAM),在VMware软件的虚拟机上运行;使用PBC(pairing-based cryptography)库实现双线性对的接口,对双线性对公钥密码体制进行有效仿真,安全强度与1024 b RSA相当.
使用SM9定义256 b的BN曲线,椭圆曲线方程为 {y^2} = {x^3} + b 来生成映射 e:{G_1} \times {G_2} \to {G_T} ,嵌入次数为12,根据SM9的参数配置PBC库中对应的算法,进行多次模拟后取平均值,与之前的文献进行了对比,其中涉及的符号定义和密码算法的执行时间定义分别如表1和表2所示.
表 1 符号定义
Table 1. Symbols Definition
|{G_1}| : {G_1} 中元素的大小
|{G_2}| : {G_2} 中元素的大小
|{G_T}| : {G_T} 中元素的大小
|G| : 对称配对 G 中元素的大小
|{G_T}'| : 对称配对 {G_T}' 中元素的大小
|{\mathbb{Z}_p}| : {\mathbb{Z}_p} 中元素的大小
|PK| : 公钥长度
|CL| : 密文长度
|TD| : 陷门长度
{T_p} : 1次双线性配对运算
{T_{x1}} : 1次 {G_1} 或 {G_2} 上的幂运算
{T_{x2}} : 1次 {G_T} 上的幂运算

表 2 密码操作的执行时间
Table 2. Execution Time of Cryptographic Operations
{T_p} : 7.5954 ms
{T_{x1}} : 3.8915 ms
{T_{x2}} : 1.2357 ms

为评估方案的通信开销,考虑部署2类无线传感器节点平台MICAz[31]和Tmote Sky[32]. 其中MICAz配置的微控制器为ATmega128L,内存为4 KB(RAM);Tmote Sky配置的微控制器为MSP430,内存为10 KB(RAM). 两者均采用CC2420(2.4 GHz,IEEE 802.15.4)作为射频收发器,在TinyOS系统上运行. 使用文献[33]的方法,在这2类传感器节点体系架构上实现对公钥密码通信系统的有效仿真.
5.1 计算开销对比
我们首先比较不同方案在Enc加密、Dec解密和Test等式测试3种操作上的计算开销,具体结果如表3所示.
表 3 不同方案的计算开销对比
Table 3. Computation Cost Comparison of Different Schemes
方案:加密;解密;等式测试/搜索
FS-PKSE[4]:6{T_{x1}};−;5{T_p} + 3{T_{x2}}
PAEKS[6]:{T_p} + 4{T_{x1}};−;4{T_p} + {T_{x2}}
PKEET-FA[11]:6{T_{x1}};5{T_{x1}};2{T_p} + 6{T_{x1}}
CLE-PKEET[13]:4{T_p} + 5{T_{x1}};2{T_p} + 2{T_{x1}};4{T_p}
HSC-ET[15]:5{T_{x1}} + 2{T_{x2}};3{T_p} + {T_{x2}};4{T_p} + 4{T_{x2}}
IBEET[17]:2{T_p} + 6{T_{x1}};2{T_p} + 2{T_{x1}};4{T_p}
本文方案:2{T_p} + 5{T_{x1}} + 2{T_{x2}};{T_p};4{T_p}
注:"−"表示方案不支持该操作.

图3(a)表示在模拟环境下不同方案的加密时间随消息数量的变化. 当消息数量增加到100时,本文方案的时间开销比文献[4, 6, 11, 15]大1.58倍左右,但与其他支持等式测试的标识加密文献[13, 17]相比,本文方案的时间开销要小得多;作为一种IBE-ET体制,本方案在加密时间上的开销是可以接受的. 与本文方案相比,文献[11]没有实现IBE体制,在实际场景中存在密钥管理的问题;文献[15]为异构等式测试方案,只能实现PKI端到IBE端的测试,具有一定的限制;文献[4, 6]实现的可搜索加密不能对密文解密,只支持关键词搜索而无法搜索整段密文,搜索能力有所降低. 本文方案的等式测试功能可以实现双向密文的任意测试;用户与测试者均采用标识加密的方法,避免了密钥管理的问题;与其他标识加密方案相比,降低了加密开销.
从图3(b)可以得出,本文方案在解密过程的计算时间远小于其他对比方案,具有解密开销上的优势.
从图3(c)中可以得出,本文方案与文献[13, 17]在测试计算过程中耗费的时间相近,而文献[4, 11, 15]在测试场景下耗费的时间大于其他方案. 文献[4, 6]实现的传统可搜索加密并不支持整段密文的测试. 可以看出,本文方案在等式测试过程中耗费的时间是合理且具有一定优势的.
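根据表2的单次运算耗时与表3的运算次数,可以直接估算各方案处理单条消息的开销. 下面的Python示意代码以IBEET[17]与本文方案为例(数值由表2、表3推得,仅为粗略估算,不计群运算之外的开销):

```python
# 表 2 的单次运算耗时(单位:ms)
Tp, Tx1, Tx2 = 7.5954, 3.8915, 1.2357

# 表 3 中各操作的运算次数,按 (Tp, Tx1, Tx2) 的次数给出
schemes = {
    "IBEET[17]": {"Enc": (2, 6, 0), "Dec": (2, 2, 0), "Test": (4, 0, 0)},
    "Ours":      {"Enc": (2, 5, 2), "Dec": (1, 0, 0), "Test": (4, 0, 0)},
}

def cost(ops):
    """把运算次数换算为估算耗时(ms)."""
    n_p, n_x1, n_x2 = ops
    return n_p * Tp + n_x1 * Tx1 + n_x2 * Tx2

for name, ops in schemes.items():
    line = ", ".join(f"{op}={cost(c):.2f} ms" for op, c in ops.items())
    print(f"{name}: {line}")
```

例如本文方案的解密仅需 1 次配对(约 7.60 ms),明显低于 IBEET[17] 的 2{T_p} + 2{T_{x1}},与图3(b)的趋势一致.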
5.2 通信开销与功能对比
表4给出了不同方案在通信开销与实际功能上的对比,可以看出本文方案与其他方案相比具有更强的安全性与功能性. 在实际应用场景中,本文方案实现的标识加密体制避免了证书管理的问题,大大降低了通信过程中的开销;支持等式测试的功能相比于其他可搜索加密文献[4, 6]具有更强的搜索能力;CRF的设置使得本文方案具备抵抗渗透攻击的能力,这意味着在面对算法篡改攻击这类威胁时,本文方案提供了更高的安全性. 值得注意的是,本文方案还是唯一一个支持国密SM9算法的方案.
表 4 不同方案的通信开销与功能对比
Table 4. Comparison of Communication Overhead and Function of Different Schemes
方案:|PK|;|CL|;|TD|;等式测试;标识加密;抗渗透攻击;支持国密算法
FS-PKSE[4]:4|G|;5|G| + |{\mathbb{Z}_p}|;5|G| + |{\mathbb{Z}_p}|;×;×;×;×
PAEKS[6]:|G|;2|G|;|G|;×;×;×;×
PKEET-FA[11]:3|G|;5|G| + |{\mathbb{Z}_p}|;|{\mathbb{Z}_p}|;√;×;×;×
CLE-PKEET[13]:2|G|;3|G| + |{\mathbb{Z}_p}|;|G|;√;×;×;×
HSC-ET[15]:|G|;3|G| + 2|{\mathbb{Z}_p}|;|G|;√;√;×;×
IBEET[17]:2|G|;4|G| + |{\mathbb{Z}_p}|;|G|;√;√;×;×
本文方案:|{G_1}|;3|{G_1}| + |{G_2}| + 2|{\mathbb{Z}_p}|;|{G_2}|;√;√;√;√
注:"√"表示支持,"×"表示不支持.

图4表示在模拟环境下,不同方案的各种通信开销随用户数量增加的变化. 从图4(a)(b)可以看出,本文方案享有最低的公钥通信开销与密文通信开销;在图4(c)中,本文方案的陷门开销小于除文献[11]以外所有方案的开销. 这是由于本文方案实现了标识加密体制,在实际场景中避免了公钥证书的通信开销;同时它还是所有对比方案中唯一建立在非对称双线性配对基础上的方案,这大大降低了实际通信场景中的存储开销. 由此可见本文方案具有通信开销上的优势,更适用于实际应用场景.
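在假设的非压缩BN256曲线参数下(|G1|=64 B,|G2|=128 B,|Zp|=32 B,这些数值仅为演示,实际大小取决于具体曲线与是否做点压缩),可按表4的表达式直接估算本文方案的通信量:

```python
# 假设的元素字节数(非压缩点,仅为演示)
G1, G2, Zp = 64, 128, 32

PK = G1                      # |PK| = |G1|
CL = 3 * G1 + G2 + 2 * Zp    # |CL| = 3|G1| + |G2| + 2|Zp|
TD = G2                      # |TD| = |G2|

print(f"|PK| = {PK} B, |CL| = {CL} B, |TD| = {TD} B")
```

公钥、密文、陷门均为固定长度,因此图4中各通信开销随用户数量线性增长.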
总的来说,通过严格的实验仿真与性能对比证明,本文方案在计算开销与通信开销上都具有一定的优势. 等式测试功能的引入,使本文方案具有比可搜索加密方案更强的搜索性;标识加密体制的拓展,解决了密钥协商和证书管理的问题;逆向防火墙的部署,进一步提升了本文方案抵抗渗透攻击与篡改攻击的能力. 本文方案不仅解决了SM9密文难以搜索的问题,还解决了目前支持等式测试的标识加密体制下计算与通信开销大、安全性弱的问题. 同时,本文方案是国密SM9密码算法在云计算场景下的一次良好应用,对于推动我国密码领域的安全研究也具有一定意义.
6. 结 论
针对已有IBEET算法难以抵抗渗透攻击的问题,本文提出了一种支持等式测试并具有逆向防火墙的SM9标识密码方案SM9-IBEET-CRF,该方案可以运用于云服务器中加密数据的外包计算场景. 本文在用户与云服务器之间的上行信道上分别部署了密码逆向防火墙;形式化了方案的系统模型和定义,并考虑3种不同的敌手来定义安全模型;然后在BDH假设下的随机预言机模型中证明了方案的安全性;最后,严格的实验仿真和分析结果表明,本文方案比已有方法在解密与通信开销方面具有一定的效率优势.
作者贡献声明:熊虎提出了算法思路和实验方案;林烨负责完成实验并撰写论文;姚婷协助完成实验并修改论文.
根据IoT-Analytics的报告,近年来AIoT的设备数目和市场规模均保持年均15%以上的增长,详见 https://iot-analytics.com/number-connected-iot-devices/ 和 https://iot-analytics.com/iot-market-size/.
当本文分别论述联邦学习和协同推理这2个领域中与AIoT应用场景相关的技术进展时,是从广义的角度来介绍面向AIoT的协同智能;当本文论述这2种技术的联系或涉及两者联合起作用的新的应用形态时,则是从狭义的角度来介绍面向AIoT的协同智能.
攻击者保持模型收敛精度不受显著影响,是为了获得有价值的全局模型参数,同时防止被发现.
表 1 相关综述简介
Table 1 A Brief Summary of Related Surveys
相关综述 | AIoT(定义、架构、异构、多模态) | 大模型 | 联邦学习(FCL、FRL、P&S、优化) | 协同推理(定义、架构、P&S、优化)
文献[1]:● ◐ ◐ ◐ ◐
文献[16]:● ● ● ●
文献[20]:● ● ● ●
文献[22]:● ● ●
文献[25]:● ● ● ◐ ●
文献[43]:● ● ●
文献[30]:● ◐ ◐ ● ● ●
文献[37]:◐ ◐ ◐ ● ◐ ◐ ◐ ◐ ◐
本文:● ◐ ● ● ● ● ● ◐ ● ● ● ● ● ●
注:隐私安全(privacy and security,P&S);联邦持续学习(federated continual learning,FCL);联邦强化学习(federated reinforcement learning,FRL). ◐ 表示简略介绍;● 表示详细介绍.

表 2 联邦学习的算法相关工作总结
Table 2 Summary of Related Works About the Algorithm of Federated Learning
表 3 协同推理的算法相关工作总结
Table 3 Summary of Related Works About the Algorithm of Collaborative Inference
主要优化目标与相关工作(括号内依次为模型切分方法、任务调度方法;原表未区分者按原顺序列出)
性能:DeepThings[49](卷积层并行;任务窃取)、DeepSlicing[26](通信量优化、模型并行;同步开销优化)、IONN[97](执行图生成与最短路径搜索)、OFL[98](基于层融合的模型切分;动态规划)、PICO[99](基于结束片的模型切分;动态规划)、EdgeFlow[100](模型并行;线性规划)、IAO[50](延迟预测)、Neurosurgeon[107](基于延迟预测的模型切分)
延迟鲁棒性:DistrEdge[102](强化学习)、ICE[103](服务质量感知、执行图最短路径搜索)、MTS[105](强化学习)
能耗:CoEdge[21](模型并行;线性规划)、Neurosurgeon[107](基于能耗估计的模型切分)、AutoScale[101](强化学习)

表 4 面向AIoT的协同智能架构各层次相关工作总结
Table 4 Summary of Related Works at Different Levels of AIoT-Oriented Collaborative Intelligence Architecture
架构层级与分类(每行依次给出:优势;劣势;联邦学习文献;协同推理文献,仅有一组文献时不区分两栏)
深度学习加速器:
GPU:高性能、软件栈成熟、兼顾通用计算任务;面积大、能耗高;联邦学习[27, 47];协同推理[41, 97, 101]
深度学习处理器:面积较小、能效比高;任务类型相对单一;联邦学习[114−115];协同推理[111, 116]
深度学习编译:
即时编译:可以获取运行时信息[109];增加启动开销[171];文献[114−115]
预编译:更大的静态搜索空间、支持交叉编译等[109];无法获取运行时信息;联邦学习[127]*;协同推理[116, 130]
深度学习框架:
AIoT联邦学习框架FedML:基于MPI和MQTT的分布式通信协议支持、支持多种通信拓扑结构、对真实AIoT设备的实验平台支持;没有对推理任务提供专门支持和优化;联邦学习[73, 137]
AIoT联邦学习框架Flower:支持大量异构端侧设备和各种通信状况的模拟;没有对推理任务提供专门支持和优化;联邦学习[138]
轻量级端侧框架TensorFlow Lite:支持嵌入式设备的轻量级运行时;一般只用于端侧设备;联邦学习[136];协同推理[21, 111]
轻量级端侧框架MNN:基于半自动搜索的最佳执行策略搜索;一般只用于端侧设备;协同推理[141]
端边云通用框架PyTorch:编程风格简洁、多进程并行计算和通信优化;嵌入式设备等资源受限设备难以支持;联邦学习[72, 77, 134];协同推理[99−100, 103]
端边云通用框架TensorFlow:良好的可扩展性、调度策略优化;嵌入式设备等资源受限设备难以支持;联邦学习[85];协同推理[50]
端边云通用框架TensorRT:高性能、高吞吐量推理;没有对训练任务提供支持;协同推理[139]
设备间通信:
通信拓扑结构[30, 137]之中心化:结构简单、易于管理;中心节点通信瓶颈,可能依赖第三方提供的计算服务;联邦学习[44];协同推理[97]
通信拓扑结构之层次化:缓解中心节点通信瓶颈;增加额外通信层级,可能依赖第三方提供的计算服务;联邦学习[143];协同推理[144]
通信拓扑结构之去中心化:P2P直接通信、系统自治、拜占庭容错;共识开销,系统管理复杂;联邦学习[147];协同推理[49]
通信拓扑结构之混合式:可以兼具多种通信拓扑结构的优点;结构和系统管理较为复杂;联邦学习[68]
减少通信的次数:降低通信开销;一般只用于联邦学习场景;联邦学习[143, 148, 150−151, 172]
减少每次通信的数据量:降低通信开销;可能降低模型精度;联邦学习[152−157];协同推理[144, 158]
通信干涉管理:减少通信干涉的负面影响;对Wi-Fi 6等新通信网络需要进一步研究;文献[159−161]
多设备协同[16]:
端云协同:云服务器计算、存储资源充足,有利于数据的长期存储;云服务器带宽受限、广域网不稳定、隐私安全问题;联邦学习[166−167];协同推理[41, 101]
端边协同:降低通信延迟;边缘服务器计算和存储资源受限、隐私安全问题;联邦学习[58];协同推理[97]
端边云协同:减轻云服务器计算和通信负担;隐私安全问题;联邦学习[143];协同推理[144]
本地协同:高速和稳定的数据传输、隐私安全保障;只适用于封闭场景,不适用于开放场景;协同推理[21, 26, 49, 111]
大小模型协同:既可以使用大模型中包含的丰富知识来提供高质量的服务,又可以使用小模型来提升服务的响应速度;大小模型之间的知识传递需要进一步研究;联邦学习[72, 170];协同推理[141]
注:"*"表示潜在解决方案.

表 5 面向AIoT的协同智能面临的攻击和对应防御方法总结
Table 5 Summary of the Attacks That the AIoT-Oriented Collaborative Intelligence Faces and the Corresponding Defense Methods
攻击类型(每行依次给出:攻击面;攻击场景;参考文献;防御机制)
数据隐私(训练样本相关):
模型反演攻击:模型参数、模型输出;联邦学习、协同推理;文献[52, 119, 174−175];混淆[186−188]、同态加密[119, 192, 195]、多方安全计算[51, 197−198, 230−231]、可信执行环境[199, 201−203]
成员推断攻击:模型参数、模型输出;联邦学习、协同推理;文献[53, 176−177];防御机制同模型反演攻击
性质推断攻击:模型参数;联邦学习;文献[178−179];防御机制同模型反演攻击
数据隐私(模型参数相关):
模型提取攻击:模型输出;联邦学习、协同推理;文献[52, 173−174, 181];异常检测[205]、改变输出[208−210]
free-rider攻击:模型参数;联邦学习;文献[183−184];异常检测[206−207]、区块链[68, 147]
模型安全:
投毒攻击:训练数据、模型参数;联邦学习、协同推理;文献[214−215];异常检测[120, 226]
逃逸攻击:模型输出、模型参数;联邦学习、协同推理;文献[213, 217−219];异常检测[227−228]、对抗学习[224]、混淆[229]
[1] Chang Zhuoqing, Liu Shubo, Xiong Xingxing, et al. A survey of recent advances in edge-computing-powered artificial intelligence of things[J]. IEEE Internet of Things Journal, 2021, 8(18): 13849−13875 doi: 10.1109/JIOT.2021.3088875
[2] Wang Wenbo, Zhang Yingfeng, Gu Jinan, et al. A proactive manufacturing resources assignment method based on production performance prediction for the smart factory[J]. IEEE Transactions on Industrial Informatics, 2022, 18(1): 46−55
[3] Yu Liang, Xie Weiwei, Xie Di, et al. Deep reinforcement learning for smart home energy management[J]. IEEE Internet of Things Journal, 2020, 7(4): 2751−2762 doi: 10.1109/JIOT.2019.2957289
[4] Shaikh F K, Karim S, Zeadally S, et al. Recent trends in Internet-of-things-enabled sensor technologies for smart agriculture[J]. IEEE Internet of Things Journal, 2022, 9(23): 23583−23598 doi: 10.1109/JIOT.2022.3210154
[5] Zhao Jianxin, Chang Xinyu, Feng Yanhao, et al. Participant selection for federated learning with heterogeneous data in intelligent transport system[J]. IEEE Transactions on Intelligent Transportation Systems, 2022, 24(1): 1106−1115
[6] IoT Analytics. IoT 2020 in review: The 10 most relevant IoT developments of the year [EB/OL]. (2021-01-12)[2024-07-16]. https://iot-analytics.com/iot-2020-in-review/
[7] IoT Analytics. IoT 2021 in review: The 10 most relevant IoT developments of the year [EB/OL]. (2022-01-11)[2024-07-16]. https://iot-analytics.com/iot-2021-in-review/
[8] 张玉清,周威,彭安妮. 物联网安全综述[J]. 计算机研究与发展,2017,54(10):2130−2143 doi: 10.7544/issn1000-1239.2017.20170470 Zhang Yuqing, Zhou Wei, Peng Anni. Survey of Internet of things security[J]. Journal of Computer Research and Development, 2017, 54(10): 2130−2143(in Chinese) doi: 10.7544/issn1000-1239.2017.20170470
[9] Dong Yudi, Yao Yudong. Secure mmwave-radar-based speaker verification for IoT smart home[J]. IEEE Internet of Things Journal, 2021, 8(5): 3500−3511 doi: 10.1109/JIOT.2020.3023101
[10] Liu Yangyang, Chang Shuo, Wei Zhiqing, et al. Fusing mmwave radar with camera for 3-D detection in autonomous driving[J]. IEEE Internet of Things Journal, 2022, 9(20): 20408−20421 doi: 10.1109/JIOT.2022.3175375
[11] Zhang Chaoyun, Patras P, Haddadi H, et al. Deep learning in mobile and wireless networking: A survey[J]. IEEE Communications Surveys & Tutorials, 2019, 21(3): 2224−2287
[12] He Kaiming, Zhang Xiangyu, Ren Shaoqing, et al. Deep residual learning for image recognition[C]// Proc of the 2016 IEEE Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2016: 770−778
[13] Amodei D, Ananthanarayanan S, Anubhai R, et al. Deep speech 2: End-to-end speech recognition in English and Mandarin[C]// Proc of the 33rd Int Conf on Machine Learning. New York: ACM, 2016: 173–182
[14] Otter D W, Medina J R, Kalita J K. A survey of the usages of deep learning for natural language processing[J]. IEEE Transactions on Neural Networks and Learning Systems, 2021, 32(2): 604−624 doi: 10.1109/TNNLS.2020.2979670
[15] Hasselt H V, Guez A, Silver D. Deep reinforcement learning with double Q-learning[C]// Proc of the 30th AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2016: 2094−2100
[16] Ren Weiqing, Qu Yuben, Dong Chao, et al. A survey on collaborative DNN inference for edge intelligence[J]. Machine Intelligence Research, 2023, 20(3): 370−395 doi: 10.1007/s11633-022-1391-7
[17] EU. Regulation (EU) 2016/679 of the European parliament and of the council on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) [EB/OL]. (2018-05-25) [2024-07-16]. https://gdpr-info.eu/
[18] Li Mu, Andersen D G, Park J W, et al. Scaling distributed machine learning with the parameter server[C]// Proc of the 11th USENIX Conf on Operating Systems Design and Implementation. Berkeley, CA: USENIX Association, 2014: 583–598
[19] Teerapittayanon S, Mcdanel B, Kung H T. Distributed deep neural networks over the cloud, the edge and end devices[C]// Proc of the 37th IEEE Int Conf on Distributed Computing Systems. Piscataway, NJ: IEEE, 2017: 328−339
[20] Lim W Y B, Luong N C, Hoang D T, et al. Federated learning in mobile edge networks: A comprehensive survey[J]. IEEE Communications Surveys & Tutorials, 2020, 22(3): 2031−2063
[21] Zeng Liekang, Chen Xu, Zhou Zhi, et al. CoEdge: Cooperative DNN inference with adaptive workload partitioning over heterogeneous edge devices[J]. IEEE/ACM Transactions on Networking, 2021, 29(2): 595−608 doi: 10.1109/TNET.2020.3042320
[22] Yang Qiang, Liu Yang, Chen Tianjian, et al. Federated machine learning: Concept and applications [J]. ACM Transactions on Intelligent Systems and Technology, 2019, 10(2): Article 12
[23] 朱泓睿,元国军,姚成吉,等. 分布式深度学习训练网络综述[J]. 计算机研究与发展,2021,58(1):98−115 doi: 10.7544/issn1000-1239.2021.20190881 Zhu Hongrui, Yuan Guojun, Yao Chengji, et al. Survey on network of distributed deep learning training[J]. Journal of Computer Research and Development, 2021, 58(1): 98−115 (in Chinese) doi: 10.7544/issn1000-1239.2021.20190881
[24] Nguyen D C, Ding Ming, Pathirana P N, et al. Federated learning for internet of things: A comprehensive survey[J]. IEEE Communications Surveys & Tutorials, 2021, 23(3): 1622−1658
[25] Khan L U, Saad W, Han Zhu, et al. Federated learning for Internet of things: Recent advances, taxonomy, and open challenges[J]. IEEE Communications Surveys & Tutorials, 2021, 23(3): 1759−1799
[26] Zhang Shuai, Zhang Sheng, Qian Zhuzhong, et al. DeepSlicing: Collaborative and adaptive CNN inference with low latency[J]. IEEE Transactions on Parallel and Distributed Systems, 2021, 32(9): 2175−2187 doi: 10.1109/TPDS.2021.3058532
[27] Mao Yunlong, Hong Wenbo, Wang Heng, et al. Privacy-preserving computation offloading for parallel deep neural networks training[J]. IEEE Transactions on Parallel and Distributed Systems, 2021, 32(7): 1777−1788
[28] Bommasani R, Hudson D, Adeli E, et al. On the opportunities and risks of foundation models [J]. arXiv preprint, arXiv: 2108.07258, 2021
[29] Cao Yihan, Li Siyu, Liu Yixin, et al. A comprehensive survey of AI-generated content (AIGC): A history of generative AI from GAN to ChatGPT [J]. arXiv preprint, arXiv: 2303.04226, 2023
[30] Zhou Zhi, Chen Xu, Li En, et al. Edge intelligence: Paving the last mile of artificial intelligence with edge computing[J]. Proceedings of the IEEE, 2019, 107: 1738−1762 doi: 10.1109/JPROC.2019.2918951
[31] 陈云霁,李玲,李威,等. 智能计算系统[M]. 北京:机械工业出版社,2020 Chen Yunji, Li Ling, Li Wei, et al. AI Computing Systems[M]. Beijing: China Machine Press, 2020 (in Chinese)
[32] Poirot M G, Vepakomma P, Chang Ken, et al. Split learning for collaborative deep learning in healthcare [J]. arXiv preprint, arXiv: 1912.12115, 2019
[33] Zhuang Fuzhen, Qi Zhiyuan, Duan Keyu, et al. A comprehensive survey on transfer learning[J]. Proceedings of the IEEE, 2021, 109(1): 43−76 doi: 10.1109/JPROC.2020.3004555
[34] Finn C, Abbeel P, Levine S. Model-agnostic meta-learning for fast adaptation of deep networks[C]// Proc of the 34th Int Conf on Machine Learning. New York: ACM, 2017: 1126–1135
[35] Yao Jiangchao, Wang Feng, Jia Kunyang, et al. Device-cloud collaborative learning for recommendation[C]// Proc of the 27th ACM SIGKDD Conf on Knowledge Discovery & Data Mining. New York: ACM, 2021: 3865−3874
[36] Chen Zeyuan, Yao Jiangchao, Wang Feng, et al. Mc2-SF: Slow-fast learning for mobile-cloud collaborative recommendation [J]. arXiv preprint, arXiv: 2109.12314, 2021
[37] Yao Jiangchao, Zhang Shengyu, Yao Yang, et al. Edge-cloud polarization and collaboration: A comprehensive survey for AI[J]. IEEE Transactions on Knowledge and Data Engineering, 2023, 35(7): 6866−6886
[38] Zhao Yuxi, Gong Xiaowen, Mao Shiwen. Truthful incentive mechanism for federated learning with crowdsourced data labeling[C]// Proc of the 2023 IEEE Conf on Computer Communications. Piscataway, NJ: IEEE, 2023: 1−10
[39] Zhang Tuo, Feng Tiantian, Alam S, et al. GPT-FL: Generative pre-trained model-assisted federated learning [J]. arXiv preprint, arXiv: 2306.02210, 2023
[40] 郭斌,刘思聪,刘琰,等. 智能物联网:概念、体系架构与关键技术[J]. 计算机学报,2023,46(11): 2259−2278 Guo Bin, Liu Sicong, Liu Yan, et al. AIoT: The concept, architecture and key techniques[J]. Chinese Journal of Computers, 2023, 46(11): 2259−2278 (in Chinese)
[41] Kang Yiping, Hauswald J, Gao Cao, et al. Neurosurgeon: Collaborative intelligence between the cloud and mobile edge[C] //Proc of the 22nd Int Conf on Architectural Support for Programming Languages and Operating Systems. New York: ACM, 2017: 615−629
[42] Pan Jingyu, Chang C C, Xie Zhiyao, et al. Towards collaborative intelligence: Routability estimation based on decentralized private data[C] //Proc of the 59th ACM/IEEE Design Automation Conf. New York: ACM, 2022: 961−966
[43] 王睿,齐建鹏,陈亮,等. 面向边缘智能的协同推理综述[J]. 计算机研究与发展,2023,60(2):398−414 doi: 10.7544/issn1000-1239.202110867 Wang Rui, Qi Jianpeng, Chen Liang, et al. Survey of collaborative inference for edge intelligence[J]. Journal of Computer Research and Development, 2023, 60(2): 398−414 (in Chinese) doi: 10.7544/issn1000-1239.202110867
[44] Mcmahan H B, Moore E, Ramage D, et al. Communication-efficient learning of deep networks from decentralized data[C]// Proc of the 20th Int Conf on Artificial Intelligence and Statistics. New York: PMLR, 2017: 1273−1282
[45] Kairouz P, Mcmahan H B, Avent B, et al. Advances and open problems in federated learning[J]. Foundation Trends in Machine Learning, 2021, 14(1): 1−210
[46] Hinton G E, Vinyals O, Dean J. Distilling the knowledge in a neural network [J]. arXiv preprint, arXiv: 1503.02531, 2015
[47] Thapa C, Chamikara M A P, Camtepe S, et al. SplitFed: When federated learning meets split learning[C]// Proc of the 36th AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2022: 8485−8493
[48] Lu Ying, Luo Lingkun, Huang Di, et al. Knowledge transfer in vision recognition: A survey [J]. ACM Computing Surveys, 2020, 53(2): Article 37
[49] Zhao Zhuoran, Barijough K M, Gerstlauer A. DeepThings: Distributed adaptive deep learning inference on resource-constrained IoT edge clusters[J]. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2018, 37(11): 2348−2359 doi: 10.1109/TCAD.2018.2858384
[50] Tang Xin, Chen Xu, Zeng Liekang, et al. Joint multiuser DNN partitioning and computational resource allocation for collaborative edge intelligence[J]. IEEE Internet of Things Journal, 2021, 8(12): 9511−9522 doi: 10.1109/JIOT.2020.3010258
[51] Huang P H, Tu C H, Chung S M, et al. SecureTVM: A TVM-based compiler framework for selective privacy-preserving neural inference[J]. ACM Transactions on Design Automation of Electronic Systems, 2023, 28(4): 1−28
[52] He Zecheng, Zhang Tianwei, Lee R B. Model inversion attacks against collaborative inference[C]// Proc of the 35th Annual Computer Security Applications Conf. New York: ACM, 2019: 148–162
[53] Chen Hanxiao, Li Hongwei, Dong Guishan, et al. Practical membership inference attack against collaborative inference in industrial IoT[J]. IEEE Transactions on Industrial Informatics, 2022, 18(1): 477−487 doi: 10.1109/TII.2020.3046648
[54] Ayad A, Renner M, Schmeink A. Improving the communication and computation efficiency of split learning for IoT applications[C/OL]// Proc of the 2021 IEEE Global Communications Conf. Piscataway, NJ: IEEE, 2021[2024-08-17]. https://ieeexplore.ieee.org/document/9685493
[55] Li Tian, Sahu A K, Talwalkar A, et al. Federated learning: Challenges, methods, and future directions[J]. IEEE Signal Processing Magazine, 2020, 37(3): 50−60 doi: 10.1109/MSP.2020.2975749
[56] Zhao Yuchen, Barnaghi P, Haddadi H. Multimodal federated learning on IoT data[C]// Proc of the 7th IEEE/ACM Int Conf on Internet-of-Things Design and Implementation. Piscataway, NJ: IEEE, 2022: 43−54
[57] Kirkpatrick J, Pascanu R, Rabinowitz N, et al. Overcoming catastrophic forgetting in neural networks[J]. Proceedings of the National Academy of Sciences, 2017, 114(13): 3521−3526 doi: 10.1073/pnas.1611835114
[58] Zhang Zhouyangzi, Guo Bin, Sun Wen, et al. Cross-FCL: Toward a cross-edge federated continual learning framework in mobile edge computing systems[J]. IEEE Transactions on Mobile Computing, 2022, 23(1): 313−326
[59] Zhuo H H, Feng Wenfeng, Lin Yufeng, et al. Federated deep reinforcement learning [J]. arXiv preprint, arXiv: 1901.08277, 2019
[60] Kingma D P, Ba J. Adam: A method for stochastic optimization[C/OL]// Proc of the 3rd Int Conf on Learning Representations. Washington: ICLR, 2015[2024-08-16]. https://www.semanticscholar.org/reader/a6cb366736791bcccc5c8639de5a8f9636bf87e8
[61] Zhang Jianyi, Li Ang, Tang Minxue, et al. Fed-CBS: A heterogeneity-aware client sampling mechanism for federated learning via class-imbalance reduction[C]// Proc of the 40th Int Conf on Machine Learning. New York: ACM, 2023: Article 1734
[62] Duan Moming, Liu Duo, Chen Xianzhang, et al. Self-balancing federated learning with global imbalanced data in mobile systems[J]. IEEE Transactions on Parallel and Distributed Systems, 2020, 32(1): 59−71
[63] Li Tian, Sahu A K, Zaheer M, et al. Federated optimization in heterogeneous networks[C/OL]// Proc of the 3rd Conf on Machine Learning and Systems. Indio, CA: MLSys. org, 2020[2024-08-16]. https://proceedings.mlsys.org/paper_files/paper/2020/hash/1f5fe83998a09396ebe6477d9475ba0c-Abstract.html
[64] Karimireddy S P, Kale S, Mohri M, et al. SCAFFOLD: Stochastic controlled averaging for federated learning[C]// Proc of the 37th Int Conf on Machine Learning. New York: ACM, 2020: 5132−5143
[65] Arivazhagan M G, Aggarwal V, Singh A K, et al. Federated learning with personalization layers [J]. arXiv preprint, arXiv: 1912.00818, 2019
[66] Li Tian, Hu Shengyuan, Beirami A, et al. Ditto: Fair and robust federated learning through personalization[C]// Proc of the 38th Int Conf on Machine Learning. New York: ACM, 2021: 6357−6368
[67] Xie Cong, Koyejo O, Gupta I. Asynchronous federated optimization [J]. arXiv preprint, arXiv: 1903.03934, 2019
[68] Lu Yunlong, Huang Xiaohong, Zhang Ke, et al. Blockchain empowered asynchronous federated learning for secure data sharing in Internet of vehicles[J]. IEEE Transactions on Vehicular Technology, 2020, 69(4): 4298−4311 doi: 10.1109/TVT.2020.2973651
[69] Sun Yuchang, Shao Jiawei, Mao Yuyi, et al. Semi-decentralized federated edge learning with data and device heterogeneity[J]. IEEE Transactions on Network and Service Management, 2023, 20(2): 1487−1501 doi: 10.1109/TNSM.2023.3252818
[70] Zhang Feilong, Liu Xianming, Lin Shiyi, et al. No one idles: Efficient heterogeneous federated learning with parallel edge and server computation[C]// Proc of the 40th Int Conf on Machine Learning. New York: ACM, 2023: 41399−41413
[71] Diao Enmao, Ding Jie, Tarokh V. HeteroFL: Computation and communication efficient federated learning for heterogeneous clients[C] // Proc of the 2021 Int Conf on Learning Representations. Washington: ICLR, 2021: 1−24
[72] Alam S, Liu Luyang, Yan Ming, et al. FedRolex: Model-heterogeneous federated learning with rolling sub-model extraction[C] // Proc of the 36th Annual Conf on Neural Information Processing Systems. Cambridge, MA: MIT, 2022: 29677−29690
[73] He Chaoyang, Annavaram M, Avestimehr S. Group knowledge transfer: Federated learning of large CNNs at the edge[C]// Proc of the 34th Int Conf on Neural Information Processing Systems. New York: Curran Associates Inc, 2020: Article 1180
[74] Itahara S, Nishio T, Koda Y, et al. Distillation-based semi-supervised federated learning for communication-efficient collaborative training with non-IID private data[J]. IEEE Transactions on Mobile Computing, 2023, 22(1): 191−205 doi: 10.1109/TMC.2021.3070013
[75] Lin Tao, Kong Lingjing, Stich S U, et al. Ensemble distillation for robust model fusion in federated learning[C]// Proc of the 34th Int Conf on Neural Information Processing Systems. New York: Curran Associates Inc, 2020: Article 198
[76] Lin Yiming, Gao Yuan, Gong Maoguo, et al. Federated learning on multimodal data: A comprehensive survey[J]. Machine Intelligence Research, 2023, 20(4): 539−553 doi: 10.1007/s11633-022-1398-0
[77] Xiong Baochen, Yang Xiaoshan, Qi Fan, et al. A unified framework for multi-modal federated learning [J]. Neurocomputing, 2022, 480: 110−118
[78] Lu Jiasen, Yang Jianwei, Batra D, et al. Hierarchical question-image co-attention for visual question answering[C]// Proc of the 30th Int Conf on Neural Information Processing Systems. New York: Curran Associates Inc, 2016: 289−297
[79] Liu Fenglin, Wu Xian, Ge Shen, et al. Federated learning for vision-and-language grounding problems[C]// Proc of the 34th AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2020: 11572−11579
[80] Chen Jiayi, Zhang Aidong. FedMSplit: Correlation-adaptive federated multi-task learning across multimodal split networks[C]// Proc of the 28th ACM SIGKDD Conf on Knowledge Discovery and Data Mining. New York: ACM, 2022: 87–96
[81] Zhang Rongyu, Chi Xiaowei, Liu Guiliang, et al. Unimodal training-multimodal prediction: Cross-modal federated learning with hierarchical aggregation [J]. arXiv preprint, arXiv: 2303.15486, 2023
[82] Liu Boyi, Wang Lujia, Liu Ming. Lifelong federated reinforcement learning: A learning architecture for navigation in cloud robotic systems[J]. IEEE Robotics and Automation Letters, 2019, 4(4): 4555−4562 doi: 10.1109/LRA.2019.2931179
[83] Jiang Ziyue, Ren Yi, Lei Ming, et al. FedSpeech: Federated text-to-speech with continual learning[C]// Proc of the 30th Int Joint Conf on Artifical Intelligence. Berlin: Springer, 2021: 3829−3835
[84] Hung S C Y, Tu Chenghao, Wu Chengen, et al. Compacting, picking and growing for unforgetting continual learning[C]// Proc of the 33rd Int Conf on Neural Information Processing Systems. New York: Curran Associates Inc, 2019: Article 1225
[85] Usmanova A, Portet F, Lalanda P, et al. Federated continual learning through distillation in pervasive computing[C]// Proc of the 2022 IEEE Int Conf on Smart Computing. Piscataway, NJ: IEEE, 2022: 86−91
[86] Yoon J H, Jeong W Y, Lee G W, et al. Federated continual learning with weighted inter-client transfer[C]// Proc of the 38th Int Conf on Machine Learning. New York: PMLR, 2021: 12073−12086
[87] Mori J, Teranishi I, Furukawa R. Continual horizontal federated learning for heterogeneous data[C/OL]// Proc of the 2022 Int Joint Conf on Neural Networks. Piscataway, NJ: IEEE, 2022[2024-08-16]. https://www.semanticscholar.org/reader/3674cbf1900f748e5d1e981f296790256989a62e
[88] Hendryx S M, Kc D R, Walls B, et al. Federated reconnaissance: Efficient, distributed, class-incremental learning [J]. arXiv preprint, arXiv: 2109.00150, 2021
[89] Xu Chencheng, Hong Zhiwei, Huang Minlie, et al. Acceleration of federated learning with alleviated forgetting in local training[C/OL]// Proc of the 10th Int Conf on Learning Representations. Washington: ICLR, 2022[2024-07-30]. https://openreview.net/pdf?id=541PxiEKN3F
[90] Dong Jiahua, Wang Lixu, Fang Zhen, et al. Federated class-incremental learning[C]// Proc of the 2022 IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2022: 10154−10163
[91] Wang Tianyu, Liang Teng, Li Jun, et al. Adaptive traffic signal control using distributed MARL and federated learning[C]// Proc of the 20th IEEE Int Conf on Communication Technology. Piscataway, NJ: IEEE, 2020: 1242−1248
[92] Liu Haotian, Wu Wenchuan. Federated reinforcement learning for decentralized voltage control in distribution networks[J]. IEEE Transactions on Smart Grid, 2022, 13(5): 3840−3843 doi: 10.1109/TSG.2022.3169361
[93] Rezazadeh F, Bartzoudis N. A federated DRL approach for smart micro-grid energy control with distributed energy resources[C]// Proc of the 27th IEEE Int Workshop on Computer Aided Modeling and Design of Communication Links and Networks. Piscataway, NJ: IEEE, 2022: 108−114
[94] Wang Xiaofei, Wang Chenyang, Li Xiuhua, et al. Federated deep reinforcement learning for Internet of things with decentralized cooperative edge caching[J]. IEEE Internet of Things Journal, 2020, 7(10): 9441−9455 doi: 10.1109/JIOT.2020.2986803
[95] Yu Shuai, Chen Xu, Zhou Zhi, et al. When deep reinforcement learning meets federated learning: Intelligent multitimescale resource management for multiaccess edge computing in 5G ultradense network[J]. IEEE Internet of Things Journal, 2021, 8(4): 2238−2251 doi: 10.1109/JIOT.2020.3026589
[96] Wang Xiaoding, Hu Jia, Lin Hui, et al. QoS and privacy-aware routing for 5G-enabled industrial Internet of things: A federated reinforcement learning approach[J]. IEEE Transactions on Industrial Informatics, 2022, 18(6): 4189−4197 doi: 10.1109/TII.2021.3124848
[97] Jeong H J, Lee H J, Shin C H, et al. IONN: Incremental offloading of neural network computations from mobile devices to edge servers[C] //Proc of the 2018 ACM Symp on Cloud Computing. New York: ACM, 2018: 401−411
[98] Zhou Li, Samavatian M H, Bacha A, et al. Adaptive parallel execution of deep neural networks on heterogeneous edge devices[C] // Proc of the 4th ACM/IEEE Symp on Edge Computing. New York: ACM, 2019: 195−208
[99] Yang Xiang, Xu Zikang, Qi Qi, et al. PICO: Pipeline inference framework for versatile CNNs on diverse mobile devices[J]. IEEE Transactions on Mobile Computing, 2024, 23(4): 2712−2730
[100] Hu Chenghao, Li Baochun. Distributed inference with deep learning models across heterogeneous edge devices[C]// Proc of the 2022 IEEE Conf on Computer Communications. Piscataway, NJ: IEEE, 2022: 330−339
[101] Kim Y G, Wu C J. AutoScale: Energy efficiency optimization for stochastic edge inference using reinforcement learning[C]// Proc of the 53rd Annual IEEE/ACM Int Symp on Microarchitecture. Piscataway, NJ: IEEE, 2020: 1082−1096
[102] Hou Xueyu, Guan Yongjie, Han Tao, et al. DistrEdge: Speeding up convolutional neural network inference on distributed edge devices[C]// Proc of the 2022 IEEE Int Parallel and Distributed Processing Symp. Piscataway, NJ: IEEE, 2022: 1097−1107
[103] Fu Kaihua, Shi Jiuchen, Chen Quan, et al. QoS-aware irregular collaborative inference for improving throughput of DNN services[C/OL] //Proc of the 2022 Int Conf for High Performance Computing, Networking, Storage and Analysis. Piscataway, NJ: IEEE, 2022[2024-08-16]. https://dl.acm.org/doi/10.5555/3571885.3571976
[104] Lillicrap T P, Hunt J J, Pritzel A, et al. Continuous control with deep reinforcement learning[C/OL]// Proc of the 4th Int Conf on Learning Representations. Washington: ICLR, 2016[2024-07-30]. https://www.semanticscholar.org/reader/024006d4c2a89f7acacc6e4438d156525b60a98f
[105] Wang Lingdong, Xiang Liyao, Xu Jiayu, et al. Context-aware deep model compression for edge cloud computing[C]// Proc of the 40th IEEE Int Conf on Distributed Computing Systems. Piscataway, NJ: IEEE, 2020: 787−797
[106] Molina M, Muñoz O, Pascual-Iserte A, et al. Joint scheduling of communication and computation resources in multiuser wireless application offloading[C]// Proc of the 25th IEEE Annual Int Symp on Personal, Indoor, and Mobile Radio Communication. Piscataway, NJ: IEEE, 2014: 1093−1098
[107] Kang Yiping, Hauswald J, Gao Cao, et al. Neurosurgeon: Collaborative intelligence between the cloud and mobile edge[C] // Proc of the 22nd Int Conf on Architectural Support for Programming Languages and Operating Systems. New York: ACM, 2017: 615−629
[108] Zhuang Weiming, Chen Chen, Lyu Lingjuan. When foundation model meets federated learning: Motivations, challenges, and future directions [J]. arXiv preprint, arXiv: 2306.15546, 2023
[109] Li Mingzhen, Liu Yi, Liu Xiaoyan, et al. The deep learning compiler: A comprehensive survey[J]. IEEE Transactions on Parallel and Distributed Systems, 2021, 32(3): 708−727 doi: 10.1109/TPDS.2020.3030548
[110] Zeng Qunsong, Du Yuqing, Huang Kaibin, et al. Energy-efficient resource management for federated edge learning with CPU-GPU heterogeneous computing[J]. IEEE Transactions on Wireless Communications, 2021, 20(12): 7947−7962 doi: 10.1109/TWC.2021.3088910
[111] Han M, Hyun J, Park S, et al. MOSAIC: Heterogeneity-, communication-, and constraint-aware model slicing and execution for accurate and efficient inference[C]// Proc of the 28th Int Conf on Parallel Architectures and Compilation Techniques. Piscataway, NJ: IEEE, 2019: 165−177
[112] Chen Yunji, Luo Tao, Liu Shaoli, et al. DaDianNao: A machine-learning supercomputer[C]// Proc of the 47th Annual IEEE/ACM Int Symp on Microarchitecture. Piscataway, NJ: IEEE, 2014: 609−622
[113] Jouppi N P, Young C, Patil N, et al. In-datacenter performance analysis of a tensor processing unit[C/OL] // Proc of the 44th ACM/IEEE Annual Int Symp on Computer Architecture. New York: ACM, 2017[2024-08-16]. https://dl.acm.org/doi/10.1145/3079856.3080246
[114] Ro J H, Suresh A T, Wu Ke. FedJAX: Federated learning simulation with JAX [J]. arXiv preprint, arXiv: 2108.02117, 2021
[115] Lee J Y, Park W P, Mitchell N, et al. JaxPruner: A concise library for sparsity research [J]. arXiv preprint, arXiv: 2304.14082, 2023
[116] Villarrubia J, Costero L, Igual F D, et al. Improving inference time in multi-TPU systems with profiled model segmentation[C]// Proc of the 31st Euromicro Int Conf on Parallel, Distributed and Network-Based Processing. Piscataway, NJ: IEEE, 2023: 84−91
[117] Wang Zixiao, Che Biyao, Guo Liang, et al. PipeFL: Hardware/software co-design of an FPGA accelerator for federated learning[J]. IEEE Access, 2022, 10: 98649−98661 doi: 10.1109/ACCESS.2022.3206785
[118] Li H M, Rieger P, Zeitouni S, et al. FLAIRS: FPGA-accelerated inference-resistant & secure federated learning [J]. arXiv preprint, arXiv: 2308.00553, 2023
[119] Phong L T, Aono Y, Hayashi T, et al. Privacy-preserving deep learning via additively homomorphic encryption[J]. IEEE Transactions on Information Forensics and Security, 2018, 13(5): 1333−1345 doi: 10.1109/TIFS.2017.2787987
[120] Nguyen T D, Rieger P, Chen Huili, et al. FLAME: Taming backdoors in federated learning[C]// Proc of the 31st USENIX Security Symp. Berkeley, CA: USENIX Association, 2022: 1415−1432
[121] 包云岗,常轶松,韩银和,等. 处理器芯片敏捷设计方法:问题与挑战[J]. 计算机研究与发展,2021,58(6):1131−1145 doi: 10.7544/issn1000-1239.2021.20210232 Bao Yungang, Chang Yisong, Han Yinhe, et al. Agile design of processor chips: Issues and challenges[J]. Journal of Computer Research and Development, 2021, 58(6): 1131−1145 (in Chinese) doi: 10.7544/issn1000-1239.2021.20210232
[122] 王凯帆,徐易难,余子濠,等. 香山开源高性能RISC-V处理器设计与实现[J]. 计算机研究与发展,2023,60(3):476−493 doi: 10.7544/issn1000-1239.202221036 Wang Kaifan, Xu Yinan, Yu Zihao, et al. XiangShan open-source high performance RISC-V processor design and implementation[J]. Journal of Computer Research and Development, 2023, 60(3): 476−493 (in Chinese) doi: 10.7544/issn1000-1239.202221036
[123] Dhilleswararao P, Boppu S, Manikandan M S, et al. Efficient hardware architectures for accelerating deep neural networks: Survey[J]. IEEE Access, 2022, 10: 131788−131828 doi: 10.1109/ACCESS.2022.3229767
[124] Zhao Yongwei, Du Zidong, Guo Qi, et al. Cambricon-F: Machine learning computers with fractal von Neumann architecture[C]// Proc of the 46th ACM/IEEE Annual Int Symp on Computer Architecture. Piscataway, NJ: IEEE, 2019: 788−801
[125] Chen Tianqi, Moreau T, Jiang Ziheng, et al. TVM: An automated end-to-end optimizing compiler for deep learning[C]// Proc of the 13th USENIX Conf on Operating Systems Design and Implementation. Berkeley, CA: USENIX Association, 2018: 579–594
[126] PyTorch. PyTorch on XLA devices [EB/OL]. (2023-10-21)[2024-07-17]. https://pytorch.org/xla/master/
[127] PyTorch. AOT Autograd—How to use and optimize?[EB/OL]. (2023-10-25)[2024-07-17]. https://pytorch.org/functorch/stable/notebooks/aot_autograd_optimizations.html
[128] Coral. Edge TPU compiler [EB/OL]. (2020-05-15)[2024-07-16]. https://coral.ai/docs/edgetpu/compiler/#help
[129] NVIDIA. Optimizing inference on large language models with NVIDIA TensorRT-LLM, now publicly available [EB/OL]. (2023-10-19)[2024-07-16]. https://developer.nvidia.com/blog/optimizing-inference-on-llms-with-tensorrt-llm-now-publicly-available/
[130] NVIDIA. TensorRT-LLM [EB/OL]. (2023-10-24)[2024-07-17]. https://github.com/NVIDIA/TensorRT-LLM
[131] ONNX. ONNX [EB/OL]. (2024-05-24)[2024-07-17]. https://onnx.ai/
[132] MLIR. Multi-level intermediate representation overview [EB/OL]. (2017-07-17)[2024-07-17]. https://mlir.llvm.org/
[133] Jin Tian, Bercea G T, Tung D L, et al. Compiling ONNX neural network models using MLIR [J]. arXiv preprint, arXiv: 2008.08272, 2020
[134] Gao Liang, Li Li, Chen Yingwen, et al. FIFL: A fair incentive mechanism for federated learning[C]// Proc of the 50th Int Conf on Parallel Processing. New York: ACM, 2021: Article 82
[135] TensorFlow Lite Team. On-device training in TensorFlow Lite [EB/OL]. (2021-11-09)[2024-07-17]. https://blog.tensorflow.org/2021/11/on-device-training-in-tensorflow-lite.html
[136] TS2 Space. A comprehensive guide to TensorFlow Lite's federated learning [EB/OL]. (2023-04-07)[2024-07-17]. https://ts2.space/en/a-comprehensive-guide-to-tensorflow-lites-federated-learning/
[137] He Chaoyang, Li Songze, So Jinhyun, et al. FedML: A research library and benchmark for federated machine learning [J]. arXiv preprint, arXiv: 2007.13518, 2020
[138] Beutel D J, Topal T, Mathur A, et al. Flower: A friendly federated learning research framework [J]. arXiv preprint, arXiv: 2007.14390, 2020
[139] Jeong E J, Kim J R, Ha S H. TensorRT-based framework and optimization methodology for deep learning inference on jetson boards [J]. ACM Transactions on Embedded Computing Systems, 2022, 21(5): Article 51
[140] Jiang Xiaotang, Wang Huan, Chen Yiliu, et al. MNN: A universal and efficient inference engine[C/OL]// Proc of the 3rd Conf on Machine Learning and Systems. Austin, TX: MLSys.org, 2020[2024-07-30]. https://proceedings.mlsys.org/paper_files/paper/2020/file/bc19061f88f16e9ed4a18f0bbd47048a-Paper.pdf
[141] Lv Chengfei, Niu Chaoyue, Gu Renjie, et al. Walle: An end-to-end, general-purpose, and large-scale production system for device-cloud collaborative machine learning[C/OL]// Proc of the 16th USENIX Symp on Operating Systems Design and Implementation. Berkeley, CA: USENIX Association, 2022[2024-08-16]. https://www.usenix.org/conference/osdi22/presentation/lv
[142] Aminabadi R Y, Rajbhandari S, Awan A A, et al. DeepSpeed- inference: Enabling efficient inference of transformer models at unprecedented scale[C/OL]// Proc of the 2022 Int Conf for High Performance Computing, Networking, Storage and Analysis. Piscataway, NJ: IEEE, 2022[2024-08-16]. https://dl.acm.org/doi/abs/10.5555/3571885.3571946
[143] Liu Lumin, Zhang Jun, Song S H, et al. Client-edge-cloud hierarchical federated learning[C/OL]// Proc of the 2020 IEEE Int Conf on Communications. Piscataway, NJ: IEEE, 2020[2024-08-17]. https://ieeexplore.ieee.org/document/9148862
[144] Yang Shusen, Zhang Zhanhua, Zhao Cong, et al. CNNPC: End-edge-cloud collaborative CNN inference with joint model partition and compression[J]. IEEE Transactions on Parallel and Distributed Systems, 2022, 33(12): 4039−4056 doi: 10.1109/TPDS.2022.3177782
[145] Korkmaz C, Kocas H E, Uysal A, et al. Chain FL: Decentralized federated machine learning via blockchain[C]// Proc of the 2nd Int Conf on Blockchain Computing and Applications. Piscataway, NJ: IEEE, 2020: 140−146
[146] Du Jiangsu, Shen Minghua, Du Yunfei. A distributed in-situ CNN inference system for IoT applications[C]// Proc of the 38th Int Conf on Computer Design. Piscataway, NJ: IEEE, 2020: 279−287
[147] Lyu L, Yu Jiangshan, Nandakumar K, et al. Towards fair and privacy-preserving federated deep models[J]. IEEE Transactions on Parallel and Distributed Systems, 2020, 31(11): 2524−2541 doi: 10.1109/TPDS.2020.2996273
[148] Wang Luping, Wang Wei, Li Bo. CMFL: Mitigating communication overhead for federated learning[C]// Proc of the 39th IEEE Int Conf on Distributed Computing Systems. Piscataway, NJ: IEEE, 2019: 954−964
[149] Yu Hao, Yang Sen, Zhu Shenghuo. Parallel restarted SGD with faster convergence and less communication: Demystifying why model averaging works for deep learning[C] //Proc of the 33rd AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2019: 5693−5700
[150] You Chaoqun, Guo Kun, Feng Gang, et al. Automated federated learning in mobile-edge networks—Fast adaptation and convergence[J]. IEEE Internet of Things Journal, 2023, 10(15): 13571−13586 doi: 10.1109/JIOT.2023.3262664
[151] Heinbaugh C E, Luz-Ricca E, Shao Huajie. Data-free one-shot federated learning under very high statistical heterogeneity[C/OL] //Proc of the 11th Int Conf on Learning Representations. Washington: ICLR, 2023[2024-07-30]. https://openreview.net/forum?id=_hb4vM3jspB
[152] Sattler F, Wiedemann S, Müller K R, et al. Robust and communication-efficient federated learning from non-i.i.d. data[J]. IEEE Transactions on Neural Networks and Learning Systems, 2020, 31(9): 3400−3413 doi: 10.1109/TNNLS.2019.2944481
[153] Gao Hongchang, Xu An, Huang Heng. On the convergence of communication-efficient local SGD for federated learning[C]// Proc of the 35th AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2021: 7510−7518
[154] Hönig R, Zhao Yiren, Mullins R D. DAdaQuant: Doubly-adaptive quantization for communication-efficient federated learning[C]// Proc of the 39th Int Conf on Machine Learning. New York: PMLR, 2022: 8852−8866
[155] Nguyen M D, Lee S M, Pham Q V, et al. HCFL: A high compression approach for communication-efficient federated learning in very large scale IoT networks[J]. IEEE Transactions on Mobile Computing, 2023, 22(11): 6495−6507 doi: 10.1109/TMC.2022.3190510
[156] Dai Rong, Shen Li, He Fengxiang, et al. DisPFL: Towards communication-efficient personalized federated learning via decentralized sparse training[C]// Proc of the 39th Int Conf on Machine Learning. New York: PMLR, 2022: 4587−4604
[157] Wen Hui, Wu Yue, Li Jingjing, et al. Communication-efficient federated data augmentation on non-IID data[C]// Proc of the 2022 IEEE/CVF Conf on Computer Vision and Pattern Recognition Workshops. Piscataway, NJ: IEEE, 2022: 3376−3385
[158] Zhang Zhanhua, Yang Shusen, Zhao Cong, et al. RtCoInfer: Real-time collaborative CNN inference for stream analytics on ubiquitous images[J]. IEEE Journal on Selected Areas in Communications, 2023, 41(4): 1212−1226 doi: 10.1109/JSAC.2023.3242730
[159] Chen Xu, Jiao Lei, Li Wenzhong, et al. Efficient multi-user computation offloading for mobile-edge cloud computing[J]. IEEE/ACM Transactions on Networking, 2016, 24(5): 2795−2808 doi: 10.1109/TNET.2015.2487344
[160] Yi Changyan, Cai Jun, Su Zhou. A multi-user mobile computation offloading and transmission scheduling mechanism for delay-sensitive applications[J]. IEEE Transactions on Mobile Computing, 2020, 19(1): 29−43 doi: 10.1109/TMC.2019.2891736
[161] Ale L H, Zhang Ning, Fang Xiaojie, et al. Delay-aware and energy-efficient computation offloading in mobile-edge computing using deep reinforcement learning[J]. IEEE Transactions on Cognitive Communications and Networking, 2021, 7(3): 881−892 doi: 10.1109/TCCN.2021.3066619
[162] Mozaffariahrar E, Theoleyre F, Menth M. A survey of Wi-Fi 6: Technologies, advances, and challenges[J]. Future Internet, 2022, 14(10): 293−345 doi: 10.3390/fi14100293
[163] Das A K, Roy S, Bandara E, et al. Securing age-of-information (AoI)-enabled 5G smart warehouse using access control scheme[J]. IEEE Internet of Things Journal, 2023, 10(2): 1358−1375 doi: 10.1109/JIOT.2022.3205245
[164] Mehr H D, Polat H. Human activity recognition in smart home with deep learning approach[C]// Proc of the 7th Int Istanbul Smart Grids and Cities Congress and Fair. Piscataway, NJ: IEEE, 2019: 149−153
[165] Qi Lianyong, Hu Chunhua, et al. Privacy-aware data fusion and prediction with spatial-temporal context for smart city industrial environment[J]. IEEE Transactions on Industrial Informatics, 2021, 17(6): 4159−4167 doi: 10.1109/TII.2020.3012157
[166] Chen Yiqiang, Wang Jindong, Yu Chaohui, et al. FedHealth: A federated transfer learning framework for wearable healthcare[J]. IEEE Intelligent Systems, 2020, 35(4): 83−93 doi: 10.1109/MIS.2020.2988604
[167] Lee S, Choi D H. Federated reinforcement learning for energy management of multiple smart homes with distributed energy resources[J]. IEEE Transactions on Industrial Informatics, 2022, 18(1): 488−497 doi: 10.1109/TII.2020.3035451
[168] 王帅,李丹. 分布式机器学习系统网络性能优化研究进展[J]. 计算机学报,2022,45(7):1384−1412 Wang Shuai, Li Dan. Research progress on network performance optimization of distributed machine learning system[J]. Chinese Journal of Computers, 2022, 45(7): 1384−1412 (in Chinese)
[169] Martinez I, Hafid A S, Jarray A. Design, resource management, and evaluation of fog computing systems: A survey[J]. IEEE Internet of Things Journal, 2021, 8(4): 2494−2516 doi: 10.1109/JIOT.2020.3022699
[170] Lu Yan, Shu Yuanchao, Tan Xu, et al. Collaborative learning between cloud and end devices: An empirical study on location prediction[C]// Proc of the 4th ACM/IEEE Symp on Edge Computing. New York: ACM, 2019: 139–151
[171] Encora. Ahead-of-time compilation vs just-in-time compilation: Part 1 of understanding Angular [EB/OL]. (2024-07-14)[2024-07-16]. https://www.encora.com/insights/ahead-of-time-compilation-vs-just-in-time-compilation-part-1
[172] Yu Hao, Yang Sen, Zhu Shenghuo. Parallel restarted SGD with faster convergence and less communication: Demystifying why model averaging works for deep learning[C]// Proc of the 33rd AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2019: 5693−5700
[173] Tramèr F, Zhang Fan, Juels A, et al. Stealing machine learning models via prediction APIs[C]// Proc of the 25th USENIX Conf on Security Symp. Berkeley, CA: USENIX Association, 2016: 601–618
[174] Yin Yupeng, Zhang Xianglong, Zhang Huanle, et al. Ginver: Generative model inversion attacks against collaborative inference[C]// Proc of the 2023 ACM Web Conf. New York: ACM, 2023: 2122–2131
[175] Jin Xiao, Chen Pinyu, Hsu Chiayi, et al. CAFE: Catastrophic data leakage in vertical federated learning[C/OL] //Proc of the 35th Conf on Neural Information Processing Systems. Cambridge, MA: MIT, 2021[2024-07-30]. https://papers.nips.cc/paper/2021/hash/08040837089cdf46631a10aca5258e16-Abstract.html
[176] Nguyen T D T, Lai P, Tran K, et al. Active membership inference attack under local differential privacy in federated learning[C]// Proc of the 26th Int Conf on Artificial Intelligence and Statistics. New York: PMLR, 2023: 5714−5730
[177] Li Jiacheng, Li Ninghui, Ribeiro B. Effective passive membership inference attacks in federated learning against overparameterized models[C/OL] //Proc of the 11th Int Conf on Learning Representation. Washington: ICLR, 2023[2024-07-30]. https://openreview.net/pdf?id=QsCSLPP55Ku
[178] Melis L, Song C, Cristofaro E D, et al. Exploiting unintended feature leakage in collaborative learning[C]// Proc of the 40th IEEE Symp on Security and Privacy. Piscataway, NJ: IEEE, 2019: 691−706
[179] Wang Zhibo, Huang Yuting, Song Mengkai, et al. Poisoning-assisted property inference attack against federated learning[J]. IEEE Transactions on Dependable and Secure Computing, 2023, 20(4): 3328−3340 doi: 10.1109/TDSC.2022.3196646
[180] Kourtellis N, Katevas K, Perino D. FLaaS: Federated learning as a service[C] //Proc of the 1st Workshop on Distributed Machine Learning. New York: ACM, 2020: 7−13
[181] Truong J B, Maini P, Walls R J, et al. Data-free model extraction[C]// Proc of the 2021 IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2021: 4769−4778
[182] Liu Sijia, Chen Pinyu, Kailkhura B, et al. A primer on zeroth-order optimization in signal processing and machine learning: Principals, recent advances, and applications[J]. IEEE Signal Processing Magazine, 2020, 37(5): 43−54 doi: 10.1109/MSP.2020.3003837
[183] Fraboni Y, Vidal R, Lorenzi M. Free-rider attacks on model aggregation in federated learning[C]// Proc of the 24th Int Conf on Artificial Intelligence and Statistics. New York: PMLR, 2021: 1846−1854
[184] Lin Jierui, Du Min, Liu Jian. Free-riders in federated learning: Attacks and defenses [J]. arXiv preprint, arXiv: 1911.12560, 2019
[185] Abadi M, Chu Andy, Goodfellow I J, et al. Deep learning with differential privacy[C]// Proc of the 2016 ACM SIGSAC Conf on Computer and Communications Security. New York: ACM, 2016: 308–318
[186] Wang Baocang, Chen Yange, Jiang Hang, et al. PPeFL: Privacy-preserving edge federated learning with local differential privacy[J]. IEEE Internet of Things Journal, 2023, 10(17): 15488−15500 doi: 10.1109/JIOT.2023.3264259
[187] He Zecheng, Zhang Tianwei, Lee R B. Attacking and protecting data privacy in edge–cloud collaborative inference systems[J]. IEEE Internet of Things Journal, 2021, 8(12): 9706−9716 doi: 10.1109/JIOT.2020.3022358
[188] Jiang Bin, Li Jianqiang, Wang Huihui, et al. Privacy-preserving federated learning for industrial edge computing via hybrid differential privacy and adaptive compression[J]. IEEE Transactions on Industrial Informatics, 2023, 19(2): 1136−1144 doi: 10.1109/TII.2021.3131175
[189] Mironov I. Rényi differential privacy[C]// Proc of the 30th IEEE Computer Security Foundations Symp. Piscataway, NJ: IEEE, 2017: 263−275
[190] Ryu Jihyeon, Zheng Yifeng, Gao Yansong, et al. Can differential privacy practically protect collaborative deep learning inference for IoT?[J/OL]. Wireless Networks, 2022[2024-07-30]. https://link.springer.com/article/10.1007/s11276-022-03113-7
[191] Cheon J H, Kim A, Kim M, et al. Homomorphic encryption for arithmetic of approximate numbers[C/OL]// Proc of the 2017 Int Conf on the Theory and Application of Cryptology and Information Security. Berlin: Springer, 2017 [2024-07-30]. https://link.springer.com/chapter/10.1007/978-3-319-70694-8_15
[192] Zhang Chengliang, Li Suyi, Xia Junzhe, et al. BatchCrypt: Efficient homomorphic encryption for cross-silo federated learning[C]// Proc of the 2020 USENIX Annual Technical Conf. Berkeley, CA: USENIX Association, 2020: 493−506
[193] Zhu Yilan, Wang Xinyao, Ju Lei, et al. FxHENN: FPGA-based acceleration framework for homomorphic encrypted CNN inference[C]// Proc of the 29th IEEE Int Symp on High-Performance Computer Architecture. Piscataway, NJ: IEEE, 2023: 896−907
[194] Yang Zhaoxiong, Hu Shuihai, Chen Kai. FPGA-based hardware accelerator of homomorphic encryption for efficient federated learning [J]. arXiv preprint, arXiv: 2007.10560, 2020
[195] Juvekar C, Vaikuntanathan V, Chandrakasan A P. Gazelle: A low latency framework for secure neural network inference[C]// Proc of the 27th USENIX Security Symp. Berkeley, CA: USENIX Association, 2018: 1651−1668
[196] Li Yiran, Li Hongwei, Xu Guowen, et al. Practical privacy-preserving federated learning in vehicular fog computing[J]. IEEE Transactions on Vehicular Technology, 2022, 71(5): 4692−4705 doi: 10.1109/TVT.2022.3150806
[197] Jarin I, Eshete B. PRICURE: Privacy-preserving collaborative inference in a multi-party setting[C]// Proc of the 2021 ACM Workshop on Security and Privacy Analytics. New York: ACM, 2021: 25–35
[198] Liu Yang, Kang Yan, Xing Chaoping, et al. A secure federated transfer learning framework[J]. IEEE Intelligent Systems, 2020, 35(4): 70−82 doi: 10.1109/MIS.2020.2988525
[199] Tramèr F, Boneh D. Slalom: Fast, verifiable and private execution of neural networks in trusted hardware [C/OL]//Proc of the 7th Int Conf on Learning Representations. Washington: ICLR, 2019 [2024-07-30]. https://openreview.net/pdf?id=rJVorjCcKQ
[200] Intel. Innovative technology for CPU based attestation and sealing [EB/OL]. (2013-08-14)[2024-07-17]. https://www.intel.com/content/www/us/en/developer/articles/technical/innovative-technology-for-cpu-based-attestation-and-sealing.html
[201] Kalapaaking A P, Khalil I, Rahman M S, et al. Blockchain-based federated learning with secure aggregation in trusted execution environment for Internet-of-things[J]. IEEE Transactions on Industrial Informatics, 2023, 19(2): 1703−1714 doi: 10.1109/TII.2022.3170348
[202] Kuznetsov E, Chen Yitao, Zhao Ming. SecureFL: Privacy preserving federated learning with SGX and trustzone [C]//Proc of the 2021 IEEE/ACM Symp on Edge Computing. Piscataway, NJ: IEEE, 2021: 55−67
[203] Li Yuepeng, Zeng Deze, Gu Lin, et al. Efficient and secure deep learning inference in trusted processor enabled edge clouds[J]. IEEE Transactions on Parallel and Distributed Systems, 2022, 33(12): 4311−4325 doi: 10.1109/TPDS.2022.3187772
[204] Law A, Leung C, Poddar R, et al. Secure collaborative training and inference for XGBoost[C]// Proc of the 2020 Workshop on Privacy-Preserving Machine Learning in Practice. New York: ACM, 2020: 21–26
[205] Juuti M, Szyller S, Marchal S, et al. PRADA: Protecting against DNN model stealing attacks[C]// Proc of the 2019 IEEE European Symp on Security and Privacy. Piscataway, NJ: IEEE, 2019: 512−527
[206] Lin Jierui, Du Min, Liu Jian. Free-riders in federated learning: Attacks and defenses [J]. arXiv preprint, arXiv: 1911.12560, 2019
[207] Xu Xinyi, Lyu Lingjuan. A reputation mechanism is all you need: Collaborative fairness and adversarial robustness in federated learning[C/OL]// Proc of the 2021 Int Workshop on Federated Learning for User Privacy and Data Confidentiality in Conjunction with ICML. New York: ACM, 2021[2024-08-16]. https://www.semanticscholar.org/reader/329734fdbb35faab89e14eb9b105a665d7a5f079
[208] Zhang Jiliang, Peng Shuang, Gao Yansong, et al. APMSA: Adversarial perturbation against model stealing attacks[J]. IEEE Transactions on Information Forensics and Security, 2023, 18: 1667−1679 doi: 10.1109/TIFS.2023.3246766
[209] Tan Jingxuan, Zhong Nan, Qian Zhenxing, et al. Deep neural network watermarking against model extraction attack[C]// Proc of the 31st ACM Int Conf on Multimedia. New York: ACM, 2023: 1588−1597
[210] Zhang Haitian, Hua Guang, Wang Xinya, et al. Categorical inference poisoning: Verifiable defense against black-box DNN model stealing without constraining surrogate data and query times[J]. IEEE Transactions on Information Forensics and Security, 2023, 18: 1473−1486 doi: 10.1109/TIFS.2023.3244107
[211] Dai Hongning, Zheng Zibin, Zhang Yan. Blockchain for Internet of things: A survey[J]. IEEE Internet of Things Journal, 2019, 6(5): 8076−8094 doi: 10.1109/JIOT.2019.2920987
[212] Biggio B, Corona I, Maiorca D, et al. Evasion attacks against machine learning at test time [J]. arXiv preprint, arXiv: 1708.06131, 2013
[213] Tang Pengfei, Wang Wenjie, Lou Jian, et al. Generating adversarial examples with distance constrained adversarial imitation networks[J]. IEEE Transactions on Dependable and Secure Computing, 2022, 19(6): 4145−4155 doi: 10.1109/TDSC.2021.3123586
[214] Bagdasaryan E, Veit A, Hua Yiqing, et al. How to backdoor federated learning[C]// Proc of the 23rd Int Conf on Artificial Intelligence and Statistics. New York: PMLR, 2020: 2938−2948
[215] Zhang Jiale, Chen Bing, Cheng Xiang, et al. PoisonGAN: Generative poisoning attacks against federated learning in edge computing systems[J]. IEEE Internet of Things Journal, 2021, 8(5): 3310−3322 doi: 10.1109/JIOT.2020.3023126
[216] Qammar A, Ding Jianguo, Ning Huansheng. Federated learning attack surface: Taxonomy, cyber defences, challenges, and future directions[J]. Artificial Intelligence Review, 2022, 55(5): 3569−3606 doi: 10.1007/s10462-021-10098-w
[217] Kim T, Singh S, Madaan N, et al. Characterizing internal evasion attacks in federated learning[C]// Proc of the 26th Int Conf on Artificial Intelligence and Statistics. New York: PMLR, 2023: 907−921
[218] Bao Hongyan, Han Yufei, Zhou Yujun, et al. Towards effcient and domain-agnostic evasion attack with high-dimensional categorical inputs[C]// Proc of the 37th AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2023: 6753−6761
[219] Demontis A, Melis M, Pintor M, et al. Why do adversarial attacks transfer? Explaining transferability of evasion and poisoning attacks[C]// Proc of the 28th USENIX Conf on Security Symp. Berkeley, CA: USENIX Association, 2019: 321–338
[220] Blanchard P, Mahdi E, Guerraoui R, et al. Machine learning with adversaries: Byzantine tolerant gradient descent[C]// Proc of the 31st Int Conf on Neural Information Processing Systems. New York: Curran Associates Inc, 2017: 118–128
[221] Lugan S, Desbordes P, Brion E, et al. Secure architectures implementing trusted coalitions for blockchained distributed learning[J]. IEEE Access, 2019, 7: 181789−181799 doi: 10.1109/ACCESS.2019.2959220
[222] Bao Hongyan, Han Yufei, Zhou Yujun, et al. Towards understanding the robustness against evasion attack on categorical data[C/OL]// Proc of the 10th Int Conf on Learning Representations. Washington: ICLR, 2022[2024-07-30]. https://openreview.net/pdf?id=BmJV7kyAmg
[223] Cao Xiaoyu, Gong N Z Q. Mitigating evasion attacks to deep neural networks via region-based classification[C]// Proc of the 33rd Annual Computer Security Applications Conf. New York: ACM, 2017: 278−287
[224] Zizzo G, Rawat A, Sinn M, et al. FAT: Federated adversarial training [J]. arXiv preprint, arXiv: 2012.01791, 2020
[225] Kumari K, Rieger P, Fereidooni H, et al. BayBFed: Bayesian backdoor defense for federated learning[C]// Proc of the 2023 IEEE Symp on Security and Privacy. Piscataway, NJ: IEEE, 2023: 737−754
[226] Cao Xiaoyu, Jia Jinyuan, Zhang Zaixi, et al. FedRecover: Recovering from poisoning attacks in federated learning using historical information[C]// Proc of the 44th IEEE Symp on Security and Privacy. Piscataway, NJ: IEEE, 2023: 1366−1383
[227] Wen Jing, Hui L C K, Yiu S M, et al. DCN: Detector-corrector network against evasion attacks on deep neural networks[C]// Proc of the 48th Annual IEEE/IFIP Int Conf on Dependable Systems and Networks Workshops. Piscataway, NJ: IEEE, 2018: 215−221
[228] Debicha I, Bauwens R, Debatty T, et al. TAD: Transfer learning-based multi-adversarial detection of evasion attacks against network intrusion detection systems[J]. Future Generation Computer Systems, 2023, 138: 185−197 doi: 10.1016/j.future.2022.08.011
[229] Lecuyer M, Atlidakis V, Geambasu R, et al. Certified robustness to adversarial examples with differential privacy[C]// Proc of the 40th IEEE Symp on Security and Privacy. Piscataway, NJ: IEEE, 2019: 656−672
[230] Byrd D, Polychroniadou A. Differentially private secure multi-party computation for federated learning in financial applications[C]// Proc of the 1st ACM Int Conf on AI in Finance. New York: ACM, 2021: Article 16
[231] Rathee D, Rathee M, Kumar N, et al. Cryptflow2: Practical 2-party secure inference[C]// Proc of the 2020 ACM SIGSAC Conf on Computer and Communications Security. New York: ACM, 2020: 325–342
[232] He Xuanli, Lyu L, Xu Qiongkai, et al. Model extraction and adversarial transferability, your BERT is vulnerable![C]// Proc of the 2021 Conf of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg, PA: ACL, 2021: 2006–2012
[233] Keskar N S, McCann B, Xiong Caiming. The thieves on Sesame Street are polyglots: Extracting multilingual models from monolingual APIs[C]// Proc of the 2020 Conf on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2020: 6203–6207
[234] Wu Huaming, Wolter K. Stochastic analysis of delayed mobile offloading in heterogeneous networks[J]. IEEE Transactions on Mobile Computing, 2018, 17(2): 461−474 doi: 10.1109/TMC.2017.2711014
[235] Tu Xuezhen, Zhu Kun, Luong N C, et al. Incentive mechanisms for federated learning: From economic and game theoretic perspective[J]. IEEE Transactions on Cognitive Communications and Networking, 2022, 8(3): 1566−1593 doi: 10.1109/TCCN.2022.3177522
[236] Liu Shaoshan, Liu Liangkai, Tang Jie, et al. Edge computing for autonomous driving: Opportunities and challenges[J]. Proceedings of the IEEE, 2019, 107(8): 1697−1716 doi: 10.1109/JPROC.2019.2915983
[237] Li Yuanchun, Wen Hao, Wang Weijun, et al. Personal LLM agents: Insights and survey about the capability, efficiency and security [J]. arXiv preprint, arXiv: 2401.05459, 2024
[238] Yu Sixing, Muñoz J P, Jannesari A. Federated foundation models: Privacy-preserving and collaborative learning for large models [J]. arXiv preprint, arXiv: 2305.11414, 2023