-
摘要:
从智能手机、智能手表等小型终端智能设备,到智能家居、智能网联车等大型应用,再到智慧生活、智慧农业等,人工智能已经逐渐步入人们的生活,改变传统的生活方式. 各种各样的智能设备会产生海量的数据,传统的云计算模式已无法适应新的环境. 边缘计算在靠近数据源的边缘侧实现对数据的处理,可以有效降低数据传输时延,减轻网络传输带宽压力,提高数据隐私安全等. 在边缘计算架构上搭建人工智能模型,进行模型的训练和推理,实现边缘的智能化,对于当前社会至关重要. 由此产生的新的跨学科领域——边缘智能(edge intelligence,EI),开始引起了广泛的关注. 全面调研了边缘智能相关研究:首先,介绍了边缘计算、人工智能的基础知识,并引出了边缘智能产生的背景、动机及挑战. 其次,分别从边缘智能所要解决的问题、边缘智能模型研究以及边缘智能算法优化3个角度对边缘智能相关技术研究展开讨论. 然后,介绍边缘智能中典型的安全问题. 最后,从智慧工业、智慧生活及智慧农业3个层面阐述其应用,并展望了边缘智能未来的发展方向和前景.
Abstract: From smart terminal devices such as smart phones and smart watches, to large-scale intelligent applications such as smart homes, the Internet of vehicles, intelligent life and intelligent agriculture, artificial intelligence (AI) has gradually entered and changed people's lives. In this context, various intelligent devices produce massive amounts of data, making the traditional cloud computing paradigm unable to cope with this unprecedented challenge. Edge computing, which processes data at the edge of the network, has great potential to reduce latency and bandwidth pressure as well as protect data privacy and security. Building AI models upon the edge computing architecture, training and inferring these models, and realizing intelligence at the edge are crucial to today's society. As a result, a new interdisciplinary field, edge intelligence (EI), has begun to attract widespread attention. We make a comprehensive study of EI. Specifically, we first introduce the basic knowledge of edge computing and AI, which leads to the background, motivation and challenges of EI. Secondly, research on EI-related technologies is discussed from three aspects, namely the problems to be solved, the models, and the algorithm optimization. Then, typical security problems in EI are introduced. Next, the applications of EI are described from the three aspects of intelligent industry, intelligent life and intelligent agriculture. Finally, we discuss the future directions and prospects of EI.
-
物联网、大数据、边缘计算、人工智能等新一代信息技术飞速发展,为智能交通系统的实现提供了技术支持. 智能交通系统(intelligent traffic system,ITS)是一种综合运用多种先进技术的交通运输管理系统,用于营造安全、高效、环保的交通环境. 智能交通信号控制是智能交通系统的核心,提供动态更新、综合计算、实时决策等功能.
近年来,物联网技术的研究取得突破性进展,也推动智能交通信号控制的广泛应用. 基于物联网技术实现对交通环境的全方位感知,云计算技术为海量数据提供计算服务,以数据为中心进行决策[1],具有实时精准的特性. 然而,采用云计算技术难以满足大规模场景下信号控制器低时延、高响应、实时计算的需求. 边缘计算技术将云计算能力从中心下沉到边缘节点,形成端—边—云一体化协同计算系统,实现就近实时计算,更加满足信号控制系统高实时性要求.
与此同时,对交通信号优化控制问题的研究也从未停止,采用整数规划、群体智能方法、传统机器学习方法等传统优化方法寻求最优控制方案的研究已取得一定成果. 强化学习[2](reinforcement learning,RL)在解决连续决策问题上表现优异,已被证明适用于交通场景问题[3],在解决大规模交通信号协同控制问题上发挥着越来越重要的作用.
强化学习通过智能体试错的方式探索环境,并根据探索环境得到的经验自学习建立最优行为策略模型,最大化累计奖励. 当环境中智能体数量增加时,每个智能体单独进行环境探索并学习;从单个智能体的角度来看,环境呈现非平稳性,算法难以收敛. 目前的研究中多智能体协同大多采用同步决策机制,即统一时钟频率,以固定的决策周期进行决策.
在实际场景中,由于交叉口地理位置、交通管制要求以及功能的不同,车流通过交叉口的时间往往具有很大差异. 同步决策方式导致交通信号绿灯利用率较低,交叉口通行服务质量下降. 如图1所示,在时刻t交叉口i进行动作决策并切换交通灯相位. 在t+Δt时,交叉口i可通行车道(东西方向车道)已无等待车辆,但仍然持有通行权(绿灯空放现象). 由于未到约定好的动作决策周期,导致其他车道无法竞争通行权, 从而造成一部分绿灯时间损失,交通信号利用率降低.
在本研究中设计一种基于端—边—云协同的交通信号控制架构,并将异步通信与交通信号自适应控制相结合,提出一种多智能体之间可以使用不同决策周期的异步决策机制,降低绿灯损失时间,提高交叉口时间利用率.
本文的主要贡献包括3个方面:
1)针对集中控制系统高时延、低效率这一问题,提出一种基于端—边—云的交通信号分布式控制架构,通过在边缘节点进行数据预处理、在端节点进行决策的方式减少传输时延.
2)针对同步决策导致交叉口时间利用率低的问题,设计一种基于异步决策的交通信号优化机制. 智能体根据交叉口车辆等待时间动态更新决策周期,增加单个交叉口的有效绿灯时间,避免交叉口绿灯空放现象.
3)针对强化学习智能体之间实时通信受限的问题,提出一种基于邻居信息库的多智能体协作交通信号自适应协调方法,缓解因异步决策产生的智能体之间信息不平衡问题,从而提升多智能体协同效率.
1. 相关工作
边缘计算为智慧交通的建设提供了高效的分布式计算解决方案,该方案构建计算、存储、决策一体化的边缘开放平台,为交通信号控制系统提供一种新型计算模式[4]. 在大规模路网的交通信号控制研究中,文献[5]提出一种为每个交叉口控制智能体分配对应边缘学习平台、在协作时仅考虑直接相连邻居信息的方法. 这种分散协作方式虽具有一定的成本效益,但难以扩展到大规模路网.
实际交通信号控制应用场景存在环境建模困难的问题,基于数据驱动的无模型强化学习方法可以在探索中自主学习,实现控制的闭环反馈. 独立学习的单智能体之间不进行沟通与协作,每个智能体只能感知自己控制范围内的状态,以局部Q值最大化为优化目标;但当周围环境变得复杂时,不考虑上下游智能体决策带来的非平稳性影响将导致自身学习无法收敛. 基于通信的多智能体联合学习通常采用集中式控制[6],以最大化所有区域智能体联合动作对应的Q值为目标. 然而全局智能体所需处理的数据量庞大,现有计算能力难以实时处理,集中式控制方式的弊端逐渐暴露出来,因此有学者提出分散式多智能体控制方式. 在没有掌握全局信息的统领者的情况下,使用协作图[7–8]简化多个智能体之间的关系,或采用博弈论[9–11]求解智能体之间的联合决策问题是较为常用的办法. 文献[12]提出一种完全可扩展的去中心化多智能体强化学习(multi-agent reinforcement learning,MARL)方法,将其他智能体的策略以广播方式告知环境中的其他智能体,并应用空间折扣因子缩小距离较远智能体带来的影响. 除此之外,MADDPG[13],APE-X DQN[14],AC[15],A2C[16]等其他MARL方法应用于多路口场景也被证明是可行的. 将多智能体协作问题转换为图同样被广泛研究,如将MARL与GAN[17]、图卷积[18–20]等方法结合.
交通信号控制系统中关于异步的研究集中在降低数据相关性方面. 文献[21]基于并行强化学习范式采用异步梯度下降优化神经网络参数,提高资源利用率,提升训练速度. 文献[22]提出一种异步协同信号框架,信号控制器根据并行方式异步共享的相邻信息进行决策,该框架能够提高实际控制的稳定性,但要求所有控制器必须同步进行决策. 文献[23]提出一种异步多步Q-Learning方法,该方法采样多个步骤后进行估值,降低因估计造成的误差,并利用多核CPU并行模拟多个代理与环境进行交互的过程,异步更新全局参数.
在关于多智能体协同的研究中可以发现,同一环境下的智能体若要直接通信,往往需要同步决策才能实现同步通信. 本研究采用间接通信方式,借助边缘节点存储的邻居信息库实现智能体之间的通信,不再要求智能体同步决策. 异步决策方式能够提高智能体之间的通信效率,优化交通信号配时方案,降低车辆在交叉口的等待时间.
2. 基于端—边—云的交通信号控制架构
本文以常见十字交叉口场景为例,每个交叉口内安装多种信息采集装置;由m个十字交叉口构成的路网中分布着n个边缘服务器以及1个中心云服务器,在此基础上提出交通信号分层协同控制架构[24].
如图2所示,在单个十字交叉口中布设多种智能终端传感设备,如网联车、交通信号控制器、摄像头和传感器等. 这些终端设备用于感知环境信息,并向边缘服务器节点传输环境数据.
根据具体交通需求将m个交叉口划分为n个区域,缩小交通信号控制器控制范围. 每个区域由对应的边缘服务器进行管理,负责初步处理多源异构的感知数据、小规模的智能分析,以及提供存储与决策相关的服务. 此外,边缘节点还需要维护一个小型邻居信息库(参见3.2.1节),用于降低决策时的通信延迟,提升智能体之间的合作效率.
在中心云服务层,云节点核心控制程序从全局角度进行资源调度和决策,存储并维护路网整体的邻居信息库,接收边缘节点定时上传的数据进行更新与深入分析.
3. 智能交通信号协调方法
在所提出的端—边—云协同交通信号控制架构上,面向多交叉口交通信号控制场景,构建强化学习控制模型,提出一种基于边缘计算的异步决策多智能体交通信号自适应协调方法(adaptive coordination method,ADM),该方法包括交通信号配时优化机制和基于异步决策的多智能体交通信号自适应协调算法. 3.1节重点描述决策周期计算方法;由于系统中多个智能体采取不同的决策周期,相互之间的通信方式是需要研究的重点,因此3.2节提出基于邻居信息库的多智能体协作机制,并给出智能体的定义以及学习过程.
3.1 交通信号配时优化机制
根据车辆跟驰方式,车流可划分为饱和连续车流(包含首车及后续连续车流)和非饱和车流. 受信号灯控制,当首车状态发生改变后,在停车线前排队等候的车辆依次发生状态改变,形成交通流,并以一定的传播速度向后传播. 能够与前车形成连续不间断车流的部分称为饱和连续车流,包含绿灯亮起时已排队的车辆以及放行时到达的车辆,后加入队列的车辆作为队尾进行研究. 后续到达车辆无法与前车构成连续车流时称为非饱和车流,此时车头时距较大,由车辆到达率决定;因不受前车速度制约,非饱和车流以自由流速度通过交叉口. 通常情况下,在最长绿灯时间允许范围内,最后一辆车驶离停车线后切换信号相位;然而,实际情况中非饱和车流通行的不确定性会导致通行时间被浪费. 为了贴合实际场景中动态变化的交通流,提供更优的交通信号配时方案,ADM方法基于车辆跟驰理论针对不同交叉口状态实时调整绿信比.
交通模式划分为相对模式(C1)、相邻模式(C2)和汇聚模式(C3),每个模式中含有4个相位,每个相位默认绿灯时间为tg,默认黄灯时间为ty,信号默认周期ta是默认绿灯和黄灯时间之和,如式(1)所示:
ta=tg+ty. (1)
根据不同阶段的车头时距,将实际信号周期ta′的计算分为4个部分.
1)首车启动及饱和跟驰阶段t1,如式(2)所示:
t_1=\begin{cases} 0, & carN=0, \\ t_{\rm{a}}/3, & carN\neq 0,\ waitN=0, \\ d/v, & carN\neq 0,\ waitN\neq 0, \end{cases} (2)
其中carN表示具有通行权车道上的车辆数,waitN表示车道上实际停车数,当车速小于0.1 m/s时视为车辆处于等待状态,d是饱和连续车流末尾车辆所在位置到停车线的距离,v是饱和连续车流正常通行情况下的平均速度估计值.
2)非饱和跟驰阶段t2,如式(3)所示:
t_2=\begin{cases} 0, & runN=0, \\ d_{\rm{u}}/v_{\rm{u}}, & runN\neq 0, \end{cases} (3)
其中runN表示具有通行权车道上正在行驶的车辆数,du为非饱和车流末尾车辆所在位置到停车线的距离,vu为非饱和车流继续通行时的平均行驶速度.
3)当饱和跟驰阶段执行完毕后,再次观察交通环境并计算除当前车道外其他车道的饱和连续通行时间t3,并判断当前交叉口竞争状态.
①如果t3<t2,交叉口处于弱竞争状态,不需要切换动作;
②如果t3≥t2,交叉口处于强竞争状态,需要根据邻居信息切换新动作. 根据3.2.3节描述的协调机制,重新选择新动作并执行.
4)黄灯实际执行时间ty′,如式(4)所示:
ty′=max(⋯). (4)
修正后的实际相位周期时间ta′为这4部分之和,对应智能体的实际动作执行时间与默认动作执行时间存在一定差异,整个系统中的智能体难以实现同步决策. 因此,ADM方法引入异步概念,允许智能体根据交通环境情况适当调整自身绿信比:当前相位执行完毕后,无需等待与其他智能体时钟频率同步的时刻,可以直接决策并执行新动作.
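为便于理解上述决策周期的计算流程,下面给出一个按式(2)(3)估算t1、t2的Python示意片段(仅为示例草图,函数与变量命名为说明所作的假设,并非论文原始实现;其中ta/3按式(2)的理解给出):

```python
def calc_decision_time(carN, waitN, runN, d, v, d_u, v_u, t_a):
    """按式(2)(3)估算首车启动/饱和跟驰阶段t1与非饱和跟驰阶段t2(示意实现)."""
    # 式(2):饱和连续车流的放行时间
    if carN == 0:
        t1 = 0.0            # 车道上无车辆,不需要放行时间
    elif waitN == 0:
        t1 = t_a / 3        # 有车但无停车,取默认周期的1/3(假设按原文t_a/3理解)
    else:
        t1 = d / v          # 队尾车辆到停车线距离 / 饱和车流平均速度
    # 式(3):非饱和车流的放行时间
    t2 = 0.0 if runN == 0 else d_u / v_u
    return t1, t2

# 用法示例:实际相位周期近似为 t1 + t2 + t3 + ty′(t3、ty′按3.1节第3)、4)步另行计算)
t1, t2 = calc_decision_time(carN=8, waitN=5, runN=3, d=40.0, v=8.0, d_u=90.0, v_u=11.0, t_a=36)
```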
3.2 多智能体交通信号自适应协调算法
3.2.1 基于邻居信息库的协调机制
考虑到异步决策机制会降低多智能体之间的通信效率这一问题,ADM算法提出在云节点维护整体路网的邻居信息库,边缘节点维护与其目标节点相关的邻居信息库,并按一定周期将数据同步更新给云节点.
智能体在决策时仅参考与目标交叉口相邻接的交叉口状态信息,并将自身新决策发送给对应边缘节点更新. 邻居信息库中存储交叉口之间的邻接信息、每个交叉口的决策时间、决策结果以及持续时间. 当交叉口控制智能体i决策时,向其对应的边缘服务器发送数据请求;边缘服务器根据交叉口间邻接关系,将其邻接交叉口集合Ji的最新决策信息返回给智能体i;智能体i与邻居协调决策(协调策略详细描述见3.2.3节)后,将自己的最新决策再次发送给边缘服务器,用于更新存储在边缘节点的局部信息库. 边缘服务器集群定期向云服务器同步信息,供云服务器训练模型;云服务器完成训练后,将最新模型参数下发给边缘服务器进行更新.
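作为对上述交互流程的直观说明,下面给出邻居信息库在边缘节点侧的一个极简Python示意实现(NeighborInfoBase及其方法名均为假设,仅用于演示"决策前查询邻居、决策后回写"的过程,实际系统可采用数据库或RSU本地存储实现):

```python
import time

class NeighborInfoBase:
    """边缘节点维护的局部邻居信息库:交叉口邻接关系 + 各交叉口最新决策(示意实现)."""
    def __init__(self, adjacency):
        self.adjacency = adjacency      # {交叉口id: [邻接交叉口id, ...]}
        self.latest = {}                # {交叉口id: (决策时间, 动作, 持续时间)}

    def update(self, inter_id, action, duration):
        # 智能体决策后上报,更新该交叉口的最新决策记录
        self.latest[inter_id] = (time.time(), action, duration)

    def query_neighbors(self, inter_id):
        # 返回与目标交叉口邻接的交叉口集合Ji的最新决策信息
        return {j: self.latest.get(j) for j in self.adjacency.get(inter_id, [])}

# 用法示例:交叉口1决策前查询邻居,决策后回写
base = NeighborInfoBase(adjacency={1: [2, 3], 2: [1], 3: [1]})
base.update(2, action="EWs", duration=33)
neighbor_info = base.query_neighbors(1)   # {2: (..., 'EWs', 33), 3: None}
base.update(1, action="NSs", duration=28)
```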
3.2.2 模型设置
根据强化学习理论,可以将控制过程建模为马尔可夫决策过程(MDP),使用五元组表示(O, A, R, α, γ). 其中O表示状态空间向量,A表示动作空间向量,R表示奖励函数R(o,a):O×A→R,α为智能体的学习率,γ为折扣因子. 控制过程的根本原理是通过试错的方式探索环境,即在智能体执行动作后,环境根据执行该动作产生的效果给予奖励,如果获得较好奖励,表明在当前状态执行该动作较为合适,可以增加该动作的出现概率. 智能体根据探索环境得到的经验进行自身学习,主要学习任务是行为策略,目标是在环境中最大化累计奖励. 要素的具体定义有3方面:
1)状态空间
根据3.1节中划分的3组交通模式,智能体观测空间由3组交通模式下共计12种车流的状态向量构成,即O=(S1,S2,…,S12). 其中Si(1≤i≤12)表示第i种车流的状态,由最长连续等待车流f和与f间隔最小的预计到达车流f′的估计停车等待时间Tw表示,如式(5)所示.
Tw=waitN′×tw, (5)
waitN′=waitN+runN×e, (6)
其中waitN′是车道上估计停车数,waitN是车道上实际停车数,tw是车道上单位车辆等待时间,e是车道上车辆行驶状态不均衡系数,e的计算公式如式(7)所示:
e=I′/(I′+I), (7)
其中I′是车流在理想行驶状态与实际行驶状态下该统计分布的面积之差,I是车流实际状态下该统计分布的面积,理想行驶状态指车流内部以可协调的最大速度同速行驶.
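按式(5)~(7),单条车道的状态特征可按如下方式估算(示意代码:分布面积I、I′在此直接作为输入给出,函数与变量命名均为假设):

```python
def lane_state(waitN, runN, t_w, I_diff, I_actual):
    """按式(5)~(7)计算车道不均衡系数e、估计停车数waitN′与估计等待时间Tw(示意实现)."""
    e = I_diff / (I_diff + I_actual)    # 式(7):行驶状态不均衡系数
    waitN_est = waitN + runN * e        # 式(6):估计停车数
    T_w = waitN_est * t_w               # 式(5):估计停车等待时间
    return T_w, waitN_est, e

# 用法示例
T_w, waitN_est, e = lane_state(waitN=6, runN=4, t_w=2.5, I_diff=12.0, I_actual=30.0)
```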
2)动作空间
本文将动作定义为在预定义的相位方案中选择需要切换到的相位. 动作空间A=(C1,C2,C3),根据交通模式划分为3组:C1={NSs,EWs,NSl,EWl},C2={Wsl,Ssl,Esl,Nsl},C3={WsNl,SsWl,EsSl,NsEl},共计12种动作. N,S,W,E分别表示北向、南向、西向、东向,下标s和l分别表示直行和左转. 出于安全性考虑,每个动作执行后均默认执行一个对应的黄灯过渡相位. 由于右转车流不受交通信号控制,因此在相位方案中省去对右转车辆的指示,默认其一直处于绿灯状态.
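在实现中,上述12种动作可用常量集合直接表示,便于按交通模式索引(以下定义仅为示意,相位名称沿用正文记号):

```python
# 3组交通模式下的12个相位动作:N/S/W/E为方向,下标s为直行,l为左转
C1 = ["NSs", "EWs", "NSl", "EWl"]        # 相对模式
C2 = ["Wsl", "Ssl", "Esl", "Nsl"]        # 相邻模式
C3 = ["WsNl", "SsWl", "EsSl", "NsEl"]    # 汇聚模式
ACTION_SPACE = C1 + C2 + C3              # 完整动作空间,共12种动作
YELLOW_TIME = 3                          # 每个动作执行后默认衔接的黄灯过渡时间/s
```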
3)奖励函数
最大化累计奖励是强化学习算法的优化目标,奖励函数的设置需要能够准确反馈动作执行带来的影响. 本文中奖励函数R的定义如式(8)所示:
R(o,a)=Hw×(1−ē), (8)
其中ē是路口整体车流状态不均衡系数,取路口直行和左转车道上行驶车辆状态不均衡系数e的平均值;Hw是执行动作a后路口拥堵状态持续加剧程度的估计值,反映执行绿灯相位对路口拥堵状态变化的影响,计算公式如式(9)所示:
Hw=waitN/waitN′. (9)
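式(8)(9)的奖励计算可直接实现如下(示意代码,ē取路口直行与左转车道e的平均值,函数与变量命名为假设):

```python
def reward(waitN, waitN_est, lane_e_list):
    """按式(8)(9)计算执行动作后的奖励(示意实现)."""
    H_w = waitN / waitN_est if waitN_est > 0 else 0.0   # 式(9):拥堵持续加剧程度估计
    e_bar = sum(lane_e_list) / len(lane_e_list)          # 路口整体车流状态不均衡系数ē
    return H_w * (1 - e_bar)                             # 式(8)

# 用法示例
r = reward(waitN=5, waitN_est=8.0, lane_e_list=[0.3, 0.25, 0.4, 0.35])
```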
3.2.3 基于多智能体的自适应控制算法
多个智能体在环境中需要相互协调以获得最大累计奖励. 智能体在充分考虑与目标交叉口邻接的交叉口交通状态的前提下,根据道路实际通行情况和交通信号控制器的选择结果进行决策投票,在强竞争场景下控制车流传输速度,尽量降低上游路口对下游路口的负面影响.
具体而言,智能体根据观察到的目标交叉口环境状态信息,以ε-greedy动作选择策略选取动作a1;从邻居经验库中获取目标交叉口邻接交叉口的信息,计算得到协同后建议采取的动作a2;当a1≠a2时,表示与邻居协同失败,需要重新选择动作:根据交叉口估计等待时间最长的车道需要优先疏通这一原则对车道设置优先级,从动作a1所属交通模式的相位集合中选择具有最高优先级的车道赋予通行权,即动作a3. 从动作候选集合{a1,a2,a3}中选择最终动作后得到对应的默认执行周期ta,再根据3.1节计算智能体的实际执行周期ta′. 每次决策后都要将决策结果发送给附近边缘节点,智能体通过自适应学习以及与邻居共享经验不断优化,提高协调控制的效果,具体如算法1所示.
算法1. 基于多智能体异步协作的信号优化算法.
输入:学习率α,折扣因子γ,搜索概率ε,最大仿真步数T,交叉口集合J,邻居经验库B;
输出:最优执行动作序列A*.
① 初始化 ot←getObservation(),t←0; /*初始化状态和时间*/
② for t=1,2, … ,T do
③ for j=1,2, … ,J do
④ if at,j,1 = at,j,2
⑤ at,j = at,j,1;
⑥ else at,j = at,j,3; /*智能体根据邻居信息采用投票策略独立进行决策*/
⑦ end if
⑧ t1,t2,t3←calDecisionTime();
⑨ if t2≥t3 /*判断交叉口状态*/
⑩ break;
⑪ end if
⑫ rt = execute(at,j,t1,t2,t3);
⑬ Q_j(o_{t,j}, a_{t,j}) ← (1−α)×Q_j(o_{t,j}, a_{t,j}) + α[γ×Q_j(o_{t+1,j}, a*) + R(o_{t,j}, a_{t,j})]; /*更新Q-table*/
⑭ ot+1,j←getObservation();
⑮ end for
⑯ end for
⑰ return {a1,0*,a1,1*,…,a1,J*,…,aT,0*,aT,1*,…,aT,J*}.
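算法1中单个智能体的"投票式动作选择 + Q表更新"可用如下Python草图示意(假设性实现:a2、a3分别作为邻居建议动作与按优先级重选的动作直接传入,动作选择逻辑按3.2.3节的文字描述给出,并非论文原始代码):

```python
import random
from collections import defaultdict

ACTIONS = ["NSs", "EWs", "NSl", "EWl"]   # 以相对模式C1为例的候选动作

def choose_action(q_table, obs, a2, a3, eps=0.1):
    """投票式动作选择:a1来自ε-greedy,a2为邻居建议动作,a3为按车道优先级重选的动作."""
    if random.random() < eps:
        a1 = random.choice(ACTIONS)                          # 以概率ε随机探索
    else:
        a1 = max(ACTIONS, key=lambda a: q_table[(obs, a)])   # 否则取Q值最大的动作
    # 与3.2.3节的描述一致:a1与邻居建议a2一致则执行a1,否则按优先级重选为a3
    return a1 if a1 == a2 else a3

def update_q(q_table, obs, action, reward, next_obs, alpha=0.1, gamma=0.9):
    """对应算法1第⑬步的Q表更新."""
    best_next = max(q_table[(next_obs, a)] for a in ACTIONS)
    q_table[(obs, action)] = (1 - alpha) * q_table[(obs, action)] + \
        alpha * (reward + gamma * best_next)

# 用法示例(状态以元组近似表示,奖励由式(8)计算得到)
q = defaultdict(float)
a = choose_action(q, obs=("o1",), a2="EWs", a3="NSs")
update_q(q, obs=("o1",), action=a, reward=0.6, next_obs=("o2",))
```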
4. 实验结果与分析
4.1 仿真实验设置
为了评估所提出的ADM方法,在阿里云服务器上实现云服务核心控制程序、构建全局邻居信息库及相关操作API. 基于RSU设备实现数据预处理、控制决策、区域邻居信息库创建及更新的程序. 在交通仿真软件SUMO中对多交叉口仿真环境进行建模,在SUMO中搭建的路网如图3所示.
ADM方法基于Q学习方法,经过多次实验调整后对方法和道路相关参数设置如表1所示.
表 1 主要参数列表
Table 1. Major Parameter List
参数 | 取值
学习率α | 0.1
折扣因子γ | 0.9
搜索概率ε | 0.1
最大训练轮次 | 100
最大仿真步数 | 7200
信息库更新周期/s | 1
道路长度/m | 300
车道最大车速/(km·h−1) | 40
最大加速度/(m·s−2) | 2
最大减速度/(m·s−2) | 4.5
最小车间距/m | 2
默认直行绿灯时间/s | 33
默认左转绿灯时间/s | 25
默认黄灯时间/s | 3
实验中仿真车流数据使用济南市某交叉口的实际数据,数据来自交叉口附近布设的监控摄像头,每个交叉口具有相对完整的记录. 数据集中的信息包括地理位置信息、车辆到达时间及其他信息,对这些信息进行处理后生成与仿真环境匹配的路由文件. 加载路网和车辆路由文件后,使用Python语言编程实现ADM方法,借助Traci接口与仿真环境进行交互获取数据.
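与SUMO的交互可通过traci接口完成,下面给出一个读取车道停车数并切换信号相位的最小示例(假设已安装SUMO,配置文件名net.sumocfg、车道id与相位索引均为示例,仅用于说明交互方式):

```python
import traci

# 启动SUMO仿真(如需图形界面可将"sumo"换成"sumo-gui")
traci.start(["sumo", "-c", "net.sumocfg"])

for step in range(7200):                                     # 与表1中的最大仿真步数一致
    traci.simulationStep()                                    # 推进一个仿真步
    halted = traci.lane.getLastStepHaltingNumber("edge1_0")   # 车道上停车数(车速<0.1 m/s)
    if halted > 10:
        traci.trafficlight.setPhase("tls_1", 2)               # 示例:为拥堵方向切换到相位2

traci.close()
```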
4.2 对比实验及评价指标
ADM方法将与2种方法进行对比.
1)传统固定配时法(fixed time, FT). 按照默认相位方案和信号周期顺序执行. 默认相位方案为{NSs,EWs,NSl,EWl,Wsl,Ssl,Esl,Nsl,WsNl,SsWl,EsSl,NsEl},默认直行绿灯时长为33 s,左转绿灯时长为25 s,黄灯时长为3 s.
2)基于Q学习的独立交通信号自适应控制方法IQA(independent Q-learning decision algorithm). 智能体之间无协同,根据自身信息进行动作选择,并采用同步决策方式.
评价指标包括:路口平均等待车辆数、路口车辆等待时间、路口最大排队长度.
4.3 实验结果
1)控制有效性分析. 在4800 s的仿真实验中,路网中车流量总数约为3000辆. 实验中2个交叉口车流量经优化控制后随时间的变化如图4和图5所示,可以看出2个交叉口车流量均呈先增后减的趋势. 受路网通行能力的限制,单位时间内可通行车辆数基本恒定,流量波动大时代表交叉口通行效率不稳定;当流量小时,表示交通畅通或出现绿灯空放现象;当流量大时,表示交通缓慢或已经拥堵.
结合图4和图5可以发现,在FT方法中,相位执行顺序和时长恒定不变,在整个仿真过程中车流量波动较小,对车流具有一定的疏通作用. 而不具有协调机制的IQA方法独立决策,不考虑相邻交叉口的情况,当路网中流量增大时,交叉口1和交叉口2因缺乏相互协调造成车流量大幅波动:最高峰时交叉口1中有近130辆车在行驶或等待,比同时刻其他2种方法高出近1倍;同时,交叉口2的车流量明显低于其他2种方法. 这表明相邻交叉口之间的协调控制能够有效减少独立控制方式的盲目判断,从而预防大量车辆拥堵现象的产生;最大限度地减少车辆停车次数对提高路网通行能力有明显作用. 本文提出的ADM方法与FT方法的波动趋势大致相同,但ADM方法整体上低于FT方法对应的曲线,这表明采用动态信号决策周期能够有效提升信控优化效率;对于突然大量增加的车流量,ADM方法也能及时疏导,避免在交叉口造成拥堵,展现出自适应学习能力和实时决策能力.
2)平均等待长度和平均等待车辆数对比分析. 在仿真过程中对2个交叉口的平均等待长度进行记录,并计算不同方法的平均值,如表2所示. 固定配时方法的平均等待长度和平均等待车辆数这2项指标均较高,这表示车辆在交叉口聚集时间过长,产生拥堵现象,但该方法由于不具有自适应性而无法调节. 无协同的IQA方法优化效果有限;经过分析可知,当发生拥堵时IQA能够根据环境变化对相位进行灵活调整,因此控制效果优于固定配时方法. 图6展示仿真过程中不同方法控制下平均等待车辆数的变化,从图6中可以看出,ADM方法在运行过程中整体调节效果较好,能够从长远角度进行决策,尽量避免拥堵情况的发生,降低平均等待车辆数. 图7为仿真过程中不同等待车辆数出现的频次,可以发现,在ADM方法的调控下,平均等待40辆车甚至更多的事件发生频率明显少于其他2种方法,这表明ADM方法能够有效避免拥堵情况的发生.
表 2 交叉口平均等待车辆数
Table 2. Average Waiting Car Numbers at Intersections
方法 | 交叉口1车辆数 | 交叉口2车辆数
ADM(本文) | 5.79 | 4.23
IQA | 6.17 | 5.68
FT | 8.81 | 8.63
3)累计等待时间对比分析. 如图8所示,ADM方法相较于其他方法对路口整体车辆等待通行时间的控制效果更好,可以较稳定地将路口车辆的等待时间控制在较小范围内波动;同时,ADM方法的累计等待时间更短,收敛速度也比其他2种方法更快.
5. 结 论
本文提出一种异步决策的多智能体交通信号自适应协调方法,该方法基于边缘计算技术实现,适用于大规模路网分布式控制场景. 基于本文提出的端—边—云架构,实现了使用多种物联网终端设备采集环境信息、在边缘进行小规模计算及决策、在云端部署存储设备并进行全局计算和管理. 此外,针对同步决策中绿灯有效时间短的问题,本文将异步机制引入多智能体协调决策中,并提出采用邻居信息库解决多智能体通信效率低的问题,在实验中验证了本文所提方法的有效性.
未来拟进行的研究工作包括:考虑在不同拓扑结构的路网中使用智能体协同决策机制[14],以及基于分布式多层端—边—云架构的智能交通控制系统的设计,进一步研究部分网联车环境下实时交通信号优化控制方法,以及进行流量预测和行驶路线规划.
作者贡献声明:高涵设计实验方案和验证实验,并撰写论文;罗娟提出研究思路,对论文模型方法提出指导意见;蔡乾雅负责完成对比实验;郑燕柳对论文进行修改和完善.
-
表 1 云计算、边缘计算和边缘智能特点对比
Table 1 Features Comparison of Cloud Computing, Edge Computing, and Edge Intelligence
类别 | 云计算 | 边缘计算 | 边缘智能
架构模型 | 集中式 | 分布式 | 分布式
服务器位置 | 互联网中 | 边缘网络中 | 云—边—端协同网络
目标应用 | 互联网应用 | 物联网或移动应用 | 各种智能应用程序
服务类型 | 全球信息服务 | 有限的本地化信息服务 | 低延时、高可靠的智能服务
设备数量 | 数百亿 | 几千万甚至几亿 | 数百亿甚至上千亿
研究重点 | 工作流调度、虚拟机管理等 | 计算卸载、缓存、资源分配等 | 在边缘侧利用AI实现数据收集、缓存、处理和分析
表 2 智能的边缘计算和边缘的智能化特点对比
Table 2 Features Comparison of Intelligent Edge Computing and Edge Intelligence
类别 | 云/边/端 | 智能的边缘计算 | 边缘的智能化
结构层面 | 云 | 服务器集群 | 服务器集群
结构层面 | 边缘 | 基准站、边缘节点 | 智能化服务
结构层面 | 终端 | 终端设备 | 智能化应用
内容层面 | — | 利用AI技术解决边缘计算相关问题 | 实现边缘环境中应用的智能化
表 3 智能的边缘计算相关工作分类
Table 3 Related Work Classification for Intelligent Edge Computing
关键技术 | 适用场景 | 问题挑战 | 优化目标 | 相应算法 | 数据来源
计算卸载 | 车联网 | 高度动态的车辆拓扑结构 | 优化卸载决策和带宽/计算资源分配 | 深度学习 | 文献[56]
计算卸载 | 无人机 | 无人机与终端用户之间的计算和信道容量有限 | 最小化延迟和能耗 | 深度学习 | 文献[57]
计算卸载 | 设备到设备卸载 | 数据卸载过程效果不稳定 | 优化用户体验 | 强化学习 | 文献[58]
计算卸载 | 物联网设备 | 嵌入式设备处理能力及资源受限 | 降低DNN推理的总延迟 | 深度学习 | 文献[59]
计算卸载 | 物联网设备 | 设备之间的对抗性竞争;低延时通信约束 | 最小化延迟和通信成本 | 深度强化学习 | 文献[60]
资源分配 | 多用户资源约束条件 | 单个边缘服务器资源受限 | 最小化延迟、提高系统实时性 | 深度学习 | 文献[61]
资源分配 | 移动设备 | 边缘计算环境复杂多变 | 保持MEC架构在不同条件下的稳定性 | 深度强化学习 | 文献[46]
资源分配 | 车辆边缘计算网络 | 车辆动态变化 | 最大化车辆边缘计算网络的长期效用 | 深度强化学习 | 文献[62]
资源分配 | 工业物联网 | 频谱资源有限;电池容量受限 | 最大化长期吞吐量 | 深度学习 | 文献[63]
资源分配 | 无线网络 | 框架节点之间达成共识的同时保证系统的性能 | 最大化系统吞吐量和用户的服务质量 | 深度强化学习 | 文献[64]
边缘缓存 | 边缘计算系统 | 无线信道的拥塞 | 最小化系统成本消耗、系统性能最优 | 深度强化学习 | 文献[36]
边缘缓存 | 车联网 | 车辆移动性 | 最大化系统效用 | 深度强化学习 | 文献[65]
边缘缓存 | 车联网 | 动态网络拓扑;存储容量和带宽资源有限 | 最小化系统成本和延迟 | 深度强化学习 | 文献[66]
边缘缓存 | 车联网 | 主动缓存的时间变化性 | 提高模型性能预测准确率 | 深度学习 | 文献[67]
边缘缓存 | 车联网 | 车辆的高移动性 | 最大限度地降低能耗 | 深度强化学习 | 文献[68]
表 4 OpenEI和Edgent的特点对比
Table 4 Features Comparison of OpenEI and Edgent
类别 | OpenEI | Edgent
可部署的硬件环境 | 树莓派和集群计算机 | 树莓派和台式机
适用环境 | 各种操作系统 | 静态和动态网络
优化目标 | 最大化模型准确率 | 最小化延迟
特点 | 易于安装、可跨平台使用 | 超低延时、超高稳定性及可靠性
功能 | 为边缘提供智能处理和数据共享功能 | 按需DNN协作推理
-
[1] Cisco U. Cisco annual internet report, 2018—2023, white paper [EB/OL]. 2020[2022-02-19]. https://www.cisco.com/c/en/us/solutions/collateral/executive-perspectives/annual-internet-report/white-paper-c11-741490.html
[2] Shepherd J, Burian S. Detection of urban-induced rainfall anomalies in a major coastal city[J]. Earth Interactions, 2003, 7(4): 1−17 doi: 10.1175/1087-3562(2003)007<0001:DOUIRA>2.0.CO;2
[3] 施巍松,孙辉,曹杰,等. 边缘计算:万物互联时代新型计算模型[J]. 计算机研究与发展,2017,54(5):907−924 doi: 10.7544/issn1000-1239.2017.20160941 Shi Weisong, Sun Hui, Cao Jie, et al. Edge computing—An emerging computing model for the Internet of everything era[J]. Journal of Computer Research and Development, 2017, 54(5): 907−924 (in Chinese) doi: 10.7544/issn1000-1239.2017.20160941
[4] Patel M, Naughton B, Chan C, et al. Mobile-edge computing—Introductory technical white paper[J]. White Paper, Mobile-edge Computing Industry Initiative, 2014, 29: 854−864
[5] Shi Weisong, Cao Jie, Zhang Quan, et al. Edge computing: Vision and challenges[J]. IEEE Internet of Things Journal, 2016, 3(5): 637−646 doi: 10.1109/JIOT.2016.2579198
[6] 施巍松,张星洲,王一帆,等. 边缘计算:现状与展望[J]. 计算机研究与发展,2019,56(1):69−89 doi: 10.7544/issn1000-1239.2019.20180760 Shi Weisong, Zhang Xingzhou, Wang Yifan, et al. Edge computing: State-of-the-art and future directions[J]. Journal of Computer Research and Development, 2019, 56(1): 69−89 (in Chinese) doi: 10.7544/issn1000-1239.2019.20180760
[7] Wang Xiaofei, Han Yiwen, Leung V, et al. Convergence of edge computing and deep learning: A comprehensive survey[J]. IEEE Communications Surveys & Tutorials, 2020, 22(2): 869−904
[8] Zhou Zhi, Chen Xu, Li En, et al. Edge intelligence: Paving the last mile of artificial intelligence with edge computing[J]. Proceedings of the IEEE, 2019, 107(8): 1738−1762 doi: 10.1109/JPROC.2019.2918951
[9] Plastiras G, Terzi M, Kyrkou C, et al. Edge intelligence: Challenges and opportunities of near-sensor machine learning applications[C/OL] //Proc of the 29th Int Conf on Application Specific Systems, Architectures and Processors (ASAP). Piscataway, NJ: IEEE, 2018[2022-02-19]. https://ieeexplore.ieee.org/abstract/document/8445118
[10] Rausch T, Dustdar S. Edge intelligence: The convergence of humans, things, and AI[C] //Proc of the 2019 IEEE Int Conf on Cloud Engineering (IC2E). Piscataway, NJ: IEEE, 2019: 86−96
[11] Parekh B, Amin K. Edge intelligence: A robust reinforcement of edge computing and artificial intelligence[C] //Proc of Innovations in Information and Communication Technologies (IICT-2020). Berlin: Springer, 2021: 461−468
[12] Munir A, Blasch E, Kwon J, et al. Artificial intelligence and data fusion at the edge[J]. IEEE Aerospace and Electronic Systems Magazine, 2021, 36(7): 62−78 doi: 10.1109/MAES.2020.3043072
[13] Dillon T, Wu Chen, Chang E. Cloud computing: Issues and challenges [C] //Proc of the 24th IEEE Int Conf on Advanced Information Networking and Applications. Piscataway, NJ: IEEE, 2010: 27−33
[14] Khan W, Ahmed E, Hakak S, et al. Edge computing: A survey[J]. Future Generation Computer Systems, 2019, 97: 219−235 doi: 10.1016/j.future.2019.02.050
[15] Xu Dianlei, Li Tong, Li Yong, et al. Edge intelligence: Empowering intelligence to the edge of network[J]. Proceedings of the IEEE, 2021, 109(11): 1778−1837 doi: 10.1109/JPROC.2021.3119950
[16] Yi Shanhe, Hao Zijiang, Qin Zhengrui, et al. Fog computing: Platform and applications[C] //Proc of the 3rd IEEE Workshop on Hot Topics in Web Systems and Technologies (HotWeb). Piscataway, NJ: IEEE, 2015: 73−78
[17] Meng Jiaying, Tan Haisheng, Xu Chao, et al. Dedas: Online task dispatching and scheduling with bandwidth constraint in edge computing[C] //Proc of IEEE Conf on Computer Communications (INFOCOM 2019). Piscataway, NJ: IEEE, 2019: 2287−2295
[18] Sun Yaping, Chen Zhiyong, Tao Meixia, et al. Bandwidth gain from mobile edge computing and caching in wireless multicast systems[J]. IEEE Transactions on Wireless Communications, 2020, 19(6): 3992−4007 doi: 10.1109/TWC.2020.2979147
[19] Xu Xiaolong, Xue Yuan, Qi Lianyong, et al. An edge computing-enabled computation offloading method with privacy preservation for Internet of connected vehicles[J]. Future Generation Computer Systems, 2019, 96: 89−100 doi: 10.1016/j.future.2019.01.012
[20] 林永青. 人工智能起源处的“群星”[J]. 金融博览,2017(5):46−47 doi: 10.3969/j.issn.1673-4882.2017.05.021 Lin Yongqing. "Stars" at the origin of artificial intelligence[J]. Financial View, 2017(5): 46−47 (in Chinese) doi: 10.3969/j.issn.1673-4882.2017.05.021
[21] 崔雍浩,商聪,陈锶奇,等. 人工智能综述:AI的发展[J]. 无线电通信技术,2019,45(3):225−231 doi: 10.3969/j.issn.1003-3114.2019.03.01 Cui Yonghao, Shang Cong, Chen Siqi, et al. Overview of artificial intelligence: The development of AI[J]. Radio Communication Technology, 2019, 45(3): 225−231 (in Chinese) doi: 10.3969/j.issn.1003-3114.2019.03.01
[22] Qiu Junfei, Wu Qihui, Ding Guoru, et al. A survey of machine learning for big data processing[J]. EURASIP Journal on Advances in Signal Processing, 2016, 2016(1): 1−16 doi: 10.1186/s13634-015-0293-z
[23] Adam B, Smith I. Reinforcement learning for structural control[J]. Journal of Computing in Civil Engineering, 2008, 22(2): 133−139 doi: 10.1061/(ASCE)0887-3801(2008)22:2(133)
[24] Arulkumaran K, Deisenroth M P, Brundage M, et al. Deep reinforcement learning: A brief survey[J]. IEEE Signal Processing Magazine, 2017, 34(6): 26−38 doi: 10.1109/MSP.2017.2743240
[25] LeCun Y, Bengio Y, Hinton G. Deep learning[J]. Nature, 2015, 521(7553): 436−444 doi: 10.1038/nature14539
[26] Pouyanfar S, Sadiq S, Yan Yilin, et al. A survey on deep learning: Algorithms, techniques, and applications[J]. ACM Computing Surveys, 2018, 51(5): 1−36
[27] McCulloch W, Pitts W. A logical calculus of the ideas immanent in nervous activity[J]. The Bulletin of Mathematical Biophysics, 1943, 5(4): 115−133 doi: 10.1007/BF02478259
[28] Fukushima K, Miyake S, Ito T. Neocognitron: A neural network model for a mechanism of visual pattern recognition[J]. IEEE Transactions on Systems, Man, and Cybernetics, 1983(5): 826−834
[29] Jordan M. Serial Order: A Parallel Distributed Processing Approach[M] //Advances in Psychology. Amsterdam: Elsevier, 1997: 471−495
[30] Hinton G, Osindero S, Teh Y. A fast learning algorithm for deep belief nets[J]. Neural Computation, 2006, 18(7): 1527−1554 doi: 10.1162/neco.2006.18.7.1527
[31] Jia Yangqing, Shelhamer E, Donahue J, et al. Caffe: Convolutional architecture for fast feature embedding[C] //Proc of the 22nd ACM Int Conf on Multimedia. New York: ACM, 2014: 675−678
[32] Parvat A, Chavan J, Kadam S, et al. A survey of deep-learning frameworks[C/OL] //Proc of the 2017 Int Conf on Inventive Systems and Control (ICISC). Piscataway, NJ: IEEE, 2017[2022-02-19]. https://ieeexplore.ieee.org/abstract/document/8068684
[33] Abadi M. TensorFlow: Learning functions at scale[C/OL] //Proc of the 21st ACM SIGPLAN Int Conf on Functional Programming. New York: ACM, 2016[2022-02-19]. https://dl.acm.org/doi/abs/10.1145/2951913.2976746
[34] Qiu Chao, Wang Xiaofei, Yao Haipeng, et al. Bring intelligence among edges: A blockchain-assisted edge intelligence approach[C/OL] //Proc of IEEE Global Communications Conf (GLOBECOM 2020). Piscataway, NJ: IEEE, 2020[2022-02-19]. https://ieeexplore.ieee.org/abstract/document/9348271
[35] Babu B, Jayashree N. Edge intelligence models for industrial IoT (IIoT)[J]. RV Journal of Science Technology Engineering Arts and Management, 2020, 1: 5−17
[36] Wang Xiaofei, Han Yiwen, Wang Chenyang, et al. In-edge AI: Intelligentizing mobile edge computing, caching and communication by federated learning[J]. IEEE Network, 2019, 33(5): 156−165 doi: 10.1109/MNET.2019.1800286
[37] Wang Zhiyuan, Xu Hongli, Liu Jianchun, et al. Resource-efficient federated learning with hierarchical aggregation in edge computing[C/OL] //Proc of IEEE Conf on Computer Communications (INFOCOM 2021). Piscataway, NJ: IEEE, 2021[2022-02-19]. https://ieeexplore.ieee.org/abstract/document/9488756
[38] Nguyen D, Ding Ming, Pham Q, et al. Federated learning meets blockchain in edge computing: Opportunities and challenges[J]. IEEE Internet of Things Journal, 2021, 8(1): 12806−12825
[39] Deng Shuiguang, Zhao Hailiang, Fang Weijia, et al. Edge intelligence: The confluence of edge computing and artificial intelligence[J]. IEEE Internet of Things Journal, 2020, 7(8): 7457−7469 doi: 10.1109/JIOT.2020.2984887
[40] Qu Guanjin, Wu Huaming, Li Ruidong, et al. DMRO: A deep meta reinforcement learning-based task offloading framework for edge-cloud computing[J]. IEEE Transactions on Network and Service Management, 2021, 18(3): 3448−3459 doi: 10.1109/TNSM.2021.3087258
[41] Tang Ming, Wong V. Deep reinforcement learning for task offloading in mobile edge computing systems[J/OL]. IEEE Transactions on Mobile Computing, 2020[2022-02-19]. https://ieeexplore.ieee.org/abstract/document/9253665
[42] Zhang Ke, Zhu Yongxu, Leng Supeng, et al. Deep learning empowered task offloading for mobile edge computing in urban informatics[J]. IEEE Internet of Things Journal, 2019, 6(5): 7635−7647 doi: 10.1109/JIOT.2019.2903191
[43] Deng Xiaoheng, Yin Jian, Guan Peiyuan, et al. Intelligent delay-aware partial computing task offloading for multi-user industrial Internet of things through edge computing[J/OL]. IEEE Internet of Things Journal, 2021[2022-02-19]. https://ieeexplore.ieee.org/abstract/document/9590527
[44] Wu Huaming, Zhang Ziru, Guan Chang, et al. Collaborate edge and cloud computing with distributed deep learning for smart city Internet of things[J]. IEEE Internet of Things Journal, 2020, 7(9): 8099−8110 doi: 10.1109/JIOT.2020.2996784
[45] Huang Hui, Ye Qiang, Du Hongwei. Reinforcement learning based offloading for realtime applications in mobile edge computing [C/OL] //Proc of IEEE Int Conf on Communications (ICC 2020). Piscataway, NJ: IEEE, 2020[2022-02-19]. https://ieeexplore.ieee.org/abstract/document/9148748
[46] Wang Jiadai, Zhao Lei, Liu Jiajia, et al. Smart resource allocation for mobile edge computing: A deep reinforcement learning approach[J]. IEEE Transactions on Emerging Topics in Computing, 2019, 9(3): 1529−1541
[47] He Ying, Wang Yuhang, Qiu Chao, et al. Blockchain-based edge computing resource allocation in IoT: A deep reinforcement learning approach[J]. IEEE Internet of Things Journal, 2020, 8(4): 2226−2237
[48] Dai Yueyue, Zhang Ke, Maharjan S, et al. Edge intelligence for energy-efficient computation offloading and resource allocation in 5G beyond[J]. IEEE Transactions on Vehicular Technology, 2020, 69(10): 12175−12186 doi: 10.1109/TVT.2020.3013990
[49] Lin Zehong, Bi Suzhi, Zhang Yingjun. Optimizing AI service placement and resource allocation in mobile edge intelligence systems[J]. IEEE Transactions on Wireless Communications, 2021, 20(11): 7257−7271 doi: 10.1109/TWC.2021.3081991
[50] Yu Zhengxin, Hu Jia, Min Geyong, et al. Mobility-aware proactive edge caching for connected vehicles using federated learning[J]. IEEE Transactions on Intelligent Transportation Systems, 2020, 22(8): 5341−5351
[51] Zhang Ke, Leng Supeng, He Yejun, et al. Cooperative content caching in 5G networks with mobile edge computing[J]. IEEE Wireless Communications, 2018, 25(3): 80−87 doi: 10.1109/MWC.2018.1700303
[52] Ning Zhaolong, Zhang Kaiyuan, Wang Xiaojie, et al. Joint computing and caching in 5G-envisioned Internet of vehicles: A deep reinforcement learning-based traffic control system[J]. IEEE Transactions on Intelligent Transportation Systems, 2020, 22(8): 5201−5212
[53] Ning Zhaolong, Zhang Kaiyuan, Wang Xiaojie, et al. Intelligent edge computing in Internet of vehicles: A joint computation offloading and caching solution[J]. IEEE Transactions on Intelligent Transportation Systems, 2020, 22(4): 2212−2225
[54] Ndikumana A, Tran N, Kim K, et al. Deep learning based caching for self-driving cars in multi-access edge computing[J]. IEEE Transactions on Intelligent Transportation Systems, 2020, 22(5): 2862−2877
[55] Zhang Ran, Yu F, Liu Jiang, et al. Deep reinforcement learning (DRL)-based device-to-device caching with blockchain and mobile edge computing[J]. IEEE Transactions on Wireless Communications, 2020, 19(10): 6469−6485 doi: 10.1109/TWC.2020.3003454
[56] Dai Penglin, Hu Kaiwen, Wu Xiao, et al. Asynchronous deep reinforcement learning for data-driven task offloading in MEC-empowered vehicular networks [C/OL] //Proc of IEEE Conf on Computer Communications (INFOCOM 2021). Piscataway, NJ: IEEE, 2021[2022-02-19]. https://ieeexplore.ieee.org/document/9488886
[57] Mukherjee M, Kumar V, Lat A, et al. Distributed deep learning-based task offloading for UAV-enabled mobile edge computing[C] //Proc of IEEE Conf on Computer Communications Workshops (INFOCOM 2020). Piscataway, NJ: IEEE, 2020: 1208−1212
[58] Liu Xianming, Zhang Chaokun, He Shen. Adaptive task offloading for mobile aware applications based on deep reinforcement learning[C] //Proc of the 19th Int Conf on mobile Ad Hoc and Smart Systems (MASS). Piscataway, NJ: IEEE, 2022, accepted
[59] Mohammed T, Joe-Wong C, Babbar R, et al. Distributed inference acceleration with adaptive DNN partitioning and offloading [C] //Proc of the IEEE Conf on Computer Communications (INFOCOM 2020). Piscataway, NJ: IEEE, 2020: 854−863
[60] Zhou Zhenyu, Wang Zhao, Yu Haijun, et al. Learning-based URLLC-aware task offloading for Internet of health things[J]. IEEE Journal on Selected Areas in Communications, 2020, 39(2): 396−410
[61] Tang Xin, Chen Xu, Zeng Liekang, et al. Joint multiuser DNN partitioning and computational resource allocation for collaborative edge intelligence[J]. IEEE Internet of Things Journal, 2020, 8(12): 9511−9522
[62] Liu Yi, Yu Huimin, Xie Shengli, et al. Deep reinforcement learning for offloading and resource allocation in vehicle edge computing and networks[J]. IEEE Transactions on Vehicular Technology, 2019, 68(11): 11158−11168 doi: 10.1109/TVT.2019.2935450
[63] Liao Haijun, Zhou Zhenyu, Zhao Xiongwen, et al. Learning-based context-aware resource allocation for edge-computing-empowered industrial IoT[J]. IEEE Internet of Things Journal, 2019, 7(5): 4260−4277
[64] Guo Fengxian, Yu F, Zhang Heli, et al. Adaptive resource allocation in future wireless networks with blockchain and mobile edge computing[J]. IEEE Transactions on Wireless Communications, 2019, 19(3): 1689−1703
[65] Dai Yueyue, Xu Du, Maharjan S, et al. Artificial intelligence empowered edge computing and caching for Internet of vehicles[J]. IEEE Wireless Communications, 2019, 26(3): 12−18 doi: 10.1109/MWC.2019.1800411
[66] Qiao Guanhua, Leng Supeng, Maharjan S, et al. Deep reinforcement learning for cooperative content caching in vehicular edge computing and networks[J]. IEEE Internet of Things Journal, 2019, 7(1): 247−257
[67] Ale L, Zhang Ning, Wu Huici, et al. Online proactive caching in mobile edge computing using bidirectional deep recurrent neural network[J]. IEEE Internet of Things Journal, 2019, 6(3): 5520−5530 doi: 10.1109/JIOT.2019.2903245
[68] Khan L U, Yaqoob I, Tran N H, et al. Edge-computing-enabled smart cities: A comprehensive survey[J]. IEEE Internet of Things Journal, 2020, 7(10): 10200−10232 doi: 10.1109/JIOT.2020.2987070
[69] Nakahara M, Hisano D, Nishimura M, et al. Retransmission edge computing system conducting adaptive image compression based on image recognition accuracy[C/OL] //Proc of the 94th IEEE Vehicular Technology Conf (VTC2021-Fall). Piscataway, NJ: IEEE, 2021[2022-02-19]. https://ieeexplore.ieee.org/document/9625464
[70] Muhammad G, Hossain M. Emotion recognition for cognitive edge computing using deep learning[J]. IEEE Internet of Things Journal, 2021, 8(23): 16894−16901 doi: 10.1109/JIOT.2021.3058587
[71] Shen Tao, Gao Chan, Xu Dawei. The analysis of intelligent real-time image recognition technology based on mobile edge computing and deep learning[J]. Journal of Real-Time Image Processing, 2021, 18(4): 1157−1166 doi: 10.1007/s11554-020-01039-x
[72] Monburinon N, Zabir S, Vechprasit N, et al. A novel hierarchical edge computing solution based on deep learning for distributed image recognition in IoT systems[C] //Proc of the 4th Int Conf on Information Technology (InCIT). Piscataway, NJ: IEEE, 2019: 294−299
[73] Wu Juan. English real-time speech recognition based on hidden markov and edge computing model[C] //Proc of the 3rd Int Conf on Inventive Research in Computing Applications (ICIRCA). Piscataway, NJ: IEEE, 2021: 376−379
[74] Xu Yunwei. Research on business English translation architecture based on artificial intelligence speech recognition and edge computing[J/OL]. Wireless Communications and Mobile Computing, 2021[2022-02-19]. https://www.hindawi.com/journals/wcmc/2021/5518868/
[75] Cheng Shitong, Xu Zhenghui, Li Xiuhua, et al. Task Offloading for automatic speech recognition in edge-cloud computing based mobile networks[C/OL] //Proc of IEEE Symp on Computers and Communications (ISCC). Piscataway, NJ: IEEE, 2020[2022-02-19]. https://ieeexplore.ieee.org/abstract/document/9219579
[76] Zeng Xiao, Fang Biyi, Shen Haichen, et al. Distream: Scaling live video analytics with workload-adaptive distributed edge intelligence [C] //Proc of the 18th Conf on Embedded Networked Sensor Systems. New York: ACM, 2020: 409−421
[77] Rocha N, Silva T, Batista T, et al. Leveraging edge intelligence for video analytics in smart city applications [J]. Information, 2021, 12(1): 14
[78] Zhang Xingzhou, Wang Yifan, Lu Sidi, et al. OpenEI: An open framework for edge intelligence[C] //Proc of the 39th IEEE Int Conf on Distributed Computing Systems (ICDCS). Piscataway, NJ: IEEE, 2019: 1840−1851
[79] Li En, Zeng Liekang, Zhou Zhi, et al. Edge AI: On-demand accelerating deep neural network inference via edge computing[J]. IEEE Transactions on Wireless Communications, 2019, 19(1): 447−457
[80] Erfanian A, Amirpour H, Tashtarian F, et al. LwTE-Live: Light-weight transcoding at the edge for live streaming[C] //Proc of the Workshop on Design, Deployment, and Evaluation of Network-assisted Video Streaming. New York: ACM, 2021: 22−28
[81] Jin Yibo, Jiao Lei, Qian Zhuzhong, et al. Learning for learning: Predictive online control of federated learning with edge provisioning[C/OL] //Proc of the IEEE Conf on Computer Communications (INFOCOM 2021). Piscataway, NJ: IEEE, 2021[2022-02-19]. https://ieeexplore.ieee.org/abstract/document/9488733
[82] Zhang D, Kou Ziyi, Wang Dong. Fedsens: A federated learning approach for smart health sensing with class imbalance in resource constrained edge computing[C/OL] //Proc of the IEEE Conf on Computer Communications (INFOCOM 2021). Piscataway, NJ: IEEE, 2021[2022-02-19]. https://ieeexplore.ieee.org/document/9488776
[83] Wang Shuangguang, Guo Yan, Zhang Ning, et al. Delay-aware microservice coordination in mobile edge computing: A reinforcement learning approach[J]. IEEE Transactions on Mobile Computing, 2019, 20(3): 939−951
[84] Bahreini T, Badri H, Grosu D. Mechanisms for resource allocation and pricing in mobile edge computing systems[J]. IEEE Transactions on Parallel and Distributed Systems, 2021, 33(3): 667−682
[85] Tran T, Pompili D. Adaptive bitrate video caching and processing in mobile-edge computing networks[J]. IEEE Transactions on Mobile Computing, 2018, 18(9): 1965−1978
[86] Xing Hong, Liu Liang, Xu Jie, et al. Joint task assignment and resource allocation for D2D-enabled mobile-edge computing[J]. IEEE Transactions on Communications, 2019, 67(6): 4193−4207 doi: 10.1109/TCOMM.2019.2903088
[87] Zhang Chaokun, Zheng Rong, Cui Yong, et al. Delay-sensitive computation partitioning for mobile augmented reality applications[C/OL] //Proc of the 28th Int Symp on Quality of Service (IWQoS). Piscataway, NJ: IEEE, 2020[2022-02-19]. https://ieeexplore.ieee.org/document/9212917
[88] Song Fuhong, Xing Huanlai, Luo Shouxi, et al. A multiobjective computation offloading algorithm for mobile-edge computing[J]. IEEE Internet of Things Journal, 2020, 7(9): 8780−8799 doi: 10.1109/JIOT.2020.2996762
[89] Lei Lei, Xu Huijuan, Xiong Xiong, et al. Joint computation offloading and multiuser scheduling using approximate dynamic programming in NB-IoT edge computing system[J]. IEEE Internet of Things Journal, 2019, 6(3): 5345−5362 doi: 10.1109/JIOT.2019.2900550
[90] Littman M, Moore A. Reinforcement learning: A survey[J]. Journal of Artificial Intelligence Research, 1996, 4(1): 237−285
[91] Wang Jin, Hu Jia, Min Geyong, et al. Fast adaptive task offloading in edge computing based on meta reinforcement learning[J]. IEEE Transactions on Parallel and Distributed Systems, 2020, 32(1): 242−253
[92] Huisman M, Van R, Plaat A. A survey of deep meta-learning[J]. Artificial Intelligence Review, 2021, 54(6): 4483−4541 doi: 10.1007/s10462-021-10004-4
[93] Zhang Ke, Cao Jiayu, Zhang Yan. Adaptive digital twin and multiagent deep reinforcement learning for vehicular edge computing and networks[J]. IEEE Transactions on Industrial Informatics, 2021, 18(2): 1405−1413
[94] Dai Yueyue, Xu Du, Zhang Ke, et al. Deep reinforcement learning and permissioned blockchain for content caching in vehicular edge computing and networks[J]. IEEE Transactions on Vehicular Technology, 2020, 69(4): 4312−4324 doi: 10.1109/TVT.2020.2973705
[95] Liang Fan, Yu Wei, Liu Xing, et al. Toward edge-based deep learning in industrial Internet of things[J]. IEEE Internet of Things Journal, 2020, 7(5): 4329−4341 doi: 10.1109/JIOT.2019.2963635
[96] Guillén M, Llanes A, Imbernón B, et al. Performance evaluation of edge-computing platforms for the prediction of low temperatures in agriculture using deep learning[J]. The Journal of Supercomputing, 2021, 77(1): 818−840 doi: 10.1007/s11227-020-03288-w
[97] Zhao Ning, Wu Hao, Yu F, et al. Deep-reinforcement-learning-based latency minimization in edge intelligence over vehicular networks[J]. IEEE Internet of Things Journal, 2021, 9(2): 1300−1312
[98] Li Bo, He Qiang, Chen Feifei, et al. Inspecting edge data integrity with aggregated signature in distributed edge computing environment [J/OL]. IEEE Transactions on Cloud Computing, 2021[2022-02-19]. https://ieeexplore.ieee.org/abstract/document/9354962
[99] Li Yannan, Yu Yong, Susilo W, et al. Security and privacy for edge intelligence in 5G and beyond networks: Challenges and solutions[J]. IEEE Wireless Communications, 2021, 28(2): 63−69 doi: 10.1109/MWC.001.2000318
[100] Li Bo, He Qiang, Chen Feifei, et al. Auditing cache data integrity in the edge computing environment[J]. IEEE Transactions on Parallel and Distributed Systems, 2020, 32(5): 1210−1223
[101] Cui Guangming, He Qiang, Li Bo, et al. Efficient verification of edge data integrity in edge computing environment[J/OL]. IEEE Transactions on Services Computing, 2021[2022-02-19]. https://ieeexplore.ieee.org/abstract/document/9459478
[102] Li Bo, He Qiang, Chen Feifei, et al. Cooperative assurance of cache data integrity for mobile edge computing[J]. IEEE Transactions on Information Forensics and Security, 2021, 16: 4648−4662 doi: 10.1109/TIFS.2021.3111747
[103] Tong Wei, Jiang Bingbing, Xu Fengyuan, et al. Privacy-preserving data integrity verification in mobile edge computing [C] //Proc of the 39th Int Conf on Distributed Computing Systems (ICDCS). Piscataway, NJ: IEEE, 2019: 1007−1018
[104] Xu Rongxu, Hang Lei, Jin Wenquan, et al. Distributed secure edge computing architecture based on blockchain for real-time data integrity in IoT environments[J]. Actuators, 2021, 10(8): 197−212 doi: 10.3390/act10080197
[105] Shanthamallu U, Thiagarajan J, Spanias A. Uncertainty-matching graph neural networks to defend against poisoning attacks [C] //Proc of the 35th AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2021, 9524−9532
[106] Alfeld S, Zhu Xiaojin, Barford P. Data poisoning attacks against autoregressive models[C/OL] //Proc of the 30th AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2016[2022-02-19]. https://ojs.aaai.org/index.php/AAAI/article/view/10237
[107] Jagielski M, Oprea A, Biggio B, et al. Manipulating machine learning: Poisoning attacks and countermeasures for regression learning [C] //Proc of the 2018 IEEE Symp on Security and Privacy (SP). Piscataway, NJ: IEEE, 2018: 19−35
[108] Steinhardt J, Koh P, Liang P. Certified defenses for data poisoning attacks [J/OL]. Advances in Neural Information Processing Systems, 2017[2022-02-19]. https://proceedings.neurips.cc/paper/2017/hash/9d7311ba459f9e45ed746755a32dcd11-Abstract.html
[109] Tolpegin V, Truex S, Gursoy M, et al. Data poisoning attacks against federated learning systems[C] //Proc of the European Symp on Research in Computer Security. Berlin: Springer, 2020: 480−501
[110] Douceur J. The sybil attack[C] //Proc of the Int Workshop on Peer-to-Peer Systems. Berlin: Springer, 2002: 251−260
[111] Doku R, Rawat D. Mitigating data poisoning attacks on a federated learning-edge computing network[C/OL] //Proc of the 18th Annual Consumer Communications & Networking Conf (CCNC). Piscataway, NJ: IEEE, 2021[2022-02-19]. https://ieeexplore.ieee.org/abstract/document/9369581
[112] Xiao Liang, Wan Xiaoyue, Dai Canhuang, et al. Security in mobile edge caching with reinforcement learning[J]. IEEE Wireless Communications, 2018, 25(3): 116−122 doi: 10.1109/MWC.2018.1700291
[113] Jia Xiaoying, He Debiao, Kumar N, et al. A provably secure and efficient identity-based anonymous authentication scheme for mobile edge computing[J]. IEEE Systems Journal, 2019, 14(1): 560−571
[114] Kaur K, Garg S, Kaddoum G, et al. A lightweight and privacy-preserving authentication protocol for mobile edge computing [C/OL] //Proc of IEEE Global Communications Conf (GLOBECOM 2019). Piscataway, NJ: IEEE, 2019[2022-02-19]. https://ieeexplore.ieee.org/abstract/document/9013856
[115] Gupta R, Reebadiya D, Tanwar S, et al. When blockchain meets edge intelligence: Trusted and security solutions for consumers[J]. IEEE Network, 2021, 35(5): 272−278 doi: 10.1109/MNET.001.2000735
[116] Ding Jiahao, Liang Guannan, Bi Jinbo, et al. Differentially private and communication efficient collaborative learning [C/OL] //Proc of the 35th AAAI Conf on Artificial Intelligence, Virtual Conf. Palo Alto, CA: AAAI, 2021[2022-02-19]. https://ojs.aaai.org/index.php/AAAI/article/view/16887
[117] Wang Ji, Bao Weidong, Sun Lichao, et al. Private model compression via knowledge distillation [C] //Proc of the 33rd AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2019: 1190−1197
[118] Abadi M, Chu A, Goodfellow I, et al. Deep learning with differential privacy [C] //Proc of the 2016 ACM SIGSAC Conf on Computer and Communications Security (CCS’16). New York: ACM, 2016: 308−318
[119] Li He, Ota K, Dong Miaoxiong. Learning IoT in edge: Deep learning for the Internet of things with edge computing[J]. IEEE network, 2018, 32(1): 96−101
[120] Yang Qiang, Liu Yang, Cheng Yong, et al. Federated learning[J]. Synthesis Lectures on Artificial Intelligence and Machine Learning, 2019, 13(3): 1−207 doi: 10.1007/978-3-031-01585-4
[121] Liu Yi, Zhao Ruihui, Kang Jiawen, et al. Towards communication-efficient and attack-resistant federated edge learning for industrial internet of things[J]. ACM Transactions on Internet Technology, 2021, 22(3): 1−22
[122] Liu Yi, James J, Kang Jiawen, et al. Privacy-preserving traffic flow prediction: A federated learning approach[J]. IEEE Internet of Things Journal, 2020, 7(8): 7751−7763 doi: 10.1109/JIOT.2020.2991401
[123] Liu Yi, Yuan Xingliang, Xiong Zehui, et al. Federated learning for 6G communications: Challenges, methods, and future directions[J]. China Communications, 2020, 17(9): 105−118 doi: 10.23919/JCC.2020.09.009
[124] Li Tian, Sahu A, Talwalkar A, et al. Federated learning: Challenges, methods, and future directions[J]. IEEE Signal Processing Magazine, 2020, 37(3): 50−60 doi: 10.1109/MSP.2020.2975749
[125] Thalluri L, Venkat S, Prasad C, et al. Artificial intelligence enabled smart city IoT system using edge computing [C] //Proc of the 2nd Int Conf on Smart Electronics and Communication (ICOSEC). Piscataway, NJ: IEEE, 2021: 12−20
[126] Nasir M, Muhammad K, Ullah A, et al. Enabling automation and edge intelligence over resource constraint IoT devices for smart home[J/OL]. Neurocomputing, 2021[2022-02-19]. https://www.sciencedirect.com/science/article/abs/pii/S0925231221016301
[127] Sadiku M, Tembely M, Musa S. Internet of vehicles: An introduction [J]. Internet Journal of Advanced Research in Computer Science and Software Engineering, 2018, 8(1): 11
[128] Grover H, Alladi T, Chamola V, et al. Edge computing and deep learning enabled secure multitier network for Internet of vehicles[J]. IEEE Internet of Things Journal, 2021, 8(19): 14787−14796 doi: 10.1109/JIOT.2021.3071362
[129] Zhang Qingyang, Wang Yifan, Zhang Xingzhou, et al. OpenVDAP: An open vehicular data analytics platform for CAVs[C] //Proc of the 38th IEEE Int Conf on Distributed Computing Systems (ICDCS). Piscataway, NJ: IEEE, 2018: 1310−1320
[130] Lv Zhihan, Chen Dongliang, Wang Qingjun. Diversified technologies in Internet of vehicles under intelligent edge computing[J]. IEEE Transactions on Intelligent Transportation Systems, 2020, 22(4): 2048−2059
[131] Li Tailai, Zhang Chaokun, Zhou Xiaobo. BP-CODS: Blind-spot-prediction-assisted multi-vehicle collaborative data scheduling [C/OL] //Proc of the 17th Int Conf on Wireless Algorithms, Systems, and Applications (WASA 2022). Berlin: Springer, 2022[2022-02-19]. https://link.springer.com/chapter/10.1007/978-3-031-19211-1_25
[132] Liu Jia, Xiang Jianjian, Jin Yongjun, et al. Boost precision agriculture with unmanned aerial vehicle remote sensing and edge intelligence: A survey [J]. Remote Sensing, 2021, 13(21): 4387
[133] Tyagi A. Towards a second green revolution[J]. Irrigation and Drainage, 2016, 4(65): 388−389
[134] Kamilaris A, Prenafeta-Boldú F. Deep learning in agriculture: A survey[J]. Computers and electronics in agriculture, 2018, 147: 70−90 doi: 10.1016/j.compag.2018.02.016
[135] Dionisio J, William G, Gilbert R. 3D virtual worlds and the metaverse: Current status and future possibilities[J]. ACM Computing Surveys, 2013, 45(3): 1−38
[136] Duan Haihan, Li Jiaye, Fan Sizheng, et al. Metaverse for social good: A university campus prototype [C] //Proc of the 29th ACM Int Conf on Multimedia. New York: ACM, 2021: 153−161
[137] Zhang Zhengquan, Xiao Yue, Ma Zheng, et al. 6G wireless networks: Vision, requirements, architecture, and key technologies[J]. IEEE Vehicular Technology Magazine, 2019, 14(3): 28−41 doi: 10.1109/MVT.2019.2921208
[138] You Xiaohu, Wang Chengxiang, Huang Jie, et al. Towards 6G wireless communication networks: Vision, enabling technologies, and new paradigm shifts[J]. Science China Information Sciences, 2021, 64(1): 1−74
[139] Dai Hongning, Wu Yulei, Wang Hao, et al. Blockchain-empowered edge intelligence for Internet of medical things against COVID-19[J]. IEEE Internet of Things Magazine, 2021, 4(2): 34−39 doi: 10.1109/IOTM.0011.2100030
[140] Han Tao, Ansari N. On optimizing green energy utilization for cellular networks with hybrid energy supplies[J]. IEEE Transactions on Wireless Communications, 2013, 12(8): 3872−3882 doi: 10.1109/TCOMM.2013.051313.121249
[141] Quan Li, Huang Qin, Zhang Shengli, et al. Downsampling blockchain algorithm[C] //Proc of the IEEE Conf on Computer Communications Workshops (INFOCOM WKSHPS 2019). Piscataway, NJ: IEEE, 2019: 342−347