
面向边缘智能的联邦学习综述

张雪晴, 刘延伟, 刘金霞, 韩言妮

张雪晴, 刘延伟, 刘金霞, 韩言妮. 面向边缘智能的联邦学习综述[J]. 计算机研究与发展, 2023, 60(6): 1276-1295. DOI: 10.7544/issn1000-1239.202111100, CSTR: 32373.14.issn1000-1239.202111100
Zhang Xueqing, Liu Yanwei, Liu Jinxia, Han Yanni. An Overview of Federated Learning in Edge Intelligence[J]. Journal of Computer Research and Development, 2023, 60(6): 1276-1295. DOI: 10.7544/issn1000-1239.202111100, CSTR: 32373.14.issn1000-1239.202111100

面向边缘智能的联邦学习综述

基金项目: 国家自然科学基金项目(61771469);重庆市属本科高校与中国科学院所属院所合作项目(HZ2021015)
    作者简介:

    张雪晴: 1994年生. 硕士. 主要研究方向为机器学习

    刘延伟: 1976年生. 博士,副研究员. CCF会员. 主要研究方向为无线通信、智能多媒体信息处理和网络安全

    刘金霞: 1969年生. 硕士,教授. 主要研究方向为无线通信和边缘智能

    韩言妮: 1981年生. 博士,副研究员. 主要研究方向为无线通信和智能数据分析

    通讯作者:

刘延伟 (liuyanwei@iie.ac.cn)

  • 中图分类号: TP3

An Overview of Federated Learning in Edge Intelligence

Funds: This work was supported by the National Natural Science Foundation of China (61771469) and the Cooperation Project Between Chongqing Municipal Undergraduate Universities and Institutes Affiliated to CAS (HZ2021015).
    Author Bio:

    Zhang Xueqing: born in 1994. Master. Her main research interests include machine learning

    Liu Yanwei: born in 1976. PhD, associate professor. Member of CCF. His main research interests include wireless communication, intelligent multimedia processing, and cyber security

    Liu Jinxia: born in 1969. Master, professor. Her main research interests include wireless communication and edge intelligence

    Han Yanni: born in 1981. PhD, associate professor. Her main research interests include wireless communication and intelligent data analysis

  • 摘要:

    随着边缘智能需求的快速增长,联邦学习(federated learning,FL)技术在产业界受到了极大的关注. 与传统基于云计算的集中式机器学习相比,边缘网络环境下联邦学习借助移动边缘设备共同训练机器学习模型,不需要把大量本地数据发送到云端进行处理,缩短了数据处理计算节点与用户之间的距离,在满足用户低时延需求的同时,用户数据可以在本地训练进而实现数据隐私保护. 在边缘网络环境下,由于通信资源和计算资源受限,联邦学习的性能依赖于无线网络状态、终端设备资源以及数据质量的综合限制. 因此,面向边缘智能应用,首先分析了边缘智能环境下高效联邦学习面临的挑战,然后综述联邦学习在客户端选择、模型训练与模型更新等关键技术方面的研究进展,最后对边缘智能联邦学习的发展趋势进行了展望.

    Abstract:

    With the increasing demand for edge intelligence, federated learning (FL) has drawn great attention from industry. Compared with traditional centralized machine learning, which is mostly based on cloud computing, FL collaboratively trains the neural network model over a large number of edge devices in a distributed way, without sending large amounts of local data to the cloud for processing; compute-intensive learning tasks are thus sunk to the network edge, close to the users. Consequently, users' data can be trained locally, meeting the needs of low latency and privacy protection. In mobile edge networks, communication and computing resources are limited, so the performance of FL is subject to the joint constraints of the computation and communication resources available during wireless networking and of the data quality on mobile devices. Aiming at the applications of edge intelligence, the tough challenges for achieving high-efficiency FL are first analyzed. Next, research progress on client selection, model training, and model updating in FL is summarized; specifically, typical work on data offloading, model partitioning, model compression, model aggregation, gradient-descent optimization, and wireless resource optimization is comprehensively analyzed. Finally, future research trends of FL in edge intelligence are discussed.

  • Small object detection, a difficult problem within object detection, is widely used in vision tasks such as autonomous driving, medical imaging, UAV navigation, satellite positioning, and industrial inspection. Deep-learning-based object detection has advanced rapidly in recent years. One-stage detectors, represented by YOLO (You Only Look Once) [1] and SSD (Single Shot MultiBox Detector) [2], directly predict object locations and classes and are comparatively fast, while two-stage detectors [3-4] regress object regions from generated candidate boxes and achieve higher accuracy. However, these algorithms perform poorly when detecting small objects containing few pixels (smaller than 32×32 pixels), with detection rates less than half of those for larger objects. Small object detection therefore still leaves much room for improvement.

    Poor small-object performance stems mainly from limitations of the network itself and from imbalanced training data [5]. To obtain strong semantics and a large receptive field, detection networks keep stacking downsampling layers, so small-object information is gradually lost during forward propagation [6], limiting small-object detection performance. The feature pyramid network (FPN) [7] laterally fuses low-level and high-level feature maps and can alleviate this information loss to some extent [1-2]. However, directly fusing features from different levels causes semantic conflicts that limit multi-scale feature expression, so small objects are easily drowned in conflicting information. Moreover, in today's mainstream public datasets the number of small objects is far smaller than that of larger objects, so small objects contribute little to the loss and the network's convergence keeps tilting toward larger objects.

    To address the poor performance of small object detection, this paper proposes a composite FPN structure combining context augmentation and feature refinement, consisting mainly of a context augmentation module (CAM) and a feature refinement module (FRM), together with a copy-reduce-paste data augmentation method. The contributions are threefold:

    1) CAM fuses multi-scale dilated-convolution features to obtain rich context and supplement the information needed for detection;

    2) FRM introduces a feature refinement mechanism with channel-wise and spatial adaptive fusion to suppress conflicting information in the features;

    3) copy-reduce-paste data augmentation raises the contribution of small objects to the loss during training.

    Object detection is a fundamental computer-vision task, and after years of development, detectors based on convolutional neural networks (CNNs) have become mainstream. R-CNN [3] first generates region proposals to match objects of different sizes and then screens them with a CNN. Faster R-CNN [4] combines proposal generation and classification to improve detection speed. EFPN [8] proposes a super-resolution FPN structure to enlarge small-object features [9]. The one-stage detector SSD densely tiles anchor boxes over the image to regress object boxes and exploits features at several scales to detect smaller objects. YOLOV3 [1] uses the three output levels of a feature pyramid to detect large, medium, and small objects respectively, clearly improving small-object performance. RefineDet [10] introduces a new loss function to address the imbalance between easy and hard samples. Detectors based on anchor-free architectures have also been proposed [11]. Despite this rapid progress, the detection rate for small objects remains low. This paper adopts YOLOV3 with FPN as the base network and improves upon it.

    Multi-scale features are an effective way to improve small-object detection. SSD [2] first attempted to predict object locations and classes on multi-scale features. FPN [7] fuses, in a top-down manner, semantically rich high-level feature maps with geometrically rich low-level ones. PANet [12] adds an extra bottom-up path to FPN to pass shallow information to higher levels more efficiently. NAS-FPN [13] uses neural architecture search to find a new connection topology. BiFPN [14] refines PANet's connections for efficiency and adds a simple attention mechanism at the junctions. Although the structures in [12-14] all strengthen multi-scale representation, they ignore that conflicting information between features of different scales can block further gains; this paper explicitly accounts for the effect of conflicting information on detection accuracy.

    Deep learning is data-driven, so preprocessing of training data is a key step. Common methods include rotation, deformation, random erasing, random occlusion, and photometric distortion. Stitcher [15] shrinks four training images to a quarter of their original size and stitches them into one image to augment small objects, using the loss value as a feedback signal to steer the augmentation. YOLOV4 [16] shrinks four training images to different sizes and stitches them into one. For images whose objects are mostly large, the methods of [15-16] shrink large objects to medium size and so mainly improve medium-object detection. Kisantal et al. [5] copy small-object regions and paste them back into the same image, but this only increases the number of small objects, not the number of images containing them, which still leaves some imbalance. The augmentation proposed here instead builds on the fact that larger objects are widely distributed across training batches, keeping training balanced. The overall architecture of our method is shown in Fig. 1:

    图  1  FPN总体网络结构
    Figure  1.  Overall network structure of FPN

    In Fig. 1, {C2, C3, C4, C5} denote the feature maps after {4, 8, 16, 32}× downsampling of the image. {C3, C4, C5} each pass through one convolution layer to produce {F1, F2, F3}; C2 is not used because it contains substantial noise. {L1, L2, L3} are the results of {F1, F2, F3} after the FPN, and {P1, P2, P3} are the outputs of {L1, L2, L3} after the FRM.
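    To make the wiring concrete, the following is a minimal PyTorch sketch of this pipeline. The channel widths are illustrative assumptions, the pyramid levels are named by resolution to avoid committing to an index convention, and the FPN/FRM internals follow the later sections:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal sketch of the Fig. 1 pipeline (illustrative channel widths; C2 is
# discarded as noisy). The FRM described later turns the L outputs into P.
class LateralAndTopDown(nn.Module):
    def __init__(self, chans=(256, 512, 1024), width=256):
        super().__init__()
        # C3/C4/C5 -> F features, one conv layer each
        self.lateral = nn.ModuleList(nn.Conv2d(c, width, 1) for c in chans)

    def forward(self, c3, c4, c5):
        f_hi, f_mid, f_lo = (l(c) for l, c in zip(self.lateral, (c3, c4, c5)))
        # top-down FPN pathway producing the L features
        l_lo = f_lo
        l_mid = f_mid + F.interpolate(l_lo, scale_factor=2, mode="nearest")
        l_hi = f_hi + F.interpolate(l_mid, scale_factor=2, mode="nearest")
        return l_hi, l_mid, l_lo

c3, c4, c5 = (torch.randn(1, c, s, s) for c, s in ((256, 56), (512, 28), (1024, 14)))
outs = LateralAndTopDown()(c3, c4, c5)
```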

    CAM is inspired by how humans recognize objects. For example, a bird high in the sky is hard to identify on its own, but against the sky as background it becomes easy, because we have learned that a tiny object against the sky is very likely a bird; this background information is the object's context. If a detection network can likewise learn such "knowledge" from images, it will help in detecting small objects.

    Because different FPN levels have different feature densities, they carry large semantic differences; sharing information across levels also introduces much conflicting information. This paper therefore proposes FRM to filter conflicting information and reduce the semantic gap. FRM adaptively fuses features across levels to suppress inter-level conflicts.

    To address the low contribution of small objects to the loss, a copy-reduce-paste data augmentation method is proposed to raise their contribution.

    Object detection needs both localization and semantic information. L3, at the lowest FPN level, carries rich localization information but lacks semantics. In FPN's top-down information-sharing structure, fusion happens only after channel reduction, so L3 cannot acquire sufficient semantics. We therefore use dilated convolutions with different dilation rates to extract context and inject it into the FPN as supplementary context.

    Fig. 2(a) shows the structure of CAM. An input of size [bs, C, H, W] goes through dilated convolutions [17] with dilation rates 1, 3, and 5, where bs, C, H, and W are the batch size, channels, height, and width of the feature map. Since the module's input is spatially small, large kernels are unsuitable for capturing fine detail, so 3×3 convolutions are used. To avoid adding many parameters, the number of kernels is set to C/4, i.e., the channels are first compressed to 1/4 of the input and then expanded back to C by a 1×1 convolution, yielding three outputs of identical size but different receptive fields, which are finally fused. The available fusion schemes are shown in Fig. 2(b)-(d). Fig. 2(b) and (c) are concatenation fusion and weighted (additive) fusion, i.e., direct concatenation along channels and elementwise addition, respectively. Fig. 2(d) is adaptive fusion: convolution, concatenation, and normalization compress the input feature maps into a 3-channel spatial weight, each channel corresponding to one input; the weighted sum of the inputs and the spatial weights aggregates the context into the output. A module sketch follows Fig. 2.

    图  2  CAM结构图
    Figure  2.  The structure of CAM
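    Below is a minimal PyTorch sketch of CAM as just described: three 3×3 dilated branches with rates 1, 3, 5, a C/4 squeeze, a 1×1 expansion back to C, and concatenation fusion (the variant chosen in Table 1). The padding choices, activation, and the 1×1 convolution after concatenation are assumptions:

```python
import torch
import torch.nn as nn

# Sketch of CAM: three parallel dilated-conv branches with different
# receptive fields, fused by channel concatenation.
class CAM(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.branches = nn.ModuleList()
        for rate in (1, 3, 5):
            self.branches.append(nn.Sequential(
                # padding = rate keeps the spatial size unchanged for a 3x3 kernel
                nn.Conv2d(channels, channels // 4, 3, padding=rate, dilation=rate),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // 4, channels, 1),  # expand back to C
            ))
        # fuse the three concatenated outputs back to C channels
        self.fuse = nn.Conv2d(3 * channels, channels, 1)

    def forward(self, x):  # x: [bs, C, H, W]
        feats = [b(x) for b in self.branches]       # three receptive fields
        return self.fuse(torch.cat(feats, dim=1))   # concatenation fusion

out = CAM(256)(torch.randn(2, 256, 14, 14))  # -> [2, 256, 14, 14]
```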

    Ablation experiments verify the effectiveness of each fusion scheme; the results are shown in Table 1.

    表  1  CAM消融实验结果
    Table  1.  Ablation Experimental Results of CAM (%)

    Method            APs    APm    ARs    ARm
    Baseline          34.8   60.5   57.9   78.7
    Additive fusion   35.6   63.0   60.5   81.8
    Adaptive fusion   36.0   63.1   58.9   81.0
    Concat fusion     36.6   61.0   59.8   79.5

    Note: the baseline is YOLOV3, the test dataset is VOC, and AP/AR are at IOU=0.5, where IOU is the intersection over union. APs and APm are the average precision for small and medium objects; ARs and ARm are the average recall for small and medium objects.

    As Table 1 shows, concatenation fusion yields the largest gain for small objects, raising APs and ARs by 1.8% and 1.9% respectively. Adaptive fusion helps medium objects most, raising APm by 2.6%. Additive fusion falls roughly between the two, so this paper adopts concatenation fusion.

    Some feature maps are visualized to illustrate the effect of CAM, as shown in Fig. 3.

    图  3  上下文信息增强效果图
    Figure  3.  Context information augmentation effect diagrams

    Fig. 3(b) shows CAM's input feature map, where objects produce only weak responses appearing as small "white dots". Fig. 3(c) shows CAM's output feature map: responses at object locations are clearly stronger and cover a wider area, because CAM blends the surrounding context into the features. Injecting the context extracted by CAM into the network therefore helps small-object detection.

    FPN fuses features of different scales, but those features carry non-negligible semantic differences; fusing them directly can introduce much redundant and conflicting information and weaken multi-scale representation. To suppress conflicting information, this paper proposes FRM, whose structure is shown in Fig. 4.

    图  4  FRM结构
    Figure  4.  The structure of FRM

    Fig. 4(a) shows the FRM attached after the second FPN level. As shown, the module takes $\boldsymbol{X}^1, \boldsymbol{X}^2, \boldsymbol{X}^3$ (the three FPN outputs) as input; the three inputs are first rescaled to a common size, giving $\boldsymbol{R}^1, \boldsymbol{R}^2, \boldsymbol{R}^3$, then concatenation and convolution compress the channels of all input features to 3, followed by the channel purification module and the spatial purification module in parallel.

    The structure of the channel purification module is shown in Fig. 4(b). To compute channel attention, average pooling and max pooling are combined to aggregate the global spatial information of the image. Let $\boldsymbol{X}^m$ denote the $m$-th ($m \in \{1,2,3\}$) input feature map of FRM; the output can be expressed as

    \boldsymbol{U} = \boldsymbol{\alpha} \times RS(\boldsymbol{X}^1) + \boldsymbol{\beta} \times \boldsymbol{X}^2 + \boldsymbol{\gamma} \times RS(\boldsymbol{X}^3). (1)

    where RS is the resize function, which in Eq. (1) rescales $\boldsymbol{X}^1$ and $\boldsymbol{X}^3$ to the scale of $\boldsymbol{X}^2$. $\boldsymbol{\alpha}$, $\boldsymbol{\beta}$, $\boldsymbol{\gamma}$ are channel-adaptive weights of size 1×1×1. After normalization, $\boldsymbol{\alpha}$, $\boldsymbol{\beta}$, $\boldsymbol{\gamma}$ represent the relative weights of the three inputs: larger values indicate stronger responses, so multiplying them with the inputs amplifies strongly responding inputs and suppresses weak ones, enhancing useful information while suppressing unimportant noise. $\boldsymbol{\alpha}$, $\boldsymbol{\beta}$, $\boldsymbol{\gamma}$ can be expressed as

    [\boldsymbol{\alpha}, \boldsymbol{\beta}, \boldsymbol{\gamma}] = {\rm sigmoid}[AvgPool(\boldsymbol{F}) + MaxPool(\boldsymbol{F})]. (2)

    where $\boldsymbol{F}$ is the feature map labeled in Fig. 4(a), and AvgPool and MaxPool are the average-pooling and max-pooling operations.
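    A sketch of this channel purification path of Eqs. (1)-(2) follows; here feat is the 3-channel map F from Fig. 4(a), and x1, x3 are assumed to be already resized (the RS step) to x2's scale:

```python
import torch
import torch.nn.functional as F

# Channel purification (Eqs. 1-2): one scalar weight per input level,
# from global average + max pooling of the 3-channel map F.
def channel_purify(feat, x1, x2, x3):
    pooled = F.adaptive_avg_pool2d(feat, 1) + F.adaptive_max_pool2d(feat, 1)
    w = torch.sigmoid(pooled)                    # [bs, 3, 1, 1], Eq. (2)
    alpha, beta, gamma = w[:, 0:1], w[:, 1:2], w[:, 2:3]
    return alpha * x1 + beta * x2 + gamma * x3   # U, Eq. (1)
```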

    The spatial purification module uses a softmax function to normalize the feature map spatially, yielding each point's weight relative to all other positions, which is then multiplied with the inputs. Its output can be expressed as

    \boldsymbol{D} = \boldsymbol{\mu}_{(x,y)} \times RS(\boldsymbol{X}^1_{(x,y)}) + \boldsymbol{\nu}_{(x,y)} \times \boldsymbol{X}^2_{(x,y)} + \boldsymbol{\omega}_{(x,y)} \times RS(\boldsymbol{X}^3_{(x,y)}). (3)

    where $(x, y)$ are the spatial coordinates of the feature map and $\boldsymbol{\mu}$, $\boldsymbol{\nu}$, $\boldsymbol{\omega}$ are spatially adaptive weights: object regions respond strongly and receive larger weights, while background regions receive smaller ones. $\boldsymbol{\mu}$, $\boldsymbol{\nu}$, $\boldsymbol{\omega}$ have the same spatial size as the inputs, so multiplying them directly with the inputs amplifies object features and suppresses background noise. They are given by Eq. (4).

    [\boldsymbol{\mu}, \boldsymbol{\nu}, \boldsymbol{\omega}] = {\rm softmax}(\boldsymbol{F}). (4)

    The softmax function normalizes the feature parameters and improves the model's generalization ability. The total output of the module is then

    \boldsymbol{P} = \boldsymbol{U} + \boldsymbol{D}. (5)

    Features from all FPN levels are fused under the guidance of the adaptive weights, and the fusion results serve as the outputs of the whole network.
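    Continuing the sketch, the spatial path of Eqs. (3)-(4) and the total output of Eq. (5) follow; whether the softmax in Eq. (4) normalizes over spatial positions (as the prose reads) or over the three levels is ambiguous, so the spatial reading is taken here as an assumption, and channel_purify is the function from the previous sketch:

```python
import torch

# Spatial purification (Eqs. 3-4): per-position weights for the three inputs.
def spatial_purify(feat, x1, x2, x3):
    bs, c, h, w = feat.shape                 # c == 3, one channel per input
    # softmax over all spatial positions of each channel (spatial reading of Eq. 4)
    weights = feat.view(bs, c, -1).softmax(dim=-1).view(bs, c, h, w)
    mu, nu, omega = weights[:, 0:1], weights[:, 1:2], weights[:, 2:3]
    return mu * x1 + nu * x2 + omega * x3    # D, Eq. (3)

def frm_output(feat, x1, x2, x3):            # P = U + D, Eq. (5)
    return channel_purify(feat, x1, x2, x3) + spatial_purify(feat, x1, x2, x3)
```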

    To illustrate the effect of the feature refinement module more intuitively, Fig. 5 shows some visualized feature maps. Since small-object detection is dominated by the lowest FPN level, only that level is visualized. F3, L3, and P3 in Fig. 5 correspond to the labels F3, L3, and P3 in Fig. 1.

    图  5  FRM可视化结果
    Figure  5.  Visualization results of FRM

    As Fig. 5 shows, F3 can roughly locate objects but contains considerable background noise, risking false detections. Compared with F3, L3 has clearly less background information, which results from FPN fusing in high-level information: high-level features attend to abstract object information rather than background, so background is neutralized. But because feature granularity differs across levels, conflicting information is introduced and object responses are weakened. In P3 the object features are strengthened and the boundary between object and background becomes sharper. This visual analysis shows that the proposed FRM reduces the conflicting information that interferes with small objects and improves discriminability, thereby raising the small-object detection rate.

    In current mainstream public datasets, the number of small objects, or of images containing them, is far smaller than that of larger objects; statistics for the VOC dataset are shown in Table 2. Meanwhile, as Fig. 6(a) shows, small objects generate far fewer positive samples than larger ones, so they contribute far less to the loss and the network's convergence keeps tilting toward larger objects.

    表  2  VOC数据集目标尺寸统计结果
    Table  2.  Statistical Results of Object Sizes on the VOC Dataset (%)

    Statistic                  Small   Medium   Large
    Share of object count      10.0    16.6     73.4
    Share of image count       8.2     16.2     75.6

    To mitigate this problem, during training we copy, shrink, and paste objects within an image to increase the number of positive samples generated by small objects and their contribution to the loss, making training more balanced. The augmentation effect is shown in Fig. 6(b) and Fig. 6(c).

    图  6  数据增强示例
    Figure  6.  Data augmentation examples

    Fig. 6(b) and Fig. 6(c) show examples of pasting once: solid boxes are original objects and dashed boxes are pasted ones. A large-object image patch is copied, shrunk, and then pasted at a different location in the original image. Our augmentation deliberately does not copy small-object regions and paste them elsewhere, because images containing small objects are scarce in the dataset; merely copy-pasting small objects would still leave their loss contribution low in many batches. We also study how the number of pastes affects small-object detection performance; the results are shown in Table 3, and a code sketch follows the analysis below.

    表  3  数据增强消融实验结果
    Table  3.  Ablation Experimental Results of Data Augmentation (%)

    Pastes         APs    APm    ARs    ARm
    0 (baseline)   34.8   60.5   57.9   78.7
    1              37.3   62.7   59.8   80.9
    2              36.8   62.6   58.0   81.0
    3              33.2   59.7   58.0   79.8

    Note: the baseline is YOLOV3 and AP/AR are at IOU=0.5, where IOU is the intersection over union. APs and APm are the average precision for small and medium objects; ARs and ARm are the average recall for small and medium objects.

    Table 3 shows that as the number of pastes grows, the small-object detection rate declines and can even fall below the baseline, probably because more pasting progressively distorts the original data distribution, hurting test-set performance. With one paste, APs improves by 2.5% and ARs by 1.9%, and the medium-object rate also rises slightly, so pasting one object is the best setting.
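    The copy-reduce-paste step can be sketched as below for one HxWx3 image with boxes given as [x1, y1, x2, y2] lists. The size threshold, shrink factor, and the omitted overlap check are illustrative assumptions; per Table 3, the best setting is a single paste:

```python
import random
import cv2

# Copy a large-object patch, reduce it, paste it back at a random location,
# and append the corresponding new small box and label.
def copy_reduce_paste(img, boxes, labels, min_side=96, scale=0.25, n_paste=1):
    h, w = img.shape[:2]
    large = [i for i, (x1, y1, x2, y2) in enumerate(boxes)
             if x2 - x1 >= min_side and y2 - y1 >= min_side]
    for _ in range(n_paste):
        if not large:
            break
        i = random.choice(large)                 # copy a large object
        x1, y1, x2, y2 = boxes[i]
        pw = max(1, int((x2 - x1) * scale))      # reduce
        ph = max(1, int((y2 - y1) * scale))
        patch = cv2.resize(img[y1:y2, x1:x2], (pw, ph))
        px = random.randint(0, w - pw)           # paste at a random spot
        py = random.randint(0, h - ph)
        img[py:py + ph, px:px + pw] = patch      # (no overlap check here)
        boxes.append([px, py, px + pw, py + ph])
        labels.append(labels[i])
    return img, boxes, labels
```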

    Experiments are conducted on two datasets [18]: VOC and TinyPerson. VOC has 22136 training images and 4952 test images over 20 classes. TinyPerson contains 2 classes, with 798 training images and 816 test images; its scenes are mostly long-range images with large backgrounds, the average size of annotated objects is 18 pixels, and it is a genuinely small-object dataset.

    The evaluation metrics used are: precision (P), the proportion of the relevant class among all detection results, and recall (R), the proportion of the relevant class detected among all ground truths. From the P-R curve, the mean average precision (mAP) over all large, medium, and small objects can be computed:

    mAP = \frac{1}{k}\sum\limits_{n=1}^{N} P(n) \times \Delta r(n). (6)

    where N is the size of the test set, P(n) is the precision at n images, Δr(n) is the change in recall as n goes from n−1 to n, and k is the number of classes. Subscripts s, m, and l denote performance on small-, medium-, and large-scale objects. All experiments are run under the same software and hardware conditions (the pytorch [19] framework, an Intel Core i7-5820k CPU @ 3.30 GHz, 16 GB RAM, and a GeForce GTX TITAN GPU).
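    For reference, below is a generic all-point AP computation from one class's precision-recall curve (the area-under-P-R reading of Eq. (6) before averaging over the k classes); the exact interpolation protocol used in the paper is an assumption:

```python
import numpy as np

# AP from a P-R curve via all-point interpolation; recall/precision are
# arrays sorted by decreasing detection confidence.
def average_precision(recall, precision):
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([0.0], precision, [0.0]))
    p = np.maximum.accumulate(p[::-1])[::-1]      # precision envelope
    idx = np.where(r[1:] != r[:-1])[0]            # points where recall moves
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))  # sum of P * dr

# mAP then averages the per-class APs: mAP = sum(aps) / k
```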

    Fig. 7 shows the loss curves during training. We train with the SGD optimizer for 50 epochs (the first 2 as warm-up), with batch size 8 and an initial learning rate of 0.0001; the training loss decreases smoothly. Some visualized features are shown in Fig. 8.

    图  7  损失曲线
    Figure  7.  The curve of loss
    图  8  训练特征图可视化效果
    Figure  8.  Visualization results of feature maps in training
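    A skeleton of the reported training schedule (SGD, 50 epochs with a 2-epoch linear warm-up, batch size 8, initial learning rate 0.0001) is sketched below; the model, data, loss, and momentum value are stand-ins, not the paper's code:

```python
import torch
import torch.nn.functional as F

model = torch.nn.Linear(10, 1)                            # stand-in for the detector
train_loader = [(torch.randn(8, 10), torch.randn(8, 1))]  # batch-size-8 stand-in

base_lr, warmup_epochs, total_epochs = 1e-4, 2, 50
optimizer = torch.optim.SGD(model.parameters(), lr=base_lr, momentum=0.9)

for epoch in range(total_epochs):
    if epoch < warmup_epochs:                    # linear warm-up, first 2 epochs
        for g in optimizer.param_groups:
            g["lr"] = base_lr * (epoch + 1) / warmup_epochs
    for images, targets in train_loader:
        loss = F.mse_loss(model(images), targets)  # stand-in for the detection loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```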

    As Fig. 8 shows, Fig. 8(b) is a shallow feature, where the network attends to object texture, and Fig. 8(c) is a deep feature, where the image information becomes abstract and the network attends to high-level semantics.

    To verify the effectiveness of our method on small objects, experiments are conducted on the TinyPerson and VOC datasets.

    We reproduce the results of four algorithms on TinyPerson; since this dataset contains almost exclusively small objects, only APs is compared, as shown in Table 4.

    表  4  TinyPerson数据集上的检测结果
    Table  4.  Detection Results on the TinyPerson Dataset (%)

    Method              Backbone     APs
    Mask R-CNN [20]     ResNet50     42.5
    AL-MDN [21]         VGG16        34.1
    DSFD [22]           ResNet152    51.6
    YOLOV5 [23]         CSPDarkNet   54.3
    Ours                Darknet53    55.1

    Note: APs is the average precision for small objects.

    As Table 4 shows, our method reaches 55.1% APs on this dataset, 0.8% and 3.5% higher than YOLOV5 and DSFD respectively, and 21% and 12.6% higher than AL-MDN and Mask R-CNN.

    We also reproduce three relatively recent detectors on VOC and compare their AP and AR on small and medium objects; the results are shown in Table 5:

    表  5  VOC数据集上的小目标检测结果
    Table  5.  Results of Small Object Detection on the VOC Dataset (%)

    Method            APs    APm    ARs    ARm
    RefineDet [10]    11.6   34.9   20.2   39.9
    CenterNet [24]    9.2    31.3   17.4   43.0
    YOLOV4 [16]       13.0   34.5   18.1   42.8
    Ours              16.9   33.4   29.4   45.8

    Note: AP/AR are over IOU ∈ [0.5, 0.95], where IOU is the intersection over union. APs and APm are the average precision for small and medium objects; ARs and ARm are the average recall for small and medium objects.

    Table 5 shows that our method exceeds YOLOV4 by 3.9% APs and 11.3% ARs, RefineDet by 5.3% APs and 9.2% ARs, and CenterNet by 7.7% APs and 12.0% ARs. Our method holds a clear advantage in small-object recall, indicating a strong ability to find small objects.

    We compare the mAP of our method on VOC with recent one-stage and two-stage algorithms, as shown in Table 6.

    表  6  VOC数据集上的实验结果(IOU=0.5)
    Table  6.  Experimental Results on the VOC Dataset (IOU=0.5)

    Type        Method              Backbone    Input size   mAP/%
    Two-stage   Faster R-CNN [4]    ResNet101   1000×600     76.4
    Two-stage   R-FCN [3]           ResNet101   1000×600     80.5
    Two-stage   HyperNet [25]       VGG16       1000×600     76.3
    Two-stage   CoupleNet [26]      ResNet101   1000×600     82.7
    Two-stage   Reconfig [27]       ResNet101   1000×600     82.4
    Two-stage   IPG-Net [28]        IPGNet101   1000×600     84.8
    One-stage   SSD [2]             VGG16       512×512      79.8
    One-stage   RefineDet [10]      VGG16       512×512      81.8
    One-stage   RFBNet [29]         VGG16       512×512      82.2
    One-stage   ScratchDet [30]     ResNet34    320×320      80.4
    One-stage   PFPNet [31]         VGG16       512×512      82.3
    One-stage   Ours                Darknet53   448×448      83.6
    One-stage   Ours+               Darknet53   448×448      85.1

    Note: "+" denotes multi-scale testing.

    Table 6 shows that among one-stage algorithms our method performs best, exceeding PFPNet's mAP by 1.3%. Against two-stage algorithms it outperforms most of them, but is 1.2% below IPG-Net's mAP, mainly because our backbone is weaker and our input images are smaller. With multi-scale testing, our detection rate on VOC reaches 85.1%, higher than all compared algorithms.

    Our method has a clear advantage on small objects; both the overall detection results and the small-object detection and recall rates are good, surpassing most detection algorithms.

    Ablation experiments verify each module's contribution by adding the data augmentation, CAM, and FRM one by one to the YOLOV3 baseline; the results are shown in Table 7:

    表  7  消融实验结果
    Table  7.  Ablation Experimental Results (%, IOU=0.5)

    Baseline   Aug   CAM   FRM     APs    APm    APl    ARs    ARm    ARl
    √                              34.8   60.5   83.6   57.9   78.7   92.8
    √          √                   37.3   62.7   83.4   59.8   80.9   93.0
    √                √             36.6   61.0   84.2   59.8   79.5   93.1
    √                      √       37.6   62.1   83.9   59.0   79.1   92.6
    √          √     √     √       40.2   64.1   84.6   64.8   81.0   93.9

    Note: "√" means the module is included, and IOU is the intersection over union. APs, APm, and APl are the average precision for small, medium, and large objects; ARs, ARm, and ARl are the corresponding average recall.

    Overall, the proposed method markedly improves the detection rate, especially for small and medium objects, in line with its motivation. As Table 7 shows, APs rises by 5.4%, APm by 3.6%, and APl by 1.0%. Recall also improves at all scales: ARs rises by 6.9%, ARm by 1.3%, and ARl by 1.1%.

    The copy-reduce-paste data augmentation raises APs and APm by 2.5% and 2.2% respectively, with APl slightly lower, showing that the method effectively improves small-object detection.

    CAM raises the small-object APs and ARs by 1.8% and 0.6% respectively, confirming the importance of supplementary context for small-object detection.

    FRM raises APs and APm by 2.8% and 1.6% respectively, with APl essentially unchanged, showing that FRM filters out conflicting information in the features and improves the discriminability of smaller objects' features.

    Small objects have blurred features from which little can be extracted, which makes them a difficulty in object detection. To counter the dissipation of small-object features, this paper introduces CAM, which extracts context through dilated convolutions with different dilation rates to supplement small-object context. Since small objects are easily drowned in conflicting information, FRM is proposed to combine channel and spatial adaptive fusion to suppress conflicts and improve feature discriminability. Meanwhile, a copy-reduce-paste small-object augmentation raises small objects' contribution to the loss function for more balanced training. Experiments show that the proposed small-object detection network performs well on both TinyPerson and VOC, outperforming most object detection algorithms.

    Acknowledgments: We thank the Supercomputing Center of Wuhan University for supporting the numerical computations in this paper.

    Author contributions: 肖进胜 and 赵陶 designed and implemented the network; 肖进胜 and 周剑 wrote the paper; 乐秋平 and 杨力衡 provided data support and polished the article.

  • 图  1   边缘智能联邦学习架构

    Figure  1.   Edge intelligent federated learning architecture

    图  2   FedCS协议概述

    Figure  2.   FedCS protocol overview

    图  3   模型分割迁移框架

    Figure  3.   Model segmentation migration framework

    图  4   自适应模型聚合与固定频率聚合的比较

    Figure  4.   Comparison of adaptive model aggregation and fixed frequency aggregation

    图  5   智能交通

    Figure  5.   Intelligent transportation

    图  6   通过空中计算并利用空间自由度进行参数聚合[125]

    Figure  6.   Parameter aggregation via over-the-air computation exploiting spatial degrees of freedom[125]

    表  1   现有联邦学习综述研究对比

    Table  1   Comparison of Studies on Existing Federated Learning Reviews

    The table compares five aspects: resource optimization, incentive mechanisms, algorithm design, optimization from the wireless-network perspective, and wireless applications.

    Survey         Coverage of the five aspects       Remarks
    Ref. [11]      three covered (√), two not (×)     Considers FL in edge networks
    Ref. [12]      one covered (√), four not (×)
    Ref. [13]      one covered (√), four not (×)
    Ref. [14]      none covered (five ×)              Mainly considers FL for caching and computation offloading
    Ref. [15]      one covered (√), four not (×)
    This survey    all five covered (√)

    Note: "√" indicates the work is done in the reference; "×" indicates it is not.

    表  2   联邦学习客户端选择方案比较

    Table  2   Comparison of Federated Learning Client Selection Schemes

    1) Computation and communication resource optimization. Ideas: discarding unnecessary model updates [17], client tiering [18], controlling the learning pace [19], clustering-based self-organizing learning [20], bandwidth allocation under long-term energy constraints [21], setting learning deadlines [25], and selection based on device computing capability [26].

    2) Incentive mechanisms.
    - Contract theory [28]: reputation-based incentive feedback encourages reliable end devices to participate in learning. Client-side goal: balance reward against energy consumption. Server-side goal: maximize the profit given by the gap between the global iteration time and the payments compensated to clients.
    - Stackelberg game [31]: achieves a global model with high-quality wireless communication efficiency. Client-side goal: balance reward (i.e., accuracy level) against cost (i.e., communication and computation cost). Server-side goal: obtain a concave function over different accuracy levels.
    - Auction theory [32-33]: minimizes the cost of client bidding. Client-side goal: balance reward against cost. Server-side goal: minimize the bidding cost.

    3) Revised objective-function weights [30]: to introduce potential fairness and reduce the variance of training accuracy, q-FedAvg assigns higher relative weights to local devices with high empirical loss.

    表  3   模型压缩技术总结

    Table  3   Summary of Model Compression Techniques

    - Structured and sketched updates [48]: compress the transmitted model to improve client-to-server communication efficiency. Pros/cons: client-to-server parameter compression, at the cost that complex model structures may run into convergence problems.
    - Server-to-client updates [49]: compress the transmitted model to improve server-to-client communication efficiency. Pros/cons: server-to-client parameter compression, at the cost of lower accuracy and possible convergence problems.
    - Sketching [50]: compress model updates with a count sketch, then exploit the mergeability of sketches to combine the updates from clients. Pros/cons: solves convergence problems caused by sparse client participation and maximizes communication efficiency under a best-effort network assumption, but may run into network bottlenecks.
    - Adam [1]: improves the FedAvg algorithm with Adam optimization and a compression scheme. Pros/cons: Adam optimization speeds up convergence while the compression scheme lowers communication overhead.
    - Model distillation [51-52]: exchanges model outputs (model state information), so the payload size depends only on the number of labels of the output dimension; federated distillation then realizes the weight-update rule. Pros/cons: addresses non-independently-and-identically-distributed data, at the cost that the wireless channel affects model training accuracy.
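    As one concrete instance of the update-compression idea summarized above, below is a generic top-k magnitude-based sparsification sketch; this plain scheme is for illustration only and is not the specific sketching or distillation methods of [48-52]:

```python
import torch

# Keep only the k largest-magnitude entries of a model update; send
# (values, indices) instead of the dense tensor and rebuild on the receiver.
def topk_sparsify(update: torch.Tensor, ratio: float = 0.01):
    flat = update.flatten()
    k = max(1, int(flat.numel() * ratio))
    _, idx = flat.abs().topk(k)
    return flat[idx], idx, update.shape

def densify(values, idx, shape):
    out = torch.zeros(shape).flatten()
    out[idx] = values
    return out.reshape(shape)

vals, idx, shape = topk_sparsify(torch.randn(64, 128))
restored = densify(vals, idx, shape)   # dense again, ~99% zeros
```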

    表  4   模型训练优化方法及特点

    Table  4   Optimization Methods and Characteristics of Model Training

    Method                      Characteristics                                                                   Sources
    Data offloading             Uses the computing power of edge servers to speed up model training               Refs. [36-38]
    Model partition/migration   Model partitioning plus privacy-protection techniques                             Refs. [42-44]
    Model compression           Compresses model outputs or intermediate gradients at different granularities     Refs. [48-52]

    表  5   主要联邦学习模型聚合技术的比较总结

    Table  5   A Comparative Summary of Major Federated Learning Model Aggregation Technologies

    - FedAvg [7] (statistical heterogeneity): clients perform several batch updates on their local data and transmit the updated weights, rather than gradients, to the server. From the statistical view, FedAvg has been shown to start diverging when data distributions differ across devices; from the systems view, it does not allow participating devices to perform a variable number of local updates according to their underlying system constraints.
    - FedProx [55] (statistical heterogeneity): adds a term to each client's local training subproblem to limit the impact of each local model update on the global model; it was proposed to improve convergence on statistically heterogeneous data. As in FedAvg, all devices are weighted equally at the global aggregation stage, since differences in device capability (e.g., hardware, battery) are not considered.
    - FedPAQ [53] (communication): allows clients to perform multiple local updates on the model before sharing updates with the server. As in FedAvg, the new global model is the average of the local models, but this requires high complexity in both strongly convex and nonconvex settings.
    - FedMA [54] (statistical heterogeneity): accounts for the permutation invariance of neurons before aggregation and allows the global model size to adapt; a Bayesian nonparametric mechanism adjusts the central model's size to the heterogeneity of the data distributions. This mechanism is vulnerable to model poisoning: an adversary can easily trick the system into expanding the global model to fit any poisoned local model.
    - Turbo-Aggregate [62] (communication and security): a multi-group strategy in which clients are split into groups and model updates are shared across groups in a circular fashion, plus an additive secret-sharing mechanism that protects users' private data. It is well suited to wireless topologies where network conditions and user availability change quickly. Its embedded secure aggregation handles user dropout effectively but cannot accommodate new users joining the network; extending it requires a self-configuring protocol that reconfigures the system specification (i.e., the multi-group structure and coding settings) to preserve the resilience and privacy guarantees.
    - Adaptive aggregation [63] (communication and statistical heterogeneity): an adaptive control algorithm that finds the best trade-off between local updates and global parameter aggregation under a given resource budget; it varies the global aggregation frequency to ensure the desired model performance while using available resources (e.g., energy) efficiently, and is applicable to FL in edge computing. Its convergence guarantee currently covers only convex loss functions.
    - HierFAVG [65] (communication): a hierarchical client-edge-cloud aggregation architecture in which edge servers aggregate their clients' model updates and then send them to the cloud server for global aggregation; this multi-tier structure enables more efficient model exchange than the flat client-cloud architecture, but HierFAVG still suffers from stragglers and end-device dropout.
    - Adaptive task allocation [66] (device heterogeneity, communication, computation): maximizes learning accuracy under a latency constraint, given the total number of data distribution/aggregation rounds over heterogeneous channels and the local computation on heterogeneous devices; the scheme maximizes the number of local learning iterations of distributed learners (and thus learning accuracy) while respecting the time limit. It does not consider dynamic parameters such as varying channel states and data arrival times.
    - Fair aggregation [67] (device heterogeneity, task heterogeneity, communication, computation): a customized learning algorithm (CuFL) with an adaptive learning rate to fit different accuracy requirements and speed up local training, plus a fair global aggregation strategy at the edge server to minimize accuracy differences across heterogeneous end devices; CuFL lets an end device exit training early once its own accuracy requirement is met, minimizing the total learning time. It does not consider dynamic parameters such as varying channel states and data arrival times.
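    To make the FedAvg row concrete, below is a minimal single-round sketch; the uniform weight averaging, fixed number of local epochs, and float-only parameters are simplifying assumptions (FedAvg proper weights clients by their data sizes):

```python
import copy
import torch
import torch.nn.functional as F

# One FedAvg round: every client trains the current global model on its own
# data, then the server averages the returned weights (not gradients).
def fedavg_round(global_model, client_loaders, lr=0.01, local_epochs=1):
    states = []
    for loader in client_loaders:
        local = copy.deepcopy(global_model)
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        for _ in range(local_epochs):
            for x, y in loader:                  # the client's own data
                loss = F.cross_entropy(local(x), y)
                opt.zero_grad()
                loss.backward()
                opt.step()
        states.append(local.state_dict())
    avg = {k: torch.stack([s[k] for s in states]).mean(0) for k in states[0]}
    global_model.load_state_dict(avg)
    return global_model

clients = [[(torch.randn(4, 10), torch.randint(0, 3, (4,)))] for _ in range(5)]
model = fedavg_round(torch.nn.Linear(10, 3), clients)
```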

    表  6   边缘网络下基于联邦学习的无人机应用

    Table  6   Unmanned Aerial Vehicle Applications Based on Federated Learning in Edge Networks

    - Edge content caching [95-96]. Clients: UAVs and edge servers; server: edge server. Data features: content attributes (freshness, location, memory footprint, content request history, etc.). Local/global model: content popularity prediction. Result: effectively determines which content should be stored in each cache.
    - UAVs as base stations [93]. Clients: ground users. Data features: information about ground-user mobility (location, direction, speed, etc.). Local/global model: prediction of ground-user patterns (mobility and content load). Result: optimizes UAV base-station deployment, improves network coverage and connectivity, and effectively serves popular content.
    - UAV trajectory planning [92]. Clients: UAVs; server: edge server or cloud. Data features: source and destination locations, UAV mobility information (speed, direction, position, altitude, etc.), UAV energy consumption, physical obstacles, service demands, etc. Local/global model: performance prediction for each potential path. Result: the UAV selects the optimal trajectory and optimizes service performance and energy consumption.
  • [1]

    Mills J, Hu Jia, Min Geyong. Communication-efficient federated learning for wireless edge intelligence in IoT[J]. IEEE Internet of Things Journal, 2019, 7(7): 5986−5994

    [2]

    Covington P, Adams J, Sargin E. Deep neural networks for YouTube recommendations[C] //Proc of the 10th ACM Conf on Recommender Systems. New York: ACM, 2016: 191−198

    [3]

    Parkhi O M, Vedaldi A, Zisserman A. Deep face recognition[C] //Proc of the 15th IEEE Int Conf on Computer Vision Workshop. Piscataway, NJ: IEEE, 2015: 258−266

    [4]

    Mowla N I, Tran N H, Doh I, et al. Federated learning-based cognitive detection of jamming attack in flying ad-hoc network[J]. IEEE Access, 2020, 8: 4338−4350 doi: 10.1109/ACCESS.2019.2962873

    [5]

    Brik B, Ksentini A, Bouaziz M. Federated learning for UAVs-enabled wireless networks: Use cases, challenges, and open problems[J]. IEEE Access, 2020, 8: 53841−53849 doi: 10.1109/ACCESS.2020.2981430

    [6]

    Abbas N, Zhang Yan, Taherkordi A, et al. Mobile edge computing: A survey[J]. IEEE Internet of Things Journal, 2017, 5(1): 450−465

    [7]

    Mcmahan B, Moore E, Ramage D, et al. Communication-efficient learning of deep networks from decentralized data[C] //Proc of the 20th Int Conf on Artificial Intelligence and Statistics. New York: PMLR, 2017 : 1273−1282.

    [8]

    Yang Qiang, Liu Yang, Chen Tianjian, et al. Federated machine learning: Concept and applications[J]. ACM Transactions on Intelligent Systems and Technology, 2019, 10(2): 1−19

    [9]

    Zhou Zhi, Yang Song, Pu Lingjun, et al. CEFL: Online admission control, data scheduling, and accuracy tuning for cost-efficient federated learning across edge nodes[J]. IEEE Internet of Things Journal, 2020, 7(10): 9341−9356 doi: 10.1109/JIOT.2020.2984332

    [10]

    Ruder S. An overview of gradient descent optimization algorithms[J]. arXiv preprint, arXiv: 1609.04747, 2016

    [11]

    Lim W Y B, Luong N C, Hoang D T, et al. Federated learning in mobile edge networks: A comprehensive survey[J]. IEEE Communications Surveys & Tutorials, 2020, 22(3): 2031−2063

    [12]

    Li Tian, Sahu A K, Talwalkar A, et al. Federated learning: Challenges, methods, and future directions[J]. IEEE Signal Processing Magazine, 2020, 37(3): 50−60 doi: 10.1109/MSP.2020.2975749

    [13]

    Li Qinbin, Wen Zeyi, Wu Zhaomin, et al. A survey on federated learning systems: Vision, hype and reality for data privacy and protection[J]. arXiv preprint, arXiv: 1907.09693, 2019

    [14]

    Wang Xiaofei, Han Yiwen, Wang Chenyang, et al. In-edge AI: Intelligentizing mobile edge computing, caching and communication by federated learning[J]. IEEE Network, 2019, 33(5): 156−165 doi: 10.1109/MNET.2019.1800286

    [15]

    Kairouz P, Mcmahan H B, Avent B, et al. Advances and open problems in federated learning[J]. arXiv preprint, arXiv: 1912.04977, 2019

    [16] 王艳,李念爽,王希龄,等. 编码技术改进大规模分布式机器学习性能综述[J]. 计算机研究与发展,2020,57(3):542−561 doi: 10.7544/issn1000-1239.2020.20190286

    Wang Yan, Li Nianshuang, Wang Xiling, et al. Coding-based performance improvement of distributed machine learning in large-scale clusters[J]. Journal of Computer Research and Development, 2020, 57(3): 542−561 (in Chinese) doi: 10.7544/issn1000-1239.2020.20190286

    [17]

    Jin Yibo, Jiao Lei, Qian Zhuzhong, et al. Resource-efficient and convergence-preserving online participant selection in federated learning[C] //Proc of the 40th IEEE Int Conf on Distributed Computing Systems (ICDCS). Piscataway, NJ: IEEE, 2020: 606−616

    [18]

    Chai Z, Ali A, Zawad S, et al. TiFL: A tier-based federated learning system[C] //Proc of the 29th Int Symp on High-Performance Parallel and Distributed Computing. New York: ACM, 2020: 125−136

    [19]

    Li Li, Xiong Haoyi, Guo Zhishan, et al. SmartPC: Hierarchical pace control in real-time federated learning system[C] //Proc of the 40th IEEE Real-Time Systems Symp (RTSS). Piscataway, NJ: IEEE, 2019: 406−418

    [20]

    Khan L U, Alsenwi M, Han Zhu, et al. Self organizing federated learning over wireless networks: A socially aware clustering approach[C] //Proc of the 34th Int Conf on Information Networking (ICOIN). Piscataway, NJ: IEEE, 2020: 453−458

    [21]

    Xu Jie, Wang Heqiang. Client selection and bandwidth allocation in wireless federated learning networks: A long-term perspective[J]. IEEE Transactions on Wireless Communications, 2020, 20(2): 1188−1200

    [22]

    Damaskinos G, Guerraoui R, Kermarrec A M, et al. Fleet: Online federated learning via staleness awareness and performance prediction[C] //Proc of the 21st Int Middleware Conf. New York: ACM, 2020: 163−177

    [23]

    Sprague M R, Jalalirad A, Scavuzzo M, et al. Asynchronous federated learning for geospatial applications[C] //Proc of the Joint European Conf on Machine Learning and Knowledge Discovery in Databases. Cham, Switzerland: Springer, 2018: 21−28

    [24]

    Wu Wentai, He Ligang, Lin Weiwei, et al. Safa: A semi-asynchronous protocol for fast federated learning with low overhead[J]. IEEE Transactions on Computers, 2020, 70(5): 655−668

    [25]

    Nishio T, Yonetani R. Client selection for federated learning with heterogeneous resources in mobile edge[C/OL] //Proc of the 53rd IEEE Int Conf on Communications. Piscataway, NJ: IEEE, 2019[2022-09-05].https://ieeexplore.ieee.org/document/8761315

    [26]

    Yoshida N, Nishio T, Morikura M, et al. Hybrid-FL for wireless networks: Cooperative learning mechanism using non-IID data[C/OL] //Proc of the 54th IEEE Int Conf on Communications (ICC). Piscataway, NJ: IEEE, 2020[2022-09-05].https://ieeexplore.ieee.org/abstract/document/9149323

    [27]

    Khan L U, Pandey S R, Tran N H, et al. Federated learning for edge networks: Resource optimization and incentive mechanism[J]. IEEE Communications Magazine, 2020, 58(10): 88−93 doi: 10.1109/MCOM.001.1900649

    [28]

    Kang Jiawen, Xiong Zehui, Niyato D, et al. Incentive mechanism for reliable federated learning: A joint optimization approach to combining reputation and contract theory[J]. IEEE Internet of Things Journal, 2019, 6(6): 10700−10714 doi: 10.1109/JIOT.2019.2940820

    [29]

    Kim H, Park J, Bennis M, et al. Blockchained on-device federated learning[J]. IEEE Communications Letters, 2019, 24(6): 1279−1283

    [30]

    Li Tian, Sanjabi M, Beirami A, et al. Fair resource allocation in federated learning[J]. arXiv preprint, arXiv: 1905.10497, 2020

    [31]

    Pandey S R, Tran N H, Bennis M, et al. A crowdsourcing framework for on-device federated learning[J]. IEEE Transactions on Wireless Communications, 2020, 19(5): 3241−3256 doi: 10.1109/TWC.2020.2971981

    [32]

    Le T H T, Tran N H, Tun Y K, et al. Auction based incentive design for efficient federated learning in cellular wireless networks[C/OL] //Proc of the IEEE Wireless Communications and Networking Conf (WCNC). Piscataway, NJ: IEEE, 2020[2022-09-05].https://ieeexplore.ieee.org/abstract/document/9120773

    [33]

    Jiao Yutao, Wang Ping, Niyato D, et al. Toward an automated auction framework for wireless federated learning services market[J]. IEEE Transactions on mobile Computing, 2020, 20(10): 3034−3048

    [34]

    Gao Xiaozheng, Wang Ping, Niyato D, et al. Auction-based time scheduling for backscatter-aided RF-powered cognitive radio networks[J]. IEEE Transactions on Wireless Communications, 2019, 18(3): 1684−1697 doi: 10.1109/TWC.2019.2895340

    [35]

    Ko BongJun, Wang Shiqiang, He Ting, et al. On data summarization for machine learning in multi-organization federations[C] //Proc of the 7th IEEE Int Conf on Smart Computing (SMARTCOMP). Piscataway, NJ: IEEE, 2019: 63−68

    [36]

    Valerio L, Passarella A, Conti M. Optimal trade-off between accuracy and network cost of distributed learning in mobile edge Computing: An analytical approach[C/OL] //Proc of the 18th Int Symp on a World of Wireless, Mobile and Multimedia Networks (WoWMoM). Piscataway, NJ: IEEE, 2017[2022-09-05].https://ieeexplore.ieee.org/abstract/document/7974310

    [37]

    Skatchkovsky N, Simeone O. Optimizing pipelined computation and communication for latency-constrained edge learning[J]. IEEE Communications Letters, 2019, 23(9): 1542−1546 doi: 10.1109/LCOMM.2019.2922658

    [38]

    Huang Yutao, Zhu Yifei, Fan Xiaoyi, et al. Task scheduling with optimized transmission time in collaborative cloud-edge learning[C/OL] //Proc of the 27th Int Conf on Computer Communication and Networks (ICCCN). Piscataway, NJ: IEEE, 2018[2022-09-05].https://ieeexplore.ieee.org/abstract/document/8487352

    [39]

    Dey S, Mukherjee A, Pal A, et al. Partitioning of CNN models for execution on fog devices[C] //Proc of the 1st ACM Int Workshop on Smart Cities and Fog Computing. New York: ACM, 2018: 19−24

    [40]

    Zhang Shigeng, Li Yinggang, Liu Xuan, et al. Towards real-time cooperative deep inference over the cloud and edge end devices[J]. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2020, 4(2): 1−24

    [41]

    Dey S, Mukherjee A, Pal A. Embedded deep inference in practice: Case for model partitioning[C] //Proc of the 1st Workshop on Machine Learning on Edge in Sensor Systems. New York: ACM, 2019: 25−30

    [42]

    Lin Bing, Huang Yinhao, Zhang Jianshan, et al. Cost-driven off-loading for DNN-based applications over cloud, edge, and end devices[J]. IEEE Transactions on Industrial Informatics, 2019, 16(8): 5456−5466

    [43]

    Wang Lingdong, Xiang Liyao, Xu Jiayu, et al. Context-aware deep model compression for edge cloud computing[C] //Proc of the 40th Int Conf on Distributed Computing Systems (ICDCS). Piscataway, NJ: IEEE, 2020: 787−797

    [44]

    Wang Ji, Zhang Jianguo, Bao Weidong, et al. Not just privacy: Improving performance of private deep learning in mobile cloud[C] //Proc of the 24th ACM SIGKDD Int Conf on Knowledge Discovery & Data Mining. New York: ACM, 2018: 2407−2416

    [45]

    Zhang Jiale, Wang Junyu, Zhao Yanchao, et al. An efficient federated learning scheme with differential privacy in mobile edge computing[C] //Proc of the Int Conf on Machine Learning and Intelligent Communications. Berlin: Springer, 2019: 538−550

    [46]

    Ivkin N, Rothchild D, Ullah E, et al. Communication-efficient distributed SGD with sketching[J]. Advances in Neural Information Processing Systems, 2019, 32: 13144−13154

    [47]

    Zhang Boyu, Davoodi A, Hu Yuhen. Exploring energy and accuracy tradeoff in structure simplification of trained deep neural networks[J]. IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 2018, 8(4): 836−84 doi: 10.1109/JETCAS.2018.2833383

    [48]

    Konečný J, Mcmahan H B, Yu F X, et al. Federated learning: Strategies for improving communication efficiency[J]. arXiv preprint, arXiv: 1610.05492, 2016

    [49]

    Caldas S, Konečny J, Mcmahan H B, et al. Expanding the reach of federated learning by reducing client resource requirements[J]. arXiv preprint, arXiv: 1812.07210, 2018

    [50]

    Rothchild D, Panda A, Ullah E, et al. FetchSGD: Communication-efficient federated learning with sketching[C] //Proc of the 37th Int Conf on Machine Learning. New York: PMLR, 2020: 8253−8265

    [51]

    Jeong E, Oh S, Kim H, et al. Communication-efficient on-device machine learning: Federated distillation and augmentation under non-IID private data[J]. arXiv preprint, arXiv: 1811.11479, 2018

    [52]

    Ahn J H, Simeone O, Kang J. Wireless federated distillation for distributed edge learning with heterogeneous data[C/OL] //Proc of the 30th Annual Int Symp on Personal, Indoor and Mobile Radio Communications (PIMRC). Piscataway, NJ: IEEE, 2019[2022-09-05]. https://ieeexplore.ieee.org/abstract/document/8904164

    [53]

    Reisizadeh A, Mokhtari A, Hassani H, et al. FedPAQ: A communication-efficient federated learning method with periodic averaging and quantization[C] //Proc of the 23rd Int Conf on Artificial Intelligence and Statistics. New York: PMLR, 2020: 2021−2031

    [54]

    Karimireddy S P, Kale S, Mohri M, et al. SCAFFOLD: Stochastic controlled averaging for federated learning[C] //Proc of the 37th Int Conf on Machine Learning. New York: PMLR, 2020: 5132−5143

    [55]

    Li Tian, Sahu A K, Zaheer M, et al. Federated optimization in heterogeneous networks[J]. Proceedings of Machine Learning and Systems, 2020, 2: 429−450

    [56]

    Wang Hongyi, Yurochkin M, Sun Yuekai, et al. Federated learning with matched averaging[J]. arXiv preprint, arXiv: 2002.06440, 2020

    [57]

    Pillutla K, Kakade S M, Harchaoui Z. Robust aggregation for federated learning[J]. IEEE Transactions on Signal Processing, 2022, 70: 1142−1154 doi: 10.1109/TSP.2022.3153135

    [58]

    Grama M, Musat M, Muñoz-González L, et al. Robust aggregation for adaptive privacy preserving federated learning in healthcare[J]. arXiv preprint, arXiv: 2009.08294, 2020

    [59]

    Ang Fan, Chen Li, Zhao Nan, et al. Robust federated learning with noisy communication[J]. IEEE Transactions on Communications, 2020, 68(6): 3452−3464 doi: 10.1109/TCOMM.2020.2979149

    [60]

    Lu Yanyang, Fan Lei. An efficient and robust aggregation algorithm for learning federated CNN[C/OL] //Proc of the 3rd Int Conf on Signal Processing and Machine Learning. New York: ACM, 2020[2022-09-05].https://dl.acm.org/doi/abs/10.1145/3432291.3432303

    [61]

    Chen Zhou, Lv Na, Liu Pengfei, et al. Intrusion detection for wireless edge networks based on federated learning[J]. IEEE Access, 2020, 8: 217463−217472 doi: 10.1109/ACCESS.2020.3041793

    [62]

    So J, Güler B, Avestimehr A S. Turbo-aggregate: Breaking the quadratic aggregation barrier in secure federated learning[J]. IEEE Journal on Selected Areas in Information Theory, 2021, 2(1): 479−489 doi: 10.1109/JSAIT.2021.3054610

    [63]

    Wang Shiqiang, Tuor T, Salonidis T, et al. Adaptive federated learning in resource constrained edge computing systems[J]. IEEE Journal on Selected Areas in Communications, 2019, 37(6): 1205−1221 doi: 10.1109/JSAC.2019.2904348

    [64]

    Zhang Xiongtao, Zhu Xiaomin, Wang Ji, et al. Federated learning with adaptive communication compression under dynamic bandwidth and unreliable networks[J]. Information Sciences, 2020, 540(5): 242−262

    [65]

    Liu Lumin, Zhang Jun, Song Shenghui, et al. Client-edge-cloud hierarchical federated learning[C/OL] //Proc of the 54th IEEE Int Conf on Communications (ICC). Piscataway, NJ: IEEE, 2020[2022-09-05].https://ieeexplore.ieee.org/abstract/document/9148862

    [66]

    Mohammad U, Sorour S. Adaptive task allocation for mobile edge learning[C/OL] //Proc of the Wireless Communications and Networking Conf Workshop (WCNCW). Piscataway, NJ: IEEE, 2019[2022-09-05].https://ieeexplore.ieee.org/abstract/document/8902527

    [67]

    Jiang Hui, Liu Min, Yang Bo, et al. Customized federated learning for accelerated edge computing with heterogeneous task targets[J]. Computer Networks, 2020, 183(12): 107569−107569

    [68]

    Lin Yujun, Han Song, Mao Huizi, et al. Deep gradient compression: Reducing the communication bandwidth for distributed training[J]. arXiv preprint, arXiv: 1712.01887, 2017

    [69]

    Liu Wei, Chen Li, Chen Yunfei, et al. Accelerating federated learning via momentum gradient descent[J]. IEEE Transactions on Parallel and Distributed Systems, 2020, 31(8): 1754−1766 doi: 10.1109/TPDS.2020.2975189

    [70]

    Abdi A, Saidutta Y M, Fekri F. Analog compression and communication for federated learning over wireless MAC[C/OL] //Proc of the 21st Int Workshop on Signal Processing Advances in Wireless Communications (SPAWC). Piscataway, NJ: IEEE, 2020[2022-09-05]. https://ieeexplore.ieee.org/abstract/document/9154309

    [71]

    Alistarh D, Grubic D, Li J, et al. QSGD: Communication-efficient SGD via gradient quantization and encoding[J]. Advances in Neural Information Processing Systems, 2017, 30: 1709−1720

    [72]

    Bernstein J, Wang Yuxiang, Azizzadenesheli K, et al. signSGD: Compressed optimisation for non-convex problems[C] //Proc of the 35th Int Conf on Machine Learning. New York: PMLR, 2018: 560−569

    [73]

    Zhu Guangxu, Wang Yong, Huang Kaibin. Broadband analog aggregation for low-latency federated edge learning[J]. IEEE Transactions on Wireless Communications, 2019, 19(1): 491−506

    [74]

    Amiri M M, Gündüz D. Federated learning over wireless fading channels[J]. IEEE Transactions on Wireless Communications, 2020, 19(5): 3546−3557 doi: 10.1109/TWC.2020.2974748

    [75]

    Wu Jiaxiang, Huang Weidong, Huang Junzhou, et al. Error compensated quantized SGD and its applications to large-scale distributed optimization[C] //Proc of the 35th Int Conf on Machine Learning. New York: PMLR, 2018: 5325−5333

    [76]

    Basu D, Data D, Karakus C, et al. Qsparse-local-SGD: Distributed SGD with quantization, sparsification, and local computations[J]. arXiv preprint, arXiv: 1906.02367, 2019

    [77]

    Xin Ran, Kar S, Khan U A. An introduction to decentralized stochastic optimization with gradient tracking[J]. arXiv preprint, arXiv: 1907.09648, 2019

    [78]

    Haddadpour F, Kamani M M, Mokhtari A, et al. Federated learning with compression: Unified analysis and sharp guarantees[C] //Proc of the 24th Int Conf on Artificial Intelligence and Statistics. New York: PMLR, 2021: 2350−2358

    [79]

    Tang Hanlin, Lian Xiangru, Yan Ming, et al. D2: Decentralized training over decentralized data[C] //Proc of the 35th Int Conf on Machine Learning. New York: PMLR, 2018: 4848−4856

    [80]

    Amiri M M, Gündüz D. Machine learning at the wireless edge: Distributed stochastic gradient descent over-the-air[J]. IEEE Transactions on Signal Processing, 2020, 68(1): 2155−2169

    [81]

    Zhu Guangxu, Du Yuqing, Gündüz D, et al. One-bit over-the-air aggregation for communication-efficient federated edge learning: Design and convergence analysis[J]. IEEE Transactions on Wireless Communications, 2020, 20(3): 2120−2135

    [82]

    Lu Yunlong, Huang Xiaohong, Dai Yueyue, et al. Differentially private asynchronous federated learning for mobile edge computing in urban informatics[J]. IEEE Transactions on Industrial Informatics, 2019, 16(3): 2134−2143

    [83]

    Sun Jun, Chen Tianyi, Giannakis G B, et al. Communication-efficient distributed learning via lazily aggregated quantized gradients[J]. arXiv preprint, arXiv: 1909.07588, 2019

    [84]

    Shokri R, Shmatikov V. Privacy-preserving deep learning[C] //Proc of the 22nd ACM SIGSAC Conf on Computer and Communications Security. New York: ACM, 2015: 1310−1321

    [85]

    Elgabli A, Park J, Bedi A S, et al. Q-GADMM: Quantized group ADMM for communication efficient decentralized machine learning[J]. IEEE Transactions on Communications, 2020, 69(1): 164−181

    [86]

    Elgabli A, Park J, Bedi A S, et al. GADMM: Fast and communication efficient framework for distributed machine learning[J]. Journal of Machine Learning Research, 2020, 21(76): 1−39

    [87]

    Elgabli A, Park J, Ahmed S, et al. L-FGADMM: Layer-wise federated group ADMM for communication efficient decentralized deep learning[C/OL] //Proc of the IEEE Wireless Communications and Networking Conf(WCNC). Piscataway, NJ: IEEE, 2020[2022-09-05].https://ieeexplore.ieee.org/abstract/document/9120758

    [88]

    Zhang Wei, Gupta S, Lian Xiangru, et al. Staleness-aware async-SGD for distributed deep learning[J]. arXiv preprint, arXiv: 1511.05950, 2015

    [89]

    Tao Zeyi, Li Qun. eSGD: Communication efficient distributed deep learning on the edge[C/OL] //Proc of the 1st USENIX Workshop on Hot Topics in Edge Computing (HotEdge 18). Berkeley, CA: USENIX Association, 2018[2022-09-05].https://www.usenix.org/conference/hotedge18/presentation/tao

    [90]

    Wang Luping, Wang Wei, Li Bo. CMFL: Mitigating communication overhead for federated learning[C] //Proc of the 39th Int Conf on Distributed Computing Systems (ICDCS). Piscataway, NJ: IEEE, 2019: 954−964

    [91]

    Xing Hong, Simeone O, Bi Suzhi. Decentralized federated learning via SGD over wireless D2D networks[C/OL] //Proc of the 21st Int Workshop on Signal Processing Advances in Wireless Communications (SPAWC). Piscataway, NJ: IEEE, 2020[2022-09-05].https://ieeexplore.ieee.org/abstract/document/9154332

    [92]

    Shiri H, Park J, Bennis M. Communication-efficient massive UAV online path control: Federated learning meets mean-field game theory[J]. IEEE Transactions on Communications, 2020, 68(11): 6840−6857 doi: 10.1109/TCOMM.2020.3017281

    [93]

    Zeng Tengchan, Semiari O, Mozaffari M, et al. Federated learning in the sky: Joint power allocation and scheduling with UAV swarms[C/OL] //Proc of the 54th IEEE Int Conf on Communications (ICC). Piscataway, NJ: IEEE, 2020[2022-09-05].https://ieeexplore.ieee.org/abstract/document/9148776

    [94]

    Pham Q V, Zeng Ming, Ruby R, et al. UAV communications for sustainable federated learning[J]. IEEE Transactions on Vehicular Technology, 2021, 70(4): 3944−3948 doi: 10.1109/TVT.2021.3065084

    [95]

    Fadlullah Z M, Kato N. HCP: Heterogeneous computing platform for federated learning based collaborative content caching towards 6G networks[J]. IEEE Transactions on Emerging Topics in Computing, 2020, 10(1): 112−123

    [96]

    Chen Mingzhe, Mozaffari M, Saad W, et al. Caching in the sky: Proactive deployment of cache-enabled unmanned aerial vehicles for optimized quality-of-experience[J]. IEEE Journal on Selected Areas in Communications, 2017, 35(5): 1046−1061 doi: 10.1109/JSAC.2017.2680898

    [97]

    Lahmeri M A, Kishk M A, Alouini M S. Artificial intelligence for UAV-enabled wireless networks: A survey[J]. IEEE Open Journal of the Communications Society, 2021, 2: 1015−1040 doi: 10.1109/OJCOMS.2021.3075201

    [98]

    Wang Yuntao, Su Zhou, Zhang Ning, et al. Learning in the air: Secure federated learning for UAV-assisted crowdsensing[J]. IEEE Transactions on Network Science and Engineering, 2020, 8(2): 1055−1069

    [99]

    Lim W Y B, Huang Jianqiang, Xiong Zehui, et al. Towards federated learning in UAV-enabled Internet of vehicles: A multi-dimensional contract-matching approach[J]. IEEE Transactions on Intelligent Transportation Systems, 2021, 22(8): 5140−5154 doi: 10.1109/TITS.2021.3056341

    [100]

    Samarakoon S, Bennis M, Saad W, et al. Distributed federated learning for ultra-reliable low-latency vehicular communications[J]. IEEE Transactions on Communications, 2019, 68(2): 1146−1159

    [101]

    Ye Dongdong, Yu Rong, Pan Miao, et al. Federated learning in vehicular edge computing: A selective model aggregation approach[J]. IEEE Access, 2020, 8: 23920−23935 doi: 10.1109/ACCESS.2020.2968399

    [102]

    Lu Yunlong, Huang Xiaohong, Dai Yueyue, et al. Federated learning for data privacy preservation in vehicular cyber-physical systems[J]. IEEE Network, 2020, 34(3): 50−56 doi: 10.1109/MNET.011.1900317

    [103]

    Du Zhaoyang, Wu Celimuge, Yoshinaga T, et al. Federated learning for vehicular Internet of things: Recent advances and open issues[J]. IEEE Open Journal of the Computer Society, 2020, 1: 45−61 doi: 10.1109/OJCS.2020.2992630

    [104]

    Deveaux D, Higuchi T, Uçar S, et al. On the orchestration of federated learning through vehicular knowledge networking[C/OL] //Proc of IEEE Vehicular Networking Conf (VNC). Piscataway, NJ: IEEE, 2020[2022-09-05].https://ieeexplore.ieee.org/abstract/document/9318386

    [105]

    Chen Mingzhe, Semiari O, Saad W, et al. Federated echo state learning for minimizing breaks in presence in wireless virtual reality networks[J]. IEEE Transactions on Wireless Communications, 2019, 19(1): 177−191

    [106]

    Mozaffari M, Saad W, Bennis M, et al. A tutorial on UAVs for wireless networks: Applications, challenges, and open problems[J]. IEEE Communications Surveys & Tutorials, 2019, 21(3): 2334−2360

    [107]

    Samarakoon S, Bennis M, Saad W, et al. Federated learning for ultra-reliable low-latency V2V communications[C/OL] //Proc of the IEEE Global Communications Conf (GLOBECOM). Piscataway, NJ: IEEE, 2018[2022-09-05].https://ieeexplore.ieee.org/abstract/document/8647927

    [108]

    Feyzmahdavian H R, Aytekin A, Johansson M. An asynchronous mini-batch algorithm for regularized stochastic optimization[J]. IEEE Transactions on Automatic Control, 2016, 61(12): 3740−3754 doi: 10.1109/TAC.2016.2525015

    [109]

    Lu Yunlong, Huang Xiaohong, Zhang Ke, et al. Blockchain empowered asynchronous federated learning for secure data sharing in Internet of vehicles[J]. IEEE Transactions on Vehicular Technology, 2020, 69(4): 4298−4311 doi: 10.1109/TVT.2020.2973651

    [110]

    Yin Feng, Lin Zhidi, Kong Qinglei, et al. FedLoc: Federated learning framework for data-driven cooperative localization and location data processing[J]. IEEE Open Journal of Signal Processing, 2020, 1: 187−215 doi: 10.1109/OJSP.2020.3036276

    [111]

    Merluzzi M, Di Lorenzo P, Barbarossa S. Dynamic resource allocation for wireless edge machine learning with latency and accuracy guarantees[C] //Proc of the 45th IEEE Int Conf on Acoustics, Speech and Signal Processing (ICASSP). Piscataway, NJ: IEEE, 2020: 9036−9040

    [112]

    Yang Zhaohui, Chen Mingzhe, Saad W, et al. Energy efficient federated learning over wireless communication networks[J]. IEEE Transactions on Wireless Communications, 2020, 20(3): 1935−1949

    [113]

    Luo Siqi, Chen Xu, Wu Qiong, et al. Hfel: Joint edge association and resource allocation for cost-efficient hierarchical federated edge learning[J]. IEEE Transactions on Wireless Communications, 2020, 19(10): 6535−6548 doi: 10.1109/TWC.2020.3003744

    [114]

    Abad M S H, Ozfatura E, Gunduz D, et al. Hierarchical federated learning across heterogeneous cellular networks[C] //Proc of the 45th IEEE Int Conf on Acoustics, Speech and Signal Processing (ICASSP). Piscataway, NJ: IEEE, 2020: 8866−8870

    [115]

    Liu Dongzhu, Zhu Guangxu, Zhang Jun, et al. Data-importance aware user scheduling for communication-efficient edge machine learning[J]. IEEE Transactions on Cognitive Communications and Networking, 2020, 7(1): 265−278

    [116]

    Zhan Yufeng, Li Peng, Guo Song. Experience-driven computational resource allocation of federated learning by deep reinforcement learning[C] //Proc of the 34th IEEE Int Parallel and Distributed Processing Symp (IPDPS). Piscataway, NJ: IEEE, 2020: 234−243

    [117]

    Zeng Qunsong, Du Yuqing, Huang Kaibin, et al. Energy-efficient radio resource allocation for federated edge learning[C/OL] //Proc of the 54th IEEE Int Conf on Communications Workshops (ICC Workshops). Piscataway, NJ: IEEE, 2020[2022-09-05]. https://ieeexplore.ieee.org/abstract/document/9145118

    [118]

    Chen Mingzhe, Poor H V, Saad W, et al. Convergence time optimization for federated learning over wireless networks[J]. IEEE Transactions on Wireless Communications, 2020, 20(4): 2457−2471

    [119]

    Mo Xiaopeng, Xu Jie. Energy-efficient federated edge learning with joint communication and computation design[J]. Journal of Communications and Information Networks, 2021, 6(2): 110−124 doi: 10.23919/JCIN.2021.9475121

    [120]

    Ren Jinke, Yu Guanding, Ding Guangyao. Accelerating DNN training in wireless federated edge learning systems[J]. IEEE Journal on Selected Areas in Communications, 2020, 39(1): 219−232

    [121]

    Anh T T, Luong N C, Niyato D, et al. Efficient training management for mobile crowd-machine learning: A deep reinforcement learning approach[J]. IEEE Wireless Communications Letters, 2019, 8(5): 1345−1348 doi: 10.1109/LWC.2019.2917133

    [122]

    Nguyen H T, Luong N C, Zhao J, et al. Resource allocation in mobility-aware federated learning networks: A deep reinforcement learning approach[C/OL] //Proc of the 6th World Forum on Internet of Things (WF-IoT). Piscataway, NJ: IEEE, 2020[2022-09-05].https://ieeexplore.ieee.org/abstract/document/9221089

    [123]

    Zhang Xueqing, Liu Yanwei, Liu Jinxia, et al. D2D-assisted federated learning in mobile edge computing networks [C/OL] //Proc of the 2021 IEEE Wireless Communications and Networking Conf (WCNC). Piscataway, NJ: IEEE, 2021[2022-09-05].https://ieeexplore.ieee.org/abstract/document/9417459

    [124]

    Yang Kai, Jiang Tao, Shi Yuanming, et al. Federated learning via over-the-air computation[J]. IEEE Transactions on Wireless Communications, 2020, 19(3): 2022−2035 doi: 10.1109/TWC.2019.2961673

    [125]

    Qin Zhijin, Li G Y, Ye Hao. Federated learning and wireless communications[J]. IEEE Wireless Communications, 2021, 28(5): 134−140 doi: 10.1109/MWC.011.2000501

    [126]

    Amiri M M, Duman T M, Gündüz D, et al. Collaborative machine learning at the wireless edge with blind transmitters[C/OL] //Proc of the 7th IEEE Global Conf on Signal and Information Processing. Piscataway, NJ: IEEE, 2019[2022-09-05].https://iris.unimore.it/handle/11380/1202665

    [127]

    Chen Mingzhe, Yang Zhaohui, Saad W, et al. A joint learning and communications framework for federated learning over wireless networks[J]. IEEE Transactions on Wireless Communications, 2020, 20(1): 269−283

    [128]

    Yang H H, Arafa A, Quek T Q, et al. Age-based scheduling policy for federated learning in mobile edge networks[C] //Proc of the 45th IEEE Int Conf on Acoustics, Speech and Signal Processing (ICASSP). Piscataway, NJ: IEEE, 2020: 8743−8747

    [129]

    Dinh C, Tran N H, Nguyen M N, et al. Federated learning over wireless networks: Convergence analysis and resource allocation[J]. IEEE/ACM Transactions on Networking, 2020, 29(1): 398−409

    [130]

    Yang Hao, Liu Zuozhu, Quek T Q, et al. Scheduling policies for federated learning in wireless networks[J]. IEEE Transactions on Communications, 2019, 68(1): 317−333

    [131]

    Shi Wenqi, Zhou Sheng, Niu Zhisheng. Device scheduling with fast convergence for wireless federated learning[C/OL] //Proc of the 54th IEEE Int Conf on Communications (ICC). Piscataway, NJ: IEEE, 2020[2022-09-05]. https://ieeexplore.ieee.org/abstract/document/9149138

    [132]

    Amiri M M, Gündüz D, Kulkarni S R, et al. Update aware device scheduling for federated learning at the wireless edge[C] //Proc of the 2020 IEEE Int Symp on Information Theory (ISIT). Piscataway, NJ: IEEE, 2020: 2598−2603

    [133]

    Bonawitz K, Ivanov V, Kreuter B, et al. Practical secure aggregation for privacy-preserving machine learning[C] //Proc of the ACM SIGSAC Conf on Computer and Communications Security. New York: ACM, 2017: 1175−1191


Publication history
  • Received: 2021-11-07
  • Revised: 2022-09-15
  • Available online: 2023-03-16
  • Published: 2023-05-31
