
    Federated Learning Adaptive Gradient Accumulation Backdoor Attack

    • Abstract: This paper proposes AGABA, an adaptive gradient accumulation backdoor attack framework targeting federated learning systems. The method combines a parameterized sub-block trigger with dynamic transparency (AST) and a multi-stage gradient accumulation mechanism (MGA), effectively resolving the trade-off between stealth and persistence that traditional backdoor attacks face in federated environments. AST decomposes the complete trigger into multiple independent components through dynamic transparency control and distributed sub-block superposition, allowing malicious clients to collaboratively construct a global trigger pattern while remaining highly stealthy. MGA adopts a three-stage attack strategy (initial accumulation, gradient accumulation, and attack execution) combined with a parameter-importance-aware mechanism, using progressive cross-round gradient accumulation to let malicious updates lie dormant in model aggregation and later activate. The framework employs momentum-accelerated gradient-difference propagation and adaptive memory-factor adjustment to keep attack gradients within the legitimate distribution range, evading detection mechanisms based on statistical anomalies. Experiments show that, in a scenario with 20% malicious client participation, AGABA maintains a high backdoor attack success rate even under the protection of a variety of mainstream defense mechanisms, outperforming existing single-attack methods.
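The sub-block trigger idea described in the abstract can be illustrated with a minimal sketch: each malicious client alpha-blends only its own fragment of the trigger into training images, while at test time the fragments superpose into the full pattern. All shapes, the 4-client quadrant split, and the alpha value below are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def embed_subblock(image, subblock, top_left, alpha):
    """Alpha-blend one trigger sub-block into an image region.

    image:     H x W float array in [0, 1]
    subblock:  h x w float array in [0, 1] (one client's trigger fragment)
    top_left:  (row, col) placement of the fragment
    alpha:     transparency in [0, 1]; low alpha keeps the poison subtle
    """
    poisoned = image.copy()
    r, c = top_left
    h, w = subblock.shape
    region = poisoned[r:r + h, c:c + w]
    poisoned[r:r + h, c:c + w] = (1 - alpha) * region + alpha * subblock
    return poisoned

# Hypothetical split of an 8x8 global trigger into four 4x4 quadrants,
# one fragment per malicious client.
trigger = np.ones((8, 8))
quads = [(trigger[:4, :4], (0, 0)), (trigger[:4, 4:], (0, 4)),
         (trigger[4:, :4], (4, 0)), (trigger[4:, 4:], (4, 4))]

img = np.zeros((8, 8))
# During training each client stamps only its own fragment ...
local = embed_subblock(img, quads[0][0], quads[0][1], alpha=0.3)
# ... while at inference the fragments superpose into the full pattern.
full = img
for block, pos in quads:
    full = embed_subblock(full, block, pos, alpha=0.3)
```

Because any single poisoned sample carries only a low-alpha fragment, per-client inspection sees a much weaker signal than the assembled global trigger.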

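The three-stage accumulation schedule can likewise be sketched as a per-round update rule: behave benignly during warm-up, accumulate the malicious-benign gradient difference with a memory factor and momentum during the middle stage, then release the accumulated difference, always clipping to the benign gradient norm so the submitted update stays within the legitimate distribution. The stage thresholds and coefficients below are placeholders, not the paper's tuned values.

```python
import numpy as np

def mga_update(benign_grad, malicious_grad, state, round_idx,
               warmup=5, release=15, memory=0.9, momentum=0.5):
    """One round of a multi-stage gradient-accumulation schedule (sketch).

    Stage 1 (round < warmup):    submit purely benign gradients to build trust.
    Stage 2 (warmup..release-1): accumulate the malicious-benign gradient
                                 difference with a memory factor, leaking only
                                 a small fraction into the submitted update.
    Stage 3 (>= release):        release the accumulated difference.
    """
    diff = malicious_grad - benign_grad
    if round_idx < warmup:                       # stage 1: initial accumulation
        update = benign_grad
    elif round_idx < release:                    # stage 2: momentum accumulation
        state["acc"] = memory * state["acc"] + momentum * diff
        update = benign_grad + 0.1 * state["acc"]
    else:                                        # stage 3: attack execution
        update = benign_grad + state["acc"]
    # Clip to the benign gradient norm so statistical-anomaly defenses
    # see an update of ordinary magnitude.
    limit = np.linalg.norm(benign_grad)
    norm = np.linalg.norm(update)
    if norm > limit:
        update = update * (limit / norm)
    return update

state = {"acc": np.zeros(3)}
benign = np.array([1.0, 0.0, 0.0])
malicious = np.array([0.0, 1.0, 0.0])
u_warmup = mga_update(benign, malicious, state, round_idx=0)
u_accum = mga_update(benign, malicious, state, round_idx=6)
```

The norm clip is what the abstract calls keeping the attack gradient "within the legitimate distribution range"; the memory factor spreads the malicious signal across rounds so no single update is an outlier.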
