Detecting Free-Riding Attacks in Federated Learning Based on Gradient Backtracking
Abstract
With the development of the Internet of Vehicles (IoV), the rapid growth of intelligent vehicles generates massive amounts of data. These data are invaluable for training intelligent IoV application models. Traditional model training requires the centralized collection of raw data in the cloud, which consumes substantial communication resources and faces issues such as privacy breaches and regulatory constraints. Federated learning (FL) addresses these challenges by transferring models instead of raw data. However, practical FL systems face malicious users who attempt to deceive the server by uploading forged local models, a behavior known as a free-riding attack. These attacks significantly undermine the fairness and effectiveness of FL. Existing research assumes that free-riding attacks are launched by only a small number of rational users and therefore falls short of effectively detecting and defending against multiple malicious free-riders. To address this issue, we propose a novel gradient-backtracking-based algorithm to identify free-riders. We introduce random testing rounds into standard FL and compare the similarity of each user's gradients between the testing round and a comparison round, which remains effective even when multiple malicious free-riders are present. Experimental results on the MNIST and CIFAR-10 datasets demonstrate that the proposed detection algorithm achieves outstanding performance in various free-riding attack scenarios.
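As a rough illustration of the testing-round comparison described above, the sketch below flags clients whose uploaded gradients are abnormally similar between a testing round and a comparison round. The client names, the use of cosine similarity, and the threshold value are illustrative assumptions; the abstract does not specify the paper's exact similarity measure or decision rule.

# Minimal sketch of a testing-round gradient-similarity check (illustrative only).
# Cosine similarity and the 0.95 threshold are assumptions, not values from the paper.
import numpy as np

def cosine_similarity(u, v, eps=1e-12):
    # Cosine similarity between two flattened gradient vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + eps))

def flag_free_riders(test_grads, comparison_grads, threshold=0.95):
    # test_grads / comparison_grads: dicts mapping client id -> flattened
    # gradient (np.ndarray) uploaded in the testing / comparison round.
    # A client whose two uploads are nearly identical is treated as a
    # suspected free-rider (one plausible decision rule, assumed here).
    suspects = []
    for cid, g_test in test_grads.items():
        g_cmp = comparison_grads.get(cid)
        if g_cmp is None:
            continue  # client did not participate in the comparison round
        if cosine_similarity(g_test, g_cmp) > threshold:
            suspects.append(cid)
    return suspects

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Honest clients upload fresh gradients each round; the free-rider replays its old upload.
    test = {f"client{i}": rng.normal(size=1000) for i in range(3)}
    test["rider"] = rng.normal(size=1000)
    comparison = {f"client{i}": rng.normal(size=1000) for i in range(3)}
    comparison["rider"] = test["rider"].copy()
    print(flag_free_riders(test, comparison))  # expected: ['rider']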