Citation: Liu Jialang, Guo Yanming, Lao Mingrui, Yu Tianyuan, Wu Yulun, Feng Yunhao, Wu Jiazhuang. Survey of Backdoor Attack and Defense Algorithms Based on Federated Learning[J]. Journal of Computer Research and Development, 2024, 61(10): 2607-2626. DOI: 10.7544/issn1000-1239.202440487
Federated learning is designed to address data privacy and data security concerns: a large number of clients train locally in a distributed manner, and a central server then aggregates the model parameter updates provided by each client. Because the central server cannot observe how these parameters were produced, this design introduces a serious security issue: a malicious participant can train a poisoned local model and upload its parameters, implanting backdoor features into the global model. In this paper, we focus on security and robustness in scenarios specific to federated learning, namely backdoor attack and defense. We summarize the scenarios in which backdoor attacks arise in federated learning, survey the latest federated backdoor attack and defense methods, and compare and analyze the performance of the various attack and defense methods, revealing their advantages and limitations. Finally, we point out potential research directions and new challenges for backdoor attacks and defenses in federated learning.
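The aggregation step described above is the crux of the vulnerability. The following minimal sketch (our illustration, not code from the paper) shows plain federated averaging in numpy together with the model-replacement scaling idea of Bagdasaryan et al. (reference [21]): because the server averages submitted models blindly, one attacker can scale its backdoored parameters by the number of clients so that they dominate the aggregate. The function name fedavg, the backdoored vector w_bd, and all numeric values are hypothetical.

```python
# Minimal sketch of the FedAvg vulnerability exploited by backdoor attacks.
# Assumptions (ours): 10 clients, 4-dim "models", benign updates stay small.
import numpy as np

def fedavg(client_models):
    # Server step: average client models without inspecting how they were trained.
    return np.mean(client_models, axis=0)

rng = np.random.default_rng(0)
dim, n_clients = 4, 10
global_model = np.zeros(dim)

# Benign clients return models that drift only slightly from the global model.
client_models = [global_model + 0.01 * rng.standard_normal(dim)
                 for _ in range(n_clients - 1)]

# The attacker wants the next global model to equal its backdoored model w_bd.
w_bd = np.array([5.0, -5.0, 5.0, -5.0])
# Scaling the submitted update by n_clients cancels the 1/n of averaging,
# so the backdoor survives aggregation (the model-replacement trick of [21]).
w_mal = n_clients * (w_bd - global_model) + global_model
client_models.append(w_mal)

new_global = fedavg(client_models)
print(np.round(new_global, 2))  # approximately w_bd: the global model is replaced
```

Robust aggregation defenses surveyed below, such as Krum (reference [11]) and norm clipping (reference [33]), intervene at exactly this step by filtering or bounding anomalous updates.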
[1] Prakash S, Hashemi H, Wang Yongqin, et al. Secure and fault tolerant decentralized learning[J]. arXiv preprint, arXiv: 2010.07541, 2020
[2] 周俊,方国英,吴楠. 联邦学习安全与隐私保护研究综述[J]. 西华大学学报:自然科学版,2020,39(4):9−17
Zhou Jun, Fang Guoying, Wu Nan. Survey on security and privacy-preserving in federated learning[J]. Journal of Xihua University (Natural Science Edition), 2020, 39(4): 9−17 (in Chinese)
[3] 陈兵,成翔,张佳乐,等. 联邦学习安全与隐私保护综述[J]. 南京航空航天大学学报,2020,52(5):675−684
Chen Bing, Cheng Xiang, Zhang Jiale, et al. Survey of security and privacy in federated learning[J]. Journal of Nanjing University of Aeronautics & Astronautics, 2020, 52(5): 675−684 (in Chinese)
[4] 高莹,陈晓峰,张一余,等. 联邦学习系统攻击与防御技术研究综述[J]. 计算机学报,2023,46(9):1781−1805
Gao Ying, Chen Xiaofeng, Zhang Yiyu, et al. Survey of attack and defense techniques for federated learning systems[J]. Chinese Journal of Computers, 2023, 46(9): 1781−1805 (in Chinese)
[5] 肖雄,唐卓,肖斌,等. 联邦学习的隐私保护与安全防御研究综述[J]. 计算机学报,2023,46(5):1019−1044
Xiao Xiong, Tang Zhuo, Xiao Bin, et al. A survey on privacy and security issues in federated learning[J]. Chinese Journal of Computers, 2023, 46(5): 1019−1044 (in Chinese)
[6] Liu Rui, Xing Pengwei, Deng Zichao, et al. Federated graph neural networks: Overview, techniques and challenges[J]. arXiv preprint, arXiv: 2202.07256, 2023
[7] Zhang Yifei, Zeng Dun, Luo Jinglong, et al. A survey of trustworthy federated learning with perspectives on security, robustness and privacy[C]//Proc of the ACM Web Conf. New York: ACM, 2023: 1167−1176
[8] Tariq A, Serhani M A, Sallabi F, et al. Trustworthy federated learning: A survey[J]. arXiv preprint, arXiv: 2305.11537, 2023
[9] Yang Qiang, Liu Yang, Cheng Yong, et al. Federated Learning: Synthesis Lectures on Artificial Intelligence and Machine Learning[M]. San Rafael, CA: Morgan & Claypool, 2019, 13: 1−207
[10] Mothukuri V, Parizi R M, Pouriyeh S, et al. A survey on security and privacy of federated learning[J]. Future Generation Computer Systems, 2021, 115: 619−640 doi: 10.1016/j.future.2020.10.007
[11] Blanchard P, El Mhamdi E M, Guerraoui R, et al. Machine learning with adversaries: Byzantine tolerant gradient descent[J]. Advances in Neural Information Processing Systems, 2017, 30: 119−129
[12] El Mhamdi E M, Guerraoui R, Rouault S. The hidden vulnerability of distributed learning in Byzantium[C]//Proc of Int Conf on Machine Learning. New York: PMLR, 2018: 3521−3530
[13] Lamport L, Shostak R, Pease M. The Byzantine generals problem[M]//Concurrency: The Works of Leslie Lamport. New York: ACM, 2022: 203−226
[14] Shen S, Tople S, Saxena P. Auror: Defending against poisoning attacks in collaborative deep learning systems[C]//Proc of the 32nd Annual Conf on Computer Security Applications. Los Angeles: ACM, 2016: 508−519
[15] Fang Minghong, Cao Xiaoyu, Jia Jinyuan, et al. Local model poisoning attacks to Byzantine-robust federated learning[C]//Proc of the 29th USENIX Security Symp (USENIX Security 20). Berkeley, CA: USENIX Association, 2020: 1605−1622
[16] Damaskinos G, El-Mhamdi E M, Guerraoui R, et al. AggregaThor: Byzantine machine learning via robust gradient aggregation[J]. Proceedings of Machine Learning and Systems, 2019, 1: 81−106
[17] Chen Chen, Liu Yuchen, Ma Xingjun, et al. CalFAT: Calibrated federated adversarial training with label skewness[J]. Advances in Neural Information Processing Systems, 2022, 35: 3569−3581
[18] Doan B G, Abbasnejad E, Ranasinghe D C. Februus: Input purification defense against trojan attacks on deep neural network systems[C]//Proc of the 36th Annual Computer Security Applications Conf. Los Angeles: ACM, 2020: 897−912
[19] Rodríguez-Barroso N, Jiménez-López D, Luzón M V, et al. Survey on federated learning threats: Concepts, taxonomy on attacks and defences, experimental study and challenges[J]. Information Fusion, 2023, 90: 148−173 doi: 10.1016/j.inffus.2022.09.011
[20] Bhagoji A N, Chakraborty S, Mittal P, et al. Analyzing federated learning through an adversarial lens[C]//Proc of Int Conf on Machine Learning. New York: PMLR, 2019: 634−643
[21] Bagdasaryan E, Veit A, Hua Yiqing, et al. How to backdoor federated learning[C]//Proc of Int Conf on Artificial Intelligence and Statistics. New York: PMLR, 2020: 2938−2948
[22] Barreno M, Nelson B, Sears R, et al. Can machine learning be secure?[C]//Proc of the 2006 ACM Symp on Information, Computer and Communications Security. New York: ACM, 2006: 16−25
[23] Doshi K, Yilmaz Y. Federated learning-based driver activity recognition for edge devices[C]//Proc of the IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2022: 3338−3346
[24] Dai Yanbo, Li Songze. Chameleon: Adapting to peer images for planting durable backdoors in federated learning[C]//Proc of Int Conf on Machine Learning. New York: PMLR, 2023: 6712−6725
[25] Fung C, Yoon C J M, Beschastnikh I. The limitations of federated learning in sybil settings[C]//Proc of the 23rd Int Symp on Research in Attacks, Intrusions and Defenses (RAID 2020). San Sebastian: USENIX Association, 2020: 301−316
[26] Bernstein J, Zhao J, Azizzadenesheli K, et al. signSGD with majority vote is communication efficient and fault tolerant[J]. arXiv preprint, arXiv: 1810.05291, 2018
[27] Chen Ruiliang, Park J M J, Bian Kaigui. Robustness against Byzantine failures in distributed spectrum sensing[J]. Computer Communications, 2012, 35(17): 2115−2124 doi: 10.1016/j.comcom.2012.07.014
[28] Zhong Haoti, Liao Cong, Squicciarini A C, et al. Backdoor embedding in convolutional neural network models via invisible perturbation[C]//Proc of the ACM Conf on Data and Application Security and Privacy. New York: ACM, 2020: 97−108
[29] Chen Cheng, Kailkhura B, Goldhahn R, et al. Certifiably-robust federated adversarial learning via randomized smoothing[C]//Proc of the 18th IEEE Int Conf on Mobile Ad Hoc and Smart Systems (MASS). Piscataway, NJ: IEEE, 2021: 173−179
[30] Saha A, Subramanya A, Pirsiavash H. Hidden trigger backdoor attacks[C]//Proc of the AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2020, 34(7): 11957−11965
[31] Enthoven D, Al-Ars Z. An overview of federated deep learning privacy attacks and defensive strategies[M]//Federated Learning Systems: Towards Next-Generation AI. Berlin: Springer, 2021: 173−196
[32] Li Y, Jiang Y, Li Z, et al. Backdoor learning: A survey[J]. IEEE Transactions on Neural Networks and Learning Systems, 2022, 35(1): 5−22
[33] Sun Ziteng, Kairouz P, Suresh A T, et al. Can you really backdoor federated learning?[J]. arXiv preprint, arXiv: 1911.07963, 2019
[34] Yoshida K, Fujino T. Disabling backdoor and identifying poison data by using knowledge distillation in backdoor attacks on deep neural networks[C]//Proc of the 13th ACM Workshop on Artificial Intelligence and Security. New York: ACM, 2020: 117−127
[35] Nguyen T D, Rieger P, De V, et al. FLAME: Taming backdoors in federated learning[C]//Proc of the 31st USENIX Security Symp (USENIX Security 22). Berkeley, CA: USENIX Association, 2022: 1415−1432
[36] Douceur J R. The sybil attack[C]//Proc of Int Workshop on Peer-to-Peer Systems. Berlin: Springer, 2002: 251−260
[37] Huang Hanxun, Ma Xingjun, Erfani S M, et al. Distilling cognitive backdoor patterns within an image[J]. arXiv preprint, arXiv: 2301.10908, 2023
[38] Wang Ning, Xiao Yang, Chen Yimin, et al. FLARE: Defending federated learning against model poisoning attacks via latent space representations[C]//Proc of the 2022 ACM Asia Conf on Computer and Communications Security. New York: ACM, 2022: 946−958
[39] Gu Tianyu, Dolan-Gavitt B, Garg S. BadNets: Identifying vulnerabilities in the machine learning model supply chain[J]. arXiv preprint, arXiv: 1708.06733, 2017
[40] Alberti M, Pondenkandath V, Wursch M, et al. Are you tampering with my data?[J]. arXiv preprint, arXiv: 1808.04866, 2018
[41] Chen Xinyun, Liu Chang, Li Bo, et al. Targeted backdoor attacks on deep learning systems using data poisoning[J]. arXiv preprint, arXiv: 1712.05526, 2017
[42] Barni M, Kallas K, Tondi B. A new backdoor attack in CNNs by training set corruption without label poisoning[J]. arXiv preprint, arXiv: 1902.11237, 2019
[43] Liu Yunfei, Ma Xingjun, Bailey J, et al. Reflection backdoor: A natural backdoor attack on deep neural networks[C]//Proc of the European Conf on Computer Vision. Berlin: Springer, 2020: 182−199
[44] Madry A, Makelov A, Schmidt L, et al. Towards deep learning models resistant to adversarial attacks[J]. arXiv preprint, arXiv: 1706.06083, 2017
[45] Quiring E, Rieck K. Backdooring and poisoning neural networks with image-scaling attacks[C]//Proc of the IEEE Security and Privacy Workshops. Piscataway, NJ: IEEE, 2020: 41−47
[46] Nguyen T A, Tran A. Input-aware dynamic backdoor attack[J]. Advances in Neural Information Processing Systems, 2020, 33: 3454−3464
[47] Li Yuezun, Li Yiming, Wu Baoyuan, et al. Invisible backdoor attack with sample-specific triggers[C]//Proc of the IEEE/CVF Int Conf on Computer Vision. Piscataway, NJ: IEEE, 2021: 16463−16472
[48] Salem A, Wen R, Backes M, et al. Dynamic backdoor attacks against machine learning models[C]//Proc of the IEEE European Symp on Security and Privacy. Piscataway, NJ: IEEE, 2022: 703−718
[49] Shafahi A, Huang W R, Najibi M, et al. Poison frogs! Targeted clean-label poisoning attacks on neural networks[J]. arXiv preprint, arXiv: 1804.00792, 2018
[50] Zhu Chen, Huang W R, Li Hengduo, et al. Transferable clean-label poisoning attacks on deep neural nets[C]//Proc of the Int Conf on Machine Learning. New York: PMLR, 2019: 7614−7623
[51] Gao Yinghua, Li Yiming, Zhu Linghui, et al. Not all samples are born equal: Towards effective clean-label backdoor attacks[J]. Pattern Recognition, 2023, 139: 109512
[52] Lin Junyu, Xu Lei, Liu Yingqi, et al. Composite backdoor attack for deep neural network by mixing existing benign features[C]//Proc of the ACM SIGSAC Conf on Computer and Communications Security. New York: ACM, 2020: 113−131
[53] Liu Yingqi, Ma Shiqing, Aafer Y, et al. Trojaning attack on neural networks[C]//Proc of the Annual Network and Distributed System Security Symp. San Diego: Internet Society, 2018: 1−15
[54] Rakin A S, He Zhezhi, Fan Deliang. TBT: Targeted neural network attack with bit Trojan[C]//Proc of the IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2020: 13195−13204
[55] Dumford J, Scheirer W. Backdooring convolutional neural networks via targeted weight perturbations[C]//Proc of the IEEE Int Joint Conf on Biometrics. Piscataway, NJ: IEEE, 2020: 1−9
[56] Hong S, Carlini N, Kurakin A. Handcrafted backdoors in deep neural networks[J]. Advances in Neural Information Processing Systems, 2022, 35: 8068−8080
[57] Zou Minghui, Yang Shi, Wang Chengliang, et al. PoTrojan: Powerful neural-level trojan designs in deep learning models[J]. arXiv preprint, arXiv: 1802.03043, 2018
[58] Salem A, Backes M, Zhang Y. Don't trigger me! A triggerless backdoor attack against deep neural networks[J]. arXiv preprint, arXiv: 2010.03282, 2020
[59] Yao Yuanshun, Li Huiying, Zheng Haitao, et al. Latent backdoor attacks on deep neural networks[C]//Proc of the ACM SIGSAC Conf on Computer and Communications Security. New York: ACM, 2019: 2041−2055
[60] Tolpegin V, Truex S, Gursoy M E, et al. Data poisoning attacks against federated learning systems[C]//Proc of the 25th European Symp on Research in Computer Security (ESORICS 2020). Berlin: Springer, 2021: 480−501
[61] Zhang Hengtong, Zheng Tianhang, Gao Jing, et al. Data poisoning attack against knowledge graph embedding[J]. arXiv preprint, arXiv: 1904.12052, 2019
[62] Zhu Shuwen, Luo Ge, Wei Ping, et al. Image-imperceptible backdoor attacks[J]. Journal of Image and Graphics, 2023, 28(3): 864−877
[63] Sun W, Jiang X, Dou S, et al. Invisible backdoor attack with dynamic triggers against person re-identification[J]. IEEE Transactions on Information Forensics and Security, 2023, 18: 1653−1665 doi: 10.1109/TIFS.2023.3245406
[64] Zhou Yao, Wu Jun, He Jingrui. Adversarially robust federated learning for neural networks[C]//Proc of the Int Conf on Learning Representations (ICLR), 2021: 105−116
[65] Zhang Jiale, Chen Bing, Cheng Xiang, et al. PoisonGAN: Generative poisoning attacks against federated learning in edge computing systems[J]. IEEE Internet of Things Journal, 2021, 8(5): 3310−3322
[66] Xu Kaidi, Liu Sijia, Chen Pinyu, et al. Defending against backdoor attack on deep neural networks[J]. arXiv preprint, arXiv: 2002.12162, 2020
[67] Weng C H, Lee Y T, Wu S H. On the trade-off between adversarial and backdoor robustness[J]. Advances in Neural Information Processing Systems, 2020, 33: 11973−11983
[68] Zhou Xingchen, Xu Ming, Wu Yiming, et al. Deep model poisoning attack on federated learning[J]. Future Internet, 2021, 13(3): 73 doi: 10.3390/fi13030073
[69] Zhang Zhengming, Panda A, Song Linyue, et al. Neurotoxin: Durable backdoors in federated learning[C]//Proc of Int Conf on Machine Learning. New York: PMLR, 2022: 26429−26446
[70] Sun Y, Ochiai H, Sakuma J. Semi-targeted model poisoning attack on federated learning via backward error analysis[C]//Proc of the 2022 Int Joint Conf on Neural Networks (IJCNN). Piscataway, NJ: IEEE, 2022: 1−8
[71] Yang Haonan, Zhong Yongchao, Yang Bo, et al. An overview of sybil attack detection mechanisms in VFC[C]//Proc of the 52nd Annual IEEE/IFIP Int Conf on Dependable Systems and Networks Workshops (DSN-W). Piscataway, NJ: IEEE, 2022: 117−122
[72] Zeng Yi, Chen Si, Park W, et al. Adversarial unlearning of backdoors via implicit hypergradient[J]. arXiv preprint, arXiv: 2110.03735, 2021
[73] Zhu Chen, Huang W R, Li Hengduo, et al. Transferable clean-label poisoning attacks on deep neural nets[C]//Proc of Int Conf on Machine Learning. New York: PMLR, 2019: 7614−7623
[74] Zhu Chen, Huang W R, Li Hengduo, et al. Transferable clean-label poisoning attacks on deep neural nets[C]//Proc of the 36th Int Conf on Machine Learning. Long Beach: PMLR, 2019: 7614−7623
[75] Wang Bolun, Yao Yuanshun, Shan S, et al. Neural cleanse: Identifying and mitigating backdoor attacks in neural networks[C]//Proc of the 2019 IEEE Symp on Security and Privacy (SP). Piscataway, NJ: IEEE, 2019: 707−723
[76] Xie Chulin, Chen Minghao, Chen Pinyu, et al. CRFL: Certifiably robust federated learning against backdoor attacks[C]//Proc of Int Conf on Machine Learning. New York: PMLR, 2021: 11372−11382
[77] Andreina S, Marson G A, Möllering H, et al. BaFFLe: Backdoor detection via feedback-based federated learning[C]//Proc of the 41st IEEE Int Conf on Distributed Computing Systems (ICDCS). Piscataway, NJ: IEEE, 2021: 852−863
[78] Razmi F, Lou Jian, Li Xiong. Does differential privacy prevent backdoor attacks in practice?[C]//Proc of IFIP Annual Conf on Data and Applications Security and Privacy. Cham: Springer, 2024: 320−340
[79] Zhang Zaixi, Cao Xiaoyu, Jia Jinyuan, et al. FLDetector: Defending federated learning against model poisoning attacks via detecting malicious clients[C]//Proc of the 28th ACM SIGKDD Conf on Knowledge Discovery and Data Mining. New York: ACM, 2022: 2545−2555
[80] Zhang Jie, Li Bo, Chen Chen, et al. Delving into the adversarial robustness of federated learning[C]//Proc of the AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2023, 37(9): 11245−11253
[81] Wu C, Wu F, Cao Y, et al. FedGNN: Federated graph neural network for privacy-preserving recommendation[J]. arXiv preprint, arXiv: 2102.04925, 2021
[82] Xie Cong, Koyejo O, Gupta I. Fall of empires: Breaking Byzantine-tolerant SGD by inner product manipulation[C]//Proc of Uncertainty in Artificial Intelligence. New York: PMLR, 2020: 261−270
[83] Zizzo G, Rawat A, Sinn M, et al. FAT: Federated adversarial training[J]. arXiv preprint, arXiv: 2012.01791, 2020
[84] Nguyen T D, Rieger P, Mohammad H, et al. FLGUARD: Secure and private federated learning[J]. arXiv preprint, arXiv: 2101.02281, 2021
[85] Jebreel N M, Domingo-Ferrer J, Sánchez D, et al. Defending against the label-flipping attack in federated learning[J]. arXiv preprint, arXiv: 2207.01982, 2022
[86] Zhu Liuwan, Ning Rui, Wang Cong, et al. GangSweep: Sweep out neural backdoors by GAN[C]//Proc of the 28th ACM Int Conf on Multimedia. New York: ACM, 2020: 3173−3181
[87] Salem A, Wen R, Backes M, et al. Dynamic backdoor attacks against machine learning models[C]//Proc of the 7th IEEE European Symp on Security and Privacy (EuroS&P). Piscataway, NJ: IEEE, 2022: 703−718
[88] Unterluggauer T, Harris A, Constable S, et al. Chameleon cache: Approximating fully associative caches with random replacement to prevent contention-based cache attacks[C]//Proc of the IEEE Int Symp on Secure and Private Execution Environment Design (SEED). Piscataway, NJ: IEEE, 2022: 13−24
[89] Tolpegin V, Truex S, Gursoy M E, et al. Data poisoning attacks against federated learning systems[C]//Proc of the 25th European Symp on Research in Computer Security (ESORICS 2020). Berlin: Springer, 2021: 480−501
[90] Nguyen T D, Rieger P, De V, et al. FLAME: Taming backdoors in federated learning[C]//Proc of the 31st USENIX Security Symp (USENIX Security 22). Berkeley, CA: USENIX Association, 2022: 1415−1432
[91] Liu Yunfei, Ma Xingjun, Bailey J, et al. Reflection backdoor: A natural backdoor attack on deep neural networks[C]//Proc of the European Conf on Computer Vision. Berlin: Springer, 2020: 182−199
[92] Chen Mingqing, Suresh A T, Mathews R, et al. Federated learning of n-gram language models[J]. arXiv preprint, arXiv: 1910.03432, 2019
[93] Lin Yuchen, He Chaoyang, Zeng Zihang, et al. FedNLP: A research platform for federated learning in natural language processing[J]. arXiv preprint, arXiv: 2104.08815, 2021
[94] Gu Tianyu, Dolan-Gavitt B, Garg S. BadNets: Identifying vulnerabilities in the machine learning model supply chain[J]. arXiv preprint, arXiv: 1708.06733, 2017
[95] Lin Jierui, Du Min, Liu Jian. Free-riders in federated learning: Attacks and defenses[J]. arXiv preprint, arXiv: 1911.12560, 2019
[96] Lan H, Gu J, Torr P, et al. Influencer backdoor attack on semantic segmentation[J]. arXiv preprint, arXiv: 2303.12054, 2023
[97] Zhang Linyuan, Ding Guoru, Wu Qihui, et al. Byzantine attack and defense in cognitive radio networks: A survey[J]. IEEE Communications Surveys & Tutorials, 2015, 17(3): 1342−1363