
A Survey on Security and Privacy Risks in Large Language Models

Abstract: In recent years, large language models (LLMs), a key branch of deep learning technology, have achieved a series of breakthroughs in natural language processing (NLP) and have been widely adopted. However, across their full lifecycle, including pre-training, fine-tuning, and deployment, a variety of security threats and privacy-leakage risks have been uncovered, drawing growing attention from both academia and industry. Following the paradigms that emerged as LLMs evolved, namely the pre-training and fine-tuning paradigm, the pre-training and prompt-learning paradigm, and the pre-training and instruction-tuning paradigm, this survey first reviews conventional security threats against LLMs, i.e., representative studies on three classes of adversarial attacks (adversarial example attacks, backdoor attacks, and poisoning attacks). It then summarizes novel security threats disclosed by recent work, and finally discusses the privacy risks of LLMs and the progress of research on them. This material helps researchers and deployers of LLMs identify, prevent, and mitigate these threats and risks during model design, training, and application, while balancing model performance against security and privacy protection.

