

    LLM-Based Interpretable Embedded System Requirements Specification


      Abstract: Embedded system requirements specification is the cornerstone of system development, transforming high-level user intentions into clear, executable software requirements. Currently, this process remains highly dependent on manual derivation and writing by domain experts, with inherent limitations such as low efficiency and insufficient interpretability. Although automated requirements generation is regarded as a potential solution, it still faces two core challenges: how to effectively embed domain-specific device knowledge, and how to strictly adhere to intent-satisfiability and traceability standards so as to build an interpretable requirements system. To address these issues, this paper proposes an interpretable requirements specification method based on Large Language Models (LLMs). By constructing a prompt chain that integrates domain knowledge with requirements engineering, the method, under human-machine collaboration, progressively completes and refines user intentions into device, system, and software requirements level by level, ensuring that the requirements are satisfiable; it further introduces a problem diagram and scenario diagram model to guarantee requirements traceability. Experiments on four embedded system cases show that, compared with the manual method, the proposed method reduces time cost by about 73% and improves the quality of requirements specification documents by 82%, while its interpretability increases by about 15% and 21% over the manual method and a single-step LLM method, respectively, providing an efficient and trustworthy new approach to requirements specification for complex embedded systems.
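The layered refinement described in the abstract can be sketched as a prompt chain in which each level's requirements are derived from the previous level's output, with trace links recorded at every step. This is a minimal illustrative sketch only: `Requirement`, `refine`, and `stub_llm` are hypothetical names and the stub stands in for a real LLM call; none of this is the paper's actual implementation.

```python
from dataclasses import dataclass, field


@dataclass
class Requirement:
    level: str                                  # "intent", "device", "system", or "software"
    text: str
    derived_from: list = field(default_factory=list)  # traceability: parent requirement texts


def refine(intent, llm, levels=("device", "system", "software")):
    """Run a layered prompt chain: each level is derived from the
    previous level's requirements, preserving traceability links."""
    chain = {"intent": [Requirement("intent", intent)]}
    prev = "intent"
    for level in levels:
        prompt = (
            f"Given the {prev}-level requirements below, derive {level}-level requirements:\n"
            + "\n".join(r.text for r in chain[prev])
        )
        # Each non-empty output line becomes one requirement at this level,
        # linked back to every requirement it was derived from.
        chain[level] = [
            Requirement(level, line.strip(), derived_from=[r.text for r in chain[prev]])
            for line in llm(prompt).splitlines()
            if line.strip()
        ]
        prev = level
    return chain


def stub_llm(prompt):
    """Placeholder for a real model call; returns two fixed requirement lines."""
    return "requirement A\nrequirement B"
```

In a real pipeline the `llm` callable would wrap an actual model invocation, and the human-machine collaboration step would review each level's output before the chain proceeds; the `derived_from` links are what make the resulting requirements traceable back to the original user intent.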
