Abstract:
The Embedded System Requirements Specification is the cornerstone of system development, transforming high-level user intentions into clear, executable software requirements. At present, this process depends heavily on manual derivation and composition by domain experts, and consequently suffers from low efficiency and limited interpretability. Although automated requirements generation is regarded as a promising solution, it still faces two core challenges: how to effectively embed domain-specific device knowledge, and how to strictly satisfy the criteria of intent satisfiability and traceability so as to construct an interpretable requirements hierarchy. To address these issues, this paper proposes an interpretable requirements specification method based on Large Language Models (LLMs). By constructing a prompt chain that integrates domain knowledge with requirements engineering, the method incrementally complements and refines user intentions into device-, system-, and software-level requirements under human-machine collaboration, ensuring the satisfiability of the resulting requirements. It further introduces Problem Diagram and Scenario Diagram models to guarantee requirements traceability. Experiments on four embedded-system cases show that, compared with the manual method, the proposed method reduces time cost by approximately 73% and improves the quality of requirements specification documents by 82%; its interpretability improves by about 15% and 21% over the manual method and a single-step LLM method, respectively. The method thus offers an efficient and reliable new approach to requirements specification for complex embedded systems.