    Xia Yuhuan, Li Tun, Zhou Xianfa, Zhao Wenbo, Zhang Ruiyu, Guo Yang. SimulatorGen: An LLM-Based Multi-Agent Framework for Automatic Generation of DNN Accelerator Simulators[J]. Journal of Computer Research and Development. DOI: 10.7544/issn1000-1239.202660116

    SimulatorGen: An LLM-Based Multi-Agent Framework for Automatic Generation of DNN Accelerator Simulators

    With the rapid development of deep neural network (DNN) accelerators, building simulators for new architectures is costly and time-consuming. Although advances in large language models (LLMs) have opened possibilities for automated simulator generation, existing approaches suffer from limited generality, inability to construct complete systems, and high construction complexity. To address these challenges, we propose SimulatorGen, a multi-agent framework that generates DNN accelerator simulator code from natural language descriptions. First, we abstract the architecture of DNN accelerator simulators and extract 23 component specifications. Based on this abstraction, four collaborative agents are introduced to accomplish generation: the Analyst Agent retrieves domain knowledge from the simulator library via retrieval-augmented generation (RAG) and constructs structured prompts by leveraging chain-of-thought (CoT) reasoning; the Coder Agent generates or refines code using prompts and test feedback; the Tester Agent performs syntax checking, functional testing, and formal verification using the Z3 solver based on properties extracted from specifications; and the Assembly Agent conducts component integration, automated execution, and metric comparison to enable end-to-end construction. We evaluate SimulatorGen on 23 generation tasks covering diverse DNN accelerator modules and architectures. Experimental results show that SimulatorGen built on GPT-4o outperforms LLM baselines, including Claude-Sonnet-4, achieving a Pass@1 score of 82.39%. Furthermore, using the successfully generated components, we construct runnable simulators for tensor processing unit (TPU) and MAERI architectures. Compared with STONNE, the simulators built by SimulatorGen achieve relative errors ranging from 1.31% to 7.34% in energy, latency, and energy-delay product (EDP) across multiple DNN models, while maintaining functional consistency verified through testing and execution, demonstrating faithful modeling of accelerator behavior. In contrast to the single-agent SimulatorCoder, which only supports module replacement, SimulatorGen enables end-to-end generation of complete simulators, further validating the effectiveness of the proposed approach.
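The Coder–Tester interaction described in the abstract can be sketched as a generate–test–refine loop, in which test feedback from the Tester Agent is fed back into the next generation round. The sketch below is a minimal illustration under assumed interfaces; the class and function names (`CoderAgent`, `TesterAgent`, `generate_component`, `fifo_depth`) are hypothetical stand-ins, not APIs from the paper, and the "LLM" here is faked with a canned refinement step.

```python
# Hypothetical sketch of a generate-test-refine loop: a Coder drafts component
# code, a Tester runs syntax and functional checks, and the test feedback
# drives refinement until the component passes or retries are exhausted.
from dataclasses import dataclass

@dataclass
class TestReport:
    passed: bool
    feedback: str

class CoderAgent:
    """Stand-in for an LLM-backed coder: returns a (possibly fixed) draft."""
    def generate(self, prompt: str, feedback: str = "") -> str:
        # A real agent would call an LLM with the prompt plus feedback;
        # here one refinement step is hard-coded for illustration.
        if "off-by-one" in feedback:
            return "def fifo_depth(entries): return entries"
        return "def fifo_depth(entries): return entries + 1"  # buggy first draft

class TesterAgent:
    """Stand-in for the tester: syntax check plus one functional test case."""
    def run(self, code: str) -> TestReport:
        ns: dict = {}
        try:
            exec(code, ns)                      # syntax check + load
            ok = ns["fifo_depth"](8) == 8       # functional test case
        except Exception as e:
            return TestReport(False, f"error: {e}")
        return TestReport(ok, "" if ok else "off-by-one in fifo_depth")

def generate_component(prompt: str, max_rounds: int = 3):
    coder, tester = CoderAgent(), TesterAgent()
    feedback = ""
    for _ in range(max_rounds):
        code = coder.generate(prompt, feedback)
        report = tester.run(code)
        if report.passed:
            return code
        feedback = report.feedback              # feed test result back to coder
    return None
```

In this toy run the first draft fails the functional test, the feedback string triggers a corrected draft, and the loop terminates after the second round.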
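The evaluation metrics mentioned above are straightforward to state concretely: the energy-delay product is energy multiplied by latency, and the relative error compares a generated simulator's estimate against a reference value. The numbers in the snippet below are made up for demonstration and do not come from the paper.

```python
# Illustrative computation of the energy-delay product (EDP) and the
# relative-error metric used when comparing a generated simulator against a
# reference simulator such as STONNE. All values are fabricated examples.

def edp(energy_nj: float, latency_cycles: float) -> float:
    """Energy-delay product: energy multiplied by latency."""
    return energy_nj * latency_cycles

def relative_error(measured: float, reference: float) -> float:
    """Relative error of an estimate against the reference value, in percent."""
    return abs(measured - reference) / reference * 100.0

# Hypothetical readings for one DNN layer:
ref = edp(120.0, 5000.0)   # reference simulator's EDP
gen = edp(123.5, 5100.0)   # generated simulator's EDP
err_pct = relative_error(gen, ref)
```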
