Li Guopeng, Wu Ruiqi, Tan Haisheng, Chen Guoliang. A Plan Reuse Mechanism for LLM-based Agent[J]. Journal of Computer Research and Development. DOI: 10.7544/issn1000-1239.202440380

    A Plan Reuse Mechanism for LLM-based Agent

Integrating Large Language Models (LLMs) into personal assistants, such as Xiao Ai and Blue Heart V, effectively enhances their ability to interact with humans, solve complex tasks, and manage IoT devices. Such assistants are also termed LLM-based agents. Upon receiving a user request, the LLM-based agent generates a plan with an LLM, executes the plan through various tools, and returns the response to the user. During this process, the latency of generating a plan with an LLM can reach tens of seconds, significantly degrading the user experience. Analysis of a real-world dataset shows that about 30% of the requests received by LLM-based agents are identical or similar, which allows previously generated plans to be reused to reduce latency. However, the similarity between requests is difficult to evaluate accurately by comparing the raw request texts directly. Moreover, the diverse expressions of natural language and the unstructured format of plan texts make plan reuse challenging to implement. To address these issues, this paper presents and implements a plan reuse mechanism for LLM-based agents called AgentReuse. AgentReuse exploits the semantic similarities and differences among requests, using intent classification to evaluate request similarity and enable plan reuse. Experimental results on a real-world dataset demonstrate that AgentReuse achieves a 93% effective plan reuse rate, an F1 score of 0.9718, and an accuracy of 0.9459 in evaluating request similarity, reducing latency by 93.12% compared with baselines that do not use the reuse mechanism.
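To make the mechanism concrete, the sketch below shows one plausible shape of intent-keyed plan reuse as the abstract describes it: classify an incoming request into an intent class, reuse the cached plan on a hit, and fall back to the slow LLM planning call on a miss. This is a minimal illustration, not AgentReuse's actual implementation; the names `classify_intent`, `generate_plan_with_llm`, and `PlanReuseCache` are hypothetical stand-ins.

```python
# A minimal sketch of intent-keyed plan reuse. All names here are
# illustrative assumptions, not the paper's API.
from typing import Callable, Dict


class PlanReuseCache:
    """Caches previously generated plans, keyed by the request's intent class."""

    def __init__(self, classify_intent: Callable[[str], str],
                 generate_plan: Callable[[str], str]):
        self._classify_intent = classify_intent  # maps request text -> intent label
        self._generate_plan = generate_plan      # slow LLM planning call
        self._plans: Dict[str, str] = {}         # intent label -> cached plan

    def get_plan(self, request: str) -> str:
        intent = self._classify_intent(request)
        plan = self._plans.get(intent)
        if plan is None:                          # cache miss: pay LLM latency once
            plan = self._generate_plan(request)
            self._plans[intent] = plan
        return plan                               # cache hit: reuse, skipping the LLM


if __name__ == "__main__":
    # Toy stubs standing in for a real intent classifier and LLM planner.
    def classify_intent(request: str) -> str:
        return "turn_on_light" if "light" in request.lower() else "other"

    def generate_plan_with_llm(request: str) -> str:
        return f"plan(tool=smart_home, action=on)  # derived from: {request}"

    cache = PlanReuseCache(classify_intent, generate_plan_with_llm)
    print(cache.get_plan("Turn on the living room light"))  # miss: LLM call
    print(cache.get_plan("Please switch on the light"))     # hit: plan reused
```

Keying the cache on intent classes rather than raw request strings is what lets semantically equivalent but differently worded requests map to the same cached plan, which is the property the paper's evaluation (93% effective reuse rate, 93.12% latency reduction) depends on.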
