    Wei Wei, Yuan Xing, Du Junwei, Li Yuying. A Multi-agent Collaboration for Completing and Optimizing Bug Reports[J]. Journal of Computer Research and Development. DOI: 10.7544/issn1000-1239.202550693

    A Multi-agent Collaboration for Completing and Optimizing Bug Reports

    Bug reports serve as a critical foundation for developers to identify and resolve software bugs, and their quality directly influences the efficiency of software maintenance. While prior research has demonstrated that high-quality bug reports can significantly reduce repair time, information incompleteness remains a prevalent issue in open-source projects. Although current approaches leveraging machine learning and large language models (LLMs) for automatic report completion improve content completeness, they exhibit notable limitations: traditional retrieval-based methods often concatenate fragments from similar reports, leading to semantic discontinuities and logical inconsistencies, while LLM-generated content tends to be fluent but may introduce factual hallucinations. Inspired by the human expert practice of "phased processing and multi-role collaboration", this paper proposes a novel multi-agent collaborative framework for bug report completion and optimization. Our approach ensures high-quality output through three key design principles: 1) decomposing the completion task into three distinct phases—bug analysis, report completion, and quality assessment—each managed by a dedicated agent, thereby reducing the cognitive burden on individual models; 2) employing structured prompt templates to precisely guide LLMs in assuming specialized roles as domain expert agents (e.g., analyst, completer, reviewer), clearly defining responsibilities at each stage and enhancing output accuracy; 3) incorporating a dynamic feedback mechanism that enables iterative cross-validation and collaborative refinement among agents, effectively mitigating semantic drift and ensuring both logical coherence and factual consistency in the final output. Extensive experiments on four public datasets demonstrate that our method outperforms baseline approaches, achieving improvements of 10.41%, 7.52%, 13.55%, and 16.64% on BLEU, Sentence-BERT, ROUGE-L, and METEOR scores, respectively. Furthermore, manual evaluation confirms that the completed reports exhibit superior completeness, clarity, and practical utility compared to existing methods, offering robust support for bug management in open-source communities.
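    The three-phase pipeline and feedback mechanism described in the abstract can be sketched as follows. This is an illustrative outline only, not the authors' implementation: `call_llm`, the role prompts, and the "PASS" acceptance convention are all hypothetical placeholders for a real LLM backend and the paper's structured prompt templates.

```python
# Sketch of a three-agent completion pipeline with a reviewer feedback loop:
# an analyst identifies missing information, a completer drafts the filled-in
# report, and a reviewer either accepts the draft or returns feedback that
# drives another refinement round (bounded by max_rounds).

def call_llm(role_prompt: str, content: str) -> str:
    """Hypothetical stand-in for an LLM API call; a real system would
    query an actual model with the role prompt and content."""
    return f"[{role_prompt}] {content}"

ANALYST_PROMPT = "You are a bug analyst. List the fields missing from this report."
COMPLETER_PROMPT = "You are a report completer. Fill in the missing fields."
REVIEWER_PROMPT = "You are a quality reviewer. Reply PASS or give concrete feedback."

def complete_report(report: str, max_rounds: int = 3) -> str:
    analysis = call_llm(ANALYST_PROMPT, report)      # phase 1: bug analysis
    draft = call_llm(COMPLETER_PROMPT, analysis)     # phase 2: report completion
    for _ in range(max_rounds):                      # phase 3: quality assessment
        feedback = call_llm(REVIEWER_PROMPT, draft)
        if "PASS" in feedback:                       # reviewer accepts the draft
            break
        # feed the reviewer's critique back to the completer for refinement
        draft = call_llm(COMPLETER_PROMPT, draft + "\n" + feedback)
    return draft
```

    The bounded loop reflects the dynamic feedback mechanism: cross-validation between completer and reviewer continues until the reviewer accepts the draft or the round budget is exhausted, which is one way to keep iterative refinement from drifting indefinitely.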