Abstract:
The growing knowledge-storage capabilities of large language models (LLMs) have underscored their potential as knowledge bases. However, any given prompt can offer only a lower-bound estimate of the knowledge a language model encompasses. Prior prompt-learning methods in the context of Language Models as Knowledge Bases (LMs-as-KBs) have overlooked the influence of query style. We uncover a notable property: LLMs exhibit learnable preferences for query style. Leveraging this characteristic, we introduce the Adaptive query style transfer (ARES) method, which improves LMs-as-KBs performance by adapting queries to the LLM's preferences. ARES first constructs a candidate set of queries by paraphrasing the original question into various expression styles. An evaluator is then trained to learn and discern the LLM's preferences for query style, scoring the candidate set and selecting the potentially optimal query. Experiments conducted across multiple datasets demonstrate the efficacy of our approach in improving question-answering accuracy in LMs-as-KBs scenarios. Furthermore, incremental comparisons with the original model and three baseline methods show average improvements of 2.26%, 1.68%, 1.19%, and 1.17%, respectively, indicating that ARES can be used effectively in conjunction with other approaches, yielding further gains across different dimensions.
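The candidate-then-select pipeline described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: `paraphrase_candidates` and `StyleEvaluator` are hypothetical names, and in practice the candidates would come from a paraphrasing model while the evaluator would be trained on the LLM's observed answer accuracy per query style.

```python
from typing import Callable, List

def paraphrase_candidates(query: str) -> List[str]:
    """Stand-in for a paraphrasing model: produce style variants of a query.
    (Hypothetical helper; a real system would call a paraphrase generator.)"""
    return [
        query,                               # original style
        f"Please tell me: {query}",          # instructional style
        f"Q: {query} A:",                    # cloze/QA-prompt style
    ]

class StyleEvaluator:
    """Toy evaluator that scores candidates and picks the highest-scoring one.
    A trained preference model would replace the hand-written score function."""
    def __init__(self, score_fn: Callable[[str], float]):
        self.score_fn = score_fn

    def select(self, candidates: List[str]) -> str:
        # Choose the candidate the (learned) scorer predicts the LLM prefers.
        return max(candidates, key=self.score_fn)

# Example with a dummy scorer that happens to prefer the QA-prompt style.
evaluator = StyleEvaluator(score_fn=lambda q: 1.0 if q.startswith("Q:") else 0.0)
best = evaluator.select(paraphrase_candidates("Where was Marie Curie born?"))
```

The key design point is the separation of concerns: paraphrasing only widens the candidate pool, while the evaluator alone encodes the model-specific style preference, so either component can be swapped independently.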