LEMON: LLM-Driven Ensemble Reduction and Verilog Generation for Efficient Hardware Implementation
Abstract
Ensemble learning has been widely adopted to enhance the generalization performance of artificial intelligence models. However, the additional memory overhead and computational costs incurred by ensemble models limit their deployment in resource-constrained scenarios. To address this challenge, this paper proposes LEMON, a large language model (LLM)-driven framework for ensemble reduction and Verilog HDL generation, designed to enable efficient hardware implementation. By leveraging the advanced capabilities of LLMs in semantic understanding, code generation, and complex problem solving, LEMON significantly reduces the memory footprint and computational demands of ensemble models. Extensive evaluations on 20 public datasets demonstrate that, while maintaining a compact model size, LEMON achieves prediction accuracy comparable or superior to the original ensemble methods on the majority of test sets. Notably, compared to state-of-the-art ensemble reduction techniques, LEMON achieves speedups exceeding 9× and 338×, demonstrating strong scalability across diverse scenarios. Furthermore, LEMON integrates an LLM-powered automated hardware deployment pipeline that significantly simplifies the transition from high-level software models to optimized FPGA implementations. Compared to conventional random forest hardware implementations, LEMON reduces power consumption by over 64.7% and hardware resource utilization by more than 92.6%, making it highly suitable for edge computing and embedded system applications.