Quantum Knowledge Distillation for Large Language Models
Abstract
As foundational tools in natural language processing, Large Language Models (LLMs) have immense parameter scales, which make deployment and inference increasingly prohibitive, especially on resource-constrained devices. Knowledge distillation for LLMs, i.e., compressing an LLM into a smaller model, is therefore of practical importance. Owing to its strong capacity for compact parameter representation, quantum computing is regarded as a promising route to such compression. Here, we propose a Quantum knowledge Distillation model for LLMs (QD-LLM) that trains variational quantum circuits to learn from LLMs. In classical simulation, QD-LLM outperforms several mainstream distillation methods on multiple text classification tasks in both accuracy and efficiency while using only 11 qubits. The results also reveal an interesting phenomenon: the classical simulation of quantum student models may be regarded as a new class of quantum-inspired classical algorithms. We further deploy the trained circuits on the Baihua superconducting quantum processor via the Quafu platform to assess practical feasibility, and the model maintains stable inference performance despite hardware constraints such as decoherence and finite sampling. In summary, QD-LLM marks a foundational step in connecting quantum computing with LLMs, demonstrating the feasibility of quantum-native approaches for compressing and deploying models of ever-increasing scale. The code for this article is open-sourced at https://github.com/Lilingxiao-bupt/QD-LLM.
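To make the setup the abstract describes concrete, below is a minimal sketch of distilling a teacher's output distribution into an 11-qubit variational quantum circuit student, written with PennyLane and PyTorch. The angle encoding, the `StronglyEntanglingLayers` ansatz, the ansatz depth, the temperature-scaled KL objective, and all names (`student_circuit`, `distillation_loss`) and toy data here are illustrative assumptions, not the exact components of QD-LLM; see the linked repository for the authors' implementation.

```python
import pennylane as qml
import torch
import torch.nn.functional as F

n_qubits = 11   # qubit count reported in the abstract
n_layers = 3    # illustrative ansatz depth (not specified in the abstract)
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def student_circuit(features, weights):
    # Encode an 11-dimensional compressed text feature vector as rotation angles
    qml.AngleEmbedding(features, wires=range(n_qubits))
    # Hardware-efficient variational ansatz standing in for the QD-LLM circuit
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    # One expectation value per qubit; the first num_classes act as class logits
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Temperature-scaled KL divergence between student and teacher soft labels."""
    s = F.log_softmax(student_logits / temperature, dim=-1)
    t = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(s, t, reduction="batchmean") * temperature ** 2

# Toy training step: match a frozen teacher's distribution over 2 classes
num_classes = 2
weights = torch.randn(
    qml.StronglyEntanglingLayers.shape(n_layers=n_layers, n_wires=n_qubits),
    requires_grad=True,
)
optimizer = torch.optim.Adam([weights], lr=0.01)

features = torch.rand(n_qubits)             # stand-in for encoded text features
teacher_logits = torch.tensor([2.0, -1.0])  # stand-in for frozen LLM outputs

optimizer.zero_grad()
# PennyLane >= 0.30 returns a tuple of scalar tensors; stack them into a vector
out = torch.stack(list(student_circuit(features, weights)))
loss = distillation_loss(out[:num_classes].unsqueeze(0),
                         teacher_logits.unsqueeze(0))
loss.backward()
optimizer.step()
```

In a full pipeline, `features` would come from a dimensionality-reduced text representation and `teacher_logits` from the frozen LLM's forward pass; only the circuit parameters are updated, which is what makes the student model so small.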