Distilling the knowledge with quantum neural networks
Abstract
Quantum Neural Networks (QNNs) are a promising class of quantum machine learning models with potential quantum advantages when implemented on scalable, error-corrected quantum computers. However, as system sizes grow, deploying QNNs becomes challenging. As with their classical counterparts, a key obstacle to practical application is that large-scale QNNs cannot easily be deployed on smaller systems with limited resources. Here, we tackle this challenge by compressing QNNs via knowledge distillation. We demonstrate how QNNs well trained on large systems can be distilled into smaller architectures with similar configurations. We show numerically that knowledge distillation reduces the training cost of QNNs in terms of both the number of qubits and the circuit depth. Additionally, we find that a self-knowledge-distillation approach accelerates training convergence. We believe our results offer new strategies for the efficient compression and practical deployment of QNNs.
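To make the teacher-student setup concrete, below is a minimal sketch of distilling a larger variational QNN into a smaller one. It is illustrative only and not the paper's exact method: the use of PennyLane, the `teacher`/`student` circuit layouts, the `AngleEmbedding` feature map, and the mean-squared-error loss between teacher and student expectation values are all assumptions made for the example.

```python
# Hedged sketch: knowledge distillation from a large (teacher) QNN to a small
# (student) QNN. Circuit structure, loss, and hyperparameters are illustrative
# assumptions, not the configuration used in the paper.
import pennylane as qml
from pennylane import numpy as np

n_teacher, n_student = 4, 2  # large teacher QNN vs. compressed student QNN
dev_t = qml.device("default.qubit", wires=n_teacher)
dev_s = qml.device("default.qubit", wires=n_student)

def layer(params, wires):
    """One variational layer: single-qubit RY rotations plus a CNOT chain."""
    for w in wires:
        qml.RY(params[w], wires=w)
    for w in wires[:-1]:
        qml.CNOT(wires=[w, w + 1])

@qml.qnode(dev_t)
def teacher(x, params):
    qml.AngleEmbedding(x[:n_teacher], wires=range(n_teacher))
    for p in params:
        layer(p, list(range(n_teacher)))
    return qml.expval(qml.PauliZ(0))  # teacher's "soft" output

@qml.qnode(dev_s)
def student(x, params):
    qml.AngleEmbedding(x[:n_student], wires=range(n_student))
    for p in params:
        layer(p, list(range(n_student)))
    return qml.expval(qml.PauliZ(0))

def distill_loss(student_params, teacher_params, data):
    """Match the student's expectation values to the frozen teacher's."""
    loss = 0.0
    for x in data:
        loss = loss + (student(x, student_params) - teacher(x, teacher_params)) ** 2
    return loss / len(data)

# A frozen, randomly initialized teacher stands in for a well-trained large QNN.
np.random.seed(0)
teacher_params = np.array(np.random.uniform(0, np.pi, (3, n_teacher)), requires_grad=False)
student_params = np.array(np.random.uniform(0, np.pi, (2, n_student)), requires_grad=True)
data = [np.array(np.random.uniform(0, np.pi, n_teacher), requires_grad=False) for _ in range(8)]

opt = qml.GradientDescentOptimizer(stepsize=0.1)
for step in range(50):
    student_params = opt.step(lambda p: distill_loss(p, teacher_params, data), student_params)
```

In this sketch the teacher's parameters stay fixed while the student is trained only to reproduce the teacher's outputs; a self-distillation variant would instead use an earlier or identically sized copy of the same model as the teacher.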