Validating large-scale quantum machine learning: efficient simulation of quantum support vector machines using tensor networks
Abstract
We present an efficient tensor-network-based approach for simulating large-scale quantum circuits, exemplified by quantum support vector machines (QSVMs). Leveraging the cuTensorNet library on multiple GPUs, our method reduces the exponential runtime growth of conventional simulation to near-quadratic scaling with respect to the number of qubits in practical scenarios. Traditional state-vector simulations become computationally infeasible beyond approximately 50 qubits; in contrast, our simulator handles QSVMs with up to 784 qubits, executing simulations within seconds on a single high-performance GPU. Furthermore, using the Message Passing Interface (MPI) in multi-GPU environments, our method demonstrates strong linear scalability, effectively decreasing computation time as dataset sizes increase. We validate our framework on the MNIST and Fashion-MNIST datasets, achieving successful multiclass classification and highlighting the potential of QSVMs for high-dimensional data analysis. By integrating tensor-network techniques with advanced high-performance computing resources, this work demonstrates both the feasibility and scalability of simulating large-qubit quantum machine learning models, providing a valuable validation tool within the emerging Quantum-HPC ecosystem.
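To make the core idea concrete, the sketch below shows (in plain NumPy, as a stand-in for cuTensorNet) why tensor-network contraction can evaluate a QSVM kernel entry far more cheaply than a full state-vector simulation. It is a deliberately simplified toy: the feature map here is a non-entangling product of RY rotations, chosen so the overlap factorizes qubit-by-qubit, whereas the paper's circuits and contraction paths are more general. All function names and the feature map itself are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def kernel_statevector(x, y):
    """Kernel entry |<phi(x)|phi(y)>|^2 via the full 2^n state vector.
    Memory and time grow exponentially in the number of qubits n."""
    ket0 = np.array([1.0, 0.0])
    phi_x = np.array([1.0])
    phi_y = np.array([1.0])
    for xi, yi in zip(x, y):  # one qubit (one feature) at a time
        phi_x = np.kron(phi_x, ry(xi) @ ket0)
        phi_y = np.kron(phi_y, ry(yi) @ ket0)
    return abs(np.vdot(phi_x, phi_y)) ** 2

def kernel_tensornet(x, y):
    """Same kernel entry, contracted qubit-by-qubit: for this product-state
    feature map the overlap factorizes, so the cost is O(n), not O(2^n).
    This locality of contraction is what lets tensor-network simulators
    reach hundreds of qubits for suitable circuits."""
    ket0 = np.array([1.0, 0.0])
    overlap = 1.0
    for xi, yi in zip(x, y):
        overlap *= np.vdot(ry(xi) @ ket0, ry(yi) @ ket0)
    return abs(overlap) ** 2

x = np.array([0.3, 1.1, 2.0])
y = np.array([0.5, 0.9, 1.7])
print(kernel_statevector(x, y), kernel_tensornet(x, y))  # identical values
```

The resulting kernel matrix would then be passed to a classical SVM solver (e.g. with a precomputed-kernel option); since each kernel entry is independent, rows of the matrix can be distributed across GPUs with MPI, which is the source of the linear scaling in dataset size reported above.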