
Conditional diffusion-based parameter generation for quantum approximate optimization algorithm

Fanxu Meng, Xiang-Yu Zhou, Pengcheng Zhu, Yu Luo · July 17, 2024 · DOI: 10.1140/epjqt/s40507-025-00397-4
Physics


Abstract

The Quantum Approximate Optimization Algorithm (QAOA) is a hybrid quantum-classical algorithm that shows promise in efficiently solving the Max-Cut problem, a representative example of combinatorial optimization. However, its effectiveness depends heavily on the parameter optimization pipeline, where the parameter initialization strategy is nontrivial because the non-convex and complex optimization landscape abounds with low-quality local minima. Recent work on diffusion-based generation of classical neural network parameters has demonstrated that neural network training can benefit from good initial parameters produced by diffusion models. Whether a diffusion model can likewise enhance the parameter optimization and performance of QAOA by generating well-performing initial parameters, however, remains an open question. In this work, we therefore formulate the search for good initial parameters as a generative task and propose an initial parameter generation scheme based on dataset-conditioned sampling of pre-trained parameters. Concretely, a generative machine learning model, specifically a denoising diffusion probabilistic model (DDPM), is trained to learn the distribution of pre-trained parameters conditioned on the graph dataset. Intuitively, the proposed framework distills knowledge from pre-trained parameters to generate well-performing initial parameters for QAOA. To benchmark the framework, we adopt trotterized quantum annealing (TQA)-based and graph neural network (GNN) prediction-based initialization protocols as baselines. Through numerical experiments on Max-Cut instances of various sizes, we show that the conditional DDPM consistently generates high-quality initial parameters, improves convergence toward higher approximation ratios, and exhibits greater robustness against local minima than the baselines. The experimental results also indicate that a conditional DDPM trained on small problem instances extrapolates to larger ones, demonstrating the extrapolation capacity of our framework with respect to the qubit number.
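To make the objects in the abstract concrete, the following is a minimal NumPy sketch (not the paper's implementation) of the quantity that QAOA initial parameters feed into: the expected cut value of a depth-p QAOA state for a toy Max-Cut instance. The 4-node ring graph, depth p, and time step dt are illustrative choices, and the initial angles use a TQA-style linear ramp in the spirit of the paper's TQA baseline.

```python
import numpy as np

# Toy Max-Cut instance: a 4-node ring graph (an illustrative choice).
EDGES = [(0, 1), (1, 2), (2, 3), (3, 0)]
N = 4
DIM = 2 ** N

def cut_values(edges, n):
    """Cut size of every computational-basis bitstring (diagonal of the cost Hamiltonian)."""
    vals = np.zeros(2 ** n)
    for idx in range(2 ** n):
        bits = [(idx >> q) & 1 for q in range(n)]
        vals[idx] = sum(bits[i] != bits[j] for i, j in edges)
    return vals

COST = cut_values(EDGES, N)

def apply_rx_all(state, beta, n):
    """Apply the mixer layer exp(-i*beta*X) to every qubit of an n-qubit statevector."""
    c, s = np.cos(beta), -1j * np.sin(beta)
    rx = np.array([[c, s], [s, c]])
    psi = state.reshape([2] * n)
    for q in range(n):
        psi = np.tensordot(rx, psi, axes=([1], [q]))
        psi = np.moveaxis(psi, 0, q)
    return psi.reshape(-1)

def qaoa_expectation(gammas, betas):
    """Expected cut value <C> of the depth-p QAOA state, via dense statevector simulation."""
    psi = np.full(DIM, 1 / np.sqrt(DIM), dtype=complex)  # |+>^n initial state
    for g, b in zip(gammas, betas):
        psi = np.exp(-1j * g * COST) * psi   # cost layer (diagonal phase)
        psi = apply_rx_all(psi, b, N)        # mixer layer
    return float(np.real(np.vdot(psi, COST * psi)))

# TQA-style linear-ramp initial angles: gamma ramps up, beta ramps down.
# Depth p and per-layer time step dt are arbitrary illustrative values.
p, dt = 3, 0.75
ks = (np.arange(1, p + 1) - 0.5) / p
gammas0 = dt * ks
betas0 = dt * (1 - ks)
print(qaoa_expectation(gammas0, betas0))
```

A parameter-initialization scheme, whether TQA, GNN prediction, or the conditional DDPM proposed here, supplies the starting `(gammas, betas)` from which a classical optimizer then refines `qaoa_expectation`; a good start avoids the low-quality local minima the abstract describes.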
