Quantum Brain

Extrapolation method to optimize linear-ramp QAOA parameters: Evaluation of QAOA runtime scaling

Vanessa Dehn, Martin Zaefferer, Gerhard Hellstern, Florentin Reiter, Thomas Wellens·April 11, 2025
Physics


Abstract

The Quantum Approximate Optimization Algorithm (QAOA) has been suggested as a promising candidate for solving combinatorial optimization problems. Yet whether, or under what conditions, it may offer an advantage over classical algorithms remains to be proven. The standard variational form of QAOA requires a large number of circuit parameters to be optimized at sufficiently large depth, which constitutes a bottleneck for achieving a potential scaling advantage. The linear-ramp QAOA (LR-QAOA) has been proposed to address this issue, as it relies on only two parameters that have to be optimized. Building on this, we develop a method to estimate suitable values for these parameters by extrapolating from smaller problem sizes (numbers of qubits) to larger ones. We apply this method to several use cases, such as portfolio optimization, feature selection and clustering, and compare the quantum runtime scaling with that of classical methods. In the case of portfolio optimization, we demonstrate superior scaling compared to the classical runtime.
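The two ideas from the abstract can be sketched in a few lines: the linear-ramp schedule fixes all p layers from just two parameters, and those two parameters can be extrapolated from fits over small problem sizes. The sketch below assumes a common LR-QAOA convention (gamma ramping up, beta ramping down) and a simple linear fit; the paper's actual ramp convention and extrapolation model may differ.

```python
import numpy as np

def lr_qaoa_schedule(p, delta_beta, delta_gamma):
    """Linear-ramp QAOA schedule for p layers, fully determined by two
    parameters: gamma_j ramps up to delta_gamma, beta_j ramps down from
    delta_beta. (One common convention; the paper's may differ.)"""
    j = np.arange(1, p + 1)
    gammas = (j / p) * delta_gamma
    betas = (1.0 - j / p) * delta_beta
    return betas, gammas

def extrapolate_parameter(sizes, values, target_size):
    """Least-squares linear fit of optimized parameter values found at
    small problem sizes, evaluated at a larger target size. A linear
    model is an illustrative assumption, not the paper's exact fit."""
    slope, intercept = np.polyfit(sizes, values, 1)
    return slope * target_size + intercept

# Example: parameters optimized at 4, 6 and 8 qubits, extrapolated to 10.
betas, gammas = lr_qaoa_schedule(p=4, delta_beta=0.8, delta_gamma=1.2)
delta_gamma_10 = extrapolate_parameter([4, 6, 8], [1.0, 2.0, 3.0], 10)
```

This avoids the layer-by-layer optimization of standard QAOA: only `delta_beta` and `delta_gamma` are tuned (or extrapolated), regardless of depth p.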
