Quantum speed-ups for solving semidefinite relaxations of polynomial optimization
Abstract
We study quantum algorithms for approximating the values of Lasserre's hierarchy for polynomial optimization. Let $f,g_1,\ldots,g_m$ be real polynomials in $n$ variables and let $f^\star$ denote the infimum of $f$ over the semialgebraic set $S(g)=\{x: g_i(x)\ge 0\}$. Let $\lambda_k$ be the value of the order-$k$ Lasserre relaxation. Assume either (i) $f^\star=\lambda_k$ and the optimum is attained in the $\ell_1$-ball of radius $1/2$, or (ii) $S(g)$ lies in the simplex $\{x\ge 0: \sum_j x_j\le 1/2\}$ and the constraints define this simplex. After an appropriate rescaling of the coefficients, we give quantum algorithms based on the matrix multiplicative weights method that approximate $\lambda_k$ to accuracy $\varepsilon>0$ with runtimes, for fixed $k$, \[ O\!\left(n^k\varepsilon^{-4}+n^{k/2}\varepsilon^{-5}\right),\qquad O\!\left(s_g\left[n^k\varepsilon^{-4}+\Big(n^{k}+\sum_{i=1}^m n^{k-d_i}\Big)^{1/2}\varepsilon^{-5}\right]\right), \] in cases (i) and (ii) respectively, where $s_g$ bounds the sparsity of the coefficient-matching matrices associated with the constraints. Classical matrix multiplicative-weights methods scale as $O(n^{3k}\mathrm{poly}(1/\varepsilon))$ even in the unconstrained case. As an example, we obtain an $O(n\varepsilon^{-4}+\sqrt{n}\,\varepsilon^{-5})$ quantum algorithm for portfolio optimization, improving over the classical $O(n^{\omega+1}\log(1/\varepsilon))$ bound, where $\omega\approx 2.373$ is the matrix-multiplication exponent. Our approach builds on and sharpens the analysis of van Apeldoorn and Gilyén for the SDPs arising in polynomial optimization. We also show how to implement the required block encodings without QRAM. Under the stated assumptions, our method achieves a super-quadratic speedup in the problem dimension for computing Lasserre relaxation values.
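As context for the matrix multiplicative weights (MMW) framework the abstract refers to, the following is a minimal classical sketch of MMW for an SDP feasibility problem: maintain a Gibbs state $\rho \propto e^{-\eta L}$ over an accumulated loss matrix $L$ and repeatedly penalize a violated constraint. The quantum speed-ups above come from accelerating exactly this kind of Gibbs-state/trace-estimation step. The function names, the separation-oracle form, and the learning-rate choice here are illustrative assumptions, not details from the paper.

```python
import numpy as np

def sym_expm(M):
    """Matrix exponential of a symmetric matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.exp(w)) @ V.T

def mmw_feasibility(As, bs, eps, T, eta=None):
    """Matrix multiplicative weights for SDP feasibility:
    find a density matrix rho with Tr(A_i rho) >= b_i - eps for all i.
    Assumes ||A_i|| <= 1; the learning rate eta is an illustrative choice."""
    n = As[0].shape[0]
    eta = eta if eta is not None else eps / 2.0
    L = np.zeros((n, n))            # accumulated loss matrix
    for _ in range(T):
        rho = sym_expm(-eta * L)    # Gibbs state, proportional to exp(-eta*L)
        rho /= np.trace(rho)
        # separation oracle: index of a violated constraint, if any
        viol = next((i for i, (A, b) in enumerate(zip(As, bs))
                     if np.trace(A @ rho) < b - eps), None)
        if viol is None:
            return rho              # eps-feasible density matrix found
        L -= As[viol]               # push future iterates toward constraint viol
    return None                     # no eps-feasible point found in T rounds
```

For instance, with the single constraint $\mathrm{Tr}(A\rho)\ge 1/2$ for $A=\mathrm{diag}(1,-1)$ and $\varepsilon=0.1$, the iterates move weight onto the $+1$ eigenspace until the constraint holds up to $\varepsilon$.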