An Efficient Decomposition of the Carleman Linearized Burgers' Equation
Abstract
Herein, we present a polylogarithmic decomposition method to load the matrix from the linearized one-dimensional Burgers' equation onto a quantum computer. First, we use the Carleman linearization method to map the nonlinear Burgers' equation into an infinite linear system of equations, which is subsequently truncated to order $\alpha$. This new finite linear system is then embedded into a larger system of equations with the key property that its matrix can be decomposed into a linear combination of $\mathcal{O}(\log n_t + \alpha^2\log n_x)$ terms for $n_t$ time steps and $n_x$ spatial grid points. While the terms in this linear combination are not unitary, each can be implemented using a simple block encoding procedure. A numerical simulation is performed by combining our approach with the variational quantum linear solver, demonstrating that accurate solutions are possible. Finally, a resource estimate shows that the upper bounds of the Clifford and T gate counts scale as $\mathcal{O}(\alpha(\log n_x)^2)$ and $\mathcal{O}((\log n_x)^2)$, respectively. This is therefore the first explicit polylogarithmic data loading method with respect to $n_x$ and $n_t$ for a Carleman linearized system.
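To make the truncation step concrete, the following is a minimal sketch (not the paper's implementation) of building a truncated Carleman matrix for a generic quadratic ODE system $\dot{u} = F_1 u + F_2\,(u \otimes u)$, the form the spatially discretized Burgers' equation takes. The function names and the dense-matrix construction are illustrative assumptions; the paper's contribution is a decomposition that avoids forming this matrix explicitly.

```python
import numpy as np

def kron_list(mats):
    """Kronecker product of a list of matrices, left to right."""
    out = np.array([[1.0]])
    for m in mats:
        out = np.kron(out, m)
    return out

def carleman_matrix(F1, F2, alpha):
    """Dense Carleman matrix for du/dt = F1 u + F2 (u ⊗ u),
    truncated at order `alpha`.

    The Carleman variables are y_j = u^{⊗ j}, j = 1..alpha, obeying
    dy_j/dt = A_j y_j + B_j y_{j+1}, with Kronecker-sum blocks built
    from F1 (n x n) and F2 (n x n^2). Truncation drops y_{alpha+1}.
    """
    n = F1.shape[0]
    sizes = [n**j for j in range(1, alpha + 1)]
    offs = np.concatenate(([0], np.cumsum(sizes)))
    A = np.zeros((offs[-1], offs[-1]))
    I = np.eye(n)
    for j in range(1, alpha + 1):
        # Diagonal block: sum_i I^{⊗(i-1)} ⊗ F1 ⊗ I^{⊗(j-i)}
        Ajj = sum(kron_list([I] * i + [F1] + [I] * (j - 1 - i))
                  for i in range(j))
        A[offs[j-1]:offs[j], offs[j-1]:offs[j]] = Ajj
        if j < alpha:
            # Superdiagonal block couples y_j to y_{j+1} through F2
            Bj = sum(kron_list([I] * i + [F2] + [I] * (j - 1 - i))
                     for i in range(j))
            A[offs[j-1]:offs[j], offs[j]:offs[j+1]] = Bj
    return A
```

The block-bidiagonal structure visible here (diagonal blocks from $F_1$, superdiagonal blocks from $F_2$) is what the abstract's linear-combination decomposition exploits.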