Quantum Brain

Vectorized Attention with Learnable Encoding for Quantum Transformer

Ziqing Guo, Ziwen Pan, Alex Khan, Jan Balewski · August 25, 2025 · DOI: 10.48550/arXiv.2508.18464
Computer Science · Physics


Abstract

Vectorized quantum block encoding provides a way to embed classical data into Hilbert space, offering a pathway for quantum models, such as Quantum Transformers (QTs), that replace classical self-attention with quantum circuit simulations to operate more efficiently. Current QTs rely on deep parameterized quantum circuits (PQCs), rendering them vulnerable to QPU noise and thus hindering their practical performance. In this paper, we propose the Vectorized Quantum Transformer (VQT), a model that supports ideal masked-attention matrix computation through quantum approximation simulation and efficient training via a vectorized nonlinear quantum encoder, yielding shot-efficient, gradient-free quantum circuit simulation (QCS) and reduced classical sampling overhead. In addition, we demonstrate an accuracy comparison between IBM and IonQ quantum circuit simulations and competitive results on benchmark natural language processing tasks using IBM’s state-of-the-art, high-fidelity Kingston QPU. Our noisy intermediate-scale quantum (NISQ)-friendly VQT approach unlocks a novel architecture for end-to-end machine learning in quantum computing.
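
To make the core idea concrete, here is a minimal classical-simulation sketch of the general pattern the abstract describes: classical feature vectors are amplitude-encoded into normalized Hilbert-space state vectors, and a masked attention-style matrix is built from squared state overlaps (the fidelities a QPU would estimate from measurement statistics). All function names are illustrative assumptions, not the paper's actual VQT circuits or API; the paper's vectorized nonlinear encoder and quantum approximation simulation are more involved than this NumPy sketch.

```python
import numpy as np

def amplitude_encode(x: np.ndarray) -> np.ndarray:
    """Embed a classical vector into quantum amplitudes.

    Pads x to the nearest power-of-two dimension and L2-normalizes it,
    the standard amplitude-style encoding of classical data into a
    Hilbert-space state vector. Illustrative only; not the paper's
    vectorized nonlinear quantum encoder.
    """
    dim = 1 << int(np.ceil(np.log2(len(x))))
    padded = np.zeros(dim)
    padded[: len(x)] = x
    norm = np.linalg.norm(padded)
    return padded / norm if norm > 0 else padded

def masked_attention_from_overlaps(tokens: np.ndarray) -> np.ndarray:
    """Build a causal masked attention matrix from state overlaps.

    Scores are |<psi_i|psi_j>|^2, computed exactly here on a classical
    simulator; on hardware they would be estimated from shots.
    """
    states = np.stack([amplitude_encode(t) for t in tokens])
    scores = np.abs(states @ states.T) ** 2          # overlap-based similarity
    scores = scores * np.tril(np.ones_like(scores))  # causal (masked) attention
    # Row-normalize so each token's attention weights sum to 1.
    return scores / scores.sum(axis=1, keepdims=True)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    tokens = rng.normal(size=(4, 6))  # 4 tokens, 6 classical features each
    print(masked_attention_from_overlaps(tokens))
```

Because the overlaps could in principle be estimated on a device rather than produced by backpropagating through a deep PQC, this kind of construction hints at why the authors describe their approach as shot-efficient and gradient-free.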
