
Superpositional Gradient Descent: Harnessing Quantum Principles for Model Training

Ahmet Erdem Pamuk, Emir Kaan Özdemir, Şuayp Talha Kocabay · November 1, 2025 · DOI: 10.1109/QAI63978.2025.00036
cs.LG · Quantum Physics


Abstract

Large language models (LLMs) are increasingly trained with classical optimization techniques like AdamW to improve convergence and generalization. However, the mechanisms by which quantum-inspired methods enhance classical training remain underexplored. We introduce Superpositional Gradient Descent (SGD), a novel optimizer linking gradient updates with quantum superposition by injecting quantum circuit perturbations. We present a mathematical framework and implement hybrid quantum-classical circuits in PyTorch and Qiskit. On synthetic sequence classification and large-scale LLM fine-tuning, SGD converges faster and yields lower final loss than AdamW. Despite promising results, scalability and hardware constraints limit adoption. Overall, this work provides new insights into the intersection of quantum computing and deep learning, suggesting practical pathways for leveraging quantum principles to control and enhance model behavior.
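The abstract does not spell out how the quantum perturbation enters the update rule, so the following is only a minimal sketch of the general idea in PyTorch and Qiskit (the paper's stated stack): a plain gradient step augmented by noise scaled with a scalar sampled from a small superposition circuit. Every name here (quantum_perturbation, SuperpositionalSGD, the epsilon scale, the two-qubit circuit) is an illustrative assumption, not the authors' implementation.

```python
# Sketch of a quantum-perturbed gradient step, assuming the perturbation
# is a scalar derived from a small superposition circuit. All names are
# illustrative guesses, not the paper's actual implementation.
import math

import torch
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector


def quantum_perturbation(n_qubits: int = 2) -> float:
    """Return a scalar in [-1, 1] derived from a superposed state."""
    qc = QuantumCircuit(n_qubits)
    qc.h(range(n_qubits))  # put all qubits into uniform superposition
    for q in range(n_qubits):
        qc.ry(float(torch.rand(()) * math.pi), q)  # random rotations
    # Exact statevector simulation; on hardware this would be a
    # shot-based estimate instead.
    probs = Statevector.from_instruction(qc).probabilities()
    # Map the probability of the all-zeros basis state to [-1, 1].
    return 2.0 * float(probs[0]) - 1.0


class SuperpositionalSGD(torch.optim.Optimizer):
    """Plain gradient descent plus a quantum-scaled noise term."""

    def __init__(self, params, lr: float = 1e-2, epsilon: float = 1e-3):
        super().__init__(params, dict(lr=lr, epsilon=epsilon))

    @torch.no_grad()
    def step(self, closure=None):
        for group in self.param_groups:
            noise = quantum_perturbation()  # one circuit sample per step
            for p in group["params"]:
                if p.grad is None:
                    continue
                p.add_(p.grad, alpha=-group["lr"])  # classical step
                # Quantum-derived perturbation of the update.
                p.add_(torch.randn_like(p), alpha=group["epsilon"] * noise)
```

Under these assumptions the optimizer drops in where torch.optim.SGD would: construct it with model.parameters(), then run the usual loss.backward(); optimizer.step() loop.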
