
Programming quantum computers with large language models

Elena R. Henderson, Jessie M. Henderson, Joshua Ange, Mitchell A. Thornton · May 30, 2025 · DOI: 10.1117/12.3049666
Physics · Engineering


Abstract

Large language models (LLMs) promise transformative change to fields as diverse as medical diagnosis, legal services, and software development. One reason for such an impact is LLMs’ ability to make highly technical endeavors more accessible to a broader audience. Accessibility has long been a goal for the growing fields of quantum computing, informatics, and engineering, especially as more quantum systems become publicly available via cloud interfaces. Between programming quantum computers and using LLMs, the latter seems the more accessible task: while leveraging an LLM’s fullest potential requires experience with prompt engineering, any literate person can provide queries and read responses. By contrast, designing and executing quantum programs—outside of those available online—requires significant background knowledge, from selection of operations for algorithm implementation to configuration choices for particular hardware specifications and providers. Current research is exploring LLM utility for classical software development, but there has been relatively little investigation into the same for quantum programming. Consequently, this work is a first look at how well an uncustomized, publicly available LLM can write straightforward quantum circuits. We examine how well OpenAI’s ChatGPT (GPT-4) can write quantum circuits for two hardware providers: the superconducting qubit machines of IBM and the photonic devices of Xanadu. We find that ChatGPT currently fares substantially better with the former.
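The "straightforward quantum circuits" the abstract refers to are on the scale of textbook examples such as Bell-state preparation. As an illustration of that scale (this is not a circuit taken from the paper), the sketch below simulates a two-qubit Bell-state circuit with a plain NumPy statevector, independent of IBM's or Xanadu's SDKs:

```python
import numpy as np

# Single-qubit Hadamard and two-qubit CNOT (big-endian qubit ordering:
# qubit 0 is the most significant bit of the basis-state index).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Start in |00>.
state = np.zeros(4)
state[0] = 1.0

# Apply H to qubit 0, then CNOT with qubit 0 as control.
state = np.kron(H, np.eye(2)) @ state
state = CNOT @ state

# Measurement probabilities: ~0.5 each for |00> and |11>.
probs = np.abs(state) ** 2
print(probs)
```

The same circuit would be two library calls in a provider SDK (e.g. `h(0)` and `cx(0, 1)` in IBM's Qiskit); the point of the NumPy version is only to show the amount of structure an LLM must get right even for a minimal circuit.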
