
Large Language Model Scaling Laws for Neural Quantum States in Quantum Chemistry

Oliver Knitter, Dan Zhao, Stefan Leichenauer, Shravan Veerapaneni · September 16, 2025

cs.LG · cs.CE · Quantum Physics


Abstract

Scaling laws have been used to describe how large language model (LLM) performance scales with model size, training data size, or amount of computational resources. Motivated by the fact that neural quantum states (NQS) have increasingly adopted LLM-based components, we seek to understand NQS scaling laws, thereby shedding light on the scalability and optimal performance-resource trade-offs of NQS ansätze. In particular, we identify scaling laws that predict performance, as measured by absolute error and V-score, for transformer-based NQS as a function of problem size in second-quantized quantum chemistry applications. By performing an analogous compute-constrained optimization of the fitted parametric curves, we find that the relationship between model size and training time depends strongly on the loss metric and the ansatz, and does not follow the approximately linear relationship found for language models.
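To make the procedure concrete, the sketch below illustrates the general workflow of fitting a parametric scaling-law curve and then performing a compute-constrained optimization over it. The abstract does not specify the functional form, variables, or coefficients used in the paper; the Chinchilla-style power law, the synthetic data, and the budget C = N × D here are assumptions chosen purely for illustration.

```python
# Illustrative sketch only (not the paper's actual fit): assume an error model
# err(N, D) ~ E + A/N^a + B/D^b, with N = model size and D = training steps,
# fit it to observed runs, then minimize predicted error under a fixed budget C = N*D.
import numpy as np
from scipy.optimize import curve_fit, minimize_scalar

def scaling_law(x, E, A, B, a, b):
    """Assumed parametric curve: irreducible error plus two power-law terms."""
    N, D = x
    return E + A / N**a + B / D**b

# Hypothetical measurements: (model size, training steps) -> observed error.
N_obs = np.array([1e5, 1e5, 1e6, 1e6, 1e7, 1e7])
D_obs = np.array([1e3, 1e4, 1e3, 1e4, 1e3, 1e4])
err_obs = scaling_law((N_obs, D_obs), 1e-4, 5.0, 2.0, 0.4, 0.3)  # synthetic data

# Fit the assumed curve to the observations.
params, _ = curve_fit(scaling_law, (N_obs, D_obs), err_obs,
                      p0=[1e-4, 1.0, 1.0, 0.5, 0.5], maxfev=20000)

def optimal_split(C, params):
    """For a fixed compute budget C = N * D, find the N minimizing predicted error."""
    res = minimize_scalar(
        lambda logN: scaling_law((np.exp(logN), C / np.exp(logN)), *params),
        bounds=(np.log(1e4), np.log(1e9)), method="bounded")
    N_star = np.exp(res.x)
    return N_star, C / N_star

N_star, D_star = optimal_split(1e10, params)
print(f"Compute-optimal allocation under the assumed budget: N*={N_star:.3g}, D*={D_star:.3g}")
```

Repeating the minimization across a range of budgets traces out how the compute-optimal model size and training time scale together, which is the kind of relationship the abstract reports as metric- and ansatz-dependent for NQS.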
