
Breaking concentration barriers for quantum extreme learning on digital quantum processors

Timothée Dao, Ege Yilmaz, Ibrahim Shehzad, Christophe Pere, Kumar Ghosh, Isabelle Wittmann, Thomas Brunschwiler, Giorgio Cortiana, Corey O'Meara, Stefan Woerner, Francesco Tacchino
March 13, 2026
Quantum Physics


Abstract

Reservoir computing leverages rich, non-linear dynamics to process temporal data. Quantum variants promise enhanced expressivity from high-dimensional Hilbert spaces, yet their practical applicability is hindered by hardware noise and concentration effects that can erase input-output distinguishability at large system sizes. In this work, we present and experimentally demonstrate a Quantum Extreme Learning Machine (QELM) tailored to state-of-the-art superconducting platforms, employing up to 124 qubits and circuits with more than 5,000 two-qubit gates on IBM Quantum computers. We introduce a practical multi-objective hyperparameter tuning strategy that jointly monitors observable variability, capacity, and task performance to identify noise-robust operating points. In addition, we develop a local eigentask analysis that enables computationally efficient feature selection and effective information retrieval. We report evidence of a regime of optimality that is identifiable at small scales and transferable across tasks and larger systems, and we achieve performances competitive with leading classical baselines on representative benchmarks for time-series forecasting and satellite image classification. Together, our results establish a viable and robust framework for large-scale, pre-fault-tolerant quantum machine learning and provide a foundation for extending reservoir-based methods to more expressive architectures and real-world scenarios.
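The core mechanism behind an extreme learning machine is that the feature map (here, the quantum reservoir) stays fixed and untrained, and only a linear readout is fit to the measured features. The sketch below illustrates that training structure with a random classical feature map standing in for the quantum reservoir's observable expectation values; all sizes, names, and the toy task are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task (illustrative, not from the paper).
n_train, n_test, d_in, d_feat = 200, 50, 3, 64
X_train = rng.standard_normal((n_train, d_in))
X_test = rng.standard_normal((n_test, d_in))
true_w = rng.standard_normal(d_in)
y_train = np.sin(X_train @ true_w)
y_test = np.sin(X_test @ true_w)

# Fixed random feature map: a classical stand-in for the quantum
# reservoir's measured observables. It is sampled once and never trained.
W = rng.standard_normal((d_feat, d_in))
b = rng.uniform(0.0, 2.0 * np.pi, d_feat)

def features(X):
    # tanh keeps features bounded, loosely mimicking expectation
    # values confined to [-1, 1].
    return np.tanh(X @ W.T + b)

# Only the linear readout is trained, via closed-form ridge regression.
Phi = features(X_train)
lam = 1e-3
readout = np.linalg.solve(Phi.T @ Phi + lam * np.eye(d_feat), Phi.T @ y_train)

pred = features(X_test) @ readout
mse = np.mean((pred - y_test) ** 2)
```

The ridge regularizer `lam` plays a role analogous to the noise-robust operating points discussed in the abstract: a larger value damps the readout's sensitivity to unreliable (e.g. concentrated or noisy) features at the cost of expressivity.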