Learning functions of quantum states with distributed architectures
Abstract
Distributed architectures are gaining prominence in quantum machine learning as a means to overcome hardware limitations and enable scalable quantum information processing. In this context, we analyze the design and performance of distributed Quantum Extreme Learning Machine (QELM) architectures for learning functions of quantum states directly from data, restricting measurements to easily implementable projective measurements in the computational basis. The aim is to determine which schemes can effectively recover specific properties of input quantum states, including both linear and nonlinear features, while also quantifying the resource requirements in terms of measurements and reservoir dimensionality. For linear (quantum) tasks, we compare a standard three-layer QELM with a spatially multiplexed architecture composed of multiple independent three-layer units, showing a linear reduction in resource requirements per unit. For nonlinear properties, we examine the multiple-injection architecture and introduce a novel distributed design that incorporates entanglement between subsystems within a spatially multiplexed framework, evaluating its performance through the reconstruction of complex nonlinear quantities such as polynomial targets, Rényi entropy, and entanglement measures. Our results demonstrate that the distributed design enables the reconstruction of higher-order nonlinearities by increasing the number of interacting subsystems with reduced resources, rather than increasing the size of an individual reservoir, providing a scalable and hardware-efficient route to quantum property learning.
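To make the basic QELM pipeline concrete, the following is a minimal toy sketch (not the paper's implementation) of a single three-layer unit: an input state is injected into a fixed random reservoir, all qubits are measured projectively in the computational basis, and the resulting outcome probabilities are fed to a trained classical linear readout. All sizes (one input qubit, three reservoir qubits), the Haar-random reservoir dynamics, and the ridge-regression readout are illustrative assumptions; the target here is a simple linear functional of the input state, the expectation value of Pauli Z.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(d, rng):
    # Haar-random unitary via QR decomposition of a complex Gaussian matrix
    z = (rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    ph = np.diagonal(r) / np.abs(np.diagonal(r))
    return q * ph  # fix column phases for the correct Haar measure

n_res = 3                         # reservoir qubits (illustrative size)
d_in, d_res = 2, 2 ** n_res
U = haar_unitary(d_in * d_res, rng)   # fixed, untrained reservoir dynamics
res0 = np.zeros(d_res); res0[0] = 1.0  # reservoir initialized in |0...0>

def features(psi_in):
    # inject input, evolve, measure every qubit in the computational basis;
    # the outcome probabilities are the (linear-in-rho) feature vector
    psi = U @ np.kron(psi_in, res0)
    return np.abs(psi) ** 2

def random_qubit(rng):
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    return v / np.linalg.norm(v)

Z = np.diag([1.0, -1.0])
train = [random_qubit(rng) for _ in range(200)]
X = np.array([features(s) for s in train])
y = np.array([np.real(s.conj() @ Z @ s) for s in train])  # target: <Z>

# trained layer: classical linear readout via ridge regression
lam = 1e-6
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

test = [random_qubit(rng) for _ in range(50)]
Xt = np.array([features(s) for s in test])
yt = np.array([np.real(s.conj() @ Z @ s) for s in test])
err = np.max(np.abs(Xt @ w - yt))
```

Because the measurement probabilities are linear in the input density matrix, a generic random reservoir makes them informationally complete on the single-qubit input, so the linear readout recovers `<Z>` essentially exactly; recovering nonlinear targets is what motivates the multiple-injection and entangled multiplexed designs discussed above.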