Measuring a Quantum Measure Exceeding Unity
Abstract
The history-based formalism known as Quantum Measure Theory (QMT) generalizes the concept of a probability measure so as to incorporate quantum interference. The resulting \textit{quantum measure} $\mu$ is defined for arbitrary events (sets of histories), not just for observables at a fixed moment of time. Thanks to interference effects, $\mu$ can exceed unity, exhibiting its non-classical nature in a particularly striking manner. Here, in an optical experiment, we demonstrate an ancilla-based filtering scheme that gives operational meaning to the quantum measure. For a specific photonic event $E$, we report a measured value of $\mu(E)=1.172$, which within errors agrees with the theoretical value of $5/4$ while exceeding the maximum value permissible for a classical probability (namely $1$) by about $13$ $\sigma$-equivalent (percentile-based) units. The directly observed quantity is an ordinary detector probability $p_D\le 1$ (or, with laser light, an equivalent power ratio); the value $\mu(E)>1$ is inferred via the calibrated relation $\mu(E)=2p_D$ for our filter. If an unconventional theoretical concept is to play a role in meeting the foundational challenges of quantum theory, it seems important to bring it into contact with experiment as much as possible. Our experiment does this for the quantum measure.
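For orientation, here is a sketch using the standard QMT definitions due to Sorkin, which the abstract itself does not spell out. Schematically, for fine-grained particle histories with amplitudes $a(\gamma)$, the quantum measure of an event $A$ is the diagonal of the decoherence functional $D$:
\[
\mu(A) \;=\; D(A,A), \qquad D(A,B) \;=\; \sum_{\gamma\in A}\,\sum_{\bar\gamma\in B} a(\gamma)\,a(\bar\gamma)^{*}\,\delta_{\gamma_f,\bar\gamma_f},
\]
where $\delta_{\gamma_f,\bar\gamma_f}$ restricts the double sum to pairs of histories sharing the same final endpoint. Since $D$ is bi-additive and Hermitian, a two-history event expands as
\[
\mu(\{\gamma_1,\gamma_2\}) \;=\; \mu(\{\gamma_1\}) + \mu(\{\gamma_2\}) + 2\,\mathrm{Re}\,D(\{\gamma_1\},\{\gamma_2\}),
\]
so a positive (constructive) interference cross term can push $\mu$ above $1$, which no classical, additive probability measure permits. Plugging the quoted numbers into the calibrated filter relation gives the implied detector probabilities: the measured $\mu(E)=1.172$ corresponds to a directly observed $p_D=\mu(E)/2=0.586$, while the theoretical $\mu(E)=5/4$ would correspond to $p_D=0.625$.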