Image Classification Using Quantum Inference on the D-Wave 2X
Abstract
We use a quantum annealing D-Wave 2X computer to obtain solutions to NP-hard sparse coding problems. To reduce the dimensionality of the sparse coding problem to fit on the quantum D-Wave 2X hardware, we passed downsampled MNIST images through a bottleneck autoencoder. To establish a benchmark for classification performance on this reduced dimensional data set, we built two deep convolutional neural networks (DCNNs). The first DCNN used an AlexNet-like architecture and the second a state-of-the-art residual network (RESNET) model, both implemented in TensorFlow. The two DCNNs yielded classification scores of 94.54 ± 0.7% and 98.8 ± 0.1%, respectively. As a control, we showed that both DCNN architectures produced near-state-of-the-art classification performance $(\sim99\%)$ on the original MNIST images. To obtain a set of optimized features for inferring sparse representations of the reduced dimensional MNIST dataset, we imprinted on a random set of 47 image patches and then applied an off-line unsupervised learning algorithm, using stochastic gradient descent to optimize for sparse coding. Our single layer of sparse coding matched the stride and patch size of the first convolutional layer of the AlexNet-like DCNN and contained 47 fully-connected features, 47 being the maximum number of dictionary elements that could be embedded onto the D-Wave 2X hardware. When the sparse representations inferred by the D-Wave 2X were passed to a linear support vector machine, we obtained a classification score of 95.68%. We found that the classification performance supported by quantum inference was maximal at an optimal level of sparsity corresponding to a critical value of the sparsity/reconstruction error trade-off parameter that previous work has associated with a second order phase transition, an observation supported by a free energy analysis of D-Wave energy states.
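The sparsity/reconstruction-error trade-off referred to above is conventionally expressed as an energy function; a sketch in standard sparse coding notation (the symbols $\Phi$, $a$, and $\lambda$ are ours, not fixed by the abstract). With binary coefficients $a_i \in \{0,1\}$, as required for annealing hardware, the objective is

$$
E(a) = \|x - \Phi a\|_2^2 + \lambda \sum_i a_i ,
$$

where $x$ is the input patch, the columns $\phi_i$ of $\Phi$ are the dictionary elements, and $\lambda$ is the trade-off parameter. Expanding the quadratic and using $a_i^2 = a_i$ gives a QUBO form directly embeddable on an annealer:

$$
E(a) = \sum_i \left( \|\phi_i\|^2 - 2\, x^\top \phi_i + \lambda \right) a_i \;+\; 2 \sum_{i<j} \left( \phi_i^\top \phi_j \right) a_i a_j \;+\; \text{const.}
$$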
We mimicked a transfer learning protocol by feeding the D-Wave representations into a multilayer perceptron (MLP), yielding 98.48% classification performance. The classification performance supported by a single layer of quantum inference was superior to that supported by a classical matching pursuit algorithm set to the same level of sparsity. Whereas the classification performance of both DCNNs declined as the number of training examples was reduced, the classification performance supported by quantum inference was insensitive to the number of training examples. We thus conclude that quantum inference supports classification of reduced dimensional MNIST images exceeding that of a size-matched AlexNet-like DCNN and nearly equivalent to a state-of-the-art RESNET DCNN.
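The classical baseline mentioned above, matching pursuit, greedily builds a sparse code by repeatedly selecting the dictionary atom most correlated with the current residual. A minimal sketch follows; it assumes unit-norm dictionary columns and a fixed sparsity budget, and is our illustration rather than the authors' implementation:

```python
import numpy as np

def matching_pursuit(x, D, n_nonzero):
    """Greedy matching pursuit: approximate x as a sparse linear
    combination of the columns of dictionary D, stopping after
    n_nonzero atom selections. Columns of D are assumed unit-norm."""
    residual = x.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # Pick the atom most correlated with the residual.
        correlations = D.T @ residual
        k = int(np.argmax(np.abs(correlations)))
        # Add its contribution and remove it from the residual.
        coeffs[k] += correlations[k]
        residual -= correlations[k] * D[:, k]
    return coeffs

# Toy usage: with an orthonormal dictionary, matching pursuit
# recovers the two nonzero coefficients exactly.
D = np.eye(4)
x = np.array([0.0, 2.0, 0.0, -1.0])
a = matching_pursuit(x, D, n_nonzero=2)
```

Fixing `n_nonzero` makes the comparison fair: the sparse codes from matching pursuit and from quantum inference carry the same number of active dictionary elements.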