Quantum Brain

An Interpretable Quantum Adjoint Convolutional Layer for Image Classification

Ren-Xin Zhao, Shi Wang, Yaonan Wang · April 26, 2024 · DOI: 10.1109/TCYB.2025.3567090
Medicine · Physics · Computer Science


Abstract

The interpretability of quantum machine learning (QML) refers to the capability to provide clear and understandable explanations for the predictions and decision-making processes of QML models. However, most quantum convolutional layers (QCLs) use closed-box structures that are inherently devoid of interpretability, leading to opaque operating principles and suboptimal mappings of classical data. This significantly undermines the reliability of QML models. In addition, most current work on QML interpretability focuses on post hoc explanations, seriously neglecting the importance of exploring intrinsic causes. To tackle these challenges, we introduce the quantum adjoint convolution operation (QACO). It is an intrinsically interpretable scheme based on quantum evolution: its quantum mapping precisely corresponds to the positions and pixel values of the image, and its principle is equivalent to the Frobenius inner product (FIP). Furthermore, we extend QACO into the quantum adjoint convolutional layer (QACL) by integrating the quantum phase estimation (QPE) algorithm, enabling the parallel computation of all FIPs. Experimental results on the PennyLane and TensorFlow platforms demonstrate that our method achieves 6.3%, 3.4%, and 2.9% higher average test accuracy on the Fashion MNIST, MNIST, and DermaMNIST datasets, respectively, compared to classical and uninterpretable quantum counterparts, while maintaining 73.3% accuracy under Gaussian noise, showcasing its generalizability and resilience in practical scenarios.
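The abstract's central claim is that the quantum adjoint convolution's principle is equivalent to the Frobenius inner product: each output of a convolution is the FIP of the kernel with one image patch. The sketch below illustrates that classical equivalence only (it does not implement the quantum circuit or QPE); the function name `conv2d_as_fip` is hypothetical.

```python
import numpy as np

def conv2d_as_fip(image, kernel):
    """Valid, stride-1 2-D convolution written explicitly as a sequence
    of Frobenius inner products <patch, kernel>_F = sum(patch * kernel).

    Classical illustration of the FIP equivalence described in the
    abstract; the quantum version computes these inner products in
    parallel via quantum phase estimation.
    """
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + kh, j:j + kw]
            # Frobenius inner product of the patch and the kernel
            out[i, j] = np.sum(patch * kernel)
    return out
```

For example, on a 3x3 image with a 2x2 identity kernel, each output entry is the sum of the two diagonal pixels of the corresponding patch, matching a standard (cross-correlation-style) convolution.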
