
Optimizing Quantum Federated Learning Based on Federated Quantum Natural Gradient Descent

Jun Qi, Xiao-Lei Zhang, Javier Tejedor · February 27, 2023 · DOI: 10.1109/ICASSP49357.2023.10094634
Physics · Computer Science


Abstract

Quantum federated learning (QFL) is a quantum extension of the classical federated learning model across multiple local quantum devices. An efficient optimization algorithm is always expected to minimize the communication overhead among different quantum participants. In this work, we propose an efficient optimization algorithm, namely federated quantum natural gradient descent (FQNGD), and apply it to a QFL framework composed of variational quantum circuit (VQC)-based quantum neural networks (QNNs). Compared with stochastic gradient descent methods such as Adam and Adagrad, the FQNGD algorithm requires far fewer training iterations for the QFL model to converge. Moreover, it can significantly reduce the total communication overhead among local quantum devices. Our experiments on a handwritten digit classification dataset demonstrate the effectiveness of FQNGD for the QFL framework in terms of a faster convergence rate on the training set and higher accuracy on the test set.
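The core idea behind quantum natural gradient descent is to precondition each gradient step with the (regularized) inverse of the Fubini-Study metric tensor of the VQC, so that updates follow the geometry of the quantum state space rather than the raw parameter space. The sketch below illustrates what one communication round of a federated variant of this could look like, assuming the server aggregates per-client natural gradients weighted by local dataset size. The client tuple layout, the grad_fn/metric_fn callables, and the weighting rule are illustrative assumptions, not the paper's exact update rule; in PennyLane, for instance, a metric_fn for a QNode can be obtained via qml.metric_tensor.

```python
import numpy as np

def fqngd_round(theta, clients, lr=0.1, eps=1e-6):
    """One communication round of a sketched federated quantum
    natural gradient descent update.

    theta   : np.ndarray of shape (d,), current global VQC parameters
    clients : list of (n_samples, grad_fn, metric_fn) tuples, where
              grad_fn(theta) returns the local loss gradient (d,) and
              metric_fn(theta) returns the local Fubini-Study metric
              tensor (d, d). These names are assumptions for
              illustration, not the paper's interface.
    """
    total = sum(n for n, _, _ in clients)
    update = np.zeros_like(theta)
    for n, grad_fn, metric_fn in clients:
        g = grad_fn(theta)          # local loss gradient
        F = metric_fn(theta)        # local Fubini-Study metric tensor
        # Regularize F before the solve -- it is often ill-conditioned.
        nat_g = np.linalg.solve(F + eps * np.eye(len(theta)), g)
        # Assumed aggregation: weight each client's natural gradient
        # by its share of the total training data.
        update += (n / total) * nat_g
    return theta - lr * update
```

Relative to a vanilla federated SGD round, the extra per-client cost is estimating the metric tensor and one linear solve per round, which is what would buy the faster convergence, and hence the fewer communication rounds, that the abstract claims.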
