Posted by Mehrad Sahebi, Alice Barthe, Yudai Suzuki, Zoë Holmes, Michele Grossi (May 23, 2025).
Abstract: In the quest for quantum advantage, a central question is under what conditions classical algorithms can achieve performance comparable to that of quantum algorithms, a concept known as dequantization. Random Fourier features (RFFs) have demonstrated potential for dequantizing certain quantum neural networks (QNNs) applied to regression tasks, but their applicability to other learning problems and architectures remains unexplored. In this work, we derive bounds on the generalization performance gap between classical RFF models and quantum models for regression and classification tasks with both QNN and quantum kernel architectures. We support our findings with numerical experiments that illustrate the practical dequantization of existing quantum kernel-based methods. Our findings not only broaden the applicability of RFF-based dequantization but also enhance the understanding of potential quantum advantages in practical machine-learning tasks.
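For readers unfamiliar with the classical side of this comparison: random Fourier features replace an implicit shift-invariant kernel with an explicit, low-dimensional random feature map, after which a plain linear model is trained. The sketch below is not from the paper; it is a minimal, hedged illustration of the standard Rahimi-Recht construction for a Gaussian kernel with ridge regression, using made-up toy data and hyperparameters (`n_features`, `sigma`, `lam`) chosen only for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_rff_map(d, n_features=500, sigma=1.0, rng=rng):
    """Build a fixed random-Fourier-feature map z: R^d -> R^n_features that
    approximates the Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    W = rng.normal(scale=1.0 / sigma, size=(d, n_features))  # frequencies omega ~ N(0, sigma^-2 I)
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)       # random phase offsets
    return lambda X: np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Toy 1-D regression problem (illustrative data only)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.normal(size=200)

z = make_rff_map(d=1)
Z = z(X)                                   # explicit features in place of a kernel matrix
lam = 1e-3                                 # ridge regularization strength
w = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ y)

X_test = np.linspace(-3, 3, 50).reshape(-1, 1)
y_pred = z(X_test) @ w                     # predictions from the linear RFF model
```

The point of such a construction in the dequantization context is that, once a quantum model's effective kernel can be well approximated by a feature map of this kind, training and prediction reduce to classical linear algebra.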

