Erik Recio-Armengol, Shahnawaz Ahmed, Joseph Bowles (Mar 06 2025).
Abstract: We propose an approach to generative quantum machine learning that overcomes the fundamental scaling issues of variational quantum circuits. The core idea is to use a class of generative models based on instantaneous quantum polynomial (IQP) circuits, which we show can be trained efficiently on classical hardware. Although training is classically efficient, sampling from these circuits is widely believed to be classically hard, so computational advantages are possible when sampling from the trained model on quantum hardware. By combining our approach with a data-dependent parameter initialisation strategy, we avoid barren plateaus and circumvent the poor scaling of gradient estimation that plagues traditional approaches to quantum circuit optimisation. We investigate and evaluate our approach on a number of real and synthetic datasets, training models with up to one thousand qubits and hundreds of thousands of parameters. We find that the quantum models can successfully learn from high-dimensional data and perform surprisingly well compared to simple energy-based classical generative models trained with a similar amount of hyperparameter optimisation. Overall, our work demonstrates that a path to scalable quantum generative machine learning exists and can be investigated today at large scales.
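To make the central object concrete, here is a minimal sketch (assuming PennyLane; the gate layout, parameter shapes, and device choice are illustrative assumptions, not the paper's exact architecture) of sampling bitstrings from a parameterised IQP circuit: Hadamards on every qubit, a block of commuting diagonal phase gates, Hadamards again, then a computational-basis measurement. The paper's classically efficient training procedure is not shown here.

```python
# Sketch only: sample from a parameterised IQP-style circuit.
# Not the authors' code; layout and names are illustrative.
import numpy as np
import pennylane as qml

n_qubits = 4  # the paper scales to ~1000 qubits; kept small here

dev = qml.device("default.qubit", wires=n_qubits, shots=100)

@qml.qnode(dev)
def iqp_sampler(singles, pairs):
    # IQP structure: H layer, diagonal block, H layer, measure.
    for w in range(n_qubits):
        qml.Hadamard(wires=w)
    # The diagonal gates all commute (hence "instantaneous"):
    # single-qubit Z phases...
    for w in range(n_qubits):
        qml.RZ(singles[w], wires=w)
    # ...and two-qubit ZZ phases on all pairs.
    k = 0
    for i in range(n_qubits):
        for j in range(i + 1, n_qubits):
            qml.IsingZZ(pairs[k], wires=[i, j])
            k += 1
    for w in range(n_qubits):
        qml.Hadamard(wires=w)
    return qml.sample()  # bitstrings drawn from the model distribution

rng = np.random.default_rng(0)
singles = rng.uniform(0.0, 2.0 * np.pi, n_qubits)
pairs = rng.uniform(0.0, 2.0 * np.pi, n_qubits * (n_qubits - 1) // 2)
print(iqp_sampler(singles, pairs))  # (100, 4) array of sampled bitstrings
```

Sampling such circuits is where the conjectured quantum advantage lives; the paper's contribution is that the gradients needed to fit the phase parameters to data can nonetheless be estimated efficiently on classical hardware.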