Overview
Quantum error mitigation (QEM) aims to recover accurate physical quantities from noisy quantum devices without requiring full fault-tolerant error correction. Given a quantum state ρ affected by a noise channel 𝒩, the measured observable is
- noisy: ⟨O⟩ₙ = Tr[O 𝒩(ρ)],
- ideal: ⟨O⟩ᵢ = Tr[Oρ].
The goal of QEM is to design a mapping M such that:
M(⟨O⟩ₙ) ≈ ⟨O⟩ᵢ.
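As a minimal illustration of the noisy/ideal gap and the mapping M, the following NumPy sketch prepares a single-qubit state |+⟩, applies a depolarizing channel, and compares Tr[Oρ] with Tr[O 𝒩(ρ)]. The closed-form rescaling used for M is specific to depolarizing noise and is only meant to make the objective concrete:

```python
import numpy as np

# Single-qubit illustration: state |+>, observable X, depolarizing noise.
X = np.array([[0, 1], [1, 0]], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(plus, plus.conj())          # ideal state rho

p = 0.3
rho_noisy = (1 - p) * rho + p * np.eye(2) / 2   # N_dep(rho)

ideal = np.trace(X @ rho).real             # <O>_i = Tr[O rho]       -> 1.0
noisy = np.trace(X @ rho_noisy).real       # <O>_n = Tr[O N(rho)]    -> 1 - p

# For this channel the exact mitigation map M is a rescaling by 1/(1 - p);
# learned models aim to recover such maps for noise that is not known in closed form.
mitigated = noisy / (1 - p)
```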
Existing approaches often use linear models, zero-noise extrapolation, or small neural networks. In this challenge, participants will build next-generation, data-driven QEM models capable of learning complex noise patterns and outperforming current ANN-based and classical QEM strategies.
What Participants Will Do
Participants will construct a full QEM pipeline based on three main components: data generation, model design, and benchmarking. The aim is to develop a mapping
fθ: xₙ → x̂ᵢ
where xₙ represents noisy measurement statistics (bitstring probabilities, expectation vectors, or raw counts) and x̂ᵢ is the model’s estimate of the ideal value.
1. Create a Data-Driven QEM Dataset
Participants will generate paired data {(xₙᵏ, xᵢᵏ)}ₖ from quantum circuits with varying depths, architectures, and noise strengths. For a circuit U(θ) preparing ρ = U(θ)|0…0⟩⟨0…0|U(θ)†:
- compute ideal measurement statistics
- pᵢ(b) = ⟨b|ρ|b⟩, or
- ideal expectation values ⟨O⟩ᵢ = Tr[Oρ],
- inject noise channels 𝒩 such as
- depolarizing: 𝒩_dep(ρ) = (1 − p)ρ + p I / 2ⁿ,
- amplitude damping,
- readout confusion/noise,
- sample noisy observables xₙ through repeated measurements.
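The data-generation loop above can be sketched as follows. This is a simplified NumPy-only sketch: circuits are stand-in Haar-style random unitaries, and noise is modeled as global depolarizing acting directly on the bitstring probabilities; a real pipeline would use a circuit simulator with gate-level noise (Qiskit, PennyLane, Cirq):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(dim, rng):
    """Approximately Haar-random unitary via QR of a complex Gaussian matrix."""
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))             # fix column phases

n_qubits, shots, lam = 2, 4096, 0.2
dim = 2 ** n_qubits

dataset = []
for k in range(10):
    U = random_unitary(dim, rng)
    psi = U[:, 0]                          # U |0...0>
    p_ideal = np.abs(psi) ** 2             # <b|rho|b> for a pure state
    # Global depolarizing noise acts directly on the probabilities:
    p_noisy = (1 - lam) * p_ideal + lam / dim
    counts = rng.multinomial(shots, p_noisy)
    x_noisy = counts / shots               # sampled noisy frequencies
    dataset.append((x_noisy, p_ideal))     # one paired training example
```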
Participants may choose to represent xₙ using:
- Bitstring frequencies pₙ(b),
- Expectation-value vectors vₙ = (⟨Z₁⟩, ⟨Z₂⟩, …),
- Correlation terms ⟨ZᵢZⱼ⟩, or
- Custom feature encodings.
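The expectation-vector and correlation encodings above can be computed directly from bitstring probabilities. A sketch, assuming the standard convention that bit 0 maps to eigenvalue +1 and bit 1 to −1 (the helper name `z_features` is illustrative):

```python
import numpy as np
from itertools import combinations

def z_features(p, n_qubits):
    """Compute <Z_i> and <Z_i Z_j> from a bitstring probability vector p."""
    dim = 2 ** n_qubits
    # bits[b, i] = i-th bit of index b (most-significant qubit first)
    bits = (np.arange(dim)[:, None] >> np.arange(n_qubits)[::-1]) & 1
    signs = 1 - 2 * bits                   # +/-1 eigenvalues, shape (dim, n_qubits)
    z_single = signs.T @ p                 # vector of <Z_i>
    z_pairs = {(i, j): float(p @ (signs[:, i] * signs[:, j]))
               for i, j in combinations(range(n_qubits), 2)}
    return z_single, z_pairs

# Example: GHZ-like distribution on 2 qubits, p(00) = p(11) = 0.5
p = np.array([0.5, 0.0, 0.0, 0.5])
singles, pairs = z_features(p, 2)          # <Z_i> = 0, <Z_1 Z_2> = 1
```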
2. Build a Noise-Aware or Adaptive QEM Model
Participants must design a model that learns a mapping
fθ(xₙ) → x̂ᵢ.
This may include:
- neural networks,
- probabilistic regressors,
- mixture-of-experts, or
- transformer-style architectures.
Participants are encouraged to incorporate noise awareness, such as estimating a latent noise descriptor
z = gϕ(xₙ),
followed by a conditional mitigation model
x̂ᵢ = hψ(xₙ, z).
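The two-stage structure z = gϕ(xₙ), x̂ᵢ = hψ(xₙ, z) can be sketched as below. This is a forward-pass-only NumPy sketch with random (untrained) weights, just to show the data flow; in practice gϕ and hψ would be trained jointly with an autodiff framework such as PyTorch or JAX:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class NoiseAwareQEM:
    """Sketch of z = g_phi(x_n) followed by x_hat = h_psi(x_n, z)."""

    def __init__(self, dim, z_dim=4, hidden=32):
        self.W_g = rng.normal(scale=0.1, size=(dim, z_dim))          # g_phi
        self.W1 = rng.normal(scale=0.1, size=(dim + z_dim, hidden))  # h_psi, layer 1
        self.W2 = rng.normal(scale=0.1, size=(hidden, dim))          # h_psi, layer 2

    def forward(self, x_noisy):
        z = relu(x_noisy @ self.W_g)               # latent noise descriptor
        h = relu(np.concatenate([x_noisy, z]) @ self.W1)
        logits = h @ self.W2
        exp = np.exp(logits - logits.max())
        return exp / exp.sum()                     # softmax keeps a valid distribution

model = NoiseAwareQEM(dim=4)
x_hat = model.forward(np.array([0.4, 0.1, 0.1, 0.4]))
```

The final softmax is one way to bake the normalization constraint into the architecture when the targets are probability distributions.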
Training objectives may minimize errors using L1 or L2 loss:
L(θ) = ‖x̂ᵢ − xᵢ‖₁
or
L(θ) = ‖x̂ᵢ − xᵢ‖₂²,
depending on whether the targets are probabilities or expectation values.
Participants may also explore physically motivated regularizers, e.g.:
- enforcing normalization ∑_b p̂ᵢ(b) = 1,
- constraining observables to the range [−1, 1].
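The losses and constraints above are straightforward to implement. A sketch (the naive renormalization shown is one simple way to enforce ∑_b p̂ᵢ(b) = 1; helper names are illustrative):

```python
import numpy as np

def l1_loss(x_hat, x):
    """L1 objective, natural for probability targets."""
    return np.abs(x_hat - x).sum()

def l2_loss(x_hat, x):
    """Squared-L2 objective, natural for expectation-value targets."""
    return ((x_hat - x) ** 2).sum()

def renormalize(p_hat):
    """Naive projection: clip negatives, then rescale so probabilities sum to 1."""
    p_hat = np.clip(p_hat, 0.0, None)
    return p_hat / p_hat.sum()

def clip_observable(v):
    """Constrain predicted expectation values to the physical range [-1, 1]."""
    return np.clip(v, -1.0, 1.0)

p = renormalize(np.array([0.5, 0.3, 0.3, 0.1]))
v = clip_observable(np.array([1.2, -0.4]))
```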
3. Benchmark the Model
Participants will build a benchmark suite containing:
- variational ansätze,
- QAOA layers,
- random circuits.
For each circuit C and noise level λ, compute:
- unmitigated value ⟨O⟩ₙ(C, λ),
- baseline-mitigated value ⟨O⟩_base(C, λ),
- model-mitigated value ⟨O⟩_model(C, λ).
Assess performance using:
- absolute error,
- improvement ratio R = E_base / E_model,
- fidelity between probability distributions F(pᵢ, p̂ᵢ),
- scaling behavior as circuit depth increases.
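The first three metrics above can be sketched as follows, with the classical (Bhattacharyya) fidelity F(p, q) = (∑_b √(p(b)q(b)))² used for distributions. The numerical values are illustrative placeholders, not benchmark results:

```python
import numpy as np

def abs_error(est, ideal):
    return abs(est - ideal)

def improvement_ratio(e_base, e_model):
    """R = E_base / E_model; R > 1 means the model beats the baseline."""
    return e_base / e_model

def fidelity(p, q):
    """Classical (Bhattacharyya) fidelity between probability distributions."""
    return float(np.sum(np.sqrt(p * q)) ** 2)

# Illustrative numbers for one circuit/noise setting:
ideal, baseline, model_est = 1.00, 0.85, 0.97
E_base = abs_error(baseline, ideal)        # 0.15
E_model = abs_error(model_est, ideal)      # 0.03
R = improvement_ratio(E_base, E_model)     # ~5x improvement

F = fidelity(np.array([0.5, 0.5]), np.array([0.5, 0.5]))   # identical -> 1.0
```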
The objective is to demonstrate clear improvements, ideally showing robustness of the model as noise strength increases or as circuits grow deeper.
Technical Focus Areas
Participants will gain hands-on experience with:
- NISQ-era noise modeling,
- supervised learning on quantum outputs,
- generalization across circuits,
- regression on expectation values,
- transfer learning between devices,
- constructing physically constrained ML models.
Any quantum SDK (Qiskit, PennyLane, Cirq) and ML toolkit (PyTorch, TensorFlow, JAX, scikit-learn) is permitted.
Evaluation Criteria
Submissions will be evaluated on:
- Mitigation accuracy: how well does the model approximate ⟨O⟩ᵢ across circuits and noise conditions?
- Generalization: does the model perform well on unseen circuits and unseen noise strengths?
- Novelty and rigor: are the ideas, architectures, or loss functions innovative and technically grounded?
- Reproducibility: is the code clean, documented, and runnable?
- Scientific insight: does the team interpret why the method works or fails, supported by plots and analysis?
Recommended Resources
- Adeniyi & Kumar, “Adaptive Neural Network for Quantum Error Mitigation,” Quantum Machine Intelligence (2025).
- Cai et al., “Quantum Error Mitigation,” Reviews of Modern Physics (2023).
- Kim et al., “Quantum Error Mitigation with Artificial Neural Networks,” IEEE Access (2020).