What are the biggest reproducibility pain points you see in the quantum-algorithms literature today? From my (limited) reading, a few culprits keep popping up:
Compiler/transpiler variance — different runs map the same logical circuit to different hardware circuits unless seeds and pipeline versions are pinned;
Benchmark fragmentation — bespoke datasets/circuits and inconsistent metrics make cross-paper comparisons shaky;
SDK churn — breaking API changes and migrations undermine long-term reruns.
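To make the first point concrete: the minimum I'd want in a repo is a run manifest that pins every seed and records the environment, so a rerun can at least detect drift. Here's a rough, SDK-agnostic sketch — `make_provenance_manifest` and its fields are hypothetical names I made up for illustration, not any real SDK's API; a real version would also record compiler/transpiler versions and device calibration data:

```python
import hashlib
import json
import platform
import random
import sys

def make_provenance_manifest(seed, extra=None):
    """Record the knobs that affect a run so it can be replayed or diffed later.

    Hypothetical helper for illustration: in practice you would also capture
    SDK versions (e.g. the transpiler's) and a device/calibration snapshot.
    """
    manifest = {
        "python": sys.version.split()[0],
        "platform": platform.platform(),
        "seed": seed,
        "extra": extra or {},
    }
    # A stable hash of the sorted manifest doubles as a short run identifier.
    digest = hashlib.sha256(
        json.dumps(manifest, sort_keys=True).encode()
    ).hexdigest()[:12]
    manifest["run_id"] = digest
    return manifest

# Pin every RNG the pipeline touches, then log the manifest alongside results.
random.seed(1234)
manifest = make_provenance_manifest(seed=1234, extra={"optimization_level": 3})
print(json.dumps(manifest, indent=2))
```

The point isn't this particular schema — it's that seed discipline plus a machine-readable provenance record costs a few lines and makes "same circuit, different compiled output" diagnosable instead of mysterious.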
Given these realities, what minimal good practices should we rally around for papers and repos (e.g., compiler provenance, seed discipline, device/calibration snapshots, standardized datasets/benchmarks)? And are there examples where a community practice actually moved the needle on repeatability?