This dataset aggregates contraction-schedule results for
Quantum LEGO (QL) layouts used to compute
weight enumerator polynomials (WEPs) of stabilizer codes. It accompanies the paper
Hyper-optimized Quantum LEGO Contraction Schedules (2025), which introduces the
Sparse Stabilizer Tensor (SST) cost function and benchmarks hyper-optimized schedules (via
cotengra) across QL layouts such as MSP and Tanner.
The dataset combines the authors’ MSP/Tanner layout runs; see the paper’s Sec. III.4 and VI for layout details.
```python
# Install: pip install polars aqora-cli pyarrow fsspec
import polars as pl
from aqora_cli.pyarrow import dataset

df = pl.scan_pyarrow_dataset(
    dataset("aqora/hyperoptimized-quantum-lego-contraction-schedules", "v1.0.0")
)

# Peek at the schema without reading all the data:
print(df.collect_schema())  # shows column names/types

# Example 1 — compare average ops by cost function for MSP layouts
res1 = (
    df.filter(pl.col("representation") == "MSP")
    .group_by("cost_fn")
    .agg(pl.col("operations").mean().alias("avg_ops"))
    .sort("avg_ops")
    .collect()
)
print(res1)

# Example 2 — scaling: max intermediate tensor size by distance
res2 = (
    df.group_by("distance")
    .agg(pl.col("max tensor size").max().alias("max_intermediate"))
    .sort("distance")
    .collect()
)
print(res2)

# Example 3 — when does QL beat brute force?
res3 = (
    df.filter(pl.col("operations_w_bruteforce").is_not_null())
    .with_columns(
        (pl.col("operations") < pl.col("operations_w_bruteforce"))
        .alias("ql_beats_bruteforce")
    )
    .group_by("representation")
    .agg(pl.col("ql_beats_bruteforce").mean().alias("share_better"))
    .sort("share_better", descending=True)
    .collect()
)
print(res3)
```
All columns are preserved exactly as in the source CSVs.

The paper shows that intermediate tensors in stabilizer-WEP QL networks are often highly sparse, which makes cost functions that assume dense tensors unreliable. The SST cost, derived from parity-check matrix ranks, correlates tightly with the true contraction cost and improves schedule selection. Note that the reported cost columns follow the semantics of the chosen cost_fn.
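To make the rank-based intuition concrete, here is a minimal toy sketch (not the paper's SST implementation): for a stabilizer tensor described by a GF(2) check matrix, the number of nonzero entries grows with the GF(2) rank rather than the full leg count, so a rank-based estimate can sit far below the dense size 2**n. The matrix `H` and the 2**rank scaling below are illustrative assumptions only.

```python
import numpy as np

def gf2_rank(H: np.ndarray) -> int:
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    H = H.copy() % 2
    rank = 0
    rows, cols = H.shape
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if H[r, col]), None)
        if pivot is None:
            continue  # no pivot in this column
        H[[rank, pivot]] = H[[pivot, rank]]  # move the pivot row up
        for r in range(rows):
            if r != rank and H[r, col]:
                H[r] ^= H[rank]  # eliminate the column entry
        rank += 1
    return rank

# Hypothetical 3-row check matrix on n = 6 legs; the third row is the
# XOR of the first two, so it is redundant and the rank is 2.
H = np.array([[1, 1, 0, 0, 1, 0],
              [0, 1, 1, 0, 0, 1],
              [1, 0, 1, 0, 1, 1]], dtype=np.uint8)

n = H.shape[1]
sparse_size = 2 ** gf2_rank(H)  # nonzeros tracked by a rank-based estimate
dense_size = 2 ** n             # entries assumed by a dense cost model
print(sparse_size, dense_size)  # 4 vs 64 here
```

The gap between the two numbers is exactly why a dense-assumption cost function can badly misrank contraction schedules on sparse stabilizer tensors.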