Title: Deep Learning Approaches to Quantum Error Mitigation - Data & Models

Description: This record contains supplementary materials for the paper "Deep Learning Approaches to Quantum Error Mitigation".

Dataset Archives (*.tgz)

Each .tgz archive corresponds to a specific dataset split and experimental setting:

- Splits: train / val / test
- Devices: algiers / hanoi
- Circuit types: pauli / random
- Data source: real (hardware) / simulated

Archived contents preserve the original directory structure with:

- data_inputs_version{0,1,2}.npy — circuit representations
- data_masks_inputs_version{0,1,2}.npy — attention masks
- data_ideal_outputs_version{0,1,2}.npy — ideal probability distributions
- data_noisy_outputs_version{0,1,2}.npy — noisy probability distributions
- data_mitigated_outputs_version{0,1,2}_*.npy — analytical mitigation baselines (SPAM, repolariser, mix)

Model Checkpoints (wandb_models.zip)

Pre-trained neural network checkpoints organized by architecture and experiment:

- PercLoss-* — Perceiver IO models (primary architecture)
- TF-Model-* — Transformer encoder-decoder
- TFM-Model-* — Transformer membrane variant
- Encoder-Model-* — Encoder-only transformer
- DECODERONLY-Model-* — Noisy-encoder architecture (an alternative name for "Noisy-enc"; see the paper for details)
- RNN-Model-* — LSTM/GRU baseline
- MLP-Model-* — Multi-layer perceptron baseline
- MLPPREDICTION-Model-* — MLP with prediction head
- BERT-Model-* — BERT-style models (additional experiments, not included in the paper)

Each model directory includes five versions trained with different random seeds (42, 123, 456, 789, 101112) for reproducibility analysis.

Experiment naming convention:

{Architecture}-{Training|Fine-Tuning|x2Fine-Tuning}-{circuit_type}-{data_source}-{device}[-from{source_device}]

References

- Paper: arXiv:2601.14226
- GitHub: https://github.com/Quantinuum/Deep-Learning-Approaches-to-Quantum-Error-Mitigation
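The .npy files listed above can be loaded with NumPy once an archive is extracted. Below is a minimal sketch of that workflow: it first writes stand-in arrays using the record's file naming convention (the array shapes and contents are illustrative assumptions, not the real dataset's dimensions), then loads them back as a dictionary keyed by filename.

```python
import os
import tempfile
import numpy as np

# Stand-in directory playing the role of an extracted .tgz archive.
data_dir = tempfile.mkdtemp()

version = 0  # each split ships versions 0, 1, and 2
names = [
    f"data_inputs_version{version}.npy",        # circuit representations
    f"data_masks_inputs_version{version}.npy",  # attention masks
    f"data_ideal_outputs_version{version}.npy", # ideal probability distributions
    f"data_noisy_outputs_version{version}.npy", # noisy probability distributions
]

# Write placeholder arrays; (8, 16) is an assumed shape for illustration only.
rng = np.random.default_rng(42)
for name in names:
    np.save(os.path.join(data_dir, name), rng.random((8, 16)))

# Loading mirrors what you would do with the real extracted archive contents.
arrays = {name: np.load(os.path.join(data_dir, name)) for name in names}
for name, arr in arrays.items():
    print(name, arr.shape)
```

For the real data, point `data_dir` at the directory produced by unpacking the relevant archive (e.g. with `tar -xzf`); the mitigated-output files follow the same pattern with a baseline suffix.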