Artifact of the Paper: Noise Fingerprints for Cross-Platform Quantum Simulator Discrepancy Analysis

Contemporary quantum computers are noisy. For the correct operation of quantum software, it is essential to have an accurate and practical system that captures the actual noise behaviour. This is a significant Quantum Software Engineering (QSE) challenge, as simulator noise models and manufacturers' reported noise data are often inaccurate and out of sync with current hardware behaviour. Providing a practical solution is difficult because traditional noise reconstruction scales exponentially in the number of qubits and is likely to be outdated by the time it is obtained. In this vision paper, we outline a novel pipeline that addresses this challenge by adapting classical shadow tomography from quantum mechanics. Our pipeline, called SHADOWP, enables empirical noise fingerprinting, avoiding an exponential number of measurements and allowing a continuously updatable noise fingerprint in the software development pipeline, e.g. in a simulation environment. As a proof of concept, we implement the pipeline and report an initial evaluation on two frameworks, Qiskit and Cirq, under two widely used hardware-informed profiles, IBM Boston and Quantinuum H2. Our results show that fingerprints exhibit channel-specific structure, produce interpretable heatmaps, and reveal systematic cross-platform discrepancies under matched noise configurations, quantified by large Frobenius distances at a fraction of the cost of full tomography; on 69 MQTBench programs, larger fingerprint deltas correlate with cross-configuration output-distribution divergence, signalling portability risk.

The GitHub repository for this release is available here: https://github.com/vili-1/SimSHADOW/tree/ICST-SVE-2026

----

Files:

Requirements file for reproducing our results: requirements.txt
Tool code: artifact-main.zip, with documentation in READMEartifact.pdf
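To illustrate the core idea the pipeline adapts (this is a textbook sketch of classical shadow tomography, not the artifact's actual implementation), the following minimal NumPy example estimates a single-qubit state from random Pauli-basis measurements: each measurement yields a snapshot rho_hat = 3 U^dag |b><b| U - I, and snapshots average to the true state.

```python
import numpy as np

# Measurement-basis rotations for random Pauli measurements
I2 = np.eye(2, dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
Sdg = np.array([[1, 0], [0, -1j]], dtype=complex)
UNITARIES = {"X": H, "Y": H @ Sdg, "Z": I2}

def snapshot(basis, outcome):
    """Single-qubit shadow snapshot: rho_hat = 3 U^dag |b><b| U - I."""
    U = UNITARIES[basis]
    b = np.zeros(2, dtype=complex)
    b[outcome] = 1.0
    proj = np.outer(b, b.conj())
    return 3 * U.conj().T @ proj @ U - I2

rng = np.random.default_rng(0)
rho = np.array([[1, 0], [0, 0]], dtype=complex)  # true state |0><0|
snaps = []
for _ in range(5000):
    basis = rng.choice(["X", "Y", "Z"])             # random Pauli basis
    U = UNITARIES[basis]
    probs = np.real(np.diag(U @ rho @ U.conj().T))  # Born-rule outcome probs
    outcome = rng.choice(2, p=probs / probs.sum())
    snaps.append(snapshot(basis, outcome))
rho_hat = np.mean(snaps, axis=0)                    # converges to rho
```

The per-snapshot cost is constant, which is what lets a shadow-based fingerprint avoid the exponential measurement budget of full tomography.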
Fingerprints for 4 profiles:

High Profile: results-1000-high.zip and logs logs-1000-high.zip (Threats to Validity Section)
Low Profile: results-1000-low.zip and logs logs-1000-low.zip (Threats to Validity Section)
Quantinuum Profile: results-1000-quantinuum_h2.zip and logs logs-1000-quantinuum_h2.zip (Results Section)
IBM Profile: results-1000-ibm_boston.zip and logs logs-1000-ibm_boston.zip (Results Section)

NOTE: *.log and *.txt files may contain debugging data and prints. None of these was used in the analysis of EQ1 and EQ2.
NOTE 2: Specifically, the EQ1 results were produced by post-processing the experiment outputs in results/SHADOWP_results_*.json only.

Figure 2 full reproducibility package: figure2+frobenius_stats_package.zip. Specifically, it:
- Generates the paper figure from the JSON files in data/.
- Reproduces the statistics used in the paper text: Frobenius distances for IBM Boston and Quantinuum H2 (Qiskit vs Cirq, per noise channel) and bootstrap standard errors.

Benchmarking results (last part of (ii) in the Results Section): noise-state-nier.xlsx
Raw results of the benchmarking part: logger_19Feb-machineOf14Feb2026.log (F-I) and logger_19Feb-machineOf12Feb2026.log (A-D)

----

Replication Package: Quick Start (local execution)

1. Requirements

sudo apt-get install python3-pip
python3 -m pip --version
python3 -m pip install "qiskit~=2.2" "qiskit-aer~=0.17" "qiskit-nature~=0.7" scipy
pip install openfermion
pip install --upgrade cirq
pip install --quiet ply
pip install qiskit_qasm3_import
pip install qiskit

# Test that everything is OK
python -c "import qiskit, cirq; print('ok')"

2. Reproducing Evaluation Results of the NIER/Vision Paper: see full instructions in READMEartifact.pdf.
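The schema of the SHADOWP_results_*.json files is documented in READMEartifact.pdf, so the sketch below assumes fingerprints have already been loaded as NumPy matrices; the function names are illustrative, not the artifact's API. It shows the two statistics the Figure 2 package reproduces: the Frobenius distance between two fingerprints and a bootstrap standard error over per-run fingerprint matrices.

```python
import numpy as np

def frobenius_distance(A, B):
    """Frobenius distance ||A - B||_F between two fingerprint matrices."""
    return float(np.linalg.norm(np.asarray(A) - np.asarray(B), ord="fro"))

def bootstrap_se(runs_a, runs_b, n_boot=1000, seed=0):
    """Bootstrap standard error of the Frobenius distance between the mean
    fingerprints of two groups of per-run matrices (shape: n_runs x d x d)."""
    runs_a, runs_b = np.asarray(runs_a), np.asarray(runs_b)
    rng = np.random.default_rng(seed)
    dists = []
    for _ in range(n_boot):
        ia = rng.integers(0, len(runs_a), len(runs_a))  # resample runs
        ib = rng.integers(0, len(runs_b), len(runs_b))
        dists.append(frobenius_distance(runs_a[ia].mean(axis=0),
                                        runs_b[ib].mean(axis=0)))
    return float(np.std(dists, ddof=1))
```

For the cross-platform comparison in the paper, runs_a and runs_b would hold the per-run fingerprints of the same noise channel under Qiskit and Cirq, respectively.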
EQ2 In-the-Wild Results:

We used the following MQTBench benchmarks with 5-11 qubits:

ghz_indep_tket_*.qasm
grover-noancilla_indep_tket_*.qasm
qftentangled_indep_tket_*.qasm
random_indep_tket_*.qasm
su2random_indep_tket_*.qasm
graphstate_indep_tket_*.qasm
qaoa_indep_tket_*.qasm
qft_indep_tket_*.qasm
realamprandom_indep_tket_*.qasm
twolocalrandom_indep_tket_*.qasm

Each experiment consists of 10k shots, repeated 30 times.

----
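EQ2 correlates fingerprint deltas with the divergence between output distributions across noise configurations. The exact divergence measure is specified in the paper, not here, so as an illustrative stand-in the sketch below computes the total variation distance between two shot histograms (bitstring -> count mappings) of the same benchmark circuit.

```python
def counts_to_dist(counts):
    """Normalise a bitstring->count mapping into a probability distribution."""
    total = sum(counts.values())
    return {b: c / total for b, c in counts.items()}

def total_variation(counts_a, counts_b):
    """Total variation distance between two empirical output distributions,
    e.g. 10k-shot histograms of one circuit under two noise configurations."""
    p, q = counts_to_dist(counts_a), counts_to_dist(counts_b)
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(b, 0.0) - q.get(b, 0.0)) for b in support)

# Hypothetical GHZ-like outputs under two noise configurations
a = {"000": 4900, "111": 4800, "001": 300}
b = {"000": 4600, "111": 4500, "010": 900}
tv = total_variation(a, b)
```

A larger value signals that a program's behaviour is less portable across the two configurations, which is the risk the fingerprint deltas are meant to flag.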