Optical computing aims to leverage the large bandwidth of light and exploit its potential for low-latency signal processing to efficiently handle large-scale Machine Learning (ML) workloads. A critical component of this approach is the Photonic Tensor Core (PTC), which facilitates general matrix-vector products. However, computational accuracy can be degraded by noise and fabrication imperfections, posing challenges for practical deployment. In this work, we address the problem of designing noise-robust PTCs within footprint-constrained architectures composed of Phase Shifters (PSs), Directional Couplers (DCs), and Waveguide Crossings (CRSs). Using a recently developed automated design framework, we propose and demonstrate in simulations the efficacy of two complementary techniques: noise injection during chip-topology training and penalty terms that promote noise-robust, imperfection-insensitive local optima in the loss landscape of the ML task. We find that, compared to the original design, the ML task accuracy degrades significantly more slowly under increasing noise levels when the proposed robustness techniques are used. Using noise injection, we increase the average accuracy in the presence of DC biases from 87.2% to 97.8% on the MNIST classification task. Furthermore, using the gradient-based penalty terms, we increase the accuracy under high DC variations from ∼81% to ∼84% on the F-MNIST classification task. These advances contribute to the practical viability of photonic accelerators for ML, and potentially beyond, by addressing noise and imperfections through hardware–algorithm co-design.
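To make the two techniques concrete, below is a minimal PyTorch-style sketch; it is our illustration under stated assumptions, not the authors' framework. A toy layer with trainable coupler angles `theta` receives Gaussian perturbations of scale `sigma` on each training forward pass (noise injection), and a gradient-norm penalty `robustness_penalty` discourages sharp minima, one plausible way to realize a loss-landscape penalty term. All names, the noise model, and the transfer function are hypothetical simplifications.

```python
import torch
import torch.nn as nn

class NoisyPTCLayer(nn.Module):
    """Toy stand-in for one PTC layer (hypothetical; not the paper's simulator).

    A real PTC realizes matrix-vector products through meshes of phase
    shifters, directional couplers, and crossings; here only trainable
    coupler angles are modeled.
    """
    def __init__(self, dim: int, sigma: float = 0.02):
        super().__init__()
        self.theta = nn.Parameter(0.1 * torch.randn(dim, dim))  # coupler angles
        self.sigma = sigma  # assumed std. dev. of DC imperfections

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        theta = self.theta
        if self.training:
            # Technique 1, noise injection: perturb the hardware parameters on
            # every forward pass so the optimizer favors imperfection-tolerant
            # minima.
            theta = theta + self.sigma * torch.randn_like(theta)
        # Placeholder transfer function standing in for the photonic mesh.
        return x @ torch.cos(theta)

def robustness_penalty(loss: torch.Tensor, params, lam: float = 1e-3) -> torch.Tensor:
    """Technique 2, loss-landscape penalty: penalize the squared gradient norm
    of the task loss w.r.t. hardware parameters, steering training toward flat
    (noise-robust) local optima. One common choice, assumed for illustration."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    return lam * sum(g.pow(2).sum() for g in grads)

# Usage sketch:
layer = NoisyPTCLayer(dim=16)
x = torch.randn(8, 16)
loss = layer(x).pow(2).mean()  # stand-in for the ML task loss
total = loss + robustness_penalty(loss, list(layer.parameters()))
total.backward()
```

Intuitively, both techniques bias the optimizer toward flat regions of the loss landscape, where small perturbations of PS and DC parameters (the noise and fabrication biases described above) cause little change in task accuracy.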