Missing-pixel-based image restoration is a promising approach for lightweight image transmission, in which a subset of pixels is intentionally removed and reconstructed by a neural network at the decoder. Conventional restoration models typically rely on both upper- and lower-side pixels of the target block, but this requirement introduces latency because restoration must wait for future lines to be decoded. When the input is restricted to upper-side pixels only, restoration accuracy degrades significantly due to the limited contextual information. This study introduces a lightweight sequential inference mechanism that reuses previously restored pixel values as auxiliary inputs for subsequent block restoration. This sequential inference effectively enlarges the available context without increasing the physical input window or relying on heavy recurrent architectures. A generation-based training scheme (gen-1 to gen-5) was adopted to simulate realistic prediction noise and stabilize sequential inference. Experiments were conducted on 2×2 block restoration using two structured missing-pixel patterns. The results show that sequential inference consistently improves performance over the conventional upper-input model. With the smallest 4×3 input window, the proposed method outperformed the Upper model by more than 1 dB and even surpassed the Normal model, which uses both upper- and lower-side pixels. In contrast, for larger windows such as 6×4 and 10×6, the improvement was minimal because sufficient context was already available. These findings indicate that sequential inference provides a practical enhancement for lightweight, low-latency codecs, particularly under limited-context conditions where conventional models struggle.
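The core idea above can be sketched in a few lines: restore missing 2×2 blocks in raster order, writing each restored block back into the frame so that later blocks see previously restored pixels in their upper-side context. The sketch below is a minimal illustration under stated assumptions; the per-block predictor is a stand-in mean over the context window, not the paper's neural network, and the context-window sizes (`ctx_rows`, `ctx_cols` margin) are hypothetical, not the paper's 4×3/6×4/10×6 configurations.

```python
import numpy as np

def restore_sequential(img, mask, block=2, ctx_rows=2):
    """Sequential inference sketch: restore missing `block`x`block` blocks
    in raster order, using only upper-side pixels as context.
    Because each restored block is written back into `out`, subsequent
    blocks can reuse earlier restorations as auxiliary context, which is
    the mechanism the abstract describes.

    img  : 2D float array with missing pixels holding arbitrary values
    mask : 2D bool array, True where pixels are missing
    NOTE: the mean-of-context predictor below is a placeholder for the
    paper's restoration network.
    """
    out = img.astype(float).copy()
    h, w = out.shape
    for r in range(0, h, block):
        for c in range(0, w, block):
            m = mask[r:r + block, c:c + block]
            if not m.any():
                continue  # block fully present, nothing to restore
            # Upper-side context window: rows above the block only
            # (lower-side rows are "future" lines and stay untouched).
            r0 = max(0, r - ctx_rows)
            c0 = max(0, c - 1)
            c1 = min(w, c + block + 1)
            ctx = out[r0:r, c0:c1]
            if ctx.size:
                fill = ctx.mean()  # placeholder predictor
            else:
                fill = out[~mask].mean()  # top row: fall back to known pixels
            # Write back so later blocks see the restored values.
            out[r:r + block, c:c + block][m] = fill
    return out
```

In a trained system the mean predictor would be replaced by the network, and the generation-based scheme (gen-1 to gen-5) would train that network on its own earlier outputs so it tolerates the prediction noise this feedback loop introduces.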
DOI: 10.1117/12.3102464