The Sentinel-3 Ocean and Land Colour Instrument (OLCI) is designed for water monitoring, and its 21 spectral bands provide the basis for precise retrieval of water quality parameters. However, its coarse spatial resolution limits the depiction of the spatial distribution of water quality parameters in small inland water bodies. Spatial–spectral fusion is a common way to overcome the inherent trade-off between the spatial and spectral resolutions of a sensor, and deep learning-based methods are central among the popular approaches. Nonetheless, deep learning-based models still face challenges in fusing Sentinel-2 Multi-Spectral Instrument (MSI) and Sentinel-3 OLCI data. Here, we propose a Multi-Scale-Attention-based Unsupervised Generative Adversarial Network (MSA-UGAN) that effectively integrates OLCI's spectral advantage with MSI's spatial resolution. Quantitative evaluation was conducted against five benchmark methods, comprising traditional approaches (GS, SFIM, MTF-GLP) and deep learning models (SRCNN, UCGAN). The results show that MSA-UGAN achieves the best overall performance: QNR (0.9709) and SSIM (0.9087) are the highest, while SAM (1.1331), spatial distortion (DS = 0.0389), and spectral distortion (Dλ = 0.0252) are the lowest, indicating that MSA-UGAN best preserves the spatial details of the Sentinel-2 MSI data and the spectral features of the Sentinel-3 OLCI data. Moreover, its ERGAS (2.2734) is also excellent among the compared methods. A chlorophyll-a inversion experiment using the fused image of Chen Lake revealed a spatial gradient ranging from 3.25 to 19.33 µg/L, with the highest concentrations in the southwestern nearshore waters, likely associated with aquaculture. Together, these results indicate that MSA-UGAN can generate high-spatial-resolution multispectral images and that the fused images can be effectively used for water quality monitoring, thereby providing essential data support for precise management and science-based decision-making for inland lakes.
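For context, two of the reported metrics have compact standard definitions: SAM is the mean per-pixel spectral angle between fused and reference spectra, and ERGAS is a resolution-normalized average of per-band relative RMSE. Below is a minimal NumPy sketch of these standard formulas; it is not the authors' evaluation code, and the (H, W, B) array layout and the example resolution ratio are assumptions for illustration.

```python
import numpy as np

def sam(reference: np.ndarray, fused: np.ndarray) -> float:
    """Mean Spectral Angle Mapper (degrees) between two (H, W, B) images."""
    ref = reference.reshape(-1, reference.shape[-1]).astype(np.float64)
    fus = fused.reshape(-1, fused.shape[-1]).astype(np.float64)
    dot = np.sum(ref * fus, axis=1)
    norms = np.linalg.norm(ref, axis=1) * np.linalg.norm(fus, axis=1)
    # Guard against zero-norm pixels and numerical overshoot before arccos.
    cos = np.clip(dot / np.maximum(norms, 1e-12), -1.0, 1.0)
    return float(np.degrees(np.arccos(cos)).mean())

def ergas(reference: np.ndarray, fused: np.ndarray, ratio: float) -> float:
    """ERGAS for (H, W, B) images; `ratio` is the low/high pixel-size ratio
    (e.g. 30 if assuming 10 m Sentinel-2 MSI vs 300 m Sentinel-3 OLCI)."""
    ref = reference.reshape(-1, reference.shape[-1]).astype(np.float64)
    fus = fused.reshape(-1, fused.shape[-1]).astype(np.float64)
    rmse = np.sqrt(np.mean((fus - ref) ** 2, axis=0))   # per-band RMSE
    mean = np.maximum(ref.mean(axis=0), 1e-12)          # per-band mean
    return float(100.0 / ratio * np.sqrt(np.mean((rmse / mean) ** 2)))
```

Lower values are better for both metrics: a SAM near zero means the fused spectra point in almost the same direction as the reference spectra band-for-band, and a low ERGAS means the per-band errors are small relative to the band means after accounting for the resolution gap.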