Deep learning (DL) has been widely recognized for its strong feature representation capability, making it a promising technique for improving pansharpening methods. However, existing DL-based methods commonly extract spatial information and spectral information from the high-resolution panchromatic (PAN) images and low-resolution multispectral (MS) images, respectively. This separation limits the effective extraction and integration of the latent spectral-spatial information, ultimately reducing the quality of the generated high-resolution multispectral (HRMS) images. In this paper, we propose a novel spatial-spectral dual guided network (SSDGN) that aims to fully exploit the spectral and spatial information contained in both the PAN and MS images. First, to enhance feature extraction, we introduce two subnetworks, the progressive spectral feature extraction (PSpeFE) subnetwork and the progressive spatial feature extraction (PSpaFE) subnetwork, which extract spectral and spatial information from both the PAN and MS images. In addition, features from the frequency domain (FD) and intensity domain (ID) of both image types are leveraged to guide the feature extraction and improve its effectiveness. Then, a joint spatial-spectral attention feature fusion module and a multi-stage residual reconstruction module are devised to efficiently exploit the extracted spatial and spectral information. Finally, extensive experiments are conducted to evaluate the performance and effectiveness of the proposed SSDGN. Compared with the second-best methods, our approach reduces ERGAS by an average of 12.3% across three satellite datasets, while the QNR metric improves by up to 2.1% (0.8% on average), demonstrating consistent advantages in spectral fidelity and fusion quality.
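The abstract does not spell out how the frequency-domain and intensity-domain guidance features are formed. A common reading, sketched below purely as an assumption rather than the authors' implementation, takes the 2-D Fourier amplitude and phase of a band as the FD features and an IHS-style band average as the ID component; the function names are hypothetical.

```python
import numpy as np

def frequency_features(band):
    """FD guidance for one image band: amplitude and phase of its
    2-D Fourier spectrum (a common choice, assumed here)."""
    spectrum = np.fft.fft2(band)
    return np.abs(spectrum), np.angle(spectrum)

def intensity_component(ms):
    """ID guidance: per-pixel mean over the spectral bands, i.e. the
    IHS-style intensity; `ms` has shape (H, W, bands)."""
    return ms.mean(axis=-1)
```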
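For reference, the ERGAS figure quoted above follows a standard definition; the sketch below is a minimal NumPy version of that definition (with an assumed (H, W, bands) layout and a default 4:1 scale ratio), not the paper's evaluation code.

```python
import numpy as np

def ergas(reference, fused, ratio=4):
    """Relative dimensionless global error in synthesis (lower is better).

    reference, fused : arrays of shape (H, W, bands) at the same scale.
    ratio            : PAN-to-MS spatial resolution ratio (4 for many sensors).
    """
    ref = reference.astype(np.float64)
    fus = fused.astype(np.float64)
    per_band = []
    for k in range(ref.shape[-1]):
        rmse = np.sqrt(np.mean((ref[..., k] - fus[..., k]) ** 2))
        per_band.append((rmse / np.mean(ref[..., k])) ** 2)
    return 100.0 / ratio * np.sqrt(np.mean(per_band))
```

QNR, the other metric quoted, is computed at full resolution as (1 − D_λ)(1 − D_S), so values closer to 1 indicate less spectral and spatial distortion.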
Published in: IEEE Transactions on Geoscience and Remote Sensing
Volume 63, pp. 1-16