Abstract
Hyperspectral change detection has emerged as a vital tool in modern remote sensing applications such as environmental monitoring, urban expansion, and agricultural health assessment. Unlike multispectral systems, hyperspectral imaging captures hundreds of narrow spectral bands, enabling granular, material-specific analysis. However, its high dimensionality, spectral variability, and sub-pixel mixing introduce challenges such as data redundancy, noise amplification, and difficulty in isolating meaningful changes. Traditional methods struggle to disentangle mixed pixels or to model complex spatial–spectral–temporal relationships, limiting their ability to detect changes reliably. The curse of dimensionality further exacerbates computational inefficiency and sensitivity to environmental noise. We therefore introduce UnTrNet, a transformer-based approach designed specifically for hyperspectral change detection. UnTrNet begins by applying linear spectral unmixing to each pair of bitemporal images, generating a set of compact abundance maps that highlight the most informative material signatures. These maps are then tokenized and processed by a lightweight transformer encoder, where multi-head self-attention captures fine-grained spectral–spatial patterns and long-range temporal dependencies. This design enables UnTrNet to focus computational resources on the most critical features, improving both efficiency and accuracy. Extensive experiments were conducted on three benchmark datasets: China Farmland, USA, and Urban. UnTrNet's performance was evaluated by varying the number of transformer layers and attention heads. Results demonstrate that UnTrNet achieves competitive accuracy on the China Farmland dataset, with the 12-layer configuration reaching the highest accuracy of 98.89%. On the USA dataset, the 12-layer UnTrNet with 16 attention heads outperforms state-of-the-art methods, achieving an accuracy of 97.89%.
Additionally, on the Urban dataset, all configurations of UnTrNet achieve over 99% accuracy due to the dataset's well-structured spatial and spectral features.
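The linear spectral unmixing front end described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: it assumes known endmember signatures and uses unconstrained least squares with a clip-and-renormalize step (the paper may use a constrained unmixing variant), with all dimensions and data synthetic.

```python
import numpy as np

# Each pixel spectrum y is modeled as y ~ E @ a, where E holds endmember
# signatures (one column per material) and a is the abundance vector.
rng = np.random.default_rng(0)
bands, endmembers, h, w = 50, 4, 8, 8

E = np.abs(rng.standard_normal((bands, endmembers)))           # endmember matrix
A_true = rng.dirichlet(np.ones(endmembers), size=h * w)        # true abundances
Y = A_true @ E.T + 0.01 * rng.standard_normal((h * w, bands))  # noisy mixed pixels

# Unconstrained least-squares unmixing: A = argmin ||Y - A E^T||_F.
A_est, *_ = np.linalg.lstsq(E, Y.T, rcond=None)
A_est = A_est.T

# Enforce non-negativity and sum-to-one per pixel (crude stand-in for
# fully constrained unmixing).
A_est = np.clip(A_est, 0, None)
A_est /= A_est.sum(axis=1, keepdims=True)

# Compact abundance maps: one low-dimensional plane per material, which
# would then be tokenized for the transformer encoder.
abundance_maps = A_est.reshape(h, w, endmembers)
print(abundance_maps.shape)  # (8, 8, 4)
```

Reducing hundreds of spectral bands to a handful of abundance planes per image is what keeps the subsequent self-attention stage lightweight: token dimensionality scales with the number of endmembers, not the number of bands.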
Published in: International Journal of Data Science and Analytics
Volume 22, Issue 1