Accurate meteorological visibility estimation is critical to the safety and reliability of transportation and environmental monitoring systems. Despite the prevalence of deep learning, models often struggle with the non-linear visual degradation caused by varying atmospheric conditions and with the scarcity of instrument-calibrated datasets. This study makes two primary contributions. First, we introduce the Hong Kong Chu Hai College Visibility Dataset (HKCHC-VD), comprising 11,148 high-resolution images paired with precise visibility measurements from a Biral SWS-100 sensor. Second, we propose a Range-Aware Attention Framework (RAT-Attn), an adaptive attention mechanism that translates classical range-specific atmospheric modeling into differentiable deep learning operations. This domain-specific architectural optimization integrates a dual-backbone design (CNN and Vision Transformer) with a learnable threshold mechanism. The design enables the model to dynamically prioritize spatial and channel-wise features based on estimated visibility intervals, specifically targeting the non-linear visual degradation unique to fog and haze. Experimental results demonstrate that our proposed approach outperforms existing baselines, including VisNet and landmark ANN-based methods. The ResNet + ViT (spatial-threshold) variant achieves the most balanced performance, recording a Mean Squared Error (MSE) of 5.87 km², a Mean Absolute Error (MAE) of 1.65 km, and a classification accuracy of 87.07%. In critical low-visibility conditions (0 to 10 km), the framework reduces regression error by over 75% compared to the baselines. These results confirm that range-aware adaptive feature fusion is essential for robust meteorological estimation in real-world environments.
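The abstract does not include implementation details, but the core idea of range-aware gating with a learnable threshold can be sketched in PyTorch as follows. This is a minimal illustration, not the authors' code: the module name, the 10 km initial threshold, the sigmoid gate, and the split between spatial attention (low visibility) and channel attention (clear conditions) are all assumptions made for exposition.

```python
# Hypothetical sketch of range-aware attention with a learnable threshold.
# Not the authors' implementation; names and defaults are illustrative.
import torch
import torch.nn as nn


class RangeAwareAttention(nn.Module):
    """Gates channel- and spatial-attention maps by a soft, learnable
    visibility threshold, so different visibility ranges can emphasize
    different features (one plausible reading of 'RAT-Attn')."""

    def __init__(self, channels: int, init_threshold_km: float = 10.0):
        super().__init__()
        # Learnable threshold separating the low-visibility regime (fog/haze)
        # from the clear regime; the 10 km initialization is an assumption.
        self.threshold = nn.Parameter(torch.tensor(init_threshold_km))
        self.sharpness = nn.Parameter(torch.tensor(1.0))  # softness of the gate
        # Channel attention (squeeze-and-excitation style).
        self.channel_attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention over the fused CNN+ViT feature map.
        self.spatial_attn = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, feats: torch.Tensor, coarse_vis_km: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) fused backbone features;
        # coarse_vis_km: (B,) rough visibility estimate in kilometres.
        # Soft membership in the low-visibility range via a sigmoid gate;
        # gate -> 1 when the estimate is well below the threshold.
        gate = torch.sigmoid(self.sharpness * (self.threshold - coarse_vis_km))
        gate = gate.view(-1, 1, 1, 1)
        spatial = feats * self.spatial_attn(feats)   # emphasized in fog/haze
        channel = feats * self.channel_attn(feats)   # emphasized in clear scenes
        return gate * spatial + (1.0 - gate) * channel
```

Because the gate is a differentiable function of the learnable threshold, the range boundary is trained end-to-end with the rest of the network rather than fixed by hand, which is how classical range-specific modeling can be translated into differentiable operations as the abstract describes.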