Inclement weather presents significant challenges for computer vision tasks by reducing visibility, altering contrast, and introducing color variations, problems further compounded by the dynamic nature of these environments. This paper introduces an image deraining approach that integrates a total variation (TV) preprocessing step with a Generative Adversarial Network (GAN) framework, enabling effective rain streak removal and detail recovery. Our model leverages a U-Net architecture enriched with advanced attention mechanisms, notably Contextual Attention, which captures long-range dependencies and contextual information across the image. Drawing on principles from image inpainting, which reconstructs missing or corrupted image regions, our model fills rain-obscured areas with contextually accurate details, closely approximating the underlying scene. Additionally, Squeeze-and-Excitation (SE) blocks and the Convolutional Block Attention Module (CBAM) are employed to further enhance feature extraction, distinguishing rain streaks from background features more effectively. We evaluate our approach using key metrics, including Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), Universal Image Quality Index (UQI), and Visual Information Fidelity (VIF). Extensive experiments on benchmark datasets, including Rain800, Rain100H, DID-MDN, and Rain100L, show that our method achieves PSNR values of 40.27 dB for light rain images, 38.76 dB for medium rain images, and 26.22 dB for heavy rain images. The proposed approach significantly outperforms existing state-of-the-art techniques in both qualitative and quantitative assessments, providing a robust solution for real-time applications that require clear, detailed imagery in adverse weather conditions.