Abstract

Lightning poses a significant threat to life, property, and infrastructure. Numerous studies to date have focused on short-term lightning prediction and on parameterizing lightning flashes within numerical weather prediction models. This paper compares two deep learning architectures, a U-Net and a convolutional long short-term memory (LSTM) network, for generating 1–4 day lightning prediction probabilities. Convective parameters and hydrometeor mixing ratios from the Global Forecast System (GFS) are used as inputs, and flashes detected by the Geostationary Operational Environmental Satellite Geostationary Lightning Mapper across the contiguous United States (CONUS) provide training labels. The two architectures are evaluated using attributes diagrams and precision-recall performance diagrams. The attributes diagrams show high reliability through Day 4 for both the U-Net and the LSTM. The area under the precision-recall curve (PR-AUC) decreases from Day 1 to Day 4 for both architectures, from 0.62 to 0.52 for the U-Net and from 0.61 to 0.50 for the LSTM. Single-variable permutation importances are presented, indicating that convective available potential energy (CAPE) is the most important variable for both the U-Net and the LSTM. Case studies of a forecast hit and a forecast miss are presented, accompanied by partial dependence plots (PDPs) that probe the models' dependence on their inputs. The hit forecast produced lightning probabilities above 90 percent for both models and agreed well with the Storm Prediction Center's Calibrated Thunder Guidance. The PDPs showed high spatial sensitivity to the CAPE, reflectivity, and precipitation-rate inputs. The missed forecast, at Vance AFB, failed to capture lightning that occurred on Day 1; its PDPs revealed that model output was unchanged when reflectivity was varied. Overall, both AI methods show promise for improved Day 1–4 lightning prediction.
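The two evaluation tools named above, PR-AUC and single-variable permutation importance, can be sketched in plain Python. This is a minimal illustration only: the helper names (`pr_auc`, `permutation_importance`), the toy model, and the toy data are assumptions for demonstration and are not taken from the paper's codebase.

```python
import random

def pr_auc(y_true, y_score):
    """Area under the precision-recall curve, trapezoid rule over recall."""
    pairs = sorted(zip(y_score, y_true), reverse=True)  # rank by score
    tp = fp = 0
    pos = sum(y_true)
    points = [(0.0, 1.0)]  # (recall, precision), anchored at recall 0
    for _, y in pairs:
        if y:
            tp += 1
        else:
            fp += 1
        points.append((tp / pos, tp / (tp + fp)))
    return sum((r1 - r0) * (p0 + p1) / 2
               for (r0, p0), (r1, p1) in zip(points, points[1:]))

def permutation_importance(model, X, y, var_idx, metric, seed=0):
    """Drop in the metric after shuffling one input variable across samples."""
    base = metric(y, [model(x) for x in X])
    col = [x[var_idx] for x in X]
    random.Random(seed).shuffle(col)  # break the variable's link to the labels
    X_perm = [x[:var_idx] + [c] + x[var_idx + 1:] for x, c in zip(X, col)]
    return base - metric(y, [model(x) for x in X_perm])

# Hypothetical toy setup: the "model" simply reads input variable 0,
# so permuting variable 1 should yield zero importance.
model = lambda x: x[0]
X = [[0, 5], [1, 3], [0, 7], [1, 2], [0, 9], [1, 1]]
y = [0, 1, 0, 1, 0, 1]
print(permutation_importance(model, X, y, 0, pr_auc))  # nonzero for variable 0
print(permutation_importance(model, X, y, 1, pr_auc))  # exactly 0 for variable 1
```

In the paper's setting the shuffled column would be a gridded GFS input field (e.g. CAPE) rather than a scalar feature, but the logic of permuting one input and measuring the PR-AUC drop is the same.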