Explainable AI for Environmental Decision Support: Interpreting Deep Learning Models in Climate Science
DOI: https://doi.org/10.64252/h6ng2f46

Keywords:
Explainable AI (XAI), Deep Learning, Climate Science, CNN-LSTM, SHAP, Environmental Decision Support

Abstract
Deep learning (DL) models have demonstrated high accuracy in climate science applications but suffer from "black-box" opacity, which hinders their adoption in environmental decision-making. This research addresses that gap by integrating Explainable AI (XAI) techniques with DL models to make climate predictions more transparent. Using a hybrid Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM) architecture, we forecast regional temperature anomalies and interpret the outputs with SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations). The methodology is validated on ERA5 reanalysis data (1980–2025), achieving a prediction RMSE of 0.86°C. The XAI analysis reveals that oceanic heat fluxes and atmospheric pressure patterns are the dominant drivers of the predicted anomalies. The framework provides policymakers with actionable insights, ensuring that DL models are both accurate and trustworthy in support of climate action.
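To make the pipeline concrete, the sketch below pairs a small Keras CNN-LSTM with SHAP's GradientExplainer on synthetic stand-in data. The grid dimensions, input channels, layer sizes, and hyperparameters are illustrative assumptions, not the authors' published configuration; a LIME explanation would be applied analogously at the single-prediction level.

```python
# A minimal sketch, assuming preprocessed ERA5 fields shaped
# (samples, timesteps, lat, lon, channels). All sizes and channel names
# below are placeholder assumptions, not the paper's exact setup.
import numpy as np
import shap
import tensorflow as tf
from tensorflow.keras import layers, models

TIMESTEPS, LAT, LON, CHANNELS = 12, 32, 64, 4  # e.g. SST, SLP, T2m, heat flux

def build_cnn_lstm() -> tf.keras.Model:
    # CNN layers extract spatial patterns from each time step's fields;
    # the LSTM models their temporal evolution; a dense head regresses
    # the regional temperature anomaly in °C.
    inp = layers.Input(shape=(TIMESTEPS, LAT, LON, CHANNELS))
    x = layers.TimeDistributed(
        layers.Conv2D(16, 3, activation="relu", padding="same"))(inp)
    x = layers.TimeDistributed(layers.MaxPooling2D(2))(x)
    x = layers.TimeDistributed(layers.Flatten())(x)
    x = layers.LSTM(64)(x)
    out = layers.Dense(1)(x)
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model

model = build_cnn_lstm()

# Synthetic stand-in data; in the paper this would be ERA5 sequences.
X = np.random.rand(200, TIMESTEPS, LAT, LON, CHANNELS).astype("float32")
y = np.random.randn(200, 1).astype("float32")
model.fit(X, y, epochs=2, batch_size=16, verbose=0)

# SHAP attribution: GradientExplainer supports Keras models; a small
# background sample approximates the expectation over the input distribution.
explainer = shap.GradientExplainer(model, X[:50])
sv = explainer.shap_values(X[:5])
sv = np.squeeze(np.asarray(sv))  # normalize shape across shap API versions

# Aggregating |SHAP| over samples, time, and space ranks the input channels
# (e.g. oceanic heat flux vs. pressure) by contribution to the forecast.
channel_importance = np.abs(sv).sum(axis=(0, 1, 2, 3))
print(dict(zip(["SST", "SLP", "T2m", "heat_flux"], channel_importance)))
```

The TimeDistributed wrapper applies the same convolutional filters at every time step, so spatial feature extraction is decoupled from the LSTM's temporal aggregation; SHAP values then inherit the full input shape, which is what allows attributions to be summed over any axis (space, time, or variable) depending on the question being asked.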