Challenges in Explainable AI and Privacy Preserving Machine Learning
DOI: https://doi.org/10.64252/cw1qpz94

Abstract
Machine learning models typically require user data as input. Privacy-preserving machine learning (PPML) focuses on protecting both users' data and the predictions produced by the ML model. Membership inference attacks, poisoning attacks, model extraction attacks, and reconstruction attacks threaten users' private information and the model's outputs. PPML techniques such as homomorphic encryption (HE), differential privacy (DP), federated learning (FL), and trusted execution environments (TEEs) help ensure data privacy. At the same time, the outputs generated by ML models should be transparent and interpretable. The need for both explainability in complex models and privacy preservation motivates this paper.
This paper discusses the scope of explainable AI (XAI) and privacy-preserving ML, covering the significance of ensuring privacy through PPML techniques such as HE, DP, FL, and TEEs, and of ensuring transparency through explainable AI. Its objective is to examine the challenges of applying XAI techniques such as Shapley Additive Explanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME) to predictions on encrypted data. This survey contributes to the areas of privacy-preserving machine learning (PPML) and privacy-preserving explainable AI (PPXAI) by providing insight into, and discussing the challenges of, bridging the gap between PPML and the need for security, transparency, and interpretability.
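As a concrete illustration of one of the PPML techniques surveyed here, the sketch below shows the Laplace mechanism of differential privacy: a numeric query result is perturbed with noise scaled to sensitivity/epsilon before release. This is a minimal, generic sketch, not an implementation from any specific system discussed in the paper; the query value and parameters are hypothetical.

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng=random):
    """Release true_value with epsilon-differential privacy by adding
    Laplace noise of scale sensitivity/epsilon (inverse-CDF sampling)."""
    scale = sensitivity / epsilon
    u = rng.random() - 0.5  # uniform on (-0.5, 0.5)
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

# Hypothetical counting query: counts have sensitivity 1 (one individual
# changes the result by at most 1), so noise scale is 1/epsilon.
random.seed(0)  # seeded only to make the sketch reproducible
true_count = 42
private_count = laplace_mechanism(true_count, sensitivity=1, epsilon=0.5)
```

A smaller epsilon widens the noise distribution, giving stronger privacy at the cost of a less accurate released value, which is the privacy-utility trade-off the survey refers to.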




