Federated SHAP: Privacy-Preserving and Consistent Post-hoc Explainability in Federated Learning

Pietro Ducange • Francesco Marcelloni • Giustino Claudio Miglionico • Fabrizio Ruffini
2026
  • journal article

Journal
MACHINE LEARNING
Abstract
The widespread adoption of Artificial Intelligence in everyday activities highlights a growing and urgent need for trustworthiness. Designing trustworthy AI systems requires addressing key technical challenges, including ensuring data privacy and model explainability. Federated Learning (FL) is a widely adopted paradigm to preserve data privacy in collaborative learning scenarios, while post-hoc methods are commonly applied to enhance the explainability of opaque AI-based models. In this paper, we propose a novel approach, called Federated SHAP, to simultaneously address privacy and explainability. Specifically, we leverage the SHapley Additive exPlanations (SHAP) method to provide post-hoc explanations of Neural Networks trained through FL. SHAP relies on a representative background dataset; however, constructing such a dataset in the FL setting is particularly challenging since raw data distributed across multiple clients cannot be shared directly due to strict privacy requirements. To address this challenge, we propose two tailored strategies depending on the data type: for tabular data, we adopt a Federated Fuzzy C-Means clustering algorithm to collaboratively summarize the distributed datasets into a suitable background dataset; for image data, we introduce a Federated Generative Adversarial Network (GAN) to synthesize representative background instances. A comprehensive experimental evaluation demonstrates the effectiveness and robustness of our proposed approaches, comparing them against several baseline and alternative strategies in terms of both representativeness and quality of generated explanations. Compared to baselines employing randomly generated representative background datasets, our approach reduces the discrepancy of SHAP explanations by up to three times on tabular data and two times on image data (depending on the test case involved), when measured against the centralized SHAP values computed using the full training set as background dataset.
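For tabular data, the abstract describes summarizing the distributed datasets with a Federated Fuzzy C-Means clustering algorithm, whose centroids then serve as the SHAP background dataset. The sketch below is a minimal single-site Fuzzy C-Means in NumPy to illustrate how cluster centroids act as compact data prototypes; it is not the paper's federated protocol, which would aggregate cluster statistics across clients without sharing raw data, and all names and parameter choices here are illustrative.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
    """Plain (single-site) Fuzzy C-Means; returns c centroids summarizing X."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # random initial membership matrix U (n x c), each row sums to 1
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        # centroids are membership-weighted means of the data points
        centroids = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Euclidean distance from every point to every centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)  # avoid division by zero
        # standard FCM membership update
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centroids

# toy data: two well-separated blobs standing in for one client's table
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(5, 0.1, (50, 2))])
background = fuzzy_c_means(X, c=2)
print(background.shape)  # (2, 2): two prototype rows usable as a SHAP background
```

The resulting `background` array could be passed where a SHAP explainer expects a background dataset (e.g., as the `data` argument of `shap.KernelExplainer`), replacing the raw training rows that privacy constraints forbid sharing.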
DOI
10.1007/s10994-025-06956-1
WOS
WOS:001665617400001
Archive
https://hdl.handle.net/11368/3124578
info:eu-repo/semantics/altIdentifier/scopus/2-s2.0-105027956425
https://link.springer.com/article/10.1007/s10994-025-06956-1
Rights
closed access
license: publisher copyright
license uri:iris.pri02
FVG url
https://arts.units.it/request-item?handle=11368/3124578
Subjects
  • Explainable artificial intelligence

  • Federated learning

  • Fuzzy C-means

  • Fuzzy clustering

  • GAN

  • SHAP
