Model learning with personalized interpretability estimation (ML-PIE)

Virgolin M. • De Lorenzo A. • Randone F. • Wahde M.
2021
  • conference object

Abstract
High-stakes applications require AI-generated models to be interpretable. Current algorithms for the synthesis of potentially interpretable models rely on objectives or regularization terms that represent interpretability only coarsely (e.g., model size) and are not designed for a specific user. Yet, interpretability is intrinsically subjective. In this paper, we propose an approach for the synthesis of models that are tailored to the user by enabling the user to steer the model synthesis process according to their preferences. We use a bi-objective evolutionary algorithm to synthesize models with trade-offs between accuracy and a user-specific notion of interpretability. The latter is estimated by a neural network that is trained concurrently with the evolution using the feedback of the user, which is collected using uncertainty-based active learning. To maximize usability, the user is only asked to tell, given two models at a time, which one is less complex. With experiments on two real-world datasets involving 61 participants, we find that our approach is capable of learning estimations of interpretability that can be very different for different users. Moreover, the users tend to prefer models found using the proposed approach over models found using non-personalized interpretability indices.
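The feedback loop described in the abstract (an estimator of user-perceived complexity, queried on the pair of models it is least certain about, and updated from the user's "which one is less complex" answer) can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the linear scorer, the two-number feature vectors, the Bradley-Terry-style logistic update, and the simulated user all stand in for the paper's neural network and real participants.

```python
import math
import random

def score(w, x):
    """Linear complexity score for a model described by feature vector x."""
    return sum(wi * xi for wi, xi in zip(w, x))

def most_uncertain_pair(w, models):
    """Return the index pair whose predicted scores are closest:
    the estimator is least certain which of the two the user finds simpler."""
    best, best_gap = (0, 1), float("inf")
    for i in range(len(models)):
        for j in range(i + 1, len(models)):
            gap = abs(score(w, models[i]) - score(w, models[j]))
            if gap < best_gap:
                best, best_gap = (i, j), gap
    return best

def update(w, x_simpler, x_complex, lr=0.5):
    """One logistic (Bradley-Terry style) update on a pairwise judgment:
    nudge the weights so the model the user called simpler scores lower."""
    z = score(w, x_complex) - score(w, x_simpler)
    p = 1.0 / (1.0 + math.exp(-z))  # current P(assigning the observed order)
    g = 1.0 - p                     # gradient magnitude of -log p w.r.t. z
    return [wi + lr * g * (xc - xs)
            for wi, xc, xs in zip(w, x_complex, x_simpler)]

random.seed(0)
# Hypothetical model features: (size, number of nonlinear operators).
models = [(random.randint(1, 30), random.randint(0, 8)) for _ in range(20)]
hidden = (0.2, 1.0)   # simulated user: operators bother them more than size
w = [0.0, 0.0]        # the estimator starts with no preference at all

for _ in range(40):
    i, j = most_uncertain_pair(w, models)
    a, b = models[i], models[j]
    if score(hidden, a) <= score(hidden, b):   # user answers: "a is simpler"
        w = update(w, a, b)
    else:
        w = update(w, b, a)

print("learned weights:", [round(wi, 2) for wi in w])
```

Each queried pair is the one the current estimator can least distinguish, so every answer is maximally informative; the paper replaces the linear scorer with a neural network trained online during the evolutionary run.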
DOI
10.1145/3449726.3463166
Archive
http://hdl.handle.net/11368/2993382
info:eu-repo/semantics/altIdentifier/scopus/2-s2.0-85111049856
https://dl.acm.org/doi/abs/10.1145/3449726.3463166
Rights
open access
license: publisher copyright
FVG URL
https://arts.units.it/bitstream/11368/2993382/1/3449726.3463166.pdf
Subjects
  • active learning
  • explainable artificia...
  • genetic programming
  • interpretable machine...
  • neural networks

Scopus© citations: 2 (data acquired Jun 7, 2022)
Views: 3 (data acquired Apr 19, 2024)
Built with DSpace-CRIS software - extension maintained and optimized by 4Science
