Learning a formula of interpretability to learn interpretable formulas

Virgolin M., De Lorenzo A., Medvet E., Randone F.
2020
  • conference object

Abstract
Many risk-sensitive applications require Machine Learning (ML) models to be interpretable. Attempts to obtain interpretable models typically rely on tuning, by trial and error, hyper-parameters of model complexity that are only loosely related to interpretability. We show that it is instead possible to take a meta-learning approach: an ML model of non-trivial Proxies of Human Interpretability (PHIs) can be learned from human feedback, and this model can then be incorporated within an ML training process to directly optimize for interpretability. We demonstrate this for evolutionary symbolic regression. We first design and distribute a survey aimed at finding a link between features of mathematical formulas and two established PHIs, simulatability and decomposability. Next, we use the resulting dataset to learn an ML model of interpretability. Lastly, we query this model to estimate the interpretability of evolving solutions within bi-objective genetic programming. We perform experiments on five synthetic and eight real-world symbolic regression problems, comparing against the traditional minimization of solution size. The results show that using our model leads to formulas that, at the same level of the accuracy-interpretability trade-off, are either significantly more accurate or equally accurate. Moreover, the formulas are also arguably more interpretable. Given these very positive results, we believe that our approach represents an important stepping stone for the design of next-generation interpretable (evolutionary) ML algorithms.
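The pipeline the abstract describes can be illustrated with a minimal sketch: a learned interpretability estimate is plugged in as a second objective next to prediction error, and Pareto-based selection keeps the non-dominated formulas. Everything below is an assumption for illustration only: the feature names, the linear weights in `phi_estimate`, and the toy formulas are hypothetical stand-ins, not the survey-derived model or the genetic programming setup from the paper.

```python
def formula_features(tokens):
    """Toy feature extraction from a tokenized formula.
    These features (size, count of non-arithmetic operations) are
    hypothetical stand-ins for whatever a survey-derived PHI model uses."""
    return {
        "size": len(tokens),
        "n_nonarith": sum(t in {"sin", "cos", "exp", "log", "sqrt"} for t in tokens),
    }

def phi_estimate(feat):
    """Stand-in linear interpretability proxy (higher = more interpretable).
    The paper learns such a model from human feedback; the weights here
    are illustrative only."""
    return -(1.0 * feat["size"] + 2.0 * feat["n_nonarith"])

def pareto_front(population, errors):
    """Indices of non-dominated formulas for the bi-objective problem:
    minimize prediction error AND minimize estimated non-interpretability."""
    objs = [(errors[i], -phi_estimate(formula_features(p)))
            for i, p in enumerate(population)]
    front = []
    for i, a in enumerate(objs):
        dominated = any(
            b[0] <= a[0] and b[1] <= a[1] and (b[0] < a[0] or b[1] < a[1])
            for j, b in enumerate(objs) if j != i
        )
        if not dominated:
            front.append(i)
    return front

population = [
    ["x", "+", "y"],                                    # small, arithmetic only
    ["sin", "x"],                                       # tiny, but uses sin
    ["x", "*", "x", "+", "sin", "y", "+", "exp", "x"],  # accurate but complex
    ["x", "+", "y", "+", "x"],                          # larger AND less accurate
]
errors = [0.5, 0.3, 0.1, 0.6]
print(pareto_front(population, errors))  # → [0, 1, 2]
```

The last formula is dropped because the first one dominates it on both objectives; the other three trade accuracy against estimated interpretability, which is exactly the front a bi-objective GP run would present to the user.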
DOI
10.1007/978-3-030-58115-2_6
WOS
WOS:001299687200006
Archive
http://hdl.handle.net/11368/2978952
info:eu-repo/semantics/altIdentifier/scopus/2-s2.0-85091188050
https://link.springer.com/chapter/10.1007/978-3-030-58115-2_6
Rights
open access
License: publisher copyright
FVG URL
https://arts.units.it/bitstream/11368/2978952/1/2020-PPSN-FormulaInterpretabilityLearning (3).pdf
Subjects
  • Explainable artificia...
  • Genetic programming
  • Interpretable machine...
  • Multi-objective
  • Symbolic regression
Scopus© citations
6
Acquisition date
Jun 7, 2022