Interpretable Time Series Classification via Temporal Logic Embeddings

FERFOGLIA, IRENE
  • doctoral thesis

Abstract
This thesis addresses the problem of interpretability in time series classification, a domain where state-of-the-art deep learning models achieve high predictive performance at the cost of transparency. In safety-critical and regulated application areas, such as healthcare, industrial monitoring, and cyber-physical systems, opaque decision-making processes hinder trust, accountability, and practical deployment. Existing explainability approaches for time series are predominantly post-hoc, often unstable and weakly grounded semantically, and do not guarantee faithfulness to the model’s actual reasoning. The primary objective of this thesis is to develop a time series classification framework in which interpretability is embedded directly into the learning process. Rather than explaining predictions after training, the proposed approach aims to produce predictions that are inherently expressed in terms of human-understandable temporal concepts. To achieve this, the thesis investigates the integration of Signal Temporal Logic (STL) into modern learning architectures. STL provides a formally grounded and expressive language for describing temporal properties such as sustained behaviours over time and bounded temporal constraints, which are central to many real-world time series phenomena. The main contribution of the thesis is STELLE (Signal Temporal Logic Embedding for Logically-grounded Learning and Explanation), a neuro-symbolic architecture for interpretable time series classification. STELLE maps raw time series trajectories into a concept space defined by STL formulae using a novel trajectory embedding kernel based on STL robustness. This kernel establishes a quantitative and differentiable link between continuous signals and symbolic temporal concepts, enabling joint optimisation of classification performance and interpretability. STELLE is interpretable by design and produces explanations directly from its internal structure. 
The framework supports both local explanations, which characterise individual predictions, and global explanations, which capture class-level temporal patterns, without relying on additional post-hoc methods. The resulting explanations are concise, semantically meaningful, and faithful to the classifier’s decision process. An extensive experimental evaluation on univariate and multivariate benchmarks demonstrates that STELLE achieves competitive accuracy with respect to state-of-the-art time series classification methods, while providing logically grounded explanations. Through ablation and sensitivity studies, the thesis analyses the impact of key architectural and design choices, illustrating the trade-offs between expressiveness, interpretability, and performance. Overall, this work contributes a principled neuro-symbolic framework that advances interpretable and trustworthy learning for time series data.
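The abstract's key mechanism is the use of STL robustness as a quantitative bridge between signals and temporal concepts. As a minimal illustrative sketch (not the STELLE implementation, whose kernel is defined in the thesis itself), the standard quantitative semantics of two basic STL operators can be computed over a sampled signal as follows; the signal values and thresholds are hypothetical:

```python
import numpy as np

# Sketch of standard STL quantitative (robustness) semantics.
# rho > 0 means the formula is satisfied; |rho| is the satisfaction margin.

def rho_atom(x, c):
    """Robustness of the atomic predicate x > c at each time step."""
    return x - c

def rho_globally(rho, a, b):
    """Robustness of G_[a,b] phi at t=0: worst case of phi over [a, b]."""
    return np.min(rho[a:b + 1])

def rho_eventually(rho, a, b):
    """Robustness of F_[a,b] phi at t=0: best case of phi over [a, b]."""
    return np.max(rho[a:b + 1])

# Hypothetical signal that stays above 0.5 on steps 2..5 ("sustained behaviour")
x = np.array([0.2, 0.4, 0.8, 0.9, 0.7, 0.6, 0.3])
print(rho_globally(rho_atom(x, 0.5), 2, 5))    # ~0.1: satisfied, small margin
print(rho_eventually(rho_atom(x, 0.5), 0, 6))  # ~0.4: satisfied, larger margin
```

Because both `min` and `max` admit smooth (e.g. log-sum-exp) approximations, such robustness values can be made differentiable, which is what enables the joint optimisation of accuracy and interpretability described above.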
Archive
https://hdl.handle.net/11368/3129438
Rights
open access
FVG URL
https://arts.units.it/bitstream/11368/3129438/2/FERFOGLIA_PhD_Thesis.pdf
Subjects
  • Time Series
  • Explainability
  • Deep Learning
  • Neuro-symbolic
  • Temporal Logic
  • Settore INF/01 - Info...
