Explainable Automated Anomaly Recognition in Failure Analysis: is Deep Learning Doing it Correctly?

Leonardo Arrighi • Sylvio Barbon Junior • Felice Andrea Pellegrino • Marco Zullich
2023 • conference object

Abstract
EXplainable AI (XAI) techniques can be employed to help identify points of concern in the objects analyzed when using image-based Deep Neural Networks (DNNs). There has been an increasing number of works proposing the usage of DNNs to perform Failure Analysis (FA) in various industrial applications. These DNNs support practitioners by providing an initial screening to speed up the manual FA process. In this work, we offer a proof-of-concept for using a DNN to recognize failures in pictures of Printed Circuit Boards (PCBs), using the boolean information of (non) faultiness as ground truth. To understand if the model correctly identifies faulty connectors within the PCBs, we make use of XAI tools based on Class Activation Mapping (CAM), observing that the output of these techniques seems not to align well with these connectors. We further analyze the faithfulness of these techniques with respect to the DNN, observing that often they do not seem to capture relevant features according to the model’s decision process. Finally, we mask out faulty connectors from the original images, noticing that the DNN predictions do not change significantly, thus showing that the model possibly did not learn to base its predictions on features associated with actual failures. We conclude with a warning that FA using DNNs should be conducted using more complex techniques, such as object detection, and that XAI tools should not be taken as oracles, but their correctness should be further analyzed.
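The two checks described in the abstract (computing CAM-based explanations for a faulty/non-faulty PCB classifier, and masking out a suspected faulty connector to see whether the prediction changes) can be illustrated with a short sketch. The Python code below is not the authors' implementation: it assumes a hypothetical ResNet-18 binary classifier, a placeholder image file (pcb_example.jpg), and an arbitrary bounding box for the masked connector, and it uses a plain Grad-CAM computed via forward/backward hooks as a stand-in for the CAM-based tools referenced in the paper.

# Minimal sketch of (1) a Grad-CAM heatmap for a binary faulty/non-faulty
# PCB classifier and (2) a masking check on a suspected faulty region.
# Model, image path, and masked region are hypothetical placeholders.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Hypothetical binary classifier: ResNet-18 with a 2-class head (non-faulty / faulty).
model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def grad_cam(model, x, target_class, target_layer):
    """Plain Grad-CAM: weight the target layer's activations by the
    spatially averaged gradients of the target class score."""
    activations, gradients = {}, {}

    def fwd_hook(_, __, output):
        activations["value"] = output

    def bwd_hook(_, grad_in, grad_out):
        gradients["value"] = grad_out[0]

    h1 = target_layer.register_forward_hook(fwd_hook)
    h2 = target_layer.register_full_backward_hook(bwd_hook)
    try:
        scores = model(x)
        model.zero_grad()
        scores[0, target_class].backward()
    finally:
        h1.remove()
        h2.remove()

    acts = activations["value"]                       # (1, C, H, W)
    grads = gradients["value"]                        # (1, C, H, W)
    weights = grads.mean(dim=(2, 3), keepdim=True)    # channel importance
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze().detach()

# Hypothetical PCB image.
img = preprocess(Image.open("pcb_example.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    p_before = F.softmax(model(img), dim=1)[0, 1].item()   # P(faulty)

heatmap = grad_cam(model, img, target_class=1, target_layer=model.layer4[-1])

# Masking check: zero out a (hypothetical) bounding box around a faulty
# connector and compare the prediction before and after.
masked = img.clone()
masked[:, :, 60:120, 80:160] = 0.0                          # hypothetical connector region
with torch.no_grad():
    p_after = F.softmax(model(masked), dim=1)[0, 1].item()

print(f"P(faulty) before masking: {p_before:.3f}, after: {p_after:.3f}")

If P(faulty) remains essentially unchanged after the connector region is masked out, this mirrors the paper's observation that the model may not be basing its prediction on features associated with the actual failure.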
DOI
10.1007/978-3-031-44067-0_22
WOS
WOS:001286483700022
Archive
https://hdl.handle.net/11368/3061718
info:eu-repo/semantics/altIdentifier/scopus/2-s2.0-85175960649
https://link.springer.com/chapter/10.1007/978-3-031-44067-0_22
Rights
open access
license: publisher copyright
license uri: iris.pri02
FVG url
https://arts.units.it/bitstream/11368/3061718/3/2023___Leonardo_Arrighi___Explainable_AI_PCBs__compressed.pdf
Subjects
  • Convolutional Neural Networks
  • Explainable Artificial Intelligence
  • Class Activation Mapping
  • Faithfulness
  • Printed Circuit Boards