A Representation Learning Perspective on some Emergent Properties of Neural Networks

BASILE, LORENZO
  • doctoral thesis

Abstract
At the core of modern Artificial Intelligence (AI) are Neural Networks, complex computational models inspired by the interconnected structure of neurons in the brain, which underpins natural intelligence. Neural networks extract meaningful patterns from data by first projecting them into high-dimensional latent representations, which can then be leveraged to solve a variety of downstream tasks. In doing so, mimicking the brain, they rely on the interplay of many interconnected computational units that, taken individually, perform simple operations. However, when observed at the scale of the full model, their collective behavior can give rise to unforeseen emergent properties. Understanding these properties could improve the performance and reliability of neural networks, and it remains an open research problem in AI interpretability. In this thesis, our aim is to investigate some of these properties by analyzing the latent representations learned by neural networks.
Archive
https://hdl.handle.net/11368/3106330
Rights
open access
FVG url
https://arts.units.it/bitstream/11368/3106330/2/thesis_basile_final.pdf
Subjects
  • Deep Learning

  • Neural Network

  • Representation

  • Emergent properties

  • Interpretability

  • Settore INF/01 - Info...

