Fusing contextual word embeddings for concreteness estimation

Incitti F. • Snidaro L.
2021 • conference object

Abstract
Natural Language Processing (NLP) has a long history, and recent research has focused in particular on encoding meaning in a computable way. Word embeddings serve this purpose, allowing language tasks to be treated as mathematical problems: real-valued vectors are generated or employed as word representations for many NLP tasks. In this work, different types of pre-trained word embeddings are fused to estimate word concreteness. In evaluating this task, we consider how much contextual information affects the final results, and how to properly fuse different word embeddings to maximize their performance. The best architecture in our study surpasses the winning solution of the word concreteness task in the Evalita 2020 competition.
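The fusion idea sketched in the abstract (representing each word by several pre-trained embeddings at once and regressing a concreteness rating) can be illustrated with a minimal sketch. This is an assumption-laden demo, not the paper's architecture: the random matrices stand in for actual GloVe/BERT vectors, the 1-7 ratings are synthetic, and a closed-form ridge regressor replaces the autoencoder-based fusion studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-ins for two pre-trained embedding tables (e.g. a static model such as
# GloVe and a contextual model such as BERT); random matrices for the demo.
n_words, d_static, d_contextual = 200, 50, 64
emb_a = rng.standard_normal((n_words, d_static))
emb_b = rng.standard_normal((n_words, d_contextual))

# Early fusion by concatenation: each word is represented by both views.
fused = np.concatenate([emb_a, emb_b], axis=1)

# Toy concreteness ratings on a 1-7 scale, generated linearly for the demo.
true_w = rng.standard_normal(fused.shape[1])
scores = fused @ true_w
y = 1 + 6 * (scores - scores.min()) / (scores.max() - scores.min())

# Ridge regression in closed form on the fused vectors (bias column included):
#   w = (X^T X + lam I)^{-1} X^T y
X = np.hstack([fused, np.ones((n_words, 1))])
lam = 1e-3
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
pred = X @ w

# Pearson correlation between predicted and "gold" ratings.
r = np.corrcoef(pred, y)[0, 1]
print(f"fused dim: {fused.shape[1]}, correlation: {r:.3f}")
```

Concatenation is the simplest fusion strategy; the paper also investigates how much each embedding's contextual information contributes and how fusion should be weighted to maximize performance.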
WOS
WOS:000869154400067
Archive
https://hdl.handle.net/11390/1220059
info:eu-repo/semantics/altIdentifier/scopus/2-s2.0-85123409801
https://ricerca.unityfvg.it/handle/11390/1220059
Rights
closed access
Subjects
  • Autoencoder
  • BERT
  • Concreteness task
  • Context
  • ELMo
  • GloVe
  • Information Fusion
  • NLP
  • Word Embedding
  • Word2vec
