In recent years, the idea of fusing diverse types of information has often been employed to solve Deep Learning tasks. Whether the problem belongs to NLP or Machine Vision, combining multiple inputs of the same type has been the basis of many studies. In NLP, combinations of different word embeddings have already been explored, yielding improvements on the most common benchmarks. Here we explore the combination not only of different types of input, but also of different data modalities. This is done by fusing two popular word embeddings, namely ELMo and BERT, with other inputs that embed a visual description of the analysed text. In this way, two modalities (textual and visual) are jointly employed to solve a textual problem: a concreteness task. Multimodal feature fusion is explored through several techniques: input redundancy, concatenation, averaging, dimensionality reduction and augmentation. By combining these techniques it is possible to generate different vector representations; the goal is to understand which feature fusion techniques yield the most accurate embeddings.
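As a minimal sketch of two of the fusion techniques named above (concatenation and averaging), the snippet below combines three hypothetical embedding vectors. The dimensions are assumptions (1024 for ELMo, 768 for BERT-base, 512 for the visual features), and the random projection stands in for whatever learned mapping brings the modalities to a common size; it is not the method of the paper itself.

```python
import numpy as np

# Hypothetical pre-computed embeddings for one text item (dimensions are
# assumptions: ELMo is commonly 1024-d, BERT-base 768-d; 512-d visual
# features are a placeholder).
rng = np.random.default_rng(0)
elmo = rng.standard_normal(1024)
bert = rng.standard_normal(768)
visual = rng.standard_normal(512)

# Concatenation: stack all modalities into one long vector.
concat = np.concatenate([elmo, bert, visual])  # shape (2304,)

def project(v, dim, seed):
    """Random linear projection to `dim` dimensions (a stand-in for a
    learned projection; needed so vectors of different sizes can be averaged)."""
    w = np.random.default_rng(seed).standard_normal((dim, v.shape[0]))
    return (w @ v) / np.sqrt(v.shape[0])

# Averaging: map every modality to a common size, then take the element-wise mean.
common = 512
avg = np.mean([project(elmo, common, 1),
               project(bert, common, 2),
               project(visual, common, 3)], axis=0)  # shape (512,)

print(concat.shape, avg.shape)
```

Concatenation preserves all information at the cost of a larger vector, while averaging keeps the fused representation compact; which trade-off produces more accurate embeddings is exactly the question the comparison above investigates.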