Visually grounded and textual semantic models differentially decode brain activity associated with concrete and abstract nouns
Volume: 5
Pagination: 17–30 (14)
Journal: Transactions of the Association for Computational Linguistics
Abstract
Important advances have recently been made using computational semantic models to decode brain activity patterns associated with concepts; however, this work has almost exclusively focused on concrete nouns. How well these models extend to decoding abstract nouns is largely unknown. We address this question by applying state-of-the-art computational models to decode functional Magnetic Resonance Imaging (fMRI) activity patterns, elicited by participants reading and imagining a diverse set of both concrete and abstract nouns. One of the models we use is linguistic, exploiting the recent word2vec skipgram approach trained on Wikipedia. The second is visually grounded, using deep convolutional neural networks trained on Google Images. Dual coding theory considers concrete concepts to be encoded in the brain both linguistically and visually, and abstract concepts only linguistically. Splitting the fMRI data according to human concreteness ratings, we indeed observe that both models significantly decode the most concrete nouns; however, accuracy is significantly greater using the text-based models for the most abstract nouns. More generally, this confirms that current computational models are sufficiently advanced to assist in investigating the representational structure of abstract concepts in the brain.
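The abstract does not spell out the decoding procedure itself. As a rough, hypothetical illustration of how this style of decoding is commonly set up (a linear map from fMRI voxel patterns to semantic vectors, scored with leave-two-out pairwise matching), the sketch below uses synthetic data in place of real fMRI recordings and word2vec features; the dimensions, ridge penalty, and noise level are all illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch: leave-two-out pairwise decoding with ridge regression.
# Synthetic data stands in for real fMRI patterns and word2vec vectors.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n_nouns, n_voxels, n_dims = 20, 50, 10

# Semantic vectors (stand-in for word2vec) and noisy linear fMRI responses.
semantic = rng.standard_normal((n_nouns, n_dims))
mixing = rng.standard_normal((n_dims, n_voxels))
fmri = semantic @ mixing + 0.1 * rng.standard_normal((n_nouns, n_voxels))

def ridge_fit(X, Y, lam=1.0):
    """Closed-form ridge regression mapping X -> Y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

def corr(a, b):
    """Pearson correlation between two vectors."""
    return np.corrcoef(a, b)[0, 1]

def pairwise_accuracy(fmri, semantic):
    """For each pair of nouns, train on the rest, predict semantic vectors
    for the two held-out items, and count the pair correct if the matched
    assignment correlates higher than the swapped one (chance = 0.5)."""
    correct = 0
    pairs = list(combinations(range(len(fmri)), 2))
    for i, j in pairs:
        train = [k for k in range(len(fmri)) if k not in (i, j)]
        W = ridge_fit(fmri[train], semantic[train])
        p_i, p_j = fmri[i] @ W, fmri[j] @ W
        if (corr(p_i, semantic[i]) + corr(p_j, semantic[j])
                > corr(p_i, semantic[j]) + corr(p_j, semantic[i])):
            correct += 1
    return correct / len(pairs)

acc = pairwise_accuracy(fmri, semantic)
```

With the low synthetic noise used here, accuracy lands well above the 0.5 chance level; on real fMRI data the same metric is typically assessed against a permutation-based significance threshold.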