Comparing Probabilistic, Distributional and Transformer-Based Models on Logical Metonymy Interpretation

Giulia Rambelli1, Emmanuele Chersoni2, Alessandro Lenci1, Philippe Blache3, Chu-Ren Huang4
1University of Pisa, 2Hong Kong Polytechnic University, 3LPL CNRS, 4The Hong Kong Polytechnic University


Abstract

In linguistics and cognitive science, logical metonymies are defined as type clashes between an event-selecting verb and an entity-denoting noun (e.g., The editor finished the article), which are typically interpreted by inferring a hidden event (e.g., reading) on the basis of contextual cues.

This paper tackles the problem of logical metonymy interpretation, that is, the retrieval of the covert event via computational methods. We compare several types of models, including the probabilistic and distributional ones previously introduced in the literature on the topic. For the first time, we also test on this task some recent Transformer-based models, such as BERT, RoBERTa, XLNet, and GPT-2.

Our results show a complex scenario, in which the best Transformer-based models and some traditional distributional models perform very similarly. However, the low performance on some of the test datasets suggests that logical metonymy remains a challenging phenomenon for computational modeling.