Thanks to the effectiveness and wide availability of modern pretrained language models (PLMs), recently proposed approaches have achieved remarkable results in dependency- and span-based, multilingual and cross-lingual Semantic Role Labeling (SRL). These results have prompted researchers to investigate the inner workings of modern PLMs with the aim of understanding how, where, and to what extent they encode information about SRL. In this paper, we follow this line of research and probe for predicate argument structures in PLMs. Our study shows that PLMs do encode semantic structures directly into the contextualized representation of a predicate, and also provides insights into the correlation between predicate senses and their structures, the degree of transferability between nominal and verbal structures, and how such structures are encoded across languages. Finally, we look at the practical implications of such insights and demonstrate the benefits of embedding predicate argument structure information into an SRL model.
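The probing setup the abstract describes (training a lightweight classifier on frozen contextualized predicate representations) can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation: synthetic Gaussian clusters stand in for real PLM embeddings of predicates, the toy label inventory is invented, and the linear probe is ordinary logistic regression.

```python
# Minimal linear-probe sketch (hypothetical setup): synthetic vectors whose
# means depend on a toy predicate-argument-structure label stand in for
# frozen PLM predicate representations; a linear classifier is the probe.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
HIDDEN = 64          # stand-in for the PLM hidden size
N_PER_CLASS = 200    # synthetic "predicate occurrences" per structure label
N_CLASSES = 3        # toy inventory of predicate argument structures

# One Gaussian cluster of "contextualized predicate embeddings" per label.
centers = rng.normal(size=(N_CLASSES, HIDDEN))
X = np.concatenate(
    [c + 0.5 * rng.normal(size=(N_PER_CLASS, HIDDEN)) for c in centers]
)
y = np.repeat(np.arange(N_CLASSES), N_PER_CLASS)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# The probe itself: a simple linear classifier over frozen representations.
# High held-out accuracy would suggest the label is linearly decodable
# from the representations -- the core idea behind probing studies.
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
accuracy = probe.score(X_te, y_te)
print(f"probe accuracy: {accuracy:.2f}")
```

With real data, `X` would instead hold a PLM's hidden states at predicate positions and `y` the annotated structures; the probe and evaluation loop stay the same.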
Publication details
2022, Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Pages 4622-4632
Probing for Predicate Argument Structures in Pretrained Language Models (04b Conference paper in proceedings)
Simone Conia, Roberto Navigli
Research group: Natural Language Processing