Since 2021, WMT human evaluation has been performed within the Multidimensional Quality Metrics (MQM) framework, in which human annotators identify error spans in translations and label each span with an error category and a severity. In this paper, we describe our submission to the WMT 2022 Metrics Shared Task, where we propose using the same paradigm for automatic evaluation: we present the MaTESe metrics, which reframe machine translation evaluation as a sequence tagging problem. Our submission also includes a reference-free metric, named MaTESe-QE. Despite the paucity of openly available MQM data, our metrics obtain promising results, showing high levels of correlation with human judgements while also enabling an interpretable evaluation. Moreover, MaTESe-QE can be employed in settings where manually curating reference translations is infeasible.
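To make the sequence tagging formulation concrete, below is a minimal Python sketch of the paradigm, not the MaTESe implementation: the tag set (OK/MINOR/MAJOR) and the MQM-style severity weights (minor = 1, major = 5) are assumptions for illustration. Per-token tags are collapsed into error spans, which are then aggregated into a quality penalty.

from typing import List, Tuple

SEVERITY_WEIGHTS = {"MINOR": 1.0, "MAJOR": 5.0}  # assumed MQM-style weights

def spans_from_tags(tags: List[str]) -> List[Tuple[int, int, str]]:
    """Collapse per-token tags into (start, end, severity) error spans."""
    spans: List[Tuple[int, int, str]] = []
    start, severity = None, None
    for i, tag in enumerate(tags + ["OK"]):  # "OK" sentinel flushes the last span
        if start is not None and tag != severity:
            spans.append((start, i, severity))
            start, severity = None, None
        if tag != "OK" and start is None:
            start, severity = i, tag
    return spans

def quality_penalty(tags: List[str]) -> float:
    """Sum severity penalties over tagged error spans (lower is better)."""
    return sum(SEVERITY_WEIGHTS[sev] for _, _, sev in spans_from_tags(tags))

# Example: one minor error span (the misspelled "hte") yields a penalty of 1.0.
tags = ["OK", "OK", "OK", "OK", "MINOR", "OK"]
print(spans_from_tags(tags))  # [(4, 5, 'MINOR')]
print(quality_penalty(tags))  # 1.0

Deriving the score from spans rather than from individual tokens mirrors MQM practice, where each error span is penalised once according to its severity, regardless of its length.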
Publication details
2022, Proceedings of the Seventh Conference on Machine Translation (WMT), Pages 569-577
MaTESe: Machine Translation Evaluation as a Sequence Tagging Problem (04b Conference paper in volume)
Perrella Stefano, Proietti Lorenzo, Scirè Alessandro, Campolungo Niccolò, Navigli Roberto
Research group: Natural Language Processing