In this paper we present a novel mechanism to produce explanations that allow a better understanding of network predictions when dealing with sequential data. Specifically, we adopt memory-based networks, namely Differentiable Neural Computers, to exploit their capability of storing data in memory and reusing it for inference. By tracking both the memory access at prediction time and the information stored by the network at each step of the input sequence, we can retrieve the input steps most relevant to each prediction. We validate our approach (1) on a modified T-maze, a non-Markovian discrete control task that evaluates an algorithm's ability to correlate events far apart in history, and (2) on the Story Cloze Test, a commonsense reasoning framework for evaluating story understanding that requires a system to choose the correct ending to a four-sentence story. Our results show that we are able to explain the agent's decisions in (1) and to reconstruct the most relevant sentences used by the network to select the story ending in (2). Additionally, we show not only that removing those sentences changes the network prediction, but also that the same sentences are sufficient to reproduce the inference.
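As a rough illustration of the memory-tracking idea described above (a simplified sketch, not the authors' exact procedure), the snippet below assumes we have recorded the per-step write weights of a DNC-style memory while the network ingested the input sequence, together with the read weights used at prediction time. The function name relevant_input_steps, the normalisation scheme, and the toy dimensions are hypothetical.

import numpy as np

def relevant_input_steps(write_weights, read_weights, top_k=3):
    # write_weights: (T, N) attention over N memory slots at each of T input steps
    # read_weights:  (R, N) attention of R read heads over the slots at prediction time
    # Attribute each slot to the input steps that wrote to it (share of total writing).
    slot_attribution = write_weights / (write_weights.sum(axis=0, keepdims=True) + 1e-8)
    # Aggregate read attention over heads into a per-slot usage vector.
    read_usage = read_weights.sum(axis=0)
    # Relevance of step t = sum over slots of (its share of the slot) * (how much the slot is read).
    relevance = slot_attribution @ read_usage
    top_steps = np.argsort(relevance)[::-1][:top_k]
    return top_steps, relevance

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, N, R = 6, 8, 2  # sequence length, memory slots, read heads (toy values)
    W = rng.random((T, N)); W /= W.sum(axis=1, keepdims=True)
    Rd = rng.random((R, N)); Rd /= Rd.sum(axis=1, keepdims=True)
    steps, scores = relevant_input_steps(W, Rd)
    print("most relevant input steps:", steps)

In this sketch, an input step scores highly when the memory slots it contributed to are heavily attended at prediction time, which is the intuition behind tracing a prediction back to specific steps of the input sequence.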
Publication details
2020, Proceedings of the 29th International Joint Conference on Artificial Intelligence, Pages -
Explainable inference on sequential data via memory-tracking (04b Conference paper in proceedings volume)
La Rosa Biagio, Capobianco Roberto, Nardi Daniele
Research group: Artificial Intelligence and Robotics