
ESWC2021 Context Transformer with Stacked Pointer Networks for Conversational Question Answering over Knowledge Graphs


We are delighted that the paper "Context Transformer with Stacked Pointer Networks for Conversational Question Answering over Knowledge Graphs" by Joan Plepi, Endri Kacupaj, Kuldeep Singh, Harsh Thakkar, and Jens Lehmann has been accepted at ESWC 2021. The paper builds mainly on the master's thesis research of our lab member Joan during his studies at the University of Bonn.

The paper tackles the problem of complex conversational question answering over large-scale knowledge graphs. The authors propose CARTON (Context trAnsformeR sTacked pOinter Networks), a multi-task learning framework consisting of a context transformer model extended with a stack of pointer networks for multi-task neural semantic parsing. The framework handles semantic parsing with the context transformer model, while the remaining tasks, such as type prediction, predicate prediction, and entity detection, are handled by the stacked pointer networks. CARTON's stacked pointer networks incorporate knowledge graph information to perform reasoning, rather than relying only on the conversational context. Moreover, the pointer networks provide the flexibility to handle out-of-vocabulary entities, predicates, and types that are unseen during training.
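To make the pointer-network idea concrete, here is a minimal PyTorch sketch of scoring a dynamic, KG-derived candidate set against a contextual state, so the prediction is an index into the candidate list rather than into a fixed output vocabulary. The class, dimensions, and additive-attention scorer below are illustrative assumptions, not CARTON's actual implementation; for that, see the paper and repository.

```python
import torch
import torch.nn as nn

class PointerNetwork(nn.Module):
    """Sketch of a pointer-network head: scores a dynamic candidate set
    (e.g. KG entities, predicates, or types) against a context state."""

    def __init__(self, hidden_dim: int, cand_dim: int):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, cand_dim, bias=False)  # map state into candidate space
        self.score = nn.Linear(cand_dim, 1, bias=False)          # additive-attention scorer

    def forward(self, state: torch.Tensor, candidates: torch.Tensor) -> torch.Tensor:
        # state:      (batch, hidden_dim)            transformer/context representation
        # candidates: (batch, num_cands, cand_dim)   embeddings fetched from the KG
        query = self.proj(state).unsqueeze(1)                # (batch, 1, cand_dim)
        logits = self.score(torch.tanh(candidates + query))  # broadcast add, then score
        return logits.squeeze(-1)                            # (batch, num_cands)

# Toy usage: the argmax is an index *into the candidate list*, so items
# unseen during training remain addressable at inference time.
net = PointerNetwork(hidden_dim=256, cand_dim=128)
state = torch.randn(2, 256)                      # two conversational contexts
candidates = torch.randn(2, 50, 128)             # 50 KG candidates per context
choice = net(state, candidates).argmax(dim=-1)   # shape (2,)
```

Because the output is a pointer into whatever candidate set the knowledge graph supplies at inference time, the mechanism naturally accommodates the out-of-vocabulary entities, predicates, and types mentioned above.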

[Figure: CARTON]

CARTON is evaluated on the Complex Sequential Question Answering (CSQA) dataset, which consists of conversations over linked QA pairs and contains 200K dialogues with 1.6M turns over 12.8M entities. CARTON achieves new state-of-the-art results on eight out of ten question types in this large-scale conversational question answering benchmark. Our implementation, the annotated dataset with the proposed grammar, and the results are published on the CARTON GitHub page.