WiNLP @EMNLP 2021 Automated Template Paraphrasing for Conversational Assistants


We are excited to present our paper “Automated Template Paraphrasing for Conversational Assistants” by Liane Vogel and Lucie Flek at Widening NLP (WiNLP) at EMNLP 2021.

In this paper, we explore the use of automatic paraphrasing models such as GPT-2 and CVAE to augment template phrases for task-oriented dialogue systems while preserving the slots. Additionally, we systematically analyze how far the amount of manually annotated training data can be reduced.
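The core idea of slot-preserving augmentation can be illustrated with a minimal sketch: slot placeholders are shielded from the paraphrasing model with inert tokens and restored afterwards. The placeholder scheme and the `paraphrase` stand-in below are hypothetical; in the paper, actual paraphrasing models such as GPT-2 or a CVAE would take its place.

```python
import re

def protect_slots(template):
    """Replace slot placeholders like {city} with inert tokens so a
    paraphrasing model does not alter them (illustrative scheme)."""
    mapping = {}
    for i, slot in enumerate(re.findall(r"\{(\w+)\}", template)):
        token = f"SLOT{i}"
        mapping[token] = "{" + slot + "}"
        template = template.replace("{" + slot + "}", token, 1)
    return template, mapping

def restore_slots(text, mapping):
    """Put the original slot placeholders back after paraphrasing."""
    for token, slot in mapping.items():
        text = text.replace(token, slot)
    return text

def paraphrase(text):
    # Stand-in for a real model such as GPT-2; a trivial rewrite
    # keeps the example runnable.
    return text.replace("I want to", "I'd like to")

template = "I want to fly to {city} on {date}"
protected, mapping = protect_slots(template)
augmented = restore_slots(paraphrase(protected), mapping)
print(augmented)  # I'd like to fly to {city} on {date}
```

The augmented template keeps its slots intact, so it can be filled with the same entity values as the original when generating NLU training data.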

We extrinsically evaluate the performance of a natural language understanding system trained on augmented data at various levels of data availability, reducing the number of manually written templates by up to 75 percent while preserving the same level of accuracy. We further show that typical NLG quality metrics, such as BLEU, utterance similarity, or utterance perplexity, are not suitable for assessing the intrinsic quality of NLU paraphrases, and that public task-oriented NLU datasets such as ATIS and SNIPS have severe limitations.