Capturing Speaker Incorrectness: Speaker-Focused Post-Correction for Abstractive Dialogue Summarization

Dongyub Lee1, Jungwoo Lim2, Taesun Whang3, Chanhee Lee2, Seungwoo Cho4, Mingun Park5, Heuiseok Lim2
1Kakao Corp, 2Korea University, 3Wisenut Inc., 4Kakao Enterprise, South Korea, 5Microsoft


Abstract

In this paper, we focus on improving the quality of summaries generated by neural abstractive dialogue summarization systems. Even though pre-trained language models produce well-constructed and promising results, it remains challenging to summarize conversations among multiple participants, since the summary should describe both the overall situation and the actions of each speaker. This paper proposes self-supervised strategies for speaker-focused post-correction in abstractive dialogue summarization. Specifically, our model first discriminates which type of speaker correction is required in a draft summary and then generates a revised summary according to the required type. Experimental results show that our proposed method adequately corrects the draft summaries, and the revised summaries are significantly improved in both quantitative and qualitative evaluations.