EASE: Extractive-Abstractive Summarization End-to-End using the Information Bottleneck Principle

Haoran Li1, Arash Einolghozati1, Srinivasan Iyer1, Bhargavi Paranjape2, Yashar Mehdad3, Sonal Gupta1, Marjan Ghazvininejad4
1Facebook, 2University of Washington, 3Facebook AI, 4Facebook AI Research


Abstract

Current abstractive summarization systems outperform their extractive counterparts, but their widespread adoption is inhibited by their inherent lack of interpretability. Extractive summarization systems, though interpretable, suffer from redundancy and a possible lack of coherence. To achieve the best of both worlds, we propose EASE, an extractive-abstractive framework that generates concise abstractive summaries that can be traced back to an extractive summary. Our framework can be applied to any evidence-based text generation problem and can accommodate various pretrained models within its simple architecture. We use the Information Bottleneck principle to train the extraction and abstraction steps jointly, end to end. Inspired by research showing that humans summarize long documents in two stages (Jing and McKeown, 2000), our framework first extracts a pre-defined number of evidence spans and then generates a summary using only that evidence. Using both automatic and human evaluations, we show that the generated summaries are better than those of strong extractive and extractive-abstractive baselines.
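The two-stage pipeline described above can be sketched as follows. This is a minimal toy illustration, not the authors' implementation: the hard top-k span selection, the Bernoulli sparsity prior `prior_pi`, and the trade-off weight `beta` are all assumptions standing in for the paper's trained extractor and its Information Bottleneck objective (task loss plus a compression penalty on the selection variables).

```python
import math

def extract_evidence(sentences, scores, k):
    """Toy extractor: keep the k highest-scoring spans (hard top-k selection).

    In EASE the scores would come from a trained extractor; here they are
    given. Document order is preserved so the abstractor sees coherent input.
    """
    ranked = sorted(range(len(sentences)), key=lambda i: -scores[i])
    keep = sorted(ranked[:k])
    return [sentences[i] for i in keep]

def ib_loss(nll, select_probs, prior_pi=0.3, beta=1.0):
    """Information Bottleneck-style objective (sketch).

    Combines the task loss (negative log-likelihood of the summary given
    only the evidence) with beta times the KL divergence between each
    span's Bernoulli selection probability and a sparse Bernoulli prior,
    which penalizes keeping too much of the source (compression term).
    """
    kl = 0.0
    for p in select_probs:
        p = min(max(p, 1e-6), 1.0 - 1e-6)  # clamp for numerical safety
        kl += p * math.log(p / prior_pi)
        kl += (1.0 - p) * math.log((1.0 - p) / (1.0 - prior_pi))
    return nll + beta * kl

# Usage: pick 2 of 3 spans, then score a hypothetical generation loss.
doc = ["Span A.", "Span B.", "Span C."]
evidence = extract_evidence(doc, scores=[0.1, 0.9, 0.5], k=2)
loss = ib_loss(nll=2.0, select_probs=[0.9, 0.5, 0.1])
```

In the full model the selection probabilities and the abstractor's NLL are produced by pretrained networks and optimized jointly; the sketch only shows how the compression penalty and the task loss combine into one objective.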