Distilling Opinions at Scale: Incremental Opinion Summarization using XL-OPSUMM
Saved in:
Main authors: | , , , , , , , , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Summary: | Opinion summarization in e-commerce encapsulates the collective views
of numerous users about a product based on their reviews. Typically, a product
on an e-commerce platform has thousands of reviews, each comprising around
10-15 words. While Large Language Models (LLMs) have shown proficiency in
summarization tasks, they struggle to handle such a large volume of reviews due
to context-length limitations. To mitigate this, we propose a scalable
framework called Xl-OpSumm that generates summaries incrementally. However, the
existing test set, AMASUM, has only 560 reviews per product on average. Due to
the lack of a test set with thousands of reviews, we created a new test set
called Xl-Flipkart by gathering data from the Flipkart website and generating
summaries using GPT-4. Through various automatic evaluations and extensive
analysis, we evaluated the framework's efficiency on two datasets, AMASUM and
Xl-Flipkart. Experimental results show that our framework, Xl-OpSumm powered by
Llama-3-8B-8k, achieves an average ROUGE-1 F1 gain of 4.38% and a ROUGE-L F1
gain of 3.70% over the next best-performing model. |
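The abstract does not detail Xl-OpSumm's exact procedure, but the incremental pattern it names is commonly implemented by folding fixed-size chunks of reviews into a running summary, so that no single model call exceeds the context window. A minimal sketch under that assumption, where `summarize` is a hypothetical stand-in for an LLM call:

```python
from typing import Callable, List

def incremental_summary(reviews: List[str],
                        summarize: Callable[[str, List[str]], str],
                        chunk_size: int = 50) -> str:
    """Generic incremental-summarization loop (a sketch, not the paper's
    exact algorithm): update a running summary one chunk at a time."""
    summary = ""  # running summary, refreshed once per chunk
    for start in range(0, len(reviews), chunk_size):
        chunk = reviews[start:start + chunk_size]
        # Each call sees only the current summary plus one chunk, keeping
        # the prompt length bounded regardless of the total review count.
        summary = summarize(summary, chunk)
    return summary

# Toy stand-in for the LLM: its "summary" just counts reviews folded in.
toy = lambda prev, chunk: str((int(prev) if prev else 0) + len(chunk))
print(incremental_summary([f"review {i}" for i in range(120)], toy))  # → 120
```

Because each call's input is the bounded summary plus one chunk, total cost grows linearly with the number of reviews, which is what makes the approach scale to products with thousands of reviews.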
---|---|
DOI: | 10.48550/arxiv.2406.10886 |