A Comparative Evaluation of Visual Summarization Techniques for Event Sequences

Real-world event sequences are often complex and heterogeneous, making it difficult to create meaningful visualizations using simple data aggregation and visual encoding techniques. Consequently, visualization researchers have developed numerous visual summarization techniques to generate concise overviews of sequential data. These techniques vary widely in terms of summary structures and contents, and currently there is a knowledge gap in understanding the effectiveness of these techniques. In this work, we present the design and results of an insight-based crowdsourcing experiment evaluating three existing visual summarization techniques: CoreFlow, SentenTree, and Sequence Synopsis. We compare the visual summaries generated by these techniques across three tasks, on six datasets, at six levels of granularity. We analyze the effects of these variables on summary quality as rated by participants and completion time of the experiment tasks. Our analysis shows that Sequence Synopsis produces the highest-quality visual summaries for all three tasks, but understanding Sequence Synopsis results also takes the longest time. We also find that the participants evaluate visual summary quality based on two aspects: content and interpretability. We discuss the implications of our findings on developing and evaluating new visual summarization techniques.

Authors: Zinat, Kazi Tasnim; Yang, Jinhua; Gandhi, Arjun; Mitra, Nistha; Liu, Zhicheng
Format: Article
Language: English
Online access: https://arxiv.org/abs/2306.02489
Date: 2023-06-04
DOI: 10.48550/arxiv.2306.02489
Source: arXiv.org
Subjects: Computer Science - Human-Computer Interaction