Automatic segmentation and summarization for videos taken with smart glasses
This paper discusses the topic of automatic segmentation and extraction of important segments of videos taken with Google Glasses. Using the information from both the video images and additional sensor data that are recorded concurrently, we devise methods that automatically divide the video into coherent segments and estimate the importance of each segment. Such information then enables automatic generation of a video summary that contains only the important segments. The features used include colors, image details, motions, and speeches. We then train multi-layer perceptrons for the two tasks (segmentation and importance estimation) according to human annotations. We also present a systematic evaluation procedure that compares the automatic segmentation and importance estimation results with those given by multiple users and demonstrate the effectiveness of our approach.
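The abstract outlines a two-task pipeline: per-frame features feed a multi-layer perceptron that finds segment boundaries, a second perceptron scores each segment's importance, and the summary keeps only the important segments. Below is a minimal illustrative sketch of such a pipeline, not the authors' implementation: the feature contents, network sizes, the scikit-learn models, and the duration-budget selection rule are assumptions made for illustration only.

```python
# Illustrative sketch only -- not the paper's code. Assumes per-frame feature
# vectors (e.g., color, image detail, motion, speech activity) and human
# annotations are already available as arrays.
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor

rng = np.random.default_rng(0)

# Placeholder data standing in for real features and annotations.
n_frames, n_feat = 1000, 16
frame_feats = rng.normal(size=(n_frames, n_feat))      # per-frame feature vectors
boundary_labels = rng.integers(0, 2, size=n_frames)    # 1 = annotated segment boundary

# Task 1: segmentation, treated here as per-frame boundary classification.
seg_mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
seg_mlp.fit(frame_feats, boundary_labels)
split_points = np.flatnonzero(seg_mlp.predict(frame_feats))
split_points = split_points[split_points > 0]           # a boundary at frame 0 adds no split

# Cut the video at predicted boundaries and pool frame features per segment.
segments = np.split(np.arange(n_frames), split_points)
seg_feats = np.vstack([frame_feats[idx].mean(axis=0) for idx in segments])

# Task 2: importance estimation per segment, as regression on annotated scores.
importance_labels = rng.uniform(size=len(segments))     # stand-in for human importance ratings
imp_mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
imp_mlp.fit(seg_feats, importance_labels)
scores = imp_mlp.predict(seg_feats)

# Summary assembly: keep the highest-scoring segments within a time budget
# (the budget rule is an assumption; the paper only states that the summary
# contains the important segments).
fps, budget_s = 30.0, 60.0
kept, used = [], 0.0
for i in np.argsort(scores)[::-1]:
    duration = len(segments[i]) / fps
    if used + duration <= budget_s:
        kept.append(int(i))
        used += duration
print("segments in summary:", sorted(kept))
```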
Saved in:
Published in: | Multimedia tools and applications 2018-05, Vol.77 (10), p.12679-12699 |
---|---|
Main authors: | Chiu, Yen-Chia; Liu, Li-Yi; Wang, Tsaipei |
Format: | Article |
Language: | English |
Subjects: | Annotations; Cameras; Computer Communication Networks; Computer Science; Data Structures and Information Theory; Image segmentation; Multilayers; Multimedia Information Systems; Sensors; Special Purpose and Application-Based Systems; Surveillance; User generated content |
Online access: | Full text |
container_end_page | 12699 |
---|---|
container_issue | 10 |
container_start_page | 12679 |
container_title | Multimedia tools and applications |
container_volume | 77 |
creator | Chiu, Yen-Chia Liu, Li-Yi Wang, Tsaipei |
description | This paper discusses the topic of automatic segmentation and extraction of important segments of videos taken with Google Glasses. Using the information from both the video images and additional sensor data that are recorded concurrently, we devise methods that automatically divide the video into coherent segments and estimate the importance of each segment. Such information then enables automatic generation of a video summary that contains only the important segments. The features used include colors, image details, motions, and speeches. We then train multi-layer perceptrons for the two tasks (segmentation and importance estimation) according to human annotations. We also present a systematic evaluation procedure that compares the automatic segmentation and importance estimation results with those given by multiple users and demonstrate the effectiveness of our approach. |
doi_str_mv | 10.1007/s11042-017-4910-8 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 1380-7501 |
ispartof | Multimedia tools and applications, 2018-05, Vol.77 (10), p.12679-12699 |
issn | 1380-7501 (print); 1573-7721 (electronic) |
language | eng |
recordid | cdi_proquest_journals_2044094322 |
source | SpringerNature Journals |
subjects | Annotations; Cameras; Computer Communication Networks; Computer Science; Data Structures and Information Theory; Image segmentation; Multilayers; Multimedia Information Systems; Sensors; Special Purpose and Application-Based Systems; Surveillance; User generated content |
title | Automatic segmentation and summarization for videos taken with smart glasses |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-02T17%3A55%3A25IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Automatic%20segmentation%20and%20summarization%20for%20videos%20taken%20with%20smart%20glasses&rft.jtitle=Multimedia%20tools%20and%20applications&rft.au=Chiu,%20Yen-Chia&rft.date=2018-05-01&rft.volume=77&rft.issue=10&rft.spage=12679&rft.epage=12699&rft.pages=12679-12699&rft.issn=1380-7501&rft.eissn=1573-7721&rft_id=info:doi/10.1007/s11042-017-4910-8&rft_dat=%3Cproquest_cross%3E2044094322%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2044094322&rft_id=info:pmid/&rfr_iscdi=true |