The ContextCam: Automated Point of Capture Video Annotation
Rich, structured annotations of video recordings enable interesting uses, but existing techniques for manual, and even semi-automated, tagging can be too time-consuming. We present in this paper the ContextCam, a prototype of a consumer video camera that provides point-of-capture annotation of time, location, person presence and event information associated with recorded video. Both low- and high-level metadata are discovered via a variety of sensing and active tagging techniques, as well as through the application of machine learning techniques that use past annotations to suggest metadata for the current recordings. Furthermore, the ContextCam provides users with a minimally intrusive interface for correcting predicted high-level metadata during video recording.
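The abstract describes suggesting high-level metadata (such as event labels) from past annotations using machine learning, and the subject terms list a Bayesian classifier. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' implementation: it trains a naive Bayes model over previously annotated clips whose context features (weekday, location, people present) are invented for the example, and ranks candidate event labels for a new recording.

```python
# Illustrative sketch only: the paper does not publish code. Field names and
# example data are hypothetical; this shows how past annotations could be used
# to suggest an event label for a new recording via naive Bayes.
from collections import Counter, defaultdict

def train(past_annotations):
    """past_annotations: list of (features_dict, event_label) pairs."""
    label_counts = Counter()
    feature_counts = defaultdict(Counter)  # (feature, value) -> Counter of labels
    for features, label in past_annotations:
        label_counts[label] += 1
        for key, value in features.items():
            feature_counts[(key, value)][label] += 1
    return label_counts, feature_counts

def suggest_event(features, label_counts, feature_counts, alpha=1.0):
    """Return event labels ranked by naive Bayes score (add-alpha smoothing)."""
    total = sum(label_counts.values())
    scores = {}
    for label, count in label_counts.items():
        score = count / total  # prior
        for key, value in features.items():
            num = feature_counts[(key, value)][label] + alpha
            den = count + alpha * len(label_counts)
            score *= num / den  # smoothed likelihood of this feature value
        scores[label] = score
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical usage: previously annotated clips and a new recording's context.
history = [
    ({"weekday": "Sat", "location": "home", "people": "family"}, "birthday party"),
    ({"weekday": "Sat", "location": "park", "people": "family"}, "picnic"),
    ({"weekday": "Sun", "location": "home", "people": "family"}, "birthday party"),
]
label_counts, feature_counts = train(history)
new_clip = {"weekday": "Sat", "location": "home", "people": "family"}
print(suggest_event(new_clip, label_counts, feature_counts))  # best suggestion first
```

As the abstract notes, such predicted labels would then be offered to the user through a minimally intrusive interface for confirmation or correction during recording.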
Saved in:
Published in: | Lecture notes in computer science, 2004, p.301-318 |
---|---|
Main authors: | Patel, Shwetak N.; Abowd, Gregory D. |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
container_end_page | 318 |
---|---|
container_issue | |
container_start_page | 301 |
container_title | Lecture notes in computer science |
container_volume | |
creator | Patel, Shwetak N.; Abowd, Gregory D. |
description | Rich, structured annotations of video recordings enable interesting uses, but existing techniques for manual, and even semi-automated, tagging can be too time-consuming. We present in this paper the ContextCam, a prototype of a consumer video camera that provides point-of-capture annotation of time, location, person presence and event information associated with recorded video. Both low- and high-level metadata are discovered via a variety of sensing and active tagging techniques, as well as through the application of machine learning techniques that use past annotations to suggest metadata for the current recordings. Furthermore, the ContextCam provides users with a minimally intrusive interface for correcting predicted high-level metadata during video recording. |
doi_str_mv | 10.1007/978-3-540-30119-6_18 |
format | Article |
contributor | Davies, Nigel; Mynatt, Elizabeth D.; Siio, Itiro |
publisher | Berlin, Heidelberg: Springer Berlin Heidelberg |
isbn | 9783540229551; 3540229558 |
eisbn | 9783540301196; 3540301194 |
rights | Springer-Verlag Berlin Heidelberg 2004; 2004 INIST-CNRS |
fulltext | fulltext |
identifier | ISSN: 0302-9743 |
ispartof | Lecture notes in computer science, 2004, p.301-318 |
issn | 0302-9743; 1611-3349 |
language | eng |
recordid | cdi_pascalfrancis_primary_16194851 |
source | Springer Books |
subjects | Applied sciences; Bayesian Classifier; Computer science, control theory, systems; Computer systems and distributed systems. User interface; Event Information; Exact sciences and technology; Manual Annotation; Software; Video Annotation; Video Scene |
title | The ContextCam: Automated Point of Capture Video Annotation |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-01T02%3A25%3A11IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-pascalfrancis_sprin&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=The%20ContextCam:%20Automated%20Point%20of%20Capture%20Video%20Annotation&rft.jtitle=Lecture%20notes%20in%20computer%20science&rft.au=Patel,%20Shwetak%20N.&rft.date=2004&rft.spage=301&rft.epage=318&rft.pages=301-318&rft.issn=0302-9743&rft.eissn=1611-3349&rft.isbn=9783540229551&rft.isbn_list=3540229558&rft_id=info:doi/10.1007/978-3-540-30119-6_18&rft_dat=%3Cpascalfrancis_sprin%3E16194851%3C/pascalfrancis_sprin%3E%3Curl%3E%3C/url%3E&rft.eisbn=3540301194&rft.eisbn_list=9783540301196&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |