Video key frame self-adaptive extraction method under emotion encourage

The invention relates to a video key frame self-adaptive extraction method under emotion stimulation. The method comprises the steps of: starting from the emotional fluctuation of the video viewer, computing the motion intensity of each video frame to serve as the visual emotion stimulation degree experienced by the viewer while watching the video, and computing short-time average energy and pitch as the auditory emotion stimulation degree; linearly fusing the visual and auditory emotion stimulation degrees to obtain the video emotion stimulation degree of each frame, and from these generating the video emotion stimulation degree curve of the scene; determining the number KN of key frames to be allocated to the scene according to the change of the scene's video emotion stimulation; and finally taking the video frames corresponding to the KN highest crests of the video emotion stimulation degree curve as the scene's key frames.
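Since this record carries only the abstract, the following Python sketch is a minimal illustration of the pipeline the abstract describes, not the patented method itself: motion intensity is approximated by the mean inter-frame difference, the auditory term uses short-time energy only (the abstract's pitch feature is omitted for brevity), the fusion weights alpha and beta are illustrative assumptions, and KN is passed in directly because the record does not give the patent's rule for deriving it from the scene's stimulation change.

```python
import numpy as np

def visual_stimulation(frames: np.ndarray) -> np.ndarray:
    """Per-frame motion intensity: mean absolute inter-frame difference."""
    diffs = np.abs(np.diff(frames.astype(np.float64), axis=0)).mean(axis=(1, 2))
    return np.concatenate([[0.0], diffs])  # first frame has no predecessor

def auditory_stimulation(audio: np.ndarray, samples_per_frame: int,
                         n_frames: int) -> np.ndarray:
    """Short-time average energy of the audio aligned to each video frame."""
    energy = np.zeros(n_frames)
    for i in range(n_frames):
        chunk = audio[i * samples_per_frame:(i + 1) * samples_per_frame]
        if chunk.size:
            energy[i] = np.mean(chunk.astype(np.float64) ** 2)
    return energy

def normalize(x: np.ndarray) -> np.ndarray:
    rng = x.max() - x.min()
    return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)

def key_frame_indices(frames, audio, samples_per_frame, kn,
                      alpha=0.6, beta=0.4):
    v = normalize(visual_stimulation(frames))
    a = normalize(auditory_stimulation(audio, samples_per_frame, len(frames)))
    curve = alpha * v + beta * a  # linear fusion -> stimulation degree curve
    # crests = local maxima of the fused curve
    crests = [i for i in range(1, len(curve) - 1)
              if curve[i] >= curve[i - 1] and curve[i] >= curve[i + 1]]
    crests.sort(key=lambda i: curve[i], reverse=True)
    return sorted(crests[:kn])  # frames under the KN highest crests

# Toy usage on synthetic data: 120 grayscale frames, 1600 audio samples/frame.
rng = np.random.default_rng(0)
frames = rng.integers(0, 256, size=(120, 36, 64))
audio = rng.standard_normal(120 * 1600)
print(key_frame_indices(frames, audio, 1600, kn=5))
```

On the synthetic data above, the function simply returns the indices of the five frames sitting under the highest crests of the fused stimulation curve.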

Bibliographic Details
Main Authors: YU CHUNYAN, CHEN ZHAOJIONG, WENG ZILIN, SU CHENHAN, YE DONGYI
Format: Patent
Language: Chinese; English
Subjects: CALCULATING; COMPUTING; COUNTING; ELECTRIC DIGITAL DATA PROCESSING; IMAGE DATA PROCESSING OR GENERATION, IN GENERAL; PHYSICS
Online Access: Order full text
Record ID: cdi_epo_espacenet_CN104008175A
Source: esp@cenet
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-25T19%3A07%3A00IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=YU%20CHUNYAN&rft.date=2014-08-27&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3ECN104008175A%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true