Analysis method for recording personal daily emotion and related equipment

The invention provides an analysis method for recording personal daily emotion, together with related equipment, and relates to the technical field of data processing. The method comprises the steps of obtaining voice data of a target area in a preset time period; processing the voice data according to a preset rule to generate voice data of the target user; preprocessing the voice data of the target user to generate non-silent voice data; performing pre-emphasis processing on the non-silent voice data based on a preset rule to generate voice data with an emphasized high-frequency component; slicing the pre-emphasized voice data to generate a plurality of voice segments; performing emotion analysis on each voice segment to generate an emotion analysis result; and generating an emotion analysis report based on the emotion analysis results. By acquiring and processing the user's voice data within the preset time period and performing emotion analysis on the processed voice data, the emotion result of …
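
The abstract sketches a generic speech pipeline: isolate the target speaker's audio, drop silence, apply pre-emphasis, cut the signal into segments, classify each segment, and summarize the results. The record shown here does not disclose concrete algorithms or parameters, so the following is only a minimal illustrative sketch of such a pipeline, assuming the recording is already a mono NumPy array of samples for the target user (speaker separation is omitted); the frame length, energy threshold, pre-emphasis coefficient, segment duration, and the placeholder classify_emotion heuristic are assumptions for illustration, not the patented method.

    import numpy as np
    from collections import Counter

    def pre_emphasis(x, alpha=0.97):
        """First-order pre-emphasis y[n] = x[n] - alpha * x[n-1]; boosts high frequencies."""
        return np.append(x[0], x[1:] - alpha * x[:-1])

    def remove_silence(x, sr, frame_ms=25, rms_threshold=0.01):
        """Drop frames whose RMS energy falls below a fixed threshold (crude silence removal)."""
        frame_len = int(sr * frame_ms / 1000)
        frames = [x[i:i + frame_len] for i in range(0, len(x), frame_len)]
        voiced = [f for f in frames if np.sqrt(np.mean(f ** 2)) > rms_threshold]
        return np.concatenate(voiced) if voiced else np.array([])

    def slice_segments(x, sr, seg_seconds=3.0):
        """Cut the signal into fixed-length segments, discarding the short tail."""
        seg_len = int(sr * seg_seconds)
        return [x[i:i + seg_len] for i in range(0, len(x) - seg_len + 1, seg_len)]

    def classify_emotion(segment):
        """Placeholder classifier; a real system would use a trained acoustic emotion model."""
        energy = np.sqrt(np.mean(segment ** 2))
        return "aroused" if energy > 0.05 else "calm"

    def emotion_report(audio, sr):
        """Silence removal -> pre-emphasis -> segmentation -> per-segment labels -> tally."""
        voiced = remove_silence(audio, sr)
        if voiced.size == 0:
            return Counter()
        emphasized = pre_emphasis(voiced)
        labels = [classify_emotion(seg) for seg in slice_segments(emphasized, sr)]
        return Counter(labels)

    if __name__ == "__main__":
        sr = 16000
        t = np.linspace(0, 10, 10 * sr, endpoint=False)
        demo = 0.2 * np.sin(2 * np.pi * 220 * t)  # stand-in for a recorded voice signal
        print(emotion_report(demo, sr))           # e.g. Counter({'calm': 3})

The pre-emphasis step here is the standard first-order filter with alpha around 0.95 to 0.97, which is the usual way to obtain the emphasized high-frequency component the abstract refers to; the per-segment labels tallied by emotion_report stand in for the abstract's per-segment emotion analysis results that feed the final report.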

Bibliographic details
Main authors: TAN SHUPING, HUANG JIE, QU WEI, TIAN ZHANXIAO
Format: Patent
Language: Chinese; English
Subjects: ACOUSTICS; MUSICAL INSTRUMENTS; PHYSICS; SPEECH ANALYSIS OR SYNTHESIS; SPEECH OR AUDIO CODING OR DECODING; SPEECH OR VOICE PROCESSING; SPEECH RECOGNITION
Online access: Order full text
Published: 2024-06-04
Record ID: cdi_epo_espacenet_CN118136057A
Source: esp@cenet
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-01T07%3A39%3A38IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=TAN%20SHUPING&rft.date=2024-06-04&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3ECN118136057A%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true