Speech Activity Detection Using Dual Sensory Based Learning
A dual sensory input speech detection method includes receiving, at a first time, a first video image input of a conference participant of the video conference and a first audio input of the conference participant; communicating the first video image input to the video conference; identifying the first video image input as a first facial image of the conference participant; determining, based on the first facial image, the first video image input indicates the conference participant is in a speaking state; identifying the first audio input as a first speech sound; determining, while in the speaking state, the first speech sound originates from the conference participant; and communicating the first audio input to an audio output for the video conference.
1. Author: | Sircar, Shiladitya |
---|---|
Format: | Patent |
Language: | eng |
Subjects: | ACOUSTICS; CALCULATING; COMPUTING; COUNTING; ELECTRIC COMMUNICATION TECHNIQUE; ELECTRICITY; MUSICAL INSTRUMENTS; PHYSICS; PICTORIAL COMMUNICATION, e.g. TELEVISION; SPEECH ANALYSIS OR SYNTHESIS; SPEECH OR AUDIO CODING OR DECODING; SPEECH OR VOICE PROCESSING; SPEECH RECOGNITION |
Online Access: | Order full text |
creator | Sircar, Shiladitya |
description | A dual sensory input speech detection method includes receiving, at a first time, a first video image input of a conference participant of the video conference and a first audio input of the conference participant; communicating the first video image input to the video conference; identifying the first video image input as a first facial image of the conference participant; determining, based on the first facial image, the first video image input indicates the conference participant is in a speaking state; identifying the first audio input as a first speech sound; determining, while in the speaking state, the first speech sound originates from the conference participant; and communicating the first audio input to an audio output for the video conference. |
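The abstract describes a dual-sensory gate: a video-derived speaking state and an audio-side speech check must both agree before a participant's audio is forwarded to the conference. The sketch below is a minimal, hypothetical illustration of that flow, not the patented implementation; the class name `DualSensorySpeechGate`, the injected `video_speaking_detector` callable, and the energy-threshold audio check are all assumptions standing in for the learned dual-sensory models the title refers to.

```python
# Hypothetical sketch of the dual-sensory gating idea from the abstract:
# audio is forwarded only when the facial image indicates a speaking state
# AND the audio chunk itself looks like speech. The detectors are simple
# stand-ins (an injected face-based classifier and an energy threshold).

import numpy as np
from typing import Callable, Optional


class DualSensorySpeechGate:
    def __init__(
        self,
        video_speaking_detector: Callable[[np.ndarray], bool],
        energy_threshold: float = 1e-3,
    ) -> None:
        # video_speaking_detector: any callable mapping a video frame
        # (e.g. an H x W x 3 array) to True when the participant's facial
        # image indicates a speaking state. In practice this would be a
        # learned lip-motion / face model; here it is injected.
        self.video_speaking_detector = video_speaking_detector
        self.energy_threshold = energy_threshold
        self.speaking_state = False

    def _is_speech(self, audio_chunk: np.ndarray) -> bool:
        # Crude audio-side check: mean energy of the chunk. A real system
        # would use a trained voice-activity or speaker model instead.
        energy = float(np.mean(audio_chunk.astype(np.float64) ** 2))
        return energy > self.energy_threshold

    def process(
        self, video_frame: np.ndarray, audio_chunk: np.ndarray
    ) -> Optional[np.ndarray]:
        # 1. Update the speaking state from the facial image.
        self.speaking_state = self.video_speaking_detector(video_frame)
        # 2. Forward the audio only if the participant appears to be
        #    speaking on video AND the audio chunk looks like speech.
        if self.speaking_state and self._is_speech(audio_chunk):
            return audio_chunk
        return None


# Minimal usage example with a dummy video detector and synthetic audio.
if __name__ == "__main__":
    gate = DualSensorySpeechGate(video_speaking_detector=lambda frame: True)
    frame = np.zeros((240, 320, 3), dtype=np.uint8)
    audio = 0.1 * np.random.randn(16000)  # one second at 16 kHz
    forwarded = gate.process(frame, audio)
    print("audio forwarded" if forwarded is not None else "audio muted")
```

The point of keying the audio path on the video-derived speaking state is that speech sounds are forwarded only when they can plausibly be attributed to the on-camera participant, which is the attribution step the abstract describes ("determining, while in the speaking state, the first speech sound originates from the conference participant"). |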
format | Patent |
date | 2023-01-19 |
oa | free_for_read |
fulltext | fulltext_linktorsrc |
identifier | US2023017401A1 |
language | eng |
recordid | cdi_epo_espacenet_US2023017401A1 |
source | esp@cenet |
subjects | ACOUSTICS; CALCULATING; COMPUTING; COUNTING; ELECTRIC COMMUNICATION TECHNIQUE; ELECTRICITY; MUSICAL INSTRUMENTS; PHYSICS; PICTORIAL COMMUNICATION, e.g. TELEVISION; SPEECH ANALYSIS OR SYNTHESIS; SPEECH OR AUDIO CODING OR DECODING; SPEECH OR VOICE PROCESSING; SPEECH RECOGNITION |
title | Speech Activity Detection Using Dual Sensory Based Learning |
url | https://worldwide.espacenet.com/publicationDetails/biblio?FT=D&date=20230119&DB=EPODOC&CC=US&NR=2023017401A1 |