System and tools for enhanced 3D audio authoring and rendering

The invention is directed to a method, which comprises receiving audio reproduction data comprising one or more audio objects and metadata associated with each of the one or more audio objects; receiving reproduction environment data comprising an indication of a number of reproduction speakers in the reproduction environment and an indication of the location of each reproduction speaker within the reproduction environment; and rendering the audio objects into one or more speaker feed signals by applying an amplitude panning process to each audio object, wherein the amplitude panning process is based, at least in part, on the metadata associated with each audio object and the location of each reproduction speaker within the reproduction environment, and wherein each speaker feed signal corresponds to at least one of the reproduction speakers within the reproduction environment. The metadata associated with each audio object includes audio object coordinates indicating the intended reproduction position of the audio object within the reproduction environment and zone constraint metadata indicating whether rendering the audio object involves imposing speaker zone constraints.
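The general idea of the abstract — per-object gains derived from the object's intended position, the speaker locations, and zone-constraint metadata — can be sketched as follows. This is a minimal illustrative sketch only, not the patented algorithm: the distance-based gain law, the 2D coordinates, and the `zone`/`allowed_zones` names are all assumptions introduced here for clarity.

```python
import math

def amplitude_pan(obj_pos, speakers, allowed_zones=None):
    """Distance-based amplitude panning sketch (illustrative, not the patented method).

    obj_pos: (x, y) intended reproduction position of the audio object.
    speakers: list of dicts, each with a 'pos' (x, y) and a 'zone' label.
    allowed_zones: optional set of zone names; speakers outside these zones
        receive zero gain (a stand-in for the zone-constraint metadata).
    Returns one gain per speaker; squared gains sum to 1 (power preserving).
    """
    gains = []
    for spk in speakers:
        if allowed_zones is not None and spk["zone"] not in allowed_zones:
            gains.append(0.0)  # speaker excluded by the zone constraint
            continue
        dx = spk["pos"][0] - obj_pos[0]
        dy = spk["pos"][1] - obj_pos[1]
        dist = math.hypot(dx, dy)
        gains.append(1.0 / (1.0 + dist))  # closer speakers get more gain
    norm = math.sqrt(sum(g * g for g in gains)) or 1.0
    return [g / norm for g in gains]

# Hypothetical 4-speaker layout: two front, two surround.
speakers = [
    {"pos": (-1.0, 1.0), "zone": "front"},
    {"pos": (1.0, 1.0), "zone": "front"},
    {"pos": (-1.0, -1.0), "zone": "surround"},
    {"pos": (1.0, -1.0), "zone": "surround"},
]
# An object front-right, with zone metadata restricting it to the front pair:
gains = amplitude_pan((0.5, 0.8), speakers, allowed_zones={"front"})
```

With the zone constraint applied, both surround speakers receive zero gain and the object is panned only between the front pair, weighted toward the nearer (front-right) speaker.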

Bibliographic details

Main authors: Tsingos, Nicolas R; Scharpf, Jurgen W; Robinson, Charles Q
Format: Patent
Language: English
Patent number: AU 2021200437 B2
Published: 2022-03-10
Record ID: cdi_epo_espacenet_AU2021200437BB2
Source: esp@cenet
Subjects: ELECTRIC COMMUNICATION TECHNIQUE; ELECTRICITY; STEREOPHONIC SYSTEMS
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-28T22%3A06%3A22IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=Tsingos,%20Nicolas%20R&rft.date=2022-03-10&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3EAU2021200437BB2%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true