LATENT-SPACE REPRESENTATIONS OF AUDIO SIGNALS FOR CONTENT-BASED RETRIEVAL

A method and system are provided for extracting features from digital audio signals which exhibit variations in pitch, timbre, decay, reverberation, and other psychoacoustic attributes and learning, from the extracted features, an artificial neural network model for generating contextual latent-space representations of digital audio signals. A method and system are also provided for learning an artificial neural network model for generating consistent latent-space representations of digital audio signals, in which the generated latent-space representations are comparable for the purposes of determining psychoacoustic similarity between digital audio signals. A method and system are also provided for extracting features from digital audio signals and learning, from the extracted features, an artificial neural network model for generating latent-space representations that select the salient attributes of the signals representing psychoacoustic differences between them.
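
To make the abstract concrete, the following is a minimal, hypothetical sketch of the general pipeline it describes: feature extraction from an audio signal, a neural encoder that maps the features to a fixed-length latent vector, and similarity ranking in the latent space for content-based retrieval. This is not the patented method. It assumes librosa and PyTorch are available; the AudioEncoder architecture, the function names, and all hyperparameters are illustrative assumptions, and the encoder here is untrained.

```python
# Illustrative sketch only: log-mel features -> latent vector -> cosine-similarity retrieval.
# All names, layer sizes, and hyperparameters are hypothetical choices, not the patented design.
import librosa
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F


def extract_features(path: str, sr: int = 22050, n_mels: int = 64) -> torch.Tensor:
    """Load an audio file and return a log-mel spectrogram as a (1, n_mels, frames) tensor."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    log_mel = librosa.power_to_db(mel, ref=np.max)
    return torch.from_numpy(log_mel).float().unsqueeze(0)  # add a channel dimension


class AudioEncoder(nn.Module):
    """Toy convolutional encoder mapping a spectrogram to a fixed-length latent vector."""

    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),  # pools away the variable time dimension
        )
        self.proj = nn.Linear(32 * 4 * 4, latent_dim)

    def forward(self, spec: torch.Tensor) -> torch.Tensor:
        h = self.conv(spec.unsqueeze(0))        # (1, 32, 4, 4)
        z = self.proj(h.flatten(start_dim=1))   # (1, latent_dim)
        return F.normalize(z, dim=-1)           # unit-norm latents are directly comparable


def rank_by_similarity(query_path: str, catalog_paths: list) -> list:
    """Embed a query and a catalog of audio files, then rank the catalog by cosine similarity."""
    encoder = AudioEncoder().eval()
    with torch.no_grad():
        q = encoder(extract_features(query_path))
        scores = []
        for p in catalog_paths:
            z = encoder(extract_features(p))
            scores.append((p, F.cosine_similarity(q, z).item()))
    return sorted(scores, key=lambda s: s[1], reverse=True)
```

In practice the encoder would be trained, for example with a contrastive or triplet objective, so that latent vectors of psychoacoustically similar signals land close together; that learned property is what the abstract refers to as consistent, comparable latent-space representations.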


Saved in:
Bibliographic Details
Main Authors: RAJASHEKHARAPPA, Naveen Sasalu, KORETZKY, Alejandro
Format: Patent
Language: eng ; fre ; ger
Subjects:
Online Access: Order full text
creator RAJASHEKHARAPPA, Naveen Sasalu
KORETZKY, Alejandro
format Patent
fulltext fulltext_linktorsrc
language eng ; fre ; ger
recordid cdi_epo_espacenet_EP4189670A1
source esp@cenet
subjects ACOUSTICS
CALCULATING
COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
COMPUTING
COUNTING
ELECTROPHONIC MUSICAL INSTRUMENTS
MUSICAL INSTRUMENTS
PHYSICS
title LATENT-SPACE REPRESENTATIONS OF AUDIO SIGNALS FOR CONTENT-BASED RETRIEVAL
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-15T03%3A59%3A42IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=RAJASHEKHARAPPA,%20Naveen%20Sasalu&rft.date=2023-06-07&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3EEP4189670A1%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true