Deep learning-based accompaniment extraction method and system, storage medium and equipment

The invention discloses an accompaniment extraction method and system based on deep learning, together with a storage medium and equipment, and belongs to the technical field of short-distance wireless communication. The method comprises the steps of: framing a song PCM signal at the wireless transmitting end and windowing each frame to obtain a windowed signal; applying a time-frequency transform to the windowed signal with the modified discrete cosine transform (MDCT) to obtain MDCT spectral coefficients; performing feature extraction on the windowed signal to obtain its corresponding MDFT amplitude spectrum; feeding the MDFT amplitude spectrum into a pre-trained neural network model to obtain a floating-value mask; point-multiplying the MDCT spectral coefficients by the floating-value mask to obtain the spectral coefficients of the accompaniment signal; and, from the spectral coefficients of the accompaniment signal, contin…
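The abstract describes a complete analysis-masking-synthesis chain. As a rough illustration only, the Python sketch below wires those steps together with NumPy: framing and windowing, MDCT analysis, an amplitude feature, a model-predicted mask, point multiplication, and inverse MDCT with overlap-add. Everything beyond the abstract is an assumption: the patent does not specify its window, frame length N, MDFT definition, or network, so a sine (Princen-Bradley) window is used, the MDFT amplitude spectrum is approximated by an MCLT-style magnitude (MDCT plus MDST), and model is a placeholder for the pre-trained network.

    # Hypothetical sketch (assumed names and parameters), not the patent's implementation.
    import numpy as np

    def mdct_basis(N, trig):
        """Cosine/sine basis of the MDCT/MDST for frame length 2N."""
        n = np.arange(2 * N)[None, :]
        k = np.arange(N)[:, None]
        return trig(np.pi / N * (n + 0.5 + N / 2) * (k + 0.5))

    def frame_and_window(x, N):
        """50%-overlapped frames of length 2N with a sine (Princen-Bradley) window."""
        win = np.sin(np.pi * (np.arange(2 * N) + 0.5) / (2 * N))
        frames = [win * x[h:h + 2 * N] for h in range(0, len(x) - 2 * N + 1, N)]
        return np.stack(frames), win

    def extract_accompaniment(pcm, model, N=512):
        """Mask MDCT coefficients with a model-predicted floating-value mask."""
        cos_b = mdct_basis(N, np.cos)
        sin_b = mdct_basis(N, np.sin)
        frames, win = frame_and_window(pcm, N)
        out = np.zeros(len(pcm))
        for i, frame in enumerate(frames):
            c = cos_b @ frame                        # MDCT spectral coefficients
            s = sin_b @ frame                        # MDST part, used only for the magnitude
            mag = np.sqrt(c ** 2 + s ** 2)           # stand-in for the MDFT amplitude spectrum
            mask = model(mag)                        # floating-value mask, expected in [0, 1]
            y = (2.0 / N) * (cos_b.T @ (mask * c))   # point-multiply, then inverse MDCT
            out[i * N:i * N + 2 * N] += win * y      # synthesis window + overlap-add
        return out

    # Smoke test with an identity mask: overlap-add should reproduce the input
    # (up to the first/last half frame, where no overlap partner exists).
    pcm = np.random.randn(8 * 512)
    rec = extract_accompaniment(pcm, model=np.ones_like)
    print(np.max(np.abs(rec[512:-512] - pcm[512:-512])))  # ~1e-12

The identity mask makes the MDCT's time-domain alias cancellation visible: the windowed overlap-add reconstructs the signal essentially exactly. A trained network would instead output values near 0 for vocal-dominated coefficients and near 1 for accompaniment-dominated ones before the inverse transform.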

Bibliographic details
Main authors: WANG LINGZHI, ZHU YONG, LI QIANG, YE DONGXIANG
Format: Patent
Language: Chinese; English
Online access: order full text
Record ID: cdi_epo_espacenet_CN118155592A
Source: esp@cenet
Subjects: ACOUSTICS
CALCULATING
COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
COMPUTING
COUNTING
ELECTROPHONIC MUSICAL INSTRUMENTS
MUSICAL INSTRUMENTS
PHYSICS
SPEECH ANALYSIS OR SYNTHESIS
SPEECH OR AUDIO CODING OR DECODING
SPEECH OR VOICE PROCESSING
SPEECH RECOGNITION
URL: https://worldwide.espacenet.com/publicationDetails/biblio?FT=D&date=20240607&DB=EPODOC&CC=CN&NR=118155592A