Neural Speech Coding for Real-Time Communications Using Constant Bitrate Scalar Quantization
Neural audio coding has emerged as a vivid research direction by promising good audio quality at very low bitrates unachievable by classical coding techniques. Here, end-to-end trainable autoencoder-like models represent the state of the art, where a discrete representation in the bottleneck of the autoencoder is learned. This allows for efficient transmission of the input audio signal. The learned discrete representation of neural codecs is typically generated by applying a quantizer to the output of the neural encoder. In almost all state-of-the-art neural audio coding approaches, this quantizer is realized as a Vector Quantizer (VQ), and a lot of effort has been spent to alleviate drawbacks of this quantization technique when used together with a neural audio coder. In this paper, we propose and analyze simple alternatives to VQ, which are based on projected Scalar Quantization (SQ). These quantization techniques do not need any additional losses, scheduling parameters, or codebook storage, thereby simplifying the training of neural audio codecs. For real-time speech communication applications, these neural codecs are required to operate at low complexity, low latency, and at low bitrates. We address those challenges by proposing a new causal network architecture that is based on SQ and a Short-Time Fourier Transform (STFT) representation. The proposed method performs particularly well in the very low complexity and low bitrate regime.
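The core idea summarized in the abstract, replacing a vector quantizer with a projected scalar quantizer that needs no codebook, auxiliary losses, or scheduling parameters, can be illustrated with a short sketch. The snippet below is not the authors' implementation; it assumes a PyTorch setting, hypothetical dimensions, and a straight-through gradient estimator, which is one common way to train through a rounding operation.

```python
import torch
import torch.nn as nn

class ProjectedScalarQuantizer(nn.Module):
    """Illustrative sketch: project the encoder output to a low-dimensional
    space, round each coordinate to a uniform grid (scalar quantization),
    and pass gradients through the rounding with a straight-through estimator.
    No codebook, commitment loss, or codebook-update schedule is needed."""

    def __init__(self, enc_dim=256, latent_dim=32, levels=16):
        super().__init__()
        self.proj_down = nn.Linear(enc_dim, latent_dim)  # projection before SQ
        self.proj_up = nn.Linear(latent_dim, enc_dim)    # projection after SQ
        self.levels = levels                             # grid points per dimension

    def forward(self, z):
        # Bound each coordinate to [-1, 1] so the quantization grid is fixed.
        y = torch.tanh(self.proj_down(z))
        # Uniform scalar quantization to `levels` grid points per dimension.
        step = 2.0 / (self.levels - 1)
        y_q = torch.round(y / step) * step
        # Straight-through estimator: forward uses y_q, backward sees identity.
        y_q = y + (y_q - y).detach()
        return self.proj_up(y_q)

# Each frame is coded with latent_dim * log2(levels) bits (here 32 * 4 = 128),
# i.e. a constant bitrate per frame.
quantizer = ProjectedScalarQuantizer(enc_dim=256, latent_dim=32, levels=16)
frames = torch.randn(8, 256)        # dummy encoder output, batch of 8 frames
decoded_input = quantizer(frames)   # differentiable, no extra losses required
```

Because every latent dimension is quantized independently to a fixed grid, the bit allocation is known in advance, which matches the constant-bitrate requirement named in the title.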
Published in: | IEEE journal of selected topics in signal processing, 2024-11, p.1-15 |
---|---|
Main Authors: | Brendel, Andreas; Pia, Nicola; Gupta, Kishan; Behringer, Lyonel; Fuchs, Guillaume; Multrus, Markus |
Format: | Article |
Language: | English |
Subjects: | Audio coding; Bit rate; Codecs; Complexity theory; Discrete representation learning; low complexity; neural speech coding; quantization; Quantization (signal); real-time; Real-time systems; Representation learning; Speech coding; Training; Vectors |
Online Access: | Order full text |
container_end_page | 15 |
---|---|
container_issue | |
container_start_page | 1 |
container_title | IEEE journal of selected topics in signal processing |
container_volume | |
creator | Brendel, Andreas; Pia, Nicola; Gupta, Kishan; Behringer, Lyonel; Fuchs, Guillaume; Multrus, Markus |
doi | 10.1109/JSTSP.2024.3491575 |
format | Article |
identifier | ISSN: 1932-4553 |
ispartof | IEEE journal of selected topics in signal processing, 2024-11, p.1-15 |
issn | 1932-4553; 1941-0484 |
language | eng |
recordid | cdi_ieee_primary_10742547 |
source | IEEE Electronic Library (IEL) |
subjects | Audio coding; Bit rate; Codecs; Complexity theory; Discrete representation learning; low complexity; neural speech coding; quantization; Quantization (signal); real-time; Real-time systems; Representation learning; Speech coding; Training; Vectors |
title | Neural Speech Coding for Real-Time Communications Using Constant Bitrate Scalar Quantization |