Rateless Lossy Compression via the Extremes
We begin by presenting a simple lossy compressor operating at near-zero rate: the encoder merely describes the indices of the few maximal source components, while the decoder's reconstruction is a natural estimate of the source components based on this information. This scheme turns out to be near optimal for the memoryless Gaussian source in the sense of achieving the zero-rate slope of its distortion-rate function. Motivated by this finding, we then propose a scheme that iterates the above lossy compressor on an appropriately transformed version of the difference between the source and its reconstruction from the previous iteration. The proposed scheme achieves the rate-distortion function of the memoryless Gaussian source (under squared-error distortion) when employed on any finite-variance ergodic source. It further possesses desirable properties that we refer to, respectively, as infinitesimal successive refinability, ratelessness, and complete separability. Its storage and computation requirements are of order no more than n^2/(log^β n) per source symbol, for β > 0, at both the encoder and the decoder. Though the details of its derivation, construction, and analysis differ considerably, we discuss similarities between the proposed scheme and the recently introduced Sparse Regression Codes of Venkataramanan et al.
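The near-zero-rate compressor described above can be sketched in a few lines. This is an illustrative toy, not the paper's exact construction: the block length `n`, the number of described extremes `k`, and the decoder's magnitude estimate (≈ √(2 ln n), the typical size of Gaussian maxima) are all assumptions made here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 100_000, 8           # block length and number of extremes sent (assumed values)
x = rng.standard_normal(n)  # memoryless Gaussian source

# Encoder: describe only the indices of the k maximal components,
# costing roughly k * log2(n) bits -- near-zero rate per symbol.
idx = np.argpartition(x, -k)[-k:]

# Decoder: a natural estimate -- place roughly the expected magnitude of
# Gaussian extremes (~ sqrt(2 ln n)) at the described indices and the prior
# mean (0) everywhere else. The paper's exact estimator differs; this is a
# crude stand-in.
xhat = np.zeros(n)
xhat[idx] = np.sqrt(2 * np.log(n))

d0 = np.mean(x**2)           # distortion of the all-zeros (zero-rate) reconstruction
d = np.mean((x - xhat)**2)   # distortion after describing the extremes
print(d < d0)                # describing a few extremes already reduces squared error
```

Because each described component contributes on the order of 2 ln n to the squared error when left at zero, replacing it with even a coarse estimate buys a distortion reduction far out of proportion to the ~k log2(n) bits spent, which is the intuition behind the zero-rate-slope optimality claimed in the abstract.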
Published in: | IEEE Transactions on Information Theory, 2016-10, Vol. 62 (10), p. 5484-5495 |
---|---|
Main Authors: | No, Albert; Weissman, Tsachy |
Format: | Article |
Language: | English |
Subjects: | Data compression; Source coding; Rate-distortion; Information theory |
Online Access: | Order full text |
container_end_page | 5495 |
---|---|
container_issue | 10 |
container_start_page | 5484 |
container_title | IEEE transactions on information theory |
container_volume | 62 |
creator | No, Albert; Weissman, Tsachy |
description | We begin by presenting a simple lossy compressor operating at near-zero rate: the encoder merely describes the indices of the few maximal source components, while the decoder's reconstruction is a natural estimate of the source components based on this information. This scheme turns out to be near optimal for the memoryless Gaussian source in the sense of achieving the zero-rate slope of its distortion-rate function. Motivated by this finding, we then propose a scheme that iterates the above lossy compressor on an appropriately transformed version of the difference between the source and its reconstruction from the previous iteration. The proposed scheme achieves the rate-distortion function of the memoryless Gaussian source (under squared-error distortion) when employed on any finite-variance ergodic source. It further possesses desirable properties that we refer to, respectively, as infinitesimal successive refinability, ratelessness, and complete separability. Its storage and computation requirements are of order no more than n^2/(log^β n) per source symbol, for β > 0, at both the encoder and the decoder. Though the details of its derivation, construction, and analysis differ considerably, we discuss similarities between the proposed scheme and the recently introduced Sparse Regression Codes of Venkataramanan et al. |
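The iterated scheme in the description — re-running the extremes compressor on a transformed residual — can be sketched as follows. All parameters are toy assumptions; the magnitude estimator (scaled to the residual's empirical spread) and the naive QR-based draw of a uniform random orthogonal matrix are illustrative stand-ins for the paper's construction, and in an actual codec the orthogonal matrices would be shared randomness known to both encoder and decoder.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, rounds = 512, 2, 20  # toy parameters, not from the paper

def extremes_round(r, k):
    # One pass of the near-zero-rate compressor on residual r: describe the
    # k maximal indices; reconstruct them with a crude magnitude estimate
    # scaled by the residual's empirical standard deviation (a heuristic
    # stand-in for the paper's estimator).
    idx = np.argpartition(r, -k)[-k:]
    est = np.zeros_like(r)
    est[idx] = np.sqrt(2 * np.log(len(r))) * r.std()
    return est

x = rng.standard_normal(n)  # any finite-variance source would do
res = x.copy()
for _ in range(rounds):
    # A uniform random orthogonal transform "re-spherizes" the residual
    # between rounds; drawing it via QR of a Gaussian matrix is costly
    # (O(n^3)) and purely illustrative.
    q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    res = q @ res
    res -= extremes_round(res, k)

# Residual energy (distortion) shrinks as the number of rounds -- and hence
# the rate -- grows, which is the rateless behaviour the description claims.
print(np.mean(res**2) < np.mean(x**2))
```

Each round spends only ~k log2(n) additional bits, so the encoder can stop after any round and still have a valid description at that rate — the "infinitesimal successive refinability" named in the description.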
doi_str_mv | 10.1109/TIT.2016.2598148 |
format | Article |
publisher | IEEE (United States) |
pmid | 29375154 |
eissn | 1557-9654 |
coden | IETTAW |
orcidid | https://orcid.org/0000-0002-6346-4182 |
identifier | ISSN: 0018-9448 |
ispartof | IEEE transactions on information theory, 2016-10, Vol.62 (10), p.5484-5495 |
issn | 0018-9448; 1557-9654 |
language | eng |
recordid | cdi_ieee_primary_7542580 |
source | IEEE Electronic Library (IEL) |
subjects | Complete separability; Data compression; Decoding; Distortion; extreme value theory; infinitesimal successive refinability; Information theory; Mathematical functions; Matrix decomposition; Normal distribution; order statistics; rate distortion code; Rate-distortion; rateless code; Regression analysis; Source coding; spherical distribution; Symmetric matrices; uniform random orthogonal matrix |
title | Rateless Lossy Compression via the Extremes |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-08T14%3A57%3A37IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Rateless%20Lossy%20Compression%20via%20the%20Extremes&rft.jtitle=IEEE%20transactions%20on%20information%20theory&rft.au=No,%20Albert&rft.date=2016-10-01&rft.volume=62&rft.issue=10&rft.spage=5484&rft.epage=5495&rft.pages=5484-5495&rft.issn=0018-9448&rft.eissn=1557-9654&rft.coden=IETTAW&rft_id=info:doi/10.1109/TIT.2016.2598148&rft_dat=%3Cproquest_RIE%3E1993013429%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=1824518686&rft_id=info:pmid/29375154&rft_ieee_id=7542580&rfr_iscdi=true |