Modeling Motion for Spatial Scalability

The dramatic proliferation of visual displays, from cell phones, through video iPods, PDAs, and notebooks, to high-quality HDTV screens, has raised the demand for a video compression scheme capable of decoding a "once-encoded" video at a range of supported video resolutions and with high quality. A promising solution to this problem has been recently proposed in the form of wavelet video coding based on motion-compensated temporal filtering (MCTF); scalability is naturally supported while efficiency is comparable to state-of-the-art hybrid coders. However, although rate (quality) and temporal scalability are natural in mainstream "t+2D" wavelet video coders, spatial scalability suffers from drift problems. In the light of the recently proposed "2D+t+2D" modification, which targets spatial scalability performance, we present a framework for the modeling of spatially-scalable motion that is well matched to this new structure. We propose a motion estimation scheme in which motion fields at different spatial scales are jointly estimated and coded. In addition, at lower spatial resolutions, we extend the block-wise constant motion model to a higher-order model based on cubic splines, effectively creating a "mixture motion model" that combines different models at different supported spatial scales. This advanced spatial modeling of motion significantly improves the coding efficiency of motion at low resolutions and leads to an excellent overall compression performance; spatial scalability performance of the proposed scheme approaches that of a non-scalable coder.
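
As a rough illustration of the spline-based motion model mentioned in the abstract, the following is a minimal sketch, assuming the higher-order model at a lower spatial resolution amounts to bicubic-spline interpolation of motion vectors defined on a coarse grid of control points. The function name spline_motion_field, the control-point spacing, and the use of SciPy are illustrative assumptions, not the authors' implementation.

import numpy as np
from scipy.interpolate import RectBivariateSpline

def spline_motion_field(control_mv, frame_shape, spacing):
    # control_mv:  (Hc, Wc, 2) motion vectors (dy, dx) on a coarse grid
    #              with `spacing` pixels between control points.
    # frame_shape: (H, W) of the lower-resolution frame.
    # Returns an (H, W, 2) dense motion field obtained by interpolating
    # each vector component with a bicubic spline.
    H, W = frame_shape
    hc, wc, _ = control_mv.shape
    yc = np.arange(hc) * spacing          # control-point rows (pixels)
    xc = np.arange(wc) * spacing          # control-point columns (pixels)
    y, x = np.arange(H), np.arange(W)     # evaluation grid (every pixel)
    dense = np.empty((H, W, 2))
    for c in range(2):                    # interpolate dy and dx separately
        spline = RectBivariateSpline(yc, xc, control_mv[..., c], kx=3, ky=3)
        dense[..., c] = spline(y, x)      # (H, W) spline values on the full pixel grid
    return dense

# Hypothetical usage: a 4x4 grid of control vectors, 16-pixel spacing, 64x64 frame.
ctrl = np.random.randn(4, 4, 2)
field = spline_motion_field(ctrl, (64, 64), 16)

By contrast, the block-wise constant model retained at full resolution would simply assign one vector per block; the "mixture motion model" of the abstract switches between these representations across the supported spatial scales.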

Bibliographic Details
Main Authors: Bozinovic, N., Konrad, J.
Format: Conference Proceeding
Language: English
Subjects: Cellular phones; Decoding; Displays; HDTV; Motion estimation; Personal digital assistants; Portable media players; Scalability; Spatial resolution; Video compression
container_end_page II
container_issue
container_start_page 29
container_title
container_volume 2
creator Bozinovic, N.
Konrad, J.
description The dramatic proliferation of visual displays, from cell phones, through video iPods, PDAs, and notebooks, to high-quality HDTV screens, has raised the demand for a video compression scheme capable of decoding a "once-encoded" video at a range of supported video resolutions and with high quality. A promising solution to this problem has been recently proposed in the form of wavelet video coding based on motion-compensated temporal filtering (MCTF); scalability is naturally supported while efficiency is comparable to state-of-the-art hybrid coders. However, although rate (quality) and temporal scalability are natural in mainstream "t+2D" wavelet video coders, spatial scalability suffers from drift problems. In the light of the recently proposed "2D+t+2D" modification, which targets spatial scalability performance, we present a framework for the modeling of spatially-scalable motion that is well matched to this new structure. We propose a motion estimation scheme in which motion fields at different spatial scales are jointly estimated and coded. In addition, at lower spatial resolutions, we extend the block-wise constant motion model to a higher-order model based on cubic splines, effectively creating a "mixture motion model" that combines different models at different supported spatial scales. This advanced spatial modeling of motion significantly improves the coding efficiency of motion at low resolutions and leads to an excellent over-all compression performance; spatial scalability performance of the proposed scheme approaches that of a non-scalable coder
doi_str_mv 10.1109/ICASSP.2006.1660271
format Conference Proceeding
fulltext fulltext_linktorsrc
identifier ISSN: 1520-6149
ISBN: 9781424404698
ISBN: 142440469X
EISSN: 2379-190X
DOI: 10.1109/ICASSP.2006.1660271
ispartof 2006 IEEE International Conference on Acoustics Speech and Signal Processing Proceedings, 2006, Vol.2, p.29-II
issn 1520-6149
2379-190X
language eng
recordid cdi_ieee_primary_1660271
source IEEE Electronic Library (IEL) Conference Proceedings
subjects Cellular phones
Decoding
Displays
HDTV
Motion estimation
Personal digital assistants
Portable media players
Scalability
Spatial resolution
Video compression
title Modeling Motion for Spatial Scalability