Deep Video Precoding

Bibliographic Details
Main authors: Bourtsoulatze, Eirina; Chadha, Aaron; Fadeev, Ilya; Giotsas, Vasileios; Andreopoulos, Yiannis
Format: Article
Language: English
Subjects: Computer Science - Learning; Computer Science - Multimedia; Statistics - Machine Learning
creator Bourtsoulatze, Eirina
Chadha, Aaron
Fadeev, Ilya
Giotsas, Vasileios
Andreopoulos, Yiannis
description Several groups are currently investigating how deep learning may advance the state-of-the-art in image and video coding. An open question is how to make deep neural networks work in conjunction with existing (and upcoming) video codecs, such as MPEG AVC, HEVC, VVC, Google VP9 and AOM AV1, as well as existing container and transport formats, without imposing any changes at the client side. Such compatibility is a crucial aspect when it comes to practical deployment, especially due to the fact that the video content industry and hardware manufacturers are expected to remain committed to these standards for the foreseeable future. We propose to use deep neural networks as precoders for current and future video codecs and adaptive video streaming systems. In our current design, the core precoding component comprises a cascaded structure of downscaling neural networks that operates during video encoding, prior to transmission. This is coupled with a precoding mode selection algorithm for each independently-decodable stream segment, which adjusts the downscaling factor according to scene characteristics, the utilized encoder, and the desired bitrate and encoding configuration. Our framework is compatible with all current and future codec and transport standards, as our deep precoding network structure is trained in conjunction with linear upscaling filters (e.g., the bilinear filter), which are supported by all web video players. Results with FHD and UHD content and widely-used AVC, HEVC and VP9 encoders show that coupling such standards with the proposed deep video precoding allows for 15% to 45% rate reduction under encoding configurations and bitrates suitable for video-on-demand adaptive streaming systems. The use of precoding can also lead to encoding complexity reduction, which is essential for cost-effective cloud deployment of complex encoders like H.265/HEVC and VP9.
doi_str_mv 10.48550/arxiv.1908.00812
format Article
creationdate 2019-08-02
rights http://arxiv.org/licenses/nonexclusive-distrib/1.0
identifier DOI: 10.48550/arxiv.1908.00812
language eng
recordid cdi_arxiv_primary_1908_00812
source arXiv.org
subjects Computer Science - Learning
Computer Science - Multimedia
Statistics - Machine Learning
title Deep Video Precoding
url https://arxiv.org/abs/1908.00812
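
The record's description above outlines the core precoding idea: a cascaded downscaling neural network applied to each video segment before a standard encoder, trained jointly with a linear upscaling filter (such as the bilinear filter) that web video players already support, plus a per-segment mode selection that picks the downscaling factor. The following PyTorch sketch illustrates that idea under simplifying assumptions; the layer sizes, the training loss, and the distortion-only selection rule (DeepPrecoder, train_step, select_mode, max_mse are hypothetical names, not the authors' code) are stand-ins, and the sketch omits the actual AVC/HEVC/VP9 encoding step that the paper's mode selection also takes into account.

import torch
import torch.nn as nn
import torch.nn.functional as F


class DownscaleStage(nn.Module):
    """One stage of the cascade: learnable filtering followed by a 2x downscale."""

    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1),
            nn.PReLU(),
            nn.Conv2d(channels, 3, 3, stride=2, padding=1),  # halves height and width
        )

    def forward(self, x):
        return self.body(x)


class DeepPrecoder(nn.Module):
    """Cascade of downscaling stages; using s stages gives a 2^s downscaling factor."""

    def __init__(self, num_stages=2):
        super().__init__()
        self.stages = nn.ModuleList(DownscaleStage() for _ in range(num_stages))

    def forward(self, x, stages):
        for stage in self.stages[:stages]:
            x = stage(x)
        return x


def bilinear_upscale(x, size):
    # Client-side reconstruction: plain bilinear upscaling back to the original size.
    return F.interpolate(x, size=size, mode="bilinear", align_corners=False)


def train_step(model, optimizer, frames):
    # Train the precoder so that bilinear upscaling of each downscaled output
    # reconstructs the original frames as closely as possible (MSE loss).
    optimizer.zero_grad()
    loss = 0.0
    for s in range(1, len(model.stages) + 1):
        low = model(frames, stages=s)
        recon = bilinear_upscale(low, frames.shape[-2:])
        loss = loss + F.mse_loss(recon, frames)
    loss.backward()
    optimizer.step()
    return float(loss)


@torch.no_grad()
def select_mode(model, segment, max_mse=2e-3):
    # Toy per-segment mode selection: pick the deepest downscaling whose bilinear
    # reconstruction error stays under a distortion budget; 0 means pass-through.
    best = 0
    for s in range(1, len(model.stages) + 1):
        low = model(segment, stages=s)
        recon = bilinear_upscale(low, segment.shape[-2:])
        if F.mse_loss(recon, segment) <= max_mse:
            best = s
    return best


if __name__ == "__main__":
    model = DeepPrecoder(num_stages=2)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    frames = torch.rand(2, 3, 128, 128)  # stand-in for frames of a video segment
    print("training loss:", train_step(model, optimizer, frames))
    print("selected downscaling stages:", select_mode(model, frames))

In a complete pipeline, each candidate downscaled segment would additionally be passed through the target encoder at the desired bitrate and configuration before measuring distortion, while the client simply decodes and bilinearly upscales the received segment, which is why no changes are required on the client side.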