A Pytorch Reproduction of Masked Generative Image Transformer

creator Besnier, Victor ; Chen, Mickael
description In this technical report, we present a reproduction of MaskGIT: Masked Generative Image Transformer, using PyTorch. The approach leverages a masked bidirectional transformer architecture, enabling image generation in only a few steps (8~16) for 512 x 512 resolution images, i.e., ~64x faster than an auto-regressive approach. Through rigorous experimentation and optimization, we achieved results that closely align with the findings presented in the original paper. We match the reported FID of 7.32 with our replication and obtain 7.59 with similar hyperparameters on ImageNet at resolution 512 x 512. Moreover, we improve over the official implementation with some minor hyperparameter tweaking, achieving an FID of 7.26. At the lower resolution of 256 x 256 pixels, our reimplementation scores 6.80, in comparison to the original paper's 6.18. To promote further research on Masked Generative Models and facilitate their reproducibility, we released our code and pre-trained weights openly at https://github.com/valeoai/MaskGIT-pytorch/
doi_str_mv 10.48550/arxiv.2310.14400
format Article
creationdate 2023-10-22
rights http://creativecommons.org/licenses/by-sa/4.0
identifier DOI: 10.48550/arxiv.2310.14400
language eng
recordid cdi_arxiv_primary_2310_14400
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
title A Pytorch Reproduction of Masked Generative Image Transformer
url https://arxiv.org/abs/2310.14400
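
The description field above summarizes the core idea: a bidirectional transformer predicts all masked VQ tokens in parallel and refines them over a handful of passes. Below is a minimal conceptual sketch of that iterative masked decoding loop in PyTorch. The `model(tokens, class_label)` interface, `mask_id`, and default sizes are assumptions made for illustration and do not reflect the API of the released MaskGIT-pytorch repository; only the cosine masking schedule and confidence-based re-masking follow the paper's description.

```python
# Conceptual sketch of MaskGIT-style iterative decoding (not the released repo's API).
import math
import torch

@torch.no_grad()
def maskgit_decode(model, class_label, seq_len=1024, codebook_size=1024,
                   mask_id=1024, steps=12, device="cpu"):
    # Start from a fully masked canvas of VQ token indices.
    tokens = torch.full((1, seq_len), mask_id, dtype=torch.long, device=device)

    for step in range(steps):
        # Cosine schedule: fraction of positions left masked after this pass.
        ratio = math.cos(math.pi / 2.0 * (step + 1) / steps)
        n_masked_next = int(seq_len * ratio)

        # One bidirectional forward pass predicts every position in parallel.
        logits = model(tokens, class_label)               # (1, seq_len, codebook_size)
        probs = torch.softmax(logits, dim=-1)
        sampled = torch.distributions.Categorical(probs=probs).sample()   # (1, seq_len)
        confidence = probs.gather(-1, sampled.unsqueeze(-1)).squeeze(-1)

        # Positions decided in earlier passes keep their value and are never re-masked.
        still_masked = tokens == mask_id
        sampled = torch.where(still_masked, sampled, tokens)
        confidence = torch.where(still_masked, confidence,
                                 torch.full_like(confidence, float("inf")))

        if n_masked_next == 0:                            # final pass: accept everything
            return sampled

        # Re-mask the least-confident positions and refine them in the next pass.
        cutoff = torch.topk(confidence, n_masked_next, largest=False).values.max()
        tokens = torch.where(confidence <= cutoff,
                             torch.full_like(sampled, mask_id), sampled)

    return tokens
```

For a 512 x 512 image tokenized into roughly a 32 x 32 grid (~1024 tokens), replacing one sequential prediction per token with ~16 parallel refinement passes is consistent with the ~64x speedup over autoregressive decoding quoted in the description.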