ZoomLDM: Latent Diffusion Model for multi-scale image generation

Diffusion models have revolutionized image generation, yet several challenges restrict their application to large-image domains, such as digital pathology and satellite imagery. Given that it is infeasible to directly train a model on 'whole' images from domains with potential gigapixel sizes, diffusion-based generative methods have focused on synthesizing small, fixed-size patches extracted from these images. However, generating small patches has limited applicability since patch-based models fail to capture the global structures and wider context of large images, which can be crucial for synthesizing (semantically) accurate samples. In this paper, to overcome this limitation, we present ZoomLDM, a diffusion model tailored for generating images across multiple scales. Central to our approach is a novel magnification-aware conditioning mechanism that utilizes self-supervised learning (SSL) embeddings and allows the diffusion model to synthesize images at different 'zoom' levels, i.e., fixed-size patches extracted from large images at varying scales. ZoomLDM achieves state-of-the-art image generation quality across all scales, excelling particularly in the data-scarce setting of generating thumbnails of entire large images. The multi-scale nature of ZoomLDM unlocks additional capabilities in large image generation, enabling computationally tractable and globally coherent image synthesis up to \(4096 \times 4096\) pixels and \(4\times\) super-resolution. Additionally, multi-scale features extracted from ZoomLDM are highly effective in multiple instance learning experiments. We provide high-resolution examples of the generated images on our website https://histodiffusion.github.io/docs/publications/zoomldm/.
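The notion of 'zoom' levels in the abstract — fixed-size patches that cover progressively larger regions of the original gigapixel image — can be illustrated with a minimal sketch. This is not the paper's code; the function name, patch size, and naive strided downsampling are all illustrative assumptions.

```python
import numpy as np

def multiscale_patches(image: np.ndarray, patch: int = 256, zooms=(1, 4, 16)):
    """Extract one fixed-size center patch per 'zoom' level from a large image.

    At zoom z the image is downsampled by a factor of z (naive striding here;
    a real pipeline would use proper resampling), then a patch x patch center
    crop is taken. Every crop has the same pixel size, but at zoom z it spans
    a (patch * z)-pixel-wide region of the original image.
    """
    out = {}
    for z in zooms:
        small = image[::z, ::z]  # downsample by striding (illustrative only)
        h, w = small.shape[:2]
        top, left = (h - patch) // 2, (w - patch) // 2
        out[z] = small[top:top + patch, left:left + patch]
    return out

# A 4096x4096 stand-in for a large image; real inputs would be gigapixel slides.
img = np.random.rand(4096, 4096, 3)
patches = multiscale_patches(img)  # every value is a 256x256x3 array
```

At the highest zoom here (16), the crop covers the entire 4096-pixel field of view in a single 256-pixel patch — the "thumbnail of the entire large image" setting the abstract refers to.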

Detailed Description

Bibliographic Details
Published in: arXiv.org 2024-11
Main Authors: Yellapragada, Srikar; Graikos, Alexandros; Triaridis, Kostas; Prasanna, Prateek; Gupta, Rajarsi R; Saltz, Joel; Samaras, Dimitris
Format: Article
Language: English
Subjects:
Online Access: Full text
identifier EISSN: 2331-8422
source Free E-Journals
subjects Digital imaging
Image processing
Image quality
Image resolution
Satellite imagery
Self-supervised learning
Synthesis
Thumbnail icons