Synthesis and generation for 3D architecture volume with generative modeling
Generative design in architecture has long been studied, yet most algorithms are parameter-based and require explicit rules, and the design solutions are heavily experience-based. In the absence of a real understanding of the generation process of designing architecture and consensus evaluation matrices, empirical knowledge may be difficult to apply to similar projects or deliver to the next generation. We propose a workflow in the early design phase to synthesize and generate building morphology with artificial neural networks. Using 3D building models from the financial district of New York City as a case study, this research shows that neural networks can capture the implicit features and styles of the input dataset and create a population of design solutions that are coherent with the styles. We constructed our database using two different data representation formats, voxel matrix and signed distance function, to investigate the effect of shape representations on the performance of the generation of building shapes. A generative adversarial neural network and an auto decoder were used to generate the volume. Our study establishes the use of implicit learning to inform the design solution. Results show that both networks can grasp the implicit building forms and generate them with a similar style to the input data, between which the auto decoder with signed distance function representation provides the highest resolution results.
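The article compares two volume encodings for 3D building masses: a voxel occupancy matrix and a signed distance function (SDF). As an illustration only (this is not the authors' code; all names below are hypothetical), a minimal sketch of how the two encodings relate for a toy box-shaped building mass:

```python
# Illustrative sketch (not from the paper): the two shape encodings the
# article compares, demonstrated on an axis-aligned box "building".

def box_sdf(p, half_extents):
    """Signed distance from point p to a box centred at the origin:
    negative inside the shape, positive outside, zero on the surface."""
    qx = abs(p[0]) - half_extents[0]
    qy = abs(p[1]) - half_extents[1]
    qz = abs(p[2]) - half_extents[2]
    outside = (max(qx, 0.0) ** 2 + max(qy, 0.0) ** 2 + max(qz, 0.0) ** 2) ** 0.5
    inside = min(max(qx, qy, qz), 0.0)
    return outside + inside

def sample_encodings(n, extent, half_extents):
    """Sample the SDF on an n*n*n grid over [-extent, extent]^3.
    The voxel-matrix encoding is just a 0/1 threshold of the same samples,
    so it discards the sub-voxel surface detail the continuous SDF retains."""
    coords = [-extent + i * (2 * extent / (n - 1)) for i in range(n)]
    sdf = [[[box_sdf((x, y, z), half_extents) for z in coords]
            for y in coords] for x in coords]
    vox = [[[1 if d <= 0.0 else 0 for d in row] for row in plane] for plane in sdf]
    return sdf, vox

sdf, vox = sample_encodings(9, 1.0, (0.5, 0.5, 0.5))
print(sdf[4][4][4])  # grid centre lies inside the box: -0.5
print(vox[4][4][4], vox[0][0][0])  # 1 0
```

Because the SDF stores a continuous distance at every sample rather than a binary flag, a surface can be reconstructed at sub-voxel precision, which is consistent with the abstract's finding that the SDF-based auto decoder yields the highest-resolution results.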
Published in: International journal of architectural computing, 2023-06, Vol.21 (2), p.297-314
Authors: Zhuang, Xinwei; Ju, Yi; Yang, Allen; Luisa Caldas
Format: Article
Language: English
Online access: Full text
doi | 10.1177/14780771231168233
issn | 1478-0771 (print); 2048-3988 (electronic)
publisher | SAGE Publications (London, England)
source | Access via SAGE