An AI-augmented multimodal application for sketching out conceptual design
The goal of this paper is to develop an interactive web-based machine learning application to assist architects with multimodal inputs (sketches and textual information) for conceptual design. With different textual inputs, the application generates the architectural stylistic variations of a user’s initial sketch input as a design inspiration.
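The abstract describes a sketch-plus-text workflow in which different text prompts steer stylistic variations of an initial sketch while a fidelity control keeps the results close to the input. The paper's own model and interface are not reproduced in this record; the snippet below is only a minimal sketch of that general workflow, assuming the open-source Hugging Face diffusers image-to-image pipeline as a stand-in, where `strength` and `guidance_scale` play roles loosely analogous to the fidelity and diversity controls mentioned in the abstract. The model id, prompts, and file names are illustrative assumptions, not the authors' implementation.

```python
# Illustrative only: a generic sketch + text image-to-image workflow using the
# open-source diffusers library. This is NOT the paper's model; model id,
# prompts, and paths are placeholder assumptions.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The user's initial sketch (placeholder path), resized to the model's input size.
sketch = Image.open("sketch.png").convert("RGB").resize((512, 512))

# Different textual inputs yield different stylistic variations of the same sketch.
prompts = [
    "architectural concept, brutalist concrete house, perspective rendering",
    "architectural concept, traditional Japanese timber house, perspective rendering",
]

for i, prompt in enumerate(prompts):
    # strength keeps the output close to the sketch (fidelity);
    # guidance_scale trades prompt adherence against variety (diversity).
    result = pipe(prompt=prompt, image=sketch, strength=0.6, guidance_scale=7.5)
    result.images[0].save(f"variation_{i}.png")
```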
Saved in:
Published in: | International journal of architectural computing 2023-12, Vol.21 (4), p.565-580 |
---|---|
Main authors: | Zhou, Yifan; Park, Hyoung-June |
Format: | Article |
Language: | English |
Online access: | Full text |
container_end_page | 580 |
---|---|
container_issue | 4 |
container_start_page | 565 |
container_title | International journal of architectural computing |
container_volume | 21 |
creator | Zhou, Yifan; Park, Hyoung-June |
description | The goal of this paper is to develop an interactive web-based machine learning application to assist architects with multimodal inputs (sketches and textual information) for conceptual design. With different textual inputs, the application generates the architectural stylistic variations of a user’s initial sketch input as a design inspiration. A novel machine learning model for the multimodal input application is introduced and compared to others. The machine learning model is trained through procedural training with content curation of the training data (1) to control the fidelity of the generated designs to the input and (2) to manage their diversity. The web-based interface, currently a work in progress, serves as the frontend of the proposed application for better user experience and future data collection. In this paper, the framework of the proposed interactive application is explained. Furthermore, the implementation of its prototype is demonstrated with various examples. |
doi_str_mv | 10.1177/14780771221147605 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 1478-0771 |
ispartof | International journal of architectural computing, 2023-12, Vol.21 (4), p.565-580 |
issn | 1478-0771 2048-3988 |
language | eng |
recordid | cdi_crossref_primary_10_1177_14780771221147605 |
source | SAGE Complete |
title | An AI-augmented multimodal application for sketching out conceptual design |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-24T23%3A03%3A54IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-sage_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=An%20AI-augmented%20multimodal%20application%20for%20sketching%20out%20conceptual%20design&rft.jtitle=International%20journal%20of%20architectural%20computing&rft.au=Zhou,%20Yifan&rft.date=2023-12&rft.volume=21&rft.issue=4&rft.spage=565&rft.epage=580&rft.pages=565-580&rft.issn=1478-0771&rft.eissn=2048-3988&rft_id=info:doi/10.1177/14780771221147605&rft_dat=%3Csage_cross%3E10.1177_14780771221147605%3C/sage_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rft_sage_id=10.1177_14780771221147605&rfr_iscdi=true |