An AI-augmented multimodal application for sketching out conceptual design

Bibliographic Details
Published in: International Journal of Architectural Computing, 2023-12, Vol. 21 (4), p. 565-580
Main Authors: Zhou, Yifan; Park, Hyoung-June
Format: Article
Language: English
Online Access: Full text
Description
Abstract: The goal of this paper is to develop an interactive web-based machine learning application that assists architects with multimodal inputs (sketches and textual information) during conceptual design. Given different textual inputs, the application generates architectural stylistic variations of a user's initial sketch as design inspiration. A novel machine learning model for this multimodal-input application is introduced and compared to others. The model is trained procedurally, with the content of the training data curated (1) to control the fidelity of the generated designs to the input and (2) to manage their diversity. The web-based interface, a work in progress, serves as the frontend of the proposed application for a better user experience and future data collection. The paper explains the framework of the proposed interactive application and demonstrates the implementation of its prototype with various examples.
ISSN: 1478-0771, 2048-3988
DOI: 10.1177/14780771221147605
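The paper's own model is not described in implementable detail in the abstract, but the interaction it outlines (one initial sketch plus varying textual inputs yielding stylistic variations) can be approximated with an off-the-shelf text-guided image-to-image pipeline. Below is a minimal sketch assuming the Hugging Face diffusers library and a generic Stable Diffusion checkpoint; the model ID, file names, prompts, and parameter values are illustrative assumptions, not the authors' implementation:

```python
# Illustrative only: approximates the sketch+text interaction described in the
# abstract with a generic text-guided img2img pipeline, NOT the paper's model.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Assumed public checkpoint; the paper trains its own model on curated data.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The user's initial conceptual sketch (hypothetical file name).
sketch = Image.open("initial_sketch.png").convert("RGB").resize((512, 512))

# Different textual inputs produce stylistic variations of the same sketch.
styles = ["Gothic cathedral facade", "Bauhaus pavilion", "deconstructivist museum"]
for style in styles:
    out = pipe(
        prompt=f"architectural concept drawing, {style}",
        image=sketch,
        strength=0.55,       # lower values keep higher fidelity to the input sketch
        guidance_scale=7.5,  # higher values adhere more closely to the text prompt
    ).images[0]
    out.save(f"variation_{style.split()[0].lower()}.png")
```

In this stand-in, the strength and guidance_scale knobs loosely parallel the two goals the abstract attributes to the paper's curated procedural training: controlling the fidelity of generated designs to the input and managing their diversity.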