Multimodal dynamic graph convolutional network for crowdfunding success prediction
Published in: Applied Soft Computing, 2024-03, Vol. 154, p. 111313, Article 111313
Main authors: , , ,
Format: Article
Language: English
Online access: Full text
Abstract: Crowdfunding creates opportunities for creative people to raise funds so their ideas can be brought to life. However, failed fundraising leads to losses for project starters. Crowdfunding success prediction lets them learn the probability of fundraising success as early as possible, which can reduce their losses or lead them to modify the project content to increase that probability. Crowdfunding success prediction is a challenging classification task because the descriptive data are diverse while the supervisory information is relatively scarce. Much work has been done based on post-launch factors, but pre-launch prediction is more valuable for project creators and crowdfunding platforms. Although some efforts at pre-launch prediction have been made, most of them analyze only one or more of metadata, text, and images, and multimodal studies combining text and video are lacking. Considering this, we propose a multimodal dynamic graph convolutional network for crowdfunding success prediction based on text and video data. Specifically, sentences and frames are viewed as nodes, and edges are constructed based on location relationships and cosine similarity, which reconstructs the text and video as a multimodal graph and allows multimodal features to interact in graph form. Dynamic graph convolution is then used to fuse the node features, where "dynamic" means the graph structure changes during graph convolution. In addition, we build two multimodal crowdfunding datasets containing metadata, text, images, video, and project status, which are used to validate the effectiveness of crowdfunding prediction models. Experiments conducted on them show that our model outperforms existing state-of-the-art baselines, and that interaction and fusion in graph form can effectively integrate features from multiple modalities.
•Predict crowdfunding success based on only two pre-launch factors: texts and videos.
•Propose a novel multimodal dynamic graph convolutional network (MDGCN).
•Make two multimodal crowdfunding datasets.
•MDGCN outperforms existing state-of-the-art methods.
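The abstract describes the model only at a high level: sentence and frame features become graph nodes, edges come from location relationships and cosine similarity, and the graph structure is rebuilt during convolution. The paper's exact formulation is not reproduced in this record, so the following PyTorch sketch is just one plausible reading of that description; the function names, similarity threshold, mean-style aggregation, and readout are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only -- the abstract does not specify MDGCN's layer
# definitions; thresholds, dimensions, and names below are assumptions.
import torch
import torch.nn.functional as F


def build_multimodal_graph(sent_feats, frame_feats, sim_threshold=0.5):
    """Treat sentences and frames as nodes; connect neighbours in
    reading/playback order (location relationship) and pairs whose
    cosine similarity exceeds a threshold (assumed value)."""
    nodes = torch.cat([sent_feats, frame_feats], dim=0)        # (N, d)
    n = nodes.size(0)
    adj = torch.zeros(n, n)

    # Location-based edges: link consecutive sentences and consecutive frames.
    for start, count in [(0, sent_feats.size(0)),
                         (sent_feats.size(0), frame_feats.size(0))]:
        for i in range(start, start + count - 1):
            adj[i, i + 1] = adj[i + 1, i] = 1.0

    # Similarity-based edges over all node pairs (including cross-modal ones).
    sim = F.cosine_similarity(nodes.unsqueeze(1), nodes.unsqueeze(0), dim=-1)
    adj = torch.maximum(adj, (sim > sim_threshold).float())
    adj.fill_diagonal_(1.0)                                     # self-loops
    return nodes, adj


def dynamic_gcn(nodes, adj, weights, sim_threshold=0.5):
    """One plausible reading of 'dynamic' graph convolution: after each
    propagation step, rebuild the similarity edges from the updated node
    features, so the graph structure changes between layers."""
    h = nodes
    for w in weights:
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        h = torch.relu((adj / deg) @ h @ w)                     # mean aggregation
        sim = F.cosine_similarity(h.unsqueeze(1), h.unsqueeze(0), dim=-1)
        adj = (sim > sim_threshold).float()
        adj.fill_diagonal_(1.0)
    return h.mean(dim=0)                                        # graph-level readout


if __name__ == "__main__":
    d = 16
    sents, frames = torch.randn(5, d), torch.randn(8, d)        # toy features
    nodes, adj = build_multimodal_graph(sents, frames)
    weights = [torch.randn(d, d) * 0.1 for _ in range(2)]       # two layers
    pooled = dynamic_gcn(nodes, adj, weights)
    success_logit = pooled @ torch.randn(d)                     # toy success score
    print(float(torch.sigmoid(success_logit)))
```

In this reading, the pooled graph representation would feed a binary classifier for the project's funding outcome; how MDGCN actually combines the modalities and scores success is detailed only in the full paper.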
ISSN: 1568-4946, 1872-9681
DOI: 10.1016/j.asoc.2024.111313