Multitask Learning Using Feature Extraction Network for Smart Tourism Applications

Bibliographic Details
Published in: IEEE Internet of Things Journal, 2023-11, Vol. 10 (21), p. 18790-18798
Main Authors: Li, Yu; Zeng, Fanxiang; Zhang, Nan; Chen, Zulong; Zhou, Li; Huang, Maolei; Zhu, Tianqi; Wang, Jing
Format: Article
Language: English
Subjects:
Online Access: Full text
Description
Summary: Around half of the world’s current population resides in urban areas and benefits from the rich services of the smart city. The majority of smart city services are recommendation-related services, and with the development of the Internet, most recommendation services in the smart economy are online recommendations. Online travel platforms (OTPs) such as Booking, Airbnb, Ctrip, and Fliggy provide people with ample resources and convenient ways to plan and enjoy their trips in the smart city. Hotel recommendation is essential for the success of OTPs. However, it is more challenging than item recommendation in typical e-commerce scenarios (e.g., Taobao, JD, and YouTube): the inherent characteristics of low frequency and high unit price lead to more severe sparse and long-tail data distributions. Moreover, to enhance both user experience and business returns, the recommender system seeks to improve both the click-through rate (CTR) and the conversion rate (CVR), where the seesaw phenomenon may occur. To address these shortcomings in hotel recommendation, a multitask learning (MTL) method with a novel flexible multilevel extraction network, denoted flexible MTL (FMTL), is proposed. In particular, FMTL incorporates MTL into a unified representation-learning framework and is divided into feature encoding and task prediction. In the feature encoding phase, we introduce a novel multirepresentation extractor with a temperature-adjusted gating mechanism (T-MRE) for each task, producing more flexible representations for sparse and long-tail data. In the prediction phase, we fuse the different representations for each task with three strategies and empirically demonstrate that the simple concatenation strategy is superior to the other, relatively complex gating approaches. Offline and live experiments, covering both overall metrics and user-group analysis based on the scarcity of user behaviors, show that, without significantly increasing the number of model parameters, our FMTL model substantially outperforms several state-of-the-art models.
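
To make the gating-and-fusion idea in the abstract more concrete, the following is a minimal, illustrative PyTorch sketch, not the authors' FMTL/T-MRE implementation: it assumes a set of shared expert networks, a per-task softmax gate whose logits are divided by a temperature, and simple concatenation fusion before each task head (CTR and CVR). All class names, layer sizes, and the temperature value are invented for the example.

# Minimal sketch (illustrative only, not the paper's FMTL/T-MRE code):
# shared experts, per-task temperature-adjusted gates, concatenation fusion.
import torch
import torch.nn as nn


class TemperatureGate(nn.Module):
    """Softmax gate whose logits are divided by a temperature before mixing experts."""

    def __init__(self, in_dim: int, n_experts: int, temperature: float = 0.5):
        super().__init__()
        self.proj = nn.Linear(in_dim, n_experts)
        self.temperature = temperature  # assumed hyperparameter, not from the paper

    def forward(self, x: torch.Tensor, expert_outs: torch.Tensor) -> torch.Tensor:
        # expert_outs: (batch, n_experts, hidden); gate weights: (batch, n_experts, 1)
        weights = torch.softmax(self.proj(x) / self.temperature, dim=-1).unsqueeze(-1)
        return (weights * expert_outs).sum(dim=1)


class TwoTaskModel(nn.Module):
    """CTR/CVR-style multitask model: shared experts, per-task temperature gates,
    and concatenation fusion of the gated representation with the raw features."""

    def __init__(self, in_dim: int = 64, hidden: int = 32, n_experts: int = 4):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU()) for _ in range(n_experts)]
        )
        self.gates = nn.ModuleList([TemperatureGate(in_dim, n_experts) for _ in range(2)])
        self.heads = nn.ModuleList([nn.Linear(hidden + in_dim, 1) for _ in range(2)])

    def forward(self, x: torch.Tensor):
        expert_outs = torch.stack([e(x) for e in self.experts], dim=1)
        preds = []
        for gate, head in zip(self.gates, self.heads):
            rep = gate(x, expert_outs)                # task-specific mixture of experts
            fused = torch.cat([rep, x], dim=-1)       # simple concatenation fusion
            preds.append(torch.sigmoid(head(fused)))  # probability for this task
        return preds  # [p_ctr, p_cvr]


if __name__ == "__main__":
    model = TwoTaskModel()
    p_ctr, p_cvr = model(torch.randn(8, 64))
    print(p_ctr.shape, p_cvr.shape)  # torch.Size([8, 1]) torch.Size([8, 1])

The sketch only mirrors the high-level structure described in the abstract (per-task gating over shared representations plus concatenation fusion); the paper's multilevel extraction network, feature encoders, and training setup are not reproduced here.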
ISSN: 2327-4662
DOI: 10.1109/JIOT.2023.3281329