Mobility-LLM: Learning Visiting Intentions and Travel Preferences from Human Mobility Data with Large Language Models


Bibliographic details
Main authors: Gong, Letian; Lin, Yan; Zhang, Xinyue; Lu, Yiwen; Han, Xuedi; Liu, Yichen; Guo, Shengnan; Lin, Youfang; Wan, Huaiyu
Format: Article
Language: English
Online access: order full text
Description: Location-based services (LBS) have accumulated extensive human mobility data on diverse behaviors through check-in sequences. These sequences offer valuable insights into users' intentions and preferences. Yet, existing models analyzing check-in sequences fail to consider the semantics contained in these sequences, which closely reflect human visiting intentions and travel preferences, leading to an incomplete comprehension. Drawing inspiration from the exceptional semantic understanding and contextual information processing capabilities of large language models (LLMs) across various domains, we present Mobility-LLM, a novel framework that leverages LLMs to analyze check-in sequences for multiple tasks. Since LLMs cannot directly interpret check-ins, we reprogram these sequences to help LLMs comprehensively understand the semantics of human visiting intentions and travel preferences. Specifically, we introduce a visiting intention memory network (VIMN) to capture the visiting intentions at each record, along with a shared pool of human travel preference prompts (HTPP) to guide the LLM in understanding users' travel preferences. These components enhance the model's ability to extract and leverage semantic information from human mobility data effectively. Extensive experiments on four benchmark datasets and three downstream tasks demonstrate that our approach significantly outperforms existing models, underscoring the effectiveness of Mobility-LLM in advancing our understanding of human mobility data within LBS contexts.
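The "reprogramming" idea in the abstract — turning raw check-ins into vectors an LLM can consume, with a memory module summarizing visiting intentions — can be illustrated with a toy sketch. This is hypothetical code, not the authors' implementation: the category names, embedding width, and the exponential-moving-average update are all illustrative stand-ins for the learned VIMN described in the paper.

```python
# Hypothetical illustration only -- not the authors' implementation.
# Sketch of "reprogramming" a check-in sequence into fixed-size vectors,
# with an exponential-moving-average memory standing in for the paper's
# visiting intention memory network (VIMN).
import math
import random

random.seed(0)
DIM = 8  # toy embedding width

# Toy POI-category embeddings; a real system would learn these jointly.
CATEGORIES = ["cafe", "office", "gym", "restaurant", "park"]
EMB = {c: [random.uniform(-1.0, 1.0) for _ in range(DIM)] for c in CATEGORIES}

def embed_checkin(category, hour):
    """Encode one check-in: category embedding shifted by a time-of-day signal."""
    time_feat = math.sin(2.0 * math.pi * hour / 24.0)
    return [v + time_feat for v in EMB[category]]

def intention_memory(sequence, alpha=0.5):
    """Fold a check-in sequence into one vector via a recurrent EMA update,
    a crude stand-in for a learned memory network over records."""
    state = [0.0] * DIM
    for category, hour in sequence:
        x = embed_checkin(category, hour)
        state = [alpha * s + (1.0 - alpha) * xi for s, xi in zip(state, x)]
    return state

checkins = [("cafe", 8), ("office", 9), ("gym", 18), ("restaurant", 20)]
vector = intention_memory(checkins)
print(len(vector))  # fixed-size summary regardless of sequence length
```

The point of the sketch is the shape of the pipeline: each check-in becomes a dense vector, and a recurrent update compresses the whole sequence into a fixed-size state that downstream prompts (the paper's HTPP pool) could condition on.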
DOI: 10.48550/arxiv.2411.00823
Source: arXiv.org
Subjects: Computer Science - Artificial Intelligence
Computer Science - Computation and Language
Computer Science - Learning
Computer Science - Social and Information Networks