Translating Human Mobility Forecasting through Natural Language Generation

Existing human mobility forecasting models follow the standard design of time-series prediction, taking a series of numerical values as input and generating a numerical value as the prediction. Although treating this as a regression problem seems straightforward, incorporating various contextual information, such as the semantic category of each Place-of-Interest (POI), is a necessary step, and often the bottleneck, in designing an effective mobility prediction model. In contrast to the typical approach, we treat forecasting as a translation problem and propose a novel forecasting-through-language-generation pipeline. The paper addresses human mobility forecasting as a language translation task in a sequence-to-sequence manner. A mobility-to-language template is first introduced to describe the numerical mobility data as natural language sentences. The core intuition of this translation task is to convert the input mobility description sentences into a future mobility description from which the prediction target can be obtained. Under this pipeline, a two-branch network, SHIFT (Translating Human Mobility Forecasting), is designed: one main branch for language generation and one auxiliary branch that directly learns mobility patterns. During training, we develop a momentum mode to better connect and train the two branches. Extensive experiments on three real-world datasets demonstrate that the proposed SHIFT is effective and presents a new, revolutionary approach to forecasting human mobility.

CCS CONCEPTS
• Computing methodologies → Natural language generation
• Information systems → Data mining
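To make the translation framing concrete, the sketch below illustrates one way a mobility-to-language template, target extraction, and a momentum-style coupling between the two branches could look. The sentence wording, the function names (mobility_to_sentence, extract_prediction, momentum_update), and the EMA-style update are illustrative assumptions for this sketch, not the paper's exact template or training rule.

```python
import re

# Minimal sketch of a mobility-to-language template (wording is illustrative,
# not the paper's exact template).
def mobility_to_sentence(place_name, category, visits, day):
    """Describe one day's visit count at a POI as a natural-language sentence."""
    return (f"There were {visits} people visiting {place_name}, "
            f"which is a {category}, on {day}.")

def history_to_prompt(place_name, category, daily_visits, days):
    """Turn a window of numerical visit counts into the source sentence sequence."""
    sentences = [mobility_to_sentence(place_name, category, v, d)
                 for v, d in zip(daily_visits, days)]
    # The target side asks the model to continue the description for the next day.
    sentences.append(f"On the next day, the number of people visiting {place_name} will be")
    return " ".join(sentences)

def extract_prediction(generated_text):
    """Recover the numerical forecast from the generated future description."""
    match = re.search(r"\b(\d+)\b", generated_text)
    return int(match.group(1)) if match else None

# Hypothetical momentum (EMA) update connecting the auxiliary mobility branch
# to the language-generation branch; the exact coupling is an assumption.
def momentum_update(main_params, aux_params, m=0.999):
    return {k: m * aux_params[k] + (1.0 - m) * main_params[k] for k in aux_params}

if __name__ == "__main__":
    days = ["Monday", "Tuesday", "Wednesday"]
    visits = [12, 18, 9]
    prompt = history_to_prompt("Cafe Alpha", "coffee shop", visits, days)
    print(prompt)
    # A sequence-to-sequence model (e.g. a BART-style encoder-decoder) would
    # generate a continuation such as the following, from which the numeric
    # prediction is parsed back out:
    generated = "On the next day, the number of people visiting Cafe Alpha will be 15."
    print(extract_prediction(generated))  # -> 15
```

The key design choice this sketch highlights is that the forecasting target never leaves the language space during modeling: contextual information such as the POI category is carried by the sentence itself, and the numeric prediction is only parsed out of the generated future description at the end.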
