Today's Recommendations

2018 - 17th IEEE International Conference on Machine Learning and Applications (ICMLA)

A Comparison of ARIMA and LSTM in Forecasting Time Series

Forecasting time series data is an important subject in economics, business, and finance. Traditionally, there are several techniques for effectively forecasting the next lag of time series data, such as univariate Autoregressive (AR), univariate Moving Average (MA), Simple Exponential Smoothing (SES), and, most notably, Autoregressive Integrated Moving Average (ARIMA) with its many variations. In particular, the ARIMA model has demonstrated strong precision and accuracy in predicting the next lags of a time series. With the recent advancement in computational power and, more importantly, the development of more advanced machine learning algorithms and approaches such as deep learning, new algorithms have been developed to analyze and forecast time series data. The research question investigated in this article is whether and how newly developed deep learning-based algorithms for forecasting time series data, such as "Long Short-Term Memory (LSTM)", are superior to the traditional algorithms. The empirical studies conducted and reported in this article show that deep learning-based algorithms such as LSTM outperform traditional algorithms such as the ARIMA model. More specifically, the average reduction in error rates obtained by LSTM was between 84 and 87 percent compared to ARIMA, indicating the superiority of LSTM over ARIMA. Furthermore, it was observed that the number of training passes, known as "epochs" in deep learning, had no effect on the performance of the trained forecast model, which behaved essentially randomly with respect to the epoch count.
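To make the reported comparison concrete, the sketch below sets up the kind of one-step-ahead ARIMA-versus-LSTM evaluation the abstract describes. It is not the authors' pipeline: the synthetic random-walk series, the ARIMA order (5, 1, 0), the lag window of five, and the small Keras LSTM are all illustrative assumptions; only the statsmodels and Keras APIs themselves are standard.

```python
# Minimal ARIMA-vs-LSTM one-step-ahead forecasting sketch (illustrative only).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=200))          # synthetic random-walk stand-in
train, test = series[:150], series[150:]

# --- ARIMA: refit and forecast one step at a time over the test window ---
history = list(train)
arima_preds = []
for obs in test:
    fit = ARIMA(history, order=(5, 1, 0)).fit()   # (p, d, q) chosen for illustration
    arima_preds.append(fit.forecast(steps=1)[0])
    history.append(obs)

# --- LSTM: learn to map a window of lagged values to the next value ---
def windows(x, lag=5):
    X = np.array([x[i:i + lag] for i in range(len(x) - lag)])
    y = np.array(x[lag:])
    return X[..., None], y                        # shape (samples, lag, 1)

X_tr, y_tr = windows(train)
X_te, y_te = windows(np.concatenate([train[-5:], test]))

model = Sequential([Input(shape=(5, 1)), LSTM(32), Dense(1)])
model.compile(optimizer="adam", loss="mse")
model.fit(X_tr, y_tr, epochs=20, verbose=0)
lstm_preds = model.predict(X_te, verbose=0).ravel()

def rmse(pred, truth):
    return float(np.sqrt(np.mean((np.asarray(pred) - truth) ** 2)))

print("ARIMA RMSE:", rmse(arima_preds, test))
print("LSTM  RMSE:", rmse(lstm_preds, y_te))
```

Comparing the two RMSE values printed at the end is the kind of head-to-head measurement from which an error-reduction percentage can be derived.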

2018 - ArXiv

Forecasting Economics and Financial Time Series: ARIMA vs. LSTM

Forecasting time series data is an important subject in economics, business, and finance. Traditionally, there are several techniques for effectively forecasting the next lag of time series data, such as univariate Autoregressive (AR), univariate Moving Average (MA), Simple Exponential Smoothing (SES), and, most notably, Autoregressive Integrated Moving Average (ARIMA) with its many variations. In particular, the ARIMA model has demonstrated strong precision and accuracy in predicting the next lags of a time series. With the recent advancement in computational power and, more importantly, the development of more advanced machine learning algorithms and approaches such as deep learning, new algorithms have been developed to forecast time series data. The research question investigated in this article is whether and how newly developed deep learning-based algorithms for forecasting time series data, such as "Long Short-Term Memory (LSTM)", are superior to the traditional algorithms. The empirical studies conducted and reported in this article show that deep learning-based algorithms such as LSTM outperform traditional algorithms such as the ARIMA model. More specifically, the average reduction in error rates obtained by LSTM is between 84 and 87 percent compared to ARIMA, indicating the superiority of LSTM over ARIMA. Furthermore, it was noticed that the number of training passes, known as "epochs" in deep learning, has no effect on the performance of the trained forecast model, which exhibits essentially random behavior with respect to the epoch count.
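For reference, a "reduction in error rate" figure such as 84-87 percent is the relative drop in forecast error (e.g. RMSE) when moving from ARIMA to LSTM. The snippet below only illustrates the arithmetic; the numbers are hypothetical, not taken from the paper.

```python
# Hypothetical RMSE values, only to show how an error-reduction percentage is computed.
arima_rmse, lstm_rmse = 1.00, 0.14
reduction = 100 * (arima_rmse - lstm_rmse) / arima_rmse
print(f"error-rate reduction: {reduction:.0f}%")   # -> 86%
```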

2016 - IJCAI

ST-MVL: Filling Missing Values in Geo-Sensory Time Series Data

Many sensors have been deployed in the physical world, generating massive amounts of geo-tagged time series data. In reality, sensor readings are often lost at unexpected moments because of sensor or communication errors. These missing readings not only affect real-time monitoring but also compromise the performance of further data analysis. In this paper, we propose a spatio-temporal multi-view-based learning (ST-MVL) method to collectively fill missing readings in a collection of geo-sensory time series data, considering 1) the temporal correlation between readings at different timestamps in the same series and 2) the spatial correlation between different time series. Our method combines empirical statistical models, consisting of Inverse Distance Weighting and Simple Exponential Smoothing, with data-driven algorithms, comprising User-based and Item-based Collaborative Filtering. The former models handle general missing cases based on empirical assumptions derived from historical data over a long period, representing two global views from the spatial and temporal perspectives respectively. The latter algorithms deal with special cases where the empirical assumptions may not hold, based on recent data contexts, representing two local views from the spatial and temporal perspectives respectively. The predictions of the four views are aggregated into a final value by a multi-view learning algorithm. We evaluate our method on Beijing air quality and meteorological data, finding advantages of our model over ten baseline approaches.
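The sketch below conveys the multi-view idea described in the abstract, but it is not the published ST-MVL implementation: only the two global views (Inverse Distance Weighting and Simple Exponential Smoothing) are implemented, each yielding a candidate value for a missing reading, and least-squares weights learned on known readings blend them. The User-based and Item-based Collaborative Filtering local views would simply add two more candidate columns. The readings matrix R, the sensor coordinates, and all parameter values are invented for illustration.

```python
# Two-view missing-value filling sketch in the spirit of ST-MVL (illustrative only).
import numpy as np

def idw_view(R, coords, t, s, power=2.0):
    """Spatial global view: inverse-distance-weighted average of the other sensors at time t."""
    d = np.linalg.norm(coords - coords[s], axis=1)
    w = np.where(d > 0, 1.0 / np.maximum(d, 1e-9) ** power, 0.0)
    known = ~np.isnan(R[t]) & (w > 0)
    return (w[known] @ R[t, known]) / w[known].sum() if known.any() else np.nan

def ses_view(R, t, s, alpha=0.4, window=8):
    """Temporal global view: simple exponential smoothing over sensor s's recent readings."""
    hist = R[max(0, t - window):t, s]
    hist = hist[~np.isnan(hist)]
    if hist.size == 0:
        return np.nan
    est = hist[0]
    for v in hist[1:]:
        est = alpha * v + (1 - alpha) * est
    return est

def blend_weights(R, coords, sample):
    """Learn blending weights on (t, s) positions whose true reading is known."""
    X = np.array([[idw_view(R, coords, t, s), ses_view(R, t, s)] for t, s in sample])
    y = np.array([R[t, s] for t, s in sample])
    ok = ~np.isnan(X).any(axis=1)
    w, *_ = np.linalg.lstsq(X[ok], y[ok], rcond=None)
    return w

# Usage: estimate a lost reading at time 40 for sensor 2 on synthetic data.
rng = np.random.default_rng(1)
coords = rng.uniform(0, 10, size=(6, 2))                     # 6 sensors on a plane
R = np.sin(np.arange(100)[:, None] / 5) + rng.normal(0, 0.1, (100, 6))
R[40, 2] = np.nan                                            # simulate a lost reading
sample = [(t, s) for t in range(10, 100) for s in range(6) if not np.isnan(R[t, s])][:200]
w = blend_weights(R, coords, sample)
estimate = w @ np.array([idw_view(R, coords, 40, 2), ses_view(R, 40, 2)])
print("filled value:", estimate)
```

Learning the blending weights on positions whose true values are known is what lets the final estimate favor whichever view is more reliable for a given dataset.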

Paper Keywords

neural network, sensor network, machine learning, artificial neural network, support vector machine, deep learning, time series, data mining, wavelet transform, data analysis, deep neural network, neural network model, hidden Markov model, regression model, anomaly detection, gene expression data, generative adversarial network, experimental data, Fourier series, nearest neighbor, support vector regression, time series analysis, missing data, moving average, Lyapunov exponent, outlier detection, dynamic time warping, time series forecasting, data mining algorithm, panel data, time series prediction, multivariate time series, finite time, unit root, distance measure, financial time series, integrated moving average, dependent variable, chaotic time series, vegetation index, nonlinear time series, ARIMA model, fuzzy time series, anomaly detection method, autoregressive integrated moving average, air pollutant, time series classification, representation method, Fokker-Planck equation, similarity analysis, univariate time series, time series clustering, unsupervised anomaly detection, periodic pattern, nearest neighbor classification, time series dataset, time series regression, time series database, long time series, complex time series, time series similarity, fMRI time series, Indian stock market, time series representation, symbolic aggregate approximation, time series anomaly detection, interrupted time series, UCR time series, simple exponential smoothing, early classification, time series subsequence, distributed index, piecewise constant approximation, microarray time series, incomplete time series, large-scale time series, time series visualization