Path: blob/master/19_recurrent_neural_nets/04_multivariate_timeseries.ipynb
Multivariate Time Series Regression
So far, we have limited our modeling efforts to single time series. RNNs are naturally well suited to multivariate time series and represent a non-linear alternative to the Vector Autoregressive (VAR) models we covered in Chapter 8, Time Series Models.
Imports & Settings
Load Data
For comparison, we illustrate the application of RNNs to modeling and forecasting several time series using the same dataset we used for the VAR example in Chapter 8, Time Series Models: monthly data on consumer sentiment and industrial production from the Federal Reserve's FRED service:
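A minimal sketch of loading the two series via `pandas_datareader`. The FRED series IDs `UMCSENT` (consumer sentiment) and `IPGMFN` (industrial production, manufacturing) and the column names are assumptions; the helper falls back to synthetic stand-in data when the download is unavailable:

```python
import numpy as np
import pandas as pd

def load_data(start='1980', end='2019'):
    """Fetch monthly sentiment and industrial production from FRED;
    fall back to synthetic stand-in data if the download fails."""
    try:
        import pandas_datareader.data as web
        # Series IDs are assumptions, not taken from the notebook
        df = web.DataReader(['UMCSENT', 'IPGMFN'], 'fred', start, end).dropna()
        df.columns = ['sentiment', 'ip']
    except Exception:
        # Synthetic stand-in with roughly plausible levels
        idx = pd.date_range(start, end, freq='MS')
        rng = np.random.default_rng(42)
        df = pd.DataFrame({'sentiment': 80 + rng.normal(0, 5, len(idx)).cumsum() * 0.1,
                           'ip': np.exp(np.linspace(3.5, 4.7, len(idx)))},
                          index=idx)
    return df

df = load_data()
```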
Prepare Data
Stationarity
We apply the same transformation—annual difference for both series, prior log-transform for industrial production—to achieve stationarity that we used in Chapter 8 on Time Series Models:
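Concretely, the transformation can be sketched as follows; the synthetic `df` stands in for the loaded FRED data, and the column names are assumptions:

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the monthly FRED data loaded above
idx = pd.date_range('1980-01-01', periods=480, freq='MS')
rng = np.random.default_rng(0)
df = pd.DataFrame({'ip': np.exp(np.linspace(3.5, 4.7, 480)) + rng.normal(0, 0.5, 480),
                   'sentiment': 80 + rng.normal(0, 2, 480).cumsum() * 0.1},
                  index=idx)

# Industrial production: log-transform, then 12-month (annual) difference;
# consumer sentiment: 12-month difference only
df_transformed = pd.DataFrame({'ip': np.log(df.ip).diff(12),
                               'sentiment': df.sentiment.diff(12)}).dropna()
```

The annual difference removes both trend and seasonality; the first 12 observations are lost to the differencing.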
Scaling
Then we scale the transformed data to the [0,1] interval:
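For example, with scikit-learn's `MinMaxScaler` (the choice of scaler is an assumption; any transform to the [0, 1] interval works):

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

# Synthetic stand-in for the stationary (transformed) series
rng = np.random.default_rng(1)
df_transformed = pd.DataFrame({'ip': rng.normal(0, 0.05, 468),
                               'sentiment': rng.normal(0, 5, 468)})

scaler = MinMaxScaler()  # default feature_range is (0, 1)
df_scaled = pd.DataFrame(scaler.fit_transform(df_transformed),
                         columns=df_transformed.columns,
                         index=df_transformed.index)
```

Keeping the fitted `scaler` around allows forecasts to be mapped back to the original scale later via `inverse_transform`.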
Plot original and transformed series
Reshape data into RNN format
We can reshape directly to get non-overlapping series, i.e., one sample for each year (works only if the number of samples is divisible by window size):
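A sketch of the direct reshape with illustrative numbers: 480 monthly observations of two series divide evenly into 40 non-overlapping 12-month windows:

```python
import numpy as np

n_samples, n_series, window = 480, 2, 12
data = np.arange(n_samples * n_series, dtype=float).reshape(n_samples, n_series)

# Works only because 480 is divisible by 12; each sample covers one year
samples = data.reshape(-1, window, n_series)
```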
However, we want rolling, not non-overlapping, lagged values. The create_multivariate_rnn_data function transforms a dataset of several time series into the shape required by the Keras RNN layers, namely n_samples x window_size x n_series, as follows:
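The original helper is not reproduced in this export; a functionally equivalent sketch that produces rolling windows `X` of shape `(n_samples, window_size, n_series)` and the observation following each window as the target `y`:

```python
import numpy as np

def create_multivariate_rnn_data(data, window_size):
    """Stack rolling windows of a multivariate series into the 3D shape
    (n_samples, window_size, n_series) expected by Keras RNN layers."""
    data = np.asarray(data)
    X = np.stack([data[i:i + window_size]
                  for i in range(len(data) - window_size)])
    y = data[window_size:]  # one-step-ahead target for each window
    return X, y

# Small demonstration: 10 observations of 2 series, window of 3
data = np.arange(20.).reshape(10, 2)
X, y = create_multivariate_rnn_data(data, window_size=3)
```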
We will use a window_size of 24 months and obtain the desired inputs for our RNN model, as follows:
Finally, we split our data into a train and a test set, using the last 24 months to test the out-of-sample performance, as shown here:
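These steps can be sketched end to end; the `df_scaled` stand-in is synthetic, and a sketch of the create_multivariate_rnn_data helper is inlined so the example runs on its own:

```python
import numpy as np

def create_multivariate_rnn_data(data, window_size):
    # Rolling windows plus the next-step observation as target
    data = np.asarray(data)
    X = np.stack([data[i:i + window_size]
                  for i in range(len(data) - window_size)])
    return X, data[window_size:]

# Synthetic stand-in for the scaled two-series data (468 months x 2 series)
rng = np.random.default_rng(2)
df_scaled = rng.uniform(size=(468, 2))

window_size = 24
X, y = create_multivariate_rnn_data(df_scaled, window_size)

# Hold out the final 24 months for the out-of-sample test
test_size = 24
X_train, y_train = X[:-test_size], y[:-test_size]
X_test, y_test = X[-test_size:], y[-test_size:]
```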
Define Model Architecture
We use a similar architecture with two stacked LSTM layers with 12 and 6 units, respectively, followed by a fully-connected layer with 10 units. The output layer has two units, one for each time series. We compile the model using mean absolute error (MAE) loss and the recommended RMSProp optimizer, as follows:
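A sketch of this architecture in Keras; the absence of explicit activations on the dense layers is an assumption, but the layer sizes reproduce the parameter count stated below:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, LSTM, Dense

window_size, n_series = 24, 2
model = Sequential([
    Input(shape=(window_size, n_series)),
    LSTM(12, return_sequences=True),  # pass the full sequence to the next LSTM
    LSTM(6),
    Dense(10),
    Dense(n_series)                   # one output unit per series
])
model.compile(loss='mae', optimizer='rmsprop')
# LSTM(12): 720, LSTM(6): 456, Dense(10): 70, Dense(2): 22 -> 1,268 parameters
```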
The model has 1,268 parameters, as shown here:
Train the Model
We train for 50 epochs with a batch_size value of 20 using early stopping:
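A runnable sketch with synthetic stand-in arrays; monitoring validation loss with a patience of 5 epochs is an assumption, as is using the test set for validation:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, LSTM, Dense
from tensorflow.keras.callbacks import EarlyStopping

# Synthetic stand-ins shaped like the real train/test arrays
rng = np.random.default_rng(3)
X_train, y_train = rng.uniform(size=(420, 24, 2)), rng.uniform(size=(420, 2))
X_test, y_test = rng.uniform(size=(24, 24, 2)), rng.uniform(size=(24, 2))

model = Sequential([Input(shape=(24, 2)),
                    LSTM(12, return_sequences=True),
                    LSTM(6), Dense(10), Dense(2)])
model.compile(loss='mae', optimizer='rmsprop')

# Stop once validation loss stops improving; patience value is an assumption
early_stopping = EarlyStopping(monitor='val_loss', patience=5,
                               restore_best_weights=True)
result = model.fit(X_train, y_train, epochs=50, batch_size=20,
                   validation_data=(X_test, y_test),
                   callbacks=[early_stopping], verbose=0)
```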
Evaluate the Results
Training stops early after 22 epochs, yielding a test MAE of 1.71, which compares favorably to the test MAE for the VAR model of 1.91.
However, the two results are not fully comparable because the RNN model produces 24 one-step-ahead forecasts, whereas the VAR model uses its own predictions as input for its out-of-sample forecast. You may want to tweak the VAR setup to obtain comparable forecasts and compare their performance:
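Since the reported MAE exceeds 1, it is presumably computed after undoing the [0, 1] scaling; a sketch of that final step, where the fitted scaler and the prediction and target arrays are stand-ins:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Stand-ins: a scaler fitted on the transformed training data, plus
# scaled model predictions and targets for the 24 test months
rng = np.random.default_rng(4)
scaler = MinMaxScaler().fit(rng.normal(size=(444, 2)) * [0.05, 5.0])
y_pred_scaled = rng.uniform(size=(24, 2))
y_test_scaled = rng.uniform(size=(24, 2))

# Map predictions and targets back to the original scale, then score
y_pred = scaler.inverse_transform(y_pred_scaled)
y_true = scaler.inverse_transform(y_test_scaled)
test_mae = np.abs(y_pred - y_true).mean()
```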