Multivariate Probabilistic Time Series Forecasting with Informer
Introduction
A few months ago we introduced the Time Series Transformer, which is the vanilla Transformer (Vaswani et al., 2017) applied to forecasting, and showed an example for the univariate probabilistic forecasting task (i.e. predicting each time series' 1-d distribution individually). In this post we introduce the Informer model (Zhou, Haoyi, et al., 2021), the AAAI 2021 best paper, which is now available in 🤗 Transformers. We will show how to use the Informer model for the multivariate probabilistic forecasting task, i.e., predicting the distribution of a future vector of time-series target values. Note that this will also work for the vanilla Time Series Transformer model.
Multivariate Probabilistic Time Series Forecasting
As far as the modeling aspect of probabilistic forecasting is concerned, the Transformer/Informer will require no change when dealing with multivariate time series. In both the univariate and multivariate setting, the model will receive a sequence of vectors and thus the only change is on the output or emission side.
Modeling the full joint conditional distribution of high dimensional data can get computationally expensive, and thus methods resort to some approximation of the distribution, the easiest being to model each dimension independently with a distribution from the same family (a diagonal approximation), or some low-rank approximation to the full covariance, etc. Here we will just resort to the independent (or diagonal) emissions, which are supported for the families of distributions we have implemented here.
Informer - Under The Hood
Based on the vanilla Transformer (Vaswani et al., 2017), Informer employs two major improvements. To understand these improvements, let's recall the drawbacks of the vanilla Transformer:
Quadratic computation of canonical self-attention: The vanilla Transformer has a computational complexity of $O(T^2 D)$ where $T$ is the time series length and $D$ is the dimension of the hidden states. For long sequence time-series forecasting (also known as the LSTF problem), this might be really computationally expensive. To solve this problem, Informer employs a new self-attention mechanism called ProbSparse attention, which has $O(T \log T)$ time and space complexity.
Memory bottleneck when stacking layers: When stacking $N$ encoder/decoder layers, the vanilla Transformer has a memory usage of $O(N T^2)$, which limits the model's capacity for long sequences. Informer uses a Distilling operation to reduce the input size between layers to half its slice, thereby reducing the whole memory usage to $O(N \cdot T \log T)$.
As you can see, the motivation for the Informer model is similar to Longformer (Beltagy et al., 2020), Sparse Transformer (Child et al., 2019) and other NLP papers for reducing the quadratic complexity of the self-attention mechanism when the input sequence is long. Now, let's dive into ProbSparse attention and the Distilling operation with code examples.
ProbSparse Attention
The main idea of ProbSparse is that the canonical self-attention scores form a long-tail distribution, where the "active" queries lie in the "head" scores and "lazy" queries lie in the "tail" area. By an "active" query we mean a query $q_i$ such that the dot-product $\langle q_i, k_i \rangle$ contributes to the major attention, whereas a "lazy" query forms a dot-product which generates trivial attention. Here, $q_i$ and $k_i$ are the $i$-th rows in the $Q$ and $K$ attention matrices respectively.
*Vanilla self-attention vs ProbSparse attention, from Autoformer (Wu, Haixu, et al., 2021)*
Given the idea of "active" and "lazy" queries, the ProbSparse attention selects the "active" queries, and creates a reduced query matrix $Q_{reduce}$ which is used to calculate the attention weights in $O(T \log T)$. Let's see this more in detail with a code example.
Recall the canonical self-attention formula:

$$\textrm{Attention}(Q, K, V) = \textrm{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V$$

Where $Q, K, V \in \mathbb{R}^{L \times d}$. Note that in practice, the input lengths of queries and keys are typically equivalent in the self-attention computation, i.e. $L_Q = L_K = T$ where $T$ is the time series length. Therefore, the $QK^T$ multiplication takes $O(T^2 \cdot d)$ computational complexity. In ProbSparse attention, our goal is to create a new $Q_{reduce}$ matrix and define:

$$\textrm{ProbSparseAttention}(Q, K, V) = \textrm{softmax}\left(\frac{Q_{reduce}K^T}{\sqrt{d_k}}\right)V$$

where the $Q_{reduce}$ matrix only selects the Top $u$ "active" queries. Here, $u = c \cdot \log L_Q$ and $c$ is called the sampling factor hyperparameter for the ProbSparse attention. Since $Q_{reduce}$ selects only the Top $u$ queries, its size is $c \cdot \log L_Q \times d$, so the multiplication $Q_{reduce}K^T$ takes only $O(L_K \log L_Q) = O(T \log T)$.
This is good! But how can we select the "active" queries to create $Q_{reduce}$? Let's define the Query Sparsity Measurement.
Query Sparsity Measurement
The Query Sparsity Measurement $M(q_i, K)$ is used for selecting the $u$ "active" queries $q_i$ in $Q$ to create $Q_{reduce}$. In theory, the dominant $\langle q_i, k_i \rangle$ pairs encourage the "active" $q_i$'s probability distribution away from the uniform distribution, as can be seen in the figure below. Hence, the KL divergence between the actual query distribution and the uniform distribution is used to define the sparsity measurement.
*The illustration of ProbSparse attention, from the official repository*
In practice, the measurement is defined as:

$$M(q_i, K) = \max_j \frac{q_i k_j^T}{\sqrt{d}} - \frac{1}{L_K} \sum_{j=1}^{L_K} \frac{q_i k_j^T}{\sqrt{d}}$$

The important thing to understand here is that when $M(q_i, K)$ is larger, the query $q_i$ should be in $Q_{reduce}$, and vice versa.
But how can we calculate the term $q_i k_j^T$ in non-quadratic time? Recall that most of the dot-products $\langle q_i, k_i \rangle$ generate trivial attention either way (i.e. the long-tail distribution property), so it is enough to randomly sample a subset of keys from $K$, which will be called `K_sample` in the code.
Now, we are ready to see the code of `probsparse_attention`:
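Below is a minimal, self-contained sketch of the mechanism rather than the actual 🤗 Transformers implementation: the tensor shapes, the `sampling_factor` default, and the handling of the remaining "lazy" queries (which the full version fills with the mean of the values) are simplifying assumptions here.

```python
import math

import torch


def probsparse_attention(query_states, key_states, value_states, sampling_factor=5):
    """
    A simplified sketch of ProbSparse attention. All inputs have shape
    (batch_size * num_heads, seq_len, head_dim).
    """
    L_K = key_states.size(1)
    L_Q = query_states.size(1)
    log_L_K = int(math.ceil(math.log1p(L_K)))
    log_L_Q = int(math.ceil(math.log1p(L_Q)))

    # randomly sample a subset of the keys to estimate the sparsity measurement
    U_part = min(sampling_factor * L_Q * log_L_K, L_K)
    index_sample = torch.randint(0, L_K, (U_part,))
    K_sample = key_states[:, index_sample, :]

    # dot products of every query with the sampled keys
    Q_K_sample = torch.bmm(query_states, K_sample.transpose(1, 2))

    # sparsity measurement M(q_i, K): max over sampled keys minus their mean
    M = Q_K_sample.max(dim=-1)[0] - Q_K_sample.sum(dim=-1) / L_K

    # keep only the top-u "active" queries to form Q_reduce
    u = min(sampling_factor * log_L_Q, L_Q)
    M_top = M.topk(u, sorted=False)[1]
    dim_for_slice = torch.arange(query_states.size(0)).unsqueeze(-1)
    Q_reduce = query_states[dim_for_slice, M_top]  # size: c * log(L_Q) x d

    # canonical attention, but only for the reduced set of queries;
    # the full implementation assigns the remaining "lazy" queries
    # the mean of the values instead
    d_k = query_states.size(-1)
    attn_scores = torch.bmm(Q_reduce, key_states.transpose(-2, -1)) / math.sqrt(d_k)
    attn_probs = torch.softmax(attn_scores, dim=-1)
    return torch.bmm(attn_probs, value_states), M_top
```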
Note that the implementation deviates slightly from the formula above in the calculation for stability issues (see this discussion for more information).
We did it! Please be aware that this is only a partial implementation of `probsparse_attention`; the full implementation can be found in 🤗 Transformers.
Distilling
Because of the ProbSparse self-attention, the encoder's feature map has some redundancy that can be removed. Therefore, the distilling operation is used to reduce the input size between encoder layers to half its slice, thus in theory removing this redundancy. In practice, Informer's "distilling" operation just adds 1D convolution layers with max pooling between each of the encoder layers. Let $X_n$ be the output of the $n$-th encoder layer; the distilling operation is then defined as:

$$X_{n+1} = \textrm{MaxPool}(\textrm{ELU}(\textrm{Conv1d}(X_n)))$$
Let's see this in code:
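As a sketch, the distilling block between encoder layers can look as follows; the kernel sizes, circular padding and batch norm mirror the official Informer repository, but are assumptions here rather than the exact 🤗 Transformers code.

```python
import torch.nn as nn


class ConvLayer(nn.Module):
    """A sketch of Informer's distilling block placed between encoder layers."""

    def __init__(self, c_in):
        super().__init__()
        self.downConv = nn.Conv1d(
            in_channels=c_in,
            out_channels=c_in,
            kernel_size=3,
            padding=1,
            padding_mode="circular",
        )
        self.norm = nn.BatchNorm1d(c_in)
        self.activation = nn.ELU()
        self.maxPool = nn.MaxPool1d(kernel_size=3, stride=2, padding=1)

    def forward(self, x):
        # x: (batch_size, seq_len, d_model) -> Conv1d expects channels first
        x = self.downConv(x.permute(0, 2, 1))
        x = self.norm(x)
        x = self.activation(x)
        # max pooling with stride 2 halves the temporal dimension
        x = self.maxPool(x)
        return x.transpose(1, 2)
```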
By halving the input of each layer, we get a memory usage of $O(N \cdot T \log T)$ instead of $O(N \cdot T^2)$, where $N$ is the number of encoder/decoder layers. This is what we wanted!
The Informer model is now available in the 🤗 Transformers library, simply called `InformerModel`. In the sections below, we will show how to train this model on a custom multivariate time-series dataset.
Set-up Environment
First, let's install the necessary libraries: 🤗 Transformers, 🤗 Datasets, 🤗 Evaluate, 🤗 Accelerate and GluonTS.
As we will show, GluonTS will be used for transforming the data to create features as well as for creating appropriate training, validation and test batches.
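For example, in a notebook cell (`ujson` is an optional GluonTS speed-up and our own addition here):

```python
!pip install -q transformers datasets evaluate accelerate gluonts ujson
```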
We also quickly upload some telemetry - this tells us which examples and software versions are getting used so we know where to prioritize our maintenance efforts. We don't collect (or care about) any personally identifiable information, but if you'd prefer not to be counted, feel free to skip this step or delete this cell entirely.
Load Dataset
In this blog post, we'll use the `traffic_hourly` dataset, which is available on the Hugging Face Hub. This dataset contains the San Francisco Traffic dataset used by Lai et al. (2017). It contains 862 hourly time series showing the road occupancy rates in the range $[0, 1]$ on the San Francisco Bay area freeways from 2015 to 2016.
This dataset is part of the Monash Time Series Forecasting repository, a collection of time series datasets from a number of domains. It can be viewed as the GLUE benchmark of time series forecasting.
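Assuming the data lives in the `monash_tsf` repository on the Hub under the `traffic_hourly` configuration, loading it looks like this:

```python
from datasets import load_dataset

dataset = load_dataset("monash_tsf", "traffic_hourly")
dataset
```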
As can be seen, the dataset contains 3 splits: train, validation and test.
Each example contains a few keys, of which `start` and `target` are the most important ones. Let us have a look at the first time series in the dataset:
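For instance:

```python
train_example = dataset["train"][0]
print(train_example.keys())
```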
The `start` simply indicates the start of the time series (as a datetime), and the `target` contains the actual values of the time series.
The `start` will be useful to add time-related features to the time series values, as extra input to the model (such as "month of year"). Since we know the frequency of the data is `hourly`, we know for instance that the second value has the timestamp `2015-01-01 01:00:01`, the third value `2015-01-01 02:00:01`, etc.
The validation set contains the same data as the training set, just for a `prediction_length` longer amount of time. This allows us to validate the model's predictions against the ground truth.
The test set is again one `prediction_length` longer than the validation set (or some multiple of `prediction_length` longer than the training set, for testing on multiple rolling windows).
The initial values are exactly the same as the corresponding training example. However, this example has `prediction_length=48` (48 hours, or 2 days) additional values compared to the training example. Let us verify it:
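A minimal check (assuming the hourly frequency string `"1H"` and comparing against the validation split):

```python
freq = "1H"
prediction_length = 48

train_example = dataset["train"][0]
validation_example = dataset["validation"][0]

# the validation series should be exactly prediction_length values longer
assert len(train_example["target"]) + prediction_length == len(
    validation_example["target"]
)
```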
Let's visualize this:
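A possible plot of the last window of the training series against the validation series (the number of samples shown is an arbitrary choice):

```python
import matplotlib.pyplot as plt

num_of_samples = 150

figure, axes = plt.subplots()
axes.plot(train_example["target"][-num_of_samples:], color="blue")
axes.plot(
    validation_example["target"][-num_of_samples - prediction_length :],
    color="red",
    alpha=0.5,
)
plt.show()
```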
Let's split up the data:
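For what follows we only need the train and test splits:

```python
train_dataset = dataset["train"]
test_dataset = dataset["test"]
```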
Update `start` to `pd.Period`
The first thing we'll do is convert the `start` feature of each time series to a pandas `Period` index using the data's `freq`:
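A sketch of such a conversion function, cached for speed (the helper names are ours):

```python
from functools import lru_cache

import pandas as pd


@lru_cache(10_000)
def convert_to_pandas_period(date, freq):
    return pd.Period(date, freq)


def transform_start_field(batch, freq):
    # set_transform hands us a batch, so convert every start field in it
    batch["start"] = [convert_to_pandas_period(date, freq) for date in batch["start"]]
    return batch
```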
We now use `datasets`' `set_transform` functionality to do this on-the-fly in place:
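For example:

```python
from functools import partial

train_dataset.set_transform(partial(transform_start_field, freq=freq))
test_dataset.set_transform(partial(transform_start_field, freq=freq))
```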
Now, let's convert the dataset into a multivariate time series using the `MultivariateGrouper` from GluonTS. This grouper will convert the individual 1-dimensional time series into a single 2D matrix.
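A sketch of the grouping (the `num_test_dates` computation assumes the test split stores the rolling windows of all 862 series back to back):

```python
from gluonts.dataset.multivariate_grouper import MultivariateGrouper

num_of_variates = len(train_dataset)

train_grouper = MultivariateGrouper(max_target_dim=num_of_variates)
test_grouper = MultivariateGrouper(
    max_target_dim=num_of_variates,
    # number of rolling test windows
    num_test_dates=len(test_dataset) // num_of_variates,
)

multi_variate_train_dataset = train_grouper(train_dataset)
multi_variate_test_dataset = test_grouper(test_dataset)
```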
Note that the target is now 2-dimensional, where the first dimension is the number of variates (the number of time series) and the second is the time series values (the time dimension):
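For example:

```python
multi_variate_train_example = next(iter(multi_variate_train_dataset))
print("target shape:", multi_variate_train_example["target"].shape)
```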
Define the model
Next, let's instantiate a model. The model will be trained from scratch, hence we won't use the `from_pretrained` method here, but rather randomly initialize the model from a `config`.
We specify a couple of additional parameters to the model:
- `prediction_length` (in our case, `48` hours): this is the horizon that the decoder of the Informer will learn to predict for;
- `context_length`: the model will set the `context_length` (input of the encoder) equal to the `prediction_length` if no `context_length` is specified;
- `lags` for a given frequency: these specify an efficient "look back" mechanism, where we concatenate values from the past to the current values as additional features, e.g. for a `Daily` frequency we might consider a look back of `[1, 7, 30, ...]` or for `Minute` data we might consider `[1, 30, 60, 60*24, ...]`, etc.;
- the number of time features: in our case, this will be `5`, as we'll add `HourOfDay`, `DayOfWeek`, ..., and `Age` features (see below).
Let us check the default lags provided by GluonTS for the given frequency ("hourly"):
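Using GluonTS' helper:

```python
from gluonts.time_feature import get_lags_for_frequency

lags_sequence = get_lags_for_frequency(freq)
print(lags_sequence)
```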
This means that it would look back up to 721 hours (~30 days) for each time step, as additional features. However, the resulting feature vector would end up being of size `len(lags_sequence)*num_of_variates`, which for our case will be 34480! This is not going to work, so we will use our own sensible lags.
Let us also check the default time features which GluonTS provides us:
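Again via a GluonTS helper:

```python
from gluonts.time_feature import time_features_from_frequency_str

time_features = time_features_from_frequency_str(freq)
print(time_features)
```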
In this case, there are four additional features, namely "hour of day", "day of week", "day of month" and "day of year". This means that for each time step, we'll add these features as scalar values. For example, consider the timestamp `2015-01-01 01:00:01`. The four additional features will be:
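A quick way to see them (this assumes a recent GluonTS version where time features are plain functions with a `__name__`):

```python
import pandas as pd

timestamp_as_index = pd.PeriodIndex([pd.Period("2015-01-01 01:00:01", freq=freq)])
for time_feature in time_features:
    print(time_feature.__name__, time_feature(timestamp_as_index))
```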
Note that hours and days are encoded as values between `[-0.5, 0.5]` by GluonTS. For more information about `time_features`, please see this. Besides those 4 features, we'll also add an "age" feature, as we'll see later on in the data transformations.
We now have everything to define the model:
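A sketch of the configuration; the concrete hyperparameter choices below (lags, number of layers, `d_model`) are ours and worth tuning:

```python
from transformers import InformerConfig, InformerForPrediction

config = InformerConfig(
    # in the multivariate setting, input_size is the number of variates per time step
    input_size=num_of_variates,
    prediction_length=prediction_length,
    context_length=prediction_length * 2,
    # our own sensible lags instead of the huge default list
    lags_sequence=[1, 24 * 7],
    # 4 time features ("hour of day", ...) plus the "age" feature
    num_time_features=len(time_features) + 1,
    # informer-specific parameters
    dropout=0.1,
    encoder_layers=6,
    decoder_layers=2,
    d_model=64,
)

model = InformerForPrediction(config)
```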
By default, the model uses a diagonal Student-t distribution (but this is configurable):
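This can be checked on the config:

```python
print(model.config.distribution_output)
```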
Define Transformations
Next, we define the transformations for the data, in particular for the creation of the time features (based on the dataset or universal ones).
Again, we'll use the GluonTS library for this. We define a `Chain` of transformations (which is somewhat comparable to `torchvision.transforms.Compose` for images). It allows us to combine several transformations into a single pipeline.
The transformations below are annotated with comments, to explain what they do. At a high level, we will iterate over the individual time series of our dataset and add/remove fields or features:
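A sketch of such a pipeline; the field renaming at the end, which maps GluonTS field names to the model's keyword arguments, is an assumption about how the rest of this notebook consumes the data:

```python
from gluonts.dataset.field_names import FieldName
from gluonts.time_feature import time_features_from_frequency_str
from gluonts.transform import (
    AddAgeFeature,
    AddObservedValuesIndicator,
    AddTimeFeatures,
    AsNumpyArray,
    Chain,
    RemoveFields,
    RenameFields,
    VstackFeatures,
)
from transformers import PretrainedConfig


def create_transformation(freq: str, config: PretrainedConfig) -> Chain:
    return Chain(
        [
            # step 1: remove static/dynamic fields we don't use in this setting
            RemoveFields(
                field_names=[
                    FieldName.FEAT_STATIC_CAT,
                    FieldName.FEAT_STATIC_REAL,
                    FieldName.FEAT_DYNAMIC_REAL,
                ]
            ),
            # step 2: convert the target to a numpy array
            # (2-dimensional in the multivariate case)
            AsNumpyArray(
                field=FieldName.TARGET,
                expected_ndim=1 if config.input_size == 1 else 2,
            ),
            # step 3: handle NaNs by filling in the target with zeros
            # and recording a mask of observed values
            AddObservedValuesIndicator(
                target_field=FieldName.TARGET,
                output_field=FieldName.OBSERVED_VALUES,
            ),
            # step 4: add temporal features based on the freq of the dataset
            AddTimeFeatures(
                start_field=FieldName.START,
                target_field=FieldName.TARGET,
                output_field=FieldName.FEAT_TIME,
                time_features=time_features_from_frequency_str(freq),
                pred_length=config.prediction_length,
            ),
            # step 5: add the "age" feature, which tells the model
            # how far along in the series a value is
            AddAgeFeature(
                target_field=FieldName.TARGET,
                output_field=FieldName.FEAT_AGE,
                pred_length=config.prediction_length,
                log_scale=True,
            ),
            # step 6: stack all temporal features into a single tensor
            VstackFeatures(
                output_field=FieldName.FEAT_TIME,
                input_fields=[FieldName.FEAT_TIME, FieldName.FEAT_AGE],
            ),
            # step 7: rename fields to match the model's argument names
            RenameFields(
                mapping={
                    FieldName.FEAT_TIME: "time_features",
                    FieldName.OBSERVED_VALUES: "observed_mask",
                    FieldName.TARGET: "values",
                }
            ),
        ]
    )
```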
Define InstanceSplitter
For training/validation/testing we next create an `InstanceSplitter`, which is used to sample windows from the dataset (as, remember, we can't pass the entire history of values to the model due to time and memory constraints).
The instance splitter samples random `context_length` sized and subsequent `prediction_length` sized windows from the data, and appends a `past_` or `future_` key to any temporal keys in `time_series_fields` for the respective windows. The instance splitter can be configured into three different modes, listed below (a code sketch follows the list):
mode="train"
: Here we sample the context and prediction length windows randomly from the dataset given to it (the training dataset)mode="validation"
: Here we sample the very last context length window and prediction window from the dataset given to it (for the back-testing or validation likelihood calculations)mode="test"
: Here we sample the very last context length window only (for the prediction use case)
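A sketch of a factory for the three modes (the sampler choices per mode follow the descriptions above):

```python
from typing import Optional

from gluonts.dataset.field_names import FieldName
from gluonts.transform import (
    ExpectedNumInstanceSampler,
    InstanceSplitter,
    TestSplitSampler,
    ValidationSplitSampler,
)
from gluonts.transform.sampler import InstanceSampler
from transformers import PretrainedConfig


def create_instance_splitter(
    config: PretrainedConfig,
    mode: str,
    train_sampler: Optional[InstanceSampler] = None,
    validation_sampler: Optional[InstanceSampler] = None,
) -> InstanceSplitter:
    assert mode in ["train", "validation", "test"]

    instance_sampler = {
        # during training, sample windows at random positions
        "train": train_sampler
        or ExpectedNumInstanceSampler(
            num_instances=1.0, min_future=config.prediction_length
        ),
        # for validation, take the very last window
        "validation": validation_sampler
        or ValidationSplitSampler(min_future=config.prediction_length),
        # for testing, split at the forecast start
        "test": TestSplitSampler(),
    }[mode]

    return InstanceSplitter(
        target_field="values",
        is_pad_field=FieldName.IS_PAD,
        start_field=FieldName.START,
        forecast_start_field=FieldName.FORECAST_START,
        instance_sampler=instance_sampler,
        # the encoder additionally needs the largest lag's worth of history
        past_length=config.context_length + max(config.lags_sequence),
        future_length=config.prediction_length,
        time_series_fields=["time_features", "observed_mask"],
    )
```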
Create DataLoaders
Next, it's time to create the DataLoaders, which allow us to have batches of (input, output) pairs - or in other words (`past_values`, `future_values`).
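A sketch of the train and test dataloaders built on GluonTS utilities (`as_stacked_batches` is assumed available in recent GluonTS releases; the batch sizes are our choices):

```python
import torch
from gluonts.dataset.loader import as_stacked_batches
from gluonts.itertools import Cyclic

PREDICTION_INPUT_NAMES = [
    "past_time_features",
    "past_values",
    "past_observed_mask",
    "future_time_features",
]
TRAINING_INPUT_NAMES = PREDICTION_INPUT_NAMES + [
    "future_values",
    "future_observed_mask",
]


def create_train_dataloader(config, freq, data, batch_size, num_batches_per_epoch):
    transformation = create_transformation(freq, config)
    transformed_data = transformation.apply(data, is_train=True)

    # sample training windows from an infinitely cycled stream of the data
    instance_splitter = create_instance_splitter(config, "train")
    stream = instance_splitter.apply(Cyclic(transformed_data), is_train=True)

    return as_stacked_batches(
        stream,
        batch_size=batch_size,
        shuffle_buffer_length=100,
        field_names=TRAINING_INPUT_NAMES,
        output_type=torch.tensor,
        num_batches_per_epoch=num_batches_per_epoch,
    )


def create_test_dataloader(config, freq, data, batch_size):
    transformation = create_transformation(freq, config)
    transformed_data = transformation.apply(data, is_train=False)

    # the test splitter takes the very last context window of each series
    instance_splitter = create_instance_splitter(config, "test")
    instances = instance_splitter.apply(transformed_data, is_train=False)

    return as_stacked_batches(
        instances,
        batch_size=batch_size,
        field_names=PREDICTION_INPUT_NAMES,
        output_type=torch.tensor,
    )


train_dataloader = create_train_dataloader(
    config=config,
    freq=freq,
    data=multi_variate_train_dataset,
    batch_size=256,
    num_batches_per_epoch=100,
)
test_dataloader = create_test_dataloader(
    config=config,
    freq=freq,
    data=multi_variate_test_dataset,
    batch_size=32,
)
```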
Let's check the first batch:
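A minimal inspection:

```python
batch = next(iter(train_dataloader))
for k, v in batch.items():
    print(k, v.shape, v.type())
```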
As can be seen, we don't feed `input_ids` and `attention_mask` to the encoder (as would be the case for NLP models), but rather `past_values`, along with `past_observed_mask`, `past_time_features` and `static_real_features`.
The decoder inputs consist of `future_values`, `future_observed_mask` and `future_time_features`. The `future_values` can be seen as the equivalent of `decoder_input_ids` in NLP.
We refer to the docs for a detailed explanation for each of them.
Forward pass
Let's perform a single forward pass with the batch we just created:
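A sketch, reusing the batch from above (the model is still on the CPU at this point):

```python
# perform a forward pass; passing future_values makes the model return a loss
outputs = model(
    past_values=batch["past_values"],
    past_time_features=batch["past_time_features"],
    past_observed_mask=batch["past_observed_mask"],
    future_values=batch["future_values"],
    future_time_features=batch["future_time_features"],
    future_observed_mask=batch["future_observed_mask"],
)
print("Loss:", outputs.loss.item())
```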
Note that the model is returning a loss. This is possible as the decoder automatically shifts the `future_values` one position to the right in order to have the labels. This allows computing a loss between the predicted values and the labels. The loss is the negative log-likelihood of the predicted distribution with respect to the ground truth values, and tends to negative infinity.
Also note that the decoder uses a causal mask so as not to look into the future, since the values it needs to predict are in the `future_values` tensor.
Train the Model
It's time to train the model! We'll use a standard PyTorch training loop.
We will use the 🤗 Accelerate library here, which automatically places the model, optimizer and dataloader on the appropriate `device`.
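A sketch of the loop; the optimizer hyperparameters and the number of epochs are our choices:

```python
from accelerate import Accelerator
from torch.optim import AdamW

epochs = 25

accelerator = Accelerator()
device = accelerator.device

model.to(device)
optimizer = AdamW(model.parameters(), lr=6e-4, betas=(0.9, 0.95), weight_decay=1e-1)

model, optimizer, train_dataloader = accelerator.prepare(
    model, optimizer, train_dataloader
)

model.train()
for epoch in range(epochs):
    for idx, batch in enumerate(train_dataloader):
        optimizer.zero_grad()
        outputs = model(
            past_values=batch["past_values"].to(device),
            past_time_features=batch["past_time_features"].to(device),
            past_observed_mask=batch["past_observed_mask"].to(device),
            future_values=batch["future_values"].to(device),
            future_time_features=batch["future_time_features"].to(device),
            future_observed_mask=batch["future_observed_mask"].to(device),
        )
        loss = outputs.loss

        # backpropagate via Accelerate so device placement keeps working
        accelerator.backward(loss)
        optimizer.step()

        if idx % 100 == 0:
            print(loss.item())
```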
Inference
At inference time, it's recommended to use the `generate()` method for autoregressive generation, similar to NLP models.
Forecasting involves getting data from the test instance sampler, which will sample the very last `context_length` sized window of values from each time series in the dataset, and pass it to the model. Note that we pass `future_time_features`, which are known ahead of time, to the decoder.
The model will autoregressively sample a certain number of values from the predicted distribution and pass them back to the decoder to return the prediction outputs:
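A sketch of the generation loop, collecting the sample sequences per rolling window:

```python
model.eval()

forecasts_list = []
for batch in test_dataloader:
    outputs = model.generate(
        past_values=batch["past_values"].to(device),
        past_time_features=batch["past_time_features"].to(device),
        past_observed_mask=batch["past_observed_mask"].to(device),
        future_time_features=batch["future_time_features"].to(device),
    )
    forecasts_list.append(outputs.sequences.cpu().numpy())
```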
The model outputs a tensor of shape (`batch_size`, `number of samples`, `prediction length`, `input_size`).
In this case, we get `100` possible values for the next `48` hours for each of the `862` time series (for each example in the batch, which is of size `1` since we only have a single multivariate time series):
We'll stack them vertically, to get forecasts for all time-series in the test dataset (just in case there are more time series in the test set):
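For example:

```python
import numpy as np

forecasts = np.vstack(forecasts_list)
print(forecasts.shape)
```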
To plot the prediction for any time series variate with respect to the ground truth test data, we define the following helper:
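A sketch of such a helper (the indexing assumes the `forecasts` array and the grouped test dataset from the previous cells):

```python
import matplotlib.pyplot as plt
import pandas as pd


def plot(ts_index, mv_index):
    fig, ax = plt.subplots()

    test_entry = list(multi_variate_test_dataset)[ts_index]
    index = pd.period_range(
        start=test_entry["start"],
        periods=len(test_entry["target"][mv_index]),
        freq=freq,
    ).to_timestamp()

    # ground truth: the last two prediction windows of the chosen variate
    ax.plot(
        index[-2 * prediction_length :],
        test_entry["target"][mv_index, -2 * prediction_length :],
        label="actual",
    )

    # mean and spread of the sampled forecasts for this variate
    mean_forecast = forecasts[ts_index, ..., mv_index].mean(axis=0)
    std_forecast = forecasts[ts_index, ..., mv_index].std(axis=0)
    ax.plot(index[-prediction_length:], mean_forecast, label="mean")
    ax.fill_between(
        index[-prediction_length:],
        mean_forecast - std_forecast,
        mean_forecast + std_forecast,
        alpha=0.3,
        label="+/- 1 std",
    )
    ax.legend()
    fig.autofmt_xdate()
```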
For example:
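Plotting, say, the 345th variate of the first (and only) rolling window:

```python
plot(0, 344)
```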
Conclusion
How do we compare against other models? The Monash Time Series Repository has a comparison table of test set MASE metrics which we can add to:
|Dataset | SES| Theta | TBATS| ETS | (DHR-)ARIMA| PR| CatBoost | FFNN | DeepAR | N-BEATS | WaveNet| Transformer (uni.) | Informer (mv. our)|
|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|Traffic Hourly | 1.922 | 1.922 | 2.482 | 2.294 | 2.535 | 1.281 | 1.571 | 0.892 | 0.825 | 1.100 | 1.066 | 0.821 | 1.191 |
As can be seen, and perhaps surprising to some, the multivariate forecasts are typically worse than the univariate ones, the reason being the difficulty in estimating the cross-series correlations/relationships. The additional variance added by these estimates often harms the resulting forecasts, or the model learns spurious correlations. We refer to this paper for further reading. Multivariate models tend to work well when trained on a lot of data.
So the vanilla Transformer still performs best here! In the future, we hope to better benchmark these models in a central place to ease reproducing the results of several papers. Stay tuned for more!
Resources
We recommend checking out the Informer docs and the example notebook linked at the top of this blog post.