Using LSTM Encoder-Decoder in Univariate Horizon Style for Time Series Modeling

In time series analysis, various types of statistical and deep learning models can be used for modeling. Among deep learning models for time series, LSTM and RNN models have seen huge success thanks to their performance. In this article, we are going to discuss a model built from LSTM layers that combines two neural networks: the encoder-decoder model. We will use this model for the analysis of a univariate time series. The main points to be discussed in this article are listed below.
Contents
- Understanding the encoder-decoder model
- Why encoder-decoder for time series?
- Building an encoder-decoder with LSTM layers for time series forecasting
Understanding the encoder-decoder model
In machine learning, we have seen many types of neural networks, and the encoder-decoder model is one of them: an architecture in which recurrent neural networks are used to make predictions on sequential data such as text, images, and time series. Historically, these models were developed to solve machine translation problems, although they are now used for other sequential prediction problems such as text summarization and question answering.
When we talk about the architecture, we see that the whole model consists of two recurrent neural networks: one to encode the input sequence, and another to decode the encoded sequence into the target sequence. Encoder-decoder models appear in a variety of applications. Some of these are:
- Automatic translation
- Text summary
- Image processing
- Chatbots
- Time series forecasting
Before going deeper into the network, we should have some prior knowledge of RNN and LSTM models. Let us now look at the architecture of the encoder-decoder model.
[Image: architecture of an encoder-decoder model]
The image above is a representation of the architecture of an encoder-decoder model, where x is the input of the model and y is the output. Looking at the image, we can say that there are three main components in the architecture:
- Encoder
- Feature vector
- Decoder
Encoder: It receives the elements of the input sequence one by one at each time step, learns the information from the input, and propagates it for subsequent processes.
Feature vector: This is an internal and intermediate state which is responsible for keeping the sequential information of the input, which is useful for the decoder to make accurate predictions.
Decoder: The decoder part of the architecture is also an RNN model; it produces predictions by decoding the encoder's output back into a sequential format.
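To make these three components concrete, here is a minimal Keras sketch of the same three-part structure; the layer sizes and the 48-step window / 10-step horizon are placeholders, and the full model used in this article is built later.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, RepeatVector, TimeDistributed, Dense

n_steps_in, n_steps_out = 48, 10   # placeholder window and horizon lengths
sketch = Sequential([
    # Encoder: reads the input sequence and compresses it into a fixed-size state
    LSTM(64, input_shape=(n_steps_in, 1)),
    # Feature vector: the encoder's final state, repeated once per output step
    RepeatVector(n_steps_out),
    # Decoder: unrolls the feature vector back into an output sequence
    LSTM(64, return_sequences=True),
    # One dense output per decoded time step
    TimeDistributed(Dense(1)),
])
sketch.compile(optimizer="adam", loss="mse")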
Why encoder-decoder for time series?
Time series data is a type of sequential data, generated by collecting data points in a sequence along with their time values. In NLP data, we think of the words as the data points and their order as the sequence that carries meaning, so remembering the sequence is essential in NLP. Likewise, the sequence we have in a time series is important to learn in order to make the forecast more accurate.
As we saw in the points above, encoder-decoder models are very good with sequential data, and the reason for this capability is the LSTM or RNN layers of the network, which are designed specifically to work with sequences. With finely tuned LSTM layers, the entire network can work properly with the sequential information in the data simply by having the network remember the sequence. Because of this high performance on sequential data, we can use the encoder-decoder model with time series data.
Building an encoder-decoder with LSTM layers for time series forecasting
In the section above, we explained how the encoder-decoder model works well with sequential information and how a time series is made up of sequential data. This section of the article shows how we can use an encoder-decoder model for time series analysis, with LSTM layers used to build the encoder and the decoder.
Data processing
Our first step in this section covers some of the basic procedures before modeling, such as importing libraries and preprocessing the data.
Let’s start by importing libraries.
import pandas as pd
import numpy as np
import tensorflow as tf
from sklearn import preprocessing
import matplotlib.pyplot as plt
Now let's import the dataset. We use the Metro Interstate Traffic Volume data, which records the hourly Interstate 94 Westbound traffic volume for MN DoT ATR station 301, along with information on temperature, rain, snow, cloud cover, and weather. The dataset is available from the UCI Machine Learning Repository. Let's take a look at the first few data samples.
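The loading step itself is not shown, so here is a minimal sketch; the file name Metro_Interstate_Traffic_Volume.csv is an assumption and should point at wherever the download was saved.
data = pd.read_csv('Metro_Interstate_Traffic_Volume.csv')  # assumed file name
data['date_time'] = pd.to_datetime(data['date_time'])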
data.head()
Output:

Let’s take a look at the data description.
data.describe()
Output:

Here in the above output we can see the description of the data.
Before we move on to modeling, we need to do some data preprocessing, such as removing duplicate dates and holding out the last few observations for validation.
# drop rows with duplicate timestamps
data.drop_duplicates(subset=['date_time'], keep=False, inplace=True)
# hold out the last 10 observations for validation
validate = data['traffic_volume'].tail(10)
data = data.drop(data['traffic_volume'].tail(10).index)
Let's extract the traffic volume series.
uni_data = data['traffic_volume']
uni_data.index = data['date_time']
uni_data.head()
Output:

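The windowing step below expects a scaled array named x_rescaled, and the prediction section later reuses a fitted scaler named scaler_x; neither step is shown in the article, so here is a minimal sketch assuming min-max scaling of the series.
# Assumed scaling step: fit a min-max scaler on the series and keep it
# (as scaler_x) so predictions can be inverse-transformed later
scaler_x = preprocessing.MinMaxScaler()
x_rescaled = scaler_x.fit_transform(uni_data.values.reshape(-1, 1))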
We can create a function to prepare the windowed univariate data:
def custom_ts_univariate_data_prep(dataset, start, end, window, horizon):
  X = []
  y = []
  start = start + window
  if end is None:
    end = len(dataset) - horizon
  for i in range(start, end):
    # past `window` observations as input
    indicesx = range(i - window, i)
    X.append(np.reshape(dataset[indicesx], (window, 1)))
    # next `horizon` observations as target
    indicesy = range(i, i + horizon)
    y.append(dataset[indicesy])
  return np.array(X), np.array(y)
Let's prepare the univariate windowed data using the function.
univar_hist_window = 48
horizon = 10
TRAIN_SPLIT = 30000
x_train_uni, y_train_uni = custom_ts_univariate_data_prep(x_rescaled, 0, TRAIN_SPLIT, univar_hist_window, horizon)
x_val_uni, y_val_uni = custom_ts_univariate_data_prep(x_rescaled, TRAIN_SPLIT, None, univar_hist_window, horizon)
print(x_train_uni[0])
Output:

Here we can see a single window of past history. Let's take a look at the values in the target horizon.
print(y_train_uni[0])
Output:

In this article, we will use the LSTM layers provided by Keras to build the model, which requires the data to be batched as TensorFlow tensor slices.
BATCH_SIZE = 256
BUFFER_SIZE = 150
train_univariate = tf.data.Dataset.from_tensor_slices((x_train_uni, y_train_uni))
train_univariate = train_univariate.cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE).repeat()
val_univariate = tf.data.Dataset.from_tensor_slices((x_val_uni, y_val_uni))
val_univariate = val_univariate.batch(BATCH_SIZE).repeat()
Build the model
As we saw previously, encoder-decoder models are built from RNN or LSTM layers, and here we will use LSTM layers to build ours.
# create model
from keras.models import Sequential
from keras.layers import LSTM, RepeatVector, TimeDistributed, Dense

enco_deco = Sequential()
# Encoder
enco_deco.add(LSTM(100, input_shape=x_train_uni.shape[-2:], return_sequences=True))
enco_deco.add(LSTM(units=50, return_sequences=True))
enco_deco.add(LSTM(units=15))
# Feature vector
enco_deco.add(RepeatVector(y_train_uni.shape[1]))
# Decoder
enco_deco.add(LSTM(units=100, return_sequences=True))
enco_deco.add(LSTM(units=50, return_sequences=True))
enco_deco.add(TimeDistributed(Dense(units=1)))
Let’s compile the model.
enco_deco.compile(optimizer="adam", loss="mse")
Checking the model summary:
enco_deco.summary()
Output:

Here in summary, we can see the architecture of our encoder-decoder model. We are now ready to fit the model on the tensors we have prepared for modeling.
history = enco_deco.fit(train_univariate, epochs=150, steps_per_epoch=100, validation_data=val_univariate, validation_steps=50, verbose=1)
Output:

Here we have trained the model.
Make predictions
We can now use the model to make predictions for the future. But before that, we need to prepare the input data in the shape the model expects.
Let’s take some samples of the data.
uni = data['traffic_volume']
validatehori = uni.tail(48)
Scaling and reshaping of samples:
validatehist = validatehori.values
# reuse the scaler fitted on the training series so the scaling is consistent
val_rescaled = scaler_x.transform(validatehist.reshape(-1, 1))
val_rescaled = val_rescaled.reshape((1, val_rescaled.shape[0], 1))
We are now ready to do the prediction on the samples from the data.
Predicted_results = enco_deco.predict(val_rescaled)
Predicted_results
Output:

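The metrics below compare the held-out values against Predicted_inver_res, which is never defined above; a minimal sketch, assuming it is the prediction mapped back to the original scale with the fitted scaler_x, is:
# Assumed inverse-scaling step: map the scaled predictions back to
# raw traffic-volume units so they are comparable with `validate`
Predicted_inver_res = scaler_x.inverse_transform(Predicted_results.reshape(-1, 1)).flatten()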
Let's compute some evaluation metrics so that we have a measure of the model's performance.
from sklearn import metrics
print('Evaluation metric results:-')
print(f'MSE is : {metrics.mean_squared_error(validate, Predicted_inver_res)}')
print(f'MAE is : {metrics.mean_absolute_error(validate, Predicted_inver_res)}')
print(f'RMSE is : {np.sqrt(metrics.mean_squared_error(validate, Predicted_inver_res))}')
print(f'MAPE is : {metrics.mean_absolute_percentage_error(validate, Predicted_inver_res)}')
print(f'R2 is : {metrics.r2_score(validate, Predicted_inver_res)}', end='\n\n')
Output:

Also, we can plot the actual values against the predicted values:
plt.rcParams["figure.figsize"] = [16, 9]   # set the size before creating the figure
plt.plot(list(validate))
plt.plot(list(Predicted_inver_res))
plt.title("Actual vs Predicted")
plt.ylabel("Traffic volume")
plt.legend(('Actual', 'Predicted'))
plt.show()
Output:

Here we can see that the model's performance is quite satisfactory: the graph shows that the slopes of the actual and predicted values are similar, and the values themselves are very close. Adding more layers to the network may provide better results.
Final words
In this article, we gave an overview of the encoder-decoder model and discussed how it can be successful in time series modeling. We also walked through the implementation of an encoder-decoder model for univariate, horizon-style time series forecasting.