GitHub Repository: ibm/watson-machine-learning-samples
Path: blob/master/cloud/notebooks/python_sdk/deployments/foundation_models/Use Time Series Foundation Models and timeseries data to predict energy demand.ipynb

Use Time Series Foundation Models and time series data to predict energy demand

Disclaimers

  • Use only Projects and Spaces that are available in watsonx context.

Notebook content

This notebook demonstrates the use of a pre-trained time series foundation model for multivariate forecasting tasks and showcases the variety of features available in Time Series Foundation Models.

Some familiarity with Python is helpful. This notebook uses Python 3.11.

Learning goals

The learning goals of this notebook are:

  • To explore Time Series Foundation Models

  • To initialize the model

  • To forecast based on historical data

Contents

This notebook contains the following parts:

  1. Setup

  2. Time series dataset

  3. Time Series Foundation Models in watsonx.ai

  4. Initialize the TSModelInference class

  5. Forecast

  6. Summary and next steps

1. Set up the environment

Before you use the sample code in this notebook, you must perform the following setup tasks:

Install and import the ibm-watsonx-ai package and its dependencies

Note: ibm-watsonx-ai documentation can be found here.

!pip install wget | tail -n 1
!pip install -U matplotlib | tail -n 1
!pip install -U "ibm-watsonx-ai>=1.1.24" | tail -n 1

Connection to watsonx.ai Runtime

Authenticate the watsonx.ai Runtime service on IBM Cloud. You need to provide your Cloud API key and location.

Tip: Your Cloud API key can be generated by going to the Users section of the Cloud console. From that page, click your name, scroll down to the API Keys section, and click Create an IBM Cloud API key. Give your key a name and click Create, then copy the created key and paste it below. You can also get a service specific url by going to the Endpoint URLs section of the watsonx.ai Runtime docs. You can check your instance location in your watsonx.ai Runtime Service instance details.

You can use IBM Cloud CLI to retrieve the instance location.

ibmcloud login --apikey API_KEY -a https://cloud.ibm.com
ibmcloud resource service-instance INSTANCE_NAME

NOTE: You can also get a service specific apikey by going to the Service IDs section of the Cloud Console. From that page, click Create, and then copy the created key and paste it in the following cell.

Action: Enter your api_key and location in the following cell.

import getpass

from ibm_watsonx_ai import Credentials

credentials = Credentials(
    url="https://us-south.ml.cloud.ibm.com",
    api_key=getpass.getpass("Please enter your watsonx.ai api key (hit enter): "),
)

from ibm_watsonx_ai import APIClient

client = APIClient(credentials)

Action: Assign project ID below

import os

try:
    project_id = os.environ["PROJECT_ID"]
except KeyError:
    project_id = input("Please enter your project_id (hit enter): ")

To be able to interact with all resources available in watsonx.ai Runtime, you need to set the project which you will be using.

client.set.default_project(project_id)
'SUCCESS'

Training dataset

This tutorial uses the Hourly energy demand dataset, which contains four years of electrical consumption and generation data for Spain. It is a modified version of the Hourly energy demand generation and weather dataset. For simplicity, the dataset has been prepared to have no missing values and no irrelevant columns.

import os

import wget

filename = 'energy_dataset.csv'
base_url = 'https://github.com/IBM/watson-machine-learning-samples/raw/refs/heads/master/cloud/data/energy/'

if not os.path.isfile(filename):
    wget.download(base_url + filename)

import pandas as pd

df = pd.read_csv(filename)

Show the last few rows of the dataset.

df.tail()

Describe the data.

df.describe()

Split the data

The purpose of this notebook is to demonstrate the core functionality of features available in Time Series Foundation Models. The selected model, ibm/granite-ttm-512-96-r2, requires a minimum context length of 512. Therefore, the last 608 rows of the dataset are split into a historical dataset of 512 rows, while the final 96 rows are held out to check the consistency of the predictions.

timestamp_column = "time"
target_column = "total load actual"
context_length = 608
future_context = 96

# Use the last `context_length` rows: all but the final `future_context`
# rows form the history; the final 96 rows are the hold-out window.
future_data = df.iloc[-future_context:]
data = df.iloc[-context_length:-future_context]
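The slicing above can be sanity-checked on a synthetic frame of the same shape. In this sketch, `df` is a stand-in with hourly timestamps rather than the downloaded energy dataset, but the split logic and the expected row counts are the same:

```python
import pandas as pd

# Synthetic stand-in for the energy dataset: 1000 hourly rows.
df = pd.DataFrame({
    "time": pd.date_range("2018-01-01", periods=1000, freq="h"),
    "total load actual": range(1000),
})

context_length = 608
future_context = 96

# Same slicing as above: history ends where the hold-out window begins.
future_data = df.iloc[-future_context:]
data = df.iloc[-context_length:-future_context]

print(len(data))         # 512 rows of history, the model's minimum context
print(len(future_data))  # 96 rows held out for comparison
```

The two windows are contiguous: the last historical row is immediately followed by the first hold-out row, so the model forecasts exactly the period covered by `future_data`.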

Visualize the data

import matplotlib.pyplot as plt
import numpy as np

plt.figure(figsize=(10, 2))
plt.plot(np.asarray(data[timestamp_column], dtype='datetime64[s]'), data[target_column])
plt.title("Actual Total Load")
plt.show()
[Figure: Actual Total Load over the 512-row historical window]

Time Series Foundation Models in watsonx.ai

List available models

for model in client.foundation_models.get_time_series_model_specs()["resources"]:
    print('--------------------------------------------------')
    print(f'model_id: {model["model_id"]}')
    print(f'functions: {model["functions"]}')
    print(f'long_description: {model["long_description"]}')
    print(f'label: {model["label"]}')
--------------------------------------------------
model_id: ibm/granite-ttm-1024-96-r2
functions: [{'id': 'time_series_forecast'}]
long_description: TinyTimeMixers (TTMs) are compact pre-trained models for Multivariate Time-Series Forecasting, open-sourced by IBM Research. Given the last 1024 time-points (i.e. context length), this model can forecast up to next 96 time-points (i.e. forecast length) in future. This model is targeted towards a forecasting setting of context length 1024 and forecast length 96 and recommended for hourly and minutely resolutions (Ex. 10 min, 15 min, 1 hour, etc)
label: granite-ttm-1024-96-r2
--------------------------------------------------
model_id: ibm/granite-ttm-1536-96-r2
functions: [{'id': 'time_series_forecast'}]
long_description: TinyTimeMixers (TTMs) are compact pre-trained models for Multivariate Time-Series Forecasting, open-sourced by IBM Research. Given the last 1536 time-points (i.e. context length), this model can forecast up to next 96 time-points (i.e. forecast length) in future. This model is targeted towards a forecasting setting of context length 1536 and forecast length 96 and recommended for hourly and minutely resolutions (Ex. 10 min, 15 min, 1 hour, etc)
label: granite-ttm-1536-96-r2
--------------------------------------------------
model_id: ibm/granite-ttm-512-96-r2
functions: [{'id': 'time_series_forecast'}]
long_description: TinyTimeMixers (TTMs) are compact pre-trained models for Multivariate Time-Series Forecasting, open-sourced by IBM Research. Given the last 512 time-points (i.e. context length), this model can forecast up to next 96 time-points (i.e. forecast length) in future. This model is targeted towards a forecasting setting of context length 512 and forecast length 96 and recommended for hourly and minutely resolutions (Ex. 10 min, 15 min, 1 hour, etc)
label: granite-ttm-512-96-r2

Defining the model

You need to specify the model_id that will be used for inferencing:

ts_model_id = client.foundation_models.TimeSeriesModels.GRANITE_TTM_512_96_R2

Initialize the TSModelInference class.

TSModelInference is a wrapper for time series models available from watsonx.ai, designed to forecast future values based on historical data.

from ibm_watsonx_ai.foundation_models import TSModelInference

ts_model = TSModelInference(
    model_id=ts_model_id,
    api_client=client
)

Defining the model parameters

We need to provide a set of model parameters that will influence the result:

from ibm_watsonx_ai.foundation_models.schema import TSForecastParameters

forecasting_params = TSForecastParameters(
    timestamp_column=timestamp_column,
    freq="1h",
    target_columns=[target_column],
)
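Before calling the model, it is worth confirming that the declared frequency actually matches the cadence of the data. A quick check with pandas (a sketch; the `timestamps` series here is a synthetic hourly stand-in for the notebook's `time` column):

```python
import pandas as pd

# Synthetic hourly timestamps standing in for df[timestamp_column].
timestamps = pd.date_range("2018-01-01", periods=48, freq="h")

# infer_freq returns an offset alias ('h' or 'H', depending on the pandas
# version) when the series is perfectly regular, and None otherwise.
inferred = pd.infer_freq(timestamps)
print(inferred)
```

A return value of None would indicate gaps or irregular spacing, which should be repaired before declaring `freq="1h"` to the model.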

Forecasting

Call the forecast() method to predict electricity consumption.

results = ts_model.forecast(data=data, params=forecasting_params)['results'][0]

Plot predictions along with the historical data.

plt.figure(figsize=(10, 2))
plt.plot(np.asarray(data[timestamp_column], dtype='datetime64[s]'), data[target_column], label="Historical data")
plt.plot(np.asarray(results[timestamp_column], dtype='datetime64[s]'), results[target_column], label="Predicted")
plt.plot(np.asarray(future_data[timestamp_column], dtype='datetime64[s]'), future_data[target_column], label="True", linestyle='dashed')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.show()
[Figure: historical data, predicted values, and true hold-out values of total load]
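Since the last 96 rows were held out, forecast quality can also be quantified numerically, for example with the mean absolute percentage error. A minimal sketch (the `y_true` and `y_pred` arrays are illustrative stand-ins for `future_data[target_column]` and `results[target_column]`):

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

# Illustrative stand-in values; in the notebook these would be
# future_data[target_column] and results[target_column].
y_true = [25000.0, 26000.0, 27000.0, 26500.0]
y_pred = [24500.0, 26500.0, 26800.0, 26900.0]

print(round(mape(y_true, y_pred), 2))  # → 1.54
```

MAPE is a reasonable choice here because the load values are strictly positive; for series that cross zero, an absolute metric such as MAE or RMSE would be safer.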

6. Summary and next steps

You successfully completed this notebook!

You learned how to use Time Series Foundation Models in real-life applications.

Check out our Online Documentation for more samples, tutorials, documentation, how-tos, and blog posts.

Authors

Aleksandra Kłeczek, Software Engineer at watsonx.ai.

Copyright © 2024-2025 IBM. This notebook and its source code are released under the terms of the MIT License.