
Use Time Series Foundation Models and time series data to predict energy demand

Disclaimers

  • Use only Projects and Spaces that are available in watsonx context.

  • Time Series Foundation Models have been supported since the CPD 5.1.1 release.

Notebook content

This notebook demonstrates the use of a pre-trained time series foundation model for multivariate forecasting tasks and showcases the variety of features available in Time Series Foundation Models.

Some familiarity with Python is helpful. This notebook uses Python 3.11.

Learning goals

The learning goals of this notebook are:

  • To explore Time Series Foundation Models

  • To initialize the model

  • To forecast based on historical data

Contents

This notebook contains the following parts:

  1. Setup

  2. Time series dataset

  3. Time Series Foundation Models in watsonx.ai

  4. Initialize the TSModelInference class.

  5. Forecast

  6. Summary and next steps

Set up the environment

Before you use the sample code in this notebook, you must perform the following setup tasks:

  • Contact your Cloud Pak for Data administrator and ask them for your account credentials

Install and import the ibm-watsonx-ai package and its dependencies

Note: ibm-watsonx-ai documentation can be found here.

!pip install wget | tail -n 1
!pip install -U matplotlib | tail -n 1
!pip install -U "ibm-watsonx-ai>=1.1.24" | tail -n 1

Connection to WML

Authenticate the Watson Machine Learning service on IBM Cloud Pak for Data. You need to provide the platform url, your username, and your api_key.

username = 'PASTE YOUR USERNAME HERE'
api_key = 'PASTE YOUR API_KEY HERE'
url = 'PASTE THE PLATFORM URL HERE'

from ibm_watsonx_ai import Credentials

credentials = Credentials(
    username=username,
    api_key=api_key,
    url=url,
    instance_id="openshift",
    version="5.1"
)

Alternatively, you can use your username and password to authenticate WML services.

credentials = Credentials(
    username=***,
    password=***,
    url=***,
    instance_id="openshift",
    version="5.1"
)

Working with projects

First, you need to create a project that will be used for your work. If you do not have a project already created, follow the steps below.

  • Open IBM Cloud Pak main page

  • Click all projects

  • Create an empty project

  • Copy the project_id from the URL and paste it below

Action: Assign project ID below

import os

try:
    project_id = os.environ["PROJECT_ID"]
except KeyError:
    project_id = input("Please enter your project_id (hit enter): ")

Create an instance of APIClient with authentication details.

from ibm_watsonx_ai import APIClient

client = APIClient(credentials)

To be able to interact with all resources available in WML services, you need to set the project which you will be using.

client.set.default_project(project_id)
'SUCCESS'

Training dataset

This tutorial uses the Hourly energy demand dataset, which contains four years of electrical consumption and generation data for Spain. It is a modified version of the Hourly energy demand generation and weather dataset. For simplicity, the dataset has been prepared to have no missing values and no irrelevant columns.

import os, wget

filename = 'energy_dataset.csv'
base_url = 'https://github.com/IBM/watson-machine-learning-samples/raw/refs/heads/master/cloud/data/energy/'

if not os.path.isfile(filename):
    wget.download(base_url + filename)

import pandas as pd

df = pd.read_csv(filename)

Show the last few rows of the dataset.

df.tail()

Describe the data.

df.describe()
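The dataset was prepared to have no missing values; if you want to confirm that for yourself, a quick optional check with pandas (a minimal sketch) is:

# Optional: confirm the prepared dataset has no missing values.
print(df.isna().sum())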

Split the data

The purpose of this notebook is to demonstrate the core functionality of features available in Time Series Foundation Models. The selected model, ibm/granite-ttm-512-96-r2, requires a minimum context length of 512. Therefore, the dataset will be split into a historical window of 512 rows, while the following 96 rows will be held out to check the consistency of the predictions.

timestamp_column = "time" target_column = "total load actual" context_length = 608 future_context = 96
# Only use the last `context_length` rows for prediction. future_data = df.iloc[-future_context:,] data = df.iloc[-context_length:-future_context,]
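As a quick optional sanity check (a small sketch), the split above should leave exactly 512 historical rows for the model's context and 96 held-out rows for validation:

# Optional: verify the sizes of the historical and hold-out windows.
assert len(data) == context_length - future_context == 512
assert len(future_data) == future_context == 96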

Visualize the data

import matplotlib.pyplot as plt
import numpy as np

plt.figure(figsize=(10, 2))
plt.plot(np.asarray(data[timestamp_column], dtype='datetime64[s]'), data[target_column])
plt.title("Actual Total Load")
plt.show()
[Plot: Actual Total Load over the 512-row historical window]

Time Series Foundation Models in watsonx.ai

List available models

for model in client.foundation_models.get_time_series_model_specs()["resources"]:
    print('--------------------------------------------------')
    print(f'model_id: {model["model_id"]}')
    print(f'functions: {model["functions"]}')
    print(f'long_description: {model["long_description"]}')
    print(f'label: {model["label"]}')
--------------------------------------------------
model_id: ibm/granite-ttm-1024-96-r2
functions: [{'id': 'time_series_forecast'}]
long_description: TinyTimeMixers (TTMs) are compact pre-trained models for Multivariate Time-Series Forecasting, open-sourced by IBM Research. Given the last 1024 time-points (i.e. context length), this model can forecast up to next 96 time-points (i.e. forecast length) in future. This model is targeted towards a forecasting setting of context length 1024 and forecast length 96 and recommended for hourly and minutely resolutions (Ex. 10 min, 15 min, 1 hour, etc)
label: granite-ttm-1024-96-r2
--------------------------------------------------
model_id: ibm/granite-ttm-1536-96-r2
functions: [{'id': 'time_series_forecast'}]
long_description: TinyTimeMixers (TTMs) are compact pre-trained models for Multivariate Time-Series Forecasting, open-sourced by IBM Research. Given the last 1536 time-points (i.e. context length), this model can forecast up to next 96 time-points (i.e. forecast length) in future. This model is targeted towards a forecasting setting of context length 1536 and forecast length 96 and recommended for hourly and minutely resolutions (Ex. 10 min, 15 min, 1 hour, etc)
label: granite-ttm-1536-96-r2
--------------------------------------------------
model_id: ibm/granite-ttm-512-96-r2
functions: [{'id': 'time_series_forecast'}]
long_description: TinyTimeMixers (TTMs) are compact pre-trained models for Multivariate Time-Series Forecasting, open-sourced by IBM Research. Given the last 512 time-points (i.e. context length), this model can forecast up to next 96 time-points (i.e. forecast length) in future. This model is targeted towards a forecasting setting of context length 512 and forecast length 96 and recommended for hourly and minutely resolutions (Ex. 10 min, 15 min, 1 hour, etc)
label: granite-ttm-512-96-r2

Defining the model

You need to specify the model_id that will be used for inferencing:

ts_model_id = client.foundation_models.TimeSeriesModels.GRANITE_TTM_512_96_R2

Initialize the TSModelInference class.

TSModelInference is a wrapper for time series models available from watsonx.ai, designed to forecast future values based on historical data.

from ibm_watsonx_ai.foundation_models import TSModelInference

ts_model = TSModelInference(
    model_id=ts_model_id,
    api_client=client
)

Defining the model parameters

We need to provide a set of model parameters that will influence the result:

from ibm_watsonx_ai.foundation_models.schema import TSForecastParameters

forecasting_params = TSForecastParameters(
    timestamp_column=timestamp_column,
    freq="1h",
    target_columns=[target_column],
)
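Because the model is multivariate, target_columns accepts more than one series. The sketch below only illustrates the shape of such a call; the second column name is a placeholder and must be replaced with a numeric column that actually exists in your dataframe:

# Sketch only: joint forecasting of several columns. "ANOTHER NUMERIC COLUMN"
# is a placeholder, not a real column in the prepared CSV.
multivariate_params = TSForecastParameters(
    timestamp_column=timestamp_column,
    freq="1h",
    target_columns=[target_column, "ANOTHER NUMERIC COLUMN"],
)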

Forecasting

Call the forecast() method to predict electricity consumption.

results = ts_model.forecast(data=data, params=forecasting_params)['results'][0]
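The results entry is a dictionary keyed by column name (the timestamp column plus each forecasted target column), which is how it is indexed in the plot below. Inspecting its keys is a quick way to see what came back:

# Inspect the structure of the forecast response.
print(list(results.keys()))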

Plot predictions along with the historical data.

plt.figure(figsize=(10, 2))
plt.plot(np.asarray(data[timestamp_column], dtype='datetime64[s]'), data[target_column], label="Historical data")
plt.plot(np.asarray(results[timestamp_column], dtype='datetime64[s]'), results[target_column], label="Predicted")
plt.plot(np.asarray(future_data[timestamp_column], dtype='datetime64[s]'), future_data[target_column], label="True", linestyle='dashed')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.show()
[Plot: historical data, predicted values, and true values over the 96-step forecast horizon]
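Beyond the visual comparison, you can score the forecast against the 96 held-out rows. This is a minimal sketch, assuming the predicted values align one-to-one with the rows of future_data:

# Optional: quantify forecast accuracy on the held-out window.
y_true = np.asarray(future_data[target_column], dtype=float)
y_pred = np.asarray(results[target_column], dtype=float)

mae = np.mean(np.abs(y_true - y_pred))
mape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100

print(f"MAE:  {mae:.2f}")
print(f"MAPE: {mape:.2f}%")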

Summary and next steps

You successfully completed this notebook!

You learned how to use Time Series Foundation Models in real-life applications.

Check out our Online Documentation for more samples, tutorials, documentation, how-tos, and blog posts.

Authors

Aleksandra Kłeczek, Software Engineer at watsonx.ai.

Copyright © 2025 IBM. This notebook and its source code are released under the terms of the MIT License.