
Use Time Series BYOM and time series data to predict energy demand

Disclaimers

  • Use only Projects and Spaces that are available in watsonx context.

Notebook content

This notebook demonstrates how to deploy a custom model for time series, use it for multivariate forecasting tasks, and explore the variety of features available in Time Series Foundation Models.

Some familiarity with Python is helpful. This notebook uses Python 3.11.

Learning goals

The learning goals of this notebook are:

  • Store a TS model previously uploaded to your Cloud Object Storage (COS)

  • Deploy the TS model

  • Explore Time Series BYOM

  • Forecast based on historical data

Contents

This notebook contains the following parts:

  1. Setup

  2. Deploy model

  3. Prepare dataset

  4. Initialize forecast

  5. Forecast

  6. Summary

1. Set up the environment

Before you use the sample code in this notebook, you must perform the following setup tasks:

Install and import the ibm-watsonx-ai package and its dependencies

Note: ibm-watsonx-ai documentation can be found here.

!pip install wget | tail -n 1
!pip install -U matplotlib | tail -n 1
!pip install -U "ibm-watsonx-ai>=1.1.31" | tail -n 1
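Optionally, you can verify the installed version; this small check assumes the package exposes the standard __version__ attribute.

# Optional: confirm that the installed ibm-watsonx-ai version meets the ">=1.1.31" requirement above.
import ibm_watsonx_ai

print(ibm_watsonx_ai.__version__)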

Defining the watsonx.ai credentials

This cell defines the watsonx.ai credentials required to work with watsonx Foundation Model inferencing.

Action: Provide the IBM Cloud user API key. For details, see documentation.

import getpass

from ibm_watsonx_ai import Credentials

credentials = Credentials(
    url="https://us-south.ml.cloud.ibm.com",
    api_key=getpass.getpass("Please enter your watsonx.ai api key (hit enter): "),
)

Defining the project id

The deployment of a Time Series FM requires a project id, which provides the context for the call. If this notebook runs inside a watsonx project, you can use that project's id; otherwise, provide the project id of your choice when prompted below.

from ibm_watsonx_ai import APIClient

client = APIClient(
    credentials,
    project_id=getpass.getpass("Please enter your project_id (hit enter): "),
)

2. Deploy the model

Once you have initialized the client, it is time to deploy the custom model. Provide the information below about where your model is stored. If you have not yet created a connection to your data source, create one first (a sketch follows the cell below).

  • CONNECTION_ID - connection asset ID for your data source, which stores the custom foundation model files.

  • BUCKET_NAME - bucket that stores your custom foundation model files.

  • FILE_PATH - path to your custom foundation model files. In most cases an empty string - "".

CONNECTION_ID="PASTE YOUR CONNECTON ID HERE", BUCKET_NAME="PASTE YOUR BUCKET NAME HERE", FILE_PATH="PASTE YOUR FILE PATH HERE"

Create custom model repository

sw_spec_id = client.software_specifications.get_id_by_name("watsonx-tsfm-runtime-1.0")

metadata = {
    client.repository.ModelMetaNames.NAME: "time series FM",
    client.repository.ModelMetaNames.SOFTWARE_SPEC_ID: sw_spec_id,
    client.repository.ModelMetaNames.TYPE: "custom_foundation_model_1.0",
    client.repository.ModelMetaNames.FOUNDATION_MODEL: {
        "model_id": "my_test_time_series_model",
    },
    client.repository.ModelMetaNames.MODEL_LOCATION: {
        "connection_id": CONNECTION_ID,
        "bucket": BUCKET_NAME,
        "file_path": FILE_PATH,
    },
}

model = "my_test_time_series_model"

stored_base_model_details = client.repository.store_model(
    model=model, meta_props=metadata
)

stored_model_id = stored_base_model_details["metadata"]["id"]
stored_model_id
'209931e6-55c3-490b-ad2f-96cbfc3b5501'
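You can optionally retrieve the stored model details to confirm the asset type and model location; the field access below assumes the usual details layout returned by the repository API.

# Optional: inspect the stored model asset.
stored_model_details = client.repository.get_model_details(stored_model_id)
stored_model_details["entity"]["type"]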

Perform custom model deployment

meta_props = {
    client.deployments.ConfigurationMetaNames.NAME: "custom_tsfm_python_sdk",
    client.deployments.ConfigurationMetaNames.ONLINE: {
        "parameters": {
            "serving_name": "custom_tsfm_python_sdk",
            "foundation_model": {},
        },
    },
    client.deployments.ConfigurationMetaNames.HARDWARE_SPEC: {
        "name": "S",
        "num_nodes": 1,
    },
}
base_model_deployment_details = client.deployments.create(stored_model_id, meta_props)
######################################################################################
Synchronous deployment creation for id: '209931e6-55c3-490b-ad2f-96cbfc3b5501' started
######################################################################################

initializing.......
Note: Deployment of custom foundation model with architecture tinytimemixer is not supported. Deployment might fail.
........................
ready

-----------------------------------------------------------------------------------------------
Successfully finished deployment creation, deployment_id='3c4af901-4460-4bf7-a274-670ff4a33f8d'
-----------------------------------------------------------------------------------------------
deployment_id = base_model_deployment_details["metadata"]["id"]
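If you return to the notebook later or the creation call was interrupted, you can check the deployment state from its details; the field access below assumes the usual deployment details layout.

# Optional: confirm the deployment reached the "ready" state before scoring.
deployment_details = client.deployments.get_details(deployment_id)
deployment_details["entity"]["status"]["state"]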

3. Prepare training dataset

This tutorial uses the Hourly energy demand dataset, which contains four years of electrical consumption and generation data for Spain. It is a modified version of the Hourly energy demand generation and weather dataset. For simplicity, the dataset has been prepared to have no missing values and no irrelevant columns.

import os

import wget

filename = "energy_dataset.csv"
base_url = "https://github.com/IBM/watsonx-ai-samples/raw/refs/heads/master/cloud/data/energy/"

if not os.path.isfile(filename):
    wget.download(base_url + filename)

import pandas as pd

df = pd.read_csv(filename)

Show the last few rows of the dataset.

df.tail()

Describe the data.

df.describe()

Split the data

The purpose of this notebook is to demonstrate the core functionality of the features available in Time Series Foundation Models. The selected model requires a minimum context length of 1024. Therefore, the dataset is split into a historical part of 1224 rows (above the 1024-row minimum), while the final 96 rows are held out to check the consistency of the predictions.

timestamp_column = "time" target_column = "total load actual" context_length = 1320 future_context = 96
# Only use the last `context_length` rows for prediction. future_data = df.iloc[-future_context:,] data = df.iloc[-context_length:-future_context,]
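A quick sanity check of the split sizes: the historical part should exceed the model's 1024-row minimum and the held-out part should contain 96 rows.

# Sanity check the split sizes.
print(len(data), len(future_data))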

Visualize the data

import matplotlib.pyplot as plt
import numpy as np

plt.figure(figsize=(20, 2))
plt.plot(np.asarray(data[timestamp_column], "datetime64[s]"), data[target_column])
plt.title("Actual Total Load")
plt.show()
[Plot: Actual Total Load over the historical window]

4. Initialize the TSModelInference class

TSModelInference is a wrapper for time series models available from watsonx.ai, designed to forecast future values based on historical data.

from ibm_watsonx_ai.foundation_models import TSModelInference

ts_model = TSModelInference(deployment_id=deployment_id, api_client=client)

Defining the model parameters

We need to provide a set of model parameters that will influence the result:

from ibm_watsonx_ai.foundation_models.schema import TSForecastParameters

forecasting_params = TSForecastParameters(
    timestamp_column=timestamp_column,
    freq="1h",
    target_columns=[target_column],
)

5. Forecasting

Call the forecast() method to predict energy consumption.

results = ts_model.forecast(data=data, params=forecasting_params)["results"][0]
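Besides the visual comparison below, you can compute a simple numeric error against the held-out rows. This is a small sketch, not part of the original deployment flow, and it assumes the forecast values align step by step with future_data.

# Compare the forecast with the held-out values using mean absolute error (MAE).
import numpy as np

predicted = np.asarray(results[target_column], dtype=float)
actual = np.asarray(future_data[target_column], dtype=float)
n = min(len(predicted), len(actual))  # guard in case the forecast horizon differs from 96
mae = np.abs(predicted[:n] - actual[:n]).mean()
print(f"MAE over the first {n} forecast steps: {mae:.2f}")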

Plot predictions along with the historical data.

plt.figure(figsize=(20, 2))
plt.plot(
    np.asarray(data[timestamp_column], dtype="datetime64[s]"),
    data[target_column],
    label="Historical data",
)
plt.plot(
    np.asarray(results[timestamp_column], dtype="datetime64[s]"),
    results[target_column],
    label="Predicted",
)
plt.plot(
    np.asarray(future_data[timestamp_column], dtype="datetime64[s]"),
    future_data[target_column],
    label="True",
    linestyle="dashed",
)
plt.legend(loc="center left", bbox_to_anchor=(1, 0.5))
plt.show()
[Plot: Historical data, Predicted, and True (held-out) total load]

6. Summary and next steps

You successfully completed this notebook!

You learned how to use Time Series BYOM in real-life applications.

Check out our Online Documentation for more samples, tutorials, documentation, how-tos, and blog posts.

Authors

Filip Żmijewski, Software Engineer at watsonx.ai.

Copyright © 2025 IBM. This notebook and its source code are released under the terms of the MIT License.