Copyright 2020 The TensorFlow IO Authors.
Robust machine learning on streaming data using Kafka and TensorFlow-IO
Overview
This tutorial focuses on streaming data from a Kafka cluster into a tf.data.Dataset which is then used in conjunction with tf.keras for training and inference.
Kafka is primarily a distributed event-streaming platform which provides scalable and fault-tolerant delivery of streaming data across data pipelines. It is an essential technical component of many major enterprises where mission-critical data delivery is a primary requirement.
NOTE: A basic understanding of the Kafka components will help you follow the tutorial with ease.
NOTE: A Java runtime environment is required to run this tutorial.
Setup
Install the required tensorflow-io and kafka packages
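For instance, in a notebook environment the packages can be installed with shell commands (a minimal sketch; pin versions as needed):

```python
!pip install tensorflow-io
!pip install kafka-python
```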
Import packages
Validate tf and tfio imports
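A quick sanity check might look like the following:

```python
import pandas as pd
import tensorflow as tf
import tensorflow_io as tfio

# If the imports resolve, print the versions to confirm the setup.
print("tensorflow-io version: {}".format(tfio.__version__))
print("tensorflow version: {}".format(tf.__version__))
```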
Download and set up Kafka and Zookeeper instances
For demo purposes, the following instances are set up locally:
Kafka (Brokers: 127.0.0.1:9092)
Zookeeper (Node: 127.0.0.1:2181)
The default configurations (provided by Apache Kafka) are used for spinning up the instances.
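A sketch of the setup (the Kafka version and download URL below are assumptions; substitute a current release as needed):

```python
# Download and extract a Kafka release (the 2.x line bundles Zookeeper).
# NOTE: the version and mirror below are assumptions; adjust as needed.
!curl -sSOL https://downloads.apache.org/kafka/2.7.0/kafka_2.13-2.7.0.tgz
!tar -xzf kafka_2.13-2.7.0.tgz

# Start Zookeeper and Kafka as daemon processes with the default configs.
!./kafka_2.13-2.7.0/bin/zookeeper-server-start.sh -daemon ./kafka_2.13-2.7.0/config/zookeeper.properties
!./kafka_2.13-2.7.0/bin/kafka-server-start.sh -daemon ./kafka_2.13-2.7.0/config/server.properties
!sleep 10  # give the services a few seconds to come up
```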
Once the instances are started as daemon processes, grep for kafka in the process list. The two Java processes correspond to the Zookeeper and Kafka instances.
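For example:

```python
!ps -ef | grep kafka
```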
Create the Kafka topics with the following specs (see the CLI sketch after this list):
susy-train: partitions=1, replication-factor=1
susy-test: partitions=2, replication-factor=1
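A sketch using the kafka-topics.sh CLI bundled with the release (paths follow the layout from the setup step above):

```python
!./kafka_2.13-2.7.0/bin/kafka-topics.sh --create --bootstrap-server 127.0.0.1:9092 --replication-factor 1 --partitions 1 --topic susy-train
!./kafka_2.13-2.7.0/bin/kafka-topics.sh --create --bootstrap-server 127.0.0.1:9092 --replication-factor 1 --partitions 2 --topic susy-test
```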
Describe the topic for details on the configuration
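```python
!./kafka_2.13-2.7.0/bin/kafka-topics.sh --describe --bootstrap-server 127.0.0.1:9092 --topic susy-train
!./kafka_2.13-2.7.0/bin/kafka-topics.sh --describe --bootstrap-server 127.0.0.1:9092 --topic susy-test
```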
A replication factor of 1 indicates that the data is not being replicated, due to the presence of a single broker in our Kafka setup. In production systems, the number of bootstrap servers can be in the range of hundreds of nodes. That is where fault tolerance using replication comes into the picture.
Please refer to the docs for more details.
SUSY Dataset
Kafka, being an event-streaming platform, enables data from various sources to be written into it. For instance:
Web traffic logs
Astronomical measurements
IoT sensor data
Product reviews and many more.
For the purpose of this tutorial, let's download the SUSY dataset and feed the data into Kafka manually. The goal of this classification problem is to distinguish between a signal process which produces supersymmetric particles and a background process which does not.
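For example (assuming the dataset is still hosted at the UCI mirror below):

```python
!curl -sSOL https://archive.ics.uci.edu/ml/machine-learning-databases/00279/SUSY.csv.gz
```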
Explore the dataset
The first column is the class label (1 for signal, 0 for background), followed by the 18 features (8 low-level features then 10 high-level features). The first 8 features are kinematic properties measured by the particle detectors in the accelerator. The last 10 features are functions of the first 8 features. These are high-level features derived by physicists to help discriminate between the two classes.
The entire dataset consists of 5 million rows. However, for the purpose of this tutorial, let's consider only a fraction of the dataset (100,000 rows) so that less time is spent on moving the data and more time on understanding the functionality of the API.
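One way to read that fraction with pandas (the column names below are illustrative labels based on the dataset description):

```python
COLUMNS = [
    # class label
    'class',
    # low-level features
    'lepton_1_pT', 'lepton_1_eta', 'lepton_1_phi',
    'lepton_2_pT', 'lepton_2_eta', 'lepton_2_phi',
    'missing_energy_magnitude', 'missing_energy_phi',
    # high-level derived features
    'MET_rel', 'axial_MET', 'M_R', 'M_TR_2', 'R', 'MT2',
    'S_R', 'M_Delta_R', 'dPhi_r_b', 'cos(theta_r1)'
]

# Read only the first 100,000 rows by using a chunked iterator.
susy_iterator = pd.read_csv('SUSY.csv.gz', header=None, names=COLUMNS, chunksize=100000)
susy_df = next(susy_iterator)
susy_df.head()
```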
Split the dataset
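A sketch using scikit-learn's train_test_split (the 60/40 split ratio is an illustrative choice):

```python
from sklearn.model_selection import train_test_split

train_df, test_df = train_test_split(susy_df, test_size=0.4, shuffle=True)
print("Number of training samples: ", len(train_df))
print("Number of testing samples: ", len(test_df))

x_train_df = train_df.drop(["class"], axis=1)
y_train_df = train_df["class"]
x_test_df = test_df.drop(["class"], axis=1)
y_test_df = test_df["class"]

# Serialize each row as a csv string; the labels are kept separately so
# they can be attached as Kafka message keys in the next step.
x_train = list(filter(None, x_train_df.to_csv(index=False).split("\n")[1:]))
y_train = list(filter(None, y_train_df.to_csv(index=False).split("\n")[1:]))
x_test = list(filter(None, x_test_df.to_csv(index=False).split("\n")[1:]))
y_test = list(filter(None, y_test_df.to_csv(index=False).split("\n")[1:]))
```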
Store the train and test data in Kafka
Storing the data in Kafka simulates an environment for continuous remote data retrieval for training and inference purposes.
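A minimal producer sketch using kafka-python (the helper name write_to_kafka is illustrative):

```python
from kafka import KafkaProducer

def write_to_kafka(topic_name, items):
  """Push (message, key) pairs into a Kafka topic."""
  count = 0
  producer = KafkaProducer(bootstrap_servers=['127.0.0.1:9092'])
  for message, key in items:
    producer.send(topic_name,
                  key=key.encode('utf-8'),
                  value=message.encode('utf-8'))
    count += 1
  producer.flush()
  print("Wrote {0} messages into topic: {1}".format(count, topic_name))

write_to_kafka("susy-train", zip(x_train, y_train))
write_to_kafka("susy-test", zip(x_test, y_test))
```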
Define the tfio train dataset
The IODataset class is utilized for streaming data from Kafka into TensorFlow. The class inherits from tf.data.Dataset and thus has all of its useful functionality out of the box.
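A sketch of the dataset definition (the decoding helper and batch sizes are illustrative choices; NUM_COLUMNS reuses the feature frame from the split step):

```python
NUM_COLUMNS = len(x_train_df.columns)
BATCH_SIZE = 64
SHUFFLE_BUFFER_SIZE = 64

def decode_kafka_item(item):
  # Each item carries the csv-encoded features as the message and the
  # label as the key.
  message = tf.io.decode_csv(item.message, [[0.0] for i in range(NUM_COLUMNS)])
  key = tf.strings.to_number(item.key)
  return (message, key)

train_ds = tfio.IODataset.from_kafka('susy-train', partition=0, offset=0)
train_ds = train_ds.shuffle(buffer_size=SHUFFLE_BUFFER_SIZE)
train_ds = train_ds.map(decode_kafka_item)
train_ds = train_ds.batch(BATCH_SIZE)
```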
Build and train the model
Note: Please do not confuse the training step with online training. It's an entirely different paradigm which will be covered in a later section.
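A small binary classifier might look like the following (layer sizes, dropout rates, and the epoch count are illustrative, untuned choices):

```python
model = tf.keras.Sequential([
  tf.keras.layers.Input(shape=(NUM_COLUMNS,)),
  tf.keras.layers.Dense(128, activation='relu'),
  tf.keras.layers.Dropout(0.2),
  tf.keras.layers.Dense(256, activation='relu'),
  tf.keras.layers.Dropout(0.4),
  tf.keras.layers.Dense(128, activation='relu'),
  tf.keras.layers.Dropout(0.4),
  tf.keras.layers.Dense(1, activation='sigmoid')
])

model.compile(optimizer='adam',
              loss=tf.keras.losses.BinaryCrossentropy(),
              metrics=['accuracy'])

# Train the model on the streaming dataset defined above.
model.fit(train_ds, epochs=10)
```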
Since only a fraction of the dataset is being utilized, our accuracy is limited to ~78% during the training phase. However, please feel free to store additional data in Kafka for better model performance. Also, since the goal was just to demonstrate the functionality of the tfio Kafka datasets, a smaller and less complicated neural network was used. However, one can increase the complexity of the model, modify the learning strategy, tune hyperparameters, etc. for exploration purposes. For a baseline approach, please refer to this article.
Infer on the test data
To infer on the test data while adhering to 'exactly-once' semantics along with fault tolerance, the streaming.KafkaGroupIODataset can be utilized.
Define the tfio test dataset
The stream_timeout parameter blocks for the given duration for new data points to be streamed into the topic. This removes the need to create new datasets if the data is being streamed into the topic in an intermittent fashion.
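A sketch of the test dataset (the consumer configuration values and the 10-second stream_timeout are illustrative):

```python
test_ds = tfio.experimental.streaming.KafkaGroupIODataset(
    topics=["susy-test"],
    group_id="testcg",
    servers="127.0.0.1:9092",
    stream_timeout=10000,  # milliseconds to block for new messages
    configuration=[
        "session.timeout.ms=7000",
        "max.poll.interval.ms=8000",
        "auto.offset.reset=earliest"
    ],
)

def decode_kafka_test_item(raw_message, raw_key):
  message = tf.io.decode_csv(raw_message, [[0.0] for i in range(NUM_COLUMNS)])
  key = tf.strings.to_number(raw_key)
  return (message, key)

test_ds = test_ds.map(decode_kafka_test_item)
test_ds = test_ds.batch(BATCH_SIZE)
```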
Though this class can be used for training purposes, there are caveats which need to be addressed. Once all the messages are read from Kafka and the latest offsets are committed using the streaming.KafkaGroupIODataset, the consumer doesn't restart reading the messages from the beginning. Thus, while training, it is possible to train only for a single epoch with the data continuously flowing in. This kind of functionality has limited use cases during the training phase, wherein, once a data point has been consumed by the model, it is no longer required and can be discarded.
However, this functionality shines when it comes to robust inference with exactly-once semantics.
Evaluate the performance on the test data
Since the inference is based on 'exactly-once' semantics, the evaluation on the test set can be run only once. In order to run the inference again on the test data, a new consumer group should be used.
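For example:

```python
res = model.evaluate(test_ds)
print("test loss, test acc:", res)
```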
Track the offset lag of the testcg consumer group
Once the current-offset matches the log-end-offset for all the partitions, it indicates that the consumer(s) have completed fetching all the messages from the Kafka topic.
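The offset lag can be inspected with the kafka-consumer-groups.sh CLI bundled with the release:

```python
!./kafka_2.13-2.7.0/bin/kafka-consumer-groups.sh --bootstrap-server 127.0.0.1:9092 --describe --group testcg
```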
Online learning
The online machine learning paradigm is a bit different from the traditional/conventional way of training machine learning models. In the former case, the model continues to incrementally learn/update its parameters as soon as new data points are available, and this process is expected to continue indefinitely. This is unlike the latter approach, where the dataset is fixed and the model iterates over it n number of times. In online learning, the data, once consumed by the model, may not be available for training again.
By utilizing the streaming.KafkaBatchIODataset, it is now possible to train models in this fashion. Let's continue to use our SUSY dataset for demonstrating this functionality.
The tfio training dataset for online learning
The streaming.KafkaBatchIODataset is similar to the streaming.KafkaGroupIODataset in its API. Additionally, it is recommended to utilize the stream_timeout parameter to configure the duration for which the dataset will block for new messages before timing out. In the instance below, the dataset is configured with a stream_timeout of 10000 milliseconds. This implies that, after all the messages from the topic have been consumed, the dataset will wait for an additional 10 seconds before timing out and disconnecting from the Kafka cluster. If new messages are streamed into the topic before timing out, the data consumption and model training resume for those newly consumed data points. To block indefinitely, set it to -1.
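A sketch of the dataset (the group id cgonline and the consumer configuration values are illustrative):

```python
online_train_ds = tfio.experimental.streaming.KafkaBatchIODataset(
    topics=["susy-train"],
    group_id="cgonline",
    servers="127.0.0.1:9092",
    stream_timeout=10000,  # in milliseconds; set to -1 to block indefinitely
    configuration=[
        "session.timeout.ms=7000",
        "max.poll.interval.ms=8000",
        "auto.offset.reset=earliest"
    ],
)
```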
Every item that the online_train_ds generates is a tf.data.Dataset in itself. Thus, all the standard transformations can be applied as usual.
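One way the incremental training loop might look (the mini-batch size and per-batch epoch count are illustrative; the model could also be checkpointed periodically inside the loop):

```python
def decode_kafka_online_item(raw_message, raw_key):
  message = tf.io.decode_csv(raw_message, [[0.0] for i in range(NUM_COLUMNS)])
  key = tf.strings.to_number(raw_key)
  return (message, key)

for mini_ds in online_train_ds:
  mini_ds = mini_ds.shuffle(buffer_size=32)
  mini_ds = mini_ds.map(decode_kafka_online_item)
  mini_ds = mini_ds.batch(32)
  if len(mini_ds):
    # Incrementally fit the existing model on the newly streamed data.
    model.fit(mini_ds, epochs=3)
```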
The incrementally trained model can be saved in a periodic fashion (based on use cases) and can be utilized to infer on the test data in either online or offline modes.
Note: The streaming.KafkaBatchIODataset and streaming.KafkaGroupIODataset are still in the experimental phase and have scope for improvements based on user feedback.
References:
Baldi, P., P. Sadowski, and D. Whiteson. “Searching for Exotic Particles in High-energy Physics with Deep Learning.” Nature Communications 5 (July 2, 2014).
SUSY Dataset: https://archive.ics.uci.edu/ml/datasets/SUSY#