Electroencephalogram Signal Classification for action identification
Author: Suvaditya Mukherjee
Date created: 2022/11/03
Last modified: 2022/11/05
Description: Training a Convolutional model to classify EEG signals produced by exposure to certain stimuli.
Introduction
The following example explores how we can build a Convolution-based Neural Network to classify Electroencephalogram signals captured while subjects were exposed to different stimuli. We train a model from scratch since such signal-classification models are fairly scarce in pre-trained form. The data we use is sourced from the UC Berkeley-Biosense Lab, where it was collected from 15 subjects at the same time. Our process is as follows:
Load the UC Berkeley-Biosense Synchronized Brainwave Dataset
Visualize random samples from the data
Pre-process, collate and scale the data to finally make a tf.data.Dataset
Prepare class weights in order to tackle major imbalances
Create a Conv1D and Dense-based model to perform classification
Define callbacks and hyperparameters
Train the model
Plot metrics from History and perform evaluation
This example needs the following external dependencies (Gdown, Scikit-learn, Pandas, Numpy, Matplotlib). You can install them via the following commands.
Gdown is an external package used to download large files from Google Drive. To learn more, you can refer to its PyPI page.
Setup and Data Downloads
First, let's install our dependencies:
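A minimal sketch of the install step, assuming a notebook environment where shell commands are prefixed with `!`:

```python
# Install the external dependencies listed above.
!pip install gdown scikit-learn pandas numpy matplotlib
```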
Next, let's download our dataset. The gdown package makes it easy to download the data from Google Drive:
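A sketch of the download step; the Google Drive file ID below is a placeholder, not the dataset's actual ID:

```python
# Download eeg-data.csv from Google Drive using the gdown CLI
# (replace the placeholder with the real file ID).
!gdown <GOOGLE_DRIVE_FILE_ID> -O eeg-data.csv
```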
Read data from eeg-data.csv
We use the Pandas library to read the eeg-data.csv file and display the first 5 rows using the .head() command.
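A sketch of this step, assuming the file was downloaded as eeg-data.csv in the working directory:

```python
import pandas as pd

# Read the raw CSV and inspect the first 5 rows.
eeg = pd.read_csv("eeg-data.csv")
eeg.head()
```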
We remove unlabeled samples from our dataset as they do not contribute to the model. We also perform a .drop() operation on the columns that are not required for training data preparation.
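A sketch of the clean-up, assuming the stimulus column is named label and holds the string "unlabeled" for the samples we discard; the list of dropped columns is an assumption and should be adjusted to the actual schema:

```python
# Keep only samples that carry a stimulus label.
eeg = eeg.loc[eeg["label"] != "unlabeled"]

# Drop metadata columns that are not needed for training data preparation
# (column names here are assumptions; adjust to the dataset's schema).
eeg = eeg.drop(
    columns=["indra_time", "browser_latency", "reading_time"], errors="ignore"
)
```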
In the data, the recorded samples are given a quality score based on how well-calibrated the sensor was (0 being best, 200 being worst). We filter the values based on an arbitrary cutoff limit of 128.
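A sketch of the quality filter, assuming the calibration score is stored in a column named signal_quality:

```python
# Keep samples whose calibration score is below the cutoff of 128 (lower is better).
eeg = eeg.loc[eeg["signal_quality"] < 128]
```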
Visualize one random sample from the data
We visualize one sample from the data to understand what the stimulus-induced signal looks like.
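A sketch of the visualization, assuming each row stores its signal as a stringified list in a raw_values column:

```python
from ast import literal_eval

import matplotlib.pyplot as plt


def view_eeg_plot(idx):
    # raw_values holds the signal as a string such as "[12, 15, ...]".
    data = literal_eval(eeg.iloc[idx]["raw_values"])
    plt.plot(data)
    plt.title(f"Sample at index {idx}")
    plt.show()


view_eeg_plot(7)
```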
Pre-process and collate data
There are a total of 67 different labels present in the data, several of which are numbered sub-labels of the same stimulus. We collate them under a single label as per their numbering and replace them in the data itself. Following this process, we perform simple Label Encoding to get them into an integer format.
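A sketch of the collation and encoding; stripping trailing digits with a regex is one possible way to merge the numbered sub-labels (e.g. "math1", "math2" become "math"), and the exact pattern would need to match the dataset's real naming scheme:

```python
import re

from sklearn import preprocessing

# Collate numbered sub-labels under a single label by stripping trailing
# digit/hyphen suffixes (adapt the pattern to the actual label names).
eeg["label"] = eeg["label"].apply(lambda s: re.sub(r"[-\d]+$", "", s))

# Label-encode the collated string labels into integers.
le = preprocessing.LabelEncoder()
eeg["label"] = le.fit_transform(eeg["label"])
```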
We extract the number of unique classes present in the data
We now visualize the number of samples present in each class using a Bar plot.
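A sketch covering both of the steps above, assuming the label column now holds the encoded classes:

```python
import matplotlib.pyplot as plt

# Number of unique classes after collation.
num_classes = eeg["label"].nunique()
print("Number of classes:", num_classes)

# Bar plot of the number of samples per class.
counts = eeg["label"].value_counts().sort_index()
plt.bar(counts.index, counts.values)
plt.xlabel("Class")
plt.ylabel("Number of samples")
plt.show()
```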
Scale and split data
We perform a simple Min-Max scaling to bring the value-range between 0 and 1. We do not use Standard Scaling as the data does not follow a Gaussian distribution.
We now create a Train-test split with a 15% holdout set. Following this, we reshape the data to create a sequence of length 512. We also convert the labels from their current label-encoded form to a one-hot encoding to enable use of several different keras.metrics functions.
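A sketch of the scaling, splitting and reshaping, assuming every signal in raw_values contains exactly 512 samples:

```python
from ast import literal_eval

import numpy as np
import keras
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

# Min-Max scale each 512-point signal into the [0, 1] range.
scaler = MinMaxScaler()
series = [
    scaler.fit_transform(np.asarray(literal_eval(v)).reshape(-1, 1))
    for v in eeg["raw_values"]
]
labels = eeg["label"].tolist()

# 15% holdout split.
x_train, x_test, y_train, y_test = train_test_split(
    series, labels, test_size=0.15, random_state=42, shuffle=True
)

# Reshape into sequences of length 512 and one-hot encode the labels.
x_train = np.asarray(x_train, dtype=np.float32).reshape(-1, 512, 1)
x_test = np.asarray(x_test, dtype=np.float32).reshape(-1, 512, 1)
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
```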
Prepare tf.data.Dataset
We now create a tf.data.Dataset from this data to prepare it for training. We also shuffle and batch the data for use later.
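A sketch of the dataset preparation; the batch size of 64 is an assumed hyperparameter:

```python
import tensorflow as tf

BATCH_SIZE = 64  # assumed batch size

train_dataset = (
    tf.data.Dataset.from_tensor_slices((x_train, y_train))
    .shuffle(buffer_size=len(x_train))
    .batch(BATCH_SIZE)
)
test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(BATCH_SIZE)
```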
Make Class Weights using Naive method
As we can see from the plot of the number of samples per class, the dataset is imbalanced. Hence, we calculate weights for each class to make sure that the model is trained fairly, without preference for any specific class due to a greater number of samples.
We use a naive method to calculate these weights: we find the inverse proportion of each class and use that as the weight.
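A sketch of the naive inverse-proportion weighting; the exact formula used in the original notebook may differ:

```python
# Count samples per integer class label.
class_counts = eeg["label"].value_counts().to_dict()
total_samples = sum(class_counts.values())

# Naive inverse-proportion weighting: rarer classes get larger weights.
weight_dict = {cls: total_samples / count for cls, count in class_counts.items()}
print(weight_dict)
```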
Define a simple function to plot all the metrics present in a keras.callbacks.History object
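A sketch of such a helper, plotting every series stored in history.history on its own subplot:

```python
import matplotlib.pyplot as plt
import keras


def plot_history_metrics(history: keras.callbacks.History):
    # One subplot per metric tracked in the History object.
    metrics = history.history
    cols = 2
    rows = (len(metrics) + cols - 1) // cols
    plt.figure(figsize=(15, 5 * rows))
    for i, (name, values) in enumerate(metrics.items(), start=1):
        plt.subplot(rows, cols, i)
        plt.plot(values)
        plt.title(name)
    plt.tight_layout()
    plt.show()
```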
Define function to generate Convolutional model
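A sketch of a Conv1D + Dense classifier over (512, 1) inputs; the layer widths, kernel sizes and dropout rate are assumptions, not the notebook's original architecture:

```python
import keras
from keras import layers


def create_model():
    inputs = keras.Input(shape=(512, 1))

    # Stacked Conv1D blocks with BatchNorm; strides > 1 downsample the sequence.
    x = layers.Conv1D(32, kernel_size=3, strides=2, activation="relu", padding="same")(inputs)
    x = layers.BatchNormalization()(x)
    x = layers.Conv1D(64, kernel_size=3, strides=2, activation="relu", padding="same")(x)
    x = layers.BatchNormalization()(x)
    x = layers.Conv1D(128, kernel_size=3, strides=2, activation="relu", padding="same")(x)
    x = layers.BatchNormalization()(x)

    # Dense head for classification over the collated classes.
    x = layers.Flatten()(x)
    x = layers.Dense(256, activation="relu")(x)
    x = layers.Dropout(0.3)(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)

    return keras.Model(inputs=inputs, outputs=outputs)
```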
Get Model summary
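Assuming the create_model function sketched above, we instantiate the model and print its summary:

```python
conv_model = create_model()
conv_model.summary()
```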
Define callbacks, optimizer, loss and metrics
We set the number of epochs to 30 after extensive experimentation; Early-Stopping analysis also indicated this was the optimal number. We define a Model Checkpoint callback to make sure that we keep only the best model weights. We also define a ReduceLROnPlateau callback, as there were several cases during experimentation where the loss stagnated after a certain point, while a direct LRScheduler was found to be too aggressive in its decay.
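A sketch of the hyperparameters and callbacks described above; the monitored quantities, patience values, learning rate and checkpoint filename are assumptions:

```python
import keras

epochs = 30

callbacks = [
    # Keep only the best weights seen during training.
    keras.callbacks.ModelCheckpoint(
        "best_model.keras", save_best_only=True, monitor="val_loss"
    ),
    # Reduce the learning rate when the validation loss stagnates.
    keras.callbacks.ReduceLROnPlateau(
        monitor="val_loss", factor=0.2, patience=2, min_lr=1e-6
    ),
]

optimizer = keras.optimizers.Adam(learning_rate=1e-3)
loss = keras.losses.CategoricalCrossentropy()
```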
Compile model and call model.fit()
We use the Adam optimizer since it is commonly considered the best choice for preliminary training, and it was found to be the best optimizer here. We use CategoricalCrossentropy as the loss since our labels are in a one-hot-encoded form.
We define the TopKCategoricalAccuracy(k=3), AUC, Precision and Recall metrics to further aid in understanding the model better.
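A sketch of the compile/fit step using the pieces defined above; the metric list follows the description, while passing the naive class weights via class_weight is an assumption about how they are applied:

```python
conv_model.compile(
    optimizer=optimizer,
    loss=loss,
    metrics=[
        keras.metrics.TopKCategoricalAccuracy(k=3),
        keras.metrics.AUC(),
        keras.metrics.Precision(),
        keras.metrics.Recall(),
    ],
)

conv_model_history = conv_model.fit(
    train_dataset,
    epochs=epochs,
    callbacks=callbacks,
    validation_data=test_dataset,
    class_weight=weight_dict,
)
```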
Visualize model metrics during training
We use the function defined above to see model metrics during training.
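Using the plot_history_metrics helper sketched earlier:

```python
plot_history_metrics(conv_model_history)
```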