Classification using Attention-based Deep Multiple Instance Learning (MIL).
Author: Mohamad Jaber
Date created: 2021/08/16
Last modified: 2021/11/25
Description: MIL approach to classify bags of instances and get their individual instance scores.
Introduction
What is Multiple Instance Learning (MIL)?
Usually, with supervised learning algorithms, the learner receives labels for a set of instances. In the case of MIL, the learner receives labels for a set of bags, each of which contains a set of instances. The bag is labeled positive if it contains at least one positive instance, and negative if it does not contain any.
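In code form, the standard MIL assumption is simply a maximum over the (hidden) instance labels; a minimal illustration with hypothetical 0/1 labels:

```python
# Standard MIL assumption: a bag is positive iff at least one of its
# instances is positive (instance labels are hidden at training time).
instance_labels = [0, 0, 1]            # hypothetical hidden instance labels
bag_label = int(any(instance_labels))  # -> 1 (positive bag)
```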
Motivation
It is often assumed in image classification tasks that each image clearly represents a class label. In medical imaging (e.g. computational pathology), an entire image is represented by a single class label (cancerous/non-cancerous), or a region of interest could be given. However, one will be interested in knowing which patterns in the image are actually causing it to belong to that class. In this context, the image(s) will be divided, and the subimages will form the bag of instances.
Therefore, the goals are to:
Learn a model to predict a class label for a bag of instances.
Find out which instances within the bag caused a positive class label prediction.
Implementation
The following steps describe how the model works:
The feature extractor layers extract feature embeddings.
The embeddings are fed into the MIL attention layer to get the attention scores. The layer is designed as permutation-invariant.
Input features and their corresponding attention scores are multiplied together.
The resulting output is passed to a softmax function for classification (see the shape sketch below).
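To make the flow above concrete, here is a minimal NumPy sketch of steps 2-4; all sizes (a bag of 3 instances, 64-dim embeddings) are illustrative assumptions, not prescribed by the text:

```python
import numpy as np

# Illustrative shapes only (bag of 3 instances, 64-dim embeddings).
embeddings = np.random.rand(3, 64)             # one embedding per instance
scores = np.random.rand(3, 1)                  # unnormalized attention scores
alpha = np.exp(scores) / np.exp(scores).sum()  # softmax: weights sum to 1
pooled = (alpha * embeddings).sum(axis=0)      # weighted average, shape (64,)
# `pooled` would then go through a dense layer + softmax for class probabilities.
```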
References
Attention-based Deep Multiple Instance Learning (Ilse et al., 2018).
Some of the attention operator code implementation was inspired by https://github.com/utayao/Atten_Deep_MIL.
Imbalanced data tutorial by TensorFlow.
Setup
Create dataset
We will create a set of bags and assign their labels according to their contents. If at least one positive instance is available in a bag, the bag is considered as a positive bag. If it does not contain any positive instance, the bag will be considered as negative.
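A minimal data-loading sketch; using MNIST digits as instances is an assumption here (any instance-level dataset would do):

```python
import numpy as np
from tensorflow import keras

# Load instance-level data; each digit image will become one instance.
(train_images, train_labels), (val_images, val_labels) = keras.datasets.mnist.load_data()
train_images = train_images.astype("float32") / 255.0
val_images = val_images.astype("float32") / 255.0
```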
Configuration parameters
POSITIVE_CLASS: The desired class to be kept in the positive bag.
BAG_COUNT: The number of training bags.
VAL_BAG_COUNT: The number of validation bags.
BAG_SIZE: The number of instances in a bag.
PLOT_SIZE: The number of bags to plot.
ENSEMBLE_AVG_COUNT: The number of models to create and average together (optional: often results in better performance; set to 1 for a single model).
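A sketch of these parameters with illustrative values (the values are assumptions, not prescribed by the text):

```python
POSITIVE_CLASS = 1       # instances of this class make a bag positive
BAG_COUNT = 1000         # number of training bags
VAL_BAG_COUNT = 300      # number of validation bags
BAG_SIZE = 3             # instances per bag
PLOT_SIZE = 3            # bags to visualize
ENSEMBLE_AVG_COUNT = 1   # > 1 averages several models; 1 = single model
```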
Prepare bags
Since the attention operator is permutation-invariant, an instance with a positive class label is randomly placed among the instances in the positive bag.
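A minimal bag-building sketch, assuming the MNIST-style arrays loaded above; `create_bags` is a hypothetical helper, not the original code:

```python
def create_bags(images, labels, positive_class, bag_count, bag_size, seed=42):
    """Sample bags of instances; a bag is positive iff it contains at
    least one instance of `positive_class` (its position in the bag is
    random, which is harmless since attention pooling is permutation-invariant)."""
    rng = np.random.default_rng(seed)
    bags = np.empty((bag_count, bag_size, *images.shape[1:]), dtype=images.dtype)
    bag_labels = np.empty(bag_count, dtype="int64")
    for i in range(bag_count):
        idx = rng.choice(len(images), size=bag_size, replace=False)
        bags[i] = images[idx]
        bag_labels[i] = int(np.any(labels[idx] == positive_class))
    return bags, bag_labels

train_bags, train_bag_labels = create_bags(
    train_images, train_labels, POSITIVE_CLASS, BAG_COUNT, BAG_SIZE
)
val_bags, val_bag_labels = create_bags(
    val_images, val_labels, POSITIVE_CLASS, VAL_BAG_COUNT, BAG_SIZE
)
```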
Create the model
We will now build the attention layer, prepare some utilities, then build and train the entire model.
Attention operator implementation
The output size of this layer is decided by the size of a single bag.
The attention mechanism computes a weighted average of the instances in a bag, in which the weights must sum to 1 (invariant to the bag size).
The weight matrices (parameters) are w and v. To allow both positive and negative values, an element-wise hyperbolic tangent non-linearity is used.
A gated attention mechanism can be used to deal with more complex relations. It adds another weight matrix, u, to the computation, and uses a sigmoid non-linearity to overcome the approximately linear behavior of the hyperbolic tangent for x ∈ [−1, 1].
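A sketch of the attention operator as a custom Keras layer, following the formulation of Ilse et al. (2018): a_k = exp{w⊤ tanh(V h_k⊤)} / Σ_j exp{w⊤ tanh(V h_j⊤)}, with the gated variant replacing tanh(V h_k⊤) by tanh(V h_k⊤) ⊙ sigm(U h_k⊤). This is a minimal re-implementation under assumptions, not the original code; the name `MILAttention` and the batched (batch, bag_size, embed_dim) input layout are choices made here:

```python
import tensorflow as tf

class MILAttention(keras.layers.Layer):
    """Attention-based MIL pooling weights (sketch)."""

    def __init__(self, weight_dim, use_gated=False, **kwargs):
        super().__init__(**kwargs)
        self.weight_dim = weight_dim
        self.use_gated = use_gated

    def build(self, input_shape):
        embed_dim = input_shape[-1]  # inputs: (batch, bag_size, embed_dim)
        self.v = self.add_weight(shape=(embed_dim, self.weight_dim),
                                 initializer="glorot_uniform", name="v")
        self.w = self.add_weight(shape=(self.weight_dim, 1),
                                 initializer="glorot_uniform", name="w")
        if self.use_gated:
            self.u = self.add_weight(shape=(embed_dim, self.weight_dim),
                                     initializer="glorot_uniform", name="u")

    def call(self, embeddings):
        # tanh allows both positive and negative values in the hidden scores.
        hidden = tf.math.tanh(tf.einsum("bnd,dk->bnk", embeddings, self.v))
        if self.use_gated:
            # Sigmoid gate counters tanh's near-linearity on [-1, 1].
            hidden *= tf.math.sigmoid(tf.einsum("bnd,dk->bnk", embeddings, self.u))
        scores = tf.einsum("bnk,ko->bno", hidden, self.w)  # (batch, bag_size, 1)
        # Softmax over the bag dimension: weights sum to 1 for any bag size.
        return tf.nn.softmax(scores, axis=1)
```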
Visualizer tool
Plot a number of bags (given by PLOT_SIZE) with respect to their class.
Moreover, if activated, the class label prediction with its associated instance scores for each bag (after the model has been trained) can also be shown.
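A minimal plotting sketch (`plot_bags` is a hypothetical helper; matplotlib is assumed). It shows PLOT_SIZE bags of a given class and, if provided, annotates each instance with its attention score:

```python
import matplotlib.pyplot as plt

def plot_bags(bags, bag_labels, bag_class, attention_weights=None):
    """Plot PLOT_SIZE bags of `bag_class`; optionally annotate each
    instance with its attention score after training."""
    indices = np.flatnonzero(bag_labels == bag_class)[:PLOT_SIZE]
    for i in indices:
        fig, axes = plt.subplots(1, BAG_SIZE, figsize=(2 * BAG_SIZE, 2))
        for k, ax in enumerate(axes):
            ax.imshow(bags[i, k], cmap="gray")
            if attention_weights is not None:
                ax.set_title(f"{attention_weights[i, k]:.2f}")
            ax.axis("off")
        plt.show()

plot_bags(train_bags, train_bag_labels, bag_class=1)  # show positive bags
```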
Create model
First we will create some embeddings per instance, invoke the attention operator and then use the softmax function to output the class probabilities.
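A sketch of the full model under the assumptions above: shared dense embeddings per instance, the `MILAttention` layer from earlier, and a two-way softmax head. Layer sizes are illustrative:

```python
def create_model(bag_size=BAG_SIZE, instance_shape=(28, 28)):
    inputs = keras.Input(shape=(bag_size, *instance_shape))
    # Shared feature extractor, applied identically to every instance.
    x = keras.layers.TimeDistributed(keras.layers.Flatten())(inputs)
    x = keras.layers.TimeDistributed(keras.layers.Dense(128, activation="relu"))(x)
    x = keras.layers.TimeDistributed(keras.layers.Dense(64, activation="relu"))(x)
    # Attention scores, one per instance, summing to 1 over the bag.
    alpha = MILAttention(weight_dim=256, use_gated=True, name="mil_attention")(x)
    # Weighted average of the instance embeddings.
    pooled = tf.reduce_sum(alpha * x, axis=1)
    outputs = keras.layers.Dense(2, activation="softmax")(pooled)
    return keras.Model(inputs, outputs)
```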
Class weights
Since this kind of problem could simply turn into an imbalanced data classification problem, class weighting should be considered.
Let's say there are 1000 bags. There can often be cases where ~90% of the bags do not contain any positive label and ~10% do. Such data can be referred to as imbalanced data.
Using class weights, the model will tend to give a higher weight to the rare class.
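A sketch of inverse-frequency class weights, following the formula in the TensorFlow imbalanced-data tutorial referenced above:

```python
def compute_class_weights(labels):
    # Weight each class by the inverse of its frequency, scaled so the
    # overall loss magnitude stays comparable.
    total = len(labels)
    positive = int(np.sum(labels))
    negative = total - positive
    return {
        0: (1.0 / negative) * (total / 2.0),
        1: (1.0 / positive) * (total / 2.0),
    }
```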
Build and train model
The model is built and trained in this section.
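A minimal training sketch tying the hypothetical pieces above together (hyperparameters are illustrative assumptions):

```python
def train_model(train_bags, train_bag_labels, val_bags, val_bag_labels):
    model = create_model()
    model.compile(
        optimizer="adam",
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    model.fit(
        train_bags,
        train_bag_labels,
        validation_data=(val_bags, val_bag_labels),
        epochs=20,
        batch_size=32,
        class_weight=compute_class_weights(train_bag_labels),
        verbose=0,
    )
    return model

models = [
    train_model(train_bags, train_bag_labels, val_bags, val_bag_labels)
    for _ in range(ENSEMBLE_AVG_COUNT)
]
```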
Model evaluation
The models are now ready for evaluation. With each model we also create an associated intermediate model to get the weights from the attention layer.
We will compute a prediction for each of our ENSEMBLE_AVG_COUNT models, and average them together for our final prediction.
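A sketch of the evaluation step under the assumptions above: an intermediate model exposes the attention layer's weights, and the ensemble's class probabilities are averaged (all names refer to the hypothetical helpers defined earlier):

```python
def attention_weights_model(model):
    # Intermediate model that outputs the attention layer's weights.
    return keras.Model(model.input, model.get_layer("mil_attention").output)

# Average class probabilities over the ensemble for the final prediction.
predictions = np.mean([model.predict(val_bags) for model in models], axis=0)

# Per-instance attention scores of one model, shape (bags, BAG_SIZE, 1).
attention = attention_weights_model(models[0]).predict(val_bags)

plot_bags(
    val_bags, val_bag_labels, bag_class=1,
    attention_weights=attention.squeeze(-1),
)
```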
Conclusion
From the above plot, you can notice that the weights always sum to 1. In a positively predicted bag, the instance which resulted in the positive labeling will have a substantially higher attention score than the rest of the bag. However, in a negatively predicted bag, there are two cases:
All instances will have approximately similar scores.
An instance will have a relatively higher score (but not as high as that of a positive instance). This is because the feature space of this instance is close to that of the positive instance.
Remarks
If the model is overfitted, the weights will be equally distributed for all bags. Hence, regularization techniques are necessary.
In the paper, the bag sizes can differ from one bag to another. For simplicity, the bag sizes are fixed here.
In order not to rely on the random initial weights of a single model, averaging ensemble methods should be considered.