Knowledge Distillation
Author: Kenneth Borup
Date created: 2020/09/01
Last modified: 2020/09/01
Description: Implementation of classical Knowledge Distillation.
Introduction to Knowledge Distillation
Knowledge Distillation is a procedure for model compression, in which a small (student) model is trained to match a large pre-trained (teacher) model. Knowledge is transferred from the teacher model to the student by minimizing a loss function, aimed at matching softened teacher logits as well as ground-truth labels.
The logits are softened by applying a "temperature" scaling function in the softmax, effectively smoothing out the probability distribution and revealing inter-class relationships learned by the teacher.
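Concretely, for logits $z_i$ and temperature $T$, the softened probabilities are given by the temperature-scaled softmax

$$
p_i = \frac{\exp(z_i / T)}{\sum_j \exp(z_j / T)},
$$

where $T = 1$ recovers the usual softmax and larger values of $T$ produce a softer, more uniform distribution.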
Reference:
Hinton et al. (2015), Distilling the Knowledge in a Neural Network (https://arxiv.org/abs/1503.02531)
Setup
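The original setup cell is not shown here; a minimal sketch of the imports that the code sketches below rely on, assuming Keras 3 and NumPy, could look like this:

```python
# Minimal imports assumed by the code sketches in this example (Keras 3 + NumPy).
import numpy as np

import keras
from keras import layers, ops
```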
Construct Distiller() class
The custom Distiller() class overrides the Model methods compile, compute_loss, and call. In order to use the distiller, we need:
A trained teacher model
A student model to train
A student loss function on the difference between student predictions and ground-truth
A distillation loss function, along with a temperature, on the difference between the soft student predictions and the soft teacher labels
An alpha factor to weight the student and distillation loss
An optimizer for the student and (optional) metrics to evaluate performance
In the compute_loss method, we perform a forward pass of both the teacher and the student, and compute the total loss as a weighted sum of the student_loss and the distillation_loss, weighted by alpha and 1 - alpha, respectively. Note: only the student weights are updated.
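A minimal sketch of such a class, continuing from the imports in the Setup section and assuming the Keras 3 compute_loss override (the extra keyword arguments are absorbed with **kwargs, and the teacher is explicitly frozen so only the student is updated):

```python
class Distiller(keras.Model):
    def __init__(self, student, teacher):
        super().__init__()
        self.student = student
        self.teacher = teacher
        # Freeze the teacher so that only the student weights are updated.
        self.teacher.trainable = False

    def compile(
        self,
        optimizer,
        metrics,
        student_loss_fn,
        distillation_loss_fn,
        alpha=0.1,
        temperature=3,
    ):
        # Optimizer and metrics go to the base class; the two loss functions
        # and their weighting are stored on the instance.
        super().compile(optimizer=optimizer, metrics=metrics)
        self.student_loss_fn = student_loss_fn
        self.distillation_loss_fn = distillation_loss_fn
        self.alpha = alpha
        self.temperature = temperature

    def compute_loss(self, x=None, y=None, y_pred=None, sample_weight=None, **kwargs):
        # Forward pass of the teacher (inference mode only).
        teacher_pred = self.teacher(x, training=False)

        # Hard-label loss between student predictions and ground truth.
        student_loss = self.student_loss_fn(y, y_pred)

        # Soft-label loss between the temperature-scaled teacher and student
        # distributions, scaled by T^2 as suggested by Hinton et al. (2015).
        distillation_loss = self.distillation_loss_fn(
            ops.softmax(teacher_pred / self.temperature, axis=1),
            ops.softmax(y_pred / self.temperature, axis=1),
        ) * (self.temperature**2)

        # Weighted sum of the two losses.
        return self.alpha * student_loss + (1 - self.alpha) * distillation_loss

    def call(self, x):
        # The distiller behaves like the student at inference time.
        return self.student(x)
```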
Create student and teacher models
Initially, we create a teacher model and a smaller student model. Both models are convolutional neural networks created using Sequential(), but they could be any Keras model.
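The exact architectures are not important; a sketch with illustrative layer sizes for 28x28 grayscale inputs (e.g. MNIST) might look as follows. Both models output raw logits, which is what the temperature-scaled softmax in the Distiller expects:

```python
# Illustrative teacher: a larger CNN. Layer sizes are assumptions, not the only
# valid choice. The final Dense layer outputs logits (no softmax).
teacher = keras.Sequential(
    [
        keras.Input(shape=(28, 28, 1)),
        layers.Conv2D(256, (3, 3), strides=(2, 2), padding="same", activation="relu"),
        layers.Conv2D(512, (3, 3), strides=(2, 2), padding="same", activation="relu"),
        layers.Flatten(),
        layers.Dense(10),
    ],
    name="teacher",
)

# Illustrative student: the same structure with far fewer filters.
student = keras.Sequential(
    [
        keras.Input(shape=(28, 28, 1)),
        layers.Conv2D(16, (3, 3), strides=(2, 2), padding="same", activation="relu"),
        layers.Conv2D(32, (3, 3), strides=(2, 2), padding="same", activation="relu"),
        layers.Flatten(),
        layers.Dense(10),
    ],
    name="student",
)
```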
Train the teacher
In knowledge distillation we assume that the teacher is trained and fixed. Thus, we start by training the teacher model on the training set in the usual way.
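A sketch of this step, assuming MNIST as the dataset (the dataset choice, preprocessing, and epoch count are illustrative):

```python
# Prepare the dataset (assumed to be MNIST here): scale to [0, 1] and add a
# channel dimension.
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.astype("float32") / 255.0
x_train = np.reshape(x_train, (-1, 28, 28, 1))
x_test = x_test.astype("float32") / 255.0
x_test = np.reshape(x_test, (-1, 28, 28, 1))

# Train the teacher on hard labels only; the loss uses from_logits=True since
# the model outputs raw logits.
teacher.compile(
    optimizer=keras.optimizers.Adam(),
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
teacher.fit(x_train, y_train, epochs=5)
teacher.evaluate(x_test, y_test)
```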
Distill teacher to student
We have already trained the teacher model, and we only need to initialize a Distiller(student, teacher) instance, compile() it with the desired losses, hyperparameters, and optimizer, and distill the teacher to the student.
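A sketch of this step, continuing from the code above (the alpha, temperature, and epoch values here are illustrative choices, not prescribed ones):

```python
# Initialize and compile the distiller. KL divergence compares the softened
# teacher and student distributions, while sparse crossentropy handles the
# ground-truth labels.
distiller = Distiller(student=student, teacher=teacher)
distiller.compile(
    optimizer=keras.optimizers.Adam(),
    metrics=[keras.metrics.SparseCategoricalAccuracy()],
    student_loss_fn=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    distillation_loss_fn=keras.losses.KLDivergence(),
    alpha=0.1,
    temperature=10,
)

# Distill teacher to student, then evaluate the student on the test set.
distiller.fit(x_train, y_train, epochs=3)
distiller.evaluate(x_test, y_test)
```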
Train student from scratch for comparison
We can also train an equivalent student model from scratch without the teacher, in order to evaluate the performance gain obtained by knowledge distillation.
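One way to set up this baseline, continuing from the sketches above, is to clone the student architecture and train it on the hard labels only:

```python
# Clone the student architecture (clone_model re-initializes the weights) and
# train it on the ground-truth labels without any teacher guidance.
student_scratch = keras.models.clone_model(student)
student_scratch.compile(
    optimizer=keras.optimizers.Adam(),
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
student_scratch.fit(x_train, y_train, epochs=3)
student_scratch.evaluate(x_test, y_test)
```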
If the teacher is trained for 5 full epochs and the student is distilled on this teacher for 3 full epochs, you should, in this example, see a performance boost compared to training the same student model from scratch, and even compared to the teacher itself. You should expect the teacher to reach an accuracy of around 97.6%, the student trained from scratch around 97.6%, and the distilled student around 98.1%. Remove the seed or try different seeds to use different weight initializations.