Gradient Centralization for Better Training Performance
Author: Rishit Dagli
Date created: 06/18/21
Last modified: 07/25/23
Description: Implement Gradient Centralization to improve training performance of DNNs.
Introduction
This example implements Gradient Centralization, a new optimization technique for Deep Neural Networks by Yong et al., and demonstrates it on Laurence Moroney's Horses or Humans Dataset. Gradient Centralization can both speed up the training process and improve the final generalization performance of DNNs. It operates directly on gradients by centralizing the gradient vectors to have zero mean. Gradient Centralization moreover improves the Lipschitzness of the loss function and its gradient so that the training process becomes more efficient and stable.
This example requires tensorflow_datasets
which can be installed with this command:
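```
pip install tensorflow-datasets
```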
Setup
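The rest of this example assumes imports roughly like the following; exact imports may differ slightly depending on your TensorFlow/Keras version.

```python
import time

import tensorflow as tf
import tensorflow_datasets as tfds
from tensorflow import keras
from tensorflow.keras import layers
```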
Prepare the data
For this example, we will be using the Horses or Humans dataset.
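One way to load the dataset through tensorflow_datasets is sketched below; the dataset name horses_or_humans comes from the TFDS catalog, while the batch size is an arbitrary choice for this sketch.

```python
num_classes = 2
input_shape = (300, 300, 3)  # Horses or Humans images are 300x300 RGB
dataset_name = "horses_or_humans"
batch_size = 128
AUTOTUNE = tf.data.AUTOTUNE

# Load the train and test splits along with the dataset metadata.
(train_ds, test_ds), metadata = tfds.load(
    name=dataset_name,
    split=[tfds.Split.TRAIN, tfds.Split.TEST],
    with_info=True,
    as_supervised=True,
)

print(f"Image shape: {metadata.features['image'].shape}")
print(f"Training images: {metadata.splits['train'].num_examples}")
print(f"Test images: {metadata.splits['test'].num_examples}")
```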
Use Data Augmentation
We will rescale the data to [0, 1] and perform simple augmentations.
Rescale and augment the data
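A sketch of the rescaling and augmentation pipeline described above, building on the train_ds and test_ds objects loaded earlier; the particular augmentation layers and their parameters are illustrative choices.

```python
rescale = layers.Rescaling(1.0 / 255)

# Illustrative augmentation stack: random flips, rotations and zooms.
data_augmentation = keras.Sequential(
    [
        layers.RandomFlip("horizontal_and_vertical"),
        layers.RandomRotation(0.3),
        layers.RandomZoom(0.2),
    ]
)


def prepare(ds, shuffle=False, augment=False):
    # Rescale images to [0, 1] and batch the dataset.
    ds = ds.map(lambda x, y: (rescale(x), y), num_parallel_calls=AUTOTUNE)
    if shuffle:
        ds = ds.shuffle(1024)
    ds = ds.batch(batch_size)
    if augment:
        ds = ds.map(
            lambda x, y: (data_augmentation(x, training=True), y),
            num_parallel_calls=AUTOTUNE,
        )
    return ds.prefetch(buffer_size=AUTOTUNE)


train_ds = prepare(train_ds, shuffle=True, augment=True)
test_ds = prepare(test_ds)
```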
Define a model
In this section, we will define a convolutional neural network.
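A small convolutional network in this spirit might look as follows; the layer widths and dropout rates are illustrative rather than prescriptive.

```python
model = keras.Sequential(
    [
        layers.Input(shape=input_shape),
        layers.Conv2D(16, (3, 3), activation="relu"),
        layers.MaxPooling2D(2, 2),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.Dropout(0.5),
        layers.MaxPooling2D(2, 2),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D(2, 2),
        layers.Flatten(),
        layers.Dropout(0.5),
        layers.Dense(512, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # binary horses-vs-humans output
    ]
)
```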
Implement Gradient Centralization
We will now subclass the RMSProp
optimizer class, modifying the keras.optimizers.Optimizer.get_gradients()
method to implement Gradient Centralization. At a high level, the idea is this: say we obtain the gradients for a Dense or Convolution layer through backpropagation; we then compute the mean of the column vectors of the gradient matrix and remove that mean from each column vector.
The experiments in the paper on various applications, including general image classification, fine-grained image classification, detection and segmentation, and person re-identification (Person ReID), demonstrate that GC can consistently improve the performance of DNN learning.
Also, for simplicity, we are not implementing gradient clipping functionality at the moment; however, this is quite easy to add.
At the moment we are just creating a subclass of the RMSProp
optimizer; however, you could easily reproduce this for any other optimizer or a custom optimizer in the same way. We will use this class in a later section, when we train a model with Gradient Centralization.
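A minimal sketch of such a subclass is shown below. It assumes a TensorFlow/Keras version in which the RMSprop optimizer still exposes get_gradients(loss, params); on newer releases you may need to subclass tf.keras.optimizers.legacy.RMSprop instead. The learning rate is an illustrative value.

```python
from tensorflow.keras.optimizers import RMSprop


class GCRMSprop(RMSprop):
    def get_gradients(self, loss, params):
        # Compute the gradients as usual, then centralize every gradient tensor
        # of rank > 1 (i.e. Dense and Conv kernels, but not biases) by
        # subtracting its mean taken over all axes except the last one.
        grads = []
        gradients = super().get_gradients(loss, params)
        for grad in gradients:
            grad_len = len(grad.shape)
            if grad_len > 1:
                axis = list(range(grad_len - 1))
                grad -= tf.reduce_mean(grad, axis=axis, keepdims=True)
            grads.append(grad)
        return grads


optimizer = GCRMSprop(learning_rate=1e-4)
```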
Training utilities
We will also create a callback which allows us to easily measure the total training time and the time taken for each epoch, since we are interested in comparing the effect of Gradient Centralization on the model we built above.
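A simple callback of this kind, here called TimeHistory for illustration, records the wall-clock duration of every epoch; summing the recorded times gives the total training time.

```python
class TimeHistory(keras.callbacks.Callback):
    def on_train_begin(self, logs=None):
        self.times = []

    def on_epoch_begin(self, epoch, logs=None):
        self.epoch_time_start = time.time()

    def on_epoch_end(self, epoch, logs=None):
        # Duration of this epoch in seconds.
        self.times.append(time.time() - self.epoch_time_start)
```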
Train the model without GC
We now train the model we built earlier without Gradient Centralization, so that we can compare its training performance to that of the model trained with Gradient Centralization.
We also save the training history, since we later want to compare the models trained with and without Gradient Centralization.
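A sketch of this baseline run, using a plain RMSprop optimizer together with the timing callback defined above (the epoch count and learning rate are illustrative):

```python
time_callback_no_gc = TimeHistory()

model.compile(
    loss="binary_crossentropy",
    optimizer=RMSprop(learning_rate=1e-4),
    metrics=["accuracy"],
)
model.summary()

history_no_gc = model.fit(
    train_ds, epochs=10, verbose=1, callbacks=[time_callback_no_gc]
)
```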
Train the model with GC
We will now train the same model, this time using Gradient Centralization. Notice that our optimizer is the one using Gradient Centralization this time.
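The same training call, now compiled with the GCRMSprop instance defined earlier; for a fair comparison you would typically rebuild or re-initialize the model before this second run.

```python
time_callback_gc = TimeHistory()

model.compile(loss="binary_crossentropy", optimizer=optimizer, metrics=["accuracy"])
model.summary()

history_gc = model.fit(train_ds, epochs=10, verbose=1, callbacks=[time_callback_gc])
```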
Comparing performance
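One way to put the two runs side by side, using the histories and timing callbacks collected above:

```python
print("Without Gradient Centralization")
print(f"Final training accuracy: {history_no_gc.history['accuracy'][-1]:.4f}")
print(f"Total training time: {sum(time_callback_no_gc.times):.2f} s")

print("With Gradient Centralization")
print(f"Final training accuracy: {history_gc.history['accuracy'][-1]:.4f}")
print(f"Total training time: {sum(time_callback_gc.times):.2f} s")
```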
Readers are encouraged to try out Gradient Centralization on different datasets from different domains and experiment with its effect. You are strongly advised to check out the original paper as well; the authors present several studies on Gradient Centralization showing how it can improve general performance, generalization, training time, and efficiency.
Many thanks to Ali Mustufa Shaikh for reviewing this implementation.