Involutional neural networks
Author: Aritra Roy Gosthipaty
Date created: 2021/07/25
Last modified: 2021/07/25
Description: Deep dive into location-specific and channel-agnostic "involution" kernels.
Introduction
Convolution has been the basis of most modern neural networks for computer vision. A convolution kernel is spatial-agnostic and channel-specific. Because of this, it isn't able to adapt to different visual patterns with respect to different spatial locations. Along with location-related problems, the receptive field of convolution creates challenges with regard to capturing long-range spatial interactions.
To address the above issues, Li et al. rethink the properties of convolution in Involution: Inverting the Inherence of Convolution for Visual Recognition. The authors propose the "involution kernel", which is location-specific and channel-agnostic. Due to the location-specific nature of the operation, the authors argue that self-attention falls under the design paradigm of involution.
This example describes the involution kernel, compares two image classification models, one with convolution and the other with involution, and also tries drawing a parallel with the self-attention layer.
Setup
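A minimal setup cell for this example might look like the following (the seed value is an arbitrary choice):

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras
import matplotlib.pyplot as plt

# Fix the random seed for reproducible weight initialisation and shuffling.
tf.random.set_seed(42)
```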
Convolution
Convolution remains the mainstay of deep neural networks for computer vision. To understand Involution, it is necessary to talk about the convolution operation.
Consider an input tensor X with dimensions H, W and C_in. We take a collection of C_out convolution kernels each of shape K, K, C_in. With the multiply-add operation between the input tensor and the kernels we obtain an output tensor Y with dimensions H, W, C_out.
In the diagram above, C_out=3. This makes the output tensor of shape H, W and 3. One can notice that the convolution kernel does not depend on the spatial position of the input tensor, which makes it location-agnostic. On the other hand, each channel in the output tensor is based on a specific convolution filter, which makes it channel-specific.
Involution
The idea is to have an operation that is both location-specific and channel-agnostic. Implementing these properties poses a challenge: with a fixed number of involution kernels (one for each spatial position) we would not be able to process variable-resolution input tensors.
To solve this problem, the authors have considered generating each kernel conditioned on specific spatial positions. With this method, we should be able to process variable-resolution input tensors with ease. The diagram below provides an intuition on this kernel generation method.
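A sketch of such a layer, following the paper's recipe: a bottleneck of two 1×1 convolutions generates a K×K kernel at every spatial position, which is then multiply-added with the unfolded K×K neighbourhood of the input and shared across channel groups. The argument names below are illustrative:

```python
import tensorflow as tf
from tensorflow import keras

class Involution(keras.layers.Layer):
    def __init__(self, channel, group_number, kernel_size, stride, reduction_ratio, name):
        super().__init__(name=name)
        self.channel = channel
        self.group_number = group_number
        self.kernel_size = kernel_size
        self.stride = stride
        self.reduction_ratio = reduction_ratio

    def build(self, input_shape):
        (_, height, width, num_channels) = input_shape
        self.height = height // self.stride
        self.width = width // self.stride
        self.num_channels = num_channels
        # Downsample the coordinates at which kernels are generated.
        self.stride_layer = (
            keras.layers.AveragePooling2D(
                pool_size=self.stride, strides=self.stride, padding="same"
            )
            if self.stride > 1
            else tf.identity
        )
        # Kernel generation: a bottleneck of two 1x1 convolutions that
        # outputs K * K * G values per spatial position.
        self.kernel_gen = keras.Sequential(
            [
                keras.layers.Conv2D(
                    filters=self.channel // self.reduction_ratio, kernel_size=1
                ),
                keras.layers.BatchNormalization(),
                keras.layers.ReLU(),
                keras.layers.Conv2D(
                    filters=self.kernel_size * self.kernel_size * self.group_number,
                    kernel_size=1,
                ),
            ]
        )

    def call(self, x):
        batch_size = tf.shape(x)[0]
        # Generate one K*K kernel per output position: (B, H, W, K*K*G).
        kernel = self.kernel_gen(self.stride_layer(x))
        kernel = tf.reshape(
            kernel,
            (batch_size, self.height, self.width,
             self.kernel_size * self.kernel_size, 1, self.group_number),
        )
        # Unfold the K*K neighbourhood of every position: (B, H, W, K*K*C).
        patches = tf.image.extract_patches(
            images=x,
            sizes=[1, self.kernel_size, self.kernel_size, 1],
            strides=[1, self.stride, self.stride, 1],
            rates=[1, 1, 1, 1],
            padding="SAME",
        )
        patches = tf.reshape(
            patches,
            (batch_size, self.height, self.width,
             self.kernel_size * self.kernel_size,
             self.num_channels // self.group_number, self.group_number),
        )
        # Multiply-add: the position-specific kernel is broadcast over all
        # channels in a group (channel-agnostic).
        output = tf.reduce_sum(kernel * patches, axis=3)
        output = tf.reshape(
            output, (batch_size, self.height, self.width, self.num_channels)
        )
        # The kernels are returned as well, so they can be visualized later.
        return output, kernel
```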
Testing the Involution layer
Image Classification
In this section, we will build an image-classifier model. There will be two models: one with convolutions and the other with involutions.
The image-classification model is heavily inspired by this Convolutional Neural Network (CNN) tutorial from Google.
Get the CIFAR10 Dataset
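CIFAR10 ships with keras.datasets; here the pixel values are also scaled to the [0, 1] range:

```python
import tensorflow as tf
from tensorflow import keras

# Load the CIFAR10 dataset and normalize pixel values to [0, 1].
(train_images, train_labels), (test_images, test_labels) = keras.datasets.cifar10.load_data()
train_images, test_images = train_images / 255.0, test_images / 255.0

print(train_images.shape, train_labels.shape)  # (50000, 32, 32, 3) (50000, 1)
```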
Visualise the data
Convolutional Neural Network
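A small CNN in the spirit of the Google tutorial; the exact layer sizes below are assumptions, not necessarily the original model:

```python
from tensorflow import keras

# A compact conv/pool stack followed by a dense classifier head.
conv_model = keras.Sequential(
    [
        keras.layers.Input((32, 32, 3)),
        keras.layers.Conv2D(32, (3, 3), padding="same", activation="relu"),
        keras.layers.MaxPooling2D((2, 2)),
        keras.layers.Conv2D(64, (3, 3), padding="same", activation="relu"),
        keras.layers.MaxPooling2D((2, 2)),
        keras.layers.Conv2D(64, (3, 3), padding="same", activation="relu"),
        keras.layers.Flatten(),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(10),
    ],
    name="conv_model",
)
conv_model.compile(
    optimizer="adam",
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
conv_model.summary()
```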
Involutional Neural Network
Comparisons
In this section, we look at both models and compare them on a few points.
Parameters
One can see that, with a similar architecture, the parameter count of the CNN is much larger than that of the INN (Involutional Neural Network).
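A back-of-the-envelope count (ignoring biases) shows why: a convolution layer needs K×K×C_in×C_out weights, while an involution layer only carries the weights of its two 1×1 kernel-generation convolutions. The sizes below are illustrative:

```python
# Illustrative sizes: kernel size, channels, groups, reduction ratio.
K, C, G, r = 3, 64, 1, 4

# Conv2D with C input and C output channels (biases ignored).
conv_params = K * K * C * C

# Involution kernel-generation head: two 1x1 convolutions,
# C -> C // r -> K * K * G (biases ignored).
inv_params = C * (C // r) + (C // r) * (K * K * G)

print(conv_params)  # 36864
print(inv_params)   # 1168
```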
Loss and Accuracy Plots
Here, the loss and accuracy plots demonstrate that INNs are slower learners (with fewer parameters).
Visualizing Involution Kernels
To visualize the kernels, we take the sum of the K×K values from each involution kernel. These representatives at all spatial locations form the corresponding heat map.
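Assuming a kernel tensor of shape (batch, H, W, K*K, 1, groups), as an involution layer would return it, the heat map is a sum over the kernel axes (the tensor below is random stand-in data, not a trained kernel):

```python
import tensorflow as tf

K, G = 3, 1
# Random stand-in for the kernels a trained involution layer would return.
kernels = tf.random.normal((1, 32, 32, K * K, 1, G))

# Collapse the K*K values (and group axes) at each position into a single
# representative, yielding one heat map per image.
heat_map = tf.reduce_sum(kernels, axis=[3, 4, 5])
print(heat_map.shape)  # (1, 32, 32)
```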
The authors mention:
"Our proposed involution is reminiscent of self-attention and essentially could become a generalized version of it."
With the visualization of the kernels we can indeed obtain an attention map of the image. The learned involution kernels provide attention to individual spatial positions of the input tensor. The location-specific property makes involution a generic space of models to which self-attention belongs.
Conclusions
In this example, the main focus was to build an Involution
layer which can be easily reused. While our comparisons were based on a specific task, feel free to use the layer for different tasks and report your results.
In my opinion, the key takeaway from involution is its relationship with self-attention. The intuition behind location-specific and channel-agnostic processing makes sense in a lot of tasks.
Moving forward one can:
Look at Yannic Kilcher's video on involution for a better understanding.
Experiment with the various hyperparameters of the involution layer.
Build different models with the involution layer.
Try building a different kernel generation method altogether.
You can use the trained model hosted on Hugging Face Hub and try the demo on Hugging Face Spaces.