Keypoint Detection with Transfer Learning
Author: Sayak Paul, converted to Keras 3 by Muhammad Anas Raza
Date created: 2021/05/02
Last modified: 2023/07/19
Description: Training a keypoint detector with data augmentation and transfer learning.
Keypoint detection consists of locating key object parts. For example, the key parts of our faces include nose tips, eyebrows, eye corners, and so on. These parts help to represent the underlying object in a feature-rich manner. Keypoint detection has applications that include pose estimation, face detection, etc.
In this example, we will build a keypoint detector on the StanfordExtra dataset using transfer learning. This example requires TensorFlow 2.4 or higher, as well as the imgaug library, which can be installed using the following command:
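```python
!pip install -q -U imgaug
```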
Data collection
The StanfordExtra dataset contains 12,000 images of dogs together with keypoints and segmentation maps. It was developed from the Stanford dogs dataset. It can be downloaded with a command like the one below (the URL is the one listed on the Stanford Dogs project page; adjust it if it moves):
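```python
!wget -q http://vision.stanford.edu/aditya86/ImageNetDogs/images.tar
```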
Annotations are provided as a single JSON file in the StanfordExtra dataset, and one needs to fill out this form to get access to it. The authors explicitly instruct users not to share the JSON file, and this example respects this wish: you should obtain the JSON file yourself. The JSON file is expected to be locally available as stanfordextra_v12.zip.
After the files are downloaded, we can extract the archives.
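A minimal sketch of that step, assuming images.tar is in the working directory and stanfordextra_v12.zip was placed there by hand:

```python
!tar xf images.tar
!unzip -qq stanfordextra_v12.zip
```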
Imports
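One plausible set of imports covering everything used in the sketches below; the original notebook's list may differ slightly:

```python
import json
import os

import numpy as np
import pandas as pd
from matplotlib import pyplot as plt

import keras
from keras import layers

import imgaug.augmenters as iaa
from imgaug.augmentables.kps import Keypoint, KeypointsOnImage
```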
Define hyperparameters
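The image size, keypoint count, and epoch count below follow the text; the batch size is an assumption:

```python
IMG_SIZE = 224
BATCH_SIZE = 64  # assumed value
EPOCHS = 5
NUM_KEYPOINTS = 24 * 2  # 24 keypoints, each with an x and a y coordinate
```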
Load data
The authors also provide a metadata file that specifies additional information about the keypoints, like color information, animal pose name, etc. We will load this file into a pandas dataframe to extract information for visualization purposes.
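A sketch of the loading step; the paths below (IMG_DIR, the JSON location inside the zip, and the metadata CSV) are assumptions and should be adapted to where you extracted the archives:

```python
IMG_DIR = "Images"  # hypothetical: directory produced by extracting images.tar
JSON = "StanfordExtra_V12/StanfordExtra_v12.json"  # hypothetical path inside the zip
KEYPOINT_DEF = "keypoint_definitions.csv"  # hypothetical local copy of the metadata file

# Load the ground-truth annotations and index them by image path.
with open(JSON) as infile:
    json_data = json.load(infile)
json_dict = {entry["img_path"]: entry for entry in json_data}

# Load the keypoint metadata (names, colors, etc.) for visualization.
keypoint_def = pd.read_csv(KEYPOINT_DEF)
```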
A single entry of json_dict looks like the following:
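The sample output is not reproduced here, but you can inspect one entry yourself:

```python
# Pretty-print the annotation of an arbitrary image.
sample_key = next(iter(json_dict))
print(json.dumps(json_dict[sample_key], indent=2))
```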
In this example, the keys we are interested in are:
img_path
joints
There are a total of 24 entries present inside joints. Each entry has 3 values:
x-coordinate
y-coordinate
visibility flag of the keypoint (1 indicates visibility and 0 indicates non-visibility)
As we can see, joints contains multiple [0, 0, 0] entries, which denote that those keypoints were not labeled. In this example, we will consider both non-visible and unlabeled keypoints in order to allow mini-batch learning.
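A quick check of that structure, assuming json_dict and sample_key from the loading step:

```python
joints = np.array(json_dict[sample_key]["joints"])
print(joints.shape)  # expected: (24, 3)

# All-zero rows correspond to keypoints that were not labeled.
unlabeled = int(np.sum(np.all(joints == 0, axis=1)))
print(f"{unlabeled} of 24 keypoints are unlabeled for this image")
```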
Visualize data
Now, we write a utility function to visualize the images and their keypoints.
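A minimal sketch of such a utility, assuming the json_dict and IMG_DIR names from the loading step; get_dog is a hypothetical helper that reads an image and its (x, y) coordinates:

```python
def get_dog(name):
    # Read the image and its keypoints for a given image key.
    data = json_dict[name]
    image = plt.imread(os.path.join(IMG_DIR, data["img_path"]))
    # Keep only the (x, y) coordinates, dropping the visibility flag.
    keypoints = np.array(data["joints"])[:, :2]
    return image, keypoints


def visualize_keypoints(images, keypoints):
    fig, axes = plt.subplots(nrows=len(images), figsize=(8, 4 * len(images)))
    for ax, image, kps in zip(np.ravel(axes), images, keypoints):
        ax.imshow(image)
        # Unlabeled keypoints are all-zero; skip them when plotting.
        labeled = kps[np.any(kps != 0, axis=1)]
        ax.scatter(labeled[:, 0], labeled[:, 1], c="red", s=15)
        ax.axis("off")
    plt.show()


sample_keys = list(json_dict.keys())[:4]
images, keypoints = zip(*[get_dog(key) for key in sample_keys])
visualize_keypoints(list(images), list(keypoints))
```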
The plots show that we have images of non-uniform sizes, which is expected in most real-world scenarios. However, if we resize these images to have a uniform shape (for instance (224 x 224)), their ground-truth annotations will also be affected. The same applies if we apply any geometric transformation (a horizontal flip, for example) to an image. Fortunately, imgaug provides utilities that can handle this issue. In the next section, we will write a data generator inheriting from the keras.utils.Sequence class that applies data augmentation on batches of data using imgaug.
Prepare data generator
To learn more about how to work with keypoints in imgaug, check out this document.
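A sketch of such a generator, assuming the get_dog helper and the constants defined earlier; the coordinates are scaled to [0, 1], which will match the sigmoid output of the model built later:

```python
class KeyPointsDataset(keras.utils.Sequence):
    def __init__(self, image_keys, aug, batch_size=BATCH_SIZE, train=True):
        super().__init__()
        self.image_keys = image_keys
        self.aug = aug
        self.batch_size = batch_size
        self.train = train
        self.on_epoch_end()

    def __len__(self):
        return len(self.image_keys) // self.batch_size

    def on_epoch_end(self):
        self.indexes = np.arange(len(self.image_keys))
        if self.train:
            np.random.shuffle(self.indexes)

    def __getitem__(self, index):
        indexes = self.indexes[index * self.batch_size : (index + 1) * self.batch_size]
        keys = [self.image_keys[k] for k in indexes]
        return self.__data_generation(keys)

    def __data_generation(self, keys):
        batch_images = np.empty((self.batch_size, IMG_SIZE, IMG_SIZE, 3), dtype="int")
        batch_keypoints = np.empty(
            (self.batch_size, 1, 1, NUM_KEYPOINTS), dtype="float32"
        )

        for i, key in enumerate(keys):
            image, keypoints = get_dog(key)

            # Wrap the raw coordinates so imgaug can transform them
            # together with the image.
            kps = KeypointsOnImage(
                [Keypoint(x=x, y=y) for (x, y) in keypoints], shape=image.shape
            )

            # Apply the augmentation pipeline to the image and its keypoints.
            new_image, new_kps = self.aug(image=image, keypoints=kps)
            batch_images[i,] = new_image

            # Flatten the transformed coordinates into a single vector.
            kp_temp = []
            for keypoint in new_kps:
                kp_temp.append(np.nan_to_num(keypoint.x))
                kp_temp.append(np.nan_to_num(keypoint.y))

            # More on why this reshape matters in the model-building section.
            batch_keypoints[i, :] = np.array(kp_temp).reshape(1, 1, NUM_KEYPOINTS)

        # Scale the coordinates to the [0, 1] range.
        return batch_images, batch_keypoints / IMG_SIZE
```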
Define augmentation transforms
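A sketch of the two pipelines; the specific transforms and probabilities are assumptions you can tune. Only the training pipeline applies geometric augmentation, while both resize to the common input size:

```python
train_aug = iaa.Sequential(
    [
        iaa.Resize(IMG_SIZE, interpolation="linear"),
        iaa.Fliplr(0.3),
        # Sometimes() applies the wrapped augmenter to an input
        # with the given probability (0.3, in this case).
        iaa.Sometimes(0.3, iaa.Affine(rotate=10, scale=(0.5, 0.7))),
    ]
)

test_aug = iaa.Sequential([iaa.Resize(IMG_SIZE, interpolation="linear")])
```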
Create training and validation splits
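A sketch of a simple shuffled split, assuming roughly 10% of the samples are held out for validation:

```python
np.random.seed(42)
samples = list(json_dict.keys())
np.random.shuffle(samples)

num_train = int(len(samples) * 0.9)  # assumed 90/10 split
train_keys, validation_keys = samples[:num_train], samples[num_train:]
```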
Data generator investigation
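To sanity-check the generator, instantiate it for both splits, pull one batch, and reuse the visualization utility (rescaling the [0, 1] coordinates back to pixels first):

```python
train_dataset = KeyPointsDataset(train_keys, train_aug)
validation_dataset = KeyPointsDataset(validation_keys, test_aug, train=False)

print(f"Total batches in training set: {len(train_dataset)}")
print(f"Total batches in validation set: {len(validation_dataset)}")

sample_images, sample_keypoints = train_dataset[0]
print(sample_images.shape, sample_keypoints.shape)

# Undo the [0, 1] scaling before plotting.
sample_kps = sample_keypoints[:4].reshape(-1, 24, 2) * IMG_SIZE
visualize_keypoints(sample_images[:4], sample_kps)
```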
Model building
The Stanford dogs dataset (on which the StanfordExtra dataset is based) was built using the ImageNet-1k dataset. So, it is likely that the models pretrained on the ImageNet-1k dataset would be useful for this task. We will use a MobileNetV2 pre-trained on this dataset as a backbone to extract meaningful features from the images and then pass those to a custom regression head for predicting coordinates.
Our custom network is fully convolutional, which makes it more parameter-friendly than an equivalent network with fully-connected dense layers.
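A sketch of such a model under those assumptions: a frozen MobileNetV2 backbone followed by a small separable-convolution head, with a sigmoid output to match the [0, 1]-scaled coordinates from the generator:

```python
def get_model():
    # Load the pre-trained weights of MobileNetV2 and freeze the backbone.
    backbone = keras.applications.MobileNetV2(
        weights="imagenet", include_top=False, input_shape=(IMG_SIZE, IMG_SIZE, 3)
    )
    backbone.trainable = False

    inputs = layers.Input((IMG_SIZE, IMG_SIZE, 3))
    x = keras.applications.mobilenet_v2.preprocess_input(inputs)
    x = backbone(x)
    x = layers.Dropout(0.3)(x)
    # Two separable convolutions reduce the 7x7 feature map to 1x1
    # while regressing all 48 coordinate values.
    x = layers.SeparableConv2D(
        NUM_KEYPOINTS, kernel_size=5, strides=1, activation="relu"
    )(x)
    outputs = layers.SeparableConv2D(
        NUM_KEYPOINTS, kernel_size=3, strides=1, activation="sigmoid"
    )(x)

    return keras.Model(inputs, outputs, name="keypoint_detector")


get_model().summary()
```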
Notice the output shape of the network: (None, 1, 1, 48). This is why we have reshaped the coordinates as: batch_keypoints[i, :] = np.array(kp_temp).reshape(1, 1, 24 * 2).
Model compilation and training
For this example, we will train the network only for five epochs.
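A sketch of the training step, assuming mean squared error on the normalized coordinates and an Adam optimizer with a modest learning rate:

```python
model = get_model()
model.compile(loss="mse", optimizer=keras.optimizers.Adam(1e-4))
model.fit(train_dataset, validation_data=validation_dataset, epochs=EPOCHS)
```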
Make predictions and visualize them
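A sketch of the inference step: take one validation batch, predict, and rescale the [0, 1] outputs back to pixel coordinates before plotting:

```python
sample_val_images, sample_val_keypoints = validation_dataset[0]
sample_val_images = sample_val_images[:4]
sample_val_keypoints = sample_val_keypoints[:4].reshape(-1, 24, 2) * IMG_SIZE

predictions = model.predict(sample_val_images).reshape(-1, 24, 2) * IMG_SIZE

# Ground-truth keypoints.
visualize_keypoints(sample_val_images, sample_val_keypoints)
# Predicted keypoints.
visualize_keypoints(sample_val_images, predictions)
```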
Predictions will likely improve with more training.
Going further
Try using other augmentation transforms from imgaug to investigate how that changes the results.
Here, we transferred the features from the pre-trained network linearly, that is, we did not fine-tune it. You are encouraged to fine-tune it on this task and see if that improves the performance. You can also try different architectures and see how they affect the final performance.