Object Detection with KerasHub
Authors: Sachin Prasad, Siva Sravana Kumar Neeli
Date created: 2026/03/27
Last modified: 2026/03/27
Description: RetinaNet Object Detection: Training, Fine-tuning, and Inference.

Introduction
Object detection is a crucial computer vision task that goes beyond simple image classification. It requires models to not only identify the types of objects present in an image but also pinpoint their locations using bounding boxes. This dual requirement of classification and localization makes object detection a more complex and powerful tool. Object detection models are broadly classified into two categories: "two-stage" and "single-stage" detectors. Two-stage detectors often achieve higher accuracy by first proposing regions of interest and then classifying them. However, this approach can be computationally expensive. Single-stage detectors, on the other hand, aim for speed by directly predicting object classes and bounding boxes in a single pass.
In this tutorial, we'll be diving into RetinaNet, a powerful object detection model known for its speed and precision. RetinaNet is a single-stage detector, a design choice that allows it to be remarkably efficient. Its impressive performance stems from two key architectural innovations:
Feature Pyramid Network (FPN): FPN equips RetinaNet with the ability to seamlessly detect objects of all scales, from distant, tiny instances to large, prominent ones.
Focal Loss: This ingenious loss function tackles the common challenge of imbalanced data by focusing the model's learning on the most crucial and challenging object examples, leading to enhanced accuracy without compromising speed.
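To make the focal loss idea concrete, here is a minimal NumPy sketch of the standard binary formulation, FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t), with the commonly cited defaults gamma=2 and alpha=0.25. This is an illustration of the loss itself, not KerasHub's internal implementation.

```python
import numpy as np

def focal_loss(y_true, p, gamma=2.0, alpha=0.25):
    """Binary focal loss per example.

    y_true: 0/1 labels; p: predicted probability of the positive class.
    Easy examples (p_t near 1) are down-weighted by (1 - p_t) ** gamma,
    so the many easy background anchors stop dominating the gradient.
    """
    p_t = np.where(y_true == 1, p, 1.0 - p)
    alpha_t = np.where(y_true == 1, alpha, 1.0 - alpha)
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)

# An easy positive (p=0.9) contributes far less loss than a hard one (p=0.1).
easy = focal_loss(np.array([1]), np.array([0.9]))
hard = focal_loss(np.array([1]), np.array([0.1]))
```

Note that setting gamma=0 recovers plain alpha-weighted cross-entropy, which is a quick way to sanity-check the implementation.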

References
Setup and Imports
Let's install the dependencies and import the necessary modules.
To run this tutorial, you will need to install the following packages:
keras-hub
keras
opencv-python
Helper functions
We download the Pascal VOC 2012 and 2007 datasets using these helper functions, prepare them for the object detection task, and split them into training and validation datasets.
Load the dataset
Let's load the training data. Here, we load both the VOC 2007 and 2012 datasets and split them into training and validation sets.
Inference using a pre-trained object detector
Let's begin with the simplest KerasHub API: a pre-trained object detector. In this example, we will construct an object detector that was pre-trained on the COCO dataset. We'll use this model to detect objects in a sample image.
The highest-level module in KerasHub is a task. A task is a keras.Model consisting of a (generally pre-trained) backbone model and task-specific layers. Here's an example using keras_hub.models.ImageObjectDetector with the RetinaNet model architecture and ResNet50 as the backbone.
ResNet is a great starting model when constructing an image classification pipeline. This architecture manages to achieve high accuracy while using a relatively small number of parameters. If a ResNet isn't powerful enough for the task you are hoping to solve, be sure to check out KerasHub's other available backbones at https://keras.io/keras_hub/presets/.
Preprocessing Layers
Let's define the below preprocessing layers:
Resizing Layer: Resizes the image while maintaining the aspect ratio by applying padding when pad_to_aspect_ratio=True. It also sets the default bounding box format used to represent the data.
Max Bounding Box Layer: Limits the maximum number of bounding boxes per image.
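The geometry behind aspect-ratio-preserving resizing can be sketched in a few lines of plain Python: scale the image so it fits inside the target, then pad the remainder. This is a sketch of the arithmetic, assuming the layer scales by the smaller of the two ratios and pads the short side; the hypothetical helper below is for illustration, not a KerasHub API.

```python
def resize_with_pad_geometry(h, w, target_h, target_w):
    """Compute the scaled size and padding used when resizing an (h, w)
    image to (target_h, target_w) while preserving aspect ratio:
    scale so the image fits inside the target, then pad the short side."""
    scale = min(target_h / h, target_w / w)
    new_h, new_w = int(round(h * scale)), int(round(w * scale))
    pad_h, pad_w = target_h - new_h, target_w - new_w
    return (new_h, new_w), (pad_h, pad_w)

# A 500x375 image resized to 800x800: scaled to 800x600, then padded
# with 200 pixels of width so no content is stretched or cropped.
size, pad = resize_with_pad_geometry(500, 375, 800, 800)
```

Because the image is only scaled and padded (never stretched), bounding boxes stay aligned after applying the same scale factor.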
Predict and Visualize
Next, let's obtain predictions from our object detector by loading the image and visualizing them. We'll apply the preprocessing pipeline defined in the preprocessing layers step.
Fine-tuning a pretrained object detector
In this guide, we'll assemble a full training pipeline for a KerasHub RetinaNet object detection model. This includes data loading, augmentation, training, and inference using the Pascal VOC 2007 & 2012 datasets.
TFDS Preprocessing
This preprocessing step prepares the TFDS dataset for object detection. It includes:
Merging the Pascal VOC 2007 and 2012 datasets.
Resizing all images to a resolution of 800x800 pixels.
Limiting the number of bounding boxes per image to a maximum of 100.
Finally, the resulting dataset is batched into sets of 4 images and bounding box annotations.
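The "limit to 100 boxes" step can be sketched in NumPy: truncate any excess boxes and pad the rest with a sentinel value so every image in a batch has the same label shape. This is an illustrative sketch (the -1 sentinel and helper name are assumptions), not the exact KerasHub preprocessing code.

```python
import numpy as np

MAX_BOXES = 100

def pad_boxes(boxes, classes, max_boxes=MAX_BOXES):
    """Truncate or pad (N, 4) boxes and (N,) classes to a fixed max_boxes,
    using -1 as a padding sentinel so padded entries can be masked out
    by the loss later."""
    boxes = boxes[:max_boxes]
    classes = classes[:max_boxes]
    n = boxes.shape[0]
    padded_boxes = np.full((max_boxes, 4), -1.0)
    padded_classes = np.full((max_boxes,), -1.0)
    padded_boxes[:n] = boxes
    padded_classes[:n] = classes
    return padded_boxes, padded_classes

# An image with 3 ground-truth boxes becomes a fixed (100, 4) array.
b, c = pad_boxes(np.zeros((3, 4)), np.zeros((3,)))
```

Fixed-shape labels are what make it possible to batch images with different numbers of objects into a single dense tensor.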
Now, concatenate the 2007 and 2012 VOC datasets.
Load the eval data
Let's visualize a batch of training data
Decode TFDS records to a tuple for KerasHub
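The decoding step boils down to mapping each TFDS record dict into the (image, labels) tuple the KerasHub task expects, where labels is a dict with "boxes" and "classes" keys. The field names below ("image", "objects", "bbox", "label") follow the usual TFDS VOC layout but should be treated as assumptions for illustration.

```python
def to_keras_hub_format(record):
    """Map a TFDS-style record dict to the (image, labels) tuple
    expected by a KerasHub object detection task."""
    image = record["image"]
    labels = {
        "boxes": record["objects"]["bbox"],
        "classes": record["objects"]["label"],
    }
    return image, labels

# A toy record with one box of class 7 (the image is a placeholder string).
sample = {"image": "IMG", "objects": {"bbox": [[0, 0, 1, 1]], "label": [7]}}
img, lab = to_keras_hub_format(sample)
```

In the real pipeline this function would be applied with dataset.map(...) so every element is converted lazily as it is read.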
Configure RetinaNet Model
Configure the model with the backbone, num_classes, and preprocessor. Use callbacks for recording logs and saving checkpoints.
Load backbone weights and preprocessor config
Let's use the "retinanet_resnet50_fpn_coco" pretrained weights as the backbone model, applying its predefined configuration from the preprocessor of the "retinanet_resnet50_fpn_coco" preset. Define a RetinaNet object detector model with the backbone and preprocessor specified above, and set num_classes to 20 to represent the object categories from Pascal VOC. Finally, compile the model using Mean Absolute Error (MAE) as the box loss.
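To see what an MAE box loss measures, here is a NumPy sketch: the mean absolute difference over the four box coordinates, averaged only over positive (non-background) anchors selected by a mask. This is a conceptual illustration of the loss term, not the exact reduction KerasHub performs internally.

```python
import numpy as np

def mae_box_loss(y_true, y_pred, mask):
    """Mean absolute error over the 4 box coordinates, averaged over
    positive anchors only (mask selects non-background matches)."""
    per_box = np.abs(y_true - y_pred).mean(axis=-1)   # (num_anchors,)
    return (per_box * mask).sum() / np.maximum(mask.sum(), 1.0)

# Two positive anchors: the first prediction is exact, the second is
# off by 1 on two coordinates, giving a per-box error of 0.5.
y_true = np.array([[0., 0., 1., 1.], [0., 0., 2., 2.]])
y_pred = np.array([[0., 0., 1., 1.], [0., 0., 1., 1.]])
mask = np.array([1.0, 1.0])
loss = mae_box_loss(y_true, y_pred, mask)
```

Masking out background anchors matters: without it, the overwhelming number of negatives would drag the average toward zero regardless of box quality.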
Train the model
Now that the object detector model is compiled, let's train it using the training and validation data we created earlier. For demonstration purposes, we have used a small number of epochs. You can increase the number of epochs to achieve better results.
Note: The model is trained on an L4 GPU. Training for 5 epochs on a T4 GPU takes approximately 7 hours.
Prediction on evaluation data
Let's make predictions using our model on the evaluation dataset.
Plot the predictions
Custom training object detector
Additionally, you can customize the object detector by modifying the image converter, selecting a different image encoder, etc.
Image Converter
The RetinaNetImageConverter class prepares images for use with the RetinaNet object detection model. Here's what it does:
Scaling and Offsetting
ImageNet Normalization
Resizing
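The scaling/offset and ImageNet normalization steps can be sketched in NumPy using the standard ImageNet channel statistics. The exact order of operations inside RetinaNetImageConverter is an assumption here; this sketch shows the common convention of scaling uint8 pixels to [0, 1] and then normalizing per channel.

```python
import numpy as np

IMAGENET_MEAN = np.array([0.485, 0.456, 0.406])
IMAGENET_STD = np.array([0.229, 0.224, 0.225])

def normalize(image_uint8):
    """Scale uint8 pixels to [0, 1] (scaling with a zero offset),
    then apply per-channel ImageNet mean/std normalization."""
    x = image_uint8.astype("float32") / 255.0
    return (x - IMAGENET_MEAN) / IMAGENET_STD

# A pure-white 2x2 image: every channel becomes (1 - mean) / std.
img = np.full((2, 2, 3), 255, dtype=np.uint8)
out = normalize(img)
```

Matching the normalization used during pre-training is essential; feeding raw [0, 255] pixels into an ImageNet-pretrained backbone would shift its inputs far outside the distribution it was trained on.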
Image Encoder and RetinaNet Backbone
The image encoder, while typically initialized with pre-trained weights (e.g., from ImageNet), can also be instantiated without them. This results in the image encoder (and, consequently, the entire object detection network built upon it) having randomly initialized weights.
Here we load a pre-trained ResNet50 model, which will serve as the base for extracting image features.
We then build the RetinaNet Feature Pyramid Network (FPN) on top of the ResNet50 backbone. The FPN creates multi-scale feature maps for better object detection at different sizes.
Note: use_p5: If True, the output of the last backbone layer (typically P5 in an FPN) is used as input to create higher-level feature maps (e.g., P6, P7) through additional convolutional layers. If False, the original P5 feature map from the backbone is directly used as input for creating the coarser levels, bypassing any further processing of P5 within the feature pyramid. Defaults to False.
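Whichever input use_p5 selects for the extra levels, the spatial sizes of the pyramid follow directly from the strides: level P_l has stride 2**l. A small sketch of this arithmetic for an 800x800 input (the helper name is hypothetical, for illustration only):

```python
def fpn_level_sizes(image_size=800, levels=(3, 4, 5, 6, 7)):
    """Spatial size of each FPN level for a square input.

    Level l has stride 2**l, so an 800x800 input yields 100x100 at P3
    down to 7x7 at P7 (ceil division handles non-divisible sizes)."""
    sizes = {}
    for l in levels:
        stride = 2 ** l
        sizes[f"P{l}"] = -(-image_size // stride)  # ceil division
    return sizes

sizes = fpn_level_sizes(800)
```

The fine levels (P3, P4) carry the anchors that catch small objects, while the coarse levels (P6, P7) cover very large ones, which is why the FPN matters for detecting objects across scales.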
Train and visualize RetinaNet model
Note: We train the model for only 5 epochs for demonstration purposes. In a real scenario, you would train for many more epochs (often hundreds) to achieve good results.
Conclusion
In this tutorial, you learned how to custom train and fine-tune the RetinaNet object detector.
You can experiment with different existing backbones trained on ImageNet as the image encoder, or you can fine-tune your own backbone.
This configuration is equivalent to training the model from scratch, as opposed to fine-tuning a pre-trained model.
Training from scratch generally requires significantly more data and computational resources to achieve performance comparable to fine-tuning.
To achieve better results when fine-tuning the model, you can increase the number of epochs and experiment with different hyperparameter values. In addition to the training data used here, you can also use other object detection datasets, but keep in mind that custom training these requires high GPU memory.