Call for code example contributions
This is a constantly updated list of code examples that we're currently interested in.
If you're not sure whether your idea would make a good code example, please ask us first!
Structured data examples featuring Keras Preprocessing Layers (KPL)
E.g. feature hashing, feature indexing with handling of missing values, mixing numerical, categorical, and text features, doing feature engineering with KPL, etc.
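As a rough illustration of the kind of preprocessing such an example could demonstrate, here is a minimal sketch (the feature names, vocabulary, and toy statistics are illustrative assumptions):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Numerical feature: normalize with statistics adapted from (toy) data.
age_input = keras.Input(shape=(1,), name="age")
normalizer = layers.Normalization()
normalizer.adapt(np.array([[22.0], [35.0], [58.0]]))
age_encoded = normalizer(age_input)

# Categorical string feature: index against a vocabulary; the default
# out-of-vocabulary bucket absorbs missing or unseen values.
color_input = keras.Input(shape=(1,), dtype="string", name="color")
color_encoded = layers.StringLookup(
    vocabulary=["red", "green", "blue"], output_mode="one_hot"
)(color_input)

# High-cardinality string feature: hash into a fixed number of bins.
user_input = keras.Input(shape=(1,), dtype="string", name="user_id")
user_encoded = layers.Hashing(num_bins=16, output_mode="one_hot")(user_input)

# Concatenate everything into a single feature vector for the model head.
features = layers.concatenate([age_encoded, color_encoded, user_encoded])
outputs = layers.Dense(1, activation="sigmoid")(features)
model = keras.Model([age_input, color_input, user_input], outputs)
```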
Transformer model for MIDI music generation
Reference TF/Keras implementation
Text-to-image
A text-to-image model in the style of Imagen, using a frozen BERT encoder from KerasHub and a multi-stage diffusion model.
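As a rough sketch of the text-encoder half of this idea (the preset name "bert_base_en_uncased" is an assumption, and the diffusion stages themselves are omitted):

```python
import keras_hub

# Pretrained BERT backbone from KerasHub, frozen so that only the
# diffusion stages would be trained. The preset name is an assumption.
text_encoder = keras_hub.models.BertBackbone.from_preset("bert_base_en_uncased")
text_encoder.trainable = False

# The encoder's per-token "sequence_output" would then condition each
# diffusion stage, e.g. via cross-attention.
```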
Text-to-speech
Example TF2/Keras implementation
Learning to rank
DETR: End-to-End Object Detection with Transformers
3D image segmentation
Question answering from structured knowledge base and freeform documents
Instance segmentation
EEG & MEG signal classification
Text summarization
Audio track separation
Audio style transfer
Timeseries imputation
Customer lifetime value prediction
Keras reproducibility recipes
Standalone Mixture-of-Experts (MoE) layer
MoE layers provide a flexible way to scale deep models so they can train on larger datasets. The aim of this example should be to show how to replace regular layers (such as `Dense` or `Conv2D`) with compatible MoE layers; a sketch follows the references below.
References:
A relevant paper on MoE: "Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer" (https://arxiv.org/abs/1701.06538)
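For illustration, here is a minimal sketch of such a drop-in replacement for `Dense`. For simplicity it uses dense softmax gating over a handful of experts, whereas the paper above uses sparse (top-k) gating; the layer name `MoEDense` is hypothetical:

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers


class MoEDense(layers.Layer):
    """Mixture of `num_experts` Dense layers, combined by a learned gate."""

    def __init__(self, units, num_experts=4, **kwargs):
        super().__init__(**kwargs)
        self.experts = [
            layers.Dense(units, activation="relu") for _ in range(num_experts)
        ]
        self.gate = layers.Dense(num_experts, activation="softmax")

    def call(self, inputs):
        # Per-example mixing weights over experts: shape (batch, num_experts).
        gate_weights = self.gate(inputs)
        # Expert outputs stacked along a new axis: shape (batch, num_experts, units).
        expert_outputs = tf.stack([expert(inputs) for expert in self.experts], axis=1)
        # Weighted combination over the expert axis: shape (batch, units).
        return tf.einsum("be,beu->bu", gate_weights, expert_outputs)


# Usage: swap a `layers.Dense(64, activation="relu")` for `MoEDense(64)`.
inputs = keras.Input(shape=(32,))
outputs = layers.Dense(1)(MoEDense(64, num_experts=4)(inputs))
model = keras.Model(inputs, outputs)
```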
Guide to reporting the efficiency of a Keras model
It's often important to report the efficiency of a model. But which factors should be included when reporting the efficiency of a deep learning model? The paper "The Efficiency Misnomer" discusses this thoroughly and provides guidelines for practitioners on how to properly report model efficiency.
The objectives of this guide will include the following:
Which factors to consider when reporting model efficiency.
How to calculate metrics such as FLOPs and the number of examples a model can process per second (in both training and inference modes); a minimal measurement sketch follows this list.
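As a starting point, here is a minimal sketch measuring two of these quantities, total parameter count and inference throughput, on a toy model (the architecture, batch size, and iteration count are illustrative assumptions; FLOPs counting is left to the guide itself):

```python
import time

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Toy model; any Keras model works here.
model = keras.Sequential(
    [keras.Input(shape=(128,)), layers.Dense(256, activation="relu"), layers.Dense(10)]
)
print("Total parameters:", model.count_params())

batch_size = 256
batch = np.random.rand(batch_size, 128).astype("float32")

model.predict(batch, verbose=0)  # warm-up, so one-time tracing cost is excluded
num_batches = 50
start = time.perf_counter()
for _ in range(num_batches):
    model.predict(batch, verbose=0)
elapsed = time.perf_counter() - start
print(f"Inference throughput: {num_batches * batch_size / elapsed:.1f} examples/sec")
```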