MLP for image classification using PyTorch
In this section, we follow Chap. 7 of the Deep Learning with PyTorch book and illustrate how to fit an MLP to a two-class version of CIFAR-10. (We modify the code from here.)
Get the CIFAR dataset
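A minimal sketch of this step using torchvision; `data_path` is an arbitrary local directory of our choosing, not a name from the original notebook:

```python
from torchvision import datasets

data_path = "./data"  # any local directory will do
# Download the training and validation splits of CIFAR-10.
cifar10 = datasets.CIFAR10(data_path, train=True, download=True)
cifar10_val = datasets.CIFAR10(data_path, train=False, download=True)
```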
Convert to tensors
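One way to do this is to pass `transforms.ToTensor()`, so each PIL image is returned as a 3x32x32 float tensor with values in [0, 1] (variable names are ours):

```python
from torchvision import datasets, transforms

data_path = "./data"  # assumed local directory
tensor_cifar10 = datasets.CIFAR10(
    data_path, train=True, download=True, transform=transforms.ToTensor()
)
img_t, label = tensor_cifar10[0]
print(img_t.shape, img_t.dtype)  # torch.Size([3, 32, 32]) torch.float32
```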
Standardize the inputs
We standardize the features by computing the mean and std of each channel, averaging across all pixels and all images. This will help optimization.
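A sketch of this computation; the dataset and variable names follow the snippets above rather than the original notebook:

```python
import torch
from torchvision import datasets, transforms

data_path = "./data"  # assumed local directory
tensor_cifar10 = datasets.CIFAR10(
    data_path, train=True, download=True, transform=transforms.ToTensor()
)

# Stack all training images into one (N, 3, 32, 32) tensor and reduce over
# every dimension except the channel dimension.
imgs = torch.stack([img for img, _ in tensor_cifar10])
per_channel = imgs.permute(1, 0, 2, 3).reshape(3, -1)  # (3, N*32*32)
mean = per_channel.mean(dim=1)
std = per_channel.std(dim=1)

# Re-create the datasets with a ToTensor + Normalize pipeline.
normalize = transforms.Compose(
    [transforms.ToTensor(), transforms.Normalize(mean.tolist(), std.tolist())]
)
transformed_cifar10 = datasets.CIFAR10(
    data_path, train=True, download=True, transform=normalize
)
transformed_cifar10_val = datasets.CIFAR10(
    data_path, train=False, download=True, transform=normalize
)
```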
Create two-class version of dataset
We extract the examples corresponding to the airplane and bird classes. The resulting object is a list of (image, label) pairs. This "acts like" an object of type torch.utils.data.Dataset, since it implements the len() and item-indexing methods.
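In CIFAR-10 the relevant class indices are 0 (airplane) and 2 (bird). A sketch, assuming the `transformed_cifar10` / `transformed_cifar10_val` datasets defined in the snippet above:

```python
label_map = {0: 0, 2: 1}          # airplane -> 0, bird -> 1
class_names = ["airplane", "bird"]

cifar2 = [(img, label_map[label])
          for img, label in transformed_cifar10 if label in [0, 2]]
cifar2_val = [(img, label_map[label])
              for img, label in transformed_cifar10_val if label in [0, 2]]

print(len(cifar2))    # works because lists implement len() ...
print(cifar2[0][1])   # ... and integer indexing, like a Dataset
```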
A shallow, fully connected model
We can name the layers so we can access their activations and/or parameters more easily.
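One way to name the layers is to build the `nn.Sequential` from an `OrderedDict`. The sizes (3072 → 512 → 2, with a tanh hidden layer) are one reasonable choice for flattened 32x32x3 inputs and two classes, not necessarily the exact ones used here:

```python
from collections import OrderedDict
import torch.nn as nn

model = nn.Sequential(OrderedDict([
    ("hidden_linear", nn.Linear(3072, 512)),
    ("hidden_activation", nn.Tanh()),
    ("output_linear", nn.Linear(512, 2)),
    ("log_softmax", nn.LogSoftmax(dim=1)),
]))

# Named layers can be accessed as attributes, e.g. to inspect parameters.
print(model.output_linear.weight.shape)
```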
Let's test the model.
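For example, we can take one image from the two-class set, flatten it into a batch of size 1, and pass it through the model (names as defined in the sketches above):

```python
img, label = cifar2[0]                    # a (3, 32, 32) normalized image
out = model(img.view(-1).unsqueeze(0))    # flatten to shape (1, 3072)
print(out)                                # log-probabilities for the 2 classes
```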
Negative log likelihood loss.
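Since the model ends in a `LogSoftmax` layer, its outputs are log-probabilities, which is what `nn.NLLLoss` expects:

```python
import torch
import torch.nn as nn

loss_fn = nn.NLLLoss()
img, label = cifar2[0]
out = model(img.view(-1).unsqueeze(0))      # (1, 2) log-probabilities
loss = loss_fn(out, torch.tensor([label]))  # target is the integer class label
print(loss)
```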
Let's access the output of the logit layer directly, bypassing the final log softmax. (We borrow a trick from here).
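One way to do this (not necessarily the trick referenced above) is to register a forward hook on the named logit layer and capture its output:

```python
activations = {}

def save_logits(module, inputs, output):
    # Store the pre-softmax scores produced by the logit layer.
    activations["output_linear"] = output.detach()

handle = model.output_linear.register_forward_hook(save_logits)
_ = model(img.view(-1).unsqueeze(0))
print(activations["output_linear"])   # raw logits, before log-softmax
handle.remove()                       # remove the hook when done
```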
We can also modify the model to return logits.
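Concretely, we can simply drop the final `LogSoftmax` layer, so the network's output is the raw logits:

```python
from collections import OrderedDict
import torch.nn as nn

model = nn.Sequential(OrderedDict([
    ("hidden_linear", nn.Linear(3072, 512)),
    ("hidden_activation", nn.Tanh()),
    ("output_linear", nn.Linear(512, 2)),
    # no LogSoftmax: the forward pass now returns logits
]))
```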
In this case, we need to modify the loss to take in logits.
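`nn.CrossEntropyLoss` applies the log-softmax internally, so it can be fed logits directly:

```python
import torch
import torch.nn as nn

loss_fn = nn.CrossEntropyLoss()               # log-softmax + NLL in one step
img, label = cifar2[0]
logits = model(img.view(-1).unsqueeze(0))
loss = loss_fn(logits, torch.tensor([label]))
print(loss)
```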
We can also use the functional API to specify the model. This avoids having to create stateless layers (i.e., layers with no adjustable parameters), such as the tanh or softmax layers.
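A sketch of the same model written as an `nn.Module` subclass: only the parameterized `Linear` layers are stored as submodules, while tanh and log-softmax are applied with stateless function calls in `forward` (the sizes again follow the sketches above):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLP(nn.Module):
    def __init__(self, n_in=3072, n_hidden=512, n_out=2):
        super().__init__()
        # Only layers with learnable parameters need to be modules.
        self.hidden_linear = nn.Linear(n_in, n_hidden)
        self.output_linear = nn.Linear(n_hidden, n_out)

    def forward(self, x):
        x = torch.tanh(self.hidden_linear(x))   # stateless activation
        x = self.output_linear(x)
        return F.log_softmax(x, dim=1)          # stateless log-softmax

model = MLP()
out = model(torch.randn(1, 3072))
print(out.shape)                                # torch.Size([1, 2])
```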