Kernel: Python 3 (ipykernel)
Machine Learning with PyTorch and Scikit-Learn
-- Code Examples
Package version checks
Add the folder to the Python path so that the check_packages.py script can be imported:
In [1]:
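A minimal sketch of this step, assuming check_packages.py sits one directory above the notebook:

import sys
sys.path.insert(0, '..')  # assumption: the helper script lives in the parent directory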
Check recommended package versions:
In [2]:
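A sketch of the version check, assuming check_packages.py exposes a check_packages() function that takes a dict of minimum versions; only the scipy and matplotlib minimums appear in the output below, the other values are placeholders:

from check_packages import check_packages  # assumption: module/function names follow the script name

d = {
    'numpy': '1.21.2',       # placeholder minimum
    'scipy': '1.7.0',
    'matplotlib': '3.4.3',
    'torch': '1.8.0',        # placeholder minimum
    'torchvision': '0.9.0',  # placeholder minimum
}
check_packages(d)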
Out[2]:
[OK] Your Python version is 3.8.12 | packaged by conda-forge | (default, Oct 12 2021, 21:59:51)
[GCC 9.4.0]
[OK] numpy 1.22.0
[FAIL] scipy 1.6.3, please upgrade to >= 1.7.0
[FAIL] matplotlib 3.3.4, please upgrade to >= 3.4.3
[OK] torch 1.10.1+cu102
[OK] torchvision 0.11.2+cu102
Chapter 14: Classifying Images with Deep Convolutional Neural Networks (Part 1/2)
Outline
In [1]:
The building blocks of convolutional neural networks
Understanding CNNs and feature hierarchies
In [4]:
Out[4]:
Performing discrete convolutions
Discrete convolutions in one dimension
In [5]:
Out[5]:
In [6]:
Out[6]:
Padding inputs to control the size of the output feature maps
In [7]:
Out[7]:
Determining the size of the convolution output
In [8]:
Out[8]:
PyTorch version: 1.10.1+cu102
NumPy version: 1.22.0
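The spatial size of a convolution output follows o = floor((n + 2p - m) / s) + 1 for input length n, kernel size m, padding p, and stride s. A small illustrative helper (the function name is not from the original code):

def conv1d_output_size(n, m, p=0, s=1):
    # o = floor((n + 2p - m) / s) + 1
    return (n + 2 * p - m) // s + 1

print(conv1d_output_size(n=8, m=5, p=2, s=1))   # 8 -> same length as the input
print(conv1d_output_size(n=8, m=5, p=0, s=1))   # 4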
In [9]:
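A sketch of a naive 1D convolution (rotate the kernel, zero-pad, slide with stride s), checked against numpy.convolve; the example signal x and kernel w are assumptions consistent with the output below:

import numpy as np

def conv1d(x, w, p=0, s=1):
    w_rot = np.array(w[::-1])                    # flip the kernel (convolution, not cross-correlation)
    x_padded = np.array(x, dtype=float)
    if p > 0:
        zero_pad = np.zeros(shape=p)
        x_padded = np.concatenate([zero_pad, x_padded, zero_pad])
    res = []
    for i in range(0, x_padded.shape[0] - w_rot.shape[0] + 1, s):
        res.append(np.sum(x_padded[i:i + w_rot.shape[0]] * w_rot))
    return np.array(res)

x = [1, 3, 2, 4, 5, 6, 1, 3]
w = [1, 0, 3, 1, 2]

print('Conv1d Implementation:', conv1d(x, w, p=2, s=1))
print('Numpy Results:', np.convolve(x, w, mode='same'))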
Out[9]:
Conv1d Implementation: [ 5. 14. 16. 26. 24. 34. 19. 22.]
Numpy Results: [ 5 14 16 26 24 34 19 22]
Performing a discrete convolution in 2D
In [10]:
Out[10]:
In [11]:
Out[11]:
In [12]:
Out[12]:
In [13]:
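A sketch of the same idea in two dimensions (rotate the kernel along both axes, zero-pad, slide), checked against scipy.signal.convolve2d; the example matrix X and kernel W are assumptions consistent with the output below:

import numpy as np
import scipy.signal

def conv2d(X, W, p=(0, 0), s=(1, 1)):
    W_rot = np.array(W)[::-1, ::-1]              # flip the kernel along both axes
    X_orig = np.array(X)
    n1 = X_orig.shape[0] + 2 * p[0]
    n2 = X_orig.shape[1] + 2 * p[1]
    X_padded = np.zeros(shape=(n1, n2))
    X_padded[p[0]:p[0] + X_orig.shape[0],
             p[1]:p[1] + X_orig.shape[1]] = X_orig
    res = []
    for i in range(0, X_padded.shape[0] - W_rot.shape[0] + 1, s[0]):
        res.append([])
        for j in range(0, X_padded.shape[1] - W_rot.shape[1] + 1, s[1]):
            X_sub = X_padded[i:i + W_rot.shape[0], j:j + W_rot.shape[1]]
            res[-1].append(np.sum(X_sub * W_rot))
    return np.array(res)

X = [[1, 3, 2, 4], [5, 6, 1, 3], [1, 2, 0, 2], [3, 4, 3, 2]]
W = [[1, 0, 3], [1, 2, 1], [0, 1, 1]]

print('Conv2d Implementation:\n', conv2d(X, W, p=(1, 1), s=(1, 1)))
print('SciPy Results:\n', scipy.signal.convolve2d(X, W, mode='same'))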
Out[13]:
Conv2d Implementation:
[[11. 25. 32. 13.]
[19. 25. 24. 13.]
[13. 28. 25. 17.]
[11. 17. 14. 9.]]
SciPy Results:
[[11 25 32 13]
[19 25 24 13]
[13 28 25 17]
[11 17 14 9]]
Subsampling layers
In [14]:
Out[14]:
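Pooling layers are available directly in torch.nn; a short sketch with an illustrative 4x4 input:

import torch
import torch.nn as nn

x = torch.tensor([[[[1., 3., 2., 4.],
                    [5., 6., 1., 3.],
                    [1., 2., 0., 2.],
                    [3., 4., 3., 2.]]]])         # shape: (batch=1, channels=1, 4, 4)

max_pool = nn.MaxPool2d(kernel_size=2, stride=2)
avg_pool = nn.AvgPool2d(kernel_size=2, stride=2)

print(max_pool(x))   # 2x2 map of window maxima
print(avg_pool(x))   # 2x2 map of window means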
Putting everything together – implementing a CNN
Working with multiple input or color channels
In [15]:
Out[15]:
TIP: Reading an image file
In [16]:
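A sketch of reading an image into a uint8 tensor with torchvision.io.read_image; the file name and the inspected 2x2 pixel patch are illustrative assumptions:

import torch
from torchvision.io import read_image

img = read_image('example-image.png')       # assumption: an example image file in the working directory

print('Image shape:', img.shape)            # (channels, height, width)
print('Number of channels:', img.shape[0])
print('Image data type:', img.dtype)        # torch.uint8
print(img[:, 100:102, 100:102])             # a small 2x2 patch from each channel (illustrative indices)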
Out[16]:
Image shape: torch.Size([3, 252, 221])
Number of channels: 3
Image data type: torch.uint8
tensor([[[179, 182],
[180, 182]],
[[134, 136],
[135, 137]],
[[110, 112],
[111, 113]]], dtype=torch.uint8)
Regularizing a neural network with L2 regularization and dropout
In [17]:
Out[17]:
In [18]:
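A sketch of the two options discussed here: an L2 penalty, added either manually to the loss or via the optimizer's weight_decay argument, plus a dropout layer; layer sizes and the penalty strength are illustrative:

import torch
import torch.nn as nn

l2_lambda = 0.001

# Option 1: add an L2 penalty on a layer's weights to the loss manually
linear = nn.Linear(10, 16)
loss = torch.tensor(0.5)                     # stand-in for the data loss of a forward pass
l2_penalty = l2_lambda * sum((p ** 2).sum() for p in linear.parameters())
loss_with_penalty = loss + l2_penalty

# Option 2: let the optimizer apply weight decay (L2 regularization) to all parameters
optimizer = torch.optim.SGD(linear.parameters(), lr=0.01, weight_decay=l2_lambda)

# Dropout: randomly zeroes activations with probability p during training only
dropout_layer = nn.Dropout(p=0.5)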
Loss Functions for Classification
nn.BCELoss() computes the binary cross-entropy from probabilities (the from_logits=False case); its logit counterpart (from_logits=True) is nn.BCEWithLogitsLoss().
nn.CrossEntropyLoss() computes the categorical cross-entropy directly from logits (from_logits=True); starting from probabilities (from_logits=False), take their log and use nn.NLLLoss() instead.
In [2]:
Out[2]:
In [20]:
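A sketch consistent with the values printed below: binary cross-entropy from probabilities (nn.BCELoss) versus from logits (nn.BCEWithLogitsLoss), and categorical cross-entropy from logits (nn.CrossEntropyLoss) versus from log-probabilities (nn.NLLLoss); the example logits and targets are assumptions:

import torch
import torch.nn as nn

# Binary cross-entropy
logits = torch.tensor([0.8])
probas = torch.sigmoid(logits)
target = torch.tensor([1.0])

bce_loss_fn = nn.BCELoss()
bce_logits_loss_fn = nn.BCEWithLogitsLoss()

print(f'BCE (w Probas): {bce_loss_fn(probas, target):.4f}')
print(f'BCE (w Logits): {bce_logits_loss_fn(logits, target):.4f}')

# Categorical cross-entropy
logits = torch.tensor([[1.5, 0.8, 2.1]])
probas = torch.softmax(logits, dim=1)
target = torch.tensor([2])

cce_logits_loss_fn = nn.CrossEntropyLoss()
cce_loss_fn = nn.NLLLoss()

print(f'CCE (w Logits): {cce_logits_loss_fn(logits, target):.4f}')
print(f'CCE (w Probas): {cce_loss_fn(torch.log(probas), target):.4f}')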
Out[20]:
BCE (w Probas): 0.3711
BCE (w Logits): 0.3711
CCE (w Logits): 0.5996
CCE (w Probas): 0.5996
Implementing a deep convolutional neural network using PyTorch
The multilayer CNN architecture
In [21]:
Out[21]:
Loading and preprocessing the data
In [23]:
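A sketch of downloading MNIST via torchvision and splitting off a validation set; the 10,000-example validation size is an assumption:

import torch
import torchvision
from torchvision import transforms
from torch.utils.data import Subset

image_path = './'
transform = transforms.Compose([transforms.ToTensor()])   # PIL images -> float tensors in [0, 1]

mnist_dataset = torchvision.datasets.MNIST(
    root=image_path, train=True, transform=transform, download=True)

mnist_valid_dataset = Subset(mnist_dataset, torch.arange(10000))
mnist_train_dataset = Subset(mnist_dataset, torch.arange(10000, len(mnist_dataset)))

mnist_test_dataset = torchvision.datasets.MNIST(
    root=image_path, train=False, transform=transform, download=False)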
Out[23]:
Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz
Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz to ./MNIST/raw/train-images-idx3-ubyte.gz
Extracting ./MNIST/raw/train-images-idx3-ubyte.gz to ./MNIST/raw
Downloading http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz
Downloading http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz to ./MNIST/raw/train-labels-idx1-ubyte.gz
Extracting ./MNIST/raw/train-labels-idx1-ubyte.gz to ./MNIST/raw
Downloading http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz
Downloading http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz to ./MNIST/raw/t10k-images-idx3-ubyte.gz
Extracting ./MNIST/raw/t10k-images-idx3-ubyte.gz to ./MNIST/raw
Downloading http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz
Downloading http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz to ./MNIST/raw/t10k-labels-idx1-ubyte.gz
Extracting ./MNIST/raw/t10k-labels-idx1-ubyte.gz to ./MNIST/raw
In [24]:
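Continuing the sketch above, the splits are wrapped in DataLoaders; the batch size of 64 is an assumption:

import torch
from torch.utils.data import DataLoader

batch_size = 64
torch.manual_seed(1)

train_dl = DataLoader(mnist_train_dataset, batch_size, shuffle=True)
valid_dl = DataLoader(mnist_valid_dataset, batch_size, shuffle=False)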
Implementing a CNN using the torch.nn module
Configuring CNN layers in PyTorch
Conv2d (torch.nn.Conv2d): out_channels, kernel_size, stride, padding
MaxPool2d (torch.nn.MaxPool2d): kernel_size, stride, padding
Dropout (torch.nn.Dropout): p
Constructing a CNN in PyTorch
In [25]:
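A sketch of the convolutional part of the model; the channel counts and kernel sizes are assumptions chosen to be consistent with the feature-map shape printed below (two 2x2 max-pools reduce 28x28 to 7x7):

import torch
import torch.nn as nn

model = nn.Sequential()
model.add_module('conv1', nn.Conv2d(in_channels=1, out_channels=32, kernel_size=5, padding=2))
model.add_module('relu1', nn.ReLU())
model.add_module('pool1', nn.MaxPool2d(kernel_size=2))
model.add_module('conv2', nn.Conv2d(in_channels=32, out_channels=64, kernel_size=5, padding=2))
model.add_module('relu2', nn.ReLU())
model.add_module('pool2', nn.MaxPool2d(kernel_size=2))

x = torch.ones((4, 1, 28, 28))       # a dummy batch of four grayscale 28x28 images
print(model(x).shape)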
Out[25]:
torch.Size([4, 64, 7, 7])
In [26]:
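Continuing the sketch, flattening the 64x7x7 feature maps yields 64 * 7 * 7 = 3136 features per example, matching the shape below:

model.add_module('flatten', nn.Flatten())

x = torch.ones((4, 1, 28, 28))
print(model(x).shape)                # (4, 3136)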
Out[26]:
torch.Size([4, 3136])
In [27]:
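Continuing the sketch, a classification head with dropout is added; the hidden size of 1024 is an assumption:

model.add_module('fc1', nn.Linear(3136, 1024))
model.add_module('relu3', nn.ReLU())
model.add_module('dropout', nn.Dropout(p=0.5))
model.add_module('fc2', nn.Linear(1024, 10))   # 10 output logits, one per MNIST class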
In [28]:
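A sketch of the loss and optimizer; nn.CrossEntropyLoss operates on logits, and the Adam learning rate is an assumption:

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)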
In [29]:
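A sketch of a 20-epoch training loop that tracks training and validation accuracy, in the spirit of the log below; the helper's exact structure is an assumption and it reuses the model, loaders, loss, and optimizer from the sketches above:

def train(model, num_epochs, train_dl, valid_dl):
    accuracy_hist_train = [0] * num_epochs
    accuracy_hist_valid = [0] * num_epochs
    for epoch in range(num_epochs):
        model.train()
        for x_batch, y_batch in train_dl:
            pred = model(x_batch)
            loss = loss_fn(pred, y_batch)
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
            is_correct = (torch.argmax(pred, dim=1) == y_batch).float()
            accuracy_hist_train[epoch] += is_correct.sum().item()
        accuracy_hist_train[epoch] /= len(train_dl.dataset)

        model.eval()
        with torch.no_grad():
            for x_batch, y_batch in valid_dl:
                pred = model(x_batch)
                is_correct = (torch.argmax(pred, dim=1) == y_batch).float()
                accuracy_hist_valid[epoch] += is_correct.sum().item()
        accuracy_hist_valid[epoch] /= len(valid_dl.dataset)

        print(f'Epoch {epoch + 1} accuracy: {accuracy_hist_train[epoch]:.4f} '
              f'val_accuracy: {accuracy_hist_valid[epoch]:.4f}')
    return accuracy_hist_train, accuracy_hist_valid

torch.manual_seed(1)
hist = train(model, num_epochs=20, train_dl=train_dl, valid_dl=valid_dl)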
Out[29]:
Epoch 1 accuracy: 0.9503 val_accuracy: 0.9802
Epoch 2 accuracy: 0.9837 val_accuracy: 0.9861
Epoch 3 accuracy: 0.9900 val_accuracy: 0.9860
Epoch 4 accuracy: 0.9919 val_accuracy: 0.9902
Epoch 5 accuracy: 0.9932 val_accuracy: 0.9906
Epoch 6 accuracy: 0.9947 val_accuracy: 0.9901
Epoch 7 accuracy: 0.9951 val_accuracy: 0.9895
Epoch 8 accuracy: 0.9954 val_accuracy: 0.9898
Epoch 9 accuracy: 0.9968 val_accuracy: 0.9892
Epoch 10 accuracy: 0.9967 val_accuracy: 0.9899
Epoch 11 accuracy: 0.9971 val_accuracy: 0.9886
Epoch 12 accuracy: 0.9974 val_accuracy: 0.9899
Epoch 13 accuracy: 0.9972 val_accuracy: 0.9900
Epoch 14 accuracy: 0.9980 val_accuracy: 0.9888
Epoch 15 accuracy: 0.9977 val_accuracy: 0.9910
Epoch 16 accuracy: 0.9985 val_accuracy: 0.9900
Epoch 17 accuracy: 0.9983 val_accuracy: 0.9899
Epoch 18 accuracy: 0.9979 val_accuracy: 0.9887
Epoch 19 accuracy: 0.9983 val_accuracy: 0.9894
Epoch 20 accuracy: 0.9979 val_accuracy: 0.9907
In [30]:
Out[30]:
In [31]:
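A sketch of evaluating the trained model on the test set, consistent with the accuracy printed below:

model.eval()
with torch.no_grad():
    # .data bypasses the ToTensor transform, so the uint8 images are rescaled manually
    pred = model(mnist_test_dataset.data.unsqueeze(1) / 255.)
    is_correct = (torch.argmax(pred, dim=1) == mnist_test_dataset.targets).float()
print(f'Test accuracy: {is_correct.mean():.4f}')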
Out[31]:
Test accuracy: 0.9914
In [33]:
Out[33]:
In [34]:
Readers may ignore the next cell.
In [35]:
Out[35]:
[NbConvertApp] Converting notebook ch14_part1.ipynb to script
[NbConvertApp] Writing 13214 bytes to ch14_part1.py
In [ ]: