Kernel: Python 3 (ipykernel)
Machine Learning with PyTorch and Scikit-Learn
-- Code Examples
Package version checks
Add the folder to the Python path so that the check_packages.py script can be imported:
In [1]:
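The code for this cell is not preserved in this export; a minimal sketch of what it likely does, assuming check_packages.py lives one directory up (the relative '..' path is an assumption):

import sys

# Make the helper script importable from the parent folder
# (the exact relative path is an assumption of this sketch).
sys.path.insert(0, '..')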
Check recommended package versions:
In [2]:
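A sketch of the version check, assuming check_packages.py exposes a check_packages function that accepts a dict mapping package names to minimum versions (the module/function name and the minimum versions below are assumptions):

from check_packages import check_packages  # assumed module and function name

d = {
    'numpy': '1.21.2',       # minimum versions here are illustrative
    'matplotlib': '3.4.3',
    'torch': '1.8.0',
    'mlxtend': '0.19.0',
}
check_packages(d)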
Out[2]:
[OK] Your Python version is 3.9.7 | packaged by conda-forge | (default, Sep 29 2021, 19:24:02)
[Clang 11.1.0 ]
[OK] numpy 1.23.1
[OK] matplotlib 3.5.1
[OK] torch 1.12.0
[OK] mlxtend 0.19.0
Chapter 13: Going Deeper -- the Mechanics of PyTorch (Part 1/3)
In [3]:
The key features of PyTorch
PyTorch's computation graphs
Understanding computation graphs
In [4]:
Out[4]:
Creating a graph in PyTorch
In [5]:
In [6]:
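The cell contents were not preserved; a sketch consistent with the printed results below, in which a small function builds the graph z = 2 * (a - b) + c and is then evaluated with scalar, rank-1, and rank-2 inputs (the function name compute_z is an assumption):

import torch

def compute_z(a, b, c):
    r1 = torch.sub(a, b)
    r2 = torch.mul(r1, 2)
    z = torch.add(r2, c)
    return z

# Evaluating the same graph with inputs of different ranks
print('Scalar Inputs:', compute_z(torch.tensor(1), torch.tensor(2), torch.tensor(3)))
print('Rank 1 Inputs:', compute_z(torch.tensor([1]), torch.tensor([2]), torch.tensor([3])))
print('Rank 2 Inputs:', compute_z(torch.tensor([[1]]), torch.tensor([[2]]), torch.tensor([[3]])))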
Out[6]:
Scalar Inputs: tensor(1)
Rank 1 Inputs: tensor([1])
Rank 2 Inputs: tensor([[1]])
PyTorch Tensor objects for storing and updating model parameters
In [7]:
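A sketch consistent with the output below, creating tensors whose gradients PyTorch should track by setting requires_grad=True at construction:

import torch

a = torch.tensor(3.14, requires_grad=True)
b = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)

print(a)
print(b)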
Out[7]:
tensor(3.1400, requires_grad=True)
tensor([1., 2., 3.], requires_grad=True)
In [8]:
Out[8]:
True
In [9]:
Out[9]:
False
In [10]:
Out[10]:
True
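The three preceding cells most likely inspect and toggle the requires_grad flag; a consolidated sketch that would reproduce the True / False / True outputs:

import torch

a = torch.tensor(3.14, requires_grad=True)
print(a.requires_grad)    # True: set explicitly at construction

w = torch.tensor([1.0, 2.0, 3.0])
print(w.requires_grad)    # False: requires_grad defaults to False

w.requires_grad_()        # switch on gradient tracking in place
print(w.requires_grad)    # True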
In [11]:
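A sketch of a typical way to produce the randomly initialized 2x3 weight tensor printed below, using Xavier (Glorot) initialization from nn.init (the seed and the choice of init scheme are assumptions, so the values will not necessarily match):

import torch
import torch.nn as nn

torch.manual_seed(1)
w = torch.empty(2, 3)        # uninitialized 2x3 weight tensor
nn.init.xavier_normal_(w)    # fill in place with Xavier-normal values
print(w)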
Out[11]:
tensor([[ 0.4183, 0.1688, 0.0390],
[ 0.3930, -0.2858, -0.1051]])
In [12]:
Computing gradients via automatic differentiation
Computing the gradients of the loss with respect to trainable variables
In [13]:
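A sketch consistent with the printed gradients below: a single linear unit z = w*x + b with a squared-error loss, where loss.backward() populates w.grad and b.grad (the specific values w=1.0, b=0.5, x=1.4, y=2.1 are assumptions that happen to reproduce dL/dw = -0.56 and dL/db = -0.4):

import torch

w = torch.tensor(1.0, requires_grad=True)
b = torch.tensor(0.5, requires_grad=True)
x = torch.tensor([1.4])
y = torch.tensor([2.1])

z = torch.add(torch.mul(w, x), b)    # forward pass: z = w*x + b
loss = torch.sum((y - z).pow(2))     # squared-error loss
loss.backward()                      # backpropagate through the graph

print('dL/dw :', w.grad)
print('dL/db :', b.grad)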
Out[13]:
dL/dw : tensor(-0.5600)
dL/db : tensor(-0.4000)
In [14]:
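This cell presumably verifies the gradient analytically. Reusing w, b, x, and y from the sketch above, the derivative of the squared-error loss is dL/dw = 2 * x * (w*x + b - y), which can be checked directly:

# Analytical gradient check: matches the value reported by autograd above
print(2 * x * ((w * x + b) - y))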
Out[14]:
tensor([-0.5600], grad_fn=<MulBackward0>)
Simplifying implementations of common architectures via the torch.nn module
Implementing models based on nn.Sequential
In [15]:
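A sketch that reconstructs the model printed below from the layer sizes shown in the output:

import torch.nn as nn

model = nn.Sequential(
    nn.Linear(4, 16),   # fully connected layer: 4 inputs -> 16 units
    nn.ReLU(),
    nn.Linear(16, 32),  # fully connected layer: 16 -> 32 units
    nn.ReLU()
)
print(model)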
Out[15]:
Sequential(
(0): Linear(in_features=4, out_features=16, bias=True)
(1): ReLU()
(2): Linear(in_features=16, out_features=32, bias=True)
(3): ReLU()
)
Configuring layers
Initializers
nn.init: https://pytorch.org/docs/stable/nn.init.html
L1 Regularizers
nn.L1Loss: https://pytorch.org/docs/stable/generated/torch.nn.L1Loss.html#torch.nn.L1Loss
L2 Regularizers
weight_decay: https://pytorch.org/docs/stable/optim.html
Activations: https://pytorch.org/docs/stable/nn.html#non-linear-activations-weighted-sum-nonlinearity
In [16]:
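The cell most likely demonstrates the configuration options listed above on the Sequential model from the previous section; a sketch, assuming Xavier-uniform re-initialization of the first layer's weights and a manually computed L1 penalty on the second Linear layer (the choice of layers and the 0.01 coefficient are assumptions):

import torch.nn as nn

# Re-initialize the first Linear layer's weights with Xavier-uniform values
nn.init.xavier_uniform_(model[0].weight)

# An L1 penalty can be computed manually and added to the loss during training
l1_weight = 0.01
l1_penalty = l1_weight * model[2].weight.abs().sum()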
Compiling a model
Optimizers
torch.optim: https://pytorch.org/docs/stable/optim.html#algorithms
Loss Functions
torch.nn loss functions: https://pytorch.org/docs/stable/nn.html#loss-functions
In [17]:
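In place of a Keras-style compile step, PyTorch pairs a loss function with an optimizer explicitly; a sketch reusing the model defined above and assuming binary cross-entropy with plain SGD (the learning rate is an assumption):

import torch
import torch.nn as nn

loss_fn = nn.BCELoss()                                      # binary cross-entropy loss
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)   # SGD over the model's parameters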
Solving an XOR classification problem
In [18]:
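A sketch of generating a toy XOR dataset: 200 points drawn uniformly from [-1, 1]^2, labeled 1 when both coordinates share the same sign (the sample size, seed, and train/validation split are assumptions):

import numpy as np
import torch

np.random.seed(1)
torch.manual_seed(1)

x = np.random.uniform(low=-1, high=1, size=(200, 2))
y = np.ones(len(x))
y[x[:, 0] * x[:, 1] < 0] = 0    # label 0 when the coordinates have opposite signs

n_train = 100
x_train = torch.tensor(x[:n_train, :], dtype=torch.float32)
y_train = torch.tensor(y[:n_train], dtype=torch.float32)
x_valid = torch.tensor(x[n_train:, :], dtype=torch.float32)
y_valid = torch.tensor(y[n_train:], dtype=torch.float32)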
Out[18]:
In [19]:
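A sketch of wrapping the training split from the previous sketch in a TensorDataset and DataLoader (the batch size of 2 is an assumption):

from torch.utils.data import DataLoader, TensorDataset

train_ds = TensorDataset(x_train, y_train)                   # pairs features with labels
train_dl = DataLoader(train_ds, batch_size=2, shuffle=True)  # shuffled mini-batches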
In [20]:
Out[20]:
Sequential(
(0): Linear(in_features=2, out_features=1, bias=True)
(1): Sigmoid()
)
In [21]:
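A sketch of a simple training loop that records per-epoch training and validation loss/accuracy; it reuses loss_fn and the DataLoader from the sketches above and creates an optimizer for the XOR model printed above (the function name train, the learning rate, and the epoch count are assumptions):

import torch

optimizer = torch.optim.SGD(model.parameters(), lr=0.001)  # optimizer bound to the XOR model

def train(model, num_epochs, train_dl, x_valid, y_valid):
    loss_hist_train = [0] * num_epochs
    accuracy_hist_train = [0] * num_epochs
    loss_hist_valid = [0] * num_epochs
    accuracy_hist_valid = [0] * num_epochs
    for epoch in range(num_epochs):
        for x_batch, y_batch in train_dl:
            pred = model(x_batch)[:, 0]
            loss = loss_fn(pred, y_batch)
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
            loss_hist_train[epoch] += loss.item()
            is_correct = ((pred >= 0.5).float() == y_batch).float()
            accuracy_hist_train[epoch] += is_correct.mean().item()
        loss_hist_train[epoch] /= len(train_dl)
        accuracy_hist_train[epoch] /= len(train_dl)

        # Evaluate on the held-out validation split after each epoch
        pred = model(x_valid)[:, 0]
        loss = loss_fn(pred, y_valid)
        loss_hist_valid[epoch] = loss.item()
        is_correct = ((pred >= 0.5).float() == y_valid).float()
        accuracy_hist_valid[epoch] = is_correct.mean().item()
    return loss_hist_train, loss_hist_valid, accuracy_hist_train, accuracy_hist_valid

history = train(model, num_epochs=200, train_dl=train_dl,
                x_valid=x_valid, y_valid=y_valid)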
In [22]:
In [23]:
Out[23]:
Text(0.5, 0, 'Epochs')
In [24]:
Out[24]:
Sequential(
(0): Linear(in_features=2, out_features=4, bias=True)
(1): ReLU()
(2): Linear(in_features=4, out_features=4, bias=True)
(3): ReLU()
(4): Linear(in_features=4, out_features=1, bias=True)
(5): Sigmoid()
)
In [25]:
In [26]:
Out[26]:
Text(0.5, 0, 'Epochs')
Making model building more flexible with nn.Module
In [27]:
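A sketch of the same multilayer architecture rebuilt as an nn.Module subclass that stores its layers in an nn.ModuleList, consistent with the structure printed below (the class name MyModule comes from that printout; the rest is an assumption):

import torch
import torch.nn as nn

class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        l1 = nn.Linear(2, 4)
        a1 = nn.ReLU()
        l2 = nn.Linear(4, 4)
        a2 = nn.ReLU()
        l3 = nn.Linear(4, 1)
        a3 = nn.Sigmoid()
        # ModuleList registers each layer so its parameters are tracked
        self.module_list = nn.ModuleList([l1, a1, l2, a2, l3, a3])

    def forward(self, x):
        for f in self.module_list:
            x = f(x)
        return x

model = MyModule()
print(model)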
Out[27]:
MyModule(
(module_list): ModuleList(
(0): Linear(in_features=2, out_features=4, bias=True)
(1): ReLU()
(2): Linear(in_features=4, out_features=4, bias=True)
(3): ReLU()
(4): Linear(in_features=4, out_features=1, bias=True)
(5): Sigmoid()
)
)
In [28]:
In [29]:
In [30]:
Out[30]:
Writing custom layers in PyTorch
In [31]:
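A sketch of a custom layer that adds Gaussian noise to its input during training and otherwise behaves like a linear layer, consistent with the NoisyLinear module printed further below (the parameter names and the noise_stddev default are assumptions):

import torch
import torch.nn as nn

class NoisyLinear(nn.Module):
    def __init__(self, input_size, output_size, noise_stddev=0.1):
        super().__init__()
        w = torch.Tensor(input_size, output_size)
        self.w = nn.Parameter(w)                # trainable weight matrix
        nn.init.xavier_uniform_(self.w)
        b = torch.Tensor(output_size).fill_(0)
        self.b = nn.Parameter(b)                # trainable bias, initialized to zero
        self.noise_stddev = noise_stddev

    def forward(self, x, training=False):
        if training:
            # Perturb the input with Gaussian noise only during training
            noise = torch.normal(0.0, self.noise_stddev, x.shape)
            x_new = torch.add(x, noise)
        else:
            x_new = x
        return torch.add(torch.mm(x_new, self.w), self.b)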
In [32]:
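A sketch of exercising the layer on a zero input, reusing the NoisyLinear sketch above, which would explain the pattern in the output below: two different non-zero results in training mode (random noise) and exact zeros in evaluation mode (zero input times the weights plus a zero bias). The specific numbers depend on the seed and will not necessarily match:

torch.manual_seed(1)

noisy_layer = NoisyLinear(4, 2)
x = torch.zeros((1, 4))

print(noisy_layer(x, training=True))   # noise applied -> non-zero output
print(noisy_layer(x, training=True))   # different noise -> different output
print(noisy_layer(x, training=False))  # no noise -> exactly zero output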
Out[32]:
tensor([[ 0.1154, -0.0598]], grad_fn=<AddBackward0>)
tensor([[ 0.0432, -0.0375]], grad_fn=<AddBackward0>)
tensor([[0., 0.]], grad_fn=<AddBackward0>)
In [33]:
Out[33]:
MyNoisyModule(
(l1): NoisyLinear()
(a1): ReLU()
(l2): Linear(in_features=4, out_features=4, bias=True)
(a2): ReLU()
(l3): Linear(in_features=4, out_features=1, bias=True)
(a3): Sigmoid()
)
In [34]:
In [35]:
Out[35]:
Readers may ignore the next cell.
In [36]:
Out[36]:
[NbConvertApp] Converting notebook ch13_part1.ipynb to script
[NbConvertApp] Writing 14084 bytes to ch13_part1.py