Kernel: Python 3 (ipykernel)
Machine Learning with PyTorch and Scikit-Learn
-- Code Examples
Package version checks
Add the folder containing the check_packages.py helper script to the Python path so that it can be imported:
In [1]:
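A minimal sketch of what this setup cell typically contains, assuming check_packages.py sits one directory above the notebook (the relative path is an assumption):
import sys
# Assumption: check_packages.py lives in the parent directory of this notebook.
sys.path.insert(0, '..')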
Check recommended package versions:
In [2]:
Out[2]:
[OK] Your Python version is 3.8.8 | packaged by conda-forge | (default, Feb 20 2021, 16:22:27)
[GCC 9.3.0]
[OK] numpy 1.23.0
[OK] scipy 1.8.1
[OK] sklearn 1.1.1
[OK] matplotlib 3.5.2
[OK] torch 1.11.0+cu102
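A sketch of the check itself, assuming check_packages.py exposes a check_packages() function that accepts a dict of minimum required versions; both the helper's API and the minimum version numbers below are assumptions, not the original cell:
from check_packages import check_packages

# Assumed helper API: report [OK]/[FAIL] for each package depending on whether
# the installed version meets the stated minimum (version numbers are placeholders).
d = {
    'numpy': '1.21.2',
    'scipy': '1.7.0',
    'sklearn': '1.0',
    'matplotlib': '3.4.3',
    'torch': '1.8.0',
}
check_packages(d)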
Chapter 12: Parallelizing Neural Network Training with PyTorch (Part 2/2)
Note that the optional watermark extension is a small IPython notebook plugin that I developed to document the package versions used, which helps make the code reproducible. You can safely skip the following line(s).
In [3]:
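A sketch of a typical watermark call; the exact flags and package list used in the original cell are an assumption:
%load_ext watermark
# -p prints the installed versions of the listed packages.
%watermark -p numpy,scipy,matplotlib,torch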
Building a neural network model in PyTorch
The PyTorch neural network module (torch.nn)
Building a linear regression model
In [4]:
In [5]:
Out[5]:
In [6]:
In [7]:
Out[7]:
Epoch 0 Loss 45.0782
Epoch 10 Loss 26.4366
Epoch 20 Loss 1.5918
Epoch 30 Loss 14.1307
Epoch 40 Loss 11.6038
Epoch 50 Loss 6.3084
Epoch 60 Loss 0.6349
Epoch 70 Loss 3.1374
Epoch 80 Loss 1.9999
Epoch 90 Loss 0.3133
Epoch 100 Loss 0.7653
Epoch 110 Loss 1.0039
Epoch 120 Loss 0.0235
Epoch 130 Loss 0.5176
Epoch 140 Loss 0.0759
Epoch 150 Loss 1.8789
Epoch 160 Loss 0.0008
Epoch 170 Loss 0.0866
Epoch 180 Loss 0.0646
Epoch 190 Loss 0.0011
In [8]:
Out[8]:
Final Parameters: 2.6696107387542725 4.879678249359131
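A minimal sketch of the kind of training loop that produces the log above, with the model, loss, and stochastic-gradient update written out by hand. The toy dataset, learning rate, and seed are assumptions, so the printed numbers will not match the run above exactly:
import numpy as np
import torch
from torch.utils.data import TensorDataset, DataLoader

# Assumed toy regression data: ten points with a roughly linear trend.
X_train = np.arange(10, dtype='float32').reshape((10, 1))
y_train = np.array([1.0, 1.3, 3.1, 2.0, 5.0, 6.3, 6.6, 7.4, 8.0, 9.0],
                   dtype='float32')

# Standardize the inputs and wrap the data in a DataLoader.
X_train_norm = (X_train - np.mean(X_train)) / np.std(X_train)
train_ds = TensorDataset(torch.from_numpy(X_train_norm),
                         torch.from_numpy(y_train))
torch.manual_seed(1)
train_dl = DataLoader(train_ds, batch_size=1, shuffle=True)

# Model parameters, created manually instead of via torch.nn.
weight = torch.randn(1, requires_grad=True)
bias = torch.zeros(1, requires_grad=True)

def model(xb):
    # xb has shape (batch_size, 1); the result has shape (batch_size,)
    return xb @ weight + bias

def loss_fn(input, target):
    # Mean squared error
    return (input - target).pow(2).mean()

learning_rate = 0.001
num_epochs = 200
log_epochs = 10

for epoch in range(num_epochs):
    for x_batch, y_batch in train_dl:
        pred = model(x_batch)
        loss = loss_fn(pred, y_batch)
        loss.backward()
        with torch.no_grad():
            # Manual SGD step, then reset the gradients for the next batch.
            weight -= weight.grad * learning_rate
            bias -= bias.grad * learning_rate
            weight.grad.zero_()
            bias.grad.zero_()
    if epoch % log_epochs == 0:
        print(f'Epoch {epoch}  Loss {loss.item():.4f}')

print('Final Parameters:', weight.item(), bias.item())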
Model training via the torch.nn and torch.optim modules
In [9]:
Out[9]:
Epoch 0 Loss 24.6684
Epoch 10 Loss 29.1377
Epoch 20 Loss 20.9207
Epoch 30 Loss 0.1257
Epoch 40 Loss 12.4922
Epoch 50 Loss 1.7845
Epoch 60 Loss 7.6425
Epoch 70 Loss 2.5606
Epoch 80 Loss 0.0157
Epoch 90 Loss 0.7548
Epoch 100 Loss 0.8412
Epoch 110 Loss 0.4923
Epoch 120 Loss 0.0823
Epoch 130 Loss 0.0794
Epoch 140 Loss 0.0891
Epoch 150 Loss 0.0973
Epoch 160 Loss 0.1043
Epoch 170 Loss 0.1103
Epoch 180 Loss 0.0009
Epoch 190 Loss 0.0764
In [10]:
Out[10]:
Final Parameters: 2.6496422290802 4.87706995010376
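The same model expressed with torch.nn and torch.optim: nn.Linear supplies the parameters, nn.MSELoss the loss, and an SGD optimizer handles the update step. Again a sketch, reusing the assumed data pipeline (train_dl) from the previous sketch:
import torch.nn as nn

loss_fn = nn.MSELoss(reduction='mean')

input_size = 1
output_size = 1
model = nn.Linear(input_size, output_size)
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)

num_epochs = 200
log_epochs = 10

for epoch in range(num_epochs):
    for x_batch, y_batch in train_dl:
        # nn.Linear returns shape (batch_size, 1); drop the trailing dimension.
        pred = model(x_batch)[:, 0]
        loss = loss_fn(pred, y_batch)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
    if epoch % log_epochs == 0:
        print(f'Epoch {epoch}  Loss {loss.item():.4f}')

print('Final Parameters:', model.weight.item(), model.bias.item())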
Building a multilayer perceptron for classifying flowers in the Iris dataset
In [11]:
In [12]:
In [13]:
In [14]:
In [15]:
Out[15]:
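A sketch of the data preparation and of the two-layer network whose structure is printed further below (4 input features, 16 hidden units, 3 output classes). The train/test split ratio, hidden activation, and optimizer settings are assumptions:
import numpy as np
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

iris = load_iris()
X, y = iris['data'], iris['target']

# Assumed split: one third of the 150 samples held out for testing.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=1. / 3, random_state=1)

# Standardize features using training-set statistics only.
X_train_norm = (X_train - np.mean(X_train, axis=0)) / np.std(X_train, axis=0)
train_ds = TensorDataset(torch.from_numpy(X_train_norm).float(),
                         torch.from_numpy(y_train))
torch.manual_seed(1)
train_dl = DataLoader(train_ds, batch_size=2, shuffle=True)

class Model(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super().__init__()
        self.layer1 = nn.Linear(input_size, hidden_size)
        self.layer2 = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        x = torch.sigmoid(self.layer1(x))   # hidden activation (an assumption)
        return self.layer2(x)               # raw logits for CrossEntropyLoss

model = Model(input_size=4, hidden_size=16, output_size=3)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

num_epochs = 100
for epoch in range(num_epochs):
    for x_batch, y_batch in train_dl:
        loss = loss_fn(model(x_batch), y_batch)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()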
Evaluating the trained model on the test dataset
In [16]:
Out[16]:
Test Acc.: 0.9800
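Evaluation sketch, reusing the model and the training-set statistics from the previous sketch; under the assumed 50-sample test split, an accuracy of 0.9800 would correspond to 49 correctly classified flowers:
# Standardize the test set with the training-set mean/std, then compare
# argmax predictions against the true labels.
X_test_norm = (X_test - np.mean(X_train, axis=0)) / np.std(X_train, axis=0)
X_test_norm = torch.from_numpy(X_test_norm).float()
y_test_t = torch.from_numpy(y_test)

pred_test = model(X_test_norm)
correct = (torch.argmax(pred_test, dim=1) == y_test_t).float()
print(f'Test Acc.: {correct.mean().item():.4f}')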
Saving and reloading the trained model
In [17]:
In [18]:
Out[18]:
Model(
(layer1): Linear(in_features=4, out_features=16, bias=True)
(layer2): Linear(in_features=16, out_features=3, bias=True)
)
In [19]:
Out[19]:
Test Acc.: 0.9800
In [20]:
In [21]:
Out[21]:
<All keys matched successfully>
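A sketch of the save/reload round trip shown above (file names are placeholders). Saving the whole model pickles the class together with its parameters; saving only the state_dict stores just the learned tensors, which are then loaded into a freshly constructed Model of the same architecture, producing the '<All keys matched successfully>' message:
# Save and reload the complete model object (architecture + parameters).
path = 'iris_classifier.pt'   # placeholder file name
torch.save(model, path)
model_new = torch.load(path)
print(model_new)

# The reloaded model reaches the same test accuracy as before.
pred_test = model_new(X_test_norm)
correct = (torch.argmax(pred_test, dim=1) == y_test_t).float()
print(f'Test Acc.: {correct.mean().item():.4f}')

# Alternatively, save only the learned parameters (the state_dict) ...
path = 'iris_classifier_state.pt'   # placeholder file name
torch.save(model.state_dict(), path)

# ... and load them into a newly instantiated model of the same architecture.
model_new = Model(input_size=4, hidden_size=16, output_size=3)
model_new.load_state_dict(torch.load(path))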
Choosing activation functions for multilayer neural networks
Logistic function recap
In [22]:
Out[22]:
P(y=1|x) = 0.888
In [23]:
Out[23]:
Net Input:
[1.78 0.76 1.65]
Output Units:
[0.85569687 0.68135373 0.83889105]
In [24]:
Out[24]:
Predicted class label: 0
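A sketch consistent with the three outputs above: a single logistic output unit, then a three-unit output layer where each unit applies a logistic activation to its net input and the predicted class is the unit with the largest activation. The example inputs and weights are chosen so that they reproduce the printed numbers, but they are a reconstruction rather than the original cell contents:
import numpy as np

def net_input(X, w):
    return np.dot(X, w)

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_activation(X, w):
    return logistic(net_input(X, w))

# Single output unit: these example values yield P(y=1|x) = 0.888.
X = np.array([1, 1.4, 2.5])   # the first value is the bias unit
w = np.array([0.4, 0.3, 0.5])
print(f'P(y=1|x) = {logistic_activation(X, w):.3f}')

# Three output units, each with its own weight vector (rows of W).
W = np.array([[1.1, 1.2, 0.8, 0.4],
              [0.2, 0.4, 1.0, 0.2],
              [0.6, 1.5, 1.2, 0.7]])
A = np.array([1, 0.1, 0.4, 0.6])   # hidden activations incl. bias unit

Z = np.dot(W, A)
y_probas = logistic(Z)
print('Net Input: \n', Z)
print('Output Units:\n', y_probas)

# The predicted class label is the unit with the largest activation.
y_class = np.argmax(Z, axis=0)
print('Predicted class label:', y_class)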
Estimating class probabilities in multiclass classification via the softmax function
In [25]:
Out[25]:
Probabilities:
[0.44668973 0.16107406 0.39223621]
1.0
In [26]:
Out[26]:
tensor([0.4467, 0.1611, 0.3922], dtype=torch.float64)
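The softmax function turns the same net input vector into a proper probability distribution that sums to 1; a sketch with a NumPy implementation and the built-in torch.softmax for comparison:
import numpy as np
import torch

Z = np.array([1.78, 0.76, 1.65])   # net input from the previous sketch

def softmax(z):
    return np.exp(z) / np.sum(np.exp(z))

y_probas = softmax(Z)
print('Probabilities:\n', y_probas)
print(np.sum(y_probas))

# The PyTorch equivalent operates directly on a tensor.
torch.softmax(torch.from_numpy(Z), dim=0)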
Broadening the output spectrum using a hyperbolic tangent
In [27]:
Out[27]:
In [28]:
Out[28]:
array([-0.9999092 , -0.99990829, -0.99990737, ..., 0.99990644,
0.99990737, 0.99990829])
In [29]:
Out[29]:
tensor([-0.9999, -0.9999, -0.9999, ..., 0.9999, 0.9999, 0.9999],
dtype=torch.float64)
In [30]:
Out[30]:
array([0.00669285, 0.00672617, 0.00675966, ..., 0.99320669, 0.99324034,
0.99327383])
In [31]:
Out[31]:
tensor([0.0067, 0.0067, 0.0068, ..., 0.9932, 0.9932, 0.9933],
dtype=torch.float64)
Rectified linear unit activation
In [32]:
Out[32]:
tensor([0.0000, 0.0000, 0.0000, ..., 4.9850, 4.9900, 4.9950],
dtype=torch.float64)
In [33]:
Out[33]:
Summary
Readers may ignore the next cell.
In [34]:
Out[34]:
[NbConvertApp] Converting notebook ch12_part2.ipynb to script
[NbConvertApp] Writing 12165 bytes to ch12_part2.py