GitHub Repository: rasbt/machine-learning-book
Path: blob/main/ch17/ch17_part2.ipynb
Kernel: Python 3 (ipykernel)

Machine Learning with PyTorch and Scikit-Learn

-- Code Examples

Package version checks

Add the parent folder to the Python path so that the check_packages.py script can be imported:

import sys

sys.path.insert(0, '..')

Check recommended package versions:

from python_environment_check import check_packages

d = {
    'torch': '1.8.0',
    'torchvision': '0.9.0',
    'numpy': '1.21.2',
    'matplotlib': '3.4.3',
}
check_packages(d)
[OK] torch 1.10.1
[OK] torchvision 0.11.2
[OK] numpy 1.21.5
[OK] matplotlib 3.5.1

Chapter 17 - Generative Adversarial Networks for Synthesizing New Data (Part 2/2)

Note that the optional watermark extension is a small IPython notebook plugin that I developed to help make the code reproducible by recording the versions of the packages used. You can just skip the following line(s).

%load_ext watermark
%watermark -a "Sebastian Raschka, Yuxi (Hayden) Liu & Vahid Mirjalili" -u -d -p numpy,scipy,matplotlib,torch
Author: Sebastian Raschka, Yuxi (Hayden) Liu & Vahid Mirjalili

Last updated: 2021-12-27

numpy     : 1.21.5
scipy     : 1.7.3
matplotlib: 3.5.1
torch     : 1.10.1
from IPython.display import Image
%matplotlib inline

Improving the quality of synthesized images using a convolutional and Wasserstein GAN

Transposed convolution

Image(filename='figures/17_09.png', width=700)
Image in a Jupyter notebook
Image(filename='figures/17_10.png', width=700)
Image in a Jupyter notebook
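As a quick illustration of the shape arithmetic behind transposed convolution (a minimal sketch, not part of the book code): a ConvTranspose2d layer with stride 2, kernel size 4, and padding 1 roughly doubles the spatial size, which is how the generator below grows a latent vector step by step into a 28×28 image. The layer sizes here are hypothetical and chosen only for the demonstration.

import torch
import torch.nn as nn

# Hypothetical example: a stride-2 transposed convolution upsamples 7x7 -> 14x14
x = torch.zeros(1, 16, 7, 7)   # (batch, channels, height, width)
upsample = nn.ConvTranspose2d(16, 8, kernel_size=4, stride=2, padding=1)
print(upsample(x).shape)       # torch.Size([1, 8, 14, 14])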

Batch normalization

Image(filename='figures/17_11.png', width=700)
Image in a Jupyter notebook
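To make the figure above concrete, here is a minimal sketch (not from the book) of what BatchNorm2d does in training mode: each channel is standardized with the mini-batch mean and variance before the learnable scale and shift are applied, so the per-channel statistics of the output are approximately 0 and 1.

import torch
import torch.nn as nn

torch.manual_seed(0)
bn = nn.BatchNorm2d(3)                        # one (gamma, beta) pair per channel
x = torch.randn(8, 3, 28, 28) * 5.0 + 2.0     # batch with shifted and scaled activations
out = bn(x)                                   # training mode: uses batch statistics

# Per-channel mean is ~0 and std is ~1 after normalization
print(out.mean(dim=(0, 2, 3)))
print(out.std(dim=(0, 2, 3)))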

Implementing the generator and discriminator

Image(filename='figures/17_12.png', width=700)
Image in a Jupyter notebook
Image(filename='figures/17_13.png', width=700)
Image in a Jupyter notebook
  • Setting up Google Colab

#from google.colab import drive
#drive.mount('/content/drive/')
import torch

print(torch.__version__)
print("GPU Available:", torch.cuda.is_available())

if torch.cuda.is_available():
    device = torch.device("cuda:0")
else:
    device = "cpu"
1.10.0+cu113
GPU Available: True
import torch.nn as nn
import numpy as np
import matplotlib.pyplot as plt

%matplotlib inline

Train the DCGAN model

import torchvision
from torchvision import transforms

image_path = './'

transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.5), std=(0.5))
])

mnist_dataset = torchvision.datasets.MNIST(
    root=image_path, train=True,
    transform=transform, download=False)

batch_size = 64

torch.manual_seed(1)
np.random.seed(1)

## Set up the dataset
from torch.utils.data import DataLoader

mnist_dl = DataLoader(mnist_dataset, batch_size=batch_size,
                      shuffle=True, drop_last=True)
def make_generator_network(input_size, n_filters):
    model = nn.Sequential(
        nn.ConvTranspose2d(input_size, n_filters*4, 4, 1, 0, bias=False),
        nn.BatchNorm2d(n_filters*4),
        nn.LeakyReLU(0.2),
        nn.ConvTranspose2d(n_filters*4, n_filters*2, 3, 2, 1, bias=False),
        nn.BatchNorm2d(n_filters*2),
        nn.LeakyReLU(0.2),
        nn.ConvTranspose2d(n_filters*2, n_filters, 4, 2, 1, bias=False),
        nn.BatchNorm2d(n_filters),
        nn.LeakyReLU(0.2),
        nn.ConvTranspose2d(n_filters, 1, 4, 2, 1, bias=False),
        nn.Tanh())
    return model


class Discriminator(nn.Module):
    def __init__(self, n_filters):
        super().__init__()
        self.network = nn.Sequential(
            nn.Conv2d(1, n_filters, 4, 2, 1, bias=False),
            nn.LeakyReLU(0.2),
            nn.Conv2d(n_filters, n_filters*2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(n_filters*2),
            nn.LeakyReLU(0.2),
            nn.Conv2d(n_filters*2, n_filters*4, 3, 2, 1, bias=False),
            nn.BatchNorm2d(n_filters*4),
            nn.LeakyReLU(0.2),
            nn.Conv2d(n_filters*4, 1, 4, 1, 0, bias=False),
            nn.Sigmoid())

    def forward(self, input):
        output = self.network(input)
        return output.view(-1, 1).squeeze(0)
z_size = 100
image_size = (28, 28)
n_filters = 32

gen_model = make_generator_network(z_size, n_filters).to(device)
print(gen_model)

disc_model = Discriminator(n_filters).to(device)
print(disc_model)
Sequential(
  (0): ConvTranspose2d(100, 128, kernel_size=(4, 4), stride=(1, 1), bias=False)
  (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (2): LeakyReLU(negative_slope=0.2)
  (3): ConvTranspose2d(128, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
  (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (5): LeakyReLU(negative_slope=0.2)
  (6): ConvTranspose2d(64, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
  (7): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (8): LeakyReLU(negative_slope=0.2)
  (9): ConvTranspose2d(32, 1, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
  (10): Tanh()
)
Discriminator(
  (network): Sequential(
    (0): Conv2d(1, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (1): LeakyReLU(negative_slope=0.2)
    (2): Conv2d(32, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (4): LeakyReLU(negative_slope=0.2)
    (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
    (6): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (7): LeakyReLU(negative_slope=0.2)
    (8): Conv2d(128, 1, kernel_size=(4, 4), stride=(1, 1), bias=False)
    (9): Sigmoid()
  )
)
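As an optional sanity check (not part of the original notebook), we can pass a batch of noise vectors through the still-untrained networks to confirm the shapes the transposed-convolution stack produces: the generator maps a (batch, 100, 1, 1) latent tensor to a (batch, 1, 28, 28) image, and the discriminator maps that image back to one probability per example.

# Hypothetical shape check using the models and variables defined above
with torch.no_grad():
    z = torch.randn(batch_size, z_size, 1, 1, device=device)
    fake_images = gen_model(z)
    print(fake_images.shape)               # expected: torch.Size([64, 1, 28, 28])
    print(disc_model(fake_images).shape)   # expected: torch.Size([64, 1])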
## Loss function and optimizers:
loss_fn = nn.BCELoss()
g_optimizer = torch.optim.Adam(gen_model.parameters(), 0.0003)
d_optimizer = torch.optim.Adam(disc_model.parameters(), 0.0002)
def create_noise(batch_size, z_size, mode_z):
    if mode_z == 'uniform':
        input_z = torch.rand(batch_size, z_size, 1, 1)*2 - 1
    elif mode_z == 'normal':
        input_z = torch.randn(batch_size, z_size, 1, 1)
    return input_z
## Train the discriminator
def d_train(x):
    disc_model.zero_grad()

    # Train discriminator with a real batch
    batch_size = x.size(0)
    x = x.to(device)
    d_labels_real = torch.ones(batch_size, 1, device=device)
    d_proba_real = disc_model(x)
    d_loss_real = loss_fn(d_proba_real, d_labels_real)

    # Train discriminator on a fake batch
    input_z = create_noise(batch_size, z_size, mode_z).to(device)
    g_output = gen_model(input_z)
    d_proba_fake = disc_model(g_output)
    d_labels_fake = torch.zeros(batch_size, 1, device=device)
    d_loss_fake = loss_fn(d_proba_fake, d_labels_fake)

    # gradient backprop & optimize ONLY D's parameters
    d_loss = d_loss_real + d_loss_fake
    d_loss.backward()
    d_optimizer.step()

    return d_loss.data.item(), d_proba_real.detach(), d_proba_fake.detach()
## Train the generator
def g_train(x):
    gen_model.zero_grad()

    batch_size = x.size(0)
    input_z = create_noise(batch_size, z_size, mode_z).to(device)
    g_labels_real = torch.ones((batch_size, 1), device=device)

    g_output = gen_model(input_z)
    d_proba_fake = disc_model(g_output)
    g_loss = loss_fn(d_proba_fake, g_labels_real)

    # gradient backprop & optimize ONLY G's parameters
    g_loss.backward()
    g_optimizer.step()

    return g_loss.data.item()
mode_z = 'uniform'
fixed_z = create_noise(batch_size, z_size, mode_z).to(device)


def create_samples(g_model, input_z):
    g_output = g_model(input_z)
    images = torch.reshape(g_output, (batch_size, *image_size))
    return (images+1)/2.0


epoch_samples = []
num_epochs = 100
torch.manual_seed(1)

for epoch in range(1, num_epochs+1):
    gen_model.train()
    d_losses, g_losses = [], []
    for i, (x, _) in enumerate(mnist_dl):
        d_loss, d_proba_real, d_proba_fake = d_train(x)
        d_losses.append(d_loss)
        g_losses.append(g_train(x))

    print(f'Epoch {epoch:03d} | Avg Losses >>'
          f' G/D {torch.FloatTensor(g_losses).mean():.4f}'
          f'/{torch.FloatTensor(d_losses).mean():.4f}')
    gen_model.eval()
    epoch_samples.append(
        create_samples(gen_model, fixed_z).detach().cpu().numpy())
Epoch 001 | Avg Losses >> G/D 4.5246/0.1213
Epoch 002 | Avg Losses >> G/D 4.3550/0.1622
Epoch 003 | Avg Losses >> G/D 3.5067/0.2883
Epoch 004 | Avg Losses >> G/D 3.0862/0.3388
Epoch 005 | Avg Losses >> G/D 2.9113/0.3387
Epoch 006 | Avg Losses >> G/D 2.8452/0.3513
Epoch 007 | Avg Losses >> G/D 2.8693/0.3268
Epoch 008 | Avg Losses >> G/D 2.9476/0.3122
Epoch 009 | Avg Losses >> G/D 2.9538/0.3233
Epoch 010 | Avg Losses >> G/D 2.9754/0.3221
Epoch 011 | Avg Losses >> G/D 3.0405/0.2882
Epoch 012 | Avg Losses >> G/D 3.0717/0.2732
Epoch 013 | Avg Losses >> G/D 3.1362/0.2705
Epoch 014 | Avg Losses >> G/D 3.2441/0.2409
Epoch 015 | Avg Losses >> G/D 3.3397/0.2372
Epoch 016 | Avg Losses >> G/D 3.4194/0.2276
Epoch 017 | Avg Losses >> G/D 3.3906/0.2368
Epoch 018 | Avg Losses >> G/D 3.4867/0.2339
Epoch 019 | Avg Losses >> G/D 3.4793/0.2192
Epoch 020 | Avg Losses >> G/D 3.4953/0.2327
Epoch 021 | Avg Losses >> G/D 3.5337/0.2347
Epoch 022 | Avg Losses >> G/D 3.5944/0.1870
Epoch 023 | Avg Losses >> G/D 3.7159/0.1911
Epoch 024 | Avg Losses >> G/D 3.7075/0.2069
Epoch 025 | Avg Losses >> G/D 3.7487/0.2201
Epoch 026 | Avg Losses >> G/D 3.7958/0.1948
Epoch 027 | Avg Losses >> G/D 3.7065/0.2164
Epoch 028 | Avg Losses >> G/D 3.7537/0.2130
Epoch 029 | Avg Losses >> G/D 3.8666/0.1889
Epoch 030 | Avg Losses >> G/D 3.8544/0.1767
Epoch 031 | Avg Losses >> G/D 3.8471/0.1977
Epoch 032 | Avg Losses >> G/D 3.9974/0.1798
Epoch 033 | Avg Losses >> G/D 3.9846/0.1895
Epoch 034 | Avg Losses >> G/D 4.0168/0.1613
Epoch 035 | Avg Losses >> G/D 4.0456/0.1923
Epoch 036 | Avg Losses >> G/D 4.0748/0.1498
Epoch 037 | Avg Losses >> G/D 4.1228/0.1949
Epoch 038 | Avg Losses >> G/D 4.1116/0.1533
Epoch 039 | Avg Losses >> G/D 4.1364/0.1808
Epoch 040 | Avg Losses >> G/D 4.1411/0.1698
Epoch 041 | Avg Losses >> G/D 4.0890/0.1765
Epoch 042 | Avg Losses >> G/D 4.2081/0.1688
Epoch 043 | Avg Losses >> G/D 4.1475/0.1516
Epoch 044 | Avg Losses >> G/D 4.1919/0.1799
Epoch 045 | Avg Losses >> G/D 4.1874/0.1800
Epoch 046 | Avg Losses >> G/D 4.3176/0.1370
Epoch 047 | Avg Losses >> G/D 4.3111/0.1493
Epoch 048 | Avg Losses >> G/D 4.3623/0.1528
Epoch 049 | Avg Losses >> G/D 4.4283/0.1412
Epoch 050 | Avg Losses >> G/D 4.4012/0.1598
Epoch 051 | Avg Losses >> G/D 4.3176/0.1499
Epoch 052 | Avg Losses >> G/D 4.4296/0.1422
Epoch 053 | Avg Losses >> G/D 4.3227/0.1637
Epoch 054 | Avg Losses >> G/D 4.3717/0.1721
Epoch 055 | Avg Losses >> G/D 4.5143/0.1397
Epoch 056 | Avg Losses >> G/D 4.4612/0.1260
Epoch 057 | Avg Losses >> G/D 4.6134/0.1242
Epoch 058 | Avg Losses >> G/D 4.6475/0.1305
Epoch 059 | Avg Losses >> G/D 4.4129/0.1753
Epoch 060 | Avg Losses >> G/D 4.5356/0.1320
Epoch 061 | Avg Losses >> G/D 4.4809/0.1574
Epoch 062 | Avg Losses >> G/D 4.5545/0.1480
Epoch 063 | Avg Losses >> G/D 4.6331/0.1299
Epoch 064 | Avg Losses >> G/D 4.6304/0.1485
Epoch 065 | Avg Losses >> G/D 4.6701/0.1383
Epoch 066 | Avg Losses >> G/D 4.7063/0.1308
Epoch 067 | Avg Losses >> G/D 4.6042/0.1421
Epoch 068 | Avg Losses >> G/D 4.6230/0.1342
Epoch 069 | Avg Losses >> G/D 4.7576/0.1189
Epoch 070 | Avg Losses >> G/D 4.7010/0.1398
Epoch 071 | Avg Losses >> G/D 4.7855/0.1310
Epoch 072 | Avg Losses >> G/D 4.5200/0.1621
Epoch 073 | Avg Losses >> G/D 4.7661/0.1308
Epoch 074 | Avg Losses >> G/D 4.7582/0.1241
Epoch 075 | Avg Losses >> G/D 4.7590/0.1085
Epoch 076 | Avg Losses >> G/D 4.8761/0.1062
Epoch 077 | Avg Losses >> G/D 4.8262/0.1239
Epoch 078 | Avg Losses >> G/D 4.8433/0.1375
Epoch 079 | Avg Losses >> G/D 4.9040/0.1087
Epoch 080 | Avg Losses >> G/D 4.9725/0.0859
Epoch 081 | Avg Losses >> G/D 4.8008/0.1711
Epoch 082 | Avg Losses >> G/D 4.8954/0.1103
Epoch 083 | Avg Losses >> G/D 4.9791/0.1283
Epoch 084 | Avg Losses >> G/D 4.9569/0.1007
Epoch 085 | Avg Losses >> G/D 4.8485/0.1428
Epoch 086 | Avg Losses >> G/D 5.0601/0.0951
Epoch 087 | Avg Losses >> G/D 4.9004/0.1282
Epoch 088 | Avg Losses >> G/D 4.9365/0.1444
Epoch 089 | Avg Losses >> G/D 4.8809/0.1165
Epoch 090 | Avg Losses >> G/D 5.0286/0.1063
Epoch 091 | Avg Losses >> G/D 4.8509/0.1583
Epoch 092 | Avg Losses >> G/D 5.0228/0.1115
Epoch 093 | Avg Losses >> G/D 5.1098/0.0902
Epoch 094 | Avg Losses >> G/D 5.0027/0.1224
Epoch 095 | Avg Losses >> G/D 5.1134/0.1100
Epoch 096 | Avg Losses >> G/D 5.0539/0.1301
Epoch 097 | Avg Losses >> G/D 5.0903/0.1090
Epoch 098 | Avg Losses >> G/D 5.1078/0.0903
Epoch 099 | Avg Losses >> G/D 5.1667/0.0946
Epoch 100 | Avg Losses >> G/D 5.1004/0.1002
selected_epochs = [1, 2, 4, 10, 50, 100]
fig = plt.figure(figsize=(10, 14))
for i, e in enumerate(selected_epochs):
    for j in range(5):
        ax = fig.add_subplot(6, 5, i*5+j+1)
        ax.set_xticks([])
        ax.set_yticks([])
        if j == 0:
            ax.text(
                -0.06, 0.5, f'Epoch {e}',
                rotation=90, size=18, color='red',
                horizontalalignment='right',
                verticalalignment='center',
                transform=ax.transAxes)

        image = epoch_samples[e-1][j]
        ax.imshow(image, cmap='gray_r')

# plt.savefig('figures/ch17-dcgan-samples.pdf')
plt.show()
Image in a Jupyter notebook

Dissimilarity measures between two distributions

Image(filename='figures/17_14.png', width=700)
Image in a Jupyter notebook
Image(filename='figures/17_15.png', width=800)
Image in a Jupyter notebook
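To make these dissimilarity measures concrete, here is a small numeric sketch (not from the book) that compares two discrete distributions P and Q on the same support with NumPy. The values and support are made up for illustration; the 1-D EM (Wasserstein-1) distance is computed from the difference of the cumulative distributions on a unit-spaced grid.

import numpy as np

# Two example distributions over the support {0, 1, 2, 3}
P = np.array([0.70, 0.20, 0.05, 0.05])
Q = np.array([0.05, 0.05, 0.20, 0.70])

tv = 0.5 * np.abs(P - Q).sum()                  # total variation distance
kl = np.sum(P * np.log(P / Q))                  # KL divergence KL(P || Q), valid since Q > 0
em = np.abs(np.cumsum(P) - np.cumsum(Q)).sum()  # Wasserstein-1 (EM) distance on a unit grid

print(f'TV: {tv:.4f}  KL: {kl:.4f}  EM: {em:.4f}')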

Using EM distance in practice for GANs

Gradient penalty
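In our notation (not copied verbatim from the book), the critic loss with gradient penalty that the code below implements via d_train_wgan and gradient_penalty is

$$\mathcal{L}_D = \mathbb{E}_{\tilde{x}\sim p_g}\big[D(\tilde{x})\big] - \mathbb{E}_{x\sim p_r}\big[D(x)\big] + \lambda\,\mathbb{E}_{\hat{x}}\Big[\big(\lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1\big)^2\Big]$$

where $\hat{x} = \alpha x + (1-\alpha)\tilde{x}$ is a random interpolation between a real and a generated example, and $\lambda$ corresponds to lambda_gp = 10.0 in the training code.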

Implementing WGAN-GP to train the DCGAN model

def make_generator_network_wgan(input_size, n_filters):
    model = nn.Sequential(
        nn.ConvTranspose2d(input_size, n_filters*4, 4, 1, 0, bias=False),
        nn.InstanceNorm2d(n_filters*4),
        nn.LeakyReLU(0.2),
        nn.ConvTranspose2d(n_filters*4, n_filters*2, 3, 2, 1, bias=False),
        nn.InstanceNorm2d(n_filters*2),
        nn.LeakyReLU(0.2),
        nn.ConvTranspose2d(n_filters*2, n_filters, 4, 2, 1, bias=False),
        nn.InstanceNorm2d(n_filters),
        nn.LeakyReLU(0.2),
        nn.ConvTranspose2d(n_filters, 1, 4, 2, 1, bias=False),
        nn.Tanh())
    return model


class DiscriminatorWGAN(nn.Module):
    def __init__(self, n_filters):
        super().__init__()
        self.network = nn.Sequential(
            nn.Conv2d(1, n_filters, 4, 2, 1, bias=False),
            nn.LeakyReLU(0.2),
            nn.Conv2d(n_filters, n_filters*2, 4, 2, 1, bias=False),
            nn.InstanceNorm2d(n_filters*2),
            nn.LeakyReLU(0.2),
            nn.Conv2d(n_filters*2, n_filters*4, 3, 2, 1, bias=False),
            nn.InstanceNorm2d(n_filters*4),
            nn.LeakyReLU(0.2),
            nn.Conv2d(n_filters*4, 1, 4, 1, 0, bias=False),
            nn.Sigmoid())

    def forward(self, input):
        output = self.network(input)
        return output.view(-1, 1).squeeze(0)
gen_model = make_generator_network_wgan(z_size, n_filters).to(device)
disc_model = DiscriminatorWGAN(n_filters).to(device)

g_optimizer = torch.optim.Adam(gen_model.parameters(), 0.0002)
d_optimizer = torch.optim.Adam(disc_model.parameters(), 0.0002)
from torch.autograd import grad as torch_grad


def gradient_penalty(real_data, generated_data):
    batch_size = real_data.size(0)

    # Calculate interpolation
    alpha = torch.rand(real_data.shape[0], 1, 1, 1,
                       requires_grad=True, device=device)
    interpolated = alpha * real_data + (1 - alpha) * generated_data

    # Calculate probability of interpolated examples
    proba_interpolated = disc_model(interpolated)

    # Calculate gradients of probabilities with respect to examples
    gradients = torch_grad(
        outputs=proba_interpolated, inputs=interpolated,
        grad_outputs=torch.ones(proba_interpolated.size(), device=device),
        create_graph=True, retain_graph=True)[0]

    gradients = gradients.view(batch_size, -1)
    gradients_norm = gradients.norm(2, dim=1)
    return lambda_gp * ((gradients_norm - 1)**2).mean()
## Train the discriminator
def d_train_wgan(x):
    disc_model.zero_grad()

    batch_size = x.size(0)
    x = x.to(device)

    # Calculate probabilities on real and generated data
    d_real = disc_model(x)
    input_z = create_noise(batch_size, z_size, mode_z).to(device)
    g_output = gen_model(input_z)
    d_generated = disc_model(g_output)
    d_loss = d_generated.mean() - d_real.mean() \
             + gradient_penalty(x.data, g_output.data)
    d_loss.backward()
    d_optimizer.step()

    return d_loss.data.item()
## Train the generator
def g_train_wgan(x):
    gen_model.zero_grad()

    batch_size = x.size(0)
    input_z = create_noise(batch_size, z_size, mode_z).to(device)

    g_output = gen_model(input_z)
    d_generated = disc_model(g_output)
    g_loss = -d_generated.mean()

    # gradient backprop & optimize ONLY G's parameters
    g_loss.backward()
    g_optimizer.step()

    return g_loss.data.item()
epoch_samples_wgan = []

lambda_gp = 10.0
num_epochs = 100
torch.manual_seed(1)
critic_iterations = 5

for epoch in range(1, num_epochs+1):
    gen_model.train()
    d_losses, g_losses = [], []
    for i, (x, _) in enumerate(mnist_dl):
        for _ in range(critic_iterations):
            d_loss = d_train_wgan(x)
        d_losses.append(d_loss)
        g_losses.append(g_train_wgan(x))

    print(f'Epoch {epoch:03d} | D Loss >>'
          f' {torch.FloatTensor(d_losses).mean():.4f}')
    gen_model.eval()
    epoch_samples_wgan.append(
        create_samples(gen_model, fixed_z).detach().cpu().numpy())
Epoch 001 | D Loss >> -0.5891
Epoch 002 | D Loss >> -0.6277
Epoch 003 | D Loss >> -0.6074
Epoch 004 | D Loss >> -0.6043
Epoch 005 | D Loss >> -0.5765
Epoch 006 | D Loss >> -0.5410
Epoch 007 | D Loss >> -0.5239
Epoch 008 | D Loss >> -0.4946
Epoch 009 | D Loss >> -0.4813
Epoch 010 | D Loss >> -0.4701
Epoch 011 | D Loss >> -0.4644
Epoch 012 | D Loss >> -0.4613
Epoch 013 | D Loss >> -0.4554
Epoch 014 | D Loss >> -0.4508
Epoch 015 | D Loss >> -0.4517
Epoch 016 | D Loss >> -0.4494
Epoch 017 | D Loss >> -0.4498
Epoch 018 | D Loss >> -0.4505
Epoch 019 | D Loss >> -0.4476
Epoch 020 | D Loss >> -0.4534
Epoch 021 | D Loss >> -0.4516
Epoch 022 | D Loss >> -0.4560
Epoch 023 | D Loss >> -0.4531
Epoch 024 | D Loss >> -0.4543
Epoch 025 | D Loss >> -0.4548
Epoch 026 | D Loss >> -0.4536
Epoch 027 | D Loss >> -0.4508
Epoch 028 | D Loss >> -0.4526
Epoch 029 | D Loss >> -0.4547
Epoch 030 | D Loss >> -0.4585
Epoch 031 | D Loss >> -0.4624
Epoch 032 | D Loss >> -0.4575
Epoch 033 | D Loss >> -0.4569
Epoch 034 | D Loss >> -0.4602
Epoch 035 | D Loss >> -0.4634
Epoch 036 | D Loss >> -0.4656
Epoch 037 | D Loss >> -0.4623
Epoch 038 | D Loss >> -0.4629
Epoch 039 | D Loss >> -0.4692
Epoch 040 | D Loss >> -0.4627
Epoch 041 | D Loss >> -0.4696
Epoch 042 | D Loss >> -0.4640
Epoch 043 | D Loss >> -0.4694
Epoch 044 | D Loss >> -0.4657
Epoch 045 | D Loss >> -0.4717
Epoch 046 | D Loss >> -0.4667
Epoch 047 | D Loss >> -0.4726
Epoch 048 | D Loss >> -0.4695
Epoch 049 | D Loss >> -0.4732
Epoch 050 | D Loss >> -0.4656
Epoch 051 | D Loss >> -0.4696
Epoch 052 | D Loss >> -0.4681
Epoch 053 | D Loss >> -0.4673
Epoch 054 | D Loss >> -0.4726
Epoch 055 | D Loss >> -0.4760
Epoch 056 | D Loss >> -0.4732
Epoch 057 | D Loss >> -0.4789
Epoch 058 | D Loss >> -0.4750
Epoch 059 | D Loss >> -0.4715
Epoch 060 | D Loss >> -0.4726
Epoch 061 | D Loss >> -0.4750
Epoch 062 | D Loss >> -0.4770
Epoch 063 | D Loss >> -0.4805
Epoch 064 | D Loss >> -0.4738
Epoch 065 | D Loss >> -0.4745
Epoch 066 | D Loss >> -0.4744
Epoch 067 | D Loss >> -0.4759
Epoch 068 | D Loss >> -0.4729
Epoch 069 | D Loss >> -0.4790
Epoch 070 | D Loss >> -0.4756
Epoch 071 | D Loss >> -0.4816
Epoch 072 | D Loss >> -0.4716
Epoch 073 | D Loss >> -0.4740
Epoch 074 | D Loss >> -0.4742
Epoch 075 | D Loss >> -0.4829
Epoch 076 | D Loss >> -0.4780
Epoch 077 | D Loss >> -0.4805
Epoch 078 | D Loss >> -0.4764
Epoch 079 | D Loss >> -0.4754
Epoch 080 | D Loss >> -0.4743
Epoch 081 | D Loss >> -0.4777
Epoch 082 | D Loss >> -0.4824
Epoch 083 | D Loss >> -0.4813
Epoch 084 | D Loss >> -0.4811
Epoch 085 | D Loss >> -0.4794
Epoch 086 | D Loss >> -0.4782
Epoch 087 | D Loss >> -0.4737
Epoch 088 | D Loss >> -0.4800
Epoch 089 | D Loss >> -0.4820
Epoch 090 | D Loss >> -0.4818
Epoch 091 | D Loss >> -0.4793
Epoch 092 | D Loss >> -0.4786
Epoch 093 | D Loss >> -0.4795
Epoch 094 | D Loss >> -0.4800
Epoch 095 | D Loss >> -0.4808
Epoch 096 | D Loss >> -0.4787
Epoch 097 | D Loss >> -0.4825
Epoch 098 | D Loss >> -0.4768
Epoch 099 | D Loss >> -0.4849
Epoch 100 | D Loss >> -0.4794
selected_epochs = [1, 2, 4, 10, 50, 100]
# selected_epochs = [1, 10, 20, 30, 50, 70]
fig = plt.figure(figsize=(10, 14))
for i, e in enumerate(selected_epochs):
    for j in range(5):
        ax = fig.add_subplot(6, 5, i*5+j+1)
        ax.set_xticks([])
        ax.set_yticks([])
        if j == 0:
            ax.text(
                -0.06, 0.5, f'Epoch {e}',
                rotation=90, size=18, color='red',
                horizontalalignment='right',
                verticalalignment='center',
                transform=ax.transAxes)

        image = epoch_samples_wgan[e-1][j]
        ax.imshow(image, cmap='gray_r')

# plt.savefig('figures/ch17-wgan-gp-samples.pdf')
plt.show()
Image in a Jupyter notebook

Mode collapse

Image(filename='figures/17_16.png', width=600)
Image in a Jupyter notebook



Readers may ignore the next cell.

! python ../.convert_notebook_to_script.py --input ch17_part2.ipynb --output ch17_part2.py
[NbConvertApp] WARNING | Config option `kernel_spec_manager_class` not recognized by `NbConvertApp`.
[NbConvertApp] Converting notebook ch17_part2.ipynb to script
[NbConvertApp] Writing 13766 bytes to ch17_part2.py