Path: blob/master/notebooks/book1/20/ae_mnist_gdl_tf.ipynb
Kernel: Python 3
Autoencoder on MNIST using a 2d latent space
We fit a convolutional autoencoder with a 2d latent space to MNIST. The code is based on chapter 3 of David Foster's book, Generative Deep Learning: https://github.com/davidADSP/GDL_code/. We have gathered all the necessary code into a single notebook and modified it to work with TF 2.0.
In [ ]:
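# The code cells in this export were not preserved, so the cells below are
# hedged reconstructions, not the original source. This one holds the likely
# imports; the seed values are our own choice.
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers, models

np.random.seed(0)
tf.random.set_seed(0)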
In [ ]:
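# Load MNIST with the built-in Keras loader (an assumption; any loader
# returning the raw 28x28 digit arrays would work here).
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()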
In [ ]:
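# Scale pixels to [0, 1] and add a trailing channel axis; the shape printed
# below, (60000, 28, 28, 1), is what this cell reproduces.
x_train = x_train.astype("float32") / 255.0
x_test = x_test.astype("float32") / 255.0
x_train = x_train[..., np.newaxis]
x_test = x_test[..., np.newaxis]
print(x_train.shape)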
(60000, 28, 28, 1)
In [ ]:
In [ ]:
In [ ]:
In [ ]:
In [ ]:
In [ ]:
In [ ]:
In [ ]:
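# A sketch of the encoder reconstructed from the summary below: layer names,
# filter counts, and output shapes are read off the summary; kernel size 3,
# 'same' padding, and the stride pattern (1, 2, 2, 1) are inferred from the
# shape changes between layers.
encoder_input = layers.Input(shape=(28, 28, 1), name="encoder_input")
x = encoder_input
for i, (filters, strides) in enumerate([(32, 1), (64, 2), (64, 2), (64, 1)]):
    x = layers.Conv2D(filters, kernel_size=3, strides=strides,
                      padding="same", name=f"encoder_conv_{i}")(x)
    x = layers.LeakyReLU()(x)
x = layers.Flatten()(x)
encoder_output = layers.Dense(2, name="encoder_output")(x)  # 2d latent code
encoder = models.Model(encoder_input, encoder_output)
encoder.summary()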
Model: "functional_13"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
encoder_input (InputLayer) [(None, 28, 28, 1)] 0
_________________________________________________________________
encoder_conv_0 (Conv2D) (None, 28, 28, 32) 320
_________________________________________________________________
leaky_re_lu_14 (LeakyReLU) (None, 28, 28, 32) 0
_________________________________________________________________
encoder_conv_1 (Conv2D) (None, 14, 14, 64) 18496
_________________________________________________________________
leaky_re_lu_15 (LeakyReLU) (None, 14, 14, 64) 0
_________________________________________________________________
encoder_conv_2 (Conv2D) (None, 7, 7, 64) 36928
_________________________________________________________________
leaky_re_lu_16 (LeakyReLU) (None, 7, 7, 64) 0
_________________________________________________________________
encoder_conv_3 (Conv2D) (None, 7, 7, 64) 36928
_________________________________________________________________
leaky_re_lu_17 (LeakyReLU) (None, 7, 7, 64) 0
_________________________________________________________________
flatten_2 (Flatten) (None, 3136) 0
_________________________________________________________________
encoder_output (Dense) (None, 2) 6274
=================================================================
Total params: 98,946
Trainable params: 98,946
Non-trainable params: 0
_________________________________________________________________
In [ ]:
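# The matching decoder, again reconstructed from its summary below: a Dense
# layer back up to 7*7*64 units, a Reshape, then transposed convolutions
# mirroring the encoder, ending in a sigmoid so outputs lie in [0, 1].
decoder_input = layers.Input(shape=(2,), name="decoder_input")
x = layers.Dense(3136)(decoder_input)  # 7 * 7 * 64
x = layers.Reshape((7, 7, 64))(x)
for i, (filters, strides) in enumerate([(64, 1), (64, 2), (32, 2), (1, 1)]):
    x = layers.Conv2DTranspose(filters, kernel_size=3, strides=strides,
                               padding="same", name=f"decoder_conv_t_{i}")(x)
    if i < 3:
        x = layers.LeakyReLU()(x)
decoder_output = layers.Activation("sigmoid")(x)
decoder = models.Model(decoder_input, decoder_output)
decoder.summary()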
Model: "functional_15"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
decoder_input (InputLayer) [(None, 2)] 0
_________________________________________________________________
dense_2 (Dense) (None, 3136) 9408
_________________________________________________________________
reshape_2 (Reshape) (None, 7, 7, 64) 0
_________________________________________________________________
decoder_conv_t_0 (Conv2DTran (None, 7, 7, 64) 36928
_________________________________________________________________
leaky_re_lu_18 (LeakyReLU) (None, 7, 7, 64) 0
_________________________________________________________________
decoder_conv_t_1 (Conv2DTran (None, 14, 14, 64) 36928
_________________________________________________________________
leaky_re_lu_19 (LeakyReLU) (None, 14, 14, 64) 0
_________________________________________________________________
decoder_conv_t_2 (Conv2DTran (None, 28, 28, 32) 18464
_________________________________________________________________
leaky_re_lu_20 (LeakyReLU) (None, 28, 28, 32) 0
_________________________________________________________________
decoder_conv_t_3 (Conv2DTran (None, 28, 28, 1) 289
_________________________________________________________________
activation_2 (Activation) (None, 28, 28, 1) 0
=================================================================
Total params: 102,017
Trainable params: 102,017
Non-trainable params: 0
_________________________________________________________________
In [ ]:
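# Sanity-check the training data shape before fitting (printed below).
print(x_train.shape)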
(60000, 28, 28, 1)
In [ ]:
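# Compose the end-to-end autoencoder and fit it to reconstruct its input.
# The optimizer, learning rate, loss, and batch size are assumptions: the GDL
# code uses Adam with a mean-squared-error reconstruction loss, and the log
# below shows 30 epochs at 79 steps/epoch, which the illustrative
# batch_size=128 used here does not reproduce exactly.
autoencoder = models.Model(encoder_input, decoder(encoder_output))
autoencoder.compile(optimizer=keras.optimizers.Adam(learning_rate=5e-4),
                    loss="mse")
autoencoder.fit(x_train, x_train, batch_size=128, epochs=30, shuffle=True)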
Epoch 1/30
79/79 [==============================] - 3s 34ms/step - loss: 0.1026
Epoch 2/30
79/79 [==============================] - 2s 26ms/step - loss: 0.0596
Epoch 3/30
79/79 [==============================] - 2s 26ms/step - loss: 0.0557
Epoch 4/30
79/79 [==============================] - 2s 26ms/step - loss: 0.0538
Epoch 5/30
79/79 [==============================] - 2s 26ms/step - loss: 0.0524
Epoch 6/30
79/79 [==============================] - 2s 26ms/step - loss: 0.0510
Epoch 7/30
79/79 [==============================] - 2s 26ms/step - loss: 0.0500
Epoch 8/30
79/79 [==============================] - 2s 26ms/step - loss: 0.0491
Epoch 9/30
79/79 [==============================] - 2s 26ms/step - loss: 0.0485
Epoch 10/30
79/79 [==============================] - 2s 26ms/step - loss: 0.0477
Epoch 11/30
79/79 [==============================] - 2s 26ms/step - loss: 0.0471
Epoch 12/30
79/79 [==============================] - 2s 26ms/step - loss: 0.0465
Epoch 13/30
79/79 [==============================] - 2s 26ms/step - loss: 0.0461
Epoch 14/30
79/79 [==============================] - 2s 26ms/step - loss: 0.0456
Epoch 15/30
79/79 [==============================] - 2s 26ms/step - loss: 0.0452
Epoch 16/30
79/79 [==============================] - 2s 26ms/step - loss: 0.0449
Epoch 17/30
79/79 [==============================] - 2s 26ms/step - loss: 0.0447
Epoch 18/30
79/79 [==============================] - 2s 26ms/step - loss: 0.0443
Epoch 19/30
79/79 [==============================] - 2s 26ms/step - loss: 0.0440
Epoch 20/30
79/79 [==============================] - 2s 26ms/step - loss: 0.0438
Epoch 21/30
79/79 [==============================] - 2s 26ms/step - loss: 0.0435
Epoch 22/30
79/79 [==============================] - 2s 26ms/step - loss: 0.0434
Epoch 23/30
79/79 [==============================] - 2s 26ms/step - loss: 0.0431
Epoch 24/30
79/79 [==============================] - 2s 26ms/step - loss: 0.0429
Epoch 25/30
79/79 [==============================] - 2s 26ms/step - loss: 0.0426
Epoch 26/30
79/79 [==============================] - 2s 26ms/step - loss: 0.0427
Epoch 27/30
79/79 [==============================] - 2s 26ms/step - loss: 0.0424
Epoch 28/30
79/79 [==============================] - 2s 26ms/step - loss: 0.0421
Epoch 29/30
79/79 [==============================] - 2s 26ms/step - loss: 0.0420
Epoch 30/30
79/79 [==============================] - 2s 26ms/step - loss: 0.0419
In [ ]:
In [ ]:
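# Save the trained weights; the filename printed below suggests a cell along
# these lines (the exact original code is unknown).
autoencoder.save_weights("weights.h5")
!ls weights.h5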
weights.h5
In [ ]:
Reconstruction
In [ ]:
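# Reconstruct a few test digits: encode, decode, and plot originals (top row)
# against reconstructions (bottom row). The plotting code is our own sketch.
n = 10
recon = autoencoder.predict(x_test[:n])
fig, axes = plt.subplots(2, n, figsize=(n, 2))
for i in range(n):
    axes[0, i].imshow(x_test[i].squeeze(), cmap="gray")
    axes[1, i].imshow(recon[i].squeeze(), cmap="gray")
    axes[0, i].axis("off")
    axes[1, i].axis("off")
plt.show()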
Generation
In [ ]:
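# Generate new digits by decoding random points in the 2d latent space.
# Sampling uniformly over the box spanned by the encoded test set (as the
# GDL analysis notebook does) keeps samples in a region the decoder has seen.
z = encoder.predict(x_test)
z_min, z_max = z.min(axis=0), z.max(axis=0)
samples = np.random.uniform(z_min, z_max, size=(10, 2))
gen = decoder.predict(samples)
fig, axes = plt.subplots(1, 10, figsize=(10, 1.2))
for i in range(10):
    axes[i].imshow(gen[i].squeeze(), cmap="gray")
    axes[i].axis("off")
plt.show()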