Path: blob/master/DenoisingAutoencoder/Denoising-Autoencoder-using-Tensorflow.ipynb
Kernel: Python 3
In [1]:
Let's configure all random number generators to support determinism and obtain reproducible results.
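The original code cell is not shown; a minimal sketch of seeding every relevant generator (Python's `random`, NumPy, and TensorFlow) might look like this. The seed value is an assumption:

```python
import os
import random

import numpy as np
import tensorflow as tf

SEED = 42  # hypothetical seed value, not necessarily the notebook's

# Fix hash randomization and seed every generator the pipeline touches.
os.environ["PYTHONHASHSEED"] = str(SEED)
random.seed(SEED)
np.random.seed(SEED)
tf.random.set_seed(SEED)
```

Note that full determinism on GPU may additionally require deterministic op settings; seeding alone only fixes the pseudo-random streams.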
In [2]:
In [3]:
Out[3]:
Size of train images: (28, 28), Number of train images: 60000
Size of test images: (28, 28), Number of test images: 10000
In [4]:
In [5]:
In [6]:
In [7]:
Prepare the input data for the model:
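The preparation cell itself is not shown. For a denoising autoencoder this step typically scales the pixels to [0, 1] and builds noisy copies of each image; the sketch below assumes additive Gaussian noise with a hypothetical noise factor, clipped back into the valid range:

```python
import numpy as np

NOISE_FACTOR = 0.5  # assumed noise strength, not confirmed by the notebook

def prepare(images, noise_factor=NOISE_FACTOR, seed=0):
    """Return (clean, noisy) float32 pairs scaled to [0, 1]."""
    rng = np.random.default_rng(seed)
    clean = images.astype("float32") / 255.0
    # Corrupt the clean images with zero-mean Gaussian noise, then clip
    # so the noisy pixels stay valid inputs for a sigmoid-output model.
    noisy = clean + noise_factor * rng.standard_normal(clean.shape)
    return clean, np.clip(noisy, 0.0, 1.0)
```

The `(noisy, clean)` pair becomes the `(x, y)` of training: the model learns to map a corrupted image back to its original.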
In [8]:
In [9]:
Out[9]:
In [10]:
In [11]:
The following block describes the basic training pipeline:
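Since the model cell is not preserved, here is a minimal fully connected denoising autoencoder consistent with the training log below (60000 train / 10000 validation samples, 25 epochs). The layer sizes, bottleneck width, and optimizer are assumptions, not the notebook's exact architecture:

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_autoencoder(input_dim=784, code_dim=32):
    """Dense encoder-decoder sketch: 784 -> 128 -> 32 -> 128 -> 784."""
    model = tf.keras.Sequential([
        layers.Dense(128, activation="relu", input_shape=(input_dim,)),
        layers.Dense(code_dim, activation="relu"),   # bottleneck
        layers.Dense(128, activation="relu"),
        layers.Dense(input_dim, activation="sigmoid"),  # pixels in [0, 1]
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

# Training maps noisy inputs back to their clean originals, e.g.:
# autoencoder.fit(x_train_noisy, x_train, epochs=25,
#                 validation_data=(x_test_noisy, x_test))
```

`binary_crossentropy` against a sigmoid output is a common choice for per-pixel reconstruction of [0, 1]-scaled images; mean squared error would also work.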
In [12]:
Out[12]:
Train on 60000 samples, validate on 10000 samples
Epoch 1/25
60000/60000 [==============================] - 8s 130us/sample - loss: 0.5725 - val_loss: 0.4929
Epoch 2/25
60000/60000 [==============================] - 6s 93us/sample - loss: 0.4393 - val_loss: 0.3462
Epoch 3/25
60000/60000 [==============================] - 6s 93us/sample - loss: 0.2448 - val_loss: 0.2063
Epoch 4/25
60000/60000 [==============================] - 6s 92us/sample - loss: 0.1895 - val_loss: 0.1741
Epoch 5/25
60000/60000 [==============================] - 6s 95us/sample - loss: 0.1671 - val_loss: 0.1590
Epoch 6/25
60000/60000 [==============================] - 6s 93us/sample - loss: 0.1548 - val_loss: 0.1485
Epoch 7/25
60000/60000 [==============================] - 5s 91us/sample - loss: 0.1457 - val_loss: 0.1407
Epoch 8/25
60000/60000 [==============================] - 5s 91us/sample - loss: 0.1389 - val_loss: 0.1349
Epoch 9/25
60000/60000 [==============================] - 5s 91us/sample - loss: 0.1342 - val_loss: 0.1310
Epoch 10/25
60000/60000 [==============================] - 5s 91us/sample - loss: 0.1308 - val_loss: 0.1281
Epoch 11/25
60000/60000 [==============================] - 5s 91us/sample - loss: 0.1284 - val_loss: 0.1259
Epoch 12/25
60000/60000 [==============================] - 5s 91us/sample - loss: 0.1265 - val_loss: 0.1243
Epoch 13/25
60000/60000 [==============================] - 5s 91us/sample - loss: 0.1250 - val_loss: 0.1229
Epoch 14/25
60000/60000 [==============================] - 5s 91us/sample - loss: 0.1238 - val_loss: 0.1218
Epoch 15/25
60000/60000 [==============================] - 5s 91us/sample - loss: 0.1227 - val_loss: 0.1208
Epoch 16/25
60000/60000 [==============================] - 5s 91us/sample - loss: 0.1217 - val_loss: 0.1199
Epoch 17/25
60000/60000 [==============================] - 5s 91us/sample - loss: 0.1209 - val_loss: 0.1192
Epoch 18/25
60000/60000 [==============================] - 5s 91us/sample - loss: 0.1202 - val_loss: 0.1185
Epoch 19/25
60000/60000 [==============================] - 5s 91us/sample - loss: 0.1195 - val_loss: 0.1179
Epoch 20/25
60000/60000 [==============================] - 5s 91us/sample - loss: 0.1190 - val_loss: 0.1173
Epoch 21/25
60000/60000 [==============================] - 5s 91us/sample - loss: 0.1184 - val_loss: 0.1168
Epoch 22/25
60000/60000 [==============================] - 5s 91us/sample - loss: 0.1179 - val_loss: 0.1163
Epoch 23/25
60000/60000 [==============================] - 5s 91us/sample - loss: 0.1175 - val_loss: 0.1159
Epoch 24/25
60000/60000 [==============================] - 5s 91us/sample - loss: 0.1170 - val_loss: 0.1155
Epoch 25/25
60000/60000 [==============================] - 6s 93us/sample - loss: 0.1166 - val_loss: 0.1152
Let's view the decoding process on the same data:
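The visualization cell is not shown; a hypothetical plotting helper producing the three labeled rows below ("Original Images", "Noisy Images", "Reconstruction of Noisy Images") could look like this. The names `autoencoder`, `x_test`, and `x_test_noisy` are assumed to come from the cells above:

```python
import matplotlib.pyplot as plt

def show_reconstructions(autoencoder, clean, noisy, n=10):
    """Plot n originals, their noisy versions, and the model's outputs."""
    decoded = autoencoder.predict(noisy[:n])
    titles = ["Original Images", "Noisy Images",
              "Reconstruction of Noisy Images"]
    rows = [clean[:n], noisy[:n], decoded]
    plt.figure(figsize=(2 * n, 6))
    for r, (title, batch) in enumerate(zip(titles, rows)):
        for i in range(n):
            ax = plt.subplot(3, n, r * n + i + 1)
            ax.imshow(batch[i].reshape(28, 28), cmap="gray")
            ax.axis("off")
            if i == 0:
                ax.set_title(title, loc="left")
    plt.show()

# show_reconstructions(autoencoder, x_test, x_test_noisy)
```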
In [13]:
In [14]:
Out[14]:
[Figure: three rows of digit images — Original Images, Noisy Images, Reconstruction of Noisy Images]