Path: blob/main/Autoencoder - Deep CNN/Deep CNN Autoencoder - Denoising Image.ipynb
Kernel: Python 3
Autoencoder
An autoencoder is an unsupervised neural network that learns an efficient representation (encoding) of its input by being trained to reconstruct that input while ignoring signal "noise." Autoencoders can be used for image denoising, image compression and, in some cases, even image generation.
Flow of Autoencoder
Noisy Image -> Encoder -> Compressed Representation -> Decoder -> Reconstructed Clean Image
Import Modules
In [1]:
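The code of this cell was not preserved in the export. A minimal import set that covers everything the rest of the notebook uses (NumPy, Matplotlib, and the Keras/TensorFlow stack) would be:

```python
# Assumed imports for a Keras denoising-autoencoder notebook.
import numpy as np
import matplotlib.pyplot as plt

from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.datasets import mnist
```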
Load the Dataset
In [2]:
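The download log below comes from `mnist.load_data()`, which fetches `mnist.npz` from the Google Cloud Storage URL shown. A plausible reconstruction of the missing cell (the labels are unused, since an autoencoder trains images against themselves):

```python
from tensorflow.keras.datasets import mnist

# Downloads ~11 MB on first run and caches it under ~/.keras/datasets.
# Labels are discarded: a denoising autoencoder needs only the images.
(x_train, _), (x_test, _) = mnist.load_data()
print(x_train.shape, x_test.shape)  # (60000, 28, 28) (10000, 28, 28)
```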
Out[2]:
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
11501568/11490434 [==============================] - 0s 0us/step
In [3]:
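Judging by the `(10000, 28, 28, 1)` shape printed in `Out[4]`, this cell scales the pixels to [0, 1] and adds a trailing channel axis so the images fit a Conv2D input. A sketch of that preprocessing (the helper name is an assumption):

```python
import numpy as np

def preprocess(x):
    """Scale uint8 pixels to [0, 1] floats and add a channel axis."""
    x = x.astype("float32") / 255.0
    return np.reshape(x, (len(x), 28, 28, 1))

# In the notebook: x_train = preprocess(x_train); x_test = preprocess(x_test)
demo = preprocess(np.zeros((10000, 28, 28), dtype="uint8"))
print(demo.shape)  # (10000, 28, 28, 1)
```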
In [4]:
Out[4]:
(10000, 28, 28, 1)
Add Noise to the Image
In [5]:
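These two cells corrupt the train and test images with noise. A common approach (and the one in the standard Keras denoising examples) is additive Gaussian noise followed by clipping back into the valid pixel range; the `0.5` noise factor is an assumption:

```python
import numpy as np

NOISE_FACTOR = 0.5  # assumed; a typical value in denoising-autoencoder demos

def add_noise(x, noise_factor=NOISE_FACTOR):
    """Add Gaussian noise, then clip back into the valid [0, 1] range."""
    noisy = x + noise_factor * np.random.normal(size=x.shape)
    return np.clip(noisy, 0.0, 1.0)

# In the notebook: x_train_noisy = add_noise(x_train); x_test_noisy = add_noise(x_test)
clean = np.zeros((4, 28, 28, 1), dtype="float32")
noisy = add_noise(clean)
print(noisy.min() >= 0.0 and noisy.max() <= 1.0)  # True
```

Clipping matters: without it, noisy pixels fall outside [0, 1] and no longer match the sigmoid output range of the decoder.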
In [6]:
Exploratory Data Analysis
In [7]:
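Cells 7 through 10 display sample clean and noisy digits. A self-contained sketch of that kind of plot helper (names and layout are assumptions; random data stands in for the MNIST arrays):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

def show_images(batch, n=10):
    """Plot the first n images of a (N, 28, 28, 1) batch in one row."""
    fig, axes = plt.subplots(1, n, figsize=(2 * n, 2))
    for ax, img in zip(axes, batch[:n]):
        ax.imshow(img.squeeze(), cmap="gray")
        ax.axis("off")
    return fig

# In the notebook this would be called on x_train and x_train_noisy.
fig = show_images(np.random.rand(10, 28, 28, 1))
```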
Out[7]:
In [8]:
Out[8]:
In [9]:
Out[9]:
In [10]:
Out[10]:
Model Creation
In [11]:
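The architecture can be reconstructed from the summary printed below: 3×3 kernels are forced by the parameter counts (e.g. 3·3·1·32 + 32 = 320 and 3·3·32·16 + 16 = 4624), while the activations, `same` padding, optimizer, and loss are assumptions consistent with a standard Keras denoising autoencoder:

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),
    # encoder: 28x28x1 -> 7x7x16 compressed representation
    layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2), padding="same"),
    layers.Conv2D(16, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2), padding="same"),
    # decoder: 7x7x16 -> 28x28x1 reconstruction
    layers.Conv2D(16, (3, 3), activation="relu", padding="same"),
    layers.UpSampling2D((2, 2)),
    layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
    layers.UpSampling2D((2, 2)),
    layers.Conv2D(1, (3, 3), activation="sigmoid", padding="same"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```

The sigmoid output keeps reconstructed pixels in [0, 1], matching the preprocessed targets.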
Out[11]:
Model: "sequential"
_________________________________________________________________
 Layer (type)                     Output Shape          Param #
=================================================================
 conv2d (Conv2D)                  (None, 28, 28, 32)    320
 max_pooling2d (MaxPooling2D)     (None, 14, 14, 32)    0
 conv2d_1 (Conv2D)                (None, 14, 14, 16)    4624
 max_pooling2d_1 (MaxPooling2D)   (None, 7, 7, 16)      0
 conv2d_2 (Conv2D)                (None, 7, 7, 16)      2320
 up_sampling2d (UpSampling2D)     (None, 14, 14, 16)    0
 conv2d_3 (Conv2D)                (None, 14, 14, 32)    4640
 up_sampling2d_1 (UpSampling2D)   (None, 28, 28, 32)    0
 conv2d_4 (Conv2D)                (None, 28, 28, 1)     289
=================================================================
Total params: 12,193
Trainable params: 12,193
Non-trainable params: 0
_________________________________________________________________
In [12]:
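The log below implies the missing `fit` call: 235 steps per epoch over 60,000 images means `batch_size=256`, run for 20 epochs, with noisy images as inputs and clean images as targets. The notebook's cell was presumably along the lines of `model.fit(x_train_noisy, x_train, epochs=20, batch_size=256, shuffle=True, validation_data=(x_test_noisy, x_test))`. A runnable miniature with a stand-in model and synthetic data, to show the call shape:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Tiny stand-in autoencoder; the real notebook trains the model above for
# 20 epochs at batch_size=256 (60000 / 256 -> the 235 steps in the log).
model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),
    layers.Conv2D(4, (3, 3), activation="relu", padding="same"),
    layers.Conv2D(1, (3, 3), activation="sigmoid", padding="same"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

x = np.random.rand(64, 28, 28, 1).astype("float32")
noisy = np.clip(x + 0.5 * np.random.normal(size=x.shape), 0, 1).astype("float32")

# Key point: noisy images are the inputs, clean images are the targets.
history = model.fit(noisy, x, epochs=1, batch_size=32, verbose=0)
```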
Out[12]:
Epoch 1/20
235/235 [==============================] - 14s 12ms/step - loss: 0.2730 - val_loss: 0.1604
Epoch 2/20
235/235 [==============================] - 2s 10ms/step - loss: 0.1481 - val_loss: 0.1400
Epoch 3/20
235/235 [==============================] - 2s 11ms/step - loss: 0.1376 - val_loss: 0.1336
Epoch 4/20
235/235 [==============================] - 3s 11ms/step - loss: 0.1327 - val_loss: 0.1305
Epoch 5/20
235/235 [==============================] - 2s 10ms/step - loss: 0.1298 - val_loss: 0.1277
Epoch 6/20
235/235 [==============================] - 2s 10ms/step - loss: 0.1276 - val_loss: 0.1256
Epoch 7/20
235/235 [==============================] - 2s 10ms/step - loss: 0.1256 - val_loss: 0.1237
Epoch 8/20
235/235 [==============================] - 2s 11ms/step - loss: 0.1240 - val_loss: 0.1223
Epoch 9/20
235/235 [==============================] - 2s 10ms/step - loss: 0.1227 - val_loss: 0.1213
Epoch 10/20
235/235 [==============================] - 2s 10ms/step - loss: 0.1217 - val_loss: 0.1202
Epoch 11/20
235/235 [==============================] - 2s 10ms/step - loss: 0.1209 - val_loss: 0.1198
Epoch 12/20
235/235 [==============================] - 2s 11ms/step - loss: 0.1200 - val_loss: 0.1188
Epoch 13/20
235/235 [==============================] - 3s 11ms/step - loss: 0.1193 - val_loss: 0.1182
Epoch 14/20
235/235 [==============================] - 3s 11ms/step - loss: 0.1187 - val_loss: 0.1173
Epoch 15/20
235/235 [==============================] - 2s 11ms/step - loss: 0.1181 - val_loss: 0.1168
Epoch 16/20
235/235 [==============================] - 2s 11ms/step - loss: 0.1175 - val_loss: 0.1164
Epoch 17/20
235/235 [==============================] - 2s 10ms/step - loss: 0.1170 - val_loss: 0.1157
Epoch 18/20
235/235 [==============================] - 2s 10ms/step - loss: 0.1164 - val_loss: 0.1153
Epoch 19/20
235/235 [==============================] - 2s 11ms/step - loss: 0.1162 - val_loss: 0.1150
Epoch 20/20
235/235 [==============================] - 2s 11ms/step - loss: 0.1158 - val_loss: 0.1156
<keras.callbacks.History at 0x7fd81036dd90>
Visualize the Results
In [13]:
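The final cells run the trained model on the noisy test images and plot inputs against reconstructions. A sketch of such a comparison helper (names are assumptions; random arrays stand in for `x_test_noisy` and the model output):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

def compare(noisy, denoised, n=5):
    """Show noisy inputs (top row) against reconstructions (bottom row)."""
    fig, axes = plt.subplots(2, n, figsize=(2 * n, 4))
    for i in range(n):
        axes[0, i].imshow(noisy[i].squeeze(), cmap="gray")
        axes[1, i].imshow(denoised[i].squeeze(), cmap="gray")
        axes[0, i].axis("off")
        axes[1, i].axis("off")
    return fig

# In the notebook: fig = compare(x_test_noisy, model.predict(x_test_noisy))
fig = compare(np.random.rand(5, 28, 28, 1), np.random.rand(5, 28, 28, 1))
```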
In [14]:
Out[14]:
In [15]:
Out[15]:
In [16]:
Out[16]:
In [17]:
Out[17]: