GitHub Repository: suyashi29/python-su
Path: blob/master/Generative AI for Intelligent Data Handling/ Day 5.2 LSTM (Long Short-Term Memory) network using TensorFlow and Keras.ipynb
Kernel: Python 3 (ipykernel)

LSTM (Long Short-Term Memory) is a type of recurrent neural network (RNN) designed to address the problem of capturing long-term dependencies in sequential data.

  • It consists of a memory cell that can maintain information over long sequences, controlled by three gates: forget gate, input gate, and output gate.

  • The forget gate decides what information to discard from the cell state.

  • The input gate decides what new information to store in the cell state.

  • The output gate decides what information to output from the cell state.

  • LSTM's ability to retain and forget information over long periods makes it effective for tasks involving sequential data with long-term dependencies; the NumPy sketch below traces one gated update step.
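To make the gate mechanics concrete, here is one LSTM time step written out in plain NumPy. This is a didactic sketch: the stacked [forget, input, candidate, output] weight layout and the toy dimensions are assumptions for illustration, not TensorFlow's internal implementation.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM time step. W, U, b each stack the four gate blocks
    in the order [forget, input, candidate, output]."""
    n = h_prev.shape[0]
    z = W @ x_t + U @ h_prev + b      # all four gate pre-activations at once
    f = sigmoid(z[0*n:1*n])           # forget gate: what to discard from the cell state
    i = sigmoid(z[1*n:2*n])           # input gate: what new information to store
    g = np.tanh(z[2*n:3*n])           # candidate cell values
    o = sigmoid(z[3*n:4*n])           # output gate: what to emit from the cell state
    c_t = f * c_prev + i * g          # updated cell state
    h_t = o * np.tanh(c_t)            # new hidden state / output
    return h_t, c_t

# Toy dimensions: 1 input feature, hidden size 4
rng = np.random.default_rng(0)
n_in, n_hid = 1, 4
W = rng.normal(size=(4 * n_hid, n_in))
U = rng.normal(size=(4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)
h, c = np.zeros(n_hid), np.zeros(n_hid)
for x in [0.1, 0.2, 0.3]:
    h, c = lstm_step(np.array([x]), h, c, W, U, b)
print(h)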

import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Fixed toy data for demonstration
# Input sequence:  [0.1, 0.2, 0.3, 0.4, 0.5]
# Output sequence: [0.6, 0.7, 0.8, 0.9, 1.0]
# Define input sequence, shaped (samples, time_steps, features) = (1, 5, 1)
X = np.array([[[0.1], [0.2], [0.3], [0.4], [0.5]]])

# Define output sequence
y = np.array([[0.6, 0.7, 0.8, 0.9, 1.0]])
# Define and build the LSTM model
model = Sequential([
    LSTM(50, input_shape=(5, 1)),  # 50 units in LSTM layer
    Dense(5)                       # Output layer
])

# Compile the model
model.compile(optimizer='adam', loss='mse')
# Print model summary
model.summary()
Model: "sequential_2" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= lstm_2 (LSTM) (None, 50) 10400 dense_2 (Dense) (None, 5) 255 ================================================================= Total params: 10,655 Trainable params: 10,655 Non-trainable params: 0 _________________________________________________________________
# Train the model
model.fit(X, y, epochs=100, verbose=1)

# Make predictions
predictions = model.predict(X)

# Print predictions
print("Predictions:")
print(predictions)
Epoch 1/100
1/1 [==============================] - 3s 3s/step - loss: 0.6636
Epoch 2/100
1/1 [==============================] - 0s 13ms/step - loss: 0.6515
Epoch 3/100
1/1 [==============================] - 0s 17ms/step - loss: 0.6395
...
Epoch 99/100
1/1 [==============================] - 0s 21ms/step - loss: 1.5949e-04
Epoch 100/100
1/1 [==============================] - 0s 22ms/step - loss: 1.6409e-04
1/1 [==============================] - 1s 1s/step
Predictions:
[[0.5899611  0.69120526 0.7883316  0.88775873 0.9822645 ]]
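Even on this one-sample toy problem the loss stops improving well before epoch 100 and then oscillates. Keras's built-in EarlyStopping callback can halt training once the monitored loss plateaus; a minimal sketch (the patience value is an arbitrary choice):

from tensorflow.keras.callbacks import EarlyStopping

# Stop when the training loss has not improved for 10 consecutive epochs
stopper = EarlyStopping(monitor='loss', patience=10, restore_best_weights=True)
model.fit(X, y, epochs=100, verbose=0, callbacks=[stopper])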
Predictions: [[0.5899611 0.69120526 0.7883316 0.88775873 0.9822645 ]]
Actual:      y = np.array([[0.6, 0.7, 0.8, 0.9, 1.0]])
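Because the network was trained and evaluated on the same single sequence, the close match mainly demonstrates memorization. A quick way to probe generalization is to feed a sequence the model never saw (the values below are an arbitrary choice):

# Hypothetical unseen input: the training window shifted by 0.1
X_new = np.array([[[0.2], [0.3], [0.4], [0.5], [0.6]]])
print(model.predict(X_new, verbose=0))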

Quick Practice: generate an input sequence of even numbers and predict the next even number in the sequence.

import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Function to generate an input sequence of even numbers
def generate_input_sequence(start, length):
    sequence = [2 * i for i in range(start, start + length)]
    return np.array(sequence)

# Generate an input sequence of 100 even numbers
input_sequence = generate_input_sequence(start=1, length=100)

# Output sequence: the input shifted by one (each element's successor)
output_sequence = input_sequence[1:]
input_sequence
array([ 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38, 40, 42, 44, 46, 48, 50, 52, 54, 56, 58, 60, 62, 64, 66, 68, 70, 72, 74, 76, 78, 80, 82, 84, 86, 88, 90, 92, 94, 96, 98, 100, 102, 104, 106, 108, 110, 112, 114, 116, 118, 120, 122, 124, 126, 128, 130, 132, 134, 136, 138, 140, 142, 144, 146, 148, 150, 152, 154, 156, 158, 160, 162, 164, 166, 168, 170, 172, 174, 176, 178, 180, 182, 184, 186, 188, 190, 192, 194, 196, 198, 200])
output_sequence = input_sequence[1:]
output_sequence
array([ 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38, 40, 42, 44, 46, 48, 50, 52, 54, 56, 58, 60, 62, 64, 66, 68, 70, 72, 74, 76, 78, 80, 82, 84, 86, 88, 90, 92, 94, 96, 98, 100, 102, 104, 106, 108, 110, 112, 114, 116, 118, 120, 122, 124, 126, 128, 130, 132, 134, 136, 138, 140, 142, 144, 146, 148, 150, 152, 154, 156, 158, 160, 162, 164, 166, 168, 170, 172, 174, 176, 178, 180, 182, 184, 186, 188, 190, 192, 194, 196, 198, 200])
# Preprocess the data: slide a window of `time_steps` values over the
# sequence; the target is the value that immediately follows the window
def create_dataset(input_sequence, output_sequence, time_steps):
    X, y = [], []
    for i in range(len(input_sequence) - time_steps):
        X.append(input_sequence[i:i+time_steps])
        y.append(output_sequence[i + time_steps - 1])  # successor of the window's last element
    return np.array(X), np.array(y)

time_steps = 3  # Length of the input window for each training example
X, y = create_dataset(input_sequence, output_sequence, time_steps)

# Define the LSTM model
model = Sequential([
    LSTM(50, activation='relu', input_shape=(time_steps, 1)),
    Dense(1)
])

# Compile the model
model.compile(optimizer='adam', loss='mse')

# Reshape input for LSTM: (samples, time_steps, features)
X = X.reshape((X.shape[0], X.shape[1], 1))

# Train the model
model.fit(X, y, epochs=100, verbose=0)

# Test data for final prediction: the last three elements of the input sequence
test_data = np.array([input_sequence[-3:]])

# Generate the prediction
def generate_prediction(model, test_data):
    x_input = test_data.reshape((1, time_steps, 1))
    y_pred = model.predict(x_input, verbose=0)
    return int(round(float(y_pred[0][0])))  # round to the nearest integer

# Final prediction
next_even_number = generate_prediction(model, test_data)
print("Final Prediction (Next Even Number):", next_even_number)
Final Prediction (Next Even Number): 198
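The recorded run predicts 198, but the true continuation of the sequence is 202. With raw values running into the hundreds, the network converges slowly and extrapolates poorly; scaling inputs and targets to [0, 1] before training is the usual remedy. A minimal sketch, assuming scikit-learn's MinMaxScaler is available (it is not used in the original notebook):

from sklearn.preprocessing import MinMaxScaler  # assumption: scikit-learn is installed

# Scale the whole sequence to [0, 1] with a single scaler
scaler = MinMaxScaler()
scaled = scaler.fit_transform(input_sequence.reshape(-1, 1).astype(float)).flatten()

# Rebuild windows and targets in scaled space
Xs, ys = create_dataset(scaled, scaled[1:], time_steps)
Xs = Xs.reshape((Xs.shape[0], Xs.shape[1], 1))

model_s = Sequential([LSTM(50, activation='relu', input_shape=(time_steps, 1)), Dense(1)])
model_s.compile(optimizer='adam', loss='mse')
model_s.fit(Xs, ys, epochs=100, verbose=0)

# Predict the next value in scaled space, then invert the scaling
x_last = scaled[-time_steps:].reshape((1, time_steps, 1))
pred_scaled = model_s.predict(x_last, verbose=0)
print(scaler.inverse_transform(pred_scaled))  # should land close to 202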