GitHub Repository: suyashi29/python-su
Path: blob/master/Generative AI for Intelligent Data Handling/Lab 2 Implement RNN.ipynb
Kernel: Python 3 (ipykernel)

Data Preparation:

  • How would you generate sample alphabet data programmatically using Python?

  • Explain how to represent each letter of the alphabet as numerical data suitable for input into an RNN (see the sketch below).
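
One way to act on these two questions, as a minimal sketch: build the alphabet with Python's string module, map each letter to an integer, and one-hot encode the integers into the 3-D (samples, timesteps, features) shape that Keras RNN layers expect. The next-letter prediction framing and the names char_to_int and X_rnn are assumptions for illustration, not the notebook's own code.

import numpy as np
import string

letters = string.ascii_uppercase                      # 'A'..'Z'
char_to_int = {c: i for i, c in enumerate(letters)}   # letter -> integer index

# Next-letter pairs: each input letter predicts its successor.
X = np.array([char_to_int[c] for c in letters[:-1]])  # A..Y
y = np.array([char_to_int[c] for c in letters[1:]])   # B..Z

# One-hot encode inputs, then reshape to (samples, timesteps, features).
X_onehot = np.eye(len(letters))[X]             # shape (25, 26)
X_rnn = X_onehot.reshape(-1, 1, len(letters))  # shape (25, 1, 26)
print(X_rnn.shape, y.shape)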

RNN Implementation:

  • Can you outline the necessary steps to set up an RNN model using TensorFlow/Keras for alphabet sequence prediction?

  • Discuss the role of the SimpleRNN layer in the RNN model and its parameters (see the sketch below).
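
A minimal sketch of the setup steps, assuming the one-hot encoding above: define the layers, choose the number of SimpleRNN units (the hidden-state size), and add an output layer sized to the 26 letters. The unit count of 32 is an arbitrary illustrative choice.

import tensorflow as tf

model = tf.keras.Sequential([
    # SimpleRNN parameters: units (hidden-state width), activation,
    # and input_shape = (timesteps, features) = (1, 26) here.
    tf.keras.layers.SimpleRNN(32, activation='tanh', input_shape=(1, 26)),
    # Softmax over the 26 possible next letters.
    tf.keras.layers.Dense(26, activation='softmax'),
])
model.summary()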

Training Process:

  • How do you split the alphabet sequence data into training and testing sets? What is the purpose of this separation?

  • Explain the significance of specifying the epochs parameter when training the RNN model (see the sketch below).
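
Continuing the sketch above (X_rnn, y, and model are carried over), one plausible split-and-train step follows; train_test_split from scikit-learn and the epoch count of 100 are assumptions, not the notebook's choices. The held-out test set exists so the model is scored on examples it never trained on.

from sklearn.model_selection import train_test_split

# Hold out 20% of the pairs to measure generalization rather than memorization.
X_train, X_test, y_train, y_test = train_test_split(
    X_rnn, y, test_size=0.2, random_state=42)

# Compile before fitting; the loss and optimizer are discussed in the next section.
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# epochs = full passes over the training data: too few underfits,
# too many can overfit a dataset this small.
history = model.fit(X_train, y_train, epochs=100,
                    validation_data=(X_test, y_test), verbose=0)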

Loss Function and Optimization:

  • Describe the purpose of the loss function in the context of training the RNN model.

  • Why is the 'adam' optimizer commonly used in training neural networks like RNNs? (See the sketch below.)
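
In the sketch above, the loss is sparse_categorical_crossentropy, which measures how far the predicted letter probabilities are from the true next letter and supplies the gradient that training minimizes. 'adam' is popular because it adapts a per-parameter learning rate from running estimates of the gradient moments, which tends to converge quickly with little tuning. A minimal compile call, assuming the same model:

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',  # matches integer targets like y
              metrics=['accuracy'])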

Model Evaluation:

  • How do you evaluate the performance of the trained RNN model on the test dataset?

  • Suggest some metrics that could be used to assess the accuracy and performance of the RNN model in predicting alphabet sequences (see the sketch below).
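
Evaluation on the held-out set from the sketch above could look like the following; the classification_report call assumes scikit-learn is available. Accuracy, top-k accuracy, per-class precision/recall, and a confusion matrix are all reasonable metrics for 26-way letter prediction.

import numpy as np
from sklearn.metrics import classification_report

# evaluate() returns the loss plus every metric passed to compile().
loss, acc = model.evaluate(X_test, y_test, verbose=0)
print(f"test loss: {loss:.4f}  test accuracy: {acc:.2%}")

# Per-class detail from the predicted labels.
y_pred = np.argmax(model.predict(X_test, verbose=0), axis=1)
print(classification_report(y_test, y_pred, zero_division=0))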

Hyperparameter Tuning:

  • Discuss potential hyperparameters that could be tuned to improve the performance of the RNN model.

  • How would you adjust the learning rate of the optimizer to optimize the training process? (See the sketch below.)
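
Candidate hyperparameters include the number of SimpleRNN units, the learning rate, the number of epochs, the batch size, and the cell type itself (SimpleRNN vs. LSTM/GRU). The learning rate is set by constructing the optimizer explicitly instead of passing the string 'adam'; the values below are illustrative defaults, not tuned choices.

import tensorflow as tf

# Explicit Adam with a chosen learning rate (1e-3 is Keras's default).
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
model.compile(optimizer=optimizer,
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# A schedule is another option: start higher and decay the rate over time.
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-2, decay_steps=100, decay_rate=0.9)
optimizer = tf.keras.optimizers.Adam(learning_rate=schedule)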

import numpy as np
import matplotlib.pyplot as plt

# Generate sample image data
image_data = np.random.rand(100, 100)  # Generating a random 100x100 image
plt.imshow(image_data, cmap='gray')
plt.axis('off')

# Save the image to a specified location
image_path = 'sample_image.png'
plt.savefig(image_path)
plt.close()
print("Sample image saved successfully.")

# Basic implementation of RNN using TensorFlow
import tensorflow as tf

# Generate some sample sequence data
# Suppose we have a sequence of numbers from 0 to 99
sequence_data = np.arange(100).reshape(1, -1, 1).astype(np.float32)

# Define the RNN model
model = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(64, return_sequences=True),
    tf.keras.layers.Dense(1)
])

# Compile the model
model.compile(optimizer='adam', loss='mse')

# Train the model
model.fit(sequence_data, sequence_data, epochs=10)

# Predict using the trained model
predicted_sequence = model.predict(sequence_data)
print("Predicted sequence:")
print(predicted_sequence)
Sample image saved successfully.
Epoch 1/10
1/1 [==============================] - 1s 708ms/step - loss: 3325.9153
Epoch 2/10
1/1 [==============================] - 0s 10ms/step - loss: 3312.3162
Epoch 3/10
1/1 [==============================] - 0s 9ms/step - loss: 3299.3682
Epoch 4/10
1/1 [==============================] - 0s 0s/step - loss: 3287.0168
Epoch 5/10
1/1 [==============================] - 0s 16ms/step - loss: 3275.1812
Epoch 6/10
1/1 [==============================] - 0s 0s/step - loss: 3263.7783
Epoch 7/10
1/1 [==============================] - 0s 12ms/step - loss: 3252.7368
Epoch 8/10
1/1 [==============================] - 0s 14ms/step - loss: 3241.9968
Epoch 9/10
1/1 [==============================] - 0s 7ms/step - loss: 3231.5063
Epoch 10/10
1/1 [==============================] - 0s 15ms/step - loss: 3221.2219
1/1 [==============================] - 0s 128ms/step
Predicted sequence:
[[[0.07404336] [0.16300163] [0.8172609 ] [1.556626  ] [1.8381275 ]
  [1.8777179 ] [1.79217   ] [1.7712901 ] [1.717089  ] [1.6804506 ]
  [1.6288078 ] [1.5679302 ] [1.4997193 ] [1.4302047 ] [1.363735  ]
  [1.3028378 ] [1.2481234 ] [1.1991804 ] [1.1553096 ] [1.1157746 ]
  [1.0799419 ] [1.0472937 ] [1.0174487 ] [0.9901552 ] [0.9652612 ]
  [0.942677  ] [0.9223328 ] [0.90414643] [0.88800544] [0.87376404]
  [0.8612492 ] [0.85027313] [0.840644  ] [0.8321774 ] [0.82470214]
  [0.8180642 ] [0.8121266 ] [0.8067713 ] [0.8018962 ] [0.7974152 ]
  [0.79325575] [0.7893572 ] [0.78567016] [0.782155  ] [0.7787795 ]
  [0.7755181 ] [0.7723513 ] [0.7692627 ] [0.7662412 ] [0.76327574]
  [0.7603586 ] [0.7574828 ] [0.7546412 ] [0.75182784] [0.7490367 ]
  [0.7462618 ] [0.7434968 ] [0.7407359 ] [0.73797345] [0.7352035 ]
  [0.7324212 ] [0.7296217 ] [0.7268005 ] [0.7239543 ] [0.7210798 ]
  [0.7181748 ] [0.71523726] [0.7122668 ] [0.70926285] [0.7062259 ]
  [0.7031571 ] [0.7000581 ] [0.69693136] [0.69377935] [0.6906059 ]
  [0.6874145 ] [0.68420917] [0.6809943 ] [0.67777455] [0.6745548 ]
  [0.67133975] [0.6681348 ] [0.6649449 ] [0.6617749 ] [0.6586295 ]
  [0.65551424] [0.65243316] [0.64939106] [0.64639235] [0.6434408 ]
  [0.6405399 ] [0.63769364] [0.6349045 ] [0.6321759 ] [0.6295096 ]
  [0.6269081 ] [0.6243731 ] [0.62190557] [0.6195067 ] [0.6171771 ]]]
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.layers import Dense, SimpleRNN
from tensorflow.keras.models import Sequential

# Generate alphabet image data: letters drawn on black 28x28 backgrounds.
# Defined for illustration; this cell never calls it.
def generate_alphabet_images():
    fig, ax = plt.subplots(5, 6, figsize=(20, 20))  # large figure to fit the letter tiles
    letters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
    idx = 0
    for i in range(5):
        for j in range(6):
            if idx >= 26:
                break
            ax[i, j].imshow(np.zeros((28, 28)), cmap='gray')  # black 28x28 background
            ax[i, j].set_facecolor('black')
            ax[i, j].axis('off')
            ax[i, j].text(7, 14, letters[idx], fontsize=25, color='white')
            idx += 1
    plt.tight_layout()
    plt.show()

# Model architecture: each 28x28 image is read as 28 timesteps of 28 features
model = Sequential([
    SimpleRNN(128, input_shape=(28, 28)),
    Dense(26, activation='softmax')
])

# Compile the model
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# Placeholder training data: all-zero "images" with one-hot labels.
# Replace X_train with your actual alphabet image data for meaningful training.
X_train = np.zeros((26, 28, 28))
y_train = np.eye(26)

# Train the model
history = model.fit(X_train, y_train, epochs=10, batch_size=1)

# Plot training history
plt.plot(history.history['loss'], label='loss')
plt.plot(history.history['accuracy'], label='accuracy')
plt.xlabel('Epoch')
plt.ylabel('Value')
plt.legend()
plt.show()

# Generate some sample new data to evaluate the model
X_test = np.zeros((5, 28, 28))

# Evaluate the model on the sample data
predictions = model.predict(X_test)

# Display the predictions
print("Predictions:")
print(predictions)
Epoch 1/10
26/26 [==============================] - 1s 3ms/step - loss: 3.2927 - accuracy: 0.0000e+00
Epoch 2/10
26/26 [==============================] - 0s 3ms/step - loss: 3.2722 - accuracy: 0.0385
Epoch 3/10
26/26 [==============================] - 0s 3ms/step - loss: 3.2687 - accuracy: 0.0385
Epoch 4/10
26/26 [==============================] - 0s 2ms/step - loss: 3.2660 - accuracy: 0.0000e+00
Epoch 5/10
26/26 [==============================] - 0s 2ms/step - loss: 3.2656 - accuracy: 0.0385
Epoch 6/10
26/26 [==============================] - 0s 2ms/step - loss: 3.2637 - accuracy: 0.0000e+00
Epoch 7/10
26/26 [==============================] - 0s 2ms/step - loss: 3.2635 - accuracy: 0.0000e+00
Epoch 8/10
26/26 [==============================] - 0s 2ms/step - loss: 3.2637 - accuracy: 0.0000e+00
Epoch 9/10
26/26 [==============================] - 0s 2ms/step - loss: 3.2628 - accuracy: 0.0000e+00
Epoch 10/10
26/26 [==============================] - 0s 3ms/step - loss: 3.2629 - accuracy: 0.0000e+00
[Image in a Jupyter notebook: plot of training loss and accuracy per epoch]
1/1 [==============================] - 0s 115ms/step
Predictions:
[[0.03778179 0.03854322 0.03799965 0.03839246 0.03846845 0.03838743
  0.03865991 0.03885389 0.03882401 0.03835456 0.03852516 0.0385556
  0.03833072 0.03850435 0.03846415 0.03879601 0.03860387 0.03849811
  0.03796021 0.03844815 0.03870932 0.03879022 0.03841153 0.03882524
  0.03835987 0.03795219]
 [0.03778179 0.03854322 0.03799965 0.03839246 0.03846845 0.03838743
  0.03865991 0.03885389 0.03882401 0.03835456 0.03852516 0.0385556
  0.03833072 0.03850435 0.03846415 0.03879601 0.03860387 0.03849811
  0.03796021 0.03844815 0.03870932 0.03879022 0.03841153 0.03882524
  0.03835987 0.03795219]
 [0.03778179 0.03854322 0.03799965 0.03839246 0.03846845 0.03838743
  0.03865991 0.03885389 0.03882401 0.03835456 0.03852516 0.0385556
  0.03833072 0.03850435 0.03846415 0.03879601 0.03860387 0.03849811
  0.03796021 0.03844815 0.03870932 0.03879022 0.03841153 0.03882524
  0.03835987 0.03795219]
 [0.03778179 0.03854322 0.03799965 0.03839246 0.03846845 0.03838743
  0.03865991 0.03885389 0.03882401 0.03835456 0.03852516 0.0385556
  0.03833072 0.03850435 0.03846415 0.03879601 0.03860387 0.03849811
  0.03796021 0.03844815 0.03870932 0.03879022 0.03841153 0.03882524
  0.03835987 0.03795219]
 [0.03778179 0.03854322 0.03799965 0.03839246 0.03846845 0.03838743
  0.03865991 0.03885389 0.03882401 0.03835456 0.03852516 0.0385556
  0.03833072 0.03850435 0.03846415 0.03879601 0.03860387 0.03849811
  0.03796021 0.03844815 0.03870932 0.03879022 0.03841153 0.03882524
  0.03835987 0.0379522 ]]