
Neural Networks: Introduction

  • Biological neurons and synapses have inspired the development of artificial neural networks (ANNs) in the field of machine learning. Let's explore this biological inspiration:

1. Biological Neurons:

  • Structure: Neurons are the basic building blocks of the nervous system in living organisms. They consist of a cell body, dendrites (input receptors), an axon (output transmitter), and synapses.

  • Function: Neurons communicate with each other through electrochemical signals. When a neuron receives a sufficient amount of input from its dendrites, it "fires" and sends an electrical signal down its axon.

  • Activation Threshold: Neurons have an activation threshold; they only fire if the combined inputs reach a certain level.

2. Synapses:

  • Connection Points: Synapses are the junctions between neurons. They allow information to pass from one neuron to the next.

  • Strength of Connection: The strength of a synapse can be modified over time. If two neurons frequently communicate, the synapse between them strengthens, making future communication more efficient.

  • Plasticity: Synaptic plasticity is the ability of synapses to change their strength. This is fundamental to learning and memory in biological systems.

3. Inspiration for Artificial Neural Networks (ANNs):

Mimicking Neurons:

  • ANNs are composed of artificial neurons, which are mathematical models inspired by biological neurons. These neurons take inputs, apply weights, sum them up, and pass them through an activation function to produce an output.
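
To make this concrete, here is a minimal sketch of the computation a single artificial neuron performs (a weighted sum of inputs plus a bias, passed through an activation function); the input and weight values are arbitrary illustrative numbers:

import numpy as np

inputs = np.array([0.5, 0.3, 0.9])     # incoming signals
weights = np.array([0.4, -0.2, 0.7])   # connection strengths (weights)
bias = 0.1

weighted_sum = np.dot(inputs, weights) + bias   # sum of the weighted inputs
output = 1 / (1 + np.exp(-weighted_sum))        # sigmoid activation function
print(output)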

Layers and Connectivity:

  • ANNs have layers of neurons, similar to the layers of neurons in the brain. Information flows from the input layer through one or more hidden layers to the output layer.

Weights as Synaptic Strengths:

  • The weights in an artificial neuron represent the strengths of the connections, similar to the synaptic strengths in biological neurons.

Learning and Adaptation:

  • ANNs can learn from data through a process analogous to synaptic plasticity. During training, the weights are adjusted based on the errors in the predictions, allowing the network to adapt and improve its performance.

4. Advantages of Biological Inspiration:

  • Parallel Processing: Like the brain, ANNs can process information in parallel, making them efficient for tasks like pattern recognition and image processing.

  • Robustness and Fault Tolerance: ANNs can be robust to noisy data and can continue functioning even if some neurons or connections are damaged or missing.

5. Differences and Abstractions:

  • While ANNs are inspired by biological systems, they are highly simplified abstractions. They don't capture the full complexity of biological neurons and synapses, but they provide a powerful computational framework.

  • Overall, the biological inspiration of neurons and synapses has provided a valuable conceptual framework for developing artificial intelligence and machine learning algorithms. ANNs have demonstrated remarkable capabilities in tasks such as image recognition, natural language processing, and more, showcasing the effectiveness of this approach.

6. Perceptrons:

A perceptron is the simplest form of a neural network. It's a mathematical model inspired by the way individual neurons work in the brain.

Characteristics of Perceptrons:

  • Structure: A perceptron takes multiple binary inputs (0 or 1) and produces a single binary output. Each input is associated with a weight, which represents the importance of that input.

  • Activation Function: The weighted sum of inputs is passed through an activation function. In the original perceptron model, this function is a step function (e.g., if the weighted sum is greater than a threshold, the output is 1; otherwise, it's 0).

  • Learning Rule: The perceptron learning rule is a way to adjust the weights based on the error in the output. It updates the weights to reduce the error in future predictions (a minimal sketch of this rule appears after this list).

  • Single-Layer Model: A perceptron is a single-layer neural network. It can only learn linearly separable functions, meaning it can only classify data that can be separated by a straight line (or plane in higher dimensions).
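
Here is that sketch: a perceptron with a step activation and the perceptron learning rule, trained on the logical AND function (which is linearly separable); the learning rate and number of epochs are illustrative choices:

import numpy as np

# Training data for logical AND (linearly separable)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

weights = np.zeros(2)
bias = 0.0
lr = 0.1   # learning rate (illustrative)

for epoch in range(10):
    for xi, target in zip(X, y):
        output = 1 if np.dot(xi, weights) + bias > 0 else 0   # step activation
        error = target - output
        weights += lr * error * xi   # perceptron learning rule
        bias += lr * error

print(weights, bias)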

Multi-layer Perceptrons (MLPs):

  • Multi-layer Perceptrons, often referred to simply as neural networks, are a more powerful and versatile extension of perceptrons. They overcome the limitations of single-layer models and can learn complex, non-linear relationships. Here are the key characteristics of MLPs:

  • Structure: An MLP consists of multiple layers of neurons, including an input layer, one or more hidden layers, and an output layer. Each neuron in a hidden layer and the output layer is connected to every neuron in the previous layer.

  • Activation Functions: Unlike perceptrons, neurons in an MLP typically use non-linear activation functions (e.g., sigmoid, ReLU) which allow the network to learn non-linear relationships.

  • Universal Function Approximator:

An MLP with a sufficient number of neurons in its hidden layers can approximate any continuous function (on a bounded input domain) to arbitrary accuracy. This property is known as the Universal Approximation Theorem.

  • Learning Algorithms: Backpropagation is the primary learning algorithm used for training MLPs. It involves computing gradients of the loss function with respect to the weights and adjusting the weights using gradient descent or other optimization methods.

  • Deep Learning: When an MLP has multiple hidden layers (more than one), it is often referred to as a deep neural network. Deep learning is a subset of machine learning that focuses on networks with many hidden layers. MLPs are the foundation of modern deep learning and have been highly successful in a wide range of applications, including image recognition, natural language processing, speech recognition, and more.

In summary, while perceptrons are simple, single-layer models with binary outputs, multi-layer perceptrons (MLPs) are more complex, multi-layer networks with non-linear activation functions that can learn complex relationships between inputs and outputs.

A neural network, also known as an artificial neural network (ANN), is a computational model inspired by the structure and functioning of the human brain. It's a fundamental concept in machine learning and deep learning.

  • The neural network is stimulated by an environment.

  • The free parameters of the neural network are then changed as a result of this stimulation.

  • The neural network then responds in a new way to the environment because of the changes in its free parameters.


Structure of a Neural Network:

A neural network is composed of interconnected nodes, or "neurons," organized into layers.

  • Neural networks learn identifying features from the data itself rather than relying on pre-programmed rules. The components of a network include neurons, connections, weights, biases, propagation functions, and a learning rule. Each neuron receives inputs and produces an output governed by its weights, threshold (bias), and activation function. The three main types of layers are:

Input Layer:

  • This layer contains nodes that represent the input features or variables of the problem. Each node corresponds to a feature, and the number of nodes in the input layer is equal to the number of features.

Hidden Layers:

  • These layers are between the input and output layers. Each hidden layer consists of multiple nodes (neurons). Deep learning, a subset of neural networks, is characterized by the presence of multiple hidden layers.

Output Layer:

  • This layer produces the final output of the neural network. The number of nodes in the output layer depends on the type of problem. For example, in binary classification, there might be one node (0 or 1), while in multi-class classification, there might be multiple nodes representing different classes.

Connections and Weights:

Every connection between nodes in adjacent layers is associated with a weight. These weights are the learnable parameters of the neural network and are adjusted during the training process.

Activation Functions:

Each node (except in the input layer) has an activation function that determines the output of the node based on the weighted sum of its inputs. Common activation functions include ReLU (Rectified Linear Unit), sigmoid, and tanh.

Training Process:

  • Forward Propagation: The input data is fed forward through the network. The weights and biases are applied to the inputs, and the output of each node is computed.

  • Loss Calculation: The output of the network is compared to the true target values, and a loss (or error) is calculated. This quantifies how far off the predictions are from the actual targets.

  • Backpropagation: This is the heart of the training process. The gradients (derivatives of the loss with respect to the weights) are calculated by propagating the errors backward through the network. This allows for the adjustment of weights in a direction that minimizes the loss.

  • Gradient Descent: The weights are updated using optimization algorithms like stochastic gradient descent (SGD). This process iteratively adjusts the weights to reduce the loss (a minimal end-to-end sketch of these steps follows below).

  • Learning and Generalization: Through this iterative process of forward propagation, loss calculation, backpropagation, and weight adjustment, the neural network learns to make better predictions on the training data. The goal is for the network to generalize its learning to make accurate predictions on new, unseen data.
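
As referenced above, here is a minimal end-to-end sketch of these steps for a single sigmoid neuron: forward propagation, binary cross-entropy loss, gradient computation (backpropagation reduces to a single gradient step in this one-neuron case), and a gradient-descent weight update; the data and learning rate are illustrative:

import numpy as np

np.random.seed(0)
X = np.random.rand(100, 2)                      # input features
y = (X[:, 0] + X[:, 1] > 1).astype(float)       # binary targets

w = np.zeros(2)
b = 0.0
lr = 0.5   # learning rate (illustrative)

for epoch in range(100):
    # Forward propagation
    z = X @ w + b
    pred = 1 / (1 + np.exp(-z))                 # sigmoid activation
    # Loss calculation (binary cross-entropy)
    loss = -np.mean(y * np.log(pred + 1e-9) + (1 - y) * np.log(1 - pred + 1e-9))
    # Backpropagation: gradients of the loss with respect to w and b
    grad_z = (pred - y) / len(y)
    grad_w = X.T @ grad_z
    grad_b = grad_z.sum()
    # Gradient descent update
    w -= lr * grad_w
    b -= lr * grad_b

print(f"final loss: {loss:.4f}")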

Applications:

Neural networks have found applications in a wide range of fields, including computer vision (e.g., image recognition), natural language processing (e.g., language translation), speech recognition, game playing (e.g., AlphaGo), autonomous vehicles, healthcare, and many more.

Deep Learning:

When a neural network has multiple hidden layers (more than one), it's often referred to as a deep neural network, and the process is known as deep learning. Deep learning has demonstrated remarkable capabilities in handling complex tasks and processing large amounts of data.

Python Libraries for Neural Network:

To work with neural networks in Python, you'll typically need a combination of the following libraries:

  • NumPy: NumPy is a fundamental package for numerical computations in Python. It provides support for working with arrays and matrices, which are essential for handling the mathematical operations involved in neural networks.

Keras or TensorFlow (or both):

  • TensorFlow: TensorFlow is a powerful open-source library for numerical computation and machine learning. It provides a comprehensive set of tools for building and training various types of neural networks.

  • Keras: Keras is an open-source neural network library that acts as an interface for various backends, including TensorFlow. It provides a high-level API for building and training neural networks, making it easy to get started.

  • PyTorch (Optional): PyTorch is another popular open-source deep learning library that provides dynamic computation graphs. It's known for its flexible and dynamic approach to building neural networks.

  • Scikit-learn (Optional): While primarily a library for traditional machine learning, scikit-learn includes various tools for preprocessing data and evaluating the performance of machine learning models, which can be useful in conjunction with neural networks.

  • Matplotlib or Seaborn (Optional): These libraries are used for data visualization, which can be helpful for understanding the performance of neural networks and visualizing results.

  • Pandas (Optional): Pandas is a versatile library for data manipulation and analysis. While not directly related to neural networks, it's often used for tasks like data preprocessing.

  • Jupyter Notebook (Optional): Jupyter notebooks are interactive environments that allow you to write and execute Python code in a document-style format. They are popular for experimenting with and visualizing neural network models.

  • CUDA and cuDNN (Optional): If you're working with deep learning models on GPU hardware, installing CUDA (NVIDIA's parallel computing platform) and cuDNN (a GPU-accelerated library for deep neural networks) can significantly speed up computations.

Basic feedforward neural network for binary classification.

# Scratch notes: regression predicts a continuous value (e.g., Profit = m * Sales + c),
# whereas binary classification predicts y = 0 or 1.
import numpy as np

np.random.seed(0)
X = np.random.rand(10, 3)
X
array([[0.5488135 , 0.71518937, 0.60276338],
       [0.54488318, 0.4236548 , 0.64589411],
       [0.43758721, 0.891773  , 0.96366276],
       [0.38344152, 0.79172504, 0.52889492],
       [0.56804456, 0.92559664, 0.07103606],
       [0.0871293 , 0.0202184 , 0.83261985],
       [0.77815675, 0.87001215, 0.97861834],
       [0.79915856, 0.46147936, 0.78052918],
       [0.11827443, 0.63992102, 0.14335329],
       [0.94466892, 0.52184832, 0.41466194]])
# Step 1: Import necessary libraries
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Step 2: Generate some random data for demonstration
np.random.seed(0)
X = np.random.rand(100, 2)                  # Features (2 input neurons)
y = (X[:, 0] + X[:, 1] > 1).astype(int)     # Binary target variable
X
array([[0.5488135 , 0.71518937], [0.60276338, 0.54488318], [0.4236548 , 0.64589411], [0.43758721, 0.891773 ], [0.96366276, 0.38344152], [0.79172504, 0.52889492], [0.56804456, 0.92559664], [0.07103606, 0.0871293 ], [0.0202184 , 0.83261985], [0.77815675, 0.87001215], [0.97861834, 0.79915856], [0.46147936, 0.78052918], [0.11827443, 0.63992102], [0.14335329, 0.94466892], [0.52184832, 0.41466194], [0.26455561, 0.77423369], [0.45615033, 0.56843395], [0.0187898 , 0.6176355 ], [0.61209572, 0.616934 ], [0.94374808, 0.6818203 ], [0.3595079 , 0.43703195], [0.6976312 , 0.06022547], [0.66676672, 0.67063787], [0.21038256, 0.1289263 ], [0.31542835, 0.36371077], [0.57019677, 0.43860151], [0.98837384, 0.10204481], [0.20887676, 0.16130952], [0.65310833, 0.2532916 ], [0.46631077, 0.24442559], [0.15896958, 0.11037514], [0.65632959, 0.13818295], [0.19658236, 0.36872517], [0.82099323, 0.09710128], [0.83794491, 0.09609841], [0.97645947, 0.4686512 ], [0.97676109, 0.60484552], [0.73926358, 0.03918779], [0.28280696, 0.12019656], [0.2961402 , 0.11872772], [0.31798318, 0.41426299], [0.0641475 , 0.69247212], [0.56660145, 0.26538949], [0.52324805, 0.09394051], [0.5759465 , 0.9292962 ], [0.31856895, 0.66741038], [0.13179786, 0.7163272 ], [0.28940609, 0.18319136], [0.58651293, 0.02010755], [0.82894003, 0.00469548], [0.67781654, 0.27000797], [0.73519402, 0.96218855], [0.24875314, 0.57615733], [0.59204193, 0.57225191], [0.22308163, 0.95274901], [0.44712538, 0.84640867], [0.69947928, 0.29743695], [0.81379782, 0.39650574], [0.8811032 , 0.58127287], [0.88173536, 0.69253159], [0.72525428, 0.50132438], [0.95608363, 0.6439902 ], [0.42385505, 0.60639321], [0.0191932 , 0.30157482], [0.66017354, 0.29007761], [0.61801543, 0.4287687 ], [0.13547406, 0.29828233], [0.56996491, 0.59087276], [0.57432525, 0.65320082], [0.65210327, 0.43141844], [0.8965466 , 0.36756187], [0.43586493, 0.89192336], [0.80619399, 0.70388858], [0.10022689, 0.91948261], [0.7142413 , 0.99884701], [0.1494483 , 0.86812606], [0.16249293, 0.61555956], [0.12381998, 0.84800823], [0.80731896, 0.56910074], [0.4071833 , 0.069167 ], [0.69742877, 0.45354268], [0.7220556 , 0.86638233], [0.97552151, 0.85580334], [0.01171408, 0.35997806], [0.72999056, 0.17162968], [0.52103661, 0.05433799], [0.19999652, 0.01852179], [0.7936977 , 0.22392469], [0.34535168, 0.92808129], [0.7044144 , 0.03183893], [0.16469416, 0.6214784 ], [0.57722859, 0.23789282], [0.934214 , 0.61396596], [0.5356328 , 0.58990998], [0.73012203, 0.311945 ], [0.39822106, 0.20984375], [0.18619301, 0.94437239], [0.7395508 , 0.49045881], [0.22741463, 0.25435648], [0.05802916, 0.43441663]])

Create a Sequential model

## Step 3: Create a Sequential model
first_model = Sequential()

# Step 4: Add layers to the model
first_model.add(Dense(3, input_dim=2, activation='relu'))   # Hidden layer with 3 neurons
first_model.add(Dense(1, activation='sigmoid'))             # Output layer with 1 neuron (binary classification)
C:\Users\Suyashi144893\AppData\Local\anaconda3\Lib\site-packages\keras\src\layers\core\dense.py:87: UserWarning: Do not pass an `input_shape`/`input_dim` argument to a layer. When using Sequential models, prefer using an `Input(shape)` object as the first layer in the model instead. super().__init__(activity_regularizer=activity_regularizer, **kwargs)
Evaluation metrics:

  • Accuracy = (TP + TN) / (TP + TN + FP + FN)

  • MSE = mean((actual - predicted)^2)

  • R^2 = 1 - SS_res / SS_tot
# Step 5: Compile the model
first_model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Step 6: Train the model
first_model.fit(X, y, epochs=25, batch_size=30)
Epoch 1/25 4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 13ms/step - accuracy: 0.5347 - loss: 0.6574 Epoch 2/25 4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 12ms/step - accuracy: 0.5313 - loss: 0.6549 Epoch 3/25 4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 12ms/step - accuracy: 0.5458 - loss: 0.6517 Epoch 4/25 4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 6ms/step - accuracy: 0.5491 - loss: 0.6516 Epoch 5/25 4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 6ms/step - accuracy: 0.5069 - loss: 0.6597 Epoch 6/25 4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 10ms/step - accuracy: 0.5502 - loss: 0.6517 Epoch 7/25 4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - accuracy: 0.4969 - loss: 0.6619 Epoch 8/25 4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 7ms/step - accuracy: 0.5291 - loss: 0.6553 Epoch 9/25 4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 8ms/step - accuracy: 0.5191 - loss: 0.6546 Epoch 10/25 4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 7ms/step - accuracy: 0.5013 - loss: 0.6590 Epoch 11/25 4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 8ms/step - accuracy: 0.5047 - loss: 0.6532 Epoch 12/25 4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - accuracy: 0.5491 - loss: 0.6451 Epoch 13/25 4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 7ms/step - accuracy: 0.5169 - loss: 0.6526 Epoch 14/25 4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 9ms/step - accuracy: 0.5624 - loss: 0.6396 Epoch 15/25 4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 8ms/step - accuracy: 0.5342 - loss: 0.6487 Epoch 16/25 4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 7ms/step - accuracy: 0.5342 - loss: 0.6478 Epoch 17/25 4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 8ms/step - accuracy: 0.4842 - loss: 0.6612 Epoch 18/25 4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 8ms/step - accuracy: 0.5464 - loss: 0.6437 Epoch 19/25 4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 6ms/step - accuracy: 0.5642 - loss: 0.6372 Epoch 20/25 4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 10ms/step - accuracy: 0.5042 - loss: 0.6519 Epoch 21/25 4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 10ms/step - accuracy: 0.5176 - loss: 0.6500 Epoch 22/25 4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 7ms/step - accuracy: 0.5542 - loss: 0.6391 Epoch 23/25 4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 9ms/step - accuracy: 0.5498 - loss: 0.6416 Epoch 24/25 4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 7ms/step - accuracy: 0.5076 - loss: 0.6505 Epoch 25/25 4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 9ms/step - accuracy: 0.5131 - loss: 0.6507
<keras.src.callbacks.history.History at 0x17c7e1d4d50>
# Step 7: Evaluate the model
loss, accuracy = first_model.evaluate(X, y)
print(f'Loss: {loss:.4f}')
print(f'Accuracy: {accuracy*100:.2f}%')
4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 19ms/step - accuracy: 0.5370 - loss: 0.6431 Loss: 0.6438 Accuracy: 53.00%

Quick Practice:

Create a model of your own with random input data of shape 200×4 for binary classification. Try different numbers of epochs and batch sizes.
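
One possible solution sketch, following the same pattern as the example above; the layer sizes, epoch count, batch size, and the rule used to generate the target are arbitrary starting points to experiment with:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

np.random.seed(1)
X = np.random.rand(200, 4)                # 200 samples, 4 input features
y = (X.sum(axis=1) > 2).astype(int)       # illustrative binary target

practice_model = Sequential()
practice_model.add(Dense(8, input_dim=4, activation='relu'))   # hidden layer
practice_model.add(Dense(1, activation='sigmoid'))             # output layer (binary classification)
practice_model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

practice_model.fit(X, y, epochs=40, batch_size=20)   # try different epochs / batch sizes
loss, accuracy = practice_model.evaluate(X, y)
print(f'Accuracy: {accuracy*100:.2f}%')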

Activation Functions:

  • Activation functions introduce non-linearity into the neural network. This allows the network to learn complex relationships between inputs and outputs.

Types of Activation functions:

1. Sigmoid Function:

  • Range: (0, 1)

  • Pros: Smooth, interpretable as probabilities, historically popular.

  • Cons: Suffers from the vanishing gradients problem (gradients become very small); not used much in hidden layers anymore.

  • Formula: sigmoid(x) = 1 / (1 + e^(-x))

Worked examples of the ReLU function (described in the next section): F(9) = max(0, 9) = 9; F(-8) = max(0, -8) = 0.

2. ReLU (Rectified Linear Unit)

  • Range: [0, +∞)

  • Pros: Fast to compute, helps with the vanishing gradients problem, widely used in hidden layers.

  • Cons: Can suffer from the dying ReLU problem (neurons can get stuck during training and stop learning).

    • Formula: F(x) = max(0,x)


3. Leaky ReLU

  • Range: (-∞, +∞)

  • Pros: Addresses the dying ReLU problem by allowing a small gradient for negative inputs.

  • Cons: Slightly more computationally expensive than ReLU.

  • Formula: f(x) = max(0.01x, x) (or, more generally, a small non-zero slope for negative values).


4. Softmax (for multi-class classification)

  • Range: (0, 1), and the sum of all outputs is 1.

  • Purpose: Scales the outputs so that they can be interpreted as probabilities, commonly used in the output layer for multi-class classification.


  • Formula: softmax(x_i) = e^(x_i) / Σ_j e^(x_j) (a NumPy sketch of these activation functions follows below).
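
As noted above, here is a small NumPy sketch of the four activation functions, evaluated on a few sample values:

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def relu(x):
    return np.maximum(0, x)

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)

def softmax(x):
    e = np.exp(x - np.max(x))   # subtract the max for numerical stability
    return e / e.sum()

x = np.array([-2.0, 0.0, 3.0])
print(sigmoid(x))
print(relu(x))
print(leaky_relu(x))
print(softmax(x))   # outputs sum to 1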

Choosing an Activation Function:

  • Generally, ReLU or its variants are preferred in hidden layers due to their computational efficiency and better performance in practice.

  • Sigmoid or softmax is commonly used in the output layer for binary or multi-class classification tasks, respectively.

Loss Functions

  • Loss functions quantify the error or discrepancy between the predicted output of the neural network and the actual target values during training. Common choices include mean squared error (for regression), binary cross entropy (for binary classification), and categorical cross entropy (for multi-class classification).

Sparse Categorical Cross Entropy:

  • Similar to categorical cross entropy, but used when the target values are integers (class labels) instead of one-hot encoded vectors.

Choosing a Loss Function:

The choice depends on the nature of the problem (regression, binary classification, multi-class classification). Ensure that the chosen loss function aligns with the activation function used in the output layer, as sketched below. Remember, the selection of activation and loss functions can significantly impact the performance of your neural network, so it's important to experiment and choose the right combination for your specific task.
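
Here is a short sketch of how the loss function is typically paired with the output-layer activation when compiling a Keras model; the hidden-layer sizes and input dimensions are illustrative:

from keras.models import Sequential
from keras.layers import Dense

# Regression: linear output neuron with mean squared error
reg_model = Sequential([Dense(8, activation='relu', input_dim=4), Dense(1)])
reg_model.compile(optimizer='adam', loss='mse')

# Binary classification: sigmoid output with binary cross-entropy
bin_model = Sequential([Dense(8, activation='relu', input_dim=4), Dense(1, activation='sigmoid')])
bin_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Multi-class with integer class labels: softmax output with sparse categorical cross-entropy
multi_model = Sequential([Dense(8, activation='relu', input_dim=4), Dense(3, activation='softmax')])
multi_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])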

Python code using Keras to perform linear regression on randomly generated data

import numpy as np
from keras.models import Sequential
from keras.layers import Dense
import matplotlib.pyplot as plt

# Generating random data
np.random.seed(0)
X = np.random.rand(200, 1)
y = 5 * X + 3 + np.random.randn(200, 1) * 0.1   # Adding some noise
# Creating a neural network model
model = Sequential()
model.add(Dense(1, input_dim=1))

# Compiling the model
model.compile(optimizer='sgd', loss='mse')
# Training the model
history = model.fit(X, y, epochs=50, verbose=0)

# Plotting the data and the regression line
plt.scatter(X, y)
plt.plot(X, model.predict(X), color='red')
plt.xlabel('X')
plt.ylabel('y')
plt.title('Linear Regression with Keras')
plt.show()

# Print the final training loss
print("loss =", history.history['loss'][-1])

In Keras, as well as in other machine learning frameworks, AUC stands for Area Under the ROC Curve. It's a metric used to evaluate the performance of a binary classification model. The ROC (Receiver Operating Characteristic) curve is a graphical representation of the trade-off between the true positive rate (sensitivity) and the false positive rate (1 - specificity) for different threshold values. The AUC value represents the area under this curve, which ranges from 0 to 1.

In Keras, you can calculate AUC using the tf.keras.metrics.AUC class. Here's a basic example of how to use it:

import tensorflow as tf

# Assuming y_true and y_pred are your true labels and predicted probabilities respectively
y_true = [0, 1, 1, 0]
y_pred = [0.9, 0.9, 0.8, 0.2]

auc = tf.keras.metrics.AUC()
auc.update_state(y_true, y_pred)
print("AUC Score:", auc.result().numpy())