Neural Networks
A neural network, also known as an artificial neural network (ANN), is a computational model inspired by the structure and functioning of the human brain. It's a fundamental concept in machine learning and deep learning.
In the learning process, a neural network is stimulated by an environment.
The free parameters of the neural network are then changed as a result of this stimulation.
The neural network responds in a new way to the environment because of the changes in its free parameters.
Single Neuron Unit
Structure of a Neural Network:
A neural network is composed of interconnected nodes, or "neurons," organized into layers.
Neural networks have no pre-programmed understanding of a problem; instead, they extract identifying features from the data. A network's components include neurons, connections, weights, biases, propagation functions, and a learning rule. Each neuron receives inputs, and its output is governed by a threshold and an activation function.
The three main types of layers are:
Input Layer:
This layer contains nodes that represent the input features or variables of the problem. Each node corresponds to a feature, and the number of nodes in the input layer is equal to the number of features.
Hidden Layers:
These layers sit between the input and output layers. Each hidden layer consists of multiple nodes (neurons). Deep learning, a subfield of machine learning built on neural networks, is characterized by the presence of multiple hidden layers.
Output Layer:
This layer produces the final output of the neural network. The number of nodes in the output layer depends on the type of problem. For example, in binary classification, there might be one node (0 or 1), while in multi-class classification, there might be multiple nodes representing different classes.
Connections and Weights:
Every connection between nodes in adjacent layers is associated with a weight. These weights are the learnable parameters of the neural network and are adjusted during the training process.
Activation Functions:
Each node (except in the input layer) has an activation function that determines the output of the node based on the weighted sum of its inputs. Common activation functions include ReLU (Rectified Linear Unit), sigmoid, and tanh.
Training Process:
Forward Propagation: The input data is fed forward through the network. The weights and biases are applied to the inputs, and the output of each node is computed.
Loss Calculation: The output of the network is compared to the true target values, and a loss (or error) is calculated. This quantifies how far off the predictions are from the actual targets.
Backpropagation: This is the heart of the training process. The gradients (derivatives of the loss with respect to the weights) are calculated by propagating the errors backward through the network. This allows for the adjustment of weights in a direction that minimizes the loss.
Gradient Descent: The weights are updated using optimization algorithms like stochastic gradient descent (SGD). This process iteratively adjusts the weights to reduce the loss.
Learning and Generalization: Through this iterative cycle of forward propagation, loss calculation, backpropagation, and weight adjustment, the neural network learns to make better predictions on the training data. The goal is for the network to generalize its learning to make accurate predictions on new, unseen data.
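To make these steps concrete, below is a minimal sketch of the full training cycle (forward propagation, loss calculation, backpropagation, gradient descent) for a single sigmoid neuron in plain NumPy. The toy data, learning rate, and epoch count are illustrative assumptions, not values from this notebook.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))             # 100 samples, 3 features (toy data)
y = (X.sum(axis=1) > 0).astype(float)     # toy binary targets

w = np.zeros(3)                           # weights (learnable parameters)
b = 0.0                                   # bias
lr = 0.1                                  # learning rate (illustrative)

for epoch in range(200):
    # Forward propagation: weighted sum of inputs, then sigmoid activation
    z = X @ w + b
    p = 1.0 / (1.0 + np.exp(-z))
    # Loss calculation: binary cross-entropy between predictions and targets
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    # Backpropagation: gradients of the loss with respect to w and b
    grad_z = (p - y) / len(y)
    grad_w = X.T @ grad_z
    grad_b = grad_z.sum()
    # Gradient descent: adjust the weights in the direction that reduces the loss
    w -= lr * grad_w
    b -= lr * grad_b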
Applications:
Neural networks have found applications in a wide range of fields, including computer vision (e.g., image recognition), natural language processing (e.g., language translation), speech recognition, game playing (e.g., AlphaGo), autonomous vehicles, healthcare, and many more.
Deep Learning:
When a neural network has multiple hidden layers (more than one), it's often referred to as a deep neural network, and the process is known as deep learning. Deep learning has demonstrated remarkable capabilities in handling complex tasks and processing large amounts of data.
Python Libraries for Neural Network:
To work with neural networks in Python, you'll typically need a combination of the following libraries:
NumPy: NumPy is a fundamental package for numerical computations in Python. It provides support for working with arrays and matrices, which are essential for handling the mathematical operations involved in neural networks.
Keras or TensorFlow (or both):
TensorFlow: TensorFlow is a powerful open-source library for numerical computation and machine learning. It provides a comprehensive set of tools for building and training various types of neural networks.
Keras: Keras is an open-source neural network library that acts as an interface for various backends, including TensorFlow. It provides a high-level API for building and training neural networks, making it easy to get started.
PyTorch (Optional): PyTorch is another popular open-source deep learning library that provides dynamic computation graphs. It's known for its flexible and dynamic approach to building neural networks.
Scikit-learn (Optional): While primarily a library for traditional machine learning, scikit-learn includes various tools for preprocessing data and evaluating the performance of machine learning models, which can be useful in conjunction with neural networks.
Matplotlib or Seaborn (Optional): These libraries are used for data visualization, which can be helpful for understanding the performance of neural networks and visualizing results.
Pandas (Optional): Pandas is a versatile library for data manipulation and analysis. While not directly related to neural networks, it's often used for tasks like data preprocessing.
Jupyter Notebook (Optional): Jupyter notebooks are interactive environments that allow you to write and execute Python code in a document-style format. They are popular for experimenting with and visualizing neural network models.
CUDA and cuDNN (Optional): If you're working with deep learning models on GPU hardware, installing CUDA (NVIDIA's parallel computing platform) and cuDNN (a GPU-accelerated library for deep neural networks) can significantly speed up computations.
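As a quick sanity check, assuming the packages above have been installed (for example via pip), the core imports can be verified with a snippet like this:

import numpy as np
import tensorflow as tf
import torch
import sklearn

# Print the installed versions to confirm the environment is set up
print("NumPy:", np.__version__)
print("TensorFlow:", tf.__version__)
print("PyTorch:", torch.__version__)
print("scikit-learn:", sklearn.__version__)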
Multilayer Neural Network
Basic feedforward neural network for binary classification.
Create a Sequential model
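Below is a minimal sketch of such a model, assuming TensorFlow/Keras is installed; the random toy data, layer sizes, and training settings are illustrative assumptions.

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Toy data: 1000 samples with 20 features each, and binary labels
X = np.random.rand(1000, 20)
y = np.random.randint(0, 2, size=(1000,))

# Sequential model: input layer, two hidden layers, one output node
model = keras.Sequential([
    keras.Input(shape=(20,)),
    layers.Dense(16, activation="relu"),      # hidden layer 1
    layers.Dense(8, activation="relu"),       # hidden layer 2
    layers.Dense(1, activation="sigmoid"),    # output: probability of class 1
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)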
Activation Functions:
Activation functions introduce non-linearity into the neural network. This allows the network to learn complex relationships between inputs and outputs.
Types of Activation functions:
1. Sigmoid Function:
Range: (0, 1)
Pros: Smooth, interpretable as probabilities, historically popular.
Cons: Suffers from the vanishing gradients problem (gradients become very small); not used much in hidden layers anymore.
Formula: f(x) = 1 / (1 + e^(-x))
2. ReLU (Rectified Linear Unit)
Range: [0, +∞)
Pros: Fast to compute, helps with the vanishing gradients problem, widely used in hidden layers
Cons: Can suffer from the dying ReLU problem (neurons can get stuck during training and stop learning).
Formula: f(x) = max(0, x)
3. Leaky ReLU
Range: (-∞, +∞)
Pros: Addresses the dying ReLU problem by allowing a small gradient for negative inputs.
Cons: Slightly more computationally expensive than ReLU.
Formula: f(x) = max(0.01x, x) (or a small, non-zero slope for negative values)
4. Softmax (for multi-class classification)
Range: (0, 1), and the sum of all outputs is 1.
Purpose: Scales the outputs so that they can be interpreted as probabilities, commonly used in the output layer for multi-class classification.
Formula: softmax(x_i) = e^(x_i) / Σ_j e^(x_j)
Choosing an Activation Function:
Generally, ReLU or its variants are preferred in hidden layers due to their computational efficiency and better performance in practice.
Sigmoid or softmax is commonly used in the output layer for binary or multi-class classification tasks, respectively.
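For reference, here is a minimal NumPy sketch of the four activation functions above. The max-subtraction in softmax is a standard numerical-stability trick, not something stated in this notebook.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))           # range (0, 1)

def relu(x):
    return np.maximum(0.0, x)                 # range [0, +inf)

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)      # small slope for negative inputs

def softmax(x):
    e = np.exp(x - np.max(x))                 # subtract max for numerical stability
    return e / e.sum()                        # outputs sum to 1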
Loss Functions
Loss functions quantify the error or discrepancy between the predicted output of the neural network and the actual target values during training.
Sparse Categorical Cross Entropy:
Similar to categorical cross entropy, but used when the target values are integers (class labels) instead of one-hot encoded vectors.
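A minimal sketch of the difference, assuming Keras is available; the predicted probabilities below are made-up illustrative values. Both calls compute the same loss, one from integer labels and one from their one-hot encoding.

import numpy as np
from tensorflow import keras

# Predicted class probabilities for 3 samples over 3 classes (illustrative)
y_pred = np.array([[0.8, 0.1, 0.1],
                   [0.2, 0.2, 0.6],
                   [0.1, 0.7, 0.2]])

# Sparse version: targets are integer class labels
y_true_int = np.array([0, 2, 1])
sparse_loss = keras.losses.SparseCategoricalCrossentropy()
print(float(sparse_loss(y_true_int, y_pred)))

# One-hot version: targets are one-hot encoded vectors
y_true_onehot = keras.utils.to_categorical(y_true_int, num_classes=3)
cat_loss = keras.losses.CategoricalCrossentropy()
print(float(cat_loss(y_true_onehot, y_pred)))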
Choosing a Loss Function:
The choice depends on the nature of the problem (regression, binary classification, multi-class classification). Ensure that the chosen loss function aligns with the activation function used in the output layer. Remember, the selection of activation and loss functions can significantly impact the performance of your neural network, so it's important to experiment and choose the right combination for your specific task.
Implementation of a basic regression model using a feedforward neural network with PyTorch
PyTorch is a popular deep learning library. Specifically, it offers two key features:
Tensor computation (like NumPy): With strong GPU acceleration, PyTorch allows users to perform various mathematical operations.
Dynamic neural networks: PyTorch is commonly used for creating deep learning models with a tape-based autograd system.
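Below is a minimal sketch of such a regression model; the synthetic data, layer sizes, and training settings are illustrative assumptions.

import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(200, 4)                                # 200 samples, 4 features
true_w = torch.tensor([[1.5], [-2.0], [0.5], [3.0]])   # made-up ground-truth weights
y = X @ true_w + 0.1 * torch.randn(200, 1)             # noisy linear targets

# Feedforward network: one hidden layer, single regression output
model = nn.Sequential(
    nn.Linear(4, 16),
    nn.ReLU(),
    nn.Linear(16, 1),
)

criterion = nn.MSELoss()                               # mean squared error for regression
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(100):
    optimizer.zero_grad()        # clear accumulated gradients
    pred = model(X)              # forward pass
    loss = criterion(pred, y)    # loss calculation
    loss.backward()              # backpropagation via autograd
    optimizer.step()             # gradient descent update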