What is a Feedforward Neural Network?
A Feedforward Neural Network (FNN) is the simplest type of neural network.
Data moves only in one direction
No loops or feedback
Used for prediction and classification
Simple FNN using NumPy
This example shows how a neural network works internally using matrix multiplication.
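As a minimal sketch (the layer sizes, inputs, and random weights here are illustrative assumptions, not the notebook's original cell), a two-layer forward pass reduces to a few matrix multiplications:

```python
import numpy as np

# Illustrative 2-layer FNN forward pass; sizes and values are assumptions
rng = np.random.default_rng(0)

x = np.array([0.5, -1.2, 3.0])   # 3 input features
W1 = rng.normal(size=(3, 4))     # weights: input -> hidden (4 units)
b1 = np.zeros(4)                 # hidden-layer bias
W2 = rng.normal(size=(4, 1))     # weights: hidden -> output
b2 = np.zeros(1)                 # output bias

def relu(z):
    return np.maximum(0, z)

h = relu(x @ W1 + b1)            # hidden activations
y = h @ W2 + b2                  # network output (no final activation)
print(y)
```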
What is the relevance of weights and biases in a Neural Network?
In a Feedforward Neural Network (FNN), weights and biases are the fundamental parameters that give the network its learning capacity and expressive power.
1. Role of Weights
a) Control the strength of input influence
Weights determine how strongly each input feature affects a neuron’s output.
Mathematically, for a neuron: $y = f\left(\sum_i w_i x_i + b\right)$, where $x_i$ are the inputs, $w_i$ the weights, $b$ the bias, and $f$ the activation function.
A large positive weight amplifies the input.
A negative weight inverts the influence.
A near-zero weight effectively ignores the input.
Without learnable weights, all inputs would contribute equally and there would be nothing to adjust during training, making learning impossible.
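For a concrete sense of these three cases, here is a tiny sketch with made-up numbers:

```python
import numpy as np

x = np.array([1.0, 1.0, 1.0])      # three identical inputs
w = np.array([5.0, -5.0, 0.001])   # large positive, negative, near-zero weights

print(w * x)         # per-input contributions: amplified, inverted, ~ignored
print(np.dot(w, x))  # weighted sum the neuron sees before bias and activation
```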
b) Enable feature learning
During training, weight updates allow the network to:
Identify important features
Suppress irrelevant or noisy features
Learn complex patterns across layers (hierarchical representations)
This is the core mechanism by which FNNs generalize from data.
c) Shape the decision boundary
Weights define the orientation and curvature of decision boundaries in the input space.
Linear models learn straight boundaries
Multi-layer FNNs with non-linear activations learn highly non-linear boundaries
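A classic illustration is XOR, which no straight line can separate, while a small FNN with a non-linear activation learns it easily. A minimal sketch using PyTorch (introduced later in this notebook); the hidden size, activation, and hyperparameters are arbitrary choices:

```python
import torch
import torch.nn as nn

# XOR: not linearly separable, so a hidden layer + non-linearity is required
X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

model = nn.Sequential(
    nn.Linear(2, 8),   # hidden layer size is an arbitrary choice
    nn.Tanh(),         # any non-linearity works here
    nn.Linear(8, 1),
    nn.Sigmoid(),
)
opt = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.BCELoss()

for _ in range(2000):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

print(model(X).round())  # typically converges to [[0], [1], [1], [0]]
```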
2. Role of Bias
a) Shift the activation function
Bias allows the activation function to be shifted left or right, instead of being forced to pass through the origin.

This means:
The neuron can activate even when inputs are zero
The network gains flexibility in fitting real-world data
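A tiny numeric illustration of this shift, using arbitrary values and a ReLU activation:

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)

x = np.linspace(-2, 2, 5)   # inputs: [-2, -1, 0, 1, 2]
w = 1.0

print(relu(w * x))          # no bias: the neuron is active only for x > 0
print(relu(w * x + 1.5))    # bias = 1.5: the threshold shifts to x > -1.5
```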
b) Improve model expressiveness
Bias acts as an intercept term, analogous to the intercept in linear regression. It allows neurons to:
Learn thresholds
Model asymmetrical patterns
Fit data that is not centered around zero
Without a bias term, many patterns would be difficult or impossible to learn.
c) Stabilize learning
Bias helps neurons activate in useful regions of the activation function (e.g., avoiding dead ReLUs), improving:
Convergence speed
Gradient flow
Training stability
3. Why Both Are Necessary (Together)
| Configuration | Effect |
|---|---|
| No Weights | No learning; all inputs treated the same |
| No Bias | Limited expressiveness; poor fit to data |
| Both Present | Flexible, learnable, expressive model |
Weights learn what to pay attention to, while bias learns when to activate.
4. Intuitive Analogy
Think of a neuron as a decision-maker:
Weights are the importance given to each factor
Bias is the baseline tendency to say “yes” or “no”
Both are required for nuanced decisions.
5. Summary
Weights define input importance and shape decision boundaries
Bias provides flexibility by shifting activation thresholds
Together, they enable FNNs to learn complex, non-linear relationships
Removing either severely limits the network’s learning capability
Feedforward Neural Network using PyTorch
PyTorch is an open-source deep learning library developed by Facebook AI Research (FAIR), now Meta AI.
PyTorch builds the computational graph on the fly, meaning the graph is created as the code runs. This makes it easier to debug and modify networks dynamically, which is very useful when experimenting with different FNN architectures.
Backpropagation is handled automatically by PyTorch's autograd engine, which (see the short example after this list):
Tracks computations as the code runs
Computes gradients via the chain rule
Hands those gradients to an optimizer, which updates the weights
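A minimal sketch of autograd at work (the function is an arbitrary example):

```python
import torch

# Tensors with requires_grad=True are tracked by autograd
x = torch.tensor(3.0, requires_grad=True)
y = x**2 + 2 * x       # y = x^2 + 2x

y.backward()           # autograd computes dy/dx via the chain rule
print(x.grad)          # dy/dx = 2x + 2 = 8.0 at x = 3
```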
What Are Tensors in PyTorch?
A tensor is the core data structure in PyTorch.
You can think of a tensor as a generalized NumPy array, but with extra powers: it can live on a GPU for fast computation, and it can track gradients for automatic differentiation.
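A few basic tensor operations, as a quick sketch:

```python
import torch

t = torch.tensor([[1., 2.], [3., 4.]])    # a 2x2 tensor, much like a NumPy array
print(t.shape, t.dtype)                   # torch.Size([2, 2]) torch.float32

n = t.numpy()                             # bridge to NumPy (shares memory on CPU)
back = torch.from_numpy(n)                # and back again

if torch.cuda.is_available():             # tensors can move to a GPU
    t = t.to("cuda")

g = torch.ones(2, 2, requires_grad=True)  # tensors can track gradients for autograd
```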
Training a Feedforward Network
Here we train a neural network to learn y = x × 2.
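As a minimal sketch (the single-neuron architecture and the hyperparameters are illustrative assumptions, not the notebook's original cell):

```python
import torch
import torch.nn as nn

# Toy data for the target function y = 2x
X = torch.tensor([[1.0], [2.0], [3.0], [4.0]])
y = 2 * X

model = nn.Linear(1, 1)                       # one neuron: y_hat = w*x + b
loss_fn = nn.MSELoss()
opt = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(2000):
    opt.zero_grad()                           # clear gradients from the previous step
    loss = loss_fn(model(X), y)               # forward pass + loss
    loss.backward()                           # backpropagation via autograd
    opt.step()                                # gradient descent update

print(model.weight.item(), model.bias.item()) # should approach 2.0 and 0.0
print(model(torch.tensor([[5.0]])).item())    # should approach 10.0
```

After training, the learned weight approximates 2 and the bias approximates 0, so the model has effectively recovered y = x × 2 from examples alone.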
Key Takeaways
Feedforward Neural Networks are the foundation of GANs
The Generator and the Discriminator are both FNNs (in the simplest architectures)