
Import necessary libraries

We will use matplotlib for plotting and numpy for calculations. We will also use imageio to create a GIF.

import numpy as np
import matplotlib.pyplot as plt
import imageio

Define a simple neural network function

For illustration, we will use a small feed-forward network (a multilayer perceptron) with a single hidden layer: an input layer, a hidden layer, and an output layer.

def simple_neural_network(x):
    # Hidden layer parameters: W1 has shape (2, 1), b1 has shape (2,)
    W1 = np.array([[0.5], [0.1]])
    b1 = np.array([0.2, 0.5])
    # Output layer parameters: W2 has shape (2, 1), b2 has shape (1,)
    W2 = np.array([[0.6], [0.9]])
    b2 = np.array([0.3])
    # Hidden layer activations: x has shape (n, 1), so x @ W1.T has shape (n, 2)
    z1 = np.dot(x, W1.T) + b1
    a1 = np.tanh(z1)
    # Output layer: a1 @ W2 has shape (n, 1)
    z2 = np.dot(a1, W2) + b2
    output = np.tanh(z2)
    return output

Create and save a series of plots illustrating the learning process

We'll use random data to illustrate how the neural network might adjust over time.

frames = []
for epoch in range(1, 11):
    fig, ax = plt.subplots()
    # Simulated data points
    x_points = np.linspace(-1, 1, 10)
    # Ensure both arrays have the same shape before adding noise
    network_output = simple_neural_network(x_points.reshape(-1, 1)).flatten()
    noise = 0.1 * epoch * np.random.randn(10)
    y_points = network_output + noise
    ax.scatter(x_points, y_points, color='blue', label='Data Points')
    # Plot the (fixed) network output as the model curve for this epoch
    fit_y = network_output
    ax.plot(x_points, fit_y, color='red', label=f'Epoch {epoch}: Model Output')
    ax.set_ylim([-1.5, 1.5])
    ax.set_title(f'Neural Network Learning - Epoch {epoch}')
    ax.set_xlabel('Input Feature')
    ax.set_ylabel('Output')
    ax.legend()
    # Render the figure and grab its RGB pixel buffer as a numpy array
    fig.canvas.draw()
    image = np.frombuffer(fig.canvas.tostring_rgb(), dtype='uint8').reshape(
        fig.canvas.get_width_height()[::-1] + (3,))
    frames.append(image)
    plt.close(fig)
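Note that fig.canvas.tostring_rgb is deprecated in newer Matplotlib releases. If it is unavailable in your environment, a minimal alternative (a sketch, assuming the default Agg canvas) is to read the RGBA buffer and drop the alpha channel:

# Alternative frame capture; assumes fig.canvas.draw() has already been called
image = np.asarray(fig.canvas.buffer_rgba())[..., :3].copy()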

Create a GIF from the frames

imageio.mimsave('neural_network_learning.gif', frames, duration=0.5)

Display GIF

To display the created GIF in the notebook:

from IPython.display import Image
Image(open('neural_network_learning.gif', 'rb').read())
(Output: the animated GIF neural_network_learning.gif rendered in the notebook.)

Theoretical Background

The behavior of a neural network is inspired by how neurons in the brain work. At its simplest, a neural network comprises the following layers (a short forward-pass sketch follows the list):

  • Input Layer: Neurons representing input features.

  • Hidden Layer: Neurons that capture complex features through layers of weights and activation functions.

  • Output Layer: Neurons that produce the final prediction or decision.
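As a concrete illustration of these three layers, the forward pass of the small network defined in simple_neural_network above can be written (up to the transposes used in the code to line up array shapes) as:

$$a^{(1)} = \tanh\left(W^{(1)} x + b^{(1)}\right), \qquad \hat{y} = \tanh\left(W^{(2)} a^{(1)} + b^{(2)}\right)$$

where $x$ is the input, $W^{(1)}, b^{(1)}$ parameterize the hidden layer, $W^{(2)}, b^{(2)}$ parameterize the output layer, and $\tanh$ is the activation function.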

Training involves minimizing a loss function with gradient descent, where the gradients are computed by the backpropagation algorithm. Suppose the loss is $L$. For a weight $w$ and learning rate $\eta$, the update rule is:

$$w = w - \eta \frac{\partial L}{\partial w}$$

where $\frac{\partial L}{\partial w}$ is the gradient of the loss with respect to the weight.
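To make the update rule concrete, here is a minimal sketch (an illustration, not part of the network above) that applies it to the one-parameter loss $L(w) = (w - 3)^2$, whose gradient is $\frac{\partial L}{\partial w} = 2(w - 3)$:

# Gradient descent on L(w) = (w - 3)^2, which is minimized at w = 3
w = 0.0      # initial weight
eta = 0.1    # learning rate
for step in range(50):
    grad = 2 * (w - 3)    # dL/dw
    w = w - eta * grad    # the update rule above
print(w)   # approaches 3

With $\eta = 0.1$ the error shrinks by a factor of $0.8$ each step, so after 50 steps $w$ is within about $10^{-4}$ of the minimizer.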

Gradient descent is not the only way to minimize a function. As a separate example, the following cell uses Bayesian optimization (scikit-optimize's gp_minimize) to minimize a simple black-box quadratic without using gradients:

import numpy as np
from skopt import gp_minimize
from skopt.space import Real

# Define the function to minimize
def black_box_function(x):
    return (x[0] - 2) ** 2

# Define the search space
space = [Real(0, 5, name='x')]

# Run Bayesian optimization
res = gp_minimize(black_box_function, space, n_calls=20, random_state=0)
print(f"Best found solution: {res.x}")
Best found solution: [2.0006642926960443]
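For reference, the result object returned by gp_minimize also exposes res.fun, the best objective value found, alongside res.x; here the optimizer recovers the known minimum at x = 2 to within about 10⁻³.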

Intro


  • $\sigma - \nu = \gamma$

Hi

a = 2
b = 3
b - a