CoCalc provides the best real-time collaborative environment for Jupyter Notebooks, LaTeX documents, and SageMath, scalable from individual use to large groups and classes! Also, H100 GPUs starting at $2/hour.

**Image:** ubuntu2204

**Kernel:** Python 3 (system-wide)

## Section One: Random Numbers

The use of random numbers in simulations has become an essential tool in various fields such as engineering, physics, economics, and computer science. Random numbers help generate random events, allowing simulations to model complex scenarios that cannot be replicated in real life. In this essay, I will discuss the applications of random numbers in simulations and their benefits.

Random numbers are a key component of Monte Carlo simulations, a technique used to model complex scenarios and estimate probabilities in a wide range of fields. Monte Carlo simulations use random numbers to generate a large number of possible outcomes, which can be used to estimate the likelihood of different scenarios and analyze risk.

A coin flip is a good first look at randomness: the result of a toss is either heads or tails, and for a 'fair' coin the chances should be equal. For ease of programming we will assign 0 to heads and 1 to tails.

Much of the computing in Python is done with code that has already been written and collected into packages. These packages generally make programs more readable and provide powerful tools that don't need to be rewritten. So first we will import some useful packages.
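
The original import cell is not shown in this copy. A minimal sketch, assuming NumPy is one of the packages in question and that `toss()` is the coin-toss helper referred to below (the name comes from the text; its body here is an assumption):

```python
import numpy as np

rng = np.random.default_rng()  # NumPy's recommended random number generator

def toss():
    """Simulate one fair coin toss: 0 for heads, 1 for tails (the convention above)."""
    return int(rng.integers(0, 2))  # draws 0 or 1 with equal probability
```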

In the previous cell the toss() function was executed ten times with print statements and then 1,000 times.
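
That cell is not included in this copy of the notebook. A plausible sketch of it (with `toss()` redefined here so the snippet is self-contained; its body is an assumption):

```python
import numpy as np

rng = np.random.default_rng()

def toss():
    """One fair coin toss: 0 = heads, 1 = tails."""
    return int(rng.integers(0, 2))

# Ten individual tosses, printed one at a time.
for i in range(10):
    print(f"Toss {i + 1}: {toss()}")

# 1,000 tosses, tallying heads and tails.
results = [toss() for _ in range(1000)]
heads = results.count(0)
tails = results.count(1)
print(f"Heads: {heads}, Tails: {tails}")
```

With a fair coin the two tallies should land near 500 each, but they will differ from run to run.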

In the next example we will use the tossing of dice to explore the nature of random numbers. If you execute the previous cell, you will generally get different answers each time you run it, as the following examples will show. The Python code may look more complicated, but the only new item is the list, denoted by brackets [ ]. Instead of just adding up the number of each outcome, we can store the individual results in a list; that is explained in the computer code.
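
The dice cell itself is missing from this copy. A sketch of rolling one die and storing each result in a list (the variable names are my own choices):

```python
import numpy as np

rng = np.random.default_rng()

num_rolls = 60
rolls = []                                 # a Python list holding every individual result
for _ in range(num_rolls):
    rolls.append(int(rng.integers(1, 7)))  # a fair die: integers 1 through 6

# Count how often each face appeared; with 60 rolls we expect about 10 each.
for face in range(1, 7):
    print(f"Face {face}: {rolls.count(face)} times")
```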

Monte Carlo simulation is a computational technique that utilizes random numbers to simulate a wide range of processes and systems. The method derives its name from the renowned Monte Carlo Casino, known for its games of chance and unpredictability. The core idea behind Monte Carlo simulation is to repeatedly sample random values for uncertain variables within a model, allowing us to estimate the behavior and outcomes of the system.
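
To make the idea concrete, here is a small Monte Carlo estimate (not from the original notebook) of the probability that two fair dice sum to at least 10; the exact value is 6/36 ≈ 0.167:

```python
import numpy as np

rng = np.random.default_rng()

# Sample a large number of possible outcomes for the uncertain variables (the two dice).
n = 100_000
die1 = rng.integers(1, 7, size=n)
die2 = rng.integers(1, 7, size=n)

# The fraction of trials where the sum reaches 10 estimates the probability.
estimate = np.mean(die1 + die2 >= 10)
print(f"Estimated P(sum >= 10) = {estimate:.3f} (exact: {6/36:.3f})")
```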

In the next few examples we will look at the impact of using more sets of data in random number simulations. The first example uses 5 sets of 10 rolls: num_simulations = 5 and num_roll = 10, as defined in the previous cell.
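
The simulation cell is not reproduced in this copy. A sketch under the same parameters (num_simulations = 5, num_roll = 10; the plotting details are an assumption about what the original showed):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")          # non-interactive backend so the script runs headless
import matplotlib.pyplot as plt

rng = np.random.default_rng()

num_simulations = 5
num_roll = 10

# rolls[s, r, d] = result of die d on roll r of set s
rolls = rng.integers(1, 7, size=(num_simulations, num_roll, 2))
sums = rolls.sum(axis=2)       # sum of the two dice for each roll

# One panel per set, showing the sum roll by roll.
fig, axes = plt.subplots(num_simulations, 1, sharex=True, figsize=(6, 10))
for s, ax in enumerate(axes):
    ax.plot(range(1, num_roll + 1), sums[s], "o-")
    ax.set_ylabel(f"Set {s + 1}")
axes[-1].set_xlabel("Roll number")
fig.suptitle("Sum of two dice, roll by roll, for each set")
fig.savefig("dice_sets.png")
```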

There you see the results of the two dice: each roll within each set, with each of the five sets plotted separately. The only real information here is a visualization of the randomness of each roll.

Here we would expect a Gaussian or Normal curve: this will look better as we increase the number of sets.

Next we will increase the number of sets from 5 to 30.

While the frequency of occurrence for 30 sets looks closer to what we expect, we still don't see the flat histogram for the single die or the Normal curve for the sum.

Next we will increase the number of sets from 30 to 100; we still do 10 rolls per set.

Even at one hundred sets the histograms are not quite what we desire, so we will increase the number of sets to 1,000.
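
A sketch of that larger run (here 1,000 sets of 10 rolls, so 10,000 rolls in total; the parameter values are my reading of the text), comparing the empirical face frequencies of a single die against the expected 1/6:

```python
import numpy as np

rng = np.random.default_rng()

num_simulations = 1000
num_roll = 10

rolls = rng.integers(1, 7, size=(num_simulations, num_roll))
faces, counts = np.unique(rolls, return_counts=True)
freqs = counts / rolls.size    # empirical frequency of each face

for face, freq in zip(faces, freqs):
    print(f"Face {face}: {freq:.3f} (expected {1/6:.3f})")
```

With this many rolls the frequencies cluster tightly around 1/6, which is why the histograms finally start to look flat.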

In the next cell, we use the gaussian_kde function to create a smooth estimate of the distribution of the data.

gaussian_kde stands for Gaussian Kernel Density Estimation. It is used for non-parametric estimation of the probability density function (PDF) of a random variable. This is useful in statistics when you want to estimate the underlying distribution of a data set without assuming any particular underlying distribution.

The method uses a Gaussian (also known as normal) distribution as the kernel. The kernel acts as a smooth, localized weighting function over the data points. By summing these functions, it creates a smooth approximation of the frequency distribution. The bandwidth of the kernel (which controls the smoothness of the resulting estimate) can be adjusted to fit the data appropriately.

gaussian_kde is particularly useful in data science and statistics for visualizing and analyzing data where you don't have a clear expectation of what the distribution should look like, or when the data doesn't fit traditional parametric distributions well.
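
The cell that produces the smoothed curve is not shown in this copy. A sketch using scipy.stats.gaussian_kde on simulated sums of two dice (the sample size and grid are my own choices):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng()

# Sums of two dice over many rolls.
sums = rng.integers(1, 7, size=10_000) + rng.integers(1, 7, size=10_000)

kde = gaussian_kde(sums)               # Gaussian kernel density estimate of the sums
grid = np.linspace(2, 12, 200)
density = kde(grid)                    # smooth estimate of the PDF evaluated on the grid

peak = grid[np.argmax(density)]
print(f"Estimated density peaks near {peak:.1f} (the true mode of the sum is 7)")
```

The bandwidth defaults to Scott's rule; passing `bw_method` to `gaussian_kde` adjusts the smoothness of the curve.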

The single-die frequencies are now much closer to an equal probability of obtaining 1-6 on each die, and the sum is much closer to a Normal curve. So even in this simple model, a large number of sets gives a result that is closer to the analytic one.