Algorithms and Convergence
In simple terms, an algorithm can be defined as a finite sequence of unambiguous steps that must be followed in order to solve a problem or approximate its solution. The stated procedure should be translatable into computer code through a programming language.
[Computing Time](#Computing-Time)
There are several ways to classify algorithms; however, the classifications that refer to numerical convergence and accuracy are the most interesting ones for us. Among them we have:
Types of Algorithms
Linear algorithms
These are algorithms where the error scales linearly with the number of performed iterations. This definition usually coincides with the scaling of the computing time.
Example 1:
Consider the recursive equation:

$$x_n = 2\,x_{n-1} - x_{n-2},$$

the solution to this equation for $n\geq 0$ is given by

$$x_n = c_1 + c_2\, n$$

for any constants $c_1$ and $c_2$.

Setting the initial conditions as $x_0 = 1$ and $x_1 = \tfrac{1}{3}$ (implying $c_1 = 1$ and $c_2 = -\tfrac{2}{3}$), show that float32 arithmetic leads to an error that scales linearly with the number of iterations $n$.
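A minimal sketch of this experiment (assuming the recursion and initial conditions stated above) can be written with numpy's float32 type to emulate single-precision arithmetic:

```python
import numpy as np

# Number of iterations to perform
N = 30

# Exact solution in double precision: x_n = 1 - (2/3) n
n = np.arange(N)
x_exact = 1.0 - (2.0/3.0)*n

# Same recursion evaluated in single precision (float32)
x32 = np.zeros(N, dtype=np.float32)
x32[0] = np.float32(1.0)
x32[1] = np.float32(1.0/3.0)
for i in range(2, N):
    x32[i] = np.float32(2.0)*x32[i-1] - x32[i-2]

# The absolute error should grow roughly linearly with n
error = np.abs(x32 - x_exact)
for i in range(0, N, 5):
    print(f"n = {i:3d}   x32 = {x32[i]: .6f}   error = {error[i]:.2e}")
```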
Example 2:
A linear algorithm may also refer to the computing time required for performing the iterations. This example evaluates the time required for summing random numbers as a function of how many numbers are going to be added. A comparison with Python's built-in sum function is also performed.
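A possible sketch of such a timing experiment is shown below; the explicit loop and the comparison against the built-in sum are just one way to set it up, and numpy is used only to generate the random numbers:

```python
import time
import numpy as np

def manual_sum(values):
    """Sum a sequence with an explicit loop (one addition per element)."""
    total = 0.0
    for v in values:
        total += v
    return total

# Time both approaches for increasing problem sizes
for N in [10**4, 10**5, 10**6]:
    values = np.random.random(N)

    start = time.time()
    manual_sum(values)
    t_loop = time.time() - start

    start = time.time()
    sum(values)
    t_builtin = time.time() - start

    print(f"N = {N:8d}   loop: {t_loop:.4f} s   built-in sum: {t_builtin:.4f} s")
```

Both versions perform one addition per element, so their times should grow linearly with the number of elements.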
Exponential algorithms
In this type of algorithm the error scales exponentially with the number of iterations. These algorithms are usually unstable: a small perturbation leads to an exponential growth of the error after a few iterations.
Example 3:
Consider the recursive equation:

$$x_n = \frac{10}{3}\,x_{n-1} - x_{n-2},$$

the solution to this equation for $n\geq 0$ is given by

$$x_n = c_1\left(\frac{1}{3}\right)^n + c_2\, 3^n$$

for any constants $c_1$ and $c_2$.

Setting the initial conditions as $x_0 = 1$ and $x_1 = \tfrac{1}{3}$ (implying $c_1 = 1$ and $c_2 = 0$), show that float32 arithmetic leads to an error that scales exponentially with the number of iterations $n$.
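The same kind of sketch as in Example 1 can be adapted to this recursion (again assuming the equation and initial conditions stated above); the rounding error introduced when storing $1/3$ in single precision is amplified roughly as $3^n$:

```python
import numpy as np

# Number of iterations to perform
N = 30

# Exact solution in double precision: x_n = (1/3)^n
n = np.arange(N)
x_exact = (1.0/3.0)**n

# Same recursion evaluated in single precision (float32)
x32 = np.zeros(N, dtype=np.float32)
x32[0] = np.float32(1.0)
x32[1] = np.float32(1.0/3.0)
for i in range(2, N):
    x32[i] = np.float32(10.0/3.0)*x32[i-1] - x32[i-2]

# The error grows roughly as 3^n times the initial rounding error
error = np.abs(x32 - x_exact)
for i in range(0, N, 5):
    print(f"n = {i:3d}   x32 = {x32[i]: .6e}   error = {error[i]:.2e}")
```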
Stable algorithms
These algorithms exhibit only a small change in the result when a small perturbation is introduced in the initial conditions. Linear algorithms can be catalogued within this category.
Unstable algorithms
Unlike stable algorithms, unstable algorithms produce a large change in the result after a few iterations when a small perturbation is introduced in the initial conditions. Exponential algorithms fit in this category.
At first glance, unstable algorithms may seem undesirable; however, there are some useful applications for them. Among the more interesting ones is the generation of random numbers in a computer (see [Pseudorandom number generator](https://en.wikipedia.org/wiki/Pseudorandom_number_generator)).
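As an illustration only (this is not the generator actually used by Python or numpy), a minimal linear congruential generator shows how a fully deterministic recursion that is very sensitive to its seed can produce sequences that look random:

```python
# Minimal linear congruential generator sketch. The constants a, c, m are
# illustrative choices (the ones popularized by Numerical Recipes).
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Generate an endless stream of pseudorandom floats in [0, 1)."""
    x = seed
    while True:
        x = (a*x + c) % m
        yield x/m

# Two nearby seeds diverge quickly, which is what makes the output look
# random even though the recursion is fully deterministic.
gen1 = lcg(seed=1)
gen2 = lcg(seed=2)
for i in range(5):
    print(f"{next(gen1):.6f}   {next(gen2):.6f}")
```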
Computing Time
One of the most important skills a programmer must develop is the ability to estimate the computing time of a given process, as well as the memory it consumes. When computational resources are limited (as always happens!), estimating beforehand the computing time of an algorithm is certainly important.
Example 4:
An interesting example that illustrates very well the issue of computing time is the N-body problem.
Consider a set of $N$ masses $m_1, m_2, \ldots, m_N$ at positions $\mathbf{r}_1, \ldots, \mathbf{r}_N$. The total gravitational potential energy of the $i$-th particle due to the other ones is given by

$$U_i = -\sum_{j\neq i}\frac{G\, m_i m_j}{|\mathbf{r}_i - \mathbf{r}_j|}.$$

Now, if we want to calculate the total energy of the system, it is necessary to add each contribution, i.e.

$$U = \frac{1}{2}\sum_{i=1}^{N} U_i = -\frac{1}{2}\sum_{i=1}^{N}\sum_{j\neq i}\frac{G\, m_i m_j}{|\mathbf{r}_i - \mathbf{r}_j|},$$

where the factor $1/2$ avoids counting each pair twice.

This implies that if we want to calculate $U$ in a computer, $N(N-1)$ iterations are required, so the computing time scales as $\mathcal{O}(N^2)$. An estimation of the total time is then reached by first measuring the time required for one iteration, i.e., the energy of the $i$-th particle due to the $j$-th particle, and then multiplying the result by $N(N-1)$.
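A rough sketch of this estimation procedure could look as follows; the unit system ($G = 1$) and the random configuration are only illustrative choices:

```python
import time
import numpy as np

# Gravitational constant set to 1 (natural units) just for illustration
G = 1.0

def total_energy(masses, positions):
    """Total gravitational potential energy via the direct O(N^2) double sum."""
    N = len(masses)
    U = 0.0
    for i in range(N):
        for j in range(N):
            if i != j:
                r_ij = np.linalg.norm(positions[i] - positions[j])
                U -= G*masses[i]*masses[j]/r_ij
    return 0.5*U   # each pair was counted twice

# Measure the time per pairwise interaction with a small system...
N_small = 100
masses = np.ones(N_small)
positions = np.random.random((N_small, 3))

start = time.time()
total_energy(masses, positions)
t_small = time.time() - start
t_pair = t_small/(N_small*(N_small - 1))

# ...and extrapolate to a larger system assuming O(N^2) scaling
N_large = 10**4
print(f"time per pair interaction : {t_pair:.2e} s")
print(f"estimated time for N={N_large}: {t_pair*N_large*(N_large - 1):.1f} s")
```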
The large computing time of this type of algorithm (when $N$ becomes large enough) has propelled the development of more efficient algorithms, including tree codes, where the computing time is reduced to $\mathcal{O}(N\log N)$.
Convergence
The last concept related to algorithms is convergence. This refers to how fast an algorithm can reach a desired result with some given precision. Some rigorous techniques can be used to quantify the degree of convergence; however, it is often more useful in practice to compare the convergence of an algorithm against another, already known one.
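As a generic illustration of such a comparison (these are not the methods of the first task), one can track the error of two different iterative schemes that converge to the same quantity, here $\sqrt{2}$:

```python
import numpy as np

target = np.sqrt(2.0)

# Scheme A: bisection on f(x) = x^2 - 2 over [1, 2]  (slow, linear convergence)
a, b = 1.0, 2.0
errors_A = []
for i in range(20):
    c = 0.5*(a + b)
    if c*c - 2.0 > 0:
        b = c
    else:
        a = c
    errors_A.append(abs(c - target))

# Scheme B: Newton's iteration x -> (x + 2/x)/2  (fast, quadratic convergence)
x = 2.0
errors_B = []
for i in range(20):
    x = 0.5*(x + 2.0/x)
    errors_B.append(abs(x - target))

for i in range(0, 20, 4):
    print(f"iter {i:2d}   bisection error: {errors_A[i]:.2e}   Newton error: {errors_B[i]:.2e}")
```

Plotting both error curves against the iteration number makes the difference in convergence immediately visible.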
In an IPython notebook, and using the codes of the first task, make a figure comparing the values obtained with the two methods used there as a function of the number of iterations performed. Which method reaches convergence faster?