Using the Relaxation Method to Solve Laplace’s Equation
Numerical solutions to many physical problems can be found using finite difference methods. In this approach, a regular grid is imposed on a region. At each point on the grid, the differential equation governing the system is approximated. The resulting equations are then solved iteratively. The relaxation method is a relatively simple example. The problem is to find the potential V in empty regions (ρ = 0) when V is known on the boundaries. In other words, we want to find solutions to Laplace's equation (we'll concentrate on two-dimensional problems),

$$\nabla^2 V = \frac{\partial^2 V}{\partial x^2} + \frac{\partial^2 V}{\partial y^2} = 0.$$
The value of V at a point (x, y) is the average of the values of V on a circle centered on that point,

$$V(x,y) = \frac{1}{2\pi R}\oint_{\text{circle of radius }R} V \, d\ell .$$
This suggests that if V is known for points on a grid, the value of V at each point should be the average of V for its nearest neighbors. A solution can be found by cycling through the points on the grid and assigning to each one the average of its neighbors. On each subsequent pass, the updated values are used. After a few iterations, the values change less with each subsequent pass. Eventually, the changes are negligible and a numerical solution has been reached.
First, we need to be able to keep track of the potential at points on a grid. In Python, V[3,2] refers to an element in a matrix. The first index is for the row, which we'll associate with the y direction, and the second index is for the column, which we'll associate with the x direction. The example below shows the elements of a 5x6 matrix. Notice that the lowest value of each index is zero, not one. Each cell in the matrix V will represent the value of the potential at a point in space. We will divide up the region so that the distance d between the points is the same in both directions. The shading in the diagram will be explained later.
We will use the variables NX and NY for the numbers of cells in the x and y directions. If the distance between the first cell and the last cell in the x direction is Nd, the matrix must have (N + 1) columns. In the figure above, the distance from the first column to the last column is 5d, so there must be NX=6 columns.
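As a concrete starting point, here is a minimal sketch of the grid setup in Python. The names V, NX, NY, and d come from the discussion above; the use of NumPy and numpy.zeros is an assumption about the implementation.

```python
import numpy as np

NX = 6      # number of columns (x direction), as in the figure
NY = 5      # number of rows (y direction)
d = 1.0     # grid spacing, the same in both directions

# V[i, j] holds the potential at row i (y) and column j (x).
# All cells start at zero; boundary values will be filled in later.
V = np.zeros((NY, NX))

print(V.shape)    # (5, 6)
print(V[3, 2])    # element in row 3, column 2 (indices start at 0)
```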
Second, we need to be able to calculate (approximate) second derivatives with respect to x and y. The first derivative with respect to x halfway between the locations of V[i,j] and V[i,j+1] can be approximated as

$$\left.\frac{\partial V}{\partial x}\right|_{i,\,j+1/2} \approx \frac{V[i,j+1]-V[i,j]}{d}.$$
The second derivative with respect to x at the location of V[i,j] can be approximated as

$$\left.\frac{\partial^2 V}{\partial x^2}\right|_{i,j} \approx \frac{V[i,j+1]-2\,V[i,j]+V[i,j-1]}{d^2}.$$
Similarly, the second derivative with respect to y at the location of V[i,j] is approximately

$$\left.\frac{\partial^2 V}{\partial y^2}\right|_{i,j} \approx \frac{V[i+1,j]-2\,V[i,j]+V[i-1,j]}{d^2}.$$
The two-dimensional version of Laplace's equation will be approximately satisfied if

$$\frac{V[i,j+1]-2\,V[i,j]+V[i,j-1]}{d^2} + \frac{V[i+1,j]-2\,V[i,j]+V[i-1,j]}{d^2} = 0,$$
so

$$V[i,j] = \tfrac{1}{4}\left(V[i,j+1]+V[i,j-1]+V[i+1,j]+V[i-1,j]\right).$$
In other words, the value of the potential should be approximately equal to the average of the values in the four closest positions.
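In code, the update of a single interior cell is just this average. Here is a self-contained sketch; the cell indices i, j are chosen arbitrarily for illustration.

```python
import numpy as np

V = np.zeros((5, 6))   # grid of potentials, as above
i, j = 2, 3            # some interior (non-boundary) cell

# Replace V[i, j] with the average of its four nearest neighbors.
V[i, j] = 0.25 * (V[i, j+1] + V[i, j-1] + V[i+1, j] + V[i-1, j])
```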
Finally, we need to know how to apply the condition above iteratively to find a solution. The potential is set for the boundary cells (the ones shaded gray in the figure above) and all of the other cells are initially set to zero. The values are updated iteratively to reach a solution. One iteration consists of updating the values of each of the non-boundary cells using the condition above. The latest available value of each V on the right-hand side of the equation is used. The "difference" between two iterations (diff) is defined as the maximum over all cells of the absolute value of the change in the potential. Iterations are repeated until the difference between the last two iterations falls below a specified value (maxdiff).
Note that the darker gray cells in the corners of the figure are never used in the calculation. However, the value entered in those cells may change the appearance of a contour plot of the results.
The example below uses the relaxation method to solve Laplace’s equation with the following boundary conditions: A spacing of d = 1 cm is used, so the matrix is the same size as the one shown in the figure above.
For each iteration, only the interior (non-boundary) cells are updated. The new value of V[i,j] is temporarily stored in the variable newVij so that it can be compared with the old value. If the absolute difference between them (lastdiff) is bigger than any difference found earlier in the current iteration, it becomes the new value of the difference (diff).
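A minimal sketch of the whole calculation follows. The boundary values used here (V = 10 along the last row and V = 0 along the other edges) are purely illustrative assumptions, not the boundary conditions of the original example; the names NX, NY, d, newVij, lastdiff, diff, and maxdiff follow the description above, and the tolerance maxdiff = 1e-4 is an arbitrary choice.

```python
import numpy as np

NX, NY = 6, 5          # grid size from the figure
d = 1.0                # spacing of 1 cm

V = np.zeros((NY, NX))
V[NY - 1, :] = 10.0    # assumed boundary condition: V = 10 on the last row
                       # (the other three edges stay at V = 0)

maxdiff = 1e-4         # stop when the largest change in one pass is below this
diff = 2 * maxdiff
iterations = 0

while diff > maxdiff:
    diff = 0.0
    # Update only the interior (non-boundary) cells.
    for i in range(1, NY - 1):
        for j in range(1, NX - 1):
            # Average of the four nearest neighbors, using the latest values.
            newVij = 0.25 * (V[i, j+1] + V[i, j-1] + V[i+1, j] + V[i-1, j])
            lastdiff = abs(newVij - V[i, j])
            if lastdiff > diff:
                diff = lastdiff
            V[i, j] = newVij
    iterations += 1

print("converged after", iterations, "iterations")
print(V)
```

Because each new value is written back into V immediately, later cells in the same pass already use it, which is the "latest available value" behavior described above.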
You can use the contour function (or contourf) from the pylab library to display the results of the calculations.
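For instance, continuing from the sketch above (the axis labels and the use of pylab's xlabel and ylabel are just one possible choice):

```python
from pylab import contourf, colorbar, xlabel, ylabel, show

x = [j * d for j in range(NX)]   # x coordinates of the columns, in cm
y = [i * d for i in range(NY)]   # y coordinates of the rows, in cm

contourf(x, y, V)                # filled contour plot of the potential
colorbar()
xlabel("x (cm)")
ylabel("y (cm)")
show()
```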
Note: This example only used a few points in each direction for the grid, so that the array could be seen easily when printed. For an actual calculation, more points should be used so that the contour plot is smoother!
The rate of convergence to a solution can often be improved by "overrelaxation." The condition above can be rewritten as

$$V[i,j] = V[i,j] + \frac{w}{4}\left(V[i,j+1]+V[i,j-1]+V[i+1,j]+V[i-1,j]-4\,V[i,j]\right),$$
where w is a nonzero constant. Since the two conditions are equivalent, the solutions reached using them are the same. When this condition is used, the current value (on the right-hand side) of the potential for a cell is used in the calculation of the new value (on the left-hand side). A good choice of w can produce much quicker convergence (fewer iterations are required). For most problems, the optimum value of w cannot be known in advance, but w = 1.5 works well in many cases. If we set w = 1, there is no overrelaxation.
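Here is a sketch of the overrelaxed update, written as a small helper. The function name overrelaxed_update is not from the original; with w = 1 it reduces to the plain neighbor average used earlier.

```python
def overrelaxed_update(V, i, j, w=1.5):
    """Return the overrelaxed new value for interior cell (i, j) of V."""
    return V[i, j] + (w / 4.0) * (V[i, j+1] + V[i, j-1]
                                  + V[i+1, j] + V[i-1, j] - 4.0 * V[i, j])
```

In the loop sketched earlier, this would replace the plain average: newVij = overrelaxed_update(V, i, j).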