Path: blob/main/notebooks/published/bisection_method/bisection_method.ipynb
The Bisection Method for Root Finding
Introduction
The bisection method is one of the simplest and most robust numerical algorithms for finding roots of continuous functions. Given a continuous function $f$ and an interval $[a, b]$ where $f(a)$ and $f(b)$ have opposite signs, the Intermediate Value Theorem guarantees the existence of at least one root $c \in (a, b)$ such that $f(c) = 0$.
Mathematical Foundation
The Intermediate Value Theorem
If $f$ is continuous on $[a, b]$ and $f(a) \cdot f(b) < 0$, then there exists at least one $c \in (a, b)$ such that:

$$f(c) = 0$$
Algorithm Description
The bisection method iteratively halves the interval by computing the midpoint:

$$c = \frac{a + b}{2}$$

At each iteration, we evaluate $f(c)$ and determine which subinterval contains the root:
If $f(a) \cdot f(c) < 0$, the root lies in $[a, c]$, so we set $b = c$
If $f(c) \cdot f(b) < 0$, the root lies in $[c, b]$, so we set $a = c$
If $f(c) = 0$, we have found the exact root
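The three cases above can be sketched as a simple loop. This is a minimal illustration, not the notebook's own code; the function and variable names are my own:

```python
def bisect(f, a, b, tol=1e-10, max_iter=100):
    """Minimal bisection sketch; assumes f is continuous and f(a), f(b) have opposite signs."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a + b) / 2.0              # midpoint of the current bracket
        fc = f(c)
        if fc == 0 or (b - a) / 2.0 < tol:
            return c                   # exact root, or bracket small enough
        if f(a) * fc < 0:
            b = c                      # root lies in [a, c]
        else:
            a = c                      # root lies in [c, b]
    return (a + b) / 2.0
```

Each pass keeps the sign change inside the bracket, so the loop can never lose the root.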
Convergence Analysis
After $n$ iterations, the interval width is:

$$b_n - a_n = \frac{b - a}{2^n}$$

The error bound after $n$ iterations is:

$$|c_n - c^*| \le \frac{b - a}{2^{n+1}}$$

To achieve a tolerance $\epsilon$, we need:

$$n \ge \log_2\left(\frac{b - a}{\epsilon}\right) - 1$$

The method has linear convergence with a convergence rate of $\frac{1}{2}$.
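The iteration bound can be evaluated directly. A small sketch (the helper name is mine, not from the notebook):

```python
import math

def required_iterations(a, b, eps):
    # Smallest n with (b - a) / 2**(n + 1) <= eps, i.e. n >= log2((b - a) / eps) - 1
    return max(0, math.ceil(math.log2((b - a) / eps) - 1))

# e.g. a bracketing interval of width 1 and tolerance 1e-6:
print(required_iterations(0.0, 1.0, 1e-6))   # 19, since 1 / 2**20 <= 1e-6
```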
Advantages and Limitations
Advantages:
Guaranteed convergence for continuous functions
Simple to implement
Robust and numerically stable
Limitations:
Relatively slow convergence compared to Newton-Raphson
Requires initial bracketing interval with sign change
Cannot find roots where $f$ touches but doesn't cross the x-axis (e.g. the double root of $f(x) = x^2$ at $x = 0$)
Implementation
We implement the bisection method with detailed tracking of the iteration history for visualization purposes.
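The notebook's code cells are not preserved in this export; one plausible sketch of such an implementation, recording the bracket and midpoint at every step for later plotting, is:

```python
def bisect_with_history(f, a, b, tol=1e-10, max_iter=100):
    """Bisection that records (a, b, c, f(c)) per iteration.

    A sketch under assumed names; the notebook's actual implementation is not shown.
    """
    history = []
    fa = f(a)
    for _ in range(max_iter):
        c = (a + b) / 2.0
        fc = f(c)
        history.append((a, b, c, fc))       # snapshot for visualization
        if fc == 0 or (b - a) / 2.0 < tol:
            break
        if fa * fc < 0:
            b = c                           # root in [a, c]
        else:
            a, fa = c, fc                   # root in [c, b]; reuse f(c) as new f(a)
    return c, history
```

Caching `fa` means only one new function evaluation per iteration, which matters when `f` is expensive.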
Example: Finding the Root of
We will find the root of in the interval .
Note that:
Therefore, there exists a root in .
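The example function and interval were lost in this export, so the sketch below uses a hypothetical stand-in, $f(x) = x^3 - x - 2$ on $[1, 2]$, purely to illustrate the sign check:

```python
def f(x):
    return x**3 - x - 2   # hypothetical stand-in; the notebook's actual f is not preserved

a, b = 1.0, 2.0
print(f(a), f(b))          # -2.0 4.0: opposite signs, so a root lies in (1, 2)
assert f(a) * f(b) < 0
```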
Iteration History
Let's examine how the method converges to the root.
Visualization
We create a comprehensive visualization showing:
The function and root
The bisection process over iterations
Convergence of error bound
Convergence of $|f(c_n)|$ toward zero
Additional Example: Transcendental Equation
Let's find the root of the transcendental equation:
This equation arises in fixed-point theory and has a unique root in .
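The equation itself was not preserved in this export; a classic fixed-point example of this kind is $\cos x = x$, which has a unique root in $[0, 1]$. The sketch below uses it as a hypothetical stand-in:

```python
import math

# Assumed example (the original equation is not preserved): cos(x) = x,
# i.e. f(x) = cos(x) - x, which has a unique root in [0, 1].
f = lambda x: math.cos(x) - x
a, b = 0.0, 1.0
while (b - a) / 2.0 > 1e-12:
    c = (a + b) / 2.0
    if f(a) * f(c) <= 0:
        b = c                 # root in [a, c]
    else:
        a = c                 # root in [c, b]
print((a + b) / 2.0)          # ~0.7390851 (the Dottie number)
```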
Comparison: Required Iterations for Different Tolerances
We verify the theoretical formula for the number of iterations required.
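A sketch of such a verification, using an assumed test function $f(x) = x^2 - 2$ on $[0, 2]$ (the notebook's actual setup is not preserved):

```python
import math

def bisect_count(f, a, b, eps):
    """Count iterations until the half-interval width drops below eps."""
    n = 0
    while (b - a) / 2.0 > eps:
        c = (a + b) / 2.0
        if f(a) * f(c) <= 0:
            b = c
        else:
            a = c
        n += 1
    return n

f = lambda x: x * x - 2          # assumed test function with a root in [0, 2]
for eps in (1e-2, 1e-4, 1e-6, 1e-8):
    predicted = math.ceil(math.log2((2 - 0) / eps) - 1)   # n >= log2((b - a) / eps) - 1
    actual = bisect_count(f, 0.0, 2.0, eps)
    print(f"eps={eps:.0e}  predicted={predicted}  actual={actual}")
```

For each tolerance the measured count should equal the ceiling of the theoretical bound, since the interval width is halved exactly once per iteration.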
Conclusion
The bisection method is a fundamental root-finding algorithm that offers:
Guaranteed convergence for continuous functions with a sign change
Predictable performance - the number of iterations is known a priori
Numerical stability - immune to the divergence and overshoot issues that can plague derivative-based methods
While slower than Newton-Raphson (linear vs quadratic convergence), the bisection method remains valuable for:
Providing reliable initial estimates for faster methods
Solving problems where derivatives are unavailable or expensive
Educational purposes as an introduction to numerical analysis
The theoretical error bound matches our computational results, demonstrating the method's well-understood convergence behavior.