In this chapter, we’ll focus on methods for finding roots (zeros) of continuous functions. First up: the bisection method.
1. Initial Condition
Start with x_L and x_R such that:
f(x_L) \times f(x_R) < 0
Why?: We want two points that enclose a zero. Opposite signs guarantee one exists between them, since a continuous function must cross zero on its way from negative to positive (the Intermediate Value Theorem).
2. Calculate midpoint
x_M = \frac{x_L + x_R}{2} \qquad \text{error} = x_R - x_L
3. Evaluate function at midpoint and shrink window
Stopping Criteria: Stop here and return x_M if:
- f(x_M)=0, or
- |\text{error}| < \epsilon
If f(x_L) f(x_M) < 0, the root is in the left half: update x_R \leftarrow x_M. Otherwise, the root is in the right half: update x_L \leftarrow x_M.
Return to step 2.
Q: Solve with the bisection method.
f(x) = 3x^2 -4x + 1
Starting Points: x_0 = 0.5, x_1 = 1.8
A:
f(x_0) \times f(x_1) = (-0.25)(3.52) < 0
(Note that f(x) = (3x - 1)(x - 1), so the exact roots are x = 1/3 and x = 1; only x = 1 lies inside our bracket.)
Iteration 1:
x_2 = \frac{x_1 + x_0}{2} = 1.15
f(1.15) = 0.3675
Not zero or within the acceptable range, so we’ll shrink: f(x_2) > 0 while f(x_0) < 0, so the root is in the left half.
Iteration 2:
x_3 = \frac{x_0 + x_2}{2} = \frac{0.5 + 1.15}{2} = 0.825
f(0.825) = -0.258125
Shrink again: f(x_3) < 0 and f(x_2) > 0, so the root is between them.
Iteration 3:
x_4 = \frac{x_3 + x_2}{2} = \frac{0.825 + 1.15}{2} = 0.9875
f(x_4) = f(0.9875) = - 0.02453125
Shrink again: f(x_4) < 0 and f(x_2) > 0, so the root is now bracketed by x_4 = 0.9875 and x_2 = 1.15.
Iteration 4:
x_5 = \frac{x_4 + x_2}{2} = \frac{0.9875 + 1.15}{2} = 1.06875
f(x_5) = f(1.06875) = 0.1516796875
We jumped back over the zero at x = 1! The bracket still contains it, but we keep bouncing from one side to the other.
Note — The speed is good at the start (halving repeatedly), but then it takes ages to converge.
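To quantify that: the error bound halves every iteration, so hitting a tolerance \epsilon from a starting bracket of width x_R - x_L takes

\text{error}_n = \frac{x_R - x_L}{2^n} < \epsilon \quad\Longrightarrow\quad n > \log_2\!\left(\frac{x_R - x_L}{\epsilon}\right)

For our example’s bracket of width 1.3 and \epsilon = 10^{-6}, that’s \log_2(1.3 \times 10^6) \approx 20.3, so 21 iterations: roughly one correct bit per iteration, no matter how well-behaved f is.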
function Bisection(f : float -> float, a : float, b : float, maxIter : int, eps : float)
    fa := call f(a)
    fb := call f(b)
    if fa * fb >= 0 then
        // Inadequate values for a and b.
        return -1.0
    end if
    error := b - a
    for it <- 1 to maxIter
        error := error / 2
        c := a + error
        fc := call f(c)
        if |error| < eps or fc = 0 then
            // Algorithm has converged after #{it} iterations!
            return c
        end if
        if fa * fc < 0 then
            b := c
            fb := fc
        else
            a := c
            fa := fc
        end if
    end for
    // Max iterations reached without convergence...
    return c
end function
Notice in the code we calculate the midpoint a little oddly. Mathematically, we’re just doing:
x_M = \frac{x_R - x_L}{2} + x_L
This modified form keeps the error term as the numerator, which lets us kill two birds with one stone: each loop iteration halves the error bound and computes the new midpoint in a single update.
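To make that concrete, here’s the pseudocode translated into a minimal Python sketch (the function name and default arguments are my own choices), using the same error-halving midpoint update:

```python
def bisection(f, a, b, max_iter=100, eps=1e-6):
    """Find a root of f in [a, b]; assumes f(a) and f(b) have opposite signs."""
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    error = b - a
    for it in range(1, max_iter + 1):
        error /= 2            # halve the error bound...
        c = a + error         # ...and get the new midpoint in the same step
        fc = f(c)
        if abs(error) < eps or fc == 0:
            return c          # converged after `it` iterations
        if fa * fc < 0:       # root is in the left half
            b, fb = c, fc
        else:                 # root is in the right half
            a, fa = c, fc
    return c                  # max iterations reached without convergence

# Reproduces the worked example: f(x) = 3x^2 - 4x + 1 on [0.5, 1.8]
print(bisection(lambda x: 3*x**2 - 4*x + 1, 0.5, 1.8))  # ~1.0
```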
Next up: Newton’s method.
1. Initialization
Start with an initial value x_0, the function f, and its derivative f'.
2. Evaluate f and f' at the current point
Calculate f(x_i) and f'(x_i) (on the first pass, i = 0).
3. Compute the next iteration value
\boxed{ x_{i+1} = x_i - \frac{f(x_{i})}{f'(x_{i})} }
4. Check if complete
Stopping Criteria: Stop here and return x_{i+1} if |f(x_i) / f'(x_i)| < \epsilon (that quotient is exactly the step size |x_{i+1} - x_i| we just took), otherwise:
Return to step 2.
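Where does the update rule in step 3 come from? Each step replaces f with its tangent line at x_i and solves for where that line crosses zero:

0 = f(x_i) + f'(x_i)(x_{i+1} - x_i) \quad\Longrightarrow\quad x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)}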
Q: Solve with Newton’s method
f(x) = 3x^2 - 4x + 1
x_0 = 1.8
A:
f'(x) = 6x - 4
f(x_0) = 3.52\\ f'(x_0) = 6.8
Iteration 1:
x_1 = 1.8 - \frac{3.52}{6.8} = 1.282352941
f(x_1) = 0.8038754325\\ f'(x_1) = 3.694117647
Iteration 2:
x_2 = 1.282352941 - \frac{0.8038754325}{3.694117647} = 1.06474335
f(x_2) = 0.1420618031\\ f'(x_2) = 2.388460097
Iteration 3:
x_3 = 1.06474335 - \frac{0.1420618031}{2.388460097} = 1.00526494
f(x_3) = 0.0106130388\\ f'(x_3) = 2.03158964
Iteration 4:
x_4 = 1.00526494 - \frac{0.0106130388}{2.03158964} = 1.000040933
f(x_4) = 0.000081871\\
Note — This method is a lot faster than bisection, but getting the derivative isn’t trivial, especially with real-world data.
- The next method (the secant method) will solve this by approximating the derivative
- Future chapters will give us ways to make functions that describe given data.
function Newton(f : float -> float, derF : float -> float, x : float, maxIter : int, eps : float, delta : float)
    fx := call f(x)
    for it <- 1 to maxIter
        fd := call derF(x)
        if |fd| < delta then
            // Small slope!
            return x
        end if
        d := fx / fd
        x := x - d
        fx := call f(x)
        if |d| < eps then
            // Algorithm has converged after #{it} iterations!
            return x
        end if
    end for
    // Max iterations reached without convergence...
    return x
end function
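Again, a minimal Python translation of the pseudocode (the naming is mine):

```python
def newton(f, der_f, x, max_iter=100, eps=1e-6, delta=1e-12):
    """Newton's method; delta guards against dividing by a near-zero slope."""
    fx = f(x)
    for it in range(1, max_iter + 1):
        fd = der_f(x)
        if abs(fd) < delta:
            return x          # slope too small to divide by safely
        d = fx / fd           # the step f(x_i) / f'(x_i)
        x -= d
        fx = f(x)
        if abs(d) < eps:
            return x          # converged after `it` iterations
    return x                  # max iterations reached without convergence

# Reproduces the worked example: x_0 = 1.8 heads for the root at x = 1
print(newton(lambda x: 3*x**2 - 4*x + 1, lambda x: 6*x - 4, 1.8))
```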
The secant method works like Newton’s, but you use this update rule to compute the next iteration value:
\boxed{ x_i = x_{i-1} - \frac{ (x_{i-1} - x_{i-2}) f(x_{i-1}) }{ f(x_{i-1}) - f(x_{i-2}) } }
Note — We approximate the derivative using the two previous points, which also means we need two initial guesses instead of one.
The error term is \Delta, which is just \text{new point} - \text{old point}
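To see how this relates to Newton’s method, swap the true derivative in Newton’s update for the finite-difference slope through the two previous points:

f'(x_{i-1}) \approx \frac{f(x_{i-1}) - f(x_{i-2})}{x_{i-1} - x_{i-2}} \quad\Longrightarrow\quad x_i = x_{i-1} - \frac{(x_{i-1} - x_{i-2}) \, f(x_{i-1})}{f(x_{i-1}) - f(x_{i-2})}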
Q: Apply the Secant method to the following.
f(x) = 3x^2 - 4x + 1 \\ x_0 = 2 \quad x_1 = 1.8
A:
f(x_0) = 5 \quad f(x_1) = 3.52
It 1:
\begin{aligned} x_2 &= x_1 - \frac{ (x_1 - x_0) f(x_1) }{ f(x_1) - f(x_0) } \\ &= 1.8 - \frac{ (1.8 - 2) (3.52) }{ 3.52 - 5 } \\ &= 1.\overline{324} \end{aligned} \\~\\ f(x_2) = 3(1.\overline{324})^2 - 4(1.\overline{324}) + 1 = 0.96420745
It 2:
\begin{aligned} x_3 &= x_2 - \frac{ (x_2 - x_1) f(x_2) }{ f(x_2) - f(x_1) } \\ &= 1.\overline{324} - \frac{ (1.\overline{324} - 1.8) (0.96420745) }{ 0.96420745 - 3.52 } \\ &\approx 1.144869215291 \end{aligned} \\~\\ f(x_3) \approx 0.352699699201
It 3:
\begin{aligned} x_4 &= x_3 - \frac{ (x_3 - x_2) f(x_3) }{ f(x_3) - f(x_2) } \\ &= 1.144869215291 - \frac{ (1.144869215291 - 1.\overline{324}) (0.352699699201) }{ 0.352699699201 - 0.96420745 } \\ &\approx 1.0413647824838 \end{aligned} \\~\\ f(x_4) \approx 0.087862700657
Note — If you aren’t me, you can just ignore this. (It’s a calculator shortcut for cranking out secant iterations.)
f(x) := 3x^2 - 4x + 1
i(x, y) := x - ( x - y )( f(x) )/( f(x) - f(y) )
s(x, y) := [ i(x, y) , f(i(x, y)) ]

\begin{aligned} \textbf{It 1: }& s(1.8 &, 2) \\ \textbf{It 2: }& s(i(1.8, 2) &, 1.8) \\ \textbf{It 3: }& s(i(i(1.8, 2), 1.8) &, i(1.8, 2)) \\ \end{aligned}
function Secant(f : float -> float, a : float, b : float, maxIter : int, eps : float)
    fa := call f(a)
    fb := call f(b)
    if |fa| > |fb| then
        swap(a, b)
        swap(fa, fb)
    end if
    for it <- 1 to maxIter
        if |fa| > |fb| then
            swap(a, b)
            swap(fa, fb)
        end if
        d := (b - a) / (fb - fa)
        b := a
        fb := fa
        d := d * fa
        if |d| < eps then
            // Algorithm has converged after #{it} iterations!
            return a
        end if
        a := a - d
        fa := call f(a)
    end for
    // Maximum number of iterations reached!
    return a
end function
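And a Python sketch of the same routine (names mine; note how the step d is built as f(a) divided by the finite-difference slope, split across two assignments just like the pseudocode):

```python
def secant(f, a, b, max_iter=100, eps=1e-6):
    """Secant method from two initial guesses a and b (assumes f(a) != f(b))."""
    fa, fb = f(a), f(b)
    for it in range(1, max_iter + 1):
        if abs(fa) > abs(fb):      # keep the point with smaller |f| in a
            a, b = b, a
            fa, fb = fb, fa
        d = (b - a) / (fb - fa)    # reciprocal of the finite-difference slope
        b, fb = a, fa              # current point becomes the previous point
        d *= fa                    # full step: f(a) / approximate derivative
        if abs(d) < eps:
            return a               # converged after `it` iterations
        a -= d
        fa = f(a)
    return a                       # max iterations reached without convergence

# Reproduces the worked example: x_0 = 2, x_1 = 1.8
print(secant(lambda x: 3*x**2 - 4*x + 1, 2.0, 1.8))  # ~1.0
```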
Recall — Newton’s and the secant method can get stuck in dips (local extrema) of the graph. The secant method is technically less likely to get stuck: since it only approximates the tangent line, it has a chance of shooting past the dip.
Also, the bisection method is actually pretty fast at the start thanks to its halving behavior, but that same fixed halving becomes its downfall: it takes forever to converge to high precision.
Therefore — The hybrid method runs the bisection method for a few iterations (the professor recommends 5) before switching to the secant or Newton’s method.
In other words, you quickly reduce the problem size by 96.875% (five halvings leave 1/2^5 = 1/32 \approx 3.125\% of the original bracket), and then switch to a faster method to finish off the last ~3%.
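Here’s a minimal sketch of that hybrid (reusing the secant function from the earlier sketch; the 5-iteration split is the professor’s recommendation, everything else is my own framing):

```python
def hybrid(f, a, b, bisect_iters=5, max_iter=100, eps=1e-6):
    """A few bisection steps to shrink the bracket, then secant to finish."""
    fa = f(a)
    for _ in range(bisect_iters):  # each step halves the bracket
        c = (a + b) / 2
        fc = f(c)
        if fc == 0:
            return c
        if fa * fc < 0:            # root is in the left half
            b = c
        else:                      # root is in the right half
            a, fa = c, fc
    # Bracket is now 1/2^5 = 1/32 of its original width; polish with secant.
    return secant(f, a, b, max_iter, eps)

print(hybrid(lambda x: 3*x**2 - 4*x + 1, 0.5, 1.8))  # ~1.0
```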