Single-Variable Iteration

Types

Functions

LUSE_ENGR701_704_NumericalMethods.SingleVariableIterations.bisection — Method
bisection(SVI::SingleVariableIteration)

Root-finding method for f(x) = 0. Searches for a solution by repeatedly halving the bounds [a, b], where a and b initially yield opposite signs in the function.

Notes

Relying on the Intermediate Value Theorem (IVT), this is a bracketed root-finding method. It is rather slow to converge but will always converge to a solution; therefore, it is a good starting method.
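The halving procedure can be sketched as a standalone function. This is a Python illustration of the textbook algorithm, not this package's API; the name, signature, and defaults are hypothetical:

```python
def bisection(f, a, b, tol=1e-10, max_iter=100):
    """Halve [a, b] until the bracketed root is located within tol."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        p = (a + b) / 2
        fp = f(p)
        if fp == 0 or (b - a) / 2 < tol:
            return p
        if fa * fp < 0:
            b = p           # root lies in [a, p]
        else:
            a, fa = p, fp   # root lies in [p, b]
    return (a + b) / 2

root = bisection(lambda x: x**2 - 2, 0.0, 2.0)  # converges toward sqrt(2)
```

Each iteration halves the interval, so the error bound after n iterations is (b - a)/2ⁿ, regardless of the function's shape; this guaranteed but linear shrinkage is why the method is robust yet slow.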

source
LUSE_ENGR701_704_NumericalMethods.SingleVariableIterations.false_position — Method
false_position(SVI::SingleVariableIteration, p0, p1)

Attempt root-finding with initial guesses p0 and p1 in [a, b] that yield opposite signs in the function.

Use function with lowest slope!

Notes

Similar to secant_method(), but slower to converge, because it includes a test to ensure the solution remains root-bracketed.

See fixed_point() for theorem.
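The bracket-preserving test mentioned above is the only difference from the secant update. A standalone Python sketch of the algorithm (hypothetical name and signature, not this package's API):

```python
def false_position(f, p0, p1, tol=1e-10, max_iter=100):
    """Secant-style updates that keep the root bracketed between iterates."""
    q0, q1 = f(p0), f(p1)
    if q0 * q1 > 0:
        raise ValueError("f(p0) and f(p1) must have opposite signs")
    for _ in range(max_iter):
        p = p1 - q1 * (p1 - p0) / (q1 - q0)  # secant step
        q = f(p)
        if abs(p - p1) < tol:
            return p
        # keep whichever previous point still brackets the root
        if q * q1 < 0:
            p0, q0 = p1, q1
        p1, q1 = p, q
    return p1

root = false_position(lambda x: x**2 - 2, 1.0, 2.0)  # converges toward sqrt(2)
```

Note the trade-off: the bracketing test guarantees the root stays enclosed, but it tends to pin one endpoint in place, which is why convergence is slower than the unconstrained secant method.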

source
LUSE_ENGR701_704_NumericalMethods.SingleVariableIterations.fixed_point — Method
fixed_point(SVI::SingleVariableIteration, p0)

Attempt root-finding with initial guess p0 in [a, b] by solving the equation g(p) = p via f(p) - p = 0.

Use function with lowest slope!

Not root-bracketed.

Notes

Theorem:

  1. Existence of a fixed point: If g ∈ C[a, b] and g(x) ∈ [a, b] for all x ∈ [a, b], then g has a fixed point, p ∈ [a, b].
  2. Uniqueness of a fixed point: If, in addition, g'(x) exists on (a, b) and a positive constant k < 1 exists with |g'(x)| ≤ k for all x ∈ (a, b), then there is exactly one fixed point, p ∈ [a, b].

Converges linearly if g'(p) ≠ 0, and quadratically if g'(p) = 0 and |g''(x)| < M on an open interval containing p, where M is a bound on the second-derivative term of the error.
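The two convergence regimes can be seen with a standalone Python sketch (hypothetical name and signature, not this package's API). The example map g(x) = (x + 2/x)/2 has fixed point √2 with g'(√2) = 0, so the iteration converges quadratically:

```python
def fixed_point(g, p0, tol=1e-10, max_iter=200):
    """Iterate p = g(p) until successive iterates agree within tol."""
    for _ in range(max_iter):
        p = g(p0)
        if abs(p - p0) < tol:
            return p
        p0 = p
    raise RuntimeError("fixed-point iteration did not converge")

# g'(sqrt(2)) = 0, so per the theorem convergence here is quadratic
root = fixed_point(lambda x: (x + 2/x) / 2, 1.0)  # converges toward sqrt(2)
```

By contrast, a map with 0 < |g'(p)| < 1 would still converge by the theorem, but only linearly, gaining a roughly constant number of digits per iteration instead of doubling them.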

source
LUSE_ENGR701_704_NumericalMethods.SingleVariableIterations.secant_method — Method
secant_method(SVI::SingleVariableIteration, p0, p1)

Attempt root-finding with initial guesses p0 and p1 in [a, b] that yield opposite signs in the function.

Use function with lowest slope!

Not root-bracketed.

Notes

This method is less computationally expensive than newton_raphson() because it circumvents the need to calculate the derivative, but it may converge at a slower rate.

See fixed_point() for theorem.
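The derivative-free update replaces f'(pₙ) in Newton's formula with the finite-difference slope through the last two iterates. A standalone Python sketch (hypothetical name and signature, not this package's API):

```python
def secant_method(f, p0, p1, tol=1e-10, max_iter=100):
    """Newton-like updates using a secant slope instead of the derivative."""
    q0, q1 = f(p0), f(p1)
    for _ in range(max_iter):
        if q1 == q0:          # flat secant: no further progress possible
            return p1
        p = p1 - q1 * (p1 - p0) / (q1 - q0)
        if abs(p - p1) < tol:
            return p
        p0, q0, p1, q1 = p1, q1, p, f(p)
    return p1

root = secant_method(lambda x: x**2 - 2, 1.0, 2.0)  # converges toward sqrt(2)
```

Because no bracketing test is applied, the iterates may leave [p0, p1]; the reward is superlinear convergence (order ≈ 1.618) at one function evaluation per step.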

source
LUSE_ENGR701_704_NumericalMethods.newton_raphson — Method
newton_raphson(SVI::SingleVariableIteration, p0[; df=nothing])

Attempt root-finding with initial guess p0 in [a, b] by solving the equation g(p) = p via f(p) - p = 0. If df is not given, it will be computed as the first derivative of the function.

Use function with lowest slope!

df(x) ≠ 0

Notes

Quickest convergence rate, but not root-bracketed, and it has trouble with symmetric functions! The initial guess, p0, must be close to the real solution; otherwise, the iteration will converge to a different root or oscillate (if the function is symmetric). This method can be viewed as fixed-point iteration.

The technique is based on the first-order Taylor polynomial expansion of f about $p_{0}$ (that is, p0), evaluated at x = p. |p - $p_{0}$| is assumed small; therefore, the second-order Taylor term, the error, is small.

See fixed_point() for theorem.
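Truncating that Taylor expansion and solving for p gives the familiar update p = p0 - f(p0)/f'(p0). A standalone Python sketch with the derivative supplied explicitly (hypothetical name and signature, not this package's API):

```python
def newton_raphson(f, df, p0, tol=1e-10, max_iter=100):
    """Iterate p = p0 - f(p0)/df(p0) until successive iterates agree within tol."""
    for _ in range(max_iter):
        d = df(p0)
        if d == 0:
            raise ZeroDivisionError("df(p) = 0; Newton-Raphson step undefined")
        p = p0 - f(p0) / d
        if abs(p - p0) < tol:
            return p
        p0 = p
    return p0

root = newton_raphson(lambda x: x**2 - 2, lambda x: 2*x, 1.5)  # toward sqrt(2)
```

Seen as fixed-point iteration on g(x) = x - f(x)/f'(x), one finds g'(p) = 0 at a simple root, which by the fixed_point() theorem is exactly why the convergence is quadratic.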

source
LUSE_ENGR701_704_NumericalMethods.solve — Method
solve(SVI::SingleVariableIteration[; method=:bisection, p0, p1, df])

Attempt to find p where f(p) = 0 according to method ∈ {:bisection (default), :fixed_point, :newton_raphson, :secant_method, :false_position}. Each method also has a convenience function.

Notes

Convergence Rates: :newton_raphson > :secant_method > :false_position > :fixed_point > :bisection

:bisection is the default because it will always converge to a solution, though it is not the quickest.
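A single entry point that dispatches on a method keyword can be sketched as below. This is a minimal Python illustration of the dispatch pattern with two of the solvers, not this package's internals; all names are hypothetical:

```python
def bisection(f, a, b, tol=1e-10):
    """Robust default: halve the bracket until it is narrower than tol."""
    while (b - a) / 2 > tol:
        m = (a + b) / 2
        if f(a) * f(m) <= 0:
            b = m   # root in left half
        else:
            a = m   # root in right half
    return (a + b) / 2

def secant_method(f, a, b, tol=1e-10):
    """Faster but unbracketed: secant updates from the two endpoints."""
    fa, fb = f(a), f(b)
    while abs(b - a) > tol and fb != fa:
        a, fa, b = b, fb, b - fb * (b - a) / (fb - fa)
        fb = f(b)
    return b

_METHODS = {"bisection": bisection, "secant_method": secant_method}

def solve(f, a, b, method="bisection", **kwargs):
    if method not in _METHODS:
        raise ValueError(f"unknown method: {method}")
    return _METHODS[method](f, a, b, **kwargs)

solve(lambda x: x**2 - 2, 0.0, 2.0)                          # default :bisection
solve(lambda x: x**2 - 2, 1.0, 2.0, method="secant_method")  # quicker convergence
```

Keeping bisection as the default mirrors the rationale above: the dispatcher always has a method that cannot fail on a valid bracket, while callers who know their function is well behaved can opt into the faster methods.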

source

Index