# Approximations


Many non-linear equations must be solved even when no analytical solution exists. This requires approximating the solution with iterative methods.

## Iterative Methods

Iterative methods start from an initial approximation to a solution and refine it repeatedly until a required accuracy is reached. Error estimation is then used to quantify the accuracy of the result.

#### Taylor's Theorem

Taylor polynomials can be used to estimate a function near a given point. The nth-order Taylor polynomial about a point a, learnt in previous courses, is:

f(x) ≈ f(a) + f'(a)(x - a) + f''(a)(x - a)^2 / 2! + ... + f^(n)(a)(x - a)^n / n!

#### Mean Value Theorem

The mean value theorem states that, for a function continuous on [a, b] and differentiable on (a, b), there is a point c in (a, b) where the derivative equals the gradient of the secant between the endpoints. This can be used to estimate a value by repeated guessing and checking of a solution against the gradient of a secant, using the following formula learnt in previous courses:

f'(c) = (f(b) - f(a)) / (b - a)

#### Graphical Method

For simple equations, the function can be plotted and the roots found by reading, from an accurate plot, the points where the curve crosses the x-axis.
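The tabulation behind the graphical method can also be done numerically: instead of reading a plot, scan a grid of x values for sign changes. The following is a minimal sketch; the helper name `sign_changes` is illustrative, not from the source.

```python
def sign_changes(f, a, b, n=100):
    """Tabulate f on a grid over [a, b] and return the subintervals
    where f changes sign, i.e. where a root must lie."""
    step = (b - a) / n
    xs = [a + i * step for i in range(n + 1)]
    brackets = []
    for left, right in zip(xs, xs[1:]):
        if f(left) * f(right) < 0:
            brackets.append((left, right))
    return brackets

# example: x^2 - 2 crosses the x-axis once on [0, 2], near sqrt(2)
brackets = sign_changes(lambda x: x**2 - 2, 0.0, 2.0)
```

Each returned interval can then be fed to one of the bracketing methods below.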

#### Bisection Method

The bisection method finds the value of a root within an interval whose endpoints lie on opposite sides of the x-axis, i.e. f(xl) * f(xr) < 0. Once such an interval is found, it is halved, and the midpoint is evaluated to see whether the function there lies above or below the axis; the midpoint then replaces the endpoint lying on the same side of the axis, so the new, smaller interval still brackets the root.
The initial estimate of the solution:

x0 = (xl + xr) / 2

The initial error:

EI = |xl - xr| / 2

These values change as the interval halves every time, and a new and more accurate interval is established.
The number of iterations required to reduce the error to Er:

N = log2 (EI / Er), rounded up to the next whole number, where N is the required number of iterations and EI is the initial error.
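The halving procedure and the iteration-count formula above can be sketched as follows (the function name `bisect` and its parameters are illustrative, not from the source):

```python
import math

def bisect(f, xl, xr, tol=1e-6):
    """Bisection: the endpoints must bracket a root, f(xl) * f(xr) < 0."""
    if f(xl) * f(xr) >= 0:
        raise ValueError("interval does not bracket a root")
    # initial error EI = |xr - xl| / 2; iterations N = ceil(log2(EI / Er))
    n = math.ceil(math.log2(abs(xr - xl) / 2 / tol))
    for _ in range(n):
        xm = (xl + xr) / 2
        if f(xl) * f(xm) < 0:
            xr = xm  # root lies in the left half
        else:
            xl = xm  # root lies in the right half
    return (xl + xr) / 2

# example: the root of x^2 - 2 on [1, 2] is sqrt(2)
root = bisect(lambda x: x**2 - 2, 1.0, 2.0)
```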

#### Fixed Point Iteration

Fixed point iteration involves rearranging an equation f(x) = 0 into the form x = g(x). It is often possible to find two different rearrangements of the same equation. To decide which of the two will converge to the root, differentiate each g(x) and substitute the initial estimate x0: if the absolute value of the result is less than one, the iteration converges and will therefore approximate the root of the original equation.

If |g'(x0)| < 1, the iteration x = g(x) converges.

The g(x) that converges then has the initial estimate x0 substituted to give the next estimate, x1. This is then substituted into g(x) again, and the process is repeated for the required number of iterations.
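The convergence test and the repeated substitution can be sketched as below; the names `fixed_point`, `g`, and `dg` are illustrative, not from the source.

```python
import math

def fixed_point(g, dg, x0, n_iter=30):
    """Iterate x = g(x); the rearrangement should satisfy |g'(x0)| < 1."""
    if abs(dg(x0)) >= 1:
        raise ValueError("|g'(x0)| >= 1: this rearrangement may diverge")
    x = x0
    for _ in range(n_iter):
        x = g(x)  # each estimate is substituted back into g
    return x

# example: x^2 - 2x - 3 = 0 rearranged as x = sqrt(2x + 3); root is 3
root = fixed_point(lambda x: math.sqrt(2 * x + 3),
                   lambda x: 1 / math.sqrt(2 * x + 3),  # g'(x)
                   x0=4.0)
```

The alternative rearrangement x = (x^2 - 3) / 2 fails the test at x0 = 4 (|g'(4)| = 4), which is exactly the check described above.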

#### Newton-Raphson Method

This method is a powerful estimation procedure which relies on a truncation of the Taylor series. It is derived by considering the first-order Taylor series expansion of f(x) about an arbitrary point, which gives the iteration:

xi+1 = xi - f(xi) / f'(xi)

This method requires that the function and its derivative be continuous near the root. The new value found is always substituted back into the formula, and this is repeated until |f(xi+1)| ≤ ε.
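The iteration and the |f(xi+1)| ≤ ε stopping rule can be sketched as follows (names and the iteration cap are illustrative, not from the source):

```python
def newton_raphson(f, df, x0, eps=1e-10, max_iter=50):
    """Iterate x = x - f(x)/f'(x) until |f(x)| <= eps."""
    x = x0
    for _ in range(max_iter):
        x = x - f(x) / df(x)  # new value substituted back into the formula
        if abs(f(x)) <= eps:  # stop when the residual is small enough
            return x
    return x

# example: root of x^2 - 2 starting from x0 = 1.5
root = newton_raphson(lambda x: x**2 - 2, lambda x: 2 * x, 1.5)
```

Note the extra requirement compared with bisection: the derivative f'(x) must be available and non-zero at each iterate.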

#### Secant Method

The secant method relies on the idea that similar triangles are formed when the function is treated as linear between two estimates, giving the iteration:

xi+1 = xi - f(xi)(xi - xi-1) / (f(xi) - f(xi-1))

The calculation requires two values of x and, similar to the bisection method, takes the newly calculated value and substitutes it for a previous x value. The distinction is that the new value does not need to be tested to decide which endpoint it replaces; it automatically replaces the older of the two.
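The automatic replacement of the older point can be sketched as below; the function name `secant` and the stopping tolerance are illustrative, not from the source.

```python
def secant(f, x0, x1, eps=1e-10, max_iter=50):
    """Iterate x_{i+1} = x_i - f(x_i)(x_i - x_{i-1}) / (f(x_i) - f(x_{i-1}))."""
    for _ in range(max_iter):
        x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
        x0, x1 = x1, x2  # no sign test: the new value replaces the older point
        if abs(f(x1)) <= eps:
            return x1
    return x1

# example: root of x^2 - 2 from the starting pair (1, 2)
root = secant(lambda x: x**2 - 2, 1.0, 2.0)
```

Unlike bisection, the two points are not required to keep bracketing the root, which is why no sign test is needed before replacing one of them.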
