CSc 2262 Numerical Methods Fall 2004[1]

Notes

History

            Calculating devices over time – ancient, middle, modern

                        Babbage's analytical engine; analog devices (including the slide rule); digital computers

                        WWII – scientific computing

                        Supercomputers and grand challenge problems – weather, human genome,

oil exploration, …

                        Mathematics and Electrical Engineering roots – circuits, hardware;

algebra, trigonometry, calculus

Number Representation

            Decimal numbers, binary digits, octal and hexadecimal digits – conversion

            Floating point numbers – sign, exponent, fraction (IEEE standard)

                        characteristic (exponent) – 8 bits, excess-127

                        sign bit

mantissa or significand (fraction) – n = 23 bits single precision,

n = 52 bits double precision

                        accuracy – single 2^-23 ≈ 10^-7; double 2^-52 ≈ 10^-16
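
These accuracy figures can be checked numerically. A minimal Python sketch (Python stands in for the course's Matlab here) finds the double-precision machine epsilon by repeated halving:

```python
# Find machine epsilon for IEEE double precision (n = 52 fraction bits):
# the smallest power of 2 whose addition to 1.0 still changes the result.
eps = 1.0
while 1.0 + eps / 2.0 != 1.0:
    eps /= 2.0

print(eps)  # 2^-52, about 2.2e-16, matching the double-precision accuracy above
```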

Errors

            Can only store n bits of fraction – what to do with 1/3 or π?

            fl(x) = version of x stored as floating point = x(1+ε)

                        Chopping – drop bits beyond n ⇒ ε ∈ [−2^−n, 0]

                        Rounding – if bit n+1 = 0 then chop; if bit n+1 = 1 then chop and add 1 to the last kept bit

                                    so ε ∈ [−2^−(n+1), 2^−(n+1)]

                        ⇒ worst chopping error is twice the worst rounding error; chopping errors all have

the same sign (fl(x) is pulled toward zero), so no cancellation of errors is possible

                        *Consequences

Error = true value − approximate value = x_t − x_a

Relative error = error / true value = (x_t − x_a)/x_t = 1 − x_a/x_t

            Example: π = 3.14159265… = x_t while 22/7 = 3.1428571… = x_a

            Error ≈ −0.00126 and Relative error ≈ −0.000402
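
The π example can be checked directly; a short Python sketch using the true-minus-approximate sign convention:

```python
import math

x_t = math.pi      # true value
x_a = 22.0 / 7.0   # approximate value

error = x_t - x_a            # true - approximate
rel_error = error / x_t      # = 1 - x_a/x_t

print(error, rel_error)  # roughly -0.00126 and -0.000402 (22/7 overshoots pi)
```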

Error sources

            Modeling errors            N_t = number of people on earth in year t = N_0·e^{kt}

                        Outside its scope the model does not hold when compared with real data

            Mistakes (blunders) – programmers

            Physical measurement – c = speed of light = (2.997925 + ε) × 10^10 cm/sec

            Machine representation and arithmetic errors – rounding, chopping, underflow,

Overflow

            Mathematical approximation and truncation (digitization) – π

            Loss of significance

            *Noise

                        Function, fuzzy curve

            Program segment example:

                        n = 30; x = 1.0/9.9;

for (i = 1; i <= n; i++) { x = 10.0*x - 1.0; printf("%d  %18.15f\n", i, x); }

            Note: each pass multiplies any error in x by 10, so after a few dozen iterations the initial rounding noise dominates the computed values

            Propagation     (interval arithmetic)

                        x_a + y_a:            |(x_t + y_t) − (x_a + y_a)| ≤ ε_x + ε_y

                        x_a / y_a:            (x_a − ε_x)/(y_a + ε_y) ≤ x_t/y_t ≤ (x_a + ε_x)/(y_a − ε_y)

                        relative error of (x·y) = relative error(x) + relative error(y) − relative

error(x)·relative error(y)

                        relative error of f(x_a) ≈ f′(x_t)(x_t − x_a)/f(x_t) = [x_t·f′(x_t)/f(x_t)]·relative error(x_a)
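
The product rule for relative errors is an exact identity when Rel(x_a) = (x_t − x_a)/x_t; a Python sketch with made-up values:

```python
# Verify: Rel(x_a*y_a) = Rel(x_a) + Rel(y_a) - Rel(x_a)*Rel(y_a).
# The numeric values below are arbitrary, chosen only for illustration.
def rel(true, approx):
    return (true - approx) / true

x_t, x_a = 3.0, 2.997
y_t, y_a = 5.0, 5.002

lhs = rel(x_t * y_t, x_a * y_a)
rhs = rel(x_t, x_a) + rel(y_t, y_a) - rel(x_t, x_a) * rel(y_t, y_a)
print(lhs, rhs)  # equal up to roundoff
```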

Theorems

            intermediate value theorem

                        f(x) continuous for x ∈ [a,b]

                        M = max of f(x) on [a,b] and m = min of f(x) on [a,b]

                        ∀ v ∈ [m,M] ∃ c ∈ [a,b] such that f(c) = v

            mean value theorem

                        f(x) continuous on [a,b] and differentiable on (a,b)

                        ∃ c ∈ (a,b) such that f(b) − f(a) = (b − a)·f′(c)

            integral mean value theorem

                        f(x) continuous and w(x) nonnegative and integrable for x ∈ [a,b]

                        ∃ c ∈ [a,b] such that ∫_a^b f(x)w(x)dx = f(c)·∫_a^b w(x)dx

Taylor Series

f(x) continuous with n+1 derivatives for x ∈ [a,b]

P_n(x) = Σ_{j=0}^{n} f^{(j)}(x0)·(x − x0)^j / j!

            R_n(x) = remainder = E_n(x) = error = f(x) − P_n(x) = (x − x0)^{n+1}·f^{(n+1)}(c)/(n+1)! for c between x0 and x

            j! = j(j−1)(j−2)…(2)(1) = factorial and 0! = 1            f^{(j)}(x0) = jth derivative of f at x0

            Note: if x0 = 0 this is called a Maclaurin series

            Example: f(x) = e^x and x0 = 0                             x ∈ (−∞, ∞)

                        f^{(j)}(x0) = e^{x0} = e^0 = 1 ∀ j = 0, 1, …, n+1

                        P_n(x) = 1 + x + x²/2 + … + x^n/n! and R_n(x) = [x^{n+1}/(n+1)!]·e^c for c ∈ [0, x]
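
The e^x example can be coded directly; a Python sketch of the Taylor polynomial, with the remainder bound used as an error check (the function name is my own):

```python
import math

def taylor_exp(x, n):
    """P_n(x) = 1 + x + x^2/2! + ... + x^n/n!, the Taylor polynomial of e^x at x0 = 0."""
    term, total = 1.0, 1.0
    for j in range(1, n + 1):
        term *= x / j      # builds x^j/j! incrementally
        total += term
    return total

x, n = 1.0, 10
p = taylor_exp(x, n)
# For x in [0, 1], e^c <= e, so |R_n(x)| <= x^(n+1) * e / (n+1)!
bound = x ** (n + 1) * math.e / math.factorial(n + 1)
print(p, math.exp(x) - p, bound)  # the actual error lies within the bound
```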

Evaluation of Polynomials (Horner method)

P_n(x) = Σ_{j=0}^{n} aj·x^j            for real coefficients aj                Evaluate P_n(x) at x = z

Let bn = an, then b_{n−1} = a_{n−1} + z·bn, b_{n−2} = a_{n−2} + z·b_{n−1}, …, bk = ak + z·b_{k+1}, …, b0 = a0 + z·b1

then P_n(z) = b0

            Synthetic Division

                        Let Q_{n−1}(x) = Σ_{j=1}^{n} bj·x^{j−1} = quotient of P_n(x)/(x − z) with b0 as remainder ⇒

                        P_n(x) = b0 + (x − z)·Q_{n−1}(x)
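
Horner's recursion and its synthetic-division byproduct fit in a few lines; a Python sketch (names my own):

```python
def horner(a, z):
    """a = [a_0, a_1, ..., a_n].  Returns P_n(z) = b_0 and the quotient
    coefficients [b_1, ..., b_n] of P_n(x) / (x - z)."""
    n = len(a) - 1
    b = [0.0] * (n + 1)
    b[n] = a[n]
    for k in range(n - 1, -1, -1):
        b[k] = a[k] + z * b[k + 1]   # b_k = a_k + z*b_{k+1}
    return b[0], b[1:]

# P(x) = x^2 - 3x + 2 = (x - 1)(x - 2); at z = 1 the remainder is 0
value, quotient = horner([2.0, -3.0, 1.0], 1.0)
print(value, quotient)  # 0.0 and [-2.0, 1.0], i.e. quotient x - 2
```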

Roots                                                  ε = error tolerance

            Find x such that f(x)=0

            Example: Suppose that one deposits P_in dollars into a savings account at 100·r % interest in each of N_in time periods, and after that withdraws P_out dollars from the account in each of N_out time periods until the account has no money left.  After N_in time periods the account has S_in = P_in·(1+r)·[(1+r)^{N_in} − 1]/r dollars, and the balance after N_out withdrawals of P_out dollars is f(r) = (1+r)^{N_out}·S_in − P_out·[(1+r)^{N_out} − 1]/r = 0.  We can solve for the r needed for this transaction.

Bracketing Methods            f(x) continuous for x ∈ [a,b]

Incremental Search              multiple roots

            set aj = a + (b − a)·j/n for j = 0, 1, …, n

            if f(aj)·f(a_{j+1}) < 0 then output [aj, a_{j+1}] as a small interval containing a root

            Note: could then use other bracketing methods to get more exact estimates

Bisection                                  f(a)·f(b) < 0

            set c = (a + b)/2; if b − c ≤ ε then c = root and stop

            else if f(b)·f(c) ≤ 0 then set a = c else set b = c; restart

            Note: b_n − a_n = (b − a)/2^{n−1} and the root is in [a_n, b_n], so the method must converge as n increases

                        ⇒ n ≥ log((b − a)/ε)/log(2)
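
The bisection loop above, as a Python sketch (f(a)·f(b) < 0 assumed; the √2 example is my own):

```python
def bisect(f, a, b, eps=1e-10):
    """Bisection exactly as outlined: halve [a, b] until b - c <= eps."""
    while True:
        c = (a + b) / 2.0
        if b - c <= eps or f(c) == 0.0:
            return c
        if f(b) * f(c) <= 0.0:   # root lies in [c, b]
            a = c
        else:                    # root lies in [a, c]
            b = c

root = bisect(lambda x: x * x - 2.0, 0.0, 2.0)
print(root)  # converges to sqrt(2) = 1.41421356...
```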

False Position                           f(a)·f(b) < 0

            x0, x1 ∈ [a,b]; x = x1 − f(x1)·(x1 − x0)/[f(x1) − f(x0)];

            if f(x)·f(x1) < 0 then x0 = x else x1 = x;

            stop when successive estimates differ by less than ε; else restart

Open Methods (no bracketing)

Fixed Point Iteration                        f(x) continuous

            find g(x) such that a root of f(x) = 0 solves g(x) = x

            one possibility for g is g(x)=bf(x)+x for some nonzero parameter b

            find an initial estimate x0; set x = g(x0);

if |x − x0| < ε stop; else restart with x0 = x
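
A Python sketch of the iteration; the choice g(x) = b·f(x) + x with b = −1/4 for f(x) = x² − 2 is my own (it contracts near the root √2):

```python
def fixed_point(g, x0, eps=1e-12, max_iter=1000):
    """Iterate x = g(x0) until successive estimates differ by less than eps."""
    for _ in range(max_iter):
        x = g(x0)
        if abs(x - x0) < eps:
            return x
        x0 = x
    raise RuntimeError("no convergence within max_iter iterations")

# f(x) = x^2 - 2, g(x) = b*f(x) + x with b = -1/4; |g'(x)| < 1 near sqrt(2)
root = fixed_point(lambda x: -0.25 * (x * x - 2.0) + x, 1.0)
print(root)  # approaches sqrt(2)
```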

Secant

analogous to false position but without the brackets

            find initial estimates x0,x1;

x = x1 − f(x1)·(x1 − x0)/[f(x1) − f(x0)];

            set x0 = x1 and x1 = x;

            stop when successive estimates differ by less than ε; else restart

Newton-Raphson

            based on Taylor series                        f(x) continuous and differentiable

            find x0;  x = x0 − f(x0)/f′(x0);

if |x − x0| < ε stop; else restart with x0 = x
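
Newton-Raphson as a Python sketch; the square-root example is my own:

```python
def newton(f, fprime, x0, eps=1e-12, max_iter=100):
    """x = x0 - f(x0)/f'(x0), repeated until the step is smaller than eps."""
    for _ in range(max_iter):
        x = x0 - f(x0) / fprime(x0)
        if abs(x - x0) < eps:
            return x
        x0 = x
    raise RuntimeError("no convergence within max_iter iterations")

root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
print(root)  # sqrt(2) to machine precision
```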

Matlab

            Can use fzero to find roots

            Can use polyval to evaluate a polynomial

            Can use poly to construct a polynomial given its roots

            Can use conv to multiply a polynomial by another polynomial

            Can use deconv to divide one polynomial into another polynomial

Interpolating (or Approximating) Polynomials

Computers store numbers via binary representations and can do simple arithmetic (+, −, *, /)

Need to estimate complex functions (e.g., e^x or sin(x)) via these simple arithmetic

operations ⇒ estimating polynomials (using multiplication and addition)

Interpolation – find Pn(x) as an interpolating polynomial that has the same value as the function through some known points, i.e., f(xj)=Pn(xj). Note that the known points are x0,x1,…, xn and y0,y1,…, yn where yj=f(xj).

Taylor Series                            x0 ∈ [a,b]

P_n(x) = Σ_{j=0}^{n} f^{(j)}(x0)·(x − x0)^j / j!

            E_n(x) = error = f(x) − P_n(x) = (x − x0)^{n+1}·f^{(n+1)}(c)/(n+1)!

Linear Interpolation                 x0, x1 ∈ [a,b]

P1(x) = [(x1 − x)·y0 + (x − x0)·y1]/(x1 − x0)

Note: slope = (y1 − y0)/(x1 − x0) and y-intercept = (x1·y0 − x0·y1)/(x1 − x0)

Note that P1(xi) = yi for i = 0, 1

Example: f(x) = √x with x0 = 1 and x1 = 4, so y0 = 1 and y1 = 2

            Slope = (2 − 1)/(4 − 1) = 1/3 and y-intercept = (4·1 − 1·2)/(4 − 1) = 2/3

P1(x)= (1/3)x+(2/3)

            for x=2, f(x)=1.4142135 and P1(x)=4/3=1.3333333

Example: f(x) = e^x with x0 = 0.82 and x1 = 0.83, so y0 = 2.270500 and y1 = 2.293319

            for x = 0.826, f(x) = 2.2841638 and P1(x) = 2.2841914

Lagrange Polynomials            x0, x1, …, xn ∈ [a,b]

P_n(x) = Σ_{j=0}^{n} yj·Lj(x)  where Lj(x) = Π_{i=0, i≠j}^{n} (x − xi)/(xj − xi)

Note that Lj(xi) = δ_ij = 1 if i = j, 0 if i ≠ j  (δ = Kronecker delta) ⇒ P_n(xj) = yj

Note that the interpolating polynomial can be shown to be unique

E_n(x) = error of P_n(x) at x = Π_{i=0}^{n} (x − xi)·f^{(n+1)}(c)/(n+1)!  for c ∈ [a,b]

Example: n = 2; f(x) = e^x with x0 = 0.82, x1 = 0.83, and x2 = 0.84, so y0 = 2.270500,

y1 = 2.293319, and y2 = 2.316367

            for x = 0.826, f(x) = 2.2841638 and P2(x) = 2.2841639
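
The n = 2 example above, computed with a direct Python transcription of the Lagrange form:

```python
import math

def lagrange(xs, ys, x):
    """P_n(x) = sum_j yj * Lj(x), with Lj(x) = prod_{i != j} (x - xi)/(xj - xi)."""
    total = 0.0
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        L = 1.0
        for i, xi in enumerate(xs):
            if i != j:
                L *= (x - xi) / (xj - xi)
        total += yj * L
    return total

xs = [0.82, 0.83, 0.84]
ys = [math.exp(v) for v in xs]   # f(x) = e^x sampled at the nodes
p = lagrange(xs, ys, 0.826)
print(p, math.exp(0.826))  # agree to about seven digits, as in the example
```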


Newton Polynomials                  x0, x1, …, xn ∈ [a,b]

P_n(x) = Σ_{j=0}^{n} Dj·Π_{i=0}^{j−1} (x − xi)

Divided Differences

D0 = f[x0] = f(x0);

D1 = f[x0, x1] = [f(x1) − f(x0)]/(x1 − x0) = {f[x1] − f[x0]}/(x1 − x0)

            ≈ f′(c) for c ≈ (x0 + x1)/2

D2 = f[x0, x1, x2] = {f[x1, x2] − f[x0, x1]}/(x2 − x0) …

Dn = f[x0, x1, …, xn] = {f[x1, …, xn] − f[x0, …, x_{n−1}]}/(xn − x0)            ≈ f^{(n)}(c)/n!

Note that we can permute the order of the xj variables

E_n(x) = error = Π_{i=0}^{n} (x − xi)·f^{(n+1)}(c)/(n+1)! for c ∈ [min xj, max xj]

Note that we can calculate recursively:

d_{k,0} = f(xk), d_{k,j} = [d_{k,j−1} − d_{k−1,j−1}]/(xk − x_{k−j}) ⇒ Dk = d_{k,k}
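
The recursive divided-difference table and nested evaluation of the Newton form, as a Python sketch (names my own):

```python
def divided_differences(xs, ys):
    """Returns [D_0, ..., D_n] via d_{k,j} = (d_{k,j-1} - d_{k-1,j-1})/(x_k - x_{k-j})."""
    n = len(xs)
    d = list(ys)                 # column j = 0: d_{k,0} = f(x_k)
    coeffs = [d[0]]
    for j in range(1, n):
        for k in range(n - 1, j - 1, -1):   # update in place, bottom up
            d[k] = (d[k] - d[k - 1]) / (xs[k] - xs[k - j])
        coeffs.append(d[j])      # D_j = d_{j,j}
    return coeffs

def newton_eval(xs, coeffs, x):
    """P_n(x) = sum_j D_j * prod_{i<j}(x - x_i), evaluated Horner-style."""
    total = coeffs[-1]
    for j in range(len(coeffs) - 2, -1, -1):
        total = total * (x - xs[j]) + coeffs[j]
    return total

xs, ys = [0.0, 1.0, 2.0], [1.0, 2.0, 5.0]   # samples of f(x) = x^2 + 1
D = divided_differences(xs, ys)
print(D, newton_eval(xs, D, 1.5))  # [1.0, 1.0, 1.0] and 3.25
```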

Approximation – find Pn(x) as an approximating polynomial minimizing the error, at

least at some known points x0,x1,…, xn and y0,y1,…, yn where yj=f(xj).

            Note: error is a function of Pn(xj)-yj at xj

Chebyshev Polynomials                  x0, x1, …, xn ∈ [−1, 1]

            E_n(x) = Π_{i=0}^{n} (x − xi)·f^{(n+1)}(c)/(n+1)!           Q(x) = Π_{i=0}^{n} (x − xi)

            Minimize max |Q(x)| by picking xk = cos[(2k+1)π/(2n+2)], k = 0, 1, …, n

            P_n(x) = Σ_{j=0}^{n} aj·Cj(x)            where

                        Cj(x) = cos(j·cos^{−1}(x))

                        C0(x) = 1; C1(x) = x; Cj(x) = 2x·C_{j−1}(x) − C_{j−2}(x)

                        a0 = [1/(n+1)]·Σ_{k=0}^{n} yk·C0(xk) = [1/(n+1)]·Σ_{k=0}^{n} yk

aj = [2/(n+1)]·Σ_{k=0}^{n} yk·Cj(xk) = [2/(n+1)]·Σ_{k=0}^{n} yk·cos[jπ(2k+1)/(2n+2)]

Transforming the interval

                        if x ∈ [a,b] and a ≠ −1 and/or b ≠ 1, work with t ∈ [−1, 1] by letting

t = 2(x − a)/(b − a) − 1, which means x = (b − a)t/2 + (a + b)/2

Select tk = cos[(2n+1−2k)π/(2n+2)] so xk = (b − a)·tk/2 + (a + b)/2
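
The node formula and the interval map combine into a few lines; a Python sketch (function name my own):

```python
import math

def chebyshev_nodes(n, a=-1.0, b=1.0):
    """x_k = cos((2k+1)*pi/(2n+2)), k = 0..n, mapped from [-1, 1] onto [a, b]."""
    nodes = []
    for k in range(n + 1):
        t = math.cos((2 * k + 1) * math.pi / (2 * n + 2))
        nodes.append((b - a) * t / 2.0 + (a + b) / 2.0)
    return nodes

print(chebyshev_nodes(2))            # three nodes in (-1, 1), symmetric about 0
print(chebyshev_nodes(2, 0.0, 1.0))  # the same nodes shifted into (0, 1)
```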


Legendre Polynomials                  x0, x1, …, xn ∈ [a,b]

Minimize E = error = √{[1/(b − a)]·∫_a^b [f(x) − P_n(x)]² dx}

LG_0(x) = 1; LGj(x) = [1/(2^j·j!)]·d^j[(x² − 1)^j]/dx^j            (Rodrigues formula)

P_n(x) = Σ_{j=0}^{n} bj·LGj(x)

LG_{j+1}(x) = [(2j+1)/(j+1)]·x·LGj(x) − [j/(j+1)]·LG_{j−1}(x)

bj = ∫_a^b f(x)·LGj(x) dx / ∫_a^b LGj(x)² dx

Splines (thin, flexible strips)             x0 < x1 < … < xn ∈ [a,b]

            Knot – x values where two spline pieces meet

            Let hi = x_{i+1} − xi

Linear

Si(x) = ai + bi·(x − xi)

Si(xi) = yi (interpolation) ⇒ ai = yi ⇒ Si(x) = yi + bi·(x − xi)

bi = (y_{i+1} − yi)/(x_{i+1} − xi) = (y_{i+1} − yi)/hi = slope

⇒ Si(x) = yi + [(y_{i+1} − yi)/(x_{i+1} − xi)]·(x − xi) = yi + [(y_{i+1} − yi)/hi]·(x − xi)

            Note that Si(x_{i+1}) = S_{i+1}(x_{i+1}) = y_{i+1}                (equality at knots)
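
Evaluating the linear spline amounts to locating the interval and applying the slope formula; a Python sketch using the √x data from the linear-interpolation example:

```python
def linear_spline(xs, ys, x):
    """S_i(x) = y_i + [(y_{i+1} - y_i)/h_i]*(x - x_i) on the piece containing x."""
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            h = xs[i + 1] - xs[i]
            return ys[i] + (ys[i + 1] - ys[i]) / h * (x - xs[i])
    raise ValueError("x lies outside the knot range")

xs, ys = [1.0, 4.0, 9.0], [1.0, 2.0, 3.0]   # samples of sqrt(x)
print(linear_spline(xs, ys, 2.0))  # 4/3 on the first piece
```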

Quadratic

Si(x) = ai + bi·(x − xi) + ci·(x − xi)²

Si(xi) = yi ⇒ ai = yi ⇒ Si(x) = yi + bi·(x − xi) + ci·(x − xi)²

Si(x_{i+1}) = yi + bi·(x_{i+1} − xi) + ci·(x_{i+1} − xi)² = S_{i+1}(x_{i+1}) = y_{i+1}

⇒ yi + bi·hi + ci·hi² = y_{i+1}                         (equality at knots)

Si′(x) = bi + 2ci·(x − xi) ⇒ Si′(x_{i+1}) = bi + 2ci·hi = S_{i+1}′(x_{i+1}) = b_{i+1} ⇒ bi + 2ci·hi = b_{i+1}

(equality at knots – first derivative)

Cubic

Si(x) = ai + bi·(x − xi) + ci·(x − xi)² + di·(x − xi)³            ⇒ ai = yi (interpolation)

⇒ Si(x) = yi + bi·(x − xi) + ci·(x − xi)² + di·(x − xi)³

yi + bi·hi + ci·hi² + di·hi³ = y_{i+1} and bi + 2ci·hi + 3di·hi² = b_{i+1}  (equality at knots for S and S′)

Si′(x) = bi + 2ci·(x − xi) + 3di·(x − xi)² and Si″(x) = 2ci + 6di·(x − xi)  ⇒ ci + 3di·hi = c_{i+1}

(equality at knots for S″)

            Natural spline: c0 = cn = 0 (additional conditions)

            Clamped (f′ specified at x0 and xn)

                                                 ⇒ 2h0·c0 + h0·c1 = 3(y1 − y0)/h0 − 3f′(x0)

 h_{n−1}·c_{n−1} + 2h_{n−1}·cn = 3f′(xn) − 3(yn − y_{n−1})/h_{n−1}

            Not-a-knot (continuity of third derivatives at the second and next-to-last knots – the

first internal knots are no longer true knots; gives the same result as an ordinary cubic

polynomial through the first four points)

⇒ h1·c0 − (h0 + h1)·c1 + h0·c2 = 0

h_{n−1}·c_{n−2} − (h_{n−2} + h_{n−1})·c_{n−1} + h_{n−2}·cn = 0

Least Squares

Y = f(x) or f(xk) = yk + ek or ek = f(xk) − yk             k = 1, …, n

E_∞(f) = max_k |f(xk) − yk| = maximum error; E_1(f) = (1/n)·Σ_{k=1}^{n} |f(xk) − yk| = average error;

            E_2(f) = √[(1/n)·Σ_{k=1}^{n} |f(xk) − yk|²] = root-mean-square error

Least squares minimizes E_2(f)

            Linear f(x) = ax + b                     E_2(f) = √[(1/n)·Σ_{k=1}^{n} |a·xk + b − yk|²]

normal equations obtained from ∂E_2(f)/∂a = 0 and ∂E_2(f)/∂b = 0

⇒ a·Σ_{k=1}^{n} xk² + b·Σ_{k=1}^{n} xk = Σ_{k=1}^{n} xk·yk

a·Σ_{k=1}^{n} xk + b·n = Σ_{k=1}^{n} yk

                        ⇒ a = (n·Σ xk·yk − Σ xk·Σ yk)/[n·Σ xk² − (Σ xk)²]

b = (1/n)·[Σ yk − a·Σ xk]

  = ȳ − a·x̄ where ȳ = (1/n)·Σ yk = average y value and

x̄ = (1/n)·Σ xk = average x value
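
The closed-form a and b can be coded directly; a Python sketch with made-up, exactly linear data (which the fit must recover):

```python
def linear_least_squares(xs, ys):
    """Solve the normal equations above for f(x) = a*x + b."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n        # b = y-bar - a*x-bar
    return a, b

a, b = linear_least_squares([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
print(a, b)  # recovers slope 2 and intercept 1
```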

Matlab – linregr

            x = vector of n x values, y = vector of n y values

            linregr(x,y,n)

Nonlinear – can try to linearize

            y = a·e^{bx} ⇒ ln(y) = ln(a) + bx ⇒ z = A + bx with z = ln(y) and A = ln(a)

y = a·x^b ⇒ log(y) = log(a) + b·log(x)

 ⇒ z = A + bw with z = log(y), A = log(a), and w = log(x)

y = ax/(b + x) ⇒ (1/y) = (1/a) + (b/a)(1/x)

 ⇒ z = A + Bw with z = 1/y, A = 1/a, B = b/a, and w = 1/x


Matlab – polyfit, polyval

                        x = vector of n x values, y = vector of n y values,

X=value of x to be tested

a=polyfit(x,y,n)

Y=polyval(a,X)



[1] Grader is Ankur Suri, at asuri1@lsu.edu