Ma 449: Numerical Applied Mathematics
Model Solutions to Homework Assignment 5
Prof. Wickerhauser
Exercise 4(a,b) of Section 4.1, p.195.
(a) Taylor polynomial of degree 5:
1 - x + x^2 - x^3 + x^4 - x^5
(b) Error term:
x^6 / (1 + c)^7
for some c between 0 and x.
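Since this is the geometric series for 1/(1+x), the degree-5 remainder also has the exact closed form x^6/(1+x); a short Python check (a sketch, not part of the assigned solution) confirms it:

```python
def p5(x):
    # degree-5 Taylor polynomial of f(x) = 1/(1+x) about x0 = 0
    return 1 - x + x**2 - x**3 + x**4 - x**5

for x in (0.1, -0.05, 0.3):
    # geometric-series identity: 1/(1+x) - p5(x) = x^6/(1+x) exactly
    assert abs(1/(1 + x) - p5(x) - x**6/(1 + x)) < 1e-13
```

This is consistent with the Lagrange form above: for x > 0 there is a c in (0, x) with (1+c)^7 = 1+x.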
Exercise 15(c) of Section 4.1, p.197.
The error term E4=E4(x,y) for the degree-N Taylor expansion, with
N=4, of exp(x) at x0=0 is
x^(1+N) exp(y)/(1+N)! = x^(5) exp(y)/5!,
where 0<|y|<|x|. Thus |E4| is maximal on [-c,c] when x=y=c,
where it equals
|E4(c,c)| = exp(c) * c^5/5! = exp(c) * c^5 / 120
Now exp(c) is approximately 1 for small values of c, so it does not
enlarge the error too much. It is enough to test a few values of c
near 0 to see which satisfy |E4(c,c)| < 1 e -6:
(too small) c=0.1 gives E4 < 9.3 e -8 with exp(c)<1.11
(too big) c=0.2 gives E4 < 3.3 e -6 with exp(c)<1.22
(just right) c=0.15 gives E4 < 0.75 e -6 with exp(c)<1.17
(checking, too big) c=0.16 gives E4 > 1.0 e -6
The desired accuracy, 1 e -6, is achieved with c=0.15.
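The bracketing search above can be reproduced with a few lines of Python (a sketch; it evaluates the bound exp(c)*c^5/120 directly):

```python
import math

def e4_bound(c):
    # |E4(c,c)| = exp(c) * c^5 / 5! = exp(c) * c^5 / 120
    return math.exp(c) * c**5 / 120

assert e4_bound(0.10) < 9.3e-8   # too small
assert e4_bound(0.20) > 1e-6     # too big (about 3.3e-6)
assert e4_bound(0.15) < 1e-6     # just right (about 0.75e-6)
assert e4_bound(0.16) > 1e-6     # checking: c=0.16 already fails
```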
Exercise 2(a,b,c,d) of Section 4.2, p.205.
(a) P[x] = -0.04 x^3 + 0.14 x^2 - 0.16 x + 2.08, so P[3] = 1.78
(b) P'[x] = -0.16 + 0.28 x - 0.12 x^2, so P'[3] = -0.4
(c) Integral of P on [0,3] is 5.97
(d) Extrapolated value P[4.5] = 0.55
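All four values can be checked with a short Python sketch (the derivative and antiderivative below are worked out by hand from the P in part (a)):

```python
def P(x):
    return -0.04*x**3 + 0.14*x**2 - 0.16*x + 2.08

def dP(x):
    # term-by-term derivative of P
    return -0.12*x**2 + 0.28*x - 0.16

def intP(x):
    # antiderivative of P (constant of integration 0)
    return -0.01*x**4 + (0.14/3)*x**3 - 0.08*x**2 + 2.08*x

assert abs(P(3) - 1.78) < 1e-9                # (a)
assert abs(dP(3) - (-0.4)) < 1e-9             # (b)
assert abs(intP(3) - intP(0) - 5.97) < 1e-9   # (c)
assert abs(P(4.5) - 0.55) < 1e-9              # (d)
```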
Exercise 3(a) of Section 4.3, p.218.
MATLAB PROGRAM [from the textbook's website]:
function [C,L]=lagran(X,Y)
%Input - X is a vector that contains a list of abscissas
% - Y is a vector that contains a list of ordinates
%Output - C is a matrix that contains the coefficients of
% the Lagrange interpolatory polynomial
% - L is a matrix that contains the Lagrange
% coefficient polynomials
% NUMERICAL METHODS: Matlab Programs
% (c) 2004 by John H. Mathews and Kurtis D. Fink
% Complementary Software to accompany the textbook:
% NUMERICAL METHODS: Using Matlab, Fourth Edition
% ISBN: 0-13-065248-2
% Prentice-Hall Pub. Inc.
% One Lake Street
% Upper Saddle River, NJ 07458
w=length(X);
n=w-1;
L=zeros(w,w);
%Form the Lagrange coefficient polynomials
for k=1:n+1
    V=1;
    for j=1:n+1
        if k~=j
            V=conv(V,poly(X(j)))/(X(k)-X(j));
        end
    end
    L(k,:)=V;
end
%Determine the coefficients of the Lagrange interpolator
%polynomial
C=Y*L;
INPUT/OUTPUT:
% 3 abscissas give quadratic interpolating polynomial:
x = [0,1,3]
% Compute the ordinates
y = 2*sin(pi*x/6) % this produces one value for each x coordinate
x = 0 1 3
y = 0 1.0000 2.0000
[C,L] = lagran(x,y)
C = -0.1667 1.1667 0
L =
0.3333 -1.3333 1.0000
-0.5000 1.5000 0
0.1667 -0.1667 0
C*[2^2 2 1]' % evaluate the Lagrange polynomial at x=2
ans = 1.6667
2*sin(pi*2/6) % compare with the exact value at x=2
ans = 1.7321
C*[2.4^2 2.4 1]' % evaluate the Lagrange polynomial at x=2.4
ans = 1.8400
2*sin(pi*2.4/6) % compare with the exact value at x=2.4
ans = 1.9021
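The same quadratic can be checked outside MATLAB; the sketch below (a hypothetical helper, not the textbook's code) evaluates the Lagrange form directly:

```python
import math

def lagrange_eval(X, Y, t):
    # evaluate the Lagrange interpolating polynomial through (X[k], Y[k]) at t
    total = 0.0
    for k in range(len(X)):
        term = Y[k]
        for j in range(len(X)):
            if j != k:
                term *= (t - X[j]) / (X[k] - X[j])
        total += term
    return total

X = [0, 1, 3]
Y = [2*math.sin(math.pi*xk/6) for xk in X]   # ordinates [0, 1, 2]

assert abs(lagrange_eval(X, Y, 2) - 1.6667) < 5e-4    # matches the MATLAB run
assert abs(2*math.sin(math.pi*2/6) - 1.7321) < 5e-4   # exact value for comparison
```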
Exercise 3(b) of Section 4.3, p.218.
MATLAB PROGRAM:
Reuse lagran.m from the textbook, printed in part (a) above.
INPUT/OUTPUT:
% 4 abscissas give cubic interpolating polynomial:
x = [0,1,3,5]
% Compute the ordinates
y = 2*sin(pi*x/6) % this produces one value for each x coordinate
x = 0 1 3 5
y = 0 1.0000 2.0000 1.0000
[C,L] = lagran(x,y)
C = -0.0167 -0.1000 1.1167 0
L =
-0.0667 0.6000 -1.5333 1.0000
0.1250 -1.0000 1.8750 0
-0.0833 0.5000 -0.4167 0
0.0250 -0.1000 0.0750 0
C*[2^3 2^2 2 1]' % evaluate the Lagrange polynomial at x=2
ans = 1.7000
2*sin(pi*2/6) % compare with the exact value at x=2
ans = 1.7321
C*[2.4^3 2.4^2 2.4 1]' % evaluate the Lagrange polynomial at x=2.4
ans = 1.8736
2*sin(pi*2.4/6) % compare with the exact value at x=2.4
ans = 1.9021
Exercise 10(a) of Section 4.3, p.219.
Since f(x)=sin(x) has all derivatives of maximum absolute value
1, and the nodes are equispaced with step size h, Theorem 4.4
applies with M2=1 and gives the error bound |E1(h)| <= h^2 /8.
Thus, to have |E1(h)| < 5 e -7, it suffices that
h< sqrt((5e-7)*8),
or h<0.002
Exercise 10(c) of Section 4.3, p.219.
Since f(x)=sin(x) has all derivatives of maximum absolute value
1, and the nodes are equispaced with step size h, Theorem 4.4
applies with M4=1 and gives the error bound |E3(h)| <= h^4 /24.
Thus, to have |E3(h)|< 5 e -7, it suffices that
h<((5e-7)*24)^{1/4},
or h<0.059
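Both step sizes follow from solving the inequalities for h; a quick sketch:

```python
import math

tol = 5e-7
h1 = math.sqrt(8 * tol)      # from |E1(h)| <= h^2/8 < 5e-7 (part a)
h3 = (24 * tol) ** 0.25      # from |E3(h)| <= h^4/24 < 5e-7 (part c)

assert abs(h1 - 0.002) < 1e-12   # h1 = 0.002 exactly
assert 0.058 < h3 < 0.060        # h3 is about 0.0589
```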
Exercise 6 of Section 4.4, p.229.
MATLAB PROGRAM [from textbook's website, "newpoly.m"]:
function [C,D]=newpoly(X,Y)
%Input - X is a vector that contains a list of abscissas
% - Y is a vector that contains a list of ordinates
%Output - C is a vector that contains the coefficients
% of the Newton interpolatory polynomial
% - D is the divided difference table
n=length(X);
D=zeros(n,n);
D(:,1)=Y';
%Use formula (20) to form the divided difference table
for j=2:n
    for k=j:n
        D(k,j)=(D(k,j-1)-D(k-1,j-1))/(X(k)-X(k-j+1));
    end
end
%Determine the coefficients of the Newton interpolatory polynomial
C=D(n,n);
for k=(n-1):-1:1
    C=conv(C,poly(X(k)));
    m=length(C);
    C(m)=C(m)+D(k,k);
end
INPUT/OUTPUT:
x=[1 2 3 4 5], y=3.6./x % Abscissas and ordinates
x = 1 2 3 4 5
y = 3.6000 1.8000 1.2000 0.9000 0.7200
[c,d]=newpoly(x,y)
c = 0.0300 -0.4500 2.5500 -6.7500 8.2200
d =
3.6000 0 0 0 0
1.8000 -1.8000 0 0 0
1.2000 -0.6000 0.6000 0 0
0.9000 -0.3000 0.1500 -0.1500 0
0.7200 -0.1800 0.0600 -0.0300 0.0300
(a) Divided difference table, from d (nodes x(k) = 1, 2, 3, 4, 5):
k x(k) f(x(k)) f[,] f[,,] f[,,,] f[,,,,]
--------------------------------------------------------
0 1.0 3.6000 0 0 0 0
1 2.0 1.8000 -1.8000 0 0 0
2 3.0 1.2000 -0.6000 0.6000 0 0
3 4.0 0.9000 -0.3000 0.1500 -0.1500 0
4 5.0 0.7200 -0.1800 0.0600 -0.0300 0.0300
(b) Newton polynomials, built from the diagonal of the table:
P0(x) = 3.6000
P1(x) = P0(x) - 1.8000 * (x - 1)
P2(x) = P1(x) + 0.6000 * (x - 1) * (x - 2)
P3(x) = P2(x) - 0.1500 * (x - 1) * (x - 2) * (x - 3)
P4(x) = P3(x) + 0.0300 * (x - 1) * (x - 2) * (x - 3) * (x - 4)
(c) Evaluations:
% At x=1.500000:
x=1.500000; c* x.^[4 3 2 1 0]', 3.6/x
ans = 2.4656 % polynomial approximation
ans = 2.4000 % exact value
% At x=2.500000:
x=2.500000; c* x.^[4 3 2 1 0]', 3.6/x
ans = 1.4231 % polynomial approximation
ans = 1.4400 % exact value
% At x=3.500000:
x=3.500000; c* x.^[4 3 2 1 0]', 3.6/x
ans = 1.0406 % polynomial approximation
ans = 1.0286 % exact value
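The divided-difference construction can be mirrored in Python (a sketch, not the textbook's program); it reproduces the top-order difference 0.0300 and the evaluation at x = 1.5:

```python
def divided_differences(X, Y):
    # D[k][j] holds the j-th order divided difference f[x_{k-j}, ..., x_k]
    n = len(X)
    D = [[0.0] * n for _ in range(n)]
    for k in range(n):
        D[k][0] = Y[k]
    for j in range(1, n):
        for k in range(j, n):
            D[k][j] = (D[k][j-1] - D[k-1][j-1]) / (X[k] - X[k-j])
    return D

def newton_eval(X, D, t):
    # nested (Horner-like) evaluation of the Newton form from the diagonal
    total = 0.0
    for k in reversed(range(len(X))):
        total = total * (t - X[k]) + D[k][k]
    return total

X = [1, 2, 3, 4, 5]
D = divided_differences(X, [3.6/xk for xk in X])
assert abs(D[4][4] - 0.03) < 1e-12                   # top-order divided difference
assert abs(newton_eval(X, D, 1.5) - 2.4656) < 5e-4   # matches the MATLAB run
```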
Exercise 3 of Section 4.5, p.241.
Property 2: For N>0, the coefficient of x^N in T_N(x) is 2^{N-1}.
Proof: First strengthen the claim to include that T_N(x) is a polynomial
of degree N in x. This is true for T_1 and T_2. Suppose it is true for T_N
and T_{N-1} where N is any integer greater than 1. Use the recurrence
T_{N+1}(x) = 2*x*T_N(x) - T_{N-1}(x)
to conclude that T_{N+1}(x) is a polynomial of degree N+1 in x, and that
the coefficient of x^{N+1} in T_{N+1}(x) is 2 times the coefficient of x^N
in T_N(x), hence is 2^N.
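The induction can be spot-checked by generating coefficient lists from the recurrence (a sketch; coefficients are stored in ascending powers of x):

```python
def cheb_poly_coeffs(N):
    # coefficients of T_N built from T_{N+1}(x) = 2*x*T_N(x) - T_{N-1}(x)
    T_prev, T_cur = [1], [0, 1]     # T_0 = 1, T_1 = x
    if N == 0:
        return T_prev
    for _ in range(N - 1):
        shifted = [0] + [2*c for c in T_cur]             # 2*x*T_N(x)
        padded = T_prev + [0] * (len(shifted) - len(T_prev))
        T_prev, T_cur = T_cur, [s - p for s, p in zip(shifted, padded)]
    return T_cur

for N in range(1, 10):
    assert cheb_poly_coeffs(N)[-1] == 2**(N - 1)   # leading coefficient is 2^(N-1)
```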
Exercise 4 of Section 4.5, p.241.
Property 3: T_N(x) is an even function of x for even N and an odd function
of x for odd N.
Proof: By induction. The statement is true for T_0(x)=1 and T_1(x)= x.
Now suppose that N>1 and use the recurrence to evaluate
T_{N}(-x) = 2*(-x)*T_{N-1}(-x) - T_{N-2}(-x)
If N is even, then N-2 is even but N-1 is odd, so by the inductive
hypothesis,
T_N(-x) = 2*(-x)*[-T_{N-1}(x)] - T_{N-2}(x)
= 2*x*T_{N-1}(x) - T_{N-2}(x) = T_N(x)
Else N is odd, so N-2 is odd but N-1 is even, so again by the inductive
hypothesis
T_N(-x) = 2*(-x)*T_{N-1}(x) - [-T_{N-2}(x)]
= -[2*x*T_{N-1}(x) - T_{N-2}(x)] = -T_N(x)
Hence T_N has the same parity as N for all N.
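The parity property can also be checked numerically via the identity T_N(cos t) = cos(N t) (a sketch):

```python
import math

def T(N, x):
    # Chebyshev polynomial on [-1, 1] via T_N(cos t) = cos(N t)
    return math.cos(N * math.acos(x))

for N in range(8):
    sign = 1 if N % 2 == 0 else -1      # even N: even function; odd N: odd
    for x in (0.1, 0.35, 0.9):
        assert abs(T(N, -x) - sign * T(N, x)) < 1e-12
```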
Algorithm 2(a,b,c,d) of Section 4.5, p.242.
MATLAB PROGRAM:
function [C,X,Y]=cheby(fun,n,a,b)
%Input - fun is the function to be approximated
% - n is the degree of the Chebyshev interpolating polynomial
% - a is the left endpoint
% - b is the right endpoint
%Output - C is the coefficient list for the polynomial
% - X contains the abscissas
% - Y contains the ordinates
% If fun is defined as an M-file function use the call
% [C,X,Y]=cheby('fun(x)',m,a,b)
if nargin==2, a=-1;b=1;end
d=pi/(2*n+2);
C=zeros(1,n+1);
for k=1:n+1
    X(k)=cos((2*k-1)*d);
end
X=(b-a)*X/2+(a+b)/2;
x=X;
Y=eval(fun);
for k=1:n+1
    z=(2*k-1)*d;
    for j=1:n+1
        C(j)=C(j)+Y(k)*cos((j-1)*z);
    end
end
C=2*C/(n+1);
C(1)=C(1)/2;
OUTPUT:
Call the function in "cheby.m" from the Matlab console as follows, to
calculate the coefficients of the interpolating polynomials:
>> format long
>> a=-1; b=1; fun='sin(x)';
(a)
>> n=4; c4=cheby(fun,n,a,b)
c4 = 0 0.880101161015327 0 -0.039123703313740 0
(b)
>> n=5; c5=cheby(fun,n,a,b)
c5 = 0 0.880101171513789 0 -0.039126718463837 0 0.000502520112057
(c)
>> n=6; c6=cheby(fun,n,a,b)
c6 = 0 0.880101171489829 0 -0.039126707941377 0 0.000499504961922 0
(d)
>> n=7; c7=cheby(fun,n,a,b)
c7 = 0 0.880101171489867 0 -0.039126707965375 0 0.000499515484383 0 -0.000003015150136
Plot the interpolations as follows. Note that cheby returns coefficients in
the Chebyshev basis, i.e. p(x) = sum over j of C(j+1)*T_j(x), so evaluate
with T_j(x) = cos(j*acos(x)) rather than with powers of x:
>> x=-1:0.01:1; y=sin(x); plot(x,y) % lots of x values for a smooth-looking plot
>> y4 = c4*cos((0:4)'*acos(x)); plot(x,y4)
>> y5 = c5*cos((0:5)'*acos(x)); plot(x,y5)
>> y6 = c6*cos((0:6)'*acos(x)); plot(x,y6)
>> y7 = c7*cos((0:7)'*acos(x)); plot(x,y7)
>> % plot differences to see the errors:
>> plot(x,y-y5)
>> plot(x,y-y4)
>> plot(x,y-y6)
>> plot(x,y-y7)
>> plot(x,y4-y7)
>> plot(x,y4-y5)
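For a cross-check, the coefficient computation in cheby.m can be mirrored in Python (a sketch of the same discrete Chebyshev transform, with nodes x_k = cos((2k-1)*pi/(2n+2))):

```python
import math

def cheby_coeffs(f, n):
    # Chebyshev-series coefficients C[j] of T_j for the degree-n interpolant on [-1,1]
    d = math.pi / (2*n + 2)
    z = [(2*k - 1) * d for k in range(1, n + 2)]       # angles of the nodes
    Y = [f(math.cos(zk)) for zk in z]                  # ordinates at the nodes
    C = [2.0/(n + 1) * sum(Y[k] * math.cos(j * z[k]) for k in range(n + 1))
         for j in range(n + 1)]
    C[0] /= 2
    return C

c4 = cheby_coeffs(math.sin, 4)
assert abs(c4[1] - 0.880101161015327) < 1e-9      # matches the MATLAB output (a)
assert abs(c4[3] + 0.039123703313740) < 1e-9
assert max(abs(c4[j]) for j in (0, 2, 4)) < 1e-9  # even coefficients vanish for sin
```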
Exercise 2(a,b) of Section 4.6, p.248.
(a) Write R_{1,1}(x)=(a+bx)/(c+dx) and obtain the values of
a,b,c,d for f(x)=ln(1+x)/x = 1-x/2+x^2/3-x^3/4+... by the
method of undetermined coefficients:
Since R_{1,1}(0)=f(0)=1, we must have a/c=1 ==> a=c. With
that, and since R'_{1,1}(0)=f'(0)= -1/2, we must have
(bc-ad)/c^2 = (b-d)/c = -1/2 ==> b=(-c+2d)/2.
With that, and since R''_{1,1}(0)=f''(0)=2/3, we must have
2 (ad^2 -bcd)/ c^3 = 2/3 ==> b=c/6, d=2c/3.
We can choose c=6 so that all four coefficients are
integers. This yields a=c=6, b=c/6=1, and d=2c/3=4, so that
R_{1,1}(x) = (6+x)/(6+4x).
(b) Set R_{2,1}(x)=xR_{1,1}(x) = (6x+x^2)/(6+4x). Then
R_{2,1}(0) = 0 = ln(1+0).
But also, R'_{2,1}(x)=R_{1,1}(x)+xR'_{1,1}(x)
==> R'_{2,1}(0)=R_{1,1}(0)=1.
Likewise, R''_{2,1}(x)=2R'_{1,1}(x)+xR''_{1,1}(x)
==> R''_{2,1}(0) = 2R'_{1,1}(0) = -1.
Finally, R'''_{2,1}(x) = 3R''_{1,1}(x)+xR'''_{1,1}(x)
==> R'''_{2,1}(0) = 3R''_{1,1}(0) = 2.
These are exactly the first 3 derivatives of ln(1+x)
evaluated at x=0, so the function R_{2,1}(x) is indeed the
Pade approximant.
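Since R_{2,1} matches ln(1+x) through the x^3 term, its error should shrink like x^4; a quick sketch confirms this (the leading error coefficient, about -1/36, is worked out from the series, not taken from the textbook):

```python
import math

def R21(x):
    # R_{2,1}(x) = x * R_{1,1}(x) = x*(6 + x)/(6 + 4*x) from part (b)
    return x * (6 + x) / (6 + 4*x)

# agreement through x^3 means the error is O(x^4); the leading error
# term works out to about -x^4/36, well inside the x^4/10 bound below
for x in (0.1, 0.05, 0.01):
    assert abs(math.log(1 + x) - R21(x)) < x**4 / 10
```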
Algorithm 3(a,b) of Section 4.6, p.250.
(a) The following Matlab commands generate the needed graphs:
>> x=-1:0.01:1; z=tan(x); % lots of x values for a smooth-looking plot
>> t9=x+(1/3)*x.^3 + (2/15)*x.^5 + (17/315)*x.^7 + (62/2835)*x.^9;
>> r54 = (945*x-105*x.^3 + x.^5)./(945-420*x.^2+15*x.^4);
>> plot(x,z) % original function
>> plot(x,t9) % Taylor approximation
>> plot(x,r54) % Pade approximation
>> plot(x,t9-z) % difference: Taylor - Exact
>> plot(x,r54-z) % difference: Pade - Exact
(b) From those graphs, it seems that the greatest
approximation error occurs at the right endpoint, x=1:
| T9(1) - tan(1) | ~ 1.5 e -2
| R54(1) - tan(1) | ~ 3.1 e -7
The Pade approximation seems much better.
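The two endpoint errors quoted above can be verified numerically (a Python sketch of the same comparison):

```python
import math

x = 1.0
t9 = x + x**3/3 + 2*x**5/15 + 17*x**7/315 + 62*x**9/2835      # Taylor, degree 9
r54 = (945*x - 105*x**3 + x**5) / (945 - 420*x**2 + 15*x**4)  # Pade R_{5,4}

err_t9 = abs(t9 - math.tan(1.0))
err_r54 = abs(r54 - math.tan(1.0))
assert 1.4e-2 < err_t9 < 1.6e-2   # about 1.5e-2
assert err_r54 < 5e-7             # about 3.1e-7
assert err_r54 < err_t9           # the Pade approximation is far more accurate
```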