2023-10-05
Some shorthand: leg for legend
Linear algebra: tuple (a, b), dot product dot()
Graphics: layout = (2, 1) and subplot, m = :o, ytick = [], xflip = true, annotate(), fill = 0
Polynomials.fit(), plinterp(), Spline1D()
fdweights() (pay attention to line 35 of Function 5.4.7)
quadgk(), trapezoid(), intadapt()
What is the interpolation problem?
Issue: Interpolation by a polynomial at equally spaced nodes is ill-conditioned as the degree of the polynomial grows.
Solution: Piecewise polynomials of modest degree PLUS smoothing
But it may be difficult to actually find closed forms for these cardinal functions.
An example is the cubic spline interpolant.
Rather than use the cardinal-function straitjacket, it may be more “natural” to impose the interpolation and smoothness conditions directly.
The implementation of cubic splines exactly does this! Let \(S\) be the interpolant and \(S_k\) be a cubic polynomial on the interval \([t_{k-1}, t_k]\).
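With nodes \(t_0<t_1<\cdots<t_n\) and data values \(y_k\), the standard conditions on the pieces are (stated here for reference; this is the textbook setup):

```latex
\begin{aligned}
S_k(t_{k-1}) &= y_{k-1}, \quad S_k(t_k) = y_k, && k=1,\ldots,n \quad \text{(interpolation)}\\
S_k'(t_k) &= S_{k+1}'(t_k), && k=1,\ldots,n-1 \quad \text{(continuous slope)}\\
S_k''(t_k) &= S_{k+1}''(t_k), && k=1,\ldots,n-1 \quad \text{(continuous curvature)}
\end{aligned}
```

That is \(4n-2\) equations for the \(4n\) cubic coefficients, which is why two extra endpoint restrictions are needed.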
Endpoint restrictions could be:
Condition number for interpolation:
Convergence results:
Problem: Look at performance of polynomial interpolant, piecewise linear interpolant, and cubic spline in the case of \(y=\tanh(t)\) on \([-2,4]\).
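A minimal sketch of the comparison (assuming the Polynomials and Dierckx packages and 13 equispaced nodes; the exercise may prescribe different nodes):

```julia
using Plots, Polynomials, Dierckx

f = t -> tanh(t)
t = collect(range(-2, 4, length=13))   # equispaced nodes (an assumption)
y = f.(t)

p = Polynomials.fit(t, y)              # degree-12 interpolating polynomial
S = Spline1D(t, y)                     # cubic spline from Dierckx

x = range(-2, 4, length=500)
plot(x, f.(x), label="tanh", legend=:bottomright)
plot!(x, p.(x), label="polynomial")
plot!(x, S.(x), label="cubic spline")
plot!(t, y, label="piecewise linear")  # the broken line through the nodes
```

The high-degree polynomial typically shows wiggle near the ends of the interval, while the spline and piecewise linear interpolants stay close to \(\tanh\).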
Problem: Create a flying saucer through a parametric plot.
To obtain finite difference formulas for the \(k\)th order derivative, follow the mantra: “Interpolate, then differentiate \(k\) times.”
An example of centered difference formula:
Differentiate:
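Carrying out the mantra with the three nodes \(x-h\), \(x\), \(x+h\): fit a quadratic, then differentiate. The results are the classic second-order centered formulas:

```latex
f'(x) \approx \frac{f(x+h)-f(x-h)}{2h}, \qquad
f''(x) \approx \frac{f(x-h)-2f(x)+f(x+h)}{h^2},
```

both with \(O(h^2)\) truncation error for smooth \(f\).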
Issue: Finite difference formulas are simply too abundant.
Solution: Narrow down using accuracy results.
Problem: Compute second-order accurate approximations to \(f'\) using the following information:
Typo / sequencing issue in the book: the accuracy of these formulas has not yet been established at this point, yet the exercise also asks for forward and backward differences.
f = x -> sin(2x);
h = 0.1;
t = [0.9, 1.0, 1.1];
deriv = 2*cos.(2t);      # exact values of f'
CD = []; FD = []; BD = [];
for x in t
    push!(CD, (f(x+h) - f(x-h)) / (2h));             # centered, O(h^2)
    push!(FD, (-3f(x) + 4f(x+h) - f(x+2h)) / (2h));  # forward, O(h^2)
    push!(BD, (f(x-2h) - 4f(x-h) + 3f(x)) / (2h));   # backward, O(h^2)
end
FNC.pretty_table((t=t,deriv=deriv,CD=CD,FD=FD,BD=BD),["t" "f'(t)" "Centered" "Forward" "Backward"])
┌─────┬───────────┬───────────┬───────────┬───────────┐
│ t │ f'(t) │ Centered │ Forward │ Backward │
├─────┼───────────┼───────────┼───────────┼───────────┤
│ 0.9 │ -0.454404 │ -0.451381 │ -0.464248 │ -0.456509 │
│ 1.0 │ -0.832294 │ -0.826756 │ -0.846849 │ -0.839623 │
│ 1.1 │ -1.177 │ -1.16917 │ -1.19569 │ -1.18926 │
└─────┴───────────┴───────────┴───────────┴───────────┘
Problem: Let \(f(x)=e^{-x}\), \(x=0.5\), and \(h=0.2\). Use fdweights() to get the necessary weights on five nodes centered at \(x\).
f = x -> exp(-x);
dfdx = x -> -exp(-x);
exact_value_odd = dfdx(0.5);     # odd-order derivatives of e^{-x} at 0.5
exact_value_even = f(0.5);       # even-order derivatives equal f itself
h = 0.2;
t = (0.5-2*h):h:(0.5+2*h);       # five nodes centered at x = 0.5
m = 1:4;
CD = [];
for k in m
    w = FNC.fdweights(t .- 0.5, k);   # weights for the kth derivative
    push!(CD, dot(w, f.(t)));
end
error = CD - [exact_value_odd, exact_value_even, exact_value_odd, exact_value_even];
FNC.pretty_table((m=m, CD=CD,error=error),["mth order" "Centered" "Error"])
┌───────────┬───────────┬─────────────┐
│ mth order │ Centered │ Error │
├───────────┼───────────┼─────────────┤
│ 1 │ -0.606498 │ 3.25027e-5 │
│ 2 │ 0.60652 │ -1.08213e-5 │
│ 3 │ -0.61262 │ -0.00608962 │
│ 4 │ 0.610586 │ 0.00405569 │
└───────────┴───────────┴─────────────┘
Problem: Use fdweights()
on centered node vectors of length 3, 5, 7, and 9 to produce a table of weights for the second derivative \(f''(0)\).
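The outputs below were presumably produced by a loop along these lines (a reconstruction; the Rational nodes are what make the familiar textbook fractions appear):

```julia
for r in 1:4
    @show FNC.fdweights(Rational.(-r:r), 2)
end
```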
FNC.fdweights(Rational.(-r:r), 2) = Rational{Int64}[1//1, -2//1, 1//1]
FNC.fdweights(Rational.(-r:r), 2) = Rational{Int64}[-1//12, 4//3, -5//2, 4//3, -1//12]
FNC.fdweights(Rational.(-r:r), 2) = Rational{Int64}[1//90, -3//20, 3//2, -49//18, 3//2, -3//20, 1//90]
FNC.fdweights(Rational.(-r:r), 2) = Rational{Int64}[-1//560, 8//315, -1//5, 8//5, -205//72, 8//5, -1//5, 8//315, -1//560]
Problem: (statement not captured in these notes.) The table below tracks a centered-difference estimate as \(h\) shrinks: the error decreases steadily until roughly \(h\approx 10^{-6}\), then grows again as floating-point roundoff takes over.
┌─────────────┬───────────┬─────────────┐
│ h │ Centered │ Error │
├─────────────┼───────────┼─────────────┤
│ 0.5 │ -0.308713 │ 3.46329 │
│ 0.25 │ -2.38196 │ 5.53654 │
│ 0.125 │ 1.56762 │ 1.58696 │
│ 0.0625 │ 2.89668 │ 0.257902 │
│ 0.03125 │ 3.10315 │ 0.0514259 │
│ 0.015625 │ 3.14262 │ 0.0119621 │
│ 0.0078125 │ 3.15165 │ 0.00293336 │
│ 0.00390625 │ 3.15385 │ 0.000729746 │
│ 0.00195312 │ 3.1544 │ 0.000182212 │
│ 0.000976562 │ 3.15453 │ 4.55388e-5 │
│ 0.000488281 │ 3.15457 │ 1.13838e-5 │
│ 0.000244141 │ 3.15458 │ 2.8459e-6 │
│ 0.00012207 │ 3.15458 │ 7.11473e-7 │
│ 6.10352e-5 │ 3.15458 │ 1.7787e-7 │
│ 3.05176e-5 │ 3.15458 │ 4.44659e-8 │
│ 1.52588e-5 │ 3.15458 │ 1.11111e-8 │
│ 7.62939e-6 │ 3.15458 │ 2.77647e-9 │
│ 3.8147e-6 │ 3.15458 │ 6.66442e-10 │
│ 1.90735e-6 │ 3.15458 │ 1.71677e-10 │
│ 9.53674e-7 │ 3.15458 │ 1.19361e-10 │
│ 4.76837e-7 │ 3.15458 │ 1.1347e-10 │
│ 2.38419e-7 │ 3.15458 │ 2.29885e-10 │
│ 1.19209e-7 │ 3.15458 │ 4.68607e-10 │
│ 5.96046e-8 │ 3.15458 │ 2.32536e-9 │
│ 2.98023e-8 │ 3.15458 │ 4.62716e-10 │
│ 1.49012e-8 │ 3.15458 │ 7.9133e-9 │
│ 7.45058e-9 │ 3.15458 │ 7.9133e-9 │
│ 3.72529e-9 │ 3.15458 │ 6.98786e-9 │
│ 1.86265e-9 │ 3.15458 │ 1.27123e-7 │
│ 9.31323e-10 │ 3.15458 │ 6.75179e-8 │
└─────────────┴───────────┴─────────────┘
“Interpolate, then integrate.”
Let \(I\) be the exact value of the integral. Error \(I-T_f\left(n\right)\) has an upper bound:
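A standard form of this estimate (the Euler–Maclaurin-type result, consistent with the \(h^2/12\) factor used in the computations below):

```latex
I - T_f(n) = -\frac{h^2}{12}\bigl[f'(b) - f'(a)\bigr] + O(h^4),
\qquad h = \frac{b-a}{n}.
```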
Bottom line: Trapezoid rule is second-order accurate.
To get fourth-order accuracy:
Gregory and Simpson are analogous to higher-order bias adjustments in statistics.
Sixth-order accuracy can also be obtained by extrapolation.
How does extrapolation work?
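In brief: the trapezoid error expands in even powers of \(h\), so combining two step sizes cancels the leading term:

```latex
\begin{aligned}
T_f(n)  &= I + c_2 h^2 + c_4 h^4 + \cdots,\\
T_f(2n) &= I + c_2 \frac{h^2}{4} + c_4 \frac{h^4}{16} + \cdots,\\
\frac{4\,T_f(2n) - T_f(n)}{3} &= I + O(h^4).
\end{aligned}
```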
The result is Simpson’s formula: \[S_f\left(2n\right)=\frac{1}{3}\left(4T_f(2n)-T_f(n)\right)\]
Idea extends to sixth-order accuracy: Romberg integration
Simpson’s formula can also be motivated using “Interpolate, then integrate” (see Exercise 5.6.4).
Exercise 5.6.5 asks you to show that the previous result is equal to \(S_f(n)\) (typo here in book!).
Computations: Pay attention to how many evaluations of \(f\) are made and to whether previous evaluations could be reused.
Problem: Estimate each integral for \(n=10\cdot 2^k\) nodes for \(k=1,2,\ldots,10\) using a variety of methods:
For \(f(x)=x\log(1+x)\):
f = x -> x*log(1+x);        # exact integral over [0,1] is 1/4
n = @. 10*2^(1:10);
result = []; err = []; resultG = []; errG = []; resultS = []; errS = []; errR = [];
for m in n
    simpvec = ones(m+1);            # Simpson weights 1,4,2,4,...,2,4,1
    simpvec[2:2:m] .= 4;
    simpvec[3:2:(m-1)] .= 2;
    T, t, y = FNC.trapezoid(f, 0, 1, m);
    T2 = FNC.trapezoid(f, 0, 1, 2*m)[1];
    T4 = FNC.trapezoid(f, 0, 1, 4*m)[1];
    # Gregory: trapezoid plus endpoint corrections (fourth order)
    TG = T - step(t)/24*((y[m-1]+y[3]) - 4*(y[m]+y[2]) + 3*(y[m+1]+y[1]));
    TS = step(t)/3*dot(simpvec, y);                  # Simpson
    TR = (16*(4*T4-T2)/3 - (4*T2-T)/3)/15;           # Romberg (sixth order)
    push!(result, T);  push!(resultG, TG);  push!(resultS, TS);
    push!(err, abs(0.25 - T));    push!(errG, abs(0.25 - TG));
    push!(errS, abs(0.25 - TS));  push!(errR, abs(0.25 - TR));
end
table = [n err errG errS errR];
pretty_table(table,["# nodes", "Trapezoid Err", "Gregory Err", "Simpson Err", "Romberg Err"])
┌─────────┬───────────────┬─────────────┬─────────────┬─────────────┐
│ # nodes │ Trapezoid Err │ Gregory Err │ Simpson Err │ Romberg Err │
├─────────┼───────────────┼─────────────┼─────────────┼─────────────┤
│ 20 │ 0.000248551 │ 3.60208e-7 │ 8.65094e-8 │ 2.32647e-13 │
│ 40 │ 6.21417e-5 │ 2.40814e-8 │ 5.42069e-9 │ 3.60822e-15 │
│ 80 │ 1.55357e-5 │ 1.55686e-9 │ 3.39011e-10 │ 0.0 │
│ 160 │ 3.88394e-6 │ 9.89686e-11 │ 2.11917e-11 │ 5.55112e-17 │
│ 320 │ 9.70985e-7 │ 6.23834e-12 │ 1.32455e-12 │ 0.0 │
│ 640 │ 2.42746e-7 │ 3.9152e-13 │ 8.28226e-14 │ 0.0 │
│ 1280 │ 6.06866e-8 │ 2.45359e-14 │ 5.16254e-15 │ 1.11022e-16 │
│ 2560 │ 1.51717e-8 │ 1.55431e-15 │ 3.33067e-16 │ 0.0 │
│ 5120 │ 3.79291e-9 │ 1.66533e-16 │ 0.0 │ 5.55112e-17 │
│ 10240 │ 9.48228e-10 │ 5.55112e-17 │ 0.0 │ 0.0 │
└─────────┴───────────────┴─────────────┴─────────────┴─────────────┘
For \(f(x)=x^2\tan^{-1} x\):
┌─────────┬───────────────┬─────────────┬─────────────┬─────────────┐
│ # nodes │ Trapezoid Err │ Gregory Err │ Simpson Err │ Romberg Err │
├─────────┼───────────────┼─────────────┼─────────────┼─────────────┤
│ 20 │ 0.000431464 │ 8.86915e-7 │ 1.91461e-7 │ 3.80029e-13 │
│ 40 │ 0.000107857 │ 5.61757e-8 │ 1.19434e-8 │ 5.88418e-15 │
│ 80 │ 2.69637e-5 │ 3.52896e-9 │ 7.46104e-10 │ 5.55112e-17 │
│ 160 │ 6.74089e-6 │ 2.2104e-10 │ 4.6626e-11 │ 2.77556e-17 │
│ 320 │ 1.68522e-6 │ 1.38287e-11 │ 2.91409e-12 │ 0.0 │
│ 640 │ 4.21305e-7 │ 8.64669e-13 │ 1.82132e-13 │ 1.11022e-16 │
│ 1280 │ 1.05326e-7 │ 5.40401e-14 │ 1.14075e-14 │ 0.0 │
│ 2560 │ 2.63315e-8 │ 3.30291e-15 │ 7.21645e-16 │ 1.11022e-16 │
│ 5120 │ 6.58288e-9 │ 1.94289e-16 │ 5.55112e-17 │ 5.55112e-17 │
│ 10240 │ 1.64572e-9 │ 8.32667e-17 │ 5.55112e-17 │ 5.55112e-17 │
└─────────┴───────────────┴─────────────┴─────────────┴─────────────┘
For \(f(x)=\exp(x)\cos(x)\):
┌─────────┬───────────────┬─────────────┬─────────────┬─────────────┐
│ # nodes │ Trapezoid Err │ Gregory Err │ Simpson Err │ Romberg Err │
├─────────┼───────────────┼─────────────┼─────────────┼─────────────┤
│ 20 │ 0.00298643 │ 7.35243e-6 │ 1.61461e-6 │ 2.81819e-12 │
│ 40 │ 0.000746682 │ 4.69672e-7 │ 1.00744e-7 │ 4.37428e-14 │
│ 80 │ 0.000186675 │ 2.96356e-8 │ 6.29386e-9 │ 6.66134e-16 │
│ 160 │ 4.66691e-5 │ 1.86043e-9 │ 3.93325e-10 │ 2.22045e-16 │
│ 320 │ 1.16673e-5 │ 1.16523e-10 │ 2.45821e-11 │ 0.0 │
│ 640 │ 2.91682e-6 │ 7.28995e-12 │ 1.53655e-12 │ 2.22045e-16 │
│ 1280 │ 7.29206e-7 │ 4.55636e-13 │ 9.59233e-14 │ 8.88178e-16 │
│ 2560 │ 1.82302e-7 │ 2.84217e-14 │ 5.9952e-15 │ 0.0 │
│ 5120 │ 4.55754e-8 │ 1.33227e-15 │ 0.0 │ 0.0 │
│ 10240 │ 1.13938e-8 │ 2.22045e-16 │ 0.0 │ 0.0 │
└─────────┴───────────────┴─────────────┴─────────────┴─────────────┘
For \(f(x)=\sqrt{x}\log\ x\):
┌─────────┬───────────────┬─────────────┬─────────────┬─────────────┐
│ # nodes │ Trapezoid Err │ Gregory Err │ Simpson Err │ Romberg Err │
├─────────┼───────────────┼─────────────┼─────────────┼─────────────┤
│ 20 │ 0.0112056 │ 0.00693196 │ 0.00581359 │ 0.000791061 │
│ 40 │ 0.00450979 │ 0.00272522 │ 0.00227785 │ 0.000304098 │
│ 80 │ 0.00179044 │ 0.00106052 │ 0.000883986 │ 0.000116147 │
│ 160 │ 0.000702865 │ 0.000409249 │ 0.000340341 │ 4.41159e-5 │
│ 320 │ 0.000273335 │ 0.000156818 │ 0.000130159 │ 1.66763e-5 │
│ 640 │ 0.000105454 │ 5.97306e-5 │ 4.94936e-5 │ 6.27744e-6 │
│ 1280 │ 4.04091e-5 │ 2.26337e-5 │ 1.87274e-5 │ 2.35428e-6 │
│ 2560 │ 1.53939e-5 │ 8.53811e-6 │ 7.05557e-6 │ 8.80045e-7 │
│ 5120 │ 5.83457e-6 │ 3.20814e-6 │ 2.64811e-6 │ 3.28e-7 │
│ 10240 │ 2.20155e-6 │ 1.20123e-6 │ 9.9055e-7 │ 1.21925e-7 │
└─────────┴───────────────┴─────────────┴─────────────┴─────────────┘
For \(f(x)=\sqrt{1-x^2}\):
┌─────────┬───────────────┬─────────────┬─────────────┬─────────────┐
│ # nodes │ Trapezoid Err │ Gregory Err │ Simpson Err │ Romberg Err │
├─────────┼───────────────┼─────────────┼─────────────┼─────────────┤
│ 20 │ 0.00328194 │ 0.00158795 │ 0.0012864 │ 0.000140956 │
│ 40 │ 0.00116123 │ 0.000560636 │ 0.000454325 │ 4.9825e-5 │
│ 80 │ 0.000410714 │ 0.000198073 │ 0.000160542 │ 1.76139e-5 │
│ 160 │ 0.000145237 │ 7.0004e-5 │ 5.67448e-5 │ 6.22713e-6 │
│ 320 │ 5.1354e-5 │ 2.47456e-5 │ 2.00596e-5 │ 2.20157e-6 │
│ 640 │ 1.81572e-5 │ 8.74811e-6 │ 7.09166e-6 │ 7.78361e-7 │
│ 1280 │ 6.41971e-6 │ 3.09278e-6 │ 2.5072e-6 │ 2.7519e-7 │
│ 2560 │ 2.26974e-6 │ 1.09344e-6 │ 8.86413e-7 │ 9.72941e-8 │
│ 5120 │ 8.02478e-7 │ 3.86585e-7 │ 3.13392e-7 │ 3.43986e-8 │
│ 10240 │ 2.8372e-7 │ 1.36678e-7 │ 1.108e-7 │ 1.21617e-8 │
└─────────┴───────────────┴─────────────┴─────────────┴─────────────┘
Problem: For \(n=10,20,30,\ldots,200\), compute the trapezoidal approximation to \[ \int_{0}^1 \frac{1}{2.01+\sin (6\pi x)-\cos(2\pi x)} \,d x \approx 0.9300357672424684\]
Make two separate plots of the absolute error as a function of \(n\):
n = 10*(1:20);
f = x -> 1/(2.01 + sin(6*pi*x) - cos(2*pi*x));
err = [];
for m in n
    T = FNC.trapezoid(f, 0, 1, m)[1];
    push!(err, abs(0.9300357672424684 - T));   # abs, not abs. — T is a scalar
end
l = @layout [a; b];
p1 = plot(n, err, m=:o, label="Absolute error",
    xaxis=(:log10, L"n"), yaxis=(:log10, "error"),
    title="Convergence of trapezoidal integration");
p2 = plot(n, err, m=:o, label="Absolute error",
    xaxis=(L"n"), yaxis=(:log10, "error"),
    title="Convergence of trapezoidal integration");
plot(p1, p2, layout=l)   # combine the two panels using the layout
For functions with “too much” variation concentrated in part of the interval, the trapezoid rule (along with its refinements) may not work very well. (See Demo 5.7.1.)
Adaptive integration has two core ideas:
But may not work very well:
Algorithm:
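A generic sketch of the recursion (not FNC's exact intadapt, which is trapezoid-based; here Simpson's rule handles each subinterval, and the error is estimated by comparing one coarse step to two half-steps — all names are illustrative):

```julia
# Adaptive quadrature sketch: compare Simpson on [a,b] against Simpson on the
# two halves; if they disagree by more than the tolerance, recurse on each half.
simpson(f, a, b) = (b - a)/6 * (f(a) + 4f((a + b)/2) + f(b))

function adaptquad(f, a, b, tol)
    m = (a + b)/2
    coarse = simpson(f, a, b)
    fine = simpson(f, a, m) + simpson(f, m, b)
    if abs(fine - coarse) < 15*tol      # 15 comes from the O(h^4) error ratio
        return fine
    else
        return adaptquad(f, a, m, tol/2) + adaptquad(f, m, b, tol/2)
    end
end
```

For example, `adaptquad(exp, 0, 1, 1e-10)` should agree with \(e-1\) to roughly the requested tolerance; the key point is that nodes cluster only where the integrand misbehaves.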
Problem: Three integrals are available in the exercise. Do the following:
Use quadgk() to find the value to at least 12 digits.
Use intadapt() to evaluate the integral to a tolerance of \(10^{-8}\).
Plots of the three functions to be integrated:
For the function \(f(x)=\mathrm{sech}(\sin(1/x))\):
f = x -> sech(sin(1/x));
dfdx = x-> -sech(sin(1/x))*tanh(sin(1/x))*cos(1/x)*(-1/x^2);
(exact, E) = quadgk(f,0.1,3,atol=1e-12);
A,t = FNC.intadapt(f, 0.1, 3, 1e-8);
num_nodes = length(t);
error = A - exact;
# Step size at which the Euler-Maclaurin error estimate h^2*|f'(b)-f'(a)|/12 equals the tolerance:
h_euler = sqrt(12*(1e-8)/abs(dfdx(3) - dfdx(0.1)));
num_nodes_euler = (3-0.1)/h_euler;
@show (exact, A, error, num_nodes);
(exact, A, error, num_nodes) = (2.422950184278125, 2.422950188207652, 3.929526926071958e-9, 209)
For the function \(f(x)=\log((x+1)^3)\):
f = x -> log((x+1)^3);
dfdx = x-> 3/(x+1);
(exact, E) = quadgk(f,-0.9,9,atol=1e-12);
A,t = FNC.intadapt(f,-0.9, 9, 1e-8);
num_nodes = length(t);
error = A - exact;
h_euler = sqrt(12*(1e-8)/abs(dfdx(9) - dfdx(-0.9)));   # step size matching the tolerance via h^2|f'(b)-f'(a)|/12
num_nodes_euler = (9-(-0.9))/h_euler;
@show (exact, A, error, num_nodes);
(exact, A, error, num_nodes) = (40.06832831771958, 40.068328093517046, -2.2420253742438945e-7, 205)
For the function \(f(x)=\cos(x^3)\):
f = x -> cos(x^3);
dfdx = x-> -sin(x^3)*(3*x^2);
(exact, E) = quadgk(f,-pi,pi,atol=1e-12);
A,t = FNC.intadapt(f, -pi, pi, 1e-8);
num_nodes = length(t);
error = A - exact;
h_euler = sqrt(12*(1e-8)/abs(dfdx(pi) - dfdx(-pi)));   # abs makes the Complex wrapper unnecessary
num_nodes_euler = (pi-(-pi))/h_euler;
@show (exact, A, error, num_nodes);
(exact, A, error, num_nodes) = (1.5184871958591977, 1.518487104090202, -9.17689957358192e-8, 945)
Problem: Consider the improper integral \(\displaystyle \int_0^1 x^{-2/3}\, dx\). Use intadapt() and make a log-log plot of the error as a function of \(\epsilon\) for \(\epsilon=10^{-15},10^{-16},\ldots,10^{-45}\).
Problem: Plot the sine integral function \[\operatorname{Si}(x) = \int_0^x \frac{\sin z}{z}\, dz.\]
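A brief sketch for the sine integral (the lower limit is nudged to eps() to avoid evaluating \(\sin(z)/z\) at zero; the plotting range is an assumption):

```julia
using Plots

# Si(x) via adaptive quadrature; FNC.intadapt returns (integral, nodes)
Si = x -> FNC.intadapt(z -> sin(z)/z, eps(), x, 1e-10)[1]
plot(Si, 0.01, 10, label="Si(x)", xlabel="x")
```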
Problem: Consider the error function: \[\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}}\int_0^x e^{-s^2}\,ds\]
Define an approximation \(g\) that applies trapezoid() with \(n=300\). Make a plot of the error \(g(x)-\operatorname{erf}(x)\) at 500 points in the interval \([0,3]\).
Define another approximation \(h\) that applies intadapt() with error tolerance \(10^{-7}\). Plot the error in \(h\). Why does it look so different from the previous case?
Suppose you wished to find \(x\) such that \(\operatorname{erf}(x) = 0.95\) by using rootfinding on one of your two approximations. Which would be preferable?
Adjust the tolerance: the code below uses \(10^{-14}\) rather than \(10^{-7}\).
using SpecialFunctions   # provides erf
g = x -> FNC.trapezoid(z -> 2/sqrt(pi)*exp(-z^2), 0, x, 300)[1];
h = x -> FNC.intadapt(z -> 2/sqrt(pi)*exp(-z^2), 0, x, 1e-14)[1];
plot([x -> (g(x) - erf(x)), x -> (h(x) - erf(x))], 0, 3, label=["Trapezoid error" "Adaptive error"])