FNC Chapter 5

Andrew Pua

2023-10-05

Learning objectives

  • Show that interpolation via piecewise polynomials plus smoothing is key to achieving numerical stability
  • Demonstrate that polynomial interpolation is the key to numerical differentiation and numerical integration
  • Derive a wide assortment of finite difference formulas and assess their accuracy
  • Derive a wide assortment of numerical integration formulas and assess their accuracy

What’s new in terms of Julia

  • Some shorthand: leg for legend

  • Linear algebra: tuple (a, b), dot product dot()

  • Graphics

    • layout = (2, 1) and subplot
    • m = :o, ytick = [], xflip = true
    • annotate()
    • fill = 0
  • Object type: Tuple
  • Conditional statements (if, elseif, else): Function 5.2.2 implementation of hat functions
  • Interpolation: Polynomials.fit(), plinterp(), Spline1D()
  • Numerical differentiation: fdweights() (pay attention to line 35 of Function 5.4.7)
  • Numerical integration: quadgk(), trapezoid(), intadapt()
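
(A tiny illustration of two of the items above, to fix ideas; the vectors and tuple are my own examples:)

using FundamentalsNumericalComputation
w = [1.0, 2.0, 3.0];  v = [4.0, 5.0, 6.0];
@show dot(w, v);        # dot product from LinearAlgebra: 32.0
p = (0.5, 1.25);        # a Tuple, e.g. a point (a, b)
@show typeof(p);        # Tuple{Float64, Float64}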

Key theory: interpolation

  • What is the interpolation problem?

    • Given: \(n+1\) distinct points \((t_0,y_0)\), \((t_1,y_1),\ldots,(t_n,y_n)\), with \(t_0<t_1<\cdots<t_n\), called nodes
    • Target: Find \(p(x)\), called the interpolant, such that \(p(t_k)=y_k\) for \(k=0,\dots,n\).
  • Issue: Interpolation by a polynomial at equally spaced nodes is ill-conditioned as the degree of the polynomial grows (see the first sketch after this list).

  • Solution: Piecewise polynomials of modest degree PLUS smoothing

    • Piecewise interpolants have a nice geometric structure.
    • Smoothing is achieved by restricting values of the derivatives at “sharp turns”.
  • Cardinal functions \(\phi_k\) map the node set to \(\{0,1\}\), such that \[\phi_k\left(t_j\right)=\begin{cases}1 & \mathrm{if}\ j=k \\ 0 & \mathrm{if}\ \ j\neq k\end{cases}\]
  • An example is the hat function \(H_k\). See Demo 5.2.3 (its code needs a small fix: use legend=:none)
  • Every interpolant \(p(x)\) is a linear combination of cardinal functions \(\phi_k\).
  • But it may be difficult to actually find closed forms for these cardinal functions.

  • An example is the cubic spline interpolant.

  • Rather than wear the cardinal-function straitjacket, it may be more “natural” to

    • retain the spirit of solving a linear system to determine the unknown coefficients
    • impose constraints directly
  • The implementation of cubic splines does exactly this! Let \(S\) be the interpolant and \(S_k\) be a cubic polynomial on the interval \([t_{k-1}, t_k]\), for \(k=1,\ldots,n\).

    • \(S_k\) should pass through \((t_{k-1}, y_{k-1})\) and \((t_{k}, y_{k})\): Gives \(2n\) linear constraints
    • \(S_{k-1}^\prime\) and \(S_k^\prime\) should agree when evaluated at \(t_{k-1}\): Gives \(n-1\) linear constraints
    • \(S_{k-1}^{\prime\prime}\) and \(S_k^{\prime\prime}\) should agree when evaluated at \(t_{k-1}\): Gives \(n-1\) linear constraints
    • Two more constraints come from endpoint restrictions. (Total: \(4n\) constraints for the \(4n\) unknown coefficients, four per cubic piece.)
  • Endpoint restrictions could be:

    • Provided by the application
    • “Natural” spline: \(S_{1}^{\prime\prime}(t_0)=S_{n}^{\prime\prime}(t_n)=0\)
    • Not-a-knot spline: \(S_{1}^{\prime\prime\prime}(t_1)=S_{2}^{\prime\prime\prime}(t_1)\) and \(S_{n-1}^{\prime\prime\prime}(t_{n-1})=S_{n}^{\prime\prime\prime}(t_{n-1})\)
  • Condition number for interpolation:

    • Bounds: \[\max_{0\le k \le n}\, \bigl\| \phi_k \bigr\|_\infty \le \kappa(\mathbf{y}) \le \sum_{k=0}^n \, \bigl\| \phi_k \bigr\|_\infty\]
    • Piecewise linear interpolation: \(\kappa(\mathbf{y})=1\) (use a property of hat functions in Exercise 5.2.4)
    • Cubic spline: Not in book, complicated because perturbation has a “domino” effect
  • Convergence results:

    • Piecewise linear interpolants: \(f\in C^2\) PLUS equally spaced (length \(h\)) nodes enough to show that \[\max_{x\in [a,b]} | f(x)-p_n(x) | \leq || f^{\prime\prime} ||_{\infty} \times h^2 \]
    • Not-a-knot cubic splines: \(f\in C^4\) PLUS equally spaced (length \(h\)) nodes PLUS not-a-knot restrictions are enough to show that there exists some constant \(C\) such that \[\max_{x\in [a,b]} | f(x)-S_n(x) | \leq C \times h^4 \]
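
The “first sketch” promised above checks the ill-conditioning claim: interpolating data that is zero except for a unit value at the middle of 21 equally spaced nodes produces the corresponding cardinal function, whose size bounds \(\kappa(\mathbf{y})\) from below. (The node count and interval are my own choices; assumes the FNC package.)

using FundamentalsNumericalComputation
t = range(-1, 1, length=21);
y = zeros(21);  y[11] = 1.0;           # a unit perturbation of a single sample
p = FNC.Polynomials.fit(t, y);         # degree-20 cardinal function for the middle node
@show maximum(abs.(p.(range(-1, 1, length=2001))));   # ≫ 1, on the order of 10^3

Second, a minimal convergence check for the last two bullets (the test function is my own choice):

f = x -> exp(sin(2x));
x = range(0, 1, length=2001);          # fine grid for estimating the max error
n = [2^k for k in 3:8];
err = []; errS = [];
for n in n
    t = range(0, 1, length=n+1);       # equally spaced nodes, h = 1/n
    p = FNC.plinterp(t, f.(t));
    S = FNC.spinterp(t, f.(t));        # not-a-knot cubic spline (Function 5.3.2)
    push!(err, maximum(abs.(f.(x) - p.(x))));
    push!(errS, maximum(abs.(f.(x) - S.(x))));
end
@show err[1:end-1] ./ err[2:end];      # ratios ≈ 4, consistent with O(h^2)
@show errS[1:end-1] ./ errS[2:end];    # ratios ≈ 16, consistent with O(h^4)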

Exercise 5.1.1

Problem: Look at performance of polynomial interpolant, piecewise linear interpolant, and cubic spline in the case of \(y=\tanh(t)\) on \([-2,4]\).

using FundamentalsNumericalComputation;
t = -2:4;  y = tanh.(t);
p1 = FNC.Polynomials.fit(t,y);
p2 = FNC.plinterp(t,y);
p3 = FNC.Spline1D(t,y);
# p4 = FNC.spinterp(t,y); # discussed in Function 5.3.2
scatter(t,y,label="data",leg=:topleft)
plot!(p1, -3, 5, label="interpolant")
plot!(p2, -3, 5, label="piecewise linear")
plot!(x->p3(x), -3, 5, label="piecewise cubic")

Exercise 5.1.3

Problem: Create a flying saucer through a parametric plot.

  • Issue: How to answer part (b), which asks for a “periodic variation on cubic spline interpolation”? Not very obvious.
s = 0:15;
x = [ 0,0.51,0.96,1.06,1.29,1.55,1.73,2.13,2.61,
      2.19,1.76,1.56,1.25,1.04,0.58,0 ];
y = [ 0,0.16,0.16,0.43,0.62,0.48,0.19,0.18,0,
      -0.12,-0.12,-0.29,-0.30,-0.15,-0.16,0 ];
a = FNC.Spline1D(s, x);
b = FNC.Spline1D(s, y);
scatter(x, y, label="data")
s_fine = range(0, 15, length=600);  # evaluate the splines on a fine grid, not just at the nodes
plot!(a(s_fine), b(s_fine), label="spline")

Key theory: numerical differentiation

  • Calculate the derivative without actually encoding the symbolic form of the derivative.
  • Use evaluations of \(f\) around the neighborhood of a pre-specified point \(x_0\) to approximate \(f^\prime\left(x_0\right)\).
  • Motivation comes from the (finite) difference quotient \[\frac{f\left(x_0+h\right)-f\left(x_0\right)}{h}\]
  • Slope of line passing through \((x_0,f(x_0))\) and \((x_0+h, f(x_0+h))\)
  • Equation of line is \[q(x)=f(x_0)+\frac{f\left(x_0+h\right)-f\left(x_0\right)}{h}\left(x-x_0\right)\]
  • Take first derivative of \(q(x)\), evaluate at \(x_0\): \[q^\prime\left(x_0\right)= \frac{f\left(x_0+h\right)-f\left(x_0\right)}{h}\]
  • To obtain finite difference formulas for the \(k\)th order derivative, follow the mantra: “Interpolate, then differentiate \(k\) times.”

  • An example of centered difference formula:

    • Without loss of generality, set \(x_0=0\).
    • Have two points available around \((0, f(0))\), namely, \((h, f(h))\) and \((-h, f(-h))\).
  • Interpolate: \[\begin{split}q\left(x\right) & =\frac{f(h)-2f(0)+f(-h)}{2h^2}\cdot x^2 \\ & +\frac{f(h)-f(-h)}{2h}\cdot x \\ & + f(0)\end{split}\]
  • Differentiate:

    • \(f^\prime\left(0\right)\approx q^\prime\left(0\right)=\dfrac{f(h)-f(-h)}{2h}\)
    • \(f^{\prime\prime}\left(0\right)\approx q^{\prime\prime}\left(0\right)=\dfrac{f(h)-2f(0)+f(-h)}{h^2}\)
  • Issue: Finite difference formulas are simply too abundant.

  • Solution: Narrow down using accuracy results.

    • Centered differences have higher order of accuracy \(O(h^{2m})\) compared to forward/backward differences \(O(h^m)\). See Tables 5.4.1 and 5.4.2, along with calculations in Section 5.5.
    • Although centered formulas are preferable, an exception occurs when you are at the endpoints of an interval.
  • Order of accuracy is calculated by expanding the truncation error in a Taylor series about \(h=0\) and ignoring all but the leading term.
  • The number of terms needed in the Taylor expansion depends on whether the leading term vanishes: if it does, keep expanding until a nonzero term appears (see Exercise 5.5.3 for an example).
  • There is a limit to how far you can push \(h\) to be small (see Exercise 5.5.1 and Demo 5.5.7).
  • There is a “sweet spot” for the choice of \(h\), reminiscent of the bias-variance tradeoff.
  • Think of \(f^\prime (x)\) as the sum of the truncation error \(\tau_f(h)\) and a first-order accurate finite difference formula \(\delta(h)\). Let \(\widetilde{\delta}(h)\) be the numerical approximation. \(\widetilde{\delta}(h)-\delta(h)\) is called the roundoff error.
  • (Note the book may have a typo) It can be shown that \[f^\prime (x) - \widetilde{\delta}(h) \approx \tau_f(h)-f(x)\varepsilon_{\text{mach}} h^{-1}\]
  • If \(\delta(h)\) is first-order accurate, then \(\tau_f(h) = O(h)\), and the two error sources are roughly balanced in magnitude whenever \(h = O(\varepsilon_{\text{mach}}^{1/2})\).
  • For a finite-difference method of order \(m\) in the presence of roundoff, the optimal spacing of nodes satisfies \(h_\text{opt} \approx \varepsilon_\text{mach}^{\,1/(m+1)}\), and the optimal total error is roughly \(\varepsilon_\text{mach}^{\,m/(m+1)}\).
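
A small sketch checking these claims (the test function exp is my own choice, so that \(f^\prime(0)=1\) exactly): halving \(h\) should roughly halve the forward-difference error and quarter the centered one, and for \(m=2\) the optimum \(h\) should be within an order of magnitude or so of \(\varepsilon_\text{mach}^{1/3}\) (compare the table in Exercise 5.5.1 below).

f = exp;
for h in [1e-2, 5e-3, 2.5e-3]
    fderr = abs((f(h) - f(0))/h - 1);       # forward difference: O(h)
    cderr = abs((f(h) - f(-h))/(2h) - 1);   # centered difference: O(h^2)
    @show h, fderr, cderr;
end
@show eps()^(1/3), eps()^(2/3);             # predicted h_opt and best error for m = 2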

Exercise 5.4.2 (b)

Problem: Compute second-order accurate approximations to \(f'\) using the following information:

  • Function \(f(x) = \sin(2x)\)
  • Node set \(t_0=0.9\), \(t_1=1\), \(t_2=1.1\)

Typo / sequencing issue in the book: the accuracy of these formulas has not been established at this point in the text, yet the exercise already asks for forward and backward differences.

f = x -> sin(2x);
h = 0.1;
t = [0.9, 1.0, 1.1];
deriv = 2*cos.(2t); 
CD = []; FD = []; BD = [];
for t in t 
  CD2 = (-f(t-h) + f(t+h))/(2h);
  FD2 = (-3f(t) + 4f(t+h) - f(t+2h)) / (2h);
  BD2 = ( f(t-2h) - 4f(t-h) + 3f(t)) / (2h);
  push!(CD, CD2);
  push!(FD, FD2);
  push!(BD, BD2);
end 
FNC.pretty_table((t=t,deriv=deriv,CD=CD,FD=FD,BD=BD),["t" "f'(t)" "Centered" "Forward"  "Backward"])
┌─────┬───────────┬───────────┬───────────┬───────────┐
│   t │     f'(t) │  Centered │   Forward │  Backward │
├─────┼───────────┼───────────┼───────────┼───────────┤
│ 0.9 │ -0.454404 │ -0.451381 │ -0.464248 │ -0.456509 │
│ 1.0 │ -0.832294 │ -0.826756 │ -0.846849 │ -0.839623 │
│ 1.1 │    -1.177 │  -1.16917 │  -1.19569 │  -1.18926 │
└─────┴───────────┴───────────┴───────────┴───────────┘

Exercise 5.4.3

Problem: Let \(f(x)=e^{-x}\), \(x=0.5\), and \(h=0.2\).

  • Use fdweights() to get the necessary weights on five nodes centered at \(x\).
  • Find finite-difference approximations to the first, second, third, and fourth derivatives of \(f\). Make a table showing the derivative values and the errors in each case.
f = x -> exp(-x);
dfdx = x -> -exp(-x);
exact_value_odd = dfdx(0.5);
exact_value_even = f(0.5);
h = 0.2;
t = (0.5-2*h):h:(0.5+2*h);
m = 1:4;
CD = [];
for m in m
  w = FNC.fdweights(t.-0.5, m);
  fd_value = dot(w,f.(t));
  push!(CD, fd_value); 
end
error = CD - [exact_value_odd, exact_value_even, exact_value_odd, exact_value_even];
FNC.pretty_table((m=m, CD=CD,error=error),["mth order" "Centered" "Error"])
┌───────────┬───────────┬─────────────┐
│ mth order │  Centered │       Error │
├───────────┼───────────┼─────────────┤
│         1 │ -0.606498 │  3.25027e-5 │
│         2 │   0.60652 │ -1.08213e-5 │
│         3 │  -0.61262 │ -0.00608962 │
│         4 │  0.610586 │  0.00405569 │
└───────────┴───────────┴─────────────┘

Exercise 5.4.4

Problem: Use fdweights() on centered node vectors of length 3, 5, 7, and 9 to produce a table of weights for the second derivative \(f''(0)\).

r = 1:4;
for r in r
  @show FNC.fdweights(Rational.(-r:r), 2)
end 
FNC.fdweights(Rational.(-r:r), 2) = Rational{Int64}[1//1, -2//1, 1//1]
FNC.fdweights(Rational.(-r:r), 2) = Rational{Int64}[-1//12, 4//3, -5//2, 4//3, -1//12]
FNC.fdweights(Rational.(-r:r), 2) = Rational{Int64}[1//90, -3//20, 3//2, -49//18, 3//2, -3//20, 1//90]
FNC.fdweights(Rational.(-r:r), 2) = Rational{Int64}[-1//560, 8//315, -1//5, 8//5, -205//72, 8//5, -1//5, 8//315, -1//560]

Exercise 5.5.1

Problem:

  • Evaluate the centered second-order finite-difference approximation to \(f'(4\pi/5)\) for \(f(x)=\cos(x^3)\) and \(h=2^{-1},2^{-2},\ldots,2^{-8}\).
  • On a log-log graph, plot the error as a function of \(h\) and compare it graphically to second-order convergence. (The code below extends the range to \(h=2^{-30}\) to expose the effects of roundoff.)
f = x -> cos(x^3);
dfdx = x -> -sin(x^3)*3*x^2; 
h = [ 1/2^n for n in 1:30 ];
CD2 = [];
for h in h 
    nodes = h*(-1:1);
    vals = @. f(4*pi/5+nodes);
    push!(CD2, dot([-1/2 0 1/2]/h, vals));
end
error = abs.(dfdx(4*pi/5) .- CD2);
table = [ h CD2 error];
FNC.pretty_table(table,["h","Centered","Error"])
┌─────────────┬───────────┬─────────────┐
│           h │  Centered │       Error │
├─────────────┼───────────┼─────────────┤
│         0.5 │ -0.308713 │     3.46329 │
│        0.25 │  -2.38196 │     5.53654 │
│       0.125 │   1.56762 │     1.58696 │
│      0.0625 │   2.89668 │    0.257902 │
│     0.03125 │   3.10315 │   0.0514259 │
│    0.015625 │   3.14262 │   0.0119621 │
│   0.0078125 │   3.15165 │  0.00293336 │
│  0.00390625 │   3.15385 │ 0.000729746 │
│  0.00195312 │    3.1544 │ 0.000182212 │
│ 0.000976562 │   3.15453 │  4.55388e-5 │
│ 0.000488281 │   3.15457 │  1.13838e-5 │
│ 0.000244141 │   3.15458 │   2.8459e-6 │
│  0.00012207 │   3.15458 │  7.11473e-7 │
│  6.10352e-5 │   3.15458 │   1.7787e-7 │
│  3.05176e-5 │   3.15458 │  4.44659e-8 │
│  1.52588e-5 │   3.15458 │  1.11111e-8 │
│  7.62939e-6 │   3.15458 │  2.77647e-9 │
│   3.8147e-6 │   3.15458 │ 6.66442e-10 │
│  1.90735e-6 │   3.15458 │ 1.71677e-10 │
│  9.53674e-7 │   3.15458 │ 1.19361e-10 │
│  4.76837e-7 │   3.15458 │  1.1347e-10 │
│  2.38419e-7 │   3.15458 │ 2.29885e-10 │
│  1.19209e-7 │   3.15458 │ 4.68607e-10 │
│  5.96046e-8 │   3.15458 │  2.32536e-9 │
│  2.98023e-8 │   3.15458 │ 4.62716e-10 │
│  1.49012e-8 │   3.15458 │   7.9133e-9 │
│  7.45058e-9 │   3.15458 │   7.9133e-9 │
│  3.72529e-9 │   3.15458 │  6.98786e-9 │
│  1.86265e-9 │   3.15458 │  1.27123e-7 │
│ 9.31323e-10 │   3.15458 │  6.75179e-8 │
└─────────────┴───────────┴─────────────┘
plot(h,error,label="Centered",m=:o,
    xaxis=(:log2,L"h"),xflip=true,yaxis=(:log2,"error"),
    title="CD error with roundoff",legend=:topright)
plot!(h, h .^2,l=:dash,label= L"O(h^2)")

Key theory: numerical integration

  • Let \(f\) be given (does not need to have closed form at all).
  • Split \([t_0,t_n]=[a,b]\) into \(n\) sub-intervals of equal length.
  • Let \(h=(b-a)/n\) and \(t_i=a+ih\).
  • “Interpolate, then integrate.”

    • Express \(f\) in terms of cardinal functions, then integrate.
    • Choose hat functions \(H_k\) as the cardinal functions, so that \[\begin{split}\int_a^b f(x) \, dx &\approx \int_a^b \sum_{i=0}^n f(t_i) H_i(x)\, dx \\ &= \sum_{i=0}^n f(t_i) \left[ \int_a^b H_i(x)\, dx\right]\end{split}\]
  • “Interpolate, then integrate.”

    • Integrals of hat functions are easy to evaluate: \[\int_a^b H_i(x)\, dx=\begin{cases}h & \mathrm{if}\ i=1,\ldots, n-1\\ h/2 & \mathrm{if}\ i=0,n \end{cases}\]
    • Trapezoid rule: \[\begin{split} \int_a^b f(x)\, dx & \approx \underbrace{h\left[ \frac{1}{2}f(t_0) + f(t_1) + \cdots + f(t_{n-1}) + \frac{1}{2}f(t_n) \right]}_{ T_f(n)}\end{split}\]
  • Let \(I\) be the exact value of the integral. Error \(I-T_f\left(n\right)\) has an upper bound:

    • Most basic upper bound: \(O\left(h^2\right)\)
    • Refined version based on the Euler–Maclaurin formula (see the sketch after this list for a numerical check of the leading term): \[ - \frac{h^2}{12} \left[ f'(b)-f'(a) \right] + \frac{h^4}{720} \left[ f'''(b)-f'''(a) \right] + O(h^6)\]
  • Bottom line: Trapezoid rule is second-order accurate.

  • To get fourth-order accuracy:

    • Use the Gregory integration formula (Exercise 5.6.2): approximate \(f'(a)\) by a forward finite-difference formula and \(f'(b)\) by its reflected backward version, then subtract \(h^2/12\) times the resulting \([f'(b)-f'(a)]\) estimate from \(T_f(n)\).
    • Use Simpson’s formula: Use extrapolation or “Interpolate, then integrate” (see Exercise 5.6.4).
  • Gregory and Simpson are analogous to higher-order bias adjustments in statistics.

  • Sixth-order accuracy can also be obtained by extrapolation.

  • How does extrapolation work?

    • Start with an “asymptotic” expansion of the error estimate. Euler-Maclaurin states that \[I = T_f(n) + a_2 h^2 + a_4 h^{4} + \cdots\]
    • Since \(h=(b-a)/n\), rewrite in terms of the number of sub-intervals (or number of nodes, if you wish): \[I = T_f(n) + c_2 n^{-2} + c_4 n^{-4} + \cdots\]
  • How does extrapolation work?

    • Previous expression applies to a doubling of the number of sub-intervals: \[I = T_f(2n) + \frac{1}{4}c_2 n^{-2} + \frac{1}{16}c_4 n^{-4} + \cdots\]
    • Remove the second-order component completely by a linear combination of the two expansions in order to get fourth-order accuracy.
  • The result is Simpson’s formula (checked numerically in the sketch after this list): \[S_f\left(2n\right)=\frac{1}{3}\left(4T_f(2n)-T_f(n)\right)\]

  • Idea extends to sixth-order accuracy: Romberg integration

    • Start from Simpson and double the number of sub-intervals again.
    • Write down the error expansion and find a suitable linear combination which allows you to eliminate the \(O\left(n^{-4}\right)\) term.
  • Simpson’s formula can also be motivated using “Interpolate, then integrate” (see Exercise 5.6.4).

    • After some notational adjustments, Exercise 5.6.4 (a) has already been demonstrated earlier when centered differences were discussed.
    • Eventually, you will get to the point where \[ \begin{split} \int_a^b f(x)\, dx \approx \frac{h}{3}\bigl[ &f(t_0) + 4f(t_1) + 2f(t_2) + \cdots\\ &+ 2f(t_{n-2}) + 4f(t_{n-1}) + f(t_n) \bigr] \end{split}\]
  • Exercise 5.6.5 asks you to show that the previous result is equal to \(S_f(n)\) (typo here in book!).

    • Purely algebraic manipulations PLUS trapezoid rule
    • Be careful with the length of the sub-interval: it is not \(h\) throughout.
  • Computations: Pay attention to how many evaluations of \(f\) are made and to whether previous evaluations could be reused.
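
Here is the promised sketch, checking the leading Euler–Maclaurin term and the extrapolation to Simpson on one example. (The choice f = exp on [0,1] is mine, so that \(f^\prime=f\) and \(I=e-1\); trapezoid is the book's function.)

f = exp;
Iexact = exp(1) - 1;
n = 20;  h = 1/n;
T1 = FNC.trapezoid(f, 0, 1, n)[1];
@show Iexact - T1;                  # true trapezoid error
@show -h^2/12 * (f(1) - f(0));      # leading Euler–Maclaurin term: nearly equal
T2 = FNC.trapezoid(f, 0, 1, 2n)[1];
S = (4T2 - T1)/3;                   # Simpson via extrapolation: S_f(2n)
@show Iexact - S;                   # fourth-order accurate: several digits better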

Exercises 5.6.1, 5.6.3, 5.6.6

Problem: Estimate each integral for \(n=10\cdot 2^k\) nodes for \(k=1,2,\ldots,10\) using a variety of methods:

  • Trapezoid, Gregory, Simpson, Romberg
  • Make a log-log plot of the errors and assess accuracy.

For \(f(x)=x\log(1+x)\):

f = x -> x*log(1+x);
n = @. 10*2^(1:10);
result = []; err = []; resultG = []; errG = []; resultS = []; errS = []; errR = [];
for n in n
  simpvec = ones(n+1);                 # Simpson weights: [1, 4, 2, 4, ..., 2, 4, 1]
  simpvec[2:2:n] .= 4; 
  simpvec[3:2:(n-1)] .= 2;
  T, t, y = FNC.trapezoid(f, 0, 1, n);
  T2 = FNC.trapezoid(f, 0, 1, 2*n)[1];
  T4 = FNC.trapezoid(f, 0, 1, 4*n)[1];
  # Gregory: subtract (h^2/12)[f'(b) - f'(a)], with the endpoint derivatives
  # replaced by second-order one-sided finite differences
  TG = T - step(t)/24*((y[n-1]+y[3])-4*(y[n]+y[2])+3*(y[n+1]+y[1]));
  TS = step(t)/3*dot(simpvec, y);      # composite Simpson on the same nodes
  TR = 1/15*(16*1/3*(4*T4-T2)-(1/3*(4*T2-T)));  # Romberg: (16 S_f(4n) - S_f(2n))/15
  push!(result, T);
  push!(resultG, TG);
  push!(resultS, TS);
  push!(err, abs(0.25 - T));
  push!(errG, abs(0.25 - TG));
  push!(errS, abs(0.25 - TS));
  push!(errR, abs(0.25 - TR));
end
table = [n err errG errS errR];
pretty_table(table,["# nodes", "Trapezoid Err", "Gregory Err", "Simpson Err", "Romberg Err"])
┌─────────┬───────────────┬─────────────┬─────────────┬─────────────┐
│ # nodes │ Trapezoid Err │ Gregory Err │ Simpson Err │ Romberg Err │
├─────────┼───────────────┼─────────────┼─────────────┼─────────────┤
│      20 │   0.000248551 │  3.60208e-7 │  8.65094e-8 │ 2.32647e-13 │
│      40 │    6.21417e-5 │  2.40814e-8 │  5.42069e-9 │ 3.60822e-15 │
│      80 │    1.55357e-5 │  1.55686e-9 │ 3.39011e-10 │         0.0 │
│     160 │    3.88394e-6 │ 9.89686e-11 │ 2.11917e-11 │ 5.55112e-17 │
│     320 │    9.70985e-7 │ 6.23834e-12 │ 1.32455e-12 │         0.0 │
│     640 │    2.42746e-7 │  3.9152e-13 │ 8.28226e-14 │         0.0 │
│    1280 │    6.06866e-8 │ 2.45359e-14 │ 5.16254e-15 │ 1.11022e-16 │
│    2560 │    1.51717e-8 │ 1.55431e-15 │ 3.33067e-16 │         0.0 │
│    5120 │    3.79291e-9 │ 1.66533e-16 │         0.0 │ 5.55112e-17 │
│   10240 │   9.48228e-10 │ 5.55112e-17 │         0.0 │         0.0 │
└─────────┴───────────────┴─────────────┴─────────────┴─────────────┘

For \(f(x)=x^2\tan^{-1} x\):

┌─────────┬───────────────┬─────────────┬─────────────┬─────────────┐
│ # nodes │ Trapezoid Err │ Gregory Err │ Simpson Err │ Romberg Err │
├─────────┼───────────────┼─────────────┼─────────────┼─────────────┤
│      20 │   0.000431464 │  8.86915e-7 │  1.91461e-7 │ 3.80029e-13 │
│      40 │   0.000107857 │  5.61757e-8 │  1.19434e-8 │ 5.88418e-15 │
│      80 │    2.69637e-5 │  3.52896e-9 │ 7.46104e-10 │ 5.55112e-17 │
│     160 │    6.74089e-6 │  2.2104e-10 │  4.6626e-11 │ 2.77556e-17 │
│     320 │    1.68522e-6 │ 1.38287e-11 │ 2.91409e-12 │         0.0 │
│     640 │    4.21305e-7 │ 8.64669e-13 │ 1.82132e-13 │ 1.11022e-16 │
│    1280 │    1.05326e-7 │ 5.40401e-14 │ 1.14075e-14 │         0.0 │
│    2560 │    2.63315e-8 │ 3.30291e-15 │ 7.21645e-16 │ 1.11022e-16 │
│    5120 │    6.58288e-9 │ 1.94289e-16 │ 5.55112e-17 │ 5.55112e-17 │
│   10240 │    1.64572e-9 │ 8.32667e-17 │ 5.55112e-17 │ 5.55112e-17 │
└─────────┴───────────────┴─────────────┴─────────────┴─────────────┘

For \(f(x)=\exp(x)\cos (x)\).

┌─────────┬───────────────┬─────────────┬─────────────┬─────────────┐
│ # nodes │ Trapezoid Err │ Gregory Err │ Simpson Err │ Romberg Err │
├─────────┼───────────────┼─────────────┼─────────────┼─────────────┤
│      20 │    0.00298643 │  7.35243e-6 │  1.61461e-6 │ 2.81819e-12 │
│      40 │   0.000746682 │  4.69672e-7 │  1.00744e-7 │ 4.37428e-14 │
│      80 │   0.000186675 │  2.96356e-8 │  6.29386e-9 │ 6.66134e-16 │
│     160 │    4.66691e-5 │  1.86043e-9 │ 3.93325e-10 │ 2.22045e-16 │
│     320 │    1.16673e-5 │ 1.16523e-10 │ 2.45821e-11 │         0.0 │
│     640 │    2.91682e-6 │ 7.28995e-12 │ 1.53655e-12 │ 2.22045e-16 │
│    1280 │    7.29206e-7 │ 4.55636e-13 │ 9.59233e-14 │ 8.88178e-16 │
│    2560 │    1.82302e-7 │ 2.84217e-14 │  5.9952e-15 │         0.0 │
│    5120 │    4.55754e-8 │ 1.33227e-15 │         0.0 │         0.0 │
│   10240 │    1.13938e-8 │ 2.22045e-16 │         0.0 │         0.0 │
└─────────┴───────────────┴─────────────┴─────────────┴─────────────┘

For \(f(x)=\sqrt{x}\,\log x\) (note that \(f'\) is unbounded at \(x=0\), so the convergence of all four methods degrades):

┌─────────┬───────────────┬─────────────┬─────────────┬─────────────┐
│ # nodes │ Trapezoid Err │ Gregory Err │ Simpson Err │ Romberg Err │
├─────────┼───────────────┼─────────────┼─────────────┼─────────────┤
│      20 │     0.0112056 │  0.00693196 │  0.00581359 │ 0.000791061 │
│      40 │    0.00450979 │  0.00272522 │  0.00227785 │ 0.000304098 │
│      80 │    0.00179044 │  0.00106052 │ 0.000883986 │ 0.000116147 │
│     160 │   0.000702865 │ 0.000409249 │ 0.000340341 │  4.41159e-5 │
│     320 │   0.000273335 │ 0.000156818 │ 0.000130159 │  1.66763e-5 │
│     640 │   0.000105454 │  5.97306e-5 │  4.94936e-5 │  6.27744e-6 │
│    1280 │    4.04091e-5 │  2.26337e-5 │  1.87274e-5 │  2.35428e-6 │
│    2560 │    1.53939e-5 │  8.53811e-6 │  7.05557e-6 │  8.80045e-7 │
│    5120 │    5.83457e-6 │  3.20814e-6 │  2.64811e-6 │     3.28e-7 │
│   10240 │    2.20155e-6 │  1.20123e-6 │   9.9055e-7 │  1.21925e-7 │
└─────────┴───────────────┴─────────────┴─────────────┴─────────────┘

For \(f(x)=\sqrt{1-x^2}\) (again a derivative singularity, this time at both endpoints):

┌─────────┬───────────────┬─────────────┬─────────────┬─────────────┐
│ # nodes │ Trapezoid Err │ Gregory Err │ Simpson Err │ Romberg Err │
├─────────┼───────────────┼─────────────┼─────────────┼─────────────┤
│      20 │    0.00328194 │  0.00158795 │   0.0012864 │ 0.000140956 │
│      40 │    0.00116123 │ 0.000560636 │ 0.000454325 │   4.9825e-5 │
│      80 │   0.000410714 │ 0.000198073 │ 0.000160542 │  1.76139e-5 │
│     160 │   0.000145237 │   7.0004e-5 │  5.67448e-5 │  6.22713e-6 │
│     320 │     5.1354e-5 │  2.47456e-5 │  2.00596e-5 │  2.20157e-6 │
│     640 │    1.81572e-5 │  8.74811e-6 │  7.09166e-6 │  7.78361e-7 │
│    1280 │    6.41971e-6 │  3.09278e-6 │   2.5072e-6 │   2.7519e-7 │
│    2560 │    2.26974e-6 │  1.09344e-6 │  8.86413e-7 │  9.72941e-8 │
│    5120 │    8.02478e-7 │  3.86585e-7 │  3.13392e-7 │  3.43986e-8 │
│   10240 │     2.8372e-7 │  1.36678e-7 │    1.108e-7 │  1.21617e-8 │
└─────────┴───────────────┴─────────────┴─────────────┴─────────────┘

Exercise 5.6.7

Problem: For \(n=10,20,30,\ldots,200\), compute the trapezoidal approximation to \[ \int_{0}^1 \frac{1}{2.01+\sin (6\pi x)-\cos(2\pi x)} \,d x \approx 0.9300357672424684\]

Make two separate plots of the absolute error as a function of \(n\):

  • one using a log-log scale
  • the other using log-linear

(On the log-linear plot the error decays along a straight line, i.e., geometrically. This is expected from Euler–Maclaurin for a smooth periodic integrand: all the derivative-difference terms vanish, so convergence is faster than any power of \(n\).)
n = 10*(1:20); 
f = x -> 1/(2.01+sin(6*pi*x)-cos(2*pi*x));
err = [];
for n in n
  T = FNC.trapezoid(f, 0, 1, n)[1];
  push!(err, abs(0.9300357672424684 - T));
end
l = @layout [a ; b]; 
p1 = plot(n,err,m=:o,label="Absolute error",
    xaxis=(:log10,L"n"),yaxis=(:log10,"error"),
    title="Convergence of trapezoidal integration");
p2 = plot(n,err,m=:o,label="Absolute error",
    xaxis=(L"n"),yaxis=(:log10,"error"),
    title="Convergence of trapezoidal integration");
plot(p1, p2, layout = l)

Key theory: adaptive integration

  • For certain functions which may have “too much” variation, the trapezoid rule (along with refinements) may not work very well. (See Demo 5.7.1.)

  • Adaptive integration has two core ideas:

    • More nodes on more “difficult” regions
    • Deciding where the difficult regions are requires attention to both absolute and relative error
  • But it may not work very well, because:

    • It is “slower”: more function evaluations are needed
    • Trouble can arise when the numerical integral is only an intermediate step in a larger computation
    • The error estimate is a recipe for subtractive cancellation
  • Algorithm:

    • Start with some interval.
    • Compute the trapezoid values \(T_f(1)\), \(T_f(2)\), and \(T_f(4)\) on it (i.e., with 1, 2, and 4 sub-intervals).
    • Estimate the error using \((S_f(4) - S_f(2))/15\).
    • If error is small enough, stop. Return the most accurate Simpson evaluation.
    • Otherwise, split the interval at the beginning into 2 sub-intervals and repeat all of the previous steps on each of them.
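
A stripped-down recursive sketch of this algorithm (a toy version of my own, not the book's intadapt, which reuses evaluations and handles tolerances more carefully):

function adapt(f, a, b, tol)
    m = (a + b) / 2
    q1, q3 = (a + m) / 2, (m + b) / 2
    T1 = (b - a) * (f(a) + f(b)) / 2                             # T_f(1)
    T2 = (b - a) * (f(a) + 2f(m) + f(b)) / 4                     # T_f(2)
    T4 = (b - a) * (f(a) + 2f(q1) + 2f(m) + 2f(q3) + f(b)) / 8   # T_f(4)
    S2 = (4T2 - T1) / 3                                          # S_f(2)
    S4 = (4T4 - T2) / 3                                          # S_f(4)
    E = (S4 - S2) / 15                                           # error estimate
    if abs(E) < tol
        return S4                    # accept the most accurate Simpson value
    else                             # otherwise recurse on the two halves
        return adapt(f, a, m, tol/2) + adapt(f, m, b, tol/2)
    end
end
@show adapt(x -> exp(sin(7x)), 0, 2, 1e-8);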

Exercise 5.7.2

Problem: Three integrals are available in the exercise. Do the following:

  • Use quadgk() to find the value to at least 12 digits
  • Use intadapt() to evaluate the integral to a tolerance of \(10^{-8}\)
  • Compute the absolute error and the number of nodes used
  • Use the \(O(h^2)\) term in the Euler–Maclaurin formula to estimate how many nodes are required by the fixed-stepsize trapezoidal formula to reach an absolute error of \(10^{-8}\).

(Plots of the three functions to be integrated are omitted here.)

For the function \(f(x)=\mathrm{sech}(\sin(1/x))\):

f = x -> sech(sin(1/x));
dfdx = x-> -sech(sin(1/x))*tanh(sin(1/x))*cos(1/x)*(-1/x^2);
(exact, E) = quadgk(f,0.1,3,atol=1e-12);
A,t = FNC.intadapt(f, 0.1, 3, 1e-8);
num_nodes = length(t);
error = A - exact;
h_euler = sqrt((1e-8)*(-(dfdx(3) - dfdx(0.1))/12));
num_nodes_euler = (3-0.1)/h_euler;
@show (exact, A, error, num_nodes, num_nodes_euler);
(exact, A, error, num_nodes, num_nodes_euler) = (2.422950184278125, 2.422950188207652, 3.929526926071958e-9, 209, 16718.19491773914)

For the function \(f(x)=\log((x+1)^3)\):

f = x -> log((x+1)^3);
dfdx = x-> 3/(x+1);
(exact, E) = quadgk(f,-0.9,9,atol=1e-12);
A,t = FNC.intadapt(f,-0.9, 9, 1e-8);
num_nodes = length(t);
error = A - exact;
h_euler = sqrt((1e-8)*(-(dfdx(9) - dfdx(-0.9))/12));
num_nodes_euler = (9-(-0.9))/h_euler;
@show (exact, A, error, num_nodes, num_nodes_euler);
(exact, A, error, num_nodes, num_nodes_euler) = (40.06832831771958, 40.068328093517046, -2.2420253742438945e-7, 205, 62928.530890209084)

For the function \(f(x)=\cos(x^3)\):

f = x -> cos(x^3);
dfdx = x-> -sin(x^3)*(3*x^2);
(exact, E) = quadgk(f,-pi,pi,atol=1e-12);
A,t = FNC.intadapt(f, -pi, pi, 1e-8);
num_nodes = length(t);
error = A - exact;
h_euler = sqrt(Complex((1e-8)*(-(dfdx(pi) - dfdx(-pi))/12)));
num_nodes_euler = (pi-(-pi))/h_euler;
@show (exact, A, error, num_nodes, num_nodes_euler);
(exact, A, error, num_nodes, num_nodes_euler) = (1.5184871958591977, 1.518487104090202, -9.17689957358192e-8, 945, 0.0 - 44817.361506520414im)
  • Notice that the Euler–Maclaurin node-count calculation fails here: \(f'(\pi)-f'(-\pi)>0\), so the argument of the square root is negative and h_euler comes out complex. (Also worth rechecking: solving \((h^2/12)\,\lvert f'(b)-f'(a)\rvert = 10^{-8}\) gives \(h=\sqrt{12\cdot 10^{-8}/\lvert f'(b)-f'(a)\rvert}\), whereas the code above multiplies by \(\lvert f'(b)-f'(a)\rvert/12\) instead of dividing, so the node counts in all three parts are likely underestimates.)

Exercise 5.7.3

Problem: Consider the improper integral \(\displaystyle \int_0^1 x^{-2/3}\, dx\).

  • How does one deal with the problem of the infinite value for \(f(0)\)? Replace \(0\) with a very small number \(\epsilon\). (The omitted piece contributes \(\int_0^\epsilon x^{-2/3}\,dx = 3\epsilon^{1/3}\), so the error should decay like \(\epsilon^{1/3}\).)
  • Use intadapt() and make a log-log plot of the error as a function of \(\epsilon\) for \(\epsilon=10^{-15},10^{-16},\ldots,10^{-45}\).
f = x -> x^(-2/3);
ϵ = 10. .^(-(15:45)); 
err = [];
for ϵ in ϵ
  A, t = FNC.intadapt(f, ϵ, 1, 1e-12);
  push!(err, abs(A-3)); 
end
plot(ϵ,err,m=:o,label="Absolute error",xflip=true,
    xaxis=(:log10,L"\epsilon"),yaxis=(:log10,"error"))

Exercise 5.7.5

Problem: Plot the sine integral function \[\operatorname{Si}(x) = \int_0^x \frac{\sin z}{z}\, dz.\]

f = x -> FNC.intadapt(z -> sin(z)/z, eps(), x, 1e-12)[1];  # start at eps() to avoid 0/0 at z = 0
plot(f, 1, 10)

Exercise 5.7.6

Problem: Consider the error function: \[\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}}\int_0^x e^{-s^2}\,ds\]

  • Define a function \(g\) that approximates erf by applying trapezoid() with \(n=300\). Make a plot of the error \(g(x)-\operatorname{erf}(x)\) at 500 points in the interval \([0,3]\).
  • Define another approximation \(h\) that applies intadapt() with error tolerance \(10^{-7}\). Plot the error in \(h\). Why does it look so different from the previous case?

  • Suppose you wished to find \(x\) such that \(\operatorname{erf}(x) = 0.95\) by using rootfinding on one of your two approximations. Which would be preferable?

g = x -> FNC.trapezoid(z -> 2/sqrt(pi)*exp(-z^2), 0, x, 300)[1];
h = x -> FNC.intadapt(z -> 2/sqrt(pi)*exp(-z^2), 0, x, 1e-7)[1];
plot([x -> (g(x) - erf(x)), x -> (h(x) - erf(x))], 0, 3, label=["Trapezoid error" "Adaptive error"])

Adjusting the tolerance of intadapt() to \(10^{-14}\):

g = x -> FNC.trapezoid(z -> 2/sqrt(pi)*exp(-z^2), 0, x, 300)[1];
h = x -> FNC.intadapt(z -> 2/sqrt(pi)*exp(-z^2), 0, x, 1e-14)[1];
plot([x -> (g(x) - erf(x)), x -> (h(x) - erf(x))], 0, 3, label=["Trapezoid error" "Adaptive error"])