Introduction

This article is based on Chapter 5 of Fundamentals of Numerical Computation (Julia Edition). A version of this article in slide format may be found here. I presented the slides as part of a learning community, and the video recordings for Part 1 and Part 2 are linked in this article.

The learning objectives of the chapter are as follows:

In this article, I summarize the findings of the chapter more compactly to serve as a reading guide, work out several exercises with accompanying Julia code, and correct some errors and typos in the chapter.

What’s new in terms of Julia

Key theory: interpolation

What is the interpolation problem? You are given \(n+1\) distinct points \((t_0,y_0)\), \((t_1,y_1),\ldots,(t_n,y_n)\), where \(t_0<t_1<\cdots<t_n\) are called nodes. The goal is to find a function \(p(x)\), called the interpolant, such that \(p(t_k)=y_k\) for \(k=0,\dots,n\).

The main issue is that interpolation by a polynomial at equally spaced nodes is ill-conditioned as the degree of the polynomial grows. A solution to this problem is to consider piecewise polynomials of modest degree PLUS smoothing.
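
To see the ill-conditioning in action, here is a minimal sketch of my own (not one of the book's demos) using the classic Runge function \(f(x)=1/(1+25x^2)\): the equally spaced interpolants oscillate more and more wildly near the endpoints as the degree grows.

using FundamentalsNumericalComputation
f = x -> 1/(1 + 25x^2);
plot(f, -1, 1, label="f(x)", legend=:top)
for n in [6, 10, 14]
    t = range(-1, 1, length=n+1);        # equally spaced nodes
    p = FNC.Polynomials.fit(t, f.(t));   # degree-n interpolating polynomial
    plot!(x -> p(x), -1, 1, label="degree $n")
end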

Piecewise interpolants have a nice geometric structure. Smoothing is achieved by restricting values of the derivatives at “sharp turns”.

A class of piecewise interpolants is provided by the set of cardinal functions. Cardinal functions \(\phi_k\) map the node set to \(\{0,1\}\), such that \[\phi_k\left(t_j\right)=\begin{cases}1 & \mathrm{if}\ j=k \\ 0 & \mathrm{if}\ j\neq k\end{cases}\] An example is the hat function \(H_k\). See Demo 5.2.3 (its code must be fixed by setting legend=:none). It can be shown that every interpolant \(p(x)\) is a linear combination of the cardinal functions, with the data values as coefficients: \(p(x)=\sum_{k=0}^n y_k\,\phi_k(x)\).
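
To make the hat function concrete, here is a stand-alone sketch of the idea (the FNC package ships its own implementation; this version and its indexing are my own):

# Hat function H_k on a node vector t, where t[1],...,t[n+1] hold t_0,...,t_n
# and k is the 0-based node index used in the math above.
function hatfun(t, k)
    n = length(t) - 1
    return function (x)
        if k > 0 && t[k] <= x <= t[k+1]          # rising piece on [t_{k-1}, t_k]
            return (x - t[k]) / (t[k+1] - t[k])
        elseif k < n && t[k+1] <= x <= t[k+2]    # falling piece on [t_k, t_{k+1}]
            return (t[k+2] - x) / (t[k+2] - t[k+1])
        else
            return 0.0
        end
    end
end
H = hatfun(0:0.25:1.0, 2);
@show H(0.5);    # 1.0 at its own node t_2
@show H(0.25);   # 0.0 at every other node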

A key roadblock is that it may be difficult to actually find closed forms for these cardinal functions. For example, it is relatively easy to describe the cubic spline interpolant, but it is difficult to obtain the cardinal function representation of this interpolant. Rather than use the cardinal function straitjacket, it may be more “natural” to retain the spirit of solving a linear system to determine the unknown coefficients and impose constraints on these unknown coefficients directly.

In fact, the implementation of cubic splines does exactly this! Let \(S\) be the interpolant and let \(S_k\) be a cubic polynomial on the interval \([t_{k-1}, t_k]\) for \(k=1,\ldots,n\). There are \(4n\) unknown coefficients for this interpolant. We can now express the constraints on the interpolant:

  1. \(S_k\) should pass through \((t_{k-1}, y_{k-1})\) and \((t_{k}, y_{k})\): This restriction gives \(2n\) linear constraints.
  2. \(S_{k-1}^\prime\) and \(S_k^\prime\) should agree when evaluated at \(t_{k-1}\): This restriction gives \(n-1\) linear constraints.
  3. \(S_{k-1}^{\prime\prime}\) and \(S_k^{\prime\prime}\) should agree when evaluated at \(t_{k-1}\): This restriction gives \(n-1\) linear constraints.

The above give \(4n-2\) linear constraints on the unknown coefficients of the cubic spline interpolant, so we still need two more constraints, which come from endpoint restrictions (a quick numerical check of the smoothness constraints follows the list below). Endpoint restrictions could originate from, but are not limited to:

  1. Provided by the application
  2. Based on the so-called “natural” spline: \(S_{1}^{\prime\prime}(t_0)=S_{n}^{\prime\prime}(t_n)=0\)
  3. Not-a-knot spline: \(S_{1}^{\prime\prime\prime}(t_1)=S_{2}^{\prime\prime\prime}(t_1)\) and \(S_{n-1}^{\prime\prime\prime}(t_{n-1})=S_{n}^{\prime\prime\prime}(t_{n-1})\)
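
As a quick numerical check of the smoothness constraints, we can probe the book's not-a-knot implementation FNC.spinterp (Function 5.3.2) near an interior node with small one-sided differences; this is a sketch of my own, not an exact verification:

using FundamentalsNumericalComputation
t = 0:1.0:5;  y = sin.(t);
S = FNC.spinterp(t, y);
d = 1e-6;                                      # small offset around the interior node t = 2
@show S(2) - sin(2);                           # ≈ 0: S passes through the data
@show (S(2+d) - S(2))/d - (S(2) - S(2-d))/d;   # ≈ 0 up to O(d): no kink in S' at the node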

In terms of the condition number of the interpolant, we have:

  1. Bounds on any interpolant: \[\max_{0\le k \le n}\, \bigl\| \phi_k \bigr\|_\infty \le \kappa(\mathbf{y}) \le \sum_{k=0}^n \, \bigl\| \phi_k \bigr\|_\infty\]
  2. Bounds based on a piecewise linear interpolant: \(\kappa(\mathbf{y})=1\), using a property of hat functions from Exercise 5.2.4 (a small demo follows this list)
  3. Unfortunately, bounds based on a cubic spline interpolant are not discussed in the chapter. They are complicated to work out because a perturbation will have a “domino” effect. This “domino” effect stems from the fact that we have implied constraints on the unknown coefficients of the interpolant.
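
Here is a small demo of item 2 (my own sketch): perturbing one data value changes the piecewise linear interpolant by no more than the perturbation itself, consistent with \(\kappa(\mathbf{y})=1\).

using FundamentalsNumericalComputation
t = 0:0.25:1;  y = exp.(t);
p = FNC.plinterp(t, y);
yp = copy(y);  yp[3] += 1e-3;            # perturb the value at node t_2
pp = FNC.plinterp(t, yp);
x = range(0, 1, length=2001);
@show maximum(abs.(pp.(x) - p.(x)));     # ≈ 1e-3: the perturbation is never amplified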

In terms of the convergence of the interpolant to the true function \(f\), we have

  1. Piecewise linear interpolants: \(f\in C^2\) PLUS equally spaced nodes (spacing \(h\)) are enough to show that \[\max_{x\in [a,b]} | f(x)-p_n(x) | \leq || f^{\prime\prime} ||_{\infty} \times h^2 \] (see the sketch after this list)
  2. Not-a-knot cubic splines: \(f\in C^4\) PLUS equally spaced nodes (spacing \(h\)) PLUS not-a-knot restrictions are enough to show that there exists some constant \(C\) such that \[\max_{x\in [a,b]} | f(x)-S_n(x) | \leq C \times h^4 \]
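
Here is a sketch of item 1 in action (my own, with an arbitrary smooth test function): halving \(h\) should cut the maximum piecewise linear interpolation error by roughly a factor of 4.

using FundamentalsNumericalComputation
f = x -> exp(sin(2x));
for n in [20, 40, 80, 160]
    t = range(0, 1, length=n+1);
    p = FNC.plinterp(t, f.(t));
    x = range(0, 1, length=4001);
    println("h = ", 1/n, ":  max error = ", maximum(abs.(f.(x) - p.(x))))
end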

Exercise 5.1.1

Problem: Look at the performance of a polynomial interpolant, a piecewise linear interpolant, and a cubic spline for \(y=\tanh(t)\) on \([-2,4]\).

using FundamentalsNumericalComputation;
t = -2:4;  y = tanh.(t);
p1 = FNC.Polynomials.fit(t,y);
p2 = FNC.plinterp(t,y);
p3 = FNC.Spline1D(t,y);
# p4 = FNC.spinterp(t,y); # discussed in Function 5.3.2
scatter(t,y,label="data",leg=:topleft)

plot!(p1, -3, 5, label="polynomial")

plot!(p2, -3, 5, label="piecewise linear")

plot!(x->p3(x), -3, 5, label="piecewise cubic")

Exercise 5.1.3

Problem: Create a flying saucer through a parametric plot.

s = 0:15;
x = [ 0,0.51,0.96,1.06,1.29,1.55,1.73,2.13,2.61,
      2.19,1.76,1.56,1.25,1.04,0.58,0 ];
y = [ 0,0.16,0.16,0.43,0.62,0.48,0.19,0.18,0,
      -0.12,-0.12,-0.29,-0.30,-0.15,-0.16,0 ];
a = FNC.Spline1D(s, x);
b = FNC.Spline1D(s, y);
scatter(x, y, label="data")

ss = range(0, 15, length=300);  # fine grid: evaluating at s alone would only reproduce the data points
plot!(a(ss), b(ss), label="spline")

Key theory: numerical differentiation

In actual practice, calculating derivatives by hand and then coding their forms can easily become complicated. We need ways to calculate the derivative without actually encoding the symbolic form of the derivative.

The key idea is to use evaluations of \(f\) around the neighborhood of a pre-specified point \(x_0\) to approximate \(f^\prime\left(x_0\right)\). The motivation comes directly from the definition of the derivative, which is based on the (finite) difference quotient \[\frac{f\left(x_0+h\right)-f\left(x_0\right)}{h}\]

As a starting example, consider the slope of a line passing through \((x_0,f(x_0))\) and \((x_0+h, f(x_0+h))\). The equation of that line is \[q(x)=f(x_0)+\frac{f\left(x_0+h\right)-f\left(x_0\right)}{h}\left(x-x_0\right)\] If we take the first derivative of \(q(x)\) and evaluate it at \(x_0\), we get \[q^\prime\left(x_0\right)= \frac{f\left(x_0+h\right)-f\left(x_0\right)}{h}\] Interestingly, this is exactly the finite difference quotient from earlier.

To obtain finite difference formulas for the \(k\)th order derivative, we basically follow the strategy described earlier and follow the mantra: “Interpolate, then differentiate \(k\) times.”

Consider the following example for the centered difference formula. Without loss of generality, set \(x_0=0\). Assume you have two points available around \((0, f(0))\), namely, \((h, f(h))\) and \((-h, f(-h))\). Next, we interpolate: \[\begin{split}q\left(x\right) & =\frac{f(h)-2f(0)+f(-h)}{2h^2}\cdot x^2 \\ & +\frac{f(h)-f(-h)}{2h}\cdot x \\ & + f(0)\end{split}\]

After that, we differentiate to get the first derivative \(f^\prime\left(0\right)\approx q^\prime\left(0\right)=\dfrac{f(h)-f(-h)}{2h}\). We can also take the second derivative: \(f^{\prime\prime}\left(0\right)\approx q^{\prime\prime}\left(0\right)=\dfrac{f(h)-2f(0)+f(-h)}{h^2}\).
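
As a sanity check (my own sketch), both centered formulas can be verified on \(f(x)=e^x\) at \(x_0=0\), where \(f'(0)=f''(0)=1\):

f = exp;  h = 1e-3;
d1 = (f(h) - f(-h)) / (2h);         # centered approximation to f'(0), error O(h^2)
d2 = (f(h) - 2f(0) + f(-h)) / h^2;  # centered approximation to f''(0), error O(h^2)
@show d1 - 1;                       # ≈ 1.7e-7
@show d2 - 1;                       # ≈ 8.3e-8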

If it is easy to obtain these finite difference formulas, what is the issue? Finite difference formulas are simply too abundant: as long as you stick to “interpolate, then differentiate”, there are many varieties of finite difference formulas one could obtain. To choose among them, we narrow down using accuracy results. In particular,

  1. Centered differences have higher order of accuracy \(O(h^{2m})\) compared to forward/backward differences \(O(h^m)\). See Tables 5.4.1 and 5.4.2, along with calculations in Section 5.5.
  2. Although centered formulas are preferable, an exception occurs when you are at the endpoints of an interval.

The order of accuracy is calculated by expanding the truncation error in a Taylor series about \(h=0\) and keeping only the leading term. The number of terms needed in the expansion depends on whether the leading term vanishes; if it is zero, then you have to carry more terms (see Exercise 5.5.3 for an example).
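
A quick way to estimate the order empirically (my own sketch): halve \(h\) and take \(\log_2\) of the ratio of successive errors; for the first-order forward difference the ratios come out near 1.

f = sin;  x0 = 1.0;  exact = cos(x0);
errs = [abs((f(x0+h) - f(x0))/h - exact) for h in [1e-2, 5e-3, 2.5e-3]];
@show log2.(errs[1:end-1] ./ errs[2:end]);   # ≈ [1.0, 1.0] for a first-order formula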

There is a limit to how small you can make \(h\) (see Exercise 5.5.1 and Demo 5.5.7). There is also a “sweet spot” for the choice of \(h\), reminiscent of the bias-variance tradeoff. Think of \(f^\prime (x)\) as the sum of the truncation error \(\tau_f(h)\) and a first-order accurate finite difference formula \(\delta(h)\).

Let \(\widetilde{\delta}(h)\) be the numerically computed value of \(\delta(h)\). The difference \(\widetilde{\delta}(h)-\delta(h)\) is called the roundoff error.

(Note: the book may have a typo here.) It can be shown that \[f^\prime (x) - \widetilde{\delta}(h) \approx \tau_f(h)-f(x)\,\epsilon_{\text{mach}}\, h^{-1}\] If \(\delta(h)\) is first-order accurate, then \(\tau_f(h) = O(h)\), and the two error terms are roughly of the same magnitude whenever \(h = O(\epsilon_{\text{mach}}^{1/2})\).

For computing with a finite-difference method of order \(m\) in the presence of roundoff, the optimal spacing of nodes satisfies \(h_\text{opt} \approx \epsilon_\text{mach}^{\,\,1/(m+1)}\) and the optimum total error is roughly \(\epsilon_\text{mach}^{\,\, m/(m+1)}\).
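
For a first-order formula (\(m=1\)), this says \(h_\text{opt}\approx \epsilon_\text{mach}^{1/2}\) with total error also around \(\epsilon_\text{mach}^{1/2}\). A one-line check (my own sketch; eps() is \(\epsilon_\text{mach}\) in Julia):

h = sqrt(eps());                      # ≈ 1.5e-8, the optimal spacing for m = 1
@show abs((exp(h) - exp(0))/h - 1);   # total error ≈ 1e-8, near eps()^(1/2)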

Exercise 5.4.2 (b)

Problem: Compute second-order accurate approximations to \(f'\) using the following information:

There is a possible typo or sequencing issue in the book: order of accuracy has not been established by Section 5.4, yet the exercise also asks for forward and backward differences.

f = x -> sin(2x);
h = 0.1;
t = [0.9, 1.0, 1.1];
deriv = 2*cos.(2t); 
CD = []; FD = []; BD = [];
for t in t 
  CD2 = (-f(t-h) + f(t+h))/(2h);
  FD2 = (-3f(t) + 4f(t+h) - f(t+2h)) / (2h);
  BD2 = ( f(t-2h) - 4f(t-h) + 3f(t)) / (2h);
  push!(CD, CD2);
  push!(FD, FD2);
  push!(BD, BD2);
end 
FNC.pretty_table((t=t,deriv=deriv,CD=CD,FD=FD,BD=BD),["t" "f'(t)" "Centered" "Forward"  "Backward"])
## ┌─────┬───────────┬───────────┬───────────┬───────────┐
## │   t │     f'(t) │  Centered │   Forward │  Backward │
## ├─────┼───────────┼───────────┼───────────┼───────────┤
## │ 0.9 │ -0.454404 │ -0.451381 │ -0.464248 │ -0.456509 │
## │ 1.0 │ -0.832294 │ -0.826756 │ -0.846849 │ -0.839623 │
## │ 1.1 │    -1.177 │  -1.16917 │  -1.19569 │  -1.18926 │
## └─────┴───────────┴───────────┴───────────┴───────────┘

Exercise 5.4.3

Problem: Let \(f(x)=e^{-x}\), \(x=0.5\), and \(h=0.2\). Using FNC.fdweights() on the centered nodes \(x-2h,\ldots,x+2h\), compute finite-difference approximations to the \(m\)th derivative \(f^{(m)}(x)\) for \(m=1,2,3,4\), and compare against the exact values \(f^{(m)}(x)=(-1)^m e^{-x}\).

f = x -> exp(-x);
dfdx = x -> -exp(-x);
exact_value_odd = dfdx(0.5);
exact_value_even = f(0.5);
h = 0.2;
t = (0.5-2*h):h:(0.5+2*h);
m = 1:4;
CD = [];
for m in m
  w = FNC.fdweights(t.-0.5, m);
  fd_value = dot(w,f.(t));
  push!(CD, fd_value); 
end
error = CD - [exact_value_odd, exact_value_even, exact_value_odd, exact_value_even];
FNC.pretty_table((m=m, CD=CD,error=error),["mth order" "Centered" "Error"])
## ┌───────────┬───────────┬─────────────┐
## │ mth order │  Centered │       Error │
## ├───────────┼───────────┼─────────────┤
## │         1 │ -0.606498 │  3.25027e-5 │
## │         2 │   0.60652 │ -1.08213e-5 │
## │         3 │  -0.61262 │ -0.00608962 │
## │         4 │  0.610586 │  0.00405569 │
## └───────────┴───────────┴─────────────┘

Exercise 5.4.4

Problem: Use fdweights() on centered node vectors of length 3, 5, 7, and 9 to produce a table of weights for the second derivative \(f''(0)\).

r = 1:4;
for r in r
  @show FNC.fdweights(Rational.(-r:r), 2)
end 
## FNC.fdweights(Rational.(-r:r), 2) = Rational{Int64}[1//1, -2//1, 1//1]
## FNC.fdweights(Rational.(-r:r), 2) = Rational{Int64}[-1//12, 4//3, -5//2, 4//3, -1//12]
## FNC.fdweights(Rational.(-r:r), 2) = Rational{Int64}[1//90, -3//20, 3//2, -49//18, 3//2, -3//20, 1//90]
## FNC.fdweights(Rational.(-r:r), 2) = Rational{Int64}[-1//560, 8//315, -1//5, 8//5, -205//72, 8//5, -1//5, 8//315, -1//560]

Exercise 5.5.1

Problem: For \(f(x)=\cos(x^3)\), approximate \(f'(4\pi/5)\) with the centered difference formula for \(h=1/2^n\), \(n=1,\ldots,30\), and observe the effect of roundoff error as \(h\) shrinks.

f = x -> cos(x^3);
dfdx = x -> -sin(x^3)*3*x^2; 
h = [ 1/2^n for n in 1:30 ];
CD2 = [];
for h in h 
    nodes = h*(-1:1);
    vals = @. f(4*pi/5+nodes);
    push!(CD2, dot([-1/2 0 1/2]/h, vals));
end
error = abs.(dfdx(4*pi/5) .- CD2);
table = [ h CD2 error];
FNC.pretty_table(table,["h","Centered","Error"])
## ┌─────────────┬───────────┬─────────────┐
## │           h │  Centered │       Error │
## ├─────────────┼───────────┼─────────────┤
## │         0.5 │ -0.308713 │     3.46329 │
## │        0.25 │  -2.38196 │     5.53654 │
## │       0.125 │   1.56762 │     1.58696 │
## │      0.0625 │   2.89668 │    0.257902 │
## │     0.03125 │   3.10315 │   0.0514259 │
## │    0.015625 │   3.14262 │   0.0119621 │
## │   0.0078125 │   3.15165 │  0.00293336 │
## │  0.00390625 │   3.15385 │ 0.000729746 │
## │  0.00195312 │    3.1544 │ 0.000182212 │
## │ 0.000976562 │   3.15453 │  4.55388e-5 │
## │ 0.000488281 │   3.15457 │  1.13838e-5 │
## │ 0.000244141 │   3.15458 │   2.8459e-6 │
## │  0.00012207 │   3.15458 │  7.11473e-7 │
## │  6.10352e-5 │   3.15458 │   1.7787e-7 │
## │  3.05176e-5 │   3.15458 │  4.44659e-8 │
## │  1.52588e-5 │   3.15458 │  1.11111e-8 │
## │  7.62939e-6 │   3.15458 │  2.77647e-9 │
## │   3.8147e-6 │   3.15458 │ 6.66442e-10 │
## │  1.90735e-6 │   3.15458 │ 1.71677e-10 │
## │  9.53674e-7 │   3.15458 │ 1.19361e-10 │
## │  4.76837e-7 │   3.15458 │  1.1347e-10 │
## │  2.38419e-7 │   3.15458 │ 2.29885e-10 │
## │  1.19209e-7 │   3.15458 │ 4.68607e-10 │
## │  5.96046e-8 │   3.15458 │  2.32536e-9 │
## │  2.98023e-8 │   3.15458 │ 4.62716e-10 │
## │  1.49012e-8 │   3.15458 │   7.9133e-9 │
## │  7.45058e-9 │   3.15458 │   7.9133e-9 │
## │  3.72529e-9 │   3.15458 │  6.98786e-9 │
## │  1.86265e-9 │   3.15458 │  1.27123e-7 │
## │ 9.31323e-10 │   3.15458 │  6.75179e-8 │
## └─────────────┴───────────┴─────────────┘
plot(h,error,label="Centered",m=:o,
    xaxis=(:log2,L"h"),xflip=true,yaxis=(:log2,"error"),
    title="CD error with roundoff",legend=:topright)

plot!(h, h .^2,l=:dash,label= L"O(h^2)")

Key theory: numerical integration

Just like the case for numerical differentiation, you may encounter situations where the functional form of \(f\) would make it difficult to carry out integration by hand. The situation is much worse in the case of integration because there are simple functions for which a closed form for the integral is not available.

Let \(f\) be given (it does not need to have a closed form at all). Split \([a,b]\) into \(n\) sub-intervals of equal length \(h=(b-a)/n\), with nodes \(t_i=a+ih\) for \(i=0,\ldots,n\), so that \([t_0,t_n]=[a,b]\).

Just like what we did with numerical differentiation, we follow a similar mantra: “Interpolate, then integrate.” Consider the case where we can express \(f\) in terms of cardinal functions. Choose hat functions \(H_k\) as the cardinal functions, so that \[\int_a^b f(x) \, dx \approx \int_a^b \sum_{i=0}^n f(t_i) H_i(x)\, dx = \sum_{i=0}^n f(t_i) \left[ \int_a^b H_i(x)\, dx\right]\] Integrals of hat functions are easy to evaluate: \[\int_a^b H_i(x)\, dx=\begin{cases}h & \mathrm{if}\ i=1,\ldots, n-1\\ h/2 & \mathrm{if}\ i=0,n \end{cases}\]

The case of \(f\) having a cardinal function representation gives rise to a numerical integration method called the trapezoid rule: \[\begin{split} \int_a^b f(x)\, dx & \approx \underbrace{h\left[ \frac{1}{2}f(t_0) + f(t_1) + \cdots + f(t_{n-1}) + \frac{1}{2}f(t_n) \right]}_{ T_f(n)}\end{split}\]
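
Here is a minimal stand-alone version of \(T_f(n)\) (my own sketch; the book's FNC.trapezoid, used in the exercises below, additionally returns the nodes and function values):

function trap(f, a, b, n)
    h = (b - a)/n;
    t = range(a, b, length=n+1);
    y = f.(t);
    return h * (sum(y[2:n]) + (y[1] + y[n+1])/2)   # interior nodes get weight h, endpoints h/2
end
@show trap(x -> x^2, 0, 1, 100);                   # ≈ 1/3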

Let \(I\) be the exact value of the integral. The error \(I-T_f\left(n\right)\) admits the following estimates:

  1. Most basic upper bound: \(O\left(h^2\right)\)
  2. Refined version based on the Euler-Maclaurin formula: \[ - \frac{h^2}{12} \left[ f'(b)-f'(a) \right] + \frac{h^4}{720} \left[ f'''(b)-f'''(a) \right] + O(h^6)\]

The bottom line is that the trapezoid rule is second-order accurate.

It is possible to push for higher-order accuracy. To get fourth-order accuracy, we have two options: the Gregory integration formula or Simpson’s formula.

Consider the idea leading to the Gregory integration formula (Exercise 5.6.2). We replace \(f'(a)\) in the Euler-Maclaurin formula with a forward finite difference formula, replace \(f'(b)\) with the reflected backward finite difference formula, and then subtract the resulting correction from \(T_f(n)\). Consider next the idea leading to Simpson’s formula: we use extrapolation, or “Interpolate, then integrate” (see Exercise 5.6.4).

We can think of Gregory and Simpson as analogous to higher-order bias adjustments in statistics. Sixth-order accuracy can also be obtained by extrapolation.

But how does extrapolation work? We start with an “asymptotic” expansion of the error estimate. Euler-Maclaurin states that \[I = T_f(n) + a_2 h^2 + a_4 h^{4} + \cdots\] Since \(h=(b-a)/n\), rewrite this in terms of the number of sub-intervals (or number of nodes, if you wish): \[I = T_f(n) + c_2 n^{-2} + c_4 n^{-4} + \cdots\] Applying the same expansion after doubling the number of sub-intervals gives \[I = T_f(2n) + \frac{1}{4}c_2 n^{-2} + \frac{1}{16}c_4 n^{-4} + \cdots\] Remove the second-order component completely by taking a linear combination of the two expansions in order to get fourth-order accuracy. The result is Simpson’s formula: \[S_f\left(2n\right)=\frac{1}{3}\left(4T_f(2n)-T_f(n)\right)\]
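
Here is the extrapolation in action (a sketch of my own using the book's FNC.trapezoid): the combination \((4T_f(2n)-T_f(n))/3\) is markedly more accurate than either trapezoid value.

using FundamentalsNumericalComputation
f = x -> exp(x);  I = exp(1) - 1;  n = 10;
T1 = FNC.trapezoid(f, 0, 1, n)[1];
T2 = FNC.trapezoid(f, 0, 1, 2n)[1];
S = (4T2 - T1)/3;    # Simpson via extrapolation
@show I - T2;        # O(n^-2) error
@show I - S;         # O(n^-4) error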

We can push the core idea further to achieve sixth-order accuracy, which leads to the so-called Romberg integration.

  1. Start from Simpson and double the number of sub-intervals again.
  2. Write down the error expansion and find a suitable linear combination which allows you to eliminate the \(O\left(n^{-4}\right)\) term (see the sketch after this list).
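
Here is a sketch of the two steps (again using FNC.trapezoid); this is exactly the TR combination that appears in the exercise code below.

using FundamentalsNumericalComputation
f = x -> exp(x);  I = exp(1) - 1;  n = 10;
T = [FNC.trapezoid(f, 0, 1, m)[1] for m in (n, 2n, 4n)];
S2, S4 = (4T[2] - T[1])/3, (4T[3] - T[2])/3;   # Simpson at 2n and 4n sub-intervals
R = (16S4 - S2)/15;                            # eliminates the O(n^-4) term
@show I - S4;                                  # O(n^-4) error
@show I - R;                                   # O(n^-6) error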

Simpson’s formula can also be motivated using “Interpolate, then integrate” (see Exercise 5.6.4). After some notational adjustments, Exercise 5.6.4 (a) has already been demonstrated earlier when centered differences were discussed. Eventually, you will get to the point where \[ \begin{split} \int_a^b f(x)\, dx \approx \frac{h}{3}\bigl[ &f(t_0) + 4f(t_1) + 2f(t_2) + \cdots\\ &+ 2f(t_{n-2}) + 4f(t_{n-1}) + f(t_n) \bigr] \end{split}\] Exercise 5.6.5 asks you to show that the previous result is equal to \(S_f(n)\) (typo here in the book!). Purely algebraic manipulations PLUS the trapezoid rule are needed for this, but be careful with the length of the sub-interval: it is not \(h\) all throughout.

In terms of the possible computational savings brought about by extrapolation, pay attention to how many evaluations of \(f\) are made and to whether previous evaluations could be reused.

Exercises 5.6.1, 5.6.3, 5.6.6

Problem: Estimate each integral for \(n=10\cdot 2^k\) nodes for \(k=1,2,\ldots,10\) using a variety of methods:


For \(f(x)=x\log(1+x)\):

f = x -> x*log(1+x);
n = @. 10*2^(1:10);
result = []; err = []; resultG = []; errG = []; resultS = []; errS = []; errR = [];
for n in n
  simpvec = ones(n+1);
  simpvec[2:2:n] .= 4; 
  simpvec[3:2:(n-1)] .= 2;
  T, t, y = FNC.trapezoid(f, 0, 1, n);
  T2 = FNC.trapezoid(f, 0, 1, 2*n)[1];
  T4 = FNC.trapezoid(f, 0, 1, 4*n)[1];
  TG = T - step(t)/24*((y[n-1]+y[3])-4*(y[n]+y[2])+3*(y[n+1]+y[1]));
  TS = step(t)/3*dot(simpvec, y);
  TR = 1/15*(16*1/3*(4*T4-T2)-(1/3*(4*T2-T)));
  push!(result, T);
  push!(resultG, TG);
  push!(resultS, TS);
  push!(err, abs(0.25 - T));
  push!(errG, abs(0.25 - TG));
  push!(errS, abs(0.25 - TS));
  push!(errR, abs(0.25 - TR));
end
table = [n err errG errS errR];
pretty_table(table,["# nodes", "Trapezoid Err", "Gregory Err", "Simpson Err", "Romberg Err"])
## ┌─────────┬───────────────┬─────────────┬─────────────┬─────────────┐
## │ # nodes │ Trapezoid Err │ Gregory Err │ Simpson Err │ Romberg Err │
## ├─────────┼───────────────┼─────────────┼─────────────┼─────────────┤
## │      20 │   0.000248551 │  3.60208e-7 │  8.65094e-8 │ 2.32647e-13 │
## │      40 │    6.21417e-5 │  2.40814e-8 │  5.42069e-9 │ 3.60822e-15 │
## │      80 │    1.55357e-5 │  1.55686e-9 │ 3.39011e-10 │         0.0 │
## │     160 │    3.88394e-6 │ 9.89686e-11 │ 2.11917e-11 │ 5.55112e-17 │
## │     320 │    9.70985e-7 │ 6.23834e-12 │ 1.32455e-12 │         0.0 │
## │     640 │    2.42746e-7 │  3.9152e-13 │ 8.28226e-14 │         0.0 │
## │    1280 │    6.06866e-8 │ 2.45359e-14 │ 5.16254e-15 │ 1.11022e-16 │
## │    2560 │    1.51717e-8 │ 1.55431e-15 │ 3.33067e-16 │         0.0 │
## │    5120 │    3.79291e-9 │ 1.66533e-16 │         0.0 │ 5.55112e-17 │
## │   10240 │   9.48228e-10 │ 5.55112e-17 │         0.0 │         0.0 │
## └─────────┴───────────────┴─────────────┴─────────────┴─────────────┘

For \(f(x)=x^2\tan^{-1} x\):

f = x -> x^2*atan(x);
n = @. 10*2^(1:10);
result = []; err = []; resultG = []; errG = []; resultS = []; errS = []; errR = [];
for n in n
  simpvec = ones(n+1);
  simpvec[2:2:n] .= 4; 
  simpvec[3:2:(n-1)] .= 2;
  T, t, y = FNC.trapezoid(f, 0, 1, n);
  T2 = FNC.trapezoid(f, 0, 1, 2*n)[1];
  T4 = FNC.trapezoid(f, 0, 1, 4*n)[1];
  TG = T - step(t)/24*((y[n-1]+y[3])-4*(y[n]+y[2])+3*(y[n+1]+y[1]));
  TS = step(t)/3*dot(simpvec, y);
  TR = 1/15*(16*1/3*(4*T4-T2)-(1/3*(4*T2-T)));
  push!(result, T);
  push!(resultG, TG);
  push!(resultS, TS);
  push!(err, abs((pi-2+2*log(2))/12 - T));
  push!(errG, abs((pi-2+2*log(2))/12 - TG));
  push!(errS, abs((pi-2+2*log(2))/12 - TS));  
  push!(errR, abs((pi-2+2*log(2))/12 - TR));
end
table = [n err errG errS errR];
pretty_table(table,["# nodes", "Trapezoid Err", "Gregory Err", "Simpson Err", "Romberg Err"])
## ┌─────────┬───────────────┬─────────────┬─────────────┬─────────────┐
## │ # nodes │ Trapezoid Err │ Gregory Err │ Simpson Err │ Romberg Err │
## ├─────────┼───────────────┼─────────────┼─────────────┼─────────────┤
## │      20 │   0.000431464 │  8.86915e-7 │  1.91461e-7 │ 3.80029e-13 │
## │      40 │   0.000107857 │  5.61757e-8 │  1.19434e-8 │ 5.88418e-15 │
## │      80 │    2.69637e-5 │  3.52896e-9 │ 7.46104e-10 │ 5.55112e-17 │
## │     160 │    6.74089e-6 │  2.2104e-10 │  4.6626e-11 │ 2.77556e-17 │
## │     320 │    1.68522e-6 │ 1.38287e-11 │ 2.91409e-12 │         0.0 │
## │     640 │    4.21305e-7 │ 8.64669e-13 │ 1.82132e-13 │ 1.11022e-16 │
## │    1280 │    1.05326e-7 │ 5.40401e-14 │ 1.14075e-14 │         0.0 │
## │    2560 │    2.63315e-8 │ 3.30291e-15 │ 7.21645e-16 │ 1.11022e-16 │
## │    5120 │    6.58288e-9 │ 1.94289e-16 │ 5.55112e-17 │ 5.55112e-17 │
## │   10240 │    1.64572e-9 │ 8.32667e-17 │ 5.55112e-17 │ 5.55112e-17 │
## └─────────┴───────────────┴─────────────┴─────────────┴─────────────┘

For \(f(x)=\exp(x)\cos (x)\).

f = x -> exp(x)*cos(x);
n = @. 10*2^(1:10);
result = []; err = []; resultG = []; errG = []; resultS = []; errS = []; errR = [];
for n in n
  simpvec = ones(n+1);
  simpvec[2:2:n] .= 4; 
  simpvec[3:2:(n-1)] .= 2;
  T, t, y = FNC.trapezoid(f, 0, pi/2, n);
  T2 = FNC.trapezoid(f, 0, pi/2, 2*n)[1];
  T4 = FNC.trapezoid(f, 0, pi/2, 4*n)[1];
  TG = T - step(t)/24*((y[n-1]+y[3])-4*(y[n]+y[2])+3*(y[n+1]+y[1]));
  TS = step(t)/3*dot(simpvec, y);
  TR = 1/15*(16*1/3*(4*T4-T2)-(1/3*(4*T2-T)));
  push!(result, T);
  push!(resultG, TG);
  push!(resultS, TS);
  push!(err, abs((exp(pi/2)-1)/2 - T));
  push!(errG, abs((exp(pi/2)-1)/2 - TG));
  push!(errS, abs((exp(pi/2)-1)/2 - TS));
  push!(errR, abs((exp(pi/2)-1)/2 - TR));
end
table = [n err errG errS errR];
pretty_table(table,["# nodes", "Trapezoid Err", "Gregory Err", "Simpson Err", "Romberg Err"])
## ┌─────────┬───────────────┬─────────────┬─────────────┬─────────────┐
## │ # nodes │ Trapezoid Err │ Gregory Err │ Simpson Err │ Romberg Err │
## ├─────────┼───────────────┼─────────────┼─────────────┼─────────────┤
## │      20 │    0.00298643 │  7.35243e-6 │  1.61461e-6 │ 2.81819e-12 │
## │      40 │   0.000746682 │  4.69672e-7 │  1.00744e-7 │ 4.37428e-14 │
## │      80 │   0.000186675 │  2.96356e-8 │  6.29386e-9 │ 6.66134e-16 │
## │     160 │    4.66691e-5 │  1.86043e-9 │ 3.93325e-10 │ 2.22045e-16 │
## │     320 │    1.16673e-5 │ 1.16523e-10 │ 2.45821e-11 │         0.0 │
## │     640 │    2.91682e-6 │ 7.28995e-12 │ 1.53655e-12 │ 2.22045e-16 │
## │    1280 │    7.29206e-7 │ 4.55636e-13 │ 9.59233e-14 │ 8.88178e-16 │
## │    2560 │    1.82302e-7 │ 2.84217e-14 │  5.9952e-15 │         0.0 │
## │    5120 │    4.55754e-8 │ 1.33227e-15 │         0.0 │         0.0 │
## │   10240 │    1.13938e-8 │ 2.22045e-16 │         0.0 │         0.0 │
## └─────────┴───────────────┴─────────────┴─────────────┴─────────────┘

For \(f(x)=\sqrt{x}\log x\) (the lower integration limit is taken to be eps() in the code to avoid evaluating \(\log 0\); since \(f\) is not smooth at \(x=0\), the error expansions above do not apply and every method converges slowly, as the table shows):

f = x -> sqrt(x)*log(x);
n = @. 10*2^(1:10);
result = []; err = []; resultG = []; errG = []; resultS = []; errS = []; errR = [];
for n in n
  simpvec = ones(n+1);
  simpvec[2:2:n] .= 4; 
  simpvec[3:2:(n-1)] .= 2;
  T, t, y = FNC.trapezoid(f, eps(), 1, n);
  T2 = FNC.trapezoid(f, eps(), 1, 2*n)[1];
  T4 = FNC.trapezoid(f, eps(), 1, 4*n)[1];
  TG = T - step(t)/24*((y[n-1]+y[3])-4*(y[n]+y[2])+3*(y[n+1]+y[1]));
  TS = step(t)/3*dot(simpvec, y);
  TR = 1/15*(16*1/3*(4*T4-T2)-(1/3*(4*T2-T)));
  push!(result, T);
  push!(resultG, TG);
  push!(resultS, TS);
  push!(err, abs(-4/9 - T));
  push!(errG, abs(-4/9 - TG));
  push!(errS, abs(-4/9 - TS));
  push!(errR, abs(-4/9 - TR));
end
table = [n err errG errS errR];
pretty_table(table,["# nodes", "Trapezoid Err", "Gregory Err", "Simpson Err", "Romberg Err"])
## ┌─────────┬───────────────┬─────────────┬─────────────┬─────────────┐
## │ # nodes │ Trapezoid Err │ Gregory Err │ Simpson Err │ Romberg Err │
## ├─────────┼───────────────┼─────────────┼─────────────┼─────────────┤
## │      20 │     0.0112056 │  0.00693196 │  0.00581359 │ 0.000791061 │
## │      40 │    0.00450979 │  0.00272522 │  0.00227785 │ 0.000304098 │
## │      80 │    0.00179044 │  0.00106052 │ 0.000883986 │ 0.000116147 │
## │     160 │   0.000702865 │ 0.000409249 │ 0.000340341 │  4.41159e-5 │
## │     320 │   0.000273335 │ 0.000156818 │ 0.000130159 │  1.66763e-5 │
## │     640 │   0.000105454 │  5.97306e-5 │  4.94936e-5 │  6.27744e-6 │
## │    1280 │    4.04091e-5 │  2.26337e-5 │  1.87274e-5 │  2.35428e-6 │
## │    2560 │    1.53939e-5 │  8.53811e-6 │  7.05557e-6 │  8.80045e-7 │
## │    5120 │    5.83457e-6 │  3.20814e-6 │  2.64811e-6 │     3.28e-7 │
## │   10240 │    2.20155e-6 │  1.20123e-6 │   9.9055e-7 │  1.21925e-7 │
## └─────────┴───────────────┴─────────────┴─────────────┴─────────────┘

For \(f(x)=\sqrt{1-x^2}\) (here \(f'\) is unbounded at \(x=1\), so the convergence rates again degrade):

f = x -> sqrt(1-x^2);
n = @. 10*2^(1:10);
result = []; err = []; resultG = []; errG = []; resultS = []; errS = []; errR = [];
for n in n
  simpvec = ones(n+1);
  simpvec[2:2:n] .= 4; 
  simpvec[3:2:(n-1)] .= 2;
  T, t, y = FNC.trapezoid(f, 0, 1, n);
  T2 = FNC.trapezoid(f, 0, 1, 2*n)[1];
  T4 = FNC.trapezoid(f, 0, 1, 4*n)[1];
  TG = T - step(t)/24*((y[n-1]+y[3])-4*(y[n]+y[2])+3*(y[n+1]+y[1]));
  TS = step(t)/3*dot(simpvec, y);
  TR = 1/15*(16*1/3*(4*T4-T2)-(1/3*(4*T2-T)));
  push!(result, T);
  push!(resultG, TG);
  push!(resultS, TS);
  push!(err, abs(pi/4 - T));
  push!(errG, abs(pi/4 - TG));
  push!(errS, abs(pi/4 - TS));
  push!(errR, abs(pi/4 - TR));
end
table = [n err errG errS errR];
pretty_table(table,["# nodes", "Trapezoid Err", "Gregory Err", "Simpson Err", "Romberg Err"])
## ┌─────────┬───────────────┬─────────────┬─────────────┬─────────────┐
## │ # nodes │ Trapezoid Err │ Gregory Err │ Simpson Err │ Romberg Err │
## ├─────────┼───────────────┼─────────────┼─────────────┼─────────────┤
## │      20 │    0.00328194 │  0.00158795 │   0.0012864 │ 0.000140956 │
## │      40 │    0.00116123 │ 0.000560636 │ 0.000454325 │   4.9825e-5 │
## │      80 │   0.000410714 │ 0.000198073 │ 0.000160542 │  1.76139e-5 │
## │     160 │   0.000145237 │   7.0004e-5 │  5.67448e-5 │  6.22713e-6 │
## │     320 │     5.1354e-5 │  2.47456e-5 │  2.00596e-5 │  2.20157e-6 │
## │     640 │    1.81572e-5 │  8.74811e-6 │  7.09166e-6 │  7.78361e-7 │
## │    1280 │    6.41971e-6 │  3.09278e-6 │   2.5072e-6 │   2.7519e-7 │
## │    2560 │    2.26974e-6 │  1.09344e-6 │  8.86413e-7 │  9.72941e-8 │
## │    5120 │    8.02478e-7 │  3.86585e-7 │  3.13392e-7 │  3.43986e-8 │
## │   10240 │     2.8372e-7 │  1.36678e-7 │    1.108e-7 │  1.21617e-8 │
## └─────────┴───────────────┴─────────────┴─────────────┴─────────────┘

Exercise 5.6.7

Problem: For \(n=10,20,30,\ldots,200\), compute the trapezoidal approximation to \[ \int_{0}^1 \frac{1}{2.01+\sin (6\pi x)-\cos(2\pi x)} \,d x \approx 0.9300357672424684\]

Make two separate plots of the absolute error as a function of \(n\): one on a log-log scale and one on a log-linear scale.

n = 10*(1:20); 
f = x -> 1/(2.01+sin(6*pi*x)-cos(2*pi*x));
err = [];
for n in n
  T = FNC.trapezoid(f, 0, 1, n)[1];
  push!(err, abs(0.9300357672424684-T));
end
l = @layout [a ; b]; 
p1 = plot(n,err,m=:o,label="Absolute error",
    xaxis=(:log10,L"n"),yaxis=(:log10,"error"),
    title="Convergence of trapezoidal integration");
p2 = plot(n,err,m=:o,label="Absolute error",
    xaxis=(L"n"),yaxis=(:log10,"error"),
    title="Convergence of trapezoidal integration");
plot(p1, p2, layout = l)