In numerical analysis we are covering “approximate differentiation”. One of the formulas we are using:

$f'(x_0) = \frac{f(x_0 + h) - f(x_0 - h)}{2h} - \frac{h^2}{6} f^{(3)}(\xi)$

where $\xi$ is some number in $(x_0 - h, x_0 + h)$; of course we assume that the third derivative is continuous in this interval.

The derivation can be done in a couple of ways: one can either use the degree 2 Lagrange polynomial through $(x_0 - h, f(x_0 - h)), (x_0, f(x_0)), (x_0 + h, f(x_0 + h))$ and differentiate, or one can use the degree 2 Taylor polynomial expanded about $x = x_0$, use $x = x_0 \pm h$, and solve for $f'(x_0)$; of course one runs into some issues with the remainder term if one uses the Taylor method.
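The Taylor route, sketched (a standard derivation, with $\xi_{\pm}$ between $x_0$ and $x_0 \pm h$):

```latex
\begin{aligned}
f(x_0 + h) &= f(x_0) + h f'(x_0) + \tfrac{h^2}{2} f''(x_0) + \tfrac{h^3}{6} f^{(3)}(\xi_+) \\
f(x_0 - h) &= f(x_0) - h f'(x_0) + \tfrac{h^2}{2} f''(x_0) - \tfrac{h^3}{6} f^{(3)}(\xi_-) \\
\frac{f(x_0 + h) - f(x_0 - h)}{2h} &= f'(x_0) + \frac{h^2}{12}\left( f^{(3)}(\xi_+) + f^{(3)}(\xi_-) \right)
\end{aligned}
```

The remainder issue is visible in the last line: replacing $\frac{1}{2}\left(f^{(3)}(\xi_+) + f^{(3)}(\xi_-)\right)$ by a single $f^{(3)}(\xi)$ requires the intermediate value theorem, which is where the continuity of the third derivative gets used.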

But that isn’t the issue that I want to talk about here.

The issue: “what should we use for $h$?” In theory, we should get a better approximation if we make $h$ as small as possible. But if we are using a computer to make a numerical evaluation, we have to concern ourselves with round off error. So what we actually calculate will NOT be $\frac{f(x_0 + h) - f(x_0 - h)}{2h}$ but rather $\frac{\tilde{f}(x_0 + h) - \tilde{f}(x_0 - h)}{2h}$ where $\tilde{f}(x_0 \pm h) = f(x_0 \pm h) + e_{\pm}$ where $e_{\pm}$ is the round off error used in calculating the function at $x_0 + h$ and $x_0 - h$ (respectively).

So, it is an easy algebraic exercise to show that:

$f'(x_0) - \frac{\tilde{f}(x_0 + h) - \tilde{f}(x_0 - h)}{2h} = -\frac{h^2}{6} f^{(3)}(\xi) - \frac{e_{+} - e_{-}}{2h}$

and the magnitude of the actual error is bounded by $\frac{\epsilon}{h} + \frac{h^2}{6} M$, where $M \geq |f^{(3)}(x)|$ on some small neighborhood of $x_0$ and $\epsilon$ is a bound on the round-off error of representing $f(x_0 \pm h)$.

It is an easy calculus exercise (“take the derivative and set equal to zero and check concavity” easy) to see that this error bound is a minimum when $h = \left(\frac{3\epsilon}{M}\right)^{\frac{1}{3}}$.
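Writing the bound as a function of $h$ and carrying out that exercise:

```latex
E(h) = \frac{\epsilon}{h} + \frac{h^2}{6} M, \qquad
E'(h) = -\frac{\epsilon}{h^2} + \frac{h}{3} M = 0
\iff h^3 = \frac{3\epsilon}{M}
\iff h = \left( \frac{3\epsilon}{M} \right)^{\frac{1}{3}}
```

Since $E''(h) = \frac{2\epsilon}{h^3} + \frac{M}{3} > 0$ for $h > 0$, the critical point is indeed a minimum.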

Now, of course, it is helpful to get a “ball park” estimate for what $\epsilon$ is. Here is one way to demonstrate this to the students: solve for $\epsilon$ and obtain $\epsilon = \frac{M h^3}{3}$ and then do some experimentation to determine $h$.

That is: obtain an estimate of $h$ by using this “3 point midpoint” estimate for a known derivative near a value of $x_0$ for which $M$ (a bound for the 3rd derivative) is easy to obtain, and then obtain an educated guess for $\epsilon$.
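The post does this in Excel and MATLAB; here is the same experiment as a Python sketch (my translation, not the original spreadsheet), using the known case $f(x) = e^x$, $x_0 = 1$, $M = 3$:

```python
import math

def central_diff(f, x0, h):
    """3-point midpoint estimate of f'(x0)."""
    return (f(x0 + h) - f(x0 - h)) / (2 * h)

# Known test case: f(x) = e^x at x0 = 1, so f'(1) = e, and M = 3
# bounds |f'''| on a small neighborhood of x0 = 1.
f, x0, true_deriv, M = math.exp, 1.0, math.e, 3.0

# Scan h = 10^-1, 10^-2, ... and record the error of each estimate.
errors = {n: abs(true_deriv - central_diff(f, x0, 10.0 ** -n))
          for n in range(1, 15)}

# The h at which the error bottoms out approximates the optimal h.
n_best = min(errors, key=errors.get)
h_best = 10.0 ** -n_best

# Invert h = (3*eps/M)^(1/3) to back out eps = M*h^3/3.
eps_estimate = M * h_best ** 3 / 3
print(n_best, eps_estimate)
```

The scan locates the step size where the error stops shrinking and starts growing again, and the last line turns that observed $h$ into a ballpark $\epsilon$.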

Here are a couple of examples: one uses Excel and one uses MATLAB. I used $f(x) = e^x$ at $x_0 = 1$; of course $f'(1) = e = 2.71828...$ and $M = 3$ is reasonable here (just a tiny bit off, since $f^{(3)}(x) = e^x \approx 2.718$ near $x_0 = 1$). I did the 3-point estimation calculation for various values of $h$ and saw where the error started to increase again.

Here is the Excel output for the estimate at $x_0 = 1$ and at a second base point, respectively. In each case $h$ runs through the successive powers $10^{-1}, 10^{-2}, 10^{-3}, \dots$

In the $x_0 = 1$ case, we see that the error starts to increase again at about $h = 10^{-6}$; the same sort of thing appears to happen at the second base point.

So, in the first case, $\epsilon \approx \frac{M h^3}{3} = \frac{3 (10^{-6})^3}{3} = 10^{-18}$; it comes out to roughly the same order of magnitude at the second base point.

Note: one can also approach $h = 0$ by using powers of $\frac{1}{2}$ instead; something interesting happens in the $x_0 = 1$ case; the other case gives results similar to what we’ve shown. Reason (I think): 1 is easy to represent exactly in base 2 and the powers of $\frac{1}{2}$ can be represented exactly, so $x_0 \pm h$ is formed without representation error.
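The representability claim is easy to check directly in IEEE-754 double precision; a quick Python illustration (mine, not from the spreadsheet):

```python
# Powers of 1/2 (and the number 1) have exact binary floating point
# representations, so arithmetic with them incurs no representation error.
exact = sum([0.125] * 8)      # 8 * (1/2)^3 -- every term is exact
inexact = sum([0.1] * 10)     # 0.1 has no finite binary expansion

print(exact == 1.0)           # True: no error accumulated
print(inexact == 1.0)         # False: representation error accumulated

# Forming x0 + h with x0 = 1 and h a power of 1/2 is also exact:
h = 2.0 ** -20
print((1.0 + h) - 1.0 == h)   # True: 1 + 2^-20 is exactly representable
```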

Now we turn to MATLAB and here we do something slightly different: we graph the error for different values of $h$. Since the values of $h$ are very small, we use a $\log_{10}$ scale by setting $h = 10^{-N}$ and doing the following (approximating $f'(1)$ for $f(x) = e^x$):

$E(N) = \left| e - \frac{e^{1 + 10^{-N}} - e^{1 - 10^{-N}}}{2 \cdot 10^{-N}} \right|$. By design, $E(N) \geq 0$. The graph looks like:

Now, the small error scale makes things hard to read, so we turn to using the log scale, this time on the error axis: let $LE = -\log_{10}(E)$ and run `plot(N, LE)`:

and sure enough, you can see where the peak is: at about $N = 6$, which is the same as the Excel result.
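For readers without MATLAB, the same picture can be produced in Python with matplotlib (a sketch of the setup above: $f(x) = e^x$, $x_0 = 1$, $h = 10^{-N}$; the file name is mine):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

N = np.arange(1, 15)              # h = 10^-N for N = 1, ..., 14
h = 10.0 ** -N
x0 = 1.0

# 3-point midpoint estimates of d/dx e^x at x0 = 1 and their errors.
est = (np.exp(x0 + h) - np.exp(x0 - h)) / (2 * h)
E = np.abs(np.e - est)

# Log scale on the error axis: the best h shows up as a peak.
LE = -np.log10(E)
plt.plot(N, LE, marker="o")
plt.xlabel("N  (h = 10^-N)")
plt.ylabel("-log10(error)")
plt.savefig("midpoint_error.png")

print(N[np.argmax(LE)])           # N at the peak, i.e. the best step size
```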