College Math Teaching

April 4, 2014

Gaussian Quadrature and Legendre polynomials

In a previous post, I discussed the Legendre polynomials. We now know that if P_n is the n’th Legendre polynomial then:

1. If k < n, \int^{1}_{-1}P_k(x)P_n(x)dx = 0 .
2. If Q(x) has degree k < n then \int^{1}_{-1}Q(x)P_n(x)dx = 0 .
3. P_n(x) has n simple roots in (-1,1) which are symmetric about 0.
4. P_n(x) is an odd function if n is odd and an even function if n is even.
5. P_0, P_1, P_2,....P_n form an orthogonal basis for the vector space of polynomials of degree n or less.

To make these notes complete, I’d like to review the Lagrange Interpolating polynomial:

Given a set of points (x_0, y_0), (x_1, y_1), .....(x_n, y_n) where the x_0, x_1, ...x_n are all distinct, the Lagrange polynomial through these points (called “nodes”) is defined to be:
L_n(x) = y_0 \frac{(x-x_1)(x-x_2)....(x-x_n)}{(x_0 -x_1)(x_0-x_2)...(x_0-x_n)} +y_1 \frac{(x-x_0)(x-x_2)...(x-x_n)}{(x_1-x_0)(x_1-x_2)....(x_1-x_n)} + ....y_n\frac{(x-x_0)(x-x_1)...(x-x_{n-1})}{(x_n-x_0)(x_n-x_1)....(x_n-x_{n-1})}

It is easy to see that L_n(x_i) = y_i because the coefficient polynomial attached to y_i equals 1 at x_i while the other coefficient polynomials vanish there. We call the coefficient polynomials L_{n,i}(x) = \Pi^n_{j=0, j \ne i}\frac{(x-x_j)}{(x_i-x_j)}

It can be shown by differentiation that if f is a function that has n+1 continuous derivatives on some open interval containing [a,b] and if L_n(x) is the Lagrange polynomial running through the nodes (x_i, f(x_i)) where all of the x_i \in [a,b] , then the maximum of |f(x) - L_n(x)| on [a,b] is bounded by |K f^{(n+1)}(\omega) | where \omega \in [a,b] (here K depends on the nodes but not on f). So if the (n+1)'st derivative of f is zero, then the interpolation is exact.

So, we introduce some notation: let P_m(x) be the m'th Legendre polynomial, let r_1, r_2,...r_m be its roots, and let L_{m,i}(x) = \Pi^m_{j=1, j \ne i}\frac{(x-r_j)}{(r_i-r_j)} be the i'th coefficient polynomial for the Lagrange polynomial through nodes whose x-coordinates are r_1, r_2,...r_m (the coefficient polynomials do not depend on the y-values).

Now let P(x) be any polynomial of degree less than m. First: the Lagrange polynomial L_m through the m nodes (r_1, P(r_1)), (r_2, P(r_2)), ...(r_m, P(r_m)) represents P(x) exactly, as the error term is a multiple of the m'th derivative of P(x) , which is zero.

Therefore \int^{1}_{-1} P(x) dx = \int^{1}_{-1} L_m (x)dx = \int^{1}_{-1} P(r_1 )L_{m,1}(x) +P(r_2 )L_{m,2}(x) + ....P(r_m )L_{m,m}(x) dx = \sum^{m}_{i=1} P(r_i )\int^1_{-1} L_{m,i}(x) dx

Now here is the key: \int^1_{-1} L_{m,i}(x) dx only depends on the Legendre polynomial being used and NOT on P(x) .

Hence we have a quadrature formula that is exact for all polynomials of degree less than m .

But, thanks to the division algorithm, we can do even better. We show that this quadrature scheme is exact for all polynomials P whose degree is less than 2m . This is where the orthogonality of the Legendre polynomials will be used.

Let P be a polynomial of degree less than 2m . Then by the division algorithm, P(x) = Q(x)P_m(x) + R(x) where BOTH Q(x) and R(x) have degree less than m.

Now note the following: P(r_i) = Q(r_i)P_m(r_i) + R(r_i) = R(r_i) ; this is where the fact that the r_i are the roots of the Legendre polynomial matters.
Then if we integrate: \int^1_{-1} P(x) dx = \int^{1}_{-1} Q(x)P_m(x) dx + \int^{1}_{-1} R(x)dx . Now because of property 2 above, \int^{1}_{-1} Q(x)P_m(x)dx =0 .

Now because the degree of R is less than m , \int^1_{-1}R(x) dx = \sum^{m}_{i=1} R(r_i )\int^1_{-1} L_{m,i}(x) dx =\sum^{m}_{i=1} P(r_i )\int^1_{-1} L_{m,i}(x) dx

This means: this quadrature scheme using the Legendre polynomials of degree m is EXACT for all polynomials of degree less than 2m.

Of course, one must know the roots r_i and the corresponding \int^1_{-1} L_{m,i}(x) dx but these have been tabulated.

Why this is exciting:

Let’s try this technique on: \int^1_{-1} e^x dx with, say, m = 5. We know that the exact answer is e-e^{-1} . But using the quadrature method with a spreadsheet:
[Image: gaussquadexample (spreadsheet carrying out the 5-point quadrature)]

The roots r_i are in column A, rows 5-9. The weights (coefficients; the \int^1_{-1} L_{m,i}(x) dx ) are in column B, rows 5-9. In column F we apply the function e^x to the roots in column A; the products of the weights with e^{r_i} are in column G, and then we do a simple sum to get a very accurate approximation to the integral.
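For anyone who would rather check the spreadsheet against software: here is a minimal sketch in Python, leaning on numpy's leggauss routine, which returns exactly the tabulated roots r_i and weights \int^1_{-1} L_{m,i}(x) dx discussed above.

```python
# A sketch of the spreadsheet computation using numpy's Gauss-Legendre data.
import numpy as np

m = 5
roots, weights = np.polynomial.legendre.leggauss(m)  # the r_i and the weights
approx = np.sum(weights * np.exp(roots))             # sum of weight * e^(r_i)
exact = np.e - 1 / np.e
print(approx, exact, abs(approx - exact))  # the error is on the order of 1e-9
```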

The accuracy: this is due to the quadrature method being EXACT on the terms of the series e^{x} = \sum^{\infty}_{j=0} \frac{x^j}{j!} through degree 9. The inexactness comes from the higher order Taylor terms.

Note: to convert \int^{b}_{a} f(x) dx , one uses the change of variable: u = \frac{2}{b-a} x - \frac{a+b}{b-a} to convert this to \frac{(b-a)}{2}\int^{1}_{-1} f(u \frac{(b-a)}{2}+\frac{a+b}{2})du


April 1, 2014

Legendre Polynomials: elementary linear algebra proof of orthogonality

In our numerical analysis class, we are coming up on Gaussian Quadrature (a way of finding a numerical estimate for integrals). Here is the idea: given an interval [a,b] and a positive integer n we’d like to select numbers x_i \in [a,b], i \in \{1,2,3,...n\} and weights c_i so that \int^b_a f(x) dx is estimated by \sum^n_{i=1} c_i f(x_i) and that this estimate is exact for polynomials of degree n or less.

You’ve seen this in calculus classes: for example, Simpson’s rule uses x_1 =a, x_2 = \frac{a+b}{2}, x_3 = b and uses c_1 = \frac{b-a}{6}, c_2 =\frac{2(b-a)}{3}, c_3 =\frac{b-a}{6} and is exact for polynomials of degree 3 or less.

So, Gaussian quadrature is a way of finding such a formula that is exact for polynomials of degree less than or equal to a given fixed degree.

I might discuss this process in detail in a later post, but the purpose of this post is to discuss a tool used in developing Gaussian quadrature formulas: the Legendre polynomials.

First of all: what are these things? You can find a couple of good references here and here; note that one can often “normalize” these polynomials by multiplying by various constants.

One way these come up: they are polynomial solutions to the following differential equation: \frac{d}{dx}((1-x^2)\frac{d}{dx} P_n(x)) + n(n+1)P_n(x) = 0 . To see that these solutions are indeed polynomials (for integer values of n ), try the power series method expanded about x = 0 ; the singular points (regular singular points) occur at x = \pm 1 .

Though the Legendre differential equation is very interesting, it isn’t the reason we are interested in these polynomials. What interests us is that these polynomials have the following properties:

1. If one uses the inner product f \cdot g = \int^1_{-1} f(x) g(x) dx for the vector space of all polynomials (real coefficients) of finite degree, these polynomials are mutually orthogonal; that is, if n \ne m, P_m(x) \cdot P_n (x) = \int^1_{-1} P_n(x)P_m(x) dx = 0 .

2. deg(P_n(x)) = n .

Properties 1 and 2 imply that for all integers n , \{P_0(x), P_1(x), P_2(x), ....P_n(x) \} form an orthogonal basis for the vector subspace of all polynomials of degree n or less. It follows immediately that if Q(x) is any polynomial of degree k < m , then Q(x) \cdot P_m(x) = 0 (Q(x) is a linear combination of P_j(x) where each j < m ).

Now these properties can be proved from the very definitions of the Legendre polynomials (see the two references); for example, one can note that P_n is an eigenfunction for the Hermitian operator A(f) = \frac{d}{dx}((1-x^2)\frac{d}{dx} f(x)) with associated eigenvalue -n(n+1) , and such eigenfunctions (for distinct eigenvalues) are orthogonal.

This little result is fairly easy to see: call the Hermitian operator A and let m \ne n, A(P_m) =\lambda_m P_m, A(P_n) =\lambda_n P_n with \lambda_n \ne \lambda_m .

Then consider: (A(P_m) \cdot P_n) = (\lambda_m P_m \cdot P_n) = \lambda_m (P_m \cdot P_n ) . But because A is Hermitian, (A(P_m) \cdot P_n) = (P_m \cdot A(P_n)) = (P_m \cdot \lambda_n P_n) = \lambda_n (P_m \cdot P_n) . Therefore, \lambda_m (P_m \cdot P_n ) = \lambda_n(P_m \cdot P_n) which means that P_m \cdot P_n = 0 .

Of course, one still has to show that this operator is Hermitian and this is what the second reference does (in effect).

The proof that the operator is Hermitian isn’t hard: assume that f, g both meet an appropriate condition (say, twice differentiable on some interval containing [-1,1] ).
Then use integration by parts with dv =\frac{d}{dx} ((1-x^2) \frac{d}{dx}f(x)) dx, u =g(x) : \int^1_{-1} \frac{d}{dx} ((1-x^2) \frac{d}{dx}f(x))g(x) dx = ((1-x^2) \frac{d}{dx}f(x))g(x)|^1_{-1}-\int^1_{-1}(1-x^2)\frac{d}{dx} f(x) \frac{d}{dx}g(x) dx . But ((1-x^2) \frac{d}{dx}f(x))g(x)|^1_{-1} =0 and the result follows by symmetry.

But not every student in my class has had the appropriate applied mathematics background (say, a course in partial differential equations).

So, we will take a more basic, elementary linear algebra approach to these. For our purposes, we’d like to normalize these polynomials to be monic (have leading coefficient 1).

Our approach

Use the Gram–Schmidt process from linear algebra on the basis: 1, x, x^2, x^3, x^4.....

Start with P_0 = 1 and let U_0 = \frac{1}{\sqrt{2}} ; here the U_i are the polynomials normalized to unit length (that is, \int^{1}_{-1} (U_i(x))^2 dx = 1 ). That is, U_i(x) = \sqrt{\frac{1}{\int^1_{-1}(P_i(x))^2 dx}} P_i(x)

Next let P_1(x) =x, U_1(x) = \sqrt{\frac{3}{2}} x

Let P_2(x) = x^2 - \sqrt{\frac{3}{2}} x \int^{1}_{-1} (\sqrt{\frac{3}{2}} x)x^2 dx -\frac{1}{\sqrt{2}}\int^{1}_{-1} \frac{1}{\sqrt{2}}x^2 dx = x^2 -\frac{1}{3} . Note that this is not too bad since many of the integrals are just integrals of an odd function over [-1,1] , which are zero.

So the general definition:

P_{n+1}(x) = x^{n+1} - U_n(x) \int^1_{-1}x^{n+1} U_n(x) dx - U_{n-1}(x)\int^1_{-1} U_{n-1}(x) x^{n+1}dx - .... - \frac{1}{\sqrt{2}}\int^1_{-1} \frac{1}{\sqrt{2}}x^{n+1} dx
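This construction is easy to automate with a computer algebra system. Here is a sketch in Python using sympy; to keep the arithmetic free of square roots it uses the algebraically equivalent projection formula with the un-normalized P_i (dividing by P_i \cdot P_i ) in place of the U_i .

```python
# Gram-Schmidt on 1, x, x^2, ... with the inner product f . g = integral_{-1}^{1} f g dx.
from sympy import Integer, symbols, integrate, expand

x = symbols('x')

def inner(f, g):
    return integrate(f * g, (x, -1, 1))

P = [Integer(1)]  # P_0 = 1
for n in range(1, 5):
    p = x**n
    for q in P:
        p -= q * inner(x**n, q) / inner(q, q)  # subtract the projection onto q
    P.append(expand(p))

print(P)  # [1, x, x**2 - 1/3, x**3 - 3*x/5, x**4 - 6*x**2/7 + 3/35]
```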

What about the roots?
Here we can establish that each P_m(x) has m distinct, real roots in (-1,1) . Suppose P_m(x) has only k < m distinct roots of odd multiplicity in (-1,1) , say x_1, x_2, ...x_k . Let W(x) = (x-x_1)(x-x_2)...(x-x_k) ; note that W has degree k < m . Now P_m(x)W(x) has only roots of even multiplicity in (-1,1) ; hence the polynomial P_m(x)W(x) cannot change sign on [-1,1] . But \int^{1}_{-1} P_m(x)W(x) dx = 0 because W has degree strictly less than m , which is impossible for a nonzero function that does not change sign. So P_m(x) has m distinct roots of odd multiplicity in (-1,1) , and since P_m(x) has degree m, they are all simple roots.
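A quick numerical sanity check of this root property (a sketch using numpy's built-in Legendre class):

```python
# The roots of P_5: five simple roots, symmetric about 0, all inside (-1, 1).
import numpy as np

P5 = np.polynomial.legendre.Legendre.basis(5)
print(P5.roots())
```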

March 25, 2014

An example for “business calculus”

Filed under: applied mathematics, calculus, economics — collegemathteaching @ 10:49 pm

Consider this article by Paul Krugman which contains this graph and this text:

[Image: 032114krugman1 (Krugman's graph: Phillips curve and aggregate demand)]

On one side we have a hypothetical but I think realistic Phillips curve, in which the rate of inflation depends on output and the relationship gets steep at high levels of utilization. On the other we have an aggregate demand curve that depends positively on expected inflation, because this reduces real interest rates at the zero lower bound. I’ve drawn the picture so that if the central bank announces a 2 percent inflation target, the actual rate of inflation will fall short of 2 percent, even if everyone believes the bank’s promise – which they won’t do for very long.

So you see my problem. Suppose that the economy really needs a 4 percent inflation target, but the central bank says, “That seems kind of radical, so let’s be more cautious and only do 2 percent.” This sounds prudent – but may actually guarantee failure.

The purpose: you can see the Phillips curve (which relates unemployment to inflation: the higher the inflation, the lower the unemployment) and a linear-like (ok, an affine) demand curve. You can see the concepts of derivative and concavity as being central to the analysis; that might be useful for these types of students to see.

The error term and approximation of derivatives

I’ll go ahead and work with the common 3 point derivative formulas:

This is the three-point endpoint formula: (assuming that f has 3 continuous derivatives on the appropriate interval)

f'(x_0) = \frac{1}{2h}(-3f(x_0) + 4f(x_0+h) -f(x_0 + 2h)) + \frac{h^2}{3} f^{(3)}(\omega) where \omega is some point in the interval.

The three point midpoint formula is:

f'(x_0) = \frac{1}{2h}(f(x_0 + h) -f(x_0 -h)) -\frac{h^2}{6}f^{(3)}(\omega) .

The derivation of these formulas: can be obtained from either using the Taylor series centered at x_0 or using the Lagrange polynomial through the given points and differentiating.

That isn’t the point of this note though.

The point: how can one demonstrate, by an example, the role the error term plays.

I suggest trying the following: let x vary from, say, 0 to 3 and let h = .25 . Now use the three point derivative estimates on the following functions:

1. f(x) = e^x .

2. g(x) = e^x + 10sin(\frac{\pi x}{.25}) .

Note one: the three point estimates for the derivatives will be exactly the same for both f(x) and g(x) . It is easy to see why.

Note two: the “errors” will be very, very different. It is easy to see why: look at the third derivative term: for f(x) it is e^x , while for g(x) it is e^x - 10(\frac{\pi}{.25})^3 cos(\frac{\pi x}{.25})
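Before looking at the graph, a quick numerical check of both notes (a sketch in Python):

```python
# Three-point midpoint estimates at the sample points x = k*h: identical for f and g.
import numpy as np

h = 0.25
f = lambda x: np.exp(x)
g = lambda x: np.exp(x) + 10 * np.sin(np.pi * x / h)  # g = f at every sample point

def midpoint(fn, x0):
    return (fn(x0 + h) - fn(x0 - h)) / (2 * h)  # three-point midpoint formula

for x0 in np.arange(0.25, 3.0, 0.25):
    gprime = np.exp(x0) + 10 * (np.pi / h) * np.cos(np.pi * x0 / h)  # true g'
    print(f"x={x0:4.2f}  estimate={midpoint(f, x0):8.4f} (same for g: {midpoint(g, x0):8.4f})"
          f"  f'={np.exp(x0):8.4f}  g'={gprime:10.4f}")
```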

The graph shows the story.

[Image: expsinfunction (graphs of f and g)]

Clearly, the 3 point derivative estimates cannot distinguish these two functions for these “sample values” of x , but one can see how in the case of g , the degree to which g wanders away from f is directly related to the higher order derivative of g .

March 9, 2014

Bézier Curves

I am currently teaching Numerical Analysis and using Burden-Faires. The book covers the topics we like, but I feel that the section on splines and parametrized curves is a bit weak; in particular the discussion on Bézier curves is a bit lacking. The pity: the discussion need not be all that deep, and the standard equation for Bézier curves is actually easy to remember.

Also: where the text talks about how the Bézier curve equations differ from the “bare handed parametric cubic spline” that they derive, they don’t explain the reason for the difference.

So, I decided to write these notes. I will have to explain some basic concepts.

The setting: R^n with the usual geometry induced by the usual “dot product”.

Convex Sets in R^n

A set X \subset R^n is said to be convex if for any two points x, y \in X , the straight line segment connecting x to y is also in X ; that is, tx + (1-t)y \in X for all t \in [0,1] .

[Image: Pic_convex1 (illustration of a convex set)]

(from here)

Convex Hull for a set of points

Now suppose one is given a collection of points C= x_0, x_1, x_2, x_3,.... \in R^n . The convex hull H for C is the smallest convex set which contains all of C . That is, if Y is any convex set that contains C , then H \subseteq Y. In the case where the set of points is finite (say, C = \{x_0, x_1, x_2, ....x_n \} ) then H consists of the set of all \sum^{n}_{i = 0} \alpha_i x_i where \alpha_i \ge 0 and \sum^{n}_{i=0} \alpha_i = 1 .

Note: the convex hull for a set of points is, in general, an example of a subset of a vector space that is NOT a vector subspace.

Binomial Theorem and the Bernstein coefficient polynomials

Recall from algebra: if n is a positive integer and a, b numbers (real, complex, or even arbitrary field elements), (a+b)^n = \sum^{n}_{j =0} { n \choose j} a^{n-j} b^{j} , where {n \choose j} = \frac{n!}{(n-j)! j !} . For example, (a+b)^3 = a^3 + 3a^2b + 3ab^2 + b^3 .

Now consider the rather silly looking: 1^n = ((1-t) + t)^n = \sum^n_{j=0}{ n \choose j} (1-t)^{n-j} t^{j} Note that this expression is equal to 1 for ALL values of t and that for t \in [0,1] , each summand { n \choose j} (1-t)^{n-j} t^{j} is positive or zero.

These “coefficient polynomials” { n \choose j} (1-t)^{n-j} t^{j} are called the Bernstein polynomials (or Bernstein basis polynomials) and we denote them as follows: b_{j,n}(t) = { n \choose j} (1-t)^{n-j} t^{j} . We now see that for all t \in [0,1], 0 \le b_{j,n}(t) \le 1 and \sum^n_{j=0}b_{j,n}(t) = ((1-t)+t)^n =1^n =1

Definition of a Bézier curve and some of its properties

Now let P_0, P_1, P_2, ...P_n be a collection of distinct points in R^k . One can think of these points as vectors.
The Bézier curve with control points P_0, P_1, P_2, ...P_n is defined to be B(t)=  \sum^n_{j=0}b_{j,n}(t)P_j, t \in [0,1] .

Properties

B(0) = P_0, B(1) =P_n . This is clear because b_{0,n}(0) = 1, b_{n,n}(1) =1 and for i \notin \{0,1\}, b_{i,n}(0)=b_{i,n}(1) = 0 .

The polygon formed by P_0, P_1, ....P_n is called the control polygon for the Bézier curve.

For all t \in [0,1], B(t) is in the convex hull of P_0, P_1, ...P_n . This is clear because \sum^n_{j=0}b_{j,n}(t) = ((1-t)+t)^n =1^n =1 and each b_{j,n}(t) is nonnegative.

“Guideposts”: the text talks about the “guideposts”: the text looks at a cubic Bézier curve in the plane and uses (x_0, y_0) =P_0, (x_0+ \alpha_0, y_0 + \beta_0) = P_1,  (x_1 - \alpha_1, y_1 - \beta_1)= P_2, (x_1, y_1) =P_3

Now P_1 and P_{n-1} directly affect the (one sided) tangent to the Bézier curve at t=0, t=1 . In fact we will show that if we use the one-sided parametric curve derivative, we see that B'(0) = n(P_1 - P_0), B'(1) = n(P_n - P_{n-1}) . The text calls n the scaling factor and notes that the scaling factor is 3 when n = 3 .

We’ll do the calculations for B'(0), B'(1) for the general degree n Bézier curve using elementary calculus (product rule):

First write B(t) = (1-t)^nP_0 + n(1-t)^{n-1}tP_1 + \sum^{n-2}_{j=2} b_{j,n}(t) P_j + n(1-t)t^{n-1}P_{n-1} + t^n P_n . Now take the derivative and we see:
B'(t) = -n(1-t)^{n-1}P_0 + (n(1-t)^{n-1} - n(n-1)(1-t)^{n-2}t)P_1 + \frac{d}{dt} (\sum^{n-2}_{j=2} b_{j,n}(t) P_j) +(n(n-1)(1-t)t^{n-2}-nt^{n-1})P_{n-1} + nt^{n-1}P_n

Key observation: every term of \frac{d}{dt} (\sum^{n-2}_{j=2} b_{j,n}(t) P_j) has both a factor of t and a factor of (1-t) in it; hence this middle term evaluates to zero when t \in \{0,1\} and is therefore irrelevant to the calculation of B'(0) and B'(1) .

So B'(0) = -nP_0 + nP_1 = n(P_1 - P_0) (the last two terms are zero at t =0 ) and B'(1) = -nP_{n-1} + nP_n = n(P_n - P_{n-1}) (the first two terms are zero at t = 1 ).

It follows that the DIRECTION of the (one sided) tangents at the ends of the Bézier curve depends only on the unit tangent vectors in the direction of P_1 - P_0, P_n - P_{n-1} respectively. Of course, the tangent vector has a magnitude (norm) as well, and that certainly affects the curve.
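Here is a sketch in Python that evaluates B(t) straight from the Bernstein form and checks the endpoint tangent formula with a difference quotient (the control points are hypothetical):

```python
# Evaluate a Bezier curve and verify B'(0) = n(P_1 - P_0) numerically.
import numpy as np
from math import comb

def bezier(points, t):
    # B(t) = sum_j C(n,j) (1-t)^(n-j) t^j P_j
    n = len(points) - 1
    return sum(comb(n, j) * (1 - t)**(n - j) * t**j * np.asarray(p)
               for j, p in enumerate(points))

pts = [(0, 0), (1, 2), (3, 2), (4, 0)]  # a cubic, so n = 3

eps = 1e-7
print((bezier(pts, eps) - bezier(pts, 0)) / eps)  # one-sided difference quotient
print(3 * (np.array(pts[1]) - np.array(pts[0])))  # n(P_1 - P_0) = [3, 6]
```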

(graphic from here)

[Image: cubic02 (examples of cubic Bézier curves)]

Here are some examples of Bézier cubic curves: the points with the open circles are P_0, P_3 and the points that are filled in with gray are the control points P_1, P_2 . The last curve is two Bézier cubics joined together.

Software
The software that I provided writes the cubic Bézier curve as a “conventional” cubic in x, y coordinates: B_{x}(t) = a_3t^3 + a_2t^2 + a_1t + a_0 and B_{y}(t) = b_3t^3 + b_2t^2 + b_1t + b_0 .

February 17, 2014

Aitken acceleration: another “easy” example for the students

In a previous post I showed some spreadsheet data to demonstrate the Aitken acceleration process. Here, I’ll go through an example where a sequence converges linearly: let p_n = L +a^n + b^n where 0 < a < b < 1 . We use the form q_n = \frac{p_{n+2}p_{n} -(p_{n+1})^2}{p_{n+2} - 2p_{n+1} + p_n} (I am too lazy to use the traditional “p-hat” notation). First, note that the denominator works out to a^n(1-a)^2 +b^n(1-b)^2

The numerator is a tiny bit more work: the L^2 terms cancel and as far as the rest:
L(a^{n+2} + b^{n+2}) + L(a^n + b^n) +(a^{n+2}+b^{n+2})(a^n + b^n)-2L(a^{n+1}+b^{n+1})-(a^{n+1}+b^{n+1})^2
which simplifies to a term involving L and one that doesn’t. Here is the term involving L :

L(a^{n+2}-2a^{n+1}  + a^n + b^{n+2} -2b^{n+1} +b^n) = L(a^n(1-a)^2 +b^n(1-b)^2)

which, of course, is just L times the denominator.

Now the terms not involving L: (a^{n+2}+b^{n+2})(a^n+b^n) - (a^{n+1} + b^{n+1})^2 = b^na^n(b^2+a^2-2ab) = b^na^n(b-a)^2

So our fraction is merely

\frac{L((a^n(1-a)^2 +b^n(1-b)^2)) + b^na^n(b-a)^2}{a^n(1-a)^2 +b^n(1-b)^2} = L + \frac{b^na^n(b-a)^2}{a^n(1-a)^2 +b^n(1-b)^2}

This can be rearranged to L + \frac{(b-a)^2}{\frac{(1-a)^2}{b^n} + \frac{(1-b)^2}{a^n}}

Clearly as n goes to infinity, the error goes to zero very quickly. It might be instructive to look at the ratio of the errors for p_n and q_n :

This ratio is
(a^n + b^n)\frac{a^n(1-a)^2 + b^n(1-b)^2}{a^nb^n(b-a)^2} =(a^n +b^n)(\frac{1}{b^n}(\frac{1-a}{b-a})^2 + \frac{1}{a^n}(\frac{1-b}{b-a})^2)

Note that in the right hand factor: both squared factors are fixed and the coefficients go to infinity as n goes to infinity. If one multiplies out, one obtains:

((\frac{a}{b})^n +1)(\frac{1-a}{b-a})^2 +  ((\frac{b}{a})^n +1)(\frac{1-b}{b-a})^2 . In the limit, the first term decreases to (\frac{1-a}{b-a})^2 and the second goes to infinity.

Hence the errors in the accelerated sequence go to zero much faster than those of the original sequence.
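It is easy to watch this happen numerically; here is a sketch with the assumed values a = 1/2, b = 3/4, L = 1:

```python
# Aitken's delta-squared applied to p_n = L + a^n + b^n.
a, b, L = 0.5, 0.75, 1.0
p = [L + a**n + b**n for n in range(12)]

# q_n = (p_{n+2} p_n - p_{n+1}^2) / (p_{n+2} - 2 p_{n+1} + p_n)
q = [(p[n+2] * p[n] - p[n+1]**2) / (p[n+2] - 2 * p[n+1] + p[n]) for n in range(10)]

for n in range(0, 10, 3):
    print(f"n={n}:  p error = {p[n] - L:.3e}   q error = {q[n] - L:.3e}")
```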

February 1, 2014

Numerical methods class: round off error using geometric series and spreadsheets…

Of course computers store numbers in binary; that is, numbers are represented by \sum^n_{k=-m} a_k 2^{k} = 2^{n}+a_{n-1}2^{n-1} +....+a_1 2 + a_0 + a_{-1}\frac{1}{2} + ...+a_{-m} \frac{1}{2^m} where each a_j \in \{0,1\} (of course the leading coefficient is 1).

We should probably “warm up” by showing some binary expansions. First: someone might ask “how do I know that a number even HAS a binary expansion?” The reason: the dyadic rationals are dense in the number line. So just consider a set of nested partitions of the number line, where each partition in the family has width \frac{1}{2^k} , and it isn’t hard to see a sequence that converges to a given number. That sequence leads to the binary expansion.

Example: What is the binary expansion for 10.25?

Answer: Break 10.25 down into 10 + 0.25.

10 = 8 + 2 = 2^3 + 2^1 so the “integer part” of the binary expansion is 1010 = 2^3 + 0*2^2 + 2^1 +0*2^0 .

Now for the “fraction part”: 0.25 = \frac{1}{4} = \frac{1}{2^2} = 0*\frac{1}{2} + \frac{1}{2^2} = .01 in binary.

Hence 10.25 in base 10 = 1010.01 in binary.

So what about something harder? Integers are easy, so let’s look at \frac{3}{7} = \sum^{\infty}_{k=1}a_k\frac{1}{2^k} where each a_k \in \{0,1\} .

Clearly a_1 =0 since \frac{1}{2} is greater than \frac{3}{7} . Now multiply both sides of \frac{3}{7} = \sum^{\infty}_{k=1}a_k\frac{1}{2^k} by 4 to obtain \frac{12}{7} = a_2 + a_3 \frac{1}{2} + a_4 \frac{1}{4} + ... ; since 1 \le \frac{12}{7} < 2 , we see a_2 = 1 and \frac{12}{7} = 1 + a_3 \frac{1}{2} + a_4 \frac{1}{4} + ...

Now subtract 1 from both sides to get \frac{5}{7} =  a_3 \frac{1}{2} + a_4 \frac{1}{4} + ... Here a_3 = 1 , so multiply both sides by 2, subtract 1, and get \frac{3}{7} =   a_4 \frac{1}{2} + a_5 \frac{1}{4} ...

Note that we are back where we started. The implication is that \frac{3}{7} = (\frac{1}{4} + \frac{1}{8})(1 + \frac{1}{2^3} + \frac{1}{2^6} +....) and so the binary expansion for \frac{3}{7} is .\overline{011}

Of course there is nothing special about \frac{3}{7} ; a moment’s thought reveals that if one starts with \frac{p}{q} where p is less than q , the process stops (we arrive at zero on the left hand side) or we return to some other \frac{m}{q} where m is less than q ; since there are only a finite number of m we have to arrive at a previous fraction eventually (pigeonhole principle). So every rational number has a terminating or repeating binary decimal expansion.
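The multiply-and-subtract process above is easy to automate; a sketch:

```python
# Binary expansion of p/q (assuming 0 < p < q) by repeated doubling.
from fractions import Fraction

def binary_digits(p, q, nbits=12):
    x, bits = Fraction(p, q), []
    for _ in range(nbits):
        x *= 2
        bits.append(int(x >= 1))
        if x >= 1:
            x -= 1
    return ''.join(map(str, bits))

print(binary_digits(3, 7))  # 011011011011, i.e. the repeating block 011
```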

Now it might be fun to check that we did the expansion correctly; the geometric series formula will help:

(\frac{1}{4} + \frac{1}{8})(1 + \frac{1}{2^3} + \frac{1}{2^6} +....) = (\frac{3}{8})(\frac{1}{1-\frac{1}{2^3}})=\frac{3}{8-1} =\frac{3}{7}

Now about how the computer stores a number: a typical storage scheme is what I call the 1-11-52 scheme. There are 64 bits used for the number. The first is for positive or negative. The next 11 bits are for the exponent. Now 2^{11} - 1 = 2047 so technically the stored exponent could be as large as 2047 . But we want to be able to store small numbers as well, so 1023 is subtracted from this number, so we can get exponents that range from 1024 down to -1023 . Also remember these are exponents for base 2.

This leaves 52 digits for the mantissa; that is all the computer can store. This is the place where one can look at round off error.

Let’s see two examples:

10.25 is equal to 2^3 (1 + \frac{1}{4} + \frac{1}{32}) . The first bit is a 0 because the number is positive. The exponent is 3 and so is represented by 3 + 1023 = 1026 , which is 10000000010 . Now the mantissa is assumed to start with 1 every time; hence we get 0100100000000000000000000000000000000000000000000000.

So, let’s look at the \frac{3}{7} example. The first bit is 0 since the number is positive. We write the binary expansion as 2^{-2}(1+\frac{1}{2} + \frac{1}{8} + \frac{1}{16} + ....) . The exponent is -2 , which is stored as -2 + 1023 = 1021 = 01111111101 . Now the fun starts: we need an infinite number of bits for an exact representation but we have 52. The leading 1 of the mantissa is not stored, so the 52 stored bits carry the expansion .\overline{011} through its 54th binary digit, that is, through 18 complete repetitions of the block 011. So the number that serves as a proxy for \frac{3}{7} is really \frac{3}{7} - \frac{3}{8}(\frac{1}{8})^{18}\sum^{\infty}_{k=0} (\frac{1}{8})^k = \frac{3}{7}- \frac{3}{8}(\frac{1}{8})^{18}(\frac{1}{1 - \frac{1}{8}}) = \frac{3}{7} - \frac{3}{7}(\frac{1}{8})^{18} = \frac{3}{7}(1-(\frac{1}{8})^{18})
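This prediction can be checked exactly with Python's Fraction type, which recovers the dyadic rational a float actually stores. (One caveat: the hardware rounds to nearest rather than truncating, but for \frac{3}{7} the first dropped bit is a 0, so rounding and truncation agree here.)

```python
# Check that the double-precision proxy for 3/7 is (3/7)(1 - (1/8)^18).
from fractions import Fraction

stored = Fraction(3/7)  # the exact value of the double closest to 3/7
predicted = Fraction(3, 7) * (1 - Fraction(1, 8)**18)
print(stored == predicted)             # True
print(float(Fraction(3, 7) - stored))  # roughly 2.4e-17
```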

Ok, about spreadsheets:

Of course, it is difficult to use base 2 directly to demonstrate round off error, so many texts use regular decimals and instruct the students to perform calculations using “n-digit rounding” and “n-digit chopping” to show how errors can build up with iterated operations.

One common example: use the quadratic formula to find the roots of a quadratic equation. Of course the standard formula for the roots of ax^2 + bx + c =0 is \frac{-b \pm \sqrt{b^2 -4ac}}{2a} and there is an alternate formula \frac{-2c}{b \pm\sqrt{b^2 -4ac}} that leads to less round off error in the case when the “alternative formula” denominator has large magnitude.
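Even in full double precision the effect is visible; here is a sketch with hypothetical coefficients chosen so that b^2 is huge compared to 4ac (in 4-digit arithmetic the same failure shows up with far tamer coefficients):

```python
# Standard vs. alternative quadratic formula for the root of smaller magnitude.
import math

a, b, c = 1.0, -1.0e8, 1.0  # roots are approximately 1e8 and 1e-8
disc = math.sqrt(b * b - 4 * a * c)

small_std = (-b - disc) / (2 * a)  # subtracts two nearly equal numbers
small_alt = (2 * c) / (-b + disc)  # rationalized: large-magnitude denominator
print(small_std, small_alt)        # the standard formula loses most of its digits
```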

Now the computations, when done by hand, can be tedious and more of an exercise in repeated calculator button punching than anything else.

But a spreadsheet can help, provided one can put the “ROUND(N, M)” and “TRUNC(N, M)” commands to good use. In each case, N is the number to be rounded or truncated and M is the decimal place.

A brief review of these commands: the ROUND command takes a decimal and rounds it to the nearest \frac{1}{10^M} place; the TRUNC command truncates the decimal at the \frac{1}{10^M} place.
The key: M can be any integer; positive, negative or zero .

Examples: TRUNC(1234.5678, 2) = 1234.56, TRUNC(1234.5678, -1) = 1230, ROUND(1234.5678, 3) = 1234.568, ROUND(1234.5678, -2) = 1200.

Formally: if one lets a_i \in \{0, 1, 2, ...9\} for all i and if x = \sum_{i=-m}^{k} a_i (10)^i = a_k 10^k + a_{k-1} 10^{k-1} +....+a_1 10 + a_0 + a_{-1}10^{-1} +...+a_{-m}10^{-m} then TRUNC(x, M) = \sum_{i=-M}^k a_i (10)^i = a_k 10^k + a_{k-1} 10^{k-1}+...+ a_{-M}10^{-M}

So if M is negative, this process stops at a positive power of 10.

That observation is the key to getting the spreadsheet to round or truncate to a desired number of significant digits.

Recall that the base 10 log picks off the highest power of 10 that appears in a number; for example log10(1234.5678) = 3.xxxxx. So let’s exploit that: we can modify our commands as follows:

for TRUNC we can use TRUNC(N, M - 1 - INT(LOG10(ABS(N)))) and for ROUND we can use ROUND(N, M - 1 - INT(LOG10(ABS(N)))). Subtracting the 1 and the integer part of the base 10 log moves the “start point” of the truncation or the rounding to the left of the decimal and M moves the truncation point back to the right to the proper place.

Of course, this can be cumbersome to type over and over again, so I recommend putting the properly typed formula in some “out of the way” cell and using “cut and paste” to paste the formula in the proper location for the formula.
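For what it is worth, the same trick is a one-liner in Python; a sketch (the spreadsheet's INT corresponds to math.floor here):

```python
# n-significant-digit rounding and chopping, mirroring the spreadsheet formulas.
import math

def sig_round(x, m):
    return round(x, m - 1 - math.floor(math.log10(abs(x))))

def sig_trunc(x, m):
    shift = 10 ** (m - 1 - math.floor(math.log10(abs(x))))
    return math.trunc(x * shift) / shift

print(sig_round(1234.5678, 4), sig_trunc(1234.5678, 4))  # 1235.0 1234.0
```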

Here is an example:
This spreadsheet shows the “4 digit rounding” calculation for the roots of the quadratics x^2 - (60 + \frac{1}{60})x + 10 and 1.002x^2 - 11.01x +.01265 respectively.

[Image: roundspreadsheet (the 4-digit rounding computations)]

(click for a larger view).

Note that one has to make a cell for EVERY calculation because we have to use 4 digit arithmetic at EACH step. Note also the formula pasted as text in the upper right hand cell.

One can cut and paste as a cell formula as shown below:

[Image: round2 (the formula pasted as a cell formula)]
Here one uses cell references as input values.

Here is another example:

[Image: round3 (another example, referencing previous cells)]

Note the reference to the previous cells.

January 20, 2014

A bit more prior to admin BS

One thing that surprised me about the professor’s job (at a non-research intensive school; we have a modest but real research requirement, but mostly we teach): I never knew how much time I’d spend doing tasks that have nothing to do with teaching and scholarship. Groan….how much of this do I tell our applicants that arrive on campus to interview? 🙂

But there is something mathematical that I want to talk about; it is a follow up to this post. It has to do with what string theorists tell us: \sum^{\infty}_{k = 1} k = -\frac{1}{12} . Needless to say, they are using a non-standard definition of “value of a series”.

Where I think the problem is: when we hear “series” we think of something related to the usual process of addition. Clearly, this non-standard assignment doesn’t relate to addition in the way we usually think about it.

So, it might make more sense to think of a “generalized series” as a map from the set of sequences of real numbers (or: the infinite dimensional real vector space) to the real numbers; the usual “limit of partial sums” definition has some nice properties with respect to sequence addition, scalar multiplication and with respect to a “shift operation” and addition, provided we restrict ourselves to a suitable collection of sequences (say, those whose traditional sum of components are absolutely convergent).

So, this “non-standard sum” can be thought of as a map f:V \rightarrow R^1 where f(\{1, 2, 3, 4, 5,....\}) = -\frac{1}{12} . That is a bit less offensive than calling it a “sum”. 🙂

November 12, 2013

Why I teach multiple methods for the inverse Laplace Transform.

I’ll demonstrate with a couple of examples:

y''+4y = sin(2t), y(0) = y'(0) = 0

If we use the Laplace transform, we obtain: (s^2+4)Y = \frac{2}{s^2+4} which leads to Y = \frac{2}{(s^2+4)^2} . Now we’ve covered how to do this without convolutions. But the convolution integral is much easier: write Y = \frac{2}{(s^2+4)^2} = \frac{1}{2} \frac{2}{s^2+4}\frac{2}{s^2+4} which means that y = \frac{1}{2}(sin(2t)*sin(2t)) = \frac{1}{2}\int^t_0 sin(2u)sin(2t-2u)du = -\frac{1}{4}tcos(2t) + \frac{1}{8}sin(2t) .

Note: if the integral went too fast for you and you don’t want to use a calculator, use sin(2t-2u) = sin(2t)cos(2u) - cos(2t)sin(2u) and the integral becomes \frac{1}{2}\int^t_0 sin(2t)cos(2u)sin(2u) -cos(2t)sin^2(2u)du =

\frac{1}{2} ((sin(2t))\frac{1}{4}sin^2(2u)|^t_0 - cos(2t)(\frac{1}{2})( u - \frac{1}{4}sin(4u))|^t_0) =

\frac{1}{8}sin^3(2t) - \frac{1}{4}tcos(2t) +\frac{1}{16}sin(4t)cos(2t) =

\frac{1}{8}(sin^3(2t) +sin(2t)cos^2(2t))-\frac{1}{4}tcos(2t)

= \frac{1}{8}sin(2t)(sin^2(2t) + cos^2(2t))-\frac{1}{4}tcos(2t) = -\frac{1}{4}tcos(2t) + \frac{1}{8}sin(2t)

Now if we had instead: y''+4y = sin(t), y(0)=0, y'(0) = 0

The Laplace transform of the equation becomes (s^2+4)Y = \frac{1}{s^2+1} and hence Y = \frac{1}{(s^2+1)(s^2+4)} . One could use the convolution method but partial fractions works easily: one can use the calculator (“algebra” plus “expand”) or:

\frac{A+Bs}{s^2+4} + \frac{C + Ds}{s^2+1} =\frac{1}{(s^2+4)(s^2+1)} . Get a common denominator and match numerators:

(A+Bs)(s^2+1) + (C+Ds)(s^2+4)  = 1 . One can use several methods to resolve this: here we will use s = i to see (C + Di)(3) = 1 which means that D = 0 and C = \frac{1}{3} . Now use s = 2i to obtain (A + 2iB)(-3) = 1 which means that B = 0, A = -\frac{1}{3} , so Y = \frac{1}{3} (\frac{1}{s^2+1} - \frac{1}{s^2+4}) and y = \frac{1}{3} (sin(t) - \frac{1}{2} sin(2t)) = \frac{1}{3}sin(t) -\frac{1}{6}sin(2t)
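Both answers are easy to double-check with a computer algebra system; a sketch using sympy's dsolve:

```python
# Verify both solutions symbolically.
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')
ics = {y(0): 0, y(t).diff(t).subs(t, 0): 0}

print(sp.dsolve(y(t).diff(t, 2) + 4*y(t) - sp.sin(2*t), y(t), ics=ics).rhs)
# sin(2*t)/8 - t*cos(2*t)/4
print(sp.dsolve(y(t).diff(t, 2) + 4*y(t) - sp.sin(t), y(t), ics=ics).rhs)
# sin(t)/3 - sin(2*t)/6
```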

So, sometimes the convolution leads us to the answer quicker than other techniques and sometimes other techniques are easier.

Of course, the convolution method has utility beyond the Laplace transform setting.

November 6, 2013

A weird Laplace Transform (a resonance equation)

Filed under: applied mathematics, calculus, differential equations, Laplace transform — collegemathteaching @ 12:01 am

Ok, we have y'' + y = sin(t), y(0) =0, y'(0) = 0 . Now we can solve this by, say, undetermined coefficients and obtain y = \frac{1}{2}sin(t) -\frac{1}{2}tcos(t)

But what happens when we try Laplace Transforms? It is easy to see that the Laplace transform of the equation yields (s^2+1)Y(s)=\frac{1}{s^2+1} which yields Y(s) =\frac{1}{(s^2+1)^2}

So, how do we take the inverse Laplace transform of \frac{1}{(s^2+1)^2}?

Here is one way: we recognize L(tf(t)) = -\frac{d}{ds}F(s) where L(f(t)) = F(s) .

So, we might try integrating: \int \frac{1}{(s^2+1)^2} ds .

(no cheating with a calculator! 🙂 )

In calculus II, we do: s = tan(\theta), ds = sec^2(\theta) d\theta .

Then \int \frac{1}{(s^2+1)^2} ds is transformed into \int \frac{sec^2(\theta)}{sec^4 \theta} d\theta = \int cos^2(\theta) d \theta = \int \frac{1}{2} + \frac{1}{2}cos(2 \theta) d \theta = \frac{1}{2} \theta + \frac{1}{4}sin(2 \theta) (plus a constant, of course).

We now use sin(2\theta) = 2sin(\theta)cos(\theta) to obtain \frac{1}{2} \theta + \frac{1}{4}sin(2 \theta) = \frac{1}{2} \theta + \frac{1}{2} sin(\theta)cos(\theta) + C .

Fair enough. But now we have to convert back to s . We use tan(\theta) = s to obtain cos(\theta) = \frac{1}{\sqrt{s^2+1}}, sin(\theta) = \frac{s}{\sqrt{s^2+1}}

So \frac{1}{2} \theta + \frac{1}{2} sin(\theta)cos(\theta) converts to \frac{1}{2}arctan(s) +\frac{1}{2}\frac{s}{s^2+1} + C = \int Y(s) ds . Now we use the fact that as s goes to infinity, \int Y(s) ds has to go to zero; this means C = -\frac{\pi}{4} , so we write the arctangent part as \frac{1}{2}(arctan(s) - \frac{\pi}{2}) .

So what is the inverse Laplace transform of \int Y(s) ds ?

Clearly, \frac{1}{2}\frac{s}{s^2+1} gets inverse transformed to \frac{1}{2}cos(t) , so the inverse transform for this part of Y(s) is -\frac{t}{2}cos(t).

But what about the other part? \frac{d}{ds} (arctan(s) - \frac{\pi}{2}) = \frac{1}{1+s^2} so \frac{1}{1+s^2} = -L(tf(t)) which implies that tf(t) = -sin(t) so -tf(t) = sin(t) and so the inverse Laplace transform for this part of Y(s) is \frac{1}{2} sin(t) and the result follows.

Put another way: L(\frac{sin(t)}{t}) =- arctan(s) + C but since we want 0 when s = \infty, C = \frac{\pi}{2} and so L(\frac{sin(t)}{t}) = \frac{\pi}{2}- arctan(s) = arctan(\frac{1}{s}) .
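As a final check, sympy will invert Y(s) directly; a sketch (depending on version, sympy may attach a Heaviside(t) factor):

```python
# Inverse Laplace transform of 1/(s^2+1)^2.
import sympy as sp

s, t = sp.symbols('s t', positive=True)
print(sp.simplify(sp.inverse_laplace_transform(1/(s**2 + 1)**2, s, t)))
# (sin(t) - t*cos(t))/2, matching -t*cos(t)/2 + sin(t)/2 above
```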

