College Math Teaching

June 7, 2016

Infinite dimensional vector subspaces: an accessible example that W-perp-perp isn’t always W


This is based on an article by Irving Katz: "An Inequality of Orthogonal Complements," Mathematics Magazine, Vol. 65, No. 4, October 1992, pp. 258-259.

In finite dimensional inner product spaces, we often prove that (W^{\perp})^{\perp} = W . My favorite way to do this: I introduce Gram-Schmidt early and find an orthogonal basis for W , then extend it to an orthogonal basis for the whole space; the added basis elements (those not in the basis for W ) automatically form a basis for W^{\perp} . Then one easily deduces that (W^{\perp})^{\perp} = W (and that any vector can easily be broken into a projection onto W , a projection onto W^{\perp} , etc.).

But this sort of construction runs into difficulty when the space is infinite dimensional; one points out that the vector addition operation is defined only for the addition of a finite number of vectors. No, we don’t deal with Hilbert spaces in our first course. 🙂

So what is our example? I won’t belabor the details as they can make good exercises whose solution can be found in the paper I cited.

So here goes: let V be the vector space of all polynomials. Let W_0 be the subspace of even polynomials (all terms have even degree), W_1 the subspace of odd polynomials (all terms have odd degree), and note that V = W_0 \oplus W_1 .

Let the inner product be \langle p(x), q(x) \rangle = \int^1_{-1}p(x)q(x) dx . Now it isn’t hard to see that (W_0)^{\perp} = W_1 and (W_1)^{\perp} = W_0 .

Now let U denote the subspace of polynomials whose terms all have degree a multiple of 4 (e. g. 1 + 3x^4 - 2x^8 ), and note that W_1 \subset U^{\perp} : the product of an odd polynomial and an element of U is an odd function, so its integral over [-1,1] vanishes.

To see the reverse inclusion, note that if p(x) \in U^{\perp} , write p(x) = p_0 + p_1 where p_0 \in W_0, p_1 \in W_1 . Since p_1(x)x^{4k} is an odd function, \int^1_{-1} p_1(x)x^{4k} dx = 0 automatically for every k \in \{0, 1, 2, ... \} . So we see that it must be the case that \int^1_{-1} p_0(x)x^{4k} dx = 2\int^1_0 p_0(x)x^{4k} dx = 0 as well.

Now we can write: p_0(x) = c_0 + c_1 x^2 + \cdots + c_n x^{2n} and therefore \int^1_0 p_0(x) x^{4k} dx = c_0\frac{1}{4k+1} + c_1 \frac{1}{4k+3} + \cdots + c_n \frac{1}{4k+2n+1} = 0 for every k \in \{0, 1, 2, ... \} ; we will use the equations for k \in \{0, 1, ..., n \} .

Now I wish I had a more general proof of this. But these equations (one for each k ) lead to a system of equations:

\left( \begin{array}{ccccc}  1 & \frac{1}{3} & \frac{1}{5} & \cdots & \frac{1}{2n+1} \\  \frac{1}{5} & \frac{1}{7} & \frac{1}{9} & \cdots & \frac{1}{2n+5} \\  \vdots & \vdots & \vdots & & \vdots \\  \frac{1}{4n+1} & \frac{1}{4n+3} & \frac{1}{4n+5} & \cdots & \frac{1}{6n+1} \end{array} \right) \left( \begin{array}{c}  c_0 \\  c_1 \\ \vdots \\  c_n \end{array} \right) = \left( \begin{array}{c}  0 \\  0 \\ \vdots \\  0 \end{array} \right) (the entry in row k , column j is \frac{1}{4k+2j+1} )

It turns out that the given square matrix is non-singular (see page 92, no. 3 of Polya and Szego: Problems and Theorems in Analysis, Vol. 2, 1976) and so all the c_j = 0 . This means p_0 = 0 and so U^{\perp} = W_1 . Consequently (U^{\perp})^{\perp} = (W_1)^{\perp} = W_0 , which strictly contains U .
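(For the skeptical reader: here is a quick numerical sanity check, a sketch of my own in Python rather than anything from Katz's paper, that builds the matrix with entries \frac{1}{4k+2j+1} in exact rational arithmetic and confirms a nonzero determinant for small n :)

from fractions import Fraction

def coef_matrix(n):
    # (n+1) x (n+1) matrix whose row-k, column-j entry is 1/(4k + 2j + 1)
    return [[Fraction(1, 4 * k + 2 * j + 1) for j in range(n + 1)]
            for k in range(n + 1)]

def det(m):
    # determinant by cofactor expansion along the first row; fine for small n
    if len(m) == 1:
        return m[0][0]
    total = Fraction(0)
    for j in range(len(m)):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

for n in range(5):
    print(n, det(coef_matrix(n)) != 0)   # expect True each time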

Anyway, the conclusion leaves me a bit cold. It seems as if I should be able to prove: let f be some, say, C^{\infty} function on [0,1] where \int^1_0 x^{2k} f(x) dx = 0 for all k \in \{0, 1, ... \} ; then f = 0 . I haven’t found a proof as yet…perhaps it is false?

May 20, 2016

Student integral tricks…

Ok, classes ended last week and my brain is way out of math shape. Right now I am contemplating how to show that the complements of this object

[Figure: bingsling]

and of the object depicted below are NOT homeomorphic.

[Figure: brinknot]

I can do this in this very specific case; I am interested in seeing what happens if the “tangle pattern” is changed. Are the complements of these two related objects *always* topologically different? I am reasonably sure yes, but my brain is rebelling at doing the hard work to nail it down.

Anyhow, finals are graded and I am usually treated to one unusual student trick. Here is one for the semester:

\int x^2 \sqrt{x+1} dx =

Now I was hoping that they would say u = x +1 \rightarrow u-1 = x \rightarrow x^2 = u^2-2u+1 , in which case the integral is transformed to \int u^{\frac{5}{2}} - 2u^{\frac{3}{2}} + u^{\frac{1}{2}} du , which is easy to do.
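(If you want a machine check of that substitution, here is a minimal sympy sketch of my own; the variable names are mine:)

import sympy as sp

x = sp.symbols('x', positive=True)
u = x + 1
# antiderivative produced by the substitution u = x + 1:
F = sp.Rational(2, 7) * u**sp.Rational(7, 2) \
    - sp.Rational(4, 5) * u**sp.Rational(5, 2) \
    + sp.Rational(2, 3) * u**sp.Rational(3, 2)
# differentiating should recover the original integrand x^2 * sqrt(x + 1):
print(sp.simplify(sp.diff(F, x) - x**2 * sp.sqrt(x + 1)))   # expect 0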

Now those wanting to do it a more difficult (but still sort of standard) way could do two repetitions of integration by parts with the first set up being x^2 = u, \sqrt{x+1}dx =dv \rightarrow du = 2xdx, v = \frac{2}{3} (x+1)^{\frac{3}{2}} and that works just fine.

But I did see this: x =tan^2(u), dx = 2tan(u)sec^2(u)du, x+1 = tan^2(u)+1 = sec^2(u) (ok, there are some domain issues here but never mind that) and we end up with the transformed integral: 2\int tan^5(u)sec^3(u) du , which can be transformed to 2\int (sec^6(u) - 2 sec^4(u) + sec^2(u)) tan(u)sec(u) du by elementary trig identities.

And yes, that leads to an answer of \frac{2}{7}sec^7(u) -\frac{4}{5}sec^5(u) + \frac{2}{3}sec^3(u) + C which, upon using the triangle

[Figure: the reference triangle for back-substitution]

gives you an answer that is exactly in the same form as the desired “rationalization substitution” answer. Yeah, I gave full credit despite the “domain issues” (in the original integral, it is possible for x \in (-1,0] ).

What can I say?

December 22, 2015

Multi leaf polar graphs and total area…


I saw polar coordinate calculus for the first time in 1977. I’ve taught calculus as a TA and as a professor since 1987. And yet, I’ve never thought of this simple little fact.

Consider r(\theta) = sin(n \theta), 0 \le \theta \le 2 \pi . Now it is well known that the area formula (area enclosed by a polar graph, assuming no “doubling”, self intersections, etc.) is A = \frac{1}{2} \int^b_a (r(\theta))^2 d \theta

Now these rose curves have the following types of graphs: n leaves if n is odd, and 2n leaves if n is even (in the odd case, the graph traces itself twice over [0, 2\pi] ).

[Figure: 3-leafed rose]

[Figure: 4-leafed rose]

[Figure: 6-leafed rose]

So here is the question: how much total area is covered by the graph (all the leaves put together, do NOT count “overlapping”)?

Well, for n an integer, the answer is: \frac{\pi}{4} if n is odd, and \frac{\pi}{2} if n is even! That’s it! Want to know why?

Do the integral: if n is odd, our total area is \frac{n}{2}\int^{\frac{\pi}{n}}_0 (sin(n \theta))^2 d\theta = \frac{n}{2}\int^{\frac{\pi}{n}}_0 \frac{1}{2} - \frac{1}{2}cos(2n\theta) d\theta =\frac{\pi}{4} . If n is even, we have the same integral but the outside coefficient is \frac{2n}{2} = n , which is the only difference; the total is \frac{\pi}{2} . Aside from parity, the number of leaves does not matter as to the total area!
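(Here is a sympy sketch of my own confirming the parity claim: one leaf is traced for \theta \in [0, \frac{\pi}{n}] , so multiply the area of one leaf by the leaf count:)

import sympy as sp

theta = sp.symbols('theta')
for n in range(2, 8):
    # area of one leaf of r = sin(n*theta), traced for theta in [0, pi/n]
    one_leaf = sp.integrate(sp.sin(n * theta)**2, (theta, 0, sp.pi / n)) / 2
    leaves = n if n % 2 == 1 else 2 * n   # n leaves for odd n, 2n for even n
    print(n, leaves * one_leaf)           # expect pi/4 (odd), pi/2 (even)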

Now the fun starts when one considers a fractional multiple of \theta and I might ponder that some.

May 11, 2015

The hypervolume of the n-ball enclosed by a standard n-1 sphere

I am always looking for interesting calculus problems to demonstrate various concepts and perhaps generate some interest in pure mathematics.
And yes, I like to “blow off some steam” by spending some time having some non-technical mathematical fun with elementary mathematics.

This post uses only:

1. Integration by parts and basic reduction formulas.
2. Trig substitution.
3. Calculation of volumes (and hyper volumes) by the method of cross sections.
4. Induction
5. Elementary arithmetic involving factorials.

The quest: find a formula that finds the (hyper)volume of the region \{(x_1, x_2, x_3,....x_k) | \sum_{i=1}^k x_i^2 \leq R^2 \} \subset R^k

We will assume that the usual tools of calculus work as advertised.

Start. If we denote the (hyper)volume of the k-ball by V_k , we will start with the assumption that V_1 = 2R ; that is, the distance between the endpoints of [-R,R] is 2R.

Step 1: we show, via induction, that V_k =c_kR^k where c_k is a constant and R is the radius.

Our proof will be deliberately inefficient, for instructional purposes.

We know that V_1 =2R , hence the induction hypothesis holds for the first case and c_1 = 2 . We now work through the k = 2 case because, for the beginner, the technique will be easier to follow further along if we do so.

Yes, I know that you know that V_2 = \pi R^2 and you’ve seen many demonstrations of this fact. Here is another: let’s calculate this using the method of “area by cross sections”. Here is x^2 + y^2 = R^2 with some y = c cross sections drawn in.

[Figure: circle with cross sections drawn in]

Now do the calculation by integrals: we will use symmetry and only do the upper half and multiply our result by 2. At each y = y_c level, call the radius from the center line to the circle R(y) , so the total length of the “y is constant” cross section is 2R(y) , and we multiply by the thickness dy to obtain V_2 = 4 \int^{y=R}_{y=0} R(y) dy .

But remember that the curve in question is x^2 + y^2 = R^2 and so if we set x = R(y) we have R(y) = \sqrt{R^2 -y^2} and so our integral is 4 \int^{y=R}_{y=0}\sqrt{R^2 -y^2}  dy

Now this integral is no big deal. But HOW we solve it will help us down the road. So here, we use the change of variable (aka “trigonometric substitution”): y = Rsin(t), dy =Rcos(t)dt to change the integral to:

4 \int^{\frac{\pi}{2}}_0 R^2 cos^2(t) dt = 4R^2 \int^{\frac{\pi}{2}}_0  cos^2(t) dt therefore

V_2 = c_2 R^2 where:

c_2 = 4\int^{\frac{\pi}{2}}_0  cos^2(t) dt

Yes, I know that this is an easy integral to solve, but I first presented the result this way in order to make a point.

Of course, c_2 = 4\int^{\frac{\pi}{2}}_0  cos^2(t) dt = 4\int^{\frac{\pi}{2}}_0 \frac{1}{2} + \frac{1}{2}cos(2t) dt = \pi

Therefore, V_2 =\pi R^2 as expected.

Exercise for those seeing this for the first time: compute c_3 and V_3 by using the above methods.

Inductive step: Assume V_k = c_kR^k . Now calculate using the method of cross sections above (and here we move away from x-y coordinates to more general labeling):

V_{k+1} = 2\int^R_0 V_k(R(x_{k+1})) dx_{k+1} = 2 \int^R_0 c_k (R(x_{k+1}))^k dx_{k+1} =c_k 2\int^R_0 (R(x_{k+1}))^k dx_{k+1}

Now we do the substitutions: first of all, we note that x_1^2 + x_2^2 + ...x_{k}^2 + x_{k+1}^2 = R^2 and so

x_1^2 + x_2^2 + ... +x_k^2 = R^2 - x_{k+1}^2 . Now for the key observation: x_1^2 + x_2^2 + ... +x_k^2 =(R(x_{k+1}))^2 and so R(x_{k+1}) = \sqrt{R^2 - x_{k+1}^2}

Now use the induction hypothesis to note:

V_{k+1} = c_k 2\int^R_0 (R^2 - x_{k+1}^2)^{\frac{k}{2}} dx_{k+1}

Now do the substitution x_{k+1} = Rsin(t), dx_{k+1} = Rcos(t)dt and the integral is now:

V_{k+1} = c_k 2\int^{\frac{\pi}{2}}_0 R^{k+1} cos^{k+1}(t) dt = c_k(2 \int^{\frac{\pi}{2}}_0 cos^{k+1}(t) dt)R^{k+1} which is what we needed to show.

In fact, we have shown a bit more. We’ve shown that c_1 = 2 =2 \int^{\frac{\pi}{2}}_0(cos(t))dt, c_2 = 2 \cdot 2\int^{\frac{\pi}{2}}_0 cos^2(t) dt = c_1 2\int^{\frac{\pi}{2}}_0 cos^2(t) dt and, in general,

c_{k+1} = c_{k}(2 \int^{\frac{\pi}{2}}_0 cos^{k+1}(t) dt) = 2^{k+1} \int^{\frac{\pi}{2}}_0(cos^{k+1}(t))dt \int^{\frac{\pi}{2}}_0(cos^{k}(t))dt \int^{\frac{\pi}{2}}_0(cos^{k-1}(t))dt \cdots \int^{\frac{\pi}{2}}_0(cos(t))dt

Finishing the formula

We now need to calculate these easy calculus integrals: in this case the reduction formula:

\int cos^n(x) dx = \frac{1}{n}cos^{n-1}(x)sin(x) + \frac{n-1}{n} \int cos^{n-2}(x) dx is useful (it is merely integration by parts). Now use the limits and elementary calculation to obtain:

\int^{\frac{\pi}{2}}_0 cos^n(x) dx = \frac{n-1}{n} \int^{\frac{\pi}{2}}_0 cos^{n-2}(x)dx to obtain:

\int^{\frac{\pi}{2}}_0 cos^n(x) dx = (\frac{n-1}{n})(\frac{n-3}{n-2})......(\frac{3}{4})\frac{\pi}{4} if n is even and:
\int^{\frac{\pi}{2}}_0 cos^n(x) dx = (\frac{n-1}{n})(\frac{n-3}{n-2})......(\frac{4}{5})\frac{2}{3} if n is odd.
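(These are the classical Wallis formulas; here is a sympy sketch of my own that checks the product formulas against direct integration for a few values of n :)

import sympy as sp
from functools import reduce

x = sp.symbols('x')

def wallis(n):
    # product formula for int_0^{pi/2} cos^n(x) dx, as written above
    if n % 2 == 0:
        fracs = [sp.Rational(k - 1, k) for k in range(n, 2, -2)]  # (n-1)/n ... 3/4
        tail = sp.pi / 4
    else:
        fracs = [sp.Rational(k - 1, k) for k in range(n, 3, -2)]  # (n-1)/n ... 4/5
        tail = sp.Rational(2, 3)
    return reduce(lambda a, b: a * b, fracs, sp.Integer(1)) * tail

for n in range(2, 9):
    direct = sp.integrate(sp.cos(x)**n, (x, 0, sp.pi / 2))
    print(n, sp.simplify(direct - wallis(n)))   # expect 0 each time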

Now to come up with something resembling a closed formula let’s experiment and do some calculation:

Note that c_1 = 2, c_2 = \pi, c_3 = \frac{4 \pi}{3}, c_4 = \frac{\pi^2}{2}, c_5 = \frac{2^3 \pi^2}{3 \cdot 5} = \frac{8 \pi^2}{15}, c_6 = \frac{\pi^3}{3 \cdot 2} = \frac{\pi^3}{6} .

So we can make the inductive conjecture that c_{2k} = \frac{\pi^k}{k!} and see how it holds up: c_{2k+2} = 2^2 \int^{\frac{\pi}{2}}_0(cos^{2k+2}(t))dt \int^{\frac{\pi}{2}}_0(cos^{2k+1}(t))dt \frac{\pi^k}{k!}

= 2^2 ((\frac{2k+1}{2k+2})(\frac{2k-1}{2k})......(\frac{3}{4})\frac{\pi}{4})((\frac{2k}{2k+1})(\frac{2k-2}{2k-1})......\frac{2}{3})\frac{\pi^k}{k!}

Now notice the telescoping effect of the fractions from the c_{2k+1} factor. All factors cancel except for the (2k+2) in the first denominator and the 2 in the first numerator, as well as the \frac{\pi}{4} factor. This leads to:

c_{2k+2} = 2^2(\frac{\pi}{4})\frac{2}{2k+2} \frac{\pi^k}{k!} = \frac{\pi^{k+1}}{(k+1)!} as required.

Now we need to calculate c_{2k+1} = 2\int^{\frac{\pi}{2}}_0(cos^{2k+1}(t))dt c_{2k} = 2\int^{\frac{\pi}{2}}_0(cos^{2k+1}(t))dt \frac{\pi^k}{k!}

= 2 (\frac{2k}{2k+1})(\frac{2k-2}{2k-1})......(\frac{4}{5})\frac{2}{3}\frac{\pi^k}{k!} = 2 \frac{(2k)(2k-2)(2k-4)...(4)(2)}{(2k+1)(2k-1)...(5)(3)} \frac{\pi^k}{k!}

To simplify this further: split up the factors of the k! in the denominator and put one between each denominator factor:

= 2 \frac{(2k)(2k-2)(2k-4)...(4)(2)}{(2k+1)(k)(2k-1)(k-1)...(5)(2)(3)(1)} \pi^k . Now multiply the denominator by 2^k , putting one factor of 2 with each factor of the interleaved k! ; also multiply by 2^k in the numerator to compensate and obtain:

(2) 2^k \frac{(2k)(2k-2)(2k-4)...(4)(2)}{(2k+1)(2k)(2k-1)(2k-2)...(6)(5)(4)(3)(2)} \pi^k . Now gather each factor of 2 in the numerator product (2k)(2k-2)\cdots :

= (2) 2^k 2^k \pi^k \frac{k!}{(2k+1)!} = 2 \frac{(4 \pi)^k k!}{(2k+1)!} which is the required formula.

So to summarize:

V_{2k} = \frac{\pi^k}{k!} R^{2k}

V_{2k+1}= \frac{2 k! (4 \pi)^k}{(2k+1)!}R^{2k+1}
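(These two parity cases can be checked against the single well-known closed form V_n = \frac{\pi^{n/2}}{\Gamma(\frac{n}{2}+1)}R^n ; here is a short Python sketch of my own doing exactly that:)

import math

def c_even(k):   # coefficient of R^{2k}: pi^k / k!
    return math.pi**k / math.factorial(k)

def c_odd(k):    # coefficient of R^{2k+1}: 2 k! (4 pi)^k / (2k+1)!
    return 2 * math.factorial(k) * (4 * math.pi)**k / math.factorial(2 * k + 1)

def c_gamma(n):  # unified formula: pi^(n/2) / Gamma(n/2 + 1)
    return math.pi**(n / 2) / math.gamma(n / 2 + 1)

for k in range(6):
    print(2 * k, abs(c_even(k) - c_gamma(2 * k)) < 1e-12,
          2 * k + 1, abs(c_odd(k) - c_gamma(2 * k + 1)) < 1e-12)   # expect True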

Note the following: lim_{k \rightarrow \infty} c_{k} = 0 . If this seems strange at first, think of it this way: imagine the n-ball being “inscribed” in an n-cube which has hyper volume (2R)^n . Then consider the ratio \frac{2^n R^n}{c_n R^n} = 2^n \frac{1}{c_n} ; that is, the n-ball holds a smaller and smaller percentage of the hyper volume of the n-cube that it is inscribed in; note the 2^n corresponds to the number of corners in the n-cube. One might see that the rounding gets more severe as the number of dimensions increases.

One also notes that for fixed radius R, lim_{n \rightarrow \infty} V_n = 0 as well.

There are other interesting aspects to this limit: for what dimension n does the maximum hypervolume occur? As you might expect: this depends on the radius involved; a quick glance at the hyper volume formulas will show why. For more on this topic, including an interesting discussion on this limit itself, see Dave Richardson’s blog Division by Zero. Note: his approach to finding the hyper volume formula is also elementary but uses polar coordinate integration as opposed to the method of cross sections.

October 29, 2014

Hyperbolic Trig Functions and integration…

In college calculus courses, I’ve always wrestled with “how much to cover in the hyperbolic trig functions” section.

On one hand, the hyperbolic trig functions make some integrals much easier. On the other hand: well, it isn’t as if our classes are populated with the highest caliber student (I don’t teach at MIT); many struggle with the standard trig functions. There is only so much that the average young mind can absorb.

In case your memory is rusty:

cosh(x) =\frac{e^x + e^{-x}}{2}, sinh(x) = \frac{e^x -e^{-x}}{2} and then it is immediate that analogues of the standard “half/double angle” formulas hold; we do remember that \frac{d}{dx}cosh(x) = sinh(x), \frac{d}{dx}sinh(x) = cosh(x).

What is less immediate is the following: sinh^{-1}(x)  = ln(x+\sqrt{x^2+1}), cosh^{-1}(x) = ln(x + \sqrt{x^2 -1}) (x \ge 1).

Exercise: prove these formulas. Hint: if sinh(y) = x then e^{y} - 2x - e^{-y} =0 , so multiply both sides by e^{y} to obtain e^{2y} -2x e^y - 1 =0 ; now use the quadratic formula to solve for e^y and keep in mind that e^y is positive.

For the other formula: same procedure, and remember that we are using the x \ge 0 branch of cosh(x) and that cosh(x) \ge 1

The following follows easily: \frac{d}{dx} sinh^{-1} (x) = \frac{1}{\sqrt{x^2 + 1}} (just set up sinh(y) = x and use implicit differentiation followed by noting cosh^2(y) -sinh^2(y) = 1 ) and \frac{d}{dx} cosh^{-1}(x) = \frac{1}{\sqrt{x^2-1}} (similar derivation).
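(A sympy sketch of my own verifying both the log formulas and the derivative formula:)

import sympy as sp

x = sp.symbols('x', positive=True)
# sympy's own logarithmic forms of the inverse hyperbolic functions:
print(sp.asinh(x).rewrite(sp.log))   # log(x + sqrt(x**2 + 1))
print(sp.acosh(x).rewrite(sp.log))   # the corresponding log form for cosh^{-1}
# the derivative formula:
print(sp.simplify(sp.diff(sp.log(x + sp.sqrt(x**2 + 1)), x)
                  - 1 / sp.sqrt(x**2 + 1)))   # expect 0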

Now, we are off and running.

Example: \int \sqrt{x^2 + 1} dx =

We can make the substitution x =sinh(u), dx = cosh(u) du and obtain \int cosh^2(u) du = \int \frac{1}{2} (cosh(2u) + 1)du = \frac{1}{4}sinh(2u) + \frac{1}{2} u + C . Now use sinh(2u) = 2 sinh(u)cosh(u) and we obtain:

\frac{1}{2}sinh(u)cosh(u) + \frac{u}{2} + C . The back substitution isn’t that hard if we recognize cosh(u) = \sqrt{sinh^2(u) + 1} so we have \frac{1}{2} sinh(u) \sqrt{sinh^2(u) + 1} + \frac{u}{2} + C . Back substitution is now easy:

\frac{1}{2} x \sqrt{x^2+1} + \frac{1}{2} ln(x + \sqrt{x^2 + 1}) + C . No integration by parts is required and the dreaded \int sec^3(x) dx integral is avoided. Ok, I was a bit loose about the domains here; we can make this valid for negative values of x by using an absolute value with the ln(x + \sqrt{x^2 + 1}) term.
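(And a quick machine check of that antiderivative, again a sympy sketch of my own:)

import sympy as sp

x = sp.symbols('x', positive=True)
F = x * sp.sqrt(x**2 + 1) / 2 + sp.log(x + sp.sqrt(x**2 + 1)) / 2
# differentiating should recover the integrand sqrt(x^2 + 1):
print(sp.simplify(sp.diff(F, x) - sp.sqrt(x**2 + 1)))   # expect 0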

August 31, 2014

The convolution integral: do some examples in Calculus III or not?

For us, calculus III is the most rushed of the courses, especially if we start with polar coordinates. Getting to the “three integral theorems” is a real chore. (ok, Green’s, Divergence and Stokes’ theorems are really just \int_{\Omega} d \sigma = \int_{\partial \Omega} \sigma but that is the subject of another post)

But watching this lecture made me wonder: should I say a few words about how to calculate a convolution integral?

Note: I’ve discussed a type of convolution integral with regards to solving differential equations here.

In the context of Fourier Transforms, the convolution integral is defined as it was in analysis class: f*g = \int^{\infty}_{-\infty} f(x-t)g(t) dt . Typically, we insist that the functions be, say, L^1 and note that it is a bit of a chore to show that the convolution of two L^1 functions is L^1 ; one proves this via the Fubini-Tonelli Theorem.

(The straight out product of two L^1 functions need not be L^1 ; e.g, consider f(x) = \frac {1}{\sqrt{x}} for x \in (0,1] and zero elsewhere)

So, assuming that the integral exists, how do we calculate it? Easy, you say? Well, it can be, after practice.

But to test out your skills, let f(x) = g(x) be the function that is 1 for x \in [\frac{-1}{2}, \frac{1}{2}] and zero elsewhere. So, what is f*g ???

So, it is easy to see that f(x-t)g(t) only assumes the value of 1 on a specific region of the (x,t) plane and is zero elsewhere; this is just like doing an iterated integral of a two variable function; at least the first step. This is why it fits well into calculus III.

f(x-t)g(t) = 1 for the following region: (x,t), -\frac{1}{2} \le x-t \le \frac{1}{2}, -\frac{1}{2} \le t \le \frac{1}{2}

This region is the parallelogram with vertices at (-1, -\frac{1}{2}), (0, -\frac{1}{2}), (0, \frac{1}{2}), (1, \frac{1}{2}) .

[Figure: the parallelogram region of integration]

Now we see that we can’t do the integral in one step. So, the function we are integrating f(x-t)f(t) has the following description:

f(x-t)f(t)=\left\{\begin{array}{c} 1,x \in [-1,0], -\frac{1}{2} \le t \le \frac{1}{2}+x \\ 1 ,x\in [0,1], -\frac{1}{2}+x \le t \le \frac{1}{2} \\ 0 \text{ elsewhere} \end{array}\right.

So the convolution integral is \int^{\frac{1}{2} + x}_{-\frac{1}{2}} dt = 1+x for x \in [-1,0) and \int^{\frac{1}{2}}_{-\frac{1}{2} + x} dt = 1-x for x \in [0,1] .

That is, of course, the tent map that we described here. The graph is shown here:

[Figure: graph of the tent map]

So, it would appear to me that a good time to do a convolution exercise is right when we study iterated integrals; just tell the students that this is a case where one “stops before doing the outside integral”.
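(If you’d like students to see the tent emerge numerically before doing the iterated-integral argument, here is a numpy sketch of my own that approximates f*f with a Riemann sum:)

import numpy as np

dt = 0.001
t = np.arange(-1.5, 1.5, dt)
box = np.where(np.abs(t) <= 0.5, 1.0, 0.0)       # indicator of [-1/2, 1/2]
conv = np.convolve(box, box, mode='same') * dt   # Riemann-sum approximation of f*f
print(conv.max())                        # expect about 1.0 (tent height at x = 0)
print(conv[np.abs(t - 0.5).argmin()])    # expect about 0.5 (tent value at x = 0.5)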

August 25, 2014

Fourier Transform of the “almost Gaussian” function with a residue integral

This is based on the lectures on the Fourier Transform by Brad Osgood from Stanford:

And here, F(f)(s) = \int^{\infty}_{-\infty} e^{-2 \pi i st} f(t) dt provided the integral converges.

The “almost Gaussian” integrand is f(t) = e^{-\pi t^2} ; one can check that \int^{\infty}_{-\infty} e^{-\pi t^2} dt = 1 . One way is to use the fact that \int^{\infty}_{-\infty} e^{-x^2} dx = \sqrt{\pi} and do the substitution x = \sqrt{\pi} t; of course one should be able to demonstrate the fact to begin with. (side note: a non-standard way involving symmetries and volumes of revolution discovered by Alberto Delgado can be found here)

So, during this lecture, Osgood shows that F(e^{-\pi t^2}) = e^{-\pi s^2} ; that is, this modified Gaussian function is “its own Fourier transform”.

I’ll sketch out what he did in the lecture at the end of this post. But just for fun (and to make a point) I’ll give a method that uses an elementary residue integral.

Both methods start by using the definition: F(s) = \int^{\infty}_{-\infty} e^{-2 \pi i ts} e^{-\pi t^2} dt

Method 1: combine the exponential functions in the integrand:

\int^{\infty}_{-\infty} e^{-\pi(t^2 +2  i ts)}  dt . Now complete the square to get: \int^{\infty}_{-\infty} e^{-\pi(t^2 +2  i ts-s^2)-\pi s^2}  dt

Now factor out the factor involving s alone and write as a square: e^{-\pi s^2}\int^{\infty}_{-\infty} e^{-\pi(t+is)^2}  dt

Now, make the substitution x = t+is, dx = dt to obtain:

e^{-\pi s^2}\int^{\infty+is}_{-\infty+is} e^{-\pi x^2}  dx

Now we show that the above integral is really equal to e^{-\pi s^2}\int^{\infty}_{-\infty} e^{-\pi x^2}  dx = e^{-\pi s^2} (1) = e^{-\pi s^2}

To show this, we perform \int_{\gamma} e^{-\pi z^2} dz along the rectangular path \gamma : -x, x, x+is, -x+is and let x \rightarrow \infty

[Figure: rectangular contour]
Now the integral around the contour is 0 because e^{-\pi z^2} is analytic.

We wish to calculate the negative of the integral along the top boundary of the contour. Integrating along the bottom gives 1.
As far as the sides: fix s and write z = \pm x + iy, 0 \le y \le s ; then |e^{-\pi z^2}| = e^{-\pi(x^2 - y^2)} \le e^{-\pi(x^2 - s^2)} , which goes to zero as x \rightarrow \infty . So the integral along the vertical paths approaches zero; therefore the integrals along the top and bottom contours agree in the limit and the result follows.

Method 2: The method in the video
This uses “differentiation under the integral sign”, which we talk about here.

Start with F(s) = \int^{\infty}_{-\infty} e^{-2 \pi i ts} e^{-\pi t^2} dt and note \frac{dF}{ds} = \int^{\infty}_{-\infty} (-2 \pi i t) e^{-2 \pi i ts} e^{-\pi t^2} dt

Now we do integration by parts: u = e^{-2 \pi i ts}, dv = (-2 \pi i t)e^{-\pi t^2}dt \rightarrow v = i e^{-\pi t^2}, du = (-2 \pi i s)e^{-2 \pi i ts}dt and the integral becomes:

i e^{-\pi t^2} e^{-2 \pi i ts}|^{\infty}_{-\infty} - (i)(-2 \pi i s) \int^{\infty}_{-\infty} e^{-2 \pi i ts} e^{-\pi t^2} dt

Now the first term is zero for all values of s as t \rightarrow \pm\infty . The second term is merely:

-(2 \pi s) \int^{\infty}_{-\infty} e^{-2 \pi i ts} e^{-\pi t^2} dt = -(2 \pi s) F(s) .

So we have shown that \frac{d F}{ds} = (-2 \pi s)F which is a differential equation in s which has solution F = F_0 e^{- \pi s^2} (a simple separation of variables calculation will verify this). Now to solve for the constant F_0 note that F(0) = \int^{\infty}_{-\infty} e^{0} e^{-\pi t^2} dt = 1 .

The result follows.
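(Either way, a direct numerical check is comforting. A scipy sketch of my own, comparing the defining integral to e^{-\pi s^2} ; the tails decay fast enough that quad handles the infinite limits:)

import numpy as np
from scipy.integrate import quad

def F(s):
    # real part of the transform; the imaginary part vanishes by symmetry
    re, _ = quad(lambda t: np.cos(2 * np.pi * s * t) * np.exp(-np.pi * t**2),
                 -np.inf, np.inf)
    return re

for s in [0.0, 0.5, 1.0, 2.0]:
    print(s, F(s), np.exp(-np.pi * s**2))   # the last two columns should agree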

Now: which method was easier? The second required differential equations and differentiating under the integral sign; the first required an easy residue integral.

By the way: the video comes from an engineering class. Engineers need to know this stuff!

August 21, 2014

Calculation of the Fourier Transform of a tent map, with a calculus tip….

I’ve been following these excellent lectures by Professor Brad Osgood of Stanford. As an aside: yes, he is dynamite in the classroom, but there is probably a reason that Stanford is featuring him. 🙂

And yes, his style is good for obtaining a feeling of camaraderie that is absent in my classroom; at least in the lower division “service” classes.

This lecture takes us from Fourier Series to Fourier Transforms. Of course, he admits that the transition here is really a heuristic trick with symbolism; it isn’t a bad way to initiate an intuitive feel for the subject though.

However, the point of this post is to offer an “algebra of calculus” trick for dealing with the sort of calculations that one might encounter.

By the way, if you say “hey, just use a calculator” you will be BANNED from this blog!!!! (just kidding…sort of. 🙂 )

So here is the deal: let f(x) represent the tent map: the support of f is [-1,1] and it has the following graph:

[Figure: graph of the tent map]

The formula is: f(x)=\left\{\begin{array}{c} x+1,x \in [-1,0) \\ 1-x ,x\in [0,1] \\ 0 \text{ elsewhere} \end{array}\right.

So, the Fourier Transform is F(f) = \int^{\infty}_{-\infty} e^{-2 \pi i st}f(t)dt = \int^0_{-1} e^{-2 \pi i st}(1+t)dt + \int^1_0e^{-2 \pi i st}(1-t)dt

Now, this is an easy integral to do, conceptually, but there is the issue of carrying constants around and being tempted to make “on the fly” simplifications along the way, thereby leading to irritating algebraic errors.

So my tip: just let a = -2 \pi i s and do the integrals:

\int^0_{-1} e^{at}(1+t)dt + \int^1_0e^{at}(1-t)dt and substitute and simplify later:

Now the integrals become: \int^{1}_{-1} e^{at}dt + \int^0_{-1}te^{at}dt - \int^1_0 te^{at} dt.
These are easy to do; the first is merely \frac{1}{a}(e^a - e^{-a}) and the next two have the same anti-derivative, which can be obtained by an integration by parts calculation: \frac{t}{a}e^{at} -\frac{1}{a^2}e^{at} ; evaluating the limits yields:

-\frac{1}{a^2}-(\frac{-1}{a}e^{-a} -\frac{1}{a^2}e^{-a}) - (\frac{1}{a}e^{a} -\frac{1}{a^2}e^a)+ (-\frac{1}{a^2})

Add the first integral and simplify and we get: -\frac{1}{a^2}(2 - (e^{-a} +e^{a})) . NOW use a = -2\pi i s and we have the integral is \frac{1}{4 \pi^2 s^2}(2 -(e^{2 \pi i s} +e^{-2 \pi i s})) = \frac{1}{4 \pi^2 s^2}(2 - 2cos(2 \pi s)) by Euler’s formula.

Now we need some trig to get this into a form that is “engineering/scientist” friendly; here we turn to the formula: sin^2(x) = \frac{1}{2}(1-cos(2x)) so 2 - 2cos(2 \pi s) = 4sin^2(\pi s) , so our answer is \frac{sin^2( \pi s)}{(\pi s)^2} = (\frac{sin(\pi s)}{\pi s})^2 , which is often denoted as (sinc(s))^2 , as the “normalized” sinc function is given by sinc(x) = \frac{sin(\pi x)}{\pi x} (we want the function to have zeros at the nonzero integers and to “equal” one at x = 0 ; remember that famous limit!).

So, the point is that using a made the algebra a whole lot easier.

Now, if you are shaking your head and muttering about how this calculation was crude and that one usually uses “convolution” instead: this post is probably too elementary for you. 🙂
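(For the very skeptical: a numerical spot check, a scipy sketch of my own, that the transform of the tent really is the squared sinc:)

import numpy as np
from scipy.integrate import quad

def tent(t):
    return np.where(np.abs(t) <= 1, 1 - np.abs(t), 0.0)

def F(s):
    # the tent is even, so its transform is real: integrate f(t) cos(2 pi s t)
    re, _ = quad(lambda t: tent(t) * np.cos(2 * np.pi * s * t), -1, 1)
    return re

for s in [0.25, 0.5, 1.5]:
    print(s, F(s), (np.sin(np.pi * s) / (np.pi * s))**2)   # columns should agree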

August 6, 2014

Where “j” comes from

I laughed at what was said from 30:30 to 31:05 or so:

If you are wondering why your engineering students want to use j = \sqrt{-1} , it is because, in electrical engineering, i usually stands for “current”.

Though many of you know this, this lesson also gives an excellent reason to use the complex form of the Fourier series; e. g. if f is piecewise smooth and has period 1, write f(x) = \Sigma^{k = \infty}_{k=-\infty}c_k e^{i 2k\pi x} (usual abuse of the equals sign) rather than writing it out in sines and cosines. Of course, \overline{c_{-k}} = c_k if f is real valued.

How is this easier? Well, when you give a demonstration as to what the coefficients have to be (assuming that the series exists to begin with), the orthogonality condition is very easy to deal with. Calculate \int^1_0 e^{i 2k\pi x}e^{-i 2m\pi x} dx : it is 0 when k \ne m and 1 when k = m , from which c_m = \int^1_0 f(x) e^{-i 2m\pi x} dx follows. There is nothing to it; easy integral. Of course, one has to demonstrate the validity of e^{ix} = cos(x) + isin(x) and show that the usual differentiation rules work ahead of time, but you need to do that only once.
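(Here is that easy integral done by sympy, a sketch of my own:)

import sympy as sp

x = sp.symbols('x', real=True)

def inner(k, m):
    # int_0^1 e^{i 2 k pi x} e^{-i 2 m pi x} dx
    return sp.integrate(sp.exp(sp.I * 2 * sp.pi * k * x)
                        * sp.exp(-sp.I * 2 * sp.pi * m * x), (x, 0, 1))

print(inner(3, 5))   # expect 0 (k != m)
print(inner(4, 4))   # expect 1 (k == m)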

July 31, 2014

Stupid question: why does it appear to us that differentiation is easier than anti-differentiation?


This post is inspired by my rereading a favorite book of mine: Underwood Dudley’s Mathematical Cranks

[Figure: cover of Mathematical Cranks]

There was the chapter about the circumference of an ellipse. Now, given \frac{x^2}{a^2} + \frac{y^2}{b^2} = 1 it isn’t hard to see that (ds)^2 = (dx)^2 + (dy)^2 and so, going with the portion in the first quadrant, one can derive that the circumference is given by the elliptic integral of the second kind, which is one of those integrals that can NOT be solved in “closed form” by anti-differentiation of elementary functions.

There are lots of integrals like this; e. g. \int e^{x^2} dx is a very famous example. Here is a good, accessible paper on the subject of non-elementary integrals (by Marchisotto and Zakeri).
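(To make the ellipse example concrete: the lack of an elementary antiderivative doesn’t stop us from computing. Here is a scipy sketch of my own, assuming scipy’s convention ellipe(m) = \int_0^{\pi/2}\sqrt{1 - m sin^2(\theta)} d\theta , so the circumference is 4a E(m) with m = 1 - \frac{b^2}{a^2} :)

import numpy as np
from scipy.special import ellipe
from scipy.integrate import quad

a, b = 3.0, 2.0                      # semi-axes, a >= b
m = 1 - (b / a)**2                   # m = (eccentricity)^2
circumference = 4 * a * ellipe(m)

# cross-check against the raw arc length integral:
speed = lambda t: np.sqrt((a * np.sin(t))**2 + (b * np.cos(t))**2)
check, _ = quad(speed, 0, 2 * np.pi)
print(circumference, check)          # the two should agree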

So this gets me thinking: why is anti-differentiation so much harder than taking the derivative? Is this because of the functions that we’ve chosen to represent the “elementary anti-derivatives”?

I know; this is not a well formulated question; but it has always bugged me. Oh yes, I am teaching two sections of first semester calculus this upcoming semester.

