# College Math Teaching

## August 31, 2014

### The convolution integral: do some examples in Calculus III or not?

For us, calculus III is the most rushed of the courses, especially if we start with polar coordinates. Getting to the “three integral theorems” is a real chore. (ok, Green’s theorem, the Divergence theorem and Stokes’ theorem are really just $\int_{\Omega} d \sigma = \int_{\partial \Omega} \sigma$, but that is the subject of another post)

But watching this lecture made me wonder: should I say a few words about how to calculate a convolution integral?

Note: I’ve discussed a type of convolution integral with regards to solving differential equations here.

In the context of Fourier Transforms, the convolution integral is defined as it was in analysis class: $f*g = \int^{\infty}_{-\infty} f(x-t)g(t) dt$. Typically, we insist that the functions be, say, $L^1$ and note that it is a bit of a chore to show that the convolution of two $L^1$ functions is $L^1$; one proves this via the Fubini-Tonelli Theorem.

(The straight-out product of two $L^1$ functions need not be $L^1$; e.g., consider $f(x) = \frac {1}{\sqrt{x}}$ for $x \in (0,1]$ and zero elsewhere)

So, assuming that the integral exists, how do we calculate it? Easy, you say? Well, it can be, after practice.

But to test out your skills, let $f(x) = g(x)$ be the function that is $1$ for $x \in [\frac{-1}{2}, \frac{1}{2}]$ and zero elsewhere. So, what is $f*g$???

So, it is easy to see that $f(x-t)g(t)$ only assumes the value of $1$ on a specific region of the $(x,t)$ plane and is zero elsewhere; this is just like doing an iterated integral of a two variable function; at least the first step. This is why it fits well into calculus III.

$f(x-t)g(t) = 1$ for the following region: $(x,t), -\frac{1}{2} \le x-t \le \frac{1}{2}, -\frac{1}{2} \le t \le \frac{1}{2}$

This region is the parallelogram with vertices at $(-1, -\frac{1}{2}), (0, -\frac{1}{2}), (0, \frac{1}{2}), (1, \frac{1}{2})$.

Now we see that we can’t do the integral in one step. So, the function we are integrating $f(x-t)f(t)$ has the following description:

$f(x-t)f(t)=\left\{\begin{array}{c} 1, x \in [-1,0], -\frac{1}{2} \le t \le \frac{1}{2}+x \\ 1, x\in [0,1], -\frac{1}{2}+x \le t \le \frac{1}{2} \\ 0 \text{ elsewhere} \end{array}\right.$

So the convolution integral is $\int^{\frac{1}{2} + x}_{-\frac{1}{2}} dt = 1+x$ for $x \in [-1,0)$ and $\int^{\frac{1}{2}}_{-\frac{1}{2} + x} dt = 1-x$ for $x \in [0,1]$.

That is, of course, the tent map that we described here. The graph is shown here:

So, it would appear to me that a good time to do a convolution exercise is right when we study iterated integrals; just tell the students that this is a case where one “stops before doing the outside integral”.
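In fact this makes a nice computer demonstration too. Here is a quick numerical sanity check (a sketch using numpy; the grid spacing is my arbitrary choice): discretize the box function, convolve it with itself, and the result should trace out the tent map.

```python
import numpy as np

# Discretize f = g = 1 on [-1/2, 1/2], 0 elsewhere.
dx = 0.001
t = np.arange(-2.0, 2.0, dx)
f = np.where(np.abs(t) <= 0.5, 1.0, 0.0)

# A discrete convolution, scaled by dx, approximates the convolution integral.
conv = np.convolve(f, f, mode="same") * dx

# Compare against the tent map: 1 - |x| on [-1, 1], 0 elsewhere.
tent = np.clip(1.0 - np.abs(t), 0.0, None)
print(np.max(np.abs(conv - tent)))  # small: on the order of dx
```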

## August 27, 2014

### Nice collection of Math GIFs

Filed under: basic algebra, calculus, elementary mathematics, pedagogy — Tags: — collegemathteaching @ 12:25 am

No, these GIFs won’t explain better than an instructor, but I found many of them to be fun.

Via: IFLS

## August 26, 2014

### How some mathematical definitions are made

I love what Brad Osgood says at 47:37.

The context: one is showing that the Fourier transform of the convolution of two functions is the product of the Fourier transforms (very similar to what happens in the Laplace transform); that is $\mathcal{F}(f*g) = F(s)G(s)$ where $f*g = \int^{\infty}_{-\infty} f(x-t)g(t) dt$

## August 25, 2014

### Fourier Transform of the “almost Gaussian” function with a residue integral

This is based on the lectures on the Fourier Transform by Brad Osgood from Stanford:

And here, $F(f)(s) = \int^{\infty}_{-\infty} e^{-2 \pi i st} f(t) dt$ provided the integral converges.

The “almost Gaussian” integrand is $f(t) = e^{-\pi t^2}$; one can check that $\int^{\infty}_{-\infty} e^{-\pi t^2} dt = 1$. One way is to use the fact that $\int^{\infty}_{-\infty} e^{-x^2} dx = \sqrt{\pi}$ and do the substitution $x = \sqrt{\pi} t$; of course one should be able to demonstrate the fact to begin with. (side note: a non-standard way involving symmetries and volumes of revolution discovered by Alberto Delgado can be found here)

So, during this lecture, Osgood shows that $F(e^{-\pi t^2}) = e^{-\pi s^2}$; that is, this modified Gaussian function is “its own Fourier transform”.

I’ll sketch out what he did in the lecture at the end of this post. But just for fun (and to make a point) I’ll give a method that uses an elementary residue integral.

Both methods start by using the definition: $F(s) = \int^{\infty}_{-\infty} e^{-2 \pi i ts} e^{-\pi t^2} dt$

Method 1: combine the exponential functions in the integrand:

$\int^{\infty}_{-\infty} e^{-\pi(t^2 +2 i ts)} dt$. Now complete the square to get: $\int^{\infty}_{-\infty} e^{-\pi(t^2 +2 i ts-s^2)-\pi s^2} dt$

Now factor out the factor involving $s$ alone and write as a square: $e^{-\pi s^2}\int^{\infty}_{-\infty} e^{-\pi(t+is)^2} dt$

Now, make the substitution $x = t+is, dx = dt$ to obtain:

$e^{-\pi s^2}\int^{\infty+is}_{-\infty+is} e^{-\pi x^2} dx$

Now we show that the above integral is really equal to $e^{-\pi s^2}\int^{\infty}_{-\infty} e^{-\pi x^2} dx = e^{-\pi s^2} (1) = e^{-\pi s^2}$

To show this, we evaluate $\int_{\gamma} e^{-\pi z^2} dz$ along the rectangular path $\gamma$ with vertices $-x, x, x+is, -x+is$ and let $x \rightarrow \infty$.

Now the integral around the contour is 0 because $e^{-\pi z^2}$ is entire (Cauchy’s theorem).

We wish to calculate the negative of the integral along the top boundary of the contour; integrating along the bottom gives 1 in the limit.
As far as the sides: if we fix $s$, then along the vertical segments $z = \pm x + iy$ with $0 \le y \le s$ we have $|e^{-\pi z^2}| = e^{-\pi (x^2 - y^2)} \le e^{\pi(s^2 - x^2)}$, which goes to zero as $x \rightarrow \infty$. So the integrals along the vertical paths approach zero; therefore the integrals along the top and bottom contours agree in the limit and the result follows.
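The contour-shift step is easy to check numerically (a sketch; $s = 1$ and the truncation of the line to $[-10, 10]$ are my arbitrary choices): the integral of $e^{-\pi(t+is)^2}$ along the shifted horizontal line should come out to 1, just as it does on the real axis.

```python
import numpy as np

# Integrate e^{-pi (t + i s)^2} dt along the horizontal line Im z = s,
# truncating the line to [-10, 10]; the tails are negligible there.
s = 1.0
t = np.linspace(-10.0, 10.0, 200001)
dt = t[1] - t[0]
vals = np.exp(-np.pi * (t + 1j * s) ** 2)
integral = np.sum(vals) * dt  # simple Riemann sum
print(integral)  # close to 1 + 0j, as the contour argument predicts
```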

Method 2: The method in the video
This uses “differentiation under the integral sign”, which we talk about here.

Start with $F(s) = \int^{\infty}_{-\infty} e^{-2 \pi i ts} e^{-\pi t^2} dt$ and note $\frac{dF}{ds} = \int^{\infty}_{-\infty} (-2 \pi i t) e^{-2 \pi i ts} e^{-\pi t^2} dt$

Now we do integration by parts: $u = e^{-2 \pi i ts}, dv = (-2 \pi i t)e^{-\pi t^2} \rightarrow v = i e^{-\pi t^2}, du = (-2 \pi i s)e^{-2 \pi i ts}$ and the integral becomes:

$(i e^{-\pi t^2} e^{-2 \pi i ts})\big|^{\infty}_{-\infty} - (i)(-2 \pi i s) \int^{\infty}_{-\infty} e^{-2 \pi i ts} e^{-\pi t^2} dt$

Now the first term is zero for all values of $s$ as $t \rightarrow \pm \infty$. The second term is merely:

$-(2 \pi s) \int^{\infty}_{-\infty} e^{-2 \pi i ts} e^{-\pi t^2} dt = -(2 \pi s) F(s)$.

So we have shown that $\frac{d F}{ds} = (-2 \pi s)F$ which is a differential equation in $s$ which has solution $F = F_0 e^{- \pi s^2}$ (a simple separation of variables calculation will verify this). Now to solve for the constant $F_0$ note that $F(0) = \int^{\infty}_{-\infty} e^{0} e^{-\pi t^2} dt = 1$.

The result follows.
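Either way, the claim $F(e^{-\pi t^2})(s) = e^{-\pi s^2}$ is easy to verify numerically (a sketch; the grid, truncation, and sample frequencies are my arbitrary choices):

```python
import numpy as np

# Check F(e^{-pi t^2})(s) = e^{-pi s^2} at a few sample frequencies
# by a Riemann sum over a truncated real line.
t = np.linspace(-10.0, 10.0, 200001)
dt = t[1] - t[0]
for s in [0.0, 0.5, 1.0, 2.0]:
    F = np.sum(np.exp(-2j * np.pi * s * t) * np.exp(-np.pi * t ** 2)) * dt
    print(s, F.real, np.exp(-np.pi * s ** 2))  # the last two columns agree
```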

Now: which method was easier? The second required differential equations and differentiating under the integral sign; the first required an easy residue integral.

By the way: the video comes from an engineering class. Engineers need to know this stuff!

### How to succeed at calculus, and why it is worth it!

Filed under: calculus, student learning — Tags: , — collegemathteaching @ 2:06 pm

This post is intended to help the student who is willing to put time and effort into succeeding in a college calculus class.

Part One: How to Study

The first thing to remember is that most students will have to study outside of class in order to learn the material. There are those who pick things up right away, but these students tend to be the rare exception.

Think of it this way: suppose you want to learn to play the piano. A teacher can help show you how to play it and provide a practice schedule. But you won’t be any good if you don’t practice.

Suppose you want to run a marathon. A coach can help you with running form, provide workout schedules and provide feedback. But if you don’t run those workouts, you won’t build up the necessary speed and endurance for success.

The same principle applies for college mathematics classes; you really learn the material when you study it and do the homework exercises.

Here are some specific tips on how to study:

1. It is optimal if you can spend a few minutes scanning the text for the upcoming lesson. If you do this, you’ll be alert for the new concepts as they are presented and the concepts might sink in quicker.

2. There is some research that indicates:
a. It is better to have several shorter study sessions rather than one long one and
b. There is an optimal time delay between study sessions and the associated lecture.

Look at it this way: if you wait too long after the lesson to study it, you will have forgotten much of what was presented. If you study right away, then you really have, in essence, a longer classroom session. It is probably best to hit the material right when the initial memory starts to fade; this time interval will vary from individual to individual. For more on this, and for more on learning for long term recall, see this article.

3. Learn the basic derivative formulas inside and out; that is, know what the derivatives of functions like $sin(x), cos(x), tan(x), sec(x), arctan(x), arcsin(x), exp(x), ln(x)$ are on sight; you shouldn’t have to think about them. The same goes for the basic trig identities such as $\sin ^{2}(x)+\cos ^{2}(x)=1$ and $\tan^{2}(x)+1 = \sec^{2}(x)$

Why is this? The reason is that much of calculus (though not all!) boils down to pattern recognition.

For example, suppose you need to calculate:

$\int \dfrac{(\arctan (x))^{5}}{1+x^{2}}dx=$

If you don’t know your differentiation formulas, this problem is all but impossible. On the other hand, if you do know your differentiation formulas, then you’ll immediately recognize the $arctan(x)$ and its derivative $\dfrac{1}{1+x^{2}}$ and you’ll see that this problem is really the very easy problem $\int u^{5}du$.

But this all starts with having “automatic” knowledge of the derivative formulas.

Note: this learning is something your professor or TA cannot do for you!
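If you have a computer algebra system handy, you can check this sort of pattern recognition symbolically (a sketch using sympy): the substitution $u = \arctan(x)$ turns the integral above into $\int u^5 du$, giving $\frac{1}{6}\arctan^{6}(x) + C$, and differentiating the computed antiderivative recovers the integrand.

```python
import sympy as sp

x = sp.symbols('x')
integrand = sp.atan(x) ** 5 / (1 + x ** 2)

# The substitution u = arctan(x), du = dx/(1 + x^2) reduces this to
# the integral of u^5, i.e. arctan(x)^6 / 6 plus a constant.
antiderivative = sp.integrate(integrand, x)
print(antiderivative)

# Differentiating the antiderivative recovers the integrand.
print(sp.simplify(sp.diff(antiderivative, x) - integrand))  # 0
```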

4. Be sure to do some study problems with your notes and your book closed. If you keep flipping to your notes and book to do the homework problems, you won’t be ready for the exams. You have to take off the training wheels.
Try this; the difference will surprise you. There is also evidence that forcing yourself to recall the material FROM YOUR OWN BRAIN helps you learn it! Give yourself frequent quizzes on what you are learning.

5. When reviewing for an exam, study the problems in mixed fashion. For example, get some note cards and write problems from the various sections on them (say, some from 3.1, some from 3.2, some from 3.3, and so on), mix the cards, then try the problems. If you just review section by section, you’ll go into each problem knowing what technique to use each time right from the start. Many times, half of the battle is knowing which technique to use with each problem; that is part of the course! Do the problems in mixed order.

If you find yourself whining and complaining “I don’t know where to start,” it means that you don’t know the material well enough. Remember that a trained monkey can repeat specific actions; you have to be a bit better than that!

6. Read the book, S L O W L Y, with pen and paper nearby. Make sure that you work through the examples in the text and that you understand the reasons for each step.

7. For the “more theoretical” topics, know some specific examples for specific theorems. Here is what I am talking about:

a. Intermediate value theorem: recall that if $f(x)=\frac{1}{x}$, then $f(-1)=-1,f(1)=1$ but there is no $x$ such that $f(x) = 0$. Why does this not violate the intermediate value theorem?

b. Mean value theorem: for the same $f(x)=\frac{1}{x}$, note that there is no $c$ such that $f'(c) = \frac{f(1)-f(-1)}{1-(-1)} = 1$ (indeed $f'(x) = -\frac{1}{x^2} < 0$ everywhere). Why does this NOT violate the Mean Value Theorem?

c. Series: it is useful to know basic series such as those for $exp(x), sin(x), cos(x)$. It is also good to know some basic examples such as the geometric series, the divergent harmonic series $\sum \frac{1}{k}$ and the conditionally convergent series $\sum (-1)^{k}\frac{1}{k}$.

d. Limit definition of derivative: be able to work a few basic examples of the derivative via the limit definition: $f(x) = x^{n}, f(x) = \frac{1}{x}, f(x)=\sqrt{x}$ and know why the derivative of $f(x) = |x|$ and $f(x) = x^{1/3}$ do not exist at $x = 0$.
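For item d, a two-line computation makes the point about $f(x) = |x|$ concrete (a sketch; the step sizes are my arbitrary choices): the one-sided difference quotients at $0$ settle on different values, so the limit defining $f'(0)$ does not exist.

```python
# One-sided difference quotients for f(x) = |x| at x = 0:
# (f(0 + h) - f(0)) / h = |h| / h, which is +1 for h > 0 and -1 for h < 0.
for h in [0.1, 0.01, 0.001]:
    from_right = abs(h) / h
    from_left = abs(-h) / (-h)
    print(h, from_right, from_left)  # always 1.0 and -1.0: no common limit
```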

Part II: Attitude
Your attitude will be very important.

1. Remember that your effort will be essential! Again, you can’t learn to run a marathon without getting off of the couch and making your muscles sore. Learning mathematics involves some frustration and, yes, at times, some tedium. Learning is fun OVERALL but it isn’t always fun at all times. You will encounter discomfort and unpleasantness at times.

2. Remember that winners look for ways to succeed; losers and whiners look for excuses for failure. You can always find those who will be willing to enable your underachievement. Instead, seek out those who bring out your best.

3. Success is NOT guaranteed; that is what makes success rewarding! Think of how good you’ll feel about yourself if you mastered something that seemed impossible to master at first. And yes, anyone who has achieved anything that is remotely difficult has taken some lumps and bruises along the way. You will NOT be spared these.

Remember that if you duck the calculus challenge, you are, in essence, slamming many doors of opportunity shut right from the get-go.

4. On the other hand, remember that Calculus (the first two semesters anyway) is a Freshman level class; exceptional mathematical talent is not a prerequisite for success. True, calculus is easy for some but that isn’t the point. Most reasonably intelligent people can have success, if they are willing to put forth the proper effort in the proper manner.

Just think of how good it will feel to succeed in an area that isn’t your strong suit!

### Dinette set on calculus…

Filed under: calculus, media — Tags: , — collegemathteaching @ 12:47 pm

Note: if you haven’t followed Julie Larson’s comic strip Dinette Set, the characters featured in it are not, well, the world’s most intellectually minded characters (with the exception of Patty). 🙂

Ironically, I see such attitudes displayed by people…posting their thoughts on the internet via a computer or smart phone. The irony doesn’t even occur to them.

## August 22, 2014

### Fields Medal 2014 and Mathfest 2011

Filed under: academia, advanced mathematics — Tags: — collegemathteaching @ 12:11 pm

One of the main speakers at Mathfest, 2011, was one of the Fields Medalists this year: Manjul Bhargava, of Princeton.

Here is what I said about him on the first day:

Lastly Manjul Bhargava of Princeton (who is already full professor though he is less than half my age; he was an Andrew Wiles student) gave a delightful lecture on algebraic curves.

What I noted: all three of these mathematicians are successful enough to be arrogant (especially the third). They could have blown us all away. Yet they took the time and care to give presentations that actually taught us something.

Some top-of-the-line researchers are excellent teachers too.

### Almost ready to start the semester again…

Filed under: calculus, editorial, elementary mathematics — Tags: — collegemathteaching @ 12:05 pm

I am teaching two freshman sections…it is a breaking-in process for them.

## August 21, 2014

### Calculation of the Fourier Transform of a tent map, with a calculus tip….

I’ve been following these excellent lectures by Professor Brad Osgood of Stanford. As an aside: yes, he is dynamite in the classroom, but there is probably a reason that Stanford is featuring him. 🙂

And yes, his style is good for obtaining a feeling of camaraderie that is absent in my classroom; at least in the lower division “service” classes.

This lecture takes us from Fourier Series to Fourier Transforms. Of course, he admits that the transition here is really a heuristic trick with symbolism; it isn’t a bad way to initiate an intuitive feel for the subject though.

However, the point of this post is to offer an “algebra of calculus” trick for dealing with the sort of calculations that one might encounter.

By the way, if you say “hey, just use a calculator” you will be BANNED from this blog!!!! (just kidding…sort of. 🙂 )

So here is the deal: let $f(x)$ represent the tent map: the support of $f$ is $[-1,1]$ and it has the following graph:

The formula is: $f(x)=\left\{\begin{array}{c} x+1,x \in [-1,0) \\ 1-x ,x\in [0,1] \\ 0 \text{ elsewhere} \end{array}\right.$

So, the Fourier Transform is $F(f) = \int^{\infty}_{-\infty} e^{-2 \pi i st}f(t)dt = \int^0_{-1} e^{-2 \pi i st}(1+t)dt + \int^1_0e^{-2 \pi i st}(1-t)dt$

Now, this is an easy integral to do, conceptually, but there is the issue of carrying constants around and being tempted to make “on the fly” simplifications along the way, thereby leading to irritating algebraic errors.

So my tip: just let $a = -2 \pi i s$ and do the integrals:

$\int^0_{-1} e^{at}(1+t)dt + \int^1_0e^{at}(1-t)dt$ and substitute and simplify later:

Now the integrals become: $\int^{1}_{-1} e^{at}dt + \int^0_{-1}te^{at}dt - \int^1_0 te^{at} dt.$
These are easy to do; the first is merely $\frac{1}{a}(e^a - e^{-a})$ and the next two have the same anti-derivative, which can be obtained by an “integration by parts” calculation: $\frac{t}{a}e^{at} -\frac{1}{a^2}e^{at}$; evaluating the limits yields:

$-\frac{1}{a^2}-(\frac{-1}{a}e^{-a} -\frac{1}{a^2}e^{-a}) - (\frac{1}{a}e^{a} -\frac{1}{a^2}e^a)+ (-\frac{1}{a^2})$

Add the first integral and simplify and we get: $-\frac{1}{a^2}(2 - (e^{a} + e^{-a}))$. NOW use $a = -2\pi i s$ and we have the integral is $\frac{1}{4 \pi^2 s^2}(2 - (e^{2 \pi i s} + e^{-2 \pi i s})) = \frac{1}{4 \pi^2 s^2}(2 - 2\cos(2 \pi s))$ by Euler’s formula.

Now we need some trig to get this into a form that is “engineering/scientist” friendly; here we turn to the formula: $\sin^2(x) = \frac{1}{2}(1-\cos(2x))$, so $2 - 2\cos(2 \pi s) = 4\sin^2(\pi s)$ and our answer is $\frac{\sin^2( \pi s)}{(\pi s)^2} = (\frac{\sin(\pi s)}{\pi s})^2$, which is often denoted $(sinc(s))^2$, as the “normalized” $sinc$ function is given by $sinc(x) = \frac{\sin(\pi x)}{\pi x}$ (we want the function to have zeros at the nonzero integers and to “equal” one at $x = 0$; remember that famous limit!).

So, the point is that using $a$ made the algebra a whole lot easier.
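And the final formula is easy to sanity-check numerically (a sketch; the grid and the sample values of $s$ are my arbitrary choices, with $s = 0$ avoided to dodge the removable singularity):

```python
import numpy as np

# Compare the Fourier transform of the tent map against (sin(pi s)/(pi s))^2.
t = np.linspace(-1.0, 1.0, 200001)
dt = t[1] - t[0]
f = 1.0 - np.abs(t)  # the tent map on its support [-1, 1]

for s in [0.25, 0.5, 1.5]:
    F = np.sum(np.exp(-2j * np.pi * s * t) * f) * dt  # Riemann sum for F(f)(s)
    expected = (np.sin(np.pi * s) / (np.pi * s)) ** 2
    print(s, F.real, expected)  # the two columns agree
```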

Now, if you are shaking your head and muttering about how this calculation was crude and that one usually uses “convolution” instead: this post is probably too elementary for you. 🙂

## August 18, 2014

### Interchanging infinite sums with integrals

Filed under: Uncategorized — collegemathteaching @ 1:03 pm

This part of this series of lectures got me to thinking about this topic; the relevant part starts at 13 minutes:

(side note: I am enjoying these and hope to finish all 30!)

Of interest here: if $u(x,t)$ describes the heat in a one dimensional circle of metal of radius one, one can do the analysis of the heat equation $u_t = a u_{xx}$ with initial condition $u(x,0) = f(x)$, and, assuming that $u$ has a Fourier expansion (complex coefficients), one can obtain: $u(x,t) = \Sigma_k \hat{f_k}e^{-4ak \pi^2t}e^{2k \pi i x}$, which can be rewritten as $\Sigma_k (\int^1_0 f(w) e^{-2k \pi i w} dw) e^{-4ak \pi^2t}e^{2k \pi i x} = \Sigma_k \int^1_0 f(w) e^{2k \pi i (x-w)} e^{-4ak \pi^2t} dw$. Note: $k$ ranges from $-\infty$ to $\infty$; by $\Sigma^{\infty}_{k=-\infty} c_k$ we mean $lim_{n \rightarrow \infty} \Sigma^{n}_{k=-n}c_k$, and note that $c_k, c_{-k}$ are complex conjugates for all $k$.

Now IF we could interchange the summation sign and the integral sign we’d have: $\int^1_0 \Sigma_k f(w) e^{2k \pi i (x-w)} e^{-4ak \pi^2t} dw$. Now let $g(x,t) = \Sigma_k e^{2k \pi i x} e^{-4ak \pi^2t}$; then we could say that $u(x,t) = \int^1_0 g(x-w,t) f(w) dw$, which is a convolution product; $g(x,t)$ is the heat kernel, which is a nice form. But about that interchange: when can we do it?

First note that by $\Sigma_k f_k$ we mean the limit of the sequence of partial sums: $\phi_n = \Sigma_{k = -n}^{n} f_k$ and if $\int^1_0 lim_n \phi_n = lim_n \int^1_0 \phi_n$ then the interchange is valid. NOTE: I am following the custom of not using the “differential” $dx$ and of letting it be understood that $lim_n$ means $lim_{n \rightarrow \infty}$.

The “I don’t want a TL;DR answer” version
If you are comfortable with Lebesgue integration, then the Dominated Convergence Theorem is the standard: if $\phi_n$ are all measurable functions and $lim_n \phi_n = \phi$ (pointwise) and there exists an integrable function $g$ where $g \ge |\phi_n|$ for all $n$, then $\int^1_0 lim_n \phi_n = lim_n \int^1_0 \phi_n = \int^1_0 \phi$.
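A standard toy illustration (my example, not from the lecture): take $\phi_n(x) = x^n$ on $[0,1]$, dominated by $g \equiv 1$. The pointwise limit is $0$ on $[0,1)$, and the integrals $\frac{1}{n+1}$ do tend to $\int^1_0 0 = 0$, as the theorem promises.

```python
import numpy as np

# Dominated case: phi_n(x) = x^n on [0, 1], with |phi_n| <= g = 1 integrable.
# The integrals 1/(n+1) tend to 0, the integral of the pointwise limit.
x = np.linspace(0.0, 1.0, 100001)
dx = x[1] - x[0]
for n in [1, 10, 100]:
    numeric = np.sum(x ** n) * dx  # Riemann sum for the integral of x^n
    print(n, numeric, 1.0 / (n + 1))  # the two columns agree

# Contrast: phi_n = n on (0, 1/n) and 0 elsewhere admits NO integrable
# dominating function; each integral is 1 but the pointwise limit is 0.
```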

Now if the terms “Lebesgue integration” and “measurable” have you scratching your head, you can either learn a bit about them or, if you are in “tl;dr” mode, read my (hopefully) “practitioner friendly” remarks.

First of all, all Riemann integrable functions are Lebesgue integrable (provided we are NOT talking about improper integrals) and a “measurable function” is one in which the inverse image of a “measurable set” is a “measurable set”. Now “measurable set”: these include single point sets, open intervals, closed intervals, countable intersections of such, countable unions of such, and complements of such unions and intersections. Unfortunately there are measurable sets that aren’t formed in this manner, and there are such things as non-measurable sets. See here for the definition of “measurable set”.

Upshot: the sort of functions that appear in Fourier Series are measurable so you probably don’t have to worry. So there is probably no harm in assuming that the $\phi_n$ are Riemann integrable functions.

Pointwise convergence: this means for all $x$ in the domain of interest (here, $x \in [0,1]$), $lim \phi_n(x) = \phi(x)$.

Of course, when we are talking about the Fourier series for a given function $f$, there are conditions that must be met for the series to converge to a function that is “almost $f$”; the video assumes that $f$ is $L_2$, which means that $\int^1_0 |f|^2$ exists. The mathematics of convergence of a Fourier series is rich; for this note we will assume that the Fourier series in question converges.

Now for the conclusion: assuming that the $\phi_n$ converge pointwise to some $\phi$, then $lim_n \int^1_0 \phi_n = \int^1_0 lim_n \phi_n$, but we need to use the Lebesgue integral to guarantee this equality, in general. This is why:

For example, suppose we enumerate the rational numbers by $q_{1},q_{2},...q_{k}...$ and define $f_{1}(x)=\left\{\begin{array}{c}1,x\neq q_{1} \\ 0,x=q_{1}\end{array}\right.$ and then inductively define $f_{k}(x)=\left\{\begin{array}{c}1,x\notin \{q_{1},q_{2},..q_{k}\} \\ 0,x\in \{q_{1},q_{2},..q_{k}\}\end{array}\right.$. Then $f_{k}\rightarrow f=\left\{\begin{array}{c}1,x\notin \{q_{1},q_{2},..q_{k}....\} \\ 0,x\in \{q_{1},q_{2},..q_{k},...\}\end{array}\right.$ and for each $k$, $\int_{0}^{1}f_{k}(x)dx=1$ but $f$, the limit function, is not Riemann integrable. It is Lebesgue integrable though, and the integral remains 1.

But, given the types of series that the practitioner will be working with (typically: only a finite number of maximums and minimums on a given interval and a finite number of jump discontinuities), one will probably not encounter such pathological behavior with the functions. I give this example to explain why the Dominated Convergence Theorem uses Lebesgue integrals.

Wait a minute, you might say: didn’t I read something about “uniform convergence” of functions leading to the limiting behavior that we need? Well, yes, and I’ll explain that here:

we say that $\phi_n \rightarrow \phi$ uniformly if for any $\epsilon > 0$ there exists $N$ such that for all $n > N$, $|\phi_n(x) - \phi(x)| < \epsilon$ for ALL $x$ in the interval of interest. Then it is a routine exercise in Riemann integration to see that if $\phi_n \rightarrow \phi$ uniformly then $\int^1_0 \phi_n \rightarrow \int^1_0 \phi$. The down side is that we rarely have uniform convergence when we are talking about Fourier series terms. Here is why: it is known that if $\phi_n \rightarrow \phi$ uniformly and all of the $\phi_n$ are continuous, then the limit function $\phi$ is continuous as well. However, when one obtains the Fourier series for a function with jump discontinuities (say, for a pulse wave), one sees that the terms (and hence the sequence of partial sums) of the Fourier series are continuous but what the Fourier series converges to is not continuous; hence the convergence of the series is NOT uniform.
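This failure of uniformity is easy to see numerically for the square (pulse) wave (a sketch; the grid and the values of $n$ are my arbitrary choices): each partial sum is continuous, so near the jumps the sup of $|\phi_n - f|$ stays close to 1 no matter how large $n$ gets.

```python
import numpy as np

# Square wave f = 1 on (0, pi), -1 on (-pi, 0); its Fourier partial sums are
# S_n(x) = (4/pi) * sum of sin(kx)/k over odd k < 2n.  Each S_n is continuous,
# but the limit is not, so the convergence cannot be uniform.
x = np.linspace(-np.pi, np.pi, 20001)
f = np.sign(np.sin(x))

for n in [5, 50, 500]:
    S = sum(4.0 / (np.pi * k) * np.sin(k * x) for k in range(1, 2 * n, 2))
    print(n, np.max(np.abs(S - f)))  # the sup error stays near 1
```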
