# College Math Teaching

## August 25, 2014

### Fourier Transform of the “almost Gaussian” function with a residue integral

This is based on the lectures on the Fourier Transform by Brad Osgood of Stanford.

And here, $F(f)(s) = \int^{\infty}_{-\infty} e^{-2 \pi i st} f(t) dt$ provided the integral converges.

The “almost Gaussian” integrand is $f(t) = e^{-\pi t^2}$; one can check that $\int^{\infty}_{-\infty} e^{-\pi t^2} dt = 1$. One way is to use the fact that $\int^{\infty}_{-\infty} e^{-x^2} dx = \sqrt{\pi}$ and do the substitution $x = \sqrt{\pi} t$; of course one should be able to demonstrate the fact to begin with. (side note: a non-standard way involving symmetries and volumes of revolution discovered by Alberto Delgado can be found here)

So, during this lecture, Osgood shows that $F(e^{-\pi t^2}) = e^{-\pi s^2}$; that is, this modified Gaussian function is “its own Fourier transform”.

I’ll sketch out what he did in the lecture at the end of this post. But just for fun (and to make a point) I’ll give a method that uses an elementary residue integral.

Both methods start by using the definition: $F(s) = \int^{\infty}_{-\infty} e^{-2 \pi i ts} e^{-\pi t^2} dt$

Method 1: combine the exponential functions in the integrand:

$\int^{\infty}_{-\infty} e^{-\pi(t^2 +2 i ts)} dt$. Now complete the square to get: $\int^{\infty}_{-\infty} e^{-\pi(t^2 +2 i ts-s^2)-\pi s^2} dt$

Now factor out the factor involving $s$ alone and write as a square: $e^{-\pi s^2}\int^{\infty}_{-\infty} e^{-\pi(t+is)^2} dt$

Now, make the substitution $x = t+is, dx = dt$ to obtain:

$e^{-\pi s^2}\int^{\infty+is}_{-\infty+is} e^{-\pi x^2} dx$

Now we show that the above integral is really equal to $e^{-\pi s^2}\int^{\infty}_{-\infty} e^{-\pi x^2} dx = e^{-\pi s^2} (1) = e^{-\pi s^2}$

To show this, we evaluate $\int_{\gamma} e^{-\pi z^2} dz$ along the rectangular path $\gamma$ with vertices $-x, x, x+is, -x+is$ and let $x \rightarrow \infty$

Now the integral around the contour is 0 because $e^{-\pi z^2}$ is analytic (indeed entire).

We wish to calculate the negative of the integral along the top side of the contour; integrating along the bottom side gives 1 in the limit.
As for the vertical sides: fix $s$ and write $z = \pm x + iy$ with $0 \le y \le s$. Then $|e^{-\pi z^2}| = e^{-\pi(x^2 - y^2)} \le e^{\pi(s^2 - x^2)}$, and this bound goes to zero as $x \rightarrow \infty$. So the integrals along the vertical paths approach zero; therefore the integrals along the top and bottom sides agree in the limit and the result follows.
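Since the whole point is a computation, it is easy to sanity-check numerically. The sketch below (my own throwaway Python, not part of Osgood's lecture; the truncation interval and step size are ad hoc choices) approximates the transform by a Riemann sum and compares it with $e^{-\pi s^2}$. Truncating to $[-L, L]$ is harmless because the integrand decays like $e^{-\pi t^2}$.

```python
import cmath, math

def fourier_gaussian(s, L=10.0, n=20000):
    """Approximate F(s) = ∫ e^{-2πist} e^{-πt²} dt by a Riemann sum on [-L, L].
    The Gaussian decay makes both truncation and discretization errors tiny."""
    h = 2 * L / n
    total = 0j
    for k in range(n + 1):
        t = -L + k * h
        total += cmath.exp(-2j * math.pi * s * t - math.pi * t * t)
    return (total * h).real  # the imaginary part vanishes by symmetry

for s in (0.0, 0.5, 1.0, 2.0):
    assert abs(fourier_gaussian(s) - math.exp(-math.pi * s * s)) < 1e-6
```
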

Method 2: The method in the video
This uses “differentiation under the integral sign”, which we talk about here.

Start with $F(s) = \int^{\infty}_{-\infty} e^{-2 \pi i ts} e^{-\pi t^2} dt$ and note $\frac{dF}{ds} = \int^{\infty}_{-\infty} (-2 \pi i t) e^{-2 \pi i ts} e^{-\pi t^2} dt$

Now we do integration by parts: $u = e^{-2 \pi i ts}, dv = (-2 \pi i t)e^{-\pi t^2} dt \rightarrow v = i e^{-\pi t^2}, du = (-2 \pi i s)e^{-2 \pi i ts} dt$ and the integral becomes:

$i e^{-\pi t^2} e^{-2 \pi i ts}\Big|^{\infty}_{-\infty} - (i)(-2 \pi i s) \int^{\infty}_{-\infty} e^{-2 \pi i ts} e^{-\pi t^2} dt$

Now the first term is zero for all values of $s$ as $t \rightarrow \pm \infty$. The second term is merely:

$-(2 \pi s) \int^{\infty}_{-\infty} e^{-2 \pi i ts} e^{-\pi t^2} dt = -(2 \pi s) F(s)$.

So we have shown that $\frac{d F}{ds} = (-2 \pi s)F$, a differential equation in $s$ with solution $F = F_0 e^{- \pi s^2}$ (a simple separation of variables calculation will verify this). Now to solve for the constant $F_0$, note that $F(0) = \int^{\infty}_{-\infty} e^{0} e^{-\pi t^2} dt = 1$.

The result follows.

Now: which method was easier? The second required differential equations and differentiating under the integral sign; the first required an easy residue integral.

By the way: the video comes from an engineering class. Engineers need to know this stuff!

## August 7, 2014

### Letting complex algebra make our calculus lives easier

Filed under: basic algebra, calculus, complex variables — collegemathteaching @ 1:37 am

If one wants to use complex arithmetic in elementary calculus, one should, of course, verify a few things first. One might talk about elementary complex arithmetic and about complex valued functions of a real variable at an elementary level, e. g. $f(x) + ig(x)$. Then one might discuss Euler’s formula $e^{ix} = cos(x) + isin(x)$ and show that the usual laws of differentiation hold, i. e. show that $\frac{d}{dx} e^{ix} = ie^{ix}$, and one might show that $(e^{ix})^k = e^{ikx}$ for $k$ an integer. The latter involves some dreary trigonometry but, by doing this ONCE at the outset, one is spared having to repeat it later.
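The facts to be verified up front can at least be spot-checked numerically before doing the dreary trigonometry. The snippet below is a throwaway Python check (not a proof, and not part of any derivation here) of Euler's formula, the integer power law, and the derivative rule:

```python
import cmath, math

for x in (0.3, 1.2, 2.7):
    euler = cmath.exp(1j * x)
    # Euler's formula: e^{ix} = cos(x) + i sin(x)
    assert abs(euler - complex(math.cos(x), math.sin(x))) < 1e-12
    # power law: (e^{ix})^k = e^{ikx} for integer k
    for k in (2, 5, -3):
        assert abs(euler ** k - cmath.exp(1j * k * x)) < 1e-12
    # derivative rule: d/dx e^{ix} = i e^{ix}, via a central difference
    h = 1e-6
    deriv = (cmath.exp(1j * (x + h)) - cmath.exp(1j * (x - h))) / (2 * h)
    assert abs(deriv - 1j * euler) < 1e-9
```
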

This is what I mean: suppose we encounter $cos^n(x)$ where $n$ is an even integer. I use an even integer power because $\int cos^n(x) dx$ is more challenging to evaluate when $n$ is even.

Coming up with the general formula can be left as an exercise in using the binomial theorem. But I’ll demonstrate what is going on when, say, $n = 8$.

$cos^8(x) = (\frac{e^{ix} + e^{-ix}}{2})^8 =$

$\frac{1}{2^8} (e^{i8x} + 8 e^{i7x}e^{-ix} + 28 e^{i6x}e^{-i2x} + 56 e^{i5x}e^{-i3x} + 70e^{i4x}e^{-i4x} + 56 e^{i3x}e^{-i5x} + 28e^{i2x}e^{-i6x} + 8 e^{ix}e^{-i7x} + e^{-i8x})$

$= \frac{1}{2^8}((e^{i8x}+e^{-i8x}) + 8(e^{i6x}+e^{-i6x}) + 28(e^{i4x}+e^{-i4x})+ 56(e^{i2x}+e^{-i2x})+ 70) =$

$\frac{70}{2^8} + \frac{1}{2^7}(cos(8x) + 8cos(6x) + 28cos(4x) +56cos(2x))$

So it follows reasonably easily that, for $n$ even,

$cos^n(x) = \frac{1}{2^{n-1}}\Sigma^{\frac{n}{2}-1}_{k=0} \binom{n}{k}cos((n-2k)x)+\frac{\binom{n}{\frac{n}{2}}}{2^n}$
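The general formula can be double-checked against $cos^n(x)$ directly in a few lines of Python (the function name is my own; this is a numerical spot check, not a proof):

```python
import math

def cos_power_even(n, x):
    """Finite-cosine form of cos(x)**n for even n:
    cos^n(x) = (1/2^{n-1}) Σ_{k=0}^{n/2-1} C(n,k) cos((n-2k)x) + C(n, n/2)/2^n."""
    total = sum(math.comb(n, k) * math.cos((n - 2 * k) * x) for k in range(n // 2))
    return total / 2 ** (n - 1) + math.comb(n, n // 2) / 2 ** n

for n in (2, 4, 6, 8):
    for x in (0.0, 0.3, 1.7, 2.9):
        assert abs(cos_power_even(n, x) - math.cos(x) ** n) < 1e-12
```
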

So integration should be a breeze. Let’s see about things like, say,

$cos(kx)sin(nx) = \frac{1}{(2)(2i)} (e^{ikx}+e^{-ikx})(e^{inx}-e^{-inx}) =$

$\frac{1}{4i}((e^{i(k+n)x} - e^{-i(k+n)x}) + (e^{i(n-k)x}-e^{-i(n-k)x})) = \frac{1}{2}(sin((k+n)x) + sin((n-k)x))$

Of course these are known formulas, but their derivation is relatively simple when one uses complex expressions.
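Again, a quick numerical spot check of the product-to-sum identity (Python, with a helper name of my own invention):

```python
import math

def cos_sin_product(k, n, x):
    """Product-to-sum form: cos(kx)·sin(nx) = (1/2)(sin((k+n)x) + sin((n-k)x))."""
    return 0.5 * (math.sin((k + n) * x) + math.sin((n - k) * x))

for k, n in ((1, 2), (3, 5), (4, 1)):
    for x in (0.1, 1.0, 2.5):
        assert abs(cos_sin_product(k, n, x) - math.cos(k * x) * math.sin(n * x)) < 1e-12
```
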

## August 6, 2014

### Where “j” comes from

I laughed at what was said from 30:30 to 31:05 or so:

If you are wondering why your engineering students want to use $j = \sqrt{-1}$, it is because, in electrical engineering, $i$ usually stands for “current”.

Though many of you know this, this lesson also gives an excellent reason to use the complex form of the Fourier series; e. g. if $f$ is piecewise smooth and has period 1, write $f(x) = \Sigma^{k = \infty}_{k=-\infty}c_k e^{i 2k\pi x}$ (usual abuse of the equals sign) rather than writing it out in sines and cosines. Of course, $\overline{c_{-k}} = c_k$ if $f$ is real valued.

How is this easier? Well, when you give a demonstration as to what the coefficients have to be (assuming that the series exists to begin with), the orthogonality condition is very easy to deal with. Calculate: $\int^1_0 e^{i 2k\pi x}e^{-i 2m\pi x} dx = 0$ for $k \ne m$. There is nothing to it; easy integral. Of course, one has to demonstrate the validity of $e^{ix} = cos(x) + isin(x)$ and show that the usual differentiation rules work ahead of time, but you need to do that only once.
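The orthogonality computation can itself be spot-checked numerically: a uniform Riemann sum of $e^{i 2\pi (k-m) x}$ over $[0,1]$ averages $n$-th roots of unity, so for integer frequencies it is exact up to roundoff (a Python sketch, with names of my own choosing):

```python
import cmath

def inner(k, m, n=1024):
    """<e_k, e_m> = ∫₀¹ e^{i2πkx} e^{-i2πmx} dx via a uniform Riemann sum;
    for integer k, m with |k - m| < n the sum is exact up to roundoff."""
    return sum(cmath.exp(2j * cmath.pi * (k - m) * j / n) for j in range(n)) / n

assert abs(inner(3, 3) - 1) < 1e-9   # k = m: the integral is 1
assert abs(inner(3, 5)) < 1e-9       # k ≠ m: the integral is 0
assert abs(inner(-2, 4)) < 1e-9
```
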

## February 24, 2014

### A real valued function that is differentiable at an isolated point

A friend of mine is covering the Cauchy-Riemann equations in his complex variables class and wondered if there is a real variable function that is differentiable at precisely one point.

The answer is “yes”, of course, but the example I could whip up on the spot is rather pathological.

Here is one example:

Let $f$ be defined as follows:

$f(x) =\left\{ \begin{array}{c} 0, x = 0 \\ \frac{1}{q^2}, x = \frac{p}{q} \\ x^2, x \ne \frac{p}{q} \end{array}\right.$

That is, $f(x) = x^2$ if $x$ is irrational or zero, and $f(x)$ is $\frac{1}{q^2}$ if $x$ is rational and $x = \frac{p}{q}$ where $gcd(p,q) = 1$.

Now calculate $lim_{x \rightarrow 0+} \frac{f(x) - f(0)}{x-0} = lim_{x \rightarrow 0+} \frac{f(x)}{x}$

Let $\epsilon > 0$ be given and choose a positive integer $M$ so that $M > \frac{1}{\epsilon}$. Let $\delta < \frac{1}{M}$. Now if $0 < x < \delta$ and $x$ is irrational, then $\frac{f(x)}{x} = \frac{x^2}{x} = x < \frac{1}{M} < \epsilon$.

Now the fun starts: if $x$ is rational, then $x = \frac{p}{q} < \frac{1}{M}$ with $p \geq 1$, which forces $q > pM \geq M$, and so $\frac{f(x)}{x} = \frac{\frac{1}{q^2}}{\frac{p}{q}} = \frac{1}{qp} \leq \frac{1}{q} < \frac{1}{M} < \epsilon$.

We looked at the right hand limit; the left hand limit works in the same manner.

Hence the derivative of $f$ exists at $x = 0$ and is equal to zero. But zero is the only place where this function is even continuous, because any open interval $I$ contains rationals with arbitrarily large denominators, so $inf \{|f(x)| : x \in I \} = 0$.
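Exact rational arithmetic makes it easy to watch the difference quotient shrink along rational points. Here is a Python sketch using the standard library's `Fraction`, which automatically reduces $\frac{p}{q}$ to lowest terms; irrational inputs of course cannot be represented this way, but for those $\frac{f(x)}{x} = x$ anyway:

```python
from fractions import Fraction

def f(x: Fraction) -> Fraction:
    # Rational x = p/q in lowest terms (Fraction reduces automatically): f(x) = 1/q².
    if x == 0:
        return Fraction(0)
    return Fraction(1, x.denominator ** 2)

assert f(Fraction(3, 7)) == Fraction(1, 49)

# The difference quotient f(x)/x at rational x = 1/q equals 1/q → 0:
for q in (10, 100, 1000, 10000):
    x = Fraction(1, q)
    assert f(x) / x == Fraction(1, q)
```
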

## October 9, 2013

### Fun with complex numbers and particular solutions

I was fooling around with $y''+by'+cy = e^{rt}cos(st)$ and thought about how to use complex numbers in the case when $e^{rt}cos(st)$ is not a solution to the related homogeneous equation. It then hit me: it is really quite simple.

First note the following: $e^{rt}cos(st) = \frac{1}{2}(e^{(r + si)t} + e^{(r - si)t})$ and $e^{rt}sin(st) = \frac{1}{2i}(e^{(r + si)t} - e^{(r - si)t})$.

Then it is a routine exercise to see the following: given that $z = r+si, \bar{z} = r-si$ are NOT solutions to $p(m)= m^2 + bm + c = 0$, where $p(m)$ is the characteristic polynomial of the differential equation, attempt $y_p = Ae^{zt} + Be^{\bar{z}t}$. Substituting into the differential equation gives $y''_p + by'_p + cy_p = A(z^2+bz+c)e^{zt} + B(\bar{z}^2 + b\bar{z} + c)e^{\bar{z}t} = Ap(z)e^{zt} + Bp(\bar{z})e^{\bar{z}t}$.

Then: if the forcing function is $e^{rt}cos(st)$, a particular solution is $y_p = Ae^{zt} + Be^{\bar{z}t}$ where $A = \frac{1}{2p(z)}, B = \frac{1}{2p(\bar{z})}$. If the forcing function is $e^{rt}sin(st)$, a particular solution is $y_p = Ae^{zt} - Be^{\bar{z}t}$ where $A = \frac{1}{2ip(z)}, B = \frac{1}{2ip(\bar{z})}$.
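A numerical plug-in check of the cosine case (Python; the coefficients $b, c, r, s$ below are arbitrary choices of mine, picked so that $z$ is not a root of $p$):

```python
import cmath, math

b, c = 1.0, 2.0            # hypothetical ODE coefficients
r, s = 0.5, 3.0            # forcing e^{rt} cos(st); z = r + si is not a root of p
z = complex(r, s)

def p(m):
    return m * m + b * m + c

A, B = 1 / (2 * p(z)), 1 / (2 * p(z.conjugate()))

def y(t):   return A * cmath.exp(z * t) + B * cmath.exp(z.conjugate() * t)
def yp(t):  return A * z * cmath.exp(z * t) + B * z.conjugate() * cmath.exp(z.conjugate() * t)
def ypp(t): return A * z * z * cmath.exp(z * t) + B * z.conjugate() ** 2 * cmath.exp(z.conjugate() * t)

for t in (0.0, 0.7, 1.5):
    residual = ypp(t) + b * yp(t) + c * y(t) - math.exp(r * t) * math.cos(s * t)
    assert abs(residual) < 1e-9        # y_p really solves the ODE
    assert abs(y(t).imag) < 1e-12      # and y_p is real-valued, since B = conj(A)
```
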

That isn’t profound, but it does lead to the charming exercise: if $z, \bar{z}$ are NOT roots to the quadratic with real coefficients $p(x)$, then $\frac{1}{p(z)} + \frac{1}{p(\bar{z})}$ is real as is $\frac{i}{p(z)} - \frac{i}{p(\bar{z})}$.

Let’s check this out: $\frac{1}{p(z)} + \frac{1}{p(\bar{z})} = \frac{p(\bar{z})+p(z)}{p(z)p(\bar{z})}$. Now look at the numerator and the denominator separately. The denominator: $p(z)p(\bar{z})= (z^2 + bz +c)(\bar{z}^2 + b\bar{z} + c) = z^2\bar{z}^2 + b(z^2\bar{z} + z\bar{z}^2) + b^2 z\bar{z} + c(z^2 + \bar{z}^2) + bc(z + \bar{z}) + c^2$. Now note that every grouped term is real: $z\bar{z} = |z|^2$, $z^2\bar{z} + z\bar{z}^2 = |z|^2(z+\bar{z})$, and $z+\bar{z}$ and $z^2+\bar{z}^2$ are both real.

The numerator: $p(z) + p(\bar{z}) = z^2 + \bar{z}^2 + b (z + \bar{z}) + 2c$ is clearly real.

What about $\frac{i}{p(z)} - \frac{i}{p(\bar{z})}$? We need only check the numerator: $i(p(\bar{z}) - p(z)) = i(\bar{z}^2 - z^2 + b(\bar{z} - z))$ (the $c$ terms cancel), and since the difference of a complex number and its conjugate is purely imaginary, multiplying by $i$ gives something real.
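The realness claim is also easy to spot-check with machine arithmetic (a Python sketch; the random sampling scheme is mine):

```python
import random

def p(z, b, c):
    # quadratic with real coefficients b and c, evaluated at complex z
    return z * z + b * z + c

random.seed(1)
for _ in range(100):
    b, c = random.uniform(-5, 5), random.uniform(-5, 5)
    z = complex(random.uniform(-3, 3), random.uniform(0.1, 3))  # z is nonreal
    if abs(p(z, b, c)) < 1e-6:
        continue  # skip samples too close to a root of p
    w1 = 1 / p(z, b, c) + 1 / p(z.conjugate(), b, c)
    w2 = 1j / p(z, b, c) - 1j / p(z.conjugate(), b, c)
    assert abs(w1.imag) < 1e-9 and abs(w2.imag) < 1e-9  # both are real
```
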

Yeah, this is elementary but this might appear as an exercise for my next complex variables class.

## May 17, 2013

### College Misery: Poem about Residue Integrals

Filed under: academia, advanced mathematics, calculus, complex variables, integrals — collegemathteaching @ 12:40 am

Seriously. Check it out.

## August 4, 2012

### Day 2, Madison MAA Mathfest

The day started with a talk by Karen King from the National Council of Teachers of Mathematics.
I usually find math education talks to be dreadful, but this one was pretty good.

The talk was about the importance of future math teachers (K-12) actually having some math background. However, she pointed out that students just having passed math courses didn’t imply that they understood the mathematical issues that they would be teaching…and it didn’t imply that their students would do better.

She gave an example: about half of those seeking to teach high school math couldn’t explain why “division by zero” was undefined! They knew that it was undefined but couldn’t explain why. I found that astonishing since I knew that in high school.

Later, she pointed out that potential teachers with a math degree didn’t understand what the issues were in defining a number like $2^{\pi}$. Of course, a proper definition of this concept requires limits, or at least a rigorous definition of the log function, and she was well aware that the vast majority of high school students aren’t ready for such things. Still, the instructor should be; as she said “we all wave our hands from time to time, but WE should know when we are waving our hands.”

She stressed that we need to get future math teachers to get into the habit (she stressed the word: “habit”) of always asking themselves “why is this true” or “why is it defined in this manner”; too many of our math major courses are rule bound, and at times we write our exams in ways that reward memorization only.

Next, Bernd Sturmfels gave the second talk in his series; this was called Convex Algebraic Geometry.

You can see some of the material here. He also led this into the concept of “semidefinite programming”.

The best I can tell: one looks at the objects studied by algebraic geometers (root sets of polynomials of several variables) and then takes an “affine slice” of these objects.

One example: the “n-ellipse” is the set of points on the plane that satisfy $\sum^m_{k=1} \sqrt{(x-u_k)^2 + (y-v_k)^2} = d$ where $(u_k, v_k)$ are points in the plane.

Questions: what is the degree of the polynomial that describes the ellipse? What happens if we let $d$ tend to zero? What is the smallest $d$ for which the ellipse is nonempty (the Fermat-Weber point)? Note: the 1-ellipse is the circle, the 2-ellipse is what we usually think of as an ellipse, and the 3-ellipse is a curve of degree 8.

Note: these types of surfaces can be realized as determinants of symmetric matrices; these matrices have real eigenvalues. We can plot curves over which an eigenvalue goes to zero and then changes sign. This process leads to what is known as a spectrahedron; this is a type of shape in space. A polyhedron can be thought of as the spectrahedron of a diagonal matrix.

Then one can seek to optimize a linear function over a spectrahedron; this leads to semidefinite programming, which, in general, is roughly as difficult as linear programming.

One use: some global optimization problems can be reduced to a semidefinite programming problem (not all).

Shorter Talks
There was a talk by Bob Palais which discussed the role of Rodrigues in the discovery of the quaternions. The idea is that Rodrigues discovered the quaternions before Hamilton did, but he described them in terms of rotations in space.

There were a few talks about geometry and how to introduce concepts to students; of particular interest was the concept of a geodesic. Ruth Berger talked about the “fish swimming in jello” model: basically suppose you had a sea of jello where the jello’s density was determined by its depth with the most dense jello (turning to infinite density) at the bottom; and it took less energy for the fish to swim in the less dense regions. Then if a fish wanted to swim between two points, what path would it take? The geometry induced by these geodesics results in the upper half plane model for hyperbolic space.

Nick Scoville gave a talk about discrete Morse theory. Here is a user’s guide. The idea: take a simplicial complex and assign numbers (integers) to the points, segments, triangles, etc. The assignment has to follow rules; basically the boundary of a complex has to have a lower number than what it bounds (with one exception…) and such an assignment leads to a Morse function. Critical sets can be defined and the various Betti numbers can be calculated.

Christopher Frayer then talked about the geometry of cubic polynomials. This is more interesting than it sounds.
Think about this: remember Rolle’s Theorem from calculus? There is an analogue of this in complex variables called the Gauss-Lucas Theorem. Basically, the roots of the derivative lie in the convex hull of the roots of the polynomial. Then there is Marden’s Theorem for polynomials of degree 3. One can talk about polynomials that have a root at $z = 1$ and two other roots in the unit circle; then one can study where the roots of the derivative lie. For a certain class of these polynomials, there is a dead circle tangent to the unit circle at 1 which encloses no roots of the derivative.

## May 3, 2012

### Composing a non-constant analytic function with a non-analytic one, part II

Filed under: advanced mathematics, analysis, calculus, complex variables, matrix algebra — collegemathteaching @ 6:40 pm

I realize that what I did in the previous post was, well, lame.
The setting: let $g$ be continuous but non-analytic in some disk $D$ in the complex plane, and let $f$ be analytic in $g(D)$ which, for the purposes of this informal note, we will take to contain an open disk. If $g(D)$ doesn’t contain an open set or if the partials of $g$ fail to exist, the question of $f(g)$ being analytic is easy and uninteresting.

Let $f(r + is ) = u(r,s) + iv(r,s)$ and $g(x+iy) = r(x,y) + is(x,y)$ where $u, v, r, s$ are real valued functions of two variables which have continuous partial derivatives. Assume that $u_r = v_s$ and $u_s = -v_r$ (the standard Cauchy-Riemann equations) in the domain of interest and that either $r_x \neq s_y$ or $r_y \neq -s_x$ in our domain of interest.

Now if the composition $f(g)$ is analytic, then the Cauchy-Riemann equations must hold; that is:
$\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}, \frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}$

Now use the chain rule and do some calculation:
The two Cauchy-Riemann equations become:
$u_r r_x + u_s s_x = v_r r_y + v_s s_y$
$u_r r_y + u_s s_y = -v_r r_x - v_s s_x$
By using the C-R equations for $u, v$ we can substitute:
$u_r r_x + u_s s_x = -u_s r_y + u_r s_y$
$u_r r_y + u_s s_y = u_s r_x - u_r s_x$
This leads to the following system of equations:
$u_r(r_x -s_y) + u_s(s_x + r_y) = 0$
$u_r(r_y + s_x) + u_s(s_y - r_x) = 0$
This leads to the matrix equation:
$\left( \begin{array}{cc}(r_x -s_y) & (s_x + r_y) \\(s_x + r_y) & (s_y - r_x) \end{array} \right) \left(\begin{array}{c}u_r \\u_s \end{array}\right) = \left(\begin{array}{c} 0 \\ 0 \end{array}\right)$

The coefficient matrix has determinant $-((r_x - s_y)^2 + (s_x + r_y)^2)$, which is zero exactly when BOTH $(r_x - s_y)$ and $(s_x + r_y)$ are zero, that is, exactly when the Cauchy-Riemann equations for $g$ hold. Since that is not the case, the system of equations has only the trivial solution, so $u_r = u_s = 0$, which implies (by C-R for $f$) that $v_r = v_s = 0$, which implies that $f$ is constant.
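One can also watch this failure numerically: take a concrete non-analytic $g$ (say $g(z) = \bar{z}$) and a non-constant analytic $f$, and check the Cauchy-Riemann equations of the composition by finite differences. This Python sketch (function names and test points are my own choices) is only an illustration of the result, not a proof:

```python
def cr_residuals(F, z, h=1e-6):
    """Finite-difference Cauchy-Riemann residuals (u_x - v_y, v_x + u_y) of F at z.
    Both residuals are ~0 iff the C-R equations hold at z."""
    x, y = z.real, z.imag
    Fx = (F(complex(x + h, y)) - F(complex(x - h, y))) / (2 * h)  # u_x + i v_x
    Fy = (F(complex(x, y + h)) - F(complex(x, y - h))) / (2 * h)  # u_y + i v_y
    return Fx.real - Fy.imag, Fx.imag + Fy.real

g = lambda z: z.conjugate()   # continuous, nowhere analytic: r_x = 1 but s_y = -1
f = lambda z: z * z           # analytic and non-constant

bad = cr_residuals(lambda z: f(g(z)), 1 + 2j)
assert max(abs(bad[0]), abs(bad[1])) > 0.1     # C-R fails for the composition
good = cr_residuals(f, 1 + 2j)
assert max(abs(good[0]), abs(good[1])) < 1e-4  # C-R holds for f itself
```
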

This result includes the “baby result” in the previous post.

## May 2, 2012

### Composition of an analytic function with a non-analytic one

Filed under: advanced mathematics, analysis, complex variables, derivatives, Power Series, series — collegemathteaching @ 7:39 pm

On a take home exam, I gave a function of the type: $f(z) = sin(k|z|)$ and asked the students to explain why such a function was continuous everywhere but not analytic anywhere.

This really isn’t hard but that got me to thinking: if $f$ is analytic at $z_0$ and NON CONSTANT, is $f(|z|)$ ever analytic? Before you laugh, remember that in calculus class, $ln|x|$ is differentiable wherever $x \neq 0$.

Ok, go ahead and laugh; after playing around with the Cauchy-Riemann equations a bit, I found that there was a much easier way, at least when $f$ is analytic on some open neighborhood of a real number.

Since $f$ is analytic at $z_0$, with $z_0$ real, write $f = \sum ^ {\infty}_{k =0} a_k (z-z_0)^k$ and then compose $f$ with $|z|$ by substituting into the series. Now if this composition is analytic, write out the Cauchy-Riemann equations for the composed function $f(x+iy) = u(x,y) + iv(x,y)$; it is now very easy to see that $v_x = v_y = 0$ on some open disk, which then implies by the Cauchy-Riemann equations that $u_x = u_y = 0$ as well, which means that the function is constant.

So, what if $z_0$ is NOT on the real axis?

Again, we write $f(x + iy) = u(x,y) + iv(x,y)$ and we use $u_{X}, u_{Y}$ (and likewise $v_{X}, v_{Y}$) to denote the partials of these functions with respect to the first and second variables respectively. Now $f(|z|) = f(\sqrt{x^2 + y^2} + 0i) = u(\sqrt{x^2 + y^2},0) + iv(\sqrt{x^2 + y^2},0)$. Now turn to the Cauchy-Riemann equations and calculate:
$\frac{\partial}{\partial x} u = u_{X}\frac{x}{\sqrt{x^2+y^2}}, \frac{\partial}{\partial y} u = u_{X}\frac{y}{\sqrt{x^2+y^2}}$
$\frac{\partial}{\partial x} v = v_{X}\frac{x}{\sqrt{x^2+y^2}}, \frac{\partial}{\partial y} v = v_{X}\frac{y}{\sqrt{x^2+y^2}}$
Insert into the Cauchy-Riemann equations:
$\frac{\partial}{\partial x} u = u_{X}\frac{x}{\sqrt{x^2+y^2}}= \frac{\partial}{\partial y} v = v_{X}\frac{y}{\sqrt{x^2+y^2}}$
$-\frac{\partial}{\partial x} v = -v_{X}\frac{x}{\sqrt{x^2+y^2}}= \frac{\partial}{\partial y} u = u_{X}\frac{y}{\sqrt{x^2+y^2}}$

From this and from the assumption that $y \neq 0$ we obtain after a little bit of algebra:
$u_{X}\frac{x}{y}= v_{X}, u_{X} = -v_{X}\frac{x}{y}$
This leads to $u_{X}\frac{x^2}{y^2} = v_{X}\frac{x}{y}=-u_{X}$, which implies either that $u_{X}$ is zero, which leads to the rest of the partials being zero (by C-R), or that $\frac{x^2}{y^2} = -1$, which is absurd.

So $f$ must have been constant.

## April 17, 2012

### Pointwise versus Uniform convergence of sequences of continuous functions: Part II

Filed under: analysis, calculus, complex variables, uniform convergence — collegemathteaching @ 12:48 am

In my complex analysis class I was grading a problem of the following type:
given $K \subset \mathbb{C}$ where $K$ is compact and given a sequence of continuous functions $f_n$ which converges to 0 pointwise on $K$, with $|f_1(z)| > |f_2(z)| > \cdots > |f_k(z)| > \cdots$ for all $z \in K$, show that the convergence is uniform.

Now what about the $|f_1(z)| > |f_2(z)| > \cdots$ hypothesis? Can it be dispensed with?

It cannot. Let $f_n(x) = sin(\frac{e \pi}{2}e^{-nx})sin(\frac{n \pi}{2} x)$ with $x \in [0,1]$; note $f_n(0) = 0$ for all $n$. To see that $f_n$ converges to zero pointwise, note that $lim_{n \rightarrow \infty}e^{-nx} = 0$ for all $x > 0$, hence $lim_{n \rightarrow \infty}sin(\frac{e \pi}{2}e^{-nx}) = 0$, which implies that $f_n \rightarrow 0$ by the squeeze theorem. But $f_n$ does not converge to 0 uniformly: since $\frac{e \pi}{2}e^{-1} = \frac{\pi}{2}$, for $t = \frac{1}{n}$ we have $f_n(t) = sin(\frac{e \pi}{2}e^{-1})sin(\frac{\pi}{2}) = sin(\frac{\pi}{2})sin(\frac{\pi}{2}) = 1$
Here is a graph of the functions for $n = 5, 10, 20, 40$
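The failure of uniform convergence is also easy to confirm numerically (a Python sketch, with a function name of my own choosing): at any fixed $x > 0$ the values shrink rapidly with $n$, yet $f_n(\frac{1}{n}) = 1$ for every $n$.

```python
import math

def f_n(n, x):
    # f_n(x) = sin((eπ/2) e^{-nx}) · sin((nπ/2) x)
    return (math.sin((math.e * math.pi / 2) * math.exp(-n * x))
            * math.sin((n * math.pi / 2) * x))

# pointwise convergence: at a fixed x > 0 the values go to zero ...
assert abs(f_n(200, 0.25)) < 1e-10
# ... but the sup over [0,1] stays at 1, witnessed at x = 1/n
for n in (5, 10, 20, 40):
    assert abs(f_n(n, 1 / n) - 1) < 1e-12
```
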