College Math Teaching

August 25, 2014

Fourier Transform of the “almost Gaussian” function with a residue integral

This is based on the lectures on the Fourier Transform by Brad Osgood from Stanford.

Here, F(f)(s) = \int^{\infty}_{-\infty} e^{-2 \pi i st} f(t) dt , provided the integral converges.

The “almost Gaussian” integrand is f(t) = e^{-\pi t^2} ; one can check that \int^{\infty}_{-\infty} e^{-\pi t^2} dt = 1 . One way is to use the fact that \int^{\infty}_{-\infty} e^{-x^2} dx = \sqrt{\pi} and do the substitution x = \sqrt{\pi} t; of course one should be able to demonstrate the fact to begin with. (side note: a non-standard way involving symmetries and volumes of revolution discovered by Alberto Delgado can be found here)
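If you want a quick numerical sanity check of this normalization (my addition, not part of Osgood's lecture; it assumes Python with scipy installed):

```python
# Check that the "almost Gaussian" integrates to 1 over the whole line.
import numpy as np
from scipy.integrate import quad

val, err = quad(lambda t: np.exp(-np.pi * t**2), -np.inf, np.inf)
print(val)  # 1.0 up to quadrature error
```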

So, during this lecture, Osgood shows that F(e^{-\pi t^2}) = e^{-\pi s^2} ; that is, this modified Gaussian function is “its own Fourier transform”.

I’ll sketch out what he did in the lecture at the end of this post. But just for fun (and to make a point) I’ll give a method that uses an elementary residue integral.

Both methods start by using the definition: F(s) = \int^{\infty}_{-\infty} e^{-2 \pi i ts} e^{-\pi t^2} dt

Method 1: combine the exponential functions in the integrand:

\int^{\infty}_{-\infty} e^{-\pi(t^2 +2its)}  dt . Now complete the square to get: \int^{\infty}_{-\infty} e^{-\pi(t^2 +2its-s^2)-\pi s^2}  dt

Now factor out the factor involving s alone and write as a square: e^{-\pi s^2}\int^{\infty}_{-\infty} e^{-\pi(t+is)^2}  dt

Now, make the substitution x = t+is, dx = dt to obtain:

e^{-\pi s^2}\int^{\infty+is}_{-\infty+is} e^{-\pi x^2}  dx

Now we show that the above integral is really equal to e^{-\pi s^2}\int^{\infty}_{-\infty} e^{-\pi x^2}  dx = e^{-\pi s^2} (1) = e^{-\pi s^2}

To show this, we compute \int_{\gamma} e^{-\pi z^2} dz along the rectangular path \gamma with vertices -x, x, x+is, -x+is and let x \rightarrow \infty .

[Figure: the rectangular contour \gamma with vertices -x, x, x+is, -x+is .]
Now the integral around the contour is 0 because e^{-\pi z^2} is entire, so Cauchy's integral theorem applies.

We wish to calculate the integral along the top side from -x+is to x+is , which is the negative of the top edge's contribution as it is traversed in \gamma . Integrating along the bottom gives \int^{x}_{-x} e^{-\pi t^2} dt \rightarrow 1 as x \rightarrow \infty .
As for the sides: on the vertical segments we have z = \pm x + iy with 0 \leq y \leq s , so |e^{-\pi z^2}| = e^{\pi(y^2 - x^2)} \leq e^{\pi(s^2-x^2)} , which goes to zero as x \rightarrow \infty . So the integrals along the vertical paths approach zero; therefore the integrals along the top and bottom edges agree in the limit, and the result follows.
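For the skeptical, here is a numerical check of the whole identity. This sketch is my addition and assumes Python with scipy; it splits the complex integrand into real and imaginary parts, since quad integrates real-valued functions:

```python
# Evaluate F(s) = ∫ e^{-2πist} e^{-πt²} dt numerically and compare to e^{-πs²}.
import numpy as np
from scipy.integrate import quad

def fourier_transform(s):
    re, _ = quad(lambda t: np.cos(2 * np.pi * s * t) * np.exp(-np.pi * t**2),
                 -np.inf, np.inf)
    im, _ = quad(lambda t: -np.sin(2 * np.pi * s * t) * np.exp(-np.pi * t**2),
                 -np.inf, np.inf)
    return re + 1j * im

for s in [0.0, 0.5, 1.0, 2.0]:
    # The two printed values agree, and the imaginary part is ~0.
    print(s, fourier_transform(s).real, np.exp(-np.pi * s**2))
```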

Method 2: The method in the video
This uses “differentiation under the integral sign”, which we talk about here.

Start with F(s) = \int^{\infty}_{-\infty} e^{-2 \pi i ts} e^{-\pi t^2} dt and note \frac{dF}{ds} = \int^{\infty}_{-\infty} (-2 \pi i t) e^{-2 \pi i ts} e^{-\pi t^2} dt

Now we do integration by parts: u = e^{-2 \pi i ts}, dv = (-2 \pi i t)e^{-\pi t^2} dt \rightarrow v = i e^{-\pi t^2}, du = (-2 \pi i s)e^{-2 \pi i ts} dt and the integral becomes:

i e^{-\pi t^2} e^{-2 \pi i ts}|^{\infty}_{-\infty} - (i)(-2 \pi i s) \int^{\infty}_{-\infty} e^{-2 \pi i ts} e^{-\pi t^2} dt

Now the first term is zero for all values of s as t \rightarrow \pm \infty . The second term is merely:

-(2 \pi s) \int^{\infty}_{-\infty} e^{-2 \pi i ts} e^{-\pi t^2} dt = -(2 \pi s) F(s) .

So we have shown that \frac{d F}{ds} = (-2 \pi s)F , a differential equation in s whose solution is F = F_0 e^{- \pi s^2} (a simple separation of variables calculation verifies this). Now to solve for the constant F_0 , note that F(0) = \int^{\infty}_{-\infty} e^{0} e^{-\pi t^2} dt = 1 .
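A sympy one-liner confirms the separation of variables step (again my addition, assuming Python with sympy):

```python
# Solve dF/ds = -2πs·F with F(0) = 1 symbolically.
import sympy as sp

s = sp.symbols('s')
F = sp.Function('F')
sol = sp.dsolve(sp.Eq(F(s).diff(s), -2 * sp.pi * s * F(s)), F(s), ics={F(0): 1})
print(sol)  # Eq(F(s), exp(-pi*s**2))
```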

The result follows.

Now: which method was easier? The second required differential equations and differentiating under the integral sign; the first required an easy contour integral.

By the way: the video comes from an engineering class. Engineers need to know this stuff!

January 7, 2011

The Dirac Delta Function in an Elementary Differential Equations Course


The delta "function" is often introduced into differential equations courses during the section on Laplace transforms. Of course the delta "function" isn't a function at all but rather what is known as a "distribution" (more on this later).

A typical introduction is as follows: if one is working in classical mechanics and one applies a force F(t) to a constant mass m , then one can define the impulse I of F over an interval [a,b] by I=\int_{a}^{b}F(t)dt=m(v(b)-v(a)) where v is the velocity. So we can do a translation to set a=0 and then consider a unit impulse and vary F(t) according to where b is; that is, define

\delta ^{\varepsilon}(t)=\begin{cases}\frac{1}{\varepsilon } & 0\leq t\leq \varepsilon \\ 0 & \text{elsewhere}\end{cases} .

Then F(t)=\delta ^{\varepsilon }(t) is the force function that produces unit impulse for a given \varepsilon >0.

Then we wave our hands and say \delta (t)=\lim _{\varepsilon \rightarrow 0}\delta ^{\varepsilon }(t) (this is a great reason to introduce the concept of the limit of functions in a later course) and then argue that for all functions that are continuous over an interval containing 0,
\int_{0}^{\infty }\delta (t)f(t)dt=f(0).

The (hand-waving) argument at this stage goes something like: "the mean value theorem for integrals says that there is a c_{\varepsilon } between 0 and \varepsilon such that \int_{0}^{\varepsilon }\delta^{\varepsilon }(t)f(t)dt=\frac{1}{\varepsilon}f(c_{\varepsilon})(\varepsilon -0)=f(c_{\varepsilon }) . Therefore as \varepsilon\rightarrow 0 , \int_{0}^{\varepsilon }\delta^{\varepsilon}(t)f(t)dt=f(c_{\varepsilon })\rightarrow f(0) by continuity." Therefore we can define the Laplace transform L(\delta (t))=e^{-s \cdot 0}=1 .
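One can watch this mean value theorem argument play out numerically; the little sketch below is my addition (Python with scipy assumed, and f(t) = \cos t is just a sample continuous function):

```python
# ∫_0^ε (1/ε) f(t) dt should approach f(0) as ε shrinks.
import numpy as np
from scipy.integrate import quad

f = np.cos  # f(0) = 1; any function continuous at 0 works
for eps in [1.0, 0.1, 0.01, 0.001]:
    val, _ = quad(lambda t: (1.0 / eps) * f(t), 0, eps)
    print(eps, val)  # tends to f(0) = 1
```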

Illustrating what the delta "function" does.

I came across this example by accident; I was holding a review session for students and asked for them to give me a problem to solve.

They chose y^{\prime \prime }+ay^{\prime }+by=\delta (t) (I can't remember what a and b were, but they aren't important here, as we will see) with initial conditions y(0)=0, y^{\prime }(0)=-1 .

So using the Laplace transform, we obtained:

(s^{2}+as+b)Y-sy(0)-y^{\prime }(0)-ay(0)=1

But with y(0)=0,y^{\prime }(0)=-1 this reduces to (s^{2}+as+b)Y+1=1\rightarrow Y=0

In other words, we have the "same solution" as if we had y^{\prime\prime }+ay^{\prime }+by=0 with y(0)=0, y^{\prime }(0)=0 .

So that might be a way to talk about the delta "function": it is exactly the "impulse" one needs to "cancel out" an initial velocity of -1 or, equivalently, to give an initial velocity of 1, and to do so instantly.
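One can see this cancellation by replacing \delta (t) with the pulse \delta^{\varepsilon}(t) and integrating numerically. The sketch below is my addition; the choices a = 2, b = 5, \varepsilon = 10^{-4} are arbitrary stand-ins, and Python with scipy is assumed:

```python
# Integrate y'' + a y' + b y = δ^ε(t), y(0) = 0, y'(0) = -1 with a narrow pulse.
import numpy as np
from scipy.integrate import solve_ivp

a, b, eps = 2.0, 5.0, 1e-4

def rhs(t, y):
    pulse = 1.0 / eps if 0.0 <= t <= eps else 0.0
    return [y[1], pulse - a * y[1] - b * y[0]]

# Resolve the pulse with a small step, then coast to t = 5.
sol1 = solve_ivp(rhs, [0, eps], [0.0, -1.0], max_step=eps / 50)
sol2 = solve_ivp(rhs, [eps, 5], sol1.y[:, -1])
print(np.max(np.abs(sol2.y[0])))  # tiny: the impulse cancels the initial velocity
```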

Another approach to the delta function

Though it is true that \int_{-\infty }^{\infty }\delta^{\varepsilon }(t)dt=1 for all \varepsilon and \int_{-\infty}^{\infty }\delta (t)dt=1 by design, note that \delta ^{\varepsilon }(t) fails to be continuous at 0 and at \varepsilon .

So, can we obtain the delta ”function” as a limit of other functions that are everywhere continuous and differentiable?

In an attempt to find such a family of functions, it is a fun exercise to look at a limit of normal density functions with mean zero:

f_{\sigma }(t)=\frac{1}{\sigma \sqrt{2\pi }}\exp (-\frac{1}{2\sigma ^{2}}t^{2}). Clearly for all
\sigma >0,\int_{-\infty }^{\infty }f_{\sigma}(t)dt=1 and \int_{0}^{\infty }f_{\sigma }(t)dt=\frac{1}{2}.

Here is the graph of some of these functions: we use \sigma = .5 , \sigma = .25 and \sigma = .1 respectively.
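If you want to generate these graphs yourself, here is a short matplotlib sketch (my addition; Python assumed):

```python
# Plot the normal densities f_σ for σ = 0.5, 0.25, 0.1.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(-2, 2, 1000)
for sigma in [0.5, 0.25, 0.1]:
    f = np.exp(-t**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    plt.plot(t, f, label=f"sigma = {sigma}")
plt.legend()
plt.xlabel("t")
plt.show()
```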

Calculating the Laplace transform

L(\frac{1}{\sigma \sqrt{2\pi }}\exp (-\frac{1}{2\sigma ^{2}}t^{2}))= \frac{1}{\sigma \sqrt{2\pi }}\int_{0}^{\infty }\exp (-\frac{1}{2\sigma^{2}}t^{2})\exp (-st)dt=

Do some algebra to combine the exponentials, complete the square and do some algebra to obtain:

\frac{1}{\sigma \sqrt{2\pi }}\int_{0}^{\infty }\exp (-\frac{1}{2\sigma ^{2}}(t+\sigma ^{2}s)^{2})\exp (\frac{s^{2}\sigma^{2}}{2})dt=\exp (\frac{s^{2}\sigma ^{2}}{2})[\frac{1}{\sigma \sqrt{2\pi }}\int_{0}^{\infty }\exp (-\frac{1}{2\sigma ^{2}}(t+\sigma^{2}s)^{2})dt]

Now do the usual transformation to the standard normal random variable via z=\dfrac{t+\sigma ^{2}s}{\sigma }

And we obtain:

L(f_{\sigma }(t))=\exp (\frac{s^{2}\sigma ^{2}}{2})P(Z>\sigma s) for all \sigma >0 . Note: assume s>0 , and P(Z>\sigma s) denotes the probability that a standard normal random variable Z exceeds \sigma s .

Now if we take a limit as \sigma \rightarrow 0 we get \frac{1}{2} on the right hand side.
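Here is a numerical check of both the closed form and the limit (my addition; Python with scipy assumed, with s = 1 as a fixed sample value):

```python
# Compare L(f_σ)(s) by quadrature against exp(s²σ²/2)·P(Z > σs); both tend to 1/2.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

s = 1.0
for sigma in [1.0, 0.5, 0.1, 0.01]:
    lhs, _ = quad(lambda t: norm.pdf(t, scale=sigma) * np.exp(-s * t), 0, np.inf)
    rhs = np.exp(s**2 * sigma**2 / 2) * norm.sf(sigma * s)
    print(sigma, lhs, rhs)
```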

Hence, one way to define \delta is as 2\lim _{\sigma \rightarrow0}f_{\sigma }(t) . This means that while
\lim_{\sigma \rightarrow0}\int_{-\infty }^{\infty }2f_{\sigma }(t)dt is off by a factor of 2,
\lim_{\sigma \rightarrow 0}\int_{0}^{\infty }2f_{\sigma }(t)dt=1 as desired.

Since we now have derivatives of the functions to examine, why don’t we?

\frac{d}{dt}2f_{\sigma }(t)=-\frac{2t}{\sigma ^{3}\sqrt{2\pi }}\exp (-\frac{1}{2\sigma ^{2}}t^{2}) , which is zero at t=0 for all \sigma >0 . But the behavior of the derivative is interesting: it attains its minimum at t=\sigma and its maximum at t=-\sigma (as we tell our probability students: the standard deviation is the distance from the origin to the inflection points). As \sigma \rightarrow 0 , the inflection points get closer together and the second derivative at the origin approaches -\infty , which can be thought of as an instant drop from a positive velocity at t=0 .

Here are the graphs of the derivatives of the density functions that were plotted above; note how the part of the graph through the origin becomes more vertical as the standard deviation approaches zero.
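To regenerate these derivative graphs, here is a matching matplotlib sketch (again my addition):

```python
# Plot d/dt [2 f_σ(t)] for σ = 0.5, 0.25, 0.1; the drop through the origin
# steepens as σ shrinks.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(-2, 2, 1000)
for sigma in [0.5, 0.25, 0.1]:
    df = -2 * t * np.exp(-t**2 / (2 * sigma**2)) / (sigma**3 * np.sqrt(2 * np.pi))
    plt.plot(t, df, label=f"sigma = {sigma}")
plt.legend()
plt.xlabel("t")
plt.show()
```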
