The Dirac Delta Function in Differential Equations
The delta "function" is often introduced in differential equations courses during the section on Laplace transforms. Of course the delta "function" isn't a function at all, but rather what is known as a "distribution" (more on this later).
A typical introduction is as follows: if one is working in classical mechanics and one applies a force $F(t)$ to a constant mass $m$, then one can define the impulse of $F$ over an interval $[t_1, t_2]$ by $\int_{t_1}^{t_2} F(t) \, dt = mv(t_2) - mv(t_1)$, where $v$ is the velocity. So we can do a translation to center the interval at $t = 0$ and then consider a unit impulse whose duration varies according to where $\epsilon$ is; that is, define

$\delta_{\epsilon}(t) = \begin{cases} \frac{1}{2\epsilon} & \text{for } -\epsilon < t < \epsilon \\ 0 & \text{otherwise} \end{cases}$

Then $\delta_{\epsilon}$ is the force function that produces a unit impulse for a given $\epsilon$.
Then we wave our hands and say $\delta(t) = \lim_{\epsilon \rightarrow 0^+} \delta_{\epsilon}(t)$ (this is a great reason to introduce the concept of the limit of functions in a later course) and then argue that for all functions $f$ that are continuous over an interval containing $0$,

$\int_{-\infty}^{\infty} f(t) \delta(t) \, dt = f(0)$.
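To make this concrete, here is a quick numerical sketch (our addition, not part of the original argument) checking that $\int f(t) \delta_{\epsilon}(t) \, dt \rightarrow f(0)$ as $\epsilon \rightarrow 0^+$; the midpoint integrator and the choice $f = \cos$ are ours:

```python
import math

def delta_eps(t, eps):
    """The rectangular pulse: height 1/(2*eps) on (-eps, eps), zero elsewhere."""
    return 1.0 / (2.0 * eps) if -eps < t < eps else 0.0

def midpoint_integral(g, a, b, n=200000):
    """Simple midpoint-rule approximation of the integral of g over [a, b]."""
    h = (b - a) / n
    return sum(g(a + (k + 0.5) * h) for k in range(n)) * h

f = math.cos  # any function continuous near 0 works; here f(0) = 1

results = {}
for eps in (0.5, 0.1, 0.01):
    results[eps] = midpoint_integral(lambda t: f(t) * delta_eps(t, eps), -1.0, 1.0)
    print(eps, results[eps])
```

As $\epsilon$ shrinks, the printed values approach $\cos(0) = 1$; in fact for this $f$ the exact value is $\sin(\epsilon)/\epsilon$.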
The (hand waving) argument at this stage goes something like: "the mean value theorem for integrals says that there is a $t^*$ between $-\epsilon$ and $\epsilon$ such that $\int_{-\epsilon}^{\epsilon} f(t) \frac{1}{2\epsilon} \, dt = f(t^*) \frac{1}{2\epsilon} (2\epsilon) = f(t^*)$. Therefore $f(t^*) \rightarrow f(0)$ as $\epsilon \rightarrow 0$ by continuity. Therefore we can define the Laplace transform $\mathcal{L}(\delta(t-a)) = e^{-as}$ (so $\mathcal{L}(\delta(t)) = 1$)."
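One can also bypass the mean value theorem and compute the transform of the approximating pulse directly. The following computation is our addition, using the shifted pulse $\delta_{\epsilon}(t-a)$ with $a > \epsilon$ so that the pulse sits inside $[0, \infty)$:

```latex
\mathcal{L}(\delta_{\epsilon}(t-a))(s)
  = \int_{a-\epsilon}^{a+\epsilon} e^{-st}\,\frac{1}{2\epsilon}\,dt
  = \frac{e^{-s(a-\epsilon)} - e^{-s(a+\epsilon)}}{2\epsilon s}
  = e^{-as}\,\frac{\sinh(\epsilon s)}{\epsilon s}
  \xrightarrow{\;\epsilon \to 0^+\;} e^{-as},
```

since $\sinh(x)/x \rightarrow 1$ as $x \rightarrow 0$, which recovers the same transform without any appeal to the mean value theorem.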
Illustrating what the delta "function" does.
I came across this example by accident; I was holding a review session for students and asked them to give me a problem to solve.
They chose $y'' + ay' + by = \delta(t)$ (I can't remember what $a$ and $b$ were, but they aren't important here, as we will see) with initial conditions $y(0) = 0, y'(0) = 0$.
So using the Laplace transform, we obtained:

$(s^2 + as + b)Y(s) = 1 + sy(0) + y'(0) + ay(0)$

But with $y(0) = 0, y'(0) = 0$ this reduces to

$Y(s) = \frac{1}{s^2 + as + b}$

In other words, we have the "same solution" as if we had $y'' + ay' + by = 0$ with $y(0) = 0, y'(0) = 1$.
So that might be a way to talk about the delta "function": it is exactly the "impulse" one needs to "cancel out" an initial velocity of $-1$ or, equivalently, to give an initial velocity of $1$, and to do so instantly.
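The observation can be checked numerically. The sketch below is our construction: since the post's coefficients don't matter, we pick $a = 2$, $b = 5$ (homogeneous solution with $y(0)=0, y'(0)=1$ is $\frac{1}{2} e^{-t} \sin(2t)$), and we stand in for $\delta$ with a smooth unit-area pulse of tiny width:

```python
import math

# Solve y'' + a y' + b y = "delta", y(0) = y'(0) = 0, with delta replaced
# by a smooth unit-area pulse, then compare with the homogeneous problem
# y'' + a y' + b y = 0, y(0) = 0, y'(0) = 1 (coefficients a, b are our pick).
a, b = 2.0, 5.0
eps = 1e-4  # pulse half-width; the pulse is supported on [0, 2*eps]

def pulse(t):
    """Smooth bump with integral 1 supported on [0, 2*eps]."""
    if 0.0 <= t <= 2.0 * eps:
        return (1.0 - math.cos(math.pi * t / eps)) / (2.0 * eps)
    return 0.0

def rhs(t, y, v):
    return v, pulse(t) - a * v - b * y

def rk4_step(t, y, v, h):
    k1y, k1v = rhs(t, y, v)
    k2y, k2v = rhs(t + h / 2, y + h / 2 * k1y, v + h / 2 * k1v)
    k3y, k3v = rhs(t + h / 2, y + h / 2 * k2y, v + h / 2 * k2v)
    k4y, k4v = rhs(t + h, y + h * k3y, v + h * k3v)
    return (y + h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y),
            v + h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v))

t, y, v = 0.0, 0.0, 0.0
for _ in range(400):       # fine steps through the short pulse
    y, v = rk4_step(t, y, v, 1e-6)
    t += 1e-6
for _ in range(3000):      # coarser steps out to t near 3
    y, v = rk4_step(t, y, v, 1e-3)
    t += 1e-3

exact = 0.5 * math.exp(-t) * math.sin(2.0 * t)  # homogeneous solution, y'(0) = 1
print(y, exact)
```

The forced solution with zero initial data and the unforced solution with unit initial velocity agree to several digits, exactly as the Laplace-transform algebra predicts.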
Another approach to the delta function
Though it is true that $\int_{-\infty}^{\infty} \delta_{\epsilon}(t) \, dt = 1$ for all $\epsilon > 0$ and $\lim_{\epsilon \rightarrow 0^+} \delta_{\epsilon}(t) = 0$ for all $t \neq 0$ by design, note that $\delta_{\epsilon}$ fails to be continuous at $t = -\epsilon$ and at $t = \epsilon$.
So, can we obtain the delta "function" as a limit of other functions that are everywhere continuous and differentiable?
In an attempt to find such a family of functions, it is a fun exercise to look at a limit of normal density functions with mean zero:

$f_{\sigma}(t) = \frac{1}{\sigma \sqrt{2\pi}} e^{-\frac{t^2}{2\sigma^2}}$

Clearly $\int_{-\infty}^{\infty} f_{\sigma}(t) \, dt = 1$ for all $\sigma > 0$ and $\lim_{\sigma \rightarrow 0^+} f_{\sigma}(t) = 0$ for all $t \neq 0$.
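Both properties are easy to check numerically; the sketch below (our addition; the particular values of $\sigma$ and the test point $t = 0.5$ are arbitrary choices) confirms that the total mass stays at $1$ while the value at a fixed $t \neq 0$ collapses to $0$:

```python
import math

def f(t, sigma):
    """Normal density with mean 0 and standard deviation sigma."""
    return math.exp(-t * t / (2.0 * sigma * sigma)) / (sigma * math.sqrt(2.0 * math.pi))

def total_mass(sigma, a=-10.0, b=10.0, n=200000):
    """Midpoint-rule approximation of the integral of f(., sigma) over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h, sigma) for k in range(n)) * h

sigmas = (0.5, 0.25, 0.1, 0.05)
masses = {s: total_mass(s) for s in sigmas}      # each should be very close to 1
at_half = {s: f(0.5, s) for s in sigmas}         # values at the fixed point t = 0.5
print(masses)
print(at_half)
```

The masses all come out essentially $1$, while the values at $t = 0.5$ shrink rapidly toward $0$ as $\sigma$ decreases.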
Here is the graph of some of these functions, for three decreasing values of $\sigma$.
Calculating the Laplace transform
Start with the transform integral

$\mathcal{L}(f_{\sigma})(s) = \int_0^{\infty} e^{-st} \frac{1}{\sigma \sqrt{2\pi}} e^{-\frac{t^2}{2\sigma^2}} \, dt$

Do some algebra to combine the exponentials, complete the square, and do some more algebra to obtain:

$\mathcal{L}(f_{\sigma})(s) = e^{\frac{\sigma^2 s^2}{2}} \int_0^{\infty} \frac{1}{\sigma \sqrt{2\pi}} e^{-\frac{(t + \sigma^2 s)^2}{2\sigma^2}} \, dt$

Now do the usual transformation to the standard normal random variable via $z = \frac{t + \sigma^2 s}{\sigma}$
and we obtain:

$\mathcal{L}(f_{\sigma})(s) = e^{\frac{\sigma^2 s^2}{2}} (1 - \Phi(\sigma s))$

for all $s$. Note: assume $s > 0$ and that $\Phi$ is shorthand for the standard normal cumulative distribution function.
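As a sanity check on the completed-square formula, the following sketch (our addition; the sample values $\sigma = 0.7$, $s = 1.3$ are arbitrary) compares the transform integral computed numerically against $e^{\sigma^2 s^2 / 2} (1 - \Phi(\sigma s))$, writing $\Phi$ via the error function:

```python
import math

def f(t, sigma):
    """Normal density with mean 0 and standard deviation sigma."""
    return math.exp(-t * t / (2.0 * sigma * sigma)) / (sigma * math.sqrt(2.0 * math.pi))

def Phi(x):
    """Standard normal CDF, expressed with the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def laplace(sigma, s, b=20.0, n=400000):
    """Midpoint-rule approximation of int_0^inf e^{-st} f(t) dt, truncated at b."""
    h = b / n
    return sum(math.exp(-s * (k + 0.5) * h) * f((k + 0.5) * h, sigma)
               for k in range(n)) * h

sigma, s = 0.7, 1.3
lhs = laplace(sigma, s)                                       # numerical transform
rhs = math.exp(sigma * sigma * s * s / 2.0) * (1.0 - Phi(sigma * s))  # closed form
small_sigma = laplace(0.01, s)                                # should approach 1/2
print(lhs, rhs, small_sigma)
```

The two sides agree to high precision, and shrinking $\sigma$ pushes the transform toward $\frac{1}{2}$, foreshadowing the limit taken next.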
Now if we take a limit as $\sigma \rightarrow 0^+$ we get $e^0 (1 - \Phi(0)) = \frac{1}{2}$ on the right hand side.
Hence, one way to define $\delta$ is as $\delta(t) = \lim_{\sigma \rightarrow 0^+} 2 f_{\sigma}(t)$. This means that while $\lim_{\sigma \rightarrow 0^+} \mathcal{L}(f_{\sigma})(s) = \frac{1}{2}$
is off by a factor of 2,

$\lim_{\sigma \rightarrow 0^+} \mathcal{L}(2 f_{\sigma})(s) = 1$

as desired.
Since we now have derivatives of the functions to examine, why don't we?

$f_{\sigma}'(t) = -\frac{t}{\sigma^3 \sqrt{2\pi}} e^{-\frac{t^2}{2\sigma^2}}$

which is zero at $t = 0$ for all $\sigma$. But the behavior of the derivative is interesting: the derivative is at its minimum at $t = \sigma$ and at its maximum at $t = -\sigma$ (as we tell our probability students: the standard deviation is the distance from the origin to the inflection points) and as $\sigma \rightarrow 0^+$ the inflection points get closer together and the second derivative at the origin approaches $-\infty$, which can be thought of as an instant drop from a positive velocity at $t = 0$.
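For the record, here is the standard calculus behind those claims (our write-up):

```latex
f_{\sigma}''(t) = \frac{1}{\sigma^3 \sqrt{2\pi}}
  \left( \frac{t^2}{\sigma^2} - 1 \right) e^{-\frac{t^2}{2\sigma^2}},
\qquad
f_{\sigma}''(0) = -\frac{1}{\sigma^3 \sqrt{2\pi}} \xrightarrow{\;\sigma \to 0^+\;} -\infty.
```

Setting $f_{\sigma}'' = 0$ gives the inflection points $t = \pm\sigma$, where the first derivative attains its extreme values $f_{\sigma}'(\pm\sigma) = \mp \frac{1}{\sigma^2 \sqrt{2\pi e}}$; these blow up to $\mp\infty$ as $\sigma \rightarrow 0^+$, matching the ever-steeper graphs.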
Here are the graphs of the derivatives of the density functions that were plotted above; note how the part of the graph through the origin becomes more vertical as the standard deviation approaches zero.