# College Math Teaching

## November 25, 2013

Filed under: differential equations, Laplace transform — Tags: — collegemathteaching @ 10:33 pm

Consider: $\sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots$

Now take the Laplace transform of the right-hand side, term by term: $\frac{1}{s^2} - \frac{3!}{s^4\, 3!} + \frac{5!}{s^6\, 5!} - \cdots = \frac{1}{s^2}\left(1 - \frac{1}{s^2} + \frac{1}{s^4} - \cdots\right)$.

This is a geometric series with ratio $-\frac{1}{s^2}$, so it is equal to $\frac{1}{s^2}\left(\frac{1}{1 + \frac{1}{s^2}}\right)$ for $s > 1$, which is, of course, $\frac{1}{1 + s^2}$, exactly what you would expect.
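If you would rather not take the geometric series on faith, here is a quick numerical sanity check (a Python sketch; the function name is mine): partial sums of the term-by-term transform agree with $\frac{1}{s^2+1}$ when $s > 1$.

```python
import math

def laplace_sin_series(s, terms=60):
    # Partial sum of the term-by-term transform of the sine series:
    # 1/s^2 - 1/s^4 + 1/s^6 - ... = (1/s^2) * sum_k (-1/s^2)^k
    return sum((-1.0) ** k / s ** (2 * k + 2) for k in range(terms))

s = 2.0
assert math.isclose(laplace_sin_series(s), 1.0 / (s ** 2 + 1.0), rel_tol=1e-12)
```

For $s \leq 1$ the partial sums fail to settle down, which matches the $s > 1$ restriction above.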

This technique works for $e^{x}$ but gives nonsense for $e^{x^2}$.

Update: note that we can get a power series for $e^{x^2} = 1 + x^2 + \frac{x^4}{2!} + \frac{x^6}{3!} + \cdots$ which, on a term-by-term basis, transforms to $\frac{1}{s} + \frac{2!}{s^3} + \frac{4!}{s^5\, 2!} + \frac{6!}{s^7\, 3!} + \cdots = \frac{1}{s} \sum_{k=0}^{\infty} \left(\frac{1}{s^2}\right)^k \frac{(2k)!}{k!}$, which converges only at $s = \infty$: the coefficients $\frac{(2k)!}{k!}$ eventually outgrow $s^{2k}$ for any fixed $s$.

## November 12, 2013

### Why I teach multiple methods for the inverse Laplace Transform.

I’ll demonstrate with a couple of examples:

$y''+4y = \sin(2t), \quad y(0) = y'(0) = 0$

If we use the Laplace transform, we obtain: $(s^2+4)Y = \frac{2}{s^2+4}$, which leads to $Y = \frac{2}{(s^2+4)^2}$. Now we’ve covered how to do this without convolutions. But the convolution integral is much easier: write $Y = \frac{2}{(s^2+4)^2} = \frac{1}{2} \frac{2}{s^2+4}\frac{2}{s^2+4}$, which means that $y = \frac{1}{2}(\sin(2t)*\sin(2t)) = \frac{1}{2}\int^t_0 \sin(2u)\sin(2t-2u)\,du = -\frac{1}{4}t\cos(2t) + \frac{1}{8}\sin(2t)$.
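As a sanity check, one can compare a numerical evaluation of the convolution integral with the closed form; a quick Python sketch (the function names are mine):

```python
import math

def convolution_solution(t, n=20000):
    # y(t) = (1/2) * integral from 0 to t of sin(2u) sin(2t - 2u) du, trapezoid rule
    h = t / n
    total = 0.0  # the integrand vanishes at both endpoints u = 0 and u = t
    for i in range(1, n):
        u = i * h
        total += math.sin(2 * u) * math.sin(2 * t - 2 * u)
    return 0.5 * h * total

def closed_form(t):
    return -0.25 * t * math.cos(2 * t) + 0.125 * math.sin(2 * t)

for t in (0.5, 1.0, 2.0):
    assert math.isclose(convolution_solution(t), closed_form(t), abs_tol=1e-6)
```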

Note: if the integral went too fast for you and you don’t want to use a calculator, use $\sin(2t-2u) = \sin(2t)\cos(2u) - \cos(2t)\sin(2u)$ and the integral becomes $\frac{1}{2}\int^t_0 \sin(2t)\cos(2u)\sin(2u) - \cos(2t)\sin^2(2u)\,du =$

$\frac{1}{2}\left(\sin(2t)\,\frac{1}{4}\sin^2(2u)\Big|^t_0 - \cos(2t)\,\frac{1}{2}\left(u - \frac{1}{4}\sin(4u)\right)\Big|^t_0\right) =$

$\frac{1}{8}\sin^3(2t) - \frac{1}{4}t\cos(2t) +\frac{1}{16}\sin(4t)\cos(2t) =$

$\frac{1}{8}\left(\sin^3(2t) +\sin(2t)\cos^2(2t)\right)-\frac{1}{4}t\cos(2t)$

$= \frac{1}{8}\sin(2t)\left(\sin^2(2t) + \cos^2(2t)\right)-\frac{1}{4}t\cos(2t) = -\frac{1}{4}t\cos(2t) + \frac{1}{8}\sin(2t)$

Now if we had instead: $y''+4y = \sin(t), \quad y(0)=0, y'(0) = 0$

The Laplace transform of the equation becomes $(s^2+4)Y = \frac{1}{s^2+1}$ and hence $Y = \frac{1}{(s^2+1)(s^2+4)}$. One could use the convolution method but partial fractions works easily: one can use the calculator (“algebra” plus “expand”) or:

$\frac{A+Bs}{s^2+4} + \frac{C + Ds}{s^2+1} =\frac{1}{(s^2+4)(s^2+1)}$. Get a common denominator and match numerators:

$(A+Bs)(s^2+1) + (C+Ds)(s^2+4) = 1$. One can use several methods to resolve this; here we will use $s = i$ to see $(C + Di)(3) = 1$, which means that $D = 0$ and $C = \frac{1}{3}$. Now use $s = 2i$ to obtain $(A + 2iB)(-3) = 1$, which means that $B = 0, A = -\frac{1}{3}$, so $Y = \frac{1}{3} \left(\frac{1}{s^2+1} - \frac{1}{s^2+4}\right)$ and hence $y = \frac{1}{3} \left(\sin(t) - \frac{1}{2} \sin(2t)\right) = \frac{1}{3}\sin(t) -\frac{1}{6}\sin(2t)$.
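A quick machine check of both the decomposition and the final answer (a Python sketch; the helper name is mine):

```python
import math

# Check 1/((s^2+1)(s^2+4)) = (1/3)(1/(s^2+1) - 1/(s^2+4)) at sample points
for s in (0.5, 1.0, 3.0, 10.0):
    lhs = 1.0 / ((s ** 2 + 1) * (s ** 2 + 4))
    rhs = (1.0 / 3.0) * (1.0 / (s ** 2 + 1) - 1.0 / (s ** 2 + 4))
    assert math.isclose(lhs, rhs, rel_tol=1e-12)

# Check that y = (1/3) sin t - (1/6) sin 2t satisfies y'' + 4y = sin t
def y(t):
    return math.sin(t) / 3.0 - math.sin(2 * t) / 6.0

h = 1e-4
for t in (0.3, 1.0, 2.5):
    ypp = (y(t + h) - 2 * y(t) + y(t - h)) / h ** 2  # central second difference
    assert abs(ypp + 4 * y(t) - math.sin(t)) < 1e-6
```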

So, sometimes the convolution leads us to the answer more quickly than other techniques do, and sometimes other techniques are easier.

## November 6, 2013

### Inverse Laplace transform example: 1/(s^2 +b^2)^2

Filed under: basic algebra, differential equations, Laplace transform — Tags: — collegemathteaching @ 11:33 pm

I talked about one way to solve $y''+y = sin(t), y(0) =y'(0) = 0$ by using Laplace transforms WITHOUT using convolutions; I happen to think that using convolutions is the easiest way here.

Here is another non-convolution method: Take the Laplace transform of both sides to get $Y(s) = \frac{1}{(s^2+1)^2}$.

Now most tables have $L(t\sin(at)) = \frac{2as}{(s^2 + a^2)^2}, \quad L(t\cos(at)) = \frac{s^2-a^2}{(s^2+a^2)^2}$

What we have is not in one of these forms. But note the following algebra trick:

$\frac{1}{(s^2+b^2)^2} = (A)\left(\frac{s^2-b^2}{(s^2 + b^2)^2} - \frac{s^2+b^2}{(s^2+b^2)^2}\right)$ when $A = -\frac{1}{2b^2}$.

Now $\frac{s^2-b^2}{(s^2 + b^2)^2} = L(t\cos(bt))$ and $\frac{s^2+b^2}{(s^2+b^2)^2} = \frac{1}{s^2+b^2} = L(\frac{1}{b}\sin(bt))$, and one can proceed from there.
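A numeric check of the trick, just to be safe (a Python sketch; the function name is mine):

```python
import math

def trick(s, b):
    # A * ( (s^2-b^2)/(s^2+b^2)^2 - (s^2+b^2)/(s^2+b^2)^2 ) with A = -1/(2 b^2)
    A = -1.0 / (2.0 * b ** 2)
    return A * ((s ** 2 - b ** 2) - (s ** 2 + b ** 2)) / (s ** 2 + b ** 2) ** 2

for b in (1.0, 2.0, 3.0):
    for s in (0.5, 1.0, 4.0):
        assert math.isclose(trick(s, b), 1.0 / (s ** 2 + b ** 2) ** 2, rel_tol=1e-12)
```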

### A weird Laplace Transform (a resonance equation)

Filed under: applied mathematics, calculus, differential equations, Laplace transform — collegemathteaching @ 12:01 am

Ok, we have $y'' + y = \sin(t), y(0) = 0, y'(0) = 0$. Now we can solve this by, say, undetermined coefficients and obtain $y = \frac{1}{2}\sin(t) -\frac{1}{2}t\cos(t)$

But what happens when we try Laplace Transforms? It is easy to see that the Laplace transform of the equation yields $(s^2+1)Y(s)=\frac{1}{s^2+1}$ which yields $Y(s) =\frac{1}{(s^2+1)^2}$

So, how do we take the inverse Laplace transform of $\frac{1}{(s^2+1)^2}$?

Here is one way: we recognize $L(tf(t)) = -\frac{d}{ds}F(s)$ where $L(f(t)) = F(s)$.

So, we might try integrating: $\int \frac{1}{(s^2+1)^2} ds$.

(no cheating with a calculator! 🙂 )

In calculus II, we do: $s = \tan(\theta), ds = \sec^2(\theta)\, d\theta$.

Then $\int \frac{1}{(s^2+1)^2} ds$ is transformed into $\int \frac{\sec^2(\theta)}{\sec^4(\theta)} d\theta = \int \cos^2(\theta) d \theta = \int \frac{1}{2} + \frac{1}{2}\cos(2 \theta)\, d \theta = \frac{1}{2} \theta + \frac{1}{4}\sin(2 \theta)$ (plus a constant, of course).

We now use $\sin(2\theta) = 2\sin(\theta)\cos(\theta)$ to obtain $\frac{1}{2} \theta + \frac{1}{4}\sin(2 \theta) = \frac{1}{2} \theta + \frac{1}{2} \sin(\theta)\cos(\theta) + C$.

Fair enough. But now we have to convert back to $s$. We use $\tan(\theta) = s$ to obtain $\cos(\theta) = \frac{1}{\sqrt{s^2+1}}, \sin(\theta) = \frac{s}{\sqrt{s^2+1}}$.

So $\frac{1}{2} \theta + \frac{1}{2} \sin(\theta)\cos(\theta)$ converts to $\frac{1}{2}\arctan(s) + \frac{1}{2}\frac{s}{s^2+1} + C = \int Y(s)\, ds$. Now we use the fact that as $s$ goes to infinity, $\int Y(s)\, ds$ has to go to zero (it will be interpreted as a Laplace transform); since $\frac{1}{2}\arctan(s) \rightarrow \frac{\pi}{4}$, this means $C = -\frac{\pi}{4}$, and so $\int Y(s)\, ds = \frac{1}{2}\left(\arctan(s) - \frac{\pi}{2}\right) + \frac{1}{2}\frac{s}{s^2+1}$.
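One can check the antiderivative, and the choice of constant, numerically: with the constant picked so that the expression vanishes as $s \rightarrow \infty$ (which works out to $-\frac{\pi}{4}$), a central difference recovers $\frac{1}{(s^2+1)^2}$. A Python sketch (the name `F` is mine):

```python
import math

def F(s):
    # Antiderivative of 1/(s^2+1)^2; the constant -pi/4 makes F -> 0 as s -> infinity
    return 0.5 * math.atan(s) + 0.5 * s / (s ** 2 + 1) - math.pi / 4

h = 1e-5
for s in (0.5, 1.0, 3.0):
    Fprime = (F(s + h) - F(s - h)) / (2 * h)  # central difference
    assert abs(Fprime - 1.0 / (s ** 2 + 1) ** 2) < 1e-8

assert abs(F(1e8)) < 1e-7  # F really does vanish at infinity
```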

So what is the inverse Laplace transform of $\int Y(s) ds$?

Clearly, $\frac{1}{2}\frac{s}{s^2+1}$ gets inverse transformed to $\frac{1}{2}\cos(t)$; since $Y(s)$ is the derivative of $\int Y(s)\, ds$ and $L(tf(t)) = -\frac{d}{ds}F(s)$, the inverse transform for this part of $Y(s)$ is $-\frac{t}{2}\cos(t)$.

But what about the other part? $\frac{d}{ds} \left(\arctan(s) - \frac{\pi}{2}\right) = \frac{1}{1+s^2}$, so $\frac{1}{1+s^2} = -L(tf(t))$, which implies that $tf(t) = -\sin(t)$, so $-tf(t) = \sin(t)$, and so the inverse Laplace transform for this part of $Y(s)$ is $\frac{1}{2} \sin(t)$, and the result follows.

Put another way: $L(\frac{\sin(t)}{t}) = -\arctan(s) + C$, but since we want $0$ when $s = \infty$, $C = \frac{\pi}{2}$, and so $L(\frac{\sin(t)}{t}) = \frac{\pi}{2}- \arctan(s) = \arctan(\frac{1}{s})$.
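This last identity can also be verified by brute-force numerical integration; a Python sketch (the function name is mine; the improper integral is truncated at $t = 40$, by which point the $e^{-st}$ factor has made the tail negligible for $s = 1$):

```python
import math

def laplace_sinc(s, T=40.0, n=200000):
    # Approximate the integral of e^{-st} sin(t)/t over [0, infinity)
    # by the trapezoid rule on [0, T]
    def f(t):
        return math.exp(-s * t) * (1.0 if t == 0.0 else math.sin(t) / t)
    h = T / n
    total = 0.5 * (f(0.0) + f(T))
    for i in range(1, n):
        total += f(i * h)
    return h * total

s = 1.0
assert math.isclose(laplace_sinc(s), math.atan(1.0 / s), abs_tol=1e-6)
```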

## October 25, 2013

### A Laplace Transform of a function of non-exponential order

Many differential equations textbooks (“First course” books) limit themselves to taking Laplace transforms of functions of exponential order. That is a reasonable thing to do. However, I’ll present an example of a function NOT of exponential order that has a valid (if not very useful) Laplace transform.

Consider the following function (with $n \in \{1, 2, 3,...\}$):

$g(t)= \begin{cases} 1,& \text{if } 0 \leq t \leq 1\\ 10^n, & \text{if } n \leq t \leq n+\frac{1}{100^n} \\ 0, & \text{otherwise} \end{cases}$

Now note the following: $g$ is unbounded on $[0, \infty)$, $\lim_{t \rightarrow \infty} g(t)$ does not exist, and
$\int^{\infty}_0 g(t)dt = 1 + \frac{1}{10} + \frac{1}{10^2} + \cdots = \frac{1}{1 - \frac{1}{10}} = \frac{10}{9}$

One can think of the graph of $g$ as a series of disjoint “rectangles”, each of width $\frac{1}{100^n}$ and height $10^n$. The rectangles get skinnier and taller as $n$ goes to infinity, and there is a LOT of zero height in between the rectangles.


Note: this example can easily be modified to provide an example of a function which is $L^2$ (square integrable) on $[0, \infty)$ yet unbounded there. Hat tip to Ariel, who caught the error.

It is easy to compute the Laplace transform of $g$:

$G(s) = \int^{\infty}_0 g(t)e^{-st} dt$. The transform exists for, say, $s \geq 0$ by a routine comparison test, since $|e^{-st}| \leq 1$ for that range of $s$, and the calculation is easy:

$G(s) = \int^{\infty}_0 g(t)e^{-st} dt = \frac{1}{s} (1-e^{-s}) + \frac{1}{s} \sum^{\infty}_{n=1} (\frac{10}{e^s})^n(1-e^{\frac{-s}{100^n}})$

Note: if one wants to, one can see that the given series representation converges for $s \geq 0$ by using the ratio test and L’Hôpital’s rule.
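As a sanity check, the series can be summed numerically: as $s \rightarrow 0^+$ it should approach $\int_0^{\infty} g(t)dt = \frac{10}{9}$, and it should shrink as $s$ grows. A Python sketch (the function name is mine; `expm1` computes $1 - e^{-s/100^n}$ without catastrophic cancellation):

```python
import math

def G(s, terms=100):
    # (1/s)(1 - e^{-s}) + (1/s) * sum over n >= 1 of (10/e^s)^n (1 - e^{-s/100^n}),
    # valid for s > 0
    first = -math.expm1(-s) / s
    tail = sum((10.0 * math.exp(-s)) ** n * (-math.expm1(-s / 100.0 ** n))
               for n in range(1, terms)) / s
    return first + tail

assert abs(G(1e-6) - 10.0 / 9.0) < 1e-4  # recovers the total area as s -> 0+
assert G(5.0) < G(0.5) < G(1e-6)         # the transform decreases in s
```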

## November 3, 2011

### Finding a Particular solution: the Convolution Method

Background for students
Remember that when one is trying to solve a non-homogeneous differential equation, say:
$y^{\prime \prime} +3y^{\prime} +2y = \cos(t)$, one finds the general solution to $y^{\prime \prime} +3y^{\prime} +2y = 0$ (which is called the homogeneous solution; in this case it is $c_1 e^{-2t} + c_2 e^{-t}$) and then finds some solution to $y^{\prime \prime} +3y^{\prime} +2y = \cos(t)$. This solution, called a particular solution, will not have an arbitrary constant. Hence that solution cannot meet an arbitrary initial condition.

But adding the homogeneous solution to the particular solution yields a general solution with arbitrary constants, which can be solved for to meet a given initial condition.

So how does one obtain a particular solution?

Students almost always learn the so-called “method of undetermined coefficients” first; this is used when the driving function is a sine, cosine, $e^{at}$, a polynomial, or some sum and product of such things. Basically, one assumes that the particular solution has a certain form, substitutes it into the differential equation, and determines the coefficients. For example, in our example one might try $y_p = A\cos(t) + B\sin(t)$ and then substitute into the differential equation to solve for $A$ and $B$. One could also try a complex form: try $y_p = Ae^{it}$, determine $A$, and use the real part of the solution.

A second method for finding a particular solution is to use variation of parameters. Here is how that goes: one obtains two linearly independent homogeneous solutions $y_1, y_2$ and then seeks a particular solution of the form $y_p = v_1y_1 + v_2y_2$ where $v_1 = -\int \frac{f(t)y_2}{W} dt$ and $v_2 = \int \frac{f(t)y_1}{W} dt$, $W$ being the determinant of the Wronskian matrix. This method can solve differential equations like $y^{\prime \prime} + y = \tan(t)$ and is sometimes easier to use when the driving function is messy.
But sometimes it can lead to messy, non-transparent solutions when “undetermined coefficients” is much easier; for example, try solving $y^{\prime \prime} + 4y = \cos(5t)$ with variation of parameters, then with undetermined coefficients. Though the answers are the same, one method yields a far “cleaner” answer.

There is a third way that gives a particular solution that meets a specific initial condition. Though this method can yield a not-so-easy-to-do-by-hand integral and can sometimes lead to what I might call an answer in obscured form, the answer is in the form of a definite integral that can be evaluated by numerical integration techniques (if one wants, say, the graph of a solution).

This method is the Convolution Method. Many texts introduce convolutions in the Laplace transform section but there is no need to wait until then.

What is a convolution?
We can define the convolution of two functions $f$ and $g$ to be:
$f*g = \int_0^t g(u)f(t-u)du$. Needless to say, $f$ and $g$ need to meet appropriate “integrability” conditions; this is usually not a problem in a differential equations course.

Example: if $f = e^t, g=\cos(t)$, then $f*g = \frac{1}{2}(e^t - \cos(t) + \sin(t))$. Notice that the dummy variable gets “integrated out” and the variable $t$ remains.
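A numerical check of this example (a Python sketch; the helper is mine), which also verifies the commutativity property mentioned below:

```python
import math

def convolve(f, g, t, n=20000):
    # (f*g)(t) = integral from 0 to t of g(u) f(t-u) du, via the trapezoid rule
    h = t / n
    total = 0.5 * (g(0.0) * f(t) + g(t) * f(0.0))
    for i in range(1, n):
        u = i * h
        total += g(u) * f(t - u)
    return h * total

t = 1.5
closed = 0.5 * (math.exp(t) - math.cos(t) + math.sin(t))
assert math.isclose(convolve(math.exp, math.cos, t), closed, abs_tol=1e-6)
# f*g = g*f, numerically
assert math.isclose(convolve(math.cos, math.exp, t), closed, abs_tol=1e-6)
```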

There are many properties of convolutions that I won’t get into here; one interesting one is that $f*g = g*f$; proving this is an interesting exercise in change of variable techniques in integration.

The Convolution Method
If $y(t)$ is a homogeneous solution to a second order linear differential equation that meets the initial conditions $y(0)=0, y^{\prime}(0) =1$, and $f$ is the forcing function, then $y_p = f*y$ is the particular solution that meets $y_p(0)=0, y_p^{\prime}(0) =0$.

How might we use this method and why is it true? We’ll answer the “how” question first.

Suppose we want to solve $y^{\prime \prime} + y = \tan(t)$. The homogeneous solution is $y_h = c_1 \cos(t) + c_2 \sin(t)$ and it is easy to see that we need $c_1 = 0, c_2 = 1$ to meet the $y_h(0)=0, y^{\prime}_h(0) =1$ condition. So a particular solution is $\sin(t)*\tan(t) = \tan(t)*\sin(t)= \int_0^t \tan(u)\sin(t-u)du =$ $\int_0^t \tan(u)(\sin(t)\cos(u)-\cos(t)\sin(u))du = \sin(t)\int_0^t \sin(u)du - \cos(t)\int_0^t \frac{\sin^2(u)}{\cos(u)}du =$ $\sin(t)(1-\cos(t)) -\cos(t)\ln|\sec(t) + \tan(t)| + \sin(t)\cos(t) =$ $\sin(t) -\cos(t)\ln|\sec(t)+\tan(t)|$

This particular solution meets $y_p(0)=0, y_p^{\prime}(0) = 0$.
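One can confirm this numerically: a central second difference of the particular solution should reproduce $\tan(t)$ when substituted into the left side, and both initial conditions should hold. A Python sketch (the name `yp` is mine; stay below $t = \pi/2$, where $\tan$ blows up):

```python
import math

def yp(t):
    # sin(t) - cos(t) ln|sec(t) + tan(t)|, the particular solution found above
    return math.sin(t) - math.cos(t) * math.log(abs(1.0 / math.cos(t) + math.tan(t)))

h = 1e-4
for t in (0.3, 0.8, 1.2):
    second_diff = (yp(t + h) - 2 * yp(t) + yp(t - h)) / h ** 2
    assert abs(second_diff + yp(t) - math.tan(t)) < 1e-5  # yp'' + yp = tan(t)

assert abs(yp(0.0)) < 1e-12                    # y_p(0) = 0
assert abs((yp(h) - yp(-h)) / (2 * h)) < 1e-6  # y_p'(0) = 0
```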

Why does this work?
This is where “differentiation under the integral sign” comes into play. So we write $f*y = \int_0^t f(u)y(t-u)du$.
Then $(f*y)^{\prime} =$?

Look at the convolution integral as $g(x,z) = \int_0^x f(u)y(z-u)du$. Now think of $x(t) = t, z(t) = t$. Then from calculus III: $\frac{d}{dt} g(x,z) = g_x \frac{dx}{dt} + g_z \frac{dz}{dt}$. Of course, $\frac{dx}{dt}=\frac{dz}{dt}=1$.
$g_x= f(x)y(z-x)$ by the Fundamental Theorem of Calculus, and $g_z = \int_0^x f(u) y^{\prime}(z-u) du$ by differentiation under the integral sign.

So we let $x = t, z = t$ and we see $\frac{d}{dt} (f*y) = f(t)y(0) + \int_0^t f(u) y^{\prime}(t-u) du$ which equals $\int_0^t f(u) y^{\prime}(t-u) du$ because $y(0) = 0$. Now by the same reasoning $\frac{d^2}{dt^2} (f*y) = f(t)y^{\prime}(0) + \int_0^t f(u) y^{\prime \prime}(t-u) du = f(t)+ \int_0^t f(u) y^{\prime \prime}(t-u) du$ because $y^{\prime}(0) = 1$.
Now substitute into the differential equation $y^{\prime \prime} + ay^{\prime} + by = f(t)$ and use the linear property of integrals to obtain $f(t) + \int_0^t f(u) (y^{\prime \prime}(t-u) + ay^{\prime}(t-u) + by(t-u))du =$ $f(t) + \int_0^t f(u) (0)du = f(t)$

It is easy to see that $(f*y)(0) = 0$. Now check that $\frac{d}{dt} (f*y)\big|_{t=0} = f(0)y(0) + \int_0^0 f(u) y^{\prime}(-u)\, du = 0$.

## October 31, 2011

### Differentiation Under the Integral Sign

Suppose we have $F(s) = \int_a^b f(s,t)dt$ and we’d like to know what $\frac{d}{ds} F$ is.
The answer is $\frac{d}{ds}F(s) = \int_a^b \frac{\partial}{\partial s} f(s,t)dt$.

This is an important result in applied mathematics; I’ll give some applications (there are many!) in our next post. Both examples are from a first course in differential equations.

First, I should give the conditions on $f(s,t)$ to make this result true: continuity of $f(s,t)$ and $\frac{\partial}{\partial s} f(s,t)$ on some rectangle in $(s,t)$ space which contains all of the points in question (including the interval of integration) is sufficient.

Why is the formula true? The proof isn’t hard at all and it makes use of the Mean Value Theorem and of some basic theorems concerning limits and integrals.

Some facts that we’ll use: the Mean Value Theorem, and the estimate that if $M = \max |f|$ on an interval $[a,b]$, then $|\int_a^b f(t)dt| \leq M |b-a|$.

Now recall from calculus: $\frac{d}{ds} F = \lim_{s_0 \rightarrow s} \frac{F(s_0)-F(s)}{s_0 - s} = \lim_{s_0 \rightarrow s} \frac{1}{s_0 -s} \int_a^b f(s_0,t)-f(s,t)\, dt = \lim_{s_0 \rightarrow s} \int_a^b \frac{f(s_0,t)-f(s,t)}{s_0 - s}\, dt$

We now employ one of the most common tricks of mathematics; we guess at the “right answer” and then show that the right answer is what we guessed.

We will examine the integrand (the function being integrated). Does $\frac{f(s_0,t)-f(s,t)}{s_0 - s}$ remind you of anything? Right; this is the fraction from the Mean Value Theorem; that is, there is some $s^*$ between $s$ and $s_0$ such that $\frac{f(s_0,t)-f(s,t)}{s_0 - s} = \frac{\partial}{\partial s} f(s^*,t)$

Because we are assuming the continuity of the partial derivative (hence its uniform continuity on the closed rectangle), for $s_0$ sufficiently close to $s$ we have $|\frac{f(s_0,t)-f(s,t)}{s_0 - s} - \frac{\partial}{\partial s} f(s,t)| < \epsilon$ for every $t$ in $[a,b]$.

This means that $| \int_a^b \frac{f(s_0,t)-f(s,t)}{s_0 - s} - \frac{\partial}{\partial s} f(s,t)\, dt | \leq \int_a^b |\frac{f(s_0,t)-f(s,t)}{s_0 - s} - \frac{\partial}{\partial s} f(s,t)|\, dt < \epsilon (b-a)$

Now realize that $\epsilon$ can be made as small as desired by letting $s_0$ get sufficiently close to $s$, so it follows by the $\epsilon-\delta$ definition of limit that:
$\lim_{s_0 \rightarrow s}\int_a^b \frac{f(s_0,t)-f(s,t)}{s_0 - s} - \frac{\partial}{\partial s} f(s,t)\, dt=0$ which implies that
$\lim_{s_0 \rightarrow s}\int_a^b \frac{f(s_0,t)-f(s,t)}{s_0 - s}dt -\int_a^b \frac{\partial}{\partial s} f(s,t) dt=0$
Therefore $\lim_{s_0 \rightarrow s} \frac{F(s_0)-F(s)}{s_0 - s} - \int_a^b \frac{\partial}{\partial s} f(s,t) dt=0$
So the result follows.
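Here is a tiny numerical illustration of the result just proved, with $f(s,t) = e^{-st}$ on $[0,1]$: differentiating $F(s) = \int_0^1 e^{-st}dt$ and integrating $\frac{\partial}{\partial s}e^{-st} = -te^{-st}$ give the same number. (A Python sketch; the names are mine.)

```python
import math

def F(s):
    # F(s) = integral of e^{-st} over [0, 1] = (1 - e^{-s})/s, in closed form
    return -math.expm1(-s) / s

def dF_under_the_sign(s, n=10000):
    # integral over [0, 1] of the s-partial of e^{-st}, i.e. -t e^{-st},
    # via the trapezoid rule
    h = 1.0 / n
    def f(t):
        return -t * math.exp(-s * t)
    total = 0.5 * (f(0.0) + f(1.0))
    for i in range(1, n):
        total += f(i * h)
    return h * total

s, h = 1.3, 1e-5
numeric_derivative = (F(s + h) - F(s - h)) / (2 * h)  # central difference
assert abs(numeric_derivative - dF_under_the_sign(s)) < 1e-7
```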

Next post: we’ll give a couple of applications of this.

## January 7, 2011

### The Dirac Delta Function in an Elementary Differential Equations Course

The Dirac Delta Function in Differential Equations

The delta “function” is often introduced in differential equations courses during the section on Laplace transforms. Of course the delta “function” isn’t a function at all, but rather what is known as a “distribution” (more on this later).

A typical introduction is as follows: if one is working in classical mechanics and one applies a force $F(t)$ to a constant mass $m$, then one can define the impulse $I$ of $F$ over an interval $[a,b]$ by $I=\int_{a}^{b}F(t)dt=m(v(b)-v(a))$ where $v$ is the velocity. So we can do a translation to set $a=0$, consider a unit impulse, and vary $F(t)$ according to where $b$ is; that is, define
$\delta ^{\varepsilon}(t)= \begin{cases} \frac{1}{\varepsilon}, & 0\leq t\leq \varepsilon \\ 0, & \text{elsewhere} \end{cases}$

Then $F(t)=\delta ^{\varepsilon }(t)$ is the force function that produces unit impulse for a given $\varepsilon >0.$

Then we wave our hands and say $\delta (t)=\lim _{\varepsilon \rightarrow 0}\delta ^{\varepsilon }(t)$ (this is a great reason to introduce the concept of the limit of functions in a later course) and then argue that for all functions that are continuous over an interval containing 0,
$\int_{0}^{\infty }\delta (t)f(t)dt=f(0)$.

The (hand waving) argument at this stage goes something like: “the mean value theorem for integrals says that there is a $c_{\varepsilon }$ between $0$ and $\varepsilon$ such that $\int_{0}^{\varepsilon }\delta^{\varepsilon }(t)f(t)dt=\frac{1}{\varepsilon}f(c_{\varepsilon})(\varepsilon -0)=f(c_{\varepsilon })$. Therefore as $\varepsilon\rightarrow 0$, $\int_{0}^{\varepsilon }\delta^{\varepsilon}(t)f(t)dt=f(c_{\varepsilon })\rightarrow f(0)$ by continuity.” Therefore we can define the Laplace transform $L(\delta (t))=e^{-s\cdot 0}=1$.

Illustrating what the delta ”function” does.

I came across this example by accident; I was holding a review session for students and asked for them to give me a problem to solve.

They chose $y^{\prime \prime }+ay^{\prime }+by=\delta$ (I can’t remember what $a$ and $b$ were, but they aren’t important here, as we will see) with initial conditions $y(0)=0, y^{\prime }(0)=-1$.

So using the Laplace transform, we obtained:

$(s^{2}+as+b)Y-sy(0)-y^{\prime }(0)-ay(0)=1$

But with $y(0)=0,y^{\prime }(0)=-1$ this reduces to $(s^{2}+as+b)Y+1=1\rightarrow Y=0$

In other words, we have the “same solution” as if we had $y^{\prime\prime }+ay^{\prime }+by=0$ with $y(0)=0, y^{\prime }(0)=0$.

So that might be a way to talk about the delta “function”: it is exactly the “impulse” one needs to “cancel out” an initial velocity of $-1$ or, equivalently, to give an initial velocity of $1$, and to do so instantly.
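This interpretation can be tested by replacing $\delta$ with the finite-width $\delta^{\varepsilon}$ and solving the equation numerically; the solution should hover near zero once $\varepsilon$ is small. A rough Euler-method sketch in Python (the values $a = 2, b = 2$ are my stand-ins, since the actual ones from the review session don't matter):

```python
def simulate(eps, a=2.0, b=2.0, T=1.0, dt=1e-5):
    # Solve y'' + a y' + b y = delta_eps(t), y(0) = 0, y'(0) = -1, by explicit Euler
    y, v, t = 0.0, -1.0, 0.0
    while t < T:
        force = 1.0 / eps if t <= eps else 0.0
        y, v = y + dt * v, v + dt * (force - a * v - b * y)
        t += dt
    return y  # y(T)

# The finite-width impulse nearly cancels the initial velocity of -1,
# so y stays close to the zero function; the smaller eps, the closer.
assert abs(simulate(1e-2)) < abs(simulate(1e-1))
assert abs(simulate(1e-3)) < 1e-2
```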

Another approach to the delta function

Though it is true that $\int_{-\infty }^{\infty }\delta^{\varepsilon }(t)dt=1$ for all $\varepsilon$ and $\int_{-\infty}^{\infty }\delta (t)dt=1$ by design, note that $\delta ^{\varepsilon }(t)$ fails to be continuous at $0$ and at $\varepsilon$.

So, can we obtain the delta ”function” as a limit of other functions that are everywhere continuous and differentiable?

In an attempt to find such a family of functions, it is a fun exercise to look at a limit of normal density functions with mean zero:

$f_{\sigma }(t)=\frac{1}{\sigma \sqrt{2\pi }}\exp (-\frac{1}{2\sigma ^{2}}t^{2})$. Clearly for all
$\sigma >0,\int_{-\infty }^{\infty }f_{\sigma}(t)dt=1$ and $\int_{0}^{\infty }f_{\sigma }(t)dt=\frac{1}{2}$.

Here is the graph of some of these functions: we use $\sigma = .5$, $\sigma = .25$ and $\sigma = .1$ respectively.

Calculating the Laplace transform

$L(\frac{1}{\sigma \sqrt{2\pi }}\exp (-\frac{1}{2\sigma ^{2}}t^{2}))=$ $\frac{1}{\sigma \sqrt{2\pi }}\int_{0}^{\infty }\exp (-\frac{1}{2\sigma^{2}}t^{2})\exp (-st)dt=$

Combine the exponentials, complete the square, and rearrange to obtain:

$\frac{1}{\sigma \sqrt{2\pi }}\int_{0}^{\infty }\exp (-\frac{1}{2\sigma ^{2}}(t+\sigma ^{2}s)^{2})\exp (\frac{s^{2}\sigma^{2}}{2})dt=\exp (\frac{s^{2}\sigma ^{2}}{2})[\frac{1}{\sigma \sqrt{2\pi }}\int_{0}^{\infty }\exp (-\frac{1}{2\sigma ^{2}}(t+\sigma^{2}s)^{2})dt]$

Now do the usual transformation to the standard normal random variable via $z=\dfrac{t+\sigma ^{2}s}{\sigma }$

And we obtain:

$L(f_{\sigma }(t))=\exp (\frac{s^{2}\sigma ^{2}}{2})P(Z>\sigma s)$ for all $\sigma >0$. Note: assume $s>0$; here $P(Z>\sigma s)$ denotes the probability that a standard normal random variable $Z$ exceeds $\sigma s$.

Now if we take a limit as $\sigma \rightarrow 0$ we get $\frac{1}{2}$ on the right hand side.

Hence, one way to define $\delta$ is as $2\lim_{\sigma \rightarrow 0}f_{\sigma }(t)$. This means that while $\lim_{\sigma \rightarrow 0}\int_{-\infty }^{\infty }2f_{\sigma }(t)dt = 2$ is off by a factor of 2, $\lim_{\sigma \rightarrow 0}\int_{0}^{\infty }2f_{\sigma }(t)dt=1$ as desired.
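The closed form derived above can be checked against direct numerical integration; a Python sketch (function names mine; recall $P(Z > x) = \frac{1}{2}\mathrm{erfc}(x/\sqrt{2})$):

```python
import math

def laplace_gaussian_numeric(sigma, s, n=100000):
    # Transform of f_sigma: trapezoid rule, truncated at 10 standard deviations,
    # past which the density is negligible
    T = 10.0 * sigma
    h = T / n
    c = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    def f(t):
        return c * math.exp(-t * t / (2.0 * sigma ** 2)) * math.exp(-s * t)
    total = 0.5 * (f(0.0) + f(T))
    for i in range(1, n):
        total += f(i * h)
    return h * total

def laplace_gaussian_closed(sigma, s):
    # exp(s^2 sigma^2 / 2) * P(Z > sigma s)
    return math.exp((s * sigma) ** 2 / 2.0) * 0.5 * math.erfc(sigma * s / math.sqrt(2.0))

for sigma in (0.5, 0.25, 0.1):
    assert math.isclose(laplace_gaussian_numeric(sigma, 1.0),
                        laplace_gaussian_closed(sigma, 1.0), abs_tol=1e-6)

assert abs(laplace_gaussian_closed(0.001, 1.0) - 0.5) < 1e-3  # the sigma -> 0 limit
```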

Since we now have derivatives of the functions to examine, why don’t we?

$\frac{d}{dt}2f_{\sigma }(t)=-\frac{2t}{\sigma ^{3}\sqrt{2\pi }}\exp (-\frac{1}{2\sigma ^{2}}t^{2})$, which is zero at $t=0$ for all $\sigma >0$. But the behavior of the derivative is interesting: the derivative is at its minimum at $t=\sigma$ and at its maximum at $t=-\sigma$ (as we tell our probability students: the standard deviation is the distance from the origin to the inflection points), and as $\sigma \rightarrow 0$ the inflection points get closer together and the second derivative at the origin approaches $-\infty$, which can be thought of as an instant drop from a positive velocity at $t=0$.

Here are the graphs of the derivatives of the density functions that were plotted above; note how the part of the graph through the origin becomes more vertical as the standard deviation approaches zero.