College Math Teaching

April 12, 2020

A tidbit with respect to Laplace transforms and sin(x)/x

Filed under: complex variables, integrals, Laplace transform, media — collegemathteaching @ 9:01 pm

I’ve discovered the channel “blackpenredpen” and it is delightful.
It is a nice escape into mathematics that, while far from research level, is “fun” and beyond mere fluff.

And that got me to thinking about \int^{\infty}_0 \frac{sin(x)}{x} dx . Yes, this can be done by residues.

But I’ll look at this with Laplace Transforms.

We know that \mathcal{L}(sin(t)) = \int^{\infty}_0 e^{-st}sin(t)dt = \frac{1}{s^2+1}
But note that the antiderivative of e^{-st} with respect to s is -\frac{1}{t}e^{-st} . That might not seem like much help, but then notice \int^{\infty}_0 e^{-st} ds = \frac{-1}{t}e^{-st}|^{\infty}_0 = \frac{1}{t} (assuming t > 0 ).

So why not: \int^{\infty}_0 \int^{\infty}_0 e^{-st}sin(t)dt ds = \int^{\infty}_0 \frac{1}{s^2+1} ds =arctan(s)|^{\infty}_0 = \frac{\pi}{2}
Now since the left hand side is just a double integral over the first quadrant (an infinite rectangle), the order of integration can be interchanged (the interchange does need some justification here, since the integrand is not absolutely integrable over the quadrant, but it can be made rigorous):
\int^{\infty}_0 \int^{\infty}_0 e^{-st}sin(t)dt ds = \int^{\infty}_0 \int^{\infty}_0 e^{-st}sin(t)ds dt  = \int^{\infty}_0 sin(t) \int^{\infty}_0 e^{-st}ds dt = \int^{\infty}_0 sin(t)\frac{1}{t} dt

and that is equal to \frac{\pi}{2} .
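A quick numerical sanity check in plain Python (the cutoff of 200 and the grid size are my own arbitrary choices; the tail of the integral beyond the cutoff is on the order of 1/200):

```python
import math

def simpson(f, a, b, n):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

def sinc(x):
    # sin(x)/x, extended by its limit at 0
    return 1.0 if x == 0.0 else math.sin(x) / x

# Si(200) differs from pi/2 by roughly cos(200)/200, i.e. well under 0.01.
approx = simpson(sinc, 0.0, 200.0, 20000)
print(approx)  # close to pi/2 ≈ 1.5708
```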

Note: \int_0^x\frac{sin(t)}{t} dt is sometimes called the Si(x) function.


April 6, 2020

How I am cutting corners in class

Filed under: complex variables, differential equations, Laplace transform — collegemathteaching @ 12:06 am

Ok, which is more difficult?

1. Solve x'' + 6x' + 13x = sin(t), x(0) = x'(0) = 0 using Laplace transforms or:

2. Given Y = \frac{1}{s^4 + 6s^3 +10s^2 + 6s + 9} find the inverse Laplace transform.

Clearly, 2 is harder and in texts I’ve used, we had to do those prior to doing 1. But, in a way, you have to do 2 in order to do 1:

(s^2 + 6s + 13)X(s) = \frac{1}{s^2+1}  \rightarrow X(s) = \frac{1}{(s^2 + 6s +13)(s^2 + 1)}

But this is already factored and students can be taught to “attempt to factor and if you can’t, complete the square” and this leads immediately to:

X(s) = \frac{1}{(s^2 + 6s + 9 +4)(s^2+1)} = \frac{As + B}{(s+3)^2+4} + \frac{Cs + D}{s^2 +1} which can be resolved by partial fractions.

In our “one less week plus online” semester, I will do much more of 1 than of 2.

Of course, there is still some work to do; we still have to solve (As+B)(s^2+1) +(Cs+D)((s+3)^2 +4) =1

I will teach the “eliminate the term” method by using complex numbers:

Let s = i to get

(Ci+D)(12+6i) = 12D-6C +(12C + 6D)i = 1 \rightarrow 12C + 6D = 0 \rightarrow D = -2C \rightarrow 12D - 6C = -30C = 1 \rightarrow C = -\frac{1}{30}, D = \frac{1}{15}
Let s = -3+2i

\rightarrow s^2+1 = 6-12i \rightarrow (-3A+B +2iA)(6-12i)

= -18A+6B +24A +36iA+12Ai-12Bi = 6A+6B +(48A-12B)i = 1

\rightarrow B=4A, 6A+6B= 1\rightarrow 30A=1

So we have A = \frac{1}{30}, B = \frac{2}{15}, C = -\frac{1}{30}, D = \frac{1}{15}
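Since the left side of (As+B)(s^2+1) +(Cs+D)((s+3)^2 +4) is a cubic in s , agreeing with 1 at five points forces it to be identically 1; here is a quick exact-arithmetic check (Python's fractions module, my own sketch):

```python
from fractions import Fraction as F

A, B, C, D = F(1, 30), F(2, 15), F(-1, 30), F(1, 15)

def lhs(s):
    # the numerator identity from the partial fraction setup
    return (A*s + B) * (s*s + 1) + (C*s + D) * ((s + 3)**2 + 4)

# A cubic agreeing with the constant 1 at five points is identically 1.
for s in range(5):
    assert lhs(F(s)) == 1
print("coefficients check out")
```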

The particular solution part pulls back to -\frac{1}{30} cos(t) + \frac{1}{15} sin(t)

There is still some work to do for the other part:

To get the s+3 shift we have to add and subtract 3; this leads to:

\frac{A(s+3) + B-3A}{(s+3)^2+4} =\frac{A(s+3) }{(s+3)^2+4} + \frac{ B-3A}{(s+3)^2+4}

\frac{1}{30}\frac{(s+3) }{(s+3)^2+4} + \frac{1}{30}\frac{1}{2}\frac{2}{(s+3)^2+4} (adjusting the second term for the 4 = 2^2 ).
And this part pulls back to \frac{1}{30}e^{-3t}cos(2t) +\frac{1}{60} e^{-3t}sin(2t)
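If you want to double-check the whole solution x(t) = -\frac{1}{30}cos(t) + \frac{1}{15}sin(t) + \frac{1}{30}e^{-3t}cos(2t) +\frac{1}{60} e^{-3t}sin(2t) without redoing the algebra, a finite-difference test works (Python sketch; the step size and sample points are arbitrary choices of mine):

```python
import math

def x(t):
    # the claimed solution of x'' + 6x' + 13x = sin(t), x(0) = x'(0) = 0
    return (-math.cos(t)/30 + math.sin(t)/15
            + math.exp(-3*t) * (math.cos(2*t)/30 + math.sin(2*t)/60))

h = 1e-5
for t in [0.3, 1.0, 2.5]:
    xp  = (x(t + h) - x(t - h)) / (2*h)            # central difference for x'
    xpp = (x(t + h) - 2*x(t) + x(t - h)) / h**2    # central difference for x''
    assert abs(xpp + 6*xp + 13*x(t) - math.sin(t)) < 1e-4
assert abs(x(0.0)) < 1e-12
print("solution satisfies the ODE")
```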

Yeah, I know; if you are reading this, you already know this stuff, but I think using i helps speed things up a bit.

And yes, you could have just used the convolution integral and been done with it, though one would have had to use
\frac{1}{2}e^{-3t}sin(2t) * sin(t) =\int^t_0\frac{1}{2}e^{-3u}sin(2u)sin(t-u)du . (you remembered the 1/2, didn’t you?)

December 21, 2018

Over-scheduling of senior faculty and lower division courses: how important is course prep?

It seems as if the time faculty is expected to spend on administrative tasks is growing exponentially. In our case: we’ve had some administrative upheaval with the new people coming in to “clean things up”, thereby launching new task forces, creating more committees, etc. And this is a time suck; often more senior faculty more or less go through the motions when it comes to course preparation for the elementary courses (say: the calculus sequence, or elementary differential equations).

And so:

1. Does this harm the course quality and if so..
2. Is there any effect on the students?

I should first explain why I am thinking about this; I’ll give some specific examples from my department.

1. Some time ago, a faculty member gave a seminar in which he gave an “elementary” proof of why \int e^{x^2} dx is non-elementary. Ok, this proof took 40-50 minutes to get through. But at the end, the professor giving the seminar exclaimed: “isn’t this lovely?” at which another senior member (one who didn’t have a Ph. D. but had been around since the 1960s) asked “why are you happy that yet again, we haven’t had success?” A proof had just been given that \int e^{x^2} dx cannot be expressed in terms of the usual functions by the standard field operations; the whole point had eluded him. And remember, this person was in our calculus teaching line up.

2. Another time, in a less formal setting, I mentioned that I had told my class, briefly, that one could compute an improper integral (over the real line) of an unbounded function, and that such a function could have a Laplace transform. A junior faculty member who had just taught differential equations tried to inform me that only functions of exponential order could have a Laplace transform; I replied that, while many texts restrict Laplace transforms to such functions, that is not mathematically necessary (though it is a reasonable restriction for an applied first course). (Briefly: imagine a function whose graph consists of a spike of height e^{n^2} at each integer point over an interval of width \frac{1}{2^{2n} e^{2n^2}} and is zero elsewhere.)

3. In still another case, I was talking about errors in answer keys and how, when I taught courses that I wasn’t qualified to teach (e. g. an actuarial science course), it was tough for me to confidently determine when the answer key was wrong. A senior, still active research faculty member said that he had found errors in an answer key: in some cases, the interval of absolute convergence for some power series was given as a closed interval.

I was a bit taken aback; I gently reminded him that \sum \frac{x^k}{k^2} was such a series: its interval of absolute convergence is the closed interval [-1,1] .

I know what he was confused by; there is a theorem that says that if \sum a_k x^k converges (either conditionally or absolutely) for some x=x_1 , then the series converges absolutely for all x_0 where |x_0| < |x_1| . The proof isn’t hard: note that convergence of \sum a_k x_1^k means that eventually |a_k x_1^k| < M for some positive M . Then compare the “tail end” of the series: use |\frac{x_0}{x_1}| < r < 1 and then |a_k (x_0)^k| = |a_k x_1^k (\frac{x_0}{x_1})^k| < r^k M and compare to a convergent geometric series. Mind you, he was teaching series at the time..and yes, is a senior, research active faculty member with years and years of experience; he mentored me so many years ago.

4. Also…one time, a sharp young faculty member asked around: “are there any real functions that are differentiable at exactly one point?” (yes: try f(x) = x^2 if x is rational, x^3 if x is irrational; this is differentiable only at x = 0 .)

5. And yes, one time I had forgotten that a function could be differentiable but not be C^1 (try: f(x) = x^2 sin (\frac{1}{x}) for x \neq 0 , f(0) = 0 , at x = 0 ).
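Example 5 is easy to see numerically as well; a small Python sketch (the sample points are arbitrary choices of mine):

```python
import math

def f(x):
    # differentiable everywhere, including 0, but f' is not continuous at 0
    return x*x*math.sin(1/x) if x != 0 else 0.0

def fprime(x):
    # for x != 0: f'(x) = 2x sin(1/x) - cos(1/x); the cos(1/x) term
    # keeps oscillating between -1 and 1 as x -> 0
    return 2*x*math.sin(1/x) - math.cos(1/x)

# Difference quotients at 0 are bounded by |h|, so f'(0) = 0 exists...
for h in [1e-2, 1e-4, 1e-6]:
    assert abs((f(h) - f(0)) / h) <= abs(h) + 1e-12

# ...but f' has no limit at 0: it takes values near -1 and +1 arbitrarily close to 0.
vals = [fprime(1/(k*math.pi)) for k in range(100, 110)]
print(min(vals), max(vals))  # oscillates near -1 and 1
```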

What is the point of all of this? Even smart, active mathematicians forget stuff if they haven’t reviewed it in a while…even elementary stuff. We need time to review our courses! But…does this actually affect the students? I am almost sure that at non-elite universities such as ours, the answer is “probably not in any way that can be measured.”

Think about it. Imagine the following statements in a differential equations course:

1. “Laplace transforms exist only for functions of exponential order.” (false)
2. “We will restrict our study of Laplace transforms to functions of exponential order.”
3. “We will restrict our study of Laplace transforms to functions of exponential order but this is not mathematically necessary.”

Would students really recognize the difference between these three statements?

Yes, making these statements with confidence requires quite different amounts of preparation time. And our deans and administrators might not see any value in allowing for such preparation time, as it doesn’t show up in measures of performance.

November 25, 2013

A fact about Laplace Transforms that no one cares about….

Filed under: differential equations, Laplace transform — Tags: — collegemathteaching @ 10:33 pm

Consider: sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + ...

Now take the Laplace transform of the right hand side: \frac{1}{s^2} - \frac{3!}{s^4 3!} + \frac{5!}{s^6 5!} - ... = \frac{1}{s^2} (1 -\frac{1}{s^2} + \frac{1}{s^4} - ...)

This is equal to: \frac{1}{s^2} (\frac{1}{1 + \frac{1}{s^2}}) for s > 1 which is, of course, \frac{1}{1 + s^2} which is exactly what you would expect.
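A numerical spot check at, say, s = 2 (Python; the number of terms is an arbitrary choice):

```python
s = 2.0
target = 1 / (s*s + 1)   # the Laplace transform of sin(t)

# term-by-term transform of the sine series: sum of (-1)^k / s^(2k+2)
partial = 0.0
for k in range(30):
    partial += (-1)**k / s**(2*k + 2)

print(partial, target)   # both ≈ 0.2
```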

This technique works for e^{x} but gives nonsense for e^{x^2} .

Update: note that we can get a power series for e^{x^2} = 1 + x^2 + \frac{x^4}{2!} + \frac{x^6}{3!} + .... which, on a term by term basis, transforms to \frac{1}{s} + \frac{2!}{s^3} + \frac{4!}{s^5 2!} + \frac{6!}{s^7 3!} + ... = \frac{1}{s} \sum_{k=0} (\frac{1}{s^2})^k\frac{(2k)!}{k!} which only converges at s = \infty .

November 12, 2013

Why I teach multiple methods for the inverse Laplace Transform.

I’ll demonstrate with a couple of examples:

y''+4y = sin(2t), y(0) = y'(0) = 0

If we use the Laplace transform, we obtain: (s^2+4)Y = \frac{2}{s^2+4} which leads to Y = \frac{2}{(s^2+4)^2} . Now we’ve covered how to do this without convolutions. But the convolution integral is much easier: write Y = \frac{2}{(s^2+4)^2} = \frac{1}{2} \frac{2}{s^2+4}\frac{2}{s^2+4} which means that y = \frac{1}{2}(sin(2t)*sin(2t)) = \frac{1}{2}\int^t_0 sin(2u)sin(2t-2u)du = -\frac{1}{4}tcos(2t) + \frac{1}{8}sin(2t) .

Note: if the integral went too fast for you and you don’t want to use a calculator, use sin(2t-2u) = sin(2t)cos(2u) - cos(2t)sin(2u) and the integral becomes \frac{1}{2}\int^t_0 sin(2t)cos(2u)sin(2u) -cos(2t)sin^2(2u)du =

\frac{1}{2} (sin(2t)\frac{1}{4}sin^2(2u)|^t_0 - cos(2t)(\frac{1}{2})( u - \frac{1}{4}sin(4u))|^t_0) =

\frac{1}{8}sin^3(2t) - \frac{1}{4}tcos(2t) +\frac{1}{16}sin(4t)cos(2t) =

\frac{1}{8}(sin^3(2t) +sin(2t)cos^2(2t))-\frac{1}{4}tcos(2t)

= \frac{1}{8}sin(2t)(sin^2(2t) + cos^2(2t))-\frac{1}{4}tcos(2t) = -\frac{1}{4}tcos(2t) + \frac{1}{8}sin(2t)
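A finite-difference check of the result (Python sketch; step size and sample points are arbitrary choices of mine):

```python
import math

def y(t):
    # the claimed solution of y'' + 4y = sin(2t), y(0) = y'(0) = 0
    return -t*math.cos(2*t)/4 + math.sin(2*t)/8

h = 1e-5
for t in [0.5, 1.2, 3.0]:
    ypp = (y(t + h) - 2*y(t) + y(t - h)) / h**2   # central difference for y''
    assert abs(ypp + 4*y(t) - math.sin(2*t)) < 1e-4
assert abs(y(0.0)) < 1e-12
print("resonance solution verified")
```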

Now if we had instead: y''+4y = sin(t), y(0)=0, y'(0) = 0

The Laplace transform of the equation becomes (s^2+4)Y = \frac{1}{s^2+1} and hence Y = \frac{1}{(s^2+1)(s^2+4)} . One could use the convolution method but partial fractions works easily: one can use the calculator (“algebra” plus “expand”) or:

\frac{A+Bs}{s^2+4} + \frac{C + Ds}{s^2+1} =\frac{1}{(s^2+4)(s^2+1)} . Get a common denominator and match numerators:

(A+Bs)(s^2+1) + (C+Ds)(s^2+4)  = 1 . One can use several methods to resolve this: here we will use s = i to see (C + Di)(3) = 1 which means that D = 0 and C = \frac{1}{3} . Now use s = 2i to obtain (A + 2iB)(-3) = 1 which means that B = 0, A = -\frac{1}{3} so Y = \frac{1}{3} (\frac{1}{s^2+1} - \frac{1}{s^2+4}) so y = \frac{1}{3} (sin(t) - \frac{1}{2} sin(2t)) = \frac{1}{3}sin(t) -\frac{1}{6}sin(2t)
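The partial fraction identity can be verified exactly (Python's fractions module, my own sketch; the sample points are arbitrary, though the identity holds for all s ):

```python
from fractions import Fraction as F

# check 1/((s^2+1)(s^2+4)) = (1/3)(1/(s^2+1) - 1/(s^2+4)) exactly
for s in range(1, 6):
    s2 = F(s*s)
    left  = 1 / ((s2 + 1) * (s2 + 4))
    right = F(1, 3) * (1/(s2 + 1) - 1/(s2 + 4))
    assert left == right
print("partial fractions verified")
```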

So, sometimes the convolution leads us to the answer quicker than other techniques and sometimes other techniques are easier.

Of course, the convolution method has utility beyond the Laplace transform setting.

November 6, 2013

Inverse Laplace transform example: 1/(s^2 +b^2)^2

Filed under: basic algebra, differential equations, Laplace transform — Tags: — collegemathteaching @ 11:33 pm

I talked about one way to solve y''+y = sin(t), y(0) =y'(0) = 0 by using Laplace transforms WITHOUT using convolutions; I happen to think that using convolutions is the easiest way here.

Here is another non-convolution method: Take the Laplace transform of both sides to get Y(s) = \frac{1}{(s^2+1)^2} .

Now most tables have L(tsin(at)) = \frac{2as}{(s^2 + a^2)^2}, L(tcos(at)) = \frac{s^2-a^2}{(s^2+a^2)^2}

What we have is not in one of these forms. BUT, note the following algebra trick:

\frac{1}{(s^2+b^2)^2} = (A)(\frac{s^2-b^2}{(s^2 + b^2)^2} - \frac{s^2+b^2}{(s^2+b^2)^2}) when A = -\frac{1}{2b^2} .

Now \frac{s^2-b^2}{(s^2 + b^2)^2} = L(tcos(bt)) and \frac{s^2+b^2}{(s^2+b^2)^2} = \frac{1}{(s^2+b^2)} = L(\frac{1}{b}sin(bt)) and one can proceed from there.
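Both table entries can be spot-checked by numerical integration (Python sketch; the values a = 3 , s = 2 , the cutoff 40 and the grid size are arbitrary choices of mine):

```python
import math

def simpson(f, a, b, n):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

a_, s_ = 3.0, 2.0   # e^{-2*40} makes the tail beyond 40 negligible
Lsin = simpson(lambda t: t*math.sin(a_*t)*math.exp(-s_*t), 0.0, 40.0, 20000)
Lcos = simpson(lambda t: t*math.cos(a_*t)*math.exp(-s_*t), 0.0, 40.0, 20000)
print(Lsin, 2*a_*s_ / (s_**2 + a_**2)**2)          # both ≈ 12/169
print(Lcos, (s_**2 - a_**2) / (s_**2 + a_**2)**2)  # both ≈ -5/169
```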

A weird Laplace Transform (a resonance equation)

Filed under: applied mathematics, calculus, differential equations, Laplace transform — collegemathteaching @ 12:01 am

Ok, we have y'' + y = sin(t), y(0) =0, y'(0) = 0 . Now we can solve this by, say, undetermined coefficients and obtain y = \frac{1}{2}sin(t) -\frac{1}{2}tcos(t)

But what happens when we try Laplace Transforms? It is easy to see that the Laplace transform of the equation yields (s^2+1)Y(s)=\frac{1}{s^2+1} which yields Y(s) =\frac{1}{(s^2+1)^2}

So, how do we take the inverse Laplace transform of \frac{1}{(s^2+1)^2}?

Here is one way: we recognize L(tf(t)) = -\frac{d}{ds}F(s) where L(f(t)) = F(s) .

So, we might try integrating: \int \frac{1}{(s^2+1)^2} ds .

(no cheating with a calculator! 🙂 )

In calculus II, we do: s = tan(\theta), ds = sec^2(\theta) d\theta .

Then \int \frac{1}{(s^2+1)^2} ds is transformed into \int \frac{sec^2(\theta)}{sec^4 \theta} d\theta = \int cos^2(\theta) d \theta = \int \frac{1}{2} + \frac{1}{2}cos(2 \theta) d \theta = \frac{1}{2} \theta + \frac{1}{4}sin(2 \theta) (plus a constant, of course).

We now use sin(2\theta) = 2sin(\theta)cos(\theta) to obtain \frac{1}{2} \theta + \frac{1}{4}sin(2 \theta) = \frac{1}{2} \theta + \frac{1}{2} sin(\theta)cos(\theta) + C .

Fair enough. But now we have to convert back to s . We use tan(\theta) = s to obtain cos(\theta) = \frac{1}{\sqrt{s^2+1}}, sin(\theta) = \frac{s}{\sqrt{s^2+1}}

So \frac{1}{2} \theta + \frac{1}{2} sin(\theta)cos(\theta) converts to \frac{1}{2}arctan(s) +\frac{1}{2}\frac{s}{s^2+1} + C = \int Y(s) ds . Now we use the fact that as s goes to infinity, \int Y(s) ds has to go to zero; this means C = -\frac{\pi}{4} , so \int Y(s) ds = \frac{1}{2}(arctan(s) - \frac{\pi}{2}) +\frac{1}{2}\frac{s}{s^2+1} .

So what is the inverse Laplace transform of \int Y(s) ds ?

Clearly, \frac{1}{2}\frac{s}{s^2+1} gets inverse transformed to \frac{1}{2}cos(t) , so the inverse transform for this part of Y(s) is -\frac{t}{2}cos(t).

But what about the other part? Write \frac{1}{2}(arctan(s) - \frac{\pi}{2}) = L(f(t)) . Then \frac{d}{ds}\frac{1}{2} (arctan(s) - \frac{\pi}{2}) = \frac{1}{2}\frac{1}{1+s^2} = -L(tf(t)) , which implies that tf(t) = -\frac{1}{2}sin(t) ; since this part of Y(s) equals -L(tf(t)) , its inverse Laplace transform is -tf(t) = \frac{1}{2} sin(t) and the result follows.

Put another way: L(\frac{sin(t)}{t}) =- arctan(s) + C but since we want 0 when s = \infty, C = \frac{\pi}{2} and so L(\frac{sin(t)}{t}) = \frac{\pi}{2}- arctan(s) = arctan(\frac{1}{s}) .
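A numerical check of L(\frac{sin(t)}{t}) = arctan(\frac{1}{s}) at s = 1 , where the transform should equal \frac{\pi}{4} (Python sketch; the cutoff and grid size are arbitrary choices of mine):

```python
import math

def simpson(f, a, b, n):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

def integrand(t, s=1.0):
    # e^{-st} sin(t)/t, extended by its limit 1 at t = 0
    core = 1.0 if t == 0.0 else math.sin(t) / t
    return core * math.exp(-s * t)

val = simpson(integrand, 0.0, 40.0, 20000)  # tail beyond 40 is ~e^{-40}
print(val, math.atan(1.0))  # both ≈ pi/4
```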

October 25, 2013

A Laplace Transform of a function of non-exponential order

Many differential equations textbooks (“First course” books) limit themselves to taking Laplace transforms of functions of exponential order. That is a reasonable thing to do. However I’ll present an example of a function NOT of exponential order that has a valid (if not very useful) Laplace transform.

Consider the following function, where n \in \{1, 2, 3,...\} :

g(t)= \begin{cases}      1,& \text{if } 0 \leq t \leq 1\\      10^n,              & \text{if } n \leq t \leq n+\frac{1}{100^n} \\  0,  & \text{otherwise}  \end{cases}

Now note the following: g is unbounded on [0, \infty) , lim_{t \rightarrow \infty} g(t) does not exist and
\int^{\infty}_0 g(t)dt = 1 + \frac{1}{10} + \frac{1}{10^2} + .... = \frac{1}{1 - \frac{1}{10}} = \frac{10}{9}
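The area computation can be confirmed by summing the box areas in exact arithmetic (Python sketch; the number of boxes kept is an arbitrary choice of mine):

```python
from fractions import Fraction as F

# Area of the nth box: height 10^n times width 1/100^n = (1/10)^n,
# plus the unit box on [0, 1].
total = 1 + sum(F(10)**n / F(100)**n for n in range(1, 60))
print(float(total))  # approaches 10/9 ≈ 1.111...
assert abs(total - F(10, 9)) < F(1, 10**50)
```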

One can think of the graph of g as a series of disjoint “rectangles”, each of width \frac{1}{100^n} and height 10^n . The rectangles get skinnier and taller as n goes to infinity and there is a LOT of zero height in between the rectangles.

[Figure: the graph of g : a few of the tall, thin rectangles.]

Needless to say, the “boxes” would be taller and skinnier.

Note: this example can be easily modified to provide an example of a function which is L^2 (square integrable) but unbounded on [0, \infty) . Hat tip to Ariel who caught the error.

It is easy to compute the Laplace transform of g :

G(s) = \int^{\infty}_0 g(t)e^{-st} dt . The transform exists if, say, s \geq 0 by routine comparison test as |e^{-st}| \leq 1 for that range of s and the calculation is easy:

G(s) = \int^{\infty}_0 g(t)e^{-st} dt = \frac{1}{s} (1-e^{-s}) + \frac{1}{s} \sum^{\infty}_{n=1} (\frac{10}{e^s})^n(1-e^{\frac{-s}{100^n}})

Note: if one wants to, one can see that the given series representation converges for s \geq 0 by using the ratio test and L’Hopital’s rule.
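One can also watch the series converge numerically; the point is that the factor (1-e^{\frac{-s}{100^n}}) shrinks like \frac{s}{100^n} , overwhelming the growth of (\frac{10}{e^s})^n . A Python sketch (the value s = 0.5 and the number of terms are arbitrary choices of mine; expm1 avoids losing the tiny second factor to rounding):

```python
import math

def term(n, s):
    # nth term of the series; -expm1(-x) = 1 - e^{-x} without cancellation
    return (10 / math.exp(s))**n * (-math.expm1(-s / 100**n))

s = 0.5   # even below ln(10), where (10/e^s)^n grows, the second factor wins
terms = [term(n, s) for n in range(1, 40)]
print(sum(terms), terms[-1])  # partial sum stabilizes; terms die off fast
```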

November 3, 2011

Finding a Particular solution: the Convolution Method

Background for students
Remember that when one is trying to solve a non-homogeneous differential equation, say:
y^{\prime \prime} +3y^{\prime} +2y = cos(t) , one finds the general solution to y^{\prime \prime} +3y^{\prime} +2y = 0 (this is called the homogeneous solution; in this case it is c_1 e^{-2t} + c_2 e^{-t} ) and then finds some solution to y^{\prime \prime} +3y^{\prime} +2y = cos(t) . This solution, called a particular solution, will not have an arbitrary constant. Hence that solution cannot meet an arbitrary initial condition.

But adding the homogeneous solution to the particular solution yields a general solution with arbitrary constants which can be solved for to meet a given initial condition.

So how does one obtain a particular solution?

Students almost always learn the so-called “method of undetermined coefficients”; this is used when the driving function is a sine, cosine, e^{at} , a polynomial, or some sum and product of such things. Basically, one assumes that the particular solution has a certain form, then substitutes into the differential equation and determines the coefficients. For example, in our example, one might try y_p = Acos(t) + Bsin(t) and then substitute into the differential equation to solve for A and B . One could also try a complex form; that is, try y_p = Ae^{it} , determine A and then use the real part of the solution.

A second method for finding particular solution is to use variation of parameters. Here is how that goes: one obtains two linearly independent homogeneous solutions y_1, y_2 and then seeks a particular solution of the form y_p = v_1y_1 + v_2y_2 where v_1 = -\int \frac{f(t)y_2}{W} dt and v_2 = \int \frac{f(t)y_1}{W} dt where W is the determinant of the Wronskian matrix. This method can solve differential equations like y^{\prime \prime} + y = tan(t) and sometimes is easier to use when the driving function is messy.
But sometimes it can lead to messy, non-transparent solutions when “undetermined coefficients” is much easier; for example, try solving y^{\prime \prime} + 4y = cos(5t) with variation of parameters. Then try to do it with undetermined coefficients; though the answers are the same, one method yields a far “cleaner” answer.

There is a third way that gives a particular solution that meets a specific initial condition. Though this method can yield a not-so-easy-to-do-by-hand integral and can sometimes lead to what I might call an answer in obscured form, the answer is in the form of a definite integral that can be evaluated by numerical integration techniques (if one wants, say, the graph of a solution).

This method is the Convolution Method. Many texts introduce convolutions in the Laplace transform section but there is no need to wait until then.

What is a convolution?
We can define the convolution of two functions f and g to be:
f*g = \int_0^t g(u)f(t-u)du . Needless to say, f and g need to meet appropriate “integrability” conditions; this is usually not a problem in a differential equations course.

Example: if f = e^t, g=cos(t) , then f*g = \frac{1}{2}(e^t - cos(t) + sin(t)) . Notice that the dummy variable gets “integrated out” and the variable t remains.
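Here is a quick way to spot-check a convolution numerically (Python sketch; the Simpson-rule helper and the sample point t = 1.5 are my own choices, not part of the text):

```python
import math

def convolve(f, g, t, n=2000):
    # Simpson approximation of (f*g)(t) = ∫_0^t g(u) f(t-u) du; n must be even
    h = t / n
    def integrand(u):
        return g(u) * f(t - u)
    s = integrand(0.0) + integrand(t)
    for i in range(1, n):
        s += integrand(i * h) * (4 if i % 2 else 2)
    return s * h / 3

t = 1.5
num = convolve(math.exp, math.cos, t)
closed = 0.5 * (math.exp(t) - math.cos(t) + math.sin(t))
print(num, closed)  # agree to many digits
```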

There are many properties of convolutions that I won’t get into here; one interesting one is that f*g = g*f ; proving this is an interesting exercise in change of variable techniques in integration.

The Convolution Method
If y(t) is a homogeneous solution to a second order linear differential equation that meets the initial conditions y(0)=0, y^{\prime}(0) =1 and f is the forcing function, then y_p = f*y is the particular solution that meets y_p(0)=0, y_p^{\prime}(0) =0 .

How might we use this method and why is it true? We’ll answer the “how” question first.

Suppose we want to solve y^{\prime \prime} + y = tan(t) . The homogeneous solution is y_h = c_1 cos(t) + c_2 sin(t) and it is easy to see that we need c_1 = 0, c_2 = 1 to meet the y_h(0)=0, y^{\prime}_h(0) =1 condition. So a particular solution is sin(t)*tan(t) = tan(t)*sin(t)= \int_0^t tan(u)sin(t-u)du = \int_0^t tan(u)(sin(t)cos(u)-cos(t)sin(u))du = sin(t)\int_0^t sin(u)du - cos(t)\int_0^t \frac{sin^2(u)}{cos(u)}du = sin(t)(1-cos(t)) -cos(t)ln|sec(t) + tan(t)| + sin(t)cos(t) = sin(t) -cos(t)ln|sec(t)+tan(t)|

This particular solution meets y_p(0)=0, y_p^{\prime}(0) = 0 .
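The same kind of numerical spot check works here (Python sketch of mine; t = 1 keeps us inside (-\frac{\pi}{2}, \frac{\pi}{2}) where tan is defined):

```python
import math

def convolve(f, g, t, n=2000):
    # Simpson approximation of (f*g)(t) = ∫_0^t g(u) f(t-u) du; n must be even
    h = t / n
    def integrand(u):
        return g(u) * f(t - u)
    s = integrand(0.0) + integrand(t)
    for i in range(1, n):
        s += integrand(i * h) * (4 if i % 2 else 2)
    return s * h / 3

t = 1.0
num = convolve(math.sin, math.tan, t)
closed = math.sin(t) - math.cos(t) * math.log(abs(1/math.cos(t) + math.tan(t)))
print(num, closed)  # agree
```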

Why does this work?
This is where “differentiation under the integral sign” comes into play. So we write f*y = \int_0^t f(u)y(t-u)du .
Then (f*y)^{\prime} = ?

Look at the convolution integral as g(x,z) = \int_0^x f(u)y(z-u)du . Now think of x(t) = t, z(t) = t . Then from calculus III: \frac{d}{dt} g(x,z) = g_x \frac{dx}{dt} + g_z \frac{dz}{dt} . Of course, \frac{dx}{dt}=\frac{dz}{dt}=1 .
g_x= f(x)y(z-x) by the Fundamental Theorem of calculus and g_z = \int_0^x f(u) y^{\prime}(z-u) du by differentiation under the integral sign.

So we let x = t, z = t and we see \frac{d}{dt} (f*y) = f(t)y(0) + \int_0^t f(u) y^{\prime}(t-u) du which equals \int_0^t f(u) y^{\prime}(t-u) du because y(0) = 0 . Now by the same reasoning \frac{d^2}{dt^2} (f*y) = f(t)y^{\prime}(0) + \int_0^t f(u) y^{\prime \prime}(t-u) du = f(t)+ \int_0^t f(u) y^{\prime \prime}(t-u) du because y^{\prime}(0) = 1 .
Now substitute into the differential equation y^{\prime \prime} + ay^{\prime} + by = f(t) and use the linear property of integrals to obtain f(t) + \int_0^t f(u) (y^{\prime \prime}(t-u) + ay^{\prime}(t-u) + by(t-u))du = f(t) + \int_0^t f(u) (0)du = f(t)

It is easy to see that (f*y)(0) = 0 . Now check: \frac{d}{dt} (f*y)|_{t=0} = f(0)y(0) + \int_0^0 f(u) y^{\prime}(-u) du = 0 .

October 31, 2011

Differentiation Under the Integral Sign

Suppose we have F(s) = \int_a^b f(s,t)dt and we’d like to know what \frac{d}{ds} F is.
The answer is \frac{d}{ds}F(s) = \int_a^b \frac{\partial}{\partial s} f(s,t)dt .

This is an important result in applied mathematics; I’ll give some applications (there are many!) in our next post. Both examples are from a first course in differential equations.

First, I should give the conditions on f(s,t) to make this result true: continuity of f(s,t) and \frac{\partial}{\partial s} f(s,t) on some rectangle in (s,t) space which contains all of the points in question (including the interval of integration) is sufficient.

Why is the formula true? The proof isn’t hard at all and it makes use of the Mean Value Theorem and of some basic theorems concerning limits and integrals.

Some facts that we’ll use: the Mean Value Theorem, and the estimate: if M = max{|f|} on some interval [a,b] , then |\int_a^b f(t)dt| \leq M |b-a| .

Now recall from calculus: \frac{d}{ds} F =lim_{s_0 \rightarrow s} \frac{F(s_0)-F(s)}{s_0 - s} = lim_{s_0 \rightarrow s} \frac{1}{s_0 -s} \int_a^b f(s_0,t)-f(s,t) dt =lim_{s_0 \rightarrow s} \int_a^b \frac{f(s_0,t)-f(s,t)}{s_0 - s} dt

We now employ one of the most common tricks of mathematics; we guess at the “right answer” and then show that the right answer is what we guessed.

We will examine the integrand (the function being integrated). Does \frac{f(s_0,t)-f(s,t)}{s_0 - s} remind you of anything? Right; this is the fraction from the Mean Value Theorem; that is, there is some s* between s and s_0 such that \frac{f(s_0,t)-f(s,t)}{s_0 - s} = \frac{\partial}{\partial s} f(s*,t)

Because we are assuming the continuity of the partial derivative (on a compact rectangle, so it is uniformly continuous there), we can say that for s_0 sufficiently close to s , |\frac{f(s_0,t)-f(s,t)}{s_0 - s} - \frac{\partial}{\partial s} f(s,t)|  < \epsilon for all t in [a,b] .

This means that | \int_a^b \frac{f(s_0,t)-f(s,t)}{s_0 - s} - \frac{\partial}{\partial s} f(s,t) dt | < \int_a^b |\frac{f(s_0,t)-f(s,t)}{s_0 - s} - \frac{\partial}{\partial s} f(s,t)| dt < \epsilon (b-a)

Now realize that \epsilon can be made as small as desired by letting s_0 get sufficiently close to s so it follows by the \epsilon-\delta definition of limit that:
lim_{s_0 \rightarrow s}\int_a^b \frac{f(s_0,t)-f(s,t)}{s_0 - s} - \frac{\partial}{\partial s} f(s,t) dt=0 which implies that
lim_{s_0 \rightarrow s}\int_a^b \frac{f(s_0,t)-f(s,t)}{s_0 - s}dt -\int_a^b \frac{\partial}{\partial s} f(s,t) dt=0
Therefore lim_{s_0 \rightarrow s} \frac{F(s_0)-F(s)}{s_0 - s} - \int_a^b \frac{\partial}{\partial s} f(s,t) dt=0
So the result follows.
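The theorem is easy to test numerically; for example, with f(s,t) = e^{-st} on [0,1] (Python sketch; the sample point s = 1.3 and the step sizes are arbitrary choices of mine):

```python
import math

def simpson(f, a, b, n=2000):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

def F(s):
    # F(s) = ∫_0^1 e^{-st} dt, computed numerically
    return simpson(lambda t: math.exp(-s * t), 0.0, 1.0)

s0, h = 1.3, 1e-5
lhs = (F(s0 + h) - F(s0 - h)) / (2 * h)                    # d/ds of F
rhs = simpson(lambda t: -t * math.exp(-s0 * t), 0.0, 1.0)  # ∫ ∂f/∂s dt
print(lhs, rhs)  # agree
```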

Next post: we’ll give a couple of applications of this.
