College Math Teaching

October 31, 2011

Differentiation Under the Integral Sign

Suppose we have F(s) = \int_a^b f(s,t)dt and we’d like to know what \frac{d}{ds} F is.
The answer is \frac{d}{ds}F(s) = \int_a^b \frac{\partial}{\partial s} f(s,t)dt .

This is an important result in applied mathematics; I’ll give a couple of applications (there are many!) in our next post. Both examples are from a first course in differential equations.

First, I should give the conditions on f(s,t) to make this result true: continuity of f(s,t) and \frac{\partial}{\partial s} f(s,t) on some rectangle in (s,t) space which contains all of the points in question (including the interval of integration) is sufficient.
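
As a quick sanity check, here is a short Python snippet that compares a central-difference estimate of \frac{d}{ds}F with the integral of the partial derivative, for the sample choice f(s,t) = e^{-st} on [0,1] (any f meeting the hypotheses would do):

import numpy as np
from scipy.integrate import quad

def F(s):
    # F(s) = int_0^1 exp(-s*t) dt
    return quad(lambda t: np.exp(-s * t), 0.0, 1.0)[0]

def dF_via_integral(s):
    # integrate the partial derivative: int_0^1 (-t) exp(-s*t) dt
    return quad(lambda t: -t * np.exp(-s * t), 0.0, 1.0)[0]

s, h = 2.0, 1e-6
print((F(s + h) - F(s - h)) / (2 * h))  # finite-difference estimate of F'(2)
print(dF_via_integral(s))               # should agree to many decimal places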

Why is the formula true? The proof isn’t hard at all and it makes use of the Mean Value Theorem and of some basic theorems concerning limits and integrals.

Some facts that we’ll use: the Mean Value Theorem, and the estimate that if M = \max{|f|} on an interval [a,b] , then |\int_a^b f(t)dt| \leq M |b-a| .

Now recall from calculus: \frac{d}{ds} F =\lim_{s_0 \rightarrow s} \frac{F(s_0)-F(s)}{s_0 - s} = \lim_{s_0 \rightarrow s} \frac{1}{s_0 -s} \int_a^b (f(s_0,t)-f(s,t)) dt =\lim_{s_0 \rightarrow s} \int_a^b \frac{f(s_0,t)-f(s,t)}{s_0 - s} dt

We now employ one of the most common tricks of mathematics; we guess at the “right answer” and then show that the right answer is what we guessed.

We will examine the integrand (the function being integrated). Does \frac{f(s_0,t)-f(s,t)}{s_0 - s} remind you of anything? Right; this is the difference quotient from the Mean Value Theorem; that is, for each t there is some s^* between s and s_0 (note that s^* depends on t ) such that \frac{f(s_0,t)-f(s,t)}{s_0 - s} = \frac{\partial}{\partial s} f(s^*,t)

Because \frac{\partial}{\partial s} f(s,t) is continuous on a closed rectangle, it is uniformly continuous there. So given \epsilon > 0 , for s_0 sufficiently close to s we have |\frac{f(s_0,t)-f(s,t)}{s_0 - s} - \frac{\partial}{\partial s} f(s,t)| = |\frac{\partial}{\partial s} f(s^*,t) - \frac{\partial}{\partial s} f(s,t)|  < \epsilon for every t in [a,b] at once.

This means that | \int_a^b (\frac{f(s_0,t)-f(s,t)}{s_0 - s} - \frac{\partial}{\partial s} f(s,t)) dt | \leq \int_a^b |\frac{f(s_0,t)-f(s,t)}{s_0 - s} - \frac{\partial}{\partial s} f(s,t)| dt < \epsilon (b-a)

Now realize that \epsilon can be made as small as desired by taking s_0 sufficiently close to s , so it follows from the \epsilon-\delta definition of limit that:
\lim_{s_0 \rightarrow s}\int_a^b (\frac{f(s_0,t)-f(s,t)}{s_0 - s} - \frac{\partial}{\partial s} f(s,t)) dt=0 which implies that
\lim_{s_0 \rightarrow s}\int_a^b \frac{f(s_0,t)-f(s,t)}{s_0 - s}dt -\int_a^b \frac{\partial}{\partial s} f(s,t) dt=0
Therefore \lim_{s_0 \rightarrow s} \frac{F(s_0)-F(s)}{s_0 - s} - \int_a^b \frac{\partial}{\partial s} f(s,t) dt=0
So the result follows.

Next post: we’ll give a couple of applications of this.


October 10, 2011

The Picard Iterates: how they can yield an interval of existence.

One of the many good things about my teaching career is that as I teach across the curriculum, I fill in the gaps of my own education.
I got my Ph. D. in topology (low dimensional manifolds; in particular, knot theory) and hadn’t seen much of differential equations beyond my “engineering oriented” undergraduate course.

Therefore, I learned more about existence and uniqueness theorems when I taught differential equations; though I never taught those theorems in a class, I learned the proofs for my own background. In doing so I learned about the Picard iterated integral technique for the first time; how this is used to establish “uniqueness of solution” can be found here.

However I recently discovered (for myself) what thousands of mathematicians already know: the Picard process can be used to yield an interval of existence for a solution for a differential equation, even if we cannot obtain the solution in closed form.

The situation
I assigned my numerical methods class to solve y'= t + y^2 with y(0)=1 and to produce the graph of y(t) from t = 0 to t = 3 .

Since f(t,y) = t + y^2 and \frac{\partial }{\partial y}f(t,y)=2y are continuous everywhere, there is a unique solution, and that solution is valid so long as the t and y values of the solution curve stay finite.

So, is it possible that the y values for this solution become unbounded?

Answer: yes.
What follows are the notes I gave to my class.

Numeric output seems to indicate this, but numeric output is NOT proof.
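
Here is the sort of numeric experiment I have in mind, sketched in Python with scipy (any adaptive solver behaves similarly): the integrator gives up in finite time because the required step size collapses as y explodes.

import numpy as np
from scipy.integrate import solve_ivp

# y' = t + y^2, y(0) = 1, attempted on [0, 3]
sol = solve_ivp(lambda t, y: t + y**2, (0.0, 3.0), [1.0], rtol=1e-8, atol=1e-8)
print(sol.status, sol.message)  # typically status -1: required step size too small
print(sol.t[-1], sol.y[0, -1])  # t stalls well short of 3 while y is enormous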

To find a proof of this, let’s turn to the Picard iteration technique. We
know that the Picard iterates will converge to the unique solution.

y_{0}=1

y_{1}=1+\int_{0}^{t}(x+1)dx=\frac{1}{2}t^{2}+t+1

y_{2}=1+\int_{0}^{t}(x+(\frac{1}{2}x^{2}+x+1)^{2})dx=

y_{2}=\frac{1}{20}t^{5}+\frac{1}{4}t^{4}+\frac{2}{3}t^{3}+\frac{3}{2}t^{2}+t+1

The integrals get pretty ugly around here; I used MATLAB to calculate the
higher order iterates. I’ll show you y_{3} :

y_{3}=\frac{49}{60}t^{5}+\frac{13}{12}t^{4}+\frac{4}{3}t^{3}+\frac{3}{2}t^{2}+t+1+O(t^{11})

where O(t^{11}) means assorted polynomial terms from order 6 to 11.

Here is one more:

y_{4}=\frac{17}{12}t^{5}+\frac{17}{12}t^{4}+\frac{4}{3}t^{3}+\frac{3}{2}t^{2}+t+1+O(t^{23})
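
If you want to reproduce these iterates without MATLAB, here is a minimal sympy sketch (any computer algebra system would do):

import sympy as sp

t, x = sp.symbols('t x')
y = sp.Integer(1)  # y_0 = 1
for n in range(1, 5):
    # Picard step: y_n = 1 + int_0^t (x + y_{n-1}(x)^2) dx
    y = sp.expand(1 + sp.integrate(x + y.subs(t, x)**2, (x, 0, t)))
    print(f'y_{n}, coefficients of 1, t, ..., t^5:', [y.coeff(t, j) for j in range(6)])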

We notice some patterns developing here. First of all, the coefficient of
the t^{n} term is staying the same for all y_{m} where m\geq n.

That is tedious to prove. But what is easier to show (and sufficient) is
that the coefficients of the t^{n} terms for y_{n} all appear to be
at least 1. This is important!

Why? If we can show that this is the case, then our “limit” solution \sum_{k=0}^{\infty }a_{k}t^{k} will have radius of convergence at most 1. Why? Substitute t=1 and see that the sum
diverges because the a_{k} fail to converge to zero; indeed they
stay at least 1.
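
In symbols: if a_{k}\geq 1 for all k , then at t=1 the terms a_{k}t^{k}=a_{k} do not converge to zero, so \sum_{k=0}^{\infty }a_{k} diverges by the k-th term test, and the radius of convergence of \sum_{k=0}^{\infty }a_{k}t^{k} is at most 1.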

So, can we prove this general pattern?

YES!

Here is the idea: y_{m}=q(t)+p(t) where p(t) consists of the terms of y_{m} of degree m and lower, and q(t) consists of the terms of degree m+1 and higher.

Now put into the Picard process:

y_{m+1}=1+\int_{0}^{t}((q(x)+p(x))^{2}+x)dx=

1+\int_{0}^{t}(q(x)^{2}+2p(x)q(x))dx+\int_{0}^{t}((p(x))^{2}+x)dx

Note: all of the terms of y_{m+1} of degree m+1 or lower must come from
the second integral: the first integrand has no terms of degree below m+1 , so the first integral contributes only terms of degree m+2 and higher.

Now by induction we can assume that all of the coefficients of the
polynomial p(x) are greater than or equal to one.

When we “square out” the polynomial, each coefficient of the new
polynomial is a sum of products of coefficients of p(x) . For the
coefficients of the polynomial (p(x))^{2} of
degree m or lower: the k^{\prime }th
coefficient is a sum of exactly k+1 such products, each of which
is at least one.

Now when one integrates these particular terms, the x^{k} term becomes the x^{k+1} term and one, of course,
divides by k+1 (power rule for integration). But that means that the
coefficient (after integration) is still at least 1.
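
Written out in one line, for k\leq m and using the induction hypothesis a_{i}\geq 1 : the coefficient of x^{k+1} in \int_{0}^{t}(p(x))^{2}dx is \frac{1}{k+1}\sum_{i=0}^{k}a_{i}a_{k-i}\geq \frac{1}{k+1}(k+1)=1 .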

Here is a specific example:

Say p(x)=a+bx+cx^{2}+dx^{3}

Now p(x)^{2}=a^{2}+(ab+ab)x+(ac+ca+b^{2})x^{2}+(ad+da+bc+cb)x^{3}+\{O(x^{6})\}

Remember that a,b,c,d are all greater than or equal to one.

Now p(x)^{2}+x=a^{2}+(ab+ab+1)x+(ac+ca+b^{2})x^{2}+(ad+da+bc+cb)x^{3}+\{O(x^{6})\}

Now when we integrate term by term, we get:

\int_{0}^{t}(p(x))^{2}+xdx=a^{2}x+\frac{1}{2}(ab+ab+1)x^{2}+\frac{1}{3}(ac+ca+b^{2})x^{3}+\frac{1}{4}(ad+da+bc+cb)x^{4}+\{O(x^{7})\}

But note that ab+ab+1\geq 3 , ac+ca+b^{2}\geq 3 , and ad+da+bc+cb\geq 4 , since all of the factors a,b,c,d are greater than or equal to 1.

Hence in our new polynomial approximation, the terms of order 4 or less all
have coefficients which are greater than or equal to one.

We can make this into a Proposition:

Proposition
Suppose p(x)=\sum_{j=0}^{k}a_{j}x^{j} where each a_{j}\geq 1.

If q(x)=\sum_{j=0}^{2k+1}b_{j}x^{j}=1+\int_{0}^{x}((p(t))^{2}+t)dt

Then for all j\leq k+1,b_{j}\geq 1.

Proof. Of course, b_{0}=1,b_{1}=a_{0}^{2}, and b_{2}=\frac{2a_{0}a_{1}+1}{2}

Let n\leq k+1.

Then, since all of a_{0},a_{1},\dots ,a_{n-1} are
defined, we can calculate:

If n is odd, then b_{n}=\frac{1}{n}(2a_{0}a_{n-1}+2a_{1}a_{n-2}+...2a_{\frac{n-3}{2}}a_{\frac{n+1}{2}}+(a_{\frac{n-1}{2}})^{2})\geq \frac{1}{n}(2\ast \frac{n-1}{2}+1)=1

If n is even then b_{n}=\frac{1}{n}(2a_{0}a_{n-1}+2a_{1}a_{n-2}+....+2a_{\frac{n}{2}-1}a_{\frac{n}{2}})\geq \frac{1}{n}(2\ast \frac{n}{2})=1 (for n=2 , the extra +1 contributed by the x term only helps)

The Proposition is proved.

Of course, this possibly fails for b_{n} where n>k+1 as we would fail to
have a sufficient number of terms in our sum.
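
As a quick machine check of the Proposition on the actual iterates, here is a small sympy script verifying that after n Picard steps the coefficients of 1,t,\dots ,t^{n} are all at least 1:

import sympy as sp

t, x = sp.symbols('t x')
y = sp.Integer(1)  # y_0 = 1
for n in range(1, 7):
    y = sp.expand(1 + sp.integrate(x + y.subs(t, x)**2, (x, 0, t)))
    assert all(y.coeff(t, j) >= 1 for j in range(n + 1)), n
print('coefficients of t^0, ..., t^n are all >= 1 for n = 1, ..., 6')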

Now if one wants a challenge, one can modify the above arguments to show that the coefficients of the approximating polynomial never get “too big”; that is, the coefficient of the k^{\prime }th order term is less than, say, k .

It isn’t hard to show that b_{n}\leq \max a_{i}^{2} where i\in\{0,1,2,\dots ,n-1\} (apart from the extra \frac{1}{2} that the x term contributes to b_{2} )

Then one can compare to the derivative of the geometric series to show that
one gets convergence on an interval up to but not including 1.
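
For the record, the comparison series is the derivative of the geometric series, \sum_{k=1}^{\infty }kt^{k-1}=\frac{1}{(1-t)^{2}} , valid for |t|<1 . So if 1\leq b_{k}\leq k for all k , then \sum_{k}b_{k}t^{k} converges for |t|<1 by comparison and diverges at t=1 , giving radius of convergence exactly 1.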
