I am teaching a numerical analysis class this semester and we just started the section on differential equations. I want them to understand when we can expect to have a solution and when a solution satisfying a given initial condition is going to be unique.

We had gone over the “existence” theorem, which basically says: given an initial condition $y(x_0) = y_0$ where $(x_0, y_0) \in R$, where $R$ is some rectangle in the plane, if $f(x, y)$ is a continuous function over $R$, then we are guaranteed to have at least one solution to the differential equation $y' = f(x, y)$, and that solution is valid so long as $(x, y(x))$ stays in $R$.

I might post a proof of this theorem later; however, an outline of how a proof goes will be useful here. With no loss of generality, assume that $(x_0, y_0) = (0, 0)$ and that the rectangle $R$ has the lines $x = -a$ and $x = a$ as vertical boundaries. Let $\phi_0(x) = f(0,0)x$, the line of slope $f(0,0)$. Now partition the interval $[-a, a]$ into $-a = x_{-n} < \dots < x_{-1} < 0 < x_1 < \dots < x_n = a$ and create a polygonal path as follows: use slope $f(0, 0)$ on $[0, x_1]$, slope $f(x_1, \phi(x_1))$ on $[x_1, x_2]$, and so on to the right; reverse this process going left. The idea: we are using Euler’s differential equation approximation method to obtain an initial piecewise linear approximation. Then do this again with the step size cut in half, then in half again, and so on.
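To make the construction concrete, here is a minimal sketch of the Euler polygon in Python; the function $f$, the half-width $a$, and the subdivision counts are my own illustrative choices, not part of the argument above.

```python
# Sketch of the Euler polygon for y' = f(x, y), y(0) = 0, built on [-a, a].
# f, a, and n below are illustrative assumptions.

def euler_polygon(f, a, n):
    """Return the nodes (x_k, y_k) of the Euler polygonal approximation
    on [-a, a] with step h = a/n, built rightward and leftward from (0, 0)."""
    h = a / n
    # rightward from (0, 0): follow slope f at the current node
    right = [(0.0, 0.0)]
    for _ in range(n):
        x, y = right[-1]
        right.append((x + h, y + h * f(x, y)))
    # leftward from (0, 0): reverse the process
    left = [(0.0, 0.0)]
    for _ in range(n):
        x, y = left[-1]
        left.append((x - h, y - h * f(x, y)))
    return list(reversed(left))[:-1] + right  # nodes from -a to a

# Example: f(x, y) = y + x; halving the step gives the next curve in the family.
nodes = euler_polygon(lambda x, y: y + x, a=1.0, n=8)
finer = euler_polygon(lambda x, y: y + x, a=1.0, n=16)
```

Repeating with smaller and smaller steps produces the infinite family of approximation curves discussed next.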

In this way, we obtain an infinite family of continuous approximation curves $\phi_n$. Because $f$ is continuous over the closed, bounded rectangle $R$, it is also bounded there, hence the curves have slopes whose magnitudes are bounded by some $M > 0$. Hence this family is equicontinuous (for any given $\epsilon > 0$ one can use $\delta = \frac{\epsilon}{M}$ in continuity arguments, no matter which curve in the family we are talking about). Of course, these curves are uniformly bounded, hence by the Arzela-Ascoli Theorem (not difficult) we can extract a subsequence of these curves which converges uniformly to a limit function $\phi$.
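Written out, the equicontinuity estimate is just the slope bound (here $M$ bounds $|f|$ on $R$ and $\phi_n$ denotes any curve in the family):

```latex
|\phi_n(x_2) - \phi_n(x_1)| \le M |x_2 - x_1| < \epsilon
\quad \text{whenever } |x_2 - x_1| < \delta = \frac{\epsilon}{M},
```

and the same $\delta$ works for every $n$, which is exactly equicontinuity.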

Seeing that this limit function satisfies the differential equation isn’t that hard: one shows that $\left| \frac{\phi_n(x_2) - \phi_n(x_1)}{x_2 - x_1} - f(x_1, \phi_n(x_1)) \right| < \epsilon$ whenever $n$ is large enough and $x_2$ is sufficiently close to $x_1$. This is not hard to do because the segments of the approximating functions are getting smaller, the differential equation is satisfied exactly at the nodes, and, because $f$ is continuous, $f(x, y)$ cannot vary very much if the points $(x_1, y_1), (x_2, y_2)$ are close together.

But a careful proof of the above is for another time.

So, once we establish that there is at least one solution satisfying a given initial condition, how do we establish that there is only one?

First, let’s establish the domain: I’ll assume that the point in question is $(x_0, y_0)$ and I’ll first start with a rectangle of the form $R = \{(x, y): a \le x \le b, \ c \le y \le d\}$ with $(x_0, y_0)$ in the interior of this rectangle $R$. Now we state a proposed condition (often called “Lipschitz in $y$”):

If $f$ is continuous on $R$ and there is a $K > 0$ where $|f(x, y_1) - f(x, y_2)| \le K|y_1 - y_2|$ for all $(x, y_1), (x, y_2) \in R$, then the differential equation $y' = f(x, y)$ has exactly one solution where $y(x_0) = y_0$, which is valid so long as the graph remains in $R$.
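For contrast, when the Lipschitz condition fails, uniqueness can fail too. The classic example (my addition, not part of the theorem) is $y' = 3y^{2/3}$ with $y(0) = 0$, which is solved by both $y = 0$ and $y = x^3$; a quick numerical sanity check:

```python
# Non-uniqueness when the Lipschitz condition fails: f(x, y) = 3 y^(2/3) is
# continuous but not Lipschitz in y near y = 0.  Both y = 0 and y = x^3
# satisfy y' = f(x, y) with y(0) = 0.  (Illustrative example, not from the text.)

def f(x, y):
    return 3.0 * y ** (2.0 / 3.0)

for x in [0.0, 0.5, 1.0, 2.0]:
    # candidate solution 1: y(x) = 0, so y'(x) = 0
    assert f(x, 0.0) == 0.0
    # candidate solution 2: y(x) = x^3, so y'(x) = 3x^2
    assert abs(f(x, x ** 3) - 3.0 * x ** 2) < 1e-9

print("both candidate solutions satisfy the ODE at the sample points")
```

The trouble is exactly at $y = 0$: the difference quotient $\frac{|f(x, y) - f(x, 0)|}{|y - 0|} = \frac{3}{|y|^{1/3}}$ is unbounded as $y \to 0$.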

Here is the proof: note that the Lipschitz condition can be rewritten as $\frac{|f(x, y_1) - f(x, y_2)|}{|y_1 - y_2|} \le K$ whenever $y_1 \ne y_2$. This is clear but perhaps a strange step.

But now suppose that there are two solutions, say $y_1$ and $y_2$, where $y_1(x_0) = y_2(x_0) = y_0$. So set $w = y_1 - y_2$ and note the following: $w(x_0) = 0$ and $w' = y_1' - y_2' = f(x, y_1) - f(x, y_2)$ on $R$. Now because $y_1$ and $y_2$ are different functions on $R$, there is some $x_1$ where, say, $w(x_1) > 0$. A Mean Value Theorem argument applied to $w$ means that we can select our $x_1$ so that $w > 0$ on an interval $(x^*, x_1]$ with $w(x^*) = 0$ (since $w(x_0) = 0$ and $w$ is continuous).

So, on this selected interval $(x^*, x_1]$ we have $w' \le Kw$ (since $w > 0$ there, we can remove the absolute value signs).
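Spelling this step out with the Lipschitz constant $K$ and $w = y_1 - y_2$:

```latex
|w'| = |y_1' - y_2'| = |f(x, y_1) - f(x, y_2)| \le K|y_1 - y_2| = K|w| = Kw ,
```

so in particular $w' \le Kw$ wherever $w > 0$.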

Now we set up the comparison differential equation: $z' = Kz, \ z(x_1) = w(x_1)$, which has a unique solution whose graph is always positive: $z(x) = w(x_1)e^{K(x - x_1)}$. Note that the graphs of $w, z$ meet at $x_1$. But at the left endpoint $x^*$ of our interval we have $w(x^*) = 0 < z(x^*)$, hence there is some $x \in [x^*, x_1)$ where $w(x) < z(x)$.

But since $w(x_1) = z(x_1)$, the graphs of $w, z$ must meet at some point in the interval $(x^*, x_1]$; let $\bar{x}$ be the first such point. Then we have $w(\bar{x}) = z(\bar{x})$ AND $w(x) < z(x)$ on $[x^*, \bar{x})$. Then apply the Mean Value Theorem to the function $\frac{w}{z}$ on $[x^*, \bar{x}]$: since $\frac{w}{z}$ goes from $0$ at $x^*$ to $1$ at $\bar{x}$, we find a point $c$ between $x^*$ and $\bar{x}$ where $\left(\frac{w}{z}\right)'(c) > 0$, which is impossible as $\left(\frac{w}{z}\right)' = \frac{w'z - wz'}{z^2} = \frac{w' - Kw}{z} \le 0$ on that interval.

So, no such point $x_1$ can exist; hence $w \equiv 0$ and the two solutions agree.

Note that we used the fact that the solution to $z' = Kz, \ z(x_1) = w(x_1) > 0$ is always positive. Though this is an easy differential equation to solve, note the key fact that if we tried to separate the variables, we’d calculate $\int \frac{dz}{Kz}$ and find that, near $z = 0$, this is an improper integral which diverges to positive $\infty$; hence a solution that is ever positive cannot change sign nor reach zero. So, if we had $z' = \phi(z)$ where $\int_{0^+} \frac{dz}{\phi(z)}$ is an infinite improper integral and $\phi(z) > 0$ for $z > 0$, we would get exactly the same result for exactly the same reason.
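Concretely, separating variables in $z' = Kz$ between a level $z_0 > 0$ and $z_1 = z(x_1)$ gives

```latex
\int_{z_0}^{z_1} \frac{dz}{Kz} = \frac{1}{K}\left( \ln z_1 - \ln z_0 \right)
\longrightarrow +\infty \quad \text{as } z_0 \to 0^{+} ,
```

so the “$x$-time” needed to travel from $z = 0$ up to $z_1$ is infinite: a solution that is ever positive can never have been zero.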

Hence we can recover Osgood’s Uniqueness Theorem which states:

If $f$ is continuous on $R$ and for all $(x, y_1), (x, y_2) \in R$ we have $|f(x, y_1) - f(x, y_2)| \le \phi(|y_1 - y_2|)$, where $\phi$ is a positive function (for positive arguments) and $\int_{0^+} \frac{ds}{\phi(s)}$ diverges to $+\infty$ at $0$, then the differential equation $y' = f(x, y)$ has exactly one solution where $y(x_0) = y_0$, which is valid so long as the graph remains in $R$.
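A standard function satisfying Osgood’s condition but no Lipschitz condition (my added example) is $\phi(s) = s \ln(1/s)$ for small $s > 0$: the quotient $\phi(s)/s = \ln(1/s)$ is unbounded as $s \to 0^+$, yet

```latex
\int_{0^{+}}^{\epsilon} \frac{ds}{s \ln(1/s)}
= \Big[ -\ln \ln (1/s) \Big]_{0^{+}}^{\epsilon} = +\infty
\quad (0 < \epsilon < 1) ,
```

so Osgood’s theorem still delivers uniqueness.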