This is “part 2” of a previous post about series. Once again, this is not designed for students seeing the material for the first time.
I’ll deal with general power series first, then later discuss Taylor Series (one method of obtaining a power series).
General Power Series

Unless otherwise stated, we will be dealing with series of the form $\sum a_k x^k$ though, of course, a series can be expanded about any point in the real line (e.g. $\sum a_k (x-c)^k$ is expanded about $x = c$). Note: unless it is needed, I'll be suppressing the index in the summation sign.
So, IF we have a function $f(x) = \sum a_k x^k$, what do we mean by this and what can we say about it?
The first thing to consider, of course, is convergence. One main fact is that the open interval of convergence of a power series centered at 0 is either:
1. Non-existent …the series converges at $x = 0$ only (e.g. $\sum k! x^k$; try the ratio test) or
2. Converges absolutely on $(-r, r)$ for some $r > 0$ and diverges for all $|x| > r$ (like a geometric series) (anything is possible at $|x| = r$) or
3. Converges absolutely on the whole real line.
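Here is a quick numeric illustration of the three cases (the examples are my own choices, and computation, of course, is not a proof):

```python
# A numeric illustration of the three convergence cases:
# sum k! x^k has terms that blow up for any x != 0, the geometric
# series sum x^k converges exactly for |x| < 1, and sum x^k / k!
# converges for every real x.
import math

def partial_sum(coef, x, n):
    """Partial sum sum_{k=0}^{n} coef(k) * x**k."""
    return sum(coef(k) * x**k for k in range(n + 1))

# Case 1: sum k! x^k -- the terms grow without bound even at x = 0.1
terms = [math.factorial(k) * 0.1**k for k in range(0, 60, 10)]
assert terms[-1] > terms[0]  # terms do not go to 0, so the series diverges

# Case 2: geometric series, radius of convergence 1
s = partial_sum(lambda k: 1.0, 0.5, 200)
assert abs(s - 1 / (1 - 0.5)) < 1e-12

# Case 3: sum x^k / k! converges to e^x for every x
s = partial_sum(lambda k: 1.0 / math.factorial(k), 3.0, 60)
assert abs(s - math.exp(3.0)) < 1e-10
```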
Many texts state this result but do not prove it. I can understand why, as this result is really an artifact of the calculus of a complex variable. But it won’t hurt to sketch out a proof, at least for the “converges” case, so I’ll do that.
So, let’s assume that $\sum a_k r^k$ converges (either absolute OR conditional convergence) for some $r > 0$. So, by the divergence test, we know that the sequence of terms $a_k r^k \rightarrow 0$. So we can find some index $N$ such that $|a_k r^k| \leq 1$ for all $k > N$.

So write, for $|x| < r$ and $k > N$: $|a_k x^k| = |a_k r^k| \left| \frac{x}{r} \right|^k \leq \left| \frac{x}{r} \right|^k$, and since $\left| \frac{x}{r} \right| < 1$ the geometric series $\sum \left| \frac{x}{r} \right|^k$ converges, so by direct comparison $\sum a_k x^k$ converges absolutely for $|x| < r$. The divergence follows from a simple “reversal of the inequalities” (that is, if the series diverged at $x = r$, convergence at some $|x| > r$ would force convergence at $r$).
Though the relation between real and complex variables might not be apparent, it CAN be useful. Here is an example: suppose one wants to find the open interval of convergence of, say, the series representation for $\frac{1}{1+x^2}$? Of course, the function itself is continuous on the whole real line. But to find the open interval of convergence, look for the complex root of $1 + z^2$ that is closest to zero. That would be $z = \pm i$, at distance 1 from the origin, so the interval is $(-1, 1)$.
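To make this concrete, here is a small numeric sketch using $\frac{1}{1+x^2}$, the classic example of a function smooth on all of the real line whose Maclaurin series nevertheless stops converging at $|x| = 1$ (the distance to the complex roots $\pm i$):

```python
# The Maclaurin series of 1/(1+x^2) is sum (-1)^k x^(2k). Inside
# |x| < 1 the partial sums settle down to the function; for |x| > 1
# the individual terms blow up, even though 1/(1+x^2) itself is
# perfectly smooth there.
def partial(x, n):
    return sum((-1)**k * x**(2 * k) for k in range(n + 1))

inside = partial(0.9, 400)
assert abs(inside - 1 / (1 + 0.9**2)) < 1e-10

# at x = 1.1 the k = 400 term already has size 1.1^800, which is enormous
assert 1.1**800 > 1e10
```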
So, what about a function defined as a power series?
For one, a power series expansion of a function at a specific point (say, $x = 0$) is unique:
Say $\sum a_k x^k = \sum b_k x^k$ on the interval of convergence; then substituting $x = 0$ yields $a_0 = b_0$. So subtract $a_0 = b_0$ from both sides and get: $\sum_{k \geq 1} a_k x^k = \sum_{k \geq 1} b_k x^k$. Now assuming one can “factor out an $x$” from both sides (and you can) we get $\sum_{k \geq 1} a_k x^{k-1} = \sum_{k \geq 1} b_k x^{k-1}$, so substituting $x = 0$ again yields $a_1 = b_1$, etc.
Yes, this should properly have been phrased as a limit, and I’ve yet to show that $f(x) = \sum a_k x^k$ is continuous on its open interval of absolute convergence.
Now if you say “doesn’t that follow from uniform convergence which follows from the Weierstrass M test” you’d be correct…and you are probably wasting your time reading this.
Now if you remember hearing those words but could use a refresher, here goes:
We want $|f(x) - f(x_0)| < \epsilon$ for $x, x_0$ sufficiently close AND both within the open interval of absolute convergence, say $(-r, r)$. Choose $r_1$ where $|x|, |x_0| < r_1 < r$, and $M$ such that $\sum_{k=M+1}^{\infty} |a_k| r_1^k < \frac{\epsilon}{3}$, and $\delta$ such that $|x - x_0| < \delta$ implies $|P_M(x) - P_M(x_0)| < \frac{\epsilon}{3}$, where $P_M(x) = \sum_{k=0}^{M} a_k x^k$ (these are just polynomials).

The rest follows: $|f(x) - f(x_0)| \leq |f(x) - P_M(x)| + |P_M(x) - P_M(x_0)| + |P_M(x_0) - f(x_0)| < \frac{\epsilon}{3} + \frac{\epsilon}{3} + \frac{\epsilon}{3} = \epsilon$.
Calculus

On the open interval of absolute convergence, a power series can be integrated term by term and differentiated term by term. Neither result, IMHO, is “obvious.” In fact, sans extra criteria, for series of functions in general, it is false. Quickly: if you remember Fourier series, think about the Fourier series for a rectangular pulse wave; the derivative of the pulse wave is zero everywhere except at the jump points (where it is undefined), but differentiate the constituent functions and you get a colossal mess.
Differentiation: most texts avoid the direct proof (with good reason; it is a mess) but it can follow from analysis results IF one first shows that the absolute (and uniform) convergence of $\sum a_k x^k$ implies the absolute (and uniform) convergence of $\sum k a_k x^{k-1}$.
So, let’s start here: if $\sum a_k x^k$ is absolutely convergent on $(-r, r)$ then so is $\sum k a_k x^{k-1}$.
Here is why: because we are on the open interval of absolute convergence, WLOG assume $x > 0$ and find $r_1$ where $x < r_1 < r$ and where $\sum |a_k| r_1^k$ is also absolutely convergent. Now, since $0 < \frac{x}{r_1} < 1$ we have $k \left( \frac{x}{r_1} \right)^{k-1} \rightarrow 0$, so for all large enough $k$, $k |a_k| x^{k-1} = k \left( \frac{x}{r_1} \right)^{k-1} |a_k| r_1^{k-1} \leq |a_k| r_1^{k-1}$, and $\sum |a_k| r_1^{k-1} = \frac{1}{r_1} \sum |a_k| r_1^k$ converges. So the series converges by direct comparison on $(-r_1, r_1)$, which establishes what we wanted to show.
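For a concrete instance (an illustrative example of mine, not part of the proof): the geometric series has radius 1, and its term-by-term derivative $\sum k x^{k-1}$ still converges on $(-1, 1)$, to $\frac{1}{(1-x)^2}$:

```python
# The geometric series sum x^k has radius of convergence 1.
# Its term-by-term derivative sum k x^(k-1) also converges for
# |x| < 1, and (as term-by-term differentiation predicts) its sum
# is d/dx [1/(1-x)] = 1/(1-x)^2.
x = 0.7
diff_series = sum(k * x**(k - 1) for k in range(1, 500))
assert abs(diff_series - 1 / (1 - x)**2) < 1e-10
```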
Of course, this doesn’t prove that the expected series IS the derivative; we have a bit more work to do.
Again, working on the open interval of absolute convergence, let’s look at:

$\frac{f(x) - f(x_0)}{x - x_0} = \sum a_k \frac{x^k - x_0^k}{x - x_0}$

Now, we use the fact that $\sum k a_k x^{k-1}$ is absolutely convergent on $(-r, r)$: given any $\epsilon > 0$ we can find $M$ so that $\sum_{k=M+1}^{\infty} k |a_k| r_1^{k-1} < \epsilon$, for $r_1$ between $\max(|x|, |x_0|)$ and $r$.

So let’s do that: pick $M$ and then note:

$\frac{f(x) - f(x_0)}{x - x_0} = \sum_{k=1}^{M} a_k \frac{x^k - x_0^k}{x - x_0} + \sum_{k=M+1}^{\infty} a_k \frac{x^k - x_0^k}{x - x_0}$

Now apply the Mean Value Theorem to each term in the second sum:

$\frac{x^k - x_0^k}{x - x_0} = k c_k^{k-1}$

where each $c_k$ is between $x$ and $x_0$ (so $|c_k| < r_1$). By the choice of $M$, the second sum is less than $\epsilon$ in absolute value, $\epsilon$ is arbitrary, and the first sum consists of the first $M$ terms of the “expected” derivative series $\sum k a_k x^{k-1}$.
So, THAT is “term by term” differentiation…and notice that we’ve used our hypothesis of absolute convergence…almost all of it.
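To see term-by-term differentiation in action numerically (an illustrative sketch of my own, using the Maclaurin series of $\sin x$; not part of the proof):

```python
# Differentiating the Maclaurin series of sin(x) coefficient by
# coefficient should give the Maclaurin series of cos(x).
import math

def sin_coef(k):
    # sin(x) = sum over odd k of (-1)^((k-1)/2) x^k / k!
    return 0.0 if k % 2 == 0 else (-1)**((k - 1) // 2) / math.factorial(k)

def deriv_coef(c, k):
    # coefficient of x^k in d/dx sum_j c(j) x^j, i.e. (k+1) c(k+1)
    return (k + 1) * c(k + 1)

x = 1.3
approx = sum(deriv_coef(sin_coef, k) * x**k for k in range(30))
assert abs(approx - math.cos(x)) < 1e-12
```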
Term by term integration
Theoretically, integration combines more easily with infinite summation than differentiation does. But given that we’ve done differentiation, we could get anti-differentiation by showing that $\sum \frac{a_k}{k+1} x^{k+1}$ converges and then differentiating.
But let’s do this independently; it is good for us. And we’ll focus on the definite integral.
We want to show $\int_0^x f(t)\, dt = \sum \frac{a_k}{k+1} x^{k+1}$ (of course, with $[0, x]$ inside the open interval of absolute convergence $(-r, r)$).

Once again, choose $r_1$ with $|x| < r_1 < r$ and $M$ so that $\sum_{k=M+1}^{\infty} |a_k| r_1^k < \epsilon$.

Then $\left| \int_0^x f(t)\, dt - \sum_{k=0}^{M} \frac{a_k}{k+1} x^{k+1} \right| = \left| \int_0^x \sum_{k=M+1}^{\infty} a_k t^k\, dt \right| \leq \int_0^{|x|} \sum_{k=M+1}^{\infty} |a_k| t^k\, dt$ and this is less than (or equal to) $\epsilon |x|$.
But $\epsilon$ is arbitrary and so the result follows, for definite integrals. It is an easy exercise in the Fundamental Theorem of Calculus to extract term by term anti-differentiation.
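A quick numeric illustration (my own example, with the geometric series): integrating $\sum x^k$ term by term on $[0, x]$ should reproduce $\int_0^x \frac{dt}{1-t} = -\ln(1-x)$ for $|x| < 1$:

```python
# Integrating the geometric series sum x^k term by term from 0 to x
# gives sum x^(k+1)/(k+1), which should equal the definite integral
# of 1/(1-t) over [0, x], namely -ln(1-x).
import math

x = 0.4
term_by_term = sum(x**(k + 1) / (k + 1) for k in range(200))
assert abs(term_by_term - (-math.log(1 - x))) < 1e-12
```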
Taylor Series
Ok, now that we can say stuff about a function presented as a power series, what about finding a power series representation for a function, PROVIDED there is one? Note: we’ll need the function to have an infinite number of derivatives on an open interval about the point of expansion. We’ll also need another condition, which we will explain as we go along.
We will work with expanding about $x = 0$. Let $f$ be the function of interest and assume all of the relevant derivatives exist.
Start with $f(x) = f(0) + \int_0^x f'(t)\, dt$, which can be thought of as our “degree 0” expansion plus remainder term.
But now, let’s use integration by parts on the integral, with $u = f'(t)$, $dv = dt$, and $v = t - x$ (it is a clever choice for $v$; an antiderivative just has to work, and we are free to pick the constant of integration).

So now we have:

$f(x) = f(0) + f'(0) x + \int_0^x (x - t) f''(t)\, dt$

It looks like we might run into sign trouble on the next iteration, but we won’t, as we will see: do integration by parts again, this time with $u = f''(t)$, $dv = (x - t)\, dt$, $v = -\frac{(x-t)^2}{2}$, and so we have:

$f(x) = f(0) + f'(0) x + \frac{x^2}{2} f''(0) + \frac{1}{2} \int_0^x (x - t)^2 f'''(t)\, dt$

This turns out to be $f(x) = \sum_{k=0}^{2} \frac{f^{(k)}(0)}{k!} x^k + \frac{1}{2!} \int_0^x (x - t)^2 f^{(3)}(t)\, dt$.
An induction argument yields

$f(x) = \sum_{k=0}^{n} \frac{f^{(k)}(0)}{k!} x^k + \frac{1}{n!} \int_0^x (x - t)^n f^{(n+1)}(t)\, dt$
For the series to exist (and be valid over an open interval) all of the derivatives have to exist and

$\lim_{n \rightarrow \infty} \frac{1}{n!} \int_0^x (x - t)^n f^{(n+1)}(t)\, dt = 0$.
Note: to get the Lagrange-style remainder bound that you see in some texts, let $M_{n+1} = \max |f^{(n+1)}(t)|$ for $t$ between $0$ and $x$, and then $\left| \frac{1}{n!} \int_0^x (x - t)^n f^{(n+1)}(t)\, dt \right| \leq \frac{M_{n+1}}{n!} \left| \int_0^x (x - t)^n\, dt \right| = \frac{M_{n+1} |x|^{n+1}}{(n+1)!}$.
It is a bit trickier to get the equality $R_n = \frac{f^{(n+1)}(c)}{(n+1)!} x^{n+1}$ as the error formula; it is a Mean Value Theorem for integrals calculation.
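As a sanity check on the remainder bound (a numeric sketch of my own, using $f(x) = e^x$ at $x = 1$, where every derivative is $e^t$ and so the max on $[0, 1]$ is $e$):

```python
# Check the Lagrange-style bound |R_n(x)| <= M_(n+1) |x|^(n+1)/(n+1)!
# for f(x) = e^x at x = 1, where M_(n+1) = e on [0, 1].
import math

x, n = 1.0, 10
taylor = sum(x**k / math.factorial(k) for k in range(n + 1))
remainder = abs(math.exp(x) - taylor)
bound = math.e * abs(x)**(n + 1) / math.factorial(n + 1)
assert remainder <= bound   # the actual error respects the bound
assert bound < 1e-7         # and the bound itself is already tiny at n = 10
```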
About the remainder term going to zero: this condition is necessary. Consider the classic counterexample:

$f(x) = \begin{cases} e^{-\frac{1}{x^2}} & x \neq 0 \\ 0 & x = 0 \end{cases}$

It is an exercise in the limit definition of the derivative and L’Hopital’s Rule to show that $f^{(k)}(0) = 0$ for all $k$, and so the Taylor expansion at zero is just the zero function, and this is valid at $x = 0$ only.
As you can see, the function appears to flatten out near $x = 0$, but it really is NOT constant.
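The classic counterexample here is usually $f(x) = e^{-1/x^2}$ (with $f(0) = 0$); assuming that is the one intended, here is a tiny numeric sketch of just how flat it is near zero:

```python
# f(x) = exp(-1/x^2), f(0) = 0: every derivative at 0 vanishes, so the
# Maclaurin "series" is identically zero, yet f is not the zero function.
import math

def f(x):
    return 0.0 if x == 0 else math.exp(-1.0 / x**2)

assert f(0.1) < 1e-43   # flatter near 0 than any polynomial
assert f(1.0) > 0.3     # but certainly not the zero function
```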
Note: of course, using the Taylor method isn’t always the best way to obtain a power series. For example, if we were to try to get the Taylor expansion of $\frac{1}{1+x^2}$ at $x = 0$, it is easier to use the geometric series $\frac{1}{1-u} = \sum u^k$ and substitute $u = -x^2$; the uniqueness of the power series expansion allows for that.
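A numeric check of the substitution trick (an illustrative sketch):

```python
# 1/(1-u) = sum u^k, so substituting u = -x^2 gives
# 1/(1+x^2) = sum (-1)^k x^(2k) on (-1, 1), with no derivatives of
# 1/(1+x^2) computed at all.
x = 0.5
u = -x**2
geometric = sum(u**k for k in range(100))
assert abs(geometric - 1 / (1 + x**2)) < 1e-12
```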