I am going to take a break from the Lebesgue stuff and maybe write more on that tomorrow.

My numerical analysis class just turned in some homework, and some students clearly have misunderstandings about Taylor series and power series. I’ll provide some helpful hints for perplexed students.

For the experts who might be reading this: my assumption is that we are dealing with functions which are real analytic over some interval. To students: this means that the function can be differentiated as often as we’d like, that its Taylor series converges absolutely on some open interval about the point of expansion, and that the remainder term goes to zero as the number of terms approaches infinity.

This post will be about **computing** such a series.

First, I’ll give a helpful reminder that is crucial in calculating these series: a Taylor series is really just a power series representation of a function. And if one finds a power series which represents a function over a given interval and is expanded about a given point, THAT SERIES IS UNIQUE, no matter how you come up with it. I’ll explain with an example:

Say you want to represent $f(x) = \frac{1}{1-x}$ by a series over the interval $(-1,1)$. You could compute it this way: you probably learned about the geometric series and that $\frac{1}{1-x} = \sum_{k=0}^{\infty} x^k$ for $|x| < 1$.

Well, you could also compute it by Taylor’s theorem, which says that such a series (expanded about $x = 0$) can be obtained by: $f(x) = \sum_{k=0}^{\infty} \frac{f^{(k)}(0)}{k!} x^k$

If you do such a calculation for $f(x) = \frac{1}{1-x}$ one obtains $f^{(k)}(x) = \frac{k!}{(1-x)^{k+1}}$, $f^{(k)}(0) = k!$, and plugging into Taylor’s formula leads to the usual geometric series $\sum_{k=0}^{\infty} x^k$. That is, the series can be calculated by any valid method; one does NOT need to retreat to the Taylor definition for calculation purposes.
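One can also check numerically that the partial sums (the Taylor polynomials) really do converge to $\frac{1}{1-x}$ inside the interval. A quick Python sketch; the helper name and test values are mine:

```python
def geometric_partial_sum(x, n):
    """Sum of x^k for k = 0..n: the degree-n Taylor polynomial of 1/(1-x) about 0."""
    return sum(x**k for k in range(n + 1))

x = 0.5
exact = 1 / (1 - x)  # = 2.0
for n in (2, 5, 10, 20):
    # the partial sums approach the exact value as n grows
    print(n, geometric_partial_sum(x, n), exact)
```

For $|x| \geq 1$ the same partial sums blow up, which is why the interval of convergence matters.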

Example: in the homework problem, students were asked to calculate Taylor polynomials (of various orders and about $x_0 = 0$) for a function that looked like this:

$f(x) = \frac{x^2}{1+x^2}$. Some students tried to calculate the various derivatives and plug into Taylor’s formula, with grim results. It is much easier than that if one remembers that power series are unique! Sure, one CAN use Taylor’s formula, but that doesn’t mean that one should. Instead it is much easier if one remembers that $\frac{1}{1-x} = \sum_{k=0}^{\infty} x^k$. Now to get $\frac{1}{1+x^2}$ one just substitutes $-x^2$ for $x$ and obtains: $\frac{1}{1+x^2} = \sum_{k=0}^{\infty} (-1)^k x^{2k}$. Then $f(x) = \frac{x^2}{1+x^2} = 1 - \frac{1}{1+x^2}$ and one subtracts off to obtain the full power series: $f(x) = 1 - \sum_{k=0}^{\infty} (-1)^k x^{2k} = \sum_{k=1}^{\infty} (-1)^{k+1} x^{2k} = x^2 - x^4 + x^6 - \cdots$
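The substitution trick can be sanity-checked numerically. A quick Python sketch, using $\frac{1}{1+x^2}$ as the illustrative case (the function name and test point are mine):

```python
# Power series for 1/(1+x^2) via substitution into the geometric series,
# rather than via derivatives: substituting -x^2 for x in
# 1/(1-x) = sum x^k gives 1/(1+x^2) = sum (-1)^k x^(2k), valid for |x| < 1.

def series_approx(x, n):
    """Partial sum of (-1)^k x^(2k) for k = 0..n."""
    return sum((-1)**k * x**(2 * k) for k in range(n + 1))

x = 0.3
# the partial sum should be very close to the function value
print(series_approx(x, 10), 1 / (1 + x**2))
```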

Now calculating the bound for the remainder after $n$ terms is, in general, a pain. Sure, one can estimate with a graph, but that sort of defeats the point of approximating to begin with; instead, one can use rules of thumb which overstate the magnitude of the remainder term.
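To illustrate, for the geometric series the remainder is known exactly, so one can compare it against a cruder bound. A Python sketch (the particular “rule of thumb” bound here is just an illustration, not a formula from the homework):

```python
# For 1/(1-x), the remainder after the degree-n partial sum is exact:
# R_n = x^(n+1)/(1-x). A crude bound such as |x|^(n+1)/(1-|x|) is safe
# but overstates the true remainder when x < 0.

def remainder(x, n):
    """Actual error of the degree-n geometric partial sum for 1/(1-x)."""
    partial = sum(x**k for k in range(n + 1))
    return abs(1 / (1 - x) - partial)

x, n = -0.5, 8
actual = remainder(x, n)
thumb = abs(x)**(n + 1) / (1 - abs(x))  # illustrative rule-of-thumb bound
print(actual, thumb)  # the rule-of-thumb bound is larger than the true error
```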


Would you mind a question from a high school math teacher? I’m teaching this topic right now as part of the AP Calc. BC curriculum. Recently, I asked my students for a Taylor series (c=0) for f(x) = -1/(1 + x)^2. I expected that students would simply take the derivative of the series for 1/(1 + x), and indeed some did. They got Sigma (-1)^k*k*x^(k – 1) for k = 0 to inf.

However, another group expanded (1 + x)^2, then rewrote the result to get -1/(1 – (-2x – x^2)), then used that to get to the geometric series Sigma (-1)^(k+1)*(x^2 + 2x)^k, k=0 to inf.

The two series generate two different sets of terms, and I can’t see them being equivalent algebraically. However, Excel shows them converging to the same value for x close to 0. Is it possible for a function to have two different series representations? I thought not, and yet these do seem to be. Can you help us, please?

Comment by Lawrence Bickford — February 26, 2013 @ 5:21 pm

Four comments:

1. Power series expanded about the same point (in this case, zero) are unique. Once you found one, you found them all. 🙂

2. The second method, at least formally (up to a sign difference), gives the same coefficients as the first. To see this, calculate, say, the first 4 non-zero terms. What may be deceptive is that, say, $(x^2 + 2x)^3$ does NOT break down neatly in powers of $x$; you have some expansion and grouping to do.

3. The second method only makes sense for $|x^2 + 2x|$ less than 1. Domains are important!

4. The second method really gives a “series of functions” type answer, which turns into the power series answer after the “grouping of like powers of $x$” operation. It turns out that there is a theorem that says that these sorts of operations work over the interval of absolute convergence; this is where those “uniform convergence” theorems from your analysis class come in.
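For point 2, a quick numerical check (a Python sketch; the function names are mine) shows both of the series in question converging to $-1/(1+x)^2$ for $x$ near 0:

```python
# Compare the two student series for -1/(1+x)^2 about 0.

def series_deriv(x, n):
    """Partial sum of (-1)^k * k * x^(k-1), k = 1..n (the k = 0 term vanishes)."""
    return sum((-1)**k * k * x**(k - 1) for k in range(1, n + 1))

def series_geom(x, n):
    """Partial sum of (-1)^(k+1) * (x^2 + 2x)^k, k = 0..n."""
    return sum((-1)**(k + 1) * (x**2 + 2 * x)**k for k in range(n + 1))

x = 0.1            # here |x^2 + 2x| = 0.21 < 1, so both series converge
target = -1 / (1 + x)**2
print(series_deriv(x, 30), series_geom(x, 30), target)
```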

Comment by collegemathteaching — February 26, 2013 @ 8:02 pm

Thank you very much for the responses. I overlooked the obvious (#2). Thanks again.

Comment by Lawrence Bickford — February 27, 2013 @ 12:48 pm