Let me start by saying what this is NOT: it is not an introduction for calculus students (too steep), nor is it intended for experienced calculus teachers. Nor is it a “you should teach it THIS way” or “introduce the concepts in THIS order or emphasize THESE topics” piece; that is for the individual teacher to decide.
Rather, this is a quick overview to help the new teacher (or for the teacher who has not taught it in a long time) decide for themselves how to go about it.
And yes, I’ll be giving a lot of opinions; disagree if you like.
What series will be used for.
Of course, infinite series have applications in probability theory (discrete density functions, expectation and higher moment values of discrete random variables), financial mathematics (perpetuities), etc. and these are great reasons to learn about them. But in calculus, these tend to be background material for power series.
Power series: given $\sum_{k=0}^{\infty} c_k (x-a)^k$, the most important thing is to determine the open interval of absolute convergence; that is, the interval on which $\sum_{k=0}^{\infty} |c_k (x-a)^k|$ converges.
We teach that these intervals are *always* symmetric about $x = a$: the series converges at $x = a$ only, on some open interval $(a - r, a + r)$, or on the whole real line. Side note: this is an interesting place to point out the influence that the calculus of complex variables has on real variable calculus! These open intervals are the most important aspect, as one can prove that one can differentiate and integrate said series “term by term” on the open interval of absolute convergence; sometimes one can extend the results to the boundary of the interval.
Therefore, if time is limited, I tend to focus on material more relevant for series that are absolutely convergent, though there are some interesting (and fun) things one can do with a series that is conditionally convergent (convergent, but not absolutely convergent; e.g. $\sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k}$).
Important principles: I think it is a good idea to first deal with geometric series and then series with positive terms…make that “non-negative” terms.
Geometric series: $\sum_{k=0}^{\infty} ar^k$; here we see that the partial sum $S_n = a + ar + ar^2 + \cdots + ar^n$ is equal to $a\frac{1-r^{n+1}}{1-r}$ for $r \neq 1$; to show this, do the old “shifted sum” trick: write $rS_n = ar + ar^2 + \cdots + ar^{n+1}$, then subtract: $S_n - rS_n = a - ar^{n+1}$, as most of the terms cancel with the subtraction; solving for $S_n$ gives the formula.
Now to show the geometric series converges (convergence being the standard kind: if $S_n = \sum_{k=0}^{n} a_k$ is the n’th partial sum, then the series converges if and only if the sequence of partial sums converges; yes, there are other types of convergence):
Now that we’ve established that $S_n = a\frac{1-r^{n+1}}{1-r}$ for the geometric series, we get convergence exactly when $r^{n+1}$ goes to zero, which happens only if $|r| < 1$; in that case the series converges to $\frac{a}{1-r}$.
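A quick numeric sketch (my own check, not from the post): the closed form from the shifted-sum trick matches a brute-force partial sum, and for $|r| < 1$ the partial sums approach $\frac{a}{1-r}$.

```python
# Check the shifted-sum formula S_n = a*(1 - r**(n+1))/(1 - r) against a
# brute-force partial sum, and watch the limit a/(1 - r) appear.
def geometric_partial_sum(a, r, n):
    """Brute-force S_n = a + a*r + ... + a*r**n."""
    return sum(a * r**k for k in range(n + 1))

a, r = 3.0, 0.5
closed_form = a * (1 - r**11) / (1 - r)          # S_10 via the formula
assert abs(closed_form - geometric_partial_sum(a, r, 10)) < 1e-12
# for |r| < 1 the partial sums approach a/(1 - r) = 6.0
assert abs(geometric_partial_sum(a, r, 60) - a / (1 - r)) < 1e-9
```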
Why geometric series: two of the most common series tests (root and ratio tests) involve a comparison to a geometric series. Also, the geometric series concept is used both in the theory of improper integrals and in measure theory (e. g., showing that the rational numbers have measure zero).
Series of non-negative terms. For now, we’ll assume that $\sum a_k$ has all $a_k \geq 0$ (suppressing the indices).
Main principle: though most texts talk about the various tests, I believe that most of the tests really involve three key principles, two of which are the geometric series and the following result about sequences of positive numbers:
Key sequence result: every monotone bounded sequence of positive numbers converges to its least upper bound.
True: many calculus texts don’t do that much with the least upper bound concept, but I feel it is intuitive enough to at least mention. If the least upper bound is, say, $M$, and $(a_n)$ is the (increasing) sequence in question, then for any small, positive $\epsilon$ there has to be some $a_N$ such that $M - a_N < \epsilon$. Then because $(a_n)$ is monotone, $M - a_n < \epsilon$ for all $n > N$.
The third key principle is “common sense”: if $\sum a_k$ converges (standard convergence), then $a_k \rightarrow 0$ as a sequence. This is pretty clear if the $a_k$ are non-negative; the idea is that the sequence of partial sums cannot converge to a limit unless $a_k$ becomes arbitrarily small. Of course, this is true even if the terms are not all positive.
Secondary results. I think that the next results are “second order” results: the main results depend on these, and these depend on the three key principles we just discussed.
The first of these secondary results is the direct comparison test for series of non-negative terms:
Direct comparison test
If $0 \leq a_k \leq b_k$ and $\sum b_k$ converges, then so does $\sum a_k$. If $\sum a_k$ diverges, then so does $\sum b_k$.
The proof is basically the “bounded monotone sequence” principle applied to the partial sums. I like to call it “if you are taller than an NBA center then you are tall” principle.
Evidently, some see this result as a “just get to something else” result, but it is extremely useful; one can apply this to show that the exponential of a square matrix is defined; it is the principle behind the Weierstrass M-test, etc. Do not underestimate this test!
Absolute convergence: this is the most important kind of convergence for power series, as this is the type of convergence we will have on an open interval. A series $\sum a_k$ is absolutely convergent if $\sum |a_k|$ converges. Now, of course, absolute convergence implies convergence:
Note $0 \leq a_k + |a_k| \leq 2|a_k|$, and if $\sum |a_k|$ converges, then $\sum (a_k + |a_k|)$ converges by direct comparison. Now note that $\sum a_k = \sum (a_k + |a_k|) - \sum |a_k|$ is the difference of two convergent series and therefore converges.
Integral test. This is an important test for convergence at a point. This test assumes that $f$ is a non-negative, non-increasing function on some $[1, \infty)$ (that is, $x_1 > x_2 \Rightarrow f(x_1) \leq f(x_2)$). Then $\sum_{k=1}^{\infty} f(k)$ converges if and only if $\int_1^{\infty} f(x)dx$ converges as an improper integral.
Proof: $\sum_{k=2}^{n} f(k)$ is just a right endpoint Riemann sum for $\int_1^n f(x)dx$, so if the integral converges, the sequence of partial sums is an increasing, bounded sequence and hence converges. Now if the sum converges, note that $\int_1^n f(x)dx \leq \sum_{k=1}^{n-1} f(k)$ (the left endpoint estimate), so the integral can be defined as a limit of a bounded, increasing sequence and therefore converges.
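Here is a numeric sketch of the squeeze (my own check, using the decreasing function $f(x) = \frac{1}{x^2}$, for which the integral can be computed exactly):

```python
# For a decreasing f, the right-endpoint Riemann sum sum_{k=2}^{n} f(k) sits
# under the integral from 1 to n.  With f(x) = 1/x**2 that integral is
# exactly 1 - 1/n, so the partial sums are bounded and the series converges.
def f(x):
    return 1.0 / x**2

n = 10000
right_sum = sum(f(k) for k in range(2, n + 1))   # right-endpoint Riemann sum
integral = 1.0 - 1.0 / n                         # exact value of the integral
assert right_sum < integral                      # bounded, increasing partial sums
```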
Yes, these are crude whiteboards but they get the job done.
Note: we need the hypothesis that $f$ is decreasing (or at least non-increasing). Example: the function $f(x) = |\sin(\pi x)|$ certainly has $\sum_{k=1}^{\infty} f(k)$ converging (every term is zero) but $\int_1^{\infty} f(x)dx$ diverging.
Going the other way, defining $f$ to be $n$ on an interval of width $\frac{1}{n^3}$ centered at each integer $n$ (and zero elsewhere) gives an unbounded function with unbounded sum, since $f(n) = n$, but the integral converges, being bounded by the sum $\sum \frac{1}{n^2}$. The “boxes” get taller and skinnier.
Note: the above shows the integral and sum starting at 0; same principle though.
Now wait a minute: we haven’t really gone over how students will do most of their homework and exam problems. We’ve covered none of these: p-test, limit comparison test, ratio test, root test. Ok, logically, we have, but not practically.
Let’s remedy that. First, start with the “point convergence” tests.
p-test. This says that $\sum_{k=1}^{\infty} \frac{1}{k^p}$ converges if $p > 1$ and diverges otherwise ($p \leq 1$). Proof: integral test.
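A quick numeric illustration (my own sketch): for $p = 2$ the partial sums stabilize near $\frac{\pi^2}{6}$, while for $p = 1$ they track $\ln(n)$ and keep growing.

```python
import math

# Partial sums of 1/k**p: for p = 2 they approach pi**2/6 (Euler), while for
# p = 1 (the harmonic series) they grow like ln(n) + 0.577... without bound.
def p_partial(p, n):
    return sum(1.0 / k**p for k in range(1, n + 1))

assert abs(p_partial(2, 100000) - math.pi**2 / 6) < 1e-4   # settled down
assert p_partial(1, 100000) > 12                            # still climbing
```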
Limit comparison test. Given two series of positive terms, $\sum a_k$ and $\sum b_k$,

suppose $\lim_{k \rightarrow \infty} \frac{a_k}{b_k} = L$.

If $\sum b_k$ converges and $0 \leq L < \infty$, then so does $\sum a_k$.

If $\sum b_k$ diverges and $0 < L \leq \infty$, then so does $\sum a_k$.

I’ll show the “converge” part of the proof: choose $M > L$, then $N$ such that $k > N \Rightarrow \frac{a_k}{b_k} < M$. This means $a_k < M b_k$ for $k > N$, and we get convergence by direct comparison. See how useful that test is?
But note what is going on: it really isn’t necessary for $\lim \frac{a_k}{b_k}$ to exist; for the convergence case it is only necessary that there be some $M$ and $N$ for which $k > N \Rightarrow \frac{a_k}{b_k} < M$; if one is familiar with the limit superior (“limsup”), having $\limsup \frac{a_k}{b_k}$ finite is enough to make the test work.
We will see this again.
Why limit comparison is used: something like $\sum_{k=2}^{\infty} \frac{1}{k^2 - 1}$ clearly converges, but nailing down the proof with direct comparison can be hard (the terms are slightly larger than $\frac{1}{k^2}$). But a limit comparison with $\sum \frac{1}{k^2}$ is pretty easy.
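A numeric look at such a comparison (my own pair of series, chosen for illustration): with $a_k = \frac{1}{k^2 - 1}$ and $b_k = \frac{1}{k^2}$, the ratio $\frac{a_k}{b_k} = \frac{k^2}{k^2 - 1}$ tends to 1, so both series converge together.

```python
# The ratio a_k/b_k exceeds 1 (so naive direct comparison with 1/k**2 fails),
# yet it tends to 1, which is exactly what limit comparison needs.
ratios = [(1.0 / (k**2 - 1)) / (1.0 / k**2) for k in (10, 100, 1000, 10000)]
assert all(r > 1.0 for r in ratios)              # each a_k is bigger than b_k
assert abs(ratios[-1] - 1.0) < 1e-6              # but the ratio tends to 1
```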
Ratio test. This test is most commonly used when the series has powers and/or factorials in it. Basically: given $\sum a_k$ with $a_k > 0$, consider $\lim_{k \rightarrow \infty} \frac{a_{k+1}}{a_k} = L$ (if the limit exists..if it doesn’t..stay tuned).

If $L < 1$, the series converges. If $L > 1$, the series diverges. If $L = 1$, the test is inconclusive.
Note: if it turns out that there exists some $r < 1$ and some $N$ such that for all $k \geq N$ we have $\frac{a_{k+1}}{a_k} \leq r$, then the series converges (we can use the limsup concept here as well).
Why this works: suppose there exists some $r < 1$ and $N$ such that for all $k \geq N$ we have $\frac{a_{k+1}}{a_k} \leq r$. Then write

$\sum_{k=N}^{\infty} a_k = a_N + a_{N+1} + a_{N+2} + a_{N+3} + \cdots$

now factor out $a_N$ to obtain

$a_N \left(1 + \frac{a_{N+1}}{a_N} + \frac{a_{N+2}}{a_N} + \frac{a_{N+3}}{a_N} + \cdots \right)$

Now multiply the terms by 1 in a clever way:

$a_N \left(1 + \frac{a_{N+1}}{a_N} + \frac{a_{N+2}}{a_{N+1}}\frac{a_{N+1}}{a_N} + \frac{a_{N+3}}{a_{N+2}}\frac{a_{N+2}}{a_{N+1}}\frac{a_{N+1}}{a_N} + \cdots \right)$

See where this is going: each ratio is at most $r$, so we have:

$\sum_{k=N}^{\infty} a_k \leq a_N (1 + r + r^2 + r^3 + \cdots)$

which is a convergent geometric series.
See: there is geometric series and the direct comparison test, again.
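The ratio test in action on a series of my own choosing (not from the post): for $a_k = \frac{2^k}{k!}$ the ratio $\frac{a_{k+1}}{a_k}$ is exactly $\frac{2}{k+1}$, which tends to $0 < 1$, so the series converges (in fact, to $e^2$).

```python
import math

# For a_k = 2**k / k!, the ratio a_{k+1}/a_k collapses to 2/(k+1) -> 0 < 1,
# so the ratio test gives convergence; the sum is the Taylor series for e**2.
def a(k):
    return 2.0**k / math.factorial(k)

assert abs(a(11) / a(10) - 2.0 / 11) < 1e-12     # the ratio is 2/(k+1)
assert abs(sum(a(k) for k in range(30)) - math.exp(2)) < 1e-9
```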
Root Test. No, this is NOT the same as the ratio test. In fact, it is a bit “stronger” than the ratio test in that the root test will work for anything the ratio test works for, but there are some series the root test works for where the ratio test comes up empty.
I’ll state the “lim sup” version of the root test: if there exists some $r < 1$ and $N$ such that, for all $k > N$, we have $(a_k)^{\frac{1}{k}} \leq r$, then the series converges (exercise: find the “divergence version”).
As before: if the condition is met, then $a_k \leq r^k$ for $k > N$, so the original series converges by direct comparison with the geometric series $\sum r^k$.
Now as far as my previous remark about the ratio test: consider the series $\sum_{k=1}^{\infty} a_k$ where $a_k = \frac{1}{2^k}$ for even $k$ and $a_k = \frac{1}{3^k}$ for odd $k$.

Yes, this series is bounded by the convergent geometric series with $r = \frac{1}{2}$ and therefore converges by direct comparison. And the limsup version of the root test works as well, since $(a_k)^{\frac{1}{k}}$ is always $\frac{1}{2}$ or $\frac{1}{3}$.

But the ratio test is a disaster, as $\frac{a_{2n}}{a_{2n-1}} = \frac{3^{2n-1}}{2^{2n}}$, which is unbounded..but $\limsup_{k \rightarrow \infty} (a_k)^{\frac{1}{k}} = \frac{1}{2} < 1$.
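A numeric check on such a series (my own version of this kind of example, possibly not the one the post had in mind): $a_k = 2^{-k}$ for even $k$ and $3^{-k}$ for odd $k$.

```python
# The k-th roots of a_k are all 1/2 or 1/3 (limsup 1/2 < 1), so the root test
# succeeds; the consecutive ratios a_{k+1}/a_k are unbounded, so the ratio
# test fails on the same series.
def a(k):
    return 2.0**-k if k % 2 == 0 else 3.0**-k

roots = [a(k) ** (1.0 / k) for k in range(1, 40)]
assert max(roots) <= 0.5 + 1e-12                 # limsup of the roots is 1/2
ratios = [a(k + 1) / a(k) for k in range(1, 40)]
assert max(ratios) > 1000                        # the ratios blow up
```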
What about non-absolute convergence (aka “conditional convergence”)?
Series like $\sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k}$ converge but do NOT converge absolutely (apply the p-test to $\sum \frac{1}{k}$). On one hand, such series are a LOT of fun..but the convergence is very slow and unstable, and so one might say that these series are not as important as series that converge absolutely. But there is a lot of interesting mathematics to be had here.
So, let’s chat about these a bit.
We say $\sum a_k$ is conditionally convergent if the series converges but $\sum |a_k|$ diverges.
One elementary tool for dealing with these is the alternating series test:
For this, let $a_k > 0$ and $a_{k+1} \leq a_k$ for all $k$.

Then $\sum_{k=1}^{\infty} (-1)^{k+1} a_k$ converges if and only if $a_k \rightarrow 0$ as a sequence.
That the sequence of terms goes to zero is necessary. That it is sufficient in this alternating case: first note that the partial sums $S_n$ are bounded above by $a_1$ (as the magnitudes get steadily smaller) and below by $a_1 - a_2$ (same reason). Note also that $S_{2n+2} - S_{2n} = a_{2n+1} - a_{2n+2} \geq 0$, so the partial sums of even index form an increasing bounded sequence and therefore converge to some limit, say, $L$. But $S_{2n+1} = S_{2n} + a_{2n+1}$ and $a_{2n+1} \rightarrow 0$, so by a routine “epsilon-N” argument the odd partial sums converge to $L$ as well.
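A numeric sketch of this squeeze (my own check) on the alternating harmonic series $1 - \frac{1}{2} + \frac{1}{3} - \cdots$: the even-indexed partial sums increase, the odd-indexed ones decrease, and both close in on the limit $\ln(2)$.

```python
import math

# Partial sums of the alternating harmonic series bracket ln(2) from both
# sides, and the error after n terms is below the magnitude of the next term.
def S(n):
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

assert S(2) < S(4) < math.log(2) < S(3) < S(1)   # the squeeze from both sides
assert abs(S(100001) - math.log(2)) < 1e-4       # error < 1/(n+1)
```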
Of course, there are conditionally convergent series that are NOT alternating. And conditionally convergent series have some interesting properties.
One of the most interesting properties is that such series can be “rearranged” (“derangement” in Knopp’s book) to converge to any number of one’s choice, to diverge to infinity, or to have no limit at all.
Here is an outline of the arguments:
To rearrange a series to converge to, say, $L$: start with the positive terms (whose sum must diverge, as the series is conditionally convergent) and add them up until the partial sum just exceeds $L$; stop just after $L$ is exceeded. Call that partial sum $P_1$. Note: this could take 0 terms. Now use the negative terms to go to the left of $L$, stopping at the first partial sum past it. Call that $P_2$. Then move to the right, past $L$ again, with the positive terms..note that the overshoot is smaller, as the terms are smaller. This is $P_3$. Then go back again to the left of $L$. Repeat.
Note that at every stage, every partial sum after the first one past $L$ is bracketed between the two most recent overshoot points, and the width of that bracket is at most the magnitude of the last term used, which shrinks to become arbitrarily small.
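The argument can be run numerically (my own sketch, on the alternating harmonic series, with an arbitrary target of 1.5): greedily add unused positive terms $1, \frac{1}{3}, \frac{1}{5}, \ldots$ while below the target and unused negative terms $-\frac{1}{2}, -\frac{1}{4}, \ldots$ while above it.

```python
# Greedy rearrangement of 1 - 1/2 + 1/3 - ...: push right with the next odd
# reciprocal while at or below the target, pull left with the next even
# reciprocal while above it.  The partial sums home in on the target.
def rearranged_partial_sum(target, steps):
    pos, neg = 1, 2            # next odd and even denominators to use
    s = 0.0
    for _ in range(steps):
        if s <= target:
            s += 1.0 / pos     # positive term: step to the right
            pos += 2
        else:
            s -= 1.0 / neg     # negative term: step back to the left
            neg += 2
    return s

assert abs(rearranged_partial_sum(1.5, 200000) - 1.5) < 1e-3
```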
To rearrange a series to diverge to infinity: add positive terms until the partial sum exceeds 1. Add a negative term. Then add positive terms until the partial sum exceeds 2. Add a negative term. Repeat this for each positive integer $n$.
Have fun with this; you can have the partial sums end up all over the place.
That’s it for now; I might do power series later.