College Math Teaching

April 28, 2023

Taylor Polynomials without series (advantages, drawbacks)

Filed under: calculus, series, Taylor polynomial., Taylor Series — oldgote @ 12:32 am

I was in a weird situation this semester in my “applied calculus” (aka “business calculus”) class. I had an awkward amount of time left (1 week) and I still wanted to do something with Taylor polynomials, but I had nowhere near enough time to cover infinite series and power series.

So, I just went ahead and introduced it “user’s manual” style, knowing that I could justify this, if I had to (and no, I didn’t), even without series. BUT there are some drawbacks too.

Let’s see how this goes. We’ll work with series centered at c =0 (expand about 0) and assume that f has as many continuous derivatives as desired on an interval connecting 0 to x .

Now we calculate: \int^x_0 f'(t) dt = f(x) -f(0) , of course. But we could do the integral another way: let's use parts and say u = f'(t), dv = dt \rightarrow du = f''(t) dt, v = (t-x) . Note the choice for v and that x is a constant in the integral. We then get f(x) -f(0)=\int^x_0 f'(t) dt = (f'(t)(t-x))|^x_0 -\int^x_0f''(t)(t-x) dt . Evaluation:

f(x) =f(0)+f'(0)x -\int^x_0f''(t)(t-x) dt and we've completed the first step.

Though we *could* do the inductive step now, it is useful to grind through a second iteration to see the pattern.

We take our expression and compute \int^x_0f''(t)(t-x) dt by parts again, with u = f''(t), dv =(t-x) dt \rightarrow du =f'''(t) dt, v = {(t-x)^2 \over 2!} and insert into our previous expression:

f(x) =f(0)+f'(0)x - (f''(t){(t-x)^2 \over 2!})|^x_0 + \int^x_0 f'''(t){(t-x)^2 \over 2!} dt which works out to:

f(x) = f(0)+f'(0)x +f''(0){x^2 \over 2} + \int^x_0 f'''(t){(t-x)^2 \over 2!} dt and note the alternating sign of the integral.

Now to use induction: assume that:

f(x) = f(0)+f'(0)x +f''(0){x^2 \over 2} + ....f^{(k)}(0){x^k \over k!} + (-1)^k \int^x_0 f^{(k+1)}(t) {(t-x)^k \over k!} dt

Now let’s look at the integral: as usual, use parts as before and we obtain:

(-1)^k ((f^{(k+1)}(t) {(t-x)^{k+1} \over (k+1)!})|^x_0 - \int^x_0 f^{(k+2)}(t) {(t-x)^{k+1} \over (k+1)!} dt ). Taking some care with the signs we end up with

(-1)^k (-f^{(k+1)}(0){(-x)^{k+1} \over (k+1)! } )+ (-1)^{k+1}\int^x_0 f^{(k+2)}(t) {(t-x)^{k+1} \over (k+1)!} dt which works out to (-1)^{2k+2} (f^{(k+1)}(0) {x^{k+1} \over (k+1)!} )+ (-1)^{k+1}\int^x_0 f^{(k+2)}(t) {(t-x)^{k+1} \over (k+1)!} dt .

Substituting this evaluation into our inductive step equation gives the desired result.
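For the record, the completed induction gives, for every n \geq 0 :

f(x) = f(0)+f'(0)x +f''(0){x^2 \over 2!} + ....+f^{(n)}(0){x^n \over n!} + (-1)^n \int^x_0 f^{(n+1)}(t) {(t-x)^n \over n!} dt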

And note: NOTHING was assumed except for f having the required number of continuous derivatives!

BUT…yes, there is a catch. The integral is often regarded as a "correction term." But the Taylor polynomial is really only useful so long as the integral can be made small. And that is the issue with this approach: there are times when the integral cannot be made small; it is possible that x is far enough out that the associated power series does NOT converge on (-x, x) , and the integral picks that up, but the problem may well be hidden, or at least non-obvious.

And that is why, in my opinion, it is better to do series first.

Let’s show an example.

Consider f(x) = {1 \over 1+x } . We know from work with the geometric series that its series expansion is 1 -x +x^2-x^3....+(-1)^k x^k + .... and that the interval of convergence is (-1,1) . But note that f is smooth over [0, \infty) and so our Taylor polynomial, with integral correction, should work for x > 0 .

So, noting that f^{(k)}(x) = (-1)^k(k!)(1+x)^{-(k+1)} , our k-th Taylor polynomial relation is:

f(x) =1-x+x^2-x^3 .....+(-1)^kx^k +(-1)^k \int^x_0 (-1)^{k+1}(k+1)!{1 \over (1+t)^{k+2} } {(t-x)^k \over k!} dt

Let’s focus on the integral; the “remainder”, if you will.

Rewrite it as: (-1)^{2k+1} (k+1) \int^x_0 ({(t -x) \over (t+1) })^k {1 \over (t+1)^2} dt .

Now this integral really isn’t that hard to do, if we use an algebraic trick:

Rewrite ({(t -x) \over (t+1) })^k  = ({(t+1 -x-1) \over (t+1) })^k = (1-{(x+1) \over (t+1) })^k

Now the integral is a simple substitution integral: let u = 1-{(x+1) \over (t+1) } \rightarrow du = (x+1)( {1 \over (t+1)})^2 dt so our integral is transformed into:

(-1) ({k+1 \over x+1}) \int^0_{-x} u^{k} du = (-1) ({k+1 \over x+1}) {0^{k+1}-(-x)^{k+1} \over k+1} = {1 \over (x+1)}(-x)^{k+1} =(-1)^{k+1}{1 \over (x+1)}x^{k+1}

This remainder cannot be made small if x \geq 1 , no matter how big we make k .

But, in all honesty, this remainder could have been computed with simple algebra.

{1 \over x+1} =1-x+x^2-....+(-1)^k x^k + R and now solve for R algebraically.
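If you want to see the failure numerically, here is a quick sketch (my addition, not part of the original argument): it compares the k-th Taylor polynomial of f(x) = {1 \over 1+x} with f itself, along with the remainder {x^{k+1} \over 1+x} computed above.

```python
# Quick numerical check (illustrative sketch): the error of the k-th Taylor
# polynomial of 1/(1+x) matches x^(k+1)/(1+x) and shrinks only when |x| < 1.

def taylor_poly(x, k):
    """k-th Taylor polynomial of 1/(1+x) about 0: 1 - x + x^2 - ... + (-x)^k."""
    return sum((-x) ** j for j in range(k + 1))

f = lambda x: 1.0 / (1.0 + x)

for x in (0.5, 1.5):
    print("x =", x)
    for k in (5, 10, 20, 40):
        err = abs(f(x) - taylor_poly(x, k))
        predicted = x ** (k + 1) / (1 + x)  # the remainder derived above
        print("  k =", k, " error =", err, " predicted =", predicted)
```

At x = 0.5 the error collapses as k grows; at x = 1.5 it blows up, exactly as the remainder formula predicts.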

The larger point is that the "error" is hidden in the integral remainder term, and this can be tough to see in the case where the associated Taylor series has a finite radius of convergence even though the function itself is smooth on the whole real line, or on a half line.

May 25, 2021

Power series: review for inexperienced calculus teachers

Filed under: infinite series, Power Series, series, Taylor Series — oldgote @ 5:59 pm

This is “part 2” of a previous post about series. Once again, this is not designed for students seeing the material for the first time.

I’ll deal with general power series first, then later discuss Taylor Series (one method of obtaining a power series).

General Power Series Unless otherwise stated, we will be dealing with series such as \sum a_n x^n though, of course, a series can be expanded about any point in the real line (e. g. \sum a_n (x-c)^n ). Note: unless it is needed, I'll be suppressing the index in the summation sign.

So, IF we have a function f(x) = \sum a_n x^n = a_0 + a_1 x + a_2 x^2 + ....a_k x^k + a_{k+1} x^{k+1} ..

what do we mean by this and what can we say about it?

The first thing to consider, of course, is convergence. One main fact is that the open interval of convergence of a power series centered at 0 is either:
1. Non-existent: the series converges at x = 0 only (e. g. \sum_{k=0} (k!)x^k ; try the ratio test) or

2. Converges absolutely on (-r, r) for some r >0 and diverges for all |x| > r (like a geometric series) (anything possible for |x| = r ) or

3. Converges absolutely on the whole real line.

Many texts state this result but do not prove it. I can understand why, as this result is really an artifact of the calculus of a complex variable. But it won't hurt to sketch out a proof, at least for the "converge" case on (-r, r) , so I'll do that.

So, let's assume that \sum a_n c^n converges (either absolutely OR conditionally). So, by the divergence test, we know that the sequence of terms a_n c^n \rightarrow 0 . So we can find some index M such that n > M \rightarrow |a_n c^n| < 1 .

So for n > M write: |a_n x^n| = |a_n c^n| | {x \over c} |^n < | {x \over c} |^n and \sum | {x \over c} |^n is a convergent geometric series for |x| < |c| , so \sum a_n x^n converges absolutely by direct comparison. The divergence case follows from a simple reversal of the inequalities (that is, if the series diverged at x = c ).

Though the relation between real and complex variables might not be apparent, it CAN be useful. Here is an example: suppose one wants to find the open interval of convergence of, say, the series representation for {1 \over 3+2x^2 } ? Of course, the function itself is continuous on the whole real line. But to find the open interval of convergence, look for the complex root of 3+2 x^2 that is closest to zero. That would be x = \sqrt{{3 \over 2}} i so the interval is |x| <  \sqrt{{3 \over 2}}
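As a cross-check with purely real-variable methods: rewriting with the geometric series,

{1 \over 3+2x^2} = {1 \over 3} \cdot {1 \over 1+ {2 \over 3}x^2} = {1 \over 3}\sum^{\infty}_{k=0} (-1)^k ({2 \over 3})^k x^{2k}

and this converges exactly when {2 \over 3}x^2 < 1 , that is, when |x| < \sqrt{{3 \over 2}} , agreeing with the complex-root calculation.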

So, what about a function defined as a power series?

For one, a power series expansion of a function at a specific point (say, x =0 ) is unique:

Say a_0 + a_1 x + a_2 x^2 + a_3 x^3.....  = b_0 + b_1 x + b_2 x^2 +  b_3 x^3 .... on the interval of convergence, then substituting x=0 yields a_0 = b_0 . So subtract from both sides and get: a_1 x + a_2 x^2 + a_3 x^3..  =  b_1 x + b_2 x^2 +  b_3 x^3 .... Now assuming one can “factor out an x” from both sides (and you can) we get a_1 = b_1 , etc.

Yes, this should have properly been phrased as a limit, and I've yet to show that f(x) = \sum a_n x^n is continuous on its open interval of absolute convergence.

Now if you say “doesn’t that follow from uniform convergence which follows from the Weierstrass M test” you’d be correct…and you are probably wasting your time reading this.

Now if you remember hearing those words but could use a refresher, here goes:

We want |f(x) -f(t)| < \epsilon for x, t sufficiently close AND both within the open interval of absolute convergence, say (-r, r) . Choose s with max(|x|, |t|) < s < r , then M such that {\epsilon \over 4} > \sum_{k=M} |a_k s^k| , and \delta >0 where |x-t| < \delta \rightarrow {\epsilon \over 2 }  >| \sum_{k=0} ^{M-1} a_k x^k  -  \sum_{k=0} ^{M-1} a_k t^k | (these are just polynomials).

The rest follows: |f(x) -f(t)| = | \sum_{k=0} ^{M-1} a_k x^k  -  \sum_{k=0} ^{M-1} a_k t^k  + \sum_{k=M} a_k x^k - \sum_{k=M} a_k t^k | \leq   | \sum_{k=0} ^{M-1} a_k x^k  -  \sum_{k=0} ^{M-1} a_k t^k | + \sum_{k=M} |a_k x^k| + \sum_{k=M} |a_k t^k| \leq {\epsilon \over 2} + 2\sum_{k=M} |a_k s^k| < \epsilon

Calculus On the open interval of absolute convergence, a power series can be integrated term by term and differentiated term by term. Neither result, IMHO, is “obvious.” In fact, sans extra criteria, for series of functions, in general, it is false. Quickly: if you remember Fourier Series, think about the Fourier series for a rectangular pulse wave; note that the derivative is zero everywhere except for the jump points (where it is undefined), then differentiate the constituent functions and get a colossal mess.

Differentiation: most texts avoid the direct proof (with good reason; it is a mess) but it can follow from analysis results IF one first shows that the absolute (and uniform) convergence of \sum a_n x^n implies the absolute (and uniform) convergence of \sum n a_n x^{n-1}

So, let’s start here: if \sum a_n x^n is absolutely convergent on (-r, r) then so is \sum a_n nx^{n-1} .

Here is why: because we are on the open interval of absolute convergence, WLOG assume x > 0 and find s > 0 where \sum |a_n| (x+s)^n is also convergent. Now, note that (x+s)^n = x^n + s nx^{n-1}  + ...+s^n > snx^{n-1} for x, s > 0 , so the series s \sum |a_n| nx^{n-1} converges by direct comparison, which establishes what we wanted to show on (-r, r) .

Of course, this doesn’t prove that the expected series IS the derivative; we have a bit more work to do.

Again, working on the open interval of absolute convergence, let’s look at:

lim_{x \rightarrow t} {f(x) -f(t) \over x-t } =  lim_{x \rightarrow t}  {1 \over x-t} (\sum a_n x^n -\sum a_n t^n)

Now, we use the fact that \sum a_n n x^{n-1} is absolutely convergent on (-r, r) : given any \epsilon > 0 we can find n so that \sum_{k=n} |a_k| k s^{k-1} < \epsilon for all s between x and t .

So let’s do that: pick \epsilon >0 and then note:

lim_{x \rightarrow t}  {1 \over x-t} (\sum a_n x^n -\sum a_n t^n)  =

lim_{x \rightarrow t}  {1 \over x-t} (\sum_{k=0}^{n-1}a_k( x^k-t^k)  +\sum_{k=n} a_k (x^k-t^k) ) =

lim_{x \rightarrow t}  \sum_{k=0}^{n-1} a_k {x^k-t^k \over x-t} + \sum_{k=n} a_k {x^k-t^k \over x-t}

=  \sum_{k=1}^{n-1} a_k k t^{k-1}  + lim_{x \rightarrow t}    \sum_{k=n} a_k {x^k-t^k \over x-t}

Now apply the Mean Value Theorem to each term in the second sum:

=  \sum_{k=1}^{n-1} a_k k t^{k-1}  +  lim_{x \rightarrow t}  \sum_{k=n} a_k k(s_k)^{k-1} where each s_k is between x and t . By the choice of n the second sum is less than \epsilon in absolute value, and \epsilon is arbitrary; the first sum is the first n terms of the "expected" derivative series.

So, THAT is “term by term” differentiation…and notice that we’ve used our hypothesis …almost all of it.
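A quick numerical sanity check (my addition; a sketch, not part of the proof): differentiate the partial sums of \sum {x^k \over k!} term by term and compare with the known derivative of e^x .

```python
import math

# Term-by-term differentiation check on e^x = sum x^k/k!:
# the differentiated partial sums should approach e^x as well.

def diff_partial_sum(x, n):
    """Term-by-term derivative of sum_{k=0}^{n} x^k/k!, i.e. sum k x^(k-1)/k!."""
    return sum(k * x ** (k - 1) / math.factorial(k) for k in range(1, n + 1))

x = 2.0
for n in (5, 10, 20):
    print(n, diff_partial_sum(x, n), math.exp(x))
```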

Term by term integration

Theoretically, integration combines more easily with infinite summation than differentiation does. But given we've done differentiation, we could get anti-differentiation by showing that \sum a_k {x^{k+1} \over k+1} converges and then differentiating.

But let’s do this independently; it is good for us. And we’ll focus on the definite integral.

Consider \int^b_a \sum a_k x^k dx (of course, [a,b] \subset (-r, r) ).

Once again, choose n so that \sum_{k=n} |a_k x^k| < \epsilon for all x \in [a,b] .

Then \int^b_a  \sum a_k x^k dx  =   \int^b_a \sum_{k=0}^{n-1} a_kx^k dx + \int^b_a \sum_{k=n} a_k x^k dx and this is less than (or equal to)

\sum_{k=0}^{n-1} \int^b_a a_kx^k dx + \epsilon (b-a)

But \epsilon is arbitrary and so the result follows, for definite integrals. It is an easy exercise in the Fundamental Theorem of Calculus to extract term by term anti-differentiation.
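For a concrete instance (again, my illustrative sketch): integrating the geometric series \sum x^k = {1 \over 1-x} term by term over [0, b] should produce \sum {b^{k+1} \over k+1} = -ln(1-b) for |b| < 1 .

```python
import math

# Term-by-term integration check: the integral over [0, b] of sum x^k equals
# sum b^(k+1)/(k+1), which should converge to -ln(1 - b) for |b| < 1.

b = 0.5
for n in (10, 20, 40):
    term_by_term = sum(b ** (k + 1) / (k + 1) for k in range(n + 1))
    print(n, term_by_term, -math.log(1 - b))
```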

Taylor Series

Ok, now that we can say stuff about a function presented as a power series, what about finding a power series representation for a function, PROVIDED there is one? Note: we’ll need the function to have an infinite number of derivatives on an open interval about the point of expansion. We’ll also need another condition, which we will explain as we go along.

We will work with expanding about x = 0 . Let f(x) be the function of interest and assume all of the relevant derivatives exist.

Start with \int^x_0 f'(t) dt  = f(x) -f(0) \rightarrow f(x) =f(0) + \int ^x_0 f'(t) dt which can be thought of as our “degree 0” expansion plus remainder term.

But now, let's use integration by parts on the integral with u = f'(t), dv = dt, du = f''(t) dt and v = (t-x)  (it is a clever choice for v ; it just has to work).

So now we have: f(x) = f(0) + (f'(t)(t-x))|^x_0 -  \int^x_0f''(t) (t-x) dt = f(0) + xf'(0) -    \int^x_0f''(t) (t-x) dt

It looks like we might run into sign trouble on the next iteration, but we won’t as we will see: do integration by parts again:

u =f''(t), dv = (t-x) dt, du = f'''(t) dt, v = {1 \over 2} (t-x)^2 and so we have:

f(x) =f(0) + xf'(0) -( (f''(t) {1 \over 2}(t-x)^2)|^x_0 -{1 \over 2} \int^x_0 f'''(t)(t-x)^2 dt )

This turns out to be f(0) +xf'(0) +{1 \over 2} f''(0)x^2 + {1 \over 2}\int^x_0 f'''(t) (t-x)^2 dt .

An induction argument yields

f(x) = \sum^n_{k=0}  f^{(k)}(0){1 \over k!}x^k +{1 \over n!} \int^x_0 f^{(n+1)}(t) (x-t)^n dt

For the series to exist (and be valid over an open interval) all of the derivatives have to exist, and

lim_{n \rightarrow \infty}  {1 \over n!} \int^x_0 f^{(n+1)}(t) (x-t)^n dt  = 0 .

Note: to get the Lagrange remainder formula that you see in some texts, let M = max \{|f^{(n+1)} (t)| \} for t \in [0,x] and then |  {1 \over n!} \int^x_0 f^{(n+1)}(t) (x-t)^n dt | \leq {1 \over n!} M \int^x_0 (x-t)^n dt = M{x^{n+1} \over (n+1)!}

It is a bit trickier to get the equality f^{(n+1)}(\eta) {x^{n+1} \over (n+1)!} as the error formula; it is a Mean Value Theorem for integrals calculation.

About the remainder term going to zero: this is necessary. Consider the classic counterexample:

f(x) = \begin{cases} e^{-1 \over x^2} & \text{for } x \neq 0  \\ 0 & \text{otherwise} \end{cases}

It is an exercise in the limit definition and L’Hopital’s Rule to show that f^{(k)} (0) = 0 for all k \in \{0, 1, 2, ... \} and so a Taylor expansion at zero is just the zero function, and this is valid at 0 only.

As a graph suggests, the function appears to flatten out near x =0 , but it really is NOT constant.
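Here is a small numerical illustration (mine, not from the original post) of just how flat this function is: the difference quotients at 0 collapse to zero very quickly.

```python
import math

# The difference quotients f(h)/h of f(x) = exp(-1/x^2), f(0) = 0, vanish
# rapidly as h -> 0, consistent with f'(0) = 0; for small enough h the value
# of f even underflows to 0.0 in floating point, which itself shows the flatness.

def f(x):
    return 0.0 if x == 0 else math.exp(-1.0 / x ** 2)

for h in (0.5, 0.2, 0.1, 0.05, 0.03):
    print(h, f(h) / h)
```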

Note: of course, using the Taylor method isn’t always the best way. For example, if we were to try to get the Taylor expansion of {1 \over 1+x^2} at x = 0 it is easier to use the geometric series for {1 \over 1-u} and substitute u= -x^2; the uniqueness of the power series expansion allows for that.

May 21, 2021

Introduction to infinite series for inexperienced calculus teachers

Filed under: calculus, mathematics education, pedagogy, Power Series, sequences, series — oldgote @ 1:26 pm

Let me start by saying what this is NOT: this is not an introduction for calculus students (too steep), nor is it intended for experienced calculus teachers. Nor is this a "you should teach it THIS way" or "introduce the concepts in THIS order or emphasize THESE topics"; that is for the individual teacher to decide.

Rather, this is a quick overview to help the new teacher (or for the teacher who has not taught it in a long time) decide for themselves how to go about it.

And yes, I’ll be giving a lot of opinions; disagree if you like.

What series will be used for.

Of course, infinite series have applications in probability theory (discrete density functions, expectation and higher moment values of discrete random variables), financial mathematics (perpetuities), etc. and these are great reasons to learn about them. But in calculus, these tend to be background material for power series.

Power series: for \sum^{\infty}_{k=0} a_k (x-c)^k , the most important thing is to determine the open interval of absolute convergence; that is, the interval on which \sum^{\infty}_{k=0} |a_k (x-c)^k | converges.

We teach that these intervals are *always* symmetric about x = c (that is, convergence at x = c only, on some open interval (c-\delta, c+ \delta) , or on the whole real line). Side note: this is an interesting place to point out the influence that the calculus of complex variables has on real variable calculus! These open intervals are the most important aspect, as one can prove that one can differentiate and integrate said series "term by term" on the open interval of absolute convergence; sometimes one can extend the results to the boundary of the interval.

Therefore, if time is limited, I tend to focus on material more relevant for series that are absolutely convergent, though there are some interesting (and fun) things one can do with a series which is conditionally convergent (convergent, but not absolutely convergent; e. g. \sum^{\infty}_{k=1} (-1)^{k+1} {1 \over k} ).

Important principles: I think it is a good idea to first deal with geometric series and then series with positive terms…make that “non-negative” terms.

Geometric series: \sum ^{\infty}_{k =0} x^k ; here we see that for x \neq 1 , \sum ^{n}_{k =0} x^k= {1-x^{n+1} \over 1-x } (and the sum is n+1 for x = 1 ); to show this do the old "shifted sum" addition: S = 1 + x + x^2 + ...+x^n , xS = x+x^2 + ...+x^{n+1} then subtract: S-xS = (1-x)S = 1-x^{n+1} as most of the terms cancel with the subtraction.

Now to show the geometric series converges, we use the standard kind of convergence: with \sum^n_{k = 0} c_k = S_n the n'th partial sum, the series \sum^{\infty}_{k = 0} c_k converges if and only if the sequence of partial sums S_n converges (yes, there are other types of convergence).

Now that we've established that for the geometric series S_n =  {1-x^{n+1} \over 1-x } , we get convergence if |x^{n+1}| goes to zero, which happens only if |x| < 1 .

Why geometric series: two of the most common series tests (root and ratio tests) involve a comparison to a geometric series. Also, the geometric series concept is used both in the theory of improper integrals and in measure theory (e. g., showing that the rational numbers have measure zero).

Series of non-negative terms. For now, we’ll assume that \sum a_k has all a_k \geq 0 (suppressing the indices).

Main principle: though most texts talk about the various tests, I believe that most of the tests really rest on three key principles, two of which are the geometric series and the following result from sequences of positive numbers:

Key sequence result: every monotone bounded sequence of positive numbers converges to its least upper bound.

True: many calculus texts don't do that much with the least upper bound concept but I feel it is intuitive enough to at least mention. If the least upper bound is, say, b , then if a_n is the sequence in question, for any small, positive \delta there has to be some N  > 0 such that a_N > b-\delta . Then because a_n is monotone, b \geq a_{m} > b-\delta for all m > N .

The third key principle is “common sense” : if \sum c_k converges (standard convergence) then c_k \rightarrow 0 as a sequence. This is pretty clear if the c_k are non-negative; the idea is that the sequence of partial sums S_n cannot converge to a limit unless |S_n -S_{n+1}| becomes arbitrarily small. Of course, this is true even if the terms are not all positive.

Secondary results I think that the next results are "second order" results: the main results depend on these, and these depend on the three key principles that we just discussed.

The first of these secondary results is the direct comparison test for series of non-negative terms:

Direct comparison test

If 0< c_n \leq b_n  and \sum b_n converges, then so does \sum c_n . If \sum c_n diverges, then so does \sum b_n .

The proof is basically the “bounded monotone sequence” principle applied to the partial sums. I like to call it “if you are taller than an NBA center then you are tall” principle.

Evidently, some see this result as a “just get to something else” result, but it is extremely useful; one can apply this to show that the exponential of a square matrix is defined; it is the principle behind the Weierstrass M-test, etc. Do not underestimate this test!

Absolute convergence: this is the most important kind of convergence for power series as this is the type of convergence we will have on an open interval. A series is absolutely convergent if \sum |c_k| converges. Now, of course, absolute convergence implies convergence:

Note 0 \leq |c_k| -c_k \leq 2|c_k| and if \sum |c_k| converges, then \sum (|c_k|-c_k) converges by direct comparison. Now note c_k = |c_k|-(|c_k| -c_k) \rightarrow \sum c_k is the difference of two convergent series: \sum |c_k| -\sum (|c_k|-c_k ) and therefore converges.

Integral test This is an important test for convergence at a point. This test assumes that f is a non-negative, non-increasing function on some [1, \infty) (that is, a > b \rightarrow f(a) \leq f(b) ). Then \sum f(n) converges if and only if \int_1^{\infty} f(x)dx converges as an improper integral.

Proof: \sum_{n=2} f(n) is just a right endpoint Riemann sum for \int_1^{\infty} f(x)dx , so if the integral converges, the partial sums form an increasing, bounded sequence and the series converges. Going the other way, \sum_{n=1} f(n) is a left endpoint estimate that dominates \int_1^{\infty} f(x)dx , so if the sum converges, the integral can be defined as a limit of a bounded, increasing sequence and so converges.


Note: we need the hypothesis that f is non-increasing. Example: the function f(x) = \begin{cases}  x , & \text{ if } x \notin \{1, 2, 3,...\} \\ 0, & \text{ otherwise} \end{cases} certainly has \sum f(n) converging but \int^{\infty}_{1} f(x) dx diverging.

Going the other way, defining f(x) = \begin{cases}  2^n , & \text{ if }  x \in [n, n+2^{-2n}] \\0, & \text{ otherwise} \end{cases} gives an unbounded function with unbounded sum \sum_{n=1} 2^n but the integral converges to the sum \sum_{n=1} 2^{-n} =1 . The “boxes” get taller and skinnier.


Now wait a minute: we haven’t really gone over how students will do most of their homework and exam problems. We’ve covered none of these: p-test, limit comparison test, ratio test, root test. Ok, logically, we have but not practically.

Let’s remedy that. First, start with the “point convergence” tests.

p-test. This says that \sum {1 \over k^p} converges if p> 1 and diverges otherwise. Proof: Integral test.

Limit comparison test Given two series of positive terms: \sum b_k and \sum c_k

Suppose lim_{k \rightarrow \infty} {b_k \over c_k} = L

If \sum c_k converges and 0 \leq L < \infty then so does \sum b_k .

If \sum c_k diverges and 0 < L \leq \infty then so does \sum b_k

I'll show the "converge" part of the proof for L > 0 : choose \epsilon = L , then N such that n > N \rightarrow  {b_n \over c_n } < 2L . This means that for n > N , \sum_{k=n} b_k \leq 2L\sum_{k=n} c_k and we get convergence by direct comparison. (For L = 0 : eventually b_k < c_k .) See how useful that test is?

But note what is going on: it really isn’t necessary for lim_{k \rightarrow \infty} {b_k \over c_k}  to exist; for the convergence case it is only necessary that there be some M for which M >  {b_k \over c_k}  ; if one is familiar with the limit superior (“limsup”) that is enough to make the test work.

We will see this again.

Why limit comparison is used: Something like \sum {1 \over 4k^5-2k^2-14} clearly converges, but nailing down the proof with direct comparison can be hard. But a limit comparison with \sum {1 \over k^5} is pretty easy.
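A quick numeric look at that comparison (an illustrative sketch of the limit being computed, not part of the proof):

```python
# The ratio of 1/(4k^5 - 2k^2 - 14) to 1/k^5 tends to 1/4 as k grows,
# which is exactly what the limit comparison test needs.

for k in (10, 100, 1000):
    b = 1.0 / (4 * k ** 5 - 2 * k ** 2 - 14)
    c = 1.0 / k ** 5
    print(k, b / c)
```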

Ratio test this test is most commonly used when the series has powers and/or factorials in it. Basically: given \sum c_n of positive terms, consider lim_{k \rightarrow \infty} {c_{k+1} \over c_{k}} = L (if the limit exists; if it doesn't, stay tuned).

If L < 1 the series converges. If L > 1 the series diverges. If L = 1 the test is inconclusive.

Note: if it turns out that there exists some N >0 such that for all n > N we have {c_{n+1} \over c_n } < \gamma < 1 , then the series converges (we can use the limsup concept here as well).

Why this works: suppose there exists some N >0 such that for all n > N we have {c_{n+1} \over c_n } < \gamma < 1 . Then write \sum_{k=n} c_k = c_n + c_{n+1} + c_{n+2} + ....

now factor out a c_n to obtain c_n (1 + {c_{n+1} \over c_n} + {c_{n+2} \over c_n} + {c_{n+3} \over c_{n}} +....)

Now multiply the terms by 1 in a clever way:

c_n (1 + {c_{n+1} \over c_n} + {c_{n+2} \over c_{n+1}}{c_{n+1} \over c_n} + {c_{n+3} \over c_{n+2}}  {c_{n+2} \over c_{n+1}}  {c_{n+1} \over c_{n}}   +....) See where this is going: each ratio is less than \gamma so we have:

\sum_{k=n} c_k \leq c_n \sum_{j=0} (\gamma)^j which is a convergent geometric series.

See: there is the geometric series and the direct comparison test, again.

Root Test No, this is NOT the same as the ratio test. In fact, it is a bit “stronger” than the ratio test in that the root test will work for anything the ratio test works for, but there are some series that the root test works for that the ratio test comes up empty.

I'll state the "lim sup" version of the root test: if there exists some N such that, for all n>N , we have (c_n)^{1 \over n} < \gamma < 1 then the series converges (exercise: find the "divergence version").

As before: if the condition is met, \sum_{k=n} c_k \leq \sum_{k=n} \gamma^k so the original series converges by direct comparison.

Now as far as my previous remark about the ratio test: Consider the series:

1 + ({1 \over 3}) + ({2 \over 3})^2 + ({1 \over 3})^3 + ({2 \over 3})^4 +...({1 \over 3})^{2k-1} +({2 \over 3})^{2k} ...

Yes, this series is bounded by the convergent geometric series with r = {2 \over 3} and therefore converges by direct comparison. And the limsup version of the root test works as well.

But the ratio test is a disaster as {({2 \over 3})^{2k}  \over  ({1 \over 3})^{2k-1} } ={2^{2k} \over 3 } which is unbounded, while {({1 \over 3})^{2k+1}  \over  ({2 \over 3})^{2k} }  ={1 \over (2^{2k}) 3 } .
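To see the two tests side by side numerically (my quick sketch): the n-th roots of the terms stay safely below 1, while the term ratios oscillate wildly.

```python
# Root test vs. ratio test on 1 + (1/3) + (2/3)^2 + (1/3)^3 + (2/3)^4 + ...
# The n-th roots stay at 1/3 or 2/3 (both < 1); the ratios are unbounded.

terms = [1.0] + [(1.0 / 3) ** n if n % 2 == 1 else (2.0 / 3) ** n
                 for n in range(1, 13)]

for n in range(1, len(terms)):
    root = terms[n] ** (1.0 / n)
    ratio = terms[n] / terms[n - 1]
    print(n, round(root, 4), round(ratio, 4))
```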

What about non-absolute convergence (aka “conditional convergence”)

Series like \sum_{k=1} (-1)^{k+1} {1 \over k} converge but do NOT converge absolutely (p-test). On one hand, such series are a LOT of fun…but the convergence is very slow and unstable, and so one might say that these series are not as important as the series that converge absolutely. But there is a lot of interesting mathematics to be had here.

So, let’s chat about these a bit.

We say \sum c_k is conditionally convergent if the series converges but \sum |c_k| diverges.

One elementary tool for dealing with these is the alternating series test:

for this, let c_k >0 and for all k, c_{k+1} < c_k .

Then \sum_{k=1} (-1)^{k+1} c_k converges if and only if c_k \rightarrow 0 as a sequence.

That the sequence of terms goes to zero is necessary. That it is sufficient in this alternating case: first note that the partial sums are bounded above by c_1 (as the magnitudes get steadily smaller) and below by c_1 - c_2 (same reason). Note also that S_{2k+2} = S_{2k} +c_{2k+1} - c_{2k+2} > S_{2k} so the partial sums of even index form an increasing bounded sequence and therefore converge to some limit, say, L . But S_{2k+1} = S_{2k} + c_{2k+1} and c_{2k+1} \rightarrow 0 , so by a routine "epsilon-N" argument the odd partial sums converge to L as well.

Of course, there are conditionally convergent series that are NOT alternating. And conditionally convergent series have some interesting properties.

One of the most interesting properties is that such series can be "rearranged" ("derangement" in Knopp's book) to converge to any number of choice, or to diverge to infinity, or to have no limit at all.

Here is an outline of the arguments:

To rearrange a series to converge to L , start with the positive terms (which must form a divergent series, since the original series converges only conditionally) and add them up until the partial sum first exceeds L ; stop just after L is exceeded. Call that partial sum u_1 . Note: this could take 0 terms. Now use the negative terms to go to the left of L , stopping just past the first partial sum to the left of L ; call that l_1 . Then move to the right, past L again, with the positive terms; note that the overshoot is smaller as the terms are smaller. This is u_2 . Then go back again to get l_2 to the left of L . Repeat.

Note that at every stage, every partial sum after the first one past L is between some u_i, l_i ; the u_i, l_i bracket L , and the distance between them shrinks to become arbitrarily small.

To rearrange a series to diverge to infinity: Add the positive terms to exceed 1. Add a negative term. Then add the terms to exceed 2. Add a negative term. Repeat this for each positive integer n .

Have fun with this; you can have the partial sums end up all over the place.
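If you want to have that fun on a computer, here is a small sketch (my addition) of the rearrangement scheme above, applied to the alternating harmonic series with a target of L = 2 :

```python
# Greedy rearrangement of 1 - 1/2 + 1/3 - 1/4 + ... toward a target L:
# add positive terms until the partial sum exceeds L, then negative terms
# until it drops below L, and repeat.

def rearranged_sum(L, steps=100000):
    odd, even = 1, 2            # next unused positive/negative denominators
    s = 0.0
    for _ in range(steps):
        if s <= L:
            s += 1.0 / odd      # positive terms: 1, 1/3, 1/5, ...
            odd += 2
        else:
            s -= 1.0 / even     # negative terms: -1/2, -1/4, ...
            even += 2
    return s

print(rearranged_sum(2.0))      # close to 2.0
```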

That’s it for now; I might do power series later.

May 10, 2021

Series convergence tests: the “harder to use in calculus 1” tests may well be the most useful.

I talked about the root and ratio test here and how the root test is the stronger of the two tests. What I should point out is that the proof of the root test depends on the basic comparison test.

And so, a professor on Twitter asked:

Of course, one proves the limit comparison test by the direct comparison test. But in a calculus course, the limit comparison test might appear to be more readily useful; example:

Show \sum_{k=2} {1 \over k^2-1} converges.

So..what about the direct comparison test?

As someone pointed out: the direct comparison test can work very well when you don't know much about the terms of the series.

One example can be found when one shows that the matrix exponential e^A converges, where A is an n \times n matrix.

For those unfamiliar: e^A = \sum^{\infty}_{k=0} {A^k \over k!} where the powers make sense as A is square and we merely add the corresponding matrix entries.

What enables convergence is the factorial in the denominators of the individual terms; the i-j’th element of each A^k can get only so large.

But how does one prove convergence?

The usual way is to dive into matrix norms; one that works well is |A| = \sum_{(i,j)} |a_{i,j}| ; just sum up the absolute values of the entries (the taxicab norm, or l_1 norm).

Then one can show |AB| \leq |A||B| and |a_{i,j}| \leq |A| and together this implies the following:

For any index k where a^k_{i,j} is the i-j’th element of A^k we have:

| a^k_{i,j}  | \leq |A^k| \leq |A|^k

It then follows that | [ e^A ]_{i,j} | \leq \sum^{\infty}_{k=0} {|A^k |\over k!} \leq  \sum^{\infty}_{k=0} {|A|^k \over k!} =e^{|A|} . Therefore every series that determines an entry of the matrix e^A is absolutely convergent by direct comparison, and is therefore a convergent series.
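A numerical check (my sketch; it assumes NumPy and SciPy are available): sum the series directly and compare with a library implementation of the matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

# Partial sums of sum A^k / k! compared against scipy's expm.

A = np.array([[0.0, 1.0], [-2.0, -3.0]])

S = np.zeros_like(A)
term = np.eye(2)                 # the k = 0 term: A^0/0! = I
for k in range(1, 30):
    S += term
    term = term @ A / k          # next term: A^k / k!

print(S)
print(expm(A))                   # the two should agree closely
```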

December 21, 2018

Over-scheduling of senior faculty and lower division courses: how important is course prep?

It seems as if the time faculty is expected to spend on administrative tasks is growing exponentially. In our case: we’ve had some administrative upheaval with the new people coming in to “clean things up”, thereby launching new task forces, creating more committees, etc. And this is a time suck; often more senior faculty more or less go through the motions when it comes to course preparation for the elementary courses (say: the calculus sequence, or elementary differential equations).

And so:

1. Does this harm the course quality and if so..
2. Is there any effect on the students?

I should first explain why I am thinking about this; I’ll give some specific examples from my department.

1. Some time ago, a faculty member gave a seminar in which he gave an "elementary" proof of why \int e^{x^2} dx is non-elementary. Ok, this proof took 40-50 minutes to get through. But at the end, the professor giving the seminar exclaimed: "isn't this lovely?" at which, another senior member (one who didn't have a Ph. D. but had been around since the 1960's) asked "why are you happy that yet again, we haven't had success?" A proof that \int e^{x^2} dx cannot be expressed in terms of the usual functions by the standard field operations had just been given; the whole point had eluded him. And remember, this person was in our calculus teaching line up.

2. Another time, in a less formal setting, I mentioned that I had told my class, briefly, that one could compute an improper integral (over the real line) of an unbounded function, and that such a function could have a Laplace transform. A junior faculty member who had just taught differential equations tried to inform me that only functions of exponential order could have a Laplace transform; I replied that, while many texts restrict Laplace transforms to such functions, that is not mathematically necessary (though it is a reasonable restriction for an applied first course). (Briefly: imagine a function whose graph consists of a spike of height e^{n^2} at each integer point n over an interval of width \frac{1}{2^{2n} e^{2n^2}} and is zero elsewhere.)

3. In still another case, I was talking about errors in answer keys and how, when I taught courses that I wasn't qualified to teach (e. g. an actuarial science course), it was tough for me to confidently determine when the answer key was wrong. A senior, still research-active faculty member said that he found errors in an answer key: in some cases, the interval of absolute convergence for some power series was given as a closed interval.

I was a bit taken aback; I gently reminded him that \sum \frac{x^k}{k^2} was such a series.

I know what he was confused by; there is a theorem that says that if \sum a_k x^k converges (either conditionally or absolutely) for some x=x_1 then the series converges absolutely for all x_0 where |x_0| < |x_1| . The proof isn't hard; note that convergence of \sum a_k x_1^k means that eventually |a_k x_1^k| < M for some positive M . Then compare the "tail end" of the series: use |\frac{x_0}{x_1}| < r < 1 and then |a_k (x_0)^k| = |a_k x_1^k (\frac{x_0}{x_1})^k| < r^k M and compare to a convergent geometric series. Mind you, he was teaching series at the time…and yes, he is a senior, research-active faculty member with years and years of experience; he mentored me so many years ago.

4. Also…one time, a sharp young faculty member asked around: "are there any real functions that are differentiable at exactly one point?" (Yes: try f(x) = x^2 if x is rational, x^3 if x is irrational; it is differentiable at x = 0 only.)

5. And yes, one time I had forgotten that a function could be differentiable but not be C^1 (try: f(x) = x^2 sin (\frac{1}{x}) with f(0) = 0 , at x = 0 ).

What is the point of all of this? Even smart, active mathematicians forget stuff if they haven’t reviewed it in a while…even elementary stuff. We need time to review our courses! But…does this actually affect the students? I am almost sure that at non-elite universities such as ours, the answer is “probably not in any way that can be measured.”

Think about it. Imagine the following statements in a differential equations course:

1. "Laplace transforms exist only for functions of exponential order." (false)
2. “We will restrict our study of Laplace transforms to functions of exponential order.”
3. “We will restrict our study of Laplace transforms to functions of exponential order but this is not mathematically necessary.”

Would students really recognize the difference between these three statements?

Yes, making these statements, with confidence, requires quite a bit of difference in preparation time. And our deans and administrators might not see any value to allowing for such preparation time as it doesn’t show up in measures of performance.

February 22, 2018

What is going on here: sum of cos(nx)…

Filed under: analysis, derivatives, Fourier Series, pedagogy, sequences of functions, series, uniform convergence — collegemathteaching @ 9:58 pm

This started innocently enough; I was attempting to explain why we have to be so careful when we attempt to differentiate a power series term by term; that when one talks about infinite sums, the “sum of the derivatives” might fail to exist if the sum is infinite.

Anyone who is familiar with Fourier Series and the square wave understands this well:

\frac{4}{\pi} \sum^{\infty}_{k=1} \frac{1}{2k-1}sin((2k-1)x)  = (\frac{4}{\pi})( sin(x) + \frac{1}{3}sin(3x) + \frac{1}{5}sin(5x) +.....) yields the “square wave” function (plus zero at the jump discontinuities)

Here I graphed the partial sum up to 2k-1 = 21 .

Now the limit function fails to even be continuous. But it is differentiable except at the jump discontinuities (where the derivative is undefined), and the derivative is zero at all but that discrete set of points.

(recall: here we have pointwise convergence; to get a differentiable limit, we need other conditions such as uniform convergence together with uniform convergence of the derivatives).

But, just for the heck of it, let’s differentiate term by term and see what we get:

(\frac{4}{\pi})\sum^{\infty}_{k=1} cos((2k-1)x) = (\frac{4}{\pi})(cos(x) + cos(3x) + cos(5x) + cos(7x) +.....)...

It is easy to see that this result doesn’t even converge to a function of any sort.

Example: let’s see what happens at x = \frac{\pi}{4}: cos(\frac{\pi}{4}) = \frac{1}{\sqrt{2}}

cos(\frac{\pi}{4}) + cos(3\frac{\pi}{4}) =0

cos(\frac{\pi}{4}) + cos(3\frac{\pi}{4}) + cos(5\frac{\pi}{4}) = -\frac{1}{\sqrt{2}}

cos(\frac{\pi}{4}) + cos(3\frac{\pi}{4}) + cos(5\frac{\pi}{4}) + cos(7\frac{\pi}{4}) = 0

And this repeats over and over again; no limit is possible.

Something similar happens for x = \frac{p}{q}\pi where p, q are relatively prime positive integers.

But something weird is going on with this sum. I plotted the terms with 2k-1 \in \{1, 3, ...35 \}

(and yes, I am using \frac{\pi}{4} csc(x) as a type of “envelope function”)

BUT…if one, say, looks at cos(29x) + cos(31x) + cos(33x) + cos(35x)

we really aren't getting convergence (even at irrational multiples of \pi ). But SOMETHING is going on!

I decided to plot the partial sums up to cos(61x) .

Something is going on, though it isn’t convergence. Note: by accident, I found that the pattern falls apart when I skipped one of the terms.

This is something to think about.

I wonder: for all x \in (0, \pi) , is sup_{n \in \{1, 3, 5, 7....\}}|\sum_{k \in \{1,3,...,n\}}cos(kx)| \leq |csc(x)| ? And can we somehow get close to csc(x) for given values of x by allowing enough terms…but the value of x is determined by how many terms we are using (not always the same value of x ).
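In fact, there is a closed form that settles this (a standard identity, not in the original post): multiplying the partial sum by 2 sin(x) and applying the product-to-sum formula telescopes the sum, giving

\sum^{m}_{j=1} cos((2j-1)x) = {sin(2mx) \over 2 sin(x)}

So for fixed x \in (0, \pi) the partial sums are bounded in absolute value by {1 \over 2}|csc(x)| (consistent with the csc(x) -shaped envelope in the plots), while the sin(2mx) factor keeps oscillating: bounded but non-convergent, exactly the behavior observed above.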

January 26, 2016

More Fun with Divergent Series: redefining series convergence (Cesàro, etc.)

Filed under: analysis, calculus, sequences, series — collegemathteaching @ 10:21 pm

This post is more designed to entertain myself than anything else. It builds on a previous post which talks about deleting enough terms from a divergent series to make it a convergent one.

This post is inspired by Chapter 8 of Konrad Knopp’s classic Theory and Application of Infinite Series. The title of the chapter is Divergent Series.

Notation: when I talk about a series converging, I mean “converging” in the usual sense; e. g. if s_n = \sum_{k=0}^{k=n} a_k and lim_{n \rightarrow \infty}s_n = s then \sum_{k=0}^{\infty} a_k is said to be convergent with sum s .

All of this makes sense since things like limits are carefully defined. But as Knopp points out, in the "days of old," mathematicians saw these series as formal objects rather than the result of careful construction. So some of these mathematicians (like Euler) had no problem saying things like \sum^{\infty}_{k=0} (-1)^k = 1-1+1-1+1..... = \frac{1}{2} . Now this is complete nonsense by our usual modern definition. But we might note that \frac{1}{1-x} = \sum^{\infty}_{k=0} x^k for -1 < x < 1 and note that x = -1 IS in the domain of the left hand side.

So, is there a way of redefining the meaning of “infinite sum” that gives us this result, while not changing the value of convergent series (defined in the standard way)? As Knopp points out in his book, the answer is “yes” and he describes several definitions of summation that

1. Do not change the value of an infinite sum that converges in the traditional sense and
2. Allow for more series to converge.

We’ll discuss one of these methods, commonly referred to as Cesàro summation. There are ways to generalize this.

How this came about

Consider the Euler example: 1 -1 + 1 -1 + 1 -1...... . Clearly, s_{2k} = 1, s_{2k+1} = 0 and so this geometric series diverges. But notice that the arithmetic average of the partial sums, computed as c_n = \frac{s_0 + s_1 +...+s_n}{n+1} , does tend to \frac{1}{2} as n tends to infinity: c_{2n} = \frac{n+1}{2n+1} whereas c_{2n+1} = \frac{n+1}{2n+2} = \frac{1}{2} , and both of these quantities tend to \frac{1}{2} as n tends to infinity.

So, we need to see that this method of summing is workable; that is, do infinite sums that converge in the previous sense still converge to the same number with this method?

The answer is, of course, yes. Here is how to see this: Let x_n be a sequence that converges to zero. Then for any \epsilon > 0 we can find M such that k > M implies that |x_k| < \epsilon . So for n > M we have \frac{x_1 + x_2 + ...+ x_M + x_{M+1} + ...+ x_n}{n} = \frac{x_1+ ...+x_{M}}{n} + \frac{x_{M+1} + ....+x_n}{n} . Because M is fixed, the first fraction tends to zero as n tends to infinity. The second fraction is smaller than \epsilon in absolute value. But \epsilon is arbitrary, hence this arithmetic average of this null sequence is itself a null sequence.

Now let x_n \rightarrow L and let c_n = \frac{x_1 + x_2 + ...+ x_n}{n} . Now subtract: note c_n-L =  \frac{(x_1-L) + (x_2-L) + ...+ (x_n-L)}{n} and the x_n-L form a null sequence. Then so do the c_n-L .

Now to be useful, we'd have to show that series that are summable in the Cesàro sense obey things like the multiplicative laws; they do, but I am too lazy to show that. See the Knopp book.

I will mention a couple of interesting (to me) things though. Neither is really profound.

1. If a series diverges to infinity (that is, if for any positive M there exists n such that for all k \geq n, s_k > M ), then this series is NOT Cesàro summable. It is relatively easy to see why: given such an M, k , consider \frac{s_1 + s_2 + s_3 + ...+s_{k-1} + s_k + s_{k+1} + ...+s_n}{n} = \frac{s_1+ s_2 + ...+s_{k-1}}{n} + \frac{s_k + s_{k+1} +.....+s_{n}}{n} , which is greater than \frac{n-k}{n} M for large n . Hence the Cesàro partial sums become unbounded.

Upshot: there is no hope in making something like \sum^{\infty}_{n=1} \frac{1}{n} into a convergent series by this method. Now there is a way of making an alternating, divergent series into a convergent one via doing something like a “double Cesàro sum” (take arithmetic averages of the arithmetic averages) but that is a topic for another post.

2. Cesàro summation may speed up convergence of an alternating series which passes the alternating series test, OR it might slow it down. I'll have to develop this idea more fully. But I invite the reader to try Cesàro summation for \sum^{\infty}_{k=1} (-1)^{k+1} \frac{1}{k} , on \sum^{\infty}_{k=1} (-1)^{k+1} \frac{1}{k^2} and on \sum^{\infty}_{k=0} (-1)^k \frac{1}{2^k} . In the first two cases, the series converges slowly enough so that Cesàro summation speeds up convergence. Cesàro slows down the convergence of the geometric series though. It is interesting to ponder why.
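Here is a small sketch (mine) for running that experiment:

```python
from itertools import accumulate

# Compare ordinary partial sums with Cesaro averages for the three series
# suggested above.

def cesaro_averages(terms):
    partial = list(accumulate(terms))          # ordinary partial sums s_n
    running = list(accumulate(partial))        # cumulative sums of the s_n
    return [running[n] / (n + 1) for n in range(len(partial))]

n = 500
series = {
    "alt harmonic (-> ln 2)":  [(-1) ** (k + 1) / k for k in range(1, n + 1)],
    "alt 1/k^2 (-> pi^2/12)":  [(-1) ** (k + 1) / k ** 2 for k in range(1, n + 1)],
    "geometric (-> 2/3)":      [(-1) ** k / 2.0 ** k for k in range(n)],
}

for name, terms in series.items():
    ordinary = list(accumulate(terms))[-1]
    print(name, "ordinary:", ordinary, "Cesaro:", cesaro_averages(terms)[-1])
```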

January 14, 2016

Trimming a divergent series into a convergent one

Filed under: calculus, induction, sequences, series — collegemathteaching @ 10:28 pm

This post is motivated by this cartoon

which I found at an Evelyn Lamb post on an AMS Blog, this fun Forbes math post by Kevin Knudson and by a June 2015 article in Mathematics Magazine by R. John Ferdinands called Selective Sums of an Infinite Series.

Here is the question: start with a divergent series of positive terms which form a decreasing (non-increasing) sequence which tends to zero, say, \sum^{\infty}_{k =1} \frac{1}{k} . Now how does one select a subset of series terms to delete so as to obtain a convergent series? The Knudson article shows that one can do this with the harmonic series by, say, deleting all terms whose denominators contain a specific digit (say, 9). I'll talk about the proof here. But I'd like to start more basic and bring in language used in the Ferdinands article.

So, let’s set the stage: we will let \sum a_k denote the divergent sum in question. All terms will be positive, a_{k} \geq a_{k+1} for all k and lim_{k \rightarrow \infty} a_k = 0 . Now let c_k represent a sequence where c_k \in \{0,1\} for all k ; then \sum c_ka_k is called a selective sum of \sum a_k . I’ll call the c_k the selecting sequence and, from the start, rule out selecting sequences that are either eventually 1 (which means that the selected series diverges since the original series did) or eventually zero (just a finite sum).

Now we’ll state a really easy result:

There is some non-eventually-constant c_k such that \sum c_ka_k converges. Here is why: because lim_{k \rightarrow \infty} a_k = 0 , for each j \in \{1,2,3...\} one can find an index n_j , with n_j > n_{j-1} , so that \frac{1}{2^j} > a_{n_j} . Now select c_k = 1 if k \in \{n_1, n_2, n_3,... \} and c_k =0 otherwise. Then \sum_j \frac{1}{2^j} > \sum c_ka_k and therefore the selected series converges by comparison with a convergent geometric series.

Of course, this result is pretty lame; this technique discards a lot of terms. A cheap way to discard "fewer" terms ("fewer" meaning: in terms of set inclusion): do the previous construction, but instead of using \frac{1}{2} use \frac{M}{M+1} where M is a positive integer of choice. Note that \sum^{\infty}_{k=1} (\frac{M}{M+1})^k = M .

Here is an example of how this works: Consider the divergent series \sum \frac{1}{\sqrt{k}} and the convergent geometric series \sum (\frac{1000}{1001})^k Of course \frac{1000}{1001} < 1 so c_1 = 0 but then for k \in \{2,3,....4169 \} we have (\frac{1000}{1001})^k > \frac{1}{\sqrt{k}} . So c_k = 1 for k \in \{2,3,4,....4169 \} . But c_{4170} = 0 because (\frac{1000}{1001})^{4170} < \frac{1}{\sqrt{4170}}. The next non-zero selection coefficient is c_{4171} as (\frac{1000}{1001})^{4170} > \frac{1}{\sqrt{4171}} .

Now playing with this example, we see that \frac{1}{\sqrt{k}} > (\frac{1000}{1001})^{4171} for k \in \{4172, 4173,....4179 \} but not for k = 4180 . So c_k = 0 for k \in \{4172,....4179 \} and c_{4180} = 1 . So the first few n_j are \{2, 3, ....4169, 4171, 4180 \} . Of course the gap between the n_j grows as k does.

Now let’s get back to the cartoon example. From this example, we’ll attempt to state a more general result.

Claim: given \sum^{\infty}_{k=1} c_k \frac{1}{k} where c_k = 0 if k contains a 9 as one of its digits, then \sum^{\infty}_{k=1} c_k \frac{1}{k} converges. Hint on how to prove this (without reading the solution): count the number of integers between 10^k and 10^{k+1} that lack a 9 as a digit. Then do a comparison test with a convergent geometric series, noting that every term \frac{1}{10^k}, \frac{1}{10^k + 1}......, \frac{1}{8(10^k) +88} is less than or equal to \frac{1}{10^k} .

How to prove the claim: we can start by “counting” the number of integers between 0 and 10^k that contain no 9’s as a digit.

Between 0 and 9: clearly 0-8 inclusive, or 9 numbers.

Between 10 and 99: a moment’s thought shows that we have 8(9) = 72 numbers with no 9 as a digit (hint: consider 10-19, 20-29…80-89) so this means that we have 9 + 8(9) = 9(1+8) = 9^2 numbers between 0 and 99 with no 9 as a digit.

This leads to the conjecture: there are 9^k numbers between 0 and 10^k -1 with no 9 as a digit and (8)9^{k-1} between 10^{k-1} and 10^k-1 with no 9 as a digit.

This is verified by induction. It is true for k = 1 .

Assume true for k = n . Then to find the number of numbers without a 9 between 10^n and 10^{n+1} -1 we get 8 (9^n) which then means we have 9^n + 8(9^n) = 9^n (8+1) = 9^{n+1} numbers between 0 and 10^{n+1}-1 with no 9 as a digit. So our conjecture is proved by induction.

Now note that 1 + \frac{1}{2} + ....+ \frac{1}{8} < 8\cdot 1\cdot 1

\frac{1}{10} + ...+ \frac{1}{18} + \frac{1}{20} + ...+ \frac{1}{28} + \frac{1}{30} + ...+ \frac{1}{88} < 8*9*\frac{1}{10}

\frac{1}{100} + ...+\frac{1}{188} + \frac{1}{200} + ....+\frac{1}{888} < 8\cdot(9^2)\frac{1}{100}

This establishes that \sum_{k=10^n}^{10^{n+1}-1} c_k \frac{1}{k} < 8\cdot(9^n)\frac{1}{10^n}

So it follows that \sum^{\infty}_{k=1} c_k \frac{1}{k} < 8\sum^{\infty}_{k=0} (\frac{9}{10})^k = 8 \frac{1}{1-\frac{9}{10}} = 80 and hence our selected sum is convergent.
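As a numerical aside (my addition): summing the surviving terms directly shows just how slowly this converges, and that the bound of 80 is far from tight.

```python
# Sum 1/k over k with no digit 9, for k up to a million. The partial sums
# stay well under the bound of 80 derived above; the full sum is known to
# be about 22.92 (this is the classic "Kempner series").

total = 0.0
for k in range(1, 10 ** 6):
    if '9' not in str(k):
        total += 1.0 / k
print(total)
```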

Further questions: ok, what is going on is that we threw out enough terms of the harmonic series for the series to converge. Between terms \frac{1}{10^k} and \frac{1}{10^{k+1}-1} we allowed 8*(9^k) terms to survive.

This suggests that if we permit up to M (10-\epsilon)^k terms between 10^k and 10^{k+1}-1 to survive ( M, \epsilon fixed and positive) then we will have a convergent series. I'd be interested in seeing if there is a generalization of this.

But I am tired, I have a research article to review and I need to start class preparation for the upcoming spring semester. So I'll stop here. For now. 🙂

October 29, 2015

The Alternating Series Test: the need for hypothesis

Filed under: calculus, series — collegemathteaching @ 9:49 pm

It is well known that if series \sum a_k meets the following conditions:

1. (a_k)(a_{k+1}) < 0 for all k
2. lim_{k \rightarrow \infty} a_k = 0
3. |a_k| > |a_{k+1} | for all k

the series converges. This is the famous “alternating series test”.

I know that I am frequently remiss in discussing what can go wrong if condition 3 is not met.

An example that is useful is 1 - \frac{1}{\sqrt{2}} + \frac{1}{3} - \frac{1}{\sqrt{4}} + ...+\frac{1}{2n-1} - \frac{1}{\sqrt{2n}} .....

Clearly this series meets conditions 1 and 2: the series alternates and the terms approach zero. But the series can be written (carefully) as:

\sum_{k=1}^{\infty} (\frac{1}{2k-1} - \frac{1}{\sqrt{2k}}) .

Then one can combine the terms in the parentheses; the combined term behaves like -\frac{1}{\sqrt{2k}} , so a limit comparison (of the negated terms) to the series \sum_{k=1}^{\infty} \frac{1}{\sqrt{k}} shows the series diverges.
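A short sketch (my addition) showing the drift numerically:

```python
from math import sqrt

# Partial sums of 1 - 1/sqrt(2) + 1/3 - 1/sqrt(4) + ... drift off to -infinity
# even though the terms alternate in sign and tend to zero.

s = 0.0
for n in range(1, 200001):
    s += 1.0 / (2 * n - 1) - 1.0 / sqrt(2 * n)
    if n % 50000 == 0:
        print(2 * n, "terms:", s)
```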

January 20, 2014

A bit more prior to admin BS

One thing that surprised me about the professor’s job (at a non-research intensive school; we have a modest but real research requirement, but mostly we teach): I never knew how much time I’d spend doing tasks that have nothing to do with teaching and scholarship. Groan….how much of this do I tell our applicants that arrive on campus to interview? 🙂

But there is something mathematical that I want to talk about; it is a follow up to this post. It has to do with what string theorists tell us: \sum^{\infty}_{k = 1} k = -\frac{1}{12} . Needless to say, they are using a non-standard definition of the "value of a series."

Where I think the problem is: when we hear "series" we think of something related to the usual process of addition. Clearly, this non-standard assignment doesn't relate to addition in the way we usually think about it.

So, it might make more sense to think of a “generalized series” as a map from the set of sequences of real numbers (or: the infinite dimensional real vector space) to the real numbers; the usual “limit of partial sums” definition has some nice properties with respect to sequence addition, scalar multiplication and with respect to a “shift operation” and addition, provided we restrict ourselves to a suitable collection of sequences (say, those whose traditional sum of components are absolutely convergent).

So, this “non-standard sum” can be thought of as a map f:V \rightarrow R^1 where f(\{1, 2, 3, 4, 5,....\}) \rightarrow -\frac{1}{12} . That is a bit less offensive than calling it a “sum”. 🙂

