College Math Teaching

October 4, 2016

Linear Transformation or not? The vector space operations matter.

Filed under: calculus, class room experiment, linear algebra, pedagogy — collegemathteaching @ 3:31 pm

This is nothing new; it is an example for undergraduates.

Consider the set R^+ = \{x| x > 0 \} endowed with the “vector addition” x \oplus y = xy , where xy represents ordinary real number multiplication, and “scalar multiplication” r \odot x = x^r , where r \in R and x^r is ordinary exponentiation. It is clear that \{R^+, R | \oplus, \odot \} is a vector space, with 1 \in R^+ serving as the vector “additive” identity and the real numbers 0 and 1 playing their usual roles as the scalar zero and the scalar multiplicative identity. Verifying the various vector space axioms is a fun, if trivial, exercise.

Now consider the function L(x) = ln(x) with domain R^+ . (here: ln(x) is the natural logarithm function). Now ln(xy) = ln(x) + ln(y) and ln(x^a) = a ln(x) . This shows that L:R^+ \rightarrow R (the range has the usual vector space structure) is a linear transformation.
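For those who like to see such things checked numerically, here is a minimal Python sketch (my own illustration; the names oplus and odot are just labels for the exotic operations) verifying that ln respects them:

```python
import math
import random

def oplus(x, y):      # "vector addition" on R^+: ordinary multiplication
    return x * y

def odot(r, x):       # "scalar multiplication" on R^+: ordinary exponentiation
    return x ** r

L = math.log          # the candidate linear transformation L(x) = ln(x)

for _ in range(5):
    x, y = random.uniform(0.1, 10), random.uniform(0.1, 10)
    r = random.uniform(-3, 3)
    # L(x (+) y) = L(x) + L(y)  and  L(r (.) x) = r L(x)
    assert math.isclose(L(oplus(x, y)), L(x) + L(y), abs_tol=1e-12)
    assert math.isclose(L(odot(r, x)), r * L(x), abs_tol=1e-12)
print("ln turns the exotic operations on R^+ into ordinary addition and scaling")
```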

What is even better: ker(L) =\{x|ln(x) = 0 \} = \{1 \} , and since 1 is the “zero vector” of R^+ , the kernel is trivial and L is one to one (of course, we know that from calculus).

And, given z \in R, ln(e^z) = z so L is also onto (we knew that from calculus or precalculus).

So, R^+ = \{x| x > 0 \} is isomorphic to R with the usual vector operations, and of course the inverse linear transformation is L^{-1}(y) = e^y .

Upshot: when one asks “is F a linear transformation or not”, one needs information about not only the domain set but also the vector space operations.

June 15, 2016

Elementary Math in the news: elections

Filed under: calculus, elementary mathematics, news — collegemathteaching @ 9:11 pm

Ok, mostly I am trying to avoid writing up the painful details of a proposed mathematics paper.
But I do follow elections relatively closely. In the California Democratic primary, CNN called the election for Hillary Clinton late on June 7; at the time she led Bernie Sanders 1,940,588-1,502,043, a margin of 438,545 votes. Percentage wise, the lead was 55.8-43.2, or 12.6 percentage points.

But due to mail in balloting and provisional ballot counting, there were still many votes to count. As of this morning, the totals were:

2,360,266-1,887,178 for a numerical lead of 473,088 votes. Percentage wise, the lead was 55.1-44.0, or 11.1 percentage points.

So, the lead grew numerically, but shrank percentage wise.

“Big deal”, you say? Well, from reading social media, it is not obvious (to some) how a lead can grow numerically but shrink as a percentage.

Conceptually, it is pretty easy to explain: suppose one has an election involving 1100 voters who MUST choose between two candidates. Say the first 100 votes that are counted happened to come from a strongly pro-Hillary group, and the tally after 100 was 90 Hillary, 10 Bernie. Then suppose the next 1000 were closer, say 550 for Hillary and 450 for Bernie. Then the lead grew by 100 votes (80 to 180) but the percentage lead shrank from 80 percentage points to a 16.36 percentage point lead (58.18 to 41.82 percent). And it is easy to see that if the rest of the vote were really 55 percent Hillary, her percent of the vote would asymptotically shrink toward 55 percent as the number of votes counted went up.
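A few lines of Python (just a sketch to confirm the arithmetic above) make the point concrete:

```python
# First batch: 100 votes heavily for Hillary; second batch: 1000 closer votes.
h, b = 90, 10
print("lead after 100 votes:", h - b, "votes,",
      round(100 * (h - b) / (h + b), 2), "points")        # 80 votes, 80.0 points

h, b = h + 550, b + 450
print("lead after 1100 votes:", h - b, "votes,",
      round(100 * (h - b) / (h + b), 2), "points")        # 180 votes, 16.36 points
```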

So, how might one have students model it? Let H(t), B(t) be increasing functions of t which represent the number of votes counted for Hillary and for Bernie as functions of time (assuming no mistakes, the counts only go up). We want a case where D(t) = H(t)-B(t) is an increasing function but P(t) = \frac{H(t)}{H(t)+ B(t)} decreases with time.

Without calculus: rewrite P(t) = \frac{1}{1+\frac{B(t)}{H(t)}} and note that P(t) decreases as \frac{B(t)}{H(t)} increases; that is, as B(t) grows proportionally faster than H(t) . But H(t) must continue to outgrow B(t) in absolute terms. That is, each new batch of ballots must still contain more Hillary ballots than Bernie ballots, but the overall ratio of Bernie ballots to Hillary ballots must be going up; the new batches are less lopsided toward Hillary than the earlier ones were.

If we use some calculus, we see that H'(t) must exceed B'(t) but to make P(t) decrease, use the quotient rule plus a tiny bit of algebra to conclude that H'(t)B(t)-B'(t)H(t) must be negative, or that \frac{B'(t)}{B(t)} > \frac{H'(t)}{H(t)} . That is, the Bernie ballots must be growing at a higher percentage rate than the Hillary ballots are.
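Here is a small Python sketch with one made-up pair of vote-count functions (my own choice of numbers, not real data) satisfying both conditions: H'(t) > B'(t) , yet \frac{B'(t)}{B(t)} > \frac{H'(t)}{H(t)} .

```python
# Hypothetical counting curves: H grows faster in absolute terms,
# but B grows at a higher percentage rate.
def H(t): return 1000 + 100 * t
def B(t): return 500 + 90 * t

for t in range(0, 6):
    D = H(t) - B(t)                 # numerical lead: 500 + 10t, increasing
    P = H(t) / (H(t) + B(t))        # Hillary's share: decreasing
    print(f"t={t}: lead={D}, share={100 * P:.2f}%")
# Here B'/B = 90/(500+90t) always exceeds H'/H = 100/(1000+100t),
# so the lead D(t) grows while the share P(t) falls.
```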

None of this is surprising, but it might let the students get a feel of what derivatives are and what proportional change means.

June 7, 2016

Pop-math: getting it wrong but being close enough to give the public a feel for it

Space filling curves: for now, we’ll just work on continuous functions f: [0,1] \rightarrow [0,1] \times [0,1] \subset R^2 .

A curve is typically defined as a continuous function f: [0,1] \rightarrow M where M is, say, a manifold (a second countable metric space in which every point has a neighborhood homeomorphic either to R^k or to the half space R^{k-1} \times [0, \infty) ). Note: though we often think of smooth or piecewise linear curves, we don’t have to do so. Also, we can allow for self-intersections.

However, if we don’t put restrictions such as these, weird things can happen. It can be shown (and the video suggests a construction, which is correct) that there exists a continuous, ONTO function f: [0,1] \rightarrow [0,1] \times [0,1] ; such a gadget is called a space filling curve.

It follows from elementary topology that such an f cannot be one to one: if it were, then since the domain is compact and the range is Hausdorff, f would have to be a homeomorphism. But the respective spaces are not homeomorphic. For example: the closed interval is disconnected by the removal of any non-end point, whereas the closed square has no such separating point.

Therefore, if f is a space filling curve, the inverse image of at least one point consists of more than one point, so the inverse (as a function) cannot be defined.

And THAT is where this article and video go off the rails, though, practically speaking, one can approximate the space filling curve as closely as one pleases by an embedded curve (one that IS one to one) and therefore snake the curve through any desired number of points (pixels?).

So, enjoy the video which I got from here (and yes, the text of this post has the aforementioned error)

May 20, 2016

Student integral tricks…

Ok, classes ended last week and my brain is way out of math shape. Right now I am contemplating how to show that the complement of this object


and the complement of the object depicted in figure 3 are NOT homeomorphic.


I can do this in this very specific case; I am interested in seeing what happens if the “tangle pattern” is changed. Are the complements of these two related objects *always* topologically different? I am reasonably sure yes, but my brain is rebelling at doing the hard work to nail it down.

Anyhow, finals are graded and I am usually treated to one unusual student trick. Here is one for the semester:

\int x^2 \sqrt{x+1} dx =

Now I was hoping that they would say u = x +1 \rightarrow u-1 = x \rightarrow x^2 = u^2-2u+1 , in which case the integral is translated to: \int u^{\frac{5}{2}} - 2u^{\frac{3}{2}} + u^{\frac{1}{2}} du which is easy to do.
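A quick symbolic check (a sketch assuming the sympy library is available) confirms the antiderivative this substitution produces:

```python
import sympy as sp

u = sp.symbols('u', positive=True)
# Antiderivative of the transformed integrand u^(5/2) - 2u^(3/2) + u^(1/2):
F = sp.Rational(2, 7) * u**sp.Rational(7, 2) \
    - sp.Rational(4, 5) * u**sp.Rational(5, 2) \
    + sp.Rational(2, 3) * u**sp.Rational(3, 2)
integrand = u**sp.Rational(5, 2) - 2 * u**sp.Rational(3, 2) + u**sp.Rational(1, 2)
print(sp.diff(F, u) - integrand)    # prints 0, so F is an antiderivative
# Substituting u = x + 1 back gives
#   (2/7)(x+1)^(7/2) - (4/5)(x+1)^(5/2) + (2/3)(x+1)^(3/2) + C
# as an antiderivative of x^2 sqrt(x+1).
```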

Now those wanting to do it a more difficult (but still sort of standard) way could do two repetitions of integration by parts with the first set up being x^2 = u, \sqrt{x+1}dx =dv \rightarrow du = 2xdx, v = \frac{2}{3} (x+1)^{\frac{3}{2}} and that works just fine.

But I did see this: x =tan^2(u), dx = 2tan(u)sec^2(u)du, x+1 = tan^2(u)+1 = sec^2(u) (ok, there are some domain issues here but never mind that) and we end up with the transformed integral: 2\int tan^5(u)sec^3(u) du which can be transformed to 2\int (sec^6(u) - 2 sec^4(u) + sec^2(u)) tan(u)sec(u) du by elementary trig identities.

And yes, that leads to an answer of \frac{2}{7}sec^7(u) -\frac{4}{5}sec^5(u) + \frac{2}{3}sec^3(u) + C which, upon using the triangle


Gives you an answer that is exactly in the same form as the desired “rationalization substitution” answer. Yeah, I gave full credit despite the “domain issues” (in the original integral, it is possible for x \in (-1,0) , which the substitution x = tan^2(u) misses).

What can I say?

February 5, 2016

More fun with selective sums of divergent series

Just a reminder: if \sum_{k=1}^{\infty} a_k is a series and c_1, c_2, c_3, ... is some sequence consisting of 0’s and 1’s, then a selective sum of the series is \sum_{k=1}^{\infty} c_k a_k . The selective sum concept is discussed in the MAA book Real Infinite Series (MAA Textbooks) by Bonar and Khoury (2006) and I was introduced to the concept by Ferdinands’s article Selective Sums of an Infinite Series in the June 2015 edition of Mathematics Magazine (Vol. 88, 179-185).

There is much of interest there, especially if one considers convergent series or alternating series.

This post will be about divergent series of positive terms for which lim_{n \rightarrow \infty} a_n = 0 and a_{n+1} < a_n for all n .

The first fun result is this one: any x > 0 is a selective sum of such a series. The proof of this isn’t that bad. Since lim_{n \rightarrow \infty} a_n = 0 we can find a smallest n such that a_n \leq x . Clearly if a_n = x we are done: our selective sum has c_n = 1 and the rest of the c_k = 0 .

If not, set n_1 = n and note that because the series diverges, there is a largest m_1 so that \sum_{k=n_1}^{m_1} a_k \leq x . Now if \sum_{k=n_1}^{m_1} a_k = x we are done, else let \epsilon_1 = x - \sum_{k=n_1}^{m_1} a_k and note \epsilon_1 < a_{m_1+1} . Now because the a_k tend to zero, there is some first n_2 so that a_{n_2} \leq \epsilon_1 . If this is equality then the required sum is a_{n_2} + \sum_{k=n_1}^{m_1} a_k , else we can find the largest m_2 so that \sum_{k=n_1}^{m_1} a_k + \sum_{k=n_2}^{m_2} a_k \leq x

This procedure can be continued indefinitely. So if we label \sum_{k=n_j}^{m_{j}} a_k = s_j we see that s_1 + s_2 + ...+ s_{n} = t_{n} form an increasing, bounded sequence which converges to the least upper bound of its range, and it isn’t hard to see that the least upper bound is x because x-t_{n} =\epsilon_n < a_{m_n+1} and a_{m_n+1} \rightarrow 0 .
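The construction is easy to play with on a computer. Here is a minimal Python sketch of the greedy idea for the harmonic series (take a term whenever doing so keeps the running sum at or below the target); it is only an illustration of the argument above, truncated at finitely many terms:

```python
def greedy_selective_sum(target, max_index=200000):
    """Greedily select terms 1/k whose inclusion keeps the sum <= target."""
    total, chosen = 0.0, []
    for k in range(1, max_index + 1):
        if total + 1.0 / k <= target:
            total += 1.0 / k
            chosen.append(k)
    return total, chosen

total, chosen = greedy_selective_sum(3.14159)
print("target 3.14159, selective sum so far:", total)
print("first selected indices:", chosen[:10])
```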

So now that we can obtain any positive real number as a selective sum of such a series, what can we say about the set of all selective sums for which almost all of the c_k = 0 (that is, all but a finite number of the c_k are zero)?

Answer: the set of all such selective sums is dense in the real line, and this isn’t that hard to see, given our above construction. Let (a,b) be any open interval in the real line and let a < x < b . Run the construction with target x and recall that the finite sums t_{m} increase to x , with x - t_{m} = \epsilon_m \rightarrow 0 . So choose m large enough that x - t_{m} < x - a ; then a < t_{m} \leq x < b , and t_{m} is a finite selective sum that lies in the interval (a,b) .

We can be even more specific if we now look at a specific series, such as the harmonic series \sum_{k=1}^{\infty} \frac{1}{k} . We know that the set of finite selective sums forms a dense subset of the real line. But it turns out that the set of finite selective sums of the harmonic series is precisely the set of positive rationals. I’ll give a slightly different proof than one finds in Bonar and Khoury.

First we prove that every rational in (0,1] is a finite selective sum. Clearly 1 is a finite selective sum. Otherwise: Given \frac{p}{q} we can find the minimum n so that \frac{1}{n} \leq \frac{p}{q} < \frac{1}{n-1} . If \frac{p}{q} = \frac{1}{n} we are done. Otherwise: the strict inequality shows that pn-p < q which means pn-q < p . Then note \frac{p}{q} - \frac{1}{n} = \frac{pn-q}{qn} and this fraction has a strictly smaller numerator than p . Note also that \frac{pn-q}{qn} < \frac{1}{n-1} - \frac{1}{n} = \frac{1}{n(n-1)} , so the next unit fraction chosen has a strictly larger denominator; hence the unit fractions produced are distinct, as a selective sum requires. So we can repeat our process with this new rational number, and the process must eventually terminate because the numerators generated from this process form a strictly decreasing sequence of positive integers. The process can only terminate when the new fraction has a numerator of 1. Hence the original fraction is some sum of distinct fractions with numerator 1.
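This is just the classical greedy (“Egyptian fraction”) algorithm, and it is easy to run with exact arithmetic. A short Python sketch of the procedure described above:

```python
from fractions import Fraction
import math

def unit_fraction_denominators(r):
    """Greedy algorithm from the proof: repeatedly peel off the largest
    unit fraction 1/n with 1/n <= r.  Works for rationals in (0, 1]."""
    assert 0 < r <= 1
    denominators = []
    while r > 0:
        n = math.ceil(1 / r)        # smallest n with 1/n <= r
        denominators.append(n)
        r -= Fraction(1, n)         # the numerator of r strictly decreases
    return denominators

print(unit_fraction_denominators(Fraction(5, 7)))    # [2, 5, 70]: 5/7 = 1/2 + 1/5 + 1/70
```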

Now if the rational number r in question is greater than one, one finds n_1 so that \sum^{n_1}_{k=1} \frac{1}{k} \leq r but \sum^{n_1+1}_{k=1} \frac{1}{k} > r . Then write r-\sum^{n_1}_{k=1} \frac{1}{k} and note that it is nonnegative and less than \frac{1}{n_1+1} . We then use the procedure for numbers in (0,1) , noting that our starting point excludes the previously used terms of the harmonic series.

There is more we can do, but I’ll stop here for now.

January 26, 2016

More Fun with Divergent Series: redefining series convergence (Cesàro, etc.)

Filed under: analysis, calculus, sequences, series — collegemathteaching @ 10:21 pm

This post is more designed to entertain myself than anything else. This builds on a previous post, which talks about deleting enough terms from a divergent series to make it a convergent one.

This post is inspired by Chapter 8 of Konrad Knopp’s classic Theory and Application of Infinite Series. The title of the chapter is Divergent Series.

Notation: when I talk about a series converging, I mean “converging” in the usual sense; e. g. if s_n = \sum_{k=0}^{k=n} a_k and lim_{n \rightarrow \infty}s_n = s then \sum_{k=0}^{\infty} a_k is said to be convergent with sum s .

All of this makes sense since things like limits are carefully defined. But as Knopp points out, in the “days of old”, mathematicians saw these series as formal objects rather than the result of careful construction. So some of these mathematicians (like Euler) had no problem saying things like \sum^{\infty}_{k=0} (-1)^k = 1-1+1-1+1..... = \frac{1}{2} . Now this is complete nonsense by our usual modern definition. But we might note that \frac{1}{1-x} = \sum^{\infty}_{k=0} x^k for -1 < x < 1 and note that x = -1 IS in the domain of the left hand side.

So, is there a way of redefining the meaning of “infinite sum” that gives us this result, while not changing the value of convergent series (defined in the standard way)? As Knopp points out in his book, the answer is “yes” and he describes several definitions of summation that

1. Do not change the value of an infinite sum that converges in the traditional sense and
2. Allow for more series to converge.

We’ll discuss one of these methods, commonly referred to as Cesàro summation. There are ways to generalize this.

How this came about

Consider the Euler example: 1 -1 + 1 -1 + 1 -1...... . Clearly, s_{2k} = 1, s_{2k+1} = 0 and so this geometric series diverges. But notice that the arithmetic average of the partial sums, computed as c_n = \frac{s_0 + s_1 +...+s_n}{n+1} , does tend to \frac{1}{2} as n tends to infinity: there are n+1 ones among s_0, s_1, ..., s_{2n} , so c_{2n} = \frac{n+1}{2n+1} whereas c_{2n+1} = \frac{n+1}{2n+2} , and both of these quantities tend to \frac{1}{2} as n tends to infinity.
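A few lines of Python (just a numerical illustration of the averaging) show the Cesàro means settling down to \frac{1}{2} :

```python
# Cesàro means of Grandi's series 1 - 1 + 1 - 1 + ... : the partial sums s_n
# oscillate between 1 and 0, but their running averages c_n tend to 1/2.
partial_sums, s = [], 0
for k in range(10001):
    s += (-1) ** k
    partial_sums.append(s)

for n in (10, 100, 1000, 10000):
    c_n = sum(partial_sums[:n + 1]) / (n + 1)
    print(f"c_{n} = {c_n:.5f}")
# c_10 = 0.54545, c_100 = 0.50495, c_1000 = 0.50050, c_10000 = 0.50005
```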

So, we need to see that this method of summing is workable; that is, do infinite sums that converge in the previous sense still converge to the same number with this method?

The answer is, of course, yes. Here is how to see this: Let x_n be a sequence that converges to zero. Then for any \epsilon > 0 we can find M such that k > M implies that |x_k| < \epsilon . So for n > M we have \frac{x_1 + x_2 + ...+ x_n}{n} = \frac{x_1+ ...+x_{M}}{n} + \frac{x_{M+1} + x_{M+2} + ....+x_n}{n} . Because M is fixed, the first fraction tends to zero as n tends to infinity. The second fraction is smaller than \epsilon in absolute value. But \epsilon is arbitrary, hence the arithmetic averages of this null sequence form a null sequence themselves.

Now let x_n \rightarrow L and let c_n = \frac{x_1 + x_2 + ...+ x_n}{n} . Subtracting, c_n-L =  \frac{(x_1-L) + (x_2-L) + ...+ (x_n-L)}{n} , and the x_n-L form a null sequence. By the above, so do the c_n-L .

Now to be useful, we’d have to show that series that are summable in the Cesàro sense obey things like the multiplicative laws; they do, but I am too lazy to show that here. See the Knopp book.

I will mention a couple of interesting (to me) things though. Neither is really profound.

1. If a series diverges to infinity (that is, if for any positive M there exists n such that for all k \geq n, s_k > M ), then the series is NOT Cesàro summable. It is relatively easy to see why: given such an M and n , for k > n consider \frac{s_1 + s_2 + ...+s_{n-1} + s_n + s_{n+1} + ...+s_k}{k} = \frac{s_1+ s_2 + ...+s_{n-1}}{k} + \frac{s_n + s_{n+1} + .....+s_{k}}{k} . The first fraction tends to zero as k grows, while the second is greater than \frac{k-n}{k} M , which tends to M . Since M was arbitrary, the Cesàro means become unbounded.

Upshot: there is no hope in making something like \sum^{\infty}_{n=1} \frac{1}{n} into a convergent series by this method. Now there is a way of making an alternating, divergent series into a convergent one via doing something like a “double Cesàro sum” (take arithmetic averages of the arithmetic averages) but that is a topic for another post.

2. Cesàro summation may speed up convergence of an alternating series which passes the alternating series test, OR it might slow it down. I’ll have to develop this idea more fully. But I invite the reader to try Cesàro summation for \sum^{\infty}_{k=1} (-1)^{k+1} \frac{1}{k} and on \sum^{\infty}_{k=1} (-1)^{k+1} \frac{1}{k^2} and on \sum^{\infty}_{k=0} (-1)^k \frac{1}{2^k} . In the first two cases, the series converges slowly enough so that Cesàro summation speeds up convergence. Cesàro slows down the convergence in the geometric series though. It is interesting to ponder why.
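For readers who want to experiment, here is a small Python sketch comparing the error of the ordinary partial sums with the error of the Cesàro means for two of these series (a quick illustration, not a careful numerical study):

```python
import math

def partial_sums(terms):
    s, out = 0.0, []
    for t in terms:
        s += t
        out.append(s)
    return out

def cesaro_means(sums):
    running, out = 0.0, []
    for n, s in enumerate(sums, start=1):
        running += s
        out.append(running / n)
    return out

N = 1000
series = [
    ("alternating harmonic", [(-1) ** (k + 1) / k for k in range(1, N + 1)], math.log(2)),
    ("alternating geometric", [(-1) ** k / 2 ** k for k in range(N)], 2.0 / 3.0),
]
for name, terms, limit in series:
    s = partial_sums(terms)
    c = cesaro_means(s)
    print(f"{name}: partial-sum error {abs(s[-1] - limit):.2e}, "
          f"Cesàro error {abs(c[-1] - limit):.2e}")
```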

January 14, 2016

Trimming a divergent series into a convergent one

Filed under: calculus, induction, sequences, series — collegemathteaching @ 10:28 pm

This post is motivated by this cartoon

which I found at Evelyn Lamb’s post on an AMS blog, at this fun Forbes math post by Kevin Knudson and in a June 2015 article in Mathematics Magazine by R. John Ferdinands called Selective Sums of an Infinite Series.

Here is the question: start with a divergent series of positive terms which form a decreasing (non-increasing) sequence which tends to zero, say, \sum^{\infty}_{k =1} \frac{1}{k} . Now how does one select a subset of series terms to delete so as to obtain a convergent series? The Knudson article shows that one can do this with the harmonic series by, say, deleting all numbers that contain a specific digit (say, 9). I’ll talk about the proof here. But I’d like to start more basic and to bring in language used in the Ferdinands article.

So, let’s set the stage: we will let \sum a_k denote the divergent sum in question. All terms will be positive, a_{k} \geq a_{k+1} for all k and lim_{k \rightarrow \infty} a_k = 0 . Now let c_k represent a sequence where c_k \in \{0,1\} for all k ; then \sum c_ka_k is called a selective sum of \sum a_k . I’ll call the c_k the selecting sequence and, from the start, rule out selecting sequences that are either eventually 1 (which means that the selected series diverges since the original series did) or eventually zero (just a finite sum).

Now we’ll state a really easy result:

There is some non-eventually-constant c_k such that \sum c_ka_k converges. Here is why: because lim_{k \rightarrow \infty} a_k = 0 , for each j \in \{1,2,3...\} one can find the smallest index n_j \notin \{n_1, n_2, ...n_{j-1} \} so that a_{n_j} < \frac{1}{2^j} . Now select c_k = 1 if k \in \{n_1, n_2, n_3,... \} and c_k =0 otherwise. Then \sum_j \frac{1}{2^j} > \sum c_ka_k and therefore the selected series converges by comparison with a convergent geometric series.

Of course, this result is pretty lame; this technique discards a lot of terms. A cheap way to discard “fewer” terms (“fewer” meaning: in terms of “set inclusion”): Do the previous construction, but instead of using \frac{1}{2} use \frac{M}{M+1} where M is a positive integer of choice. Note that \sum^{\infty}_{k=1} (\frac{M}{M+1})^k = M

Here is an example of how this works: Consider the divergent series \sum \frac{1}{\sqrt{k}} and the convergent geometric series \sum (\frac{1000}{1001})^k . Of course \frac{1000}{1001} < 1 so c_1 = 0 , but then for k \in \{2,3,....4169 \} we have (\frac{1000}{1001})^k > \frac{1}{\sqrt{k}} . So c_k = 1 for k \in \{2,3,4,....4169 \} . But c_{4170} = 0 because (\frac{1000}{1001})^{4170} < \frac{1}{\sqrt{4170}}. The next non-zero selection coefficient is c_{4171} as (\frac{1000}{1001})^{4170} > \frac{1}{\sqrt{4171}} .

Now playing with this example, we see that \frac{1}{\sqrt{k}} > (\frac{1000}{1001})^{4171} for k \in \{4172, 4173,....4179 \} but not for k = 4180 . So c_k = 0 for k \in \{4172,....4179 \} and c_{4180} = 1 . So the first few n_j are \{2, 3, ....4169, 4171, 4180 \} . Of course the gap between the n_j grows as k does.
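This bookkeeping is easy to automate. The Python sketch below pairs each geometric term with the first unused term of \sum \frac{1}{\sqrt{k}} that it dominates; it is one reasonable convention (the indices quoted above come from a slightly different one, so small discrepancies near the gaps are to be expected):

```python
import math

# Pair each geometric term (1000/1001)^j with the first unused index k
# such that 1/sqrt(k) is smaller than that geometric term.
r = 1000.0 / 1001.0
g = r                      # current geometric term (1000/1001)^j
selected, k = [], 1
for j in range(1, 4300):
    while 1.0 / math.sqrt(k) >= g:   # skip terms of the divergent series that are too big
        k += 1
    selected.append(k)
    k += 1
    g *= r

print(selected[:5])                                 # early picks: [2, 3, 4, 5, 6]
print([n for n in selected if 4160 <= n <= 4200])   # the first gaps show up around here
```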

Now let’s get back to the cartoon example. From this example, we’ll attempt to state a more general result.

Claim: given \sum^{\infty}_{k=1} c_k \frac{1}{k} where c_k = 0 if k contains a 9 as one of its digits and c_k = 1 otherwise, the series \sum^{\infty}_{k=1} c_k \frac{1}{k} converges. Hint on how to prove this (without reading the solution): count the number of integers between 10^k and 10^{k+1} that lack a 9 as a digit. Then do a comparison test with a convergent geometric series, noting that every surviving term \frac{1}{m} with 10^k \leq m < 10^{k+1} is less than or equal to \frac{1}{10^k} .

How to prove the claim: we can start by “counting” the number of integers between 0 and 10^k that contain no 9’s as a digit.

Between 0 and 9: clearly 0-8 inclusive, or 9 numbers.

Between 10 and 99: a moment’s thought shows that we have 8(9) = 72 numbers with no 9 as a digit (hint: consider 10-19, 20-29…80-89) so this means that we have 9 + 8(9) = 9(1+8) = 9^2 numbers between 0 and 99 with no 9 as a digit.

This leads to the conjecture: there are 9^k numbers between 0 and 10^k -1 with no 9 as a digit and (8)9^{k-1} between 10^{k-1} and 10^k-1 with no 9 as a digit.

This is verified by induction; it is true for k = 1 .

Assume true for k = n . Then to find the number of numbers without a 9 between 10^n and 10^{n+1} -1 we get 8 (9^n) which then means we have 9^n + 8(9^n) = 9^n (8+1) = 9^{n+1} numbers between 0 and 10^{n+1}-1 with no 9 as a digit. So our conjecture is proved by induction.

Now note that 1 + \frac{1}{2} + ....+ \frac{1}{8} < 8*1 (eight surviving terms, each at most 1 ).

\frac{1}{10} + ...+ \frac{1}{18} + \frac{1}{20} + ...+ \frac{1}{28} + \frac{1}{30} + ...+ \frac{1}{88} < 8*9*\frac{1}{10}

\frac{1}{100} + ...+\frac{1}{188} + \frac{1}{200} + ....+\frac{1}{888} < 8*(9^2)\frac{1}{100}

This establishes that \sum_{k=10^n}^{10^{n+1}-1} c_k \frac{1}{k} < 8*(9^n)\frac{1}{10^n}

So it follows that \sum^{\infty}_{k=1} c_k \frac{1}{k} < 8\sum^{\infty}_{k=0} (\frac{9}{10})^k = 8 \frac{1}{1-\frac{9}{10}} = 80 and hence our selected sum is convergent.
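Both the counting and the block-by-block bound are easy to verify by brute force; here is a short Python sketch (an illustration only, for the first few blocks):

```python
# Check the counting argument and the block-by-block bound for the "no 9" series.
def no_nine(m):
    return '9' not in str(m)

for n in range(5):
    block = [m for m in range(10 ** n, 10 ** (n + 1)) if no_nine(m)]
    block_sum = sum(1.0 / m for m in block)
    print(f"block 10^{n}..10^{n + 1} - 1: {len(block)} surviving terms "
          f"(expected {8 * 9 ** n}), sum {block_sum:.4f} < bound {8 * 9 ** n / 10 ** n:.4f}")
```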

Further questions: ok, what is going on is that we threw out enough terms of the harmonic series for the series to converge. Between terms \frac{1}{10^k} and \frac{1}{10^{k+1}-1} we allowed 8*(9^k) terms to survive.

This suggests that if we permit up to M (10-\epsilon)^k terms between 10^k and 10^{k+1}-1 to survive (M, \epsilon fixed and positive) then we will have a convergent series. I’d be interested in seeing if there is a generalization of this.

But I am tired, I have a research article to review and I need to start class preparation for the upcoming spring semester. So I’ll stop here. For now.🙂

December 22, 2015

Multi leaf polar graphs and total area…

Filed under: calculus, elementary mathematics, integrals — collegemathteaching @ 4:07 am

I saw polar coordinate calculus for the first time in 1977. I’ve taught calculus as a TA and as a professor since 1987. And yet, I’ve never thought of this simple little fact.

Consider r(\theta) = sin(n \theta), 0 \leq \theta \leq 2 \pi . Now it is well known that the area formula (area enclosed by a polar graph, assuming no “doubling”, self intersections, etc.) is A = \frac{1}{2} \int^b_a (r(\theta))^2 d \theta

Now the leaved roses have the following types of graphs: n leaves if n is odd, and 2n leaves if n is even (in the odd case, the graph doubles itself).




So here is the question: how much total area is covered by the graph (all the leaves put together, do NOT count “overlapping”)?

Well, for n an integer, the answer is: \frac{\pi}{4} if n is odd, and \frac{\pi}{2} if n is even! That’s it! Want to know why?

Do the integral: if n is odd, our total area is \frac{n}{2}\int^{\frac{\pi}{n}}_0 (sin(n \theta))^2 d\theta = \frac{n}{2}\int^{\frac{\pi}{n}}_0 \frac{1}{2} - \frac{1}{2} cos(2n\theta) d\theta =\frac{\pi}{4} . If n is even, we have the same integral but the outside coefficient is \frac{2n}{2} = n , which is the only difference. Aside from parity, the number of leaves does not matter as to the total area!
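A quick symbolic check (a sketch assuming sympy is available; each leaf is swept out as \theta runs over an interval of length \frac{\pi}{n} ):

```python
import sympy as sp

theta = sp.symbols('theta')
for n in (2, 3, 4, 5, 7, 8):
    leaves = n if n % 2 == 1 else 2 * n           # n leaves for odd n, 2n for even n
    one_leaf = sp.Rational(1, 2) * sp.integrate(sp.sin(n * theta)**2,
                                                (theta, 0, sp.pi / n))
    print(n, leaves, leaves * one_leaf)           # pi/4 for odd n, pi/2 for even n
```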

Now the fun starts when one considers a fractional multiple of \theta and I might ponder that some.

October 29, 2015

The Alternating Series Test: the need for hypothesis

Filed under: calculus, series — collegemathteaching @ 9:49 pm

It is well known that if a series \sum a_k meets the following conditions:

1. (a_k)(a_{k+1}) < 0 for all k
2. lim_{k \rightarrow \infty} a_k = 0
3. |a_k| > |a_{k+1} | for all k

the series converges. This is the famous “alternating series test”.

I know that I am frequently remiss in discussing what can go wrong if condition 3 is not met.

An example that is useful is 1 - \frac{1}{\sqrt{2}} + \frac{1}{3} - \frac{1}{\sqrt{4}} + ...+\frac{1}{2n-1} - \frac{1}{\sqrt{2n}} .....

Clearly this series meets conditions 1 and 2: the series alternates and the terms approach zero. But the series can be written (carefully) as:

\sum_{k=1}^{\infty} (\frac{1}{2k-1} - \frac{1}{\sqrt{2k}}) .

Then one can combine the terms in the parentheses (for large k the combined term behaves like -\frac{1}{\sqrt{2k}} ) and do a limit comparison to the divergent series \sum_{k=1}^{\infty} \frac{1}{\sqrt{k}} to see that the series diverges (its partial sums drift off to -\infty ).
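A quick numerical illustration in Python (a sketch only) shows the partial sums wandering off to -\infty even though the terms alternate and tend to zero:

```python
import math

# Partial sums of 1 - 1/sqrt(2) + 1/3 - 1/sqrt(4) + ... : conditions 1 and 2 hold,
# but the terms are not decreasing in absolute value, and the sums drift to -infinity.
s = 0.0
for k in range(1, 1_000_001):
    s += 1.0 / (2 * k - 1) - 1.0 / math.sqrt(2 * k)
    if k in (10, 100, 10_000, 1_000_000):
        print(f"after {2 * k} terms: partial sum = {s:.2f}")
```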

July 13, 2015

Trolled by Newton’s Law of Cooling…

Filed under: calculus, differential equations, editorial — collegemathteaching @ 8:55 pm

From a humor website: there is a Facebook account called “customer service” that trolls customers making complaints. Though that isn’t a topic here, it is interesting to see Newton’s Cooling Law get mentioned:

