College Math Teaching

January 27, 2016

A popular video and covering spaces…

Filed under: media, popular mathematics, topology — Tags: , , , , — collegemathteaching @ 11:16 pm

Think back to how you introduced the sine and cosine functions on the real line. Ok, you didn’t do it quite this way, but what you did, in effect, is to define $\sin(u) = Im(e^{iu})$ and $\cos(u) = Re(e^{iu})$ and then use “elementary trigonometry” to relate the “angle” $u$ to the arc length subtended on the circle $|z| = 1$. One notes that the map $\rho: R^1 \rightarrow C^1$ defined by $\rho(u) = e^{iu}$ has period $2\pi$.
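For readers who like to experiment, this periodicity is easy to check numerically. A quick Python sketch (the name `rho` for the covering map is my choice) verifies that $\rho(u + 2\pi) = \rho(u)$ and that sine and cosine really are the imaginary and real parts:

```python
import cmath
import math

# The covering map rho(u) = e^{iu} from the real line to the unit circle.
def rho(u: float) -> complex:
    return cmath.exp(1j * u)

# The deck transformation d(u) = u + 2*pi satisfies rho(d(u)) == rho(u):
for u in [0.0, 1.0, math.pi / 3, -2.5]:
    assert abs(rho(u + 2 * math.pi) - rho(u)) < 1e-12

# Sine and cosine recovered as the imaginary and real parts:
assert abs(rho(0.7).imag - math.sin(0.7)) < 1e-12
assert abs(rho(0.7).real - math.cos(0.7)) < 1e-12
```

The assertions pass because `cmath.exp(1j * u)` computes $\cos(u) + i\sin(u)$ directly.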

Note: the direction “to the right” on the real line is taken to be “counterclockwise” on the circle (red arrows).

Skip if you haven’t had a topology class
The top line is known as the “universal covering space” for the circle. The reason for the terminology has to do with topology. Depending on how long ago you had your topology course, you might remember that the fundamental group of the real line is trivial and that the associated group of deck transformations is infinite cyclic (generated by the map $d(u) = u + 2\pi$). One then shows that the fundamental group of the circle is isomorphic to the quotient of the group of deck transformations by the fundamental group of the real line; since the latter is trivial, the fundamental group of the circle is infinite cyclic.

Resume if you haven’t had a topology class

Notice the following: if one, say, “takes a walk” along the line in the direction of the red arrow, the action of the covering map is to take the same walk in the counterclockwise direction around the circle. That is, a walk on the line through the points $A_1, B_1, C_1, A_2, B_2, C_2, ...$ corresponds to a walk on the circle through $A, B, C, A, B, C, ...$. In particular, walking from $A_1$ to $A_2$ corresponds to one complete lap of the circle.

(that is, on the real line, $A_{n+1} = A_{n} + 2\pi$)

Now note the following: for BOTH the line and the circle, the direction is well defined. “To the right” on the real line is “counterclockwise” on the circle.

However: on the real line, it makes perfect sense to say that $A_1$ is “before” $B_1$ which is “before” $C_1$ which is “before” $A_2$ and so on; this is merely:

$A_1 < B_1 < C_1 < A_2 < B_2 ...$. This ordering is valid no matter where one starts on the line.

However, this “universal ordering” makes no sense on the circle UNLESS one specifies a start point. True, one moves from $A$ to $B$ to $C$ and back to $A$ again… but if one started at $B$ and began to walk, it would appear that $A$ came AFTER $B$ and not before.

So what?

This quirky animation from CraveFX starts off innocently enough: a janitorial worker mops up a leaky refrigerator and then picks up a coin from the ground. It’s not until you see what causes the refrigerator to leak and why the coin is on the ground that you realize you’re watching an intricate moving puzzle unfold before your eyes. The characters are stuck in an infinite loop caused by another character in their own infinite loop. It’s chaotic and great and hard to keep up with.

The video is below. Now the question: “what action occurred before what other action?” The answer: “it depends on when you started watching.” The direction of time corresponds to the red arrows in the above diagrams; THAT is well defined. Why? The reason is the Second Law of Thermodynamics: spills do NOT reverse themselves, hence the direction is set in stone, so to speak. But the order depends ON WHEN THE VIEWER STARTED WATCHING.

Anyway, this video reminded me of covering spaces.

January 26, 2016

More Fun with Divergent Series: redefining series convergence (Cesàro, etc.)

Filed under: analysis, calculus, sequences, series — Tags: , , — collegemathteaching @ 10:21 pm

This post is designed more to entertain myself than anything else. It builds on a previous post, which talks about deleting enough terms from a divergent series to make it a convergent one.

This post is inspired by Chapter 8 of Konrad Knopp’s classic Theory and Application of Infinite Series. The title of the chapter is Divergent Series.

Notation: when I talk about a series converging, I mean “converging” in the usual sense; e.g. if $s_n = \sum_{k=0}^{k=n} a_k$ and $\lim_{n \rightarrow \infty} s_n = s$ then $\sum_{k=0}^{\infty} a_k$ is said to be convergent with sum $s$.

All of this makes sense since things like limits are carefully defined. But as Knopp points out, in the “days of old” mathematicians saw these series as formal objects rather than the result of careful construction. So some of these mathematicians (like Euler) had no problem saying things like $\sum^{\infty}_{k=0} (-1)^k = 1-1+1-1+1..... = \frac{1}{2}$. Now this is complete nonsense by our usual modern definition. But we might note that $\frac{1}{1-x} = \sum^{\infty}_{k=0} x^k$ for $-1 < x < 1$ and that $x = -1$ IS in the domain of the left hand side.

So, is there a way of redefining the meaning of “infinite sum” that gives us this result, while not changing the value of convergent series (defined in the standard way)? As Knopp points out in his book, the answer is “yes” and he describes several definitions of summation that

1. Do not change the value of an infinite sum that converges in the traditional sense and
2. Allow more series to converge.

We’ll discuss one of these methods, commonly referred to as Cesàro summation. There are ways to generalize this.

Consider the Euler example: $1 -1 + 1 -1 + 1 -1......$. Clearly $s_{2k} = 1, s_{2k+1} = 0$, so this geometric series diverges. But notice that the arithmetic average of the partial sums, $c_n = \frac{s_0 + s_1 +...+s_n}{n+1}$, does tend to $\frac{1}{2}$ as $n$ tends to infinity: counting the 1’s among $s_0, ..., s_n$ gives $c_{2n} = \frac{n+1}{2n+1}$ whereas $c_{2n+1} = \frac{n+1}{2n+2} = \frac{1}{2}$, and both of these quantities tend to $\frac{1}{2}$ as $n$ tends to infinity.
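Here is a quick numerical sketch (Python; the variable names are mine) of these Cesàro means for Grandi’s series:

```python
from itertools import accumulate

# Terms of Grandi's series 1 - 1 + 1 - 1 + ...
N = 10000
terms = [(-1) ** k for k in range(N)]

# Partial sums s_n = 1, 0, 1, 0, ... never settle down.
s = list(accumulate(terms))

# Cesàro means c_n = (s_0 + s_1 + ... + s_n) / (n + 1).
c = [running / (n + 1) for n, running in enumerate(accumulate(s))]

assert s[-2:] == [1, 0]          # the partial sums keep oscillating
assert abs(c[-1] - 0.5) < 1e-3   # the Cesàro means approach 1/2
```

The divergent sequence of partial sums oscillates forever, yet its running averages settle down to Euler’s value $\frac{1}{2}$.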

So, we need to see that this method of summing is workable; that is, do infinite sums that converge in the previous sense still converge to the same number with this method?

The answer is, of course, yes. Here is how to see this: let $x_n$ be a sequence that converges to zero. Then for any $\epsilon > 0$ we can find $M$ such that $k > M$ implies $|x_k| < \epsilon$. So for $n > M$ we have $\frac{x_1 + x_2 + ...+ x_M + x_{M+1} + ...+ x_n}{n} = \frac{x_1+ ...+x_M}{n} + \frac{x_{M+1} + x_{M+2} + ....+x_n}{n}$. Because $M$ is fixed, the first fraction tends to zero as $n$ tends to infinity. The second fraction is smaller than $\epsilon$ in absolute value (it is at most $\frac{n-M}{n}\epsilon$). But $\epsilon$ is arbitrary, hence the sequence of arithmetic averages of this null sequence is itself a null sequence.

Now let $x_n \rightarrow L$ and let $c_n = \frac{x_1 + x_2 + ...+ x_n}{n}$. Now subtract: $c_n-L = \frac{(x_1-L) + (x_2-L) + ...+ (x_n-L)}{n}$, and the $x_n-L$ form a null sequence. Then by the argument above, so do the $c_n-L$; that is, $c_n \rightarrow L$.
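A numerical companion to this argument: averaging a sequence that already converges leaves the limit unchanged. The sample sequence $x_n = L + 1/n$ below is my choice, not from the post.

```python
# Cesàro averaging a convergent sequence x_n = L + 1/n -> L = 3.
L = 3.0
N = 200_000
running_sum = 0.0
cesaro = []
for n in range(1, N + 1):
    x_n = L + 1.0 / n
    running_sum += x_n
    cesaro.append(running_sum / n)   # c_n = (x_1 + ... + x_n) / n

# The averages converge to the same limit L (error ~ ln(N)/N).
assert abs(cesaro[-1] - L) < 1e-4
```

The error of the $n$-th average is $H_n/n$ (a harmonic number over $n$), which tends to zero, exactly as the null-sequence argument predicts.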

Now to be useful, we’d have to show that series that are Cesàro summable obey things like the multiplicative laws; they do, but I am too lazy to show that here. See the Knopp book.

I will mention a couple of interesting (to me) things though. Neither is really profound.

1. If a series diverges to infinity (that is, if for any positive $M$ there exists $n$ such that for all $k \geq n$, $s_k > M$), then the series is NOT Cesàro summable. It is relatively easy to see why: given such $M$ and $n$, consider $\frac{s_1 + s_2 + s_3 + ...+s_{n-1} + s_n + s_{n+1} + ...+s_m}{m} = \frac{s_1+ s_2 + ...+s_{n-1}}{m} + \frac{s_n + s_{n+1}+ .....+s_{m}}{m}$. The first fraction tends to zero as $m$ grows, while the second is greater than $\frac{m-n+1}{m} M$, which tends to $M$. Since $M$ was arbitrary, the Cesàro means become unbounded.
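This is easy to watch happen with the harmonic series (a quick sketch; the checkpoint indices are arbitrary choices of mine):

```python
# Cesàro means of the partial sums of the harmonic series also diverge.
running_s = 0.0   # s_n = 1 + 1/2 + ... + 1/n
total_s = 0.0     # s_1 + s_2 + ... + s_n
means = []
for n in range(1, 100_001):
    running_s += 1.0 / n
    total_s += running_s
    if n in (10, 1000, 100_000):
        means.append(total_s / n)    # the Cesàro mean at this checkpoint

# The means keep growing (roughly like ln(n) - 1) instead of settling:
assert means[0] < means[1] < means[2]
assert means[2] > 10
```

The Cesàro mean of the harmonic partial sums is approximately $H_n - 1$, so it trails the partial sums by about 1 but is every bit as unbounded.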

Upshot: there is no hope of making something like $\sum^{\infty}_{n=1} \frac{1}{n}$ into a convergent series by this method. Now there is a way of making an alternating, divergent series into a convergent one by doing something like a “double Cesàro sum” (take arithmetic averages of the arithmetic averages), but that is a topic for another post.

2. Cesàro summation may speed up convergence of an alternating series which passes the alternating series test, OR it might slow it down. I’ll have to develop this idea more fully. But I invite the reader to try Cesàro summation for $\sum^{\infty}_{k=1} (-1)^{k+1} \frac{1}{k}$, $\sum^{\infty}_{k=1} (-1)^{k+1} \frac{1}{k^2}$ and $\sum^{\infty}_{k=0} (-1)^k \frac{1}{2^k}$. In the first two cases, the series converges slowly enough that Cesàro summation speeds up convergence. Cesàro summation slows down the convergence of the geometric series, though. It is interesting to ponder why.
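Taking up that invitation for two of the three series (a sketch; the cutoff $N = 1000$ and the helper name are mine):

```python
import math

def last_partial_and_cesaro(terms):
    """Return (last partial sum, average of all partial sums)."""
    partial, total = 0.0, 0.0
    for t in terms:
        partial += t
        total += partial
    return partial, total / len(terms)

N = 1000

# Alternating harmonic series: sum is ln 2. Here Cesàro helps.
alt_harm = [(-1) ** (k + 1) / k for k in range(1, N + 1)]
s_n, c_n = last_partial_and_cesaro(alt_harm)
assert abs(c_n - math.log(2)) < abs(s_n - math.log(2))

# Alternating geometric series: sum is 2/3. Here Cesàro hurts.
geom = [(-1) ** k / 2 ** k for k in range(N)]
s_g, c_g = last_partial_and_cesaro(geom)
assert abs(s_g - 2 / 3) < abs(c_g - 2 / 3)
```

The geometric series already converges exponentially fast, so dragging its early, inaccurate partial sums into an average can only hurt; the slowly converging alternating harmonic series benefits because averaging cancels the oscillation of its partial sums.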

The walk of shame…but

Filed under: academia, research — Tags: , — collegemathteaching @ 9:07 pm

Well, I walked to our university library with a whole stack of books that I had checked out to do a project…one which didn’t work out.

But I did check out a new book to get some new ideas…and in the book I found a little bit of my work in it (properly attributed). That was uplifting.

Now to get to work…

January 20, 2016

Congratulations to the Central Missouri State Mathematics Department

Filed under: advanced mathematics, editorial, number theory — Tags: — collegemathteaching @ 10:43 pm

The largest known prime has been discovered by mathematicians at the University of Central Missouri (formerly Central Missouri State University).

For what it is worth, it is: $2^{74,207,281} -1$.

Now if you want to be depressed, go to the Smithsonian Facebook page and read the comments. The Dunning-Kruger effect is real. Let’s just say that in our era, our phones are smarter than our people. 🙂

January 14, 2016

Trimming a divergent series into a convergent one

Filed under: calculus, induction, sequences, series — Tags: , , — collegemathteaching @ 10:28 pm

This post is motivated by this cartoon

which I found at Evelyn Lamb’s post on an AMS Blog, this fun Forbes math post by Kevin Knudson and by a June 2015 article in Mathematics Magazine by R. John Ferdinands called Selective Sums of an Infinite Series.

Here is the question: start with a divergent series of positive terms which form a decreasing (non-increasing) sequence tending to zero, say $\sum^{\infty}_{k =1} \frac{1}{k}$. How does one select a subset of series terms to delete so as to obtain a convergent series? The Knudson article shows that one can do this with the harmonic series by, say, deleting all terms whose denominator contains a specific digit (say, 9). I’ll talk about the proof here. But I’d like to start at a more basic level and bring in language used in the Ferdinands article.

So, let’s set the stage: we will let $\sum a_k$ denote the divergent sum in question. All terms will be positive, $a_{k} \geq a_{k+1}$ for all $k$ and $\lim_{k \rightarrow \infty} a_k = 0$. Now let $c_k$ be a sequence with $c_k \in \{0,1\}$ for all $k$; then $\sum c_ka_k$ is called a selective sum of $\sum a_k$. I’ll call $c_k$ the selecting sequence and, from the start, rule out selecting sequences that are eventually 1 (the selected series would diverge since the original one does) or eventually 0 (which gives just a finite sum).

Now we’ll state a really easy result:

There is some non-eventually-constant $c_k$ such that $\sum c_ka_k$ converges. Here is why: because $\lim_{k \rightarrow \infty} a_k = 0$, for each $j \in \{1,2,3,...\}$ one can find an index $n_j, n_j \notin \{n_1, n_2, ...,n_{j-1} \}$ so that $\frac{1}{2^j} > a_{n_j}$. Now select $c_k = 1$ if $k \in \{n_1, n_2, n_3,... \}$ and $c_k =0$ otherwise. Then $\sum \frac{1}{2^j} > \sum c_ka_k$ and therefore the selected series converges by comparison with a convergent geometric series.

Of course, this result is pretty lame; this technique discards a lot of terms. A cheap way to discard “fewer” terms (“fewer” meaning: in terms of set inclusion): do the previous construction, but instead of using $\frac{1}{2}$ use $\frac{M}{M+1}$ where $M$ is a positive integer of choice. Note that $\sum^{\infty}_{k=1} (\frac{M}{M+1})^k = M$.

Here is an example of how this works. Consider the divergent series $\sum \frac{1}{\sqrt{k}}$ and the convergent geometric series $\sum (\frac{1000}{1001})^k$. Of course $\frac{1000}{1001} < 1$, so $c_1 = 0$, but then for $k \in \{2,3,....,4169 \}$ we have $(\frac{1000}{1001})^k > \frac{1}{\sqrt{k}}$. So $c_k = 1$ for $k \in \{2,3,4,....,4169 \}$. But $c_{4170} = 0$ because $(\frac{1000}{1001})^{4170} < \frac{1}{\sqrt{4170}}$. The next non-zero selection coefficient is $c_{4171}$, as $(\frac{1000}{1001})^{4170} > \frac{1}{\sqrt{4171}}$.

Now playing with this example, we see that $\frac{1}{\sqrt{k}} > (\frac{1000}{1001})^{4171}$ for $k \in \{4172, 4173,....,4179 \}$ but not for $k = 4180$. So $c_k = 0$ for $k \in \{4172,....,4179 \}$ and $c_{4180} = 1$. So the first few $n_j$ are $\{2, 3, ....,4169, 4171, 4180 \}$. Of course the gap between successive $n_j$ grows as $j$ does.
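The first crossover index can be confirmed with a brute-force scan; this just replays the comparison of $(\frac{1000}{1001})^k$ with $\frac{1}{\sqrt{k}}$:

```python
import math

r = 1000 / 1001

# Find the first index k >= 2 where the geometric term
# drops below 1/sqrt(k):
k = 2
while r ** k > 1 / math.sqrt(k):
    k += 1

assert k == 4170   # so c_k = 1 for k = 2..4169 and c_4170 = 0

# The geometric term "skipped" at 4170 still covers the next
# series term, so c_4171 = 1:
assert r ** 4170 > 1 / math.sqrt(4171)
```

This matches the hand computation above: the geometric terms shrink slowly enough that thousands of consecutive series terms survive before the first deletion.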

Now let’s get back to the cartoon example. From this example, we’ll attempt to state a more general result.

Claim: given $\sum^{\infty}_{k=1} c_k \frac{1}{k}$ where $c_k = 0$ if $k$ contains a 9 as one of its digits (and $c_k = 1$ otherwise), then $\sum^{\infty}_{k=1} c_k \frac{1}{k}$ converges. Hint on how to prove this (without reading the solution): count the number of integers between $10^k$ and $10^{k+1}$ that lack a 9 as a digit. Then do a comparison test with a convergent geometric series, noting that every surviving term from $\frac{1}{10^k}$ down to $\frac{1}{88...8}$ (the last denominator being the string of $k+1$ 8’s) is less than or equal to $\frac{1}{10^k}$.

How to prove the claim: we can start by “counting” the number of integers between 0 and $10^k$ that contain no 9’s as a digit.

Between 0 and 9: clearly 0-8 inclusive, or 9 numbers.

Between 10 and 99: a moment’s thought shows that we have $8(9) = 72$ numbers with no 9 as a digit (hint: consider 10-19, 20-29…80-89) so this means that we have $9 + 8(9) = 9(1+8) = 9^2$ numbers between 0 and 99 with no 9 as a digit.

This leads to the conjecture: there are $9^k$ numbers between 0 and $10^k -1$ with no 9 as a digit and $(8)9^{k-1}$ between $10^{k-1}$ and $10^k-1$ with no 9 as a digit.

This is verified by induction; it is true for $k = 1$.

Assume true for $k = n$. Then to find the number of numbers without a 9 between $10^n$ and $10^{n+1} -1$ we get $8 (9^n)$ which then means we have $9^n + 8(9^n) = 9^n (8+1) = 9^{n+1}$ numbers between 0 and $10^{n+1}-1$ with no 9 as a digit. So our conjecture is proved by induction.
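The count is easy to confirm by brute force (a small Python check; `no_nine` is my helper name):

```python
def no_nine(n: int) -> bool:
    """True if the decimal expansion of n contains no digit 9."""
    return '9' not in str(n)

for k in range(1, 6):
    # 9^k integers in [0, 10^k - 1] avoid the digit 9 ...
    count = sum(1 for n in range(10 ** k) if no_nine(n))
    assert count == 9 ** k
    # ... of which 8 * 9^(k-1) lie in [10^(k-1), 10^k - 1].
    upper = sum(1 for n in range(10 ** (k - 1), 10 ** k) if no_nine(n))
    assert upper == 8 * 9 ** (k - 1)
```

This is just the counting argument in code: a leading digit from $\{1,...,8\}$ and $k-1$ trailing digits from $\{0,...,8\}$.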

Now note that $1 + \frac{1}{2} + ....+ \frac{1}{8} < 8 \cdot 1 \cdot 1$ (the number 0 contributes no term),

$\frac{1}{10} + ...+ \frac{1}{18} + \frac{1}{20} + ...+ \frac{1}{28} + \frac{1}{30} + ...+ \frac{1}{88} < 8 \cdot 9 \cdot \frac{1}{10}$,

$\frac{1}{100} + ...+ \frac{1}{188} + \frac{1}{200} + ....+ \frac{1}{888} < 8 \cdot 9^2 \cdot \frac{1}{100}$.

This establishes that $\sum_{k=10^n}^{10^{n+1}-1} c_k \frac{1}{k} < 8 \cdot 9^n \cdot \frac{1}{10^n}$.

So it follows that $\sum^{\infty}_{k=1} c_k \frac{1}{k} < 8\sum^{\infty}_{k=0} (\frac{9}{10})^k = 8 \cdot \frac{1}{1-\frac{9}{10}} = 80$ and hence our selected sum is convergent.
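A brute-force check of the block-by-block bound and of the resulting bound of 80 (the actual value of this “no-9” series, the Kempner series, is known to be roughly 22.9, so 80 is loose but sufficient):

```python
def no_nine(n: int) -> bool:
    return '9' not in str(n)

# The surviving terms with 10^n <= k < 10^(n+1) sum to
# less than 8 * (9/10)^n:
for n in range(4):
    block = sum(1.0 / k
                for k in range(10 ** n, 10 ** (n + 1))
                if no_nine(k))
    assert block < 8 * (9 / 10) ** n

# Hence every partial sum stays under the geometric bound 80:
partial = sum(1.0 / k for k in range(1, 10 ** 5) if no_nine(k))
assert partial < 80
```

(The partial sums creep upward extremely slowly, which is why the series looks divergent if you only compute a few thousand terms.)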

Further questions: ok, what is going on is that we threw out enough terms of the harmonic series for the remaining series to converge. Between the terms $\frac{1}{10^k}$ and $\frac{1}{10^{k+1}-1}$ we allowed $8 \cdot 9^k$ terms to survive.

This suggests that if we permit up to $M (10-\epsilon)^k$ terms between $10^k$ and $10^{k+1}-1$ to survive ($M, \epsilon$ fixed and positive), then we will have a convergent series. I’d be interested in seeing if there is a generalization of this.

But I am tired, I have a research article to review and I need to start class preparation for the upcoming spring semester. So I’ll stop here. For now. 🙂