College Math Teaching

February 22, 2018

What is going on here: sum of cos(nx)…

Filed under: analysis, derivatives, Fourier Series, pedagogy, sequences of functions, series, uniform convergence — collegemathteaching @ 9:58 pm

This started innocently enough; I was attempting to explain why we have to be so careful when we differentiate a power series term by term: when one talks about infinite sums, the “sum of the derivatives” might fail to exist.

Anyone who is familiar with Fourier Series and the square wave understands this well:

$\frac{4}{\pi} \sum^{\infty}_{k=1} \frac{1}{2k-1}\sin((2k-1)x) = (\frac{4}{\pi})( \sin(x) + \frac{1}{3}\sin(3x) + \frac{1}{5}\sin(5x) + \dots)$ yields the “square wave” function (with value zero at the jump discontinuities)
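As a numerical sanity check, one can compute the partial sums directly; here is a Python sketch (the term count is an arbitrary choice):

```python
import math

def square_wave_partial(x, n_terms):
    """Partial sum of (4/pi) * sum sin((2k-1)x)/(2k-1), k = 1..n_terms."""
    total = 0.0
    for k in range(1, n_terms + 1):
        m = 2 * k - 1
        total += math.sin(m * x) / m
    return (4.0 / math.pi) * total

# Away from the jumps (multiples of pi) the sums approach +1 or -1.
print(square_wave_partial(1.0, 5000))   # close to 1
print(square_wave_partial(-1.0, 5000))  # close to -1
```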

Here I graphed to $2k-1 = 21$

Now the limit function fails to even be continuous. But it is differentiable except at the jump discontinuities, and the derivative is zero wherever it exists.

(recall: here we have pointwise convergence; to get a differentiable limit, we need other conditions such as uniform convergence together with uniform convergence of the derivatives).

But, just for the heck of it, let’s differentiate term by term and see what we get:

$(\frac{4}{\pi})\sum^{\infty}_{k=1} \cos((2k-1)x) = (\frac{4}{\pi})(\cos(x) + \cos(3x) + \cos(5x) + \cos(7x) + \dots)$

It is easy to see that this result doesn’t even converge to a function of any sort.

Example: let’s see what happens at $x = \frac{\pi}{4}$: $\cos(\frac{\pi}{4}) = \frac{1}{\sqrt{2}}$

$\cos(\frac{\pi}{4}) + \cos(\frac{3\pi}{4}) = 0$

$\cos(\frac{\pi}{4}) + \cos(\frac{3\pi}{4}) + \cos(\frac{5\pi}{4}) = -\frac{1}{\sqrt{2}}$

$\cos(\frac{\pi}{4}) + \cos(\frac{3\pi}{4}) + \cos(\frac{5\pi}{4}) + \cos(\frac{7\pi}{4}) = 0$

And this repeats over and over again; no limit is possible.
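The cycling is easy to verify numerically; a quick Python sketch (the number of partial sums shown is arbitrary):

```python
import math

x = math.pi / 4
partials = []
total = 0.0
for k in range(1, 9):
    total += math.cos((2 * k - 1) * x)
    partials.append(round(total, 10))

# The partial sums cycle through 1/sqrt(2), 0, -1/sqrt(2), 0, ...
print(partials)
```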

Something similar happens for $x = \frac{p}{q}\pi$ where $p, q$ are relatively prime positive integers.

But something weird is going on with this sum. I plotted the terms with $2k-1 \in \{1, 3, \dots, 35 \}$

(and yes, I am using $\frac{\pi}{4} \csc(x)$ as a type of “envelope function”)

BUT…if one, say, looks at $\cos(29x) + \cos(31x) + \cos(33x) + \cos(35x)$

we really aren’t getting convergence (even at irrational multiples of $\pi$). But SOMETHING is going on!

I decided to plot out to $\cos(61x)$

Something is going on, though it isn’t convergence. Note: by accident, I found that the pattern falls apart when I skipped one of the terms.

This is something to think about.

I wonder: is it true that for all $x \in (0, \pi)$, $\sup_{n \in \{1, 3, 5, 7, \dots\}}|\sum_{k \in \{1,3,\dots,n\}}\cos(kx)| \leq |\csc(x)|$, and that we can somehow get close to $\csc(x)$ for given values of $x$ by allowing enough terms…but the value of $x$ is determined by how many terms we are using (not always the same value of $x$).
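For what it’s worth, there is a closed form that seems to explain both the envelope and the non-convergence: the standard identity $\sum^{n}_{k=1} \cos((2k-1)x) = \frac{\sin(2nx)}{2\sin(x)}$, which traps the partial sums inside $\pm \frac{1}{2}|\csc(x)|$. A Python sketch checking both claims on a grid (the grid and term counts are arbitrary):

```python
import math

def partial_sum(x, n):
    """Sum of cos((2k-1)x) for k = 1..n."""
    return sum(math.cos((2 * k - 1) * x) for k in range(1, n + 1))

# Check the identity and the csc envelope on a grid in (0, pi).
for i in range(1, 50):
    x = math.pi * i / 50
    csc = 1.0 / abs(math.sin(x))
    for n in range(1, 40):
        s = partial_sum(x, n)
        closed = math.sin(2 * n * x) / (2 * math.sin(x))
        assert abs(s - closed) < 1e-9       # the closed form holds
        assert abs(s) <= 0.5 * csc + 1e-9   # partial sums stay in the envelope
print("identity and envelope verified on the grid")
```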

January 16, 2015

Power sets, Function spaces and puzzling notation

I’ll probably be posting point-set topology stuff due to my being excited about teaching the course…finally.

Power sets and exponent notation
If $A$ is a set, then the power set of $A$, often denoted by $2^A$, is a set that consists of all subsets of $A$.

For example, if $A = \{1, 2, 3 \}$, then $2^A = \{ \emptyset , \{1 \}, \{ 2 \}, \{3 \}, \{1, 2 \}, \{1,3 \}, \{2, 3 \}, \{1, 2, 3 \} \}$. Now it is no surprise that if the set $A$ is finite and has $n$ elements, then $2^A$ has $2^n$ elements.

However, there is another helpful way of listing $2^A$. A subset of $A$ can be defined by which elements of $A$ that it has. So, if we order the elements of $A$ as $1, 2, 3$ then the power set of $A$ can be identified as follows: $\emptyset = (0, 0, 0), \{1 \} = (1, 0, 0), \{ 2 \} = (0,1,0), \{ 3 \} = (0, 0, 1), \{1,2 \} = (1, 1, 0), \{1,3 \} = (1, 0, 1), \{2,3 \} = (0, 1, 1), \{1, 2, 3 \} = (1, 1, 1)$

So there is a natural correspondence between the elements of a power set and a sequence of binary digits. Of course, this makes the counting much easier.
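The binary correspondence translates directly into code. Here is a Python sketch that builds the power set of a finite set from the bit patterns $0$ through $2^n - 1$ (the function name is mine, purely illustrative):

```python
def power_set(elements):
    """Return all subsets of `elements`, one per binary pattern 0..2^n - 1."""
    n = len(elements)
    subsets = []
    for bits in range(2 ** n):
        # Bit i of `bits` decides whether elements[i] is in this subset.
        subsets.append({elements[i] for i in range(n) if bits & (1 << i)})
    return subsets

subs = power_set([1, 2, 3])
print(len(subs))  # 8
```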

The binary notation might seem like an unnecessary complication at first, but now consider the power set of the natural numbers: $2^N$. Of course, listing the subsets would be, at least, cumbersome if not impossible! But here the binary notation really shows its value. Remember that the binary notation is a sequence of 0’s and 1’s where a 0 in the i’th slot means that the i’th element isn’t in the subset and a 1 means that it is.

Since a subset of the natural numbers is defined by its list of elements, every subset has an infinite binary sequence associated with it. We can order the sequence in the usual order 1, 2, 3, 4, ….
and the sequence 1, 0, 0, 0…… corresponds to the set with just 1 in it, the sequence 1, 0, 1, 0, 1, 0, 1, 0,… corresponds to the set consisting of all odd integers, etc.

Then, of course, one can use Cantor’s Diagonal Argument to show that $2^N$ is uncountable; in fact, if one uses the fact that every non-negative real number has a binary expansion (possibly infinite), one then shows that $2^N$ has the same cardinality as the real numbers.
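The diagonal construction itself can be sketched in code: given the first $n$ terms of the first $n$ sequences in any proposed enumeration, flipping the diagonal entries yields a sequence that disagrees with the $k$-th listed sequence in slot $k$. A Python illustration (the listed sequences are arbitrary):

```python
def diagonal_flip(listed):
    """Given n binary sequences of length >= n, return a length-n sequence
    that differs from the k-th listed sequence in position k."""
    return [1 - listed[k][k] for k in range(len(listed))]

listed = [
    [0, 0, 0, 0],  # first four terms of four enumerated subsets
    [1, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 1, 1, 0],
]
d = diagonal_flip(listed)
print(d)  # [1, 1, 0, 1] -- appears on no row of the list
```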

Power notation
We can expand on this power notation. Remember that $2^A$ can be thought of as setting up a “slot” or an “index” for each element of $A$ and then assigning a $1$ or $0$ to every element of $A$. One can then think of this in an alternate way: $2^A$ can be thought of as the set of ALL functions from the elements of $A$ to the set $\{ 0, 1 \}$. This coincides with the “power set” concept, as set membership is determined by being either “in” or “not in”. So, the set in the exponent can be thought of as the indexing set and the base as the set of values each indexed slot can take on (sequences, in the case that the exponent set is either finite or countably infinite), OR this can be thought of as the set of all functions where the exponent set is the domain and the base set is the range.

Remember, we are talking about ALL possible functions and not all “continuous” functions, or all “morphisms”, etc.

So, $N^N$ can be thought of as either the set of all possible sequences of positive integers, or, equivalently, the set of all functions from $N$ to $N$.

Then $R^N$ is the set of all real number sequences (i. e. the types of sequences we study in calculus), or, equivalently, the set of all real valued functions of the positive integers.

Now it is awkward to try to assign an ordering to the reals, so when we consider $R^R$ it is best to think of this as the set of all functions $f: R \rightarrow R$, or equivalently, the set of all strings which are indexed by the real numbers and have real values.

Note that sequences don’t really seem to capture $R^R$ in the way that they capture, say, $R^N$. But there is another concept that does, and that concept is the concept of the net, which I will talk about in a subsequent post.

August 7, 2014

Engineers need to know this stuff part II

This is a 50 minute lecture in an engineering class; one can easily see the mathematical demands put on the students. Many of the seemingly abstract facts from calculus (differentiability, continuity, convergence of a sequence of functions) are heavily used. Of particular interest to me are the remarks from 45 to 50 minutes into the video:

Here is what is going on: if we have a sequence of functions $f_n$ defined on some interval $[a,b]$, if $f$ is defined on $[a,b]$, and if $\lim_{n \rightarrow \infty} \int^b_a (f_n(x) - f(x))^2 dx = 0$, then we say that $f_n \rightarrow f$ “in mean” (or “in the $L^2$ norm”). Basically, as $n$ grows, the area between the graphs of $f_n$ and $f$ gets arbitrarily small.

However this does NOT mean that $f_n$ converges to $f$ point wise!

If that seems strange: remember that the distance between the graphs can stay fixed over a set of decreasing measure.

Here is an example that illustrates this: consider the intervals $[0, \frac{1}{2}], [\frac{1}{2}, \frac{5}{6}], [\frac{3}{4}, 1], [\frac{11}{20}, \frac{3}{4}],...$ The intervals have length $\frac{1}{2}, \frac{1}{3}, \frac{1}{4},...$ and start by moving left to right on $[0,1]$, then moving right to left, and so on. They “dance” on $[0,1]$. Let $f_n$ be the function that is 1 on the $n$-th interval and 0 off of it. Then clearly $\lim_{n \rightarrow \infty} \int^1_0 (f_n(x) - 0)^2 dx = 0$, as the interval over which we are integrating is shrinking to zero, but this sequence of functions doesn’t converge pointwise ANYWHERE on $[0,1]$. Of course, a subsequence of the functions converges pointwise.
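A Python sketch of these “dancing” indicator functions (I use the standard wrap-around variant, where the interval of length $\frac{1}{n}$ simply continues from where the previous one ended, modulo 1, rather than bouncing; the conclusion is the same):

```python
import math

def interval(n):
    """Left endpoint (mod 1) and length of the n-th interval, n >= 2."""
    start = sum(1.0 / k for k in range(2, n)) % 1.0
    return start, 1.0 / n

def f(n, x):
    """Indicator function of the n-th interval, taken mod 1 on [0, 1)."""
    a, length = interval(n)
    b = a + length
    return 1 if (a <= x < min(b, 1.0)) or (b > 1.0 and x < b - 1.0) else 0

# The L^2 distance to 0 is sqrt(1/n) -> 0, yet at x = 1/2 the values
# 0 and 1 both keep recurring, so there is no pointwise limit there.
print(math.sqrt(interval(200)[1]))
print({f(n, 0.5) for n in range(100, 400)})
```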

January 18, 2014

Fun with divergent series (and uses: e. g. string theory)

One “fun” math book is Knopp’s book Theory and Application of Infinite Series. I highly recommend it to anyone who frequently teaches calculus, or to talented, motivated calculus students.

One of the more interesting chapters in the book is on “divergent series”. If that sounds boring consider the following:

we all know that $\sum^{\infty}_{n=0} x^n = \frac{1}{1-x}$ when $|x| < 1$ and diverges elsewhere, PROVIDED one uses the “sequence of partial sums” definition of convergence of sums. But, as Knopp points out, there are other definitions of convergence which leave all the convergent (by the usual definition) series convergent (to the same value) but also allow one to declare a larger set of series to be convergent.

Consider $1 - 1 + 1 -1 + 1.......$

of course this is a divergent geometric series by the usual definition. But note that if one uses the geometric series formula:

$\sum^{\infty}_{n=0} x^n = \frac{1}{1-x}$ and substitutes $x = -1$ which IS in the domain of the right hand side (but NOT in the interval of convergence in the left hand side) one obtains $1 -1 +1 -1 + 1.... = \frac{1}{2}$.

Now this is nonsense unless we use a different definition of sum convergence, such as the Cesaro summation: if $s_k$ is the usual partial sum $s_k = \sum^{k}_{n=0}a_n$, then one declares the Cesaro sum of the series to be $\lim_{m \rightarrow \infty} \frac{1}{m}\sum^{m}_{k=1}s_k$, provided this limit exists (this is the arithmetic average of the partial sums).

(see here)

So for our $1 -1 + 1 -1 ....$ we easily see that $s_{2k+1} = 0, s_{2k} = 1$ so for $m$ even we see $\frac{1}{m}\sum^{m}_{k=1}s_k = \frac{\frac{m}{2}}{m} = \frac{1}{2}$ and for $m$ odd we get $\frac{\frac{m-1}{2}}{m}$ which tends to $\frac{1}{2}$ as $m$ tends to infinity.
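The averaging is easy to watch numerically; a Python sketch (the number of terms is arbitrary):

```python
def cesaro_means(terms):
    """Running averages of the partial sums of `terms`."""
    means, s_total, run = [], 0.0, 0.0
    for k, a in enumerate(terms, start=1):
        run += a            # partial sum after k terms
        s_total += run      # accumulated sum of the partial sums
        means.append(s_total / k)
    return means

# Grandi's series 1 - 1 + 1 - 1 + ...
grandi = [(-1) ** n for n in range(1000)]
print(cesaro_means(grandi)[-1])  # near 1/2
```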

Now, we have this weird type of assignment.

But that won’t help with $\sum^{\infty}_{k = 1} k = 1 + 2 + 3 + 4 + 5.....$. But weirdly enough, string theorists find a way to assign this particular series a number! In fact, the number that they assign to this makes no sense at all: $-\frac{1}{12}$.

What the heck? Well, one way this is done is explained here:

Consider $\sum^{\infty}_{k=0}x^k = \frac{1}{1-x}$. Now differentiate term by term to get $1 + 2x + 3x^2 + 4x^3 + \dots = \frac{1}{(1-x)^2}$, and multiply both sides by $x$ to obtain $x + 2x^2 + 3x^3 + \dots = \frac{x}{(1-x)^2}$. This has a pole of order 2 at $x = 1$. But now substitute $x = e^h$ and calculate the Laurent series about $h = 0$; the 0 order term turns out to be $-\frac{1}{12}$. Yes, this has applications in string theory!
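One can check that constant term numerically without any symbolic machinery: subtract the double pole $\frac{1}{h^2}$ and let $h$ shrink. A Python sketch (the step sizes are arbitrary):

```python
import math

def regular_part(h):
    """e^h / (1 - e^h)^2 minus its double pole 1/h^2."""
    return math.exp(h) / (1.0 - math.exp(h)) ** 2 - 1.0 / h ** 2

for h in (0.1, 0.01, 0.001):
    print(h, regular_part(h))  # approaches -1/12 = -0.08333...
```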

Now of course, if one uses the usual definitions of convergence, I played fast and loose with the usual intervals of convergence and when I could differentiate term by term. This theory is NOT the usual calculus theory.

Now if you want to see some “fun nonsense” applied to this (spot how many “errors” are made….it is a nice exercise):

What is going on: when one sums a series, one is really “assigning a value” to an object; think of this as a type of morphism of the set of series to the set of numbers. The usual definition of “sum of a series” is an especially nice morphism as it allows, WITH PRECAUTIONS, some nice algebraic operations in the domain (the set of series) to be carried over into the range. I say “with precautions” because of things like the following:

1. If one is talking about series of numbers, then one must have an absolutely convergent series for derangements (rearrangements) of a given series to be assigned the same number. Example: it is well known that a conditionally convergent alternating series can be rearranged to converge to any value of choice.

2. If one is talking about a series of functions (say, power series, where one sums things like $x^n$) one has to be in the OPEN interval of absolute convergence to justify term by term differentiation and integration; then of course a series is assigned a function rather than a number.

So when one tries to go with a different notion of convergence, one must be extra cautious as to which operations in the domain space carry through under the “assignment morphism” and what the “equivalence classes” of a given series are (e. g. can a series be deranged and keep the same sum?)
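The caveat in point 1 can be animated: greedily take positive terms of the alternating harmonic series until the running total passes the target, then negative terms until it drops below, and repeat. A Python sketch (the target and step count are arbitrary choices):

```python
def rearrange_to(target, n_steps=100000):
    """Greedy rearrangement of 1 - 1/2 + 1/3 - ... aiming at `target`."""
    pos, neg = 1, 2   # next odd (positive) and even (negative) denominators
    total = 0.0
    for _ in range(n_steps):
        if total <= target:
            total += 1.0 / pos   # take the next positive term 1/(2k-1)
            pos += 2
        else:
            total -= 1.0 / neg   # take the next negative term -1/(2k)
            neg += 2
    return total

print(rearrange_to(0.3))  # close to 0.3
```

Since the positive and negative parts each diverge while the terms shrink to zero, the running total oscillates around the target with ever-smaller overshoots.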

May 29, 2013

Thoughts about Formal Laurent series and non-standard equivalence classes

I admit that I haven’t looked this up in the literature; I don’t know how much of this has been studied.

The objects of my concern: Laurent Series, which can be written like this: $\sum^{\infty}_{j = -\infty} a_j t^j$; an example might be:
$\dots -2t^{-2} - t^{-1} + 0 + t + 2t^2 + \dots = \sum^{\infty}_{j = -\infty} j t^j$. I’ll denote these series by $p(t)$.

Note: in this note, I am not at all concerned about convergence; I am thinking formally.

The following terminology is non-standard: we’ll call a Laurent series $p(t)$ of “bounded power” if there exists some integer $M$ such that $a_m = 0$ for all $m \ge M$; that is, $p(t) = \sum^{k}_{j = -\infty} a_j t^j$ for some $k \le M$.

Equivalence classes: two Laurent series $p(t), q(t)$ will be called equivalent if there exists an integer (possibly negative or zero) $k$ such that $t^k p(t) = q(t)$. The multiplication here is understood to be formal “term by term” multiplication.

Addition and subtraction of the Laurent series is the usual term by term operation.
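For representatives with only finitely many nonzero coefficients, the equivalence relation is easy to model in code; here is a Python sketch where a series is a dict mapping exponents to coefficients (the representation and function names are mine, purely illustrative):

```python
def shift(p, k):
    """Multiply the formal series `p` (dict: exponent -> coefficient) by t^k."""
    return {j + k: c for j, c in p.items() if c != 0}

def equivalent(p, q):
    """True if t^k * p == q for some integer k, i.e. the same coefficient
    pattern up to a shift of exponents."""
    p = {j: c for j, c in p.items() if c != 0}
    q = {j: c for j, c in q.items() if c != 0}
    if not p or not q:
        return p == q
    k = min(q) - min(p)
    return shift(p, k) == q

p = {-1: 1, 0: 2, 3: 5}             # t^{-1} + 2 + 5t^3
print(equivalent(p, shift(p, 7)))   # True
print(equivalent(p, {0: 1, 1: 3}))  # False
```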

Let $p_1(t), p_2(t), p_3(t), \dots, p_k(t), \dots$ be a sequence of equivalent Laurent series. We say that the sequence $p_n(t)$ converges to a Laurent series $p(t)$ if for every positive integer $M$ we can find an integer $n$ such that for all $k \ge n$, $p(t) - p_k(t) = t^M \sum^{\infty}_{j=0} b_j t^j$; that is, the smallest power appearing in the difference becomes arbitrarily large as we go farther out in the sequence.

Example: $p_k(t) = \sum^{k}_{j = -\infty} t^j$ converges to $p(t) = \sum^{\infty}_{j = -\infty} t^j$.

The question: given a Laurent series to be used as a limit, is there a sequence of equivalent “bounded power” Laurent series that converges to it?
If I can answer this question “yes”, I can prove a theorem in topology. 🙂

But I don’t know if this is even plausible or not.

April 12, 2012

Pointwise vs. Uniform convergence for functions: Importance of being continuous

In my complex analysis class I was grading a problem of the following type:
given $K \subset C$ where $K$ is compact, a sequence of continuous functions $f_n$ which converges to 0 pointwise on $K$, and $|f_1(z)| > |f_2(z)| > \dots > |f_k(z)| > \dots$ for all $z \in K$, show that the convergence is uniform.

The proof is easy enough to do; my favorite way: pick $\epsilon > 0$; for a given $z \in K$, choose $n$ such that $|f_n(z)| < \epsilon$ and then, using continuity, find a “delta disk” about $z$ so that for all $w$ in that disk, $|f_n(w)| < \epsilon$ also. Then cover $K$ by these open “delta disks”, select a finite number of such disks, each with an associated $n$, and let $M$ be the maximum of this finite collection of $n$’s; by the decreasing hypothesis, $|f_m(w)| < \epsilon$ for every $m \ge M$ and every $w \in K$.

But we used the fact that $f_n$ is continuous in our proof.

Here is what can happen if the $f_n$ in question are NOT continuous:

Let’s work on the real interval $[0,1]$. Define $g(x) = q$ if $x = \frac{p}{q}$ is rational in lowest terms, and let $g(x) = 0$ if $x$ is irrational.

Now let $f_n(x) = \frac{g(x)}{n}$. Clearly $f_n$ converges to 0 pointwise and the $f_n$ have the decreasing function property. Nevertheless, it is easy to see that the convergence is far from uniform; in fact for each $n, f_n$ is unbounded!

Of course, we can also come up with a sequence of bounded functions that converge to 0 pointwise but fail to converge uniformly.

For this example, choose as our domain $[0,1]$ and let $h(x) = \frac{q-1}{q}$ if $x = \frac{p}{q}$ in lowest terms, and let $h(x) = 0$ if $x$ is irrational. Now let our sequence be $f_n(x) = h(x)^n$. Clearly $f_n$ converges to zero pointwise. To see that this convergence is not uniform: let $\epsilon > 0$ be given; then $(\frac{q-1}{q})^n < \epsilon$ requires $n > \frac{\ln(\epsilon)}{\ln(\frac{q-1}{q})}$, and the right hand side of this inequality varies with $q$ and is, in fact, unbounded. Given a fixed $n$ and $\epsilon$, one can always find a $q$ for which the required threshold exceeds that fixed $n$; hence no single $n$ works for every $x$ at once.
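Numerically, the failure of uniformity is visible at once; a Python sketch (the cutoffs for $q$ and $n$ are arbitrary):

```python
# h(p/q) = (q-1)/q, so f_n at a rational with denominator q is ((q-1)/q)^n.
def f_n_at_denominator(q, n):
    return ((q - 1) / q) ** n

n = 50
# Pointwise: at a fixed rational, say x = 2/3 (so q = 3), the values go to 0.
print(f_n_at_denominator(3, n))  # tiny
# Not uniform: the sup of f_n over x stays near 1, witnessed by large q.
print(max(f_n_at_denominator(q, n) for q in range(2, 10001)))  # near 1
```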