College Math Teaching

February 22, 2018

What is going on here: sum of cos(nx)…

Filed under: analysis, derivatives, Fourier Series, pedagogy, sequences of functions, series, uniform convergence — collegemathteaching @ 9:58 pm

This started innocently enough; I was attempting to explain why we have to be so careful when we differentiate a series of functions term by term: when the sum is infinite, the “sum of the derivatives” might fail to exist at all.

Anyone who is familiar with Fourier Series and the square wave understands this well:

\frac{4}{\pi} \sum^{\infty}_{k=1} \frac{1}{2k-1}\sin((2k-1)x) = \frac{4}{\pi}\left(\sin(x) + \frac{1}{3}\sin(3x) + \frac{1}{5}\sin(5x) + \cdots \right) yields the “square wave” function (taking the value zero at the jump discontinuities).

Here I graphed the partial sum out to 2k-1 = 21.
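For anyone who wants to reproduce this sort of picture, here is a minimal Python sketch (using numpy and matplotlib; not necessarily the software behind the original graph):

import numpy as np
import matplotlib.pyplot as plt

# Partial sum of the square wave series, using the terms through sin(21x).
x = np.linspace(-2 * np.pi, 2 * np.pi, 2000)
s = np.zeros_like(x)
for m in range(1, 22, 2):        # m = 2k - 1 = 1, 3, ..., 21
    s += np.sin(m * x) / m
s *= 4 / np.pi

plt.plot(x, s)
plt.title("Square wave partial sum through sin(21x)")
plt.show()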

Now the limit function fails to even be continuous. But it is differentiable away from the jump discontinuities, and its derivative is zero wherever it exists (that is, everywhere but a discrete set of points).

(Recall: here we have only pointwise convergence; to differentiate term by term we need stronger conditions, such as uniform convergence of the series together with uniform convergence of the series of derivatives.)

But, just for the heck of it, let’s differentiate term by term and see what we get:

\frac{4}{\pi}\sum^{\infty}_{k=1} \cos((2k-1)x) = \frac{4}{\pi}\left(\cos(x) + \cos(3x) + \cos(5x) + \cos(7x) + \cdots \right)

It is easy to see that this series doesn’t even converge to a function of any sort.

Example: let’s see what happens at x = \frac{\pi}{4}: \cos(\frac{\pi}{4}) = \frac{1}{\sqrt{2}}

\cos(\frac{\pi}{4}) + \cos(\frac{3\pi}{4}) = 0

\cos(\frac{\pi}{4}) + \cos(\frac{3\pi}{4}) + \cos(\frac{5\pi}{4}) = -\frac{1}{\sqrt{2}}

\cos(\frac{\pi}{4}) + \cos(\frac{3\pi}{4}) + \cos(\frac{5\pi}{4}) + \cos(\frac{7\pi}{4}) = 0

And this cycle repeats over and over again (the next partial sum is again \frac{1}{\sqrt{2}}, since \cos(\frac{9\pi}{4}) = \cos(\frac{\pi}{4})); no limit is possible.
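A quick numerical check of this cycle (plain Python, nothing beyond the standard library):

import math

# Running partial sums of cos(x) + cos(3x) + cos(5x) + ... at x = pi/4.
x = math.pi / 4
s = 0.0
for m in range(1, 16, 2):        # m = 1, 3, 5, ..., 15
    s += math.cos(m * x)
    print(m, round(s, 6))
# The running sums cycle: 1/sqrt(2), (about) 0, -1/sqrt(2), (about) 0, 1/sqrt(2), ...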

Something similar happens for x = \frac{p}{q}\pi where p, q are relatively prime positive integers.

But something weird is going on with this sum. I plotted the partial sums using the terms with 2k-1 \in \{1, 3, \ldots, 35\}

(and yes, I am using \frac{\pi}{4} \csc(x) as a type of “envelope function”)

BUT…if one, say, looks at \cos(29x) + \cos(31x) + \cos(33x) + \cos(35x)

we really aren’t getting convergence (even at irrational multiples of \pi ). But SOMETHING is going on!

I decided to plot the partial sums out to \cos(61x)

Something is going on, though it isn’t convergence. Note: by accident, I found that the pattern falls apart when I skipped one of the terms.

This is something to think about.

I wonder: for all x \in (0, \pi), \sup_{n \in \{1, 3, 5, 7, \ldots\}}\left|\sum_{k \in \{1, 3, \ldots, n\}}\cos(kx)\right| \leq |\csc(x)| and we can somehow get close to |\csc(x)| for given values of x by allowing enough terms…but the value of x at which this happens depends on how many terms we are using (it is not always the same value of x ).
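Here is a rough numerical experiment for this guess (only a sample of x values and finitely many partial sums, so it is evidence rather than a proof):

import math

# For sampled x in (0, pi), take the largest partial sum |cos(x) + cos(3x) + ... + cos(nx)|
# over odd n up to 2001, then multiply by |sin(x)| to compare against the conjectured |csc(x)| bound.
worst = 0.0
for i in range(1, 400):
    x = i * math.pi / 400
    s, biggest = 0.0, 0.0
    for m in range(1, 2002, 2):
        s += math.cos(m * x)
        biggest = max(biggest, abs(s))
    worst = max(worst, biggest * abs(math.sin(x)))
print(worst)    # comes out near 1/2 in this experiment, well within the conjectured bound of 1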

August 7, 2014

Engineers need to know this stuff part II

This is a 50-minute lecture in an engineering class; one can easily see the mathematical demands put on the students. Many of the seemingly abstract facts from calculus (differentiability, continuity, convergence of a sequence of functions) are heavily used. Of particular interest to me are the remarks from 45 to 50 minutes into the video:

Here is what is going on: suppose we have a sequence of functions f_n defined on some interval [a,b] and a function f defined on [a,b] . If \lim_{n \rightarrow \infty} \int^b_a (f_n(x) - f(x))^2 dx = 0 then we say that f_n \rightarrow f “in mean” (or “in the L^2 norm”). Basically, as n grows, the integral of the squared difference between f_n and f gets arbitrarily small.

However this does NOT mean that f_n converges to f pointwise!

If that seems strange: remember that the distance between the graphs can stay fixed over a set of decreasing measure.

Here is an example that illustrates this: consider the intervals [0, \frac{1}{2}], [\frac{1}{2}, \frac{5}{6}], [\frac{3}{4}, 1], [\frac{11}{20}, \frac{3}{4}], \ldots The intervals have lengths \frac{1}{2}, \frac{1}{3}, \frac{1}{4}, \ldots and start by moving left to right on [0,1] , then move right to left, and so on; they “dance” on [0,1]. Let f_n be the function that is 1 on the n-th interval and 0 off of it. Then clearly \lim_{n \rightarrow \infty} \int^1_0 (f_n(x) - 0)^2 dx = 0 , as the interval over which we are integrating shrinks to zero in length, but this sequence of functions doesn’t converge pointwise ANYWHERE on [0,1] : every point lands inside infinitely many of the intervals and outside infinitely many others. Of course, a subsequence of the functions does converge pointwise.
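Here is a sketch of this example in Python (the exact bouncing rule after the four listed intervals is my own reading of the construction, so treat that detail as an assumption):

from fractions import Fraction

def intervals(how_many):
    # The "dancing" intervals of lengths 1/2, 1/3, 1/4, ..., sweeping back and forth on [0,1].
    # Bouncing rule at the endpoints: anchor the interval at the endpoint and reverse direction.
    left, direction = Fraction(0), +1
    for n in range(2, how_many + 2):
        length = Fraction(1, n)
        if direction == +1 and left + length > 1:      # would overshoot 1: end at 1, turn around
            left, direction = 1 - length, -1
        elif direction == -1 and left - length < 0:    # would overshoot 0: start at 0, turn around
            left, direction = Fraction(0), +1
        elif direction == -1:
            left = left - length
        yield (left, left + length)
        if direction == +1:
            left = left + length

# f_n is 1 on the n-th interval and 0 elsewhere, so its squared L^2 norm is the interval's length,
# which shrinks to 0 -- yet at a fixed point such as x = 1/3 the values keep jumping between 0 and 1.
x = Fraction(1, 3)
for n, (a, b) in enumerate(intervals(25), start=1):
    print(n, float(a), float(b), "  f_n(1/3) =", int(a <= x <= b), "  ||f_n||^2 =", float(b - a))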

April 17, 2012

Pointwise versus Uniform convergence of sequences of continuous functions: Part II

Filed under: analysis, calculus, complex variables, uniform convergence — collegemathteaching @ 12:48 am

In my complex analysis class I was grading a problem of the following type:
given K \subset C where K is compact, and given a sequence of continuous functions f_n which converges to 0 pointwise on K and satisfies |f_1(z)| > |f_2(z)| > \cdots > |f_k(z)| > \cdots for all z \in K , show that the convergence is uniform.

In a previous post, I talked about why it was important that each f_k(z) be continuous.

Now what about the |f_1(z)| > |f_2(z)| > \cdots > |f_k(z)| > \cdots hypothesis? Can it be dispensed with?

Answer: well, no.

Let’s look at an example in real variables:

Let f_n(x) = \sin(\frac{e \pi}{2}e^{-nx})\sin(\frac{n \pi}{2} x) with x \in [0,1] . Note that f_n(0) = 0 for all n . To see that f_n converges to zero pointwise, note that \lim_{n \rightarrow \infty}e^{-nx} = 0 for all x > 0 , hence \lim_{n \rightarrow \infty}\sin(\frac{e \pi}{2}e^{-nx}) = 0 , which implies that f_n \rightarrow 0 by the squeeze theorem. But f_n does not converge to 0 uniformly, since for t = \frac{1}{n} we have f_n(t) = 1 .

Here is a graph of the functions for n = 5, 10, 20, 40
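A numerical check of the same functions (a sketch in Python; the supremum is only estimated on a grid):

import math

def f(n, x):
    # f_n(x) = sin((e*pi/2) * e^(-n*x)) * sin((n*pi/2) * x)
    return math.sin((math.e * math.pi / 2) * math.exp(-n * x)) * math.sin((n * math.pi / 2) * x)

for n in (5, 10, 20, 40):
    grid_max = max(f(n, i / 10000) for i in range(10001))   # crude estimate of the sup over [0,1]
    print(n, "f_n(0.3) =", round(f(n, 0.3), 6),
          " f_n(1/n) =", round(f(n, 1 / n), 6),
          " max on grid =", round(grid_max, 6))
# The value at any fixed x > 0 shrinks toward 0, but the maximum stays at 1 (attained at x = 1/n).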

April 12, 2012

Pointwise vs. Uniform convergence for functions: Importance of being continuous

In my complex analysis class I was grading a problem of the following type:
given K \subset C where K is compact, and given a sequence of continuous functions f_n which converges to 0 pointwise on K and satisfies |f_1(z)| > |f_2(z)| > \cdots > |f_k(z)| > \cdots for all z \in K , show that the convergence is uniform.

The proof is easy enough to do; my favorite way: given \epsilon > 0 , for each z \in K pick n such that |f_n(z)| < \epsilon and then, using the continuity of f_n , find a “delta disk” about z so that for all w in that disk, |f_n(w)| < \epsilon as well. Cover K by these open “delta disks”; by compactness one can select a finite number of them, each with an associated n , and let M be the maximum of this finite collection of n . Then for any m \geq M and any z \in K , the point z lies in one of the chosen disks, say with associated index n \leq M \leq m , so |f_m(z)| \leq |f_n(z)| < \epsilon by the decreasing hypothesis.

But we used the fact that f_n is continuous in our proof.

Here is what can happen if the f_n in question are NOT continuous:

Let’s work on the real interval [0,1] . Define g(x) = q if x = \frac{p}{q} in lowest terms, and let g(x) = 0 if x is irrational.

Now let f_n(x) = \frac{g(x)}{n} . Clearly f_n converges to 0 pointwise and the f_n have the decreasing function property. Nevertheless, it is easy to see that the convergence is far from uniform; in fact for each n, f_n is unbounded!
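A small illustration of the unboundedness, using exact rational arithmetic (a sketch; Fraction automatically reduces to lowest terms, which is exactly what the definition of g needs):

from fractions import Fraction

def g(x):
    # g(p/q) = q for p/q in lowest terms; Fraction stores rationals in lowest terms already.
    return x.denominator

n = 100
for q in (2, 10, 1000, 10**6):
    x = Fraction(1, q)
    print(f"f_{n}(1/{q}) = {g(x) / n}")
# Even with n fixed at 100, the values 0.02, 0.1, 10.0, 10000.0 grow without bound as q grows.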

Of course, we can also come up with a sequence of bounded functions that converge to 0 pointwise but fail to converge uniformly.

For this example, choose as our domain [0,1] and let h(x) = \frac{q-1}{q} if x = \frac{p}{q} in lowest terms, and let h(x) = 0 if x is irrational. Now let our sequence be f_n(x) = h(x)^n . Clearly f_n converges to zero pointwise. To see that this convergence is not uniform: let \epsilon \in (0,1) be given. At x = \frac{p}{q} we have (\frac{q-1}{q})^n < \epsilon exactly when n > \frac{\ln(\epsilon)}{\ln(\frac{q-1}{q})} , and the right hand side of this inequality varies with q and is, in fact, unbounded. So given a fixed n and \epsilon , one can always find a q large enough that (\frac{q-1}{q})^n \geq \epsilon ; the n that works depends on q , and no single n works for every x .
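Numerically, the failure of uniformity is easy to see: for any fixed n , rationals with large denominators keep f_n close to 1 (a plain-Python sketch):

n = 50
for q in (2, 10, 100, 10000):
    value = ((q - 1) / q) ** n      # f_n at any x = p/q in lowest terms
    print(f"q = {q}:  f_{n}(p/{q}) = {value:.4f}")
# q = 2 gives about 0.0000, but q = 10000 gives about 0.9950; the sup of f_n over [0,1] is 1 for every n.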
