# College Math Teaching

## January 18, 2014

### Fun with divergent series (and uses: e.g., string theory)

One “fun” math book is Knopp’s book Theory and Application of Infinite Series. I highly recommend it to anyone who frequently teaches calculus, or to talented, motivated calculus students.

One of the more interesting chapters in the book is on “divergent series”. If that sounds boring, consider the following:

We all know that $\sum^{\infty}_{n=0} x^n = \frac{1}{1-x}$ when $|x| < 1$ and that the series diverges elsewhere, PROVIDED one uses the “sequence of partial sums” definition of convergence of sums. But, as Knopp points out, there are other definitions of convergence which leave all the convergent (by the usual definition) series convergent (to the same value) but also allow one to declare a larger set of series to be convergent.

Consider $1 - 1 + 1 - 1 + 1 - \cdots$

Of course this is a divergent geometric series by the usual definition. But note that if one uses the geometric series formula:

$\sum^{\infty}_{n=0} x^n = \frac{1}{1-x}$ and substitutes $x = -1$, which IS in the domain of the right-hand side (but NOT in the interval of convergence of the left-hand side), one obtains $1 - 1 + 1 - 1 + \cdots = \frac{1}{2}$.

Now this is nonsense unless we use a different definition of convergence, such as Cesàro summation: if $s_k$ is the usual partial sum $s_k = \sum^{k}_{n=0}a_n$, then one declares the Cesàro sum of the series to be $\lim_{m \rightarrow \infty} \frac{1}{m}\sum^{m}_{k=1}s_k$, provided this limit exists (this is the arithmetic average of the partial sums).


So for our $1 - 1 + 1 - 1 \cdots$ we easily see that $s_{2k+1} = 0, s_{2k} = 1$, so for $m$ even we get $\frac{1}{m}\sum^{m}_{k=1}s_k = \frac{m/2}{m} = \frac{1}{2}$, and for $m$ odd we get $\frac{(m-1)/2}{m}$, which tends to $\frac{1}{2}$ as $m$ tends to infinity.
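Watching the averages settle down numerically makes the definition concrete. Here is a minimal Python sketch (the helper name `cesaro_means` is my own, not from any library):

```python
# Cesàro summation of Grandi's series 1 - 1 + 1 - 1 + ...:
# average the partial sums and watch the averages approach 1/2.

def cesaro_means(terms):
    """Running averages (1/m) * (s_1 + ... + s_m) of the partial sums."""
    means = []
    partial = 0       # current partial sum
    running = 0       # running total s_1 + ... + s_m
    for m, a in enumerate(terms, start=1):
        partial += a
        running += partial
        means.append(running / m)
    return means

grandi = [(-1) ** n for n in range(10000)]   # 1, -1, 1, -1, ...
means = cesaro_means(grandi)
print(means[-1])   # very close to 0.5
```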

Now, we have this weird type of assignment.

But that won’t help with $\sum^{\infty}_{k = 1} k = 1 + 2 + 3 + 4 + 5 + \cdots$. But weirdly enough, string theorists have a way of assigning this particular series a number! In fact, the number they assign to it seems to make no sense at all: $-\frac{1}{12}$.

What the heck? Well, one way this is done goes as follows:

Consider $\sum^{\infty}_{k=0}x^k = \frac{1}{1-x}$. Now differentiate term by term to get $1 + 2x + 3x^2 + 4x^3 + \cdots = \frac{1}{(1-x)^2}$, and multiply both sides by $x$ to obtain $x + 2x^2 + 3x^3 + \cdots = \frac{x}{(1-x)^2}$. This has a pole of order 2 at $x = 1$. But now substitute $x = e^h$ and calculate the Laurent series about $h = 0$; the order-zero term turns out to be $-\frac{1}{12}$, which is exactly the value assigned to the series. Yes, this has applications in string theory!
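One can check the Laurent expansion with a computer algebra system; here is a sketch assuming sympy is available. The constant term it reports is $-\frac{1}{12}$:

```python
# Expand x/(1-x)^2 at x = e^h as a Laurent series about h = 0.
import sympy as sp

h = sp.symbols('h')
expr = sp.exp(h) / (1 - sp.exp(h))**2   # x/(1-x)^2 with x = e^h
series = sp.series(expr, h, 0, 2)
print(series)   # a double pole 1/h**2, then constant term -1/12
```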

Now of course, if one uses the usual definitions of convergence, I played fast and loose with the usual intervals of convergence and when I could differentiate term by term. This theory is NOT the usual calculus theory.

Now if you want to see some “fun nonsense” applied to this, spot how many “errors” are made along the way…it is a nice exercise.

What is going on: when one sums a series, one is really “assigning a value” to an object; think of this as a type of morphism from the set of series to the set of numbers. The usual definition of “sum of a series” is an especially nice morphism, as it allows, WITH PRECAUTIONS, some nice algebraic operations in the domain (the set of series) to be carried over into the range. I say “with precautions” because of things like the following:

1. If one is talking about series of numbers, then one must have an absolutely convergent series for rearrangements of a given series to be assigned the same number. Example: it is well known (the Riemann rearrangement theorem) that a conditionally convergent series can be rearranged to converge to any value of choice.

2. If one is talking about a series of functions (say, power series, where one sums things like $x^n$), one has to be in the OPEN interval of absolute convergence to justify term-by-term differentiation and integration; then, of course, a series is assigned a function rather than a number.
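The caveat in point 1 can be seen concretely with the alternating harmonic series $1 - \frac{1}{2} + \frac{1}{3} - \cdots$, which converges to $\ln 2$ but only conditionally. A minimal sketch of the greedy rearrangement idea (the function name and target value are my own choices for illustration):

```python
# Riemann rearrangement: take positive terms 1, 1/3, 1/5, ... until the
# running sum exceeds the target, then negative terms -1/2, -1/4, ... until
# it drops below, and repeat. The partial sums then approach the target.

def rearranged_sum(target, steps=200000):
    pos = 1      # next positive term is 1/pos (odd denominators)
    neg = 2      # next negative term is -1/neg (even denominators)
    s = 0.0
    for _ in range(steps):
        if s <= target:
            s += 1.0 / pos
            pos += 2
        else:
            s -= 1.0 / neg
            neg += 2
    return s

print(rearranged_sum(2.0))   # close to 2.0, though the usual sum is ln 2
```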

So when one tries to go with a different notion of convergence, one must be extra cautious as to which operations in the domain space carry through under the “assignment morphism” and what the “equivalence classes” of a given series are (e.g., can a series be rearranged and keep the same sum?).

## September 20, 2013

### Ok, have fun and justify this…

Filed under: calculus, popular mathematics, Power Series, series, Taylor Series — collegemathteaching @ 7:59 pm

Consider the series $\frac{\pi}{4} = \sum^{\infty}_{k=0} \frac{(-1)^k}{2k+1} = 1 - \frac{1}{3} + \frac{1}{5} - \cdots$. Ok, you say, “this works”; this is a series representation for $\pi$. Ok, it is, but why?

Now if you tell me: $\int^1_0 \frac{dx}{1+x^2} = \arctan(1) = \frac{\pi}{4}$, that $\frac{1}{1+x^2} = \sum^{\infty}_{k=0} (-1)^k x^{2k}$, and that term-by-term integration from $0$ to $1$ yields
$\sum^{\infty}_{k=0} (-1)^k \frac{1}{2k+1}$, I’d remind you of the phrase “interval of absolute convergence” and point out that the series for $\frac{1}{1+x^2}$ does NOT converge at $x = 1$, and that one has to be in the open interval of convergence to justify term-by-term integration.

True, the series DOES converge to $\frac{\pi}{4}$, but that is NOT so elementary to see; one route is Abel’s theorem on the boundary behavior of power series. 🙂
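A quick numerical look (my own illustration): the partial sums do creep toward $\frac{\pi}{4}$, but slowly; for an alternating series the error is bounded by the first omitted term.

```python
# Partial sums of the Leibniz series sum (-1)^k / (2k+1), which converges
# (non-absolutely) to pi/4.
import math

def leibniz_partial(n):
    return sum((-1) ** k / (2 * k + 1) for k in range(n))

print(abs(leibniz_partial(1000) - math.pi / 4))   # error well under 1/2001
```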

Boooo!

(Yes, the series IS correct…but the justification is trickier than merely doing the “obvious”).

## May 2, 2012

### Composition of an analytic function with a non-analytic one

Filed under: advanced mathematics, analysis, complex variables, derivatives, Power Series, series — collegemathteaching @ 7:39 pm

On a take-home exam, I gave a function of the type $f(z) = \sin(k|z|)$ and asked the students to explain why such a function is continuous everywhere but not analytic anywhere.

This really isn’t hard, but it got me to thinking: if $f$ is analytic at $z_0$ and NON-CONSTANT, is $f(|z|)$ ever analytic? Before you laugh, remember that in calculus class, $\ln|x|$ is differentiable wherever $x \neq 0$.

Ok, go ahead and laugh; after playing around with the Cauchy-Riemann equations a bit, I found that there was a much easier way, provided $f$ is analytic on some open neighborhood of a real number.

Since $f$ is analytic at $z_0$, with $z_0$ real, write $f(z) = \sum ^ {\infty}_{k =0} a_k (z-z_0)^k$ and then compose $f$ with $|z|$ by substituting into the series. Now if this composition is analytic, write the composed function as $f(x+iy) = u(x,y) + iv(x,y)$ and pull out the Cauchy-Riemann equations; it is now very easy to see that $v_x = v_y = 0$ on some open disk, which then implies, by the Cauchy-Riemann equations, that $u_x = u_y = 0$ as well, which means that the function is constant.

So, what if $z_0$ is NOT on the real axis?

Again, we write $f(x + iy) = u(x,y) + iv(x,y)$ and use $u_{X}, u_{Y}$ (and likewise $v_{X}, v_{Y}$) to denote the partials of these functions with respect to the first and second variables, respectively. Now $f(|z|) = f(\sqrt{x^2 + y^2} + 0i) = u(\sqrt{x^2 + y^2},0) + iv(\sqrt{x^2 + y^2},0)$. Now turn to the Cauchy-Riemann equations and calculate:
$\frac{\partial}{\partial x} u = u_{X}\frac{x}{\sqrt{x^2+y^2}}, \frac{\partial}{\partial y} u = u_{X}\frac{y}{\sqrt{x^2+y^2}}$
$\frac{\partial}{\partial x} v = v_{X}\frac{x}{\sqrt{x^2+y^2}}, \frac{\partial}{\partial y} v = v_{X}\frac{y}{\sqrt{x^2+y^2}}$
Insert into the Cauchy-Riemann equations:
$\frac{\partial}{\partial x} u = u_{X}\frac{x}{\sqrt{x^2+y^2}}= \frac{\partial}{\partial y} v = v_{X}\frac{y}{\sqrt{x^2+y^2}}$
$-\frac{\partial}{\partial x} v = -v_{X}\frac{x}{\sqrt{x^2+y^2}}= \frac{\partial}{\partial y} u = u_{X}\frac{y}{\sqrt{x^2+y^2}}$

From this and from the assumption that $y \neq 0$, we obtain after a little bit of algebra:
$u_{X}\frac{x}{y}= v_{X}, u_{X} = -v_{X}\frac{x}{y}$
This leads to $u_{X}\frac{x^2}{y^2} = v_{X}\frac{x}{y} = -u_{X}$, which implies either that $u_{X}$ is zero, which leads to the rest of the partials being zero (by Cauchy-Riemann), or that $\frac{x^2}{y^2} = -1$, which is absurd.

So $f$ must have been constant.
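A numerical sanity check of the conclusion (my own illustration, using $f = \exp$, a point off the real axis, and centered difference quotients): the Cauchy-Riemann residuals for $g(z) = e^{|z|}$ are far from zero.

```python
# Check the Cauchy-Riemann equations u_x = v_y, u_y = -v_x numerically for
# g(x + iy) = f(|z|) with f = exp. Since |z| is real, g is real-valued here,
# so v is identically 0 while u_x, u_y are not: C-R fails.
import cmath

def g(x, y):
    return cmath.exp(abs(complex(x, y)))

def cr_residuals(x, y, eps=1e-6):
    ux = (g(x + eps, y).real - g(x - eps, y).real) / (2 * eps)
    uy = (g(x, y + eps).real - g(x, y - eps).real) / (2 * eps)
    vx = (g(x + eps, y).imag - g(x - eps, y).imag) / (2 * eps)
    vy = (g(x, y + eps).imag - g(x, y - eps).imag) / (2 * eps)
    return ux - vy, uy + vx   # both would be ~0 for an analytic function

r1, r2 = cr_residuals(1.0, 1.0)
print(r1, r2)   # both far from 0
```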

## January 29, 2011

### Taylor Series: student misunderstanding

Filed under: advanced mathematics, calculus, Power Series, student learning, Taylor Series — collegemathteaching @ 10:05 pm

I am going to take a break from the Lebesgue stuff and maybe write more on that tomorrow.
My numerical analysis class just turned in some homework, and some of it revealed real misunderstandings about Taylor series and power series. I’ll provide some helpful hints to perplexed students.

For the experts who might be reading this: my assumption is that we are dealing with functions $f$ which are real analytic over some interval. To students: this means that $f$ can be differentiated as often as we’d like, that the series converges absolutely on some open interval and that the remainder term goes to zero as the number of terms approaches infinity.

This post will be about computing such a series.
First, I’ll give a helpful reminder that is crucial in calculating these series: a Taylor series is really just a power series representation of a function. And if one finds a power series which represents a function over a given interval, expanded about a given point, THAT SERIES IS UNIQUE, no matter how you come up with it. I’ll explain with an example:

Say you want to represent $f(x) = 1/(1-x)$ over the interval $(-1,1)$. You could compute it this way: you probably learned about the geometric series and that $f(x) = 1/(1-x) = 1 + x + x^2 + x^3....+ x^k+.... = \sum_{i=0}^{\infty} x^i$ for $x \in (-1,1)$.

Well, you could compute it by Taylor’s theorem, which says that such a series can be obtained by:
$f(x) = f(0) + f^{'}(0)x + f^{''}(0)x^2/2! + f^{'''}(0)x^3/3! + .... = \sum_{k=0}^{\infty} f^{(k)}(0)x^k/k!$. If you do such a calculation for $f(x) = 1/(1-x)$, one obtains $f^{'} = (1-x)^{-2}$, $f^{''} = 2(1-x)^{-3}$, $f^{'''} = 3!(1-x)^{-4}, ...$, so $f^{(k)}(0) = k!$, and plugging into Taylor’s formula leads to the usual geometric series. That is, the series can be calculated by any valid method; one does NOT need to retreat to the Taylor definition for calculation purposes.

Example: in the homework problem, students were asked to calculate Taylor polynomials (of various orders and about $x=0$) for a function that looked like this:

$f(x) = 3x\sin(3x) - (x-3)^2$. Some students tried to calculate the various derivatives and plug into Taylor’s formula, with grim results. It is much easier than that if one remembers that power series are unique! Sure, one CAN use Taylor’s formula, but that doesn’t mean that one should. Instead, it is much easier if one remembers that $\sin(x) = x - x^3/3! + x^5/5! - x^7/7! + \cdots$. Now to get $\sin(3x)$ one just substitutes $3x$ for $x$ and obtains: $\sin(3x) = 3x - x^3 3^3/3! + x^5 3^5/5! - x^7 3^7/7! + \cdots$. Then $3x\sin(3x) = 9x^2 - x^4 3^4/3! + x^6 3^6/5! - x^8 3^8/7! + \cdots$, and one subtracts off $(x-3)^2 = x^2 - 6x + 9$ to obtain the full power series: $-9 + 6x + 8x^2 - x^4 3^4/3! + x^6 3^6/5! - x^8 3^8/7! + \cdots = -9 + 6x + 8x^2 + \sum_{k=2}^{\infty} (-1)^{k+1} x^{2k} 3^{2k}/(2k-1)!$
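One can let a computer algebra system confirm the hand computation; a sketch assuming sympy is available:

```python
# Maclaurin expansion of 3x*sin(3x) - (x-3)^2, which should begin
# -9 + 6x + 8x^2 - (3^4/3!)x^4 + ...
import sympy as sp

x = sp.symbols('x')
f = 3 * x * sp.sin(3 * x) - (x - 3) ** 2
poly = sp.series(f, x, 0, 6).removeO()
print(sp.expand(poly))   # -9 + 6x + 8x^2 - (27/2)x^4, in sympy's term order
```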

Now calculating the bound for the remainder after $k$ terms is, in general, a pain. Sure, one can estimate with a graph, but that sort of defeats the point of approximating to begin with; one can also use rules of thumb, which tend to overstate the magnitude of the remainder term.