# College Math Teaching

## September 20, 2013

### Ok, have fun and justify this…

Filed under: calculus, popular mathematics, Power Series, series, Taylor Series — collegemathteaching @ 7:59 pm

Ok, you say, “this works”; $\sum^{\infty}_{k=0} (-1)^k \frac{1}{2k+1} = \frac{\pi}{4}$ is a series representation for $\pi$. Ok, it is, but why?

Now if you tell me: $\int^1_0 \frac{dx}{1+x^2} = \arctan(1) = \frac{\pi}{4}$, that $\frac{1}{1+x^2} = \sum^{\infty}_{k=0} (-1)^k x^{2k}$, and that term by term integration yields
$\sum^{\infty}_{k=0} (-1)^k \frac{1}{2k+1}x^{2k+1}$, I’d remind you of the phrase “interval of absolute convergence,” point out that the series for $\frac{1}{1+x^2}$ does NOT converge at $x = 1$, and remind you that one has to be in the open interval of convergence to justify term by term integration.

True, the series DOES converge to $\frac{\pi}{4}$ at $x = 1$, but that is NOT so elementary to see. 🙂

Boooo!

(Yes, the series IS correct…but the justification is trickier than merely doing the “obvious”).
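As a quick numeric aside (my own sketch, not from the original post): the partial sums do crawl toward $\pi/4$, and the alternating-series error bound makes the slowness visible.

```python
import math

def leibniz_partial_sum(n):
    """Sum of the first n terms of sum_{k=0}^{n-1} (-1)^k / (2k+1)."""
    return sum((-1)**k / (2 * k + 1) for k in range(n))

# Four times the partial sum approaches pi, but only at rate ~1/n:
for n in (10, 1000, 100000):
    approx = 4 * leibniz_partial_sum(n)
    print(n, approx, abs(approx - math.pi))
```

The error after $n$ terms is bounded by the first omitted term, $\frac{1}{2n+1}$, which is why this is a terrible way to actually compute $\pi$.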

## May 29, 2013

### Thoughts about Formal Laurent series and non-standard equivalence classes

I admit that I haven’t looked this up in the literature; I don’t know how much of this has been studied.

The objects of my concern: Laurent Series, which can be written like this: $\sum^{\infty}_{j = -\infty} a_j t^j$; examples might be:
$\dots -2t^{-2} - t^{-1} + 0 + t + 2t^2 + \dots = \sum^{\infty}_{j = -\infty} j t^j$. I’ll denote such a series by $p(t)$.

Note: in this note, I am not at all concerned about convergence; I am thinking formally.

The following terminology is non-standard: we’ll call a Laurent series $p(t)$ of “bounded power” if there exists some integer $M$ such that $a_m = 0$ for all $m \ge M$; that is, $p(t) = \sum^{k}_{j = -\infty} a_j t^j$ for some $k < M$.

Equivalence classes: two Laurent series $p(t), q(t)$ will be called equivalent if there exists an integer (possibly negative or zero) $k$ such that $t^k p(t) = q(t)$. The multiplication here is understood to be formal “term by term” multiplication.

Addition and subtraction of the Laurent series is the usual term by term operation.

Let $p_1(t), p_2(t), p_3(t), \dots, p_k(t), \dots$ be a sequence of equivalent Laurent series. We say that the sequence $p_n(t)$ converges to a Laurent series $p(t)$ if for every positive integer $M$ we can find an integer $n$ such that for all $k \ge n$, $p(t) - p_k(t) = t^M \sum^{\infty}_{j=1} a_j t^j$; that is, the smallest power appearing in the difference becomes arbitrarily large as we go further out in the sequence.

Example: $p_k(t) = \sum^{k}_{j = -\infty} t^j$ converges to $p(t) = \sum^{\infty}_{j = -\infty} t^j$.
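The example can be checked with a small computational sketch (the encoding is entirely mine: a formal series is just a coefficient function on the integers, and we search a finite window of exponents). The smallest exponent where $p$ and $p_k$ differ is $k+1$, which marches off to infinity, matching the convergence definition above.

```python
def p(j):
    """Coefficient function of the target series: every coefficient is 1."""
    return 1

def p_k(k):
    """Coefficient function of the truncation sum_{j <= k} t^j."""
    return lambda j: 1 if j <= k else 0

def smallest_differing_power(k, window=50):
    """Smallest exponent in [-window, window) where p and p_k disagree."""
    for j in range(-window, window):
        if p(j) != p_k(k)(j):
            return j
    return None

print([smallest_differing_power(k) for k in (1, 5, 20)])  # -> [2, 6, 21]
```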

The question: given a Laurent series to be used as a limit, is there a sequence of equivalent “bounded power” Laurent series that converges to it?
If I can answer this question “yes”, I can prove a theorem in topology. 🙂

But I don’t know if this is even plausible or not.

## May 2, 2012

### Composition of an analytic function with a non-analytic one

Filed under: advanced mathematics, analysis, complex variables, derivatives, Power Series, series — collegemathteaching @ 7:39 pm

On a take home exam, I gave a function of the type $f(z) = \sin(k|z|)$ and asked the students to explain why such a function is continuous everywhere but not analytic anywhere.

This really isn’t hard, but it got me to thinking: if $f$ is analytic at $z_0$ and NON CONSTANT, is $f(|z|)$ ever analytic? Before you laugh, remember that in calculus class, $\ln|x|$ is differentiable wherever $x \neq 0$.

Ok, go ahead and laugh; after playing around with the Cauchy-Riemann equations a bit, I found that there is a much easier way, at least when $f$ is analytic on some open neighborhood of a real number.

Since $f$ is analytic at $z_0$, with $z_0$ real, write $f = \sum ^{\infty}_{k=0} a_k (z-z_0)^k$, compose $f$ with $|z|$, and substitute into the series. Now if this composition is analytic, write the composed function as $f(x+iy) = u(x,y) + iv(x,y)$ and pull out the Cauchy-Riemann equations: it is now very easy to see that $v_x = v_y = 0$ on some open disk, which then implies, by the Cauchy-Riemann equations, that $u_x = u_y = 0$ as well; hence the function is constant.

So, what if $z_0$ is NOT on the real axis?

Again, we write $f(x + iy) = u(x,y) + iv(x,y)$ and we use $u_{X}, u_{Y}$ (and similarly $v_{X}, v_{Y}$) to denote the partials of these functions with respect to the first and second variables respectively. Now $f(|z|) = f(\sqrt{x^2 + y^2} + 0i) = u(\sqrt{x^2 + y^2},0) + iv(\sqrt{x^2 + y^2},0)$. Now turn to the Cauchy-Riemann equations and calculate:
$\frac{\partial}{\partial x} u = u_{X}\frac{x}{\sqrt{x^2+y^2}}, \frac{\partial}{\partial y} u = u_{X}\frac{y}{\sqrt{x^2+y^2}}$
$\frac{\partial}{\partial x} v = v_{X}\frac{x}{\sqrt{x^2+y^2}}, \frac{\partial}{\partial y} v = v_{X}\frac{y}{\sqrt{x^2+y^2}}$
Insert into the Cauchy-Riemann equations:
$\frac{\partial}{\partial x} u = u_{X}\frac{x}{\sqrt{x^2+y^2}}= \frac{\partial}{\partial y} v = v_{X}\frac{y}{\sqrt{x^2+y^2}}$
$-\frac{\partial}{\partial x} v = -v_{X}\frac{x}{\sqrt{x^2+y^2}}= \frac{\partial}{\partial y} u = u_{X}\frac{y}{\sqrt{x^2+y^2}}$

From this and from the assumption that $y \neq 0$ we obtain after a little bit of algebra:
$u_{X}\frac{x}{y}= v_{X}, u_{X} = -v_{X}\frac{x}{y}$
This leads to $u_{X}\frac{x^2}{y^2} = v_{X}\frac{x}{y}=-v_{X}$ which implies either that $u_{X}$ is zero which leads to the rest of the partials being zero (by C-R), or this means that $\frac{x^2}{y^2} = -1$ which is absurd.

So $f$ must have been constant.
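A numeric sanity check is easy here (my own sketch, using the take-home example with $k = 1$): for $g(z) = \sin(|z|)$ we have $u(x,y) = \sin(\sqrt{x^2+y^2})$ and $v \equiv 0$, so analyticity would force $u_x = v_y = 0$, and a finite difference shows $u_x$ is far from zero away from the origin.

```python
import math

def u(x, y):
    """Real part of sin(|z|); the imaginary part v is identically zero."""
    return math.sin(math.hypot(x, y))

def partial_x(f, x, y, h=1e-6):
    """Central finite-difference approximation to the x-partial."""
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

# Cauchy-Riemann would require u_x = v_y = 0, but at (1, 1):
ux = partial_x(u, 1.0, 1.0)
print(ux)  # approximately cos(sqrt(2))/sqrt(2), clearly nonzero
```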

## August 9, 2011

### Quantum Mechanics and Undergraduate Mathematics IX: Time evolution of an Observable Density Function

We’ll assume a state function $\psi$ and an observable whose Hermitian operator is denoted by $A$ with eigenvectors $\alpha_k$ and eigenvalues $a_k$. If we take an observation (say, at time $t = 0$) we obtain the probability density function $p(Y = a_k) = | \langle \alpha_k, \psi \rangle |^2$ (we make the assumption that there is only one eigenvector per eigenvalue).

We saw how the expectation (the expected value of the associated density function) changes with time. What about the time evolution of the density function itself?

Since $\langle \alpha_k, \psi \rangle$ completely determines the density function and because $\psi$ can be expanded as $\psi = \sum^{\infty}_{k=1} \langle \alpha_k, \psi \rangle \alpha_k$, it makes sense to determine $\frac{d}{dt} \langle \alpha_k, \psi \rangle$. Note that the eigenvectors $\alpha_k$ and eigenvalues $a_k$ do not change with time and therefore can be regarded as constants.

$\frac{d}{dt} \langle \alpha_k, \psi \rangle = \langle \alpha_k, \frac{\partial}{\partial t}\psi \rangle = \langle \alpha_k, \frac{-i}{\hbar}H\psi \rangle = \frac{-i}{\hbar}\langle \alpha_k, H\psi \rangle$

We can take this further: we now write $H\psi = H\sum_j \langle \alpha_j, \psi \rangle \alpha_j = \sum_j \langle \alpha_j, \psi \rangle H \alpha_j$. We now substitute into the previous equation to obtain:
$\frac{d}{dt} \langle \alpha_k, \psi \rangle = \frac{-i}{\hbar}\langle \alpha_k, \sum_j \langle \alpha_j, \psi \rangle H \alpha_j \rangle = \frac{-i}{\hbar}\sum_j \langle \alpha_k, H\alpha_j \rangle \langle \alpha_j, \psi \rangle$

Denote $\langle \alpha_j, \psi \rangle$ by $a_j$ (a slight abuse of notation: these coefficients are not the eigenvalues of $A$). Then we see that we have an infinite set of coupled differential equations: $\frac{d}{dt} a_k = \frac{-i}{\hbar} \sum_j a_j \langle \alpha_k, H\alpha_j \rangle$. That is, the rate of change of any one $a_k$ depends on all of the $a_j$, which really isn’t a surprise.
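For readers who like to see the coupling concretely, here is a toy two-state sketch (entirely mine: $\hbar = 1$ and a made-up Hermitian matrix of elements $\langle \alpha_k, H \alpha_j \rangle$) integrating the coupled system $\frac{d}{dt} \langle \alpha_k, \psi \rangle = \frac{-i}{\hbar}\sum_j \langle \alpha_k, H \alpha_j \rangle \langle \alpha_j, \psi \rangle$ with Runge-Kutta steps; the off-diagonal entries shuffle amplitude between the components while the total probability stays at 1.

```python
import numpy as np

hbar = 1.0
# Hypothetical matrix elements <alpha_k, H alpha_j>; Hermitian by construction.
H = np.array([[1.0, 0.3], [0.3, 2.0]], dtype=complex)

def evolve(a, t, steps=2000):
    """Integrate da/dt = (-i/hbar) H a with classical RK4 steps."""
    dt = t / steps
    f = lambda v: (-1j / hbar) * (H @ v)
    for _ in range(steps):
        k1 = f(a)
        k2 = f(a + dt / 2 * k1)
        k3 = f(a + dt / 2 * k2)
        k4 = f(a + dt * k3)
        a = a + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return a

a0 = np.array([1.0, 0.0], dtype=complex)   # all amplitude in the first component
a1 = evolve(a0, t=1.0)
print(np.abs(a1)**2)        # some probability has leaked into the second component
print(np.linalg.norm(a1))   # total probability is still (numerically) 1
```

Because $H$ is Hermitian the exact evolution is unitary, which is exactly the normalization identity derived below.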

We can see this another way: because we have a density function, $\sum_j |\langle \alpha_j, \psi \rangle |^2 = 1$. Now rewrite: $\sum_j |\langle \alpha_j, \psi \rangle |^2 = \sum_j \langle \alpha_j, \psi \rangle \overline{\langle \alpha_j, \psi \rangle } = \sum_j a_j \overline{a_j} = 1$. Now differentiate with respect to $t$ and use the product rule: $\sum_j \left( \frac{d}{dt}a_j \right) \overline{a_j} + a_j \left( \frac{d}{dt} \overline{a_j} \right) = 0$.

Things get a bit easier if the original operator $A$ is compatible with the Hamiltonian $H$; in this case the operators share common eigenvectors. We denote the eigenvectors for $H$ by $\eta_k$ and then
$\frac{d}{dt} a_k = \frac{-i}{\hbar} \sum_j a_j \langle \alpha_k, H\alpha_j \rangle$ becomes:
$\frac{d}{dt} \langle \eta_k, \psi \rangle = \frac{-i}{\hbar} \sum_j \langle \eta_j, \psi \rangle \langle \eta_k, H\eta_j \rangle$. Now use the fact that the $\eta_j$ are eigenvectors for $H$ and are orthogonal to each other to obtain:
$\frac{d}{dt} \langle \eta_k, \psi \rangle = \frac{-i}{\hbar} e_k \langle \eta_k, \psi \rangle$ where $e_k$ is the eigenvalue for $H$ associated with $\eta_k$.

Now we use differential equations (along with existence and uniqueness conditions) to obtain:
$\langle \eta_k, \psi \rangle = \langle \eta_k, \psi_0 \rangle \exp(-ie_k \frac{t}{\hbar})$ where $\psi_0$ is the initial state vector (before it had time to evolve).

This has two immediate consequences:

1. $\psi(x,t) = \sum_j \langle \eta_j, \psi_0 \rangle \exp(-ie_j \frac{t}{\hbar}) \eta_j$
That is the general solution to the time-evolution equation. The reader might be reminded that $\exp(ib) = \cos(b) + i \sin(b)$.

2. Returning to the probability distribution: $P(Y = e_k) = |\langle \eta_k, \psi \rangle |^2 = |\langle \eta_k, \psi_0 \rangle |^2 |\exp(-ie_k \frac{t}{\hbar})|^2 = |\langle \eta_k, \psi_0 \rangle |^2$. But since $A$ is compatible with $H$, we have the same eigenvectors, hence we see that the probability density function does not change AT ALL. So such an observable really is a “constant of motion”.
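A quick numeric check of consequence 2 (my own sketch, with $\hbar = 1$ and an invented $2 \times 2$ Hamiltonian): expand $\psi_0$ in the $H$-eigenbasis, attach the phase factors $\exp(-ie_j t)$, and the probabilities $|\langle \eta_k, \psi(t) \rangle|^2$ come out the same at every $t$.

```python
import numpy as np

H = np.array([[1.0, 0.3], [0.3, 2.0]])      # hypothetical Hamiltonian matrix
evals, evecs = np.linalg.eigh(H)            # eigenvalues e_k, eigenvectors eta_k (columns)
psi0 = np.array([0.6, 0.8], dtype=complex)  # a normalized initial state

def psi(t):
    """General solution: sum_j <eta_j, psi_0> exp(-i e_j t) eta_j  (hbar = 1)."""
    coeffs = evecs.conj().T @ psi0          # the coefficients <eta_j, psi_0>
    return evecs @ (np.exp(-1j * evals * t) * coeffs)

for t in (0.0, 1.0, 5.0):
    probs = np.abs(evecs.conj().T @ psi(t))**2
    print(t, probs)   # the same two probabilities appear at every time
```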

Stationary States
Since $H$ is an observable, we can always write $\psi(x,t) = \sum_j \langle \eta_j, \psi(x,t) \rangle \eta_j$. Then we have $\psi(x,t) = \sum_j \langle \eta_j, \psi_0 \rangle \exp(-ie_j \frac{t}{\hbar}) \eta_j$.

Now suppose $\psi_0$ is precisely one of the eigenvectors for the Hamiltonian; say $\psi_0 = \eta_k$ for some $k$. Then:

1. $\psi(x,t) = \exp(-ie_k \frac{t}{\hbar}) \eta_k$
2. For any $t \geq 0$, $P(Y = e_k) = 1$ and $P(Y \neq e_k) = 0$.

Note: no other operator has made an appearance.
Now recall our first postulate: states are determined only up to a scalar multiple of unit modulus. Hence the state undergoes NO time evolution, no matter what observable is being observed.

We can see this directly: let $A$ be an operator corresponding to any observable. Then $\langle \alpha_k, A \psi_k \rangle = \langle \alpha_k, A \exp(-i e_k \frac{t}{\hbar})\eta_k \rangle = \exp(-i e_k \frac{t}{\hbar})\langle \alpha_k, A \eta_k \rangle$. Then because the probability distribution is completely determined by the eigenvalues and $|\langle \alpha_k, A \eta_k \rangle |$, and $|\exp(-i e_k \frac{t}{\hbar})| = 1$, the distribution does NOT change with time. This motivates us to define the stationary states of the system: $\psi_{(k)} = \exp(-i e_k \frac{t}{\hbar})\eta_k$.

Gillespie notes that much of the problem solving in quantum mechanics amounts to solving the eigenvalue problem $H \eta_k = e_k \eta_k$, which is often difficult to do. But if one can do that, one can determine the stationary states of the system.

## April 3, 2011

### Infinite Series: the Root and Ratio tests

Filed under: advanced mathematics, analysis, calculus, infinite series, series — collegemathteaching @ 3:29 am

This post is written mostly for those who are new to teaching calculus rather than for students learning calculus for the first time. Experienced calculus teachers and those whose analysis background is still fresh will likely be bored. 🙂

The setting will be series $\Sigma^\infty_{k=1} a_k$ with $a_k > 0$. We will use the usual notion of convergence; that is, the series converges if the sequence of partial sums converges. (There are other, non-standard notions of convergence, if that last statement puzzles you.)

I’ll give the usual statements of the root test and the ratio test, given $\Sigma^\infty_{k=1} a_k$ with $a_k > 0$.

Root Test
Suppose $\lim_{k \rightarrow \infty}(a_k)^{\frac{1}{k}} = c$. If $c > 1$ the series diverges. If $c < 1$ the series converges. If $c = 1$ the test is inconclusive.

Ratio Test
Suppose $\lim_{k \rightarrow \infty} a_{k+1}/a_k = c$. If $c > 1$ the series diverges. If $c < 1$ the series converges. If $c = 1$ the test is inconclusive.

Quick examples of how these tests are used
Example one: show that $\Sigma^{\infty}_{k=1} (x/k)^{k}$ converges for all $x \geq 0$. Apply the root test and note $\lim_{k \rightarrow \infty} ((x/k)^{k})^{1/k} = \lim_{k \rightarrow \infty} x/k = 0$ for all $x \geq 0$, hence the series converges for every such $x$.

Example two: show that $\Sigma^{\infty}_{k=1} x^k/k!$ converges for all $x \geq 0$. Consider $\lim_{k \rightarrow \infty} \frac{x^{k+1}/(k+1)!}{x^k/k!} = \lim_{k \rightarrow \infty} \frac{x}{k+1} = 0 < 1$ for all $x \geq 0$, hence the series converges.
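Both limits can be eyeballed numerically (a throwaway sketch of mine; the closed forms $((x/k)^k)^{1/k} = x/k$ and $\frac{x^{k+1}/(k+1)!}{x^k/k!} = \frac{x}{k+1}$ are visible in the printed values):

```python
import math

x = 10.0

def root_term(k):
    """k-th term of the first example series, (x/k)^k."""
    return (x / k)**k

def ratio_term(k):
    """k-th term of the second example series, x^k / k!."""
    return x**k / math.factorial(k)

print([root_term(k)**(1 / k) for k in (20, 100)])              # equals x/k
print([ratio_term(k + 1) / ratio_term(k) for k in (20, 100)])  # equals x/(k+1)
```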

However these tests, as taught, are often more limited than they need to be. For example, consider the series $\Sigma^{\infty}_{k=1} (|\sin(k)|/2)^{k}$. The root test, as stated, doesn't apply, since $\lim_{k \rightarrow \infty} ((|\sin(k)|/2)^{k})^{1/k} = \lim_{k \rightarrow \infty} |\sin(k)|/2$ fails to exist. But it is clear that $\sup_k (|\sin(k)|/2) \leq 1/2$, so the series is dominated by the convergent geometric series $\Sigma^{\infty}_{k=1} (1/2)^k$ and therefore converges.

There is also a common misconception that the root test and the ratio tests are equivalent. They aren’t; in fact, we’ll show that if a series passes the ratio test, it will also pass the root test but that the reverse is false. We’ll also provide an easy to understand stronger version of these two tests. The stronger versions should be easily within the grasp of the best calculus students and within the grasp of beginning analysis/advanced calculus students.

Note: there is nothing original here; I can recommend the books Counterexamples in Analysis by Bernard R. Gelbaum and Theory and Application of Infinite Series by Konrad Knopp for calculus instructors who aren’t analysts. Much of what I say can be found there in one form or another.

The proofs of the tests and what can be learned
Basically, both proofs are merely basic comparisons with a convergent geometric series.

Proof of the root test (convergence)
If $\lim_{k \rightarrow \infty} (a_k)^{1/k} = c$ with $c < 1$, then there exist some $d < 1$ and some index $N$ such that for all $n > N$, $(a_n)^{1/n} < d$. Hence for all $n > N$, $a_n < d^n$ where $d < 1$. Therefore the series converges by direct comparison with the convergent geometric series $\Sigma^{\infty}_{k = N} d^{k}$.

Note the following: requiring $\lim_{k \rightarrow \infty} (a_k)^{1/k}$ to exist is overkill; what is important is that, for some index $N$, $\sup_{k>N} (a_k)^{1/k} < c < 1$. This is enough for the comparison with the geometric series to work. In fact, in the language of analysis, we can replace the limit condition with $\limsup (a_k)^{1/k} = c < 1$.

In fact, we can weaken the hypothesis a bit further. Since the convergence of a series really depends on the convergence of its “tail,” we really only need: for some index $N$, $\limsup_{k} (a_{N+k})^{1/k} = c < 1$, where $k \in \{0, 1, 2, \dots\}$. This point may seem pedantic, but we’ll use it in just a bit.

Note: we haven’t talked about divergence; one can work out a stronger test for divergence by using limit inferior.

Proof of the ratio test (convergence)
We’ll prove the stronger version of the ratio test: if $\limsup\, a_{k+1}/a_k = c < 1$, then there are an index $N$ and some number $d < 1$ such that for all $n \geq N$, $a_{n+1}/a_n < d$.
Simple algebra implies that $a_{N+1} < a_N d$ and $a_{N+2} < a_{N+1} d < a_N d^2$, and in general $a_{N+j} < a_N d^j$. Hence the series $\Sigma ^{\infty}_{k = N} a_k$ is dominated by $a_N \Sigma ^{\infty}_{k = 0} d^k$, which is a convergent geometric series.

Comparing the root and the ratio tests
Consider the convergent series $1/2 + (1/3)^2 + (1/2)^3 + (1/3)^4 + \dots + (1/2)^{2k-1} + (1/3)^{2k} + \dots$
Then clearly $\limsup (a_{k})^{1/k} = 1/2$, hence the root test shows convergence. But the ratio test yields the following:
$a_{2k+1}/a_{2k} = (1/2)^{2k+1}/(1/3)^{2k} = (3/2)^{2k}/2$, which tends to infinity as $k$ goes to infinity. Note: since the limit of the ratios does not exist, the traditional ratio test doesn’t apply. And since the limit inferior of the ratios is zero, a strengthened ratio test doesn’t imply divergence.

So the root test is not equivalent to the ratio test.
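The counterexample is easy to tabulate (a small sketch of mine): the $k$-th roots alternate between $1/2$ and $1/3$, so their limsup is $1/2 < 1$, while consecutive ratios swing between values near $0$ and values that blow up.

```python
def a(n):
    """Terms of the series: (1/2)^n for odd n, (1/3)^n for even n."""
    return (1 / 2)**n if n % 2 == 1 else (1 / 3)**n

roots = [a(n)**(1 / n) for n in range(1, 9)]
ratios = [a(n + 1) / a(n) for n in range(1, 9)]
print(roots)    # alternates 0.5, 0.333...: limsup is 1/2, so the root test applies
print(ratios)   # oscillates; the even-to-odd ratios (3/2)^(2k)/2 are unbounded
```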

But suppose the ratio test yields convergence; that is:
$limsup a_{k+1}/a_k = c < 1$. Then by the same arguments used in the proof:
$a_{N+j} < a_N d^j$. Take $j$-th roots of both sides and note: $(a_{N+j})^{1/j} < (a_N)^{1/j} d$, and since $(a_N)^{1/j} \rightarrow 1$ as $j \rightarrow \infty$, we get $\limsup_{j} (a_{N+j})^{1/j} \leq d < 1$; hence the weakened hypothesis of the root test is met.

That is, the root test is a stronger test than the ratio test, though, of course, it is sometimes more difficult to apply.

We’ll state the tests in the stronger form for convergence; the base assumption is that $\Sigma a_{k}$ has positive terms:

Root test: if there exists an index $N$ such that $\limsup_{j} (a_{N+j})^{1/j} \leq c < 1$, then the series converges.

Ratio test: if $\limsup\, (a_{k+1})/a_{k} \leq c < 1$, then the series converges.

It is a routine exercise to restate these tests in stronger form for divergence.

## March 11, 2011

### Infinite Series: a “cutsie” video

Filed under: calculus, cantor set, media, series — oldgote @ 5:27 pm

Here is something I might try: show that one can fill up some region with disks of decreasing radii. The visual will demonstrate that the areas form a convergent series, but then we’ll show that the sum of the radii may well be divergent.
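The arithmetic behind the trick (my own illustration, with one natural choice of radii, $r_k = 1/k$): the total area $\pi \sum 1/k^2$ stays finite while the radii sum $\sum 1/k$ grows like $\ln n$.

```python
import math

def total_area(n):
    """Combined area of disks with radii 1/k for k = 1..n."""
    return math.pi * sum(1 / k**2 for k in range(1, n + 1))

def total_radius(n):
    """Combined radii of the same disks: the harmonic series."""
    return sum(1 / k for k in range(1, n + 1))

for n in (10, 1000, 100000):
    print(n, total_area(n), total_radius(n))
# the areas settle near pi^3/6 ~ 5.1677 while the radii keep climbing
```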