# College Math Teaching

## August 4, 2012

### Day 2, Madison MAA Mathfest

The day started with a talk by Karen King from the National Council of Teachers of Mathematics.
I usually find math education talks to be dreadful, but this one was pretty good.

The talk was about the importance of future math teachers (K-12) actually having some math background. However, she pointed out that merely having passed math courses didn’t imply that prospective teachers understood the mathematical issues they would be teaching…and it didn’t imply that their students would do better.

She gave an example: about half of those seeking to teach high school math couldn’t explain why “division by zero” was undefined! They knew that it was undefined but couldn’t explain why. I found that astonishing since I knew that in high school.
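For the record, the explanation she was looking for takes one line; here is a sketch of the standard argument (the symbol $c$ is my own notation):

```latex
% If 1/0 were some number c, then by the definition of division 1 = c * 0,
% but c * 0 = 0 for every number c, so 1 = 0: a contradiction.
\frac{1}{0} = c \;\Longrightarrow\; 1 = c \cdot 0 = 0
```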

Later, she pointed out that potential teachers with a math degree didn’t understand the issues involved in defining a number like $2^{\pi}$. Of course, a proper definition of this concept requires limits (or at least a rigorous definition of the exponential and log functions), and she was well aware that the vast majority of high school students aren’t ready for such things. Still, the instructor should be; as she said, “we all wave our hands from time to time, but WE should know when we are waving our hands.”
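To make the issue concrete: one standard definition takes $2^{\pi}$ to be the limit of $2^{r}$ over rationals $r \rightarrow \pi$ (equivalently, $e^{\pi \ln 2}$). A quick numeric sketch of that limit, using the decimal truncations of $\pi$ (my own illustration, not from the talk):

```python
import math

# Rational approximations of pi from decimal truncations: 31/10, 314/100, ...
# 2^pi is then defined as the limit of 2^(p/q) over these rationals,
# which agrees with e^(pi * ln 2).
target = math.exp(math.pi * math.log(2))

approximations = []
for digits in range(1, 8):
    p = int(math.pi * 10**digits)   # numerator of the truncation p / 10^digits
    r = p / 10**digits              # the rational exponent, as a float
    approximations.append(2**r)

errors = [abs(a - target) for a in approximations]
print(target, errors)
```

Each added digit of $\pi$ shrinks the error, which is the whole content of the limit definition.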

She stressed that we need to get future math teachers into the habit (she stressed the word: “habit”) of always asking themselves “why is this true?” or “why is it defined in this manner?”; too many of our math major courses are rule-bound, and at times we write our exams in ways that reward memorization only.

Next, Bernd Sturmfels gave the second talk in his series; this was called Convex Algebraic Geometry.

You can see some of the material here. He also led this into the concept of “Semidefinite programming”.

As best I can tell: one looks at the objects studied by algebraic geometers (zero sets of polynomials in several variables) and then takes an “affine slice” of these objects.

One example: the “n-ellipse” is the set of points in the plane that satisfy $\sum^n_{k=1} \sqrt{(x-u_k)^2 + (y-v_k)^2} = d$ where the $(u_k, v_k)$ are $n$ fixed points in the plane.

Questions: what is the degree of the polynomial that describes the $n$-ellipse? What happens as we let $d$ tend to zero? What is the smallest $d$ for which the $n$-ellipse is non-empty? (For that $d$ it degenerates to a single point, the Fermat-Weber point.) Note: the 1-ellipse is a circle, the 2-ellipse is what we usually think of as an ellipse, and the 3-ellipse is a curve of degree 8.
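The Fermat-Weber point (the minimizer of the sum of the distances, i.e., the point the $n$-ellipses shrink to) can be approximated with Weiszfeld’s classical iteration; here is a minimal sketch (the sample points are mine, not from the talk):

```python
import math

def weiszfeld(points, iterations=200):
    """Approximate the Fermat-Weber point: the point minimizing the sum of
    Euclidean distances to the given points, via Weiszfeld's iteration."""
    # start at the centroid
    x = sum(p[0] for p in points) / len(points)
    y = sum(p[1] for p in points) / len(points)
    for _ in range(iterations):
        num_x = num_y = denom = 0.0
        for (px, py) in points:
            d = math.hypot(x - px, y - py)
            if d < 1e-12:          # iterate landed on a data point; stop
                return (x, y)
            num_x += px / d
            num_y += py / d
            denom += 1.0 / d
        x, y = num_x / denom, num_y / denom
    return (x, y)

# For an equilateral triangle the Fermat-Weber point is the centroid.
pts = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]
print(weiszfeld(pts))
```

Each step is a distance-weighted average of the data points, so points you are already close to pull harder; the objective decreases at every step.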

Note: curves of this type can be realized as the zero set of the determinant of a symmetric matrix of linear forms; such matrices have real eigenvalues. We can plot the curves over which an eigenvalue goes to zero and then changes sign. This process leads to what is known as a spectrahedron (the region where the matrix remains positive semidefinite); this is a type of shape in space. A polyhedron can be thought of as the spectrahedron of a diagonal matrix.
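Concretely, a point lies in a spectrahedron exactly when a symmetric matrix depending linearly on the coordinates is positive semidefinite there. A toy membership test (this particular $2 \times 2$ pencil, whose spectrahedron turns out to be the closed unit disk, is my own illustration, not from the talk):

```python
import math

def in_spectrahedron(x, y):
    """Membership test for the spectrahedron of the symmetric pencil
       A(x, y) = [[1 + x, y], [y, 1 - x]]:
       (x, y) is inside iff the smallest eigenvalue of A(x, y) is >= 0."""
    a, b, d = 1.0 + x, y, 1.0 - x
    trace, det = a + d, a * d - b * b
    # eigenvalues of the symmetric 2x2 matrix [[a, b], [b, d]]
    min_eig = (trace - math.sqrt(trace * trace - 4.0 * det)) / 2.0
    return min_eig >= 0.0

# For this pencil, det A(x, y) = 1 - x^2 - y^2 and trace A = 2, so the
# spectrahedron is exactly the closed unit disk x^2 + y^2 <= 1.
print(in_spectrahedron(0.5, 0.5), in_spectrahedron(1.0, 1.0))
```

Replacing the pencil by a diagonal one turns each eigenvalue into a linear function, and the test becomes a finite list of linear inequalities: a polyhedron, as in the talk.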

Then one can seek to optimize a linear function over a spectrahedron; this leads to semidefinite programming, which, in general, is roughly as difficult as linear programming.

One use: some global optimization problems can be reduced to a semidefinite programming problem (not all).

Shorter Talks
There was a talk by Bob Palais which discussed the role of Rodrigues in the discovery of the quaternions. The idea: Rodrigues discovered the quaternions before Hamilton did, but phrased them in terms of rotations in space.

There were a few talks about geometry and how to introduce concepts to students; of particular interest was the concept of a geodesic. Ruth Berger talked about the “fish swimming in jello” model: basically, suppose you had a sea of jello whose density is determined by depth, with the densest jello (tending to infinite density) at the bottom, and suppose it takes less energy for a fish to swim in the less dense regions. If a fish wants to swim between two points, what path should it take? The geometry induced by these geodesics yields the upper half plane model for hyperbolic space.

Nick Scoville gave a talk about discrete Morse theory. Here is a user’s guide. The idea: take a simplicial complex and assign numbers (integers) to the points, segments, triangles, etc. The assignment has to follow rules; basically the boundary of a complex has to have a lower number than what it bounds (with one exception….) and such an assignment leads to a Morse function. Critical sets can be defined and the various Betti numbers can be calculated.

Christopher Frayer then talked about the geometry of cubic polynomials. This is more interesting than it sounds.
Think about this: remember Rolle’s Theorem from calculus? There is an analogue of this in complex variables called the Gauss-Lucas Theorem. Basically, the roots of the derivative lie in the convex hull of the roots of the polynomial. Then there is Marden’s Theorem for polynomials of degree 3. One can talk about polynomials that have a root at $z = 1$ and two other roots in the unit circle; then one can study where the roots of the derivative lie. For a certain class of these polynomials, there is a dead circle tangent to the unit circle at 1 which encloses no roots of the derivative.
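Gauss-Lucas is easy to test numerically for cubics, since the derivative of a monic cubic is a quadratic whose roots have a closed form; a small sketch (the sample roots are my own choice):

```python
import cmath

def critical_points(a, b, c):
    """Roots of p'(z) for the monic cubic p(z) = (z-a)(z-b)(z-c):
       p'(z) = 3 z^2 - 2(a+b+c) z + (ab+ac+bc), solved by the quadratic formula."""
    s1, s2 = a + b + c, a * b + a * c + b * c
    disc = cmath.sqrt(4 * s1 * s1 - 12 * s2)
    return ((2 * s1 + disc) / 6, (2 * s1 - disc) / 6)

def in_triangle(p, a, b, c, tol=1e-9):
    """Is the complex point p inside the triangle with vertices a, b, c?
       Uses signed areas: p is inside iff all cross products share a sign."""
    def cross(u, v, w):
        return ((v.real - u.real) * (w.imag - u.imag)
                - (v.imag - u.imag) * (w.real - u.real))
    signs = [cross(a, b, p), cross(b, c, p), cross(c, a, p)]
    return all(s >= -tol for s in signs) or all(s <= tol for s in signs)

# Gauss-Lucas: both critical points lie in the convex hull of the roots,
# which for three non-collinear roots is the triangle they span.
roots = (0 + 0j, 4 + 0j, 2 + 3j)
w1, w2 = critical_points(*roots)
print(w1, w2, in_triangle(w1, *roots), in_triangle(w2, *roots))
```

A bonus check: the average of the two critical points always equals the average of the three roots, which is a shadow of Marden’s theorem (the critical points are the foci of the Steiner inellipse, whose center is the centroid).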

## May 3, 2012

### Composing a non-constant analytic function with a non-analytic one, part II

Filed under: advanced mathematics, analysis, calculus, complex variables, matrix algebra — collegemathteaching @ 6:40 pm

I realize that what I did in the previous post was, well, lame.
The setting: let $g$ be continuous but non-analytic in some disk $D$ in the complex plane, and let $f$ be analytic in $g(D)$ which, for the purposes of this informal note, we will take to contain an open disk. If $g(D)$ doesn’t contain an open set or if the partials of $g$ fail to exist, the question of $f(g)$ being analytic is easy and uninteresting.

Let $f(r + is ) = u(r,s) + iv(r,s)$ and $g(x+iy) = r(x,y) + is(x,y)$ where $u, v, r, s$ are real valued functions of two variables which have continuous partial derivatives. Assume that $u_r = v_s$ and $u_s = -v_r$ (the standard Cauchy-Riemann equations) in the domain of interest and that either $r_x \neq s_y$ or $r_y \neq -s_x$ in our domain of interest.

Now if the composition $f(g)$ is analytic, then the Cauchy-Riemann equations must hold; that is:
$\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}, \frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}$

Now use the chain rule and do some calculation:
From the first of these equations:
$u_r r_x + u_s s_x = v_r r_y + v_s s_y$
$u_r r_y + u_s s_y = -v_r r_x - v_s s_x$
By using the C-R equations for $u, v$ we can substitute:
$u_r r_x + u_s s_x = -u_s r_y + u_r s_y$
$u_r r_y + u_s s_y = u_s r_x - u_r s_x$
This leads to the following system of equations:
$u_r(r_x -s_y) + u_s(s_x + r_y) = 0$
$u_r(r_y + s_x) + u_s(s_y - r_x) = 0$
This leads to the matrix equation:
$\left( \begin{array}{cc} (r_x -s_y) & (s_x + r_y) \\ (s_x + r_y) & (s_y - r_x) \end{array} \right) \left(\begin{array}{c} u_r \\ u_s \end{array}\right) = \left(\begin{array}{c} 0 \\ 0 \end{array}\right)$

The coefficient matrix has determinant $-((r_x - s_y)^2 + (s_x + r_y)^2)$, which is zero only when BOTH $(r_x - s_y)$ and $(s_x + r_y)$ are zero, i.e., only when the Cauchy-Riemann equations hold for $g$. Since we assumed that is not the case, the determinant is nonzero and the system of equations has only the trivial solution $u_r = u_s = 0$, which implies (by C-R for $f$) that $v_r = v_s = 0$, which implies that $f$ is constant.
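As a numerical sanity check of this result, take $g(z) = \bar{z}$, which fails the Cauchy-Riemann equations everywhere, and the non-constant analytic $f(z) = z^2$; finite-difference partials confirm that $f(g(z)) = \bar{z}^2$ fails Cauchy-Riemann while $f$ itself satisfies it (the functions and the test point are my own choices):

```python
def cr_residual(F, z, h=1e-6):
    """Finite-difference size of the Cauchy-Riemann residuals
       |u_x - v_y| + |u_y + v_x| for F at the complex point z."""
    x, y = z.real, z.imag
    def u(x, y): return F(complex(x, y)).real
    def v(x, y): return F(complex(x, y)).imag
    ux = (u(x + h, y) - u(x - h, y)) / (2 * h)
    uy = (u(x, y + h) - u(x, y - h)) / (2 * h)
    vx = (v(x + h, y) - v(x - h, y)) / (2 * h)
    vy = (v(x, y + h) - v(x, y - h)) / (2 * h)
    return abs(ux - vy) + abs(uy + vx)

f_of_g = lambda z: z.conjugate() ** 2   # f(g(z)) with f(z) = z^2, g(z) = conj(z)
analytic = lambda z: z ** 2             # f itself, for comparison

print(cr_residual(f_of_g, 1 + 1j), cr_residual(analytic, 1 + 1j))
```

The residual for the composition is large at any point off the axes, exactly as the theorem predicts for a non-constant $f$.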

This result includes the “baby result” in the previous post.

## May 2, 2012

### Composition of an analytic function with a non-analytic one

Filed under: advanced mathematics, analysis, complex variables, derivatives, Power Series, series — collegemathteaching @ 7:39 pm

On a take home exam, I gave a function of the type: $f(z) = \sin(k|z|)$ and asked the students to explain why such a function was continuous everywhere but not analytic anywhere.

This really isn’t hard but that got me to thinking: if $f$ is analytic at $z_0$ and NON CONSTANT, is $f(|z|)$ ever analytic? Before you laugh, remember that in calculus class, $\ln|x|$ is differentiable wherever $x \neq 0$.

Ok, go ahead and laugh; after playing around with the Cauchy-Riemann equations a bit, I found that there was a much easier way, if $f$ is analytic on some open neighborhood of a real number.

Since $f$ is analytic at $z_0$, $z_0$ real, write $f = \sum ^ {\infty}_{k =0} a_k (z-z_0)^k$ and then compose $f$ with $|z|$ and substitute into the series. Now if this composition is analytic, write the Cauchy-Riemann equations for the composed function $f(x+iy) = u(x,y) + iv(x,y)$; it is now very easy to see that $v_x = v_y =0$ on some open disk, which then implies by the Cauchy-Riemann equations that $u_x = u_y = 0$ as well, which means that the function is constant.

So, what if $z_0$ is NOT on the real axis?

Again, we write $f(x + iy) = u(x,y) + iv(x,y)$ and we use $u_{X}, u_{Y}$ (and similarly $v_{X}, v_{Y}$) to denote the partials of these functions with respect to the first and second variables respectively. Now $f(|z|) = f(\sqrt{x^2 + y^2} + 0i) = u(\sqrt{x^2 + y^2},0) + iv(\sqrt{x^2 + y^2},0)$. Now turn to the Cauchy-Riemann equations and calculate:
$\frac{\partial}{\partial x} u = u_{X}\frac{x}{\sqrt{x^2+y^2}}, \frac{\partial}{\partial y} u = u_{X}\frac{y}{\sqrt{x^2+y^2}}$
$\frac{\partial}{\partial x} v = v_{X}\frac{x}{\sqrt{x^2+y^2}}, \frac{\partial}{\partial y} v = v_{X}\frac{y}{\sqrt{x^2+y^2}}$
Insert into the Cauchy-Riemann equations:
$\frac{\partial}{\partial x} u = u_{X}\frac{x}{\sqrt{x^2+y^2}}= \frac{\partial}{\partial y} v = v_{X}\frac{y}{\sqrt{x^2+y^2}}$
$-\frac{\partial}{\partial x} v = -v_{X}\frac{x}{\sqrt{x^2+y^2}}= \frac{\partial}{\partial y} u = u_{X}\frac{y}{\sqrt{x^2+y^2}}$

From this and from the assumption that $y \neq 0$ we obtain after a little bit of algebra:
$u_{X}\frac{x}{y}= v_{X}, u_{X} = -v_{X}\frac{x}{y}$
This leads to $u_{X}\frac{x^2}{y^2} = v_{X}\frac{x}{y}=-u_{X}$, which implies either that $u_{X}$ is zero (which forces the rest of the partials to be zero, by C-R) or that $\frac{x^2}{y^2} = -1$, which is absurd.

So $f$ must have been constant.
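A quick numerical illustration of the theorem with $f = \exp$ (my own choice): $f(|z|) = e^{\sqrt{x^2+y^2}}$ has imaginary part identically zero, so the Cauchy-Riemann equations would force $u_x = u_y = 0$, yet both partials are visibly nonzero away from the origin:

```python
import math

def F(x, y):
    """f(|z|) with f = exp: since |z| is real, the imaginary part is 0."""
    return math.exp(math.hypot(x, y))

# central finite differences for u_x and u_y at the point z = 1 + i
h = 1e-6
x, y = 1.0, 1.0
u_x = (F(x + h, y) - F(x - h, y)) / (2 * h)
u_y = (F(x, y + h) - F(x, y - h)) / (2 * h)
# v is identically zero, so v_x = v_y = 0; Cauchy-Riemann would require
# u_x = v_y = 0 and u_y = -v_x = 0, but:
print(u_x, u_y)
```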

## April 17, 2012

### Pointwise versus Uniform convergence of sequences of continuous functions: Part II

Filed under: analysis, calculus, complex variables, uniform convergence — collegemathteaching @ 12:48 am

In my complex analysis class I was grading a problem of the following type:
given $K \subset \mathbb{C}$ where $K$ is compact and given a sequence of continuous functions $f_n$ which converges to 0 pointwise on $K$, and if $|f_1(z)| > |f_2(z)| > \cdots > |f_k(z)| > \cdots$ for all $z \in K$, show that the convergence is uniform.

Now what about the $|f_1(z)| > |f_2(z)| > \cdots > |f_k(z)| > \cdots$ hypothesis? Can it be dispensed with?

Let’s look at an example in real variables:

Let $f_n(x) = \sin(\frac{e \pi}{2}e^{-nx})\sin(\frac{n \pi}{2} x)$ with $x \in [0,1]$. $f_n(0) = 0$ for all $n$. To see that $f_n$ converges to zero pointwise, note that $\lim_{n \rightarrow \infty}e^{-nx} = 0$ for all $x > 0$, hence $\lim_{n \rightarrow \infty}\sin(\frac{e \pi}{2}e^{-nx}) = 0$, which implies that $f_n \rightarrow 0$ by the squeeze theorem. But $f_n$ does not converge to 0 uniformly: for $t = \frac{1}{n}$ we have $f_n(t) = 1$.
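Both claims are easy to check numerically: at a fixed $x > 0$ the values of $f_n(x)$ die off as $n$ grows, yet the moving points $x = 1/n$ always give the value 1 (a small sketch):

```python
import math

def f(n, x):
    """f_n(x) = sin((e*pi/2) * e^(-n x)) * sin((n*pi/2) x)"""
    return math.sin((math.e * math.pi / 2) * math.exp(-n * x)) \
         * math.sin((n * math.pi / 2) * x)

# pointwise convergence: at a fixed x > 0 the values tend to 0
fixed = [f(n, 0.5) for n in (5, 10, 20, 40)]
# failure of uniformity: at the moving point x = 1/n the value is always 1,
# since e^(-n/n) = 1/e cancels the e in the first factor
moving = [f(n, 1.0 / n) for n in (5, 10, 20, 40)]
print(fixed, moving)
```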

Here is a graph of the functions for $n = 5, 10, 20, 40$.

## April 12, 2012

### Pointwise vs. Uniform convergence for functions: Importance of being continuous

In my complex analysis class I was grading a problem of the following type:
given $K \subset \mathbb{C}$ where $K$ is compact and given a sequence of continuous functions $f_n$ which converges to 0 pointwise on $K$, and if $|f_1(z)| > |f_2(z)| > \cdots > |f_k(z)| > \cdots$ for all $z \in K$, show that the convergence is uniform.

The proof is easy enough to do; my favorite way: given $\epsilon > 0$, for each $z \in K$ pick $n$ such that $|f_n(z)| < \epsilon$ and find a “delta disk” about $z$ so that for all $w$ in that disk, $|f_n(w)| < \epsilon$ also; by the decreasing hypothesis, $|f_m(w)| < \epsilon$ for all $m \geq n$ as well. Then cover $K$ by these open “delta disks”, select a finite number of such disks, each with an associated $n$, and let $M$ be the maximum of this finite collection of $n$’s; then $|f_m| < \epsilon$ on all of $K$ for every $m \geq M$.

But we used the fact that $f_n$ is continuous in our proof.

Here is what can happen if the $f_n$ in question are NOT continuous:

Let’s work on the real interval $[0,1]$. Define $g(x) = q$ if $x = \frac{p}{q}$ in lowest terms, and let $g(x) = 0$ if $x$ is irrational.

Now let $f_n(x) = \frac{g(x)}{n}$. Clearly $f_n$ converges to 0 pointwise and the $f_n$ have the decreasing property. Nevertheless, it is easy to see that the convergence is far from uniform; in fact, each $f_n$ is unbounded!
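This example can be probed with exact rational arithmetic; a small sketch of the unboundedness (the sample points are mine):

```python
from fractions import Fraction

def g(x):
    """g(p/q) = q for x = p/q in lowest terms; g = 0 on irrationals
       (only rationals are representable here, and Fraction reduces for us)."""
    return Fraction(x).denominator

def f(n, x):
    return Fraction(g(x), n)

# At any fixed rational x, f_n(x) -> 0 as n grows ...
print([f(n, Fraction(1, 3)) for n in (1, 10, 100)])
# ... but each individual f_n is unbounded: f_n(1/q) = q/n grows with q
print([f(10, Fraction(1, q)) for q in (10, 100, 1000)])
```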

Of course, we can also come up with a sequence of bounded functions that converge to 0 pointwise but fail to converge uniformly.

For this example, choose as our domain $[0,1]$ and let $h(x) = \frac{q-1}{q}$ if $x = \frac{p}{q}$ in lowest terms, and let $h(x) = 0$ if $x$ is irrational. Now let our sequence be $f_n(x) = h(x)^n$. Clearly $f_n$ converges to zero pointwise. To see that this convergence is not uniform: given $\epsilon > 0$, $(\frac{q-1}{q})^n < \epsilon$ requires $n > \frac{\ln(\epsilon)}{\ln(\frac{q-1}{q})}$, and the right hand side of the inequality is unbounded as $q$ grows. So given a fixed $n$ and $\epsilon$, one can always find a $q$ for which the required $n$ exceeds the fixed one; the $n$ needed varies with $q$.
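The same kind of experiment illustrates this bounded example: for fixed $n$, the values at $x = 1/q$ climb back toward 1 as $q$ grows, even though every fixed $x$ eventually gives small values (sketch):

```python
from fractions import Fraction

def h(x):
    """h(p/q) = (q-1)/q for x = p/q in lowest terms; h = 0 on irrationals
       (only rationals occur here, and Fraction reduces for us)."""
    q = Fraction(x).denominator
    return Fraction(q - 1, q)

def f(n, x):
    return h(x) ** n

# pointwise convergence: at the fixed x = 2/3 the values (2/3)^n shrink
print([float(f(n, Fraction(2, 3))) for n in (1, 10, 50)])
# failure of uniformity: for fixed n = 50, points x = 1/q push f_n back toward 1
print([float(f(50, Fraction(1, q))) for q in (10, 100, 10000)])
```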

## January 12, 2012

### So you want to take a course in complex variables

Ok, what should you have at your fingertips prior to taking such a course?

I consider the following to be minimal prerequisites:

Basic calculus

1. limits (epsilon-delta, 2-d limits)

2. limit definition of the derivative

3. basic calculus differentiation and integration formulas:
chain rule, product rule, quotient rule, integration and differentiation of polynomials, log, exponentials, basic trig functions, hyperbolic trig functions, inverse trig functions.

4. Fundamental Theorem of calculus.

5. Sequences (convergence)

6. Series: geometric series test, ratio test, comparison tests

7. Power series: interval of convergence, absolute convergence

8. Power series: term by term differentiation, term by term integrals

9. Taylor/Power series for 1/(1-x), sin(x), cos(x), exp(x)

Multi-variable calculus

1. partial derivatives

2. parametrized curves

3. polar coordinates

4. line and path integrals

5. conservative vector fields

6. Green’s Theorem (for integration of a closed loop in a plane)

The challenge
Some of complex variables will look “just like calculus”. And some of the calculations WILL be “just like calculus”; for example, it will turn out that if $\delta$ is any piecewise smooth curve running from $z_1$ to $z_2$ then $\int_{\delta} e^z dz = e^{z_2} - e^{z_1}$. But in many cases, the similarity vanishes and more care must be taken.
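That “just like calculus” claim can be tested numerically by integrating $e^z$ along two different piecewise smooth paths from $z_1 = 0$ to $z_2 = 1+i$ (the paths and the Riemann-sum integrator are my own illustration):

```python
import cmath

def contour_integral(f, z_of_t, t0, t1, n=20000):
    """Riemann-sum approximation of the contour integral of f along the
       path t -> z_of_t(t), t in [t0, t1], using midpoints and chord steps."""
    dt = (t1 - t0) / n
    total = 0.0 + 0.0j
    for k in range(n):
        t = t0 + (k + 0.5) * dt
        dz = z_of_t(t + dt / 2) - z_of_t(t - dt / 2)   # chord of the subarc
        total += f(z_of_t(t)) * dz
    return total

# path 1: the straight segment from 0 to 1+i
straight = contour_integral(cmath.exp, lambda t: t * (1 + 1j), 0.0, 1.0)
# path 2: along the real axis to 1, then straight up to 1+i
leg1 = contour_integral(cmath.exp, lambda t: t + 0j, 0.0, 1.0)
leg2 = contour_integral(cmath.exp, lambda t: 1 + 1j * t, 0.0, 1.0)
expected = cmath.exp(1 + 1j) - 1
print(straight, leg1 + leg2, expected)
```

Both paths give (numerically) $e^{1+i} - 1$, which is exactly the antiderivative calculation from calculus.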

You will learn many things such as:
1. The complex function $\sin(z)$ is unbounded!

2. No non-constant everywhere differentiable function is bounded; compare that to $f(x) = \frac{1}{1+x^2}$ in calculus.

3. Integrals can have some strange properties. For example, if $\delta$ is the unit circle taken once around in the standard direction and $Log(z)$ is continued continuously along the path, $\int_{\delta} Log(z) dz$ depends on where one chooses to start and stop, even if the start and stop points are the same!
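To see claim 3 concretely, continue $\log$ along the circle starting from the principal value at the start point $z_0 = e^{i\theta_0}$, so that $\log z(t) = it$ for $t \in [\theta_0, \theta_0 + 2\pi]$; under that branch-continuation convention the integral works out to $2\pi i \, z_0$, which depends on $z_0$. A numeric sketch (the convention and the midpoint integrator are my own framing):

```python
import cmath, math

def log_integral(theta0, n=20000):
    """Integrate log(z) once counterclockwise around the unit circle starting
       at z0 = exp(i*theta0), with log continued continuously along the path
       (so log z(t) = i*t), via a midpoint Riemann sum."""
    dt = 2 * math.pi / n
    total = 0.0 + 0.0j
    for k in range(n):
        t = theta0 + (k + 0.5) * dt
        z = cmath.exp(1j * t)
        total += (1j * t) * (1j * z) * dt   # log(z(t)) * z'(t) dt
    return total

# The value is 2*pi*i*z0: same closed curve, different answers.
print(log_integral(0.0))            # start at z0 = 1
print(log_integral(math.pi / 2))    # start at z0 = i
```

The antiderivative explanation: $z \log z - z$ works locally, but going once around, $\log$ returns increased by $2\pi i$, and that jump is worth $2\pi i z_0$ at the start/stop point.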

4. You’ll come to understand why the Taylor series (expanded about $x = 0$) for $\frac{1}{1+x^2}$ has radius of convergence equal to one…it isn’t just an artifact of the trick used to calculate the series.

5. You’ll come to understand that being differentiable on an open disk is a very strong condition for complex functions: being differentiable on an open disk means being INFINITELY differentiable on that open set (compare to $f(x) = x^{4/3}$, which has one derivative but NOT two derivatives at $x = 0$).
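Claim 1 can be witnessed directly: on the imaginary axis, $|\sin(iy)| = \sinh(y)$, which grows exponentially (a quick check):

```python
import cmath, math

# On the imaginary axis sin(iy) = i*sinh(y), so |sin(iy)| = sinh(y),
# which grows like e^y / 2: the complex sine is unbounded.
values = [abs(cmath.sin(1j * y)) for y in (1, 10, 50)]
print(values)
```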

There is much more, of course.