# College Math Teaching

## July 10, 2020

### This always bothered me about partial fractions…

Filed under: algebra, calculus, complex variables, elementary mathematics, integration by substitution — Tags: — collegemathteaching @ 12:03 am

Let’s look at an “easy” starting example: write $\frac{1}{(x-1)(x+1)} = \frac{A}{x-1} + \frac{B}{x+1}$
We know how that goes: multiply both sides by $(x-1)(x+1)$ to get $1 = A(x+1) + B(x-1)$ and then since this must be true for ALL $x$, substitute $x=-1$ to get $B = -{1 \over 2}$ and then substitute $x = 1$ to get $A = {1 \over 2}$. Easy-peasy.

BUT… why CAN you do such a substitution, when the original domain excludes $x = 1, x = -1$? (and no, I don’t want to hear about residues and “poles of order 1”; this is calculus 2.)

Let’s start with $\frac{1}{(x-1)(x+1)} = \frac{A}{x-1} + \frac{B}{x+1}$ on the restricted domain, say $x \neq 1$.
Now multiply both sides by $x-1$ and note that, on the restricted domain $x \neq 1$, we have:

$\frac{1}{x+1} = A + \frac{B(x-1)}{x+1}$. Both sides are equal on the domain $(-1, 1) \cup (1, \infty)$, and the limit of the left hand side is $\lim_{x \rightarrow 1} \frac{1}{x+1} = \frac{1}{2}$. So the limit of the right hand side exists as well and is equal to $A$. So the result follows… and the same argument works for the calculation of $B$.
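For what it’s worth, the limit argument is easy to check numerically. Here is a quick sanity check (my own sketch, not part of the original derivation):

```python
# On the punctured domain, 1/(x+1) = A + B(x-1)/(x+1) with A = 1/2, B = -1/2.
# The B-term vanishes as x -> 1, so A must equal lim_{x->1} 1/(x+1) = 1/2.
A, B = 0.5, -0.5

for eps in [1e-2, 1e-4, 1e-6]:
    x = 1 + eps  # close to, but not equal to, the excluded point x = 1
    lhs = 1 / (x + 1)
    rhs = A + B * (x - 1) / (x + 1)
    assert abs(lhs - rhs) < 1e-12  # the identity holds away from x = 1

assert abs(1 / (1 + 1) - A) < 1e-12  # the limiting value equals A
```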

Yes, no engineer will care about this. But THIS is the reason we can substitute the non-domain points.

As an aside: if you are trying to solve something like ${x^2 + 3x + 2 \over (x^2+1)(x-3) } = {Ax + B \over x^2+1 } + {C \over x-3 }$, one can clear the denominators and, where appropriate, substitute $x = i$ and compare real and imaginary parts… and yes, now you can use poles and residues.
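To make the aside concrete, here is a small check; the coefficient values $A = -1, B = 0, C = 2$ are my own worked results from the $x = 3$ and $x = i$ substitutions, not numbers from the post:

```python
# Clearing denominators in
#   (x^2+3x+2)/((x^2+1)(x-3)) = (Ax+B)/(x^2+1) + C/(x-3)
# gives x^2 + 3x + 2 = (Ax+B)(x-3) + C(x^2+1).
# x = 3:  20 = 10C, so C = 2.
# x = i:  1 + 3i = (Ai+B)(i-3) = (-A-3B) + (B-3A)i,
#         so -A-3B = 1 and B-3A = 3, giving A = -1, B = 0.
A, B, C = -1, 0, 2

for x in [-2.0, 0.5, 5.0, 10.0]:  # sample points avoiding the pole at x = 3
    lhs = (x**2 + 3*x + 2) / ((x**2 + 1) * (x - 3))
    rhs = (A*x + B) / (x**2 + 1) + C / (x - 3)
    assert abs(lhs - rhs) < 1e-12
```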

## November 1, 2016

### A test for the independence of random variables

Filed under: algebra, probability, statistics — Tags: , — collegemathteaching @ 10:36 pm

We are using Mathematical Statistics with Applications (7th ed.) by Wackerly, Mendenhall and Scheaffer for our calculus based probability and statistics course.

They present the following Theorem (5.5 in this edition)

Let $Y_1$ and $Y_2$ have a joint density $f(y_1, y_2)$ that is positive if and only if $a \leq y_1 \leq b$ and $c \leq y_2 \leq d$ for constants $a, b, c, d$ and $f(y_1, y_2)=0$ otherwise. Then $Y_1, Y_2$ are independent random variables if and only if $f(y_1, y_2) = g(y_1)h(y_2)$ where $g(y_1), h(y_2)$ are non-negative functions of $y_1, y_2$ alone (respectively).

Ok, that is fine as it goes, but then they apply the above theorem to the joint density function: $f(y_1, y_2) = 2y_1$ for $(y_1,y_2) \in [0,1] \times [0,1]$ and 0 otherwise. Do you see the problem? Technically speaking, the theorem doesn’t apply as $f(y_1, y_2)$ is NOT positive if and only if $(y_1, y_2)$ is in some closed rectangle.

It isn’t that hard to fix, I don’t think.

Now there is the density function $f(y_1, y_2) = y_1 + y_2$ on $[0,1] \times [0,1]$ and zero elsewhere. Here, $Y_1, Y_2$ are not independent.

But how does one KNOW that $y_1 + y_2 \neq g(y_1)h(y_2)$?

I played around a bit and came up with the following:

Statement: for $n \geq 2$, $\sum^{n}_{i=1} a_i x_i^{r_i} \neq f_1(x_1)f_2(x_2) \cdots f_n(x_n)$ for any choice of single-variable functions $f_i$ (note: assume $r_i \in \{1,2,3,....\}$ and $a_i \neq 0$).

Proof of the statement: substitute $x_2 = x_3 = \cdots = x_n = 0$ into both sides to obtain $a_1 x_1^{r_1} = f_1(x_1)f_2(0)f_3(0) \cdots f_n(0)$. None of the $f_k(0)$ can be zero, for otherwise the right hand side would be identically zero while the left hand side is not. The same argument shows that $a_2 x_2^{r_2} = f_2(x_2)f_1(0)f_3(0)f_4(0) \cdots f_n(0)$, again with none of the $f_k(0) = 0$.

Now substitute $x_1 = x_2 = x_3 = \cdots = x_n = 0$ into both sides to get $0 = f_1(0)f_2(0)f_3(0) \cdots f_n(0)$, but no factor on the right hand side can be zero, which is a contradiction.
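One consequence worth noting: if $f(y_1,y_2) = g(y_1)h(y_2)$, then $f(a,c)f(b,d) = f(a,d)f(b,c)$ at every pair of points, which gives a quick numerical test for non-factorability. A sketch (my own, not from the text):

```python
def factors_as_product(f, pts1, pts2, tol=1e-12):
    """Check the necessary condition for f(y1, y2) = g(y1)*h(y2):
    f(a, c)*f(b, d) == f(a, d)*f(b, c) at every grid point."""
    return all(abs(f(a, c) * f(b, d) - f(a, d) * f(b, c)) < tol
               for a in pts1 for b in pts1
               for c in pts2 for d in pts2)

pts = [0.0, 0.25, 0.5, 0.75, 1.0]

# f(y1, y2) = 2*y1 does factor (g(y1) = 2*y1, h(y2) = 1): condition holds.
assert factors_as_product(lambda y1, y2: 2 * y1, pts, pts)

# f(y1, y2) = y1 + y2 fails already at (0,0),(1,1) versus (0,1),(1,0).
assert not factors_as_product(lambda y1, y2: y1 + y2, pts, pts)
```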

This is hardly profound but I admit that I’ve been negligent in pointing this out to classes.

## October 3, 2016

### Lagrange Polynomials and Linear Algebra

Filed under: algebra, linear algebra — Tags: — collegemathteaching @ 9:24 pm

We are discussing abstract vector spaces in linear algebra class. So, I decided to do an application.

Let $P_n$ denote the polynomials of degree $n$ or less with real coefficients. Clearly $P_n$ is $(n+1)$-dimensional and $\{1, x, x^2, ..., x^n \}$ constitutes a basis.

Now there are many reasons why we might want to find a degree $n$ polynomial that takes on certain values at certain values of $x$. So, choose $n+1$ distinct points $x_0, x_1, x_2, ..., x_{n}$ and construct an alternate basis as follows: $L_0 = \frac{(x-x_1)(x-x_2)(x-x_3) \cdots (x-x_{n})}{(x_0 - x_1)(x_0-x_2) \cdots (x_0 - x_{n})}, L_1 = \frac{(x-x_0)(x-x_2)(x-x_3) \cdots (x-x_{n})}{(x_1 - x_0)(x_1-x_2) \cdots (x_1 - x_{n})}, ... L_k = \frac{(x-x_0)(x-x_1)(x-x_2) \cdots (x-x_{k-1})(x-x_{k+1}) \cdots (x-x_{n})}{(x_k - x_0)(x_k-x_1) \cdots (x_k - x_{k-1})(x_k - x_{k+1}) \cdots (x_k - x_{n})},$ $.... L_{n} = \frac{(x-x_0)(x-x_1)(x-x_2) \cdots (x-x_{n-1})}{(x_{n}- x_0)(x_{n}-x_1) \cdots (x_{n} - x_{n-1})}$

This is a blizzard of subscripts but the idea is pretty simple. Note that $L_k(x_k) = 1$ and $L_k(x_j) = 0$ if $j \neq k$.

But let’s look at a simple example: suppose we want to form a new basis for $P_2$ and we are interested in fixing $x$ values of $-1, 0, 1$.

So $L_0 = \frac{(x)(x-1)}{(-1-0)(-1-1)} = \frac{(x)(x-1)}{2}, L_1 = \frac{(x+1)(x-1)}{(0+1)(0-1)} = -(x+1)(x-1),$
$L_2 = \frac{(x+1)x}{(1+1)(1-0)} = \frac{(x+1)(x)}{2}$. Then we note that

$L_0(-1) = 1, L_0(0) =0, L_0(1) =0, L_1(-1)=0, L_1(0) = 1, L_1(1) = 0, L_2(-1)=0, L_2(0) =0, L_2(1)=1$

Now, we claim that the $L_k$ are linearly independent. This is why:

Suppose $a_0 L_0 + a_1 L_1 + .... + a_n L_n = 0$ as a vector (that is, as the zero function). We can now solve for the $a_i$: substitute $x_i$ into the left hand side of the equation to get $a_i L_i(x_i) = a_i = 0$ (note: $L_k(x_i) = 0$ for $i \neq k$), so every $a_i = 0$. So $L_0, L_1, ... L_n$ are $n+1$ linearly independent vectors in $P_n$ and therefore constitute a basis.

Example: suppose we want a degree two polynomial $p(x)$ where $p(-1) = 5, p(0) = 3, p(1) = 17$. We use our new basis to obtain:

$p(x) = 5L_0(x) + 3 L_1(x) + 17L_2(x) = \frac{5}{2}(x)(x-1) -3(x+1)(x-1) + \frac{17}{2}x(x+1)$. It is easy to check that $p(-1) = 5, p(0) =3, p(1) = 17$
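The interpolation above is easy to automate; here is a short sketch (the names `lagrange_basis` and `p` are mine, not from the post) that rebuilds $p$ from the $L_k$:

```python
def lagrange_basis(xs, k):
    """Return L_k as a function: equals 1 at xs[k] and 0 at the other nodes."""
    def L(x):
        val = 1.0
        for j, xj in enumerate(xs):
            if j != k:
                val *= (x - xj) / (xs[k] - xj)
        return val
    return L

xs, ys = [-1, 0, 1], [5, 3, 17]

def p(x):
    # p = 5*L_0 + 3*L_1 + 17*L_2, exactly as in the example above
    return sum(y * lagrange_basis(xs, k)(x) for k, y in enumerate(ys))

assert [p(x) for x in xs] == [5.0, 3.0, 17.0]
```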

## September 23, 2016

### Carmichael Numbers: “not quite” primes…

Filed under: algebra, elementary number theory, number theory, recreational mathematics — collegemathteaching @ 9:49 pm

We had a fun mathematics seminar yesterday.

Andrew Shallue gave a talk about the Carmichael numbers and gave a glimpse into his research. Along the way he mentioned the work of another mathematician…one that I met during my ultramarathon/marathon walking adventures! Talk about a small world..

So, to kick start my brain cells, I’ll say a few words about these.

First of all, prime numbers are very important in encryption schemes and it is a great benefit to be able to find them. However, for very large numbers, it can be difficult to determine whether a number is prime or not.

So one can take a short cut in determining whether a number is *likely* prime: one can say “ok, prime numbers have property P, so if this number doesn’t have property P, it is not prime. But if it DOES have property P, we are X percent sure that it really is a prime.”

If this said property is relatively “easy” to implement (via a computer), we might be able to live with the small amount of errors that our test generates.

One such test is to see if this given number satisfies “Fermat’s Little Theorem” which is as follows:

Let $a$ be a positive integer and $p$ be a prime, and suppose $a \neq kp$ for any integer $k$, that is, $a \not\equiv 0 \pmod p$. Then $a^{p-1} \equiv 1 \pmod p$.

If you’ve forgotten how this works, recall that $Z_p$ is a field if $p$ is a prime, so if $a \not\equiv 0 \pmod p$, the set $\{a, 2a, 3a, ..., (p-1)a \}$ is just $\{1, 2, 3, ..., (p-1) \}$ rearranged. So take the product $(a)(2a)(3a) \cdots ((p-1)a) = 1(2)(3) \cdots (p-1)a^{p-1} \equiv 1(2)(3) \cdots (p-1) \pmod p$. Now note that we are working in a field, so we can cancel the $(1)(2) \cdots (p-1)$ factor on both sides to get $a^{p-1} \equiv 1 \pmod p$.

So one way to check whether a number $q$ might be a prime is to see whether $a^{q-1} \equiv 1 \pmod q$ for all $a < q$ that are relatively prime to $q$.
Now this is NOT a perfect check; there are composite numbers $q$ for which $a^{q-1} \equiv 1 \pmod q$ for every such $a$; these are called the Carmichael numbers. The three smallest are 561, 1105 and 1729.
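A brute force search (my own sketch; fine for small numbers, hopeless at cryptographic sizes) confirms the smallest Carmichael numbers:

```python
from math import gcd

def is_carmichael(q):
    """Composite q with a^(q-1) = 1 (mod q) for every a relatively prime to q."""
    if q < 4 or all(q % d for d in range(2, int(q**0.5) + 1)):
        return False  # exclude primes and numbers that are too small
    return all(pow(a, q - 1, q) == 1       # three-argument pow: modular power
               for a in range(2, q) if gcd(a, q) == 1)

found = [q for q in range(4, 2000) if is_carmichael(q)]
assert found == [561, 1105, 1729]
```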

The talk was about much more than this, but this was interesting.

## July 28, 2015

### J. H. Conway, Terry Tao and avoiding work

Filed under: advanced mathematics, algebra, media — Tags: , , , , , — collegemathteaching @ 7:48 pm

The mainstream media recently had some excellent articles on two mathematical giants:

John Conway and Terence Tao. I’ve never met Terry Tao though I do read (or try to follow) his blog.

I did meet John Conway when he visited the University of Texas. He is a friend of my dissertation advisor and gave some talks on knot diagram colorings.

I had a private conversation with him at a party, and he gave me some ideas which resulted in three papers for me! Here is one of them.

Yes, I am avoiding studying a book on the theory of interest; I am teaching that course this fall and need to get ahead of the game.

Unfortunately, when I don’t teach, my use of time becomes undisciplined.

## June 19, 2015

### Scientific American article about finite simple groups

Filed under: advanced mathematics, algebra, mathematician — Tags: , — collegemathteaching @ 2:42 pm

For those of you who are a bit rusty: a finite group is a group that has a finite number of elements. A simple group is one that has no proper non-trivial normal subgroups (that is, only the identity and the whole group are normal subgroups).

It is a theorem that if $G$ is a finite simple group then $G$ falls into one of the following categories:

1. Cyclic (of prime order, of course)
2. Alternating (and not isomorphic to $A_4$ of course)
3. A finite group of Lie type
4. One of 26 other groups that don’t fall into 1, 2 or 3.

Scientific American has a nice article about this theorem and the effort to get it written down and understood; the problem is that the proof of such a theorem is far from simple; it spans literally hundreds of research articles and would take thousands of pages to be complete. And, those who have an understanding of this result are aging and won’t be with us forever.

Here is a link to the preview of the article; if you don’t subscribe to SA it is probably in your library.

## April 8, 2015

### How my running (and walking and swimming and weight lifting and sports experience) helps with my teaching

Filed under: algebra, editorial, elementary mathematics — Tags: — collegemathteaching @ 4:52 pm

If you teach at an institution that has a competitive sports team, you’ll probably notice that the coaches spend time on recruiting. It is easy to see why: though athletes train hard to improve their performances, their inherent athletic ability provides an upper bound of how well they will do.

I played sports in high school, but wasn’t within an AU of being able to compete at the college level, any division. I remember summer wrestling; those recruited to wrestle for our team basically had their way with me on the mat.

In my current sports, I always do poorly in competition. For example, in my best running marathon, the winner beat me by 74 minutes! (winning time was 2:19).

It wasn’t that I didn’t try or didn’t train: it was that because I am a poor natural athlete, training only “moves the needle” just a bit, and not nearly enough for me to be competitive.

A coach could give me this workout or that workout…and get angry with me. But I have athletic limitations.

The same principle applies in mathematics.

Right now I am teaching the second semester of calculus for non-technical majors.

One question was: find the maximum and minimum of $T(t) = 55 - 21 \cos(\frac{2 \pi (t-32)}{365})$. They were told that this function modeled daily temperature, where $t$ was in days and $t = 0$ on January 1.

Now I asked the class some questions. And, well, let’s just say that they didn’t readily recognize what the various terms and factors meant.

Now we took the derivative to find the local maximum and local minimum values, and most of them got $T'(t) = \frac{42 \pi}{365}\sin(\frac{2 \pi (t-32)}{365})$. We set this equal to zero, and all of them saw that we get zero when the argument is 0 or an integral multiple of $\pi$.

But now, when we had $\frac{2 \pi (t-32)}{365} = 0$, I said “of course, this gives us the solution $t = 32$.” And you guessed it… one of the students asked “why.” It took about a minute of explanation for her to see it. I kid you not.
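For the record, a quick numerical check of the model (my own sketch):

```python
import math

def T(t):
    """Modeled daily temperature; t in days, t = 0 on January 1."""
    return 55 - 21 * math.cos(2 * math.pi * (t - 32) / 365)

# t = 32 makes the cosine equal 1, giving the minimum 55 - 21 = 34;
# half a year later the cosine equals -1, giving the maximum 55 + 21 = 76.
assert abs(T(32) - 34) < 1e-9
assert abs(T(32 + 365 / 2) - 76) < 1e-9

# The derivative (42*pi/365)*sin(2*pi*(t-32)/365) vanishes at t = 32;
# a central difference agrees.
h = 1e-6
assert abs((T(32 + h) - T(32 - h)) / (2 * h)) < 1e-6
```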

So, I reminded myself of what it must have been like for my sports coaches in high school…..what it was like for them to work with me.

## March 24, 2015

Filed under: advanced mathematics, algebra, famous mathematicians — Tags: , — collegemathteaching @ 2:52 am

For the prize: what is the significance of the two chains, and which one directly applies to the human subject of this doodle?

The human subject of the doodle is Emmy Noether.

## January 30, 2015

### Nilpotent ring elements

Filed under: advanced mathematics, algebra, matrix algebra, ring theory — Tags: , — collegemathteaching @ 3:12 am

I’ve been trying to brush up on ring theory; it has been a long time since I studied rings in any depth and I need some ring theory to do some work in topology. In a previous post, I talked about ideal topologies and I might discuss divisor topologies (starting with the ring of integers).

So, I grabbed an old text, skimmed the first part and came across an exercise:

an element $x \in R$ is nilpotent if there is some positive integer $n$ such that $x^n = 0$. So, given nilpotent $x, y$ in a commutative ring $R$, one has to show that $x+y$ is also nilpotent and that this result might not hold if $R$ is not commutative.

Examples: in the ring $Z_9, 3^2 =0$ so $3$ is nilpotent. In the matrix ring of 2 by 2 matrices,

$\left( \begin{array}{cc} 0 & 0 \\ 1 & 0 \end{array} \right)$ and $\left( \begin{array}{cc} 0 & 1 \\ 0 & 0 \end{array} \right)$ are both nilpotent elements, though their sum:

$\left( \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right)$ is not; the square of this matrix is the identity matrix.
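The matrix counterexample above is easy to verify directly; a minimal sketch using plain nested lists (no libraries):

```python
def matmul2(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

X = [[0, 0], [1, 0]]
Y = [[0, 1], [0, 0]]
Z = [[0, 0], [0, 0]]

assert matmul2(X, X) == Z                  # X^2 = 0, so X is nilpotent
assert matmul2(Y, Y) == Z                  # Y^2 = 0, so Y is nilpotent
S = [[0, 1], [1, 0]]                       # S = X + Y
assert matmul2(S, S) == [[1, 0], [0, 1]]   # S^2 = I, so no power of S is 0
```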

Immediately I thought to let $m, n$ be the smallest positive integers with $x^m = y^n = 0$ and thought to apply the binomial theorem to $(x+y)^{mn}$ (of course that is overkill; it is simpler to use $(x+y)^{m+n}$). Let’s use $(x+y)^{m+n}$. I could easily see why $x^{m+n} = y^{m+n} = 0$, but why were the middle terms ${m+n \choose k} x^{(m+n)-k}y^k$ also zero?

Then it dawned on me: $x^n = 0 \rightarrow x^{n+k} = 0$ for all $k \geq 0$. So in each middle term, either $k \geq n$ (and $y^k = 0$) or $k < n$ (and then $(m+n)-k > m$, so $x^{(m+n)-k} = 0$). Duh. Now it made sense. 🙂