# College Math Teaching

## March 19, 2019

### My brush with mathematical greatness

Filed under: editorial — Tags: — collegemathteaching @ 6:18 pm

Yes, I TA’ed for Karen Uhlenbeck. She was patient with me and nice to me, even though I was a nothing … actually, a below-average graduate student … and she was a department superstar, holder of an endowed chair. Karen Uhlenbeck just won the Abel Prize.

## March 16, 2019

### The beta function integral: how to evaluate it

My interest in “beta” functions comes from their utility in Bayesian statistics. A nice 78 minute introduction to Bayesian statistics and how the beta distribution is used can be found here; you need to understand basic mathematical statistics concepts such as “joint density”, “marginal density”, “Bayes’ Rule” and “likelihood function” to follow the youtube lecture. To follow this post, one should know the standard “3 semesters” of calculus and know what the gamma function is (the extension of the factorial function to the real numbers); previous exposure to the standard “polar coordinates” proof that $\int^{\infty}_{-\infty} e^{-x^2} dx = \sqrt{\pi}$ would be very helpful.

So, what is the beta function? It is $\beta(a,b) = \frac{\Gamma(a) \Gamma(b)}{\Gamma(a+b)}$ where $\Gamma(x) = \int_0^{\infty} t^{x-1}e^{-t} dt$. Note that $\Gamma(n+1) = n!$ for non-negative integers $n$. The gamma function is the unique “logarithmically convex” extension of the factorial function to the positive real line (this is the Bohr–Mollerup theorem), where “logarithmically convex” means that the logarithm of the function is convex; that is, the second derivative of the log of the function is positive. Roughly speaking, this means that the function exhibits growth behavior similar to (or greater than) that of $e^{x^2}$.

Now it turns out that the beta density function is defined as follows: $\frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)} x^{a-1}(1-x)^{b-1}$ for $0 < x < 1$, where $a > 0, b > 0$; one can see that the integral of $x^{a-1}(1-x)^{b-1}$ is proper when $a, b \geq 1$ and a convergent improper integral when $0 < a < 1$ or $0 < b < 1$.
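The identity behind this normalizing constant can be spot-checked numerically. Here is a minimal Python sketch (the function names are mine, not from any library) comparing $\frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}$ with a direct midpoint-rule evaluation of $\int_0^1 x^{a-1}(1-x)^{b-1} dx$:

```python
import math

def beta_via_gamma(a, b):
    # beta(a, b) = Gamma(a) * Gamma(b) / Gamma(a + b)
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def beta_via_integral(a, b, n=200000):
    # midpoint rule for the integral of x^(a-1) * (1-x)^(b-1) over (0, 1);
    # midpoints avoid the endpoints, where the integrand can blow up
    h = 1.0 / n
    total = 0.0
    for k in range(n):
        x = (k + 0.5) * h
        total += x ** (a - 1) * (1 - x) ** (b - 1)
    return total * h

print(beta_via_gamma(2.5, 3.0))     # exact value is 2/(2.5 * 3.5 * 4.5) = 16/315
print(beta_via_integral(2.5, 3.0))  # agrees to several decimal places
```

When $b$ is a positive integer the answer can be checked by hand, since $\beta(a, 3) = \frac{2}{a(a+1)(a+2)}$.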

I'll do this in two steps. Step one will convert the beta integral into an integral involving powers of sine and cosine. Step two will be to write $\Gamma(a) \Gamma(b)$ as a product of two integrals, do a change of variables and convert to an improper integral on the first quadrant. Then I'll convert to polar coordinates to show that this integral is equal to $\Gamma(a+b) \beta(a,b)$.

Step one: converting the beta integral to a sine/cosine integral. Restrict $t \in [0, \frac{\pi}{2}]$ and then do the substitution $x = \sin^2(t), dx = 2 \sin(t)\cos(t) dt$. Then the beta integral becomes: $\int_0^1 x^{a-1}(1-x)^{b-1} dx = 2\int_0^{\frac{\pi}{2}} (\sin^2(t))^{a-1}(1-\sin^2(t))^{b-1} \sin(t)\cos(t)dt = 2\int_0^{\frac{\pi}{2}} (\sin(t))^{2a-1}(\cos(t))^{2b-1} dt$
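As a quick sanity check of step one (my addition, not in the original argument), take $a = b = \frac{1}{2}$, so both exponents on the right-hand side vanish:

```latex
\int_0^1 x^{-1/2}(1-x)^{-1/2}\, dx
  = 2\int_0^{\frac{\pi}{2}} (\sin(t))^{0}(\cos(t))^{0}\, dt
  = 2 \cdot \frac{\pi}{2} = \pi,
```

which matches $\beta(\frac{1}{2},\frac{1}{2}) = \frac{\Gamma(\frac{1}{2})\Gamma(\frac{1}{2})}{\Gamma(1)} = (\sqrt{\pi})^2 = \pi$.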

Step two: transforming the product of two gamma functions into a double integral and evaluating using polar coordinates.

Write $\Gamma(a) \Gamma(b) = \int_0^{\infty} x^{a-1} e^{-x} dx \int_0^{\infty} y^{b-1} e^{-y} dy$

Now do the conversion $x = u^2, dx = 2udu, y = v^2, dy = 2vdv$ to obtain: $\int_0^{\infty} 2u^{2a-1} e^{-u^2} du \int_0^{\infty} 2v^{2b-1} e^{-v^2} dv$ (there is a tiny amount of algebra involved)

From which we now obtain $4\int^{\infty}_0 \int^{\infty}_0 u^{2a-1}v^{2b-1} e^{-(u^2+v^2)} dudv$

Now we switch to polar coordinates, remembering the $r\,dr\,d\theta$ that comes from evaluating the Jacobian of $u = r\cos(\theta), v = r\sin(\theta)$. This gives: $4 \int^{\frac{\pi}{2}}_0 \int^{\infty}_0 r^{2a +2b -1} (\cos(\theta))^{2a-1}(\sin(\theta))^{2b-1} e^{-r^2} dr d\theta$

This splits into a product of two integrals: $2 \int^{\frac{\pi}{2}}_0 (\cos(\theta))^{2a-1}(\sin(\theta))^{2b-1} d \theta \cdot 2\int^{\infty}_0 r^{2a +2b -1}e^{-r^2} dr$

The first of these integrals is just $\beta(a,b)$ by step one (the roles of $a$ and $b$ are interchanged, which does not matter since $\beta(a,b) = \beta(b,a)$), so now we have: $\Gamma(a) \Gamma(b) = \beta(a,b) \cdot 2\int^{\infty}_0 r^{2a +2b -1}e^{-r^2} dr$

The second integral: we just use $r^2 = x \rightarrow 2rdr = dx \rightarrow \frac{1}{2}\frac{1}{\sqrt{x}}dx = dr$ to obtain: $2\int^{\infty}_0 r^{2a +2b -1}e^{-r^2} dr = \int^{\infty}_0 x^{a+b-\frac{1}{2}} e^{-x} \frac{1}{\sqrt{x}}dx = \int^{\infty}_0 x^{a+b-1} e^{-x} dx =\Gamma(a+b)$ (yes, I cancelled the 2 with the 1/2)
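This last identity can also be spot-checked numerically; here is a small Python sketch (function name mine), truncating the upper limit at $r = 10$, where $e^{-r^2}$ is negligible:

```python
import math

def two_r_integral(c, r_max=10.0, n=200000):
    # midpoint rule for 2 * integral of r^(2c-1) * e^(-r^2) from 0 to infinity,
    # truncated at r_max (the tail beyond r = 10 is on the order of e^(-100))
    h = r_max / n
    total = 0.0
    for k in range(n):
        r = (k + 0.5) * h
        total += r ** (2 * c - 1) * math.exp(-r * r)
    return 2 * total * h

print(two_r_integral(2.75))  # should be close to Gamma(2.75)
print(math.gamma(2.75))
```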

And so the result follows.

That seems complicated for a simple little integral, doesn’t it?

## March 14, 2019

### Sign test for matched pairs, Wilcoxon Signed Rank test and Mann-Whitney using a spreadsheet

Filed under: statistics, Uncategorized — Tags: , , , — collegemathteaching @ 10:33 pm

Our goal: perform non-parametric statistical tests for two samples, both paired and independent. We only assume that both samples come from similar distributions, possibly shifted.

I’ll show the steps with just a bit of discussion of what the tests are doing; the texts I am using are Mathematical Statistics (with Applications) by Wackerly, Mendenhall and Scheaffer (7th ed.) and Mathematical Statistics and Data Analysis by John Rice (3rd ed.).

First, the data: 56 students took a final exam. The professor gave some of the questions and a committee gave the others. Student performance was graded as a “percent out of 100” on each set of questions (the committee graded its own questions; the professor graded his).

The null hypothesis: student performance was the same on both sets of questions. Yes, this data was close enough to normal that a paired t-test would have been appropriate, and one was done for the committee. But because I am teaching a section on non-parametric statistics, I decided to run a paired sign test and a Wilcoxon signed rank test, and then, for the heck of it, a Mann-Whitney test, which assumes independent samples (which these of course were NOT). The latter was to demonstrate the technique for the students.

There were 56 exams; “pi” was the score on my questions and “pii” the score on the committee questions. (The screen shot shows a truncated view.)

The sign test for matched pairs. The idea behind this test: take each pair and score it +1 if sample 1 is larger and -1 if sample 2 is larger. Throw out ties (use your head here; too many ties means we can’t reject the null hypothesis, so the idea is that ties should be rare).

Now set up a binomial experiment where $n$ is the number of (non-tied) pairs. If the null hypothesis is true, we'd expect $p = .5$, where $p$ is the probability that a pair gets a score of +1. So the expectation would be $np = \frac{n}{2}$ and the standard deviation would be $\sqrt{npq} = \frac{1}{2} \sqrt{n}$.

This is easy to do in a spreadsheet. Take the difference of the two columns, then use the “sign” function to return +1 if the entry from sample 1 is larger, -1 if the entry from sample 2 is larger, or 0 if they are the same. I use “copy, paste, delete” to set aside the data from ties, which show up very easily. Next, count the number of “+1” entries; doing that by hand is a tedious, error prone process, but the “countif” command in Excel handles it easily. Finally, use either a binomial calculator or the normal approximation (I don’t bother with the continuity correction). Here we reject the null hypothesis that the scores are statistically the same.
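The same computation takes only a few lines outside a spreadsheet; here is a Python sketch of the procedure above (function name mine), using the normal approximation without a continuity correction:

```python
import math

def sign_test(pairs):
    # pairs: list of (sample1, sample2) scores; ties are thrown out
    signs = [1 if x > y else -1 for x, y in pairs if x != y]
    n = len(signs)
    plus = signs.count(1)        # number of +1 scores, like Excel's countif
    mean = n / 2                 # np with p = 1/2 under the null hypothesis
    sd = math.sqrt(n) / 2        # sqrt(npq) with p = q = 1/2
    z = (plus - mean) / sd       # normal approximation, no continuity correction
    return plus, n, z

# toy data: sample 1 wins 15 of 20 non-tied pairs; 3 ties are discarded
pairs = [(3, 1)] * 15 + [(1, 3)] * 5 + [(2, 2)] * 3
print(sign_test(pairs))
```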

Of course, this matched pairs sign test does not take magnitude of differences into account but rather only the number of times sample 1 is bigger than sample 2…that is, only “who wins” and not “by what score”. Clearly, the magnitude of the difference could well matter.

That brings us to the Wilcoxon signed rank test. Here we list the differences (as before) but then use the “absolute value” function to get the magnitudes of such differences. Now we need to do an “average rank” of these differences (throwing out a few “zero differences” if need be). By “average rank” I mean the following: if there are “k” entries between ranks n, n+1, n+2, ..n+k-1, then each of these gets a rank $\frac{n + n+1 + n+2 +...+ n+k-1}{k} = n + \frac{(k-1)}{2}$

(use $\sum^n_{k=1} k = \frac{n(n+1)}{2}$ to work this out).

Needless to say, this can be very tedious. But the “rank.avg” function in Excel really helps.

Example: =RANK.AVG(di, $d$2:$d$55, 1) does the following: it ranks the entry in cell di against the cells d2:d55 (the dollar signs make the cell addresses “absolute” references, so the range doesn’t change as you copy the formula down the spreadsheet), and the “1” means you rank from lowest to highest.
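Outside of Excel, the same average-ranking can be sketched in a few lines of Python (function name mine, mimicking RANK.AVG with ascending order):

```python
def rank_avg(values):
    # average ranks: the lowest value gets rank 1; tied values share the
    # mean of the ranks they occupy, e.g. a tie at ranks 2 and 3 -> 2.5
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j to the end of the run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1    # mean of ranks i+1, ..., j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

print(rank_avg([10, 20, 20, 30]))  # [1.0, 2.5, 2.5, 4.0]
```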

Now the test works in the following manner: if the populations are roughly the same, the larger and smaller ranked differences should each come from either population roughly half the time. So we denote by $T^{-}$ the sum of the ranks of the negative differences (in this case, where “pii” is larger) and by $T^{+}$ the sum of the ranks of the positive differences.

One easy way to tease these out: $T^{+} + T^{-} = \frac{1}{2}n(n+1)$, and $T^{+} - T^{-}$ can be computed by summing the signed ranks: multiply the rank of each absolute difference by the sign of that difference (so the pairs in which “pii” is larger contribute negatively) and add. Now note that $\frac{1}{2}((T^{+} + T^{-}) + (T^{+} - T^{-})) = T^{+}$ and $\frac{1}{2}((T^{+} + T^{-}) - (T^{+} - T^{-})) = T^{-}$.

One can use a T table (this is a different T than the “student t”) or, if $n$ is greater than, say, 25, use the normal approximation with $E(T^{+}) = \frac{n(n+1)}{4}, V(T^{+}) = \frac{n(n+1)(2n+1)}{24}$. How these are obtained: the expectation is one half the sum of all the ranks (what one would expect if the distributions were the same), and the variance comes from $n$ Bernoulli random variables $I_k$ (one indicator for each rank, with $p = \frac{1}{2}$), where the variance is $\frac{1}{4} \sum^n_{k=1} k^2 = \frac{1}{4} \frac{n(n+1)(2n+1)}{6}$.
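Putting the pieces together, here is a Python sketch of the whole signed-rank computation (function name mine; it computes $T^{+}$ directly rather than via the sum/difference trick, and uses the normal approximation with $E(T^{+}) = \frac{n(n+1)}{4}$ and $V(T^{+}) = \frac{n(n+1)(2n+1)}{24}$):

```python
import math

def wilcoxon_signed_rank(pairs):
    # signed-rank statistic T+ with a normal approximation
    diffs = [x - y for x, y in pairs if x != y]   # drop zero differences
    n = len(diffs)
    mags = [abs(d) for d in diffs]
    # average ranks of |differences|: lowest gets rank 1, ties averaged
    order = sorted(range(n), key=lambda i: mags[i])
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and mags[order[j + 1]] == mags[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    t_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    mean = n * (n + 1) / 4                        # E(T+) under the null
    var = n * (n + 1) * (2 * n + 1) / 24          # V(T+) under the null
    z = (t_plus - mean) / math.sqrt(var)
    return t_plus, z

# toy data: the three positive differences carry the largest magnitudes
print(wilcoxon_signed_rank([(5, 1), (4, 1), (3, 1), (1, 2)]))
```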

Here is a nice video describing the process by hand:

Mann-Whitney test
This test doesn’t apply here as the populations are, well, anything but independent, but we’ll pretend so we can crunch this data set.

Here the idea is very well expressed:

Do the following: label where each data point comes from, then rank all the data together. Then add up the ranks of, say, the first sample. If the samples come from the same distribution, each sample’s rank sum should be in proportion to its sample size.

Again, do a “rank average” and yes, Excel can do this over two different columns of data, while keeping the ranks themselves in separate columns.

And one can compare, using either column’s rank sum: the expectation would be $E = \frac{n_1(n_1 +n_2 + 1)}{2}$ and the variance would be $V = \frac{n_1n_2(n_1+n_2+1)}{12}$

Where this comes from: this is really a random sample of size $n_1$ drawn without replacement from the population of integers $1, 2, ..., n_1+n_2$ (all possible ranks). The expectation is $n_1 \mu$ and the variance is $n_1 \sigma^2 \frac{n_1+n_2-n_1}{n_1+n_2 -1}$ (note the finite population correction factor), where $\mu = \frac{n_1+n_2 +1}{2}$ and $\sigma^2 = \frac{(n_1+n_2)^2-1}{12}$ (this should remind you of the discrete uniform distribution). The rest follows from algebra.
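The rank-sum computation can likewise be sketched in Python (function name mine), reusing the average-rank idea on the pooled data and applying the normal approximation above:

```python
import math

def rank_sum_test(sample1, sample2):
    # Mann-Whitney / rank-sum statistic with a normal approximation
    combined = sample1 + sample2
    n1, n2 = len(sample1), len(sample2)
    # average ranks over the pooled data (ties share the mean rank)
    order = sorted(range(n1 + n2), key=lambda i: combined[i])
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < n1 + n2:
        j = i
        while j + 1 < n1 + n2 and combined[order[j + 1]] == combined[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w = sum(ranks[:n1])                        # rank sum of sample 1
    mean = n1 * (n1 + n2 + 1) / 2              # E under the null
    var = n1 * n2 * (n1 + n2 + 1) / 12         # V under the null
    z = (w - mean) / math.sqrt(var)
    return w, z

# toy data: sample 1 holds the three lowest ranks
print(rank_sum_test([1, 2, 3], [4, 5, 6]))
```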

So this is how it goes. Note: I went ahead and ran the “matched pairs” t-test to contrast with the matched pairs sign test and Wilcoxon test, and the “two sample t-test with unequal variances” to contrast with the Mann-Whitney test; I used the “unequal variances” assumption because the variance of sample pii is about double that of pi (I provided the F-test).