College Math Teaching

April 24, 2018

And I trolled my complex variables class

Filed under: advanced mathematics, analysis, class room experiment, complex variables — collegemathteaching @ 6:34 pm

One question on my last exam: find the Laurent series for \frac{1}{z + 2i} centered at z = -2i which converges on the punctured disk |z+2i| > 0 . And yes, about half the class missed it.

I am truly evil.

October 4, 2016

Linear Transformation or not? The vector space operations matter.

Filed under: calculus, class room experiment, linear algebra, pedagogy — collegemathteaching @ 3:31 pm

This is nothing new; it is an example for undergraduates.

Consider the set R^+ = \{x| x > 0 \} endowed with the “vector addition” x \oplus y = xy , where xy denotes ordinary real number multiplication, and “scalar multiplication” r \odot x = x^r , where r \in R and x^r is ordinary exponentiation. It is clear that \{R^+, R, \oplus, \odot \} is a vector space: the real number 1 is the “additive” identity vector, the scalar 0 plays the role of the zero scalar, and the scalar 1 is the multiplicative identity. Verifying the various vector space axioms is a fun, if trivial, exercise.

Now consider the function L(x) = ln(x) with domain R^+ (here, ln(x) is the natural logarithm function). Since ln(xy) = ln(x) + ln(y) and ln(x^a) = a ln(x) , the map L:R^+ \rightarrow R (where the range has the usual vector space structure) is a linear transformation.

What is even better: ker(L) =\{x|ln(x) = 0 \} = \{1 \} , so L is one to one (of course, we already knew that from calculus).

And, given z \in R, ln(e^z) = z so L is also onto (we knew that from calculus or precalculus).

So, R^+ = \{x| x > 0 \} is isomorphic to R with the usual vector operations, and of course the inverse linear transformation is L^{-1}(y) = e^y .
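As a quick numerical sanity check of the two linearity properties (not a proof, just an illustration; the Python below and the helper names oplus/odot are mine):

```python
import math
import random

def oplus(x, y):          # "vector addition" on R^+: x (+) y = xy
    return x * y

def odot(r, x):           # "scalar multiplication": r (.) x = x^r
    return x ** r

L = math.log              # the candidate linear transformation L(x) = ln(x)

for _ in range(5):
    x, y = random.uniform(0.1, 10), random.uniform(0.1, 10)
    r = random.uniform(-3, 3)
    assert math.isclose(L(oplus(x, y)), L(x) + L(y))   # L(x (+) y) = L(x) + L(y)
    assert math.isclose(L(odot(r, x)), r * L(x))       # L(r (.) x) = r L(x)
print("ln respects both exotic vector space operations")
```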

Upshot: when one asks “is F a linear transformation or not”, one needs information about not only the domain set but also the vector space operations.

March 13, 2015

Moving from “young Turk” to “old f***”

Filed under: calculus, class room experiment, editorial, pedagogy — collegemathteaching @ 9:09 pm

Today, one of our hot “young” (meaning: new here) mathematicians came to me wanting to inquire about a course switch. He noted that his two-course load included two different courses (two preparations) and that I was teaching different sections of the same two courses…was I interested in doing a course swap so that he had only one preparation (he is teaching 8 hours) and I’d only have two?

I said: “When I was your age, I minimized the number of preparations. But at my age, teaching two sections of the same low level course makes me want to bash my head against the wall.” That is, by my second lesson of the same course on the same day, I just want to be about anywhere else on campus; I have no interest, no enthusiasm, etc.

I specifically REQUESTED 3 preparations to keep myself from getting bored; that is what 24 years of teaching this stuff does to you.

COMMENTARY
Every so often, someone has the grand idea to REFORM the teaching of (whatever) and the “reformers” usually get at least a few departments to go along with it.

The common thing said is that it gets professors to reexamine their teaching of (whatever).

But I wonder if many try these things….just out of pure boredom. Seriously, read the buzzwords of the “reform paper” I linked to; there is really nothing new there.

January 23, 2015

Making a math professor happy…

Filed under: calculus, class room experiment, elementary mathematics — collegemathteaching @ 10:28 pm

Calculus III: we are talking about polar curves. I give the usual lesson about how to graph r = sin(2 \theta) and r = sin(3 \theta) and give the usual “if n is even, the graph of r = sin(n \theta) has 2n petals, and if n is odd, it has n petals.”

Question: “does that mean it is impossible to have a graph with 6 petals then”? 🙂

Yes, one can have intersecting petals; one thing to try: r = |sin(3 \theta) | . But you aren’t going to get it without a trick of some sort.

6petals
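For anyone who wants to reproduce the picture, here is a minimal matplotlib sketch (assuming numpy and matplotlib are available; this is just my reconstruction of the graph above):

```python
import numpy as np
import matplotlib.pyplot as plt

theta = np.linspace(0, 2 * np.pi, 2000)
r = np.abs(np.sin(3 * theta))     # the "trick": reflecting the negative lobes doubles the petal count

plt.plot(r * np.cos(theta), r * np.sin(theta))   # convert (r, theta) to Cartesian coordinates
plt.gca().set_aspect("equal")
plt.title(r"$r = |\sin(3\theta)|$")
plt.show()
```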

November 22, 2014

One upside to a topologist teaching numerical analysis…

Yes, I was glad when we hired people with applied mathematics expertise; though I am enjoying teaching numerical analysis, it is killing me. My training is in pure mathematics (in particular, topology) and so class preparation is very intense for me.

But I so love being able to show the students the very real benefits that come from the theory.

Here is but one example: right now, I am talking about numerical solutions to “stiff” differential equations; basically, a differential equation is “stiff” if the magnitude of y' is several orders of magnitude larger than the magnitude of the solution y .

A typical example is the differential equation y' = -\lambda y , y(0) = 1 for \lambda > 0 . Example: y' = -20y, y(0) = 1 . Note that the solution y(t) = e^{-20t} decays very quickly to zero even though |y'| is 20 times |y| .

One uses such an equation to test whether a method works well for stiff differential equations. One such method is the Euler method: w_{i+1} = w_{i} + h f(t_i, w_i) , which here becomes w_{i+1} = w_i - h \lambda w_i = (1 - h\lambda) w_i . There is a way of assigning a polynomial to a method; in this case the polynomial is p(\mu) = \mu - (1-h\lambda) , and if the roots of this polynomial have modulus less than 1, then the method will converge. Well here, the root is (1-h\lambda) , and calculating: -1 < 1 - h \lambda < 1 , which implies that 0 < h \lambda < 2 . This is a good reference.

So for \lambda = 20 we find that h has to be less than \frac{1}{10} . And so I ran Euler’s method for the initial value problem on [0,1] and showed that the solution diverged wildly when using 9 intervals, oscillated back and forth (with constant magnitude) when using 10 intervals, and slowly converged when using 11 intervals. It is just plain fun to see the theory in action.
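Here is a rough Python sketch of that classroom experiment; the equation and interval counts are the ones above, while the code itself (function names included) is my own reconstruction:

```python
import numpy as np

def euler(f, y0, t0, t1, n):
    """Fixed-step Euler's method with n subintervals on [t0, t1]."""
    h = (t1 - t0) / n
    t, w = t0, y0
    values = [w]
    for _ in range(n):
        w = w + h * f(t, w)       # w_{i+1} = w_i + h f(t_i, w_i)
        t += h
        values.append(w)
    return np.array(values)

f = lambda t, y: -20.0 * y        # the stiff test problem y' = -20y, y(0) = 1

for n in (9, 10, 11):
    w = euler(f, 1.0, 0.0, 1.0, n)
    print(f"{n} intervals, last three iterates: {np.round(w[-3:], 4)}")
# 9 intervals  (h = 1/9  > 1/10): |1 - 20h| > 1, so the iterates blow up
# 10 intervals (h = 1/10):        1 - 20h = -1, so the iterates oscillate between +1 and -1
# 11 intervals (h = 1/11 < 1/10): |1 - 20h| < 1, so the iterates (slowly) decay toward 0
```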

April 7, 2014

Numerical integration: why the brain is still required…

Filed under: class room experiment, integrals, numerical methods, pedagogy — collegemathteaching @ 4:59 pm

I gave the following demonstration in class today: \int^1_0 sin^2(512 \pi x) dx = ?

Now, of course, even a C student in calculus II would be able to solve this exactly using sin^2(u) = \frac{1}{2} - \frac{1}{2}cos(2u) to obtain: \int^1_0 sin^2(512 \pi x) dx=\frac{1}{2}

But what about the “just bully” numerical methods we’ve learned?

Romberg integration fails miserably, at least at first:

integrationfail1

(for those who don’t know about Romberg integration: the first column gives trapezoid rule approximations, the second gives Simpson’s rule approximations and the third gives Boole’s rule; the value of \Delta x gets cut in half as the rows go down).

I said “at first” because if one goes to, say, 20 rows, one can start to get near the correct answer.

Adaptive quadrature is an even bigger fail:

integrationfail2

The problem here is that this routine quits when the refined Simpson’s rule approximation agrees with the less refined approximation (to within a certain tolerance); here the approximations are both zero, hence there is perfect agreement very early in the process.

So, what to do?

One should note, of course, that the integrand is positive except for a finite number of points where it is zero. Hence one knows right away that the results are bogus.
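A short Python sketch makes the failure mechanism concrete: when the number of equal subintervals divides 512, every sample point of the composite trapezoid rule (the first column of the Romberg table) lands on a zero of the integrand. (The trapezoid routine here is my own; it is not the code used in class.)

```python
import numpy as np

def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n equal subintervals."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

f = lambda x: np.sin(512 * np.pi * x) ** 2      # exact value of the integral on [0,1] is 1/2

for n in (2, 8, 64, 512, 1024, 3000):
    print(f"n = {n:5d}: trapezoid approximation = {trapezoid(f, 0.0, 1.0, n):.6f}")
# n = 2, 8, 64, 512: every node is a zero of the integrand, so the rule returns (essentially) 0
# n = 1024 and n = 3000: the nodes finally "see" the oscillation and the value is near 1/2
```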

One quick way to get closer: just tweak the limits of integration by a tiny amount and calculate, say, \int^{.999}_{.001} sin^2(512 \pi x) dx and do some mathematics!

integrationfail3

The point: the integration routines cannot replace thinking.

March 30, 2014

About that “viral” common core meme

Filed under: class room experiment, editorial, pedagogy — collegemathteaching @ 10:09 pm

This is making the rounds on social media:

commoncoremath

Now a good explanation as to what is going on can be found here; it is written by an experienced high school math teacher.

I’ll give my take on this; I am NOT writing this for other math professors; they would likely be bored by what I am about to say.

My take
First of all, I am NOT defending the mathematics standards of Common Core. For one: I haven’t read them. Another: I have no experience teaching below the college level. What works in my classroom would probably not work in most high school and grade school classrooms.

But I think that I can give some insight as to what is going on with this example (in the photo).

When one teaches mathematics, one often teaches BOTH how to calculate and the concepts behind the calculation techniques. Of course, one has to learn the calculation technique; no one (that I know) disputes that.

What is going on in the photo
The second “calculation” is an exercise designed to help students learn the concept of subtraction and NOT “this is how you do the calculation”.

Suppose one wants to show the students that subtracting two numbers yields “the distance on the number line between those numbers”. So: how far away from 12 is 32? Well, one moves 3 units to get to 15, then 5 more to get to 20. Now that we are at 20 (a multiple of 10), it is easy to move one unit of 10 to get to 30, then 2 more units to get to 32. So we’ve moved 3 + 5 + 10 + 2 = 20 units total.

Think of it this way: in the days prior to Google Maps and GPS systems, imagine you are taking a trip from, say, Morton, IL to Chicago and you want to take interstate highways all of the way. You want to figure the mileage.

You notice (I am making these numbers up) that the “distance between big cities” map lists 45 miles from Peoria to Bloomington and 150 miles from Bloomington to Chicago. Then you look at the little numbers on the map to see that Morton is between Peoria and Bloomington: 10 miles away from Peoria.

So, to find the distance, you calculate (45-10) + 150 = 185 miles; you used the “known mileages” as guide posts and used the little map numbers as a guide to get from the small town (Morton) to the nearest city for which the “table mileage” was calculated.

That is what is going on in the photo.

Why the concept is important

There are many reasons. The “distance between nodes” concept is heavily used in graph theory and in operations research. But I’ll give a demonstration in numerical methods:

Suppose one needs a numerical approximation of \int^{48}_0 \sqrt{1 + cos^2(x)} dx . Now if one just attacks it with a Newton-Cotes method (say, Simpson’s rule), or with Romberg, or even with a quadrature method, one runs into problems. The reason: the integrand is oscillatory and the range of integration is very long.

But one notices that the integrand is periodic; there is no need to integrate along the entire range.

Note that there are 7 complete periods of 2 \pi between 0 and 48. So one merely needs to calculate 7 \int^{2 \pi}_0 \sqrt{1+cos^2(x)} dx + \int^{48 - 14 \pi}_0 \sqrt{1+ cos^2(x)} dx and these two integrals are much more readily approximated.

In fact, why not approximate 30 \int^{\frac{\pi}{2}}_0 \sqrt{1+cos^2(x)} dx + \int^{48 - 15 \pi}_0 \sqrt{1 + cos^2(x)}dx which is even better?

The concept of calculating distance in terms of known, fixed segment lengths comes in handy.
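Here is a hedged Python sketch of that comparison, using a hand-rolled composite Simpson’s rule (the subinterval counts are my choice, picked only to make the contrast visible):

```python
import numpy as np

def simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) subintervals."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h / 3 * (y[0] + 4 * y[1:-1:2].sum() + 2 * y[2:-2:2].sum() + y[-1])

f = lambda x: np.sqrt(1 + np.cos(x) ** 2)

reference = simpson(f, 0.0, 48.0, 200000)       # brute-force high-resolution value

direct = simpson(f, 0.0, 48.0, 20)              # 20 subintervals over the whole range
split = 7 * simpson(f, 0.0, 2 * np.pi, 20) \
        + simpson(f, 0.0, 48 - 14 * np.pi, 20)  # 7 full periods of 2*pi plus the leftover piece

print("direct:", direct, " error:", abs(direct - reference))
print("split :", split,  " error:", abs(split - reference))
# With the same modest number of nodes per integral, the "split" version is far more accurate,
# because each piece spans only a couple of oscillations instead of fifteen.
```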

Or, one can think of it this way
When we teach derivatives, we certainly teach how to calculate using the standard differentiation rules. BUT we also teach the limit definition, though one wouldn’t use that definition in the middle of, say, “find the maximum and minimum of f(x) = x-\frac{1}{x} on the interval [\frac{1}{4}, 3] .” Of course, one uses the rules.

But if you saw some kid’s homework and saw f'(x) being calculated by the limit definition, would you assume that the professor was some idiot who wanted to turn a simple calculation into something more complicated?

March 25, 2014

The error term and approximation of derivatives

I’ll go ahead and work with the common 3 point derivative formulas:

This is the three-point endpoint formula: (assuming that f has 3 continuous derivatives on the appropriate interval)

f'(x_0) = \frac{1}{2h}(-3f(x_0) + 4f(x_0+h) -f(x_0 + 2h)) + \frac{h^2}{3} f^{(3)}(\omega) where \omega is some point in the interval.

The three point midpoint formula is:

f'(x_0) = \frac{1}{2h}(f(x_0 + h) -f(x_0 -h)) -\frac{h^2}{6}f^{(3)}(\omega) .

These formulas can be derived either by using the Taylor series centered at x_0 or by using the Lagrange polynomial through the given points and differentiating.

That isn’t the point of this note though.

The point: how can one demonstrate, by an example, the role the error term plays?

I suggest trying the following: let x vary from, say, 0 to 3 and let h = .25 . Now use the three point derivative estimates on the following functions:

1. f(x) = e^x .

2. g(x) = e^x + 10sin(\frac{\pi x}{.25}) .

Note one: the three point estimates for the derivatives will be exactly the same for both f(x) and g(x) . It is easy to see why: the added sine term vanishes at every multiple of h = .25 , so the two functions agree at every sample point.

Note two: the “errors” will be very, very different. It is easy to see why: look at the third derivative term. For f(x) it is e^x , while for g(x) it is e^x -10(\frac{\pi}{.25})^3cos(\frac{\pi x}{.25}) , whose magnitude is enormous.

The graphs show the story.

expsinfunction

Clearly, the 3 point derivative estimates cannot distinguish these two functions at these “sample values” of x , but one can see how, in the case of g , the degree to which g wanders away from f is directly related to the size of the higher order derivatives of g .
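A small Python sketch of that demonstration (the grid and the two functions are the ones above; the code and variable names are mine):

```python
import numpy as np

def midpoint3(f, x, h):
    """Three-point midpoint estimate of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

h = 0.25
x = np.arange(0.0, 3.0 + h / 2, h)           # sample points 0, 0.25, ..., 3

f = lambda t: np.exp(t)
g = lambda t: np.exp(t) + 10 * np.sin(np.pi * t / 0.25)

est_f, est_g = midpoint3(f, x, h), midpoint3(g, x, h)
print("largest gap between the two estimates:", np.max(np.abs(est_f - est_g)))
# essentially zero (floating-point noise): the sine term vanishes at every sample point

true_f = np.exp(x)                                                   # f'(x)
true_g = np.exp(x) + 10 * (np.pi / 0.25) * np.cos(np.pi * x / 0.25)  # g'(x)
print("worst error for f:", np.max(np.abs(est_f - true_f)))   # small, governed by h^2 f'''/6
print("worst error for g:", np.max(np.abs(est_g - true_g)))   # about 10*pi/0.25, roughly 126
```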

March 14, 2014

Approximating the derivative and round off error: class demonstration

In numerical analysis we are covering “approximate differentiation”. One of the formulas we are using: f'(x_0) = \frac{f(x_0 + h) -f(x_0 -h)}{2h} - \frac{h^2}{6} f^{(3)}(\zeta) where \zeta is some number in [x_0 -h, x_0 + h] ; of course we assume that the third derivative is continuous in this interval.

The derivation can be done in a couple of ways: one can either use the degree 2 Lagrange polynomial through x_0-h, x_0, x_0 + h and differentiate or one can use the degree 2 Taylor polynomial expanded about x = x_0 and use x = x_0 \pm h and solve for f'(x_0) ; of course one runs into some issues with the remainder term if one uses the Taylor method.

But that isn’t the issue that I want to talk about here.

The issue: “what should we use for h ?” In theory, we should get a better approximation if we make h as small as possible. But if we are using a computer to make a numerical evaluation, we have to concern ourselves with round off error. So what we actually calculate will NOT be \frac{f(x_0 + h) -f(x_0 -h)}{2h} but rather \frac{\hat{f}(x_0 + h) -\hat{f}(x_0 -h)}{2h} , where \hat{f}(x_0 \pm h) = f(x_0 \pm h) - e(x_0 \pm h) and e(x_0 \pm h) is the round off error made in calculating the function at x = x_0 \pm h (respectively).

So, it is an easy algebraic exercise to show that:

f'(x_0) - \frac{\hat{f}(x_0 + h) -\hat{f}(x_0 -h)}{2h} = - \frac{h^2}{6} f^{(3)}(\zeta)+\frac{e(x_0 +h) -e(x_0 -h)}{2h} and the magnitude of the actual error is bounded by \frac{h^2 M}{6} + \frac{\epsilon}{h} where M = max\{|f^{(3)}(\eta)|\} on some small neighborhood of x_0 and \epsilon is a bound on the round-off error of representing f(x_0 \pm h) .

It is an easy calculus exercise (“take the derivative and set equal to zero and check concavity” easy) to see that this error bound is a minimum when h = (\frac{3\epsilon}{M})^{\frac{1}{3}} .
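Spelling that exercise out: setting \frac{d}{dh}(\frac{h^2 M}{6} + \frac{\epsilon}{h}) = \frac{hM}{3} - \frac{\epsilon}{h^2} = 0 gives h^3 = \frac{3\epsilon}{M} , and since the second derivative \frac{M}{3} + \frac{2\epsilon}{h^3} is positive, h = (\frac{3\epsilon}{M})^{\frac{1}{3}} is indeed a minimum.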

Now, of course, it is helpful to get a “ball park” estimate for what \epsilon is. Here is one way to demonstrate this to the students: solve for \epsilon and obtain \frac{M h^3}{3} = \epsilon and then do some experimentation to determine \epsilon .

That is: obtain an estimate of the optimal h by using this “3 point midpoint” estimate for a known derivative near a value of x_0 for which M (a bound for the 3rd derivative) is easy to obtain, and then back out an educated guess for \epsilon .

Here are a couple of examples: one uses Excel and one uses MATLAB. I used f(x) = e^x at x = 0; of course f'(0) = 1 and M = 1 is reasonable here (just a tiny bit off). I did the 3-point estimation calculation for various values of h and saw where the error started to increase again.
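Here is roughly what that experiment looks like in Python (Excel and MATLAB were used in class; this is just an equivalent stand-in):

```python
import numpy as np

f, x0, exact = np.exp, 0.0, 1.0        # f(x) = e^x, so f'(0) = 1 and M = 1 is a fair bound

print("    h        |estimate - exact|")
for k in range(1, 13):
    h = 10.0 ** (-k)
    estimate = (f(x0 + h) - f(x0 - h)) / (2 * h)     # 3-point midpoint formula
    print(f"  1e-{k:02d}      {abs(estimate - exact):.3e}")
# The error shrinks like h^2/6 down to about h = 1e-5 or so, then round-off
# (roughly epsilon/h) takes over and the error grows as h shrinks further.
```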

Here is the Excel output for f(x) = e^x at x =0 and at x = 1 respectively. In the first case, use M = 1 and in the second, M = e .
roundofferrorder1

In the x = 0 case, we see that the error starts to increase again at about h = 10^{-5} ; the same sort of thing appears to happen for x = 1 .

So, in the first case, \epsilon is about \frac{1}{3} \times (10^{-5})^3 = 3.333 \times 10^{-16} ; it is roughly 10^{-15} at x =1 .

Note: one can also let h run through powers of \frac{1}{2} instead; something interesting happens in the x = 0 case, while the x = 1 case gives results similar to what we’ve shown. Reason (I think): 1 is easy to represent exactly in base 2, and the powers of \frac{1}{2} can be represented exactly.

Now we turn to MATLAB and here we do something slightly different: we graph the error for different values of h . Since the values of h are very small, we use a -log_{10} scale by doing the following (approximating f'(0) for f(x) = e^x )

rounoffmatlabcommand. By design, N = -log_{10}(H) . The graph looks like:

roundoffmatlabgraph

Now, the small error scale makes things hard to read, so we turn to using the log scale, this time on the y axis: let LE = -log_{10}(E) and run plot(N, LE):

roundlogscale and sure enough, you can see where the peak is: at about h = 10^{-5} , which matches the Excel result.

December 4, 2012

Teaching Linear Regression and ANOVA: using “cooked” data with Excel

During the linear regression section of our statistics course, we do examples with spreadsheets. Many spreadsheets have data processing packages that will do linear regression and provide output which includes things such as confidence intervals for the regression coefficients, the r, r^2 values, and an ANOVA table. I sometimes use this output as motivation to plunge into the study of ANOVA (analysis of variance) and have found “cooked” linear regression examples to be effective teaching tools.

The purpose of this note is NOT to provide an introduction to the type of ANOVA that is used in linear regression (one can find a brief introduction here or, of course, in most statistics textbooks) but to show a simple example using the “random number generation” features in Excel (with the data analysis pack loaded).

I’ll provide some screen shots to show what I did.

If you are familiar with Excel (or spread sheets in general), this note will be too slow-paced for you.

Brief Background (informal)

I’ll start the “ANOVA for regression” example with a brief discussion of what we are looking for: suppose we have some data which can be thought of as a set of n points in the plane (x_i, y_i) . Of course the set of y values has a variance, which is calculated as \frac{1}{n-1} \sum^n_{i=1}(y_i - \bar{y})^2 = \frac{1}{n-1}SS .

It turns out that the “sum of squares” SS = \sum^n_{i=1} (y_i - \hat{y_i})^2 + \sum^n_{i=1}(\hat{y_i} - \bar{y})^2 , where the first term is called the “sum of squares error” and the second term is called the “sum of squares regression”; or: SS = SSE + SSR. Here is an informal way of thinking about this: SS is what you use to calculate the “sample variation” of the y values (one divides this term by n-1 ). This “grand total” can be broken into two parts: the first part is the difference between the actual y values and the y values predicted by the regression line. The second is the difference between the predicted y values (from the regression) and the average y value. Now imagine if the regression slope term \beta_1 were equal to zero; then the SSE term would be, in effect, the SS term, and the SSR term would be, in effect, zero (each summand would be \bar{y} - \bar{y} ). If we denote the standard deviation of the error terms by \sigma , then \frac{SSR/\sigma^2}{SSE/((n-2)\sigma^2)} is a ratio of independent chi-square variables, each divided by its degrees of freedom, and is therefore F with 1 numerator and n-2 denominator degrees of freedom. If \beta_1 = 0 , or was not statistically significant, we’d expect this ratio to be small.

For example: if the regression line fit the data perfectly, the SSE term would be zero and the SSR term would equal the SS term, as the predicted y values would be the actual y values. Hence the ratio of (SSR/constant) over (SSE/constant) would be infinite.

That is, the ratio that we use roughly measures the percentage of variation of the y values that comes from the regression line versus the percentage that comes from the error from the regression line. Note that it is customary to denote SSE/(n-2) by MSE and SSR/1 by MSR (Mean Square Error, Mean Square Regression).

The smaller the numerator is relative to the denominator, the less the regression explains.
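For readers who prefer code to spreadsheets, here is a small numpy sketch of the same bookkeeping; the line y = 4 + 5x and the standard-deviation-1 normal residuals mirror the first Excel example below, while the particular x values and the random seed are my own choices:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(1.0, 21.0)                          # 20 x values (my choice)
y = 4 + 5 * x + rng.normal(0.0, 1.0, x.size)      # "perfect line" plus N(0,1) residuals

# least-squares slope and intercept
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
yhat = b0 + b1 * x

SS  = np.sum((y - y.mean()) ** 2)                 # total sum of squares
SSE = np.sum((y - yhat) ** 2)                     # sum of squares error
SSR = np.sum((yhat - y.mean()) ** 2)              # sum of squares regression
n = x.size
F = (SSR / 1) / (SSE / (n - 2))                   # MSR / MSE

print(f"SS = {SS:.1f},  SSE + SSR = {SSE + SSR:.1f}")   # the decomposition SS = SSE + SSR
print(f"r^2 = {SSR / SS:.4f},  F = {F:.1f}")
# With a strong linear trend and small residuals, SSR absorbs nearly all of SS, r^2 is close
# to 1, and F is huge; replace y with uniform random numbers and SSR collapses while F shrinks.
```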

The following examples using Excel spread sheets are designed to demonstrate these concepts.

The examples are as follows:

Example one: a perfect regression line with “perfect” normally distributed residuals (remember that the usual hypothesis test on the regression coefficients depend on the residuals being normally distributed).

Example two: a regression line in which the y-values have a uniform distribution (and are not really related to the x-values at all).

Examples three and four: show what happens when the regression line is “perfect” and the residuals are normally distributed, but have greater standard deviations than they do in Example One.

First, I created some x values and then came up with the line y = 4 + 5x . I then used the formula bar, as shown, to create that “perfect line” of data in the column called “fake”. Excel allows one to copy and paste formulas such as these.

fig1formulabar

This is the result after copying:

fig2copyformula

Now we need to add some residuals to give us a non-zero SSE. This is where the “random number generation” feature comes in handy. One goes to the data tag and then to “data analysis”

fig3dataanalysis

and clicks on “random number generation”:

fig4rnselect

This gives you a dialogue box. I selected “normal distribution”; then I selected “0” for the mean and “1” for the standard deviation. Note: the assumption underlying the confidence interval calculation for the regression parameters is that the residuals are normally distributed and have an expected value of zero.

fig5rngen

I selected a column for output (as many rows as x-values) which yields a column:

fig6rncolumn

Now we add the random numbers to the column “fake” to get a simulated set of y values:

fig7addrndm

That yields the column Y as shown in this next screenshot. Also, I used the random number generator to generate random numbers in another column; this time I used the uniform distribution on [0,54]; I wanted the “random set of potential y values” to have roughly the same range as the “fake data” y-values.

fig8randuniform

Y holds the “non-random” fake data and YR holds the data for the “Y’s really are randomly distributed” example.

fig9ranuniformns

I then decided to generate two more “linear” sets of data; in these cases I used the random number generator to generate normal residuals of larger standard deviation and then created Y data to use as a data set; the columns of residuals are labeled “mres” and “lres” and the columns of new data are labeled YN and YVN.

Note: in the “linear trend data” I added the random numbers to the exact linear model y’s labeled “fake” to get the y’s to represent data; in the “random-no-linear-trend” data column I used the random number generator to generate the y values themselves.

Now it is time to run the regression package itself. In Excel, simple linear regression is easy. Just go to the data analysis tab and click, then click “regression”:

fig10regressselect

This gives a dialogue box. Be sure to tell the routine that you have “headers” to your columns of numbers (non-numeric descriptions of the columns) and note that you can select confidence intervals for your regression parameters. There are other things you can do as well.

fig11regressdialog

You can select where the output goes. I selected a new data sheet.

fig12regoutbest

Note the output: the r value is very close to 1, the p-values for the regression coefficients are small, and the calculated regression line (used to generate the \hat{y_i} ‘s) is y = 3.70 + 5.01x . Also note the ANOVA table: the SSR (sum of squares regression) is very, very large compared to the SSE (sum of squares residuals), as expected; the variation in the y values is almost completely explained by the regression line. Hence we obtain an obscenely large F value; we easily reject the null hypothesis (that \beta_1 = 0 ).

This is what a plot of the calculated regression line with the “fake data” looks like:

ploty

Yes, this is unrealistic, but this is designed to demonstrate a concept. Now let’s look at the regression output for the “uniform y values” (y values generated at random from a uniform distribution of roughly the same range as the “regression” y-values):

fig13regressoutrand

Note: r^2 is nearly zero, we fail to reject the null hypothesis that \beta_1 = 0 and note how the SSE is roughly equal to the SS; the reason, of course, is that the regression line is close to y = \bar{y} . The calculated F value is well inside the “fail to reject” range, as expected.

A plot looks like:

plot2yr

The next two examples show what happens when one “cooks” up a regression line with residuals that are normally distributed, have mean equal to zero, but have larger standard deviations. Watch how the r values change, as well as how the SSR and SSE values change. Note how the routine fails to come up with a statistically significant estimate for the “constant” part of the regression line but the slope coefficient is handled easily. This demonstrates the effect of residuals with larger standard deviations.

fig14regresoutnoise

plotyn

fig15regresvnois

plotyvn
