College Math Teaching

December 12, 2023

And so we’ve lost our math major

The mathematics major is not one that often gets cut, but that is what happened to us. I admit that I saw it coming. Yes, there will be litigation, lawsuits, etc., but most of that is personnel oriented. The fact is that, for now, our math major is gone; not that we won’t try to bring it back when our current top administrators leave, one way or another.

Yes, tenured faculty were cut. That is where litigation will come in. And yes, there is grief over that…a lot of it.

But that is not what this post will focus on.

For some of my colleagues, there is a profound sense of grief at being a math professor at a university that will no longer offer a math major. For them, a big part of their identity was being a mathematician.

Sadly, that isn’t true of me; I lost my identity as a mathematician decades ago.

Time and time again, I’d go to a conference, be fired up about what I saw, return to campus…and watch the ideas in my head die a slow death under 11-12 hour teaching loads, committee work, etc. Over time, I started switching to MAA type conferences (Mathfest, for example) and finding joy in teaching “out of area” courses, even actuarial science ones. I sort of became “known” for that.

And yeah, I did research, picking off a problem here and there, and did enough to make Full Professor at my institution. But more and more, my research became more shallow and more broad. And frankly, I lost the appetite for all of the BS that comes with publication (typesetting, figures, dealing with pedantic referees, etc.). Yes, I have a paper at the referees’ now, and I owe a referee’s report as well.

But, well, when I see what the mathematicians are doing at the R1 universities (e.g., the Big Ten), what I do simply does not compare…it never did. I came to accept that decades ago.

This is a long winded way of saying that I suffered no ego bruise from this.

Note: I am not saying this is totally over; if/when we get better leadership I might attempt to bring a math major back (yes, I am the incoming chair of the department).

Moving forward: I think that the math major is on the wane, at least at the non-elite universities.

August 12, 2023

West Virginia Math Department and trends..

First of all, I’ll have to read this 2016 article.

But: it is no secret that higher education in the US is in turmoil, at least at the non-elite universities. Some colleges are closing and others are experiencing cutbacks due to high operating losses.

This little note will not attempt to explain why education has gotten so expensive, though things like reduction of government subsidies, increased costs for technology (computers, wifi, learning management systems), unfunded mandates (e.g., accommodations for an increasing percentage of students with learning disabilities), and staff to handle helicopter parents are all factors adding to increased costs.

And so, many universities are more tuition dependent than ever before, and while the sticker price is high, many students (most, at many universities) are given steep discounts.

And so, higher administration is trying to figure out what to offer: they need to bring in tuition dollars.

Now about math: our number of majors has dropped, and much, if not most, of the drop comes from math education: teaching is not a popular occupation right now, for many reasons.

Things like this do not help attract students to teacher education programs:

One thing that hurts enrollment in upper division math courses is that higher math has prerequisites. Of course, many (most?) pure math courses do not appear to have immediate application to other fields (though they often do). And, let’s face it: math is hard. The ideas are very dense.

So, it is my feeling that the math major, one that requires two semesters of abstract algebra and two semesters of analysis, is probably on the way out, at least at non-elite schools. I think it will survive at Ivy caliber schools, MIT, Stanford, and the flagship R-1 schools.

As far as the rest of us: it absolutely hurts my heart to say this, but I feel that for our major to survive at a place like mine, we’ll have to allow for at least some upper division credit to come from “theory of interest”, “math for data science”, etc. type courses…and perhaps allow for mathy electives from other disciplines. I see us as having to become a “mathematical sciences” type program…or not existing at all.

Now for the West Virginia situation (and they probably won’t be the last):

I went on their faculty page and noted that they had 31 Associate/Full professors; the remainder appeared to be “instructors” or “assistant professors of instruction” and the like. So while I do not have any special information, it appears that they are cutting the non-tenured…the ones who did a lot (most?) of the undergraduate teaching.

Now for the uninitiated: keeping current with research at the R-1 level is, in and of itself, a full time job. Now I am NOT one of those who says that “researchers are bad teachers” (that is often untrue), but I can say that teaching full loads (10-12 hours of undergraduate classes) is a very different job than running a graduate seminar, advising graduate students, researching, and getting NSF grants (often a prerequisite for getting tenure to begin with).

So, a lot of professors’ lives are going to change, not only for those being let go, but also for those still left. I’d imagine that some of the research professors might leave and have their place taken by the teaching faculty who are due to be cut, but that is pure speculation on my part.

July 23, 2023

Every Vector Space has a basis

Note: I will probably make a video for this. AND this post (and video to come) is for beginners. The pace will be too slow for the seasoned math student.

Let V be a vector space over a field of scalars F (so for \alpha \in F and \vec{v} \in V , we have \alpha \vec{v} \in V ). I’ll assume you know what a spanning set is and what it means to be linearly independent.

Recall: a basis is a linearly independent spanning set. We’d like to prove that every vector space HAS a basis.

Those new to linear algebra might wonder why this even requires proof. So, I’ll say a few words about some vector spaces that have an infinite number of vectors in their basis.

1. Polynomial space: here the vectors are polynomials in x , say, 3 + 2x -x^2 +\pi x^3 . The coefficients are real numbers and the set of scalars F will be the real numbers. (Note: the set of scalars is sometimes called a “field.”) So one basis (there are many others) is 1, x, x^2, ..., x^k, ... and since there is no limit to the degree of a polynomial, the basis must have an infinite number of vectors.

Note: vector addition is FINITE. So, though you may have learned in calculus class that { 1 \over 1-x} =1+x+x^2 + x^3+...+x^k + x^{k+1} + ... for x \in (-1, 1) , this fact depends on infinite series, which in turn requires a limiting process, which then requires a notion of “being close” (the delta-epsilon stuff). In some vector spaces there IS a way to do that, but you need the notion of an “inner product” to define size and closeness (remember the dot product from calculus?), and then you can introduce ideas like convergence of an infinite sum; for example, check out Hilbert spaces. But such operations are an extension of vector space addition; they are NOT the addition operation itself.

2. The vector space of all real-valued functions f that are continuous on [0,1] . The scalars will be the real numbers. Now this is a weird vector space. Remember that what is included are things like polynomials, rational functions whose denominators have no roots in [0,1] , exponential functions, trig functions whose domains include the unit interval, functions like ln(2+x) , and even piecewise defined functions whose pieces meet up at the breakpoints. And that is only the beginning.

Any basis of this beast will be impossible to list explicitly, and, in fact, no basis can be put into one-to-one correspondence with the positive integers (we say the basis is uncountably infinite).

But this vector space indeed has a basis, as we shall see.

So, how do we prove our assertion that every vector space has a basis?

Let V be our non-empty vector space. There is some nonzero vector in it, say \vec{v_1} (if V consists of the zero vector alone, the empty set serves as a basis). Let V_1 denote the span of this vector. Now if the span is not all of V , we can find \vec{v_2} not in the span. Let the span of \{\vec{v_1}, \vec{v_2} \} be denoted by V_2 . If V_2 = V , we have our basis and we are done. Otherwise, we continue on.

We continue indefinitely. And here is where some set theory comes in: our index set might well become infinite. But that is OK: the spans we build form a chain of nested subspaces V_1 \subset V_2 \subset \dots \subset V_k \subset \dots \subset V_{\gamma} \subset \dots , each V_{\gamma} being the span of the linearly independent vectors chosen so far.

Now we have to use some set theory. Zorn’s Lemma (which is equivalent to the axiom of choice) says: if a partially ordered set has the property that every chain in it has an upper bound, then the set has a maximal element, that is, an element not properly contained in any other. Apply this to the linearly independent sets of vectors, ordered by inclusion: the union of any chain of linearly independent sets is again linearly independent (a dependence relation would involve only finitely many vectors, and those would all lie in a single member of the chain), so every chain has an upper bound. Zorn’s Lemma therefore hands us a maximal linearly independent set; call its span V^* .

Now we claim that the set of vectors spanning V^* is linearly independent and spans all of V .

Proof of claim: remembering that vector addition allows only a finite number of summands, any FINITE collection of our chosen vectors, say

\vec{v_{n_1}}, ... , \vec{v_{n_k}}

must lie in some V_{\gamma} and, by construction, is linearly independent (order these vectors and remember how we got them: each vector we adjoined was NOT in the span of the previously chosen vectors, so no nontrivial linear combination of them can equal \vec{0} ).

Now to show that they span: suppose \vec{x} is NOT in V^{*} . Then our maximal linearly independent set together with \vec{x} is still linearly independent (no linear combination of the chosen vectors can produce \vec{x} ), and it strictly contains the maximal set, which violates maximality.

So, we now have our basis.
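In the finite-dimensional setting, the “keep adjoining a vector outside the span” idea in the proof can be carried out directly, with no set theory needed. Here is a minimal sketch in Python (my own illustration, not part of the original argument; the function names are made up), which greedily extracts a basis from a list of vectors in R^n, using Gaussian elimination to test span membership:

```python
def in_span(vecs, v, tol=1e-9):
    """Return True if v lies in the span of vecs (vectors as lists of floats),
    by row-reducing the augmented linear system [vecs | v]."""
    n = len(v)
    if not vecs:
        return all(abs(x) < tol for x in v)  # only the zero vector is in the empty span
    m = len(vecs)
    # Row i holds the i-th coordinate of each candidate vector, augmented with v[i].
    rows = [[vec[i] for vec in vecs] + [v[i]] for i in range(n)]
    pivot_row = 0
    for col in range(m):
        piv = next((r for r in range(pivot_row, n) if abs(rows[r][col]) > tol), None)
        if piv is None:
            continue  # no pivot in this column
        rows[pivot_row], rows[piv] = rows[piv], rows[pivot_row]
        p = rows[pivot_row][col]
        for r in range(n):
            if r != pivot_row and abs(rows[r][col]) > tol:
                f = rows[r][col] / p
                for c in range(col, m + 1):
                    rows[r][c] -= f * rows[pivot_row][c]
        pivot_row += 1
    # Inconsistent (v not in the span) iff some row reads (0, ..., 0 | nonzero).
    for r in range(n):
        if all(abs(rows[r][c]) < tol for c in range(m)) and abs(rows[r][m]) > tol:
            return False
    return True

def greedy_basis(spanning_list):
    """Mimic the proof: adjoin each vector that is not already in the span."""
    basis = []
    for v in spanning_list:
        if not in_span(basis, v):
            basis.append(v)
    return basis

vectors = [[1.0, 0, 0], [2.0, 0, 0], [0, 1.0, 0], [1.0, 1.0, 0], [0, 0, 1.0]]
print(len(greedy_basis(vectors)))  # 3: the redundant vectors are never adjoined
```

Running it on a redundant spanning list in R^3 keeps exactly three vectors, mirroring the proof: a vector is adjoined only when it is not in the span of those already chosen. The set-theoretic machinery above is what guarantees this process terminates sensibly even when no finite list suffices.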

April 28, 2023

Taylor Polynomials without series (advantages, drawbacks)

Filed under: calculus, series, Taylor polynomial, Taylor Series — oldgote @ 12:32 am

I was in a weird situation this semester in my “applied calculus” (aka “business calculus”) class. I had an awkward amount of time left (1 week) and I still wanted to do something with Taylor polynomials, but I had nowhere near enough time to cover infinite series and power series.

So, I just went ahead and introduced it “user’s manual” style, knowing that I could justify this, if I had to (and no, I didn’t), even without series. BUT there are some drawbacks too.

Let’s see how this goes. We’ll work with series centered at c =0 (expand about 0) and assume that f has as many continuous derivatives as desired on an interval connecting 0 to x .

Now we calculate: \int^x_0 f'(t) dt = f(x) -f(0) , of course. But we could do the integral another way: let’s use parts and say u = f'(t), dv = dt \rightarrow du = f''(t) dt, v = (t-x) . Note the choice for v and that x is a constant in the integral. We then get f(x) -f(0)=\int^x_0 f'(t) dt = (f'(t)(t-x))|^x_0 -\int^x_0f''(t)(t-x) dt . Evaluation:

f(x) =f(0)+f'(0)x -\int^x_0f''(t)(t-x) dt and we’ve completed the first step.

Though we *could* do the inductive step now, it is useful to grind through a second iteration to see the pattern.

We take our expression and compute \int^x_0f''(t)(t-x) dt by parts again, with u = f''(t), dv =(t-x) dt \rightarrow du =f'''(t) dt, v = {(t-x)^2 \over 2!} and insert into our previous expression:

f(x) =f(0)+f'(0)x - (f''(t){(t-x)^2 \over 2!})|^x_0 + \int^x_0 f'''(t){(t-x)^2 \over 2!} dt which works out to:

f(x) = f(0)+f'(0)x +f''(0){x^2 \over 2} + \int^x_0 f'''(t){(t-x)^2 \over 2!} dt and note the alternating sign of the integral.

Now to use induction: assume that:

f(x) = f(0)+f'(0)x +f''(0){x^2 \over 2} + ....f^{(k)}(0){x^k \over k!} + (-1)^k \int^x_0 f^{(k+1)}(t) {(t-x)^k \over k!} dt

Now let’s look at the integral: as usual, use parts as before and we obtain:

(-1)^k (f^{(k+1)}(t) {(t-x)^{k+1} \over (k+1)!}|^x_0 - \int^x_0 f^{(k+2)}(t) {(t-x)^{k+1} \over (k+1)!} dt ). Taking some care with the signs, we end up with

(-1)^k (-f^{(k+1)}(0){(-x)^{k+1} \over (k+1)! } )+ (-1)^{k+1}\int^x_0 f^{(k+2)}(t) {(t-x)^{k+1} \over (k+1)!} dt which works out to (-1)^{2k+2} f^{(k+1)}(0) {x^{k+1} \over (k+1)!} + (-1)^{k+1}\int^x_0 f^{(k+2)}(t) {(t-x)^{k+1} \over (k+1)!} dt , and since (-1)^{2k+2} = 1 , the first term is just f^{(k+1)}(0) {x^{k+1} \over (k+1)!} .

Substituting this evaluation into our inductive step equation gives the desired result.
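As a quick numerical sanity check (my own addition, not part of the original derivation), one can verify the relation for a convenient function, say f(t) = e^t , whose derivatives are all e^t . A minimal Python sketch, approximating the remainder integral with a crude midpoint rule:

```python
import math

def taylor_with_remainder(x, k, steps=20000):
    """For f(t) = e^t: degree-k Taylor polynomial about 0, plus the
    integral remainder (-1)^k * int_0^x e^t (t-x)^k / k! dt,
    the integral approximated by the midpoint rule."""
    poly = sum(x**j / math.factorial(j) for j in range(k + 1))
    h = x / steps
    integral = sum(
        math.exp((i + 0.5) * h) * ((i + 0.5) * h - x)**k / math.factorial(k) * h
        for i in range(steps)
    )
    return poly + (-1)**k * integral

# Polynomial plus remainder should recover e^x up to quadrature error.
approx = taylor_with_remainder(1.5, 3)
print(abs(approx - math.exp(1.5)))  # very small
```

The point of the check: for ANY degree k, the polynomial plus the remainder integral reproduces f(x) exactly; only the quadrature introduces error.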

And note: NOTHING was assumed except for f having the required number of continuous derivatives!

BUT…yes, there is a catch. The integral is often regarded as a “correction term,” but the Taylor polynomial is really only useful so long as the integral can be made small. And that is the issue with this approach: there are times when the integral cannot be made small. It is possible that x is far enough out that the associated power series does NOT converge on (-x, x) ; the integral picks that up, but the failure may well be hidden, or at least non-obvious.

And that is why, in my opinion, it is better to do series first.

Let’s show an example.

Consider f(x) = {1 \over 1+x } . We know from work with the geometric series that its series expansion is 1 -x +x^2-x^3+....+(-1)^k x^k + .... and that the interval of convergence is (-1,1) . But note that f is smooth on [0, \infty) , and so our Taylor polynomial, with integral correction, should work for x > 0 .

So, noting that f^{(k)}(x) = (-1)^k(k!)(1+x)^{-(k+1)} , our k-th Taylor polynomial relation is:

f(x) =1-x+x^2-x^3 .....+(-1)^kx^k +(-1)^k \int^x_0 (-1)^{k+1}(k+1)!{1 \over (1+t)^{k+2} } {(t-x)^k \over k!} dt

Let’s focus on the integral; the “remainder”, if you will.

Rewrite it as: (-1)^{2k+1} (k+1) \int^x_0 ({(t -x) \over (t+1) })^k {1 \over (t+1)^2} dt . (The (1+t)^{k+2} in the denominator splits as (t+1)^k \cdot (t+1)^2 .)

Now this integral really isn’t that hard to do, if we use an algebraic trick:

Rewrite ({(t -x) \over (t+1) })^k  = ({(t+1 -x-1) \over (t+1) })^k = (1-{(x+1) \over (t+1) })^k

Now the integral is a simple substitution integral: let u = 1-{(x+1) \over (t+1) } \rightarrow du = (x+1) {1 \over (t+1)^2} dt , so our integral is transformed into:

(-1) ({k+1 \over x+1}) \int^0_{-x} u^{k} du = (-1) {k+1 \over (k+1)(x+1)} (-(-x)^{k+1}) = (-1)^{k+1} {k+1 \over (k+1)(x+1)} x^{k+1} =(-1)^{k+1}{1 \over (x+1)}x^{k+1}

This remainder cannot be made small if x \geq 1 , no matter how big we make k .

But, in all honesty, this remainder could have been computed with simple algebra.

{1 \over x+1} =1-x+x^2-....+(-1)^k x^k + R , and now solve for R algebraically.
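This algebra can be checked exactly with rational arithmetic. Here is a small Python sketch (my own; the helper name remainder is made up) that computes R directly and compares it with the closed form (-1)^{k+1} x^{k+1}/(1+x) :

```python
from fractions import Fraction

def remainder(x, k):
    """R = 1/(1+x) minus the degree-k partial sum 1 - x + x^2 - ... + (-1)^k x^k."""
    partial = sum(Fraction(-x)**j for j in range(k + 1))
    return Fraction(1, 1 + x) - partial

# Matches the closed form (-1)^(k+1) x^(k+1) / (1+x) exactly:
for k in range(6):
    assert remainder(2, k) == Fraction((-1)**(k + 1) * 2**(k + 1), 3)

# For x = 2, outside the interval of convergence, |R| GROWS with k:
print([abs(remainder(2, k)) for k in range(4)])  # 2/3, 4/3, 8/3, 16/3
```

So increasing the degree makes things worse, not better, once x \geq 1 ; exactly the phenomenon the integral remainder was concealing.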

The larger point is that the “error” is hidden in the integral remainder term, and this can be tough to see in the case where the associated Taylor series has a finite radius of convergence even though the function itself is smooth on the whole real line, or a half line.

April 16, 2023

Annoying calculations: hypergeometric expectation and variance

Filed under: statistics — oldgote @ 10:17 pm

April 11, 2023

Annoying calculations in Statistics: correlation coefficient is always between -1 and 1.

Filed under: linear algebra, statistics — oldgote @ 9:24 pm

Yes, I misspelled “Cauchy-Schwarz” in the video.

April 3, 2023

One benefit of teaching service courses

Filed under: editorial, pedagogy — oldgote @ 1:55 am

This caught my eye. Pay attention to how the professor responds.

He seemed genuinely rattled at being laughed at in public. And notice how he responded with “Hey, I am the expert here!”

I wonder if he has a lot of experience teaching courses in which many of the students really don’t want to be there, but have to be.

And that is what I have learned from teaching such courses: if you are going to give an answer that seems to go against the “common sense” of the student, it is a good thing to have a ready-made reply such as “I can see why you might think that, but here is where it goes wrong…”

I don’t want to get too much into the details because this is a math teaching blog, not a biology blog. But it appears to me that he might have said “ok, in many cases, you can tell, but not in every case, and this is why…”

But instead, the professor pulled the “credentials card.”

Having some experience with a, well, disinterested audience (if not outright hostile at times) can be a good thing.

March 26, 2023

Annoying calculations: Beta integral

March 11, 2023

Annoying calculations: Binomial Distribution

Filed under: basic algebra, binomial coefficients, probability, statistics — oldgote @ 10:18 pm

Here, we derive the expectation, variance, and moment generating function for the binomial distribution.

Video, when available, will be posted below.

Why the binomial coefficients are integers

The video, when ready, will be posted below.

