College Math Teaching

August 12, 2023

West Virginia Math Department and trends..

First of all, I’ll have to read this 2016 article.

But: it is no secret that higher education in the US is in turmoil, at least at the non-elite universities. Some colleges are closing and others are experiencing cut backs due to high operating losses.

This little note will not attempt to explain why education has gotten so expensive, though things like reduction of government subsidies, increased costs for technology (computers, wifi, learning management systems), unfunded mandates (e. g. accommodations for an increasing percentage of students with learning disabilities) and staff to handle helicopter parents are all factors adding to increased costs.

And so, many universities are more tuition dependent than ever before, and while the sticker price is high, many (most in many universities) are given steep discounts.

And so, higher administration is trying to figure out what to offer: they need to bring in tuition dollars.

Now about math: our number of majors has dropped, and much, if not most, of the drop comes from math education: teaching is not a popular occupation right now, for many reasons.

Things like this do not help attract students to teacher education programs:

One thing that hurts enrollment in upper division math courses is that higher math has prerequisites. Of course, many (most?) pure math courses do not appear to have immediate application to other fields (though they often do). And, let’s face it: math is hard. The ideas are very dense.

So, it is my feeling that the math major..one that requires two semesters of abstract algebra and two semesters of analysis, is probably on the way out, at least at non-elite schools. I think it will survive at Ivy caliber schools, MIT, Stanford, and the flagship R-1 schools.

As far as the rest of us: it absolutely hurts my heart to say this, but I feel that for our major to survive at a place like mine, we’ll have to allow for at least some upper division credit to come from “theory of interest”, “math for data science”, etc. type courses…and perhaps allow for mathy electives from other disciplines. I see us as having to become a “mathematical sciences” type program…or not existing at all.

Now for the West Virginia situation (and they probably won’t be the last):

I went on their faculty page and noted that they had 31 Associate/Full professors; the remainder appeared to be “instructors” or “assistant professors of instruction” and the like. So while I do not have any special information, it appears that they are cutting the non-tenured..the ones who did a lot (most?) of the undergraduate teaching.

Now for the uninitiated: keeping current with research at the R-1 level is, in and of itself, a full time job. Now I am NOT one of those who says that “researchers are bad teachers” (that is often untrue) but I can say that teaching full loads (10-12 hours of undergraduate classes) is a very different job than running a graduate seminar, advising graduate students, researching, and getting NSF grants (often a prerequisite for getting tenure to begin with).

So, a lot of professor’s lives are going to change, not only for those being let go, but also for those still left. I’d imagine that some of the research professors might leave and have their place taken by the teaching faculty who are due to be cut, but that is pure speculation on my part.

April 3, 2023

One benefit of teaching service courses

Filed under: editorial, pedagogy — oldgote @ 1:55 am

This caught my eye. Pay attention to how the professor responds.

He seemed genuinely rattled at being laughed at in public. And notice how he responded with “Hey, I am the expert here!”

I wonder if he has a lot of experience teaching courses in which many of the students really don’t want to be there, but have to be.

And that is what I have learned from teaching such courses: if you are going to give an answer that seems to go against the “common sense” of the student, it is a good thing to have a ready-made reply such as “I can see why you might think that, but here is where it goes wrong…”

I don’t want to get too much into the details because this is a math teaching blog, not a biology blog. But it appears to me that he might have said “ok, in many cases, you can tell, but not in every case, and this is why…”

But instead, the professor pulled the “credentials card.”

Having some experience with a, well, uninterested audience (if not an outright hostile one at times) can be a good thing.

March 26, 2023

Annoying calculations: Beta integral

March 7, 2023

Teaching double integrals: why you should *always* sketch the region

The problem (from Larson’s Calculus, an Applied Approach, 10th edition, Section 7.8, no. 18 in my paperback edition, no. 17 in the e-book edition) does not seem that unusual at a very quick glance:

\int^2_0 \int^{\sqrt{1-y^2}}_0 -5xy dx dy if you have a hard time reading the image. AND, *if* you just blindly do the formal calculations:

-{5 \over 2} \int^2_0 x^2y|^{x=\sqrt{1-y^2}}_{x=0}  dy = -{5 \over 2} \int^2_0 y-y^3 dy = -{5 \over 2}(2-4) = 5 which is what the text has as “the answer.”

But come on. We took a function that was negative in the first quadrant, integrated it entirely in the first quadrant (in standard order) and ended up with a positive number??? I don’t think so!

Indeed, if we perform \int^2_0 \int^1_0 -5xy dxdy =-5 which is far more believable.

So, we KNOW something is wrong. Now let’s attempt to sketch the region first:

Oops! Note: if we just used the quarter circle boundary we obtain

\int^1_0 \int^{x=\sqrt{1-y^2}}_{x=0} -5xy dxdy = -{5 \over 8}

The 3-dimensional situation: we are limited by the graph of the function, the cylinder x^2+y^2 =1 and the planes y=0, x =0 ; the plane y=2 is outside of this cylinder. (The red is the graph of z = -5xy .)

Now think about what the “formal calculation” really calculated and wonder if it was just a coincidence that we got the absolute value of the integral taken over the rectangle 0 \leq x \leq 1, 0 \leq y \leq 2
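A crude numerical check makes the sign problem obvious. Here is a sketch (the grid resolution, helper name, and lambdas are my own choices, not from the text): a midpoint Riemann sum over the actual quarter-disk region and over the rectangle 0 \leq x \leq 1, 0 \leq y \leq 2 .

```python
def double_sum(f, region, n=800):
    """Midpoint Riemann sum of f over [0,1] x [0,2], keeping only grid points inside region."""
    hx, hy = 1.0 / n, 2.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * hx
        for j in range(n):
            y = (j + 0.5) * hy
            if region(x, y):
                total += f(x, y) * hx * hy
    return total

f = lambda x, y: -5.0 * x * y
I_quarter = double_sum(f, lambda x, y: x * x + y * y <= 1.0)  # the actual region: quarter disk
I_rect    = double_sum(f, lambda x, y: x <= 1.0)              # the rectangle [0,1] x [0,2]
```

Both numbers come out negative, as they must for a negative integrand on a first-quadrant region: roughly -5/8 for the quarter disk and -5 for the rectangle; neither is the +5 the formal calculation produced.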

October 7, 2021

A “weird” implicit graph

Filed under: calculus, implicit differentiation, pedagogy — oldgote @ 12:46 am

I was preparing some implicit differentiation exercises and decided to give this one:

If sin^2(y) + cos^2(x) =1 find {dy \over dx} . That is fairly straightforward, no? But there is a bit more here than meets the eye, as I quickly found out. I graphed this on Desmos and:

What in the world? Then I pondered for a minute or two and then it hit me:

sin^2(y) = 1-cos^2(x) \rightarrow sin^2(y) = sin^2(x) \rightarrow sin(y) = \pm sin(x) \rightarrow y = \pm x + k \pi , which leads to families of lines with either slope 1 or slope -1 and y-intercepts at integer multiples of \pi

Now, just blindly doing the problem we get 2sin(x)cos(x) = 2 {dy \over dx} cos(y)sin(y) which leads to: {sin(x)cos(x) \over sin(y)cos(y)} = {dy \over dx} = \pm {\sqrt{1-cos^2(y)} \sqrt{1-sin^2(x)} \over \sqrt{1-cos^2(y)} \sqrt{1-sin^2(x)}}  = \pm 1 by both the original equation and the circle identity.
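A quick numerical sanity check (my own construction; the sample point x_0 = 0.7 is arbitrary, chosen to avoid the zeros of sin and cos ): points on the lines y = \pm x + k\pi do satisfy the equation, and the implicit-differentiation formula evaluates to \pm 1 there.

```python
import math

def F(x, y):
    """Left-hand side of the implicit equation sin^2(y) + cos^2(x)."""
    return math.sin(y) ** 2 + math.cos(x) ** 2

def dydx(x, y):
    """From 2 sin(x)cos(x) = 2 (dy/dx) sin(y)cos(y)."""
    return (math.sin(x) * math.cos(x)) / (math.sin(y) * math.cos(y))

x0 = 0.7  # any point where sin(y)cos(y) != 0
on_curve = [abs(F(x0, s * x0 + k * math.pi) - 1.0)
            for k in range(-2, 3) for s in (1, -1)]
slopes = [dydx(x0, s * x0 + k * math.pi)
          for k in range(-2, 3) for s in (1, -1)]
```

Every tested point lies on the curve to machine precision, and every slope is +1 or -1, matching the family of lines Desmos draws.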

May 21, 2021

Introduction to infinite series for inexperienced calculus teachers

Filed under: calculus, mathematics education, pedagogy, Power Series, sequences, series — oldgote @ 1:26 pm

Let me start by saying what this is NOT: this is not an introduction for calculus students (too steep), nor is it intended for experienced calculus teachers. Nor is this a “you should teach it THIS way” or “introduce the concepts in THIS order or emphasize THESE topics”; that is for the individual teacher to decide.

Rather, this is a quick overview to help the new teacher (or for the teacher who has not taught it in a long time) decide for themselves how to go about it.

And yes, I’ll be giving a lot of opinions; disagree if you like.

What series will be used for.

Of course, infinite series have applications in probability theory (discrete density functions, expectation and higher moment values of discrete random variables), financial mathematics (perpetuities), etc. and these are great reasons to learn about them. But in calculus, these tend to be background material for power series.

Power series: \sum^{\infty}_{k=0} a_k (x-c)^k . The most important thing is to determine the open interval of absolute convergence; that is, the interval on which \sum^{\infty}_{k=0} |a_k (x-c)^k | converges.

We teach that these regions of convergence are *always* symmetric about x = c : convergence holds at x = c only, on some open interval (c-\delta, c+ \delta) , or on the whole real line. Side note: this is an interesting place to point out the influence that the calculus of complex variables has on real variable calculus! These open intervals are the most important aspect, as one can prove that one can differentiate and integrate said series “term by term” on the open interval of absolute convergence; sometimes one can extend the results to the boundary of the interval.

Therefore, if time is limited, I tend to focus on material more relevant for series that are absolutely convergent, though there are some interesting (and fun) things one can do for a series which is conditionally convergent (convergent, but not absolutely convergent; e. g. \sum^{\infty}_{k=1} (-1)^{k+1} {1 \over k} ).

Important principles: I think it is a good idea to first deal with geometric series and then series with positive terms…make that “non-negative” terms.

Geometric series: \sum ^{\infty}_{k =0} x^k ; here we see that for x \neq 1 , \sum ^{n}_{k =0} x^k= {1-x^{n+1} \over 1-x } , and the sum equals n+1 for x = 1 . To show this, do the old “shifted sum” addition: S = 1 + x + x^2 + ...+x^n , xS = x+x^2 + ...+x^{n+1} , then subtract: S-xS = (1-x)S = 1-x^{n+1} , as most of the terms cancel in the subtraction.

Now to show the geometric series converges (convergence being the standard kind: with S_n = \sum^n_{k = 0} c_k the “n’th partial sum,” the series \sum^{\infty}_{k = 0} c_k converges if and only if the sequence of partial sums S_n converges; yes, there are other types of convergence):

Now that we’ve established that for the geometric series S_n =  {1-x^{n+1} \over 1-x } , we get convergence exactly when |x^{n+1}| goes to zero, which happens if and only if |x| < 1 .
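The shifted-sum formula is easy to check by brute force; here is a minimal sketch (function name and the sample values of x and n are mine).

```python
def geom_partial(x, n):
    """Direct computation of S_n = 1 + x + ... + x^n."""
    return sum(x ** k for k in range(n + 1))

# formula S_n = (1 - x^(n+1)) / (1 - x) vs. the direct sum, for x != 1
checks = [abs(geom_partial(x, n) - (1 - x ** (n + 1)) / (1 - x))
          for x in (0.5, -0.5, 2.0, 0.9) for n in (3, 10, 25)]

# for |x| < 1, S_n -> 1/(1-x); e.g. x = 0.5 gives limit 2
limit_gap = abs(geom_partial(0.5, 60) - 1.0 / (1.0 - 0.5))
```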

Why geometric series: two of the most common series tests (root and ratio tests) involve a comparison to a geometric series. Also, the geometric series concept is used both in the theory of improper integrals and in measure theory (e. g., showing that the rational numbers have measure zero).

Series of non-negative terms. For now, we’ll assume that \sum a_k has all a_k \geq 0 (suppressing the indices).

Main principle: though most texts talk about the various tests, I believe that most of the tests really rest on three key principles, two of which are the geometric series and the following result about sequences of positive numbers:

Key sequence result: every bounded, monotone increasing sequence of positive numbers converges to its least upper bound.

True: many calculus texts don’t do that much with the least upper bound concept, but I feel it is intuitive enough to at least mention. If the least upper bound is, say, b , then if a_n is the sequence in question, for any small positive \delta there has to be some N such that a_N > b-\delta . Then because a_n is monotone increasing, b \geq a_{m} > b-\delta for all m > N

The third key principle is “common sense”: if \sum c_k converges (standard convergence) then c_k \rightarrow 0 as a sequence. This is pretty clear if the c_k are non-negative; the idea is that the sequence of partial sums S_n cannot converge to a limit unless |S_{n+1} -S_{n}| = |c_{n+1}| becomes arbitrarily small. Of course, this is true even if the terms are not all positive.

Secondary results: I think that the next results are “second order” results: the main tests depend on these, and these depend on the three key principles we just discussed.

The first of these secondary results is the direct comparison test for series of non-negative terms:

Direct comparison test

If 0< c_n \leq b_n  and \sum b_n converges, then so does \sum c_n . If \sum c_n diverges, then so does \sum b_n .

The proof is basically the “bounded monotone sequence” principle applied to the partial sums. I like to call it “if you are taller than an NBA center then you are tall” principle.

Evidently, some see this result as a “just get to something else” result, but it is extremely useful; one can apply this to show that the exponential of a square matrix is defined; it is the principle behind the Weierstrass M-test, etc. Do not underestimate this test!

Absolute convergence: this is the most important kind of convergence for power series as this is the type of convergence we will have on an open interval. A series is absolutely convergent if \sum |c_k| converges. Now, of course, absolute convergence implies convergence:

Note 0 \leq |c_k| -c_k \leq 2|c_k| , and if \sum |c_k| converges, then \sum (|c_k|-c_k) converges by direct comparison. Now note c_k = |c_k|-(|c_k| -c_k) , so \sum c_k = \sum |c_k| -\sum (|c_k|-c_k ) is the difference of two convergent series and therefore converges.

Integral test This is an important test for convergence at a point. This test assumes that f is a non-negative, non-increasing function on some [1, \infty) (that is, a >b \rightarrow f(a) \leq f(b) ). Then \sum f(n) converges if and only if \int_1^{\infty} f(x)dx converges as an improper integral.

Proof sketch: if the integral converges, then \sum_{n=2} f(n) is a right endpoint Riemann sum that underestimates \int_1^{\infty} f(x)dx , so the sequence of partial sums is an increasing, bounded sequence and hence converges. Conversely, if the sum converges, note that \sum_{n=1} f(n) is a left endpoint sum that overestimates the integral, so the integral is a limit of a bounded, increasing sequence and therefore converges.


Note: we need the hypothesis that f is non-increasing. Example: the function f(x) = \begin{cases}  x , & \text{ if } x \notin \{1, 2, 3,...\} \\ 0, & \text{ otherwise} \end{cases} certainly has \sum f(n) converging but \int^{\infty}_{1} f(x) dx diverging.

Going the other way, defining f(x) = \begin{cases}  2^n , & \text{ if }  x \in [n, n+2^{-2n}] \\0, & \text{ otherwise} \end{cases} gives an unbounded function with divergent sum \sum_{n=1} 2^n , but the integral converges to \sum_{n=1} 2^{-n} =1 . The “boxes” get taller and skinnier.

Note: the above shows the integral and sum starting at 0; same principle though.

Now wait a minute: we haven’t really gone over how students will do most of their homework and exam problems. We’ve covered none of these: p-test, limit comparison test, ratio test, root test. Ok, logically, we have but not practically.

Let’s remedy that. First, start with the “point convergence” tests.

p-test. This says that \sum {1 \over k^p} converges if p> 1 and diverges otherwise. Proof: Integral test.
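The contrast between p=1 and p=2 shows up quickly in partial sums; here is a small sketch (the cutoff N and the use of the Euler-Mascheroni constant \approx 0.5772 are my own additions).

```python
import math

N = 100000
# p = 1: the harmonic partial sums grow like ln(N) + 0.5772... (no limit)
H = sum(1.0 / k for k in range(1, N + 1))
# p = 2: the partial sums settle down (to pi^2/6, in fact)
P = sum(1.0 / k ** 2 for k in range(1, N + 1))
```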

Limit comparison test Given two series of positive terms: \sum b_k and \sum c_k

Suppose lim_{k \rightarrow \infty} {b_k \over c_k} = L

If \sum c_k converges and 0 \leq L < \infty then so does \sum b_k .

If \sum c_k diverges and 0 < L \leq \infty then so does \sum b_k

I’ll show the “converge” part of the proof for L > 0 : choose \epsilon = L , then N such that n > N \rightarrow  {b_n \over c_n } < 2L . This means \sum_{k=N} b_k \leq 2L \sum_{k=N} c_k and we get convergence by direct comparison. (If L = 0 , choose \epsilon = 1 instead.) See how useful that test is?

But note what is going on: it really isn’t necessary for lim_{k \rightarrow \infty} {b_k \over c_k} to exist; for the convergence case it is only necessary that there be some M for which {b_k \over c_k} < M eventually; if one is familiar with the limit superior (“limsup”), that is enough to make the test work.

We will see this again.

Why limit comparison is used: Something like \sum {1 \over 4k^5-2k^2-14} clearly converges, but nailing down the proof with direct comparison can be hard. But a limit comparison with \sum {1 \over k^5} is pretty easy.
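The numbers bear this out; a short sketch (starting the sum at k=2 is my choice, since the k=1 denominator 4-2-14 is negative and the statement is about the tail).

```python
ks = range(2, 5001)  # k = 1 gives a negative denominator, so start the tail at 2
b = [1.0 / (4 * k ** 5 - 2 * k ** 2 - 14) for k in ks]
c = [1.0 / k ** 5 for k in ks]

tail_ratio = b[-1] / c[-1]  # b_k / c_k -> 1/4 as k grows
partial_b = sum(b)
partial_c = sum(c)          # b_k < c_k for k >= 2, so partial_b < partial_c
```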

Ratio test this test is most commonly used when the series has powers and/or factorials in it. Basically: given \sum c_n , consider lim_{k \rightarrow \infty} {c_{k+1} \over c_{k}} = L (if the limit exists; if it doesn’t, stay tuned).

If L < 1 the series converges. If L > 1 the series diverges. If L = 1 the test is inconclusive.

Note: if it turns out that there exists some N >0 such that for all n > N we have {c_{n+1} \over c_n } < \gamma < 1 , then the series converges (we can use the limsup concept here as well).

Why this works: suppose there exists some N >0 such that for all n > N we have {c_{n+1} \over c_n } < \gamma < 1 Then write \sum_{k=n} c_k = c_n + c_{n+1} + c_{n+2} + ....

now factor out a c_n to obtain c_n (1 + {c_{n+1} \over c_n} + {c_{n+2} \over c_n} + {c_{n+3} \over c_{n}} +....)

Now multiply the terms by 1 in a clever way:

c_n (1 + {c_{n+1} \over c_n} + {c_{n+2} \over c_{n+1}}{c_{n+1} \over c_n} + {c_{n+3} \over c_{n+2}}  {c_{n+2} \over c_{n+1}}  {c_{n+1} \over c_{n}}   +....) See where this is going: each ratio is less than \gamma so we have:

\sum_{k=n} c_k \leq c_n \sum_{j=0} (\gamma)^j which is a convergent geometric series.

See: there is geometric series and the direct comparison test, again.
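For a concrete instance of the ratio test, here is a sketch on the familiar series \sum x^k / k! (the choice x = 3 and the 40-term cutoff are mine): the ratios are x/(k+1) , which drop below any \gamma < 1 and in fact go to 0.

```python
import math

x = 3.0
terms = [x ** k / math.factorial(k) for k in range(40)]
ratios = [terms[k + 1] / terms[k] for k in range(len(terms) - 1)]  # each equals x/(k+1)
partial = sum(terms)  # should be extremely close to e^3
```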

Root Test No, this is NOT the same as the ratio test. In fact, it is a bit “stronger” than the ratio test in that the root test will work for anything the ratio test works for, but there are some series that the root test works for that the ratio test comes up empty.

I’ll state the “lim sup” version of the root test: if there exists some N such that, for all n>N , we have (c_n)^{1 \over n} < \gamma < 1 , then the series converges (exercise: find the “divergence version”).

As before: if the condition is met, \sum_{k=n} c_k \leq \sum_{k=n} \gamma^k , so the original series converges by direct comparison.

Now as far as my previous remark about the ratio test: Consider the series:

1 + ({1 \over 3}) + ({2 \over 3})^2 + ({1 \over 3})^3 + ({2 \over 3})^4 +...({1 \over 3})^{2k-1} +({2 \over 3})^{2k} ...

Yes, this series is bounded by the convergent geometric series with r = {2 \over 3} and therefore converges by direct comparison. And the limsup version of the root test works as well.

But the ratio test is a disaster, as {({2 \over 3})^{2k}  \over  ({1 \over 3})^{2k-1} } ={2^{2k} \over 3 } , which is unbounded, while {({1 \over 3})^{2k+1}  \over  ({2 \over 3})^{2k} }  ={1 \over 3 \cdot 2^{2k} } .
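One can watch both tests on this series numerically; a sketch (dropping the leading 1, which does not affect convergence; the 30-block cutoff is mine). The consecutive ratios blow up, while the n-th roots of the terms alternate between exactly 1/3 and 2/3, safely below 1.

```python
terms = []
for k in range(1, 30):
    terms.append((1.0 / 3.0) ** (2 * k - 1))  # odd-index terms (1/3)^(2k-1)
    terms.append((2.0 / 3.0) ** (2 * k))      # even-index terms (2/3)^(2k)

ratios = [terms[i + 1] / terms[i] for i in range(len(terms) - 1)]
roots = [t ** (1.0 / (i + 1)) for i, t in enumerate(terms)]  # (c_n)^(1/n), n = i + 1
```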

What about non-absolute convergence (aka “conditional convergence”)

Series like \sum_{k=1} (-1)^{k+1} {1 \over k} converge but do NOT converge absolutely (p-test). On one hand, such series are a LOT of fun..but the convergence is very slow and unstable, so one might say that these series are not as important as series that converge absolutely. But there is a lot of interesting mathematics to be had here.

So, let’s chat about these a bit.

We say \sum c_k is conditionally convergent if the series converges but \sum |c_k| diverges.

One elementary tool for dealing with these is the alternating series test:

for this, let c_k >0 and for all k, c_{k+1} < c_k .

Then \sum_{k=1} (-1)^{k+1} c_k converges if and only if c_k \rightarrow 0 as a sequence.

That the sequence of terms goes to zero is necessary. That it is sufficient in this alternating case: first note that the partial sums are bounded above by c_1 (as the magnitudes get steadily smaller) and below by c_1 - c_2 (same reason). Note also that S_{2k+2} = S_{2k} +c_{2k+1} - c_{2k+2} > S_{2k} , so the partial sums of even index form an increasing bounded sequence and therefore converge to some limit, say, L . But S_{2k+1} = S_{2k} + c_{2k+1} and c_{2k+1} \rightarrow 0 , so by a routine “epsilon-N” argument the odd partial sums converge to L as well.
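The squeeze is easy to see numerically on the alternating harmonic series (a sketch; the 2000-term cutoff is mine, and I use the known limit ln 2 as a reference point): the even-index partial sums climb, the odd-index ones descend, and both close in on the same value.

```python
import math

S = 0.0
evens, odds = [], []
for k in range(1, 2001):
    S += (-1) ** (k + 1) / k
    (evens if k % 2 == 0 else odds).append(S)

even_increasing = all(a < b for a, b in zip(evens, evens[1:]))
odd_decreasing = all(a > b for a, b in zip(odds, odds[1:]))
```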

Of course, there are conditionally convergent series that are NOT alternating. And conditionally convergent series have some interesting properties.

One of the most interesting properties is that such series can be “rearranged” (a “derangement” in Knopp’s book) to converge to any number of your choice, or to diverge to infinity, or to have no limit at all.

Here is an outline of the arguments:

To rearrange a series to converge to L , start with the positive terms (whose sum must diverge, as the series is conditionally convergent) and add them up until they exceed L ; stop just after L is exceeded. Call that partial sum u_1 . Note: this could be 0 terms. Now use the negative terms to go to the left of L , stopping just after passing it. Call that partial sum l_1 . Then move to the right, past L again, with the positive terms; note that the overshoot is smaller, as the terms are smaller. This is u_2 . Then go back again to get l_2 to the left of L . Repeat.

Note that at every stage, every partial sum after the first pass over L is between some u_i and l_i , and the u_i, l_i bracket L with a distance that shrinks to become arbitrarily small.

To rearrange a series to diverge to infinity: Add the positive terms to exceed 1. Add a negative term. Then add the terms to exceed 2. Add a negative term. Repeat this for each positive integer n .

Have fun with this; you can have the partial sums end up all over the place.
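The greedy rearrangement scheme above is a few lines of code; a sketch on the alternating harmonic series (the target L = 1.5 and the step count are my own choices): add positive terms 1, 1/3, 1/5, ... while at or below the target, negative terms -1/2, -1/4, ... while above it.

```python
L = 1.5  # any real target works
pos = (1.0 / (2 * k - 1) for k in range(1, 10 ** 7))  # positive terms of the series
neg = (-1.0 / (2 * k) for k in range(1, 10 ** 7))     # negative terms of the series

S = 0.0
for _ in range(200000):
    S += next(pos) if S <= L else next(neg)
# the overshoot at each crossing is at most the last term used, which shrinks to 0
```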

That’s it for now; I might do power series later.

August 29, 2020

Commentary: life with Webassign

Filed under: editorial, pedagogy — oldgote @ 5:45 pm

Since I am teaching online, I decided to use Webassign for homework.
Well, of course, some students have had trouble with it..in particular with the graphing application.
I am not saying that their graphing tool is hard to learn; in fact, I might play with it myself. BUT, in this particular class, I want my students to focus on learning the material and NOT have to spend hours getting good with a graphing tool. For graphing: Desmos is outstanding.

Normally, I am not sympathetic to student complaints or frustrations, but here, I can see it.

July 12, 2020

Logarithmic differentiation: do we not care about domains anymore?

Filed under: calculus, derivatives, elementary mathematics, pedagogy — collegemathteaching @ 11:29 pm

The introduction is for a student who might not have seen logarithmic differentiation before: (and yes, this technique is extensively used..for example it is used in the “maximum likelihood function” calculation frequently encountered in statistics)

Suppose you are given, say, f(x) =sin(x)e^x(x-2)^3(x+1) and you are told to calculate the derivative.

Calculus texts often offer the technique of logarithmic differentiation: write ln(f(x)) = ln(sin(x)e^x(x-2)^3(x+1)) = ln(sin(x)) + x + 3ln(x-2) + ln(x+1)
Now differentiate both sides: (ln(f(x)))' = \frac{f'(x)}{f(x)}  = \frac{cos(x)}{sin(x)} + 1 + \frac{3}{x-2} + {1 \over x+1}

Now multiply both sides by f(x) to obtain

f'(x) = f(x)(\frac{cos(x)}{sin(x)} + 1 + \frac{3}{x-2} + {1 \over x+1}) = sin(x)e^x(x-2)^3(x+1)(\frac{cos(x)}{sin(x)} + 1 + \frac{3}{x-2} + {1 \over x+1})

And this is correct…sort of. Why I say sort of: what happens, at say, x = 0 ? The derivative certainly exists there but what about that second factor? Yes, the sin(x) gets cancelled out by the first factor, but AS WRITTEN, there is an oh-so-subtle problem with domains.

You can only substitute x \in \{ k \pi : k \in \mathbb{Z} \} after simplifying..which one might see as a limit process.

But let’s stop and take a closer look at the whole process: we started with f(x) = g_1(x) g_2(x) ...g_n(x) and then took the log of both sides. Where is the log defined? And when does ln(ab) = ln(a) + ln(b) ? You got it: this only works when a > 0, b > 0 .

So, on the face of it, ln(g_1 (x) g_2(x) ...g_n(x)) = ln(g_1(x) ) + ln(g_2(x) ) + ...ln(g_n(x)) is justified only when each g_i(x) > 0 .

Why can we get away with ignoring all of this, at least in this case?

Well, here is why:

1. If f(x) \neq 0 is a differentiable function then \frac{d}{dx} ln(|f(x)|) = \frac{f'(x)}{f(x)}
Yes, this is covered in the derivation of \int {dx \over x} material but here goes: write

|f(x)| =   \begin{cases}      f(x) ,& \text{if } f(x) > 0 \\      -f(x),              & \text{otherwise}  \end{cases}

Now if f(x) > 0 we get { d \over dx} ln(f(x)) = {f'(x) \over f(x) } as usual. If f(x) < 0 then |f(x)| = -f(x) , |f(x)|' = (-f(x))' = -f'(x) , and so in either case:

\frac{d}{dx} ln(|f(x)|) = \frac{f'(x)}{f(x)} as required.

THAT is the workaround for calculating {d \over dx } ln(g_1(x)g_2(x)..g_n(x)) where g_1(x)g_2(x)..g_n(x) \neq 0 : just calculate {d \over dx } ln(|g_1(x)g_2(x)..g_n(x)|) , noting that |g_1(x)g_2(x)..g_n(x)| = |g_1(x)| |g_2(x)|...|g_n(x)|
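The absolute-value workaround is easy to check numerically; a sketch (the test function, the evaluation point x_0 = 0 where f(0) = -8 < 0 , and the step size are my own choices): a central difference of ln|f| should match f'/f even where f is negative.

```python
import math

f  = lambda x: (x - 2) ** 3 * (x + 1)                     # f(0) = -8 < 0
fp = lambda x: 3 * (x - 2) ** 2 * (x + 1) + (x - 2) ** 3  # product rule derivative

x0, h = 0.0, 1e-6
numeric = (math.log(abs(f(x0 + h))) - math.log(abs(f(x0 - h)))) / (2 * h)
exact = fp(x0) / f(x0)  # = 4 / (-8) = -0.5
```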

Yay! We are almost done! But, what about the cases where at least some of the factors are zero at, say x= x_0 ?

Here, we have to bite the bullet and admit that we cannot take the log of the product where any of the factors have a zero, at that point. But this is what we can prove:

Given a product g_1(x) g_2(x)...g_n(x) of differentiable functions with g_1(a) = g_2(a) = ... = g_k(a) = 0 , k \leq n , then
(g_1(a)g_2(a)...g_n(a))' = lim_{x \rightarrow a}  g_1(x)g_2(x)..g_n(x) ({g_1'(x) \over g_1(x)} + {g_2'(x) \over g_2(x)} + ...{g_n'(x) \over g_n(x)})

This works out to what we want by cancellation of factors.

Here is one way to proceed with the proof:

1. Suppose f, g are differentiable and f(a) = g(a) = 0 . Then (fg)'(a) = f'(a)g(a) + f(a)g'(a) = 0 and lim_{x \rightarrow a} f(x)g(x)({f'(x) \over f(x)} + {g'(x) \over g(x)}) = 0
2. Now suppose f, g are differentiable and f(a) =0 ,  g(a) \neq 0 . Then (fg)'(a) = f'(a)g(a) + f(a)g'(a) = f'(a)g(a) and lim_{x \rightarrow a} f(x)g(x)({f'(x) \over f(x)} + {g'(x) \over g(x)}) = f'(a)g(a)
3. Now apply the above to the product g_1(x) g_2(x)...g_n(x) of differentiable functions with g_1(a) = g_2(a) = ... = g_k(a) = 0 , k \leq n .
If k = n then (g_1(a)g_2(a)...g_n(a))' = lim_{x \rightarrow a}  g_1(x)g_2(x)..g_n(x) ({g_1'(x) \over g_1(x)} + {g_2'(x) \over g_2(x)} + ...{g_n'(x) \over g_n(x)}) =0 by inductive application of 1.

If k < n then let f = g_1...g_k , g = g_{k+1} ..g_n as in 2. Then by 2, we have (fg)' =  f'(a)g(a) . Now this quantity is zero unless k = 1 and f'(a) \neq 0 . But in this case note that lim_{x \rightarrow a} g_1(x)g_2(x)...g_n(x)({g_1'(x) \over g_1(x)} + {g_2'(x) \over g_2(x)} + ...+ {g_n'(x) \over g_n(x)})  = lim_{x \rightarrow a} g_2(x)...g_n(x)(g_1'(x)) =g(a)g_1'(a)

So there it is. Yes, it works ..with appropriate precautions.

July 3, 2020

What will happen this fall?

Filed under: COVID19, pedagogy — Tags: — collegemathteaching @ 12:22 am

Yes, I should be doing math but, well, I can tell you what I’ve done since online and many of my summer duties have ended:

1. I’ve purchased some “stuff for hybrid learning” equipment, to wit: drawing board and a document camera.
2. I also have a gallon of hand sanitizer, face shield and 100 masks to hand out to students who “forget” to wear one to class.

Yes, I know, my university announced that we will start in person, go to Thanksgiving and then finish remotely. And exactly how far we get remains to be seen; I know that USC just announced that they are going online from the start.

That might get the dominoes falling.

But here are my plans for my two “hybrid” classes (4 meetings a week, but due to social distancing limits, half the class will come M, Th and half W, F):
a. Stuff in the lessons is mandatory; students are responsible for notes, quizzes, assignments
b. But, I will NOT require in person attendance, ever. All notes will be posted on line (I am working on them right now), all class room sessions will be put on video (maybe even live streamed) and
c. All testing, quizzes, etc. will be online. Yes, they will be open book; that is really the only way to be fair. Hence I’ll have to be creative with my exams.

As far as my actuarial science class: similar, though this class should be able to meet social distancing requirements. My not requiring in person attendance is for the students (e. g. what if they are worried, have some sort of medical condition, etc.)

I did attend a two week session on online learning and have some ideas on how to upload videos and the like. I’ll do some experimenting beforehand.

And yes, I have two papers to finish; I hope that this note gets me inspired to get back to it.

December 21, 2018

Over-scheduling of senior faculty and lower division courses: how important is course prep?

It seems as if the time faculty are expected to spend on administrative tasks is growing exponentially. In our case, we’ve had some administrative upheaval, with new people coming in to “clean things up,” thereby launching new task forces, creating more committees, etc. And this is a time suck; often, more senior faculty more or less go through the motions when it comes to course preparation for the elementary courses (say: the calculus sequence, or elementary differential equations).

And so:

1. Does this harm the course quality and if so..
2. Is there any effect on the students?

I should first explain why I am thinking about this; I’ll give some specific examples from my department.

1. Some time ago, a faculty member gave a seminar in which he gave an “elementary” proof of why \int e^{x^2} dx is non-elementary. Ok, this proof took 40-50 minutes to get through. But at the end, the professor giving the seminar exclaimed: “isn’t this lovely?” at which another senior member (one who didn’t have a Ph. D. but had been around since the 1960’s) asked “why are you happy that yet again, we haven’t had success?” A proof had just been given that \int e^{x^2} dx cannot be expressed in terms of the usual functions by the standard field operations; the whole point had eluded him. And remember, this person was in our calculus teaching line up.

2. Another time, in a less formal setting, I mentioned that I had told my class, in passing, that one can compute an improper integral (over the real line) of an unbounded function, and that such a function can have a Laplace transform. A junior faculty member who had just taught differential equations tried to inform me that only functions of exponential order can have a Laplace transform; I replied that, while many texts restrict Laplace transforms to such functions, that is not mathematically necessary (though it is a reasonable restriction for an applied first course). (Briefly: imagine a function whose graph consists of a spike of height e^{n^2} at each integer point, over an interval of width \frac{1}{2^{2n} e^{2n^2}} , and is zero elsewhere.)

3. In still another case, I was talking about errors in answer keys and how, when I taught courses that I wasn’t qualified to teach (e. g. actuarial science course), it was tough for me to confidently determine when the answer key was wrong. A senior, still active research faculty member said that he found errors in an answer key..that in some cases..the interval of absolute convergence for some power series was given as a closed interval.

I was a bit taken aback; I gently reminded him that \sum \frac{x^k}{k^2} was such a series.

I know what he was confused by; there is a theorem that says that if \sum a_k x^k converges (either conditionally or absolutely) for some x=x_1 , then the series converges absolutely for all x_0 with |x_0| < |x_1| . The proof isn’t hard: convergence of \sum a_k x_1^k means that eventually |a_k x_1^k| < M for some positive M ; then for the “tail end” of the series, use |\frac{x_0}{x_1}| < r < 1 so that |a_k (x_0)^k| = |a_k x_1^k (\frac{x_0}{x_1})^k| < r^k M , and compare to a convergent geometric series. Mind you, he was teaching series at the time..and yes, he is a senior, research active faculty member with years and years of experience; he mentored me so many years ago.
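The counterexample series \sum \frac{x^k}{k^2} mentioned above really does converge at both endpoints, and one can watch it happen (a sketch; the 20000-term cutoff is mine, and I compare against the known values \pi^2/6 and -\pi^2/12 at x = 1 and x = -1 ).

```python
import math

N = 20000
P_plus  = sum(1.0 / k ** 2 for k in range(1, N + 1))          # x = 1:  sum 1/k^2
P_minus = sum((-1.0) ** k / k ** 2 for k in range(1, N + 1))  # x = -1: sum (-1)^k/k^2
```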

4. Also…one time, a sharp young faculty member asked around: “are there any real functions that are differentiable at exactly one point?” (Yes: try f(x) = x^2 if x is rational, x^3 if x is irrational; it is differentiable only at x = 0 .)

5. And yes, one time I had forgotten that a function can be differentiable but not be C^1 (try: f(x) = x^2 sin (\frac{1}{x}) for x \neq 0 , f(0) = 0 ; it is differentiable at x = 0 but f' is not continuous there).

What is the point of all of this? Even smart, active mathematicians forget stuff if they haven’t reviewed it in a while…even elementary stuff. We need time to review our courses! But…does this actually affect the students? I am almost sure that at non-elite universities such as ours, the answer is “probably not in any way that can be measured.”

Think about it. Imagine the following statements in a differential equations course:

1. “Laplace transforms exist only for functions of exponential order (false)”.
2. “We will restrict our study of Laplace transforms to functions of exponential order.”
3. “We will restrict our study of Laplace transforms to functions of exponential order but this is not mathematically necessary.”

Would students really recognize the difference between these three statements?

Yes, making these statements, with confidence, requires quite a bit of difference in preparation time. And our deans and administrators might not see any value to allowing for such preparation time as it doesn’t show up in measures of performance.
