College Math Teaching

May 4, 2015

Hitting the bat with the ball….the vector calculus integral theorems….

Filed under: calculus, editorial, vector calculus — collegemathteaching @ 4:43 pm

When I was a small kid, my dad would play baseball with me. He’d pitch the ball and try to hit my bat with the ball so I could think I was actually hitting the ball.

Well, fast forward 50 years to my vector calculus final exam; we are covering the “big integral” theorems.

Yeah, I know; it is \int_{\partial \Omega} \sigma = \int_{\Omega} d \sigma but let’s just say that we aren’t up to differential forms as yet. 🙂

And so I am giving them classical Green’s Theorem, Stokes’ Theorem and Divergence Theorem problems….and everything in sight basically boils down to integrating a constant over a rectangle, box, sphere, ball or disk.

I am hitting their bats with the ball; I wonder how many will notice. 🙂


March 13, 2015

Moving from “young Turk” to “old f***”

Filed under: calculus, class room experiment, editorial, pedagogy — collegemathteaching @ 9:09 pm

Today, one of our hot “young” (meaning: new here) mathematicians came to me wanting to inquire about a course switch. He noted that his two-course load included two different courses (two preparations) and that I was teaching different sections of the same two courses…was I interested in doing a course swap so that he would have only one preparation (he is teaching 8 hours) and I’d only have two?

I said: “when I was your age, I minimized the number of preparations. But at my age, teaching two sections of the same low level course makes me want to bash my head against the wall.” That is, by my second lesson of the same course in the same day, I just want to be about anywhere else on campus; I have no interest, no enthusiasm, etc.

I specifically REQUESTED 3 preparations to keep myself from getting bored; that is what 24 years of teaching this stuff does to you.

Every so often, someone has the grand idea to REFORM the teaching of (whatever) and the “reformers” usually get at least a few departments to go along with it.

The common thing said is that it gets professors to reexamine their teaching of (whatever).

But I wonder if many try these things….just out of pure boredom. Seriously, read the buzzwords of the “reform paper” I linked to; there is really nothing new there.

January 23, 2015

Making a math professor happy…

Filed under: calculus, class room experiment, elementary mathematics — collegemathteaching @ 10:28 pm

Calculus III: we are talking about polar curves. I give the usual lesson about how to graph r = sin(2 \theta) and r = sin(3 \theta) and give the usual “if n is even, the graph of r = sin (n \theta) has 2n petals and if n is odd, it has n petals.”

Question: “does that mean it is impossible to have a graph with 6 petals then?” 🙂

Yes, one can have six petals if the petals are allowed to intersect; one attempt: r = |sin(3 \theta) | . But you aren’t going to get it without a trick of some sort.
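
The petal counts can be checked numerically (a sketch; the function name is made up, not from the post): sample the curve, find the local maxima of |r| (the petal tips), and count the distinct tip points, since a rose curve with odd n traces each petal twice.

```python
import math

def petal_tips(r, n_samples=100000):
    """Count the distinct petal tips of a polar curve r(theta) on [0, 2*pi).

    A tip is a local maximum of |r(theta)|; tips are converted to Cartesian
    points and de-duplicated (rose curves with odd n trace each petal twice).
    """
    ts = [2 * math.pi * k / n_samples for k in range(n_samples)]
    vals = [abs(r(t)) for t in ts]
    tips = set()
    for k in range(n_samples):
        prev, nxt = vals[k - 1], vals[(k + 1) % n_samples]
        # one detection per hump: strictly above the left neighbor,
        # at least as large as the right neighbor, and nonzero
        if vals[k] > prev and vals[k] >= nxt and vals[k] > 1e-6:
            x = r(ts[k]) * math.cos(ts[k])
            y = r(ts[k]) * math.sin(ts[k])
            tips.add((round(x, 3), round(y, 3)))
    return len(tips)
```

Running this gives 4 petals for r = sin(2 \theta) , 3 for r = sin(3 \theta) , and 6 for r = |sin(3 \theta)| , matching the trick above.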


January 9, 2015

Bad notation drove me nuts….(and still does)

Filed under: advanced mathematics, calculus, topology — collegemathteaching @ 8:36 pm

I remember one of my first classes in algebraic topology. The professor was talking about how to prove that \pi_1(S^1) = Z . For those who might be rusty: I am talking about the fundamental group of the circle, which is a group structure put on the set of continuous maps of the circle into the circle, where the maps all send a fixed “base point” to a fixed base point and two maps are equivalent if there is a homotopy (continuous deformation) between the two.

He remarked that he hoped “it was clear” that the circle was NOT simply connected.

That confused the heck out of me, because I had fallen into the trap of confusing the circle with a disk bounded by the circle!

Remember, for years, I had heard things like “the area of a circle is”…when in fact, the circle has area zero. The disk in the plane bounded by the circle has an area though.

So, when I teach, I try to point out bad or inconsistent notation. Example: sin^2(x) means (sin(x))^2 rather than sin(sin(x)) as the notation f^2 might suggest. But sin^{-1}(x) means arcsin(x) and NOT csc(x) = \frac{1}{sin(x)} . But…\frac{d^2 y}{dx^2} means \frac{d}{dx}(\frac{dy}{dx}) .
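
A quick numerical check makes the inconsistency concrete (my illustration; Python’s math module, like most libraries, reads sin^{-1} as the inverse function):

```python
import math

x = 0.5
# sin^{-1}(x) is the inverse function arcsin, NOT the reciprocal csc(x):
arcsin_val = math.asin(x)        # satisfies sin(arcsin_val) == x
csc_val = 1 / math.sin(x)        # a completely different number
# but sin^2(x) means (sin(x))^2, not the composition sin(sin(x)):
square_val = math.sin(x) ** 2
composed_val = math.sin(math.sin(x))
```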

And please, don’t even get me started about dx that appears in integrals. I remember a student asking me about that when we did “integration by substitution”: “we never used the dx for anything up until now!” he said…correctly.

What got me thinking about this
This blog describes many of the things that I am thinking about at the moment. Currently, I am thinking about “wild knots”, which are embeddings of the circle into 3 space which cannot be deformed (by a deformation of space) into a smooth embedding of the circle.

Here are two examples of knots that can’t be deformed into a smooth knot:

Now the term “knot” implies that an embedding is present; the space that is being embedded is a circle. Of course, one might confuse a particular embedding with the equivalence class of equivalent embeddings; some old time authors distinguished the two concepts. Most modern ones (myself included) don’t.

Now I am interested in knots that are formed by the embedding of two “arcs”, each of which is non-wildly embedded (not wild is called “tame”).

In the case of arcs, authors sometimes mean “the arc itself” and in other cases mean “the embedding of an arc” (e. g. “arcs in 3 space”); that is, the term “arc” can mean either “the underlying space” or “the embedding”. Yes, there are some arcs that are so pathologically embedded that there is no deformation of space that takes the arc to a smooth arc.

This will be one focus of my research in 2015: I hope to show that a knot that has one wild point (roughly speaking: one point that can never be assigned a tangent vector) that is the union of two tamely embedded arcs is never determined by its complement. That might sound like gibberish, but in 2014 I proved that a knot that is an infinite product of knots (which are converging to a single wild point) has a complement which is homeomorphic to the complement of a knot that is wild at ALL of its points.

Of course THINKING that I can prove something and proving it are two different things. I remember spending two years trying to prove something that was false (I published the counterexample) and, for part of my Ph.D. thesis, I attempted to prove something that turned out to be false; of course the counterexample came over 20 years after my attempt.

January 6, 2015

Quick Diversion: a rotating circle of dots within a circle..

Filed under: calculus — collegemathteaching @ 3:37 am

Since grading final exams, I’ve been travelling a bit. I am doing some admin duties but should have some time to do some research prior to….SEARCH COMMITTEE. That is such a time suck.

But here is a bit of fun:

Check out this video

Now here is a challenge (that I will take; feel free to beat me to it): find a set of equations that describes the motion of the centers of these disks.

My idea: I might start with a helix of the type x = cos(t), y = sin(t), z = t and then have this helix change its center as it “moves up”; perhaps something like x = 4cos(t) - cos(t), y = 4sin(t) - sin(t), z = t . Then intersect this with surfaces of the following type (cylindrical coordinates): r = \theta ; perhaps the points might be described by the intersection of the helix with these surfaces? I’ll have to check it out.
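
The intersection step is easy to prototype. Here is a minimal sketch (the helper names are made up), assuming the plain helix x = cos(t), y = sin(t), z = t and the half-planes \theta = \theta_0 of cylindrical coordinates; the moving-center curve and the r = \theta surfaces above remain speculation and are not implemented here.

```python
import math

def helix(t):
    # the plain helix x = cos(t), y = sin(t), z = t
    return (math.cos(t), math.sin(t), t)

def half_plane_intersections(theta0, k_max=5):
    """Points where the helix meets the half-plane theta = theta0.

    The angular coordinate of helix(t) is t (mod 2*pi), so the
    intersections occur exactly at t = theta0 + 2*pi*k.
    """
    return [helix(theta0 + 2 * math.pi * k) for k in range(k_max)]
```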

November 1, 2014

Ok Graduate Student, do you want a pure math Ph. D.???

Filed under: academia, calculus, editorial, research — collegemathteaching @ 2:19 am


This slide made me chuckle (click to see a larger version). But here is the point of it: it is very, very difficult to earn your living by researching in pure mathematics.

Is it a reasonable expectation for you?

Ask yourself this: look at your advisor. Is your advisor considerably smarter than you are, or even moderately smarter than you are? If so, then forget about earning your living as a research professor in pure math. It. Is. NOT. Going. To. Happen.

Yeah, you might get a post-doc. You might even manage to get one of those “tenure track with little hope for tenure” jobs at a D-I research university…maybe (perhaps unlikely?).

I’ve been on search committees. I’ve seen the letters for those who didn’t get tenure; often these folks had decent publication records but didn’t get large enough external grants.

It is brutal out there.

If you get a pure math Ph. D. and you aren’t your advisor’s intellectual equal, about your only hope for a tenured academic job is at the “teaching intensive” universities; basically you’ll spend the vast majority of your time attempting to teach calculus to students of very average ability; after all, most of the teaching load in mathematics is teaching service courses rather than majors courses.

It does have its charm at times, but after 20+ years, it gets very, very old. I’ll discuss how to alleviate the boredom in a responsible way in another post. (e. g., it is probably a bad idea to, say, spice it up by teaching integration via hyperbolic trig functions or to try to teach residue integrals).

So, ask yourself: is your passion research and discovery? Or, is it teaching average students? If it is the latter: well, go ahead and get that theoretical math Ph. D.; after all, there ARE jobs out there; we hired a couple of people last year and might hire some more in the next couple of years.

IF your passion is research and mathematical discovery and you aren’t your advisor’s intellectual equal, either switch to applied mathematics (more demand for such research) OR enhance your education with sellable skills such as computer programming/modeling, software engineering or perhaps picking up a masters in statistics. Make yourself more marketable to industry.

October 29, 2014

Hyperbolic Trig Functions and integration…

In college calculus courses, I’ve always wrestled with “how much to cover in the hyperbolic trig functions” section.

On one hand, the hyperbolic trig functions make some integrals much easier. On the other hand: well, it isn’t as if our classes are populated with the highest caliber student (I don’t teach at MIT); many struggle with the standard trig functions. There is only so much that the average young mind can absorb.

In case your memory is rusty:

cosh(x) =\frac{e^x + e^{-x}}{2}, sinh(x) = \frac{e^x -e^{-x}}{2} and then it is immediate that the standard “half/double angle” formulas hold; we do remember that \frac{d}{dx}cosh(x) = sinh(x), \frac{d}{dx}sinh(x) = cosh(x).

What is less immediate is the following: sinh^{-1}(x)  = ln(x+\sqrt{x^2+1}), cosh^{-1}(x) = ln(x + \sqrt{x^2 -1}) (x \ge 1).

Exercise: prove these formulas. Hint: if sinh(y) = x then e^{y} - 2x- e^{-y} =0 so multiply both sides by e^{y} to obtain e^{2y} -2x e^y - 1 =0 now use the quadratic formula to solve for e^y and keep in mind that e^y is positive.

For the other formula: same procedure, and remember that we are using the x \ge 0 branch of cosh(x) and that cosh(x) \ge 1 .
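
Both log formulas are easy to sanity-check numerically before assigning the exercise (a quick sketch; the function names are made up, and Python’s built-in math.asinh and math.acosh serve as the reference):

```python
import math

def asinh_log(x):
    """The formula sinh^{-1}(x) = ln(x + sqrt(x^2 + 1))."""
    return math.log(x + math.sqrt(x * x + 1))

def acosh_log(x):
    """The formula cosh^{-1}(x) = ln(x + sqrt(x^2 - 1)), valid for x >= 1."""
    return math.log(x + math.sqrt(x * x - 1))
```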

The following follows easily: \frac{d}{dx} sinh^{-1} (x) = \frac{1}{\sqrt{x^2 + 1}} (just set up sinh(y) = x and use implicit differentiation followed by noting cosh^2(y) -sinh^2(y) = 1 ) and \frac{d}{dx} cosh^{-1}(x) = \frac{1}{\sqrt{x^2-1}} (similar derivation).

Now, we are off and running.

Example: \int \sqrt{x^2 + 1} dx =

We can make the substitution x =sinh(u), dx = cosh(u) du and obtain \int cosh^2(u) du = \int \frac{1}{2} (cosh(2u) + 1)du = \frac{1}{4}sinh(2u) + \frac{1}{2} u + C . Now use sinh(2u) = 2 sinh(u)cosh(u) and we obtain:

\frac{1}{2}sinh(u)cosh(u) + \frac{u}{2} + C . The back substitution isn’t that hard if we recognize cosh(u) = \sqrt{sinh^2(u) + 1} so we have \frac{1}{2} sinh(u) \sqrt{sinh^2(u) + 1} + \frac{u}{2} + C . Back substitution is now easy:

\frac{1}{2} x \sqrt{x^2+1} + \frac{1}{2} ln(x + \sqrt{x^2 + 1}) + C . No integration by parts is required and the dreaded \int sec^3(x) dx integral is avoided. And there is no domain trouble here: since \sqrt{x^2+1} > |x| , the quantity x + \sqrt{x^2 + 1} is positive for every x , so this antiderivative is valid for negative values of x as well.
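
A quick numerical check of the final answer (a sketch; a symmetric difference quotient stands in for the derivative):

```python
import math

def F(x):
    # the antiderivative found above
    return 0.5 * x * math.sqrt(x * x + 1) + 0.5 * math.log(x + math.sqrt(x * x + 1))

def central_diff(f, x, h=1e-6):
    # symmetric difference quotient approximating f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)
```

Comparing central_diff(F, x) with \sqrt{x^2+1} at a few points, including negative x , confirms the computation.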

October 3, 2014

Gaps in my mathematics education

Filed under: calculus, editorial, elementary mathematics — collegemathteaching @ 1:19 pm

I’ve spoken about the many gaps in my mathematics education; I’ve written about a few. But in these cases, I was writing about the gaps at, say, the senior undergraduate to beginning graduate level.

I admit that I’ve enjoyed filling in some of these.

But, I also…have…elementary level gaps that I frequently overlook.

In my case: I never learned trigonometry all that well; I had forgotten about the laws of cosines and sines. And I had forgotten how to derive the following types of formulae: sin(A+B) = sin(A)cos(B) + cos(A)sin(B), cos(A+B) = cos(A)cos(B) - sin(A)sin(B) .
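
One standard way to recover both at once (not necessarily the derivation I reviewed) is Euler’s formula: expand e^{i(A+B)} = e^{iA}e^{iB} and compare parts.

```latex
e^{i(A+B)} = e^{iA} e^{iB}
           = (\cos A + i \sin A)(\cos B + i \sin B)
           = (\cos A \cos B - \sin A \sin B) + i(\sin A \cos B + \cos A \sin B)
```

Matching this against e^{i(A+B)} = \cos(A+B) + i\sin(A+B) , the real parts give the cosine formula and the imaginary parts give the sine formula.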

So, I spent a few minutes going over these old facts.


They aren’t hard but I am a bit surprised that I let my basic ignorance continue on this long.

October 2, 2014

ARGH!!! I got stuck at the board…

Filed under: calculus, elementary mathematics, pedagogy — collegemathteaching @ 5:51 pm

Related rate problem that required the “law of cosines”, which…is a trig rule that I never bothered to learn and couldn’t derive on the spot.

ARRRRGGGHHHH!!!!!!!!! (even after 20+ years, even AFTER preparing, things like this happen from time to time).

Now, of course, I won’t rest until I’ve learned those stupid rules. 🙂

I nailed the rest of them though.

Note: a student pulled out the manual and, given the diagram, finished it while I worked on another problem. He showed me the answer and I gave him a fist bump.
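
For the record, the law of cosines falls out of a one-line vector computation (a standard derivation, not necessarily the one in the manual): if the side of length c is opposite the angle C between the sides \vec{a} and \vec{b} , then

```latex
c^2 = \|\vec{a} - \vec{b}\|^2
    = (\vec{a} - \vec{b}) \cdot (\vec{a} - \vec{b})
    = \|\vec{a}\|^2 - 2\,\vec{a} \cdot \vec{b} + \|\vec{b}\|^2
    = a^2 + b^2 - 2ab \cos C
```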

October 1, 2014

Osgood’s uniqueness theorem for differential equations

I am teaching a numerical analysis class this semester and we just started the section on differential equations. I want them to understand when we can expect to have a solution and when a solution satisfying a given initial condition is going to be unique.

We had gone over the “existence” theorem, which basically says: given y' = f(x,y) and initial condition y(x_0) = y_0 where (x_0,y_0) \in int(R) , where R is some rectangle in the x,y plane, if f(x,y) is a continuous function over R , then we are guaranteed to have at least one solution to the differential equation, which is guaranteed to be valid so long as (x, y(x)) stays in R .

I might post a proof of this theorem later; however an outline of how a proof goes will be useful here. With no loss of generality, assume that x_0 = 0 and the rectangle has the lines x = -a, x = a as vertical boundaries. Let \phi_0(x) = y_0 + f(0, y_0)x , the line through (0, y_0) of slope f(0, y_0) . Now partition the interval [-a, a] into -a, -\frac{a}{2}, 0, \frac{a}{2}, a and create a polygonal path as follows: use slope f(0, y_0) at (0, y_0) , slope f(\frac{a}{2}, y_0 + \frac{a}{2}f(0, y_0)) at (\frac{a}{2}, y_0 +  \frac{a}{2}f(0, y_0)) and so on to the right; reverse this process going left. The idea: we are using Euler’s differential equation approximation method to obtain an initial piecewise linear approximation. Then do this again for step size \frac{a}{4} , then \frac{a}{8} , and so on.
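
The polygonal paths in this outline are exactly Euler’s method; a minimal sketch (the function name is made up), run with ever smaller steps to produce the family of approximations:

```python
import math

def euler_polygon(f, y0, a, n):
    """Vertices of the Euler polygonal approximation on [0, a] with
    step size a/n, starting from y(0) = y0, for the equation y' = f(x, y)."""
    h = a / n
    xs, ys = [0.0], [y0]
    for _ in range(n):
        # follow the slope field from the current vertex for one step
        ys.append(ys[-1] + h * f(xs[-1], ys[-1]))
        xs.append(xs[-1] + h)
    return xs, ys
```

For y' = y, y(0) = 1 the final vertex approaches e as the step size shrinks, as the Arzela-Ascoli argument predicts.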

In this way, we obtain an infinite family of continuous approximation curves. Because f(x,y) is continuous over R , it is also bounded, hence the curves have slopes whose magnitudes are bounded by some M . Hence this family is equicontinuous (for any given \epsilon one can use \delta = \frac{\epsilon}{M} in continuity arguments, no matter which curve in the family we are talking about). Of course, these curves are uniformly bounded, hence by the Arzela-Ascoli Theorem (not difficult) we can extract a subsequence of these curves which converges to a limit function.

Seeing that this limit function satisfies the differential equation isn’t that hard; if one chooses t, s \in (-a,a) close enough, one shows that | \frac{\phi_k(t) - \phi_k(s)}{t-s} - f(t, \phi(t))| < \epsilon .

Now for uniqueness, the standard theorem says: if, in addition, f satisfies a Lipschitz condition in y , that is, if there is some K > 0 where |f(x,y_1)-f(x,y_2)| \le K|y_1-y_2| , then the differential equation y'=f(x,y) has exactly one solution where \phi(0) = y_0 which is valid so long as the graph (x, \phi(x)) remains in R .

Here is the proof: first note that if K > 0 is the Lipschitz constant, then |f(x,y_1)-f(x,y_2)| \le K|y_1-y_2| < 2K|y_1-y_2| . This is clear but perhaps a strange step.
But now suppose that there are two solutions, say y_1(x) and y_2(x) where y_1(0) = y_2(0) . So set z(x) = y_1(x) -y_2(x) and note the following: z'(x) = y_1'(x) - y_2'(x) = f(x,y_1)-f(x,y_2) and |z'(x)| = |f(x,y_1)-f(x,y_2)| < 2K|y_1-y_2| = 2K|z(x)| . Now suppose there is some x_1 > 0 where z(x_1) > 0 . A Mean Value Theorem argument applied to z means that we can select our x_1 so that z' > 0 on an interval about x_1 (since z(0) = 0 ).

So, on this selected interval about x_1 we have z'(x) < 2Kz(x) (we can remove the absolute value signs).

Now we set up the differential equation: Y' = 2KY, Y(x_1) = z(x_1) which has a unique solution Y=z(x_1)e^{2K(x-x_1)} whose graph is always positive; Y(0) = z(x_1)e^{-2Kx_1} . Note that the graphs of z(x), Y(x) meet at (x_1, z(x_1)) . But z'(x_1) < 2Kz(x_1) = 2KY(x_1) = Y'(x_1) , so just to the left of x_1 the graph of z lies above the graph of Y : there is some \delta > 0 where z(x_1 - \delta) > Y(x_1 - \delta) .

But since z(0) = 0 < Y(0) , the graph of z must cross the graph of Y from below at some x_2 \in (0, x_1 - \delta) ; at such a crossing we would need z'(x_2) \ge Y'(x_2) = 2KY(x_2) = 2Kz(x_2) , contradicting z'(x) < 2Kz(x) on that interval.

So, no such point x_1 can exist.

Note that we used the fact that the solution to Y' = 2KY, Y(x_1) > 0 is always positive. Though this is an easy differential equation to solve, note the key fact that if we tried to separate the variables, we’d calculate \int_0^y \frac{1}{2Kt} dt and find that this improper integral diverges at 0 ; hence a solution that starts at a positive value can never reach zero. So, if we had Y' =2g(Y) where \int_0^y \frac{1}{g(t)} dt is a divergent improper integral and g(t) > 0 , we would get exactly the same result for exactly the same reason.
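
The classic counterexample shows what goes wrong when the improper integral converges instead (a sketch; the helper names are made up): for y' = 2\sqrt{|y|} we have \int_0^y \frac{1}{2\sqrt{t}} dt = \sqrt{y} , which is finite, and uniqueness fails at y(0) = 0 : both y = 0 and y = x^2 solve the same initial value problem.

```python
import math

def f(y):
    # right-hand side of y' = 2*sqrt(|y|); its Osgood integral converges at 0
    return 2.0 * math.sqrt(abs(y))

def residual(y, dy, x):
    """|y'(x) - f(y(x))|: zero exactly when y solves the equation at x."""
    return abs(dy(x) - f(y(x)))
```

Checking the residual of both y(x) = 0 and y(x) = x^2 at several x \ge 0 shows each is a genuine solution with y(0) = 0 .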

Hence we can recover Osgood’s Uniqueness Theorem which states:

If f(x,y) is continuous on R and for all (x, y_1), (x, y_2) \in R we have |f(x,y_1)-f(x,y_2)| \le g(|y_1-y_2|) where g is a positive function and \int_0^y \frac{1}{g(t)} dt diverges at y=0 , then the differential equation y'=f(x,y) has exactly one solution with \phi(0) = y_0 , which is valid so long as the graph (x, \phi(x)) remains in R .

