College Math Teaching

January 6, 2015

Quick Diversion: a rotating circle of dots within a circle

Filed under: calculus — Tags: , — collegemathteaching @ 3:37 am

Since grading final exams, I’ve been travelling a bit. I am doing some admin duties but should have some time to do some research prior to….SEARCH COMMITTEE. That is such a time suck.

But here is a bit of fun:

Check out this video

Now here is a challenge (that I will take; feel free to beat me to it): find a set of equations that describes the motion of the centers of these disks.

My idea: I might start with a helix of the type x = cos(t), y = sin(t), z = t and then have this helix change its center as it “moves up”; perhaps something like x = 4cos(t) - cos(t), y = 4sin(t) - sin(t), z = t . Then intersect this with surfaces of the following type (in cylindrical coordinates): r = \theta . Then, perhaps, the points might be described by the intersection of the helix with these surfaces? I’ll have to check it out.
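If I get around to it, a quick numerical experiment is easy to set up; here is a sketch in Python. To be clear: the parametrization is just my guess above, not derived from the video, and the `offset` parameter is something I made up to stagger multiple disks.

```python
import math

def center(t, offset=0.0):
    """Guessed position of one disk center at time t.

    Uses the post's tentative formulas x = 4cos(t) - cos(t),
    y = 4sin(t) - sin(t), z = t; offset staggers the disks.
    """
    s = t + offset
    x = 4 * math.cos(s) - math.cos(s)
    y = 4 * math.sin(s) - math.sin(s)
    return (x, y, s)

# Sample the path of one center over one revolution.
path = [center(2 * math.pi * k / 100) for k in range(101)]
```

Plotting `path` (and several offsets of it) would be the fastest way to see whether this guess looks anything like the video.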


November 1, 2014

Ok Graduate Student, do you want a pure math Ph. D.???

Filed under: academia, calculus, editorial, research — collegemathteaching @ 2:19 am


This slide made me chuckle (click to see a larger version). But here is the point of it: it is very, very difficult to earn your living by researching in pure mathematics.

Is it a reasonable expectation for you?

Ask yourself this: look at your advisor. Is your advisor considerably smarter than you are, or even moderately smarter than you are? If so, then forget about earning your living as a research professor in pure math. It. Is. NOT. Going. To. Happen.

Yeah, you might get a post-doc. You might even manage to get one of those “tenure track with little hope for tenure” jobs at a D-I research university…maybe (perhaps unlikely?).

I’ve been on search committees. I’ve seen the letters for those who didn’t get tenure; often these folks had decent publication records but didn’t get large enough external grants.

It is brutal out there.

If you get a pure math Ph. D. and you aren’t your advisor’s intellectual equal, about your only hope for a tenured academic job is at the “teaching intensive” universities; basically you’ll spend the vast majority of your time attempting to teach calculus to students of very average ability; after all, most of the teaching load in mathematics is teaching service courses rather than majors courses.

It does have its charm at times, but after 20+ years, it gets very, very old. I’ll discuss how to alleviate the boredom in a responsible way in another post. (e. g., it is probably a bad idea to, say, spice it up by teaching integration via hyperbolic trig functions or to try to teach residue integrals).

So, ask yourself: is your passion research and discovery? Or, is it teaching average students? If it is the latter: well, go ahead and get that theoretical math Ph. D.; after all, there ARE jobs out there, and we hired a couple of people last year and might hire more in the next couple of years.

IF your passion is research and mathematical discovery and you aren’t your advisor’s intellectual equal, either switch to applied mathematics (more demand for such research) OR enhance your education with sellable skills such as computer programming/modeling, software engineering or perhaps picking up a masters in statistics. Make yourself more marketable to industry.

October 29, 2014

Hyperbolic Trig Functions and integration…

In college calculus courses, I’ve always wrestled with “how much to cover in the hyperbolic trig functions” section.

On one hand, the hyperbolic trig functions make some integrals much easier. On the other hand: well, it isn’t as if our classes are populated with the highest caliber student (I don’t teach at MIT); many struggle with the standard trig functions. There is only so much that the average young mind can absorb.

In case your memory is rusty:

cosh(x) =\frac{e^x + e^{-x}}{2}, sinh(x) = \frac{e^x -e^{-x}}{2} and then it is immediate that the analogues of the standard “half/double angle” formulas hold; we do remember that \frac{d}{dx}cosh(x) = sinh(x), \frac{d}{dx}sinh(x) = cosh(x).

What is less immediate is the following: sinh^{-1}(x)  = ln(x+\sqrt{x^2+1}), cosh^{-1}(x) = ln(x + \sqrt{x^2 -1}) (x \ge 1).

Exercise: prove these formulas. Hint: if sinh(y) = x then e^{y} - 2x- e^{-y} =0 so multiply both sides by e^{y} to obtain e^{2y} -2x e^y - 1 =0 now use the quadratic formula to solve for e^y and keep in mind that e^y is positive.

For the other formula: same procedure, and remember that we are using the x \ge 0 branch of cosh(x) and that cosh(x) \ge 1
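A quick numerical sanity check of these logarithm formulas, comparing against the library inverses (a spot check, not a substitute for the exercise):

```python
import math

def asinh_formula(x):
    # ln(x + sqrt(x^2 + 1)), claimed formula for sinh^{-1}(x)
    return math.log(x + math.sqrt(x * x + 1))

def acosh_formula(x):
    # ln(x + sqrt(x^2 - 1)), claimed formula for cosh^{-1}(x), x >= 1
    return math.log(x + math.sqrt(x * x - 1))

# Compare against the standard library inverses at a few points.
for x in [0.0, 0.5, 2.0, 10.0]:
    assert abs(asinh_formula(x) - math.asinh(x)) < 1e-12
for x in [1.0, 1.5, 3.0, 10.0]:
    assert abs(acosh_formula(x) - math.acosh(x)) < 1e-12
```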

The following follows easily: \frac{d}{dx} sinh^{-1} (x) = \frac{1}{\sqrt{x^2 + 1}} (just set up sinh(y) = x and use implicit differentiation followed by noting cosh^2(x) -sinh^2(x) = 1 . ) and \frac{d}{dx} cosh^{-1}(x) = \frac{1}{\sqrt{x^2-1}} (similar derivation).

Now, we are off and running.

Example: \int \sqrt{x^2 + 1} dx =

We can make the substitution x =sinh(u), dx = cosh(u) du and obtain \int cosh^2(u) du = \int \frac{1}{2} (cosh(2u) + 1)du = \frac{1}{4}sinh(2u) + \frac{1}{2} u + C . Now use sinh(2u) = 2 sinh(u)cosh(u) and we obtain:

\frac{1}{2}sinh(u)cosh(u) + \frac{u}{2} + C . The back substitution isn’t that hard if we recognize cosh(u) = \sqrt{sinh^2(u) + 1} so we have \frac{1}{2} sinh(u) \sqrt{sinh^2(u) + 1} + \frac{u}{2} + C . Back substitution is now easy:

\frac{1}{2} x \sqrt{x^2+1} + \frac{1}{2} ln(x + \sqrt{x^2 + 1}) + C . No integration by parts is required and the dreaded \int sec^3(x) dx integral is avoided. Ok, I was a bit loose about the domains here; but in fact x + \sqrt{x^2 + 1} > 0 for all real x , so the ln(x + \sqrt{x^2 + 1}) term is valid for negative values of x as well.
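If you want to check the answer without redoing the algebra, compare a numerical derivative of the claimed antiderivative against the integrand (a spot check only):

```python
import math

def F(x):
    # claimed antiderivative of sqrt(x^2 + 1)
    return 0.5 * x * math.sqrt(x * x + 1) + 0.5 * math.log(x + math.sqrt(x * x + 1))

def f(x):
    # the integrand
    return math.sqrt(x * x + 1)

# central-difference derivative of F should match f
h = 1e-6
for x in [0.0, 1.0, 2.5]:
    deriv = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(deriv - f(x)) < 1e-6
```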

October 3, 2014

Gaps in my mathematics education

Filed under: calculus, editorial, elementary mathematics — Tags: , , — collegemathteaching @ 1:19 pm

I’ve spoken about the many gaps in my mathematics education; I’ve written about a few. But in these cases, I was writing about the gaps at, say, the senior undergraduate to beginning graduate level.

I admit that I’ve enjoyed filling in some of these.

But, I also…have…elementary level gaps that I frequently overlook.

In my case: I never learned trigonometry all that well; I had forgotten about the laws of cosines and sines. And I had forgotten how to derive the following types of formulae: sin(A+B) = sin(A)cos(B) + cos(A)sin(B), cos(A+B) = cos(A)cos(B) - sin(A)sin(B) .

So, I spent a few minutes going over these old facts.


They aren’t hard but I am a bit surprised that I let my basic ignorance continue on this long.

October 2, 2014

ARGH!!! I got stuck at the board…

Filed under: calculus, elementary mathematics, pedagogy — Tags: , , — collegemathteaching @ 5:51 pm

Related rate problem that required the “law of cosines”, which…is a trig rule that I never bothered to learn and couldn’t derive on the spot.

ARRRRGGGHHHH!!!!!!!!! (even after 20+ years, even AFTER preparing, things like this happen from time to time).

Now, of course, I won’t rest until I’ve learned those stupid rules. 🙂

I nailed the rest of them though.

Note: a student pulled out the manual and, given the diagram, finished it while I worked on another problem. He showed me the answer and I gave him a fist bump.

October 1, 2014

Osgood’s uniqueness theorem for differential equations

I am teaching a numerical analysis class this semester and we just started the section on differential equations. I want them to understand when we can expect to have a solution and when a solution satisfying a given initial condition is going to be unique.

We had gone over the “existence” theorem, which basically says: given y' = f(x,y) and initial condition y(x_0) = y_0 where (x_0,y_0) \in int(R) , where R is some rectangle in the x,y plane, if f(x,y) is a continuous function over R , then we are guaranteed to have at least one solution to the differential equation, which is guaranteed to be valid so long as (x, y(x)) stays in R .

I might post a proof of this theorem later; however an outline of how a proof goes will be useful here. With no loss of generality, assume that x_0 = 0 and the rectangle has the lines x = -a, x = a as vertical boundaries. Let \phi_0 = y_0 + f(0, y_0)x , the line of slope f(0, y_0) through (0, y_0) . Now partition the interval [-a, a] into -a, -\frac{a}{2}, 0, \frac{a}{2}, a and create a polygonal path as follows: use slope f(0, y_0) at (0, y_0) , then slope f(\frac{a}{2}, y_0 + \frac{a}{2}f(0, y_0)) at (\frac{a}{2}, y_0 +  \frac{a}{2}f(0, y_0)) and so on to the right; reverse this process going left. The idea: we are using Euler’s differential equation approximation method to obtain an initial piecewise linear approximation. Then do this again for step size \frac{a}{4} , then \frac{a}{8} , and so on.

In this way, we obtain an infinite family of continuous approximation curves. Because f(x,y) is continuous over R , it is also bounded, hence the curves have slopes whose magnitudes are bounded by some M. Hence this family is equicontinuous (for any given \epsilon one can use \delta = \frac{\epsilon}{M} in continuity arguments, no matter which curve in the family we are talking about). Of course, these curves are uniformly bounded, hence by the Arzela-Ascoli Theorem (not difficult) we can extract a subsequence of these curves which converges to a limit function.
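The polygonal curves here are nothing more than Euler's method run with smaller and smaller step sizes; a minimal sketch (the test equation y' = y is my choice for illustration, not part of the proof):

```python
def euler_polygon(f, x0, y0, a, n):
    """Euler's method from x0 to a with n steps; returns the polygon vertices."""
    h = (a - x0) / n
    pts = [(x0, y0)]
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)   # follow the slope field for one step
        x += h
        pts.append((x, y))
    return pts

# Example: y' = y, y(0) = 1 on [0, 1]; the endpoint values approach e
# as the step size is halved, mirroring the family of curves in the proof.
f = lambda x, y: y
approx = [euler_polygon(f, 0.0, 1.0, 1.0, 2 ** k)[-1][1] for k in range(1, 6)]
```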

Seeing that this limit function satisfies the differential equation isn’t that hard; if one chooses t, s \in (-a,a) close enough, one shows that | \frac{\phi_k(t) - \phi_k(s)}{t-s} - f(t, \phi(t))| < \epsilon . Now for the uniqueness part: if, in addition, there is a K > 0 where |f(x,y_1)-f(x,y_2)| \le K|y_1-y_2| , then the differential equation y'=f(x,y) has exactly one solution with \phi(0) = y_0 which is valid so long as the graph (x, \phi(x)) remains in R .

Here is the proof: we have K > 0 where |f(x,y_1)-f(x,y_2)| \le K|y_1-y_2| < 2K|y_1-y_2| . This is clear but perhaps a strange step.
But now suppose that there are two solutions, say y_1(x) and y_2(x) , where y_1(0) = y_2(0) . So set z(x) = y_1(x) -y_2(x) and note the following: z'(x) = y_1'(x) - y_2'(x) = f(x,y_1)-f(x,y_2) and |z'(x)| = |f(x,y_1)-f(x,y_2)| \le K|z(x)| < 2K|z(x)| . Now suppose z is not identically zero; then (replacing z by -z if necessary) there is some x_1 > 0 where z(x_1) > 0 . A Mean Value Theorem argument applied to z means that we can select our x_1 so that z' > 0 on an interval about x_1 (since z(0) = 0 ).

So, on this selected interval about x_1 we have z'(x) < 2Kz (we can remove the absolute value signs).

Now we set up the differential equation: Y' = 2KY, Y(x_1) = z(x_1) which has a unique solution Y=z(x_1)e^{2K(x-x_1)} whose graph is always positive; Y(0) = z(x_1)e^{-2Kx_1} > 0 . Note that the graphs of z(x), Y(x) meet at (x_1, z(x_1)) . But z'(x_1) < 2Kz(x_1) = Y'(x_1) , so just to the left of x_1 the graph of z lies above the graph of Y ; that is, there is a \delta > 0 where z(x_1 - \delta) > Y(x_1 - \delta) .

But since z(0) = 0 < Y(0) and z(x_1 - \delta) > Y(x_1 - \delta) , the graph of z must cross the graph of Y at some x_2 \in (0, x_1 - \delta) , rising from below Y to above Y . That is impossible: at such a crossing z(x_2) = Y(x_2) , so Y'(x_2) = 2KY(x_2) = 2Kz(x_2) > z'(x_2) ; that is, Y grows strictly faster than z exactly where z would have to overtake it.

So, no such point x_1 can exist.

Note that we used the fact that the solution to Y' = 2KY, Y(x_1) > 0 is always positive. Though this is an easy differential equation to solve, note the key fact: if we tried to separate the variables, we’d calculate \int_0^y \frac{1}{2Kt} dt and find that this is an improper integral which diverges to positive \infty ; hence a solution that is positive somewhere can never reach zero nor change sign. So, if we had Y' =2g(Y) where \int_0^y \frac{1}{g(t)} dt is an infinite improper integral and g(t) > 0 , we would get exactly the same result for exactly the same reason.

Hence we can recover Osgood’s Uniqueness Theorem which states:

If f(x,y) is continuous on R and for all (x, y_1), (x, y_2) \in R we have |f(x,y_1)-f(x,y_2)| \le g(|y_1-y_2|) , where g is a positive function and \int_0^y \frac{1}{g(t)} dt diverges to \infty at y=0 , then the differential equation y'=f(x,y) has exactly one solution with \phi(0) = y_0 which is valid so long as the graph (x, \phi(x)) remains in R .

September 23, 2014

Ok, what do you see here? (why we don’t blindly trust software)

I had Dfield8 from MATLAB propose solutions to y' = t(y-2)^{\frac{4}{5}} meeting the following initial conditions:

y(0) = 0, y(0) = 3, y(0) = 2.


Now, of course, one of these solutions is non-unique. But, of all of the solutions drawn: do you trust ANY of them? Why or why not?

Note: you really don’t have to do much calculus to see what is wrong with at least one of these. But, if you must know, the general solution is given by y(t) = (\frac{t^2}{10} +C)^5 + 2 (and, of course, the equilibrium solution y = 2 ). But that really doesn’t provide more information than the differential equation does.
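For the skeptical: one can verify the general solution directly by comparing a numerical derivative of y(t) against the right hand side of the equation (C = 0.3 is an arbitrary choice):

```python
def y(t, C):
    # claimed general solution: y(t) = (t^2/10 + C)^5 + 2
    return (t * t / 10 + C) ** 5 + 2

def rhs(t, yv):
    # right hand side of the ODE: t * (y - 2)^(4/5), for y >= 2
    return t * (yv - 2) ** 0.8

C = 0.3
h = 1e-6
for t in [0.5, 1.0, 2.0]:
    # central-difference y'(t) should match t * (y - 2)^(4/5)
    deriv = (y(t + h, C) - y(t - h, C)) / (2 * h)
    assert abs(deriv - rhs(t, y(t, C))) < 1e-4
```

Of course no such check rescues the non-unique solution through y(0) = 2; that is exactly the point of the post.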

By the way, here are some “correct” plots of the solutions, (up to uniqueness)


September 19, 2014

Freshman calculus: they don’t always know the basics….

Filed under: basic algebra, calculus — Tags: , — collegemathteaching @ 5:43 pm

Example one: many students don’t know that \frac{\frac{a}{b}}{c} \ne \frac{a}{\frac{b}{c}} (of course, assume that a, b, c \ne 0 ). Where this came up: when we computed lim_{h \rightarrow 0} \frac{\frac{1}{1+h} - 1}{h} we obtained lim_{h \rightarrow 0} \frac{\frac{-h}{1+h}}{h} and a student didn’t understand why this was equal to lim_{h \rightarrow 0} \frac{-1}{1+h} .

I ended up asking the student to simplify \frac{\frac{2}{3}}{2} and then asking: ok, “if I have 2/3’rds of a pie and I give two people an equal piece of that, how much pie does each person get?”
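For what it’s worth, exact rational arithmetic makes the point immediately (a throwaway check):

```python
from fractions import Fraction

a, b, c = Fraction(2), Fraction(3), Fraction(2)

left = (a / b) / c      # (2/3)/2 = 1/3: each person's share of the pie
right = a / (b / c)     # 2/(3/2) = 4/3: a different number entirely

assert left == Fraction(1, 3)
assert right == Fraction(4, 3)
assert left != right    # the two nested fractions are NOT equal
```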

Example two: I gave “find the domain of \frac{1}{\sqrt{x^2 - 9}} ” and two students didn’t understand why an answer of -3 > x > 3 was logically impossible. One of them told me: “my calculus teacher in high school told me to do it this way.” I am 99.99 percent sure that this isn’t true, but, well, I stayed with it until the student understood why such a statement was logically impossible. Oh yes, this same “I had calculus in high school” student was sure that I was wrong when I told him that “the derivative of a constant function is zero”; he was SURE that it is “1”.

September 9, 2014

Chebyshev polynomials: a topological viewpoint

Chebyshev (or Tchebycheff) polynomials are a class of mutually orthogonal polynomials (with respect to the inner product: f \cdot g  = \int^1_{-1} \frac{1}{\sqrt{1 - x^2}} f(x)g(x) dx ) defined on the interval [-1, 1] . Yes, I realize that this is an improper integral, but it does converge in our setting.

These are used in approximation theory; here are a couple of uses:

1. The roots of the Chebyshev polynomial can be used to find the values of x_0, x_1, x_2, ...x_k \in [-1,1] that minimize the maximum of |(x-x_0)(x-x_1)(x-x_2)...(x-x_k)| over the interval [-1,1] . This is important in minimizing the error of the Lagrange interpolation polynomial.

2. The Chebyshev polynomial can be used to adjust an approximating Taylor polynomial P_n to increase its accuracy (away from the center of expansion) without increasing its degree.

The purpose of this note isn’t to discuss the utility but rather to discuss an interesting property that these polynomials have. The Wiki article on these polynomials is reasonably good for that purpose.

Let’s discuss the polynomials themselves. They are defined for all positive integers n as follows:

T_n(x) = cos(n \cdot arccos(x)) . Now, it is an interesting exercise in trig identities to discover that these ARE polynomials to begin with; one shows this to be true for, say, n \in \{0, 1, 2\} by using angle addition formulas and the standard calculus resolution of things like sin(arccos(x)) . Then one discovers the relation T_{n+1} =2xT_n - T_{n-1} to calculate the rest.
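The recurrence is easy to check against the cos(n arccos(x)) definition numerically (a spot check, not one of the trig-identity proofs):

```python
import math

def chebyshev(n, x):
    """T_n(x) computed from the recurrence T_{n+1} = 2x T_n - T_{n-1}."""
    t_prev, t_cur = 1.0, x          # T_0 = 1, T_1 = x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t_cur = t_cur, 2 * x * t_cur - t_prev
    return t_cur

# The recurrence values agree with cos(n arccos(x)) across [-1, 1].
for n in range(8):
    for k in range(21):
        x = -1 + k / 10
        assert abs(chebyshev(n, x) - math.cos(n * math.acos(x))) < 1e-9
```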

The cos(n \cdot arccos(x)) definition allows for some properties to be calculated with ease: the zeros occur when arccos(x) = \frac{\pi}{2n} + \frac{k \pi}{n} and the first derivative has zeros where arccos(x) = \frac{k \pi}{n} ; these ALL correspond to either an endpoint max/min at x=1, x = -1 or local maxes and mins whose y values are also \pm 1 . Here are the graphs of T_4(x), T_5 (x)



Now here is a key observation: the graph of a T_n forms n spanning arcs in the square [-1, 1] \times [-1,1] and separates the square into n+1 regions. So, if there is some other function f whose graph is a connected, piecewise smooth arc that is transverse to the graph of T_n that both spans the square from x = -1 to x = 1 and that stays within the square, that graph must have n points of intersection with the graph of T_n .

Now suppose that f is the graph of a polynomial of degree n whose leading coefficient is 2^{n-1} and whose graph stays completely in the square [-1, 1] \times [-1,1] . Then the polynomial Q(x) = T_n(x) - f(x) has degree n-1 (because the leading terms cancel via the subtraction) but has n roots (the places where the graphs cross). That is clearly impossible; hence the only such polynomial is f(x) = T_n(x) .

This result is usually stated in the following way: T_n(x) is normalized to be monic (have leading coefficient 1) by dividing the polynomial by 2^{n-1} and then it is pointed out that the normalized T_n(x) is the unique monic polynomial over [-1,1] that stays within [-\frac{1}{2^{n-1}}, \frac{1}{2^{n-1}}] for all x \in [-1,1] . All other monic polynomials have a graph that leaves that box at some point over [-1,1] .

Of course, one can easily cook up analytic functions which don’t leave the box but these are not monic polynomials of degree n .

August 31, 2014

The convolution integral: do some examples in Calculus III or not?

For us, calculus III is the most rushed of the courses, especially if we start with polar coordinates. Getting to the “three integral theorems” is a real chore. (Ok, Green’s, the Divergence and Stokes’ theorems are really just \int_{\Omega} d \sigma = \int_{\partial \Omega} \sigma , but that is the subject of another post.)

But watching this lecture made me wonder: should I say a few words about how to calculate a convolution integral?

Note: I’ve discussed a type of convolution integral with regards to solving differential equations here.

In the context of Fourier Transforms, the convolution integral is defined as it was in analysis class: f*g = \int^{\infty}_{-\infty} f(x-t)g(t) dt . Typically, we insist that the functions be, say, L^1 and note that it is a bit of a chore to show that the convolution of two L^1 functions is L^1 ; one proves this via the Fubini-Tonelli Theorem.

(The straight out product of two L^1 functions need not be L^1 ; e.g, consider f(x) = \frac {1}{\sqrt{x}} for x \in (0,1] and zero elsewhere)

So, assuming that the integral exists, how do we calculate it? Easy, you say? Well, it can be, after practice.

But to test out your skills, let f(x) = g(x) be the function that is 1 for x \in [\frac{-1}{2}, \frac{1}{2}] and zero elsewhere. So, what is f*g ???

So, it is easy to see that f(x-t)g(t) only assumes the value of 1 on a specific region of the (x,t) plane and is zero elsewhere; this is just like doing an iterated integral of a two variable function; at least the first step. This is why it fits well into calculus III.

f(x-t)g(t) = 1 for the following region: (x,t), -\frac{1}{2} \le x-t \le \frac{1}{2}, -\frac{1}{2} \le t \le \frac{1}{2}

This region is the parallelogram with vertices at (-1, -\frac{1}{2}), (0, -\frac{1}{2}), (0, \frac{1}{2}), (1, \frac{1}{2}) .


Now we see that we can’t do the integral in one step. So, the function we are integrating f(x-t)f(t) has the following description:

f(x-t)f(t)=\left\{\begin{array}{c} 1, x \in [-1,0], -\frac{1}{2} \le t \le \frac{1}{2}+x \\ 1, x\in [0,1], -\frac{1}{2}+x \le t \le \frac{1}{2} \\ 0 \text{ elsewhere} \end{array}\right.

So the convolution integral is \int^{\frac{1}{2} + x}_{-\frac{1}{2}} dt = 1+x for x \in [-1,0) and \int^{\frac{1}{2}}_{-\frac{1}{2} + x} dt = 1-x for x \in [0,1] .

That is, of course, the tent map that we described here. The graph is shown here:


So, it would appear to me that a good time to do a convolution exercise is right when we study iterated integrals; just tell the students that this is a case where one “stops before doing the outside integral”.
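And a crude Riemann sum confirms the tent map answer (a numerical check, not the iterated-integral derivation):

```python
def f(x):
    # indicator function of [-1/2, 1/2]
    return 1.0 if -0.5 <= x <= 0.5 else 0.0

def convolve(x, n=4000):
    """Midpoint Riemann-sum approximation of (f*f)(x) = int f(x-t) f(t) dt."""
    a, b = -0.5, 0.5          # f(t) vanishes outside this interval
    h = (b - a) / n
    return sum(f(x - (a + (i + 0.5) * h)) * h for i in range(n))

# The result matches the tent map 1 - |x| on [-1, 1] and 0 elsewhere.
for x in [-0.75, -0.25, 0.0, 0.5]:
    assert abs(convolve(x) - (1 - abs(x))) < 1e-2
assert convolve(1.5) == 0.0
```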

