College Math Teaching

December 22, 2015

Multi leaf polar graphs and total area…

Filed under: calculus, elementary mathematics, integrals — Tags: , — collegemathteaching @ 4:07 am

I saw polar coordinate calculus for the first time in 1977. I’ve taught calculus as a TA and as a professor since 1987. And yet, I’ve never thought of this simple little fact.

Consider r(\theta) = sin(n \theta), 0 \leq \theta \leq 2 \pi . Now it is well known that the area formula (area enclosed by a polar graph, assuming no “doubling”, self intersections, etc.) is A = \frac{1}{2} \int^b_a (r(\theta))^2 d \theta

Now these leaved roses have the following types of graphs: n leaves if n is odd, and 2n leaves if n is even (in the odd case, the graph traces itself twice over [0, 2\pi] ).




So here is the question: how much total area is covered by the graph (all the leaves put together, do NOT count “overlapping”)?

Well, for n an integer, the answer is: \frac{\pi}{4} if n is odd, and \frac{\pi}{2} if n is even! That’s it! Want to know why?

Do the integral: if n is odd, our total area is \frac{n}{2}\int^{\frac{\pi}{n}}_0 (sin(n \theta))^2 d\theta = \frac{n}{2}\int^{\frac{\pi}{n}}_0 \frac{1}{2} - \frac{1}{2}cos(2n\theta) d\theta =\frac{\pi}{4} . If n is even, we have the same integral but the outside coefficient is \frac{2n}{2} = n , which is the only difference. Aside from parity, the number of leaves does not matter as to the total area!
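The parity fact is easy to check numerically. Here is a quick sketch (the function names are mine, and the petal area is computed by a simple midpoint rule rather than the closed form):

```python
import math

def petal_area(n, steps=20000):
    # Midpoint-rule estimate of (1/2) * integral of sin^2(n*t) over one petal [0, pi/n]
    h = (math.pi / n) / steps
    return sum(0.5 * math.sin(n * (i + 0.5) * h) ** 2 for i in range(steps)) * h

def total_rose_area(n):
    # r = sin(n*theta) has n petals for odd n and 2n petals for even n
    petals = n if n % 2 == 1 else 2 * n
    return petals * petal_area(n)

# total_rose_area(3) and total_rose_area(5) are both close to pi/4,
# while total_rose_area(2) and total_rose_area(4) are both close to pi/2.
```

Each petal has area \pi / (4n) , so the petal count is the only thing parity changes.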

Now the fun starts when one considers a fractional multiple of \theta and I might ponder that some.


October 29, 2015

The Alternating Series Test: the need for hypotheses

Filed under: calculus, series — Tags: — collegemathteaching @ 9:49 pm

It is well known that if a series \sum a_k meets the following conditions:

1. (a_k)(a_{k+1}) < 0 for all k
2. lim_{k \rightarrow \infty} a_k = 0
3. |a_k| > |a_{k+1} | for all k

the series converges. This is the famous “alternating series test”.

I know that I am frequently remiss in discussing what can go wrong if condition 3 is not met.

An example that is useful is 1 - \frac{1}{\sqrt{2}} + \frac{1}{3} - \frac{1}{\sqrt{4}} + ... + \frac{1}{2n-1} - \frac{1}{\sqrt{2n}} + ...

Clearly this series meets conditions 1 and 2: the series alternates and the terms approach zero. But the series can be written (carefully) as:

\sum_{k=1}^{\infty} (\frac{1}{2k-1} - \frac{1}{\sqrt{2k}}) .

Then one can combine the terms in the parentheses and do a limit comparison with the series \sum_{k=1}^{\infty} \frac{1}{k} to see that the series diverges.
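One can also watch the divergence happen numerically. A small sketch (the helper name is mine): the grouped terms behave like -\frac{1}{\sqrt{2k}} , so the partial sums drift off to -\infty even though the original alternating terms go to zero.

```python
import math

def grouped_partial_sum(N):
    # S_N = sum_{k=1}^{N} ( 1/(2k-1) - 1/sqrt(2k) )
    s = 0.0
    for k in range(1, N + 1):
        s += 1.0 / (2 * k - 1) - 1.0 / math.sqrt(2 * k)
    return s

# The partial sums decrease without bound, roughly like -sqrt(2N):
# conditions 1 and 2 of the test hold, but condition 3 fails and the series diverges.
```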

July 13, 2015

Trolled by Newton’s Law of Cooling…

Filed under: calculus, differential equations, editorial — Tags: , — collegemathteaching @ 8:55 pm

From a humor website: there is a Facebook account called “customer service” that trolls customers making complaints. Though that isn’t a topic here, it is interesting to see Newton’s Cooling Law get mentioned:


June 9, 2015

Volumes of n-balls: why?

Filed under: calculus, geometry — Tags: , , — collegemathteaching @ 11:38 am

I talked about a method of finding the hypervolume of an n-ball in a previous post. To recap: the volume of the n-ball is the “hypervolume” (I’ll be dropping “hyper”) of the region described by \{(x_1, x_2,...,x_n) | x_1^2 + x_2^2 + ... + x_n^2 \leq R^2 \} .

The formula is: V_n = \frac{\pi^{\frac{n}{2}}}{\Gamma[\frac{n}{2} + 1]} R^n
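Before deriving it, the formula is easy to check against the familiar low-dimensional cases with math.gamma; a quick sketch (the function name is mine):

```python
import math

def ball_volume(n, R=1.0):
    # V_n = pi^(n/2) / Gamma(n/2 + 1) * R^n
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1) * R ** n

# Familiar cases: V_1 = 2R (an interval), V_2 = pi R^2 (a disk),
# V_3 = (4/3) pi R^3 (the usual ball).
```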

Here, we’ll explore this topic further, both giving a different derivation (from Greg Huber’s American Mathematical Monthly paper) and make some informal observations.

Derivation: the argument I present can be tweaked into a formal induction argument showing that the volume V_n is proportional to the n’th power of the radius R .

Now note that if the surface area of the n-1 sphere is given by W_{n-1} = w_{n-1}R^{n-1} , we have, from the theory of differentials, \frac{dV_n}{dR} = W_{n-1} . Think of taking a sphere and adding just a bit \Delta R to its radius; you obtain a shell of thickness \Delta R all the way around the sphere whose volume is roughly equal to the surface area times \Delta R .


So we can rewrite this as V_n = \int^R_0 W_{n-1} dr = \int^R_0 w_{n-1}r^{n-1} dr = w_{n-1}\int^R_0r^{n-1} dr

To see what comes next, we first write this same quantity in two different ways:

\int \int \int ...\int_{S^{n-1}} \Pi^{n}_{i=1} dx_i = \int^R_0 w_{n-1}r^{n-1} dr = w_{n-1}\int^R_0r^{n-1} dr  = \int \int \int ...\int^R_{0} r^{n-1} J(\theta_1, ..\theta_{n-1}) dr .

The first integral is the integral in rectangular coordinates over the region bounded by the n-1 sphere. The rightmost integral is the same integral in generalized spherical coordinates (see Blumenson’s excellent Monthly article), where the first iterated integrals are those with angular limits, with J being the angular volume element. The middle integral is the volume integral. All are equal to the volume of the n-ball. The key here is that the iterated angular integrals, evaluated over the entire n-1 sphere, are equal to w_{n-1} .

Now integrate e^{-r^2} over the region bounded by the sphere r^2 = x_1^2 + x_2^2 + ...x_n^2 , noting that e^{-r^2} = e^{-x_1^2}e^{-x_2^2}...e^{-x_n^2} :

\int \int \int ...\int_{S^{n-1}} \Pi^{n}_{i=1}e^{-x_i^2} dx_i  = w_{n-1}\int^R_0e^{-r^2}r^{n-1} dr  = \int \int \int ...\int^R_{0} r^{n-1}e^{-r^2}J(\theta_1, ..\theta_{n-1}) dr

Equality holds between the middle and right integral because in “angle/r” space, the r and angular coordinates are independent. Equality between the leftmost and rightmost integrals holds because this is a mere change of variables.
So we can now drop the rightmost integral. Now take a limit as R \rightarrow \infty :

\int \int \int ...\int_{R^{n}} \Pi^{n}_{i=1}e^{-x_i^2} dx_i  = (\int^{\infty}_{-\infty} e^{-x^2} dx)^n = w_{n-1}\int^{\infty}_0e^{-r^2}r^{n-1} dr

The left integral is just the n-th power of the Gaussian integral and is therefore \pi^{\frac{n}{2}} , and the substitution u = r^2 turns the right integral into \frac{w_{n-1}}{2}\int^{\infty}_{0} u^{\frac{n}{2} -1}e^{-u}du = \frac{w_{n-1}}{2}\Gamma[\frac{n}{2}] (recall \Gamma[x] = \int^{\infty}_{0} t^{x-1}e^{-t} dt ).

So w_{n-1} = \frac{2 \pi^{\frac{n}{2}}}{\Gamma[\frac{n}{2}]} and hence, by integration, v_n = \frac{2 \pi^{\frac{n}{2}}}{n\Gamma[\frac{n}{2}]}= \frac{ \pi^{\frac{n}{2}}}{\Gamma[\frac{n}{2}+1]} (since \frac{n}{2}\Gamma[\frac{n}{2}] = \Gamma[\frac{n}{2}+1] ).
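Both the sphere-area coefficient and the relation \frac{dV_n}{dR} = W_{n-1} (which for unit radius says n v_n = w_{n-1} ) can be sanity-checked with math.gamma; the function names here are mine:

```python
import math

def sphere_area(n):
    # w_{n-1} = 2 pi^(n/2) / Gamma(n/2): hyperarea of the unit (n-1)-sphere in R^n
    return 2 * math.pi ** (n / 2) / math.gamma(n / 2)

def unit_ball_volume(n):
    # v_n = pi^(n/2) / Gamma(n/2 + 1): volume of the unit n-ball
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

# n = 2 gives the circumference 2*pi of the unit circle; n = 3 gives the
# surface area 4*pi of the unit sphere; and n * v_n = w_{n-1} in general.
```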

Now v_n =V_n when R = 1 . \frac{v_n}{2^n} can be thought of as the fraction of the cube with vertices (\pm 1, \pm 1, ... \pm 1) that is taken up by the inscribed unit ball.

Now we set R = 1 and look at a graph of hypervolume vs. n:

The first graph is the ratio of the volume taken up by the ball versus the hypervolume of the hypercube that the ball is inscribed in.


Next we see that the hypervolume peaks at n = 5 (over real values of n , the maximum lies between 5 and 6) and then starts to decline to zero. Of course there has to be an inflection point somewhere; it turns out to be between n = 10 and n = 11.
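The peak at n = 5 is easy to confirm directly from the Gamma-function formula (a sketch; the names are mine):

```python
import math

def unit_ball_volume(n):
    # v_n = pi^(n/2) / Gamma(n/2 + 1), the unit-radius n-ball volume
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

vols = {n: unit_ball_volume(n) for n in range(1, 21)}
peak = max(vols, key=vols.get)  # the dimension with the largest unit-ball volume
# peak is 5, and past it the volumes decline toward zero
```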


Now we plot the hyperarea of the n-1 sphere vs. the hypervolume of the ball that it bounds; we see that more and more of the hypervolume of the ball is concentrated near the boundary as the dimension goes up.


For more: see the interesting discussion on Division by Zero.

May 31, 2015

And a Fields Medalist makes me feel better

Filed under: calculus, editorial, elementary mathematics, popular mathematics, topology — Tags: — collegemathteaching @ 10:30 pm

I have subscribed to Terence Tao’s blog.

His latest post is about a clever observation about…calculus: in particular it is about calculating:

\frac{d^{k+1}}{dx^{k+1}}(1+x^2)^{\frac{k}{2}} for k \in \{1, 2, 3, ... \} . Try this yourself and surf to his post to see the “slick, 3 line proof”.

But that really isn’t the point of this post.

This is the point: I often delight in finding something “fun” and “new to me” about an established area. I thought “well, that is because I am too dumb to do the really hard stuff.” (Yes, I’ve published, but my results are not Annals of Mathematics caliber stuff. 🙂 )

But I see that even the smartest, most accomplished among us can delight in the fun, simple things.

That makes me feel better.

Side note: I haven’t published much on this blog lately, mostly because I’ve been busy updating this one. It is a blog giving notes for my undergraduate topology class. That class was time consuming, but I had the teaching time of my life. I hope that my students enjoyed it too.

May 11, 2015

The hypervolume of the n-ball enclosed by a standard n-1 sphere

I am always looking for interesting calculus problems to demonstrate various concepts and perhaps generate some interest in pure mathematics.
And yes, I like to “blow off some steam” by spending some time having some non-technical mathematical fun with elementary mathematics.

This post uses only:

1. Integration by parts and basic reduction formulas.
2. Trig substitution.
3. Calculation of volumes (and hyper volumes) by the method of cross sections.
4. Induction
5. Elementary arithmetic involving factorials.

The quest: find a formula that finds the (hyper)volume of the region \{(x_1, x_2, x_3,...,x_k) | \sum_{i=1}^k x_i^2 \leq R^2 \} \subset R^k

We will assume that the usual tools of calculus work as advertised.

Start. If we denote the (hyper)volume of the k-ball by V_k , we will start with the assumption that V_1 = 2R ; that is, the distance between the endpoints of [-R,R] is 2R .

Step 1: we show, via induction, that V_k =c_kR^k where c_k is a constant and R is the radius.

Our proof will be inefficient for instructional purposes.

We know that V_1 =2R , hence the induction hypothesis holds for the first case and c_1 = 2 . We now show the k = 2 case in detail because, for the beginner, the technique used further along will be easier to follow after seeing it here.

Yes, I know that you know that V_2 = \pi R^2 and you’ve seen many demonstrations of this fact. Here is another: let’s calculate this using the method of “area by cross sections”. Here is x^2 + y^2 = R^2 with some y = c cross sections drawn in.


Now do the calculation by integrals: we will use symmetry, doing only the upper half and multiplying our result by 2. At each y = y_c level, call the radius from the center line to the circle R(y) , so the total length of the “y is constant” cross section is 2R(y) , and we multiply by the thickness dy to obtain V_2 = 4 \int^{y=R}_{y=0} R(y) dy .

But remember that the curve in question is x^2 + y^2 = R^2 and so if we set x = R(y) we have R(y) = \sqrt{R^2 -y^2} and so our integral is 4 \int^{y=R}_{y=0}\sqrt{R^2 -y^2}  dy

Now this integral is no big deal. But HOW we solve it will help us down the road. So here, we use the change of variable (aka “trigonometric substitution”): y = Rsin(t), dy = Rcos(t)dt to change the integral to:

4 \int^{\frac{\pi}{2}}_0 R^2 cos^2(t) dt = 4R^2 \int^{\frac{\pi}{2}}_0  cos^2(t) dt therefore

V_2 = c_2 R^2 where:

c_2 = 4\int^{\frac{\pi}{2}}_0  cos^2(t) dt

Yes, I know that this is an easy integral to solve, but I first presented the result this way in order to make a point.

Of course, c_2 = 4\int^{\frac{\pi}{2}}_0  cos^2(t) dt = 4\int^{\frac{\pi}{2}}_0 \frac{1}{2} + \frac{1}{2}cos(2t) dt = \pi

Therefore, V_2 =\pi R^2 as expected.

Exercise for those seeing this for the first time: compute c_3 and V_3 by using the above methods.

Inductive step: Assume V_k = c_kR^k Now calculate using the method of cross sections above (and here we move away from x-y coordinates to more general labeling):

V_{k+1} = 2\int^R_0 V_k(R(x_{k+1})) dx_{k+1} = 2 \int^R_0 c_k (R(x_{k+1}))^k dx_{k+1} =c_k 2\int^R_0 (R(x_{k+1}))^k dx_{k+1}

Now we do the substitutions: first of all, we note that x_1^2 + x_2^2 + ...x_{k}^2 + x_{k+1}^2 = R^2 and so

x_1^2 + x_2^2 + ...+x_k^2 = R^2 - x_{k+1}^2 . Now for the key observation: x_1^2 + x_2^2 + ...+x_k^2 =(R(x_{k+1}))^2 and so R(x_{k+1}) = \sqrt{R^2 - x_{k+1}^2}

Now use the induction hypothesis to note:

V_{k+1} = c_k 2\int^R_0 (R^2 - x_{k+1}^2)^{\frac{k}{2}} dx_{k+1}

Now do the substitution x_{k+1} = Rsin(t), dx_{k+1} = Rcos(t)dt and the integral is now:

V_{k+1} = c_k 2\int^{\frac{\pi}{2}}_0 R^{k+1} cos^{k+1}(t) dt = c_k(2 \int^{\frac{\pi}{2}}_0 cos^{k+1}(t) dt)R^{k+1} which is what we needed to show.

In fact, we have shown a bit more. We’ve shown that c_1 = 2 =2 \int^{\frac{\pi}{2}}_0(cos(t))dt, c_2 = 2 \cdot 2\int^{\frac{\pi}{2}}_0 cos^2(t) dt = c_1 2\int^{\frac{\pi}{2}}_0 cos^2(t) dt and, in general,

c_{k+1} = c_{k}(2 \int^{\frac{\pi}{2}}_0 cos^{k+1}(t) dt) = 2^{k+1} \int^{\frac{\pi}{2}}_0(cos^{k+1}(t))dt \int^{\frac{\pi}{2}}_0(cos^{k}(t))dt \int^{\frac{\pi}{2}}_0(cos^{k-1}(t))dt .....\int^{\frac{\pi}{2}}_0(cos(t))dt
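The recursion c_{k+1} = c_k \cdot 2\int^{\frac{\pi}{2}}_0 cos^{k+1}(t) dt can be checked numerically before we evaluate the integrals in closed form; a sketch using a midpoint rule (the function name is mine):

```python
import math

def cos_power_integral(n, steps=100000):
    # Midpoint-rule estimate of the integral of cos^n(t) over [0, pi/2]
    h = (math.pi / 2) / steps
    return sum(math.cos((i + 0.5) * h) ** n for i in range(steps)) * h

c = 2.0  # c_1 = 2
for k in range(1, 6):
    c *= 2 * cos_power_integral(k + 1)  # c_{k+1} = c_k * 2 * (integral of cos^{k+1})
# c is now c_6, which should equal pi^3 / 6, the 6-ball volume coefficient
```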

Finishing the formula

We now need to calculate these easy calculus integrals: in this case the reduction formula:

\int cos^n(x) dx = \frac{1}{n}cos^{n-1}(x)sin(x) + \frac{n-1}{n} \int cos^{n-2}(x) dx is useful (it is merely integration by parts). Now use the limits and elementary calculation to obtain:

\int^{\frac{\pi}{2}}_0 cos^n(x) dx = \frac{n-1}{n} \int^{\frac{\pi}{2}}_0 cos^{n-2}(x)dx to obtain:

\int^{\frac{\pi}{2}}_0 cos^n(x) dx = (\frac{n-1}{n})(\frac{n-3}{n-2})......(\frac{3}{4})\frac{\pi}{4} if n is even and:
\int^{\frac{\pi}{2}}_0 cos^n(x) dx = (\frac{n-1}{n})(\frac{n-3}{n-2})......(\frac{4}{5})\frac{2}{3} if n is odd.
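These Wallis-type values are easy to encode directly from the reduction formula I_n = \frac{n-1}{n} I_{n-2} with I_0 = \frac{\pi}{2} and I_1 = 1 ; a sketch (the function name is mine):

```python
import math

def wallis(n):
    # Integral of cos^n(x) over [0, pi/2] via the reduction I_n = ((n-1)/n) * I_{n-2}
    val = math.pi / 2 if n % 2 == 0 else 1.0  # I_0 = pi/2, I_1 = 1
    start = 2 if n % 2 == 0 else 3
    for m in range(start, n + 1, 2):
        val *= (m - 1) / m
    return val

# wallis(2) = pi/4, wallis(3) = 2/3, wallis(4) = 3*pi/16, wallis(5) = 8/15
```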

Now to come up with something resembling a closed formula let’s experiment and do some calculation:

Note that c_1 = 2, c_2 = \pi, c_3 = \frac{4 \pi}{3}, c_4 = \frac{\pi^2}{2}, c_5 = \frac{2^3 \pi^2}{3 \cdot 5} = \frac{8 \pi^2}{15}, c_6 = \frac{\pi^3}{3 \cdot 2} = \frac{\pi^3}{6} .

So we can make the inductive conjecture that c_{2k} = \frac{\pi^k}{k!} and see how it holds up: c_{2k+2} = 2^2 \int^{\frac{\pi}{2}}_0(cos^{2k+2}(t))dt \int^{\frac{\pi}{2}}_0(cos^{2k+1}(t))dt \frac{\pi^k}{k!}

= 2^2 ((\frac{2k+1}{2k+2})(\frac{2k-1}{2k})......(\frac{3}{4})\frac{\pi}{4})((\frac{2k}{2k+1})(\frac{2k-2}{2k-1})......\frac{2}{3})\frac{\pi^k}{k!}

Now notice the telescoping effect of the fractions coming from the \int^{\frac{\pi}{2}}_0 cos^{2k+1}(t) dt factor. All factors cancel except for the (2k+2) in the first denominator and the 2 in the first numerator, as well as the \frac{\pi}{4} factor. This leads to:

c_{2k+2} = 2^2(\frac{\pi}{4})\frac{2}{2k+2} \frac{\pi^k}{k!} = \frac{\pi^{k+1}}{(k+1)!} as required.

Now we need to calculate c_{2k+1} = 2\int^{\frac{\pi}{2}}_0(cos^{2k+1}(t))dt c_{2k} = 2\int^{\frac{\pi}{2}}_0(cos^{2k+1}(t))dt \frac{\pi^k}{k!}

= 2 (\frac{2k}{2k+1})(\frac{2k-2}{2k-1})......(\frac{4}{5})(\frac{2}{3})\frac{\pi^k}{k!} = 2 \frac{(2k)(2k-2)(2k-4)...(4)(2)}{(2k+1)(2k-1)...(5)(3)} \frac{\pi^k}{k!}

To simplify this further: split up the factors of the k! in the denominator and put one between each denominator factor:

= 2 \frac{(2k)(2k-2)(2k-4)...(4)(2)}{(2k+1)(k)(2k-1)(k-1)...(5)(2)(3)(1)} \pi^k . Now multiply the denominator by 2^k , putting one factor of 2 with each of the interleaved factors of k! ; also multiply by 2^k in the numerator to obtain:

(2) 2^k \frac{(2k)(2k-2)(2k-4)...(4)(2)}{(2k+1)(2k)(2k-1)(2k-2)...(6)(5)(4)(3)(2)} \pi^k . Now gather each factor of 2 in the numerator product (2k)(2k-2)... :

= (2) 2^k 2^k \pi^k \frac{k!}{(2k+1)!} = 2 \frac{(4 \pi)^k k!}{(2k+1)!} which is the required formula.

So to summarize:

V_{2k} = \frac{\pi^k}{k!} R^{2k}

V_{2k+1}= \frac{2 k! (4 \pi)^k}{(2k+1)!}R^{2k+1}
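Both closed forms can be cross-checked against the single Gamma-function formula V_n = \frac{\pi^{n/2}}{\Gamma[\frac{n}{2}+1]}R^n from the earlier post; a quick consistency sketch (function names are mine):

```python
import math

def v_even(k, R=1.0):
    # V_{2k} = pi^k / k! * R^(2k)
    return math.pi ** k / math.factorial(k) * R ** (2 * k)

def v_odd(k, R=1.0):
    # V_{2k+1} = 2 * k! * (4 pi)^k / (2k+1)! * R^(2k+1)
    return 2 * math.factorial(k) * (4 * math.pi) ** k / math.factorial(2 * k + 1) * R ** (2 * k + 1)

def v_gamma(n, R=1.0):
    # the single formula covering both parities
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1) * R ** n
```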

Note the following: lim_{k \rightarrow \infty} c_{k} = 0 . If this seems strange at first, think of it this way: imagine the n-ball being “inscribed” in an n-cube, which has hypervolume (2R)^n . Then consider the ratio \frac{2^n R^n}{c_n R^n} = \frac{2^n}{c_n} , which grows without bound; that is, the n-ball holds a smaller and smaller percentage of the hypervolume of the n-cube that it is inscribed in. Note that 2^n is the number of corners of the n-cube; one might say that the rounding gets more severe as the number of dimensions increases.

One also notes that for fixed radius R, lim_{n \rightarrow \infty} V_n = 0 as well.

There are other interesting aspects to this limit: for what dimension n does the maximum hypervolume occur? As you might expect: this depends on the radius involved; a quick glance at the hyper volume formulas will show why. For more on this topic, including an interesting discussion on this limit itself, see Dave Richardson’s blog Division by Zero. Note: his approach to finding the hyper volume formula is also elementary but uses polar coordinate integration as opposed to the method of cross sections.

May 4, 2015

Hitting the bat with the ball….the vector calculus integral theorems….

Filed under: calculus, editorial, vector calculus — Tags: , — collegemathteaching @ 4:43 pm

When I was a small kid, my dad would play baseball with me. He’d pitch the ball and try to hit my bat with the ball so I could think I was actually hitting the ball.

Well, fast forward 50 years to my vector calculus final exam; we are covering the “big integral” theorems.

Yeah, I know; it is \int_{\partial \Omega} \sigma = \int_{\Omega} d \sigma but, let’s just say that we aren’t up to differential forms as yet. 🙂

And so I am giving them classical Green’s Theorem, Stokes’ Theorem and Divergence Theorem problems….and everything in sight basically boils down to integrating a constant over a rectangle, box, sphere, ball or disk.

I am hitting their bats with the ball; I wonder how many will notice. 🙂

March 13, 2015

Moving from “young Turk” to “old f***”

Filed under: calculus, class room experiment, editorial, pedagogy — Tags: , , — collegemathteaching @ 9:09 pm

Today, one of our hot “young” (meaning: new here) mathematicians came to me and wanted to inquire about a course switch. He noted that his two-course load included two different courses (two preparations) and that I was teaching different sections of the same two courses…was I interested in doing a course swap so that he had only one preparation (he is teaching 8 hours) and I’d have only two?

I said: “when I was your age, I minimized the number of preparations. But at my age, teaching two sections of the same low level course makes me want to bash my head against the wall”. That is, by my second lesson of the same course in the same day, I just want to be about anywhere else on campus; I have no interest, no enthusiasm, etc.

I specifically REQUESTED 3 preparations to keep myself from getting bored; that is what 24 years of teaching this stuff does to you.

Every so often, someone has the grand idea to REFORM the teaching of (whatever) and the “reformers” usually get at least a few departments to go along with it.

The common thing said is that it gets professors to reexamine their teaching of (whatever).

But I wonder if many try these things….just out of pure boredom. Seriously, read the buzzwords of the “reform paper” I linked to; there is really nothing new there.

January 23, 2015

Making a math professor happy…

Filed under: calculus, class room experiment, elementary mathematics — Tags: , — collegemathteaching @ 10:28 pm

Calculus III: we are talking about polar curves. I give the usual lesson about how to graph r = sin(2 \theta) and r = sin(3 \theta) and give the usual rule: “if n is even, the graph of r = sin(n \theta) has 2n petals and if n is odd, it has n petals.”

Question: “does that mean it is impossible to have a graph with 6 petals then”? 🙂

Yes, one can get six petals with intersecting petals; one attempt that works is r = |sin(3 \theta)| . But you aren’t going to get it without a trick of some sort.


January 9, 2015

Bad notation drove me nuts….(and still does)

Filed under: advanced mathematics, calculus, topology — Tags: , — collegemathteaching @ 8:36 pm

I remember one of my first classes in algebraic topology. The professor was talking about how to prove that \pi_1(S^1) = Z . For those who might be rusty: I am talking about the fundamental group of the circle, which is a group structure put on the set of continuous maps of the circle into the circle, where the maps all send a fixed “base point” to a fixed base point, and two maps are equivalent if there is a homotopy (continuous deformation) between the two.

He remarked that he hoped “it was clear” that the circle was NOT simply connected.

That confused the heck out of me, because I had fallen into the trap of confusing the circle with a disk bounded by the circle!

Remember, for years, I had heard things like “the area of a circle is”…when in fact, the circle has area zero. The disk in the plane bounded by the circle has an area though.

So, when I teach, I try to point out bad or inconsistent notation. Example: sin^2(x) means (sin(x))^2 rather than sin(sin(x)) as the notation f^2 might suggest. But sin^{-1}(x) means arcsin(x) and NOT csc(x) = \frac{1}{sin(x)} . But…\frac{d^2 y}{dx^2} means \frac{d}{dx}(\frac{dy}{dx}) .
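The same inconsistency shows up in programming-language terms; a small Python illustration (the sample value x = 0.5 is just for demonstration):

```python
import math

x = 0.5

# sin^2(x) conventionally means (sin x)^2, not sin(sin x):
assert math.sin(x) ** 2 != math.sin(math.sin(x))

# but sin^{-1}(x) means arcsin(x), NOT 1/sin(x) = csc(x):
assert abs(math.asin(math.sin(x)) - x) < 1e-12   # asin really does invert sin here
assert abs(math.asin(x) - 1 / math.sin(x)) > 1   # and is nowhere near csc(x)
```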

And please, don’t even get me started about dx that appears in integrals. I remember a student asking me about that when we did “integration by substitution”: “we never used the dx for anything up until now!” he said…correctly.

What got me thinking about this
This blog describes many of the things that I am thinking about at the moment. Currently, I am thinking about “wild knots”, which are embeddings of the circle into 3 space which cannot be deformed (by a deformation of space) into a smooth embedding of the circle.

Here are two examples of knots that can’t be deformed into a smooth knot:

Now the term “knot” implies that an embedding is present; the space that is being embedded is a circle. Of course, one might confuse a particular embedding with the equivalence class of equivalent embeddings; some old time authors distinguished the two concepts. Most modern ones (myself included) don’t.

Now I am interested in knots that are formed by the embedding of two “arcs”, each of which is non-wildly embedded (not wild is called “tame”).

In the case of arcs, authors sometimes mean “the arc itself” and in other cases mean “the embedding of an arc” (e. g. “arcs in 3 space”). Yes, there are some arcs that are so pathologically embedded that there is no deformation of space that takes the arc to a smooth arc. Unfortunately, the term “arc” can mean “the underlying space” or “the embedding”.

This will be one focus of my research in 2015: I hope to show that a knot that has one wild point (roughly speaking: one point that can never be assigned a tangent vector) that is the union of two tamely embedded arcs is never determined by its complement. That might sound like gibberish, but in 2014 I proved that a knot that is an infinite product of knots (which are converging to a single wild point) has a complement which is homeomorphic to the complement of a knot that is wild at ALL of its points.

Of course THINKING that I can prove something and proving it are two different things. I remember spending two years trying to prove something that was false (I published the counter example) and, for part of my Ph.D. thesis, I attempted to prove something that turned out to be false; of course the counterexample came over 20 years after my attempt.

