College Math Teaching

April 26, 2011

On “0 to the 0’th power”: UPDATED

Filed under: calculus, derivatives, media, popular mathematics — collegemathteaching @ 2:26 am

UPDATED: I’ve extended this discussion to the cases in which the limit functions are pathological and corrected an error.

I was amused when I read this article:

My friends over at the popular blog Ask a Mathematician, Ask a Physicist did a great post a while ago addressing one of their readers’ questions: What is 0^0?

The reason this question is a head-scratcher is that our rules about how exponents work seem to yield two contradictory answers. On the one hand, we have a rule that zero raised to any power equals zero. But on the other hand, we have a rule that anything raised to the power of zero equals one. So which is it? Does 0^0 = 0 or does 0^0 = 1?

Well, I asked Google and according to their super-official calculator, the answer is unambiguous: […]

Indeed, the Mathematician at AAMAAP confirms, mathematicians in practice act as if 0^0 = 1. But why? Because it’s more convenient, basically. If we let 0^0=0, there are certain important theorems, like the Binomial Theorem, that would need to be rewritten in more complicated and clunky ways. Note that it’s not even the case that letting 0^0=0 would contradict our theorems (if so, we could perhaps view that as a disproof of the statement 0^0=0). It’s just that it would make our theorems less elegant. Says the mathematician:

“There are some further reasons why using 0^0 = 1 is preferable, but they boil down to that choice being more useful than the alternative choices, leading to simpler theorems, or feeling more “natural” to mathematicians. The choice is not “right”, it is merely nice.”

I am curious as to who this “mathematician” is and how well the author of the above article understood what he heard. Here is a fact: if 0^0 \neq 1, then many basic theorems of calculus would be wrong.

Of course, it is an elementary calculus exercise to show that lim_{x \rightarrow 0+} x^x = 1 and the graph seems to confirm this:

(graph is a screenshot of a SCILAB graph)

But one might ask: what if the exponent approaches zero at a greater or lesser rate than the base? For example, what are: lim_{x \rightarrow 0+} (ln(x+1))^{x} or lim_{x \rightarrow 0+} (x)^{ln(x+1)} ?
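Before doing any calculus, a quick numerical check is reassuring. Here is a minimal Python sketch (the plots in this post were made with SCILAB and Matlab; Python is simply a convenient stand-in) tabulating all three expressions for small x; each appears to approach 1:

# Python sketch: tabulate x^x, (ln(x+1))^x and x^(ln(x+1)) for small x > 0.
from math import log
for x in (1e-1, 1e-2, 1e-4, 1e-8):
    print(x, x**x, log(x + 1)**x, x**log(x + 1))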

UPDATE: The following is designed for people who teach calculus; the pace might be too quick for a student who is just learning. I’ll put the details at the end of the post. Back to the post:

Let’s take a look. Suppose f and g are both analytic at 0 and f(0) = g(0) = 0.

Then lim_{x \rightarrow 0+} f(x)^{g(x)} = exp(lim_{x \rightarrow 0+} -((g(x))^2 f'(x))/(f(x)g'(x))) by an elementary application of L’Hopital’s rule. So we should examine the limit of the quantity in the exponent:
-((g(x))^2 f'(x))/(f(x)g'(x)) = -g(x) (g(x)/f(x))(f'(x)/g'(x)) .

The product of the ratios (g(x)/f(x))(f'(x)/g'(x)) will prove to be the key.

Now use the definition of derivative and the fact that both f and g vanish at zero to simplify this product:
(f'(x)/g'(x)) = lim_{x \rightarrow 0}((f(x)-f(0))/(x-0))/((g(x)-g(0))/(x-0)) = lim_{x \rightarrow 0}(f(x)/g(x)) .
Hence lim_{x \rightarrow 0+}-((g(x))^2 f'(x))/(f(x)g'(x)) =
lim_{x \rightarrow 0+} -g(x) (g(x)/f(x))(f(x)/g(x)) =  lim_{x \rightarrow 0+} -g(x) = 0

Hence lim_{x \rightarrow 0+} f(x)^{g(x)} = exp(0) = 1

Note: can you spot the error in my deleted “proof”?

I’ll do it right this time:

-((g(x))^2 f'(x))/(f(x)g'(x)) = -g(x) (g(x)/f(x))(f'(x)/g'(x)) .

The product of the ratios (g(x)/f(x))(f'(x)/g'(x)) will prove to be the key. Now exploit the fact that both f and g are analytic at zero and have a Taylor series expansion: say f(x) = \Sigma^{\infty}_{k=m}a_kx^k and g(x) = \Sigma^{\infty}_{j=n}b_jx^j
Then f'(x) = \Sigma^{\infty}_{k=m}ka_kx^{k-1} and g'(x) = \Sigma^{\infty}_{j=n}jb_jx^{j-1}
Now look at the ratio (g(x)/f(x))(f'(x)/g'(x)) .
This is easier to see if we write the ratio out term by term: the numerator of the fraction is:
(b_n x^n + b_{n+1} x^{n+1} + b_{n+2} x^{n+2}...)(m a_m x^{m-1} + (m+1) a_{m+1} x^{m}...)
The denominator is: (n b_n x^{n-1} + (n+1) b_{n+1} x^{n}...)(a_m x^{m} + a_{m+1} x^{m+1}...)
Note: since f(0) = g(0) = 0, there is no constant term in either Taylor expansion; that is, m \geq 1 and n \geq 1.
Now we can factor x^{n+m-1} = x^n x^{m-1} out of the numerator and x^{n+m-1} = x^{n-1} x^m out of the denominator to obtain {(b_n + b_{n+1}x +..)(m a_m + (m+1) a_{m+1}x+..)}/{(n b_n + (n+1) b_{n+1}x +..)( a_m + a_{m+1}x+..)}, which equals (m b_n a_m)/(n b_n a_m) = m/n at x = 0 (of course we can assume that b_n, a_m \neq 0).
Therefore lim_{x \rightarrow 0+}-g(x) (g(x)f'(x))/(g'(x)f(x)) = 0 as required, since the second factor tends to the finite limit m/n while g(x) tends to 0, and the result follows.

Conclusion: the speed of approach to zero doesn’t really matter, so long as the functions are analytic.
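To illustrate the conclusion numerically, here is a minimal Python sketch (the particular analytic pairs below are my own choices) comparing functions that vanish at 0 at different rates; each power still heads to 1:

# Python sketch: analytic f, g with f(0) = g(0) = 0, vanishing at different rates.
from math import sin
for x in (1e-1, 1e-2, 1e-3, 1e-4):
    print(x, (x**3)**x, sin(x)**(x**2), (x**2)**sin(x))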

UPDATE: A non-analytic case:

Of course, we might have the case when, say, f approaches zero but fails to be analytic. Then interesting things can happen.
Here is a graph which shows f(x) = exp(-1/x^2) and g(x) = ln(1+x). The above proof doesn’t work as f(x) is not analytic at x = 0; indeed f'(0) = 0 (in fact every derivative of f vanishes at 0), so the Taylor series centered at 0 is identically zero and represents f at x = 0 only. In fact, in this case g(x)ln(f(x)) = ln(x+1)(-1/x^2), which approaches -\infty as x approaches zero from the right. Hence f(x)^{g(x)} \rightarrow 0 in this case…but we had to use a somewhat pathological function.

Note: we can get different limits by playing with the exp(-1/x^2) example. In fact:

\lim_{x\rightarrow 0+}(\exp (-\frac{1}{x^{k}}))^{x^{m}} = \lim_{x\rightarrow 0+}\exp (-\frac{x^{m}}{x^{k}}) = \left\{ \begin{array}{c} \exp (0)=1\text{ if }m>k \\ \exp (-1)=e^{-1}\text{ if }m=k \\ \exp (-\infty )=0\text{ if }m<k \end{array} \right\}

This is a Matlab-generated example with exp(-1/x^2) raised to the exponents x, x^2, x^3. Note the struggle with round-off error.
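A rough Python version of that experiment is below (a sketch only; the sample points are chosen to avoid the region where exp(-1/x^2) underflows, which is the source of the round-off struggle in the plot). With k = 2, the exponents x, x^2, x^3 should head toward 0, e^{-1} and 1 respectively:

# Python sketch: (exp(-1/x^2))^(x^m) for m = 1, 2, 3; expected limits 0, 1/e, 1.
from math import exp
for x in (0.5, 0.2, 0.1, 0.05):
    f = exp(-1.0 / x**2)        # the non-analytic factor
    print(x, f**x, f**(x**2), f**(x**3))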


April 8, 2011

A possible way to explain the contrapositive

Filed under: class room experiment, logic, mathematical ability, mathematics education, media — collegemathteaching @ 2:16 am

Mathematics Education

This post at Schneier’s security blog is very interesting. The gist of the post is this: do you remember the simple logical rule that “P implies Q” is equivalent to “not Q implies not P”? Example: the statement “green apples are sour” means that if you bite an apple and it isn’t sour, then it can’t be green. In my opinion, there is nothing hard about this. We use this principle all of the time in mathematics! As an example, consider how we prove that there is no largest prime: suppose that there were a largest prime q_n, the (finitely many) primes being indexed as q_1, q_2, ..., q_n. Now form the number p = q_1q_2q_3 \cdots q_n + 1. Now p cannot be prime because it is bigger than q_n. So it is composite and therefore has prime factors. But this is impossible, because no q_k can divide p: each q_k divides p - 1, and a prime dividing both p and p - 1 would divide their difference, which is 1. QED.
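The arithmetic fact driving the contradiction (each q_k divides p - 1, so none of them can divide p) is easy to check by machine; here is a minimal Python sketch, using the first few primes as a stand-in for the hypothetical finite list:

# Python sketch: the product of a finite list of primes, plus one,
# leaves remainder 1 when divided by each prime in the list.
primes = [2, 3, 5, 7, 11, 13]
p = 1
for q in primes:
    p *= q
p += 1
print(p, [p % q for q in primes])   # 30031 [1, 1, 1, 1, 1, 1]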

The whole structure of the proof by contradiction is the principle that “p implies q” is equivalent to “not q implies not p”. Here the q is “there is no biggest prime” and “suppose there IS a biggest prime” is the “not q”, which ended up implying “not p”, where p is the true statement that consecutive integers (here q_1q_2 \cdots q_n + 1 and q_1q_2 \cdots q_n) are relatively prime.
No mathematician would have a problem using that bit of logic.

But evidently mathematicians are in the minority.

Consider this experiment:

Consider the Wason selection task. Subjects are presented with four cards next to each other on a table. Each card represents a person, with each side listing some statement about that person. The subject is then given a general rule and asked which cards he would have to turn over to ensure that the four people satisfied that rule. For example, the general rule might be, “If a person travels to Boston, then he or she takes a plane.” The four cards might correspond to travelers and have a destination on one side and a mode of transport on the other. On the side facing the subject, they read: “went to Boston,” “went to New York,” “took a plane,” and “took a car.”

So, which cards need to be turned over? They are “went to Boston” and “took a car”: the Boston card might reveal that the traveler did not take a plane, and the car card might reveal “Boston” as the destination, which would also violate the rule (this is exactly the contrapositive: not taking a plane implies not going to Boston). There is nothing in the rule about going to New York, and nothing says that Boston is the only place you can fly to, so the other two cards are irrelevant. Evidently, this problem is hard for most people.
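One way to see the mechanics is to encode the selection rule directly: a card must be turned over exactly when its visible face shows the antecedent P or the negated consequent not-Q, since only those faces can hide a violation of “P implies Q.” Here is a minimal Python sketch (the card labels and variable names are mine, chosen to mirror the example above):

# Python sketch: which cards must be turned over to check the rule
# "went to Boston (P) implies took a plane (Q)"?
cards = ["went to Boston", "went to New York", "took a plane", "took a car"]
P = "went to Boston"        # antecedent of the rule
not_Q = "took a car"        # negation of the consequent ("took a plane")
must_turn = [card for card in cards if card in (P, not_Q)]
print(must_turn)            # ['went to Boston', 'took a car']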
But here is where this gets interesting: if the exact same logical problem is phrased as a “fairness rule,” say “for you to play in a game, you must attend practice,” then the problem becomes very easy for people to solve! Schneier concludes:

Our brains are specially designed to deal with cheating in social exchanges. The evolutionary psychology explanation is that we evolved brain heuristics for the social problems that our prehistoric ancestors had to deal with. Once humans became good at cheating, they then had to become good at detecting cheating — otherwise, the social group would fall apart.

So, maybe I can use the fact that people seem to understand this rule in this setting when it comes to teaching this point of logic?

April 3, 2011

Infinite Series: the Root and Ratio tests

Filed under: advanced mathematics, analysis, calculus, infinite series, series — collegemathteaching @ 3:29 am

This post is written mostly for those who are new to teaching calculus rather than for students learning calculus for the first time. Experienced calculus teachers and those whose analysis background is still fresh will likely be bored. 🙂

The setting will be series \Sigma^\infty_{k=1} a_k with a_k > 0. We will use the usual notion of convergence; that is, the series converges if the sequence of partial sums converges. (If that last statement puzzles you: there are other, non-standard, notions of convergence.)

I’ll give the usual statements of the root test and the ratio test, given \Sigma^\infty_{k=1} a_k with a_k > 0

Root Test
Suppose lim_{k \rightarrow \infty}(a_k)^{\frac{1}{k}} = c . If c >1 the series diverges. If c < 1 the series converges. If c = 1 the test is inconclusive.

Ratio Test
Suppose lim_{k \rightarrow \infty} a_{k+1}/a_k = c . If c >1 the series diverges. If c < 1 the series converges. If c = 1 the test is inconclusive.

Quick examples of how these tests are used
Example one: show that \Sigma^{\infty}_{k=1} (x/k)^{k} converges for all x . Apply the root test and note lim_{k \rightarrow \infty} ((x/k)^{k})^{1/k} = lim_{k \rightarrow \infty} (x/k) = 0 for all x \geq 0, hence the series converges absolutely for all x .

Example two: show that \Sigma^{\infty}_{k=1} (x^k/k!) converges for all x . Consider lim_{k \rightarrow \infty} (x^{k+1}/(k+1)!)/((x^k)/k!) = lim_{k \rightarrow \infty} x/(k+1) = 0 < 1 for all x \geq 0, hence the series converges absolutely for all x .
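For a numerical feel, here is a minimal Python sketch of the two computations (the sample value x = 10 is my own arbitrary choice); both test quantities clearly head to 0:

# Python sketch: root-test quantity for (x/k)^k and ratio-test quantity for x^k/k!.
x = 10.0
for k in (1, 5, 10, 50, 100):
    root_quantity = ((x / k)**k)**(1.0 / k)    # simplifies to x/k
    ratio_quantity = x / (k + 1)               # (x^(k+1)/(k+1)!) divided by (x^k/k!)
    print(k, root_quantity, ratio_quantity)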

However, these tests, as usually taught, are often more limited than they need to be. For example, if one considers the series \Sigma^{\infty}_{k=1} (|sin(k)|/2)^{k}, the root test, as stated, doesn't apply since lim_{k \rightarrow \infty} ((|sin(k)|/2)^{k})^{1/k} = lim_{k \rightarrow \infty} |sin(k)|/2 fails to exist. But ((|sin(k)|/2)^{k})^{1/k} = |sin(k)|/2 \leq 1/2 for every k, so the series is dominated by the convergent geometric series \Sigma^{\infty}_{k=1} (1/2)^k and therefore converges.
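A minimal Python sketch makes this visible: the k-th roots equal |sin(k)|/2, which oscillates and never settles on a limit, yet never exceeds 1/2, and the partial sums stay below the geometric bound:

# Python sketch: a_k = (|sin k|/2)^k; the k-th roots oscillate but stay <= 1/2.
from math import sin
roots = [abs(sin(k)) / 2 for k in range(1, 11)]          # (a_k)^(1/k)
partial_sum = sum((abs(sin(k)) / 2)**k for k in range(1, 200))
print(roots)
print(partial_sum)    # bounded by the sum of (1/2)^k, which is 1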

There is also a common misconception that the root test and the ratio tests are equivalent. They aren’t; in fact, we’ll show that if a series passes the ratio test, it will also pass the root test but that the reverse is false. We’ll also provide an easy to understand stronger version of these two tests. The stronger versions should be easily within the grasp of the best calculus students and within the grasp of beginning analysis/advanced calculus students.

Note: there is nothing original here; I can recommend the books Counterexamples in Analysis by Bernard R. Gelbaum and Theory and Application of Infinite Series by Konrad Knopp for calculus instructors who aren’t analysts. Much of what I say can be found there in one form or another.

The proofs of the tests and what can be learned
Basically, both proofs are merely basic comparisons with a convergent geometric series.

Proof of the root test (convergence)
If lim_{k \rightarrow \infty} (a_k)^{1/k} = c with c < 1 then there exists some d <1 and some index N such that for all n > N, {a_n}^{1/n}  < d . Hence for all n> N, {a_n} < d^n where d < 1 . Therefore the series converges by a direct comparison with the convergent geometric series \Sigma^{\infty}_{k = N} d^{k} .

Note the following: requiring lim_{k \rightarrow \infty} (a_k)^{1/k} to exist is overkill; what is important is that, for some index N and some c, sup_{k>N} (a_k)^{1/k} \leq c < 1. This is enough for the comparison with the geometric series to work. In fact, in the language of analysis, we can replace the limit condition with limsup (a_k)^{1/k} = c < 1 . If you are fuzzy about limit superior, this is a good reference.

In fact, we can weaken the hypothesis a bit further. Since the convergence of a series really depends on the convergence of the “tail” of the series, we really only need: for some index N , limsup_{k} (a_{N+k})^{1/k} = c < 1 , where k \in \{0, 1, 2,...\} . This point may seem pedantic but we’ll use this in just a bit.

Note: we haven’t talked about divergence; one can work out a stronger test for divergence by using limit inferior.

Proof of the ratio test (convergence)
We’ll prove the stronger version of the ratio test: if limsup a_{k+1}/a_k = c < 1 then there is an index N and some number d < 1 such that for all n\geq   N, a_{n+1}/a_n < d.
Simple algebra implies that a_{n+1} < (a_n)d and a_{n+2} < (a_{n+1})d <  (a_n)d^2 and in general a_{n+j} < (a_n)d^j . Hence the series \Sigma ^{\infty}_{k = N} a_k is dominated by (a_N)(\Sigma ^{\infty}_{k = 0} d^k) which is a convergent geometric series.

Comparing the root and the ratio tests
Consider the convergent series 1/2 + (1/3)^2 + (1/2)^3 + (1/3)^4 + ... + (1/2)^{2k-1} + (1/3)^{2k} + ... .
Then clearly limsup (a_{k})^{1/k} = 1/2 < 1, hence the root test works. But the ratio test yields the following:
(a_{2k+1})/a_{2k} = (1/2)^{2k+1}/(1/3)^{2k} = (3/2)^{2k}/2, which tends to infinity as k goes to infinity. Note: since the limit of the ratios does not exist, the traditional ratio test doesn’t apply. The limit inferior of the ratios is zero, so a strengthened ratio test doesn’t imply divergence either.
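Here is a minimal Python sketch of this example (the indexing below puts powers of 1/2 in the odd slots and powers of 1/3 in the even slots, as above): the k-th roots never exceed 1/2, while consecutive ratios swing between values near 0 and values that blow up:

# Python sketch: a_k = (1/2)^k for odd k and (1/3)^k for even k.
def a(k):
    return 0.5**k if k % 2 == 1 else (1.0 / 3.0)**k

for k in range(1, 10):
    print(k, a(k)**(1.0 / k), a(k + 1) / a(k))   # k-th root <= 1/2; ratios oscillate wildly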

So the root test is not equivalent to the ratio test.

But suppose the ratio test yields convergence; that is:
limsup a_{k+1}/a_k = c < 1 . Then by the same arguments used in the proof:
a_{N+j} < (a_N)d^j . Then we can take j’th roots of both sides and note: (a_{N+j})^{1/j} < d(a_N)^{1/j}. Since (a_N)^{1/j} \rightarrow 1 as j \rightarrow \infty, this gives limsup_{j} (a_{N+j})^{1/j} \leq d < 1, hence the weakened hypothesis of the root test is met.

That is, the root test is a stronger test than the ratio test, though, of course, it is sometimes more difficult to apply.

We’ll state the tests in the stronger form for convergence; the base assumption is that \Sigma a_{k} has positive terms:

Root test: if there exists an index N such that limsup_{j} (a_{N+j})^{1/j} \leq c < 1 then the series converges.

Ratio test: if limsup (a_{k+1})/a_{k} \leq c < 1 then the series converges.

It is a routine exercise to restate these tests in stronger form for divergence.
