UPDATED: **I’ve extended this discussion to the cases in which the limit functions are pathological and corrected an error.**

I was amused when I read this article:

My friends over at the popular blog Ask a Mathematician, Ask a Physicist did a great post a while ago addressing one of their readers’ questions: What is 0^0?

The reason this question is a head-scratcher is that our rules about how exponents work seem to yield two contradictory answers. On the one hand, we have a rule that zero raised to any power equals zero. But on the other hand, we have a rule that anything raised to the power of zero equals one. So which is it? Does 0^0 = 0 or does 0^0 = 1?

Well, I asked Google and according to their super-official calculator, the answer is unambiguous: […]

Indeed, the Mathematician at AAMAAP confirms, mathematicians in practice act as if 0^0 = 1. But why? Because it’s more convenient, basically. If we let 0^0=0, there are certain important theorems, like the Binomial Theorem, that would need to be rewritten in more complicated and clunky ways. Note that it’s not even the case that letting 0^0=0 would contradict our theorems (if so, we could perhaps view that as a disproof of the statement 0^0=0). It’s just that it would make our theorems less elegant. Says the mathematician:

“There are some further reasons why using 0^0 = 1 is preferable, but they boil down to that choice being more useful than the alternative choices, leading to simpler theorems, or feeling more “natural” to mathematicians. The choice is not “right”, it is merely nice.”
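The Binomial Theorem point can be made concrete. Expanding $(x+y)^n = \sum_{k=0}^{n} \binom{n}{k}x^{n-k}y^k$ at $y = 0$ produces the $k = 0$ term $\binom{n}{0}x^n \cdot 0^0$, which equals $x^n$ only under the convention $0^0 = 1$. A quick sketch in Python (whose `**` operator happens to adopt that convention; the helper function here is just for illustration):

```python
from math import comb

def binomial_expansion(x, y, n):
    # Sum of C(n, k) * x^(n-k) * y^k for k = 0..n.
    # When y == 0, the k = 0 term is C(n, 0) * x**n * 0**0, which
    # only equals x**n because Python evaluates 0**0 as 1.
    return sum(comb(n, k) * x**(n - k) * y**k for k in range(n + 1))

print(0**0)                         # Python's convention: 1
print(binomial_expansion(2, 0, 5))  # 2**5 = 32, as the theorem demands
print(binomial_expansion(2, 3, 5))  # (2 + 3)**5 = 3125
```

If `0**0` evaluated to `0` instead, the `y = 0` case of the expansion would silently lose its leading term.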

I am curious as to who this “mathematician” is and how well the author of the above article understood what he heard. Here is a fact: if $0^0 \neq 1$, then many basic theorems of calculus would be wrong.

Of course, it is an elementary calculus exercise to show that $\lim_{x \rightarrow 0^+} x^x = 1$ and the graph of $y = x^x$ seems to confirm this:

(graph is a screenshot of a SCILAB graph)

But one might ask: what if the exponent approaches zero at a greater or lesser rate than the base? For example, what are $\lim_{x \rightarrow 0^+} (x^2)^x$ or $\lim_{x \rightarrow 0^+} x^{x^2}$?
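Before doing the theory, a quick numerical experiment (a Python sketch rather than SCILAB; the three variants shown are illustrative) suggests the limit is $1$ in every such case:

```python
# Evaluate x^x and two variants where base and exponent
# vanish at different rates, as x -> 0 from the right.
for x in [0.1, 0.01, 0.001, 0.0001]:
    print(f"x={x}:  x^x = {x**x:.6f}   "
          f"(x^2)^x = {(x*x)**x:.6f}   "
          f"x^(x^2) = {x**(x*x):.6f}")
```

All three columns creep toward $1$ as $x$ shrinks, regardless of which of the base or exponent vanishes faster.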

**UPDATE** The following is designed for people who teach calculus; the pace might be too quick for a student who is just learning. At the end of the post I’ll put the details. Back to the post:

Let’s take a look. Suppose $f$ and $g$ are both ~~differentiable~~ analytic at $0$, with $\lim_{x \rightarrow 0^+} f(x) = 0$, $\lim_{x \rightarrow 0^+} g(x) = 0$, and $f(x) > 0$ for small $x > 0$.

Then $\lim_{x \rightarrow 0^+} f(x)^{g(x)} = \lim_{x \rightarrow 0^+} e^{g(x)\ln(f(x))}$, and $\lim_{x \rightarrow 0^+} \frac{\ln(f(x))}{\frac{1}{g(x)}} = \lim_{x \rightarrow 0^+} \frac{\frac{f'(x)}{f(x)}}{-\frac{g'(x)}{(g(x))^2}}$ by an elementary application of L’Hopital’s rule. ~~So we should examine what is in the exponent:~~

$-\frac{f'(x)}{g'(x)} \cdot \frac{g(x)}{f(x)} \cdot g(x)$.

~~The product of the ratios $\frac{f'(x)}{g'(x)} \cdot \frac{g(x)}{f(x)}$ will prove to be the key.~~

~~Now use the definition of derivative and the fact that both $f$ and $g$ vanish at zero to simplify this product:~~

~~$\lim_{x \rightarrow 0^+} \frac{g(x)}{f(x)} = \lim_{x \rightarrow 0^+} \frac{\frac{g(x)-g(0)}{x-0}}{\frac{f(x)-f(0)}{x-0}} = \frac{g'(0)}{f'(0)}$.~~

~~Hence~~

~~$\lim_{x \rightarrow 0^+} -\frac{f'(x)}{g'(x)} \cdot \frac{g(x)}{f(x)} \cdot g(x) = -\frac{f'(0)}{g'(0)} \cdot \frac{g'(0)}{f'(0)} \cdot 0 = 0$~~

~~Hence $\lim_{x \rightarrow 0^+} f(x)^{g(x)} = e^{0} = 1$~~

Note: can you spot the error in my deleted “proof”?

I’ll do it right this time:

$\lim_{x \rightarrow 0^+} f(x)^{g(x)} = \lim_{x \rightarrow 0^+} e^{g(x)\ln(f(x))}$ and, by L’Hopital’s rule, $\lim_{x \rightarrow 0^+} g(x)\ln(f(x)) = \lim_{x \rightarrow 0^+} \frac{\ln(f(x))}{\frac{1}{g(x)}} = \lim_{x \rightarrow 0^+} -\frac{f'(x)}{g'(x)} \cdot \frac{g(x)}{f(x)} \cdot g(x)$.

The product of the ratios $\frac{f'(x)}{g'(x)} \cdot \frac{g(x)}{f(x)}$ will prove to be the key. Now exploit the fact that both $f$ and $g$ are analytic at zero and have a Taylor series expansion: say $f(x) = a_m x^m + a_{m+1}x^{m+1} + \cdots$ and $g(x) = b_n x^n + b_{n+1}x^{n+1} + \cdots$

Then $f'(x) = m a_m x^{m-1} + (m+1)a_{m+1}x^{m} + \cdots$ and $g'(x) = n b_n x^{n-1} + (n+1)b_{n+1}x^{n} + \cdots$

Now look at the ratio $\frac{f'(x)g(x)}{g'(x)f(x)}$.

This is easier to see if we write the ratio out term by term: the numerator of the fraction is:

$f'(x)g(x) = m a_m b_n x^{m+n-1} + \left( (m+1)a_{m+1}b_n + m a_m b_{n+1} \right)x^{m+n} + \cdots$

The denominator is:

$g'(x)f(x) = n a_m b_n x^{m+n-1} + \left( (n+1)b_{n+1}a_m + n b_n a_{m+1} \right)x^{m+n} + \cdots$

Note: we can assume that there is no constant term in either Taylor expansion, since $f(0) = g(0) = 0$.

Now we can factor out $x^{m+n-1}$ from the numerator and $x^{m+n-1}$ from the denominator to obtain $\frac{m a_m b_n + \left( (m+1)a_{m+1}b_n + m a_m b_{n+1} \right)x + \cdots}{n a_m b_n + \left( (n+1)b_{n+1}a_m + n b_n a_{m+1} \right)x + \cdots}$, which equals $\frac{m}{n}$ at $x = 0$ (of course we can assume that the leading coefficients $a_m$ and $b_n$ are not zero).

Therefore $\lim_{x \rightarrow 0^+} g(x)\ln(f(x)) = -\frac{m}{n} \cdot \lim_{x \rightarrow 0^+} g(x) = 0$ as required and the result follows: $\lim_{x \rightarrow 0^+} f(x)^{g(x)} = e^{0} = 1$.
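As a numerical sanity check of the key ratio (a Python sketch; $f(x) = x^3 + x^4$ and $g(x) = x^2 - x^5$ are hypothetical example choices with leading orders $3$ and $2$), $\frac{f'(x)g(x)}{g'(x)f(x)}$ should tend to the ratio of leading orders, $\frac{3}{2}$, as $x \rightarrow 0^+$:

```python
def f(x):  return x**3 + x**4        # leading order 3
def fp(x): return 3*x**2 + 4*x**3    # f'
def g(x):  return x**2 - x**5        # leading order 2
def gp(x): return 2*x - 5*x**4       # g'

# The ratio f'(x) g(x) / (g'(x) f(x)) should tend to 3/2.
for x in [0.1, 0.01, 0.001]:
    print(x, fp(x) * g(x) / (gp(x) * f(x)))
```

Note that $f'(0) = g'(0) = 0$ here, so the deleted argument via the definition of the derivative would have divided by zero; the Taylor-series argument handles this case without trouble.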

**Conclusion:** the speed of approach to zero doesn’t really matter, so long as the functions are analytic.
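The conclusion is easy to spot-check numerically. A Python sketch with the hypothetical choices $f(x) = \sin^2 x$ (vanishing to second order) and $g(x) = x^3$ (third order), both analytic at $0$:

```python
import math

# f and g analytic at 0, both -> 0, with f > 0 on (0, delta):
f = lambda x: math.sin(x)**2   # vanishes to second order
g = lambda x: x**3             # vanishes to third order

for x in [0.1, 0.01, 0.001]:
    print(x, f(x)**g(x))       # creeps toward 1 despite the mismatched orders
```

Swapping in other analytic pairs that vanish at $0$ changes the rate of convergence but not the limit, exactly as the proof predicts.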

**UPDATE:** A non-analytic case:

Of course, we might have the case when, say, $f$ approaches zero but fails to be analytic at $0$. Then interesting things can happen.

Here is a graph which shows $y = e^{-\frac{1}{x^2}}$ and $y = \left(e^{-\frac{1}{x^2}}\right)^{x^2}$. The above proof doesn’t work as $f(x) = e^{-\frac{1}{x^2}}$ is not analytic at $x = 0$; indeed every derivative of $f$ vanishes at zero, but the resulting Taylor series represents the function at $x = 0$ only. In fact, in this case the exponent is $x^2 \ln\left(e^{-\frac{1}{x^2}}\right) = x^2 \cdot \left(-\frac{1}{x^2}\right) = -1$, which approaches $-1$ as $x$ approaches zero. Hence our limit is $e^{-1}$ in this case…but we had to use a somewhat pathological function.

Note: we can get different limits by playing with the example. In fact:

This is a Matlab generated example with the same base $e^{-\frac{1}{x^2}}$ raised to exponents that vanish at different rates. Note the struggle with round-off error.
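To illustrate numerically (a Python sketch rather than Matlab; the exponents $x$, $x^2$, $x^3$ are my illustrative choices for “playing with the example”): with base $e^{-1/x^2}$ the limit can be $0$, $e^{-1}$, or $1$ depending on how quickly the exponent vanishes. In fact $\left(e^{-1/x^2}\right)^{x^2} = e^{-1}$ identically for $x \neq 0$.

```python
import math

# Not analytic at 0: every derivative of the base vanishes there.
base = lambda x: math.exp(-1 / x**2)

for x in [0.5, 0.2, 0.1]:
    print(f"x={x}: exponent x   -> {base(x)**x:.6f}")       # heads to 0
    print(f"x={x}: exponent x^2 -> {base(x)**(x*x):.6f}")   # e^-1 = 0.367879...
    print(f"x={x}: exponent x^3 -> {base(x)**(x**3):.6f}")  # heads to 1
```

Push $x$ much smaller and `base(x)` underflows to `0.0`, which is the round-off struggle visible in the Matlab plot.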