College Math Teaching

July 14, 2020

An alternative to trig substitution, sort of..

Ok, just for fun: \int \sqrt{1+x^2} dx =

The usual approach is to use x =tan(t), dx =sec^2(t) dt which transforms this into the dreaded \int sec^3(t) dt integral, which requires a double integration by parts.
Is there a way out? I think so, though the price one pays is a trickier conversion back to x.

Let’s try x =sinh(t) \rightarrow dx = cosh(t) dt so upon substituting we obtain \int |cosh(t)|cosh(t) dt and noting that cosh(t) > 0 always:

\int cosh^2(t)dt Now this can be integrated by parts: let u=cosh(t), dv = cosh(t) dt \rightarrow du =sinh(t) dt, v = sinh(t)

So \int cosh^2(t)dt = cosh(t)sinh(t) -\int sinh^2(t)dt but this easily reduces to:

\int cosh^2(t)dt = cosh(t)sinh(t) -\int (cosh^2(t)-1) dt \rightarrow 2\int cosh^2(t)dt  = cosh(t)sinh(t) + t + C

Dividing by 2: \int cosh^2(t)dt = \frac{1}{2}(cosh(t)sinh(t)+t)+C

That was easy enough.

But we now have the conversion of the first term back to x: \frac{1}{2}cosh(t)sinh(t) \rightarrow \frac{1}{2}x \sqrt{1+x^2}

So far, so good. But what about t \rightarrow   arcsinh(x) ?

Write: sinh(t) = \frac{e^{t}-e^{-t}}{2} =  x \rightarrow e^{t}-e^{-t} =2x \rightarrow e^{t}-2x -e^{-t} =0

Now multiply both sides by e^{t} to get e^{2t}-2xe^t -1 =0 and use the quadratic formula to get e^t = \frac{1}{2}(2x\pm \sqrt{4x^2+4}) \rightarrow e^t = x \pm \sqrt{x^2+1}

We need e^t > 0 so e^t = x + \sqrt{x^2+1} \rightarrow t = ln|x + \sqrt{x^2+1}| and that is our integral:

\int \sqrt{1+x^2} dx = \frac{1}{2}x \sqrt{1+x^2} + \frac{1}{2} ln|x + \sqrt{x^2+1}| + C
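
(If you want to check the answer with a computer algebra system, here is a quick sketch using Python’s sympy — assuming sympy is available; the variable names are mine — that differentiates the result and confirms it matches the integrand.)

import sympy as sp

x = sp.symbols('x', real=True)
F = x*sp.sqrt(1 + x**2)/2 + sp.log(x + sp.sqrt(x**2 + 1))/2   # the antiderivative found above
# differentiating F should give back the integrand sqrt(1 + x^2)
print(sp.simplify(sp.diff(F, x) - sp.sqrt(1 + x**2)))          # expect 0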

I guess that this isn’t that much easier after all.

July 12, 2020

Logarithmic differentiation: do we not care about domains anymore?

Filed under: calculus, derivatives, elementary mathematics, pedagogy — collegemathteaching @ 11:29 pm

The introduction is for a student who might not have seen logarithmic differentiation before. (And yes, this technique is extensively used..for example, it is used in the “maximum likelihood function” calculation frequently encountered in statistics.)

Suppose you are given, say, f(x) =sin(x)e^x(x-2)^3(x+1) and you are told to calculate the derivative.

Calculus texts often offer the technique of logarithmic differentiation: write ln(f(x)) = ln(sin(x)e^x(x-2)^3(x+1)) = ln(sin(x)) + x + 3ln(x-2) + ln(x+1)
Now differentiate both sides: (ln(f(x)))' = \frac{f'(x)}{f(x)}  = \frac{cos(x)}{sin(x)} + 1 + \frac{3}{x-2} + {1 \over x+1}

Now multiply both sides by f(x) to obtain

f'(x) = f(x)(\frac{cos(x)}{sin(x)} + 1 + \frac{3}{x-2} + {1 \over x+1}) =

sin(x)e^x(x-2)^3(x+1)(\frac{cos(x)}{sin(x)} + 1 + \frac{3}{x-2} + {1 \over x+1})

And this is correct…sort of. Why I say sort of: what happens at, say, x = 0 ? The derivative certainly exists there but what about that second factor? Yes, the sin(x) gets cancelled out by the first factor, but AS WRITTEN, there is an oh-so-subtle problem with domains.

You can substitute x \in \{ 0, \pm \pi, \pm 2\pi, ... \} only after simplifying ..which one might see as a limit process.
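
Here is one way to see this concretely — a minimal sympy sketch (assuming sympy is available), comparing the derivative computed directly with the logarithmic-differentiation form at x = 0 :

import sympy as sp

x = sp.symbols('x')
f = sp.sin(x) * sp.exp(x) * (x - 2)**3 * (x + 1)
direct = sp.diff(f, x)                                             # derivative computed directly
log_form = f * (sp.cos(x)/sp.sin(x) + 1 + 3/(x - 2) + 1/(x + 1))   # the "as written" form

print(direct.subs(x, 0))            # the derivative exists at x = 0
print(sp.limit(log_form, x, 0))     # same value, but recovered only as a limit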

But let’s stop and take a closer look at the whole process: we started with f(x) = g_1(x) g_2(x) ...g_n(x) and then took the log of both sides. Where is the log defined? And when does ln(ab) = ln(a) + ln(b) ? You got it: this only works when a > 0, b > 0 .

So, on the face of it, ln(g_1 (x) g_2(x) ...g_n(x)) = ln(g_1(x) ) + ln(g_2(x) ) + ...ln(g_n(x)) is justified only when each g_i(x) > 0 .

Why can we get away with ignoring all of this, at least in this case?

Well, here is why:

1. If f(x) \neq 0 is a differentiable function then \frac{d}{dx} ln(|f(x)|) = \frac{f'(x)}{f(x)}
Yes, this is covered in the derivation of \int {dx \over x} , but here goes: write

|f(x)| =   \begin{cases}      f(x) ,& \text{if } f(x) > 0 \\      -f(x),              & \text{otherwise}  \end{cases}

Now if f(x) > 0 we get { d \over dx} ln(f(x)) = {f'(x) \over f(x) } as usual. If f(x) < 0 then |f(x)| = -f(x), |f(x)|' = (-f(x))' = -f'(x) and so in either case:

\frac{d}{dx} ln(|f(x)|) = \frac{f'(x)}{f(x)} as required.

THAT is the workaround for calculating {d \over dx } ln(g_1(x)g_2(x)...g_n(x)) where g_1(x)g_2(x)...g_n(x) \neq 0 : just calculate {d \over dx } ln(|g_1(x)g_2(x)...g_n(x)|) , noting that |g_1(x)g_2(x)...g_n(x)| = |g_1(x)| |g_2(x)|...|g_n(x)|
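
A quick sanity check of this workaround with sympy (assuming sympy is available and will differentiate the absolute value of a real expression; the function below is just an arbitrary example that changes sign):

import sympy as sp

x = sp.symbols('x', real=True)
f = (x - 2)*(x + 1)                        # negative on (-1, 2), so ln(f) itself is undefined there
lhs = sp.diff(sp.log(sp.Abs(f)), x)        # derivative of ln|f|
rhs = sp.diff(f, x)/f                      # f'/f
print(sp.simplify((lhs - rhs).subs(x, 0))) # 0, even though f(0) < 0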

Yay! We are almost done! But, what about the cases where at least some of the factors are zero at, say x= x_0 ?

Here, we have to bite the bullet and admit that we cannot take the log of the product where any of the factors have a zero, at that point. But this is what we can prove:

Given that g_1(x) g_2(x)...g_n(x) is a product of differentiable functions with g_1(a) = g_2(a) = ... = g_k(a) = 0 , k \leq n , then
(g_1 g_2...g_n)'(a) = lim_{x \rightarrow a}  g_1(x)g_2(x)...g_n(x) ({g_1'(x) \over g_1(x)} + {g_2'(x) \over g_2(x)} + ...+ {g_n'(x) \over g_n(x)})

This works out to what we want by cancellation of factors.

Here is one way to proceed with the proof:

1. Suppose f, g are differentiable and f(a) = g(a) = 0 . Then (fg)'(a) = f'(a)g(a) + f(a)g'(a) = 0 and lim_{x \rightarrow a} f(x)g(x)({f'(x) \over f(x)} + {g'(x) \over g(x)}) = 0
2. Now suppose f, g are differentiable and f(a) =0 ,  g(a) \neq 0 . Then (fg)'(a) = f'(a)g(a) + f(a)g'(a) = f'(a)g(a) and lim_{x \rightarrow a} f(x)g(x)({f'(x) \over f(x)} + {g'(x) \over g(x)}) = f'(a)g(a)
3. Now apply the above to the product g_1(x) g_2(x)...g_n(x) of differentiable functions with g_1(a) = g_2(a) = ... = g_k(a) = 0 , k \leq n .
If k = n then (g_1 g_2...g_n)'(a) = lim_{x \rightarrow a}  g_1(x)g_2(x)...g_n(x) ({g_1'(x) \over g_1(x)} + {g_2'(x) \over g_2(x)} + ...+ {g_n'(x) \over g_n(x)}) =0 by inductive application of 1.

If k < n then let g_1...g_k = f, g_{k+1} ...g_n  = g as in 2. Then by 2, we have (fg)'(a) =  f'(a)g(a) . Now this quantity is zero unless k = 1 and f'(a) \neq 0 . But in this case note that lim_{x \rightarrow a} g_1(x)g_2(x)...g_n(x)({g_1'(x) \over g_1(x)} + {g_2'(x) \over g_2(x)} + ...+ {g_n'(x) \over g_n(x)})  = lim_{x \rightarrow a} g_2(x)...g_n(x)(g_1'(x)) = g(a)g_1'(a)

So there it is. Yes, it works ..with appropriate precautions.

July 10, 2020

This always bothered me about partial fractions…

Filed under: algebra, calculus, complex variables, elementary mathematics, integration by substitution — Tags: — collegemathteaching @ 12:03 am

Let’s look at an “easy” starting example: write \frac{1}{(x-1)(x+1)} = \frac{A}{x-1} + \frac{B}{x+1}
We know how that goes: multiply both sides by (x-1)(x+1) to get 1 = A(x+1) + B(x-1) and then since this must be true for ALL x , substitute x=-1 to get B = -{1 \over 2} and then substitute x = 1 to get A = {1 \over 2} . Easy-peasy.

BUT…why CAN you do such a substitution since the original domain excludes x =1, x = -1 ?? (and no, I don’t want to hear about residues and “poles of order 1”; this is calculus 2. )

Let’s start with \frac{1}{(x-1)(x+1)} = \frac{A}{x-1} + \frac{B}{x+1} with the restricted domain, say x \neq 1
Now multiply both sides by x-1 and note that, with the restricted domain x \neq 1 we have:

\frac{1}{x+1}  = A + \frac{B(x-1)}{x+1} . But both sides are equal on the domain (-1, 1) \cup (1, \infty) and the limit of the left hand side is lim_{x \rightarrow 1} {1 \over x+1 } = {1 \over 2} . So the right hand side has a limit which exists and is equal to A . So the result follows..and this works for the calculation of B as well.
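
For what it is worth, here is a small sympy sketch (assuming sympy is available) of exactly this limit argument, with sympy’s own partial fraction routine for comparison:

import sympy as sp

x = sp.symbols('x')
# multiply both sides by (x - 1) on the restricted domain, then take the limit at x = 1
print(sp.limit(1/(x + 1), x, 1))              # 1/2, which is the value of A
print(sp.apart(1/((x - 1)*(x + 1)), x))       # sympy's partial fraction decomposition agrees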

Yes, no engineer will care about this. But THIS is the reason we can substitute the non-domain points.

As an aside: if you are trying to solve something like {x^2 + 3x + 2 \over (x^2+1)(x-3) } = {Ax + B \over x^2+1 } + {C \over x-3 } one can do the denominator clearing and, as appropriate, substitute x = i and compare real and imaginary parts ..and yes, now you can use poles and residues.
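
A sketch of that aside in sympy (assuming sympy is available; the symbol names are mine): clear the denominators, substitute x = 3 to pick off C , then substitute x = i and compare real and imaginary parts.

import sympy as sp

x = sp.symbols('x')
A, B, C = sp.symbols('A B C', real=True)
lhs = x**2 + 3*x + 2
rhs = (A*x + B)*(x - 3) + C*(x**2 + 1)            # denominators cleared

C_val = sp.solve(sp.Eq(lhs.subs(x, 3), rhs.subs(x, 3)), C)[0]   # x = 3 kills the (x - 3) term
at_i = sp.expand((lhs - rhs).subs(x, sp.I))                     # x = i kills the (x^2 + 1) term
AB = sp.solve([sp.re(at_i), sp.im(at_i)], [A, B])
print(C_val, AB)                                                # C = 2, A = -1, B = 0
print(sp.apart((x**2 + 3*x + 2)/((x**2 + 1)*(x - 3)), x))       # cross-check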

March 16, 2019

The beta function integral: how to evaluate them

My interest in “beta” functions comes from their utility in Bayesian statistics. A nice 78 minute introduction to Bayesian statistics and how the beta distribution is used can be found here; you need to understand basic mathematical statistics concepts such as “joint density”, “marginal density”, “Bayes’ Rule” and “likelihood function” to follow the YouTube lecture. To follow this post, one should know the standard “3 semesters” of calculus and know what the gamma function is (the extension of the factorial function to the real numbers); previous exposure to the standard “polar coordinates” proof that \int^{\infty}_{-\infty} e^{-x^2} dx = \sqrt{\pi} would be very helpful.

So, what is the beta function? It is \beta(a,b) = \frac{\Gamma(a) \Gamma(b)}{\Gamma(a+b)} where \Gamma(x) = \int_0^{\infty} t^{x-1}e^{-t} dt . Note that \Gamma(n+1) = n! for integers n . The gamma function is the unique “logarithmically convex” extension of the factorial function to the real line, where “logarithmically convex” means that the logarithm of the function is convex; that is, the second derivative of the log of the function is positive. Roughly speaking, this means that the function exhibits growth behavior similar to (or “greater” than) e^{x^2}

Now it turns out that the beta density function is defined as follows: \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)} x^{a-1}(1-x)^{b-1} for 0 < x < 1 ; one can see that the integral \int_0^1 x^{a-1}(1-x)^{b-1} dx is either proper or a convergent improper integral (improper when 0 < a < 1 or 0 < b < 1 ).

I'll do this in two steps. Step one will convert the beta integral into an integral involving powers of sine and cosine. Step two will be to write \Gamma(a) \Gamma(b) as a product of two integrals, do a change of variables and convert to an improper integral on the first quadrant. Then I'll convert to polar coordinates to show that this integral is equal to \Gamma(a+b) \beta(a,b)

Step one: converting the beta integral to a sine/cosine integral. Restrict t \in [0, \frac{\pi}{2}] and then do the substitution x = sin^2(t), dx = 2 sin(t)cos(t) dt . Then the beta integral becomes: \int_0^1 x^{a-1}(1-x)^{b-1} dx = 2\int_0^{\frac{\pi}{2}} (sin^2(t))^{a-1}(1-sin^2(t))^{b-1} sin(t)cos(t)dt = 2\int_0^{\frac{\pi}{2}} (sin(t))^{2a-1}(cos(t))^{2b-1} dt
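
(A quick numerical spot check of this conversion, using scipy — assuming scipy is available; the values of a and b are just arbitrary test values:)

import numpy as np
from scipy.integrate import quad

a, b = 2.5, 1.7    # arbitrary test values
beta_form = quad(lambda x: x**(a - 1)*(1 - x)**(b - 1), 0, 1)[0]
trig_form = 2*quad(lambda t: np.sin(t)**(2*a - 1)*np.cos(t)**(2*b - 1), 0, np.pi/2)[0]
print(beta_form, trig_form)    # the two forms agree to numerical precision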

Step two: transforming the product of two gamma functions into a double integral and evaluating using polar coordinates.

Write \Gamma(a) \Gamma(b) = \int_0^{\infty} x^{a-1} e^{-x} dx  \int_0^{\infty} y^{b-1} e^{-y} dy

Now do the conversion x = u^2, dx = 2udu, y = v^2, dy = 2vdv to obtain:

\int_0^{\infty} 2u^{2a-1} e^{-u^2} du  \int_0^{\infty} 2v^{2b-1} e^{-v^2} dv (there is a tiny amount of algebra involved)

From which we now obtain

4\int^{\infty}_0 \int^{\infty}_0 u^{2a-1}v^{2b-1} e^{-(u^2+v^2)} dudv

Now we switch to polar coordinates, remembering the rdrd\theta that comes from evaluating the Jacobian of x = rcos(\theta), y = rsin(\theta)

4 \int^{\frac{\pi}{2}}_0 \int^{\infty}_0 r^{2a +2b -1} (cos(\theta))^{2a-1}(sin(\theta))^{2b-1} e^{-r^2} dr d\theta

This splits into two integrals:

2 \int^{\frac{\pi}{2}}_0 (cos(\theta))^{2a-1}(sin(\theta))^{2b-1} d \theta 2\int^{\infty}_0 r^{2a +2b -1}e^{-r^2} dr

The first of these integrals is just \beta(a,b) so now we have:

\Gamma(a) \Gamma(b) = \beta(a,b) 2\int^{\infty}_0 r^{2a +2b -1}e^{-r^2} dr

The second integral: we just use r^2 = x \rightarrow 2rdr = dx \rightarrow \frac{1}{2}\frac{1}{\sqrt{x}}dx = dr to obtain:

2\int^{\infty}_0 r^{2a +2b -1}e^{-r^2} dr = \int^{\infty}_0 x^{a+b-\frac{1}{2}} e^{-x} \frac{1}{\sqrt{x}}dx = \int^{\infty}_0 x^{a+b-1} e^{-x} dx =\Gamma(a+b) (yes, I cancelled the 2 with the 1/2)

And so the result follows.
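
(And a numerical check of the final identity, again with scipy — assuming scipy is available — and the same arbitrary test values of a and b :)

from scipy.integrate import quad
from scipy.special import gamma

a, b = 2.5, 1.7    # arbitrary test values
integral = quad(lambda x: x**(a - 1)*(1 - x)**(b - 1), 0, 1)[0]
print(integral)
print(gamma(a)*gamma(b)/gamma(a + b))   # matches the integral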

That seems complicated for a simple little integral, doesn’t it?

February 18, 2019

An easy fact about least squares linear regression that I overlooked

The background: I was making notes about the ANOVA table for “least squares” linear regression and reviewing how to derive the “sum of squares” equality:

Total Sum of Squares = Sum of Squares Regression + Sum of Squares Error or…

If y_i is the observed response, \bar{y} the sample mean of the responses, and \hat{y}_i are the responses predicted by the best fit line (simple linear regression here) then:

\sum (y_i - \bar{y})^2 = \sum (\hat{y}_i -\bar{y})^2+ \sum (y_i - \hat{y}_i)^2 (where each sum is \sum^n_{i=1} for the n observations. )

Now for each i it is easy to see that (y_i - \bar{y}) = (\hat{y}_i -\bar{y}) + (y_i - \hat{y}_i) but the equation still holds when these terms are squared, provided you sum them up!

And it was going over the derivation of this that reminded me about an important fact about least squares that I had overlooked when I first presented it.

If you go into the derivation and calculate: \sum ( (\hat{y}_i -\bar{y}) + (y_i - \hat{y}_i))^2 = \sum  ((\hat{y}_i -\bar{y})^2 + (y_i - \hat{y}_i)^2 +2 (\hat{y}_i -\bar{y})(y_i - \hat{y}_i))

Which equals \sum  ((\hat{y}_i -\bar{y})^2 + (y_i - \hat{y}_i)^2) + 2\sum (\hat{y}_i -\bar{y})(y_i - \hat{y}_i) and the proof is completed by showing that:

\sum (\hat{y}_i -\bar{y})(y_i - \hat{y}_i) = \sum (\hat{y}_i)(y_i - \hat{y}_i) - \sum (\bar{y})(y_i - \hat{y}_i) and that BOTH of these sums are zero.

But why?

Let’s go back to how the least squares equations were derived:

Given that \hat{y}_i = \hat{\beta}_0 + \hat{\beta}_1 x_i

\frac{\partial}{\partial \hat{\beta}_0} \sum (\hat{y}_i -y_i)^2 = 2\sum (\hat{y}_i -y_i) =0 yields that \sum (\hat{y}_i -y_i) =0 . That is, under the least squares equations, the sum of the residuals is zero.

Now \frac{\partial}{\partial \hat{\beta}_1} \sum (\hat{y}_i -y_i)^2 = 2\sum x_i(\hat{y}_i -y_i) =0 which yields that \sum x_i(\hat{y}_i -y_i) =0

That is, the sum of the residuals, weighted by the corresponding x values (inputs), is also zero. Note: this holds with multiple linear regression as well.

Really, that is what the least squares process does: it sets the sum of the residuals and the sum of the weighted residuals equal to zero.

Yes, there is a linear algebra formulation of this.

Anyhow, returning to our sum:

\sum (\bar{y})(y_i - \hat{y}_i) = (\bar{y})\sum(y_i - \hat{y}_i) = 0 . Now for the other term:

\sum (\hat{y}_i)(y_i - \hat{y}_i) = \sum (\hat{\beta}_0+\hat{\beta}_1 x_i)(y_i - \hat{y}_i) = \hat{\beta}_0\sum (y_i - \hat{y}_i) + \hat{\beta}_1 \sum x_i (y_i - \hat{y}_i)

Now \hat{\beta}_0\sum (y_i - \hat{y}_i) = 0 as it is a constant multiple of the sum of residuals and \hat{\beta}_1 \sum x_i (y_i - \hat{y}_i) = 0 as it is a constant multiple of the weighted sum of residuals..weighted by the x_i .

That was pretty easy, wasn’t it?

But the role that the basic least squares equations played in this derivation went right over my head!
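
If you want to see all of this numerically, here is a short numpy sketch (assuming numpy is available; the data are synthetic and the variable names are mine): fit a least squares line, then check that the sum of the residuals, the x -weighted sum of the residuals, and the sum-of-squares decomposition all behave as claimed.

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 50)
y = 3.0 + 2.0*x + rng.normal(0, 1, 50)      # synthetic data

b1, b0 = np.polyfit(x, y, 1)                # least squares slope and intercept
yhat = b0 + b1*x
resid = y - yhat

print(resid.sum())                          # ~0: sum of residuals
print((x*resid).sum())                      # ~0: x-weighted sum of residuals
ybar = y.mean()
print(((y - ybar)**2).sum())                # total sum of squares...
print(((yhat - ybar)**2).sum() + (resid**2).sum())   # ...equals SSR + SSE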

February 14, 2019

Happy Valentines Day!

Filed under: calculus — Tags: — collegemathteaching @ 7:29 pm

Let t \in [0, 2 \pi] and graph x(t) = cos(t), y(t) = sin(t) + ((cos(t))^2)^{\frac{1}{3}} and get:
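
(For anyone who wants to reproduce the picture, here is a short matplotlib sketch — assuming numpy and matplotlib are available — of that parametrization; note the real cube root.)

import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 2*np.pi, 1000)
x = np.cos(t)
y = np.sin(t) + np.cbrt(np.cos(t)**2)   # ((cos t)^2)^(1/3), taking the real cube root
plt.plot(x, y, 'r')
plt.axis('equal')
plt.show()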

Here is where I got the idea from:

[image: heart equation graph]

January 15, 2019

Calculus series: derivatives

Filed under: calculus, derivatives — collegemathteaching @ 3:36 am

Reminder: this series is NOT for the student who is attempting to learn calculus for the first time.

Derivatives: this is dealing with differentiable functions f: R^1 \rightarrow R^1 and no, I will NOT be talking about maps between tangent bundles. Yes, my differential geometry and differential topology courses were on the order of 30 years ago or so. 🙂

In calculus 1, we typically use the following definitions for the derivative of a function at a point: lim_{x \rightarrow a} \frac{f(x)-f(a)}{x-a} = lim_{h \rightarrow 0} \frac{f(a+h) - f(a)}{h} = f'(a) . This is opposed to the derivative function which can be thought of as the one dimensional gradient of f .

The first definition is easier to use for some calculations, say, calculating the derivative of f(x) = x ^{\frac{p}{q}} at a point. (hint, if you need one: use u = x^{\frac{1}{q}} then it is easier to factor). It can be used for proving a special case of the chain rule as well (the case where, in differentiating f(g(x)) at x = a , we have g(x) = g(a) for at most a finite number of points near a .)

When introducing this concept, the binomial expansion theorem is very handy to use for many of the calculations.

Now there is another definition for the derivative that is helpful when proving the chain rule (sans restrictions).

Note that as h \rightarrow 0 we have |\frac{f(a+h)-f(a)}{h} - f'(a)| < \epsilon . We can now view \epsilon as a function of h which goes to zero as h does.

That is, f(a+h) = f(a) + hf'(a) + \epsilon where \frac{\epsilon}{h} \rightarrow 0 as h \rightarrow 0 , and f(a) + hf'(a) is the best linear approximation for f at x = a .

We’ll talk about the chain rule a bit later.

But what about the derivative and examples?

It is common to develop intuition for the derivative as applied to nice, smooth..ok, analytic functions. And this might be a fine thing to do for beginning calculus students. But future math majors might benefit from being exposed to just a bit more so I’ll give some examples.

Now, of course, being differentiable at a point means being continuous there (the limit of the numerator of the difference quotient must go to zero for the derivative to exist). And we all know examples of a function being continuous at a point but not being differentiable there. Examples: |x|, x^{\frac{1}{3}}, x^{\frac{2}{3}}  are all continuous at zero but none are differentiable there; these give examples of a corner, vertical tangent and a cusp respectively.

But for many of the piecewise defined examples, say, f(x) = x for x < 0 and x^2 - x for x \geq 0 , the derivative fails to exist at x =0 because the one-sided derivatives (the limits of the respective derivative functions from each side) disagree there; the same is true of the other stated examples.

And of course, we can show that x^{\frac{3k +2}{3}} has k continuous derivatives at the origin but not k+1 derivatives.

But what about a function with a discontinuous derivative? Try f(x) = x^2 sin(\frac{1}{x}) for x \neq 0 and zero at x =0 . It is easy to see that the derivative exists for all x but the first derivative fails to be continuous at the origin.

The derivative is 0 at x = 0 and 2x sin(\frac{1}{x}) -cos(\frac{1}{x}) for x \neq 0 which is not continuous at the origin.
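
A quick numerical illustration (plain Python; the sample points are mine): the difference quotients at 0 squeeze down to 0, while the derivative formula keeps taking values near -1 and +1 arbitrarily close to the origin.

import math

def f(x):
    return 0.0 if x == 0 else x**2 * math.sin(1/x)

for h in (1e-1, 1e-3, 1e-5):
    print((f(h) - f(0))/h)            # squeezes to 0, so f'(0) = 0

def fprime(x):                        # the derivative for x != 0
    return 2*x*math.sin(1/x) - math.cos(1/x)

print(fprime(1/(20*math.pi)))         # approximately -1
print(fprime(1/(21*math.pi)))         # approximately +1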

Ok, what about a function that is differentiable at a single point only? There are different constructions, but f(x) = x^2 for x rational, x^3 for x irrational is both continuous and, yes, differentiable at x = 0 (nice application of the Squeeze Theorem on the difference quotient).

Yes, there are everywhere continuous, nowhere differentiable functions.

January 14, 2019

New series in calculus: nuances and deeper explanations/examples

Filed under: calculus, cantor set — Tags: — collegemathteaching @ 3:07 am

Though I’ve been busy both learning and creating new mathematics (that is, teaching “new to me” courses and writing papers to submit for publication) I have not written much here. I’ve decided to write up some notes on, yes, calculus. These notes are NOT for the average student who is learning for the first time but rather for the busy TA or new instructor; it is just to get the juices flowing. Someday I might decide to write these notes up more formally and create something like “an instructor’s guide to calculus.”

I’ll pick topics that we often talk about and expand on them, giving suggested examples and proofs.

First example: Continuity. Of course, we say f is continuous at x = a if lim_{x \rightarrow a} f(x) = f(a) which means that the limit exists and is equal to the function evaluated at the point. In analysis notation: for all \epsilon > 0 there exists \delta > 0 such that |f(a)-f(x)| < \epsilon whenever |a-x| < \delta .

Of course, I see this as “for every open U containing f(a) , f^{-1}(U) is an open set.” But never mind that for now.

So, what are some decent examples other than the usual “jump discontinuities” and “asymptotes” examples?

A function that is continuous at only finitely many points: try f(x) = x for x rational and f(x) = x^2 for x irrational (continuous only at x = 0 and x = 1 ).

A function that oscillates infinitely often near a point but is continuous: f(x) = xsin(\frac{1}{x}) for x \neq 0 and zero at x = 0 .

A bounded function with a non-jump discontinuity but is continuous for all x \neq 0 : f(x) = sin(\frac{1}{x}) for x \neq 0 and zero at x = 0 .

An unbounded function without an asymptote but is continuous for all x \neq 0 : f(x) = \frac{1}{x} sin(\frac{1}{x}) for x \neq 0 and zero at x = 0 .

A nowhere continuous function: f(x) = 1 for x rational, and 0 for x irrational.

If you want an advanced example which blows up the “a function is continuous if its graph can be drawn without lifting the pencil off of the paper” intuition, try the Cantor function. (This function is continuous on [0,1] , has derivative equal to zero almost everywhere, and yet increases from 0 to 1.)

December 21, 2018

Over-scheduling of senior faculty and lower division courses: how important is course prep?

It seems as if the time faculty is expected to spend on administrative tasks is growing exponentially. In our case: we’ve had some administrative upheaval with the new people coming in to “clean things up”, thereby launching new task forces, creating more committees, etc. And this is a time suck; often more senior faculty more or less go through the motions when it comes to course preparation for the elementary courses (say: the calculus sequence, or elementary differential equations).

And so:

1. Does this harm the course quality and if so..
2. Is there any effect on the students?

I should first explain why I am thinking about this; I’ll give some specific examples from my department.

1. Some time ago, a faculty member gave a seminar in which he gave an “elementary” proof of why \int e^{x^2} dx is non-elementary. Ok, this proof took 40-50 minutes to get through. But at the end, the professor giving the seminar exclaimed: “isn’t this lovely?” at which another senior member (one who didn’t have a Ph. D. but had been around since the 1960’s) asked “why are you happy that yet again, we haven’t had success?” A proof had just been given that \int e^{x^2} dx cannot be expressed in terms of the usual functions by the standard field operations; the whole point had eluded him. And remember, this person was in our calculus teaching line up.

2. Another time, in a less formal setting, I had mentioned that I had briefly told my class that one could compute an improper integral (over the real line) of an unbounded function and that such a function could have a Laplace transform. A junior faculty member who had just taught differential equations tried to inform me that only functions of exponential order could have a Laplace transform; I replied that, while many texts restricted Laplace transforms to such functions, that was not mathematically necessary (though it is a reasonable restriction for an applied first course). (Briefly: imagine a function whose graph consists of a spike of height e^{n^2} at each integer n over an interval of width \frac{1}{2^{2n} e^{2n^2}} and is zero elsewhere.)

3. In still another case, I was talking about errors in answer keys and how, when I taught courses that I wasn’t qualified to teach (e.g., an actuarial science course), it was tough for me to confidently determine when the answer key was wrong. A senior, still research-active faculty member said that he had found errors in an answer key..that in some cases..the interval of absolute convergence for some power series was given as a closed interval.

I was a bit taken aback; I gently reminded him that \sum \frac{x^k}{k^2} was such a series.

I know what he was confused by; there is a theorem that says that if \sum a_k x^k converges (either conditionally or absolutely) for some x=x_1 then the series converges absolutely for all x_0 where |x_0| < |x_1| . The proof isn’t hard; note that convergence of \sum a_k x_1^k means that, eventually, |a_k x_1^k| < M for some positive M ; then compare the “tail end” of the series: use |\frac{x_0}{x_1}| < r < 1 and then |a_k (x_0)^k| = |a_k x_1^k (\frac{x_0}{x_1})^k| < r^k M and compare to a convergent geometric series. Mind you, he was teaching series at the time..and yes, is a senior, research active faculty member with years and years of experience; he mentored me so many years ago. (A quick computational check of the endpoint example appears below.)

4. Also…one time, a sharp young faculty member asked around “are there any real functions that are differentiable at exactly one point?” (Yes: try f(x) = x^2 if x is rational, x^3 if x is irrational.)

5. And yes, one time I had forgotten that a function could be differentiable but not be C^1 (try: x^2 sin (\frac{1}{x}) , with value 0 at x = 0 ).
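
Regarding the series in item 3: here is a tiny check with sympy (assuming sympy is available) that \sum \frac{x^k}{k^2} does behave at both endpoints x = \pm 1 :

import sympy as sp

k = sp.symbols('k', integer=True, positive=True)
s = sp.Sum(1/k**2, (k, 1, sp.oo))
print(s.is_convergent())   # True
print(s.doit())            # pi**2/6; since |x^k/k^2| <= 1/k^2 on [-1, 1], the series converges absolutely at both endpoints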

What is the point of all of this? Even smart, active mathematicians forget stuff if they haven’t reviewed it in a while…even elementary stuff. We need time to review our courses! But…does this actually affect the students? I am almost sure that at non-elite universities such as ours, the answer is “probably not in any way that can be measured.”

Think about it. Imagine the following statements in a differential equations course:

1. “Laplace transforms exist only for functions of exponential order (false)”.
2. “We will restrict our study of Laplace transforms to functions of exponential order.”
3. “We will restrict our study of Laplace transforms to functions of exponential order but this is not mathematically necessary.”

Would students really recognize the difference between these three statements?

Yes, making these statements, with confidence, requires quite a bit of difference in preparation time. And our deans and administrators might not see any value to allowing for such preparation time as it doesn’t show up in measures of performance.

October 4, 2018

When is it ok to lie to students? part I

Filed under: calculus, derivatives, pedagogy — collegemathteaching @ 9:32 pm

We’ve arrived at logarithms in our calculus class, and, of course, I explained that ln(ab) = ln(a) + ln(b) only holds for a, b > 0 . That is all well and good.
And yes, I explained that expressions like f(x)^{g(x)} only make sense when f(x) > 0 .

But then I went ahead and did a problem of the following type: given f(x) = \frac{x^3 e^{x^2} cos(x)}{x^4 + 1} by using logarithmic differentiation,

f'(x) = \frac{x^3 e^{x^2} cos(x)}{x^4 + 1} (\frac{3}{x} + 2x -tan(x) -\frac{4x^3}{x^4+ 1})

And you KNOW exactly what I did. Right?

Note that f is differentiable for all x and, well, the derivative *should* be continuous for all x but..is it? Well, up to inessential singularities, it is. You see: the second factor is not defined for x = 0, x = \frac{\pi}{2} \pm k \pi , etc.

Well, let’s multiply it out and obtain:
f'(x) = \frac{3x^2 e^{x^2} cos(x)}{x^4 + 1} + \frac{2x^4 e^{x^2} cos(x)}{x^4 + 1} - \frac{x^3 e^{x^2} sin(x)}{x^4 + 1}-\frac{4x^6 e^{x^2} cos(x)}{(x^4 + 1)^2}

So, there is that. We might induce inessential singularities.

And there is the following: in the process of finding the derivative to begin with we did:

ln(\frac{x^3 e^{x^2} cos(x)}{x^4 + 1}) = ln(x^3) + ln(e^{x^2}) + ln(cos(x)) - ln(x^4 + 1) and that expansion is valid only for
x \in (0, \frac{\pi}{2}) \cup (\frac{3\pi}{2}, \frac{5\pi}{2}) \cup .... because we need x^3 > 0 and cos(x) > 0 .

But the derivative formula works anyway. So what is the formula?

It is: if f = \prod_{j=1}^k f_j where f_j is differentiable, then f' = \sum_{i=1}^k f'_i \prod_{j =1, j \neq i}^k f_j and verifying this is an easy exercise in induction.
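
Here is a sympy sketch of that formula (assuming sympy is available) applied to the factors of the f(x) above; the check confirms the formula agrees with the derivative computed directly, with no singular second factor in sight.

import sympy as sp

x = sp.symbols('x')
factors = [x**3, sp.exp(x**2), sp.cos(x), 1/(x**4 + 1)]
f = sp.Mul(*factors)

# f' = sum over i of f_i' times the product of the other factors
formula = sum(sp.diff(fi, x) * sp.Mul(*(fj for j, fj in enumerate(factors) if j != i))
              for i, fi in enumerate(factors))

print(sp.simplify(formula - sp.diff(f, x)))   # 0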

But the logarithmic differentiation is really just a motivating idea that works for positive functions.

To make this complete: we’ll now tackle y = f(x)^{g(x)} where it is essential that f(x) > 0 .

Rewrite y = e^{ln(f(x)^{g(x)})} = e^{g(x)ln(f(x))}

Then y' = e^{g(x)ln(f(x))} (g'(x) ln(f(x)) + g(x) \frac{f'(x)}{f(x)}) =  f(x)^{g(x)}(g'(x) ln(f(x)) + g(x) \frac{f'(x)}{f(x)})
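
A quick sympy check of this formula (assuming sympy is available; I take f(x) = x on a positive domain and g(x) = sin(x) just as an example):

import sympy as sp

x = sp.symbols('x', positive=True)   # keeps f(x) = x > 0
f, g = x, sp.sin(x)
lhs = sp.diff(f**g, x)
rhs = f**g * (sp.diff(g, x)*sp.log(f) + g*sp.diff(f, x)/f)
print(sp.simplify(lhs - rhs))        # 0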

This formula is a bit of a universal one. Let’s examine two special cases.

Suppose g(x) = k , some constant. Then g'(x) =0 and the formula becomes y' = f(x)^k(k \frac{f'(x)}{f(x)}) = kf(x)^{k-1}f'(x) which is just the usual constant power rule combined with the chain rule.

Now suppose f(x) = a for some positive constant a . Then f'(x) = 0 and the formula becomes y' = a^{g(x)}(ln(a)g'(x)) which is the usual exponential function differentiation formula combined with the chain rule.

