# College Math Teaching

## March 16, 2019

### The beta function integral: how to evaluate it

My interest in “beta” functions comes from their utility in Bayesian statistics. A nice 78 minute introduction to Bayesian statistics and how the beta distribution is used can be found here; you need to understand basic mathematical statistics concepts such as “joint density”, “marginal density”, “Bayes’ Rule” and “likelihood function” to follow the YouTube lecture. To follow this post, one should know the standard “3 semesters” of calculus and know what the gamma function is (the extension of the factorial function to the real numbers); previous exposure to the standard “polar coordinates” proof that $\int^{\infty}_{-\infty} e^{-x^2} dx = \sqrt{\pi}$ would be very helpful.

So, what is the beta function? It is $\beta(a,b) = \frac{\Gamma(a) \Gamma(b)}{\Gamma(a+b)}$ where $\Gamma(x) = \int_0^{\infty} t^{x-1}e^{-t} dt$. Note that $\Gamma(n+1) = n!$ for non-negative integers $n$. The gamma function is the unique “logarithmically convex” extension of the factorial function to the real line, where “logarithmically convex” means that the logarithm of the function is convex; that is, the second derivative of the log of the function is positive. Roughly speaking, this means that the function exhibits growth behavior similar to (or “greater” than) that of $e^{x^2}$.

Now it turns out that the beta density function is defined as follows: $\frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)} x^{a-1}(1-x)^{b-1}$ for $0 < x < 1$; one can see that the integral $\int_0^1 x^{a-1}(1-x)^{b-1} dx$ is either proper or a convergent improper integral for all $a > 0, b > 0$ (it is improper, but convergent, when $a < 1$ or $b < 1$).

I'll do this in two steps. Step one will convert the beta integral into an integral involving powers of sine and cosine. Step two will be to write $\Gamma(a) \Gamma(b)$ as a product of two integrals, do a change of variables and convert to an improper integral on the first quadrant. Then I'll convert to polar coordinates to show that this integral is equal to $\Gamma(a+b) \beta(a,b)$

Step one: converting the beta integral to a sine/cosine integral. Restrict $t \in [0, \frac{\pi}{2}]$ and then do the substitution $x = sin^2(t), dx = 2 sin(t)cos(t) dt$. Then the beta integral becomes: $\int_0^1 x^{a-1}(1-x)^{b-1} dx = 2\int_0^{\frac{\pi}{2}} (sin^2(t))^{a-1}(1-sin^2(t))^{b-1} sin(t)cos(t)dt = 2\int_0^{\frac{\pi}{2}} (sin(t))^{2a-1}(cos(t))^{2b-1} dt$
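
One can sanity-check this substitution numerically. Below is a quick sketch in stdlib Python (the Simpson helper and the sample values $a = 2.5, b = 3$ are my own choices, not from the post):

```python
import math

def simpson(f, lo, hi, n=2000):
    # composite Simpson's rule; n must be even
    h = (hi - lo) / n
    total = f(lo) + f(hi)
    for i in range(1, n):
        total += f(lo + i * h) * (4 if i % 2 else 2)
    return total * h / 3

a, b = 2.5, 3.0

# the beta integral in x
beta_x = simpson(lambda x: x**(a - 1) * (1 - x)**(b - 1), 0.0, 1.0)

# the same integral after the substitution x = sin^2(t)
beta_t = simpson(lambda t: 2 * math.sin(t)**(2*a - 1) * math.cos(t)**(2*b - 1),
                 0.0, math.pi / 2)

print(beta_x, beta_t)  # the two values agree to several decimal places
```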

Step two: transforming the product of two gamma functions into a double integral and evaluating using polar coordinates.

Write $\Gamma(a) \Gamma(b) = \int_0^{\infty} x^{a-1} e^{-x} dx \int_0^{\infty} y^{b-1} e^{-y} dy$

Now do the conversion $x = u^2, dx = 2udu, y = v^2, dy = 2vdv$ to obtain:

$\int_0^{\infty} 2u^{2a-1} e^{-u^2} du \int_0^{\infty} 2v^{2b-1} e^{-v^2} dv$ (there is a tiny amount of algebra involved)

From which we now obtain

$4\int^{\infty}_0 \int^{\infty}_0 u^{2a-1}v^{2b-1} e^{-(u^2+v^2)} dudv$

Now we switch to polar coordinates, remembering the $rdrd\theta$ that comes from evaluating the Jacobian of $x = rcos(\theta), y = rsin(\theta)$

$4 \int^{\frac{\pi}{2}}_0 \int^{\infty}_0 r^{2a +2b -1} (cos(\theta))^{2a-1}(sin(\theta))^{2b-1} e^{-r^2} dr d\theta$

This splits into two integrals:

$2 \int^{\frac{\pi}{2}}_0 (cos(\theta))^{2a-1}(sin(\theta))^{2b-1} d \theta 2\int^{\infty}_0 r^{2a +2b -1}e^{-r^2} dr$

The first of these integrals is just the beta integral from Step One (the substitution $\theta \rightarrow \frac{\pi}{2}-\theta$ converts it to exactly that form), so now we have:

$\Gamma(a) \Gamma(b) = \beta(a,b) 2\int^{\infty}_0 r^{2a +2b -1}e^{-r^2} dr$

The second integral: we just use $r^2 = x \rightarrow 2rdr = dx \rightarrow \frac{1}{2}\frac{1}{\sqrt{x}}dx = dr$ to obtain:

$2\int^{\infty}_0 r^{2a +2b -1}e^{-r^2} dr = \int^{\infty}_0 x^{a+b-\frac{1}{2}} e^{-x} \frac{1}{\sqrt{x}}dx = \int^{\infty}_0 x^{a+b-1} e^{-x} dx =\Gamma(a+b)$ (yes, I cancelled the 2 with the 1/2)

And so the result follows.
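
The identity itself is easy to spot-check numerically; here is a sketch in stdlib Python (the Simpson helper and the sample $(a,b)$ pairs are my own choices):

```python
import math

def simpson(f, lo, hi, n=2000):
    # composite Simpson's rule; n must be even
    h = (hi - lo) / n
    total = f(lo) + f(hi)
    for i in range(1, n):
        total += f(lo + i * h) * (4 if i % 2 else 2)
    return total * h / 3

for a, b in [(2.0, 3.0), (2.5, 1.5), (4.0, 4.0)]:
    integral = simpson(lambda x: x**(a - 1) * (1 - x)**(b - 1), 0.0, 1.0)
    gamma_ratio = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    print((a, b), integral, gamma_ratio)  # the two columns match
```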

That seems complicated for a simple little integral, doesn’t it?

## May 20, 2016

### Student integral tricks…

Ok, classes ended last week and my brain is way out of math shape. Right now I am contemplating how to show that the complements of this object

and of the object depicted in figure 3, are NOT homeomorphic.

I can do this in this very specific case; I am interested in seeing what happens if the “tangle pattern” is changed. Are the complements of these two related objects *always* topologically different? I am reasonably sure yes, but my brain is rebelling at doing the hard work to nail it down.

Anyhow, finals are graded and I am usually treated to one unusual student trick. Here is one for the semester:

$\int x^2 \sqrt{x+1} dx =$

Now I was hoping that they would say $u = x +1 \rightarrow u-1 = x \rightarrow x^2 = u^2-2u+1$, at which point the integral is transformed to: $\int u^{\frac{5}{2}} - 2u^{\frac{3}{2}} + u^{\frac{1}{2}} du$, which is easy to do.

Now those wanting to do it a more difficult (but still sort of standard) way could do two repetitions of integration by parts with the first set up being $x^2 = u, \sqrt{x+1}dx =dv \rightarrow du = 2xdx, v = \frac{2}{3} (x+1)^{\frac{3}{2}}$ and that works just fine.

But I did see this: $x =tan^2(u), dx = 2tan(u)sec^2(u)du, x+1 = tan^2(u)+1 = sec^2(u)$ (ok, there are some domain issues here but never mind that) and we end up with the transformed integral: $2\int tan^5(u)sec^3(u) du$ which can be transformed to $2\int (sec^6(u) - 2 sec^4(u) + sec^2(u)) tan(u)sec(u) du$ by elementary trig identities.

And yes, that leads to an answer of $\frac{2}{7}sec^7(u) -\frac{4}{5}sec^5(u) + \frac{2}{3}sec^3(u) + C$ which, upon using the reference triangle, gives you an answer that is exactly in the same form as the desired “rationalization substitution” answer. Yeah, I gave full credit despite the “domain issues” (in the original integral, it is possible for $x \in [-1,0)$, which the substitution misses).
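
A numeric check of the antiderivative (a sketch in stdlib Python; I use the $u = x+1$ form of the answer and a central difference, both my own choices):

```python
import math

def F(x):
    # antiderivative from the u = x + 1 substitution:
    # (2/7)u^(7/2) - (4/5)u^(5/2) + (2/3)u^(3/2), with u = x + 1
    u = x + 1
    return (2/7) * u**3.5 - (4/5) * u**2.5 + (2/3) * u**1.5

def f(x):
    # the original integrand
    return x**2 * math.sqrt(x + 1)

h = 1e-6
for x in [0.0, 0.5, 1.0, 2.0, 5.0]:
    deriv = (F(x + h) - F(x - h)) / (2 * h)  # central difference approximates F'
    print(x, deriv, f(x))  # deriv matches f(x)
```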

What can I say?

## May 11, 2015

### The hypervolume of the n-ball enclosed by a standard n-1 sphere

I am always looking for interesting calculus problems to demonstrate various concepts and perhaps generate some interest in pure mathematics.
And yes, I like to “blow off some steam” by spending some time having some non-technical mathematical fun with elementary mathematics.

This post uses only:

1. Integration by parts and basic reduction formulas.
2. Trig substitution.
3. Calculation of volumes (and hyper volumes) by the method of cross sections.
4. Induction
5. Elementary arithmetic involving factorials.

The quest: find a formula that finds the (hyper)volume of the region $\{(x_1, x_2, x_3,....x_k) | \sum_{i=1}^k x_i^2 \leq R^2 \} \subset R^k$

We will assume that the usual tools of calculus work as advertised.

Start: if we denote the (hyper)volume of the k-ball by $V_k$, we will start with the assumption that $V_1 = 2R$; that is, the distance between the endpoints of $[-R,R]$ is $2R$.

Step 1: we show, via induction, that $V_k =c_kR^k$ where $c_k$ is a constant and $R$ is the radius.

Our proof will be inefficient for instructional purposes.

We know that $V_1 =2R$, hence the claim holds for the base case with $c_1 = 2$. We now work through the $k = 2$ case because, for the beginner, the technique will be easier to follow further along after seeing it in this concrete setting.

Yes, I know that you know that $V_2 = \pi R^2$ and you’ve seen many demonstrations of this fact. Here is another: let’s calculate this using the method of “area by cross sections”. Here is $x^2 + y^2 = R^2$ with some $y = c$ cross sections drawn in.

Now do the calculation by integrals: we will use symmetry, doing only the upper half and multiplying our result by 2. At each $y = y_c$ level, call the radius from the center line to the circle $R(y)$, so the total length of the “$y$ is constant” cross section is $2R(y)$; we multiply by the thickness $dy$ to obtain $V_2 = 4 \int^{y=R}_{y=0} R(y) dy$.

But remember that the curve in question is $x^2 + y^2 = R^2$ and so if we set $x = R(y)$ we have $R(y) = \sqrt{R^2 -y^2}$ and so our integral is $4 \int^{y=R}_{y=0}\sqrt{R^2 -y^2} dy$

Now this integral is no big deal. But HOW we solve it will help us down the road. So here, we use the change of variable (aka “trigonometric substitution”): $y = Rsin(t), dy =Rcos(t)dt$ to change the integral to:

$4 \int^{\frac{\pi}{2}}_0 R^2 cos^2(t) dt = 4R^2 \int^{\frac{\pi}{2}}_0 cos^2(t) dt$ therefore

$V_2 = c_2 R^2$ where:

$c_2 = 4\int^{\frac{\pi}{2}}_0 cos^2(t) dt$

Yes, I know that this is an easy integral to solve, but I first presented the result this way in order to make a point.

Of course, $c_2 = 4\int^{\frac{\pi}{2}}_0 cos^2(t) dt = 4\int^{\frac{\pi}{2}}_0 \frac{1}{2} + \frac{1}{2}cos(2t) dt = \pi$

Therefore, $V_2 =\pi R^2$ as expected.

Exercise for those seeing this for the first time: compute $c_3$ and $V_3$ by using the above methods.

Inductive step: Assume $V_k = c_kR^k$. Now calculate using the method of cross sections above (and here we move away from x-y coordinates to more general labeling):

$V_{k+1} = 2\int^R_0 V_k dx_{k+1} = 2 \int^R_0 c_k (R(x_{k+1}))^k dx_{k+1} =c_k 2\int^R_0 (R(x_{k+1}))^k dx_{k+1}$

Now we do the substitutions: first of all, we note that $x_1^2 + x_2^2 + ...x_{k}^2 + x_{k+1}^2 = R^2$ and so

$x_1^2 + x_2^2 ....+x_k^2 = R^2 - x_{k+1}^2$. Now for the key observation: $x_1^2 + x_2^2 ..+x_k^2 =(R(x_{k+1}))^2$ and so $R(x_{k+1}) = \sqrt{R^2 - x_{k+1}^2}$

Now use the induction hypothesis to note:

$V_{k+1} = c_k 2\int^R_0 (R^2 - x_{k+1}^2)^{\frac{k}{2}} dx_{k+1}$

Now do the substitution $x_{k+1} = Rsin(t), dx_{k+1} = Rcos(t)dt$ and the integral is now:

$V_{k+1} = c_k 2\int^{\frac{\pi}{2}}_0 R^{k+1} cos^{k+1}(t) dt = c_k(2 \int^{\frac{\pi}{2}}_0 cos^{k+1}(t) dt)R^{k+1}$ which is what we needed to show.

In fact, we have shown a bit more. We’ve shown that $c_1 = 2 =2 \int^{\frac{\pi}{2}}_0(cos(t))dt, c_2 = 2 \cdot 2\int^{\frac{\pi}{2}}_0 cos^2(t) dt = c_1 2\int^{\frac{\pi}{2}}_0 cos^2(t) dt$ and, in general,

$c_{k+1} = c_{k}(2 \int^{\frac{\pi}{2}}_0 cos^{k+1}(t) dt) = 2^{k+1} \int^{\frac{\pi}{2}}_0(cos^{k+1}(t))dt \int^{\frac{\pi}{2}}_0(cos^{k}(t))dt \int^{\frac{\pi}{2}}_0(cos^{k-1}(t))dt .....\int^{\frac{\pi}{2}}_0(cos(t))dt$

Finishing the formula

We now need to calculate these easy calculus integrals: in this case the reduction formula:

$\int cos^n(x) dx = \frac{1}{n}cos^{n-1}(x)sin(x) + \frac{n-1}{n} \int cos^{n-2}(x) dx$ is useful (it is merely integration by parts). Now use the limits and elementary calculation to obtain:

$\int^{\frac{\pi}{2}}_0 cos^n(x) dx = \frac{n-1}{n} \int^{\frac{\pi}{2}}_0 cos^{n-2}(x)dx$ to obtain:

$\int^{\frac{\pi}{2}}_0 cos^n(x) dx = (\frac{n-1}{n})(\frac{n-3}{n-2})......(\frac{3}{4})\frac{\pi}{4}$ if $n$ is even and:
$\int^{\frac{\pi}{2}}_0 cos^n(x) dx = (\frac{n-1}{n})(\frac{n-3}{n-2})......(\frac{4}{5})\frac{2}{3}$ if $n$ is odd.
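
These product formulas are easy to verify against direct numerical integration; here is a sketch in stdlib Python (the helper functions are my own):

```python
import math

def simpson(f, lo, hi, n=2000):
    # composite Simpson's rule; n must be even
    h = (hi - lo) / n
    total = f(lo) + f(hi)
    for i in range(1, n):
        total += f(lo + i * h) * (4 if i % 2 else 2)
    return total * h / 3

def wallis(n):
    # product form of ∫_0^{π/2} cos^n(x) dx from the reduction formula
    val = math.pi / 2 if n % 2 == 0 else 1.0
    m = n
    while m >= 2:
        val *= (m - 1) / m
        m -= 2
    return val

for n in range(1, 9):
    numeric = simpson(lambda x: math.cos(x)**n, 0.0, math.pi / 2)
    print(n, wallis(n), numeric)  # the two columns match
```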

Now to come up with something resembling a closed formula let’s experiment and do some calculation:

Note that $c_1 = 2, c_2 = \pi, c_3 = \frac{4 \pi}{3}, c_4 = \frac{\pi^2}{2}, c_5 = \frac{2^3 \pi^2}{3 \cdot 5} = \frac{8 \pi^2}{15}, c_6 = \frac{\pi^3}{3 \cdot 2} = \frac{\pi^3}{6}$.

So we can make the inductive conjecture that $c_{2k} = \frac{\pi^k}{k!}$ and see how it holds up: $c_{2k+2} = 2^2 \int^{\frac{\pi}{2}}_0(cos^{2k+2}(t))dt \int^{\frac{\pi}{2}}_0(cos^{2k+1}(t))dt \frac{\pi^k}{k!}$

$= 2^2 ((\frac{2k+1}{2k+2})(\frac{2k-1}{2k})......(\frac{3}{4})\frac{\pi}{4})((\frac{2k}{2k+1})(\frac{2k-2}{2k-1})......\frac{2}{3})\frac{\pi^k}{k!}$

Now notice the telescoping effect of the fractions from the two products. All factors cancel except for the $(2k+2)$ in the denominator of the first product and the $2$ at the end of the numerator of the second, as well as the $\frac{\pi}{4}$ factor. This leads to:

$c_{2k+2} = 2^2(\frac{\pi}{4})\frac{2}{2k+2} \frac{\pi^k}{k!} = \frac{\pi^{k+1}}{(k+1)!}$ as required.

Now we need to calculate $c_{2k+1} = 2\int^{\frac{\pi}{2}}_0(cos^{2k+1}(t))dt c_{2k} = 2\int^{\frac{\pi}{2}}_0(cos^{2k+1}(t))dt \frac{\pi^k}{k!}$

$= 2 (\frac{2k}{2k+1})(\frac{2k-2}{2k-1})......(\frac{4}{5})(\frac{2}{3})\frac{\pi^k}{k!} = 2 \frac{(2k)(2k-2)(2k-4)...(4)(2)}{(2k+1)(2k-1)...(5)(3)} \frac{\pi^k}{k!}$

To simplify this further: split up the factors of the $k!$ in the denominator and put one between each denominator factor:

$= 2 \frac{(2k)(2k-2)(2k-4)...(4)(2)}{(2k+1)(k)(2k-1)(k-1)...(5)(2)(3)(1)} \pi^k$ Now multiply the denominator by $2^k$, putting one factor of $2$ with each of the interleaved factors $k, k-1, ..., 1$ in the denominator; also multiply by $2^k$ in the numerator to obtain:

$(2) 2^k \frac{(2k)(2k-2)(2k-4)...(4)(2)}{(2k+1)(2k)(2k-1)(2k-2)...(6)(5)(4)(3)(2)} \pi^k$ Now gather the factor of 2 contributed by each term of the numerator product $(2k)(2k-2)...(2)$:

$= (2) 2^k 2^k \pi^k \frac{k!}{(2k+1)!} = 2 \frac{(4 \pi)^k k!}{(2k+1)!}$ which is the required formula.

So to summarize:

$V_{2k} = \frac{\pi^k}{k!} R^{2k}$

$V_{2k+1}= \frac{2 k! (4 \pi)^k}{(2k+1)!}R^{2k+1}$
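
As a sanity check, the closed forms above can be compared against the recursion $c_{k+1} = c_k \cdot 2\int^{\frac{\pi}{2}}_0 cos^{k+1}(t) dt$ derived earlier; here is a sketch in stdlib Python (the helper names are my own):

```python
import math

def wallis(n):
    # ∫_0^{π/2} cos^n(t) dt via the reduction formula
    val = math.pi / 2 if n % 2 == 0 else 1.0
    m = n
    while m >= 2:
        val *= (m - 1) / m
        m -= 2
    return val

# build c_n from the recursion c_{k+1} = c_k * 2 * ∫_0^{π/2} cos^{k+1}(t) dt
c = {1: 2.0}
for k in range(1, 12):
    c[k + 1] = c[k] * 2 * wallis(k + 1)

# compare with the closed forms
for k in range(1, 6):
    even = math.pi**k / math.factorial(k)                                     # c_{2k}
    odd = 2 * math.factorial(k) * (4 * math.pi)**k / math.factorial(2*k + 1)  # c_{2k+1}
    print(2*k, c[2*k], even)
    print(2*k + 1, c[2*k + 1], odd)  # each pair of columns matches
```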

Note the following: $lim_{k \rightarrow \infty} c_{k} = 0$. If this seems strange at first, think of it this way: imagine the n-ball being “inscribed” in an n-cube, which has hyper volume $(2R)^n$. Then consider the ratio $\frac{2^n R^n}{c_n R^n} = 2^n \frac{1}{c_n}$, which tends to infinity; that is, the n-ball holds a smaller and smaller percentage of the hyper volume of the n-cube that it is inscribed in. Note that $2^n$ is the number of corners of the n-cube; one might say that the rounding gets more severe as the number of dimensions increases.

One also notes that for fixed radius R, $lim_{n \rightarrow \infty} V_n = 0$ as well.

There are other interesting aspects to this limit: for what dimension $n$ does the maximum hypervolume occur? As you might expect: this depends on the radius involved; a quick glance at the hyper volume formulas will show why. For more on this topic, including an interesting discussion on this limit itself, see Dave Richardson’s blog Division by Zero. Note: his approach to finding the hyper volume formula is also elementary but uses polar coordinate integration as opposed to the method of cross sections.

## October 29, 2014

### Hyperbolic Trig Functions and integration…

In college calculus courses, I’ve always wrestled with “how much to cover in the hyperbolic trig functions” section.

On one hand, the hyperbolic trig functions make some integrals much easier. On the other hand: well, it isn’t as if our classes are populated with the highest caliber student (I don’t teach at MIT); many struggle with the standard trig functions. There is only so much that the average young mind can absorb.

In case your memory is rusty:

$cosh(x) =\frac{e^x + e^{-x}}{2}, sinh(x) = \frac{e^x -e^{-x}}{2}$ and then it is immediate that the standard “half/double angle” formulas hold; we do remember that $\frac{d}{dx}cosh(x) = sinh(x), \frac{d}{dx}sinh(x) = cosh(x)$.

What is less immediate is the following: $sinh^{-1}(x) = ln(x+\sqrt{x^2+1}), cosh^{-1}(x) = ln(x + \sqrt{x^2 -1}) (x \ge 1)$.

Exercise: prove these formulas. Hint: if $sinh(y) = x$ then $e^{y} - 2x- e^{-y} =0$ so multiply both sides by $e^{y}$ to obtain $e^{2y} -2x e^y - 1 =0$ now use the quadratic formula to solve for $e^y$ and keep in mind that $e^y$ is positive.

For the other formula: same procedure, and remember that we are using the $x \ge 0$ branch of $cosh(x)$ and that $cosh(x) \ge 1$

The following follows easily: $\frac{d}{dx} sinh^{-1} (x) = \frac{1}{\sqrt{x^2 + 1}}$ (just set up $sinh(y) = x$ and use implicit differentiation followed by noting $cosh^2(x) -sinh^2(x) = 1$. ) and $\frac{d}{dx} cosh^{-1}(x) = \frac{1}{\sqrt{x^2-1}}$ (similar derivation).

Now, we are off and running.

Example: $\int \sqrt{x^2 + 1} dx =$

We can make the substitution $x =sinh(u), dx = cosh(u) du$ and obtain $\int cosh^2(u) du = \int \frac{1}{2} (cosh(2u) + 1)du = \frac{1}{4}sinh(2u) + \frac{1}{2} u + C$. Now use $sinh(2u) = 2 sinh(u)cosh(u)$ and we obtain:

$\frac{1}{2}sinh(u)cosh(u) + \frac{u}{2} + C$. The back substitution isn’t that hard if we recognize $cosh(u) = \sqrt{sinh^2(u) + 1}$ so we have $\frac{1}{2} sinh(u) \sqrt{sinh^2(u) + 1} + \frac{u}{2} + C$. Back substitution is now easy:

$\frac{1}{2} x \sqrt{x^2+1} + \frac{1}{2} ln(x + \sqrt{x^2 + 1}) + C$. No integration by parts is required and the dreaded $\int sec^3(x) dx$ integral is avoided. And there is no domain trouble here: since $x + \sqrt{x^2 + 1} > 0$ for all real $x$, the $ln(x + \sqrt{x^2 + 1})$ term is defined for negative values of $x$ as well.
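
A quick derivative check of the result (a sketch in stdlib Python; the central-difference test and sample points are my own):

```python
import math

def F(x):
    # the candidate antiderivative of sqrt(x^2 + 1)
    return 0.5 * x * math.sqrt(x*x + 1) + 0.5 * math.log(x + math.sqrt(x*x + 1))

h = 1e-6
for x in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    # note: the log argument is positive even at the negative test points
    deriv = (F(x + h) - F(x - h)) / (2 * h)
    print(x, deriv, math.sqrt(x*x + 1))  # the two values agree
```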

## August 21, 2014

### Calculation of the Fourier Transform of a tent map, with a calculus tip….

I’ve been following these excellent lectures by Professor Brad Osgood of Stanford. As an aside: yes, he is dynamite in the classroom, but there is probably a reason that Stanford is featuring him. 🙂

And yes, his style is good for obtaining a feeling of camaraderie that is absent in my classroom, at least in the lower division “service” classes.

This lecture takes us from Fourier Series to Fourier Transforms. Of course, he admits that the transition here is really a heuristic trick with symbolism; it isn’t a bad way to initiate an intuitive feel for the subject though.

However, the point of this post is to offer an “algebra of calculus” trick for dealing with the sort of calculations that one might encounter.

By the way, if you say “hey, just use a calculator” you will be BANNED from this blog!!!! (just kidding…sort of. 🙂 )

So here is the deal: let $f(x)$ represent the tent map: the support of $f$ is $[-1,1]$ and it has the following graph:

The formula is: $f(x)=\left\{\begin{array}{c} x+1,x \in [-1,0) \\ 1-x ,x\in [0,1] \\ 0 \text{ elsewhere} \end{array}\right.$

So, the Fourier Transform is $F(f) = \int^{\infty}_{-\infty} e^{-2 \pi i st}f(t)dt = \int^0_{-1} e^{-2 \pi i st}(1+t)dt + \int^1_0e^{-2 \pi i st}(1-t)dt$

Now, this is an easy integral to do, conceptually, but there is the issue of carrying constants around and being tempted to make “on the fly” simplifications along the way, thereby leading to irritating algebraic errors.

So my tip: just let $a = -2 \pi i s$ and do the integrals:

$\int^0_{-1} e^{at}(1+t)dt + \int^1_0e^{at}(1-t)dt$ and substitute and simplify later:

Now the integrals become: $\int^{1}_{-1} e^{at}dt + \int^0_{-1}te^{at}dt - \int^1_0 te^{at} dt.$
These are easy to do; the first is merely $\frac{1}{a}(e^a - e^{-a})$ and the next two have the same anti-derivative, which can be obtained by an “integration by parts” calculation: $\frac{t}{a}e^{at} -\frac{1}{a^2}e^{at}$; evaluating the limits yields:

$-\frac{1}{a^2}-(\frac{-1}{a}e^{-a} -\frac{1}{a^2}e^{-a}) - (\frac{1}{a}e^{a} -\frac{1}{a^2}e^a)+ (-\frac{1}{a^2})$

Add the first integral and simplify and we get: $-\frac{1}{a^2}(2 - (e^{-a} +e^{a}))$. NOW use $a = -2\pi i s$ and we have the integral is $\frac{1}{4 \pi^2 s^2}(2 -(e^{2 \pi i s} +e^{-2 \pi i s})) = \frac{1}{4 \pi^2 s^2}(2 - 2cos(2 \pi s))$ by Euler’s formula.

Now we need some trig to get this into a form that is “engineering/scientist” friendly; here we turn to the formula: $sin^2(x) = \frac{1}{2}(1-cos(2x))$ so $2 - 2cos(2 \pi s) = 4sin^2(\pi s)$ so our answer is $\frac{4sin^2( \pi s)}{4(\pi s)^2} = (\frac{sin(\pi s)}{\pi s})^2$ which is often denoted $(sinc(s))^2$, as the “normalized” $sinc(x)$ function is given by $\frac{sin(\pi x)}{\pi x}$ (we want the function to have zeros at the nonzero integers and to “equal” one at $x = 0$; remember that famous limit!)
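
The final answer can be spot-checked by numerically integrating the transform directly; a sketch in stdlib Python (the Simpson-style loop and the sample $s$ values are my own choices):

```python
import cmath
import math

def tent(t):
    # the tent map supported on [-1, 1]
    if -1 <= t < 0:
        return 1 + t
    if 0 <= t <= 1:
        return 1 - t
    return 0.0

def transform(s, n=4000):
    # Simpson's rule for ∫_{-1}^{1} e^{-2πist} f(t) dt; n must be even
    h = 2.0 / n
    total = 0.0 + 0.0j  # the tent map vanishes at both endpoints
    for i in range(1, n):
        t = -1.0 + i * h
        total += (4 if i % 2 else 2) * tent(t) * cmath.exp(-2j * math.pi * s * t)
    return total * h / 3

for s in [0.25, 0.5, 1.5, 2.5]:
    sinc2 = (math.sin(math.pi * s) / (math.pi * s))**2
    print(s, transform(s).real, sinc2)  # the real parts match sinc^2
```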

So, the point is that using $a$ made the algebra a whole lot easier.

Now, if you are shaking your head and muttering about how this calculation was crude and that one usually uses “convolution” instead: this post is probably too elementary for you. 🙂

## September 20, 2013

### Stupid Integral Tricks….

Filed under: calculus, integrals, integration by substitution — Tags: — collegemathteaching @ 9:35 pm

Ok what is the worst way to perform $\int sin(2x) dx$ correctly?

How about $\int sin(2x)dx = \int 2sin(x)cos(x)dx = sin^2(x) + C$ ?

Yes; if you compare $sin^2(x)$ to the expected $\frac{-1}{2} cos(2x)$ you’ll see that $sin^2(x) = \frac{1}{2} - \frac{1}{2}cos(2x)$. So the answers differ by a constant and therefore correspond to the same indefinite integral.
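
The constant difference is easy to see numerically (a quick sketch in stdlib Python):

```python
import math

# the two antiderivatives of sin(2x) differ by the constant 1/2:
# sin^2(x) - (-1/2)cos(2x) = sin^2(x) + (1/2)(1 - 2 sin^2(x)) = 1/2
for x in [0.0, 0.7, 1.3, 2.9]:
    print(x, math.sin(x)**2 - (-0.5 * math.cos(2 * x)))  # each difference is 1/2 (up to rounding)
```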

I wanted so much to take off a point for “style” but didn’t. 🙂

## December 13, 2012

### Domains and Anti Derivatives (Indefinite Integration)

Grading student exams sometimes inspires me to revisit elementary topics. For example, I recently spoke about some unusual (but mostly correct) integration techniques used by students on a final exam.

I’ll recap (and adjust the example slightly): on a recent exam, a student encountered $\int \frac{2}{1-x^2} dx$. I had expected the student to use the usual partial fractions expansion to obtain $\int \frac{1}{1+x} dx + \int \frac{1}{1-x} dx = ln|1+x| - ln|1-x| + C$ which is valid when $x \ne \pm 1$. I admit to being a bad professor and not being picky about domains.

But one student noticed the $1 - x^2$ in the denominator of the fraction and so used the trig substitution $x = sin(\theta), dx = cos(\theta) d\theta$ which leads to the following integral: $\int \frac{2}{cos(\theta)} d\theta = 2ln|sec(\theta) + tan(\theta)| + C$ which leads to $2ln|\frac{1}{\sqrt{1-x^2}} + \frac{x}{\sqrt{1-x^2}}| + C = 2ln|\frac{1+x}{\sqrt{1-x^2}}| + C = ln|1+x| - ln|1-x| + C$ for $x \in (-1,1)$. Note that, strictly speaking, the “final answer” is really defined for all $x \ne \pm 1$ though the equalities do not hold outside of the domain for $x$ used in the original trig substitution.

And yes, I was a bad professor; I gave full credit to this answer even though we “lost domain” during the string of equalities.

But that got me to wondering: is there a trig substitution that works for $|x| > 1$? Answer: of course:

$\int \frac{2}{1-x^2} dx = -\int \frac{2}{x^2 -1} dx$. Now use $x = sec(\theta), dx = sec(\theta) tan(\theta) d\theta$ which leads to $-2\int csc(\theta) d\theta = 2ln|csc(\theta) + cot(\theta)| + C = 2ln|\frac{x}{\sqrt{x^2-1}} + \frac{1}{\sqrt{x^2 -1}}| + C$ which leads us to our ultimate solution for $|x| > 1$
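
A derivative check of the $|x| > 1$ branch (a sketch in stdlib Python; I use the simplified form $2 ln|\frac{x+1}{\sqrt{x^2-1}}|$ of the antiderivative, and the test points are my own):

```python
import math

def G(x):
    # antiderivative from the secant substitution, simplified; valid for |x| > 1
    return 2 * math.log(abs((x + 1) / math.sqrt(x * x - 1)))

h = 1e-6
for x in [1.5, 3.0, -2.0, -5.0]:
    deriv = (G(x + h) - G(x - h)) / (2 * h)
    print(x, deriv, 2 / (1 - x * x))  # the two values agree
```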

So, if one REALLY wanted to use trig substitutions for this problem, one could, and do so in a way that covers the entire domain.

But…as our existence and uniqueness theorems imply, once we get a candidate for an anti-derivative that “works” for the domain, it really doesn’t matter if we did “illegal” steps to get it; we need only show that it is an anti-derivative and is valid for the entire domain of the integrand.

Now if one wants a more detailed discussion of domain issues for anti-derivatives, I can recommend the article The Importance of Being Continuous by D. J. Jeffrey, which appeared in Mathematics Magazine, Vol. 67, pp. 294–300 (a reprint can be found here, scroll down a bit; this mathematician has written quite a bit!). Note: I can recommend this little paper as it talks about the domains of the anti-derivatives themselves and not just the domains assumed in doing the calculations along the way or the domains of validity of the substitutions. Note: integral tables and computer algebra systems don’t always give the anti-derivative with the “largest” possible domain. One has to watch for that.

## December 7, 2012

### “Unusual” Student Integral Tricks….that are correct!

Filed under: academia, calculus, editorial, elementary mathematics, integrals, integration by substitution, pedagogy — Tags: — collegemathteaching @ 2:52 am

To make this list, the student has to do the integral correctly, but choose a painfully inefficient way of doing it.

On today’s final exam alone (from the most innocent to the most unusual and inefficient….)

1. $\int x\sqrt{x+1} dx =$
Ok, there are two standard methods. The first (and easiest) is to do the change of variable $u = x+1$ which transforms this to $\int (u-1)\sqrt{u} du$ which is very easy to do. The second method: parts, let $u = x, dv = \sqrt{x+1}dx$ etc. It is an algebraic exercise to see that one gets the same answer either way, though the answers look different at first.

One answer that I saw: $u = \sqrt{x+1}, u^2-1 = x, 2udu = dx$ which leads to $\int 2(u^2 -1)u^2 du$ which of course is doable. So this isn’t that far off of the easiest path, hence this entry only gets an “honorable mention”.

2. $\int \frac{1}{9-x^2} dx$. Of course, I thought that I was testing “partial fractions”, which leads to an answer of $\frac{1}{6}(ln|3+x| - ln|3-x|)+C$. Fair enough. But what did one of my students do? Well, this looked like trig substitution to him so: $x = 3sin(t), dx = 3cos(t)dt$ so this was transformed to $\int \frac{3cos(t)}{9cos^2(t)}dt = \frac{1}{3}\int sec(t) dt = \frac{1}{3}ln|sec(t) + tan(t)|+C$ which transforms back to $\frac{1}{3}ln|\frac{3}{\sqrt{9-x^2}} + \frac{x}{\sqrt{9-x^2}}| = \frac{1}{3}(ln|3+x| - \frac{1}{2}(ln|3-x|+ln|3+x|))+C$ which is, of course, the correct answer.

Yes, I know that there are domain issues with the trig substitution (that is, the integral exists for all values of $x \ne \pm 3$) but I wasn’t being that picky. Besides, this trig substitution is really setting $t = arcsin(\frac{x}{3})$ and we are really just choosing a convenient “branch” (meaning: viewing the domain “mod $(-1,1)$”) of the $arcsin(x)$ function.

3. $\int \frac{(arcsin(x))^2}{\sqrt{1-x^2}} dx$. Easy, you say? Why not let $u = arcsin(x), du = \frac{1}{\sqrt{1-x^2}}dx,$ etc. Yes, most did it that way. But then we had a couple do the following: $x = sin(t), dx = cos(t)dt, arcsin(x) = t$ which leads to $\int t^2 dt = \frac{t^3}{3} + C$ which transforms to $\frac{(arcsin(x))^3}{3} + C$ which is the correct answer. 🙂

Well, I tell my classes that “this isn’t a gymnastics meet; there are no ‘degree of difficulty’ points” but some insist on trying to entertain me anyway. 🙂

## April 18, 2012

### Construction of the antiderivative of sqrt(r^2 – x^2) without trig substitution

Filed under: calculus, integrals, integration by substitution, pedagogy, popular mathematics — collegemathteaching @ 1:15 am

The above figure shows the graph of $y = \sqrt{r^2 - x^2}$ for $r = 1$.
Let $A(t) = \int^t_0 \sqrt{r^2 - x^2} dx$ This gives the area bounded by the graphs of $y=0, x = t, x =0, y = \sqrt{r^2 - x^2}$.

We break this down into the wedge shaped area and the triangle.
Wedge shaped area: has area equal to $\frac{1}{2} \theta r^2$ where $\theta$ is the angle made by the $y$ axis and the radius.
Triangle shaped area: has area equal to $\frac{1}{2}xy = \frac{1}{2}t\sqrt{r^2 - t^2}$
But we can resolve $\theta = arcsin(\frac{t}{r})$ so the wedge shaped area is $\frac{1}{2}r^2arcsin(\frac{t}{r})$ hence:
$A(t) = \int^t_0 \sqrt{r^2 - x^2} dx = \frac{1}{2}r^2arcsin(\frac{t}{r}) + \frac{1}{2}t\sqrt{r^2 - t^2}$

We can verify this by computing $A'(t) = \frac{1}{2} \frac{1}{r}\frac{r^2}{\sqrt{ 1- (\frac{t}{r})^2}} + \frac{1}{2}\sqrt{r^2 - t^2}-t^2\frac{1}{2}\frac{1}{\sqrt{ r^2- t^2}}$
$= \frac{1}{2} \frac{r^2}{\sqrt{ r^2- t^2}}-t^2\frac{1}{2}\frac{1}{\sqrt{ r^2- t^2}} + \frac{1}{2}\sqrt{r^2 - t^2}$
$= \frac{1}{2} \frac{r^2 - t^2}{\sqrt{ r^2- t^2}} + \frac{1}{2}\sqrt{r^2 - t^2} = \sqrt{r^2 - t^2}$ as required.
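
One can also confirm $A(t)$ against direct numerical integration; a sketch in stdlib Python (the Simpson helper and the sample values are my own choices):

```python
import math

def simpson(f, lo, hi, n=2000):
    # composite Simpson's rule; n must be even
    h = (hi - lo) / n
    total = f(lo) + f(hi)
    for i in range(1, n):
        total += f(lo + i * h) * (4 if i % 2 else 2)
    return total * h / 3

def A(t, r):
    # the constructed antiderivative: wedge area plus triangle area
    return 0.5 * r * r * math.asin(t / r) + 0.5 * t * math.sqrt(r * r - t * t)

r = 2.0
for t in [0.5, 1.0, 1.9]:
    numeric = simpson(lambda x: math.sqrt(r * r - x * x), 0.0, t)
    print(t, A(t, r), numeric)  # the two columns match
```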

## February 19, 2012

### Divergent Improper Integrals: change of variables to an unbounded integrand.

Filed under: calculus, change of variable, improper integrals, integrals, integration by substitution, pedagogy — collegemathteaching @ 10:41 pm

This post was motivated by a student question: my student wanted help with the following problem:

$\int^{\infty}_{1} \frac{x^2}{\sqrt{x^3 +1}} dx$
Of course the idea is to do a substitution: $u = x^3 + 1$ which transforms the integral into $\frac{1}{3} \int^{\infty}_{2} \frac{1}{\sqrt{u}} du$ which diverges. So far, so good. But then I told him one of my calculus tips: “it is often a good idea to try to guess the answer ahead of time” and then pointed out that for large values of $x, \frac{x^2}{\sqrt{x^3 +1}} \approx \frac{x^2}{\sqrt{x^3}} = \sqrt{x}$ and of course $\int^{\infty}_1 \sqrt{x} dx$ diverges because the integrand does not go to zero (in fact, is unbounded!) as $x$ tends to infinity.

Then I realized that a change of variables had taken an unbounded function to a bounded one…though one which did not produce a convergent improper integral.

That led to the natural question: if one has an integrand which is positive but monotonically decreasing to zero on $[1, \infty )$, is there a change of variables which will change the integrand to either an unbounded function on $[1, \infty )$ or at least one that does not decrease to zero?

I admit that I have not answered this question yet, nor have I looked it up. But I can answer this question for a certain class of functions:

Theorem

Given $\int^{\infty}_1 \frac{1}{x^r} dx$
If $0 < r < 1$, let $k > \frac{1}{1-r}$. Then the change of variable $u = x^{\frac{1}{k}}$ transforms $\int^{\infty}_{1} \frac{1}{x^r} dx$ to $k \int^{\infty}_1 u^{k(1-r) -1} du$ and of course $u^{k(1-r) -1}$ is unbounded on $[1, \infty)$.

If $1 < r$ let $k < \frac{1}{1-r} < 0$. Then $\int^{\infty}_1 \frac{1}{x^r} dx$ is transformed into $|k|\int^{1}_{0} u^{-1+k(1-r)} du$ which is an integral of a bounded function over a bounded region.

In short, one class of functions whose improper integral diverges can be transformed to functions that tend to infinity and the class of functions whose integrals converge can be transformed into functions which are bounded over a bounded interval.

Here is such an example: We show the equivalent integrals $\int^{1.5}_{1} 3x^{\frac{1}{2}} dx$ and $\int^{(1.5)^3}_1 \frac{1}{\sqrt{u}} du$. The transformation is accomplished by using $u =x^3$. Note how the transformation stretches the interval of integration to account for the function “shrinkage”.

On the other hand, using $u^{-2}=x$ transforms $\int^{\infty}_1 \frac{1}{x^2} dx$ into $2\int^{1}_{0} u du$
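
Both example transformations check out numerically; a sketch in stdlib Python (the Simpson helper is my own):

```python
import math

def simpson(f, lo, hi, n=2000):
    # composite Simpson's rule; n must be even
    h = (hi - lo) / n
    total = f(lo) + f(hi)
    for i in range(1, n):
        total += f(lo + i * h) * (4 if i % 2 else 2)
    return total * h / 3

# u = x^3 stretches ∫_1^{1.5} 3 x^{1/2} dx into ∫_1^{1.5^3} u^{-1/2} du
lhs = simpson(lambda x: 3 * math.sqrt(x), 1.0, 1.5)
rhs = simpson(lambda u: 1 / math.sqrt(u), 1.0, 1.5**3)
print(lhs, rhs)  # the two values agree

# u^{-2} = x sends ∫_1^∞ x^{-2} dx (which equals 1) to 2 ∫_0^1 u du
print(2 * simpson(lambda u: u, 0.0, 1.0))  # ≈ 1
```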