College Math Teaching

July 10, 2020

This always bothered me about partial fractions…

Filed under: algebra, calculus, complex variables, elementary mathematics, integration by substitution — Tags: — collegemathteaching @ 12:03 am

Let’s look at an “easy” starting example: write \frac{1}{(x-1)(x+1)} = \frac{A}{x-1} + \frac{B}{x+1}
We know how that goes: multiply both sides by (x-1)(x+1) to get 1 = A(x+1) + B(x-1) and then since this must be true for ALL x , substitute x=-1 to get B = -{1 \over 2} and then substitute x = 1 to get A = {1 \over 2} . Easy-peasy.

BUT…why CAN you do such a substitution since the original domain excludes x =1, x = -1 ?? (and no, I don’t want to hear about residues and “poles of order 1”; this is calculus 2. )

Let's start with \frac{1}{(x-1)(x+1)} = \frac{A}{x-1} + \frac{B}{x+1} with the restricted domain, say x \neq 1 .
Now multiply both sides by x-1 and note that, with the restricted domain x \neq 1 , we have:

\frac{1}{x+1}  = A + \frac{B(x-1)}{x+1} . Both sides are equal on the domain (-1, 1) \cup (1, \infty) , and the limit of the left hand side is \lim_{x \rightarrow 1} {1 \over x+1 } = {1 \over 2} . So the right hand side has a limit which exists and is equal to A , and the result follows… the same argument works for the calculation of B as well.

Yes, no engineer will care about this. But THIS is the reason we can substitute the non-domain points.
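For anyone who wants to see this done by machine, here is a minimal sketch assuming Python with sympy (the library call is my addition, not part of the argument): apart() does the decomposition, and the limit computation mirrors the argument above.

```python
# Check of 1/((x-1)(x+1)) = (1/2)/(x-1) - (1/2)/(x+1); sympy assumed available.
import sympy as sp

x = sp.symbols('x')
expr = 1 / ((x - 1) * (x + 1))

# sympy's partial fraction routine:
print(sp.apart(expr, x))             # 1/(2*(x - 1)) - 1/(2*(x + 1))

# The limit argument from the post: multiply by (x-1) and let x -> 1 (similarly for B).
A = sp.limit(expr * (x - 1), x, 1)   # 1/2
B = sp.limit(expr * (x + 1), x, -1)  # -1/2
print(A, B)
```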

As an aside: if you are trying to solve something like {x^2 + 3x + 2 \over (x^2+1)(x-3) } = {Ax + B \over x^2+1 } + {C \over x-3 } , one can clear the denominators and, where appropriate, substitute x = i and compare real and imaginary parts… and yes, now you can use poles and residues.
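Here is a sketch of that aside in sympy (again, the library and the symbol setup are my additions, not part of the post): x = 3 isolates C , and x = i gives A and B from the real and imaginary parts.

```python
# Clearing denominators in (x^2+3x+2)/((x^2+1)(x-3)) = (Ax+B)/(x^2+1) + C/(x-3)
# gives x^2 + 3x + 2 = (Ax+B)(x-3) + C(x^2+1); a sympy sketch (assumed available).
import sympy as sp

x = sp.symbols('x')
A, B, C = sp.symbols('A B C', real=True)
lhs = x**2 + 3*x + 2
rhs = (A*x + B)*(x - 3) + C*(x**2 + 1)

# x = 3 kills the (Ax+B)(x-3) term and isolates C:
C_val = sp.solve(sp.Eq(lhs.subs(x, 3), rhs.subs(x, 3)), C)[0]      # C = 2

# x = i kills the C(x^2+1) term; match real and imaginary parts for A, B:
diff = sp.expand((lhs - (A*x + B)*(x - 3)).subs(x, sp.I))
AB = sp.solve([sp.re(diff), sp.im(diff)], [A, B])                  # A = -1, B = 0
print(C_val, AB)
```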

April 12, 2020

A tidbit with respect to Laplace transforms and sin(x)/x

Filed under: complex variables, integrals, Laplace transform, media — collegemathteaching @ 9:01 pm

I’ve discovered the channel “blackpenredpen” and it is delightful.
It is a nice escape into mathematics that, while far from research level, is “fun” and beyond mere fluff.

And that got me to thinking about \int^{\infty}_0 \frac{sin(x)}{x} dx . Yes, this can be done by residues.

But I’ll look at this with Laplace Transforms.

We know that \mathcal{L}(sin(t)) = \int^{\infty}_0 e^{-st}sin(t)dt = \frac{1}{s^2+1}
But note that the antiderivative of e^{-st} with respect to s is -\frac{1}{t}e^{-st} . That might not seem like much help, but then notice \int^{\infty}_0 e^{-st} ds = \frac{-1}{t}e^{-st}|^{\infty}_0 = \frac{1}{t} (assuming t > 0 ).

So why not: \int^{\infty}_0 \int^{\infty}_0 e^{-st}sin(t)dt ds = \int^{\infty}_0 \frac{1}{s^2+1} ds =arctan(s)|^{\infty}_0 = \frac{\pi}{2}
Now since the left hand side is just a double integral over the first quadrant (an infinite rectangle) the order of integration can be interchanged:
\int^{\infty}_0 \int^{\infty}_0 e^{-st}sin(t)dt ds = \int^{\infty}_0 \int^{\infty}_0 e^{-st}sin(t)ds dt  = \int^{\infty}_0 sin(t) \int^{\infty}_0 e^{-st}ds dt = \int^{\infty}_0 sin(t)\frac{1}{t} dt

and that is equal to \frac{\pi}{2} .

Note: \int_0^x\frac{sin(t)}{t} dt is sometimes called the Si(x) function
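A quick numerical sanity check of the \frac{\pi}{2} answer (this assumes numpy and scipy, which of course have nothing to do with the Laplace transform argument): Si(x) should creep toward \pi/2 as x grows.

```python
# Si(x) = int_0^x sin(t)/t dt should approach pi/2; numpy/scipy assumed available.
import numpy as np
from scipy.special import sici

for x in (10.0, 100.0, 10000.0):
    Si_x, _ = sici(x)                       # sici returns (Si(x), Ci(x))
    print(x, Si_x, abs(Si_x - np.pi / 2))   # the error decays roughly like 1/x
```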


April 6, 2020

How I am cutting corners in class

Filed under: complex variables, differential equations, Laplace transform — collegemathteaching @ 12:06 am

Ok, which is more difficult?

1. Solve x'' + 6x' + 13x = sin(t), x(0) = x'(0) = 0 using Laplace transforms or:

2. Given Y = \frac{1}{s^4 + 6s^3 +10s^2 + 6s + 9} find the inverse Laplace transform.

Clearly, 2 is harder and in texts I’ve used, we had to do those prior to doing 1. But, in a way, you have to do 2 in order to do 1:

(s^2 + 6s + 13)X(s) = \frac{1}{s^2+1}  \rightarrow X(s) = \frac{1}{(s^2 + 6s +13)(s^2 + 1)}

But this is already factored and students can be taught to “attempt to factor and if you can’t, complete the square” and this leads immediately to:

X(s) = \frac{1}{(s^2 + 6s + 9 +4)(s^2+1)} = \frac{As + B}{(s+3)^2+4} + \frac{Cs + D}{s^2 +1} which can be resolved by partial fractions.
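(If you want a machine check of this setup, here is a sympy sketch; the library calls are my addition and are not the by-hand method being taught.)

```python
# Check X(s) = 1/((s^2+6s+13)(s^2+1)): its partial fractions and its inverse transform.
# This assumes sympy; it is a verification, not the hand method described in the post.
import sympy as sp

s, t = sp.symbols('s t', positive=True)
X = 1 / ((s**2 + 6*s + 13) * (s**2 + 1))
print(sp.apart(X, s))                                   # the partial fraction form
print(sp.simplify(sp.inverse_laplace_transform(X, s, t)))
```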

In our “one less week plus online” semester, I will do much more of 1 than 2.

Of course, there is still some work to do; we still have to solve (As+B)(s^2+1) +(Cs+D)((s+3)^2 +4) =1

I will teach the “eliminate the term” method by using complex numbers:

Let s = i to get

(Ci+D)(12+6i) = 12D-6C +(12C + 6D)i = 1 \rightarrow D = -2C \rightarrow -30C = 1 \rightarrow C = -\frac{1}{30}, D = \frac{1}{15}
Let s = -3+2i

\rightarrow s^2+1 = 6-12i \rightarrow (-3A+B +2iA)(6-12i)

= -18A+6B +24A +36iA+12Ai-12Bi = 6A+6B +(48A-12B)i = 1

\rightarrow B=4A, 6A+6B= 1\rightarrow 30A=1

So we have A = \frac{1}{30}, B = \frac{2}{15}, C = -\frac{1}{30}, D = \frac{1}{15}
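As a check (sympy assumed; this just brute-forces the linear system rather than using the complex substitutions above):

```python
# Expand (As+B)(s^2+1) + (Cs+D)((s+3)^2+4) - 1 and set every coefficient to zero.
import sympy as sp

s = sp.symbols('s')
A, B, C, D = sp.symbols('A B C D')
expr = (A*s + B)*(s**2 + 1) + (C*s + D)*((s + 3)**2 + 4) - 1
eqs = sp.Poly(sp.expand(expr), s).all_coeffs()   # coefficients in s, highest degree first
print(sp.solve(eqs, [A, B, C, D]))               # {A: 1/30, B: 2/15, C: -1/30, D: 1/15}
```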

The particular solution part pulls back to -\frac{1}{30} cos(t) + \frac{1}{15} sin(t)

There is a bit of work to do for the other part:

To get the s+3 shift we have to add and subtract 3; this leads to:

\frac{A(s+3) + B-3A}{(s+3)^2+4} =\frac{A(s+3) }{(s+3)^2+4} + \frac{ B-3A}{(s+3)^2+4}

\frac{1}{30}\frac{(s+3) }{(s+3)^2+4} + \frac{1}{30}\frac{1}{2}\frac{2}{(s+3)^2+4} (adjusting the second term for the fact that 4 = 2^2 ).
And this part pulls back to \frac{1}{30}e^{-3t}cos(2t) +\frac{1}{60} e^{-3t}sin(2t)
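(And, as a final sanity check, the assembled solution really does solve the initial value problem; sympy assumed.)

```python
# Verify x'' + 6x' + 13x = sin(t) with x(0) = x'(0) = 0 for the assembled solution.
import sympy as sp

t = sp.symbols('t')
x = (-sp.Rational(1, 30)*sp.cos(t) + sp.Rational(1, 15)*sp.sin(t)
     + sp.Rational(1, 30)*sp.exp(-3*t)*sp.cos(2*t)
     + sp.Rational(1, 60)*sp.exp(-3*t)*sp.sin(2*t))
print(sp.simplify(x.diff(t, 2) + 6*x.diff(t) + 13*x))   # sin(t)
print(x.subs(t, 0), x.diff(t).subs(t, 0))               # 0 0
```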

Yeah, I know; if you are reading this, you already know this stuff, but I think using i helps speed things up a bit.

And yes, you could have just used the convolution integral and been done with it, though one would have had to use
\frac{1}{2}e^{-3t}sin(2t) * sin(t) =\int^t_0\frac{1}{2}e^{-3u}sin(2u)sin(t-u)du . (you remembered the 1/2, didn't you?)
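(Here is that convolution, checked symbolically; sympy assumed, and this is only a verification that it matches the partial-fractions answer above.)

```python
# The convolution form of the solution should match the partial-fractions answer.
import sympy as sp

t, u = sp.symbols('t u')
conv = sp.integrate(sp.Rational(1, 2)*sp.exp(-3*u)*sp.sin(2*u)*sp.sin(t - u), (u, 0, t))
x = (-sp.Rational(1, 30)*sp.cos(t) + sp.Rational(1, 15)*sp.sin(t)
     + sp.Rational(1, 30)*sp.exp(-3*t)*sp.cos(2*t)
     + sp.Rational(1, 60)*sp.exp(-3*t)*sp.sin(2*t))
print(sp.simplify(conv - x))    # 0
```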

June 18, 2018

And my “clever proof” is dashed

Filed under: complex variables, editorial, knot theory, numerical methods, topology — Tags: , — collegemathteaching @ 6:03 pm

It has been a while since I posted here, though I have been regularly posting in my complex variables class blog last semester.

And for those who like complex variables and numerical analysis, this is an exciting, interesting development.

But as to the title of my post: I was working to finish up a proof that one kind of wild knot is not “equivalent” to a different kind of wild knot and I had developed a proof (so I think) that the complement of one knot contains an infinite collection of inequivalent tori (whose solid tori contain the knot non-trivially) whereas the other kind of knot can only have a finite number of such tori. I still like the proof.

But it turns out that there is already an invariant that does the trick nicely… hence I can shorten and simplify the paper.

But dang it… I liked my (now irrelevant to my intended result) result!

April 24, 2018

And I trolled my complex variables class

Filed under: advanced mathematics, analysis, class room experiment, complex variables — collegemathteaching @ 6:34 pm

One question on my last exam: find the Laurent series for \frac{1}{z + 2i} centered at z = -2i which converges on the punctured disk |z+2i| > 0 . And yes, about half the class missed it.

I am truly evil.

March 12, 2018

And I embarrass myself….integrate right over a couple of poles…

Filed under: advanced mathematics, analysis, calculus, complex variables, integrals — Tags: — collegemathteaching @ 9:43 pm

I didn’t have the best day Thursday; I was very sick (felt as if I had been in a boxing match… chills, aches, etc.) but was good to go on Friday (no cough, etc.).

So I walk into my complex variables class seriously underprepared for the lesson but decide to tackle the integral

\int^{\pi}_0 \frac{1}{1+sin^2(t)} dt

Of course, you know the easy way to do this, right?

\int^{\pi}_0 \frac{1}{1+sin^2(t)} dt =\frac{1}{2}  \int^{2\pi}_0 \frac{1}{1+sin^2(t)} dt and evaluate the latter integral as follows:

sin(t) = \frac{1}{2i}(z-\frac{1}{z}), dt = \frac{dz}{iz} (this follows from restricting z to the unit circle |z| =1 and setting z = e^{it} \rightarrow dz = ie^{it}dt ); one then obtains a rational function of z which has isolated poles inside (and off of) the unit circle and uses the residue theorem to evaluate.

So 1+sin^2(t) \rightarrow 1+\frac{-1}{4}(z^2 -2 + \frac{1}{z^2}) = \frac{1}{4}(-z^2 + 6 -\frac{1}{z^2}) And then the integral is transformed to:

\frac{1}{2}\frac{1}{i}(-4)\int_{|z|=1}\frac{dz}{z^3 -6z +\frac{1}{z}} =2i \int_{|z|=1}\frac{zdz}{z^4 -6z^2 +1}

Now the denominator factors: (z^2 -3)^2 -8  which means z^2 = 3 - \sqrt{8}, z^2 = 3+ \sqrt{8} but only the roots z = \pm \sqrt{3 - \sqrt{8}} lie inside the unit circle.
Let w =  \sqrt{3 - \sqrt{8}}

Write: \frac{z}{z^4 -6z^2 +1} = \frac{\frac{z}{z^2 -(3 + \sqrt{8})}}{(z-w)(z+w)}

Now calculate: \frac{\frac{w}{w^2 -(3 + \sqrt{8})}}{2w} = \frac{1}{2} \frac{-1}{2 \sqrt{8}} and \frac{\frac{-w}{w^2 -(3 + \sqrt{8})}}{-2w} = \frac{1}{2} \frac{-1}{2 \sqrt{8}}

Adding we get \frac{-1}{2 \sqrt{8}} , so by the residue theorem 2i \int_{|z|=1}\frac{zdz}{z^4 -6z^2 +1} = 2i \cdot 2 \pi i \cdot \frac{-1}{2 \sqrt{8}} = \frac{2 \pi}{\sqrt{8}}=\frac{\pi}{\sqrt{2}}
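(A quick numerical check of that answer, for the skeptical; numpy and scipy assumed.)

```python
# Numerically integrate 1/(1+sin^2(t)) on [0, pi] and compare with pi/sqrt(2).
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda t: 1.0 / (1.0 + np.sin(t)**2), 0.0, np.pi)
print(val, np.pi / np.sqrt(2))   # both approximately 2.2214415
```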

Ok…that is fine as far as it goes and correct. But what stumped me: suppose I did not evaluate \int^{2\pi}_0 \frac{1}{1+sin^2(t)} dt and divide by two but instead just went with:

\int^{\pi}_0 \frac{1}{1+sin^2(t)} dt \rightarrow 4i \int_{\gamma}\frac{zdz}{z^4 -6z^2 +1} where \gamma is the upper half of |z| = 1 ? Well, \frac{z}{z^4 -6z^2 +1} has a primitive away from those poles so isn’t this just 4i \int^{-1}_{1}\frac{zdz}{z^4 -6z^2 +1} , right?

So why not just integrate along the x-axis to obtain 4i \int^{-1}_{1}\frac{xdx}{x^4 -6x^2 +1} = 0 because the integrand is an odd function?

This drove me crazy. Until I realized…the poles….were…on…the…real…axis. ….my goodness, how stupid could I possibly be???

To the student who might not have followed my point: let \gamma be the upper half of the circle |z|=1 taken in the standard direction and \int_{\gamma} \frac{1}{z} dz = i \pi if you do this properly (hint: set z(t) = e^{it}, dz = ie^{it}dt, t \in [0, \pi] ). Now attempt to integrate from 1 to -1 along the real axis. What goes wrong? What goes wrong is exactly what I missed in the above example.
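(If you want to see the trap numerically: the roots of z^4 - 6z^2 + 1 that lie inside the unit circle are real, so the segment [-1,1] runs right through them; numpy assumed.)

```python
# The poles inside the unit circle sit on the real axis at +/- sqrt(3 - sqrt(8)).
import numpy as np

roots = np.roots([1, 0, -6, 0, 1])   # roots of z^4 - 6z^2 + 1
print(np.sort(roots))                # approx [-2.414, -0.414, 0.414, 2.414]
print(np.sqrt(3 - np.sqrt(8)))       # approx 0.41421356
```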

February 11, 2018

Posting went way down in 2017

Filed under: advanced mathematics, complex variables, editorial — collegemathteaching @ 12:05 am

I only posted 3 times in 2017. There are many reasons for this; one reason is the teaching load, the type of classes I was teaching, etc.

I spent some of the year creating a new course for the Business College; this is one that replaced the traditional “business calculus” class.

The downside: there is a lot of variation in that course; for example, one of my sections has 1/3 of the class having a math ACT score of under 20! And we have many who are one standard deviation higher than that.

But I am writing. Most of what I write this semester can be found at the class blog for our complex variables class.

Our class does not have analysis as a prerequisite so it is a challenge to make it a truly mathematical class while getting to the computationally useful stuff. I want the students to understand that this class is NOT merely “calculus with z instead of x” but I don’t want to blow them away with proofs that are too detailed for them.

The book I am using does a first pass at integration prior to getting to derivatives.

August 25, 2014

Fourier Transform of the “almost Gaussian” function with a residue integral

This is based on the lectures on the Fourier Transform by Brad Osgood from Stanford:

And here, F(f)(s) = \int^{\infty}_{-\infty} e^{-2 \pi i st} f(t) dt provided the integral converges.

The “almost Gaussian” integrand is f(t) = e^{-\pi t^2} ; one can check that \int^{\infty}_{-\infty} e^{-\pi t^2} dt = 1 . One way is to use the fact that \int^{\infty}_{-\infty} e^{-x^2} dx = \sqrt{\pi} and do the substitution x = \sqrt{\pi} t; of course one should be able to demonstrate the fact to begin with. (side note: a non-standard way involving symmetries and volumes of revolution discovered by Alberto Delgado can be found here)

So, during this lecture, Osgood shows that F(e^{-\pi t^2}) = e^{-\pi s^2} ; that is, this modified Gaussian function is “its own Fourier transform”.

I’ll sketch out what he did in the lecture at the end of this post. But just for fun (and to make a point) I’ll give a method that uses an elementary residue integral.

Both methods start by using the definition: F(s) = \int^{\infty}_{-\infty} e^{-2 \pi i ts} e^{-\pi t^2} dt

Method 1: combine the exponential functions in the integrand:

\int^{\infty}_{-\infty} e^{-\pi(t^2 +2  i ts)}  dt . Now complete the square to get: \int^{\infty}_{-\infty} e^{-\pi(t^2 +2  i ts-s^2)-\pi s^2}  dt

Now factor out the factor involving s alone and write as a square: e^{-\pi s^2}\int^{\infty}_{-\infty} e^{-\pi(t+is)^2}  dt

Now, make the substitution x = t+is, dx = dt to obtain:

e^{-\pi s^2}\int^{\infty+is}_{-\infty+is} e^{-\pi x^2}  dx

Now we show that the above integral is really equal to e^{-\pi s^2}\int^{\infty}_{-\infty} e^{-\pi x^2}  dx = e^{-\pi s^2} (1) = e^{-\pi s^2}

To show this, we integrate \int_{\gamma} e^{-\pi z^2} dz along the rectangular path \gamma with vertices -x, x, x+is, -x+is and let x \rightarrow \infty .

(Figure: the rectangular contour.)
Now the integral around the contour is 0 because e^{-\pi z^2} is analytic.

We wish to calculate the negative of the integral along the top boundary of the contour. Integrating along the bottom gives 1.
As far as the sides: if we fix s , then on the vertical sides z = \pm x + iy with 0 \leq y \leq s , so |e^{-\pi z^2}| = e^{-\pi(x^2 - y^2)} \leq e^{-\pi(x^2 - s^2)} , and this goes to zero as x \rightarrow \infty . So the integral along the vertical paths approaches zero, therefore the integrals along the top and bottom contours agree in the limit and the result follows.
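(For those who like numerical evidence: here is a quick check that the transform of e^{-\pi t^2} really is e^{-\pi s^2} ; numpy and scipy assumed, and the split into real and imaginary parts is just to keep quad happy.)

```python
# Numerically compute F(s) = int e^{-2 pi i s t} e^{-pi t^2} dt and compare to e^{-pi s^2}.
import numpy as np
from scipy.integrate import quad

def F(s):
    re, _ = quad(lambda t: np.cos(2*np.pi*s*t) * np.exp(-np.pi*t**2), -np.inf, np.inf)
    im, _ = quad(lambda t: -np.sin(2*np.pi*s*t) * np.exp(-np.pi*t**2), -np.inf, np.inf)
    return re + 1j*im

for s in (0.0, 0.5, 1.0, 2.0):
    print(s, F(s), np.exp(-np.pi*s**2))   # imaginary parts come out at roundoff level
```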

Method 2: The method in the video
This uses “differentiation under the integral sign”, which we talk about here.

Start with F(s) = \int^{\infty}_{-\infty} e^{-2 \pi i ts} e^{-\pi t^2} dt and note \frac{dF}{ds} = \int^{\infty}_{-\infty} (-2 \pi i t) e^{-2 \pi i ts} e^{-\pi t^2} dt

Now we do integration by parts: u = e^{-2 \pi i ts}, dv = (-2 \pi i t)e^{-\pi t^2} \rightarrow v = i e^{-\pi t^2}, du = (-2 \pi i s)e^{-2 \pi i ts} and the integral becomes:

i e^{-\pi t^2} e^{-2 \pi i ts}|^{\infty}_{-\infty} - (i)(-2 \pi i s) \int^{\infty}_{-\infty} e^{-2 \pi i ts} e^{-\pi t^2} dt

Now the first term is zero for all values of s as t \rightarrow \pm \infty . The second term is merely:

-(2 \pi s) \int^{\infty}_{-\infty} e^{-2 \pi i ts} e^{-\pi t^2} dt = -(2 \pi s) F(s) .

So we have shown that \frac{d F}{ds} = (-2 \pi s)F which is a differential equation in s which has solution F = F_0 e^{- \pi s^2} (a simple separation of variables calculation will verify this). Now to solve for the constant F_0 note that F(0) = \int^{\infty}_{-\infty} e^{0} e^{-\pi t^2} dt = 1 .

The result follows.
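(The last two steps can be handed to sympy if you like; the dsolve call below is my addition, not part of the lecture.)

```python
# Solve F'(s) = -2*pi*s*F(s) with F(0) = 1; the answer should be exp(-pi*s^2).
import sympy as sp

s = sp.symbols('s')
F = sp.Function('F')
sol = sp.dsolve(sp.Eq(F(s).diff(s), -2*sp.pi*s*F(s)), F(s), ics={F(0): 1})
print(sol)    # Eq(F(s), exp(-pi*s**2))
```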

Now: which method was easier? The second required differential equations and differentiating under the integral sign; the first required an easy residue integral.

By the way: the video comes from an engineering class. Engineers need to know this stuff!

August 7, 2014

Letting complex algebra make our calculus lives easier

Filed under: basic algebra, calculus, complex variables — Tags: , — collegemathteaching @ 1:37 am

If one wants to use complex arithmetic in elementary calculus, one should, of course, verify a few things first. One might talk about elementary complex arithmetic and about complex valued functions of a real variable at an elementary level; e. g. f(x) + ig(x) . Then one might discuss Euler’s formula: e^{ix} = cos(x) + isin(x) and show that the usual laws of differentiation hold; i. e. show that \frac{d}{dx} e^{ix} = ie^{ix} and one might show that (e^{ix})^k = e^{ikx} for k an integer. The latter involves some dreary trigonometry but, by doing this ONCE at the outset, one is spared of having to repeat it later.

This is what I mean: suppose we encounter cos^n(x) where n is an even integer. I use an even integer power because \int cos^n(x) dx is more challenging to evaluate when n is even.

Coming up with the general formula can be left as an exercise in using the binomial theorem. But I’ll demonstrate what is going on when, say, n = 8 .

cos^8(x) = (\frac{e^{ix} + e^{-ix}}{2})^8 =

\frac{1}{2^8} (e^{i8x} + 8 e^{i7x}e^{-ix} + 28 e^{i6x}e^{-i2x} + 56 e^{i5x}e^{-i3x} + 70e^{i4x}e^{-i4x} + 56 e^{i3x}e^{-i5x} + 28e^{i2x}e^{-i6x} + 8 e^{ix}e^{-i7x} + e^{-i8x})

= \frac{1}{2^8}((e^{i8x}+e^{-i8x}) + 8(e^{i6x}+e^{-i6x}) + 28(e^{i4x}+e^{-i4x})+  56(e^{i2x}+e^{-i2x})+ 70) =

\frac{70}{2^8} + \frac{1}{2^7}(cos(8x) + 8cos(6x) + 28cos(4x) +56cos(2x))

So it follows reasonably easily that, for n even,

cos^n(x)  = \frac{1}{2^{n-1}}\Sigma^{\frac{n}{2}-1}_{k=0} \binom{n}{k}cos((n-2k)x)+\frac{\binom{n}{\frac{n}{2}}}{2^n}
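(A machine check of the n = 8 case, if you do not trust my binomial coefficients; sympy assumed. Rewriting both sides in terms of exponentials reduces the comparison to algebra.)

```python
# Check cos^8(x) = 70/2^8 + (1/2^7)(cos(8x) + 8 cos(6x) + 28 cos(4x) + 56 cos(2x)).
import sympy as sp

x = sp.symbols('x')
lhs = sp.cos(x)**8
rhs = (sp.Rational(70, 2**8)
       + sp.Rational(1, 2**7)*(sp.cos(8*x) + 8*sp.cos(6*x) + 28*sp.cos(4*x) + 56*sp.cos(2*x)))
print(sp.simplify(sp.powsimp(sp.expand((lhs - rhs).rewrite(sp.exp)))))   # 0
```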

So integration should be a breeze. Let's see about things like, say,

cos(kx)sin(nx) = \frac{1}{(2)(2i)} (e^{ikx}+e^{-ikx})(e^{inx}-e^{-inx}) =

\frac{1}{4i}((e^{i(k+n)x} - e^{-i(k+n)x}) + (e^{i(n-k)x}-e^{-i(n-k)x})) = \frac{1}{2}(sin((k+n)x) + sin((n-k)x))

Of course these are known formulas, but their derivation is relatively simple when one uses complex expressions.
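(And the product-to-sum identity above survives the same kind of check; sympy assumed, with k and n left symbolic.)

```python
# Check cos(kx) sin(nx) = (1/2)(sin((k+n)x) + sin((n-k)x)) for symbolic k, n.
import sympy as sp

x, k, n = sp.symbols('x k n')
lhs = sp.cos(k*x) * sp.sin(n*x)
rhs = sp.Rational(1, 2) * (sp.sin((k + n)*x) + sp.sin((n - k)*x))
print(sp.simplify(sp.powsimp(sp.expand((lhs - rhs).rewrite(sp.exp)))))   # 0
```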

August 6, 2014

Where “j” comes from

I laughed at what was said from 30:30 to 31:05 or so:

If you are wondering why your engineering students want to use j = \sqrt{-1} , it is because, in electrical engineering, i usually stands for “current”.

Though many of you know this, this lesson also gives an excellent reason to use the complex form of the Fourier series; e. g. if f is piecewise smooth and has period 1, write f(x) = \Sigma^{\infty}_{k=-\infty}c_k e^{i 2k\pi x} (usual abuse of the equals sign) rather than writing it out in sines and cosines. Of course, \overline{c_{-k}} = c_k if f is real valued.

How is this easier? Well, when you give a demonstration as to what the coefficients have to be (assuming that the series exists to begin with), the orthogonality condition is very easy to deal with. Calculate \int^1_0 e^{i 2k\pi x}e^{-i 2m\pi x} dx for k \ne m . There is nothing to it; easy integral. Of course, one has to demonstrate the validity of e^{ix} = cos(x) + isin(x) and show that the usual differentiation rules work ahead of time, but you need to do that only once.
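(The orthogonality integral itself is a one-liner in sympy; the library is assumed, and this is just the calculation described above, done for a couple of specific integers.)

```python
# int_0^1 exp(2 pi i k x) * conj(exp(2 pi i m x)) dx is 0 for k != m and 1 for k = m.
import sympy as sp

x = sp.symbols('x')

def inner(k, m):
    return sp.integrate(sp.exp(2*sp.pi*sp.I*k*x) * sp.exp(-2*sp.pi*sp.I*m*x), (x, 0, 1))

print(inner(3, 5))   # 0
print(inner(4, 4))   # 1
```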
