College Math Teaching

September 9, 2014

Chebyshev polynomials: a topological viewpoint

Chebyshev (or Tchebycheff) polynomials are a class of mutually orthogonal polynomials (with respect to the inner product: f \cdot g  = \int^1_{-1} \frac{1}{\sqrt{1 - x^2}} f(x)g(x) dx ) defined on the interval [-1, 1] . Yes, I realize that this is an improper integral, but it does converge in our setting.

These are used in approximation theory; here are a couple of uses:

1. The roots of the Chebyshev polynomial can be used to find the values of x_0, x_1, x_2, ...x_k \in [-1,1] that minimize the maximum of |(x-x_0)(x-x_1)(x-x_2)...(x-x_k)| over the interval [-1,1] . This is important in minimizing the error of the Lagrange interpolation polynomial.

2. The Chebyshev polynomial can be used to adjust an approximating Taylor polynomial P_n to increase its accuracy (away from the center of expansion) without increasing its degree.
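To make the first use concrete, here is a minimal numerical sketch (my own illustration, assuming numpy is available; it is not part of the post): compare the maximum of |(x-x_0)(x-x_1)...(x-x_k)| over [-1,1] when the nodes are the roots of a Chebyshev polynomial versus equally spaced points.

import numpy as np

k = 6
# Chebyshev nodes: the roots of T_{k+1}, namely cos((2j+1)pi/(2(k+1)))
cheb_nodes = np.cos((2*np.arange(k + 1) + 1) * np.pi / (2*(k + 1)))
equi_nodes = np.linspace(-1, 1, k + 1)

xs = np.linspace(-1, 1, 20001)

def max_node_poly(nodes):
    # maximum of |(x - x_0)(x - x_1)...(x - x_k)| over [-1, 1]
    vals = np.prod(xs[:, None] - nodes[None, :], axis=1)
    return np.abs(vals).max()

print(max_node_poly(cheb_nodes))  # 2^{-k} = 1/64 for k = 6
print(max_node_poly(equi_nodes))  # noticeably larger

The Chebyshev choice gives the smallest possible maximum, which is exactly 2^{-k} .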

The purpose of this note isn’t to discuss the utility but rather to discuss an interesting property that these polynomials have. The Wiki article on these polynomials is reasonably good for that purpose.

Let’s discuss the polynomials themselves. They are defined for all positive integers n as follows:

T_n(x) = cos(n \arccos(x)) . Now, it is an interesting exercise in trig identities to discover that these ARE polynomials to begin with; one shows this to be true for, say, n \in \{0, 1, 2\} by using angle addition formulas and the standard calculus resolution of things like sin(\arccos(x)) . Then one discovers the relation T_{n+1} = 2xT_n - T_{n-1} to calculate the rest.
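As a quick sanity check of the recurrence (a sketch of mine, assuming numpy; not part of the original discussion):

import numpy as np

def T(n, x):
    # build T_n(x) from T_0 = 1, T_1 = x and T_{n+1} = 2x T_n - T_{n-1}
    t_prev, t_curr = np.ones_like(x), x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t_curr = t_curr, 2*x*t_curr - t_prev
    return t_curr

x = np.linspace(-1, 1, 1001)
print(np.max(np.abs(T(5, x) - np.cos(5*np.arccos(x)))))  # essentially zero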

The cos(n \arccos(x)) definition allows for some properties to be calculated with ease: the zeros occur when \arccos(x) = \frac{\pi}{2n} + \frac{k \pi}{n} and the first derivative has zeros where \arccos(x) = \frac{k \pi}{n} ; these ALL correspond either to an endpoint max/min at x=1, x = -1 or to local maxes and mins whose y values are also \pm 1 . Here are the graphs of T_4(x), T_5(x) :

[graph of T_4(x) ]

[graph of T_5(x) ]

Now here is a key observation: the graph of T_n forms n spanning arcs in the square [-1, 1] \times [-1,1] and separates the square into n+1 regions. So, if there is some other function f whose graph is a connected, piecewise smooth arc that is transverse to the graph of T_n , that spans the square from x = -1 to x = 1 , and that stays within the square, then that graph must have at least n points of intersection with the graph of T_n .

Now suppose that f is a polynomial of degree n whose leading coefficient is 2^{n-1} and whose graph stays completely in the square [-1, 1] \times [-1,1] . Then the polynomial Q(x) = T_n(x) - f(x) has degree at most n-1 (because the leading terms cancel via the subtraction) but has at least n roots (the places where the graphs cross). A nonzero polynomial of degree at most n-1 cannot have n roots; hence Q is identically zero, and the only such polynomial is f(x) = T_n(x) .

This result is usually stated in the following way: T_n(x) is normalized to be monic (to have leading coefficient 1) by dividing the polynomial by 2^{n-1} , and then it is pointed out that the normalized \frac{1}{2^{n-1}}T_n(x) is the unique monic polynomial of degree n that stays within [-\frac{1}{2^{n-1}}, \frac{1}{2^{n-1}}] for all x \in [-1,1] . Every other monic polynomial of degree n has a graph that leaves that box at some point over [-1,1] .
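A hedged numerical illustration of the monic statement (my sketch, assuming numpy; 2^{1-n} T_n is compared with the obvious monic competitor x^n ):

import numpy as np

n = 6
x = np.linspace(-1, 1, 20001)
monic_cheb = np.cos(n*np.arccos(x)) / 2**(n - 1)  # 2^{1-n} T_n(x), a monic polynomial of degree n
print(np.max(np.abs(monic_cheb)))  # 1/2^{n-1} = 0.03125
print(np.max(np.abs(x**n)))        # 1: the graph of x^n leaves the box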

Of course, one can easily cook up analytic functions which don’t leave the box but these are not monic polynomials of degree n .

August 31, 2014

The convolution integral: do some examples in Calculus III or not?

For us, calculus III is the most rushed of the courses, especially if we start with polar coordinates. Getting to the “three integral theorems” is a real chore. (ok, Green’s theorem, the Divergence theorem and Stokes’ theorem are really just \int_{\Omega} d \sigma = \int_{\partial \Omega} \sigma , but that is the subject of another post)

But watching this lecture made me wonder: should I say a few words about how to calculate a convolution integral?

Note: I’ve discussed a type of convolution integral with regards to solving differential equations here.

In the context of Fourier Transforms, the convolution integral is defined as it was in analysis class: f*g = \int^{\infty}_{-\infty} f(x-t)g(t) dt . Typically, we insist that the functions be, say, L^1 and note that it is a bit of a chore to show that the convolution of two L^1 functions is L^1 ; one proves this via the Fubini-Tonelli Theorem.

(The straight out product of two L^1 functions need not be L^1 ; e.g, consider f(x) = \frac {1}{\sqrt{x}} for x \in (0,1] and zero elsewhere)

So, assuming that the integral exists, how do we calculate it? Easy, you say? Well, it can be, after practice.

But to test out your skills, let f(x) = g(x) be the function that is 1 for x \in [\frac{-1}{2}, \frac{1}{2}] and zero elsewhere. So, what is f*g ???

So, it is easy to see that f(x-t)g(t) only assumes the value of 1 on a specific region of the (x,t) plane and is zero elsewhere; this is just like doing an iterated integral of a two variable function; at least the first step. This is why it fits well into calculus III.

f(x-t)g(t) = 1 exactly on the region \{(x,t): -\frac{1}{2} \le x-t \le \frac{1}{2}, \ -\frac{1}{2} \le t \le \frac{1}{2}\} .

This region is the parallelogram with vertices at (-1, -\frac{1}{2}), (0, -\frac{1}{2}), (0, \frac{1}{2}), (1, \frac{1}{2}) .

[figure: the parallelogram region of integration in the (x,t) plane]

Now we see that we can’t do the integral in one step. So, the function we are integrating f(x-t)f(t) has the following description:

f(x-t)f(t)=\left\{\begin{array}{c} 1,x \in [-1,0], -\frac{1}{2} \le t \le \frac{1}{2}+x \\ 1 ,x\in [0,1], -\frac{1}{2}+x \le t \le \frac{1}{2} \\ 0 \text{ elsewhere} \end{array}\right.

So the convolution integral is \int^{\frac{1}{2} + x}_{-\frac{1}{2}} dt = 1+x for x \in [-1,0) and \int^{\frac{1}{2}}_{-\frac{1}{2} + x} dt = 1-x for x \in [0,1] .

That is, of course, the tent map that we described here. The graph is shown here:

[graph of the tent map]
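For a quick numerical sanity check (my sketch, assuming numpy; the post itself does this by hand), discretize the box function and use numpy.convolve; the Riemann-sum approximation traces out the tent:

import numpy as np

dt = 0.001
t = np.arange(-2, 2, dt)
box = np.where(np.abs(t) <= 0.5, 1.0, 0.0)      # the function that is 1 on [-1/2, 1/2]
conv = np.convolve(box, box, mode='same') * dt  # Riemann-sum approximation of f*g
tent = np.maximum(1 - np.abs(t), 0)             # 1+x on [-1,0], 1-x on [0,1]
print(np.max(np.abs(conv - tent)))              # small, on the order of dt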

So, it would appear to me that a good time to do a convolution exercise is right when we study iterated integrals; just tell the students that this is a case where one “stops before doing the outside integral”.

August 27, 2014

Nice collection of Math GIFs


No, these GIFs won’t explain things better than an instructor, but I found many of them to be fun.

Via: IFLS

August 25, 2014

How to succeed at calculus, and why it is worth it!


This post is intended to help the student who is willing to put time and effort into succeeding in a college calculus class.

Part One: How to Study

The first thing to remember is that most students will have to study outside of class in order to learn the material. There are those who pick things up right away, but these students tend to be the rare exception.

Think of it this way: suppose you want to learn to play the piano. A teacher can help show you how to play it and provide a practice schedule. But you won’t be any good if you don’t practice.

Suppose you want to run a marathon. A coach can help you with running form, provide workout schedules and provide feedback. But if you don’t run those workouts, you won’t build up the necessary speed and endurance for success.

The same principle applies for college mathematics classes; you really learn the material when you study it and do the homework exercises.

Here are some specific tips on how to study:

1. It is optimal if you can spend a few minutes scanning the text for the upcoming lesson. If you do this, you’ll be alert for the new concepts as they are presented and the concepts might sink in quicker.

2. There is some research that indicates:
a. It is better to have several shorter study sessions rather than one long one and
b. There is an optimal time delay between study sessions and the associated lecture.

Look at it this way: if you wait too long after the lesson to study it, you will have forgotten much of what was presented. If you study right away, then you really have, in essence, a longer classroom session. It is probably best to hit the material right when the initial memory starts to fade; this time interval will vary from individual to individual. For more on this and for more on learning for long term recall, see this article.

3. Learn the basic derivative formulas inside and out; that is, know what the derivatives of functions like sin(x), cos(x), tan(x), sec(x), arctan(x), arcsin(x), exp(x), ln(x) are on sight; you shouldn’t have to think about them. The same goes for the basic trig identities such as \sin ^{2}(x)+\cos ^{2}(x)=1 and \tan^{2}(x)+1 = \sec^{2}(x)

Why is this? The reason is that much of calculus (though not all!) boils down to pattern recognition.

For example, suppose you need to calculate:

\int \dfrac{(\arctan (x))^{5}}{1+x^{2}}dx=

If you don’t know your differentiation formulas, this problem is all but impossible. On the other hand, if you do know your differentiation formulas, then you’ll immediately recognize the arctan(x) and its derivative \dfrac{1}{1+x^{2}} and you’ll see that this problem is really the very easy problem \int u^{5}du .
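Spelled out: let u = \arctan(x) so that du = \dfrac{1}{1+x^{2}}dx ; then \int \dfrac{(\arctan (x))^{5}}{1+x^{2}}dx = \int u^{5}du = \frac{u^{6}}{6} + C = \frac{(\arctan(x))^{6}}{6} + C .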

But this all starts with having “automatic” knowledge of the derivative formulas.

Note: this learning is something your professor or TA cannot do for you!

4. Be sure to do some study problems with your notes and your book closed. If you keep flipping to your notes and book to do the homework problems, you won’t be ready for the exams. You have to take off the training wheels.
Try this; the difference will surprise you. There is also evidence that forcing yourself to recall the material FROM YOUR OWN BRAIN helps you learn the material! Give yourself frequent quizzes on what you are learning.

5. When reviewing for an exam, study the problems in mixed fashion. For example, get some note cards and write problems from the various sections on them (say, some from 3.1, some from 3.2, some from 3.3, and so on), mix the cards, then try the problems. If you just review section by section, you’ll go into each problem knowing what technique to use each time right from the start. Many times, half of the battle is knowing which technique to use with each problem; that is part of the course! Do the problems in mixed order.

If you find yourself whining and complaining “I don’t know where to start,” it means that you don’t know the material well enough. Remember that a trained monkey can repeat specific actions; you have to be a bit better than that!

6. Read the book, S L O W L Y, with pen and paper nearby. Make sure that you work through the examples in the text and that you understand the reasons for each step.

7. For the “more theoretical” topics, know some specific examples for specific theorems. Here is what I am talking about:

a. Intermediate value theorem: recall that if f(x)=\frac{1}{x} , then f(-1)=-1,f(1)=1 but there is no x such that f(x) = 0 . Why does this not violate the intermediate value theorem?

b. Mean value theorem: for the same f , note also that there is no c \in (-1,1) such that f'(c) = \frac{f(1)-f(-1)}{1-(-1)} = 1 . Why does this NOT violate the Mean Value Theorem?

c. Series: it is useful to know basic series such as those for exp(x), sin(x), cos(x) . It is also good to know some basic examples such as the geometric series, the divergent harmonic series \sum \frac{1}{k} and the conditionally convergent series \sum (-1)^{k}\frac{1}{k} .

d. Limit definition of derivative: be able to work a few basic examples of the derivative via the limit definition: f(x) = x^{n}, f(x) = \frac{1}{x}, f(x)=\sqrt{x} , and know why the derivatives of f(x) = |x| and f(x) = x^{1/3} do not exist at x = 0 .
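For instance, for f(x) = \sqrt{x} one computes \frac{\sqrt{x+h}-\sqrt{x}}{h} = \frac{(x+h)-x}{h(\sqrt{x+h}+\sqrt{x})} = \frac{1}{\sqrt{x+h}+\sqrt{x}} \rightarrow \frac{1}{2\sqrt{x}} as h \rightarrow 0 , while for f(x) = |x| the difference quotient \frac{|0+h|-|0|}{h} is +1 for h > 0 and -1 for h < 0 , so the limit (and hence the derivative) at x = 0 does not exist.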

Having some “template” example can help you master a theoretical concept.

Part II: Attitude
Your attitude will be very important.

1. Remember that your effort will be essential! Again, you can’t learn to run a marathon without getting off of the couch and making your muscles sore. Learning mathematics involves some frustration and, yes, at times, some tedium. Learning is fun OVERALL but it isn’t always fun at all times. You will encounter discomfort and unpleasantness at times.

2. Remember that winners look for ways to succeed; losers and whiners look for excuses for failure. You can always find those who will be willing to enable your underachievement. Instead, seek out those who bring out your best.

3. Success is NOT guaranteed; that is what makes success rewarding! Think of how good you’ll feel about yourself if you mastered something that seemed impossible to master at first. And yes, anyone who has achieved anything that is remotely difficult has taken some lumps and bruises along the way. You will NOT be spared these.

Remember that if you duck the calculus challenge, you are, in essence, slamming many doors of opportunity shut right from the get-go.

4. On the other hand, remember that Calculus (the first two semesters anyway) is a Freshman level class; exceptional mathematical talent is not a prerequisite for success. True, calculus is easy for some but that isn’t the point. Most reasonably intelligent people can have success, if they are willing to put forth the proper effort in the proper manner.

Just think of how good it will feel to succeed in an area that isn’t your strong suit!

Dinette set on calculus…


Note: if you haven’t followed Julie Larson’s comic strip Dinette Set, the characters featured in it are not, well, the world’s most intellectually minded characters (with the exception of Patty). :-)

[Dinette Set comic on calculus]

Ironically, I see such attitudes displayed by people…posting their thoughts on the internet via a computer or smart phone. The irony doesn’t even occur to them.

August 22, 2014

Almost ready to start the semester again…


[screen shot from 2014-08-22]

I am teaching two freshman sections…it is a breaking in process for them.

August 21, 2014

Calculation of the Fourier Transform of a tent map, with a calculus tip….

I’ve been following these excellent lectures by Professor Brad Osgood of Stanford. As an aside: yes, he is dynamite in the classroom, but there is probably a reason that Stanford is featuring him. :-)

And yes, his style is good for obtaining a feeling of camaraderie that is absent in my classroom; at least in the lower division “service” classes.

This lecture takes us from Fourier Series to Fourier Transforms. Of course, he admits that the transition here is really a heuristic trick with symbolism; it isn’t a bad way to initiate an intuitive feel for the subject though.

However, the point of this post is to offer an “algebra of calculus” trick for dealing with the sort of calculations that one might encounter.

By the way, if you say “hey, just use a calculator” you will be BANNED from this blog!!!! (just kidding…sort of. :-) )

So here is the deal: let f(x) represent the tent map: the support of f is [-1,1] and it has the following graph:

[graph of the tent map]

The formula is: f(x)=\left\{\begin{array}{c} x+1,x \in [-1,0) \\ 1-x ,x\in [0,1] \\ 0 \text{ elsewhere} \end{array}\right.

So, the Fourier Transform is F(f) = \int^{\infty}_{-\infty} e^{-2 \pi i st}f(t)dt = \int^0_{-1} e^{-2 \pi i st}(1+t)dt + \int^1_0e^{-2 \pi i st}(1-t)dt

Now, this is an easy integral to do, conceptually, but there is the issue of carrying constants around and being tempted to make “on the fly” simplifications along the way, thereby leading to irritating algebraic errors.

So my tip: just let a = -2 \pi i s and do the integrals:

\int^0_{-1} e^{at}(1+t)dt + \int^1_0e^{at}(1-t)dt and substitute and simplify later:

Now the integrals become: \int^{1}_{-1} e^{at}dt + \int^0_{-1}te^{at}dt - \int^1_0 te^{at} dt.
These are easy to do; the first is merely \frac{1}{a}(e^a - e^{-a}) and the next two have the same anti-derivative, which can be obtained by an “integration by parts” calculation: \frac{t}{a}e^{at} -\frac{1}{a^2}e^{at} ; evaluating the limits yields:

-\frac{1}{a^2}-(\frac{-1}{a}e^{-a} -\frac{1}{a^2}e^{-a}) - (\frac{1}{a}e^{a} -\frac{1}{a^2}e^a)+ (-\frac{1}{a^2})

Add the first integral and simplify and we get: -\frac{1}{a^2}(2 - (e^{-a} + e^{a})) . NOW use a = -2\pi i s and we have that the integral is \frac{1}{4 \pi^2 s^2}(2 -(e^{2 \pi i s} + e^{-2 \pi i s})) = \frac{1}{4 \pi^2 s^2}(2 - 2cos(2 \pi s)) by Euler’s formula.

Now we need some trig to get this into a form that is “engineering/scientist” friendly; here we turn to the formula sin^2(x) = \frac{1}{2}(1-cos(2x)) , so 2 - 2cos(2 \pi s) = 4sin^2(\pi s) and our answer is \frac{sin^2( \pi s)}{(\pi s)^2} = (\frac{sin(\pi s)}{\pi s})^2 , which is often denoted (sinc(s))^2 , as the “normalized” sinc(x) function is given by \frac{sin(\pi x)}{\pi x} (we want the function to have zeros at the nonzero integers and to “equal” one at x = 0 ; remember that famous limit!).

So, the point is that using a made the algebra a whole lot easier.
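If you want to check the final formula numerically (a sketch of mine, assuming numpy; not part of the lecture), approximate the transform integral directly and compare with (sinc(s))^2 :

import numpy as np

def tent(t):
    return np.maximum(1 - np.abs(t), 0)

dt = 0.0005
t = np.arange(-1, 1, dt)

for s in [0.3, 0.75, 1.5]:
    F = np.sum(np.exp(-2j*np.pi*s*t) * tent(t)) * dt  # Riemann sum for the transform integral
    expected = (np.sin(np.pi*s) / (np.pi*s))**2       # (sinc(s))^2
    print(s, abs(F - expected))                       # small for each s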

Now, if you are shaking your head and muttering about how this calculation was crude and that one usually uses “convolution” instead: this post is probably too elementary for you. :-)

August 7, 2014

Engineers need to know this stuff part II

This is a 50 minute lecture in an engineering class; one can easily see the mathematical demands put on the students. Many of the seemingly abstract facts from calculus (differentiability, continuity, convergence of a sequence of functions) are heavily used. Of particular interest to me are the remarks from 45 to 50 minutes into the video:

Here is what is going on: if we have a sequence of functions f_n defined on some interval [a,b] , if f is defined on [a,b] , and if lim_{n \rightarrow \infty} \int^b_a (f_n(x) - f(x))^2 dx =0 , then we say that f_n \rightarrow f “in mean” (or “in the L^2 norm”). Roughly speaking, as n grows, the area between the graphs of f_n and f gets arbitrarily small.

However this does NOT mean that f_n converges to f pointwise!

If that seems strange: remember that the distance between the graphs can stay fixed over a set of decreasing measure.

Here is an example that illustrates this: consider the intervals [0, \frac{1}{2}], [\frac{1}{2}, \frac{5}{6}], [\frac{3}{4}, 1], [\frac{11}{20}, \frac{3}{4}],... The intervals have length \frac{1}{2}, \frac{1}{3}, \frac{1}{4},... and start by moving left to right on [0,1] and then moving right to left and so on. They “dance” on [0,1] . Let f_n be the function that is 1 on the n-th interval and 0 off of it. Then clearly lim_{n \rightarrow \infty} \int^1_0 (f_n(x) - 0)^2 dx =0 , as the interval over which we are integrating is shrinking to zero, but this sequence of functions doesn’t converge pointwise ANYWHERE on [0,1] . Of course, a subsequence of functions converges pointwise.
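Here is a hedged numerical sketch of the idea (mine, assuming numpy; it uses an interval of length 1/n that marches around [0,1] and wraps past 1, which differs slightly from the exact intervals above):

import numpy as np

x = np.linspace(0, 1, 10001)
left = 0.0
norms = []
hits = []   # the values of n for which f_n(1/2) = 1
for n in range(2, 500):
    length = 1.0 / n
    b = left + length
    # indicator of the marching interval, with wrap-around past 1
    f_n = (((x >= left) & (x < b)) | (x < b - 1)).astype(float)
    norms.append(np.sqrt(np.mean(f_n**2)))  # approximates the L^2 norm on [0,1]
    if f_n[5000] == 1.0:                    # the value f_n(1/2)
        hits.append(n)
    left = (left + length) % 1.0

print(norms[-1])   # about 1/sqrt(499): the L^2 norms tend to 0
print(hits[-5:])   # but f_n(1/2) = 1 keeps happening, so there is no pointwise limit at x = 1/2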

Letting complex algebra make our calculus lives easier


If one wants to use complex arithmetic in elementary calculus, one should, of course, verify a few things first. One might talk about elementary complex arithmetic and about complex valued functions of a real variable at an elementary level; e. g. f(x) + ig(x) . Then one might discuss Euler’s formula: e^{ix} = cos(x) + isin(x) and show that the usual laws of differentiation hold; i. e. show that \frac{d}{dx} e^{ix} = ie^{ix} and one might show that (e^{ix})^k = e^{ikx} for k an integer. The latter involves some dreary trigonometry but, by doing this ONCE at the outset, one is spared of having to repeat it later.

This is what I mean: suppose we encounter cos^n(x) where n is an even integer. I use an even integer power because \int cos^n(x) dx is more challenging to evaluate when n is even.

Coming up with the general formula can be left as an exercise in using the binomial theorem. But I’ll demonstrate what is going on when, say, n = 8 .

cos^8(x) = (\frac{e^{ix} + e^{-ix}}{2})^8 =

\frac{1}{2^8} (e^{i8x} + 8 e^{i7x}e^{-ix} + 28 e^{i6x}e^{-i2x} + 56 e^{i5x}e^{-i3x} + 70e^{i4x}e^{-i4x} + 56 e^{i3x}e^{-i5x} + 28e^{i2x}e^{-i6x} + 8 e^{ix}e^{-i7x} + e^{-i8x})

= \frac{1}{2^8}((e^{i8x}+e^{-i8x}) + 8(e^{i6x}+e^{-i6x}) + 28(e^{i4x}+e^{-i4x})+  56(e^{i2x}+e^{-i2x})+ 70) =

\frac{70}{2^8} + \frac{1}{2^7}(cos(8x) + 8cos(6x) + 28cos(4x) +56cos(2x))

So it follows reasonably easily that, for n even,

cos^n(x)  = \frac{1}{2^{n-1}}\Sigma^{\frac{n}{2}-1}_{k=0} \binom{n}{k}cos((n-2k)x)+\frac{\binom{n}{\frac{n}{2}}}{2^n}
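A quick numerical check of this formula (my sketch, assuming numpy; Python’s math.comb supplies the binomial coefficients):

import numpy as np
from math import comb

n = 8
x = np.linspace(0, 2*np.pi, 1000)
rhs = sum(comb(n, k) * np.cos((n - 2*k)*x) for k in range(n//2)) / 2**(n - 1) + comb(n, n//2) / 2**n
print(np.max(np.abs(np.cos(x)**n - rhs)))  # essentially zero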

So integration should be a breeze. Let’s see about things like, say,

cos(kx)sin(nx) = \frac{1}{(2)(2i)} (e^{ikx}+e^{-ikx})(e^{inx}-e^{-inx}) =

\frac{1}{4i}((e^{i(k+n)x} - e^{-i(k+n)x}) + (e^{i(n-k)x}-e^{-i(n-k)x})) = \frac{1}{2}(sin((k+n)x) + sin((n-k)x))

Of course these are known formulas, but their derivation is relatively simple when one uses complex expressions.

August 6, 2014

Where “j” comes from

I laughed at what was said from 30:30 to 31:05 or so:

If you are wondering why your engineering students want to use j = \sqrt{-1} , it is because, in electrical engineering, i usually stands for “current”.

Though many of you know this, this lesson also gives an excellent reason to use the complex form of the Fourier series; e. g. if f is piecewise smooth and has period 1, write f(x) = \Sigma^{\infty}_{k=-\infty}c_k e^{i 2k\pi x} (usual abuse of the equals sign) rather than writing it out in sines and cosines. Of course, \overline{c_{-k}} = c_k if f is real valued.

How is this easier? Well, when you give a demonstration as to what the coefficients have to be (assuming that the series exists to begin with), the orthogonality condition is very easy to deal with. Calculate \int^1_0 e^{i 2k\pi x}e^{-i 2m\pi x} dx : it is zero for k \ne m and one for k = m , which immediately gives c_m = \int^1_0 f(x) e^{-i 2m\pi x} dx . There is nothing to it; easy integral. Of course, one has to demonstrate the validity of e^{ix} = cos(x) + isin(x) and show that the usual differentiation rules work ahead of time, but you need to do that only once.
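A small numerical check of the orthogonality (my sketch, assuming numpy):

import numpy as np

x = np.arange(0.0, 1.0, 1e-5)   # uniform grid on [0,1), so .mean() is a Riemann sum

def inner(k, m):
    # approximates the integral over [0,1] of e^{i 2k pi x} times the conjugate of e^{i 2m pi x}
    vals = np.exp(2j*np.pi*k*x) * np.conj(np.exp(2j*np.pi*m*x))
    return vals.mean()

print(abs(inner(3, 5)))  # essentially 0: orthogonal when k is not m
print(abs(inner(4, 4)))  # essentially 1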
