College Math Teaching

March 26, 2020

My review lessons online

Filed under: applications of calculus, COVID19, differential equations, linear algebra — collegemathteaching @ 11:04 am

We had an extra week to prepare to teach online, so I put notes from the previous few weeks up in blog form:

https://bradleylinearalgebra2020.wordpress.com/blog-2/

https://bradleyappliedcalculus.wordpress.com/blog-2/

https://bradleymth224differentialequations.wordpress.com/blog-2/

That was quite a bit of work, but I did find some cool videos out there and embedded them in my lessons.

March 24, 2020

My teaching during the COVID-19 Pandemic

My university has moved to “online only” for the rest of the semester. I realize that most of us are in the same boat.
Fortunately, for now, I’ve got some academic freedom to make changes and I am taking a different approach than some.

Some appear to be wanting to keep things “as normal as possible.”

For me: the pandemic changes everything.

Yes, there are those on the beach in Florida. That isn’t most of my students; it could be some of them.

So, here is what will be different for me:
1) I am making exams “open book, open note” and take-home: students get the exam, have several days to work on it, and turn it back in, like a project.
Why? Fluid situations, living at home with family, etc. might make it difficult to insist “you HAVE to take it now…during period X.” This is NOT an online class that they signed up for.
Yes, it is possible that some cheat; that can’t be helped.

Also, studying will be difficult to do. So a relatively long exam, designed like a programmed text, is, well, a way of getting them to study WHILE DOING THE EXAM. No, it is not the same as “study to put it in your brain and then show you know it” at exam time. But I feel that this gets them to learn while under this stressful situation; they take time aside to look up and think about the material. The exam, in a way, is working through a test bank.

2) Previously, I thought of testing as serving two purposes: a) encouraging students to review and learn and b) distinguishing those with more knowledge from those with less. Now, tests are mainly there to get the students to learn; of course diligence will still be rewarded, but the makeup of the “who does well” and “who does not” groups might change a little.

3) Quiz credit: I was able to sign up for WebAssign, and its quizzes will be “extra credit” that builds on the students’ existing grades. This is a “carrot only” approach.

4) Most of the lesson delivery will be a polished set of typeset notes with videos. My classes will be a combination of “live chat” and video where I will discuss said notes and give tips on how to do problems. I’ll have office hours: some combination of Zoom meetings which people can join, plus e-mail to set up “off hours” meetings, whether via chat, Zoom, or even an exchange of e-mails.

We shall see how it works; I have a plan and think I can execute it, but I make no guarantee of the results.
Yes, there are polished online classes, but those are designed to be done deliberately. What we have here is something made up at the last minute for students who did NOT sign up for it and are living in an emergency situation.

February 18, 2019

An easy fact about least squares linear regression that I overlooked

The background: I was making notes about the ANOVA table for “least squares” linear regression and reviewing how to derive the “sum of squares” equality:

Total Sum of Squares = Sum of Squares Regression + Sum of Squares Error or…

If y_i is the observed response, \bar{y} the sample mean of the responses, and \hat{y}_i are the responses predicted by the best fit line (simple linear regression here) then:

\sum (y_i - \bar{y})^2 = \sum (\hat{y}_i -\bar{y})^2+ \sum (y_i - \hat{y}_i)^2 (where each sum is \sum^n_{i=1} for the n observations. )

Now for each i it is easy to see that (y_i - \bar{y}) = (\hat{y}_i -\bar{y}) + (y_i - \hat{y}_i) , but the equation still holds when these terms are squared, provided you sum over all of the observations!

And it was going over the derivation of this that reminded me about an important fact about least squares that I had overlooked when I first presented it.

If you go into the derivation and calculate: \sum ( (\hat{y}_i -\bar{y}) + (y_i - \hat{y}_i))^2 = \sum  ((\hat{y}_i -\bar{y})^2 + (y_i - \hat{y}_i)^2 +2 (\hat{y}_i -\bar{y})(y_i - \hat{y}_i))

This equals \sum (\hat{y}_i -\bar{y})^2 + \sum (y_i - \hat{y}_i)^2 + 2\sum (\hat{y}_i -\bar{y})(y_i - \hat{y}_i) and the proof is completed by showing that:

\sum (\hat{y}_i -\bar{y})(y_i - \hat{y}_i) = \sum \hat{y}_i(y_i - \hat{y}_i) - \sum \bar{y}(y_i - \hat{y}_i) and that BOTH of the sums on the right are zero.

But why?

Let’s go back to how the least squares equations were derived:

Given that \hat{y}_i = \hat{\beta}_0 + \hat{\beta}_1 x_i

\frac{\partial}{\partial \hat{\beta}_0} \sum (\hat{y}_i -y_i)^2 = 2\sum (\hat{y}_i -y_i) =0 yields that \sum (\hat{y}_i -y_i) =0 . That is, under the least squares equations, the sum of the residuals is zero.

Now \frac{\partial}{\partial \hat{\beta}_1} \sum (\hat{y}_i -y_i)^2 = 2\sum x_i(\hat{y}_i -y_i) =0 which yields that \sum x_i(\hat{y}_i -y_i) =0

That is, the sum of the residuals, weighted by the corresponding x values (inputs), is also zero. Note: this holds with multiple linear regression as well.

Really, that is what the least squares process does: it sets the sum of the residuals and the sum of the weighted residuals equal to zero.

Yes, there is a linear algebra formulation of this.
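If you want to see these two facts with numbers, here is a minimal Matlab/Octave sketch (the data are made up purely for illustration; the backslash operator computes the least squares coefficients):

% Made-up data: the least squares fit forces the sum of residuals
% and the sum of x-weighted residuals to be zero.
x = (1:10)';                    % hypothetical inputs
y = 3 + 2*x + 0.5*randn(10,1);  % hypothetical noisy responses
A = [ones(10,1) x];             % design matrix for beta_0 + beta_1*x
beta = A\y;                     % least squares coefficients
yhat = A*beta;                  % fitted values
sum(y - yhat)                   % ~ 0 (up to round off)
sum(x .* (y - yhat))            % ~ 0 as well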

Anyhow returning to our sum:

\sum \bar{y}(y_i - \hat{y}_i) = \bar{y}\sum(y_i - \hat{y}_i) = 0 . Now for the other term:

\sum \hat{y}_i(y_i - \hat{y}_i) = \sum (\hat{\beta}_0+\hat{\beta}_1 x_i)(y_i - \hat{y}_i) = \hat{\beta}_0\sum (y_i - \hat{y}_i) + \hat{\beta}_1 \sum x_i (y_i - \hat{y}_i)

Now \hat{\beta}_0\sum (y_i - \hat{y}_i) = 0 as it is a constant multiple of the sum of the residuals, and \hat{\beta}_1 \sum x_i (y_i - \hat{y}_i) = 0 as it is a constant multiple of the sum of the residuals weighted by the x_i .

That was pretty easy, wasn’t it?

But the role that the basic least squares equations played in this derivation went right over my head!

October 7, 2016

Now what is a linear transformation anyway?

Filed under: linear algebra, pedagogy — collegemathteaching @ 9:43 pm

Yes, I know, a linear transformation L: V \rightarrow W is a function between vector spaces such that L(u \oplus v) = L(u) \oplus L(v) and L(a \odot u) = a \odot L(u) , where the vector space operations of vector addition and scalar multiplication occur in their respective spaces.

Previously, I talked about this classical example:

Consider the set R^+ = \{x| x > 0 \} endowed with the “vector addition” x \oplus y = xy , where xy represents ordinary real number multiplication, and “scalar multiplication” r \odot x = x^r , where r \in R and x^r is ordinary exponentiation. It is clear that \{R^+, R | \oplus, \odot \} is a vector space with the vector 1 being the “additive” identity, the scalar 0 playing the role of the scalar zero, and the scalar 1 the multiplicative identity. Verifying the various vector space axioms is a fun, if trivial, exercise.

Then L(x) = ln(x) is a vector space isomorphism between R^+ and R (the usual addition and scalar multiplication) and of course, L^{-1}(x) = exp(x) .

Can we expand this concept any further?

Question: (I have no idea if this has been answered or not): given any, say, non-compact, connected subset of R, is it possible to come up with vector space operations (vector addition, scalar multiplication) so as to make a given, say, real valued, continuous one to one function into a linear transformation?

The answer in some cases is “yes.”

Consider L: R^+ \rightarrow R^+ given by L(x) = x^r , r any real number.

Exercise 1: L is a linear transformation.

Exercise 2: If we have ANY linear transformation L: R^+ \rightarrow R^+ , let L(e) = e^a .
Then L(x) = L(e^{ln(x)}) = L(e)^{ln(x)} = (e^a)^{ln(x)} = x^a .

Exercise 3: we know that all linear transformations L: R \rightarrow R are of the form L(x) = ax . These can be factored through:

x \rightarrow e^x \rightarrow (e^x)^a = e^{ax} \rightarrow ln(e^{ax}) = ax .
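A quick numerical sanity check of Exercises 1 and 3 (a Matlab/Octave sketch; the value of a and the test points are arbitrary):

% Check, at arbitrary test values, that L(x) = x^a respects the exotic
% operations (vector "addition" is multiplication, scalar "multiplication"
% is exponentiation), and that t -> a*t factors through exp and ln.
a = 2.5; x = 3; y = 7; r = 1.7; t = 0.4;
(x*y)^a - (x^a)*(y^a)      % L(x (+) y) = L(x) (+) L(y): ~ 0
(x^r)^a - (x^a)^r          % L(r (.) x) = r (.) L(x): ~ 0
log(exp(t)^a) - a*t        % t -> e^t -> (e^t)^a -> ln gives a*t: ~ 0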

So this isn’t exactly anything profound, but it is fun! And perhaps it might be a way to introduce commutative diagrams.

October 4, 2016

Linear Transformation or not? The vector space operations matter.

Filed under: calculus, class room experiment, linear algebra, pedagogy — collegemathteaching @ 3:31 pm

This is nothing new; it is an example for undergraduates.

Consider the set R^+ = \{x| x > 0 \} endowed with the “vector addition” x \oplus y = xy , where xy represents ordinary real number multiplication, and “scalar multiplication” r \odot x = x^r , where r \in R and x^r is ordinary exponentiation. It is clear that \{R^+, R | \oplus, \odot \} is a vector space with the vector 1 being the “additive” identity, the scalar 0 playing the role of the scalar zero, and the scalar 1 the multiplicative identity. Verifying the various vector space axioms is a fun, if trivial, exercise.

Now consider the function L(x) = ln(x) with domain R^+ . (here: ln(x) is the natural logarithm function). Now ln(xy) = ln(x) + ln(y) and ln(x^a) = aln(x) . This shows that L:R^+ \rightarrow R (the range has the usual vector space structure) is a linear transformation.

What is even better: ker(L) =\{x|ln(x) = 0 \} = \{1 \} , and since 1 is the “zero vector” of R^+ , the kernel is trivial and L is one to one (of course, we know that from calculus).

And, given z \in R, ln(e^z) = z so L is also onto (we knew that from calculus or precalculus).

So, R^+ = \{x| x > 0 \} is isomorphic to R with the usual vector operations, and of course the inverse linear transformation is L^{-1}(y) = e^y .
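For students who like to see numbers before the proof, a throwaway check (Matlab/Octave, arbitrary test values) that ln converts the exotic operations into the usual ones:

% ln turns the exotic operations on R^+ into the usual ones on R.
x = 2; y = 5; r = 3;            % arbitrary test values
log(x*y) - (log(x) + log(y))    % L(x (+) y) = L(x) + L(y): ~ 0
log(x^r) - r*log(x)             % L(r (.) x) = r*L(x): ~ 0 (up to round off)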

Upshot: when one asks “is F a linear transformation or not”, one needs information about not only the domain set but also the vector space operations.

October 3, 2016

Lagrange Polynomials and Linear Algebra

Filed under: algebra, linear algebra — collegemathteaching @ 9:24 pm

We are discussing abstract vector spaces in linear algebra class. So, I decided to do an application.

Let P_n denote the polynomials of degree n or less; the coefficients will be real numbers. Clearly P_n is n+1 dimensional and \{1, x, x^2, ...x^n \} constitutes a basis.

Now there are many reasons why we might want to find a degree n polynomial that takes on certain values at certain values of x . So, choose n+1 values x_0, x_1, x_2, ..., x_{n} and construct an alternate basis as follows: L_0 = \frac{(x-x_1)(x-x_2)(x-x_3)...(x-x_{n})}{(x_0 - x_1)(x_0-x_2)...(x_0 - x_{n})}, L_1 = \frac{(x-x_0)(x-x_2)(x-x_3)...(x-x_{n})}{(x_1 - x_0)(x_1-x_2)...(x_1 - x_{n})}, ..., L_k = \frac{(x-x_0)(x-x_1)...(x-x_{k-1})(x-x_{k+1})...(x-x_{n})}{(x_k - x_0)(x_k-x_1)...(x_k - x_{k-1})(x_k - x_{k+1})...(x_k - x_{n})}, ..., L_{n} = \frac{(x-x_0)(x-x_1)(x-x_2)...(x-x_{n-1})}{(x_{n}- x_0)(x_{n}-x_1)...(x_{n} - x_{n-1})}

This is a blizzard of subscripts but the idea is pretty simple. Note that L_k(x_k) = 1 and L_k(x_j) = 0 if j \neq k .

But let’s look at a simple example: suppose we want to form a new basis for P_2 and we are interested in fixing x values of -1, 0, 1 .

So L_0 = \frac{(x)(x-1)}{(-1-0)(-1-1)} = \frac{(x)(x-1)}{2}, L_1 = \frac{(x+1)(x-1)}{(0+1)(0-1)} = -(x+1)(x-1),
L_2 = \frac{(x+1)x}{(1+1)(1-0)} = \frac{(x+1)(x)}{2} . Then we note that

L_0(-1) = 1, L_0(0) =0, L_0(1) =0, L_1(-1)=0, L_1(0) = 1, L_1(1) = 0, L_2(-1)=0, L_2(0) =0, L_2(1)=1

Now, we claim that the L_k are linearly independent. This is why:

Suppose a_0 L_0 + a_1 L_1 + ...+ a_n L_n = 0 as a vector (that is, as the zero polynomial). We can now solve for the a_i : substitute x_i into the left hand side of the equation to get a_iL_i(x_i) = a_i = 0 (note: L_k(x_i) = 0 for i \neq k ). So L_0, L_1, ..., L_n are n+1 linearly independent vectors in P_n and therefore constitute a basis.

Example: suppose we want to have a degree two polynomial p(x) where p(-1) =5, p(0) =3, p(1) = 17 . We use our new basis to obtain:

p(x) = 5L_0(x) + 3 L_1(x) + 17L_2(x) = \frac{5}{2}(x)(x-1)  -3(x+1)(x-1) + \frac{17}{2}x(x+1) . It is easy to check that p(-1) = 5, p(0) =3, p(1) = 17
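A quick way to double check the example (a small Matlab/Octave sketch using anonymous functions):

% The Lagrange basis for P_2 at the nodes -1, 0, 1 and the interpolant
% with p(-1) = 5, p(0) = 3, p(1) = 17.
L0 = @(x) x.*(x-1)/2;
L1 = @(x) -(x+1).*(x-1);
L2 = @(x) (x+1).*x/2;
p  = @(x) 5*L0(x) + 3*L1(x) + 17*L2(x);
p([-1 0 1])    % returns 5  3  17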

June 7, 2016

Infinite dimensional vector subspaces: an accessible example that W-perp-perp isn’t always W

Filed under: integrals, linear algebra — collegemathteaching @ 9:02 pm

This is based on a Mathematics Magazine article by Irving Katz: An Inequality of Orthogonal Complements found in Mathematics Magazine, Vol. 65, No. 4, October 1992 (258-259).

In finite dimensional inner product spaces, we often prove that (W^{\perp})^{\perp} = W . My favorite way to do this: I introduce Gram-Schmidt early and find an orthogonal basis for W and then extend it to an orthogonal basis for the whole space; the basis elements that are not in the basis for W automatically form a basis for W^{\perp} . Then one easily deduces that (W^{\perp})^{\perp} = W (and that any vector can easily be broken into a projection onto W , a projection onto W^{\perp} , etc.).

But this sort of construction runs into difficulty when the space is infinite dimensional; one points out that the vector addition operation is defined only for the addition of a finite number of vectors. No, we don’t deal with Hilbert spaces in our first course. 🙂

So what is our example? I won’t belabor the details as they can make good exercises whose solution can be found in the paper I cited.

So here goes: let V be the vector space of all polynomials. Let W_0 be the subspace of even polynomials (all terms have even degree), W_1 the subspace of odd polynomials (all terms have odd degree), and note that V = W_0 \oplus W_1 .

Let the inner product be \langle p(x), q(x) \rangle = \int^1_{-1}p(x)q(x) dx . Now it isn’t hard to see that (W_0)^{\perp} = W_1 and (W_1)^{\perp} = W_0 .
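One way to check orthogonality claims like these numerically: represent polynomials by their coefficient vectors (highest degree first, Matlab's convention) and integrate the product exactly. A small Matlab/Octave sketch, with two arbitrarily chosen polynomials:

% <p, q> = integral from -1 to 1 of p(x)q(x) dx, computed via the
% antiderivative of the product polynomial.
innerprod = @(p, q) diff(polyval(polyint(conv(p, q)), [-1 1]));
podd  = [2 0 -3 0];      % 2x^3 - 3x: an odd polynomial
pfour = [1 0 0 0 5];     % x^4 + 5: every term has degree a multiple of 4
innerprod(podd, pfour)   % returns 0: the product is odd, so the integral vanishes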

Now let U denote the subspace of polynomials whose terms all have degrees that are multiples of 4 (e.g. 1 + 3x^4 - 2x^8 ) and note that W_1 \subset U^{\perp} , since an odd polynomial times an even one integrates to zero over [-1,1] .

To see the reverse inclusion, suppose p(x) \in U^{\perp} and write p(x) = p_0 + p_1 where p_0 \in W_0, p_1 \in W_1 . Now \int^1_{-1} p_1(x)x^{4k} dx = 0 for any k \in \{0, 1, 2, ... \} (odd integrand over a symmetric interval), so it must be the case that \int^1_{-1} p_0(x)x^{4k} dx = 0 = 2\int^1_0 p_0(x)x^{4k} dx as well.

Now we can write p_0(x) = c_0 + c_1 x^2 + ...+c_n x^{2n} and therefore \int^1_0 p_0(x) x^{4k} dx = c_0\frac{1}{4k+1} + c_1 \frac{1}{4k+3}+...+c_n \frac{1}{4k+2n+1} = 0 ; taking k \in \{0, 1, 2, ..., n \} gives n+1 equations in the n+1 unknowns c_0, ..., c_n .

Now I wish I had a more general proof of this. But these equations (one for each k ) lead to a system of equations:

\left( \begin{array}{ccccc}  1 & \frac{1}{3} & \frac{1}{5} & ... & \frac{1}{2n+1} \\  \frac{1}{5} & \frac{1}{7} & \frac{1}{9} & ... & \frac{1}{2n+5} \\  ... & ... & ... & ... & ... \\  \frac{1}{4n+1} & \frac{1}{4n+3} & \frac{1}{4n+5} & ... & \frac{1}{6n+1}  \end{array} \right)  \left( \begin{array}{c}  c_0 \\  c_1  \\  ...  \\  c_n   \end{array} \right) =  \left( \begin{array}{c}  0 \\  0  \\  ...  \\  0  \end{array} \right)

It turns out that the given square matrix is non-singular (see page 92, no. 3 of Polya and Szego: Problems and Theorems in Analysis, Vol. 2, 1976) and so all of the c_j = 0 . This means p_0 = 0 and so U^{\perp} = W_1 . Therefore (U^{\perp})^{\perp} = (W_1)^{\perp} = W_0 , which strictly contains U : in the infinite dimensional setting, “W-perp-perp” need not equal W .
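For small n one can at least check the non-singularity claim directly; here is a quick Matlab/Octave sketch for the n = 2 case (the loop just builds the matrix entry by entry):

% Coefficient matrix for n = 2: entry (k+1, j+1) is 1/(2j + 4k + 1).
n = 2;
M = zeros(n+1);
for k = 0:n
  for j = 0:n
    M(k+1, j+1) = 1/(2*j + 4*k + 1);
  end
end
rank(M)    % returns 3 = n+1, so the matrix is non-singular
det(M)     % small (the matrix is badly conditioned) but not zero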

Anyway, the conclusion leaves me a bit cold. It seems as if I should be able to prove: if f is some, say, C^{\infty} function on [0,1] with \int^1_0 x^{2k} f(x) dx = 0 for all k \in \{0, 1, ...\} , then f = 0 . I haven’t found a proof as yet…perhaps it is false?

February 10, 2016

Vector subspaces: two examples

Filed under: linear algebra, pedagogy — collegemathteaching @ 8:41 pm

I am teaching linear algebra out of the book by Fraleigh and Beauregard. We are on “subspaces” (subsets of R^n for now) and a subspace is defined to be a set of vectors that is closed under both vector addition and scalar multiplication. Here are a couple of examples of non-subspaces:

1. W= \{(x,y)| xy = 0 \} . This set IS closed under scalar multiplication (and, in particular, under additive inverses). But it is not closed under addition, as [x,0] + [0,y]=[x,y] \notin W for x \neq 0, y \neq 0 . (See the quick numerical check after these examples.)

2. (this example is in the book): the vectors \{(n, m) | n, m \in Z \} are closed under vector addition but not under scalar multiplication.
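A quick numerical illustration of the first non-example, for a class demo (Matlab/Octave; the particular vectors are arbitrary):

% W = {(x,y) : xy = 0} is closed under scalar multiples but not under addition.
u = [3 0]; v = [0 4];    % both satisfy x*y = 0, so both are in W
prod(u), prod(v)         % 0 and 0
w = u + v                % [3 4]
prod(w)                  % 12, not 0, so u + v is not in W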

February 8, 2016

Where these posts are (often) coming from

Filed under: academia, linear algebra, student learning — collegemathteaching @ 9:57 pm

(photos: bookshelves, my messy desk, the office)

Yes, my office is messy. Deal with it. 🙂 And yes, some of my professional friends (an accountant and a lawyer) just HAD to send me their office shots…pristine condition, of course.
(all in good fun!)

Note: this semester I teach 3 classes in a row: second semester “business/life science” calculus, second semester “engineering/physical science” calculus and linear algebra. Yes, I love the topics, but there is just enough overlap that I have to really clear my head between lessons. Example: we covered numerical integration in both of my calculus classes, as well as improper integrals. I have to be careful not to throw in \int^{\infty}_{-\infty} \frac{dx}{1+x^2} as an example during my “life science calculus” class. I do the “head clearing” by going up the stairs to my office between classes.

Linear algebra is a bit tricky; we are so accustomed to taking things like “linear independence” for granted that it is easy to forget that this is the first time the students are seeing it. Also, the line between rigor and “computational usefulness” is tricky; for example, how rigorously do we explain “the determinant” of a matrix?

Oh well…back to some admin nonsense.

May 4, 2014

How to create tridiagonal matrices in Matlab (any size)

Filed under: linear algebra, matrix algebra, numerical methods, pedagogy — collegemathteaching @ 1:38 am

Suppose we wanted to create a tridiagonal matrix in Matlab and print it to a file so it would be used in a routine. For demonstration purposes: let’s create this 10 x 10 matrix:

\left( \begin{array}{cccccccccc}  1 & e^{-1} &0 & 0 & 0 &0 &0 &0 &0 &0\\  \frac{1}{4} & \frac{1}{2} & e^{-2} &0 &0 &0 &0 &0 &0 &0\\  0 &  \frac{1}{9} & \frac{1}{3} & e^{-3} & 0 &0 &0 &0 &0 & 0 \\  0 & 0 &  \frac{1}{16}  & \frac{1}{4} & e^{-4} & 0 &0 &0 &0 &0 \\  0 & 0 & 0 & \frac{1}{25} & \frac{1}{5} & e^{-5} & 0 &0 &0 &0 \\  0 & 0 & 0 & 0 & \frac{1}{36} & \frac{1}{6} & e^{-6} & 0 & 0 &0 \\  0 & 0 & 0 & 0 & 0 & \frac{1}{49} & \frac{1}{7} & e^{-7} & 0 & 0 \\  0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{64} & \frac{1}{8} & e^{-8} & 0 \\  0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{81} & \frac{1}{9} & e^{-9} \\  0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{100} & \frac{1}{10}   \end{array} \right)

To take advantage of Matlab’s “sparse” command we should notice the pattern of the entries.
The diagonals: m_{(i,i)} = \frac{1}{i} for i \in \{1, 2, ...10 \} .
The “right of diagonal” entries: m_{(i, i+1)} = e^{-i} for i \in \{1, 2, ...9 \}
The “left of diagonal” entries: m_{(i, i-1)} = \frac{1}{i^2} for i \in \{2, 3, ...10 \}

Now we need to set up Matlab vectors to correspond to these indices.

First, we need to set up a vector for the “row index” entries for: the diagonal entries, the left of diagonal entries, then the right of diagonal entries.
One way to do this is with the command “i = [1:10 2:10 1:9]; ” (without the quotes, of course).
What this does: it creates a list of indices for the row value of the entries: 1, 2, 3,…10 for the diagonals, 2, 3, …10 for the left of diagonals, and 1, 2, …9 for the right of diagonals.

Now, we set up a vector for the “column index” entries in the same order: the diagonal entries, the left of diagonal entries and the right of diagonal entries.
We try: “j = [1:10 1:9 2:10]; ”
What this does: it creates a list of indices for the column value of the entries: 1, 2, 3, ..10 for the diagonals, 1, 2, …9 for the left of diagonals and 2, 3, …10 for the right of diagonals.

As a pair, (i,j) goes (1,1), (2,2), …(10,10), (2,1), (3,2), …..(10, 9), (1,2), (2,3), ….(9, 10).

Now, of course, we need to enter the desired entries in the matrix: we want to assign \frac{1}{i} to entry (i,i), \frac{1}{i^2} to entry (i, i-1) and e^{-i} to entry (i, i+1).

So we create the following vectors: I’ll start with “MD = 1:10 ” to get a list 1, 2, 3, ..10 and then “D = 1./MD” to get a vector with the reciprocal values. To get the left of diagonal values, use “ML = 2:10” and then “L = 1./ML.^2 ”. Now get the list of values for the right of diagonal entries: “MU = 1:9” then “U = exp(-MU)”.

Now we organize the values as follows: “s = [D L U]”. This provides a vector whose first 10 entries are D, next 9 are L and next 9 are U (a list concatenation) which is in one to one correspondence with (i,j).

We can then generate the matrix with the command (upper/lower cases are distinguished in Matlab): “S =sparse(i, j, s)”.

What this does: this creates a matrix S and assigns each (i,j) entry the value stored in s that corresponds to it. The remaining entries are set to 0 (hence the name “sparse”).
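Collecting the commands described above into one place (a sketch; the variable names are the ones introduced above):

% Build the 10 x 10 tridiagonal matrix as a sparse matrix.
i  = [1:10 2:10 1:9];     % row indices: diagonal, left of diagonal, right of diagonal
j  = [1:10 1:9  2:10];    % matching column indices
MD = 1:10;  D = 1./MD;    % diagonal values 1/i
ML = 2:10;  L = 1./ML.^2; % left of diagonal values 1/i^2
MU = 1:9;   U = exp(-MU); % right of diagonal values e^(-i)
s  = [D L U];             % values, in the same order as (i, j)
S  = sparse(i, j, s);     % every entry not listed is 0
full(S)                   % view S as an ordinary (dense) matrix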

Let’s see how this works:

tridiag1

(click to see a larger size)

Now what does this put into the matrix S?

tridag2

Note how the non-zero entries of S are specified; nothing else is.

Now suppose you want to store this matrix in a file that can be called by a Matlab program. One way is to write to the file with the following command:

“dlmwrite('c:\matlabr12\work\tridag.dat', S, ' ')”

The first entry tells what file to send the entries to (this has to be created ahead of time). The next is the matrix (in our case, S) and the last entry tells how to delineate the entries; the default is a “comma” and the Matlab programs I am using require a “space” instead, hence the specification of ' ' .

Now this is what was produced by this process:

tridiag3

Suppose now, you wish to produce an augmented matrix. We have to do the following:
add row indices (which, in our case, range from 1 to 10), column indices (column 11 for each row) and the augmented entries themselves, which I’ll call 1, 2, 3, …10.

Here is the augmenting vector:

>> B = [1 2 3 4 5 6 7 8 9 10];

Here is how we modify i, j, and s:

>> i =[1:10 2:10 1:9 1:10];
>> j = [1:10 1:9 2:10 11*ones(1,10)];
>> s = [D L U B];
>> S = sparse(i, j, s);

For “i”: we append 1:10, which gives rows 1 through 10.
For “j”: we append a vector of ten 11’s, as each augmented entry will appear in the 11th column of its row.
That is, to our (i,j) list we added (1,11), (2,11), (3, 11), ….(10, 11) at the end.
Now we append B to s, our list of non-zero matrix entries.

Then

“dlmwrite('c:\matlabr12\work\tridage.dat', S, ' ')”

produces:

tridag4

So now you are ready to try out matrix manipulation and algebra with (relatively) large size matrices; at least sparse ones.

