# College Math Teaching

## August 2, 2012

### MAA Mathfest Madison Day 1, 2 August 2012

I am sitting in the main ballroom waiting for the large public talks to start. I should be busy most of the day; it looks as if there will be interesting talks all day long.

I like this conference not only for the variety but also for the timing; it gives me some momentum going into the academic year.

I regret not taking my camera; downtown Madison is scenic and we are close to the water. The conference venue is just a short walk away from the hotel; I see some possibilities for tomorrow’s run. Today: just weights and maybe a bit of treadmill in the afternoon.

The Talks
The opening lecture was the MAA-AMS joint talk by David Mumford of Brown University. This guy’s credentials are beyond stellar: Fields Medal, member of the National Academy of Sciences, etc.

His talk was about applied and pure mathematics and how there really shouldn’t be that much of a separation between the two, though there is. For one thing: prestige in pure mathematics is measured by the depth of the result, while prestige in applied mathematics is mostly measured by the utility of the produced model. Pure mathematicians tend to see applied mathematics as shallow and simple, and they resent the fact that applied math gets a lot more funding.

He talked a bit about education and how the educational establishment ought to solicit input from pure areas; he also talked about computer science education (in secondary schools) and mentioned that there should be more emphasis on coding (I agree).

He mentioned that he tended to learn better when he had a concrete example to start from (I am the same way).

What amused me: his FIRST example was a PDE (partial differential equations) model of neutron flux through the nuclear reactors used for submarines. Note that these reactors were light water, thermal reactors: the fission reaction becomes self-sustaining via the absorption of neutrons whose energy levels have been lowered by a moderator (the neutrons lose energy when they collide with atoms that aren’t too much heavier).

Of course, in nuclear power school, we studied the PDEs of the situation after the design had been developed; these people had to come up with an optimal geometry to begin with.

Note that they didn’t have modern digital computers; they used analogue computers modeled after simple voltage drops across resistors!

About the PDE: you had two neutron populations: “fast” neutrons (ones at high energy levels) and “slow” neutrons (ones at lower energy levels). The fast neutrons are slowed down to become thermal neutrons. But thermal neutrons in turn cause more fissions thereby increasing the fast neutron flux; hence you have two linked PDEs. Of course there is leakage, absorption by control rods, etc., and the classical PDEs can’t be solved in closed form.
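As a sketch of what such a linked system looks like (my notation, not necessarily what was shown in the talk): writing $\phi_f$ for the fast flux and $\phi_s$ for the slow (thermal) flux, the simplest “two-group” diffusion equations couple the two populations:

```latex
% Schematic two-group neutron diffusion model (notation is illustrative):
%   phi_f, phi_s : fast and thermal neutron fluxes
%   D_f, D_s     : diffusion coefficients
%   Sigma_R      : removal (moderation) cross section, fast -> thermal
%   Sigma_a      : thermal absorption cross section
%   nu Sigma_F   : fission neutron production from thermal fissions
\begin{align*}
\frac{1}{v_f}\frac{\partial \phi_f}{\partial t}
  &= \nabla\cdot D_f \nabla \phi_f \;-\; \Sigma_R\,\phi_f \;+\; \nu\Sigma_F\,\phi_s\\
\frac{1}{v_s}\frac{\partial \phi_s}{\partial t}
  &= \nabla\cdot D_s \nabla \phi_s \;+\; \Sigma_R\,\phi_f \;-\; \Sigma_a\,\phi_s
\end{align*}
```

The $\Sigma_R \phi_f$ term moves moderated fast neutrons into the thermal population, and the $\nu\Sigma_F \phi_s$ term feeds thermally-induced fissions back into the fast population; that is exactly the linkage described above.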

Another thing I didn’t know: Clairaut (of “symmetry of mixed partial derivatives” fame) actually came up with the idea of the Fourier series before Fourier did; he did this in an applied setting.

Next, Amie Wilkinson of Northwestern (soon to be University of Chicago) gave a talk about dynamical systems. She is one of those mathematicians with publications in the finest journals that mathematics has to offer (stellar).

The whole talk was pretty good. Highlights: she mentioned Henri Poincaré and how he worked on a restricted 3-body problem (one massive body, one medium body, and one tiny body that doesn’t exert gravitational force on the other bodies). The dynamics of the tiny body live in 3-space (the full system space, of course, has much higher dimension). Now consider a closed 2-dimensional manifold in that space and a point on that manifold, and study the orbit of that point under the dynamical system. Eventually that orbit intersects the 2-dimensional manifold again, and the map taking the starting point to this first intersection point actually describes a motion ON THE TWO MANIFOLD. Looking at ALL of the successive intersections, we get the orbit of that point under an induced action on the two-dimensional manifold.

So, in some sense, this two manifold has an “inherited” action on it. Now if we look at, say, a square on that 2-dimensional manifold, it was proved that this square comes back in a “folded” fashion: this is the famed “Smale horseshoe map”.

Other things: she mentioned that there are dynamical systems that are stable with respect to perturbations yet have orbits that are unstable with respect to initial conditions, and that these instabilities cannot be perturbed away; they are inherent to the system. There are other dynamical systems (with less stability) that have this property as well.

There is, of course, much more. I’ll link to the lecture materials when I find them.

Last Morning Talk
Bernd Sturmfels on Tropical Mathematics
Ok, quickly: if you have a semi-ring (no additive inverses) with the following operations:
$x \oplus y = \min(x,y)$ and $x \otimes y = x + y$ (check that the operations distribute), what good would it be? Why would you care about such a beast?

Answer: many reasons. This sort of object lends itself well to things like matrix operations and is used for things such as “least path” problems (dynamic programming) and “tree metrics” in biology.

Think of it this way: in an “order n” error analysis in numerical analysis, when one multiplies two error terms the orders add, and when one adds two error terms the resulting order is the minimum of the two orders. That is exactly the tropical $\otimes$ and $\oplus$.
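As a quick illustration (my own toy example, not from the talk): over the min-plus semi-ring, the ordinary formula for a matrix product computes shortest paths. Here $W[i][j]$ is the weight of the edge from $i$ to $j$, and the tropical power of $W$ gives cheapest paths using a bounded number of edges:

```python
# Tropical (min-plus) arithmetic: "addition" is min, "multiplication" is +.
INF = float('inf')

def tropical_matmul(A, B):
    """Matrix product over the (min, +) semi-ring."""
    n, m, p = len(A), len(B), len(B[0])
    return [[min(A[i][k] + B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

# Weighted directed graph on 3 vertices (INF = no edge, 0 on the diagonal).
W = [[0,   5,   INF],
     [INF, 0,   2],
     [1,   INF, 0]]

# Tropical square of W: entry [i][j] = cheapest i -> j path with <= 2 edges.
W2 = tropical_matmul(W, W)
print(W2[0][2])   # 7  (the path 0 -> 1 -> 2, cost 5 + 2)
```

Iterating the product (or “tropical powers”) is exactly the dynamic programming behind all-pairs shortest paths.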

The PDF of the slides in today’s lecture can be found here.

## May 26, 2012

### Eigenvalues, Eigenvectors, Eigenfunctions and all that….

The purpose of this note is to give a bit of direction to the perplexed student.

I am not going to go into all the possible uses of eigenvalues, eigenvectors, eigenfunctions and the like; I will say that these are essential concepts in areas such as partial differential equations, advanced geometry and quantum mechanics:

Quantum mechanics, in particular, is a specific yet very versatile implementation of this scheme. (And quantum field theory is just a particular example of quantum mechanics, not an entirely new way of thinking.) The states are “wave functions,” and the collection of every possible wave function for some given system is “Hilbert space.” The nice thing about Hilbert space is that it’s a very restrictive set of possibilities (because it’s a vector space, for you experts); once you tell me how big it is (how many dimensions), you’ve specified your Hilbert space completely. This is in stark contrast with classical mechanics, where the space of states can get extraordinarily complicated. And then there is a little machine — “the Hamiltonian” — that tells you how to evolve from one state to another as time passes. Again, there aren’t really that many kinds of Hamiltonians you can have; once you write down a certain list of numbers (the energy eigenvalues, for you pesky experts) you are completely done.

(emphasis mine).

So it is worth understanding the eigenvector/eigenfunction and eigenvalue concept.

First note: “eigen” is German for “self”; one should keep that in mind. That is part of the concept as we will see.

The next note: “eigenfunctions” really are a type of “eigenvector” so if you understand the latter concept at an abstract level, you’ll understand the former one.

The third note: if you are reading this, you are probably already familiar with some famous eigenfunctions! We’ll talk about some examples prior to giving the formal definition; this remark might sound cryptic at first, but hang in there. Remember when you learned $\frac{d}{dx} e^{ax} = ae^{ax}$? That is, you learned that the derivative of $e^{ax}$ is a scalar multiple of itself? (emphasis on SELF). So you already know that the function $e^{ax}$ is an eigenfunction of the “operator” $\frac{d}{dx}$ with eigenvalue $a$, because that is the scalar multiple.

The basic concept of eigenvectors (eigenfunctions) and eigenvalues is really no more complicated than that. Let’s do another one from calculus:
the function $sin(wx)$ is an eigenfunction of the operator $\frac{d^2}{dx^2}$ with eigenvalue $-w^2$ because $\frac{d^2}{dx^2} sin(wx) = -w^2sin(wx)$. That is, the function $sin(wx)$ is a scalar multiple of its second derivative. Can you think of more eigenfunctions for the operator $\frac{d^2}{dx^2}$?

Answer: $cos(wx)$ and $e^{ax}$ are two others, if we only allow for non-zero eigenvalues (scalar multiples).

So hopefully you are seeing the basic idea: we have a collection of objects called vectors (can be traditional vectors or abstract ones such as differentiable functions) and an operator (linear transformation) that acts on these objects to yield a new object. In our example, the vectors were differentiable functions, and the operators were the derivative operators (the thing that “takes the derivative of” the function). An eigenvector (eigenfunction)-eigenvalue pair for that operator is a vector (function) that is transformed to a scalar multiple of itself by the operator; e. g., the derivative operator takes $e^{ax}$ to $ae^{ax}$ which is a scalar multiple of the original function.
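A quick numerical sanity check of the two calculus examples above (my own script; it approximates the derivatives by central differences rather than computing them symbolically):

```python
import math

# Central-difference approximations to d/dx and d^2/dx^2 (the step sizes
# are my own choices; this is a sanity check, not a rigorous computation).
def ddx(f, x, h=1e-5):
    return (f(x + h) - f(x - h)) / (2 * h)

def d2dx2(f, x, h=1e-4):
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

a, w, x0 = 2.0, 3.0, 0.7

# d/dx e^{ax} = a e^{ax}: eigenfunction of d/dx with eigenvalue a
assert abs(ddx(lambda t: math.exp(a * t), x0) - a * math.exp(a * x0)) < 1e-4

# d^2/dx^2 sin(wx) = -w^2 sin(wx): eigenfunction of d^2/dx^2, eigenvalue -w^2
assert abs(d2dx2(lambda t: math.sin(w * t), x0) + w**2 * math.sin(w * x0)) < 1e-3
```

Changing $a$ or $w$ changes the eigenvalue but the relations keep holding: the function comes back as a scalar multiple of its SELF.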

Formal Definition
We will give the abstract, formal definition. Then we will follow it with some examples and hints on how to calculate.

First we need the setting. We start with a set of objects called “vectors” and “scalars”; the usual rules of arithmetic (addition, multiplication, subtraction, division, distributive property) hold for the scalars, there is a type of addition for the vectors, and the scalars and the vectors “work together” in the intuitive way. Example: in the set of, say, differentiable functions, the scalars will be real numbers and we have rules such as $a (f + g) =af + ag$, etc. We could also use real numbers for scalars and, say, three-dimensional vectors such as $[a, b, c]$. More formally, we start with a vector space (sometimes called a linear space), which is defined as a set of vectors and scalars which obey the vector space axioms.

Now, we need a linear transformation, which is sometimes called a linear operator. A linear transformation (or operator) is a function $L$ that obeys the following laws: $L(\vec{v} + \vec{w}) = L(\vec{v}) + L(\vec{w} )$ and $L(a\vec{v}) = aL(\vec{v})$. Note that I am using $\vec{v}$ to denote the vectors and the undecorated variable to denote the scalars. Also note that this linear transformation $L$ might take one vector space to a different vector space.

Common linear transformations (and there are many others!) and their eigenvectors and eigenvalues.
Consider the vector space of two-dimensional vectors with real numbers as scalars. We can create a linear transformation by matrix multiplication:

$L([x,y]^T) = \left[ \begin{array}{cc} a & b \\ c & d \end{array} \right] \left[ \begin{array}{c} x \\ y \end{array} \right]=\left[ \begin{array}{c} ax+ by \\ cx+dy \end{array} \right]$ (note: $[x,y]^T$ is the transpose of the row vector; we need to use a column vector for the usual rules of matrix multiplication to apply).

It is easy to check that the operation of matrix multiplying a vector on the left by an appropriate matrix yields a linear transformation.
Here is a concrete example: $L([x,y]^T) = \left[ \begin{array}{cc} 1 & 2 \\ 0 & 3 \end{array} \right] \left[ \begin{array}{c} x \\ y \end{array} \right]=\left[ \begin{array}{c} x+ 2y \\ 3y \end{array} \right]$

So, does this linear transformation HAVE non-zero eigenvectors and eigenvalues? (not every one does).
Let’s see if we can find the eigenvectors and eigenvalues, provided they exist at all.

For $[x,y]^T$ to be an eigenvector for $L$, remember that $L([x,y]^T) = \lambda [x,y]^T$ for some real number $\lambda$

So, using the matrix we get: $L([x,y]^T) = \left[ \begin{array}{cc} 1 & 2 \\ 0 & 3 \end{array} \right] \left[ \begin{array}{c} x \\ y \end{array} \right]= \lambda \left[ \begin{array}{c} x \\ y \end{array} \right]$. So doing some algebra (subtracting the vector on the right hand side from both sides) we obtain $\left[ \begin{array}{cc} 1 & 2 \\ 0 & 3 \end{array} \right] \left[ \begin{array}{c} x \\ y \end{array} \right] - \lambda \left[ \begin{array}{c} x \\ y \end{array} \right] = \left[ \begin{array}{c} 0 \\ 0 \end{array} \right]$

At this point it is tempting to try to use a distributive law to factor out $\left[ \begin{array}{c} x \\ y \end{array} \right]$ from the left side. But, while the expression makes sense prior to factoring, it wouldn’t AFTER factoring as we’d be subtracting a scalar number from a 2 by 2 matrix! But there is a way out of this: one can then insert the 2 x 2 identity matrix to the left of the second term of the left hand side:
$\left[ \begin{array}{cc} 1 & 2 \\ 0 & 3 \end{array} \right] \left[ \begin{array}{c} x \\ y \end{array} \right] - \lambda\left[ \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right] \left[ \begin{array}{c} x \\ y \end{array} \right] = \left[ \begin{array}{c} 0 \\ 0 \end{array} \right]$

Notice that by doing this, we haven’t changed anything except now we can factor out that vector; this would leave:
$(\left[ \begin{array}{cc} 1 & 2 \\ 0 & 3 \end{array} \right] - \lambda\left[ \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right] )\left[ \begin{array}{c} x \\ y \end{array} \right] = \left[ \begin{array}{c} 0 \\ 0 \end{array} \right]$

$(\left[ \begin{array}{cc} 1-\lambda & 2 \\ 0 & 3-\lambda \end{array} \right] ) \left[ \begin{array}{c} x \\ y \end{array} \right] = \left[ \begin{array}{c} 0 \\ 0 \end{array} \right]$

Now we use a fact from linear algebra: if $[x,y]^T$ is not the zero vector, we have a non-zero matrix times a non-zero vector yielding the zero vector. This means that the matrix is singular. In linear algebra class, you learn that singular matrices have determinant equal to zero. This means that $(1-\lambda)(3-\lambda) = 0$, which means that $\lambda = 1, \lambda = 3$ are the eigenvalues. Note: when we do this procedure with any 2 by 2 matrix, we always end up with a quadratic with $\lambda$ as the variable; if this quadratic has real roots then the linear transformation (or matrix) has real eigenvalues. If it doesn’t have real roots, the linear transformation (or matrix) doesn’t have real eigenvalues.

Now to find the associated eigenvectors: if we start with $\lambda = 1$ we get
$\left[ \begin{array}{cc} 0 & 2 \\ 0 & 2 \end{array} \right] \left[ \begin{array}{c} x \\ y \end{array} \right] = \left[ \begin{array}{c} 0 \\ 0 \end{array} \right]$ which has solution $\left[ \begin{array}{c} x \\ y \end{array} \right] = \left[ \begin{array}{c} 1 \\ 0 \end{array} \right]$. So that is the eigenvector associated with eigenvalue 1.
If we next try $\lambda = 3$ we get
$\left[ \begin{array}{cc} -2 & 2 \\ 0 & 0 \end{array} \right] \left[ \begin{array}{c} x \\ y \end{array} \right] = \left[ \begin{array}{c} 0 \\ 0 \end{array} \right]$ which has solution $\left[ \begin{array}{c} x \\ y \end{array} \right] = \left[ \begin{array}{c} 1 \\ 1 \end{array} \right]$. So that is the eigenvector associated with the eigenvalue 3.

In the general “k-dimensional vector space” case, the recipe for finding the eigenvectors and eigenvalues is the same.
1. Find the matrix $A$ for the linear transformation.
2. Form the matrix $A - \lambda I$ which is the same as matrix $A$ except that you have subtracted $\lambda$ from each diagonal entry.
3. Note that $det(A - \lambda I)$ is a polynomial in the variable $\lambda$; find its roots $\lambda_1, \lambda_2, \ldots, \lambda_k$. These will be the eigenvalues.
4. Start with $\lambda = \lambda_1$. Substitute this into the matrix-vector equation $(A - \lambda_1 I) \vec{v_1} = \vec{0}$ and solve for $\vec{v_1}$. That will be the eigenvector associated with the first eigenvalue. Do this for each eigenvalue, one at a time. Note: you can get up to $k$ “linearly independent” eigenvectors in this manner; that will be all of them.
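In practice one rarely grinds through the determinant by hand; here is the $2 \times 2$ example from above handed to numpy (assuming numpy is available), which carries out the recipe numerically:

```python
import numpy as np

# The example matrix from the text: L([x,y]^T) = [[1,2],[0,3]] [x,y]^T
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])

# np.linalg.eig finds the roots of det(A - lambda I) and the eigenvectors.
eigvals, eigvecs = np.linalg.eig(A)
print(sorted(eigvals.real))        # the eigenvalues 1 and 3

# Each column of eigvecs is a (normalized) eigenvector: check A v = lambda v.
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)
```

Note that numpy normalizes the eigenvectors, so the eigenvector $[1,1]^T$ shows up as $[\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}}]^T$.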

Practical note
Yes, this should work “in theory” but practically speaking, there are many challenges. For one: for equations of degree 5 or higher, it is known that there is no general formula that will find the roots for every equation of that degree (Galois theory proves this; a good reason to take an abstract algebra course!). Hence one must use a numerical method of some sort. Also, calculation of the determinant involves many round-off error-inducing calculations; hence sometimes one must use sophisticated numerical techniques to get the eigenvalues (a good reason to take a numerical analysis course!)

Consider a calculus/differential equation related case of eigenvectors (eigenfunctions) and eigenvalues.
Our vectors will be, say, infinitely differentiable functions and our scalars will be real numbers. We will define the operator (linear transformation) $D^n = \frac{d^n}{dx^n}$, that is, the process that takes the n’th derivative of a function. You learned that the derivative of a sum is the sum of the derivatives and that you can pull out a constant when you differentiate. Hence $D^n$ is a linear operator (transformation); we use the term “operator” when we talk about the vector space of functions, but it is really just a type of linear transformation.

We can also use these operators to form new operators; that is, $(D^2 + 3D)(y) = D^2(y) + 3D(y) = \frac{d^2y}{dx^2} + 3\frac{dy}{dx}$. We see that such a “linear combination” of linear operators is again a linear operator.

So, what does it mean to find eigenvectors and eigenvalues of such beasts?

Suppose we wish to find the eigenvectors and eigenvalues of $(D^2 + 3D)$. An eigenvector is a twice differentiable function $y$ (ok, we said “infinitely differentiable”) such that $(D^2 + 3D)y = \lambda y$, or $\frac{d^2y}{dx^2} + 3\frac{dy}{dx} = \lambda y$, which means $\frac{d^2y}{dx^2} + 3\frac{dy}{dx} - \lambda y = 0$. You might recognize this from your differential equations class; the only “tweak” is that we don’t know what $\lambda$ is. But if you had a differential equations class, you’d recognize that the solution to this differential equation depends on the roots of the characteristic equation $m^2 + 3m - \lambda = 0$, which has solutions $m = -\frac{3}{2} \pm \frac{\sqrt{9+4\lambda}}{2}$, and the solutions take the form $e^{m_1 x}, e^{m_2 x}$ if the roots are real and distinct, $e^{ax}sin(bx), e^{ax}cos(bx)$ if the roots are complex conjugates $a \pm bi$, and $e^{mx}, xe^{mx}$ if there is a real, repeated root. In any event, those functions are the eigenfunctions, and these very much depend on the eigenvalues.
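To make this concrete, here is a small numerical spot check (my own example: I picked the eigenvalue $\lambda = 4$, for which the characteristic equation $m^2 + 3m - 4 = 0$ has roots $m = 1$ and $m = -4$):

```python
import math

# Spot check: for lambda = 4, the roots of m^2 + 3m - 4 = 0 are m = 1 and
# m = -4, so e^x and e^{-4x} should both satisfy (D^2 + 3D)y = 4y.
def L_op(f, x, h=1e-4):
    """Apply D^2 + 3D to f at x via central differences."""
    d2 = (f(x + h) - 2 * f(x) + f(x - h)) / h**2
    d1 = (f(x + h) - f(x - h)) / (2 * h)
    return d2 + 3 * d1

lam = 4.0
for m in (1.0, -4.0):
    f = lambda t, m=m: math.exp(m * t)   # bind m now, not at call time
    assert abs(L_op(f, 0.3) - lam * f(0.3)) < 1e-3
print("both candidate eigenfunctions pass")
```

Any linear combination of $e^{x}$ and $e^{-4x}$ is also an eigenfunction with eigenvalue 4, which is exactly the solution space of the differential equation above.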

Of course, reading this little note won’t make you an expert, but it should get you started on studying.

## August 19, 2011

### Partial Differential Equations, Differential Equations and the Eigenvalue/Eigenfunction problem

Suppose we are trying to solve the following partial differential equation:
$\frac{\partial \psi}{\partial t} = 3 \frac{\partial ^2 \psi}{\partial x^2}$ subject to boundary conditions:
$\psi(0,t) = \psi(\pi,t) = 0, \psi(x,0) = x(x-\pi)$

It turns out that we will be using techniques from ordinary differential equations and concepts from linear algebra; these might be confusing at first.

The first thing to note is that this differential equation (the so-called heat equation) is known to satisfy a “uniqueness property” in that if one obtains a solution that meets the boundary criteria, the solution is unique. Hence we can attempt to find a solution in any way we choose; if we find it, we don’t have to wonder if there is another one lurking out there.

So one technique that is often useful is to try: let $\psi = XT$ where $X$ is a function of $x$ alone and $T$ is a function of $t$ alone. Then when we substitute into the partial differential equation we obtain:
$XT^{\prime} = 3X^{\prime\prime}T$ which leads to $\frac{T^{\prime}}{T} = 3\frac{X^{\prime\prime}}{X}$

The next step is to note that the left hand side does NOT depend on $x$; it is a function of $t$ alone. The right hand side does not depend on $t$ as it is a function of $x$ alone. But the two sides are equal; hence neither side can depend on $x$ or $t$; they must be constant.

Hence we have $\frac{T^{\prime}}{T} = 3\frac{X^{\prime\prime}}{X} = \lambda$

So far, so good. But then you are told that $\lambda$ is an eigenvalue. What is that about?

The thing to notice is that $T^{\prime} - \lambda T = 0$ and $X^{\prime\prime} - \frac{\lambda}{3}X = 0$
First, the equation in $T$ can be written as $D(T) = \lambda T$ with the operator $D$ denoting the first derivative. Then the second can be written as $D^2(X) = \frac{\lambda}{3} X$ where $D^2$ denotes the second derivative operator. Recall from linear algebra that these operators meet the requirements for a linear transformation if the vector space is the set of all functions that are “differentiable enough”. So what we are doing, in effect, is trying to find eigenvectors for these operators.

So in this sense, solving a homogeneous differential equation is really solving an eigenvector problem; often this is termed the “eigenfunction” problem.

Note that the differential equations are not difficult to solve:
$T = a \exp(\lambda t)$, $X = b \exp(\sqrt{\frac{\lambda}{3}} x) + c \exp(-\sqrt{\frac{\lambda}{3}} x)$; the real valued form of the equation in $x$ depends on whether $\lambda$ is positive, zero or negative.

But the point is that we are merely solving a constant coefficient differential equation just as we did in our elementary differential equations course with one important difference: we don’t know what the constant (the eigenvalue) is.

Now if we turn to the boundary conditions on $x$, we see that a solution of the form $A e^{bx} + Be^{-bx}$ cannot be zero at both boundaries (unless it is identically zero); we can rule out the $\lambda = 0$ case as well.
Hence we know that $\lambda$ is negative, and we get the solution $X = a \cos(\sqrt{\frac{-\lambda}{3}} x) + b \sin(\sqrt{\frac{-\lambda}{3}} x)$ and then the solution $T = d e^{\lambda t}$.

But now we notice that these solutions have a $\lambda$ in them; this is what makes these ordinary differential equations into an “eigenvalue/eigenfucntion” problem.

So what values of $\lambda$ will work? We know $\lambda$ is negative, so we write $\lambda = -w^2$. If we look at the end conditions and note that $T$ is never zero, we see that the cosine term must vanish ($a = 0$), and we can ensure that $\sqrt{\frac{w^2}{3}}\pi = k \pi$ for a positive integer $k$, which implies $w^2 = 3k^2$, i.e. $\lambda = -3k^2$. So we get a whole host of functions: $\psi_k = a_k e^{-3k^2 t}sin(kx)$.

Now we still need to meet the last condition (set at $t = 0$) and that is where Fourier analysis comes in. Because the equation is linear, we can add solutions and get another solution; hence the coefficients $a_k$ are obtained by taking the Fourier sine expansion of the function $x(x-\pi)$.

The coefficients are $b_k = \frac{2}{\pi} \int^{\pi}_{0} x(x-\pi) \sin(kx)\, dx$ and the solution is:
$\psi(x,t) = \sum_{k=1}^{\infty} e^{-3k^2 t} b_k sin(kx)$
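A quick numerical check of those coefficients (my own script, using the half-range sine formula and a midpoint-rule integral): for $x(x-\pi)$ one can work out the closed form $b_k = -\frac{8}{\pi k^3}$ for odd $k$ and $b_k = 0$ for even $k$, and the numerics agree:

```python
import math

def sine_coeff(k, n=20000):
    """b_k = (2/pi) * integral_0^pi x(x - pi) sin(kx) dx, by the midpoint rule."""
    h = math.pi / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        total += x * (x - math.pi) * math.sin(k * x)
    return (2 / math.pi) * total * h

# Closed form: b_k = -8/(pi k^3) for odd k, and 0 for even k.
for k in (1, 2, 3, 4):
    exact = 0.0 if k % 2 == 0 else -8 / (math.pi * k**3)
    assert abs(sine_coeff(k) - exact) < 1e-6
print(sine_coeff(1))   # about -2.546, i.e. -8/pi
```

Because the coefficients fall off like $k^{-3}$ and each mode is damped by $e^{-3k^2 t}$, a handful of terms of the series already gives an excellent picture of $\psi(x,t)$.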

### Quantum Mechanics and Undergraduate Mathematics XV: sample problem for stationary states

I feel a bit guilty as I haven’t gone over an example of how one might work out a problem. So here goes:

Suppose our potential function is some sort of energy well: $V(x) = 0$ for $0 < x < 1$ and $V(x) = \infty$ elsewhere.
Note: I am too lazy to keep writing $\hbar$ so I am going with $h$ for now.

So, we have the two Schrödinger equations with $\psi$ being the state vector and $\eta_k$ being one of the stationary states:
$-\frac{h^2}{2m} \frac{\partial^2}{\partial x^2}\eta_k + V(x) \eta_k = ih\frac{\partial}{\partial t} \eta_k$
$-\frac{h^2}{2m} \frac{\partial^2}{\partial x^2}\eta_k + V(x) \eta_k = e_k \eta_k$

Where $e_k$ are the eigenvalues for $\eta_k$

Now apply the potential for $0 < x < 1$ and the equations become:
$-\frac{h^2}{2m} \frac{\partial^2}{\partial x^2}\eta_k = ih\frac{\partial}{\partial t} \eta_k$
$-\frac{h^2}{2m} \frac{\partial^2}{\partial x^2}\eta_k = e_k \eta_k$

Yes, I know that equation II is a consequence of equation I.

Now we use a fact from partial differential equations: the first equation is really a form of the “diffusion” or “heat” equation; it has been shown that once one takes boundary conditions into account, the equation possesses a unique solution. Hence if we find a solution by any means necessary, we don’t have to worry about other solutions being out there.

So attempt a solution of the form $\eta_k = X_k T_k$ where the first factor is a function of $x$ alone and the second is of $t$ alone.
Now put into the second equation:

$-\frac{h^2}{2m} X^{\prime\prime}_kT_k = e_k X_kT_k$

Now assume $T_k \ne 0$, divide both sides by $T_k$, and do a little algebra to obtain:
$X^{\prime\prime}_k +\frac{2m e_k}{h^2}X_k = 0$
$e_k$ are the eigenvalues for the stationary states; assume that these are positive and we obtain:
$X = a_k cos(\frac{\sqrt{2m e_k}}{h} x) + b_k sin(\frac{\sqrt{2m e_k}}{h} x)$
from our knowledge of elementary differential equations.
Now for $x = 0$ we have $X_k(0) = a_k$. The potential is infinite outside the well, so the wavefunction must vanish at the boundary; hence $a_k = 0$. Now $X_k(x) = b_k sin(\frac{\sqrt{2m e_k}}{h} x)$
We want zero at $x = 1$ so $\frac{\sqrt{2m e_k}}{h} = k\pi$ which means $e_k = \frac{(k \pi h)^2}{2m}$.

Now let’s look at the first Schrödinger equation:
$-\frac{h^2}{2m}X_k^{\prime\prime} T_k = ihT_k^{\prime}X_k$
This gives the equation: $\frac{X_k^{\prime\prime}}{X_k} = -\frac{ 2m i}{h} \frac{T_k^{\prime}}{T_k}$
Note: in partial differential equations, it is customary to note that the left side of the equation is a function of $x$ alone and therefore independent of $t$, and that the right hand side is a function of $t$ alone and therefore independent of $x$; since these sides are equal they must be independent of both $t$ and $x$ and therefore constant. But in our case, we already know that $\frac{X_k^{\prime\prime}}{X_k} = -2m\frac{e_k}{h^2}$. So our equation involving $T$ becomes $\frac{T_k^{\prime}}{T_k} = -\frac{h}{2mi}\left(-2m\frac{e_k}{h^2}\right) = \frac{e_k}{ih} = -i\frac{e_k}{h}$ so our differential equation becomes
$T_k^{\prime} = -i \frac{e_k}{h} T_k$ which has the solution $T_k = c_k \exp(-i \frac{e_k}{h} t)$

So our solution is $\eta_k = d_k sin(\frac{\sqrt{2m e_k}}{h} x) \exp(-i \frac{e_k}{h} t)$ where $e_k = \frac{(k \pi h)^2}{2m}$.

This becomes $\eta_k = d_k sin(k\pi x) \exp(-i (k \pi)^2 \frac{\hbar}{2m} t)$ which, written in rectangular complex coordinates, is $d_k sin(k\pi x) (cos((k \pi)^2 \frac{\hbar}{2m} t) - i sin((k \pi)^2 \frac{\hbar}{2m} t))$

Here are some graphs: we use $m = \frac{\hbar}{2}$ and plot for $k = 1, k = 3$ and $t \in \{0, 0.1, 0.2, 0.5\}$. The plot is of the real part of the stationary state vector.
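For anyone who wants to reproduce those plots, here is a sketch (my own script; it evaluates the stationary states rather than plotting them, and works in units where $\hbar = 1$):

```python
import cmath, math

HBAR = 1.0           # work in units with hbar = 1 (a choice made for plotting)
m = HBAR / 2         # the mass used for the graphs, so hbar/(2m) = 1

def eta(k, x, t, d_k=1.0):
    """Stationary state d_k sin(k pi x) exp(-i (k pi)^2 (hbar/2m) t).

    (The sign convention on the phase doesn't change the real part.)
    """
    phase = cmath.exp(-1j * (k * math.pi)**2 * (HBAR / (2 * m)) * t)
    return d_k * math.sin(k * math.pi * x) * phase

# At t = 0 the state is real; its real part is just d_k sin(k pi x).
assert abs(eta(1, 0.5, 0) - 1.0) < 1e-12

# The modulus |eta_k| is constant in time: only the phase rotates.
assert abs(abs(eta(3, 0.2, 0.5)) - abs(math.sin(3 * math.pi * 0.2))) < 1e-12
```

Plotting `eta(k, x, t).real` over $x \in [0,1]$ for the listed times gives the pictures described above.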

## August 6, 2011

### MathFest Day 2 (2011: Lexington, KY)

I went to the three “big” talks in the morning.
Dawn Lott’s talk was about applied mathematics and its relation to the study of brain aneurysms; in particular the aneurysm model was discussed (partial differential equations with a time coordinate and stresses in the radial, circumference and latitudinal directions were modeled).

There was also modeling of the clipping procedure (where the base of the aneurysm was clipped with a metal clip); various clipping strategies were investigated (straight across? diagonal?). One interesting aspect was the discussion of the shape of the aneurysm model itself: what shape gave the best results?

Note: this is one procedure that was being modeled.

Next, Bhargava gave his second talk (on rational points on algebraic curves).
It was excellent. In the previous lecture, we saw that a quadratic curve either has an infinite number of rational points or zero rational points. Things are different with a cubic curve.

For example, $y^2 = x^3 - 3x$ has exactly one rational point (namely (0,0) ) but $y^2 = x^3-2x$ has an infinite number! It turns out that the number of rational points an algebraic curve has is related to the genus of the graph of the curve in $C^2$ (where one uses complex values for both variables). The surface is a punctured multi-holed torus of genus $g$ with the punctures being “at infinity”.

The genus is as follows: 0 if the degree is 1 or 2, 1 if the degree is 3, and greater than 1 if the degree is 4 or higher. So what about the number of rational points:
zero or infinitely many if the genus is zero
finite if the genus is strictly greater than 1 (Faltings’ Theorem, 1983)
indeterminate if the genus is 1. Hence much work is done in this area.

No general algorithm is known for determining the number of rational points when the curve is cubic (and therefore of genus 1).

Note: the set of rational points has a group structure.

Note: a rational cubic has a rational change of variable which changes the curve to elliptic form:
Weierstrass form: $y^2 = x^3 + Ax + B$ where $A, B$ are integers.
Hence this is the form that is studied.
Sometimes the rational points can be found in the following way (example: $y^2 = x^3 + 2x + 3$):
note: this curve is symmetric about the $x$ axis.
$(-1, 0)$ is a rational point. So is $(3, 6)$. The line through these points intersects the curve in a third point; substituting the line into the cubic yields a cubic in $x$ with two rational roots, hence the third root must be rational as well. So we get a third rational point. Then we use $(3, -6)$ to obtain another line and still another rational point; we keep adding rational points in this manner.
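This chord construction is easy to carry out with exact rational arithmetic; here is a sketch (my own code, for the example curve $y^2 = x^3 + 2x + 3$):

```python
from fractions import Fraction as F

def on_curve(x, y):
    """Is (x, y) on y^2 = x^3 + 2x + 3?"""
    return y * y == x**3 + 2 * x + 3

def third_point(P, Q):
    """Third intersection of the chord through P and Q with the cubic.

    Substituting y = y1 + s(x - x1) into y^2 = x^3 + 2x + 3 gives a monic
    cubic in x whose x^2 coefficient is -s^2, so the three roots sum to s^2.
    """
    (x1, y1), (x2, y2) = P, Q
    s = (y2 - y1) / (x2 - x1)       # slope of the chord
    x3 = s * s - x1 - x2            # third root of the cubic
    return (x3, y1 + s * (x3 - x1))

P, Q = (F(-1), F(0)), (F(3), F(6))
R = third_point(P, Q)
print(R)                            # (Fraction(1, 4), Fraction(15, 8))
assert on_curve(*R)
```

So the chord through $(-1,0)$ and $(3,6)$ produces the new rational point $(\frac{1}{4}, \frac{15}{8})$; reflecting and repeating generates more.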

This requires proof, but eventually we get all of the rational points in this manner.

The minimum number of “starting points” that we need to find the rational points is called the “rank” of the curve. Our curve is of rank 1 since we really needed only $(3, 6)$ (which, after reflecting, yields a line and a third rational point).

Mordell’s Theorem: every cubic is of finite rank, though it is unknown (as of this time) what the maximum rank is (maximum known example: rank 28), what an expected size would be, or even if “most” are rank 0 or rank 1.

Note: rank 0 means only a finite number of rational points.

Smaller talks
I enjoyed many of the short talks. Of note:
there was a number theory talk by Jay Schiffman in which different conjectures of the following type were presented: if $S$ is some sequence of positive integers and we look at the series of partial sums, or partial products (plus or minus some set number), what can we say about the number of primes that we obtain?

Example: Consider the Euclid product of primes (used to show that there is no largest prime number)
$E(1) = 2 + 1 = 3, E(2) = 2*3 + 1 = 7, E(3) = 2*3*5 + 1 = 31, E(4) = 2*3*5*7 + 1 = 211$ etc. It is unknown whether the sequence $E(1), E(2), E(3), \ldots$ contains infinitely many primes.
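These are easy to experiment with (a quick script of my own; trial division is fine at this size):

```python
from math import isqrt

def is_prime(n):
    """Trial division; fine for numbers this small."""
    return n >= 2 and all(n % d for d in range(2, isqrt(n) + 1))

# E(n) = (product of the first n primes) + 1
E, prod = [], 1
for p in (2, 3, 5, 7, 11, 13):
    prod *= p
    E.append(prod + 1)

print(E)                           # [3, 7, 31, 211, 2311, 30031]
print([is_prime(e) for e in E])    # the first composite: 30031 = 59 * 509
```

So the sequence starts out prime but fails already at $E(6) = 30031 = 59 \cdot 509$; how often it is prime after that is exactly the open question.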

Another good talk was given by Charlie Smith. It was about the proofs of the irrationality of various famous numbers; it was shown that many of the proofs follow a similar pattern and use a series of 3 techniques/facts that the presenter called “rabbits”. I might talk about this in a later post.

Another interesting talk was given by Jack Mealy. It was about a type of “hyper-hyperbolic” geometry called a “Snell geometry”. Basically one sets up the plane and then puts in a smooth boundary curve (say, a line or a circle). One then declares the geodesics to be paths that stay straight until they hit the boundary, where they obey Snell’s law from physics with respect to the normal of the boundary curve; the two rays joined together form the geodesic in the new geometry. One can do this with, say, a concentric series of circles.

If one arranges the density coefficient in the correct manner, one’s density (in terms of area) can be made to increase as one goes outward; this can lead to interesting area properties of triangles.

## March 6, 2010

### Why We Shouldn’t Take Uniqueness Theorems for Granted (Differential Equations)

Filed under: differential equations, partial differential equations, uniqueness of solution — collegemathteaching @ 11:07 pm

I made up this sheet for my students who are studying partial differential equations for the first time:

Remember all of those “existence and uniqueness theorems” from ordinary differential equations; that is, theorems like: “Given

$y^{\prime }=f(t,y)$ where $f$ is continuous on some rectangle
$R=\{(t,y) : a < t < b, \ c < y < d\}$ and $(t_{0},y_{0})\in R$, then we are guaranteed at least one solution where $y(t_{0})=y_{0}$. Furthermore, if $\frac{\partial f}{\partial y}$ is continuous in $R$ then the solution is unique.”

Or, you learned that solutions to
$y^{\prime \prime }+p(t)y^{\prime}+q(t)y=f(t), y(t_{0})=y_{0}, \ y^{\prime}(t_{0})=y_{1}$ existed and were unique so long as $p,q,$ and $f$ were continuous at $t_{0}$.

Well, things are very different in the world of partial differential
equations.

We learned that $u(x,y)=x^{2}+xy+y^{2}$ is a solution to $xu_{x}+yu_{y}=2u$
(this is an easy exercise)

But one can attempt a solution of the form $u(x,y)=f(x)g(y)$.
This separation of variables technique actually works; it is an exercise to see that $u(x,y)=x^{r}y^{2-r}$ is also a solution for all real $r$!!!

Note that if we wanted to meet some sort of initial condition, say, $u(1,1)=3$, then $u(x,y)=x^{2}+xy+y^{2}$ and $u(x,y)=3x^{r}y^{2-r}$ provide an infinite number of solutions to this problem. Note that this is a simple, linear partial differential equation!

Hence, to make any headway at all, we need to restrict ourselves to studying very specific partial differential equations in situations for which we do have some uniqueness theorems.
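The two families of solutions above are easy to check numerically; here is a small script of my own that approximates $x u_x + y u_y - 2u$ with central differences and verifies that it vanishes for the polynomial solution and for several values of $r$:

```python
import math

def residual(u, x, y, h=1e-6):
    """x u_x + y u_y - 2u, with the partials approximated by central differences."""
    ux = (u(x + h, y) - u(x - h, y)) / (2 * h)
    uy = (u(x, y + h) - u(x, y - h)) / (2 * h)
    return x * ux + y * uy - 2 * u(x, y)

# the polynomial solution from the sheet
assert abs(residual(lambda x, y: x * x + x * y + y * y, 1.3, 0.7)) < 1e-6

# the separated solutions x^r y^(2-r): one for every real r
for r in (-1.5, 0.25, 2.0, 3.7):
    u = lambda x, y, r=r: x**r * y**(2 - r)   # bind r now, not at call time
    assert abs(residual(u, 1.3, 0.7)) < 1e-6
print("all candidate solutions satisfy x u_x + y u_y = 2u")
```

Sliding $r$ through the reals really does give a continuum of distinct solutions through the same initial condition, which is the whole point of the sheet.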