# College Math Teaching

## July 11, 2011

### Quantum Mechanics for teachers of undergraduate mathematics I

I am planning on writing up a series of notes from the out-of-print book A Quantum Mechanics Primer by Daniel Gillespie.

My background: mathematics instructor (Ph.D. research area: geometric topology) whose last physics course (at the Naval Nuclear Power School) was almost 30 years ago; sophomore physics was 33 years ago.

Your background: you teach undergraduate mathematics for a living and haven’t had a course in quantum mechanics; those who have the time to study a book such as Quantum Mechanics and the Particles of Nature by Anthony Sudbery would be better off studying that. Those who have had a course in quantum mechanics would be bored stiff.

Topics the reader should know: probability density functions, square integrability, linear algebra (abstract Hermitian inner products, eigenbases, orthonormal bases), basic analysis (convergence of a series of functions), differential equations, and the Dirac delta distribution.

My purpose: to present some opportunities for showing applications to undergraduate students, e.g., “the Dirac delta ‘function’ (really a distribution) can be thought of as an eigenvector for this linear transformation”, or “here is an application of non-standard inner products and an abstract vector space”, or “here is a non-data application of the idea of the expected value and variance of a probability density function”, etc.

**Basic mathematical objects**
Our vector space will consist of functions $\psi : \mathbb{R} \rightarrow \mathbb{C}$ (complex-valued functions of a real variable) for which $\int^{\infty}_{-\infty} \overline{\psi} \psi dx$ is finite. Note: the square root of a probability density function is a vector in this vector space. Scalars are complex numbers and vector addition is the usual function addition.
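The claim about square roots of density functions is easy to check numerically. A minimal sketch (my own illustration, not from Gillespie's book), using the standard normal density:

```python
import numpy as np

# The square root of a probability density function is square integrable:
# the integral of |psi|^2 is exactly the integral of the density, which is 1.
x = np.linspace(-20.0, 20.0, 400001)
dx = x[1] - x[0]

pdf = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # standard normal density
psi = np.sqrt(pdf)                              # a vector in our space

# Riemann-sum approximation to the integral of psi^2 over the real line
assert np.isclose(np.sum(psi**2) * dx, 1.0)
```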

Our inner product $\langle \psi , \phi \rangle = \int^{\infty}_{-\infty} \overline{\psi} \phi dx$ has the following type of symmetry: $\langle \psi , \phi \rangle= \overline{\langle \phi , \psi \rangle}$ and $\langle c\psi , \phi \rangle = \langle \psi , \overline{c} \phi \rangle = \overline{c}\langle \psi , \phi \rangle$.
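These symmetry properties can be verified numerically on a truncated grid. The sketch below is my own illustration, using rapidly decaying functions as stand-ins for square-integrable ones:

```python
import numpy as np

# Check conjugate symmetry <psi, phi> = conj(<phi, psi>) and the way
# scalars conjugate out of the first slot of the inner product.
x = np.linspace(-10.0, 10.0, 40001)
dx = x[1] - x[0]

psi = np.exp(-x**2) * (1 + 1j * x)       # complex valued, square integrable
phi = np.exp(-x**2 / 2) * np.cos(x)      # real valued, square integrable

def inner(f, g):
    """<f, g> = integral of conj(f) * g  (conjugate linear in the first slot)."""
    return np.sum(np.conj(f) * g) * dx

# Conjugate symmetry:
assert np.isclose(inner(psi, phi), np.conj(inner(phi, psi)))

# <c psi, phi> = conj(c) <psi, phi>:
c = 2 - 3j
assert np.isclose(inner(c * psi, phi), np.conj(c) * inner(psi, phi))
```

Note that $\langle \psi, \psi \rangle$ is always real and non-negative, which is what lets the inner product define a norm.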

Note: Our vector space will have a metric that is compatible with the inner product; such (complete) spaces are called Hilbert spaces. This means that we will allow for infinite sums of functions with some notion of convergence; one might think of “convergence in the mean”, where $\psi_n \rightarrow \psi$ means $\langle \psi_n - \psi , \psi_n - \psi \rangle \rightarrow 0$.

Of interest to us will be the Hermitian linear transformations $H$ where $\langle H(\psi ), \phi \rangle = \langle \psi ,H(\phi) \rangle .$ It is an easy exercise to see that such a linear transformation can only have real eigenvalues. We will also be interested in the subset (NOT a vector subspace) of unit vectors $\psi$, i.e., those for which $\langle \psi , \psi \rangle = 1$.
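For completeness, here is the one-line version of that easy exercise, written in my own words: suppose $H(\psi) = \alpha \psi$ with $\langle \psi , \psi \rangle = 1$. Then

$\alpha = \alpha \langle \psi , \psi \rangle = \langle \psi , \alpha \psi \rangle = \langle \psi , H(\psi) \rangle = \langle H(\psi) , \psi \rangle = \langle \alpha \psi , \psi \rangle = \overline{\alpha} \langle \psi , \psi \rangle = \overline{\alpha}$

so $\alpha = \overline{\alpha}$, i.e., $\alpha$ is real. (The middle equality is the Hermitian property; the scalar conjugates when pulled out of the first slot.)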

Eigenvalues and eigenvectors will be defined in the usual way: if $H(\psi) = \alpha \psi$ then we say that $\psi$ is an eigenvector for $H$ with associated eigenvalue $\alpha$. If there is a countable number of orthonormal eigenvectors whose “span” (allowing for infinite sums) includes every element of the vector space, then we say that $H$ has a complete orthonormal eigenbasis.

It is a good warm up exercise to show that if $H$ has a complete orthonormal eigenbasis with real eigenvalues then $H$ is Hermitian.

Hint: start with $\langle H(\psi ), \phi \rangle$ and expand $\psi$ and $\phi$ in terms of the eigenbasis; of course the linear operator $H$ has to commute with the infinite sum so there are convergence issues to be concerned about.

The outline goes something like this: suppose $\{\epsilon_i\}$ is a complete orthonormal set of eigenvectors for $H$ with real eigenvalues $a_i$, and suppose $\psi = \sum_i b_i \epsilon_i$ and $\phi = \sum_i c_i \epsilon_i$. Then

$\langle H(\psi ), \phi \rangle =\langle H(\sum_i b_i \epsilon_i ), \phi \rangle = \langle \sum_i b_i H(\epsilon_i ), \phi \rangle = \langle \sum_i a_i b_i \epsilon_i , \phi \rangle = \sum_i a_i\langle b_i \epsilon_i , \phi \rangle$

(the last equality uses the fact that the $a_i$ are real, since a scalar conjugates when pulled out of the first slot).

Now do the same operation on the right side of the inner product and use the fact that the basis vectors are mutually orthogonal. Note: there are convergence issues here; those that arise from moving the infinite sum outside of the inner product can be handled with a dominated convergence theorem for integrals. But the intuition taken from finite vector spaces works here.
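In finite dimensions the convergence issues disappear and the warm up exercise can be checked directly. The sketch below is my own NumPy illustration: build an operator from a chosen orthonormal eigenbasis with real eigenvalues and confirm that it is Hermitian.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

a = rng.normal(size=n)   # real eigenvalues
# QR factorization of a random complex matrix gives a unitary Q;
# its columns serve as an orthonormal (eigen)basis.
Q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))

# The operator that acts by H Q[:, i] = a[i] Q[:, i]:
H = Q @ np.diag(a) @ Q.conj().T

# H equals its conjugate transpose, i.e., H is Hermitian.
assert np.allclose(H, H.conj().T)

# Sanity check: each column of Q really is an eigenvector with eigenvalue a[i].
for i in range(n):
    assert np.allclose(H @ Q[:, i], a[i] * Q[:, i])
```

If the eigenvalues are allowed to be complex, the assertion `np.allclose(H, H.conj().T)` fails, which is why the real-eigenvalue hypothesis matters.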

The other thing to note is that not every Hermitian operator maps our space into itself; that is, it is possible for $\psi$ to be square integrable but for $H(\psi) = x \psi$ to fail to be square integrable.
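A concrete example (my own, not from the book): $\psi(x) = (1 + x^2)^{-1/2}$ is square integrable, since $\int |\psi|^2 dx = \int \frac{1}{1+x^2} dx = \pi$, but $|x\psi|^2 = \frac{x^2}{1+x^2} \rightarrow 1$, so its integral grows without bound. A quick numerical check:

```python
import numpy as np

def riemann(f, L, n=200000):
    """Riemann-sum approximation of the integral of f over [-L, L]."""
    x = np.linspace(-L, L, n)
    return np.sum(f(x)) * (x[1] - x[0])

psi_sq = lambda x: 1.0 / (1.0 + x**2)    # |psi|^2
xpsi_sq = lambda x: x**2 / (1.0 + x**2)  # |x psi|^2

for L in (10, 100, 1000):
    print(L, riemann(psi_sq, L), riemann(xpsi_sq, L))
# The first column of integrals converges (to pi) as L grows;
# the second grows roughly like 2L and diverges.
```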

**Probability Density Functions**

Let $H$ be a linear operator with a complete orthonormal eigenbasis $\epsilon_i$ and corresponding real eigenvalues $a_i$. Let $\psi$ be an element of the Hilbert space with unit norm and let $\psi = \sum_j b_j \epsilon_j$.

Claim: the function $P(y = a_i) = |b_i|^2$ is a probability density function. (note: $b_i = \langle \epsilon_i , \psi \rangle$).

The fact that $|b_i|^2 \leq 1$ follows easily from the Cauchy-Schwarz inequality. Also note that $1 = \langle \psi, \psi \rangle = \langle \sum_i b_i \epsilon_i,\sum_i b_i \epsilon_i \rangle = \sum_i \overline{b_i} b_i \langle \epsilon_i, \epsilon_i \rangle = \sum_i |b_i|^2$

Yes, I skipped some steps that are easy to fill in. But the bottom line is that this density function now has a (sometimes) finite expected value and a (sometimes) finite variance.
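The finite-dimensional picture, in a NumPy sketch of my own: the squared coefficients of a unit vector in an orthonormal eigenbasis sum to 1, so they define a probability distribution on the eigenvalues, and its expected value is exactly $\langle \psi , H(\psi) \rangle$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6

# A Hermitian operator built from real eigenvalues and a unitary eigenbasis.
a = rng.normal(size=n)
Q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
H = Q @ np.diag(a) @ Q.conj().T

# A unit vector psi and its coefficients b_i = <eps_i, psi>.
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)
b = Q.conj().T @ psi

p = np.abs(b)**2                     # the claimed probabilities
assert np.isclose(p.sum(), 1.0)      # they sum to 1

# Expected value of the eigenvalue distribution equals <psi, H psi>.
expected = np.sum(a * p)
assert np.isclose(expected, np.vdot(psi, H @ psi).real)
```

The last assertion previews the physics: the expected value of this distribution is the "expectation of the observable $H$ in the state $\psi$."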

With the mathematical preliminaries (mostly) out of the way, we are ready to see how this applies to physics.