I am planning on writing up a series of notes from the out-of-print book A Quantum Mechanics Primer by Daniel Gillespie.
My background: mathematics instructor (Ph.D. research area: geometric topology) whose last physics course (at the Naval Nuclear Power School) was almost 30 years ago; sophomore physics was 33 years ago.
Therefore, corrections (or illuminations) from readers would be warmly received.
Your background: you teach undergraduate mathematics for a living and haven't had a course in quantum mechanics. Those who have the time to study a book such as Quantum Mechanics and the Particles of Nature by Anthony Sudbery would be better off studying that; those who have had a course in quantum mechanics would be bored stiff.
Topics the reader should know: probability density functions, square integrability, linear algebra (abstract inner products (Hermitian), eigenbasis, orthonormal basis), basic analysis (convergence of a series of functions), differential equations, and the Dirac delta distribution.
My purpose: to present some applications to undergraduate students, e.g., "the Dirac delta 'function' (really a distribution) can be thought of as an eigenvector for this linear transformation," or "here is an application of non-standard inner products and an abstract vector space," or "here is a non-data application of the idea of the expected value and variance of a probability density function," etc.
Basic mathematical objects
Our vector space will consist of functions $f$ (complex valued functions of a real variable) for which $\int_{-\infty}^{\infty} |f(x)|^2 \, dx$ is finite. Note: the square root of a probability density function is a vector in this vector space. Scalars are complex numbers and the operation is the usual function addition.
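As a quick numerical sanity check (the Gaussian density and grid here are my choice, not from the book), the square root of a probability density function does have unit norm in this space:

```python
import numpy as np

# Sample grid wide enough that the Gaussian tails are negligible.
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]

# Standard normal probability density function.
pdf = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

# f = sqrt(pdf) should be a unit vector:
# integral of |f(x)|^2 dx = integral of pdf = 1.
f = np.sqrt(pdf)
norm_sq = np.sum(np.abs(f)**2) * dx
print(norm_sq)  # approximately 1.0
```

The Riemann sum stands in for the integral; any density would work equally well here.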
Our inner product is $\langle f, g \rangle = \int_{-\infty}^{\infty} \overline{f(x)}\, g(x) \, dx$, and it has the following type of symmetry: $\langle f, g \rangle = \overline{\langle g, f \rangle}$ and $\langle af, g \rangle = \overline{a}\, \langle f, g \rangle$ for a complex scalar $a$ (so the inner product is conjugate-linear in the first slot and linear in the second).
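A discretized sketch of this inner product (the grid and the two test functions are my own choices) makes the conjugate symmetry easy to check numerically:

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 4001)
dx = x[1] - x[0]

def inner(f, g):
    # <f, g> = integral of conj(f(x)) * g(x) dx,
    # conjugate-linear in the first slot, linear in the second.
    return np.sum(np.conj(f) * g) * dx

# Two square-integrable complex-valued test functions.
f = np.exp(-x**2) * np.exp(1j * x)
g = np.exp(-x**2 / 2) * (x + 2j)

# Conjugate symmetry: <f, g> = conj(<g, f>).
lhs = inner(f, g)
rhs = np.conj(inner(g, f))
print(np.allclose(lhs, rhs))  # True
```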
Note: Our vector space will have a metric that is compatible with the inner product, and the space is complete with respect to that metric; such spaces are called Hilbert spaces. This means that we will allow for infinite sums of functions with some notion of convergence; one might think of "convergence in the mean," where $f_n \rightarrow f$ means $\langle f - f_n, f - f_n \rangle \rightarrow 0$, i.e., convergence in the norm induced by our inner product.
Of interest to us will be the Hermitian linear transformations $A$: those for which $\langle Af, g \rangle = \langle f, Ag \rangle$ for all $f, g$. It is an easy exercise to see that such a linear transformation can only have real eigenvalues. We will also be interested in the subset (NOT a vector subspace) of vectors $f$ for which $\langle f, f \rangle = 1$.
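The "real eigenvalues" fact is easy to see in the finite-dimensional analogue, where a Hermitian transformation is a matrix equal to its own conjugate transpose (the random matrix below is just a stand-in of my choosing):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random Hermitian matrix: A = (M + M^dagger)/2 satisfies A = A^dagger.
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = (M + M.conj().T) / 2

# The eigenvalues of a Hermitian matrix are real; here the imaginary
# parts vanish up to floating-point noise.
eigvals = np.linalg.eigvals(A)
print(np.max(np.abs(eigvals.imag)) < 1e-10)  # True
```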
Eigenvalues and eigenvectors will be defined in the usual way: if $Af = \lambda f$ for some nonzero $f$, then we say that $f$ is an eigenvector for $A$ with associated eigenvalue $\lambda$. If there is a countable collection of orthonormal eigenvectors whose "span" (allowing for infinite sums) includes every element of the vector space, then we say that $A$ has a complete orthonormal eigenbasis.
It is a good warm up exercise to show that if $A$ has a complete orthonormal eigenbasis with real eigenvalues, then $A$ is Hermitian.
Hint: start with $\langle f, Ag \rangle$ and expand $f$ and $g$ in terms of the eigenbasis; of course the linear operator has to commute with the infinite sum, so there are convergence issues to be concerned about.
The outline goes something like this: suppose $\{\phi_n\}$ is the complete set of eigenvectors for $A$ with eigenvalues $\lambda_n$, and $f = \sum_n a_n \phi_n$ and $g = \sum_n b_n \phi_n$. Then $\langle f, Ag \rangle = \langle \sum_n a_n \phi_n, \sum_m \lambda_m b_m \phi_m \rangle = \sum_n \lambda_n \overline{a_n} b_n$.
Now do the same operation on the left side of the inner product, $\langle Af, g \rangle$, and use the fact that the basis vectors are mutually orthogonal and the eigenvalues are real. Note: there are convergence issues here; those that relate to moving the infinite sum outside of the inner product can be handled with a dominated convergence theorem for integrals. But the intuition taken from finite-dimensional vector spaces works here.
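The finite-dimensional version of this warm up can be checked directly: assemble an operator from an orthonormal eigenbasis (columns of a unitary matrix, obtained here from a QR factorization of my choosing) and real eigenvalues, and verify it is Hermitian.

```python
import numpy as np

rng = np.random.default_rng(1)

# An orthonormal eigenbasis: the columns of a unitary matrix U.
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(M)

# Real eigenvalues.
lam = np.array([1.0, -2.0, 0.5, 3.0])

# A acts by A phi_n = lam_n phi_n, i.e. A = U diag(lam) U^dagger.
A = U @ np.diag(lam) @ U.conj().T

# A equals its conjugate transpose, so <Af, g> = <f, Ag>.
print(np.allclose(A, A.conj().T))  # True
```

If any of the $\lambda_n$ were complex, the last check would fail, which is why the warm up needs real eigenvalues.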
The other thing to note is that a Hermitian operator need not be defined on the whole space; that is, it is possible for $f$ to be square integrable but for $Af$ to fail to be square integrable.
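A standard example is the position operator $(Af)(x) = x f(x)$; the particular $f$ below is my own choice. $f(x) = \frac{1}{1+|x|}$ is square integrable, but $x f(x) \rightarrow \pm 1$ as $x \rightarrow \pm\infty$, so $Af$ is not. Numerically, the truncated norm of $f$ converges while that of $Af$ grows without bound:

```python
import numpy as np

def truncated_norms(L, n=200001):
    """Riemann sums for the integrals of |f|^2 and |Af|^2 over [-L, L]."""
    x = np.linspace(-L, L, n)
    dx = x[1] - x[0]
    f = 1.0 / (1.0 + np.abs(x))   # square integrable on the whole real line
    Af = x * f                    # position operator: (Af)(x) = x f(x)
    return np.sum(f**2) * dx, np.sum(Af**2) * dx

for L in (10, 100, 1000):
    nf, nAf = truncated_norms(L)
    print(L, nf, nAf)
# The norm of f approaches 2 (the exact value of the integral),
# while the norm of Af grows roughly like 2L: Af is not square integrable.
```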
Probability Density Functions
Let $A$ be a linear operator with a complete orthonormal eigenbasis $\{\phi_n\}$ and corresponding real eigenvalues $\lambda_n$. Let $\psi$ be an element of the Hilbert space with unit norm and let $a_n = \langle \phi_n, \psi \rangle$.
Claim: the function $p(\lambda_n) = |a_n|^2$ is a probability density function. (note: $\psi = \sum_n a_n \phi_n$).
The fact that $0 \leq |a_n|^2 \leq 1$ follows easily from the Cauchy-Schwarz inequality: $|a_n| = |\langle \phi_n, \psi \rangle| \leq \|\phi_n\| \|\psi\| = 1$. Also note that $\sum_n |a_n|^2 = \langle \sum_n a_n \phi_n, \sum_m a_m \phi_m \rangle = \langle \psi, \psi \rangle = 1$ by orthonormality.
Yes, I skipped some steps that are easy to fill in. But the bottom line is that this density function now has a (sometimes) finite expected value $\sum_n \lambda_n |a_n|^2$ and a (sometimes) finite variance $\sum_n \lambda_n^2 |a_n|^2 - \left( \sum_n \lambda_n |a_n|^2 \right)^2$.
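Here is a finite-dimensional sketch of the whole claim (the Hermitian matrix and the unit vector $\psi$ are random stand-ins of my choosing): the $|a_n|^2$ sum to 1, and the expected value $\sum_n \lambda_n |a_n|^2$ agrees with $\langle \psi, A\psi \rangle$.

```python
import numpy as np

rng = np.random.default_rng(2)

# A Hermitian matrix standing in for the operator A.
M = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
A = (M + M.conj().T) / 2
lam, U = np.linalg.eigh(A)   # real eigenvalues, orthonormal eigenvector columns

# A unit-norm state psi.
psi = rng.normal(size=5) + 1j * rng.normal(size=5)
psi = psi / np.linalg.norm(psi)

# Coefficients a_n = <phi_n, psi>; the |a_n|^2 form a probability distribution.
a = U.conj().T @ psi
p = np.abs(a)**2
print(np.isclose(p.sum(), 1.0))   # True: probabilities sum to 1

# Expected value sum_n lam_n |a_n|^2 equals <psi, A psi>.
mean = np.sum(lam * p)
print(np.isclose(mean, (psi.conj() @ A @ psi).real))  # True

# Variance: sum_n lam_n^2 |a_n|^2 - mean^2 (nonnegative, as it must be).
var = np.sum(lam**2 * p) - mean**2
print(var >= 0)   # True
```

This is exactly the "non-data application of expected value and variance" mentioned in the introduction, in miniature.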
With the mathematical preliminaries (mostly) out of the way, we are ready to see how this applies to physics.