College Math Teaching

January 6, 2016

On all but a set of measure zero

Filed under: analysis, physics, popular mathematics, probability — Tags: — collegemathteaching @ 7:36 pm

This blog isn’t about cosmology or about arguments over religion. But it is unusual to hear “on all but a set of measure zero” in the middle of a pop-science talk: (2:40-2:50)

January 20, 2014

A bit more prior to admin BS

One thing that surprised me about the professor’s job (at a non-research intensive school; we have a modest but real research requirement, but mostly we teach): I never knew how much time I’d spend doing tasks that have nothing to do with teaching and scholarship. Groan….how much of this do I tell our applicants who arrive on campus to interview? 🙂

But there is something mathematical that I want to talk about; it is a follow up to this post. It has to do with what string theorists tell us: \sum^{\infty}_{k = 1} k = -\frac{1}{12} . Needless to say, they are using a non-standard definition of “value of a series”.

Where I think the problem is: when we hear “series” we think of something related to the usual process of addition. Clearly, this non-standard assignment doesn’t relate to addition in the way we usually think about it.

So, it might make more sense to think of a “generalized series” as a map from the set of sequences of real numbers (or: the infinite dimensional real vector space) to the real numbers; the usual “limit of partial sums” definition has some nice properties with respect to sequence addition, scalar multiplication and a “shift operation”, provided we restrict ourselves to a suitable collection of sequences (say, those whose traditional series of terms converge absolutely).

So, this “non-standard sum” can be thought of as a map f:V \rightarrow R^1 where f(\{1, 2, 3, 4, 5,....\}) = -\frac{1}{12} . That is a bit less offensive than calling it a “sum”. 🙂

January 18, 2014

Fun with divergent series (and uses: e. g. string theory)

One “fun” math book is Knopp’s book Theory and Application of Infinite Series. I highly recommend it to anyone who frequently teaches calculus, or to talented, motivated calculus students.

One of the more interesting chapters in the book is on “divergent series”. If that sounds boring consider the following:

we all know that \sum^{\infty}_{n=0} x^n = \frac{1}{1-x} when |x| < 1 and diverges elsewhere, PROVIDED one uses the “sequence of partial sums” definition of convergence of sums. But, as Knopp points out, there are other definitions of convergence which leave all the convergent (by the usual definition) series convergent (to the same value) but also allow one to declare a larger set of series to be convergent.

Consider 1 - 1 + 1 -1 + 1.......

of course this is a divergent geometric series by the usual definition. But note that if one uses the geometric series formula:

\sum^{\infty}_{n=0} x^n = \frac{1}{1-x} and substitutes x = -1 which IS in the domain of the right hand side (but NOT in the interval of convergence of the left hand side) one obtains 1 -1 +1 -1 + 1.... = \frac{1}{2} .

Now this is nonsense unless we use a different definition of sum convergence, such as Cesaro summation: if s_k is the usual “partial sum of the first k terms”, s_k = \sum^{k}_{n =0}a_n , then one declares the Cesaro sum of the series to be \lim_{m \rightarrow \infty} \frac{1}{m}\sum^{m}_{k=1}s_k provided this limit exists (this is the arithmetic average of the partial sums).

(see here)

So for our 1 -1 + 1 -1 .... we easily see that s_{2k+1} = 0, s_{2k} = 1 so for m even we see \frac{1}{m}\sum^{m}_{k=1}s_k = \frac{\frac{m}{2}}{m} = \frac{1}{2} and for m odd we get \frac{\frac{m-1}{2}}{m} which tends to \frac{1}{2} as m tends to infinity.
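To make this concrete, here is a minimal numerical sketch (in Python, which is not part of the original post) that computes the running Cesaro means of the partial sums of 1 - 1 + 1 - 1 + ... and watches them approach \frac{1}{2} ; the function name is just for illustration.

def cesaro_means(terms):
    """Return the averages (1/m) * (s_1 + ... + s_m) of the partial sums s_k."""
    means = []
    partial_sum = 0.0
    total_of_partials = 0.0
    for m, a in enumerate(terms, start=1):
        partial_sum += a                   # s_m
        total_of_partials += partial_sum   # s_1 + ... + s_m
        means.append(total_of_partials / m)
    return means

grandi_terms = [(-1) ** n for n in range(10000)]   # 1, -1, 1, -1, ...
print(cesaro_means(grandi_terms)[-1])              # approximately 0.5, the Cesaro sum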

Now, we have this weird type of assignment.

But that won’t help with \sum^{\infty}_{k = 1} k = 1 + 2 + 3 + 4 + 5...... But weirdly enough, string theorists find a way to assign this particular series a number! In fact, the number that they assign to this makes no sense at all: -\frac{1}{12} .

What the heck? Well, one way this is done is explained here:

Consider \sum^{\infty}_{k=0}x^k = \frac{1}{1-x} . Now differentiate term by term to get 1 +2x + 3x^2+4x^3 .... = \frac{1}{(1-x)^2} and then multiply both sides by x to obtain x + 2x^2 + 3x^3 + .... = \frac{x}{(1-x)^2} . This has a pole of order 2 at x = 1. But now substitute x = e^h and calculate the Laurent series about h = 0 ; the 0 order (constant) term turns out to be -\frac{1}{12} . Yes, this has applications in string theory!
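A quick symbolic check of that last step, sketched with sympy (an assumed tool; the original post does not use any software):

import sympy as sp

h = sp.symbols('h')
expr = sp.exp(h) / (1 - sp.exp(h))**2      # x/(1-x)^2 with x = e^h
print(sp.series(expr, h, 0, 3))
# Expected: h**(-2) - 1/12 + h**2/240 + O(h**3); the constant term is -1/12.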

Now of course, if one uses the usual definitions of convergence, I played fast and loose with the usual intervals of convergence and when I could differentiate term by term. This theory is NOT the usual calculus theory.

Now if you want to see some “fun nonsense” applied to this (spot how many “errors” are made….it is a nice exercise):

And read this to see exploding heads. 🙂

What is going on: when one sums a series, one is really “assigning a value” to an object; think of this as a type of morphism from the set of series to the set of numbers. The usual definition of “sum of a series” is an especially nice morphism as it allows, WITH PRECAUTIONS, some nice algebraic operations in the domain (the set of series) to be carried over into the range. I say “with precautions” because of things like the following:

1. If one is talking about series of numbers, then one must have an absolutely convergent series for derangements of a given series to be assigned the same number. Example: it is well known that a conditionally convergent alternating series can be rearranged to converge to any value of choice.

2. If one is talking about a series of functions (say, power series where one sums things like x^n ) one has to be in the OPEN interval of absolute convergence to justify term by term differentiation and integration; then of course a series is assigned a function rather than a number.

So when one tries to go with a different notion of convergence, one must be extra cautious as to which operations in the domain space carry through under the “assignment morphism” and what the “equivalence classes” of a given series are (e. g. can a series be deranged and keep the same sum?)

This Phil Plait article started this post in motion for me and I got to it via 3-quarks daily.

July 1, 2013

Mathematics: aids the conceptual understanding of elementary physics

Filed under: applications of calculus, editorial, elementary mathematics, pedagogy, physics — collegemathteaching @ 4:52 pm

I was blogging about the topic of how “classroom knowledge” turns into “walking around knowledge” and came across an “elementary physics misconceptions” webpage at the University of Montana. It is fun, but it helped me realize how easy things can be when one thinks mathematically.

Example.

[Screenshot of the first problem from the misconceptions page]

This becomes very easy if one does a bit of mathematics. Let m represent the mass of the object; F = 10 = ma implies that a = \frac{10}{m} which isn’t that important; we’ll just use a . Now putting into vector form we have \vec{a}(t) = a \vec{i}, \vec{v}(0) = V_i \vec{j}, \vec{s}(0) = \vec{0}  . By elementary integration, obtain \vec{v} =  at \vec{i} + V_i \vec{j}  and integrate again to obtain \vec{s}(t) = \frac{1}{2}at^2\vec{i}+(V_i)t\vec{j} which has parametric equations x(t) = \frac{a}{2}t^2, y(t) = V_i t which has a “sideways parabola” as a graph.
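As a sanity check, here is a short numerical sketch (numpy assumed; the numbers are hypothetical, not from the problem) showing that these parametric equations trace a sideways parabola, i.e. x is proportional to y^2 :

import numpy as np

a, V_i = 2.0, 3.0                     # hypothetical acceleration and initial speed
t = np.linspace(0.0, 5.0, 200)
x = 0.5 * a * t**2                    # displacement along the direction of the force
y = V_i * t                           # displacement along the initial velocity
# Eliminating t: t = y / V_i, so x = (a / (2 V_i^2)) y^2, a parabola opening sideways.
assert np.allclose(x, (a / (2 * V_i**2)) * y**2)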

Let’s look at another example:

[Screenshot of the second problem from the misconceptions page]

So what is going on? Force F = \frac{d}{dt}(mv) = \frac{dm}{dt}v + m\frac{dv}{dt} = 0 . The first term is the thrust and is against the direction of acceleration. So we have 1000 = m\frac{dv}{dt} which, upon integration (treating m as constant), implies that \frac{1000}{m} t + v_0 = v(t) and so we see that the rocket continues to speed up at a constant acceleration.
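A tiny symbolic check of the integration step (a sketch with sympy; as in the post, the mass m is treated as a constant):

import sympy as sp

t = sp.symbols('t')
m, v0 = sp.symbols('m v_0', positive=True)
v = sp.Function('v')
solution = sp.dsolve(sp.Eq(m * v(t).diff(t), 1000), v(t), ics={v(0): v0})
print(solution)   # Expected: Eq(v(t), v_0 + 1000*t/m), i.e. constant acceleration 1000/m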

These problems are easier with mathematics, aren’t they? 🙂

June 12, 2013

A couple of instances of math in action

Filed under: advanced mathematics, applied mathematics, Fourier Series, physics, popular mathematics — Tags: — collegemathteaching @ 9:02 pm

Via Jerry Coyne’s website; you’ll see some great comments there.

Watch standing waves in action:

Here is what is going on; the particles collect at the “stationary” points (the nodes of the standing waves).
This is an excellent reason to take a course that deals with Fourier Series!

Here is an example of a projection, and what happens when you take the image and move it a little.

March 3, 2013

Mathematics, Statistics, Physics

Filed under: applications of calculus, media, news, physics, probability, science, statistics — collegemathteaching @ 11:00 pm

This is a fun little post about the interplay between physics, mathematics and statistics (Brownian Motion)

Here is a teaser video:

The article itself has a nice animation showing the effects of a Poisson process: one will get some statistical clumping in areas rather than uniform spreading.

Treat yourself to the whole article; it is entertaining.

May 26, 2012

Eigenvalues, Eigenvectors, Eigenfunctions and all that….

The purpose of this note is to give a bit of direction to the perplexed student.

I am not going to go into all the possible uses of eigenvalues, eigenvectors, eigenfunctions and the like; I will say that these are essential concepts in areas such as partial differential equations, advanced geometry and quantum mechanics:

Quantum mechanics, in particular, is a specific yet very versatile implementation of this scheme. (And quantum field theory is just a particular example of quantum mechanics, not an entirely new way of thinking.) The states are “wave functions,” and the collection of every possible wave function for some given system is “Hilbert space.” The nice thing about Hilbert space is that it’s a very restrictive set of possibilities (because it’s a vector space, for you experts); once you tell me how big it is (how many dimensions), you’ve specified your Hilbert space completely. This is in stark contrast with classical mechanics, where the space of states can get extraordinarily complicated. And then there is a little machine — “the Hamiltonian” — that tells you how to evolve from one state to another as time passes. Again, there aren’t really that many kinds of Hamiltonians you can have; once you write down a certain list of numbers (the energy eigenvalues, for you pesky experts) you are completely done.

(emphasis mine).

So it is worth understanding the eigenvector/eigenfunction and eigenvalue concept.

First note: “eigen” is German for “self”; one should keep that in mind. That is part of the concept as we will see.

The next note: “eigenfunctions” really are a type of “eigenvector” so if you understand the latter concept at an abstract level, you’ll understand the former one.

The third note: if you are reading this, you are probably already familiar with some famous eigenfunctions! We’ll talk about some examples prior to giving the formal definition. This remark might sound cryptic at first (but hang in there): remember when you learned \frac{d}{dx} e^{ax} = ae^{ax} ? That is, you learned that the derivative of e^{ax} is a scalar multiple of itself? (emphasis on SELF). So you already know that the function e^{ax} is an eigenfunction of the “operator” \frac{d}{dx} with eigenvalue a because that is the scalar multiple.

The basic concept of eigenvectors (eigenfunctions) and eigenvalues is really no more complicated than that. Let’s do another one from calculus:
the function sin(wx) is an eigenfunction of the operator \frac{d^2}{dx^2} with eigenvalue -w^2 because \frac{d^2}{dx^2} sin(wx) = -w^2sin(wx). That is, the function sin(wx) is a scalar multiple of its second derivative. Can you think of more eigenfunctions for the operator \frac{d^2}{dx^2} ?

Answer: cos(wx) and e^{ax} are two others, if we only allow for non zero eigenvalues (scalar multiples).
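Here is a quick symbolic verification of these claims, sketched with sympy (not part of the original note):

import sympy as sp

x, w, a = sp.symbols('x w a')
examples = [(sp.sin(w * x), -w**2),   # eigenvalue -w^2
            (sp.cos(w * x), -w**2),   # eigenvalue -w^2
            (sp.exp(a * x), a**2)]    # eigenvalue a^2
for f, eigenvalue in examples:
    # check that the second derivative is the claimed scalar multiple of f
    assert sp.simplify(sp.diff(f, x, 2) - eigenvalue * f) == 0
print("each function is a scalar multiple of its own second derivative")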

So hopefully you are seeing the basic idea: we have a collection of objects called vectors (can be traditional vectors or abstract ones such as differentiable functions) and an operator (linear transformation) that acts on these objects to yield a new object. In our example, the vectors were differentiable functions, and the operators were the derivative operators (the thing that “takes the derivative of” the function). An eigenvector (eigenfunction)-eigenvalue pair for that operator is a vector (function) that is transformed to a scalar multiple of itself by the operator; e. g., the derivative operator takes e^{ax} to ae^{ax} which is a scalar multiple of the original function.

Formal Definition
We will give the abstract, formal definition. Then we will follow it with some examples and hints on how to calculate.

First we need the setting. We start with a set of objects called “vectors” and “scalars”; the usual rules of arithmetic (addition, multiplication, subtraction, division, distributive property) hold for the scalars, and there is a type of addition for the vectors; the scalars and the vectors “work together” in the intuitive way. Example: in the set of, say, differentiable functions, the scalars will be real numbers and we have rules such as a (f + g) =af + ag , etc. We could also use real numbers for scalars and, say, three dimensional vectors such as [a, b, c] . More formally, we start with a vector space (sometimes called a linear space) which is defined as a set of vectors and scalars which obey the vector space axioms.

Now, we need a linear transformation, which is sometimes called a linear operator. A linear transformation (or operator) is a function L that obeys the following laws: L(\vec{v} + \vec{w}) = L(\vec{v}) + L(\vec{w} ) and L(a\vec{v}) = aL(\vec{v}) . Note that I am using \vec{v} to denote the vectors and the undecorated variable to denote the scalars. Also note that this linear transformation L might take one vector space to a different vector space.

Common linear transformations (and there are many others!) and their eigenvectors and eigenvalues.
Consider the vector space of two-dimensional vectors with real numbers as scalars. We can create a linear transformation by matrix multiplication:

L([x,y]^T) = \left[ \begin{array}{cc} a & b \\ c & d \end{array} \right] \left[ \begin{array}{c} x \\ y \end{array} \right]=\left[ \begin{array}{c} ax+ by \\ cx+dy \end{array} \right]  (note: [x,y]^T is the transpose of the row vector; we need to use a column vector for the usual rules of matrix multiplication to apply).

It is easy to check that the operation of matrix multiplying a vector on the left by an appropriate matrix yields a linear transformation.
Here is a concrete example: L([x,y]^T) = \left[ \begin{array}{cc} 1 & 2 \\ 0 & 3 \end{array} \right] \left[ \begin{array}{c} x \\ y \end{array} \right]=\left[ \begin{array}{c} x+ 2y \\ 3y \end{array} \right]

So, does this linear transformation HAVE non-zero eigenvectors and eigenvalues? (not every one does).
Let’s see if we can find the eigenvectors and eigenvalues, provided they exist at all.

For [x,y]^T to be an eigenvector for L , remember that L([x,y]^T) = \lambda [x,y]^T for some real number \lambda

So, using the matrix we get: L([x,y]^T) = \left[ \begin{array}{cc} 1 & 2 \\ 0 & 3 \end{array} \right] \left[ \begin{array}{c} x \\ y \end{array} \right]= \lambda \left[ \begin{array}{c} x \\ y \end{array} \right] . So doing some algebra (subtracting the vector on the right hand side from both sides) we obtain \left[ \begin{array}{cc} 1 & 2 \\ 0 & 3 \end{array} \right] \left[ \begin{array}{c} x \\ y \end{array} \right] - \lambda \left[ \begin{array}{c} x \\ y \end{array} \right] = \left[ \begin{array}{c} 0 \\ 0 \end{array} \right]

At this point it is tempting to try to use a distributive law to factor out \left[ \begin{array}{c} x \\ y \end{array} \right] from the left side. But, while the expression makes sense prior to factoring, it wouldn’t AFTER factoring as we’d be subtracting a scalar number from a 2 by 2 matrix! But there is a way out of this: one can then insert the 2 x 2 identity matrix to the left of the second term of the left hand side:
\left[ \begin{array}{cc} 1 & 2 \\ 0 & 3 \end{array} \right] \left[ \begin{array}{c} x \\ y \end{array} \right] - \lambda\left[ \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right] \left[ \begin{array}{c} x \\ y \end{array} \right] = \left[ \begin{array}{c} 0 \\ 0 \end{array} \right]

Notice that by doing this, we haven’t changed anything except now we can factor out that vector; this would leave:
(\left[ \begin{array}{cc} 1 & 2 \\ 0 & 3 \end{array} \right]  - \lambda\left[ \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right] )\left[ \begin{array}{c} x \\ y \end{array} \right] = \left[ \begin{array}{c} 0 \\ 0 \end{array} \right]

Which leads to:

(\left[ \begin{array}{cc} 1-\lambda & 2 \\ 0 & 3-\lambda \end{array} \right] ) \left[ \begin{array}{c} x \\ y \end{array} \right] = \left[ \begin{array}{c} 0 \\ 0 \end{array} \right]

Now we use a fact from linear algebra: if [x,y]^T is not the zero vector, we have a non-zero matrix times a non-zero vector yielding the zero vector. This means that the matrix is singular. In linear algebra class, you learn that singular matrices have determinant equal to zero. This means that (1-\lambda)(3-\lambda) = 0 which means that \lambda = 1, \lambda = 3 are the respective eigenvalues. Note: when we do this procedure with any 2 by 2 matrix, we always end up with a quadratic with \lambda as the variable; if this quadratic has real roots then the linear transformation (or matrix) has real eigenvalues. If it doesn’t have real roots, the linear transformation (or matrix) doesn’t have non-zero real eigenvalues.

Now to find the associated eigenvectors: if we start with \lambda = 1 we get
\left[ \begin{array}{cc} 0 & 2 \\ 0 & 2 \end{array} \right]  \left[ \begin{array}{c} x \\ y \end{array} \right] = \left[ \begin{array}{c} 0 \\ 0 \end{array} \right] which has solution \left[ \begin{array}{c} x \\ y \end{array} \right] = \left[ \begin{array}{c} 1 \\ 0 \end{array} \right] . So that is the eigenvector associated with eigenvalue 1.
If we next try \lambda = 3 we get
\left[ \begin{array}{cc} -2 & 2 \\ 0 & 0 \end{array} \right]  \left[ \begin{array}{c} x \\ y \end{array} \right] = \left[ \begin{array}{c} 0 \\ 0 \end{array} \right] which has solution \left[ \begin{array}{c} x \\ y \end{array} \right] = \left[ \begin{array}{c} 1 \\ 1 \end{array} \right] . So that is the eigenvector associated with the eigenvalue 3.
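For readers who like to double check by machine, here is a short numerical sketch (numpy assumed) that recovers the same eigenvalues and eigenvectors, up to scalar multiples and normalization:

import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)    # expected: [1. 3.]
print(eigenvectors)   # columns are unit eigenvectors: multiples of [1, 0]^T and [1, 1]^T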

In the general “k-dimensional vector space” case, the recipe for finding the eigenvectors and eigenvalues is the same.
1. Find the matrix A for the linear transformation.
2. Form the matrix A - \lambda I which is the same as matrix A except that you have subtracted \lambda from each diagonal entry.
3. Note that det(A - \lambda I) is a polynomial in variable \lambda ; find its roots \lambda_1, \lambda_2, ...\lambda_n . These will be the eigenvalues.
4. Start with \lambda = \lambda_1 . Substitute this into the matrix-vector equation (A - \lambda_1 I) \vec{v_1} = \vec{0} and solve for \vec{v_1} . That will be the eigenvector associated with the first eigenvalue. Do this for each eigenvalue, one at a time. Note: you can get up to k “linearly independent” eigenvectors in this manner; that will be all of them. (A short symbolic sketch of this recipe appears below.)
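Here is a sketch of that recipe carried out symbolically (sympy assumed; the 3 by 3 matrix is a hypothetical example, not from the post):

import sympy as sp

lam = sp.symbols('lambda')
A = sp.Matrix([[2, 1, 0],
               [0, 2, 0],
               [0, 0, 3]])
I3 = sp.eye(3)

char_poly = (A - lam * I3).det()          # steps 2 and 3: det(A - lambda I)
eigenvalues = sp.solve(char_poly, lam)    # roots of the characteristic polynomial
print(eigenvalues)                        # expected: [2, 3]

for ev in eigenvalues:
    # step 4: solve (A - lambda I) v = 0 for each eigenvalue
    basis = (A - ev * I3).nullspace()
    print(ev, [list(v) for v in basis])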

Practical note
Yes, this should work “in theory” but practically speaking, there are many challenges. For one: for equations of degree 5 or higher, it is known that there is no general formula (in radicals) that will find the roots for every equation of that degree (Galois proved this; this is a good reason to take an abstract algebra course!). Hence one must use a numerical method of some sort. Also, calculation of the determinant involves many round-off error-inducing calculations; hence sometimes one must use sophisticated numerical techniques to get the eigenvalues (a good reason to take a numerical analysis course!)

Consider a calculus/differential equation related case of eigenvectors (eigenfunctions) and eigenvalues.
Our vectors will be, say, infinitely differentiable functions and our scalars will be real numbers. We will define the operator (linear transformation) D^n = \frac{d^n}{dx^n} , that is, the process that takes the n’th derivative of a function. You learned that the derivative of a sum is the sum of the derivatives and that you can pull out a constant when you differentiate. Hence D^n is a linear operator (transformation); we use the term “operator” when we talk about the vector space of functions, but it is really just a type of linear transformation.

We can also use these operators to form new operators; that is, (D^2 + 3D)(y) = D^2(y) + 3D(y) = \frac{d^2y}{dx^2} + 3\frac{dy}{dx} . We see that such a “linear combination” of linear operators is again a linear operator.

So, what does it mean to find eigenvectors and eigenvalues of such beasts?

Suppose we wish to find the eigenvectors and eigenvalues of (D^2 + 3D) . An eigenvector is a twice differentiable function y (ok, we said “infinitely differentiable”) such that (D^2 + 3D)y = \lambda y or \frac{d^2y}{dx^2} + 3\frac{dy}{dx} = \lambda y which means \frac{d^2y}{dx^2} + 3\frac{dy}{dx} - \lambda y = 0 . You might recognize this from your differential equations class; the only “tweak” is that we don’t know what \lambda is. But if you had a differential equations class, you’d recognize that the solution to this differential equation depends on the roots of the characteristic equation m^2 + 3m - \lambda = 0 which has solutions: m = -\frac{3}{2} \pm \frac{\sqrt{9+4\lambda}}{2} and the solution takes the form e^{m_1 x}, e^{m_2 x} if the roots are real and distinct, e^{ax}sin(bx), e^{ax}cos(bx) if the roots are complex conjugates a \pm bi and e^{mx}, xe^{mx} if there is a real, repeated root. In any event, those functions are the eigenfunctions and these very much depend on the eigenvalues.
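A short symbolic check of this, sketched with sympy (an assumed tool, not used in the original note): for y = e^{mx} , applying D^2 + 3D returns (m^2 + 3m)y , so the eigenvalue is exactly m^2 + 3m .

import sympy as sp

x, m = sp.symbols('x m')
y = sp.exp(m * x)
applied = sp.diff(y, x, 2) + 3 * sp.diff(y, x)   # (D^2 + 3D) applied to y
print(sp.simplify(applied / y))                  # expected: m**2 + 3*m (equivalently m*(m + 3))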

Of course, reading this little note won’t make you an expert, but it should get you started on studying.

I’ll close with a link on how these eigenfunctions and eigenvalues are calculated (in the context of solving a partial differential equation).

August 19, 2011

Partial Differential Equations, Differential Equations and the Eigenvalue/Eigenfunction problem

Suppose we are trying to solve the following partial differential equation:
\frac{\partial \psi}{\partial t} = 3 \frac{\partial ^2 \psi}{\partial x^2} subject to boundary conditions:
\psi(0,t) = \psi(\pi,t) = 0, \psi(x,0) = x(x-\pi)

It turns out that we will be using techniques from ordinary differential equations and concepts from linear algebra; these might be confusing at first.

The first thing to note is that this differential equation (the so-called heat equation) is known to satisfy a “uniqueness property” in that if one obtains a solution that meets the boundary criteria, the solution is unique. Hence we can attempt to find a solution in any way we choose; if we find it, we don’t have to wonder if there is another one lurking out there.

So one technique that is often useful is to try: let \psi = XT where X is a function of x alone and T is a function of t alone. Then when we substitute into the partial differential equation we obtain:
XT^{\prime} = 3X^{\prime\prime}T which leads to \frac{T^{\prime}}{T} = 3\frac{X^{\prime\prime}}{X}

The next step is to note that the left hand side does NOT depend on x ; it is a function of t alone. The right hand side does not depend on t as it is a function of x alone. But the two sides are equal; hence neither side can depend on x or t ; they must be constant.

Hence we have \frac{T^{\prime}}{T} = 3\frac{X^{\prime\prime}}{X} = \lambda

So far, so good. But then you are told that \lambda is an eigenvalue. What is that about?

The thing to notice is that T^{\prime} - \lambda T = 0 and X^{\prime\prime} - \frac{\lambda}{3}X = 0
First, the equation in T can be written as D(T) = \lambda T with the operator D denoting the first derivative. Then the second can be written as D^2(X) = \frac{\lambda}{3} X where D^2 denotes the second derivative operator. Recall from linear algebra that these operators meet the requirements for a linear transformation if the vector space is the set of all functions that are “differentiable enough”. So what we are doing, in effect, is trying to find eigenvectors for these operators.

So in this sense, solving a homogeneous differential equation is really solving an eigenvector problem; often this is termed the “eigenfunction” problem.

Note that the differential equations are not difficult to solve:
T = a exp(\lambda t) and X  = b exp(\sqrt{\frac{\lambda}{3}} x) + c exp(-\sqrt{\frac{\lambda}{3}} x) ; the real valued form of the equation in x depends on whether \lambda is positive, zero or negative.

But the point is that we are merely solving a constant coefficient differential equation just as we did in our elementary differential equations course with one important difference: we don’t know what the constant (the eigenvalue) is.

Now if we turn to the boundary conditions on x we see that a solution of the form A e^{bx} + Be^{-bx} cannot meet the zero boundary conditions; we can rule out the \lambda = 0 case as well.
Hence we know that \lambda is negative and we get the solution X = a cos(\sqrt{-\frac{\lambda}{3}} x) + b sin(\sqrt{-\frac{\lambda}{3}} x) and then the solution T = d e^{\lambda t } .

But now we notice that these solutions have a \lambda in them; this is what makes these ordinary differential equations into an “eigenvalue/eigenfunction” problem.

So what values of \lambda will work? We know it is negative so we say \lambda = -w^2 . If we look at the end conditions and note that T is never zero, we see that the cosine term must vanish (a = 0 ) and we need \frac{w}{\sqrt{3}}\pi = k \pi which implies that w = k\sqrt{3} , so \lambda = -w^2 = -3k^2 and the sine term is sin(kx) . So we get a whole host of functions: \psi_k = a_k e^{-3k^2 t}sin(kx) .

Now we still need to meet the last condition (set at t = 0 ) and that is where Fourier analysis comes in. Because the equation was linear, we can add the solutions and get another solution; hence the coefficients are obtained by taking the Fourier expansion of the function x(x-\pi) in terms of sines.

The coefficients are b_k = \frac{2}{\pi} \int^{\pi}_{0} x(x-\pi) sin(kx) dx and the solution is:
\psi(x,t) =   \sum_{k=1}^{\infty}  e^{-3k^2 t} b_k sin(kx)
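Here is a brief computational sketch (sympy and numpy assumed; not part of the original post) that computes the sine coefficients b_k for x(x-\pi) and checks that a short partial sum matches the initial condition reasonably well:

import numpy as np
import sympy as sp

x, k = sp.symbols('x k', positive=True)
f = x * (x - sp.pi)

# half-range sine coefficients b_k = (2/pi) * integral_0^pi f(x) sin(kx) dx
b = 2 / sp.pi * sp.integrate(f * sp.sin(k * x), (x, 0, sp.pi))
print([sp.simplify(b.subs(k, n)) for n in range(1, 6)])
# expected: [-8/pi, 0, -8/(27*pi), 0, -8/(125*pi)], i.e. b_k = -8/(pi k^3) for odd k

# check the initial condition psi(x, 0) = x(x - pi) against a short partial sum
xs = np.linspace(0.0, np.pi, 7)
partial = sum(float(b.subs(k, n)) * np.sin(n * xs) for n in range(1, 25))
print(np.max(np.abs(partial - xs * (xs - np.pi))))   # should be small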

Quantum Mechanics and Undergraduate Mathematics XV: sample problem for stationary states

I feel a bit guilty as I haven’t gone over an example of how one might work out a problem. So here goes:

Suppose our potential function is some sort of energy well: V(x) = 0 for 0 < x < 1 and V(x) = \infty elsewhere.
Note: I am too lazy to keep writing \hbar so I am going with h for now.

So, we have the two Schrödinger equations with \psi being the state vector and \eta_k being one of the stationary states:
-\frac{h^2}{2m} \frac{\partial^2}{\partial x^2}\eta_k + V(x) \eta_k = ih\frac{\partial}{\partial t} \eta_k
-\frac{h^2}{2m} \frac{\partial^2}{\partial x^2}\eta_k + V(x) \eta_k = e_k \eta_k

Where e_k are the eigenvalues for \eta_k

Now apply the potential for 0 < x < 1 and the equations become:
-\frac{h^2}{2m} \frac{\partial^2}{\partial x^2}\eta_k  = ih\frac{\partial}{\partial t} \eta_k
-\frac{h^2}{2m} \frac{\partial^2}{\partial x^2}\eta_k  = e_k \eta_k

Yes, I know that equation II is a consequence of equation I.

Now we use a fact from partial differential equations: the first equation is really a form of the “diffusion” or “heat” equation; it has been shown that once one takes boundary conditions into account, the equation possesses a unique solution. Hence if we find a solution by any means necessary, we don’t have to worry about other solutions being out there.

So attempt a solution of the form \eta_k = X_k T_k where the first factor is a function of x alone and the second is of t alone.
Now put into the second equation:

-\frac{h^2}{2m} X^{\prime\prime}_kT_k  = e_k X_kT_k

Now assume T \ne 0 and divide both sides by T and do a little algebra to obtain:
X^{\prime\prime}_k +\frac{2m e_k}{h^2}X_k = 0
e_k are the eigenvalues for the stationary states; assume that these are positive and we obtain:
X = a_k cos(\frac{\sqrt{2m e_k}}{h} x) + b_k sin(\frac{\sqrt{2m e_k}}{h} x)
from our knowledge of elementary differential equations.
Now for x = 0 we have X_k(0) = a_k . Our particle is confined to the well, so the wave function must vanish at the walls; hence a_k = 0 . Now X(x) = b_k sin(\frac{\sqrt{2m e_k}}{h} x)
We want zero at x = 1 so \frac{\sqrt{2m e_k}}{h} = k\pi which means e_k = \frac{(k \pi h)^2}{2m} .

Now let’s look at the first Schrödinger equation:
-\frac{h^2}{2m}X_k^{\prime\prime} T_k = ihT_k^{\prime}X_k
This gives the equation: \frac{X_k^{\prime\prime}}{X_k} = -\frac{ 2m i}{h} \frac{T_k^{\prime}}{T_k}
Note: in partial differential equations, it is customary to note that the left side of the equation is a function of x alone and therefore independent of t and that the right hand side is a function of t alone and therefore independent of x ; since these sides are equal they must be independent of both t and x and therefore constant. But in our case, we already know that \frac{X_k^{\prime\prime}}{X_k} = -2m\frac{e_k}{h^2} . So our equation involving T becomes \frac{T_k^{\prime}}{T_k} = -2m\frac{e_k}{h^2} i \frac{h}{2m} = -i\frac{e_k}{h} so our differential equation becomes
T_k^{\prime} = -i \frac{e_k}{h} T_k which has the solution T_k = c_k exp(-i \frac{e_k}{h} t)

So our solution is \eta_k = d_k sin(\frac{\sqrt{2m e_k}}{h} x) exp(-i \frac{e_k}{h} t) where e_k = \frac{(k \pi h)^2}{2m} .

This becomes \eta_k = d_k sin(k\pi x) exp(-i (k \pi)^2 \frac{\hbar}{2m} t) which, written in rectangular complex coordinates, is d_k sin(k\pi x) (cos((k \pi)^2 \frac{\hbar}{2m} t) - i sin((k \pi)^2 \frac{\hbar}{2m} t))

Here are some graphs: we use m = \frac{\hbar}{2} and plot for k = 1, k = 3 and t \in {0, .1, .2, .5} . The plot is of the real part of the stationary state vector.
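Since the images themselves are not reproduced here, the following is a minimal plotting sketch (numpy and matplotlib assumed, with d_k = 1 ) of the real part d_k sin(k\pi x)cos((k\pi)^2 \frac{\hbar}{2m} t) ; with m = \frac{\hbar}{2} the angular factor reduces to (k\pi)^2 .

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0.0, 1.0, 400)
for k in (1, 3):
    for t in (0.0, 0.1, 0.2, 0.5):
        # real part of the stationary state with d_k = 1 and m = hbar/2
        real_part = np.sin(k * np.pi * x) * np.cos((k * np.pi) ** 2 * t)
        plt.plot(x, real_part, label=f"k={k}, t={t}")
plt.xlabel("x")
plt.title("Real part of the stationary states in the infinite well")
plt.legend()
plt.show()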

August 17, 2011

Quantum Mechanics and Undergraduate Mathematics XIV: bras, kets and all that (Dirac notation)

Filed under: advanced mathematics, applied mathematics, linear algebra, physics, quantum mechanics, science — collegemathteaching @ 11:29 pm

Up to now, I’ve used mathematical notation for state vectors, inner products and operators. However, physicists use something called “Dirac” notation (“bras” and “kets”) which we will now discuss.

Recall: our vectors are square integrable functions \psi: R^1 \rightarrow C^1 , that is, functions for which \int^{\infty}_{-\infty} \overline{\psi} \psi dx converges.

Our inner product is: \langle \phi, \psi \rangle = \int^{\infty}_{-\infty} \overline{\phi} \psi dx

Here is the Dirac notation version of this:
A “ket” can be thought of as the vector \langle \cdot , \psi \rangle . Of course, there is an easy vector space isomorphism (Hilbert space isomorphism really) between the vector space of state vectors and kets given by \Theta_k \psi = \langle \cdot ,\psi \rangle . The kets are denoted by |\psi \rangle .
Similarly there are the “bra” vectors which are “dual” to the “kets”; these are denoted by \langle \phi | and the vector space isomorphism is given by \Theta_b \psi = \langle \overline{\psi} | . I chose this isomorphism because in the bra vector space, a \langle \alpha | =  \langle \overline{a} \alpha | . Then there is a vector space isomorphism between the bras and the kets given by \langle \psi | \rightarrow |\overline{\psi} \rangle .

Now \langle \psi | \phi \rangle is the inner product; that is \langle \psi | \phi \rangle = \int^{\infty}_{-\infty} \overline{\psi}\phi dx

By convention: if A is a linear operator, \langle \psi |A = \langle A(\psi)| and A |\psi \rangle = |A(\psi) \rangle . Now if A is a Hermitian operator (the ones that correspond to observables are), then there is no ambiguity in writing \langle \psi | A | \phi \rangle .

This leads to the following: let A be an operator corresponding to an observable with eigenvectors \alpha_i and eigenvalues a_i . Let \psi be a state vector.
Then \psi = \sum_i \langle \alpha_i|\psi \rangle \alpha_i and if Y is a random variable corresponding to the observed value of A , then P(Y = a_k) = |\langle \alpha_k | \psi \rangle |^2 and the expectation E(A) = \langle \psi | A | \psi \rangle .
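A finite dimensional sketch of that last paragraph (numpy assumed; the Hermitian matrix and the state are hypothetical): expand \psi in the eigenbasis of A , read off the probabilities |\langle \alpha_k | \psi \rangle |^2 , and check that E(A) = \langle \psi | A | \psi \rangle agrees with the probability-weighted eigenvalues.

import numpy as np

A = np.array([[2.0, 1.0j],
              [-1.0j, 3.0]])                    # a hypothetical Hermitian observable
eigenvalues, eigenvectors = np.linalg.eigh(A)   # columns alpha_k form an orthonormal basis

psi = np.array([1.0, 1.0j]) / np.sqrt(2.0)      # a normalized state vector

amplitudes = eigenvectors.conj().T @ psi        # <alpha_k | psi>
probabilities = np.abs(amplitudes) ** 2
print(probabilities, probabilities.sum())       # the probabilities sum to 1

expectation = np.vdot(psi, A @ psi).real        # <psi | A | psi>
print(expectation, float(np.sum(probabilities * eigenvalues)))   # these agree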
