College Math Teaching

August 9, 2011

Quantum Mechanics and Undergraduate Mathematics IX: Time evolution of an Observable Density Function

We’ll assume a state function \psi and an observable whose Hermitian operator is denoted by A , with eigenvectors \alpha_k and eigenvalues a_k . If we make an observation (say, at time t = 0 ) we obtain the probability density function P(Y = a_k) = |\langle \alpha_k, \psi \rangle |^2 (we assume that there is only one eigenvector per eigenvalue).
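(For readers who like to compute: here is a minimal numerical sketch of this setup. The finite-dimensional model, the use of NumPy, and all of the variable names are my own illustration rather than anything in the text: a random Hermitian matrix stands in for A , its eigenvectors for the \alpha_k , and the squared moduli of the inner products give the density function.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Hermitian "observable" on C^4 standing in for the operator A.
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (M + M.conj().T) / 2

# eigh returns real eigenvalues a_k and orthonormal eigenvectors alpha_k
# (the columns of alpha).
a, alpha = np.linalg.eigh(A)

# A normalized state vector psi.
psi = rng.standard_normal(4) + 1j * rng.standard_normal(4)
psi /= np.linalg.norm(psi)

# p[k] = |<alpha_k, psi>|^2, the probability of observing the value a_k.
p = np.abs(alpha.conj().T @ psi) ** 2
print(p, p.sum())  # nonnegative entries that sum to 1
```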

We saw how the expectation (the expected value of the associated density function) changes with time. What about the time evolution of the density function itself?

Since the quantities \langle \alpha_k, \psi \rangle completely determine the density function, and because \psi can be expanded as \psi = \sum_k \langle \alpha_k, \psi \rangle \alpha_k , it makes sense to determine \frac{d}{dt} \langle \alpha_k, \psi \rangle . Note that the eigenvectors \alpha_k and eigenvalues a_k do not change with time and can therefore be regarded as constants.

\frac{d}{dt} \langle \alpha_k, \psi \rangle =   \langle \alpha_k, \frac{\partial}{\partial t}\psi \rangle = \langle \alpha_k, \frac{-i}{\hbar}H\psi \rangle = \frac{-i}{\hbar}\langle \alpha_k, H\psi \rangle

We can take this further: we now write H\psi = H\sum_j \langle \alpha_j, \psi \rangle \alpha_j = \sum_j \langle \alpha_j, \psi \rangle H \alpha_j We now substitute into the previous equation to obtain:
\frac{d}{dt} \langle \alpha_k, \psi \rangle = \frac{-i}{\hbar}\langle \alpha_k, \sum_j \langle \alpha_j, \psi \rangle H \alpha_j   \rangle = \frac{-i}{\hbar}\sum_j \langle \alpha_k, H\alpha_j \rangle \langle \alpha_j, \psi \rangle

Denote \langle \alpha_j, \psi \rangle by c_j (we avoid the letter a , which is already taken by the eigenvalues). Then we have the infinite system of coupled differential equations \frac{d}{dt} c_k = \frac{-i}{\hbar} \sum_j c_j \langle \alpha_k, H\alpha_j \rangle . That is, the rate of change of any one c_k depends on all of the c_j , which really isn’t a surprise.
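One can check this numerically by truncating to finitely many coefficients. The following sketch (my own, with \hbar set to 1 and SciPy’s solve_ivp as an assumed tool) integrates the coupled system for a random 4 \times 4 Hermitian matrix standing in for H :

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(1)
hbar = 1.0

# Toy Hermitian Hamiltonian; its (k, j) entry plays the role of
# <alpha_k, H alpha_j> in the alpha basis.
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (M + M.conj().T) / 2

# Initial coefficients c_k(0) = <alpha_k, psi_0>, normalized.
c0 = rng.standard_normal(4) + 1j * rng.standard_normal(4)
c0 /= np.linalg.norm(c0)

# dc_k/dt = (-i/hbar) * sum_j H_kj c_j -- the coupled system above.
def rhs(t, c):
    return (-1j / hbar) * (H @ c)

sol = solve_ivp(rhs, (0.0, 5.0), c0, rtol=1e-10, atol=1e-12)

# sum_j |c_j|^2 stays at 1 for every t, as the next paragraph shows it must.
print(np.abs(np.linalg.norm(sol.y, axis=0) - 1.0).max())
```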

We can see this another way: because we have a density function, \sum_j |\langle \alpha_j, \psi \rangle |^2 = 1 . Now rewrite: \sum_j |\langle \alpha_j, \psi \rangle |^2 = \sum_j \langle \alpha_j, \psi \rangle \overline{\langle \alpha_j, \psi \rangle } = \sum_j c_j \overline{c_j} = 1 . Now differentiate with respect to t and use the product rule: \sum_j \left( \frac{d c_j}{dt} \overline{c_j} + c_j \frac{d \overline{c_j}}{dt} \right) = 0

Things get a bit easier if the original operator A is compatible with the Hamiltonian H ; in this case the operators share common eigenvectors, which we denote by \eta_k . Then
\frac{d}{dt} c_k = \frac{-i}{\hbar} \sum_j c_j \langle \alpha_k, H\alpha_j \rangle becomes:
\frac{d}{dt} \langle \eta_k, \psi \rangle = \frac{-i}{\hbar} \sum_j \langle \eta_j, \psi \rangle \langle \eta_k, H\eta_j \rangle . Now use the fact that the \eta_j are eigenvectors of H and are orthonormal to each other to obtain:
\frac{d}{dt} \langle \eta_k, \psi \rangle = \frac{-i}{\hbar} e_k \langle \eta_k, \psi \rangle , where e_k is the eigenvalue of H associated with \eta_k .

Now we solve this differential equation (invoking the usual existence and uniqueness theorems) to obtain:
\langle \eta_k, \psi \rangle = \langle \eta_k, \psi_0 \rangle exp(-ie_k \frac{t}{\hbar}) , where \psi_0 is the initial state vector (before it has had time to evolve).

This has two immediate consequences:

1. \psi(x,t) = \sum_j \langle \eta_j, \psi_0 \rangle exp(-ie_j \frac{t}{\hbar}) \eta_j
This is the general solution to the time-evolution equation. The reader might be reminded of Euler’s formula: exp(ib) = cos(b) + i sin(b)

2. Returning to the probability distribution: P(Y = e_k) = |\langle \eta_k, \psi \rangle |^2 = |\langle \eta_k, \psi_0 \rangle |^2 |exp(-ie_k \frac{t}{\hbar})|^2 = |\langle \eta_k, \psi_0 \rangle |^2 . But since A is compatible with H , we have the same eigenvectors, hence we see that the probability density function does not change AT ALL. So such an observable really is a “constant of motion”.
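Both consequences are easy to check in a finite-dimensional toy model (again my own sketch, not part of the text): diagonalize a random Hermitian matrix standing in for H , build the general solution from its eigendecomposition, and watch the density function sit still.

```python
import numpy as np

rng = np.random.default_rng(1)
hbar = 1.0

# A toy Hamiltonian, as in the earlier sketch.
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (M + M.conj().T) / 2
e, eta = np.linalg.eigh(H)   # energies e_j, eigenvectors eta_j (columns)

psi0 = rng.standard_normal(4) + 1j * rng.standard_normal(4)
psi0 /= np.linalg.norm(psi0)

# Consequence 1: psi(t) = sum_j <eta_j, psi0> exp(-i e_j t / hbar) eta_j
def psi(t):
    return eta @ ((eta.conj().T @ psi0) * np.exp(-1j * e * t / hbar))

# Consequence 2: |<eta_k, psi(t)>|^2 is the same at every time.
print(np.abs(eta.conj().T @ psi(0.0)) ** 2)
print(np.abs(eta.conj().T @ psi(3.7)) ** 2)
```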

Stationary States
Since H is an observable, we can always write \psi(x,t) = \sum_j \langle \eta_j, \psi(x,t) \rangle \eta_j . Then, by the general solution above, \psi(x,t) = \sum_j \langle \eta_j, \psi_0 \rangle exp(-ie_j \frac{t}{\hbar}) \eta_j

Now suppose \psi_0 is precisely one of the eigenvectors for the Hamiltonian; say \psi_0 = \eta_k for some k . Then:

1. \psi(x,t) = exp(-ie_k \frac{t}{\hbar}) \eta_k
2. For any t \geq 0 , P(Y = e_k) = 1, P(Y \neq  e_k) = 0

Note: no other operator has made an appearance.
Now recall our first postulate: states are determined only up to scalar multiples of unit modulus. Hence the state undergoes NO time evolution, no matter what observable is being observed.

We can see this directly: let A be an operator corresponding to any observable, with eigenvectors \alpha_k . Then \langle \alpha_k, A \psi \rangle = \langle \alpha_k, A exp(-i e_k \frac{t}{\hbar})\eta_k \rangle = exp(-i e_k \frac{t}{\hbar})\langle \alpha_k, A \eta_k \rangle . Then because the probability distribution is completely determined by the eigenvalues e_k and |\langle \alpha_k, A \eta_k \rangle | , and because |exp(-i e_k \frac{t}{\hbar})| = 1 , the distribution does NOT change with time. This motivates us to define the stationary states of a system: \psi_{(k)} = exp(-i e_k \frac{t}{\hbar})\eta_k .
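A quick numerical illustration of this point (mine, not the text’s): take a stationary state for a toy Hamiltonian, pick a second, generally incompatible, Hermitian matrix as the observable A , and check that the outcome distribution is the same at different times.

```python
import numpy as np

rng = np.random.default_rng(2)
hbar = 1.0

# A toy Hamiltonian H and a second, generally incompatible, observable A.
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (M + M.conj().T) / 2
e, eta = np.linalg.eigh(H)

N = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (N + N.conj().T) / 2
a, alpha = np.linalg.eigh(A)

# Stationary state: psi_0 = eta_k, which evolves only by a unit-modulus phase.
k = 1
def psi(t):
    return np.exp(-1j * e[k] * t / hbar) * eta[:, k]

# The distribution of A-outcomes, |<alpha_j, psi(t)>|^2, is time-independent.
print(np.abs(alpha.conj().T @ psi(0.0)) ** 2)
print(np.abs(alpha.conj().T @ psi(2.5)) ** 2)
```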

Gillespie notes that much of the problem solving in quantum mechanics amounts to solving the eigenvalue problem H \eta_k = e_k \eta_k , which is often difficult to do. But if one can do that, one can determine the stationary states of the system.
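As a concrete instance of that eigenvalue problem (my own sketch, not an example from Gillespie): discretize the particle-in-a-box Hamiltonian -\frac{\hbar^2}{2m} \frac{d^2}{dx^2} by finite differences and hand it to NumPy; the computed e_k and \eta_k are approximations to the energies and stationary states.

```python
import numpy as np

# Finite-difference Hamiltonian for a particle in a box [0, 1],
# with hbar = m = 1, on an n-point interior grid.
n, L = 200, 1.0
dx = L / (n + 1)
main = np.full(n, 1.0 / dx**2)          # from -(1/2) * (-2/dx^2)
off = np.full(n - 1, -0.5 / dx**2)      # from -(1/2) * (1/dx^2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

e, eta = np.linalg.eigh(H)  # energies e_k and stationary states eta_k

# Exact box energies are (k pi)^2 / 2; the lowest few should match well.
exact = (np.arange(1, 6) * np.pi) ** 2 / 2
print(e[:5])
print(exact)
```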
