We’ll assume a state function $\Psi(t)$ and an observable whose Hermitian operator is denoted by $\hat{A}$, with eigenvectors $\alpha_i$ and eigenvalues $a_i$. If we take an observation (say, at time $t$) we obtain the probability density function $P(A = a_i \mid t) = |\langle \alpha_i | \Psi(t) \rangle|^2$ (we make the assumption that there is only one eigenvector per eigenvalue).
We saw how the expectation (the expected value of the associated density function) changes with time. What about the time evolution of the density function itself?
Since $\Psi(t)$ completely determines the density function, and because $\Psi(t)$ can be expanded in the eigenbasis, it makes sense to determine $\frac{d}{dt} \langle \alpha_i | \Psi(t) \rangle$. Note that the eigenvectors and eigenvalues do not change with time and therefore can be regarded as constants. Using the Schrödinger equation $\frac{d}{dt} \Psi(t) = \frac{-i}{\hbar} \hat{H} \Psi(t)$ we obtain $\frac{d}{dt} \langle \alpha_i | \Psi(t) \rangle = \langle \alpha_i | \frac{d}{dt} \Psi(t) \rangle = \frac{-i}{\hbar} \langle \alpha_i | \hat{H} \Psi(t) \rangle$.
We can take this further: we now write $\Psi(t) = \sum_j \langle \alpha_j | \Psi(t) \rangle \alpha_j$. We now substitute into the previous equation to obtain: $\frac{d}{dt} \langle \alpha_i | \Psi(t) \rangle = \frac{-i}{\hbar} \sum_j \langle \alpha_j | \Psi(t) \rangle \langle \alpha_i | \hat{H} \alpha_j \rangle$.
Denote $\langle \alpha_i | \hat{H} \alpha_j \rangle$ by $H_{ij}$. Then we see that we have the infinite set of coupled differential equations: $\frac{d}{dt} \langle \alpha_i | \Psi(t) \rangle = \frac{-i}{\hbar} \sum_j H_{ij} \langle \alpha_j | \Psi(t) \rangle$. That is, the rate of change of one of the $\langle \alpha_i | \Psi(t) \rangle$ depends on all of the $\langle \alpha_j | \Psi(t) \rangle$, which really isn’t a surprise.
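The coupling is easy to see in a finite truncation. The sketch below (a hypothetical three-level system with a made-up Hermitian matrix for $H_{ij}$, in units where $\hbar = 1$) computes the right-hand side of the coupled system and shows that coefficients which are currently zero still have nonzero rates of change:

```python
import numpy as np

hbar = 1.0  # units chosen so that hbar = 1 (an assumption for this sketch)

# Hypothetical finite truncation: H_ij = <alpha_i | H alpha_j> as a Hermitian matrix
H = np.array([[1.0, 0.5, 0.2],
              [0.5, 2.0, 0.3],
              [0.2, 0.3, 3.0]], dtype=complex)

# c_i(0) = <alpha_i | Psi(0)>: start concentrated entirely on the first eigenvector
c = np.array([1.0, 0.0, 0.0], dtype=complex)

# Right-hand side of the coupled system: dc_i/dt = (-i/hbar) sum_j H_ij c_j
dcdt = (-1j / hbar) * (H @ c)

# Even though c_2(0) = c_3(0) = 0, their rates of change are nonzero:
# the off-diagonal H_ij couple every coefficient to every other one.
print(dcdt)
```

The off-diagonal entries of $H_{ij}$ are exactly what feeds each coefficient's rate of change from the others.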
We can see this another way: because we have a density function, $\sum_i |\langle \alpha_i | \Psi(t) \rangle|^2 = 1$. Now rewrite: $\sum_i \overline{\langle \alpha_i | \Psi(t) \rangle} \, \langle \alpha_i | \Psi(t) \rangle = 1$. Now differentiate with respect to $t$ and use the product rule: $\sum_i \left( \frac{d}{dt}\overline{\langle \alpha_i | \Psi(t) \rangle} \, \langle \alpha_i | \Psi(t) \rangle + \overline{\langle \alpha_i | \Psi(t) \rangle} \, \frac{d}{dt}\langle \alpha_i | \Psi(t) \rangle \right) = 0$.
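That the product-rule sum really vanishes follows from $\hat{H}$ being Hermitian, and it can be checked numerically. A small sketch (hypothetical Hermitian matrix and normalized coefficient vector, units with $\hbar = 1$):

```python
import numpy as np

hbar = 1.0  # units with hbar = 1 (assumption for the sketch)

# A hypothetical Hermitian matrix H_ij and a normalized coefficient vector c_i
H = np.array([[1.0, 0.5 - 0.5j],
              [0.5 + 0.5j, 2.0]])
c = np.array([0.6, 0.8j])

dcdt = (-1j / hbar) * (H @ c)  # dc_i/dt from the coupled equations

# Product rule: d/dt sum_i |c_i|^2 = sum_i ( conj(dc_i/dt) c_i + conj(c_i) dc_i/dt )
deriv = np.sum(np.conj(dcdt) * c + np.conj(c) * dcdt)

print(deriv)  # zero (up to rounding): total probability is conserved
```

The key step is that $\overline{c}^{\,T} H c$ is real for Hermitian $H$, so multiplying by $-i/\hbar$ and taking twice the real part gives zero.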
Things get a bit easier if the original operator $\hat{A}$ is compatible with the Hamiltonian $\hat{H}$; in this case the operators share common eigenvectors. We denote the eigenvectors for $\hat{H}$ by $\epsilon_i$ and then $\frac{d}{dt} \langle \epsilon_i | \Psi(t) \rangle = \frac{-i}{\hbar} \sum_j \langle \epsilon_j | \Psi(t) \rangle \langle \epsilon_i | \hat{H} \epsilon_j \rangle$.
Now use the fact that the $\epsilon_j$ are eigenvectors for $\hat{H}$ and are orthogonal to each other to obtain: $\frac{d}{dt} \langle \epsilon_i | \Psi(t) \rangle = \frac{-i}{\hbar} e_i \langle \epsilon_i | \Psi(t) \rangle$
where $e_i$ is the eigenvalue for $\hat{H}$ associated with $\epsilon_i$.
Now we use differential equations (along with existence and uniqueness conditions) to obtain: $\langle \epsilon_i | \Psi(t) \rangle = \langle \epsilon_i | \Psi(0) \rangle e^{\frac{-i e_i t}{\hbar}}$
where $\Psi(0)$ is the initial state vector (before it had time to evolve).
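In matrix terms this solution says: expand the initial state in the energy eigenbasis, attach the phase $e^{-i e_i t/\hbar}$ to each coefficient, and reassemble. A sketch with a hypothetical two-level Hamiltonian (units with $\hbar = 1$), cross-checked against direct evolution by the propagator $e^{-i\hat{H}t/\hbar}$:

```python
import numpy as np

hbar = 1.0  # units with hbar = 1 (assumption for the sketch)
t = 0.7     # an arbitrary evolution time

H = np.array([[1.0, 0.5],
              [0.5, 2.0]], dtype=complex)   # hypothetical Hamiltonian matrix

e, V = np.linalg.eigh(H)      # H eps_i = e_i eps_i; the columns of V are the eps_i

psi0 = np.array([1.0, 0.0], dtype=complex)  # initial state Psi(0)
c0 = V.conj().T @ psi0        # c_i(0) = <eps_i | Psi(0)>

# The solved differential equation: c_i(t) = c_i(0) * exp(-i e_i t / hbar)
c_t = c0 * np.exp(-1j * e * t / hbar)
psi_t = V @ c_t               # Psi(t) = sum_i c_i(t) eps_i

# Cross-check against the propagator U = exp(-i H t / hbar)
U = V @ np.diag(np.exp(-1j * e * t / hbar)) @ V.conj().T
print(np.allclose(psi_t, U @ psi0))
```

Note that the norm of $\Psi(t)$ stays at $1$, consistent with the conservation argument above.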
This has two immediate consequences:
1. $\Psi(t) = \sum_j \langle \epsilon_j | \Psi(0) \rangle e^{\frac{-i e_j t}{\hbar}} \epsilon_j$. That is the general solution to the time-evolution equation. The reader might be reminded that $|e^{\frac{-i e_j t}{\hbar}}| = 1$ for all $t$.
2. Returning to the probability distribution: $P(A = a_i \mid t) = |\langle \epsilon_i | \Psi(t) \rangle|^2 = |e^{\frac{-i e_i t}{\hbar}} \langle \epsilon_i | \Psi(0) \rangle|^2 = |\langle \epsilon_i | \Psi(0) \rangle|^2$. But since $\hat{A}$ is compatible with $\hat{H}$, we have the same eigenvectors, hence we see that the probability density function does not change AT ALL. So such an observable really is a “constant of motion”.
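Both consequences can be seen numerically at once. The sketch below (hypothetical two-level Hamiltonian, units with $\hbar = 1$) evolves an arbitrary state in the shared eigenbasis and confirms that the outcome probabilities are the same at every time:

```python
import numpy as np

hbar = 1.0  # units with hbar = 1 (assumption for the sketch)
H = np.array([[1.0, 0.5],
              [0.5, 2.0]], dtype=complex)   # hypothetical Hamiltonian
e, V = np.linalg.eigh(H)                    # shared eigenvectors eps_i

psi0 = np.array([0.6, 0.8], dtype=complex)  # an arbitrary normalized state
c0 = V.conj().T @ psi0                      # c_i(0) = <eps_i | Psi(0)>

# |<eps_i|Psi(t)>|^2 = |exp(-i e_i t/hbar)|^2 |<eps_i|Psi(0)>|^2 = |<eps_i|Psi(0)>|^2
probs = [np.abs(c0 * np.exp(-1j * e * t / hbar)) ** 2 for t in (0.0, 1.0, 5.0)]

# The distribution is identical at every sampled time: a "constant of motion"
print(np.allclose(probs[0], probs[1]) and np.allclose(probs[0], probs[2]))
```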
Since $\hat{H}$ is an observable, we can always write $\Psi(t) = \sum_j \langle \epsilon_j | \Psi(t) \rangle \epsilon_j$. Then we have $\Psi(t) = \sum_j \langle \epsilon_j | \Psi(0) \rangle e^{\frac{-i e_j t}{\hbar}} \epsilon_j$.
Now suppose $\Psi(0)$ is precisely one of the eigenvectors for the Hamiltonian; say $\Psi(0) = \epsilon_k$ for some $k$. Then:

1. $\langle \epsilon_k | \Psi(0) \rangle = \langle \epsilon_k | \epsilon_k \rangle = 1$
2. For any $j \neq k$, $\langle \epsilon_j | \Psi(0) \rangle = \langle \epsilon_j | \epsilon_k \rangle = 0$.

Hence $\Psi(t) = e^{\frac{-i e_k t}{\hbar}} \epsilon_k$.
Note: no other operator has made an appearance.
Now recall our first postulate: states are determined only up to scalar multiples of unity modulus. Hence the state $e^{\frac{-i e_k t}{\hbar}} \epsilon_k$ undergoes NO time evolution, no matter what observable is being observed.
We can see this directly: let $\hat{A}$ be an operator corresponding to any observable, with eigenvectors $\alpha_i$ and eigenvalues $a_i$. Then $\langle \alpha_i | \Psi(t) \rangle = e^{\frac{-i e_k t}{\hbar}} \langle \alpha_i | \epsilon_k \rangle$. Then because the probability distribution is completely determined by the eigenvalues $a_i$ and $|\langle \alpha_i | \Psi(t) \rangle|^2 = |e^{\frac{-i e_k t}{\hbar}}|^2 |\langle \alpha_i | \epsilon_k \rangle|^2 = |\langle \alpha_i | \epsilon_k \rangle|^2$, the distribution does NOT change with time. This motivates us to define the stationary states of a system: $\Psi_k(t) = e^{\frac{-i e_k t}{\hbar}} \epsilon_k$.
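A numerical check of both claims (hypothetical two-level Hamiltonian, units with $\hbar = 1$): evolving an energy eigenvector only multiplies it by a phase, and the outcome probabilities for an observable $\hat{A}$ that does not even commute with $\hat{H}$ are still time-independent.

```python
import numpy as np

hbar = 1.0  # units with hbar = 1 (assumption for the sketch)
H = np.array([[1.0, 0.5],
              [0.5, 2.0]], dtype=complex)   # hypothetical Hamiltonian
e, V = np.linalg.eigh(H)

k = 0
psi0 = V[:, k]                # Psi(0) = eps_k, an energy eigenvector
t = 1.3
U = V @ np.diag(np.exp(-1j * e * t / hbar)) @ V.conj().T
psi_t = U @ psi0              # evolve by the propagator exp(-i H t / hbar)

# Psi(t) = exp(-i e_k t / hbar) eps_k: only the overall phase changes
phase_only = np.allclose(psi_t, np.exp(-1j * e[k] * t / hbar) * psi0)

# An arbitrary observable A (this one does NOT commute with H)
A = np.array([[0.0, 1.0],
              [1.0, 0.0]], dtype=complex)
a_vals, a_vecs = np.linalg.eigh(A)
p0 = np.abs(a_vecs.conj().T @ psi0) ** 2   # |<alpha_i | Psi(0)>|^2
pt = np.abs(a_vecs.conj().T @ psi_t) ** 2  # |<alpha_i | Psi(t)>|^2

print(phase_only, np.allclose(p0, pt))
```

The phase factor drops out of every $|\langle \alpha_i | \Psi(t) \rangle|^2$, which is exactly why the distribution is frozen.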
Gillespie notes that much of the problem solving in quantum mechanics amounts to solving the eigenvalue problem $\hat{H} \epsilon_k = e_k \epsilon_k$, which is often difficult to do. But if one can do that, one can determine the stationary states of the system.
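In a finite-dimensional truncation, this eigenvalue problem is exactly what a numerical eigensolver computes; the columns returned are the stationary states $\epsilon_k$ and the eigenvalues are the energies $e_k$. A minimal sketch with a made-up matrix:

```python
import numpy as np

# Hypothetical finite Hamiltonian matrix; np.linalg.eigh solves H eps_k = e_k eps_k
H = np.array([[1.0, 0.5, 0.0],
              [0.5, 2.0, 0.5],
              [0.0, 0.5, 3.0]])
e, V = np.linalg.eigh(H)      # energies e_k and eigenvectors (columns of V)

# Verify each column really is an eigenvector with its energy as eigenvalue
residuals = [np.linalg.norm(H @ V[:, k] - e[k] * V[:, k]) for k in range(len(e))]
print(np.round(e, 4), max(residuals) < 1e-10)
```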