

7.1 The Schrö­din­ger Equation

In Newtonian mechanics, Newton's second law states that the linear momentum changes in time proportional to the applied force; ${\rm d}(m\vec{v})/{\rm d}t = m\vec{a} = \vec{F}$. The equivalent in quantum mechanics is the Schrödinger equation, which describes how the wave function evolves. This section discusses this equation, and a few of its immediate consequences.


7.1.1 The equation

The Schrö­din­ger equation says that the time derivative of the wave function is obtained by applying the Hamiltonian on it. More precisely:

\begin{displaymath}
\fbox{$\displaystyle
{\rm i}\hbar \frac{\partial \Psi}{\partial t} = H \Psi
$} %
\end{displaymath} (7.1)

An equivalent and earlier formulation of quantum mechanics was given by Heisenberg, {A.12}. However, the Schrö­din­ger equation tends to be easier to deal with, especially in nonrelativistic applications. An integral version of the Schrö­din­ger equation that is sometimes convenient is in {A.13}.

The Schrödinger equation is nonrelativistic. The simplest relativistic version is called the Klein-Gordon equation. A discussion is in addendum {A.14}. However, relativity introduces a fundamentally new issue: following Einstein's mass-energy equivalence, particles may be created out of pure energy or destroyed. To deal with that, you typically need a formulation of quantum mechanics called quantum field theory. A very brief introduction is in addendum {A.15}.


Key Points
\begin{itemize}

\item
The Schrödinger equation describes the time evolution of the wave function.

\item
The time derivative is proportional to the Hamiltonian.

\end{itemize}


7.1.2 Solution of the equation

The solution to the Schrö­din­ger equation can immediately be given for most cases of interest. The only condition that needs to be satisfied is that the Hamiltonian depends only on the state the system is in, and not explicitly on time. This condition is satisfied in all cases discussed so far, including the particle in a box, the harmonic oscillator, the hydrogen and heavier atoms, and the molecules, so the following solution applies to them all:

To satisfy the Schrö­din­ger equation, write the wave function $\Psi$ in terms of whatever are the energy eigenfunctions $\psi_{\vec n}$ of the Hamiltonian,

\begin{displaymath}
\Psi
= c_{{\vec n}_1}(t) \psi_{{\vec n}_1}
+ c_{{\vec n}_2}(t) \psi_{{\vec n}_2}
+ \ldots
= \sum_{\vec n} c_{\vec n}(t) \psi_{\vec n}
\end{displaymath} (7.2)

Then the coefficients $c_{\vec n}$ must evolve in time as complex exponentials:

\begin{displaymath}
\fbox{$\displaystyle
c_{\vec n}(t) = c_{\vec n}(0) e^{-{\rm i}E_{\vec n}t /\hbar}
$} %
\end{displaymath} (7.3)

for every combination of quantum numbers ${\vec n}$.
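To see where these exponentials come from, substitute the expansion (7.2) into the Schrödinger equation (7.1). Since the energy eigenfunctions satisfy $H\psi_{\vec n}=E_{\vec n}\psi_{\vec n}$ and do not themselves depend on time,

\begin{displaymath}
{\rm i}\hbar \frac{\partial \Psi}{\partial t}
= \sum_{\vec n} {\rm i}\hbar \frac{{\rm d}c_{\vec n}}{{\rm d}t}\, \psi_{\vec n}
\qquad\qquad
H \Psi
= \sum_{\vec n} c_{\vec n} E_{\vec n}\, \psi_{\vec n}
\end{displaymath}

Because the eigenfunctions are linearly independent, the two sides can only be equal if ${\rm i}\hbar\,{\rm d}c_{\vec n}/{\rm d}t = E_{\vec n} c_{\vec n}$ for each ${\vec n}$ separately, and the exponential (7.3) is the solution of that ordinary differential equation.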

In short, you get the wave function for arbitrary times by taking the initial wave function and shoving in additional factors $e^{-{{\rm i}}E_{{\vec n}}t/\hbar}$. The initial values $c_{\vec n}(0)$ of the coefficients are not determined from the Schrö­din­ger equation, but from whatever initial condition for the wave function is given. As always, the appropriate set of quantum numbers ${\vec n}$ depends on the problem.
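The recipe can be checked numerically. The sketch below is not from the text; it assumes natural units with $\hbar=1$ and uses a small random Hermitian matrix as a stand-in Hamiltonian. It builds $\Psi(t)$ from (7.2) and (7.3) and verifies by finite differences that the result satisfies the Schrödinger equation (7.1):

```python
import numpy as np

hbar = 1.0                                   # natural units, an assumption here

# A small random Hermitian matrix stands in for the Hamiltonian.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (A + A.conj().T) / 2

E, V = np.linalg.eigh(H)                     # energies E_n, eigenvectors psi_n (columns)

Psi0 = np.array([1.0, 0, 0, 0], complex)     # some initial wave function
c0 = V.conj().T @ Psi0                       # c_n(0) = <psi_n|Psi(0)>

def Psi(t):
    # Psi(t) = sum_n c_n(0) exp(-i E_n t / hbar) psi_n, per (7.2) and (7.3)
    return V @ (c0 * np.exp(-1j * E * t / hbar))

# Check i hbar dPsi/dt = H Psi at an arbitrary time, by central differences.
t, dt = 2.7, 1e-7
lhs = 1j * hbar * (Psi(t + dt) - Psi(t - dt)) / (2 * dt)
print(np.allclose(lhs, H @ Psi(t), atol=1e-6))   # True
```

Note that nothing here is specific to four states; the same construction works for any time-independent Hamiltonian once its eigenfunctions are known.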

Consider how this works out for the electron in the hydrogen atom. Here each spatial energy state $\psi_{nlm}$ is characterized by the three quantum numbers $n$, $l$, $m$, chapter 4.3. However, there is a spin-up version $\psi_{nlm}{\uparrow}$ of each state in which the electron has spin magnetic quantum number $m_s=\frac12$, and a spin-down version $\psi_{nlm}{\downarrow}$ in which $m_s=-\frac12$, chapter 5.5.1. So the states are characterized by the set of four quantum numbers

\begin{displaymath}
{\vec n}\equiv (n,l,m,m_s)
\end{displaymath}

The most general wave function for the hydrogen atom is then:

\begin{eqnarray*}
\lefteqn{\Psi(r,\theta,\phi,S_z,t) =} \\
&&
\sum_{n=1}^{\infty}\sum_{l=0}^{n-1}\sum_{m=-l}^{l}
c_{nlm\uparrow}(0)\, e^{-{\rm i}E_n t/\hbar}\, \psi_{nlm}(r,\theta,\phi){\uparrow}
+ c_{nlm\downarrow}(0)\,
e^{-{\rm i}E_n t/\hbar}\, \psi_{nlm}(r,\theta,\phi){\downarrow}
\end{eqnarray*}

Note that each eigenfunction has been given its own coefficient that depends exponentially on time. (The summation limits come from chapter 4.3.)

The given solution in terms of eigenfunctions covers most cases of interest, but as noted, it is not valid if the Hamiltonian depends explicitly on time. That possibility arises when there are external influences on the system; in such cases the energy does not just depend on what state the system itself is in, but also on what the external influences are like at the time.


Key Points
\begin{itemize}

\item
Normally, the coefficients of the energy eigenfunctions must be proportional to $e^{-{\rm i}E_{\vec n}t/\hbar}$.

\end{itemize}

7.1.2 Review Questions
1.

The energy of a photon is $\hbar\omega$ where $\omega$ is the classical frequency of the electromagnetic field produced by the photon. So what is $e^{-{{\rm i}}E_{{\vec n}}t/\hbar}$ for a photon? Are you surprised by the result?

Solution schrodsol-a

2.

For the one-di­men­sion­al harmonic oscillator, the energy eigenvalues are

\begin{displaymath}
E_n = \frac{2n+1}{2} \hbar \omega
\end{displaymath}

Write out the coefficients $c_n(0)e^{-{{\rm i}}E_nt/\hbar}$ for those energies.

Now classically, the harmonic oscillator has a natural frequency $\omega$. That means that whenever ${\omega}t$ is a whole multiple of $2\pi$, the harmonic oscillator is again in the same state as it started out with. Show that the coefficients of the energy eigenfunctions have a natural frequency of $\frac 12\omega$; $\frac 12{\omega}t$ must be a whole multiple of $2\pi$ for the coefficients to return to their original values.

Solution schrodsol-b
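The doubled period asked about above is easy to check numerically. This sketch assumes natural units with $\hbar=\omega=1$ and an arbitrary set of normalized coefficients; neither choice comes from the text:

```python
import numpy as np

hbar = omega = 1.0                       # natural units, an assumption of this sketch
n = np.arange(6)
E = (2 * n + 1) / 2 * hbar * omega       # harmonic oscillator energy eigenvalues

c0 = np.full(6, 1 / np.sqrt(6))          # some normalized initial coefficients

def c(t):
    # c_n(t) = c_n(0) exp(-i E_n t / hbar), equation (7.3)
    return c0 * np.exp(-1j * E * t / hbar)

T = 2 * np.pi / omega                    # the classical period
print(np.allclose(c(T), -c0))            # after one classical period: signs flipped
print(np.allclose(c(2 * T), c0))         # after two periods: truly back to the start
```

After one classical period every coefficient has picked up a factor $e^{-{\rm i}\pi(2n+1)}=-1$; only after two classical periods, ${\omega}t=4\pi$, do the coefficients return to their original values.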

3.

Write the full wave function for a one-di­men­sion­al harmonic oscillator. Formulae are in chapter 4.1.2.

Solution schrodsol-c


7.1.3 Energy conservation

The Schrö­din­ger equation implies that the energy of a system is conserved, assuming that there are no external influences on the system.

To see why, consider the general form of the wave function:

\begin{displaymath}
\Psi = \sum_{\vec n}c_{\vec n}(t) \psi_{\vec n}
\qquad
c_{\vec n}(t) = c_{\vec n}(0) e^{-{\rm i}E_{\vec n}t /\hbar}
\end{displaymath}

According to chapter 3.4, the square magnitudes $\vert c_{\vec n}\vert^2$ of the coefficients of the energy eigenfunctions give the probability for the corresponding energy. While the coefficients vary with time, their square magnitudes do not:

\begin{displaymath}
\vert c_{\vec n}(t)\vert^2 \equiv c_{\vec n}^*(t)c_{\vec n}(t)
= c_{\vec n}^*(0) e^{{\rm i}E_{\vec n}t /\hbar}\,
c_{\vec n}(0) e^{-{\rm i}E_{\vec n}t /\hbar}
= \vert c_{\vec n}(0)\vert^2
\end{displaymath}

So the probability of measuring a given energy level does not vary with time either. That means that energy is conserved.
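As a minimal numerical illustration of this, take arbitrary made-up energies and coefficients (and $\hbar=1$; none of these numbers come from the text) and watch the probabilities stay put:

```python
import numpy as np

hbar = 1.0                                   # natural units, an assumption here
E = np.array([1.0, 2.5, 4.0])                # arbitrary energy eigenvalues
c0 = np.array([0.6, 0.0, 0.8j])              # arbitrary coefficients, sum |c|^2 = 1

for t in (0.0, 1.3, 17.0):
    c = c0 * np.exp(-1j * E * t / hbar)      # equation (7.3)
    print(np.abs(c) ** 2)                    # the same probabilities at every time
```

The coefficients themselves spin around in the complex plane, but their magnitudes, and so the energy probabilities, never change.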

For example, a wave function for a hydrogen atom at the excited energy level $E_2$ might be of the form:

\begin{displaymath}
\Psi = e^{-{\rm i}E_2 t/\hbar} \psi_{210}{\uparrow}
\end{displaymath}

(This corresponds to an assumed initial condition in which all coefficients $c_{nlmm_s}$ are zero except $c_{210\frac12}=1$.) The square magnitude of the exponential is one, so the energy of this excited atom will stay $E_2$ with 100% certainty for all time. The energy of the atom is conserved.

This is an important example, because it also illustrates that an excited atom will stay excited for all time if left alone. That is an apparent contradiction because, as discussed in chapter 4.3, the above excited atom will eventually emit a photon and transition back to the ground state. Even if you put it in a sealed box whose interior is at absolute zero temperature, it will still decay.

The explanation for this apparent contradiction is that an atom is never truly left alone. Simply put, even at absolute zero temperature, quantum uncertainty in energy allows an electromagnetic photon to pop up that perturbs the atom and causes the decay. (To describe more precisely what happens is a major objective of this chapter.)

Returning to the unperturbed atom, you may wonder what happens to energy conservation if there is uncertainty in energy. In that case, what does not change with time are the probabilities of measuring the possible energy levels. As an arbitrary example, the following wave function describes a case of an unperturbed hydrogen atom whose energy has a 50/50 chance of being measured as either $E_1$ ($-$13.6 eV) or $E_2$ ($-$3.4 eV):

\begin{displaymath}
\Psi =
{\displaystyle\frac{1}{\sqrt2}} e^{-{\rm i}E_1 t/\hbar} \psi_{100}{\uparrow}
+ {\displaystyle\frac{1}{\sqrt2}} e^{-{\rm i}E_2 t/\hbar} \psi_{210}{\uparrow}
\end{displaymath}

The 50/50 probability applies regardless of how long the wait is before the measurement is done.

You can also turn the observations of this subsection around. If an external effect changes the energy of a system, then clearly the probabilities of the individual energies must change. So the coefficients of the energy eigenfunctions can then no longer simply vary exponentially with time as they do for the unperturbed systems discussed above.


Key Points
\begin{itemize}

\item
Energy conservation is a fundamental consequence of the Schrödinger equation.

\item
An isolated system that has a given energy retains that energy.

\item
Even if there is uncertainty in the energy of an isolated system, the probabilities of the various energies still do not change with time.

\end{itemize}


7.1.4 Stationary states

The quest for the dynamical implications of the Schrö­din­ger equation must start with the simplest case. That is the case in which there is only a single energy eigenfunction involved. Then the wave function is of the form

\begin{displaymath}
\Psi = c_{\vec n}(0) e^{-{\rm i}E_{\vec n}t /\hbar} \psi_{\vec n}
\end{displaymath}

Such states are called stationary states. Systems in their ground state are of this type.

To see why these states are called stationary, note first of all that the energy of the state is $E_{\vec n}$ for all time, with no uncertainty.

But energy is not the only thing that does not change in time. According to the Born interpretation, chapter 3.1, the square magnitude of the wave function of a particle gives the probability of finding the particle at that position and time. Now the square magnitude of the wave function above is

\begin{displaymath}
\vert\Psi\vert^2 = \vert\psi_{\vec n}\vert^2
\end{displaymath}

Time has dropped out in the square magnitude; the probability of finding the particle is the same for all time.

For example, consider the case of the particle in a pipe of chapter 3.5. If the particle is in the ground state, its wave function is of the form

\begin{displaymath}
\Psi=c_{111}(0)e^{-{{\rm i}}E_{111}t/\hbar}\psi_{111}
\end{displaymath}

The precise form of the function $\psi_{111}$ is not of particular interest here, but it can be found in chapter 3.5.

The relative probability for where the particle may be found can be shown as grey tones:

Figure 7.1: The ground state wave function looks the same at all times.
\begin{figure}
\centering
{}%
\epsffile{pipet1.eps}
\end{figure}

The bottom line is that this picture is the same for all time.

If the wave function is purely the first excited state $\psi_{211}$, the corresponding picture looks for all time like:

Figure 7.2: The first excited state at all times.
\begin{figure}
\centering
{}%
\epsffile{pipet2.eps}
\end{figure}

And it is not just position that does not change. Neither do linear or angular momentum, kinetic energy, etcetera. That can be easily checked. The probability for a specific value of any physical quantity is given by

\begin{displaymath}
\vert\langle\alpha\vert\Psi\rangle\vert^2
\end{displaymath}

where $\alpha$ is the eigenfunction corresponding to the value. (If there is more than one eigenfunction with that value, sum their contributions.) The exponential drops out in the square magnitude. So the probability does not depend on time.

And if probabilities do not change, then neither do expectation values, uncertainties, etcetera. No physically meaningful quantity changes with time.

Hence it is not really surprising that none of the energy eigenfunctions derived so far had any resemblance to the classical Newtonian picture of a particle moving around. Each energy eigenfunction by itself is a stationary state. There is no change in the probability of finding the particle regardless of the time that you look. So how could it possibly resemble a classical particle that is at different positions at different times?

To get time variations of physical quantities, states of different energy must be combined. In other words, there must be uncertainty in energy.
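This contrast can be made concrete for the particle in a pipe. The sketch below (assumed units $\hbar=m=L=1$, and a simple grid quadrature; these choices are not from the text) computes the expectation position $\langle x\rangle$ for a single eigenstate and for a 50/50 mix of the lowest two energies:

```python
import numpy as np

# Particle in a one-dimensional box of length L, with the sinusoidal
# eigenfunctions of chapter 3.5; hbar = m = L = 1 are assumed units.
hbar = m = L = 1.0
x = np.linspace(0.0, L, 2001)
dx = x[1] - x[0]

def psi(n):                              # sqrt(2/L) sin(n pi x / L)
    return np.sqrt(2 / L) * np.sin(n * np.pi * x / L)

def E(n):                                # n^2 pi^2 hbar^2 / (2 m L^2)
    return n**2 * np.pi**2 * hbar**2 / (2 * m * L**2)

def x_expect(t, coeffs):
    # <x> for Psi = sum_n c_n exp(-i E_n t / hbar) psi_n
    Psi = sum(cn * np.exp(-1j * E(n) * t / hbar) * psi(n)
              for n, cn in coeffs.items())
    return np.sum(x * np.abs(Psi) ** 2) * dx

# A single energy eigenstate is stationary: <x> stays put at the center.
print([round(x_expect(t, {1: 1.0}), 3) for t in (0.0, 0.3, 0.9)])

# A 50/50 mix of two energies is not: <x> sloshes back and forth in time.
mix = {1: 1 / np.sqrt(2), 2: 1 / np.sqrt(2)}
print([round(x_expect(t, mix), 3) for t in (0.0, 0.3, 0.9)])
```

For the pure eigenstate the time exponential drops out of $\vert\Psi\vert^2$ and $\langle x\rangle$ never moves; for the mix, the cross term between the two energies oscillates at angular frequency $(E_2-E_1)/\hbar$, and the particle genuinely sloshes back and forth.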


Key Points
\begin{itemize}

\item
States of definite energy are stationary states.

\item
To get nontrivial time variation of a system requires uncertainty in energy.

\end{itemize}


7.1.5 The adiabatic approximation

The previous subsections discussed the solution for systems in which the Hamiltonian does not explicitly depend on time. Typically that means isolated systems, unaffected by external effects, or systems for which the external effects are relatively simple. If the external effects produce a time-dependent Hamiltonian, things get much messier. You cannot simply make the coefficients of the eigenfunctions vary exponentially in time as done in the previous subsections.

However, dealing with systems with time-dependent Hamiltonians can still be relatively easy if the Hamiltonian varies sufficiently slowly in time. Such systems are quasi-steady ones.

So physicists cannot call these systems quasi-steady; that would give the secret away to nonspecialists and pesky students. Fortunately, physicists were able to find a much better name. They call these systems adiabatic. That works much better because the word adiabatic is a well-known term in thermodynamics: it indicates processes that evolve fast enough that heat conduction with the surroundings can be ignored. So, what better name to use also for quantum systems that evolve slowly enough that they stay in equilibrium with their surroundings? No one familiar with even the most basic thermodynamics will ever guess what it means.

As a simple example of an adiabatic system, assume that you have a particle in the ground state in a box. Now you change the volume of the box by a significant amount. The question is, will the particle still be in the ground state after the volume change? Normally there is no reason to assume so; after all, either way the energy of the particle will change significantly. However, the “adiabatic theorem” says that if the change is performed slowly enough, it will. The particle will indeed remain in the ground state, even though that state slowly changes into a completely different form.

If the system starts in an energy eigenstate other than the ground state, it will likewise stay in that state as it evolves during an adiabatic process. The theorem does assume that the energy is nondegenerate, so that the energy state is unambiguous. More sophisticated versions of the analysis exist to deal with degeneracy and continuous spectra.

A derivation of the theorem can be found in {D.34}. Some additional implications are in addendum {A.16}. The most important practical application of the adiabatic theorem is without doubt the Born-Oppenheimer approximation, which is discussed separately in chapter 9.2.
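The theorem can also be illustrated numerically on the simplest possible system: a two-state system whose Hamiltonian slowly rotates from one form into a completely different one. The toy Hamiltonian, the assumed units $\hbar=1$, and the step counts below are all choices made for this sketch, not something from the text:

```python
import numpy as np

hbar = 1.0                                     # natural units, an assumption here

def H(s):
    # A two-state Hamiltonian that rotates as the parameter s goes from 0 to 1;
    # its energy gap stays 2 throughout, so the ground state is never ambiguous.
    th = s * np.pi / 2
    return np.array([[np.cos(th), np.sin(th)],
                     [np.sin(th), -np.cos(th)]])

def evolve(T, steps=8000):
    # Integrate i hbar dPsi/dt = H(t/T) Psi over total time T, stepping with the
    # exact propagator of the (frozen) midpoint Hamiltonian on each small step.
    dt = T / steps
    _, V0 = np.linalg.eigh(H(0.0))
    psi = V0[:, 0].astype(complex)             # start in the initial ground state
    for k in range(steps):
        E, V = np.linalg.eigh(H((k + 0.5) * dt / T))
        psi = V @ (np.exp(-1j * E * dt / hbar) * (V.conj().T @ psi))
    _, V1 = np.linalg.eigh(H(1.0))
    return abs(V1[:, 0].conj() @ psi) ** 2     # probability of the final ground state

print(round(evolve(T=100.0), 3))   # slow change: stays in the ground state, near 1
print(round(evolve(T=0.5), 3))     # fast change: substantial leakage to the excited state
```

Taking the same change slowly or quickly makes all the difference: for large $T$ the final ground-state probability is essentially one, while for small $T$ a significant part of the wave function ends up in the excited state.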


Key Points
\begin{itemize}

\item
If the properties of a system in its ground state are changed, but slowly, the system will remain in the changing ground state.

\item
More generally, the adiabatic approximation can be used to analyze slowly changing systems.

\item
No, it has nothing to do with the normal use of the word adiabatic.

\end{itemize}