9.3 The Hartree-Fock Approximation

Many of the most important problems that you want to solve in quantum mechanics are about atoms and/or molecules. These problems involve a number of electrons around a number of atomic nuclei. Unfortunately, a full quantum solution of such a system of any nontrivial size is very difficult. However, approximations can be made, and as section 9.2 explained, the real skill you need to master is finding the wave function of the electrons given the positions of the nuclei.

But even given the positions of the nuclei, a brute-force solution for any nontrivial number of electrons turns out to be prohibitively laborious. The Hartree-Fock approximation is one of the most important ways to tackle that problem, and has been so since the early days of quantum mechanics. This section explains some of the ideas.


9.3.1 Wave function approximation

The key to the basic Hartree-Fock method is the assumptions it makes about the form of the electron wave function. It will be assumed that there are a total of $I$ electrons in orbit around a number of nuclei. The wave function describing the set of electrons then has the general form:

\begin{displaymath}
\Psi({\skew0\vec r}_1,S_{z1},{\skew0\vec r}_2,S_{z2},\ldots,{\skew0\vec r}_i,S_{zi},\ldots,
{\skew0\vec r}_I,S_{zI})
\end{displaymath}

where ${\skew0\vec r}_i$ is the position of electron number $i$, and $S_{zi}$ its spin in a chosen $z$-direction, with measurable values $\frac12\hbar$ and $-\frac12\hbar$. Of course, what answer you get for the wave function will also depend on where the nuclei are, but in this section, the nuclei are supposed to be at given positions, so to reduce the clutter, the dependence of the electron wave function on the nuclear positions will not be explicitly shown.

Hartree-Fock approximates the wave function in terms of a set of single-electron functions, each a product of a spatial function and a spin state:

\begin{displaymath}
\pe1/{\skew0\vec r}/b/z/,\; \pe2/{\skew0\vec r}/b/z/,\; \pe3/{\skew0\vec r}/b/z/,\; \ldots
\end{displaymath}

where ${\updownarrow}$ stands for either spin-up, ${\uparrow}$, or spin-down, ${\downarrow}$. (By definition, function ${\uparrow}(S_z)$ equals one if the spin $S_z$ is $\frac12\hbar$, and zero if it is $-\frac12\hbar$, while function ${\downarrow}(S_z)$ equals zero if $S_z$ is $\frac12\hbar$ and one if it is $-\frac12\hbar$.) These single-electron functions are called “orbitals” or “spin orbitals.” The reason is that you tend to think of them as describing a single electron being in orbit around the nuclei with a particular spin. Wrong, of course: the electrons do not have reasonably defined positions on these scales. But you do tend to think of them that way anyway.

The spin orbitals are taken to be an orthonormal set. Note that any two spin orbitals are automatically orthogonal if they have opposite spins: spin states are orthonormal, so $\langle{\uparrow}\vert{\downarrow}\rangle$ $=$ 0. If they have the same spin, their spatial orbitals will need to be orthogonal.
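Since the spin functions only take the two measurable values, the orthonormality claims above can be checked mechanically. A minimal Python sketch, with $S_z$ measured in units of $\hbar$ and two made-up spatial functions standing in for the spatial orbitals (they are assumptions for illustration, not solutions of anything):

```python
from math import exp

def up(sz):                       # up(S_z): one for spin up, zero for spin down
    return 1.0 if sz == +0.5 else 0.0

def down(sz):                     # down(S_z): the reverse
    return 1.0 if sz == -0.5 else 0.0

def spin_inner(f, g):
    # the inner product over spin is just a sum over the two measurable values
    return sum(f(sz) * g(sz) for sz in (+0.5, -0.5))

assert spin_inner(up, up) == 1.0 and spin_inner(down, down) == 1.0   # normalized
assert spin_inner(up, down) == 0.0                                   # orthogonal

# Two spin orbitals of opposite spin are orthogonal whatever their spatial
# parts: the (generally nonzero) spatial overlap gets multiplied by <up|down> = 0.
dx = 0.05
xs = [i * dx - 5.0 for i in range(201)]
phi1 = [exp(-x * x) for x in xs]            # hypothetical spatial orbitals
phi2 = [exp(-(x - 1.0) ** 2) for x in xs]
spatial_overlap = sum(a * b for a, b in zip(phi1, phi2)) * dx
assert spatial_overlap > 0.0
assert spatial_overlap * spin_inner(up, down) == 0.0
```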

Single-electron functions can be combined into multi-electron functions by forming products of them of the form

\begin{displaymath}
a_{n_1,n_2,\ldots,n_I}\;
\pe n_1/{\skew0\vec r}_1/b/z1/\;
\pe n_2/{\skew0\vec r}_2/b/z2/\;
\ldots\;
\pe n_I/{\skew0\vec r}_I/b/zI/
\end{displaymath}

where $n_1$ is the number of the single-electron function used for electron 1, $n_2$ the number of the single-electron function used for electron 2, and so on, and $a_{n_1,n_2,\ldots,n_I}$ is a suitable numerical constant. Such a product wave function is called a “Hartree product.”

Now if you use enough single-electron functions, with all their Hartree products, you can approximate any multi-electron wave function to arbitrarily high accuracy. Unfortunately, using many of them produces a problem much too big to be solved on even the most powerful computer. So you really want to use as few of them as possible. But you cannot use too few either; as chapter 5.7 explained, nature imposes an antisymmetrization requirement: the complete wave function that you write must change sign whenever any two electrons are exchanged, in other words when you replace ${\skew0\vec r}_i,S_{zi}$ by ${\skew0\vec r}_{\underline i},S_{z{\underline i}}$ and vice-versa for any pair of electrons numbered $i$ and ${\underline i}$. That is only possible if you use at least $I$ different single-electron functions for your $I$ electrons. This is known as the Pauli exclusion principle: any group of $I-1$ electrons occupying the minimum of $I-1$ single-electron functions excludes an additional $I$-th electron from simply entering the same functions. The $I$-th electron will have to find its own single-electron function to add to the mix.

The basic Hartree-Fock approximation uses the absolute minimum that is possible, just $I$ different single-electron functions for the $I$ electrons. In that case, the wave function $\Psi$ can be written as a single Slater determinant:

\begin{displaymath}
\frac{a}{\sqrt{I!}}
\left\vert
\begin{array}{cccc}
\pe 1/{\skew0\vec r}_1/b/z1/ & \pe 1/{\skew0\vec r}_2/b/z2/ & \cdots & \pe 1/{\skew0\vec r}_I/b/zI/ \\
\pe 2/{\skew0\vec r}_1/b/z1/ & \pe 2/{\skew0\vec r}_2/b/z2/ & \cdots & \pe 2/{\skew0\vec r}_I/b/zI/ \\
\vdots & \vdots & \ddots & \vdots \\
\pe I/{\skew0\vec r}_1/b/z1/ & \pe I/{\skew0\vec r}_2/b/z2/ & \cdots & \pe I/{\skew0\vec r}_I/b/zI/
\end{array}
\right\vert
\end{displaymath} (9.14)

where $a$ is a constant of magnitude one. As chapter 5.7 explained, a Slater determinant is really equivalent to a sum of $I!$ Hartree products, each with the single-electron functions in a different order. It is the one wave function obtainable from the $I$ single-electron functions that is antisymmetric with respect to exchanging any two of the $I$ electrons.
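Both properties, the sign flip under electron exchange and the equivalence between the determinant and the signed sum of $I!$ Hartree products, can be verified numerically. A small Python sketch for $I=3$, using three made-up spatial functions as stand-ins for the single-electron functions (the functions and positions are assumptions for illustration only):

```python
import numpy as np
from itertools import permutations
from math import prod

# three arbitrary single-electron functions (hypothetical, just for the check)
orbitals = [lambda x: np.exp(-x**2),
            lambda x: x * np.exp(-x**2),
            lambda x: (2*x**2 - 1) * np.exp(-x**2)]

def slater(xs):
    """Slater determinant value (without the a/sqrt(I!) factor); entry (n, i)
    is single-electron function n evaluated at electron position x_i."""
    I = len(xs)
    return np.linalg.det(np.array([[orbitals[n](xs[i]) for i in range(I)]
                                   for n in range(I)]))

def parity(p):
    inversions = sum(1 for a in range(len(p)) for b in range(a + 1, len(p))
                     if p[a] > p[b])
    return -1 if inversions % 2 else 1

xs = [0.3, -0.7, 1.2]
psi = slater(xs)

# antisymmetry: exchanging electrons 1 and 2 flips the sign
assert np.isclose(slater([-0.7, 0.3, 1.2]), -psi)

# the determinant equals the signed sum of the I! = 6 Hartree products
hartree_sum = sum(parity(p) * prod(orbitals[p[i]](xs[i]) for i in range(3))
                  for p in permutations(range(3)))
assert np.isclose(hartree_sum, psi)
```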

Displaying the Slater determinant fully as above may look impressive, but it is a lot to read. Therefore, from now on it will be abbreviated as

\begin{displaymath}
\Psi = \frac{a}{\sqrt{I!}}
\Big\vert{\rm det}(\pe 1//b//,\pe 2//b//,\ldots,\pe n//b//,
\ldots,\pe I//b//)\Big\rangle. %
\end{displaymath} (9.15)

It is important to realize that using the minimum number of single-electron functions will unavoidably produce an error that is mathematically speaking not small {N.16}. To get a vanishingly small error, you would need a large number of different Slater determinants, not just one. Still, the results you get with the basic Hartree-Fock approach may be good enough to satisfy your needs. Or you may be able to improve upon them enough with post-Hartree-Fock methods.

But none of that would be likely if you just selected the single-electron functions $\pe1//b//$, $\pe2//b//$, ... at random. The cleverness in the Hartree-Fock approach will be in writing down equations for these single-electron functions that produce the best approximation possible with a single Slater determinant.

This section will reserve the term orbitals specifically for the single-electron functions that provide the best single-determinant approximation. In those terms, if the Hartree-Fock orbitals provide the best single-determinant approximation, their results will certainly be better than the solutions that were written down for the atoms in chapter 5.9, because those were really single Slater determinants too. In fact, you could find much more accurate ways to average out the effects of the neighboring electrons than just putting them in the nucleus like the section on atoms essentially did. You could smear them out over some optimal area, say. But the solution you will get doing so will be no better than you could get using Hartree-Fock.

That assumes of course that the spins are taken the same way. Consider that problem for a second. Typically, a nonrelativistic approach is used, in which spin effects on the energy are ignored. Spin then really only directly affects the antisymmetrization requirements.

Things are straightforward if you try to solve, say, a helium atom. In the exact ground state, the two electrons are in the spatial wave function that has the absolutely lowest energy, regardless of any antisymmetrization concerns. This spatial wave function is symmetric under electron exchange since the two electrons are identical. The antisymmetrization requirement is met since the electrons assume the singlet configuration,

\begin{displaymath}
\frac{{\uparrow}(S_{z1}){\downarrow}(S_{z2})
-{\downarrow}(S_{z1}){\uparrow}(S_{z2})}{\sqrt2},
\end{displaymath}

for their combined spins.

The approximate Hartree-Fock wave function for helium you would correspondingly take to be

\begin{displaymath}
\frac{1}{\sqrt{2}} \Big\vert{\rm {det}}(\pe1//u//,\pe2//d//)\Big\rangle
\end{displaymath}

and then you would make things easier for yourself by postulating a priori that the spatial orbitals are the same, $\pe1////$ $=$ $\pe2////$. Lo and behold, when you multiply out the Slater determinant,

\begin{displaymath}
\frac{1}{\sqrt{2}}
\left\vert
\begin{array}{cc}
\pe1/{\skew0\vec r}_1/u/z1/ & \pe1/{\skew0\vec r}_2/u/z2/ \\
\pe1/{\skew0\vec r}_1/d/z1/ & \pe1/{\skew0\vec r}_2/d/z2/
\end{array}
\right\vert
= \pe1/{\skew0\vec r}_1///\pe1/{\skew0\vec r}_2///
\frac{{\uparrow}(S_{z1}){\downarrow}(S_{z2})
-{\downarrow}(S_{z1}){\uparrow}(S_{z2})}{\sqrt2},
\end{displaymath}

it automagically reproduces the correct singlet spin state. And you only need to find one spatial orbital instead of two.
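That multiplication is easy to confirm mechanically. A pure-Python sketch, with a made-up common spatial orbital $\phi$ (a hypothetical shape, not the actual helium solution) and $S_z$ in units of $\hbar$, checking that the $2\times2$ determinant with equal spatial orbitals factors into the spatial part times the singlet spin state:

```python
from math import sqrt, exp

def phi(r):                       # hypothetical common spatial orbital
    return exp(-abs(r))

def up(sz):                       # spin functions on the measurable values
    return 1.0 if sz == +0.5 else 0.0

def down(sz):
    return 1.0 if sz == -0.5 else 0.0

def psi_det(r1, sz1, r2, sz2):
    """(1/sqrt 2) times the 2x2 Slater determinant of phi*up and phi*down."""
    a, b = phi(r1) * up(sz1),   phi(r2) * up(sz2)
    c, d = phi(r1) * down(sz1), phi(r2) * down(sz2)
    return (a * d - b * c) / sqrt(2)

def psi_singlet(r1, sz1, r2, sz2):
    """Common spatial part times the singlet spin combination."""
    return phi(r1) * phi(r2) * (up(sz1) * down(sz2)
                                - down(sz1) * up(sz2)) / sqrt(2)

# the two forms agree for all four spin combinations
for sz1 in (+0.5, -0.5):
    for sz2 in (+0.5, -0.5):
        assert abs(psi_det(0.3, sz1, 1.1, sz2)
                   - psi_singlet(0.3, sz1, 1.1, sz2)) < 1e-12
```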

As discussed in chapter 5.9, a beryllium atom has two electrons with opposite spins in the 1s shell like helium, and two more in the 2s shell. An appropriate Hartree-Fock wave function would be $\big\vert{\rm {det}}(\pe1//u//,\pe1//d//,\pe3//u//,\pe3//d//)\big\rangle/\sqrt{4!}$, in other words, two pairs of orbitals with the same spatial states and opposite spins. Similarly, neon has an additional 6 paired electrons in a closed 2p shell, and you could use 3 more pairs of orbitals with the same spatial states and opposite spins. The number of spatial orbitals that must be found in such solutions is only half the number of electrons. This is called the closed shell “Restricted Hartree-Fock (RHF)” method. It restricts the form of the spatial states to be pair-wise equal.

But now look at lithium. Lithium has two paired 1s electrons like helium, and an unpaired 2s electron. For the third orbital in the Hartree-Fock determinant, you will now have to make a choice whether to take it of the form $\pe3//u//$ or $\pe3//d//$. Let's assume you take $\pe3//u//$, so the wave function is

\begin{displaymath}
\frac{1}{\sqrt{3!}}
\Big\vert{\rm {det}}(\pe1//u//,\pe2//d//,\pe3//u//)\Big\rangle
\end{displaymath}

You have introduced a bias in the determinant: there is now a real difference between $\pe1//u//$ and $\pe2//d//$, since $\pe1//u//$ has the same spin as the third spin orbital, while $\pe2//d//$ has the opposite spin.

If you find the best approximation among all possible orbitals $\pe1//u//$, $\pe2//d//$, and $\pe3//u//$, you will end up with spatial orbitals $\pe1////$ and $\pe2////$ that are not the same. Allowing them to be different is called the “Unrestricted Hartree-Fock (UHF)” method. In general, you no longer require that equivalent spatial orbitals are the same in their spin-up and spin-down versions. For a bigger system, you will end up with one set of orthonormal spatial orbitals for the spin-up orbitals and a different set of orthonormal spatial orbitals for the spin-down ones. These two sets of orthonormal spatial orbitals are not mutually orthogonal; the only reason the complete spin orbitals are still orthonormal is that the two spins are orthogonal, $\langle{\uparrow}\vert{\downarrow}\rangle$ $=$ 0.

If instead of using unrestricted Hartree-Fock, you insist on demanding that the spatial orbitals for spin up and down do form a single set of orthonormal functions, it is called open shell restricted Hartree-Fock. In the case of lithium, you would then demand that $\pe2////$ equals $\pe1////$. Since the best (in terms of energy) solution has them different, your solution is then no longer the best possible. You pay a price, but you now only need to find two spatial orbitals rather than three. The spin orbital $\pe3//u//$ without a matching opposite-spin orbital counts as an open shell. For nitrogen, you might want to use three open shells to represent the three different spatial states 2p$_x$, 2p$_y$, and 2p$_z$, each with an unpaired electron in it.

If you use unrestricted Hartree-Fock instead, you will need to compute more spatial functions, and you pay another price: spin. Since all spin effects in the Hamiltonian are ignored, the Hamiltonian commutes with the spin operators. So the exact energy eigenfunctions are also, or can be taken to be also, spin eigenfunctions. Restricted Hartree-Fock has the capability of producing approximate energy eigenstates with well defined spin. Indeed, as you saw for helium, in restricted Hartree-Fock all the paired spin-up and spin-down states combine into zero-spin singlet states. If any additional unpaired states are all spin up, say, you get an energy eigenstate with a net spin equal to the sum of the spins of the unpaired states.

But a true unrestricted Hartree-Fock solution does not have correct, definite, spin. For two electrons to produce states of definite combined spin, the coefficients of spin up and spin down must come in specific ratios. As a simple example, an unrestricted Slater determinant of $\pe1//u//$ and $\pe2//d//$ with unequal spatial orbitals multiplies out to

\begin{displaymath}
\frac{1}{\sqrt{2}}
\Big\vert{\rm {det}}(\pe1//u//,\pe2//d//)\Big\rangle
= \frac{\pe1/{\skew0\vec r}_1///\pe2/{\skew0\vec r}_2///\,{\uparrow}(S_{z1}){\downarrow}(S_{z2})
- \pe2/{\skew0\vec r}_1///\pe1/{\skew0\vec r}_2///\,{\downarrow}(S_{z1}){\uparrow}(S_{z2})}{\sqrt2}
\end{displaymath}

or, writing the spin combinations in terms of singlets and triplets,

\begin{eqnarray*}
& & \frac{\pe1/{\skew0\vec r}_1///\pe2/{\skew0\vec r}_2///+\pe2/{\skew0\vec r}_1///\pe1/{\skew0\vec r}_2///}{2}
\,\frac{{\uparrow}(S_{z1}){\downarrow}(S_{z2})
-{\downarrow}(S_{z1}){\uparrow}(S_{z2})}{\sqrt2}
\\
& & {} + \frac{\pe1/{\skew0\vec r}_1///\pe2/{\skew0\vec r}_2///-\pe2/{\skew0\vec r}_1///\pe1/{\skew0\vec r}_2///}{2}
\,\frac{{\uparrow}(S_{z1}){\downarrow}(S_{z2})
+{\downarrow}(S_{z1}){\uparrow}(S_{z2})}{\sqrt2}
\end{eqnarray*}

So, the spin will be some combination of zero spin (the singlet) and spin one (the triplet), and the combination will be different at different locations of the electrons to boot. However, it may be noted that unrestricted wave functions are commonly used as first approximations of doublet and triplet states anyway [45, p. 105].

To show that all this can make a real difference, take the example of the hydrogen molecule, chapter 5.2, when the two nuclei are far apart. The correct electronic ground state is

\begin{displaymath}
\frac{\psi_{\rm {l}}({\skew0\vec r}_1)\psi_{\rm {r}}({\skew0\vec r}_2)
+\psi_{\rm {r}}({\skew0\vec r}_1)\psi_{\rm {l}}({\skew0\vec r}_2)}{\sqrt2}
\,\frac{{\uparrow}(S_{z1}){\downarrow}(S_{z2})-{\downarrow}(S_{z1}){\uparrow}(S_{z2})}{\sqrt2}
\end{displaymath}

where $\psi_{\rm {l}}({\skew0\vec r}_1)\psi_{\rm {r}}({\skew0\vec r}_2)$ is the state in which electron 1 is around the left proton and electron 2 around the right one, and $\psi_{\rm {r}}({\skew0\vec r}_1)\psi_{\rm {l}}({\skew0\vec r}_2)$ is the same state but with the electrons reversed. Note that the spin state is a singlet one with zero net spin.

Now try to approximate it with a restricted closed shell Hartree-Fock wave function of the form $\big\vert{\rm {det}}(\pe1//u//,\pe1//d//)\big\rangle/\sqrt2$. The determinant multiplies out to

\begin{displaymath}
\pe1/{\skew0\vec r}_1///\pe1/{\skew0\vec r}_2///
\frac{{\uparrow}(S_{z1}){\downarrow}(S_{z2})
-{\downarrow}(S_{z1}){\uparrow}(S_{z2})}{\sqrt2}
\end{displaymath}

Now $\pe1////$ will be something like $(\psi_{\rm {l}}+\psi_{\rm {r}})/\sqrt2$; the energy of the electrons is lowest when they are near the nuclei. But if $\pe1/{\skew0\vec r}_1///$ is appreciable when electron 1 is near say the left nucleus, then $\pe1/{\skew0\vec r}_2///$ is also appreciable when electron 2 is near the same nucleus, since it is the exact same function. So there is a big chance of finding both electrons together near the same nucleus. That is all wrong, since the electrons repel each other: if one electron is around the left nucleus, the other should be around the right one. The computed energy, which should be that of two neutral hydrogen atoms far apart, will be much too high. Note however that you do get the correct spin state. Also, at the nuclear separation distance corresponding to the ground state of the complete molecule, the errors are much smaller [45, p. 166]. Only when you are “breaking the bond” (dissociating the molecule, i.e. taking the nuclei apart) do you get into major trouble.

If instead you would use unrestricted Hartree-Fock, $\big\vert{\rm {det}}(\pe1//u//,\pe2//d//)\big\rangle/\sqrt2$, you should find $\pe1////$ $=$ $\psi_{\rm {l}}$ and $\pe2////$ $=$ $\psi_{\rm {r}}$ (or vice versa), which would produce a wave function

\begin{displaymath}
\frac{
\psi_{\rm {l}}({\skew0\vec r}_1)\psi_{\rm {r}}({\skew0\vec r}_2){\uparrow}(S_{z1}){\downarrow}(S_{z2})
-\psi_{\rm {r}}({\skew0\vec r}_1)\psi_{\rm {l}}({\skew0\vec r}_2){\downarrow}(S_{z1}){\uparrow}(S_{z2})}{\sqrt{2}}.
\end{displaymath}

This would produce the correct energy, though the spin would now be all wrong. Little in life is ideal, is it?

All of the above may be much more than you ever wanted to hear about the wave function. The purpose was mainly to indicate that things are not as simple as you might initially suppose. As the examples showed, some understanding of the system that you are trying to model definitely helps. So does experimenting with different approaches.

Let’s go on to the next step: how to get the equations for the spatial orbitals $\pe1////,\pe2////,\ldots$ that give the most accurate approximation of a multi-electron problem. The expectation value of energy will be needed for that, and to get that, first the Hamiltonian is needed. That will be the subject of the next subsection.


9.3.2 The Hamiltonian

The nonrelativistic Hamiltonian of the system of $I$ electrons consists of a number of contributions. First there is the kinetic energy of the electrons; the sum of the kinetic energy operators of the individual electrons:

\begin{displaymath}
{\widehat T}^{\rm E}
= - \sum_{i=1}^I \frac{\hbar^2}{2m_{\rm e}}
\left(
\frac{\partial^2}{\partial x_i^2} +
\frac{\partial^2}{\partial y_i^2} +
\frac{\partial^2}{\partial z_i^2}
\right).
\end{displaymath} (9.16)

Next there is the potential energy due to the ambient electric field that the electrons move in. It will be assumed that this field is caused by $J$ nuclei, numbered using an index $j$, and having charge $Z_je$ (i.e. there are $Z_j$ protons in nucleus number $j$). In that case, the total potential energy due to nucleus-electron attractions is, summing over all electrons and over all nuclei:

\begin{displaymath}
V^{\rm NE}
= - \sum_{i=1}^I \sum_{j=1}^J
\frac{Z_j e^2}{4\pi\epsilon_0} \frac{1}{r_{ij}}
\end{displaymath} (9.17)

where $r_{ij}\equiv\vert{\skew0\vec r}_i-{\skew0\vec r}^{\,\rm {n}}_j\vert$ is the distance between electron number $i$ and nucleus number $j$, and $\epsilon_0$ $=$ 8.85 $\times$ 10$^{-12}$ C$^2$/J m is the permittivity of space.

And now for the black plague of quantum mechanics, the electron to electron repulsions. The potential energy for those repulsions is

\begin{displaymath}
V^{\rm EE}=
{\textstyle\frac{1}{2}} \sum_{i=1}^I
\sum_{\textstyle{{\underline i}=1\atop {\underline i}\ne i}}^I
\frac{e^2}{4\pi\epsilon_0} \frac{1}{r_{i{\underline i}}}
\end{displaymath} (9.18)

where $r_{i{\underline i}}\equiv\vert{\skew0\vec r}_i-{\skew0\vec r}_{\underline i}\vert$ is the distance between electron number $i$ and electron number ${\underline i}$. Half of this repulsion energy will be blamed on electron $i$ and half on electron ${\underline i}$, accounting for the factor $\frac12$.
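That factor $\frac12$ can be sanity-checked numerically: summing the repulsion over all ordered pairs $i\ne{\underline i}$ and halving gives the same total as counting each distinct pair once. A Python sketch with arbitrary made-up positions, dropping units so that $e^2/4\pi\epsilon_0$ is one:

```python
import numpy as np

rng = np.random.default_rng(0)
r = rng.normal(size=(5, 3))      # 5 arbitrary electron positions in 3-D
ke2 = 1.0                        # stands in for e^2 / (4 pi eps0)

# as in (9.18): sum over all ordered pairs i != ibar, then halve
full = 0.5 * sum(ke2 / np.linalg.norm(r[i] - r[j])
                 for i in range(5) for j in range(5) if j != i)

# equivalently: each distinct pair counted once
pairs = sum(ke2 / np.linalg.norm(r[i] - r[j])
            for i in range(5) for j in range(i + 1, 5))

assert np.isclose(full, pairs)
```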

Without this interaction between different electrons, you could solve for each electron separately, and all would be nice. But you do have it, and so you really need to solve for all electrons at once, usually an impossible task. You may recall that when chapter 5.9 examined the atoms heavier than hydrogen, those with more than one electron, the discussion cleverly threw out the electron to electron repulsion terms, by assuming that the effect of each neighboring electron is approximately like canceling out one proton in the nucleus. And you may also remember how this outrageous assumption led to all those wrong predictions that had to be corrected by various excuses. The Hartree-Fock approximation tries to do better than that.

It is helpful to split the Hamiltonian into the single electron terms and the troublesome interactions, as follows,

\begin{displaymath}
H = \sum_{i=1}^I h^{\rm e}_i
+ {\textstyle\frac{1}{2}} \sum_{i=1}^I
\sum_{\textstyle{{\underline i}=1\atop {\underline i}\ne i}}^I
v^{\rm ee}_{i{\underline i}} %
\end{displaymath} (9.19)

where $h^{\rm e}_i$ is the single-electron Hamiltonian of electron $i$,
\begin{displaymath}
h^{\rm e}_i =
- \frac{\hbar^2}{2m_{\rm e}}\nabla_i^2
- \sum_{j=1}^J \frac{Z_j e^2}{4\pi\epsilon_0}
\frac{1}{r_{ij}} %
\end{displaymath} (9.20)

and $v^{\rm ee}_{i{\underline i}}$ is the electron $i$ to electron ${\underline i}$ repulsion potential energy
\begin{displaymath}
v^{\rm ee}_{i{\underline i}} = \frac{e^2}{4\pi\epsilon_0}
\frac{1}{r_{i{\underline i}}}. %
\end{displaymath} (9.21)

Note that $h^{\rm e}_1$, $h^{\rm e}_2$, ..., $h^{\rm e}_I$ all take the same general form; the difference is just in which electron you are talking about. That is not surprising because the electrons all have the same properties. Similarly, the difference between $v^{\rm ee}_{12}$, $v^{\rm ee}_{13}$, ..., $v^{\rm ee}_{I(I-2)}$, $v^{\rm ee}_{I(I-1)}$ is just in which pair of electrons you talk about.


9.3.3 The expectation value of energy

As was discussed in more detail in section 9.1, to find the best possible Hartree-Fock approximation, the expectation value of energy will be needed. For example, the best approximation to the ground state is the one that has the smallest expectation value of energy.

The expectation value of energy is defined as $\langle{E}\rangle$ $=$ $\langle\Psi\vert H\Psi\rangle$. There is a problem with using this expression as stands, though. Look once again at the arsenic atom example. There are 33 electrons in it, so you could try to choose 33 promising single-electron functions to describe it. You could then try to multiply out the Slater determinant for $\Psi$, integrate the inner products of the individual terms on a computer, and add it all together. However, the inner product of each pair of terms involves an integration over 99 scalar coordinates. Taking 10 locations along each axis to perform the integration, you would need to compute 10$^{99}$ values for each pair of terms. And there are 33! terms in the Slater determinant, or (33!)$^{2}$ = 7.5 $\times$ 10$^{73}$ pairs of terms... A computer that could do that is unimaginable.
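The counts above are easy to reproduce; a quick Python check of the arithmetic:

```python
from math import factorial

terms = factorial(33)            # Hartree products in the Slater determinant
pairs = terms ** 2               # pairs of terms in <Psi|H Psi>
values_per_pair = 10 ** 99       # 10 grid points along each of 99 coordinates

assert f"{pairs:.1e}" == "7.5e+73"
# total integrand evaluations, ignoring any symmetry savings: hopeless
assert pairs * values_per_pair > 10 ** 170
```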

Fortunately, it turns out that almost all of those integrations are trivial since the single-electron functions are orthonormal. If you sit down and identify what is really left, you find that only a few three-dimensional and six-dimensional inner products survive the weeding-out process.

In particular, the single-electron Hamiltonians produce only single-electron energy expectation values of the general form

\begin{displaymath}
\fbox{$\displaystyle
E^{\rm e}_n \equiv \langle\pe n/{\skew0\vec r}///\vert h^{\rm e}\vert\pe n/{\skew0\vec r}///\rangle
$} %
\end{displaymath} (9.22)

You might think there should be an index $i$ on ${\skew0\vec r}_i$ and $h^{\rm e}_i$ to indicate which electron it is. But remember that an inner product is really an integral; this one is

\begin{displaymath}
\int_{{\rm all}\;{\skew0\vec r}_i} \pe n/{\skew0\vec r}_i///^*\,
h^{\rm e}_i\pe n/{\skew0\vec r}_i///{\,\rm d}^3{\skew0\vec r}_i,
\end{displaymath}

and that the name of the integration variable ${\skew0\vec r}_i$ does not make any difference: you get the exact same value for electron 1 as for electron $I$ or any other. So the value of $i$ does not make a difference, and it will just be left out.

If there was just one electron and it was in single-electron state $\pe{n}//b//$, $E^{\rm e}_n$ would be its expectation value of energy. Actually, of course, there are $I$ electrons, each partly present in state $\pe{n}//b//$ because of the way the Slater determinant writes out, and each electron turns out to contribute an equal share $E^{\rm e}_n/I$ to the total energy $E^{\rm e}_n$ associated with single-electron state $\pe{n}//b//$.

The Hamiltonians of the pair-repulsions produce six-di­men­sion­al inner products that come in two types. The inner products of the first type will be indicated by $J_{n{\underline n}}$, and they are

\begin{displaymath}
\fbox{$\displaystyle
J_{n{\underline n}} \equiv
\langle\pe n/{\skew0\vec r}///\pe{\underline n}/{\underline{\skew0\vec r}}///\vert
v^{\rm ee}\vert
\pe n/{\skew0\vec r}///\pe{\underline n}/{\underline{\skew0\vec r}}///\rangle
$} %
\end{displaymath} (9.23)

Again, what electrons ${\skew0\vec r}$ and ${\underline{\skew0\vec r}}$ refer to is of no consequence. But written out as an integral for a specific set of electrons, using the expression for the pair energy $v^{\rm ee}_{i{\underline i}}$ of the previous section, you get

\begin{displaymath}
\int_{{\rm all}\;{\skew0\vec r}_i} \int_{{\rm all}\;{\skew0\vec r}_{\underline i}}
\vert\pe n/{\skew0\vec r}_i///\vert^2 \vert\pe{\underline n}/{\skew0\vec r}_{\underline i}///\vert^2\,
v^{\rm ee}_{i{\underline i}}
{\,\rm d}^3{\skew0\vec r}_i {\,\rm d}^3{\skew0\vec r}_{\underline i}
\end{displaymath}

If all you had was one electron $i$ in single-electron state $\pe{n}//b//$ and a second electron ${\underline i}$ in single-electron state $\pe{\underline n}//b//$, this would be the expectation potential energy of their interaction. It would be the probability of electron $i$ being near ${\skew0\vec r}_i$ and electron ${\underline i}$ being near ${\skew0\vec r}_{\underline i}$ times the Coulomb potential energy at those positions. For that reason these integrals are called “Coulomb integrals.”

The second type of integrals will be indicated by $K_{n{\underline n}}$, and they are

\begin{displaymath}
\fbox{$\displaystyle
K_{n{\underline n}} \equiv
\langle\pe n/{\skew0\vec r}///\pe{\underline n}/{\underline{\skew0\vec r}}///\vert
v^{\rm ee}\vert
\pe{\underline n}/{\skew0\vec r}///\pe n/{\underline{\skew0\vec r}}///\rangle
$} %
\end{displaymath} (9.24)

These integrals are called the “exchange integrals.” Now don't start thinking that they are there because the wave function must be antisymmetric under electron exchange. They, and others, would show up in any reasonably general wave function. You can think of them instead as Coulomb integrals with the electrons in the right hand side of the inner product exchanged.

The exchange integrals are a reflection of nature doing business in terms of an unobservable wave function, rather than the observable probabilities that appear in the Coulomb integrals. They are the equivalent of the twilight terms that have appeared before in two-state systems. Written out as integrals, you get

\begin{displaymath}
\int_{{\rm all}\;{\skew0\vec r}_i} \int_{{\rm all}\;{\skew0\vec r}_{\underline i}}
\pe n/{\skew0\vec r}_i///^* \pe{\underline n}/{\skew0\vec r}_{\underline i}///^*\,
v^{\rm ee}_{i{\underline i}}\,
\pe{\underline n}/{\skew0\vec r}_i///\pe n/{\skew0\vec r}_{\underline i}///
{\,\rm d}^3{\skew0\vec r}_i {\,\rm d}^3{\skew0\vec r}_{\underline i}.
\end{displaymath}

Going back to the original question of the expectation energy of the complete system of $I$ electrons, it turns out that it can be written in terms of the various inner products above as

\begin{displaymath}
\fbox{$\displaystyle
\big\langle E\big\rangle =
\sum_{n=1}^I E^{\rm e}_n
+ {\textstyle\frac{1}{2}} \sum_{n=1}^I
\sum_{\textstyle{{\underline n}=1\atop {\underline n}\ne n}}^I
\left(
J_{n{\underline n}}
- \langle{\updownarrow}_n\vert{\updownarrow}_{\underline n}\rangle^2
K_{n{\underline n}}
\right)
$} %
\end{displaymath} (9.25)

The spin inner product $\langle{\updownarrow}_n\vert{\updownarrow}_{\underline n}\rangle$ is one if the orbitals have the same spin, and zero if they have opposite spin, so the square is somewhat superfluous. Consider it a reminder that if you want, you can shove the spins into $K_{n{\underline n}}$ to make it a spin, rather than spatial, orbital inner product.

If you want to see where all this comes from, the derivations are in {D.52}. There are also some a priori things you can say about the Coulomb and exchange integrals, {D.53}; they are real, and additionally

\begin{displaymath}
J_{nn} = K_{nn}
\qquad
J_{n{\underline n}} \geqslant K_{n{\underline n}} \geqslant 0
\qquad
J_{n{\underline n}} = J_{{\underline n}n}
\qquad
K_{n{\underline n}} = K_{{\underline n}n} %
\end{displaymath} (9.26)

So in terms of linear algebra, they are real symmetric matrices with nonnegative coefficients and the same main diagonal.
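These properties can be illustrated on a discretized model, in which the orbitals become orthonormal vectors on a grid and the repulsion becomes a symmetric, pointwise-positive kernel. A Python sketch; the one-dimensional grid, the softened kernel, and the random orbitals are all made-up stand-ins for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-3.0, 3.0, 40)                  # 1-D grid standing in for space
Q, _ = np.linalg.qr(rng.normal(size=(40, 4)))   # 4 orthonormal "spatial orbitals"
V = 1.0 / (np.abs(x[:, None] - x[None, :]) + 0.1)   # softened Coulomb kernel

def J(n, m):
    # Coulomb integral: the two probability densities interact
    return Q[:, n]**2 @ V @ Q[:, m]**2

def K(n, m):
    # exchange integral: the overlap products interact
    f = Q[:, n] * Q[:, m]
    return f @ V @ f

for n in range(4):
    assert np.isclose(J(n, n), K(n, n))          # same main diagonal
    for m in range(4):
        assert np.isclose(J(n, m), J(m, n))      # symmetric matrices
        assert K(n, m) >= -1e-10                 # nonnegative coefficients
        assert J(n, m) >= K(n, m) - 1e-10        # J dominates K
```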

The analysis can easily be extended to generalized orbitals that take the form

\begin{displaymath}
\pp n/{\skew0\vec r}//z/ = \pe n+/{\skew0\vec r}/u/z/+\pe n-/{\skew0\vec r}/d/z/.
\end{displaymath}

However, the normal unrestricted spin-up or spin-down orbitals, in which either $\pe{n+}////$ or $\pe{n-}////$ is zero, already satisfy the variational requirement $\delta\langle{E}\rangle$ $=$ 0 even if generalized variations in the orbitals are allowed, {N.17}.

In any case, the expectation value of energy has been found.


9.3.4 The canonical Hartree-Fock equations

The previous section found the expectation value of energy for any electron wave function described by a single Slater determinant. The final step is to find the orbitals that produce the best approximation of the true wave function using such a single determinant. For the ground state, the best single determinant would be the one with the lowest expectation value of energy. But surely you would not want to guess spatial orbitals at random until you find some with really, really, low energy.

What you would like to have is specific equations for the best spatial orbitals that you can then solve in a methodical way. And you can have them using the methods of section 9.1, {D.54}. In unrestricted Hartree-Fock, for every spatial orbital $\pe{n}/{\skew0\vec r}///$ there is an equation of the form:

\begin{displaymath}
\fbox{$\displaystyle
\begin{array}[b]{l}
\displaystyle
h^{\rm e}\pe n/{\skew0\vec r}///
+ \sum_{\textstyle{{\underline n}=1\atop {\underline n}\ne n}}^I
\langle\pe{\underline n}/{\underline{\skew0\vec r}}///\vert v^{\rm ee}\vert
\pe{\underline n}/{\underline{\skew0\vec r}}///\rangle
\pe n/{\skew0\vec r}///
\\[10pt]
\displaystyle\qquad
{} - \sum_{\textstyle{{\underline n}=1\atop {\underline n}\ne n}}^I
\langle{\updownarrow}_n\vert{\updownarrow}_{\underline n}\rangle^2
\langle\pe{\underline n}/{\underline{\skew0\vec r}}///\vert v^{\rm ee}\vert
\pe n/{\underline{\skew0\vec r}}///\rangle
\pe{\underline n}/{\skew0\vec r}///
= \epsilon_n \pe n/{\skew0\vec r}///
\end{array}
$} %
\end{displaymath} (9.27)

These are called the canonical Hartree-Fock equations. For equations valid for the restricted closed-shell and single-determinant open-shell approximations, see the derivation in {D.54}.

Recall that $h^{\rm e}$ is the single-electron Hamiltonian consisting of its kinetic energy and its potential energy due to nuclear attractions, and that $v^{\rm ee}$ is the potential energy of repulsion between two electrons at given locations:

\begin{displaymath}
h^{\rm e}= - \frac{\hbar^2}{2m_{\rm e}}\nabla^2
- \sum_{j=1}^J \frac{Z_j e^2}{4\pi\epsilon_0} \frac{1}{r_j}
\qquad
v^{\rm ee}= \frac{e^2}{4\pi\epsilon_0} \frac{1}{r_{\rm ee}}
\qquad
r_j\equiv\vert{\skew0\vec r}-{\skew0\vec r}^{\,\rm n}_j\vert
\quad
r_{\rm ee}\equiv\vert{\skew0\vec r}- {\underline{\skew0\vec r}}\vert
\end{displaymath}

So, if there were no electron-electron repulsions, i.e. $v^{\rm ee}$ $=$ 0, the canonical equations above would turn into single-electron Hamiltonian eigenvalue problems of the form $h^{\rm e}\pe{n}////$ $=$ $\epsilon_n\pe{n}////$ where $\epsilon_n$ would be the energy of the single-electron orbital. This is really what happened in the approximate analysis of atoms in chapter 5.9: the electron to electron repulsions were ignored there in favor of nuclear strength reductions, and the result was single-electron hydrogen-atom orbitals.

In the presence of electron to electron repulsions, the equations for the orbitals can still symbolically be written as if they were single-electron eigenvalue problems,

\begin{displaymath}
{\cal F}\pe n/{\skew0\vec r}/b/z/
=\epsilon_n\pe n/{\skew0\vec r}/b/z/
\end{displaymath}

where ${\cal F}$ is called the “Fock operator,” and is written out further as:

\begin{displaymath}
{\cal F}= h^{\rm e}+ v^{\rm HF}.
\end{displaymath}

The first term in the Fock operator is the single-electron Hamiltonian. The mischief is in the innocuous-looking second term $v^{\rm {HF}}$. Supposedly, this is the potential energy related to the repulsion by the other electrons. What is it? Well, it will have to be the terms in the canonical equations (9.27) not described by the single-electron Hamiltonian $h^{\rm e}$:

\begin{eqnarray*}
v^{\rm HF} \pe/{\skew0\vec r}/b/z/ & = &
\sum_{{\underline n}=1}^I
\langle\pe{\underline n}/{\underline{\skew0\vec r}}/b/z1/\vert v^{\rm ee}\vert
\pe{\underline n}/{\underline{\skew0\vec r}}/b/z1/\rangle \pe/{\skew0\vec r}/b/z/
\\
& & {} - \sum_{{\underline n}=1}^I
\langle\pe{\underline n}/{\underline{\skew0\vec r}}/b/z1/\vert v^{\rm ee}\vert
\pe/{\underline{\skew0\vec r}}/b/z1/\rangle \pe{\underline n}/{\skew0\vec r}/b/z/
\end{eqnarray*}

To recover the canonical equations (9.27) from the Fock form, take an inner product with the spin ${\updownarrow}(S_z)$. The definition of the Fock operator is unavoidably in terms of spin rather than spatial single-electron functions: the spin of the state on which it operates must be known to evaluate the final term.

Note that the above expression did not give an expression for $v^{\rm {HF}}$ by itself, but only for $v^{\rm {HF}}$ applied to an arbitrary single-electron function $\pe//b//$. The reason is that $v^{\rm {HF}}$ is not a normal potential at all: the second term, the one due to the exchange integrals, does not multiply $\pe//b//$ by a potential function, it shoves it into an inner product! The Hartree-Fock potential $v^{\rm {HF}}$ is an operator, not a normal potential energy. Given a single-electron function, it produces another function.

Actually, even that is not quite true. The Hartree-Fock potential is only an operator after you have found the orbitals $\pe1//b//$, $\pe2//b//$, ..., $\pe{\underline n}//b//$, ..., $\pe{I}//b//$ appearing in it. While you are still trying to find them, the Fock operator is not even an operator, it is just a thing. However, given the orbitals, at least the Fock operator is a Hermitian one, one that can be taken to the other side if it appears in an inner product, and that has real eigenvalues and a complete set of eigenfunctions, {D.55}.
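To see concretely that the exchange part is an operator rather than a multiplying potential, and that the complete Fock operator is nevertheless Hermitian, consider the following small numerical sketch. Everything in it is made up for illustration: a one-dimensional grid, a softened Coulomb kernel, and two arbitrary “occupied” orbitals; it is not a real atom.

```python
import numpy as np

# Toy discretized Fock operator on a 1D grid, to show that the exchange
# term is an operator (not a multiplying potential) and that the whole
# thing is nevertheless Hermitian.  The grid, the softened Coulomb
# kernel w, and the two "occupied" orbitals are all made up.

n = 60
x = np.linspace(-6.0, 6.0, n)
dx = x[1] - x[0]
w = 1.0 / np.sqrt((x[:, None] - x[None, :])**2 + 1.0)   # softened v^ee kernel

# two arbitrary occupied orbitals, normalized on the grid
occ = [np.exp(-x**2 / 2), x * np.exp(-x**2 / 2)]
occ = [p / np.sqrt(np.sum(p**2) * dx) for p in occ]

# single-electron Hamiltonian: finite-difference kinetic energy plus a
# softened nuclear attraction
kin = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / (2.0 * dx**2)
v_ne = np.diag(-2.0 / np.sqrt(x**2 + 1.0))

def apply_fock(psi):
    """Apply F = h^e + v^HF to the grid function psi."""
    h_psi = (kin + v_ne) @ psi
    # Coulomb part: an ordinary multiplying potential...
    v_coul = sum((w @ p**2) * dx for p in occ)
    j_psi = v_coul * psi
    # ...but the exchange part shoves psi into an inner product
    k_psi = sum(p * (w @ (p * psi)) * dx for p in occ)
    return h_psi + j_psi - k_psi
```

Applying this to a function concentrated at a single grid point gives a result spread over the entire grid, which no multiplying potential could do; and the discrete inner product of $\phi$ with $F\psi$ equals that of $F\phi$ with $\psi$ to rounding error, as it should for a Hermitian operator.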

So how do you solve the canonical Hartree-Fock equations for the orbitals $\pe{n}////$? If the Hartree-Fock potential $v^{\rm {HF}}$ were a known operator, you would have only linear, single-electron eigenvalue problems to solve. That would be relatively easy, as far as those things go. But since the operator $v^{\rm {HF}}$ contains the unknown orbitals, you do not have a linear problem at all; it is a system of coupled cubic equations in infinitely many unknowns. The usual way to solve it is iteratively: you guess an approximate form of the orbitals and plug it into the Hartree-Fock potential. With this guessed potential, the orbitals may then be found from solving linear eigenvalue problems. If all goes well, the obtained orbitals, though not perfect, will at least be better than the ones that you guessed at random. So plug those improved orbitals into the Hartree-Fock potential and solve the eigenvalue problems again. Still better orbitals should result. Keep going until you get the correct solution to within acceptable accuracy.
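The iteration just described can be sketched in a few lines. The sketch below is a toy restricted Hartree-Fock model, two electrons in one doubly occupied orbital; the orthonormal basis, the one-electron matrix $h$, and the two-electron integrals $(pq\vert rs)$ are invented placeholders with the correct symmetries, not integrals of any real molecule.

```python
import numpy as np

# Sketch of the self-consistent iteration for a toy restricted
# Hartree-Fock model: 2 electrons in one doubly occupied orbital, in an
# assumed-orthonormal basis of size 4.  The one-electron matrix h and
# the two-electron integrals (pq|rs) are invented placeholders.

rng = np.random.default_rng(1)
m = 4
h = np.diag([-4.0, -2.0, -1.0, -0.5]) + 0.05 * rng.standard_normal((m, m))
h = 0.5 * (h + h.T)

# (pq|rs) in factorized form, which gives all index symmetries for free
factors = [np.outer(v, v) for v in 0.4 * rng.standard_normal((3, m))]
eri = sum(np.einsum('pq,rs->pqrs', A, A) for A in factors)

def fock(D):
    """F = h + 2 J(D) - K(D) for one doubly occupied orbital."""
    J = np.einsum('pqrs,rs->pq', eri, D)
    K = np.einsum('prqs,rs->pq', eri, D)
    return h + 2.0 * J - K

# guess the orbital (here: ignore the repulsions), then iterate
_, C = np.linalg.eigh(h)
D = np.outer(C[:, 0], C[:, 0])
for it in range(200):
    _, C = np.linalg.eigh(fock(D))           # linear eigenvalue problem
    D_new = np.outer(C[:, 0], C[:, 0])
    if np.linalg.norm(D_new - D) < 1e-10:    # potential no longer changes
        break
    D = 0.5 * D + 0.5 * D_new                # damped update, for robustness
print("self-consistent after", it, "iterations")
```

The stopping test is exactly the self-consistency idea of the next paragraph: the potential used to compute the final orbital is the potential that the final orbital produces.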

You will know when you have got the correct solution since the Hartree-Fock potential will no longer change; the potential that you used to compute the final set of orbitals is really the potential that those final orbitals produce. In other words, the final Hartree-Fock potential that you compute is consistent with the final orbitals. Since the potential would be a field if it were not an operator, that explains why such an iterative method to compute the Hartree-Fock solution is called a “self-consistent field method.” It is like calling an iterative scheme for the Laplace equation on a mesh a “self-consistent neighbors method,” instead of point relaxation. Surely the equivalent for Hartree-Fock, like “iterated potential” or “potential relaxation,” would have been much clearer to a general audience?


9.3.5 Additional points

This brief section was not by any means a tutorial of the Hartree-Fock method. The purpose was only to explain the basic ideas in terms of the notations and coverage of this book. If you actually want to apply the method, you will need to take up a book written by experts who know what they are talking about. The book by Szabo and Ostlund [45] was the main reference for this section, and is recommended as a well written introduction. Below are some additional concepts you may want to be aware of.


9.3.5.1 Meaning of the orbital energies

In the single electron case, the orbital energy $\epsilon_n$ in the canonical Hartree-Fock equation

\begin{displaymath}
\begin{array}[b]{l}
\displaystyle
h^{\rm e}\pe n/{\skew0\vec r}///
+ \sum_{{\underline n}=1}^I
\langle\pe{\underline n}////\vert v^{\rm ee}\vert\pe{\underline n}////\rangle
\pe n/{\skew0\vec r}///
\\
\displaystyle\qquad\mbox{}
- \sum_{{\underline n}=1}^I
\langle{\updownarrow}_n\vert{\updownarrow}_{\underline n}\rangle
\langle\pe{\underline n}////\vert v^{\rm ee}\vert\pe n////\rangle
\pe{\underline n}/{\skew0\vec r}///
= \epsilon_n \pe n/{\skew0\vec r}///
\end{array}
\end{displaymath}

represents the actual energy of the electron. It also represents the ionization energy, the energy required to take the electron away from the nuclei and leave it far away at rest. This subsubsection will show that in the multiple electron case, the “orbital energies” $\epsilon_n$ are not orbital energies in the sense of giving the contributions of the orbitals to the total expectation energy. However, they can still be taken to be approximate ionization energies. This result is known as “Koopmans’ theorem.”

To verify the theorem, a suitable equation for $\epsilon_n$ is needed. It can be found by taking an inner product of the canonical equation above with $\pe{n}/{\skew0\vec r}///$, i.e. by putting $\pe{n}/{\skew0\vec r}///^*$ to the left of both sides and integrating over ${\skew0\vec r}$. That produces

\begin{displaymath}
\epsilon_n = E^{\rm e}_n
+ \sum_{{\underline n}=1}^I J_{n{\underline n}}
- \sum_{{\underline n}=1}^I
\langle{\updownarrow}_n\vert{\updownarrow}_{\underline n}\rangle^2
K_{n{\underline n}} %
\end{displaymath} (9.28)

which consists of the single-electron energy $E^{\rm e}_n$, Coulomb integrals $J_{n{\underline n}}$ and exchange integrals $K_{n{\underline n}}$ as defined in subsection 9.3.3. It can already be seen that if all the $\epsilon_n$ are summed together, it does not produce the total expectation energy (9.25), because that one includes a factor ${\textstyle\frac{1}{2}}$ in front of the Coulomb and exchange integrals. So, $\epsilon_n$ cannot be seen as the part of the system energy associated with orbital $\pe{n}//b//$ in any meaningful sense.
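Indeed, summing (9.28) over all $n$ and comparing with the expectation energy (9.25) makes the double counting explicit:

\begin{displaymath}
\sum_{n=1}^I \epsilon_n
= \langle E\rangle
+ {\textstyle\frac{1}{2}} \sum_{n=1}^I \sum_{{\underline n}=1}^I
\left( J_{n{\underline n}}
- \langle{\updownarrow}_n\vert{\updownarrow}_{\underline n}\rangle^2
K_{n{\underline n}} \right)
\end{displaymath}

so the sum of the orbital energies counts every electron-electron repulsion twice instead of once.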

However, $\epsilon_n$ can still be viewed as an approximate ionization energy. Assume that the electron is removed from orbital $\pe{n}//b//$, leaving the electron at infinite distance at rest. No, scratch that; all electrons share orbital $\pe{n}//b//$, not just one. Assume that one electron is removed from the system and that the remaining $I-1$ electrons stay out of the orbital $\pe{n}//b//$. Then, if it is assumed that the other orbitals do not change, the new system’s Slater determinant is the same as the original system’s, except that column $n$ and a row have been removed. The expectation energy of the new state then equals the original expectation energy, except that $E^{\rm e}_n$ and the $n$-​th column plus the $n$-​th row of the Coulomb and exchange integral matrices have been removed. The energy removed is then exactly $\epsilon_n$ above. (While $\epsilon_n$ only involves the $n$-​th row of the matrices, not the $n$-​th column, it does not have the factor $\frac12$ in front of them like the expectation energy does. And rows equal columns in the matrices, so half the row in $\epsilon_n$ counts as the half column in the expectation energy and the other half as the half row. This counts the element ${\underline n}$ $\vphantom0\raisebox{1.5pt}{$=$}$ $n$ twice, but that is zero anyway since $J_{nn}$ $\vphantom0\raisebox{1.5pt}{$=$}$ $K_{nn}$.)
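The bookkeeping in the parenthesis above is easy to check numerically. In the sketch below, the single-electron energies and the Coulomb and exchange matrices are random placeholders with the properties used in the text: symmetric, with $J_{nn}$ $=$ $K_{nn}$; all spins are taken parallel, so every spin inner product is 1.

```python
import numpy as np

# Check: with frozen orbitals, removing the electron from orbital n
# removes exactly epsilon_n from the expectation energy.  E1, J, K are
# random placeholders: symmetric matrices with J_nn = K_nn, all spins
# parallel so the spin inner products are 1.

rng = np.random.default_rng(0)
I = 5
E1 = rng.standard_normal(I)                 # single-electron energies E^e_n
J = rng.random((I, I)); J = 0.5 * (J + J.T)
K = rng.random((I, I)); K = 0.5 * (K + K.T)
np.fill_diagonal(K, np.diag(J))             # J_nn = K_nn

def total_energy(sel):
    """Expectation energy (9.25) restricted to the selected orbitals."""
    s = list(sel)
    return (E1[s].sum()
            + 0.5 * J[np.ix_(s, s)].sum()
            - 0.5 * K[np.ix_(s, s)].sum())

eps = E1 + J.sum(axis=1) - K.sum(axis=1)    # orbital energies (9.28)

n = 2
removed = total_energy(range(I)) - total_energy([m for m in range(I) if m != n])
print(removed - eps[n])   # essentially zero: the removed energy is epsilon_n
```

At the same time, the sum of all the `eps` values differs from the total energy by exactly half the summed repulsion terms, confirming that the orbital energies are not energy contributions of the orbitals.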

So by the removal of the electron from (read: and) orbital $\pe{n}//b//$, an amount of energy $\epsilon_n$ has been removed from the expectation energy. Better put, a positive amount of energy $-\epsilon_n$ has been added to the expectation energy. So the ionization energy is $-\epsilon_n$ if the electron is removed from orbital $\pe{n}//b//$ according to this story.

Of course, the assumption that the other orbitals do not change after the removal of one electron and orbital is dubious. If you were a lithium electron in the expansive 2s state, and someone removed one of the two inner 1s electrons, would you not want to snuggle up a lot more closely to the now much less shielded three-proton nucleus? On the other hand, in the more likely case that someone removed the 2s electron, it would probably not seem like that much of an event to the remaining two 1s electrons near the nucleus, and the assumption that the orbitals do not change would appear more reasonable. And normally, when you say ionization energy, you are talking about removing the electron from the highest energy state.

But still, you should really recompute the remaining two orbitals from the canonical Hartree-Fock equations for a two-electron system to get the best, lowest, energy for the new $I-1$ electron ground state. The energy you get by not doing so and just sticking with the original orbitals will be too high. Which means that all else being the same, the ionization energy will be too high too.

However, there is another error of importance here, the error in the Hartree-Fock approximation itself. If the original and final system would have the same Hartree-Fock error, then it would not make a difference and $\epsilon_n$ would overestimate the ionization energy as described above. But Szabo and Ostlund [45, p. 128] note that Hartree-Fock tends to overestimate the energy for the original larger system more than for the final smaller one. The difference in Hartree-Fock error tends to compensate for the error you make by not recomputing the final orbitals, and in general the orbital energies provide reasonable first approximations to the experimental ionization energies.

The opposite of ionization energy is “electron affinity,” the energy with which the atom or molecule will bind an additional free electron [in its valence shell], {N.19}. It is not to be confused with electronegativity, which has to do with willingness to take on electrons in chemical bonds, rather than free electrons.

To compute the electron affinity of an atom or molecule with $I$ electrons using the Hartree-Fock method, you can either recompute the $I+1$ orbitals with the additional electron from scratch, or much easier, just use the Fock operator of the $I$ electrons to compute one more orbital $\pe{I+1}//b//$. In the latter case, however, the energy of the final system will again be higher than Hartree-Fock, and it being the larger system, the Hartree-Fock energy will be too high compared to the $I$-​electron system already. So now the errors add up, instead of subtract as in the ionization case. If the final energy is too high, then the computed binding energy will be too low, so you would expect $\epsilon_{I+1}$ to underestimate the electron affinity relatively badly. That is especially so since affinities tend to be relatively small compared to ionization energies. Indeed Szabo and Ostlund [45, p. 128] note that while many neutral molecules will take up and bind a free electron, producing a stable negative ion, the orbital energies almost always predict negative binding energy, hence no stable ion.


9.3.5.2 Asymptotic behavior

The exchange terms in the Hartree-Fock potential are not really a potential, but an operator. It turns out that this makes a major difference in how the probability of finding an electron decays with distance from the system.

Consider again the Fock eigenvalue problem, but with the single-electron Hamiltonian identified in terms of kinetic energy and nuclear attraction,

\begin{displaymath}
\begin{array}[b]{l}
\displaystyle
- \frac{\hbar^2}{2m_{\rm e}} \nabla^2 \pe n/{\skew0\vec r}///
+ v^{\rm Ne}\pe n/{\skew0\vec r}///
+ \sum_{{\underline n}=1}^I
\langle\pe{\underline n}////\vert v^{\rm ee}\vert\pe{\underline n}////\rangle
\pe n/{\skew0\vec r}///
\\
\displaystyle\qquad\mbox{}
- \sum_{{\underline n}=1}^I
\langle{\updownarrow}_n\vert{\updownarrow}_{\underline n}\rangle
\langle\pe{\underline n}////\vert v^{\rm ee}\vert\pe n////\rangle
\pe{\underline n}/{\skew0\vec r}///
= \epsilon_n \pe n/{\skew0\vec r}///
\end{array}
\end{displaymath}

Now consider the question which of these terms dominate at large distance from the system and therefore determine the large-distance behavior of the solution.

The first term that can be thrown out is $v^{\rm Ne}$, the Coulomb potential due to the nuclei; this potential decays to zero roughly in inverse proportion to the distance from the system. (At large distance from the system, the distances between the nuclei can be ignored, and the potential is then approximately the one of a single point charge with the combined nuclear strengths.) Since $\epsilon_n$ in the right hand side does not decay to zero, the nuclear term cannot survive compared to it.
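The parenthetical claim is simple to verify: at large distance, the potential of a few separated charges approaches that of a single point charge of the combined strength. The charge strengths and positions below are arbitrary illustrations.

```python
# Three made-up nuclear charges within a unit-sized region, versus a
# single point charge with the combined strength Q (units suppressed).
charges = [(1.0, -0.4), (6.0, 0.0), (1.0, 0.5)]     # (strength, position)
Q = sum(Z for Z, _ in charges)

def v_exact(r):
    """Attraction potential of the separate charges at distance r."""
    return -sum(Z / abs(r - pos) for Z, pos in charges)

def v_point(r):
    """Attraction potential of a single combined point charge."""
    return -Q / r

for r in (10.0, 100.0, 1000.0):
    print(r, v_exact(r) / v_point(r))    # ratio tends to 1 as r grows
```

And since both versions decay like $1/r$, either one is eventually negligible compared to the constant $\epsilon_n$.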

Similarly the third term, the Coulomb part of the Hartree-Fock potential, cannot survive since it too is a Coulomb potential, just with a charge distribution given by the orbitals in the inner product.

However, the final term in the left hand side, the exchange part of the Hartree-Fock potential, is more tricky, because the various parts of this sum have other orbitals outside of the inner product. This term can still be ignored for the slowest-decaying spin-up and spin-down states, because for them none of the other orbitals is any larger, and the multiplying inner product still decays like a Coulomb potential (faster, actually). Under these conditions the kinetic energy will have to match the right hand side, implying

\begin{displaymath}
\mbox{slowest decaying orbitals: }
\pe n/{\skew0\vec r}/// \sim \exp(-\sqrt{-2m_{\rm e}\epsilon_n}r/\hbar + \ldots)
\end{displaymath}

From this expression, it can also be seen that the $\epsilon_n$ values must be negative, or else the slowest decaying orbitals would not have the exponential decay with distance of a bound state.
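As a sanity check on the exponential, for the hydrogen ground state $\epsilon_n$ is $-13.6$ eV, and the decay rate $\sqrt{-2m_{\rm e}\epsilon_n}/\hbar$ should reproduce the inverse Bohr radius, since the 1s orbital decays like $e^{-r/a_0}$. Using rounded CODATA values of the constants:

```python
import math

# Physical constants (SI units)
hbar = 1.054571817e-34    # J s
m_e = 9.1093837015e-31    # kg
eV = 1.602176634e-19      # J
a_0 = 5.29177210903e-11   # m, Bohr radius

eps_1s = -13.6057 * eV                        # hydrogen ground state energy
kappa = math.sqrt(-2.0 * m_e * eps_1s) / hbar  # decay rate of the orbital
print(kappa * a_0)   # close to 1: the decay rate is the inverse Bohr radius
```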

The other orbitals, however, cannot decay faster than the slowest decaying one of the same spin by more than algebraic factors: the slowest decaying orbital with the same spin appears in the exchange term sum and will have to be matched. So, with the exchange terms included, all orbitals normally decay slowly, raising the chances of finding electrons at significant distances. The decay can be written as

\begin{displaymath}
\pe n/{\skew0\vec r}/// \sim
\exp(-\sqrt{2m_{\rm e}
\vert\epsilon_m\vert _{\rm min,\ same\ spin,\ no\ ss}}r
/\hbar + \ldots)
\end{displaymath} (9.29)

where $\epsilon_m$ is the $\epsilon$ value of smallest magnitude (absolute value) among all the orbitals with the same spin.

However, in the case that $\pe{n}/{\skew0\vec r}///$ is spherically symmetric, (i.e. an s state), exclude other s-states as possibilities for $\epsilon_m$. The reason is a peculiarity of the Coulomb potential that makes the inner product appearing in the exchange term exponentially small at large distance for two orthogonal, spherically symmetric states. (For the incurably curious, it is a result of Maxwell’s first equation applied to a spherically symmetric configuration like figure 13.1, but with multiple spherically distributed charges rather than one, and the net charge being zero.)


9.3.5.3 Hartree-Fock limit

The Hartree-Fock approximation greatly simplifies finding a many-dimensional wave function. But really, solving the “eigenvalue problems” (9.27) for the orbitals iteratively is not that easy either. Typically, what one does is to write the orbitals $\pe{n}////$ as sums of chosen single-electron functions $f_1,f_2,\ldots$. You can then precompute various integrals in terms of those functions. Of course, the number of chosen single-electron functions will have to be a lot more than the number of orbitals $I$; if you are only using $I$ chosen functions, it really means that you are choosing the orbitals $\pe{n}////$ rather than computing them.

But you do not want to choose too many functions either, because the required numerical effort will go up. So there will be an error involved; you will not get as close to the true best orbitals as you can. One thing this means is that the actual error in the ground state energy will be even larger than true Hartree-Fock would give. For that reason, the Hartree-Fock value of the ground state energy is called the Hartree-Fock limit: it is how close you could come to the correct energy if you were able to solve the Hartree-Fock equations exactly.
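The effect of basis size on a variational energy can be illustrated on any eigenvalue problem; Hartree-Fock itself is not needed for that. The sketch below uses a one-dimensional harmonic oscillator, exact ground state energy $\frac12$ in suitable units, expanded in particle-in-a-box sine functions; the box size and the basis sizes are arbitrary choices.

```python
import numpy as np

# Variational ground state energies from growing basis sets.  Stand-in
# problem: the 1D harmonic oscillator H = -(1/2) d^2/dx^2 + (1/2) x^2,
# exact ground state energy 0.5, expanded in particle-in-a-box sine
# functions on [-L, L].  L and the basis sizes are arbitrary choices.

L = 4.0
x = np.linspace(-L, L, 4001)
dx = x[1] - x[0]

def ground_energy(nbasis):
    # orthonormal box functions and their analytic kinetic energies
    b = [np.sin(k * np.pi * (x + L) / (2 * L)) / np.sqrt(L)
         for k in range(1, nbasis + 1)]
    kin = [0.5 * (k * np.pi / (2 * L))**2 for k in range(1, nbasis + 1)]
    H = np.zeros((nbasis, nbasis))
    for i in range(nbasis):
        for j in range(nbasis):
            H[i, j] = np.sum(b[i] * 0.5 * x**2 * b[j]) * dx  # potential part
        H[i, i] += kin[i]
    return np.linalg.eigvalsh(H)[0]

energies = [ground_energy(nb) for nb in (2, 4, 8, 12)]
print(energies)   # decreasing toward 0.5
```

Each larger basis contains the smaller one, so the computed ground state energy can only go down toward, never below, the exact value; that limiting value plays the same role here as the Hartree-Fock limit does for the full equations.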


9.3.5.4 Configuration interaction

According to the previous subsubsection, to compute the Hartree-Fock solution accurately, you want to select a large number of single-electron functions to represent the orbitals. But don't start using zillions of them. The bottom line is that the Hartree-Fock solution still has a finite error, because a wave function cannot in general be described accurately using only a single Slater determinant. So what is the point in computing the wrong numbers to ten digits accuracy?

You might think that the error in the Hartree-Fock approximation would be called something like Hartree-Fock error, single determinant error, or “representation error,” since it is due to an incomplete representation of the true wave function. However, the error is called “correlation energy” because there is an energizing correlation between the more impenetrable and poorly defined your jargon, and the more respect you will get for doing all that incomprehensible stuff, {N.18}.

Anyway, in view of the fact that even an exact solution to the Hartree-Fock problem has a finite error, trying to get it exactly right is futile. At some stage, you would be much better off spending your efforts trying to reduce the inherent error in the Hartree-Fock approximation itself by including more determinants. As noted in section 5.7, if you include enough orthonormal basis functions, using all their possible Slater determinants, you can approximate any function to arbitrary accuracy.

After the $I$, (or $I$$\raisebox{.5pt}{$/$}$​2 in the restricted closed-shell case,) orbitals $\pe1//b//,\pe2//b//,\ldots$ have been found, the Hartree-Fock operator becomes just a Hermitian operator, and can be used to compute further orthonormal orbitals $\pe{I+1}//b//,\pe{I+2}//b//,\ldots$. You can add these to the stew, say to get a better approximation to the true ground state wave function of the system.

You might want to try to start small. If you compute just one more orbital $\pe{I+1}//b//$, you can already form $I$ more Slater determinants: you can replace any of the $I$ orbitals in the original determinant by the new function $\pe{I+1}//b//$. So you can now approximate the true wave function by the more general expression

\begin{eqnarray*}
\Psi
&= &a_0 \Big(
\vert{\rm det}(\pe1//b//,\pe2//b//,\pe3//b//,\ldots,\pe{I}//b//)\rangle
+ a_1 \vert{\rm det}(\pe{I+1}//b//,\pe2//b//,\pe3//b//,\ldots,\pe{I}//b//)\rangle
\\ && \mbox{}
+ a_2 \vert{\rm det}(\pe1//b//,\pe{I+1}//b//,\pe3//b//,\ldots,\pe{I}//b//)\rangle
+ \ldots
+ a_I \vert{\rm det}(\pe1//b//,\pe2//b//,\pe3//b//,\ldots,\pe{I+1}//b//)\rangle
\Big)
\end{eqnarray*}

where the coefficients $a_1,a_2,\ldots$ are to be chosen to approximate the ground state energy more closely and $a_0$ is a normalization constant.

The additional $I$ Slater determinants are called “excited determinants”. For example, the first excited state $\big\vert{\rm {det}}(\pe{I+1}//b//,\pe2//b//,\pe3//b//,\ldots,\pe{I}//b//)\big\rangle $ is like a state where you excited an electron out of the lowest state $\pe1//b//$ into an elevated energy state $\pe{I+1}//b//$. (However, note that if you really wanted to satisfy the variational requirement $\delta\langle{E}\rangle$ $\vphantom0\raisebox{1.5pt}{$=$}$ 0 for such a state, you would have to recompute the orbitals from scratch, using $\pe{I+1}//b//$ in the Fock operator instead of $\pe1//b//$. That is not what you want to do here; you do not want to create totally new orbitals, just more of them.)

It may seem that this must be a winner: as many as $I$ more determinants to further minimize the energy. Unfortunately, now you pay the price for doing such a great job with the single determinant. Since, hopefully, the Slater determinant is the best single determinant that can be formed, any changes that are equivalent to simply changing the determinant's orbitals will do no good. And it turns out that the $I+1$-​determinant wave function above is equivalent to the single-determinant wave function

\begin{displaymath}
\Psi = a_0\vert{\rm det}(\pe1//b//+a_1\pe{I+1}//b//,\pe2//b//+a_2\pe{I+1}//b//,
\ldots,\pe{I}//b//+a_I\pe{I+1}//b//)\rangle
\end{displaymath}

as you can check with some knowledge of the properties of determinants. Since you already have the best single determinant, all your efforts are going to be wasted if you try this.
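The determinant properties in question are linearity in each column and the fact that a determinant with two proportional columns is zero: expanding the modified determinant column by column, every term with the new orbital in two or more columns vanishes, and what is left is exactly the $I+1$ determinants above. A quick numerical check, with the orbitals replaced by arbitrary vectors of sample values:

```python
import numpy as np

# Determinants are linear in each column, and a determinant with two
# proportional columns is zero.  So the (I+1)-determinant sum collapses
# into a single determinant with modified columns.  The orbitals are
# stand-ins: arbitrary vectors of sample values.

rng = np.random.default_rng(3)
I = 4
orig = rng.standard_normal((I, I))   # columns: the original orbitals
new = rng.standard_normal(I)         # the extra orbital
a = rng.standard_normal(I)           # the coefficients a_1 ... a_I

# the original determinant plus the I singly "excited" ones
total = np.linalg.det(orig)
for k in range(I):
    excited = orig.copy()
    excited[:, k] = new              # replace orbital k by the new one
    total += a[k] * np.linalg.det(excited)

# one determinant whose columns are  (orbital k) + a_k (new orbital)
merged = np.linalg.det(orig + np.outer(new, a))
print(total - merged)   # essentially zero
```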

You might try forming another set of $I$ excited determinants by replacing one of the orbitals in the original Hartree-Fock determinant by $\pe{I+2}//b//$ instead of $\pe{I+1}//b//$, but the fact is that the variational condition $\delta\langle{E}\rangle$ $\vphantom0\raisebox{1.5pt}{$=$}$ 0 is still going to be satisfied when the wave function is the original Hartree-Fock one. For small changes in wave function, the additional determinants can still be pushed inside the Hartree-Fock one. To ensure a decrease in energy, you want to include determinants that allow a nonzero decrease in energy even for small changes from the original determinant, and that requires doubly excited determinants, in which two different original states are replaced by excited ones like $\pe{I+1}//b//$ and $\pe{I+2}//b//$.

Note that you can form $I(I-1)$ such determinants; the number of determinants rapidly explodes when you include more and more orbitals. And a mathematically convergent process would require an asymptotically large set of orbitals, compare chapter 5.7. How big is your computer?

Most people would probably call improving the wave function representation using multiple Slater determinants something like multiple-determinant representation, or maybe excited-determinant correction or so. However, it is called configuration interaction, because every nonexpert will wonder whether the physicist is talking about the configuration of the nuclei or the electrons, (actually, it refers to the practitioner configuring all those determinants, no kidding,) and what it is interacting with (with the person bringing in the coffee, of course. OK.) If you said that you were performing a configuration interaction while actually doing, say, some finite difference or finite element computation, just because it requires you to specify a configuration of mesh points, some people might doubt your sanity. But in physics, the standards are not so high.