6.13 Fermi-Dirac Distribution

The previous sections discussed the ground state of a system of fermions like electrons. The ground state corresponds to absolute zero temperature. This section has a look at what happens to the system when the temperature becomes greater than zero.

For nonzero temperature, the average number of fermions $\iota^{\rm {f}}$ per single-particle state can be found from the so-called

\begin{displaymath}
\fbox{$\displaystyle
\mbox{Fermi-Dirac distribution:}\qquad
\iota^{\rm f} = \frac{1}{e^{({\vphantom' E}^{\rm p}- \mu)/{k_{\rm B}}T} + 1}
$} %
\end{displaymath} (6.19)

This distribution is derived in chapter 11. Like the Bose-Einstein distribution for bosons, it depends on the energy ${\vphantom' E}^{\rm p}$ of the single-particle state, the absolute temperature $T$, the Boltzmann constant $k_{\rm B}$ $=$ 1.38 10$^{-23}$ J/K, and a chemical potential $\mu$. In fact, the mathematical difference between the two distributions is merely that the Fermi-Dirac distribution has a plus sign in the denominator where the Bose-Einstein one has a minus sign. Still, that small change makes for very different statistics.

The biggest difference is that $\iota^{\rm {f}}$ is always less than one: the Fermi-Dirac distribution can never have more than one fermion in a given single-particle state. That follows from the fact that the exponential in the denominator of the distribution is always greater than zero, making the denominator greater than one.
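Spelled out as a chain of inequalities, this is simply
\begin{displaymath}
e^{({\vphantom' E}^{\rm p}- \mu)/{k_{\rm B}}T} > 0
\quad\Longrightarrow\quad
e^{({\vphantom' E}^{\rm p}- \mu)/{k_{\rm B}}T} + 1 > 1
\quad\Longrightarrow\quad
\iota^{\rm f} = \frac{1}{e^{({\vphantom' E}^{\rm p}- \mu)/{k_{\rm B}}T} + 1} < 1
\end{displaymath}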

It reflects the exclusion principle: there cannot be more than one fermion in a given state, so the average per state cannot exceed one either. The Bose-Einstein distribution can have many bosons in a single state, especially in the presence of Bose-Einstein condensation.

Note incidentally that both the Fermi-Dirac and Bose-Einstein distributions count the different spin versions of a given spatial state as separate states. In particular for electrons, the spin-up and spin-down versions of a spatial state count as two separate states. Each can hold one electron.

Consider now the system ground state that is predicted by the Fermi-Dirac distribution. In the limit that the temperature becomes zero, single-particle states end up with either exactly one electron or exactly zero electrons. The states that end up with one electron are the ones with energies ${\vphantom' E}^{\rm p}$ below the chemical potential $\mu$. Similarly the states that end up empty are the ones with ${\vphantom' E}^{\rm p}$ above $\mu$.

To see why, note that for ${\vphantom' E}^{\rm p}-\mu < 0$, in the limit $T\to0$ the argument of the exponential in the Fermi-Dirac distribution becomes minus infinity. That makes the exponential zero, and $\iota^{\rm {f}}$ is then equal to one. Conversely, for ${\vphantom' E}^{\rm p}-\mu > 0$, in the limit $T\to0$ the argument of the exponential in the Fermi-Dirac distribution becomes positive infinity. That makes the exponential infinite, and $\iota^{\rm {f}}$ is then zero.
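Written out as a limit, with the two cases side by side:
\begin{displaymath}
\lim_{T\to0} \iota^{\rm f}
= \lim_{T\to0} \frac{1}{e^{({\vphantom' E}^{\rm p}- \mu)/{k_{\rm B}}T} + 1}
= \left\{
\begin{array}{ll}
\displaystyle \frac{1}{0 + 1} = 1 & \mbox{if } {\vphantom' E}^{\rm p} < \mu \\[8pt]
\displaystyle \frac{1}{\infty + 1} = 0 & \mbox{if } {\vphantom' E}^{\rm p} > \mu
\end{array}
\right.
\end{displaymath}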

The correct ground state, as pictured earlier in figure 6.11, has one electron per state below the Fermi energy ${\vphantom' E}^{\rm p}_{\rm {F}}$ and zero electrons per state above the Fermi energy. The Fermi-Dirac ground state can only agree with this if the chemical potential at absolute zero temperature is the same as the Fermi energy:

\begin{displaymath}
\fbox{$\displaystyle
\mu = {\vphantom' E}^{\rm p}_{\rm{F}} \quad \mbox{at}\quad T = 0
$}
\end{displaymath} (6.20)

Figure 6.15: A system of fermions at a nonzero temperature.
[Figure: wave number space to the left; energy spectrum, with axis ${\vphantom' E}^{\rm p}$, to the right.]

Next consider what happens if the absolute temperature is not zero but a bit larger than that. The story given above for zero temperature does not change significantly unless the value of ${\vphantom' E}^{\rm p}-\mu$ is comparable to ${k_{\rm B}}T$. Only in an energy range of order ${k_{\rm B}}T$ around the Fermi energy does the average number of particles in a state change from its value at absolute zero temperature. Compare the spectrum at absolute zero temperature as sketched to the right in figure 6.11 to the one at a nonzero temperature shown in figure 6.15. The sharp transition from one particle per state, red, below the Fermi energy to zero particles per state, grey, above it smooths out a bit. As the wave number space to the left in figure 6.15 illustrates, at nonzero temperature a typical system energy eigenfunction has a few electrons slightly beyond the Fermi surface. Similarly it has a few holes (states that have lost their electron) immediately below the Fermi surface.

Put in physical terms, some electrons just below the Fermi energy pick up some thermal energy, which gives them an energy just above the Fermi energy. The affected energy range, and also the typical energy that the electrons in this range pick up, is comparable to ${k_{\rm B}}T$.
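As a concrete illustration, not taken from the figure but following directly from the distribution (6.19): right at the chemical potential the exponential equals one, and a couple of ${k_{\rm B}}T$ away from it the occupation is already close to its limiting values,
\begin{displaymath}
\iota^{\rm f} = \frac{1}{e^{0}+1} = \frac12
\;\mbox{ at }\; {\vphantom' E}^{\rm p} = \mu,
\qquad
\frac{1}{e^{2}+1} \approx 0.12
\;\mbox{ at }\; \mu + 2{k_{\rm B}}T,
\qquad
\frac{1}{e^{-2}+1} \approx 0.88
\;\mbox{ at }\; \mu - 2{k_{\rm B}}T
\end{displaymath}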

You may at first hardly notice the effect in the wave number space shown in figure 6.15. And that figure greatly exaggerates the effect to ensure that it is visible at all. Recall the ballpark Fermi energy given earlier for copper. It was equal to the ${k_{\rm B}}T$ value at an equivalent temperature of 33,000 K. Since the melting point of copper is only 1,356 K, ${k_{\rm B}}T$ is still negligibly small compared to the Fermi energy when copper melts. To good approximation, the electrons always remain as they were in their ground state at 0 K.
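As a rough check on these numbers, taking room temperature as about 300 K and using the 33,000 K equivalent temperature quoted above,
\begin{displaymath}
\frac{{k_{\rm B}}T}{{\vphantom' E}^{\rm p}_{\rm F}}
\approx \frac{300}{33\,000} \approx 0.01 \;\mbox{ at room temperature},
\qquad
\approx \frac{1\,356}{33\,000} \approx 0.04 \;\mbox{ at the melting point}
\end{displaymath}
So even at the melting point, the affected energy range is only a few percent of the Fermi energy.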

One of the mysteries of physics before quantum mechanics was why the valence electrons in metals do not contribute to the heat capacity. At room temperature, the atoms in typical metals were known to have picked up an amount of thermal energy comparable to ${k_{\rm B}}T$ per atom. Classical physics predicted that the valence electrons, which could obviously move independently of the atoms, should pick up a similar amount of energy per electron. That should increase the heat capacity of metals. However, no such increase was observed.

The Fermi-Dirac distribution explains why: only the electrons within a distance comparable to ${k_{\rm B}}T$ of the Fermi energy pick up the additional ${k_{\rm B}}T$ of thermal energy. This is only a very small fraction of the total number of electrons, so the contribution to the heat capacity is usually negligible. While classically the electrons may seem to move freely, in quantum mechanics they are constrained by the exclusion principle. Electrons cannot move to higher energy states if there are already electrons in these states.
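The standard back-of-the-envelope version of this estimate, not worked out in this section but consistent with it, writing $I$ for the number of valence electrons (a symbol used here just for this estimate): a fraction of roughly ${k_{\rm B}}T/{\vphantom' E}^{\rm p}_{\rm F}$ of the electrons is close enough to the Fermi energy to be affected, and each affected electron picks up an energy of roughly ${k_{\rm B}}T$. That gives an additional thermal energy, and corresponding heat capacity, of order
\begin{displaymath}
E_{\rm thermal} \sim I \frac{{k_{\rm B}}T}{{\vphantom' E}^{\rm p}_{\rm F}} \, {k_{\rm B}}T
\qquad\Longrightarrow\qquad
\frac{{\rm d}E_{\rm thermal}}{{\rm d}T} \sim I {k_{\rm B}} \frac{{k_{\rm B}}T}{{\vphantom' E}^{\rm p}_{\rm F}}
\end{displaymath}
For copper at room temperature that is only about a percent of the classical prediction of order $I{k_{\rm B}}$.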

To discourage the absence of confusion, some or all of the following terms may or may not indicate the chemical potential $\mu$, depending on the physicist: Fermi level, Fermi brim, Fermi energy, and electrochemical potential. It is more or less common to reserve “Fermi energy” for absolute zero temperature, but not to do the same for “Fermi level” or “Fermi brim.” In any case, do not count on it. This book will occasionally use the term Fermi level for the chemical potential where it is common to do so. In particular, a Fermi-level electron has an energy equal to the chemical potential.

The term electrochemical potential needs some additional comment. The surfaces of solids are characterized by unavoidable layers of electric charge. These charge layers produce an electrostatic potential inside the solid that shifts all energy levels, including the chemical potential, by the corresponding electrostatic potential energy. Since the charge layers vary from surface to surface, so does the electrostatic potential, and with it the value of the chemical potential. It would therefore seem logical to define some intrinsic chemical potential, and add to it the contribution of the electrostatic potential to get the total, or electrochemical, potential.

For example, you might consider defining the intrinsic chemical potential $\mu_{\rm {i}}$ of a solid as the value of the chemical potential $\mu$ when the solid is electrically neutral and isolated. Now, when you bring dissimilar solids at a given temperature into electrical contact, double layers of charge build up at the contact surfaces between them. These layers change the electrostatic potentials inside the solids and with it their total electrochemical potential $\mu$.

In particular, the strengths of the double layers adjust so that in thermal equilibrium, the electrochemical potentials $\mu$ of all the solids (intrinsic plus additional electrostatic contribution due to the changed surface charge layers) are equal. They have to; solids in electrical contact become a single system of electrons. A single system should have a single chemical potential.

Unfortunately, the assumed intrinsic chemical potential in the above description is a somewhat dubious concept. Even if a solid is uncharged and isolated, its chemical potential is not a material property. It still depends unavoidably on the properties of its surfaces: their contamination, roughness, and angular orientation relative to the atomic crystal structure. And if you mentally take a solid that is attached to other solids and isolate it, what are you to make of the condition of the surfaces that were previously in contact with those other solids?

Because of such concerns, nowadays many physicists disdain the concept of an intrinsic chemical potential and simply refer to $\mu$ as the chemical potential. Note that this means that the actual value of the chemical potential depends on the detailed conditions that the solid is in. But then, so do the electron energy levels. The location of the chemical potential relative to the spectrum is well defined regardless of the electrostatic potential.

And the chemical potentials of solids in contact and in thermal equilibrium still line up.

The Fermi-Dirac distribution is also known as the “Fermi factor.” Note that in proper quantum terms, it gives the probability that a state is occupied by an electron.


Key Points
\begin{itemize}

\item
The Fermi-Dirac distribution gives the number of electrons, or other fermions, per single-particle state for a macroscopic system at a nonzero temperature.

\item
Typically, the effects of nonzero temperature remain restricted to a, relatively speaking, small number of electrons near the Fermi energy.

\item
These electrons are within a distance comparable to ${k_{\rm B}}T$ of the Fermi energy. They pick up a thermal energy that is also comparable to ${k_{\rm B}}T$.

\item
Because of the small number of electrons involved, the effect on the heat capacity can usually be ignored.

\item
When solids are in electrical contact and in thermal equilibrium, their (electro)chemical potentials / Fermi levels / Fermi brims / whatever line up.

\end{itemize}