11.12 The New Variables

The new kid on the block is the entropy $S$. For an adiabatic system the entropy can only increase; it never goes down. That is highly useful information if you want to know what thermodynamically stable final state an adiabatic system will settle down into. No need to try to figure out the complicated time evolution leading to the final state. Just find the state that has the highest possible entropy $S$; that will be the stable final state.

But a lot of systems of interest are not well described as being adiabatic. A typical alternative case might be a system in a rigid box in an environment that is big enough, and conducts heat well enough, that it can at all times be taken to be at the same temperature $T_{\rm {surr}}$. Also assume that initially the system itself is in some state 1 at the ambient temperature $T_{\rm {surr}}$, and that it ends up in a state 2 again at that temperature. In the evolution from 1 to 2, however, the system temperature could be different from the surroundings, or even undefined; no thermal equilibrium is assumed. The first law, energy conservation, says that the heat $Q_{12}$ added to the system from the surroundings equals the change in internal energy $E_2-E_1$ of the system. Also, the entropy change in the isothermal environment will be $-Q_{12}/T_{\rm surr}$, so the system entropy change $S_2-S_1$ must be at least $Q_{12}/T_{\rm surr}$ in order for the net entropy in the universe not to decrease. From that it can be seen, by simply writing it out, that the “Helmholtz free energy”

\begin{displaymath}
\fbox{$\displaystyle
F = E - TS
$} %
\end{displaymath} (11.21)

is smaller for the final state 2 than for the starting state 1. In particular, if the system ends up in a stable final state that can no longer change, it will be the state of smallest possible Helmholtz free energy. So, if you want to know what will be the final fate of a system in a rigid, heat conducting box in an isothermal environment, just find the state of lowest possible Helmholtz free energy. That will be the one.
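To write it out: combine $S_2-S_1 \ge Q_{12}/T_{\rm surr}$ with $Q_{12} = E_2-E_1$, and note that both end states are at the surroundings temperature, $T_1 = T_2 = T_{\rm surr}$:

\begin{displaymath}
E_2 - E_1 = Q_{12} \le T_{\rm surr}(S_2 - S_1)
\quad\Longrightarrow\quad
F_2 = E_2 - T_2 S_2 \le E_1 - T_1 S_1 = F_1
\end{displaymath}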

A slightly different version occurs even more often in real applications. In these, the system is not in a rigid box; instead its surface is at all times exposed to ambient atmospheric pressure. Energy conservation now says that the heat added $Q_{12}$ equals the change in internal energy $E_2-E_1$ plus the work done expanding against the atmospheric pressure, which is $P_{\rm {surr}}(V_2-V_1)$. Assuming that both the initial state 1 and the final state 2 are at ambient atmospheric pressure, as well as at ambient temperature as before, it is seen that the quantity that decreases is the “Gibbs free energy”

\begin{displaymath}
\fbox{$\displaystyle
G = H - TS
$} %
\end{displaymath} (11.22)

in terms of the enthalpy $H$ defined as $H = E+PV$. As an example, phase equilibria are at the same pressure and temperature. In order for them to be stable, the phases need to have the same specific Gibbs free energy. Otherwise all particles would end up in whatever phase has the lower Gibbs free energy. Similarly, chemical equilibria are often posed at an ambient pressure and temperature.
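Writing this one out too: since both end states are at the pressure $P_{\rm surr}$, the heat added is $Q_{12} = (E_2-E_1) + P_{\rm surr}(V_2-V_1) = H_2-H_1$. Combined with $S_2-S_1 \ge Q_{12}/T_{\rm surr}$ and $T_1 = T_2 = T_{\rm surr}$, that gives

\begin{displaymath}
H_2 - H_1 = Q_{12} \le T_{\rm surr}(S_2 - S_1)
\quad\Longrightarrow\quad
G_2 = H_2 - T_2 S_2 \le H_1 - T_1 S_1 = G_1
\end{displaymath}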

There are a number of differential expressions that are very useful in doing thermodynamics. The primary one is obtained by combining the differential first law (11.11) with the differential second law (11.19) for reversible processes:

\begin{displaymath}
\fbox{$\displaystyle
{\rm d}E = T {\,\rm d}S - P {\,\rm d}V
$} %
\end{displaymath} (11.23)

This no longer involves the heat transferred from the surroundings, just state variables of the system itself. The equivalent one using the enthalpy $H$ instead of the internal energy $E$ is
\begin{displaymath}
\fbox{$\displaystyle
{\rm d}H = T {\,\rm d}S + V {\,\rm d}P
$} %
\end{displaymath} (11.24)

The differentials of the Helmholtz and Gibbs free energies are, after cleaning up with the two expressions immediately above:

\begin{displaymath}
\fbox{$\displaystyle
{\rm d}F = - S {\,\rm d}T - P {\,\rm d}V
$} %
\end{displaymath} (11.25)

and
\begin{displaymath}
\fbox{$\displaystyle
{\rm d}G = - S {\,\rm d}T + V {\,\rm d}P
$} %
\end{displaymath} (11.26)

Expression (11.25) shows that the work obtainable in an isothermal reversible process is given by the decrease in Helmholtz free energy. That is why Helmholtz called it “free energy” in the first place. The Gibbs free energy is applicable to steady flow devices such as compressors and turbines; the first law for these devices must be corrected for the “flow work” done by the pressure forces on the substance entering and leaving the device. The effect is to turn $P{\,\rm d}V$ into $-V{\,\rm d}P$ as the differential for the actual work obtainable from the device. (This assumes that the kinetic and/or potential energy that the substance picks up while going through the device is not a factor.)
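A quick way to see where the sign flip comes from: per unit mass flowing through the device, the first law reads $q - w = h_2 - h_1$ when kinetic and potential energy changes are ignored. For a reversible device $q = \int T{\,\rm d}s$, and (11.24) on a unit mass basis gives ${\rm d}h = T{\,\rm d}s + v{\,\rm d}P$, so

\begin{displaymath}
w = \int \left(T{\,\rm d}s - {\rm d}h\right) = - \int v {\,\rm d}P
\end{displaymath}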

Maxwell noted that, according to the total differential of calculus, the coefficients of the differentials in the right hand sides of (11.23) through (11.26) must be the partial derivatives of the quantity in the left hand side:

\begin{displaymath}
\left(\frac{\partial E}{\partial S}\right)_V = \phantom{-}T
\qquad
\left(\frac{\partial E}{\partial V}\right)_S = -P
\qquad
\left(\frac{\partial T}{\partial V}\right)_S
= - \left(\frac{\partial P}{\partial S}\right)_V
\end{displaymath} (11.27)

\begin{displaymath}
\left(\frac{\partial H}{\partial S}\right)_P = \phantom{-}T
\qquad
\left(\frac{\partial H}{\partial P}\right)_S = \phantom{-}V
\qquad
\left(\frac{\partial T}{\partial P}\right)_S
= \phantom{-}\left(\frac{\partial V}{\partial S}\right)_P
\end{displaymath} (11.28)

\begin{displaymath}
\left(\frac{\partial F}{\partial T}\right)_V = -S
\qquad
\left(\frac{\partial F}{\partial V}\right)_T = -P
\qquad
\left(\frac{\partial S}{\partial V}\right)_T
= \phantom{-}\left(\frac{\partial P}{\partial T}\right)_V
\end{displaymath} (11.29)

\begin{displaymath}
\left(\frac{\partial G}{\partial T}\right)_P = -S
\qquad
\left(\frac{\partial G}{\partial P}\right)_T = \phantom{-}V
\qquad
\left(\frac{\partial S}{\partial P}\right)_T
= - \left(\frac{\partial V}{\partial T}\right)_P
\end{displaymath} (11.30)

The final equation in each line can be verified by substituting in the previous two and noting that the order of differentiation does not make a difference. Those are called the “Maxwell relations.” They have a lot of practical uses. For example, either of the final equations in the last two lines allows the entropy to be found if the relationship between the normal variables $P$, $V$, and $T$ is known, assuming that at least one data point at every temperature is already available. Even more important from an applied point of view, the Maxwell relations allow whatever data you find about a substance in the literature to be stretched thin. Approximate the derivatives above with difference quotients, and you can compute a host of information not initially in your table or graph.
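As a minimal numerical sketch of that idea, the snippet below checks the final Maxwell relation in (11.30), $(\partial s/\partial P)_T = -(\partial v/\partial T)_P$, with difference quotients. For lack of a real property table here, it generates its “data” from the ideal gas expressions $v = RT/P$ and $s = s_{\rm ref} + C_p\ln(T/T_{\rm ref}) - R\ln(P/P_{\rm ref})$ with round numbers for air; with tabulated data you would difference the table entries the same way.

\begin{verbatim}
# Check (ds/dP)_T = -(dv/dT)_P by difference quotients, using
# ideal gas "table data" for air (R and cp in kJ/kg K, P in kPa).
import math

R, cp = 0.287, 1.005                 # specific gas constant, specific heat
Tref, Pref, sref = 300.0, 100.0, 0.0 # reference state for the entropy

def v(T, P):   # specific volume, m^3/kg (kJ/kPa = m^3)
    return R * T / P

def s(T, P):   # specific entropy, kJ/kg K
    return sref + cp * math.log(T / Tref) - R * math.log(P / Pref)

T0, P0, dT, dP = 350.0, 200.0, 1.0, 1.0
dsdP_T = (s(T0, P0 + dP) - s(T0, P0 - dP)) / (2 * dP)
dvdT_P = (v(T0 + dT, P0) - v(T0 - dT, P0)) / (2 * dT)

print(dsdP_T)    # about -0.001435 kJ/kg K per kPa
print(-dvdT_P)   # same number, as the Maxwell relation says
\end{verbatim}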

There are two even more remarkable relations along these lines. They follow from dividing (11.23) and (11.24) by $T$ and rearranging so that $S$ becomes the quantity differentiated. That produces

\begin{displaymath}
\left(\frac{\partial S}{\partial E}\right)_V = \frac{1}{T}
\qquad
\left(\frac{\partial S}{\partial V}\right)_E = \frac{P}{T}
\qquad
\left(\frac{\partial E}{\partial V}\right)_T
= T^2 \left(\frac{\partial P/T}{\partial T}\right)_V
\end{displaymath} (11.31)

\begin{displaymath}
\left(\frac{\partial S}{\partial H}\right)_P = \frac{1}{T}
\qquad
\left(\frac{\partial S}{\partial P}\right)_H = -\frac{V}{T}
\qquad
\left(\frac{\partial H}{\partial P}\right)_T
= - T^2 \left(\frac{\partial V/T}{\partial T}\right)_P
\end{displaymath} (11.32)

What is so remarkable is the final equation in each case: it does not involve entropy in any way, just the normal variables $P$, $V$, $T$, $H$, and $E$. Merely because entropy exists, there must be relationships between these variables that seemingly have absolutely nothing to do with the second law.

As an example, consider an ideal gas, more precisely, any substance that satisfies the ideal gas law

\begin{displaymath}
\fbox{$\displaystyle
Pv=RT \quad\mbox{with}\quad
R = \frac{k_{\rm B}}{m} = \frac{R_{\rm u}}{M}
\qquad
R_{\rm u} = 8.314{,}472\frac{\mbox{kJ}}{\mbox{kmol K}}
$} %
\end{displaymath} (11.33)

The constant $R$ is called the specific gas constant; it can be computed from the ratio of the Boltzmann constant $k_{\rm B}$ and the mass of a single molecule $m$. Alternatively, it can be computed from the “universal gas constant” $R_{\rm u} = I_{\rm A}k_{\rm B}$ and the molar mass $M = I_{\rm A}m$, with $I_{\rm A}$ Avogadro's number. For an ideal gas like that, the equations above show that the internal energy and enthalpy are functions of temperature only. And then so are the specific heats $C_v$ and $C_p$, because those are their temperature derivatives:
\begin{displaymath}
\fbox{$\displaystyle
\mbox{For ideal gases:}\quad
e,h,C_v,C_p=e,h,C_v,C_p(T)
\qquad
C_p = C_v + R
$} %
\end{displaymath} (11.34)

(The final relation is because $C_p = {\rm d}h/{\rm d}T = {\rm d}(e+Pv)/{\rm d}T$ with ${\rm d}e/{\rm d}T = C_v$ and $Pv = RT$.) Ideal gas tables can therefore be tabulated by temperature only; there is no need to include a second independent variable. You might think that entropy should be tabulated against both varying temperature and varying pressure, because it does depend on both pressure and temperature. However, the Maxwell equation (11.30) may be used to find the entropy at any pressure as long as it is listed for just one pressure, say for one bar.
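To spell that out: the final equation in (11.31), on a unit mass basis, shows why $e$ cannot depend on volume, since $P/T = R/v$ does not change with temperature at fixed volume, making $(\partial e/\partial v)_T$ zero. And the final equation in (11.30) gives $(\partial s/\partial P)_T = -(\partial v/\partial T)_P = -R/P$, so integrating at constant temperature from the one listed pressure $P_{\rm ref}$,

\begin{displaymath}
s(T,P) = s(T,P_{\rm ref}) - R \ln\frac{P}{P_{\rm ref}}
\end{displaymath}

and a single column of values $s(T,P_{\rm ref})$ suffices.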

There is a sleeper among the Maxwell equations: the very first one, in (11.27). Turned on its head, it says that

\begin{displaymath}
\fbox{$\displaystyle
\frac{1}{T} =
\left(
\frac{\partial S}{\partial E}
\right)_{V{\rm\ and\ other\ external\ parameters\ fixed}}
$} %
\end{displaymath} (11.35)

This can be used as a definition of temperature. Note that in taking the derivative, the volume of the box, the number of particles, and other external parameters, like maybe an external magnetic field, must be held constant. To understand qualitatively why the above derivative defines a temperature, consider two systems $A$ and $B$ for which $A$ has the larger temperature according to the definition above. If these two systems are brought into thermal contact, then net messiness increases when energy flows from the high temperature system $A$ to the low temperature system $B$, because system $B$, with the higher value of the derivative, increases its entropy by more than $A$ decreases its own.
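In a formula: if a small amount of heat ${\rm d}Q$ flows from $A$ to $B$, the total entropy change is

\begin{displaymath}
{\rm d}S_{\rm total} =
\left(\frac{\partial S_B}{\partial E_B}\right)_V {\rm d}Q
- \left(\frac{\partial S_A}{\partial E_A}\right)_V {\rm d}Q
= \left(\frac{1}{T_B} - \frac{1}{T_A}\right) {\rm d}Q
\end{displaymath}

which is positive exactly when $T_A$ exceeds $T_B$, for normal, positive, temperatures.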

Of course, this new definition of temperature is completely consistent with the ideal gas one; it was derived from it. However, the new definition also works fine for negative temperatures. Assume a system $A$ has a negative temperature according to the definition above. Then its messiness (entropy) increases if it gives up heat. That is in stark contrast to normal substances at positive temperatures, which increase in messiness if they take in heat. So assume that system $A$ is brought into thermal contact with a normal system $B$ at a positive temperature. Then $A$ will give off heat to $B$, and both systems increase their messiness, so everyone is happy. It follows that $A$ will give off heat no matter how hot the normal system it is brought into contact with may be. While the temperature of $A$ may be negative, it is hotter than any substance with a normal positive temperature!

And now the big question: what is that “chemical potential” you hear so much about? Nothing new, really. For a pure substance with a single constituent, like this chapter is supposed to discuss, the chemical potential is just the specific Gibbs free energy on a molar basis, $\bar\mu = \bar{g}$. More generally, if there is more than one constituent the chemical potential $\bar\mu_c$ of each constituent $c$ is best defined as

\begin{displaymath}
\fbox{$\displaystyle
\bar\mu_c \equiv
\left(
\frac{\partial G}{\partial \bar\imath_c}
\right)_{P,T}
$} %
\end{displaymath} (11.36)

(If there is only one constituent, then $G = \bar\imath\bar{g}$ and the derivative does indeed produce $\bar{g}$. Note that an intensive quantity like $\bar{g}$, when considered to be a function of $P$, $T$, and $\bar\imath$, only depends on the two intensive variables $P$ and $T$, not on the amount of particles $\bar\imath$ present.) If there is more than one constituent, and assuming that their Gibbs free energies simply add up, as in

\begin{displaymath}
G = \bar\imath_1 \bar g_1 + \bar\imath_2 \bar g_2 + \ldots
= \sum_c \bar\imath_c \bar g_c,
\end{displaymath}

then the chemical potential $\bar\mu_c$ of each constituent is simply the molar specific Gibbs free energy $\bar{g}_c$ of that constituent.

The partial derivatives described by the chemical potentials are important for figuring out the stable equilibrium state a system will achieve in an isothermal, isobaric environment, i.e. in an environment that is at constant temperature and pressure. As noted earlier in this section, the Gibbs free energy must be as small as it can be in equilibrium at a given temperature and pressure. Now according to calculus, the full differential for a change in Gibbs free energy is

\begin{displaymath}
{\rm d}G(P,T,\bar\imath_1,\bar\imath_2,\ldots) =
\frac{\partial G}{\partial T} {\,\rm d}T
+ \frac{\partial G}{\partial P} {\,\rm d}P
+ \frac{\partial G}{\partial \bar\imath_1} {\,\rm d}\bar\imath_1
+ \frac{\partial G}{\partial \bar\imath_2} {\,\rm d}\bar\imath_2
+ \ldots
\end{displaymath}

The first two partial derivatives, which keep the numbers of particles fixed, were identified in the discussion of the Maxwell equations as $-S$ and $V$; also, the partial derivatives with respect to the numbers of particles of the constituents have been defined as the chemical potentials $\bar\mu_c$. Therefore, more compactly,
\begin{displaymath}
\fbox{$\displaystyle
{\rm d}G =
- S {\,\rm d}T + V {\,\rm d}P + \sum_c \bar\mu_c {\,\rm d}\bar\imath_c
$} %
\end{displaymath} (11.37)

This generalizes (11.26) to the case that the amounts of the constituents change. At equilibrium at given temperature and pressure, the Gibbs free energy must be minimal. It means that ${\rm d}G$ must be zero whenever ${\rm d}T = {\rm d}P = 0$, regardless of any infinitesimal changes in the amounts of the constituents. That gives a condition on the fractions of the constituents present.

Note that there are typically constraints on the changes ${\rm d}\bar\imath_c$ in the amounts of the constituents. For example, in a liquid-vapor “phase equilibrium,” any additional amount of particles ${\rm d}\bar\imath_{\rm {f}}$ that condenses to liquid must equal the amount $-{\rm d}\bar\imath_{\rm {g}}$ of particles that disappears from the vapor phase. (The subscripts follow the unfortunate convention liquid=fluid=f and vapor=gas=g. Don’t ask.) Putting this relation in (11.37), it can be seen that the liquid and vapor phase must have the same chemical potential, $\bar\mu_{\rm f} = \bar\mu_{\rm g}$. Otherwise the Gibbs free energy would get smaller when more particles enter whatever is the phase of lowest chemical potential, and the system would collapse completely into that phase alone.
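Explicitly, at constant temperature and pressure (11.37) leaves only the chemical potential terms, and with ${\rm d}\bar\imath_{\rm g} = -{\rm d}\bar\imath_{\rm f}$,

\begin{displaymath}
{\rm d}G = \bar\mu_{\rm f} {\,\rm d}\bar\imath_{\rm f}
+ \bar\mu_{\rm g} {\,\rm d}\bar\imath_{\rm g}
= \left(\bar\mu_{\rm f} - \bar\mu_{\rm g}\right) {\rm d}\bar\imath_{\rm f}
\end{displaymath}

which can only be zero for arbitrary ${\rm d}\bar\imath_{\rm f}$ if $\bar\mu_{\rm f} = \bar\mu_{\rm g}$.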

The equality of chemical potentials suffices to derive the famous Clausius-Clapeyron equation relating pressure changes under two-phase, or “saturated,” conditions to the corresponding temperature changes. For the changes in the chemical potentials must be equal too, ${\rm d}\bar\mu_{\rm f} = {\rm d}\bar\mu_{\rm g}$, and substituting in the differential (11.26) for the Gibbs free energy, taken on a molar basis since $\bar\mu = \bar{g}$,

\begin{displaymath}
- \bar s_{\rm {f}} {\rm d}T + \bar v_{\rm {f}} {\rm d}P
=
- \bar s_{\rm {g}} {\rm d}T + \bar v_{\rm {g}} {\rm d}P
\end{displaymath}

and rearranging gives the Clausius-Clapeyron equation:

\begin{displaymath}
\frac{{\rm d}P}{{\rm d}T} = \frac{s_{\rm {g}} - s_{\rm {f}}}{v_{\rm {g}} - v_{\rm {f}}}
\end{displaymath}

Note that since the right-hand side is a ratio, it does not make a difference whether you take the entropies and volumes on a molar basis or on a mass basis. The mass basis is shown since that is how you will typically find the entropy and volume tabulated. Typical engineering thermodynamics textbooks will also tabulate $s_{\rm fg} = s_{\rm g}-s_{\rm f}$ and $v_{\rm fg} = v_{\rm g}-v_{\rm f}$, making the formula above very convenient.

In case your tables do not have the entropies of the liquid and vapor phases, they often still have the “latent heat of vaporization,” also known as the “enthalpy of vaporization” or similar, in engineering thermodynamics books typically indicated by $h_{\rm fg}$. That is the difference between the enthalpies of the saturated vapor and liquid phases, $h_{\rm fg} = h_{\rm g}-h_{\rm f}$. If saturated liquid is turned into saturated vapor by adding heat under conditions of constant pressure and temperature, (11.24) shows that the change in enthalpy $h_{\rm g}-h_{\rm f}$ equals $T(s_{\rm g}-s_{\rm f})$. So the Clausius-Clapeyron equation can be rewritten as

\begin{displaymath}
\fbox{$\displaystyle
\frac{{\rm d}P}{{\rm d}T} = \frac{h_{\rm{fg}}}{T(v_{\rm{g}} - v_{\rm{f}})}
$} %
\end{displaymath} (11.38)

Because $T{\,\rm d}{s}$ is the heat added, the physical meaning of the latent heat of vaporization is the heat needed to turn saturated liquid into saturated vapor while keeping the temperature and pressure constant.
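As a quick sanity check of (11.38), the snippet below estimates the slope of the saturation line for water boiling at one atmosphere, using round steam-table numbers at 100 $^\circ$C ($h_{\rm fg}\approx 2257$ kJ/kg, $v_{\rm g}\approx 1.673$ m$^3$/kg, $v_{\rm f}\approx 0.001$ m$^3$/kg); the exact values depend on which table you use.

\begin{verbatim}
# Clausius-Clapeyron slope dP/dT for water boiling at 1 atm,
# using typical steam-table values at 100 degrees C.
T = 373.15        # saturation temperature, K
h_fg = 2257e3     # latent heat of vaporization, J/kg
v_f = 0.001043    # specific volume of saturated liquid, m^3/kg
v_g = 1.673       # specific volume of saturated vapor, m^3/kg

dPdT = h_fg / (T * (v_g - v_f))   # equation (11.38), in Pa/K
print(dPdT)       # about 3.6e3 Pa/K: raising the pressure by
                  # 3.6 kPa raises the boiling point about 1 K
\end{verbatim}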

For chemical reactions, like maybe

\begin{displaymath}
2H_2 + O_2 \Longleftrightarrow 2 H_2O,
\end{displaymath}

the changes in the amounts of the constituents are related as

\begin{displaymath}
{\rm d}\bar\imath_{{\rm H}_2} = - 2 {\,\rm d}\bar r
\qquad
{\rm d}\bar\imath_{{\rm O}_2} = - {\,\rm d}\bar r
\qquad
{\rm d}\bar\imath_{{\rm H}_2{\rm O}} = 2 {\,\rm d}\bar r
\end{displaymath}

where ${\rm d}\bar{r}$ is the additional number of times the forward reaction takes place from the starting state. The constants $-2$, $-1$, and $2$ are called the “stoichiometric coefficients.” They can be used when applying the condition that at equilibrium, the change in Gibbs free energy due to an infinitesimal amount of further reactions ${\rm d}\bar{r}$ must be zero.
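For the example reaction, written out at constant temperature and pressure, that condition becomes

\begin{displaymath}
{\rm d}G = \sum_c \bar\mu_c {\,\rm d}\bar\imath_c
= \left(- 2 \bar\mu_{{\rm H}_2} - \bar\mu_{{\rm O}_2}
+ 2 \bar\mu_{{\rm H}_2{\rm O}}\right) {\rm d}\bar r = 0
\quad\Longrightarrow\quad
2 \bar\mu_{{\rm H}_2} + \bar\mu_{{\rm O}_2} = 2 \bar\mu_{{\rm H}_2{\rm O}}
\end{displaymath}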

However, chemical reactions are often posed in a context of constant volume rather than constant pressure, for one because it simplifies the reaction kinetics. For constant volume, the Helmholtz free energy must be used instead of the Gibbs one. Does that mean that a second set of chemical potentials is needed to deal with those problems? Fortunately, the answer is no; the same chemical potentials will do for Helmholtz problems. To see why, note that by definition $F = G-PV$, so ${\rm d}F = {\rm d}G-P{\,\rm d}V-V{\,\rm d}P$, and substituting for ${\rm d}G$ from (11.37), that gives

\begin{displaymath}
\fbox{$\displaystyle
{\rm d}F =
- S {\,\rm d}T - P {\,\rm d}V + \sum_c \bar\mu_c {\,\rm d}\bar\imath_c
$} %
\end{displaymath} (11.39)

Under isothermal and constant volume conditions, the first two terms in the right hand side will be zero and $F$ will be minimal when the differentials with respect to the amounts of particles add up to zero.

Does this mean that the chemical potentials are also specific Helmholtz free energies, just like they are specific Gibbs free energies? Of course the answer is no, and the reason is that the partial derivatives of $F$ represented by the chemical potentials keep the extensive volume $V$, instead of the intensive molar specific volume $\bar{v}$, constant. A single-constituent molar specific Helmholtz energy $\bar{f}$ can be considered to be a function $\bar{f}(T,\bar{v})$ of temperature and molar specific volume, two intensive variables, and then $F = \bar\imath\bar{f}(T,\bar{v})$, but $\Big(\partial\bar\imath\bar{f}(T,V/\bar\imath)/\partial\bar\imath\Big)_{T,V}$ does not simply produce $\bar{f}$, even though $\Big(\partial\bar\imath\bar{g}(T,P)/\partial\bar\imath\Big)_{T,P}$ does produce $\bar{g}$.
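Indeed, working out that derivative with the product and chain rules, and using $(\partial\bar f/\partial\bar v)_T = -P$ from the molar version of (11.25), shows what it does produce:

\begin{displaymath}
\left(\frac{\partial \bar\imath \bar f(T,V/\bar\imath)}{\partial\bar\imath}\right)_{T,V}
= \bar f + \bar\imath \left(\frac{\partial \bar f}{\partial \bar v}\right)_T
\left(-\frac{V}{\bar\imath^2}\right)
= \bar f + P \bar v = \bar g
\end{displaymath}

which is the chemical potential once again, as it should be.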