### 11.10 Entropy

With the cleverest inventors and the greatest scientists relentlessly trying to fool nature and circumvent the second law, how come nature never once gets confused, not even by the most complicated, convoluted, unusual, ingenious schemes? Nature does not outwit them by out-thinking them, but by maintaining an accounting system that cannot be fooled. Unlike human accounting systems, this accounting system does not assign a monetary value to each physical system, but a measure of messiness called entropy. Then, in any transaction within or between systems, nature simply makes sure that this entropy is not being reduced; whatever entropy one system gives up can never exceed what the other system receives.

So what can this numerical grade of messiness called entropy be? Surely, it must be related somehow to the second law as stated by Clausius and Kelvin and Planck, and to the resulting Carnot engines that cannot be beat. Note that the Carnot engines relate heat added to temperature. In particular an infinitesimally small Carnot engine would take in an infinitesimal amount of heat $\delta Q_{\rm H}$ at a temperature $T_{\rm H}$ and give up an infinitesimal amount $\delta Q_{\rm L}$ at a temperature $T_{\rm L}$. This is done so that $\delta Q_{\rm H}/\delta Q_{\rm L} = T_{\rm H}/T_{\rm L}$, or separating the two ends of the device, $\delta Q_{\rm H}/T_{\rm H} = \delta Q_{\rm L}/T_{\rm L}$. The quantity $\delta Q/T$ is the same at both sides, except that one is going in and the other out. Might this, then, be the change in messiness? After all, for the ideal reversible machine no messiness can be created, otherwise in the reversed process, messiness would be reduced. Whatever increase in messiness one side receives, the other side must give up, and $\delta Q/T$ fits the bill for that.
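For the numerically inclined, here is a small Python sketch (not part of the original argument; the temperatures and heat amount are made-up values) illustrating how $\delta Q/T$ balances across a reversible Carnot device:

```python
# Hedged numeric sketch with assumed values: for a reversible Carnot
# engine, Q_H / T_H = Q_L / T_L, so the efficiency is 1 - T_L / T_H.
T_H, T_L = 600.0, 300.0   # hot and cold temperatures in K (assumed)
Q_H = 1000.0              # heat taken in at the hot side, in J (assumed)

Q_L = Q_H * T_L / T_H     # heat given off at the cold side
W = Q_H - Q_L             # work delivered by the engine

print(Q_H / T_H, Q_L / T_L)  # the same number: Q/T in equals Q/T out
print(W / Q_H)               # efficiency 0.5, which is 1 - T_L/T_H
```

Whatever $Q/T$ the hot reservoir gives up, the cold reservoir receives; that is the quantity nature keeps books on.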

If $\delta Q/T$ gives the infinitesimal change in messiness, excuse me, entropy, then it should be possible to find the entropy $S$ of a system by integration. In particular, choosing some arbitrary state of the system as reference, the entropy of a system in thermal equilibrium can be found as:

$$S = \int_{\rm ref}^{\rm desired} \frac{\delta Q}{T} \qquad \text{(along a reversible path)} \tag{11.18}$$

The entropy as defined above is a specific number for a system in thermal equilibrium, just like its pressure, temperature, particle density, and internal energy are specific numbers. You might think that you could get a different value for the entropy by following a different process path from the reference state to the desired state. But the second law prevents that. To see why, consider the pressure-volume diagram in figure 11.14. Two different reversible processes are shown leading from the reference state to a desired state. A bundle of reversible adiabatic process lines is also shown; those are graphical representations of processes in which there is no heat exchange between the system and its surroundings. The bundle of adiabatic lines chops the two process paths into small pieces, of almost constant temperature, that pairwise have the same value of $\delta Q/T$. For, if a piece like AB would have a lower value for $\delta Q/T$ than the corresponding piece CD, then a heat engine running the cycle CDBAC would lose less of the heat $\delta Q$ at the low temperature side than the Carnot ideal, hence have a higher efficiency than Carnot and that is not possible. Conversely, if AB would have a higher value for $\delta Q/T$ than CD, then a refrigeration device running the cycle ABDCA would remove more heat from the low side than Carnot, again not possible. So all the little segments pairwise have the same value for $\delta Q/T$, which means the complete integrals $\int \delta Q/T$ must also be the same. It follows that the entropy for a system in thermal equilibrium is uniquely defined.
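The path independence can also be checked numerically for a simple case. The sketch below (my own check, not the book's; it assumes one mole of monatomic ideal gas with $C_v = \frac{3}{2}R$) evaluates $\int \delta Q/T$ along two different reversible paths between the same two states:

```python
import math

R = 8.314          # gas constant, J/(mol K)
Cv = 1.5 * R       # monatomic ideal gas heat capacity at constant volume

T1, V1 = 300.0, 1.0e-3   # reference state (K, m^3), assumed values
T2, V2 = 600.0, 3.0e-3   # desired state

# Path A: isothermal expansion at T1 (dU = 0, so delta Q = p dV and the
# integral of delta Q / T is R ln(V2/V1)), then isochoric heating
# (delta Q = Cv dT, contributing Cv ln(T2/T1)).
dS_A = R * math.log(V2 / V1) + Cv * math.log(T2 / T1)

# Path B: isochoric heating first, then isothermal expansion at T2.
dS_B = Cv * math.log(T2 / T1) + R * math.log(V2 / V1)

print(dS_A, dS_B)  # the same entropy change: path independent
```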

So what happens if the reference and final states are still the same, but there is a slight glitch for a single segment AB, making the process over that one segment irreversible? In that case, the heat engine argument no longer applies, since it runs through the segment AB in reversed order, and irreversible processes cannot be reversed. The refrigeration cycle argument says that the amount of heat $\delta Q$ absorbed by the system will be less; more of the heat going out at the high temperature side CD will come from the work done, and less from the heat removed at the cold side. The final entropy is still the same, because it only depends on the final state, not on the path to get there. So during the slight glitch, the entropy of the system increased by more than $\delta Q/T$. In general:

$${\rm d}S \geq \frac{\delta Q}{T} \tag{11.19}$$

where $=$ applies if the change is reversible and $>$ if it is not.

Note that the above formula is only valid if the system has a definite temperature, as in this particular example. Typically this is simply not true in irreversible processes; for example, the interior of the system might be hotter than the outside. The real importance of the above formula is to confirm that the defined entropy is indeed a measure of messiness and not of order; reversible processes merely shuffle entropy around from one system to the next, but irreversible processes increase the net entropy content in the universe.

So what about the entropy of a system that is not in thermal equilibrium? Equation (11.18) only applies for systems in thermal equilibrium. In order for nature not to become confused in its entropy accounting system, surely entropy must still have a numerical value for nonequilibrium systems. If the problem is merely temperature or pressure variations, where the system is still in approximate thermal equilibrium locally, you could just integrate the entropy per unit volume over the volume. But if the system is not in thermal equilibrium even on macroscopically small scales, it gets much more difficult. For example, air crossing a typical shock wave (sonic boom) experiences a significant increase in pressure over an extremely short distance. Better bring out the quantum mechanics trick box. Or at least molecular dynamics.

Still, some important general observations can be made without running to a computer. An “isolated” system is a system that does not interact with its surroundings in any way. Remember the example where the air inside a room was collected and neatly put inside a glass? That was an example of an isolated system. Presumably, the doors of the room were hermetically sealed. The walls of the room are stationary, so they do not perform work on the air in the room. And the air comes rushing back out of the glass so quickly that there is really no time for any heat conduction through the walls. If there is no heat conduction with the outside, then there is no entropy exchange with the outside. So the entropy of the air can only increase due to irreversible effects. And that is exactly what happens: the air exploding out of the glass is highly irreversible, (no, it has no plans to go back in), and its entropy increases rapidly. Quite quickly however, the air spreads out again over the entire room and settles down. Beyond that point, the entropy remains constant.

An isolated system evolves to the state of maximum possible entropy and then stays there.
The state of maximum possible entropy is the thermodynamically stable state a system will assume if left alone.

A more general system is an “adiabatic” or “insulated” system. Work may be performed on such a system, but there is still no heat exchange with the surroundings. That means that the entropy of such a system can again only increase, due to irreversibility. A simple example is a thermos bottle with a cold drink inside. If you continue shaking this thermos bottle violently, the cold drink will heat up due to its viscosity, its internal friction, and it will not stay a cold drink for long. Its entropy will increase while you are shaking it.

The entropy of adiabatic systems can only increase.
But, of course, the entropy of an open system need not increase; it can pass entropy to its surroundings along with heat. That is the recipe of life, {N.25}.

You might wonder why this book on quantum mechanics included a concise, but still very lengthy, classical description of the second law. It is because the case for the second law is so much more convincing based on macroscopic evidence than on microscopic evidence. Macroscopically, the most complex systems can be accurately observed; microscopically, the quantum mechanics of only the most simplistic systems can be rigorously solved. And whether we can observe the solution is still another matter.

However, given the macroscopic fact that there really is an accounting measure of messiness called entropy, the question becomes what is its actual microscopic nature? Surely, it must have a relatively simple explanation in terms of the basic microscopic physics? For one, nature never seems to get confused about what it is, and for another, you really would expect something that is clearly so fundamental to nature to be relatively esthetic when expressed in terms of mathematics.

And that thought is all that is needed to guess the true microscopic nature of entropy. And guessing is good, because it gives a lot of insight why entropy is what it is. And to ensure that the final result is really correct, it can be cross checked against the macroscopic definition (11.18) and other known facts about entropy.

The first guess is about what physical microscopic quantity would be involved. Now microscopically, a simple system is described by energy eigenfunctions $\psi^{\rm S}_q$, and there is nothing messy about those. They are the systematic solutions of the Hamiltonian eigenvalue problem. But these eigenfunctions have probabilities $P_q$, being the square magnitudes of their coefficients, and they are a different story. A system of a given energy could in theory exist neatly as a single energy eigenfunction with that energy. But according to the fundamental assumption of quantum statistics, this simply does not happen. In thermal equilibrium, every single energy eigenfunction of the given energy achieves about the same probability. Instead of nature neatly leaving the system in the single eigenfunction it may have started out with, it gives every Johnny-come-lately state about the same probability, and it becomes a mess.

If the system is in a single eigenstate for sure, the probability $P_q$ of that one eigenstate is one, and all others are zero. But if the probabilities are equally spread out over a large number, call it $N$, of eigenfunctions, then each eigenfunction receives a probability $1/N$. So your simplest thought would be that maybe entropy is the average value of the probability. In particular, just like the average energy is $\sum_q P_q E_q$, the average probability would be $\sum_q P_q P_q$. It is always the sum of the values for which you want the average times their probability. Your second thought would be that since $\sum_q P_q P_q$ is one for the single eigenfunction case, and $1/N$ for the spread out case, maybe the entropy should be $-\sum_q P_q P_q$ in order that the single eigenfunction case has the lower value of messiness. But macroscopically it is known that you can keep increasing entropy indefinitely by adding more and more heat, and the given expression starts at minus one and never gets above zero.
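The two cases can be made concrete with a few lines of Python (a toy check of my own, with $N = 10$ states assumed):

```python
# Average probability sum_q P_q P_q for a pure state versus probabilities
# spread evenly over N states.
def avg_probability(P):
    return sum(p * p for p in P)

N = 10
pure = [1.0] + [0.0] * (N - 1)   # a single eigenfunction for sure
spread = [1.0 / N] * N           # equally spread over N eigenfunctions

print(avg_probability(pure))     # 1.0
print(avg_probability(spread))   # 1/N (about 0.1 here)
```

The value is pinned between $1/N$ and one, which is why this candidate cannot keep growing the way entropy must.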

So try a slightly more general possibility, that the entropy is the average of some function of the probability, as in $S = \sum_q P_q f(P_q)$. The question is then, what function? Well, macroscopically it is also known that entropy is additive, the values of the entropies of two systems simply add up. It simplifies nature’s task of maintaining a tight accounting system on messiness. For two systems with probabilities $P_q$ and $P_r$,

$$S = \sum_q P_q f(P_q) + \sum_r P_r f(P_r)$$

This can be rewritten as

$$S = \sum_q \sum_r P_q P_r f(P_q) + \sum_q \sum_r P_q P_r f(P_r)$$

since probabilities by themselves must sum to one. On the other hand, if you combine two systems, the probabilities multiply, just like the probability of throwing a 3 with your red die and a 4 with your black die is $\frac{1}{6} \times \frac{1}{6}$. So the combined entropy should also be equal to

$$S = \sum_q \sum_r P_q P_r f(P_q P_r)$$

Comparing this with the previous equation, you see that $f(P_q P_r)$ must equal $f(P_q) + f(P_r)$. The function that does that is the logarithmic function. More precisely, you want minus the logarithmic function, since the logarithm of a small probability is a large negative number, and you need a large positive messiness if the probabilities are spread out over a large number of states. Also, you will need to throw in a factor to ensure that the units of the microscopically defined entropy are the same as the ones in the macroscopic definition. The appropriate factor turns out to be the Boltzmann constant $k_{\rm B} = 1.380\,65 \times 10^{-23}$ J/K; note that this factor has absolutely no effect on the physical meaning of entropy; it is just a matter of agreeing on units.

The microscopic definition of entropy has been guessed:

$$S = -k_{\rm B} \sum_q P_q \ln P_q \tag{11.20}$$

That wasn’t too bad, was it?
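As a quick cross check of (11.20), the sketch below (made-up probability distributions of my own) verifies the additivity that drove the guess: combining two systems multiplies the probabilities, and the entropies simply add:

```python
import math

kB = 1.380649e-23  # Boltzmann constant, J/K

def entropy(P):
    # S = -kB sum_q P_q ln P_q, with 0 ln 0 taken as zero
    return -kB * sum(p * math.log(p) for p in P if p > 0.0)

P1 = [0.5, 0.3, 0.2]   # assumed probabilities for system 1
P2 = [0.7, 0.2, 0.1]   # assumed probabilities for system 2

# Combined system: each joint state q,r has probability P_q * P_r.
P12 = [p * q for p in P1 for q in P2]

print(entropy(P12))                # equals, up to rounding, ...
print(entropy(P1) + entropy(P2))   # ... the sum of the separate entropies
```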

At absolute zero temperature, the system is in the ground state. That means that the probability $P_q$ of the ground state is 1 and all other probabilities are zero. Then the entropy is zero, because $\ln 1 = 0$. The fact that the entropy is zero at absolute zero is known as the “third law of thermodynamics,” {A.34}.

At temperatures above absolute zero, many eigenfunctions will have nonzero probabilities. That makes the entropy positive, because logarithms of numbers less than one are negative. (It should be noted that $P_q \ln P_q$ becomes zero when $P_q$ becomes zero; the blow up of $\ln P_q$ is no match for the reduction in magnitude of $P_q$. So highly improbable states will not contribute significantly to the entropy despite their relatively large values of the logarithm.)

To put the definition of entropy on a less abstract basis, assume that you schematize the system of interest into unimportant eigenfunctions that you give zero probability, and a remaining $N$ important eigenfunctions that all have the same average probability $1/N$. Sure, it is crude, but it is just to get an idea. In this simple model, the entropy is $k_{\rm B} \ln N$, proportional to the logarithm of the number of quantum states that have an important probability. The more states, the higher the entropy. This is what you will find in popular expositions. And it would actually be correct for systems with zero indeterminacy in energy, if they existed.
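This $k_{\rm B} \ln N$ behavior is easy to confirm from (11.20); the Python fragment below (a toy model with $N = 1000$ important states assumed) does so:

```python
import math

kB = 1.380649e-23  # Boltzmann constant, J/K

def entropy(P):
    # S = -kB sum_q P_q ln P_q, skipping the zero-probability states
    return -kB * sum(p * math.log(p) for p in P if p > 0.0)

N = 1000
P = [1.0 / N] * N        # N important states; all other states get zero

print(entropy(P))        # equals ...
print(kB * math.log(N))  # ... kB ln N, the popular-exposition formula
```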

The next step is to check the expression. Derivations are given in {D.61}, but here are the results. For systems in thermal equilibrium, is the entropy the same as the one given by the classical integration (11.18)? Check. Does the entropy exist even for systems that are not in thermal equilibrium? Check, quantum mechanics still applies. For a system of given energy, is the entropy smallest when the system is in a single energy eigenfunction? Check, it is zero then. For a system of given energy, is the entropy the largest when all eigenfunctions of that energy have the same probability, as the fundamental assumption of quantum statistics suggests? Check. For a system with given expectation energy but uncertainty in energy, is the entropy highest when the probabilities are given by the canonical probability distribution? Check. For two systems in thermal contact, is the entropy greatest when their temperatures have become equal? Check.
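One of these checks is easy to reproduce numerically. The sketch below (my own toy example: three assumed energy levels and a perturbation chosen to preserve both normalization and expectation energy) shows the canonical distribution beating its competitors on entropy:

```python
import math

def entropy(P):
    # dimensionless entropy S/kB = -sum_q P_q ln P_q
    return -sum(p * math.log(p) for p in P if p > 0.0)

E = [0.0, 1.0, 2.0]   # three energy levels in units of kB T (assumed)
Z = sum(math.exp(-e) for e in E)    # partition function at beta = 1
P = [math.exp(-e) / Z for e in E]   # canonical probabilities

# The direction (1, -2, 1) changes neither sum_q P_q nor sum_q P_q E_q,
# so every perturbed distribution has the same expectation energy.
for eps in [0.0, 0.01, 0.05]:
    Q = [P[0] + eps, P[1] - 2 * eps, P[2] + eps]
    print(eps, entropy(Q))   # the entropy is largest at eps = 0
```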

Feynman [18, p. 8] gives an argument to show that the entropy of an isolated system always increases with time. Taking the time derivative of (11.20),

$$\frac{{\rm d}S}{{\rm d}t} = -k_{\rm B} \sum_q \left[\ln P_q + 1\right] \frac{{\rm d}P_q}{{\rm d}t} = -k_{\rm B} \sum_q \sum_r \left[\ln P_q + 1\right] R_{qr} \left[P_r - P_q\right]$$

the final equality being from time-dependent perturbation theory, with $R_{qr} = R_{rq} > 0$ the transition rate from state $q$ to state $r$. In the double summation, a typical term with indices $q$ and $r$ combines with the term having the reversed indices as

$$k_{\rm B} R_{qr} \left[\ln P_r - \ln P_q\right] \left[P_r - P_q\right]$$

and that is always greater than zero because the terms in the square brackets have the same sign: if $P_r$ is greater/less than $P_q$, then so is $\ln P_r$ greater/less than $\ln P_q$. However, given the dependence of time-dependent perturbation theory on linearization and, worse, the measurement wild card, chapter 7.6, you might consider this more a validation of time-dependent perturbation theory than of the expression for entropy. Then there is the problem of ensuring that a perturbed and measured system is adiabatic.
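Feynman's argument can also be watched in action numerically. The sketch below (my own toy model, not from the reference: five states with random symmetric transition rates) integrates the rate equations and confirms that the entropy climbs to its maximum, reached at the uniform distribution:

```python
import math
import random

random.seed(1)
N = 5

# Symmetric transition rates R_qr = R_rq > 0 between every pair of states.
R = [[0.0] * N for _ in range(N)]
for q in range(N):
    for r in range(q + 1, N):
        R[q][r] = R[r][q] = random.uniform(0.1, 1.0)

def S(P):
    # dimensionless entropy S/kB = -sum_q P_q ln P_q
    return -sum(p * math.log(p) for p in P if p > 0.0)

P = [0.9, 0.025, 0.025, 0.025, 0.025]   # start far from equilibrium
history = [S(P)]
dt = 0.01
for _ in range(2000):
    # dP_q/dt = sum_r R_qr (P_r - P_q), integrated by forward Euler
    dP = [sum(R[q][r] * (P[r] - P[q]) for r in range(N)) for q in range(N)]
    P = [p + dt * d for p, d in zip(P, dP)]
    history.append(S(P))

print(all(b >= a - 1e-12 for a, b in zip(history, history[1:])))
print(P)   # close to the uniform distribution, 0.2 for every state
```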

In any case, it may be noted that the checks on the expression for entropy, as given above, cut both ways. If you accept the expression for entropy, the canonical probability distribution follows. They are consistent, and in the end, it is just a matter of which of the two postulates you are more willing to accept as true.