
A.22 Forces by particle exchange

As noted in chapter 7.5.2, the fundamental forces of nature arise from the exchange of particles. This addendum will illustrate the general idea. It will first derive the hypothetical Koulomb force due to the exchange of equally hypothetical particles called fotons.

The Koulomb potential provides a fairly simple model of a quantum field. It also provides a simple context to introduce some key concepts in quantum field theories, such as Green’s functions, variational calculus, Lagrangians, the limitation of the speed of light, description in terms of momentum modes, Fock space kets, annihilation and creation operators, antiparticles, special relativity, the imperfections of physicists, and Lorentz invariance. The Koulomb potential can also readily be modified to explain nuclear forces. However, that will have to wait until a later addendum, {A.42}.

In the current addendum, the Koulomb potential provides the starting point for a discussion of the electromagnetic field. The classical Maxwell equations for the electromagnetic field will be derived in a slightly unconventional way. Who needs to know classical electromagnetics when all it takes is quantum mechanics, relativity, and a few plausible guesses to derive electromagnetics from scratch?

Quantizing the electromagnetic field is not that straightforward; the field has unexpected features that do not occur for the Koulomb field. This book follows the derivation as formulated by Fermi in 1932. This derivation is the basis for more advanced modern quantum field approaches. These advanced theories will not be covered, however.

Essentially, the Fermi derivation splits off the Coulomb potential from the electromagnetic field. What is left is then readily described by a simple quantum field theory much like that for the Koulomb potential. This is sufficient to handle important applications such as the emission or absorption of radiation by atoms and atomic nuclei. That, however, will again be done in subsequent addenda.

A word to the wise. While this addendum is on the calculus level like virtually everything else in this book, there is quite a lot of mathematics. Some mathematical maturity may be needed not to get lost. Note that this addendum is not needed to understand the discussion of the emission and absorption of radiation in the subsequent addenda.


A.22.1 Classical selectostatics

The Koulomb force holds the sarged spotons and selectons together in satoms. The force is due to the exchange of massless particles called fotons between the sarged particles. (It will be assumed that the spoton is an elementary particle, though really it consists of three skarks.)

This subsection will derive the selectostatic Koulomb force by representing the fotons by a classical field, not a quantum field. The next subsection will explain classical selectodynamics, and how it obeys the speed of light. Subsection A.22.3 will eventually fully quantize the selectic field. It will show how quantum effects modify some of the physics expected from the classical analysis.

Physicists have some trouble measuring the precise properties of the selectic field. However, a few basic quantum ideas and some reasonable guesses readily substitute for the lack of empirical data. And guessing is good. If you can guess a self-consistent Koulomb field, you have a lot of insight into its nature.

Consider first the wave function for the exchanged foton in isolation. A foton is a boson without spin. That means that its wave function is a simple function, not some vector. But since the foton is massless, the Schrödinger equation does not apply to it. The appropriate equation follows from the relativistic expression for the energy of a massless particle as given by Einstein, chapter 1.1.2 (1.2):

\begin{displaymath}
E^2= {\skew0\vec p}^{\,2} c^2
\end{displaymath}

Here $E$ is the foton energy, ${\skew0\vec p}$ its linear momentum, and $c$ the speed of light. The squares are used because momentum is really a vector, not a number like energy.

Quantum mechanics replaces the momentum vector by the operator

\begin{displaymath}
{\skew 4\widehat{\skew{-.5}\vec p}}= \frac{\hbar}{{\rm i}} \nabla
\qquad
\nabla \equiv
{\hat\imath}\frac{\partial}{\partial x}
+ {\hat\jmath}\frac{\partial}{\partial y}
+ {\hat k}\frac{\partial}{\partial z}
\end{displaymath}

Note the vector operator $\nabla$, called nabla or del. This operator is treated much like an ordinary vector in various computations. Its properties are covered in Calculus III in the U.S. system. (Brief summaries of properties of relevance here can be found in the notations section.)

The Hamiltonian eigenvalue problem for a foton wave function $\varphi_{\rm {f}}$ then takes the form

\begin{displaymath}
{\skew 4\widehat{\skew{-.5}\vec p}}\strut^{\,2} c^2 \varphi_{\rm {f}} = E^2 \varphi_{\rm {f}}
\end{displaymath}

A solution $\varphi_{\rm {f}}$ to this equation is an energy eigenstate. The corresponding value of $E$ is the energy of the state. (To be picky, the above is an eigenvalue problem for the squared Hamiltonian. But eigenfunctions of an operator are also eigenfunctions of the squared operator. The reverse is not always true, but that is not a concern here.)

Using the momentum operator as given above and some rearranging, the eigenvalue problem becomes

\begin{displaymath}
- \nabla^2 \varphi_{\rm {f}} = \frac{E^2}{\hbar^2c^2} \varphi_{\rm {f}}
\qquad
\nabla^2 \equiv
\frac{\partial^2}{\partial x^2}
+ \frac{\partial^2}{\partial y^2}
+ \frac{\partial^2}{\partial z^2} %
\end{displaymath} (A.101)

This is called the “time-independent Klein-Gordon equation” for a massless particle.
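As a quick numerical sanity check (my own sketch, not part of the derivation; all parameter values are arbitrary choices, in units where $\hbar = c = 1$): a one-dimensional plane wave $e^{{\rm i}kx}$ should satisfy the time-independent Klein-Gordon equation with $E = \hbar c k$, so that $-\varphi'' = k^2\varphi$.

```python
import cmath

# Sketch: check that a 1-D plane wave exp(i k x) satisfies
# -phi'' = (E^2 / hbar^2 c^2) phi with E = hbar c k.
# Units with hbar = c = 1, so E^2/(hbar c)^2 = k^2.
k = 2.5          # wave number (arbitrary choice)
x = 0.7          # sample point (arbitrary choice)
h = 1e-5         # finite-difference step

phi = lambda x: cmath.exp(1j * k * x)

# central second difference approximates d^2 phi / dx^2
d2 = (phi(x + h) - 2 * phi(x) + phi(x - h)) / h**2

lhs = -d2                # -nabla^2 phi (in 1-D)
rhs = k**2 * phi(x)      # (E^2/hbar^2 c^2) phi

assert abs(lhs - rhs) < 1e-4 * abs(rhs)
```

The same check works for a three-dimensional plane wave $e^{{\rm i}{\vec k}\cdot{\skew0\vec r}}$, with $k$ replaced by $\vert{\vec k}\vert$.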

For foton wave functions that are not necessarily energy eigenstates, quantum mechanics replaces the energy $E$ by the operator ${\rm i}\hbar\partial/\partial t$. That gives the time-dependent Klein-Gordon equation as:

\begin{displaymath}
- \nabla^2 \varphi_{\rm {f}} =
- \frac{1}{c^2}\frac{\partial^2\varphi_{\rm {f}}}{\partial t^2} %
\end{displaymath} (A.102)

Now consider solutions of this equation of the form

\begin{displaymath}
\varphi_{\rm {f}}({\skew0\vec r};t) = e^{-{\rm i}{\omega}t} \varphi_{\rm {fs}}({\skew0\vec r})
\end{displaymath}

Here $\omega$ is a positive constant called the angular frequency. Substitution in the time-dependent Klein-Gordon equation shows that this solution also satisfies the time-independent Klein-Gordon equation, with energy

\begin{displaymath}
E = \hbar\omega
\end{displaymath}

That is the famous Planck-Einstein relation. It is implicit in the association of $E$ with ${\rm i}\hbar\partial/\partial t$.
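Explicitly, substituting the assumed form into the time-dependent Klein-Gordon equation (A.102) gives

\begin{displaymath}
- \nabla^2 \left(e^{-{\rm i}\omega t} \varphi_{\rm {fs}}\right)
= - \frac{1}{c^2} \frac{\partial^2}{\partial t^2}
\left(e^{-{\rm i}\omega t} \varphi_{\rm {fs}}\right)
= \frac{\omega^2}{c^2}\, e^{-{\rm i}\omega t} \varphi_{\rm {fs}}
\end{displaymath}

Canceling the common exponential leaves $-\nabla^2\varphi_{\rm {fs}} = (\omega^2/c^2)\varphi_{\rm {fs}}$, which is the time-independent equation (A.101) provided that $E^2/\hbar^2c^2 = \omega^2/c^2$, i.e. $E = \hbar\omega$.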

Note however that there will also be a solution of the form

\begin{displaymath}
\varphi_{\rm {f}}({\skew0\vec r};t) = e^{{\rm i}{\omega}t} \varphi_{\rm {fs}}({\skew0\vec r})
\end{displaymath}

This solution too has energy $\hbar\omega$. The difference in sign in the exponential is taken to mean that the particle moves backwards in time. Note that changing the sign in the exponential is equivalent to changing the sign of the time $t$. At least it is if you require that $\omega = E/\hbar$ cannot be negative. If a particle moves backwards in time, it is called an antiparticle. So the wave function above describes an antifoton.

There is really no physical difference between a foton and an antifoton. That is not necessarily true for other types of particles. Quantities such as electric charge, lepton number, baryon number, strangeness, etcetera take opposite values for a particle and its antiparticle.

There is a very important difference between the Klein-Gordon equation and the Schrödinger equation. The Schrödinger equation describes nonrelativistic physics where particles can neither be destroyed nor created. Mass must be conserved. But the Klein-Gordon equation applies to relativistic physics. In relativistic physics particles can be created out of pure energy or destroyed following Einstein’s famous relationship $E=mc^2$, chapter 1.

There is a mathematical consequence to this. It concerns the integral

\begin{displaymath}
\int\vert\varphi_{\rm {f}}\vert^2{\rm d}^3{\skew0\vec r}
\end{displaymath}

(In this addendum, integrals like this are over all space unless explicitly stated otherwise. It is also assumed that the fields vanish quickly enough at large distances that such integrals are finite. Alternatively, for particles confined in a large box it is assumed that the box is periodic, chapter 6.17.) Now for solutions of the Schrödinger equation, the integral $\int\vert\varphi_{\rm {f}}\vert^2{\rm d}^3{\skew0\vec r}$ keeps the same value, 1, for all time. Physically, the integral represents the probability of finding the particle. The probability of finding the particle if you look in all space must be 1.

But fotons are routinely destroyed or created by sarged particles. So the probability of finding a foton is not a preserved quantity. (It is not even clear what finding a foton would mean in the first place.) The Klein-Gordon equation reflects that. It does not preserve the integral $\int\vert\varphi_{\rm {f}}\vert^2{\rm d}^3{\skew0\vec r}$. (There is one exception: if the wave function is purely described by particle states or purely described by antiparticle states, the integral is still preserved.)
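A minimal numerical illustration (my own construction, with $c = 1$ and a single spatial dimension): the real solution $\varphi_{\rm {f}} = \cos(\omega t)\cos(kx)$ with $\omega = k$ is an equal mix of a particle mode $e^{-{\rm i}\omega t}\cos(kx)$ and an antiparticle mode $e^{{\rm i}\omega t}\cos(kx)$, and its norm integral over one spatial period visibly changes in time.

```python
import math

# Sketch: phi(x,t) = cos(w t) cos(k x) solves the 1-D Klein-Gordon
# equation (c = 1, w = k) but does NOT preserve the norm integral.
k = w = 3.0
L = 2 * math.pi / k          # one spatial period

def norm_integral(t, n=2000):
    # midpoint rule for the integral of |phi|^2 over 0..L
    dx = L / n
    return sum(math.cos(w*t)**2 * math.cos(k*(i+0.5)*dx)**2 * dx
               for i in range(n))

n0 = norm_integral(0.0)               # cos^2(0)    -> L/2
n1 = norm_integral(math.pi / (2*w))   # cos^2(pi/2) -> 0

assert abs(n0 - L/2) < 1e-6
assert n1 < 1e-12
```

A pure particle mode $e^{-{\rm i}\omega t}\cos(kx)$, by contrast, has a time-independent norm, in line with the exception noted above.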

But the Klein-Gordon equation does preserve another integral, {D.32}. That is

\begin{displaymath}
\int \left\vert\frac{1}{c}\frac{\partial\varphi_{\rm {f}}}{\partial t}\right\vert^2
+ \left\vert\nabla\varphi_{\rm {f}}\right\vert^2 {\rm d}^3{\skew0\vec r}
\end{displaymath}

Now if the number of fotons is not a preserved quantity, what can this preserved integral stand for? Not momentum or angular momentum, which are vectors. The integral must obviously stand for the energy. Energy is still preserved in relativity, even if the number of particles of a given type is not.

Of course, the energy of a foton wave function $\varphi_{\rm {f}}$ is also given by the Planck-Einstein relation. But wave functions are not observable. Still, fotons do affect spotons and selectons. That is observable. So there must be an observable foton field. This observable field will be called the foton potential. It will be indicated by simply $\varphi$, without a subscript f. Quantum uncertainty in the values of the field will be ignored in this subsection. So the field will be modeled as a classical (i.e. nonquantum) field.

And if there is an observable field, there must be an observable energy associated with that field. Now what could the expression for the energy in the field be? Obviously it will have to take the form of the integral above. What other plausible options are there? Of course, there will be some additional empirical constant. If the integral is constant, then any multiple of it will be constant too. And the above integral does not have units of energy as it is. The needed empirical constant is indicated by $\epsilon_1$ and is called, um no, the permissivity of space. It is a measure of how efficient the foton field is in generating energy. To be precise, for arcane historical reasons the constant in the energy is actually defined as half the permissivity. The bottom line is that the expression for the energy in the observable foton field is:

\begin{displaymath}
E_\varphi = \frac{\epsilon_1}{2}\int
\left\vert\frac{1}{c}\frac{\partial\varphi}{\partial t}\right\vert^2
+ \left\vert\nabla\varphi\right\vert^2 {\,\rm d}^3{\skew0\vec r} %
\end{displaymath} (A.103)

That is really all that is needed to figure out the properties of classical selectostatics in this subsection. It will also be enough to figure out classical selectodynamics in the next subsection.

The first system that will be considered here is that of a foton field and a single spoton. It will be assumed that the spoton is pretty much located at the origin. Of course, in quantum mechanics a particle must have some uncertainty in position, or its kinetic energy would be infinite. But it will be assumed that the spoton wave function is only nonzero within a small distance $\varepsilon$ of the origin. Beyond that distance, the spoton wave function is zero.

However, since this is a classical derivation and not a quantum one, the term spoton wave function must not be used. So imagine instead that the spoton sarge $s_{\rm {p}}$ is smeared out over a small region of radius $\varepsilon$ around the origin.

For a smeared-out sarge, there will be a sarge density $\sigma_{\rm {p}}$, defined as the local sarge per unit volume. This sarge density can be expressed mathematically as

\begin{displaymath}
\sigma_{\rm {p}}({\skew0\vec r}) = s_{\rm {p}} \delta_\varepsilon^3({\skew0\vec r})
\end{displaymath}

Here $\delta_\varepsilon^3({\skew0\vec r})$ is some function that describes the detailed shape of the smeared-out sarge distribution. The integral of this function must be 1, because the sarge density $\sigma_{\rm {p}}$ must integrate to the total spoton sarge $s_{\rm {p}}$. So:

\begin{displaymath}
\int \delta_\varepsilon^3({\skew0\vec r}) {\,\rm d}^3{\skew0\vec r}= 1
\end{displaymath}

To ensure that the sarge density is zero for distances from the origin $r$ greater than the given small value $\varepsilon$, $\delta_\varepsilon^3({\skew0\vec r})$ must be zero at these distances. So:

\begin{displaymath}
\delta_\varepsilon^3({\skew0\vec r}) = 0 \quad\mbox{if}\quad
r = \vert{\skew0\vec r}\vert\mathrel{\raisebox{-1pt}{$\geqslant$}}\varepsilon
\end{displaymath}

In the limit that $\varepsilon$ becomes zero, $\delta_\varepsilon^3({\skew0\vec r})$ becomes the so-called three-dimensional “Dirac delta function” $\delta^3({\skew0\vec r})$. This function is totally concentrated at a single point, the origin. But its integral over that single point is still 1. That is only possible if the function value at the point is infinite. Now infinity is not a proper number, and so the Dirac delta function is not a proper function. However, mathematicians have in fact succeeded in generalizing the idea of functions to allow delta functions. That need not concern the discussion here because “Physicists are sloppy about mathematical rigor,” as Zee [52, p. 22] very rightly states. Delta functions are named after the physicist Dirac. They are everywhere in quantum field theory. That is not really surprising as Dirac was one of the major founders of the theory. See section 7.9 for more on delta functions.
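For a concrete example (the uniform-ball shape is my own arbitrary choice; any shape with the stated properties would do), take $\delta^3_\varepsilon$ constant inside the radius $\varepsilon$ ball and zero outside. A numerical radial integration confirms that it integrates to 1.

```python
import math

# Sketch: uniform-ball model of the smeared delta function,
# delta_eps(r) = 3/(4 pi eps^3) for r < eps, 0 otherwise.
eps = 0.1

def delta_eps(r):
    return 3.0 / (4.0 * math.pi * eps**3) if r < eps else 0.0

# integral over all space in spherical coordinates:
# int delta_eps(r) * 4 pi r^2 dr, midpoint rule out to 2*eps
n = 100000
dr = 2 * eps / n
total = sum(delta_eps((i+0.5)*dr) * 4*math.pi*((i+0.5)*dr)**2 * dr
            for i in range(n))

assert abs(total - 1.0) < 1e-3
```

Shrinking `eps` leaves the integral at 1 while the peak value $3/4\pi\varepsilon^3$ grows without bound, which is exactly the delta-function limit described above.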

Here the big question is how the spoton manages to create a foton field around itself. That is not trivial. If there were a nonzero probability of finding an energetic foton well away from the spoton, surely it would violate energy conservation. However, it turns out that the time-independent Klein-Gordon equation (A.101) actually has a very simple solution where the foton energy $E$ appears to be zero away from the origin. In spherical coordinates, it is

\begin{displaymath}
\varphi_{\rm {f}} = \frac{C}{r} \quad\mbox{if}\quad r \ne 0
\end{displaymath}

Here $C$ is some constant that is still arbitrary at this stage. To check the above solution, plug it into the energy eigenvalue problem (A.101) with $E$ zero.
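That check can also be done numerically (a sketch of my own, with $C = 1$ and an arbitrarily chosen sample point): a finite-difference Laplacian of $1/r$ evaluated away from the origin should come out zero to discretization accuracy.

```python
import math

# Sketch: verify numerically that phi = C/r satisfies
# -nabla^2 phi = 0 away from the origin (here C = 1).
def phi(x, y, z):
    return 1.0 / math.sqrt(x*x + y*y + z*z)

def laplacian(f, x, y, z, h=1e-3):
    # standard 7-point finite-difference stencil
    return ((f(x+h,y,z) + f(x-h,y,z)
           + f(x,y+h,z) + f(x,y-h,z)
           + f(x,y,z+h) + f(x,y,z-h) - 6*f(x,y,z)) / h**2)

# sample point well away from the origin (arbitrary choice)
val = laplacian(phi, 0.8, -0.3, 0.5)
assert abs(val) < 1e-3
```

The tolerance is generous compared with the natural second-derivative scale $2/r^3 \approx 2$ at this point, so the near-zero result is meaningful.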

This then seems to be a plausible form for the observable potential $\varphi$ away from the spoton at the origin. However, while the energy of a $C/r$ potential appears to be zero, it is not really. Such a potential is infinite at the origin, and you cannot just ignore that. The correct foton field energy is given by the earlier integral (A.103). For a steady potential, it can be written as

\begin{displaymath}
E_\varphi
= \frac{\epsilon_1}{2}\int \left(\nabla\varphi\right)^2 {\,\rm d}^3{\skew0\vec r}
= - \frac{\epsilon_1}{2}\int \varphi\left(\nabla^2\varphi\right)
{\,\rm d}^3{\skew0\vec r} %
\end{displaymath} (A.104)

The final integral comes from an integration by parts. (See {A.2} and {D.32} for examples of how to do such integrations by parts.) Note that it looks like the energy could be zero according to this final integral: $\nabla^2\varphi$ is zero if $\varphi = C/r$. That is true outside the small vicinity around the origin. But if you look at the equivalent first integral, it is obvious that the energy is not zero: its integrand is everywhere positive. So the energy must be positive. It follows that in the final integral, the region around the origin, while small, still produces an energy that is not small. The integrand must be not just nonzero, but large in this region.
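To see the blow-up near the origin concretely (a numerical sketch with $C = \epsilon_1 = 1$, my own normalization): since $\vert\nabla(C/r)\vert = C/r^2$, the first integral in (A.104) over a shell $\varepsilon < r < R$ is $2\pi\epsilon_1 C^2(1/\varepsilon - 1/R)$, positive and unbounded as $\varepsilon \to 0$.

```python
import math

# Sketch: field energy of phi = C/r over the shell eps < r < R,
# with C = epsilon_1 = 1.  Integrand (grad phi)^2 = 1/r^4.
def shell_energy(eps, R, n=200000):
    dr = (R - eps) / n
    total = 0.0
    for i in range(n):
        r = eps + (i + 0.5) * dr
        total += (1.0 / r**4) * 4 * math.pi * r**2 * dr
    return 0.5 * total   # the epsilon_1/2 factor, epsilon_1 = 1

exact = lambda eps, R: 2 * math.pi * (1/eps - 1/R)

e1 = shell_energy(0.1, 10.0)
assert abs(e1 - exact(0.1, 10.0)) < 1e-2
# the energy blows up as the inner radius shrinks
assert shell_energy(0.01, 10.0) > 9 * e1
```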

All that then raises the question why there is a foton field in the first place. The interest in this subsection is in the selectostatic field. That is supposed to be the stable ground state of lowest energy. According to the above, the state of lowest energy would be when there is no foton field; $\varphi = 0$.

And so it is. The only reasonable way to explain that there is a nontrivial foton field in the ground state of the spoton-foton system is if the foton field energy is compensated for by something else. There must be an energy of interaction between the foton field and the spoton.

Consider the mathematical form that this energy could take in a given volume element ${\rm d}^3{\skew0\vec r}$. Surely the simplest possibility is that it is proportional to the potential $\varphi$ at the location times the sarge $\sigma_{\rm {p}}{\rm d}^3{\skew0\vec r}$. Therefore the total energy of spoton-foton interaction is presumably

\begin{displaymath}
E_{\varphi\rm {p}} = - \int \varphi({\skew0\vec r}) \sigma_{\rm {p}}({\skew0\vec r}) {\,\rm d}^3{\skew0\vec r}
= - s_{\rm {p}} \int \varphi({\skew0\vec r})
\delta^3_\varepsilon({\skew0\vec r}){\,\rm d}^3{\skew0\vec r} %
\end{displaymath} (A.105)

Note that this expression really defines the sarge $s_{\rm {p}}$. Sarge gives the strength of the coupling between spoton and foton field. Its units and sign follow from writing the energy as the expression above.

The question is now, what is the ground state foton field? In other words, for what potential $\varphi$ is the complete system energy minimal? To answer that requires “variational calculus.” Fortunately, variational calculus is just calculus. And you need to understand how it works if you want to make any sense at all out of books on quantum field theory.

Suppose that you wanted an equation for the minimum of some function $f$ depending on a single variable $x$. The equation would be that ${\rm d}{f}/{\rm d}{x} = 0$ at the position of the minimum $x_{\rm {min}}$. In terms of differentials, that would mean that the function does not change going from position $x_{\rm {min}}$ to a slightly different position $x_{\rm {min}}+{\rm d}{x}$:

\begin{displaymath}
{\rm d}f = \frac{{\rm d}f}{{\rm d}x} {\rm d}x = 0
\quad\mbox{at}\quad x = x_{\rm {min}}
\end{displaymath}

It is the same for the change in net energy $E_\varphi+E_{\varphi\rm {p}}$. Assume that $\varphi_{\rm {min}}$ is the desired potential at minimum net energy. Then at $\varphi_{\rm {min}}$ the net energy should not change when you change $\varphi$ by an infinitesimal amount ${\rm d}\varphi$. Or rather, by an infinitesimal amount $\delta\varphi$: the symbol $\delta$ is used in variational calculus instead of ${\rm d}$. That is to avoid confusion with any symbol ${\rm d}$ that may already be around.

So the requirement for the ground state potential is

\begin{displaymath}
\delta (E_\varphi+E_{\varphi\rm {p}}) = 0
\quad\mbox{when}\quad
\varphi=\varphi_{\rm {min}}
\;\to\; \varphi=\varphi_{\rm {min}}+\delta\varphi
\end{displaymath}

Using the expressions (A.104) and (A.105) for the energies, that means that

\begin{displaymath}
\delta \left[
\frac{\epsilon_1}{2}\int \left(\nabla\varphi\right)^2 {\,\rm d}^3{\skew0\vec r}
- s_{\rm {p}} \int \varphi \delta^3_\varepsilon {\,\rm d}^3{\skew0\vec r}
\right] = 0
\quad\mbox{when}\quad
\varphi=\varphi_{\rm {min}}
\;\to\; \varphi=\varphi_{\rm {min}}+\delta\varphi
\end{displaymath}

The usual rules of calculus can be used (see {A.2} for more details). The only difference from basic calculus is that the change $\delta\varphi$ may depend on the point that you look at. In other words, it is some arbitrary but small function of the position ${\skew0\vec r}$. For example,

\begin{displaymath}
\delta (\nabla\varphi)^2 = 2 (\nabla\varphi) \cdot \delta(\nabla\varphi)
\qquad
\delta(\nabla\varphi) = \nabla(\varphi_{\rm {min}}+\delta\varphi) - \nabla\varphi_{\rm {min}}
= \nabla\delta\varphi
\end{displaymath}

Also, $\varphi$ by itself is validly approximated as $\varphi_{\rm {min}}$, but $\delta\varphi$ is a completely separate quantity that can be anything. Working it out gives

\begin{displaymath}
\frac{\epsilon_1}{2} \int 2 (\nabla\varphi_{\rm {min}})
\cdot (\nabla\delta\varphi) {\,\rm d}^3{\skew0\vec r}
- s_{\rm {p}} \int \delta^3_\varepsilon \delta \varphi {\,\rm d}^3{\skew0\vec r}= 0
\end{displaymath}

Performing an integration by parts moves the $\nabla$ from $\delta\varphi$ to $\nabla\varphi_{\rm {min}}$ and adds a minus sign. Then the two integrals combine as

\begin{displaymath}
- \int \left(\epsilon_1\nabla^2{\varphi_{\rm {min}}}
+ s_{\rm {p}} \delta^3_\varepsilon \right) \delta\varphi {\,\rm d}^3{\skew0\vec r}= 0
\end{displaymath}

If this is supposed to be zero for whatever you take the small change $\delta\varphi$ in field to be, then the parenthetical expression in the integral will have to be zero. If the parenthetical expression were nonzero somewhere, you could easily make up a nonzero change $\delta\varphi$ in that region so that the integral is nonzero.

The parenthetical expression can now be rearranged to give the final result:

\begin{displaymath}
- \nabla^2\varphi
= \frac{s_{\rm {p}}}{\epsilon_1} \delta^3_\varepsilon %
\end{displaymath} (A.106)

Here the subscript min was dropped again, as the ground state is the only state of interest here anyway.

The above equation is the famous “Poisson equation” for the selectostatic potential $\varphi$. The same equation appears in electrostatics, chapter 13.3.4. So far, this is all quite encouraging. Note also that the left hand side is the steady Klein-Gordon equation. The right hand side is mathematically a forcing term; it forces a nonzero solution for $\varphi$.

Beyond the small vicinity of radius $\varepsilon$ around the origin, the spoton sarge density in the right hand side is zero. That means that away from the spoton, you get the time-independent Klein-Gordon equation (A.101) with $E = 0$. That was a good guess, earlier. Assuming spherical symmetry, away from the spoton the solution to the Poisson equation is then indeed

\begin{displaymath}
\varphi = \frac{C}{r} \quad\mbox{if}\quad r \mathrel{\raisebox{-1pt}{$\geqslant$}}\varepsilon
\end{displaymath}

But now that the complete Poisson equation (A.106) is known, the constant $C$ can be figured out, {D.2}. The precise field turns out to be

\begin{displaymath}
\varphi = \frac{s_{\rm {p}}}{4 \pi \epsilon_1 r}
\quad\mbox{if}\quad r \mathrel{\raisebox{-1pt}{$\geqslant$}}\varepsilon
\end{displaymath}

For unit value of $s_{\rm {p}}/\epsilon_1$ the above solution is called the “fundamental solution” or “Green’s function” of the Poisson equation. It is the solution due to a delta function.
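The constant can be checked with the divergence theorem (a numerical sketch of mine, taking $s_{\rm p} = \epsilon_1 = 1$): integrating the Poisson equation (A.106) over a ball that contains all the sarge says that the outward flux of $-\nabla\varphi$ through the bounding sphere must equal $s_{\rm p}/\epsilon_1$, whatever the radius.

```python
import math

# Sketch: phi = 1/(4 pi r)  (s_p = epsilon_1 = 1); the outward flux
# of -grad(phi) through a sphere of any enclosing radius should be 1.
phi = lambda r: 1.0 / (4 * math.pi * r)

def flux(radius, h=1e-6):
    # radial derivative of phi by central differences
    dphi = (phi(radius + h) - phi(radius - h)) / (2 * h)
    # flux = (-dphi/dr) * surface area of the sphere
    return -dphi * 4 * math.pi * radius**2

fluxes = [flux(r) for r in (0.5, 1.0, 7.0)]
assert all(abs(f - 1.0) < 1e-6 for f in fluxes)
```

The radius-independence of the flux is what pins down the $1/4\pi$ in the Green’s function.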

If the spoton is not at the origin, but at some position ${\skew0\vec r}_{\rm {p}}$, you simply replace $r$ by the distance from that point:

\begin{displaymath}
\varphi^{\rm {p}} =
\frac{s_{\rm {p}}}{4 \pi \epsilon_1 \vert{\skew0\vec r}- {\skew0\vec r}_{\rm {p}}\vert} %
\end{displaymath} (A.107)

The superscript p indicates that this potential is created by a spoton at a position ${\skew0\vec r}_{\rm {p}}$. This solution of the Poisson equation will become very important in the Fermi derivation.

Now the net energy is of interest. It can be simplified by substituting the Poisson equation (A.106) into the expression (A.104) for the foton field energy and adding the interaction energy (A.105). That gives

\begin{displaymath}
E_\varphi + E_{\varphi\rm {p}} =
{\textstyle\frac{1}{2}} s_{\rm {p}}
\int \varphi({\skew0\vec r}) \delta^3_\varepsilon({\skew0\vec r}) {\,\rm d}^3{\skew0\vec r}
- s_{\rm {p}}
\int \varphi({\skew0\vec r}) \delta^3_\varepsilon({\skew0\vec r}) {\,\rm d}^3{\skew0\vec r}
\end{displaymath}

which simplifies to

\begin{displaymath}
E_\varphi + E_{\varphi\rm {p}} = - {\textstyle\frac{1}{2}}
s_{\rm {p}} \int \varphi({\skew0\vec r})
\delta^3_\varepsilon({\skew0\vec r}) {\,\rm d}^3{\skew0\vec r} %
\end{displaymath} (A.108)

Note that the spoton-foton interaction energy is twice the foton field energy, and negative instead of positive. That means that the total energy has been lowered by an amount equal to the foton field energy, despite the fact that the field energy itself is positive.

The fact that there is a foton field in the ground state has now been explained. The interaction with the spoton lowers the energy more than the field itself raises it.

Note further from the solution for $\varphi$ above that $\varphi$ is large in the vicinity of the spoton. As a result, the energy in the foton field becomes infinite when the spoton sarge contracts to a point. (That is best seen from the original integral for the foton field energy in (A.104).) This blow-up is very similar to the fact that the energy in a classical electromagnetic field is infinite for a point charge. For the Koulomb field, the interaction energy blows up too, as it is twice the foton field energy. All these blow-ups are a good reason to use a sarge density rather than a point sarge. Then all energies are normal finite numbers.
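The factor of two can be verified explicitly (a numerical sketch using the uniform-ball sarge shape, an arbitrary choice of mine, with $s_{\rm p} = \epsilon_1 = 1$; the inside potential $(3\varepsilon^2 - r^2)/8\pi\varepsilon^3$ is the standard uniform-ball solution of the Poisson equation).

```python
import math

# Sketch: uniform-ball sarge of radius eps, s_p = epsilon_1 = 1.
eps = 1.0

def phi(r):
    # potential solving -laplacian(phi) = delta_eps^3 / epsilon_1
    if r < eps:
        return (3*eps**2 - r**2) / (8 * math.pi * eps**3)
    return 1.0 / (4 * math.pi * r)

def grad_phi_sq(r):
    # |grad phi|^2 from the radial derivative of phi
    if r < eps:
        return (r / (4 * math.pi * eps**3))**2
    return (1.0 / (4 * math.pi * r**2))**2

def radial_int(f, a, b, n=200000):
    # midpoint rule for int f(r) 4 pi r^2 dr
    dr = (b - a) / n
    return sum(f(a + (i+0.5)*dr) * 4*math.pi*(a + (i+0.5)*dr)**2 * dr
               for i in range(n))

# field energy (A.104) and interaction energy (A.105)
e_field = 0.5 * radial_int(grad_phi_sq, 0.0, 1000.0)
delta3 = 3.0 / (4 * math.pi * eps**3)        # uniform smeared delta
e_int = -radial_int(lambda r: phi(r) * delta3, 0.0, eps)

assert abs(e_int / e_field + 2.0) < 1e-2
```

The ratio comes out close to $-2$, confirming that the net energy is minus the field energy, as in (A.108).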

The final step to derive the classical Koulomb force is to add a selecton. The selecton is also sarged, so it too generates a field. To avoid confusion, from now on the field generated by the spoton will always be indicated by $\varphi^{\rm {p}}$, and the one generated by the selecton by $\varphi^{\rm {e}}$. The variational analysis can now be repeated including the selecton, {D.37.1}. That shows that there are three effects that produce the Koulomb force between the spoton and selecton:

1. the selecton sarge interacts with the potential $\varphi^{\rm {p}}$ generated by the spoton;
2. the spoton sarge interacts with the potential $\varphi^{\rm {e}}$ generated by the selecton;
3. the energy in the combined foton field $\varphi^{\rm {p}}+\varphi^{\rm {e}}$ is different from the sum of the energies of the separate fields $\varphi^{\rm {p}}$ and $\varphi^{\rm {e}}$.

All three effects turn out to produce the same magnitude of energy, but the first two energies are negative and the third positive. So the net energy change is the same as if there were just item 1, the interaction of the selecton sarge density $\sigma_{\rm {e}}$ with the potential $\varphi^{\rm {p}}$ produced by the spoton. That is of course given by a similar expression as before:

\begin{displaymath}
V_{\rm {ep}} = - \int \varphi^{\rm {p}}({\skew0\vec r}) \sigma_{\rm {e}}({\skew0\vec r}) {\,\rm d}^3{\skew0\vec r}
\end{displaymath}

The expression for $\varphi^{\rm {p}}({\skew0\vec r})$ was given above in (A.107) for any arbitrary position of the spoton ${\skew0\vec r}_{\rm {p}}$. And it will be assumed that the selecton sarge density $\sigma_{\rm {e}}$ is spread out a bit just like the spoton one, but around a different location ${\skew0\vec r}_{\rm {e}}$. Then the interaction energy becomes

\begin{displaymath}
V_{\rm {ep}} = - \int \frac{s_{\rm {p}}}{4\pi\epsilon_1\vert{\skew0\vec r}-{\skew0\vec r}_{\rm {p}}\vert}\,
s_{\rm {e}} \delta^3_\varepsilon({\skew0\vec r}-{\skew0\vec r}_{\rm {e}}) {\,\rm d}^3{\skew0\vec r}
\end{displaymath}

Since $\varepsilon$ is assumed small, the selecton sarge density is only nonzero very close to the nominal position ${\skew0\vec r}_{\rm {e}}$. Therefore you can approximate ${\skew0\vec r}$ as ${\skew0\vec r}_{\rm {e}}$ in the fraction and take it out of the integral as a constant. Then the delta function integrates to 1, and you get

\begin{displaymath}
V_{\rm {ep}} = - \frac{s_{\rm {p}}s_{\rm {e}}}
{4 \pi \epsilon_1 \vert{\skew0\vec r}_{\rm {e}}-{\skew0\vec r}_{\rm {p}}\vert} %
\end{displaymath} (A.109)

That then is the final energy of the Koulomb interaction between the two sarged particles. Because the spoton and the selecton both interact with the foton field, in effect it produces a spoton-selecton interaction energy.
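The approximation of taking ${\skew0\vec r}$ as ${\skew0\vec r}_{\rm {e}}$ can be tested numerically (a sketch, with a uniform-ball selecton sarge, again my own choice of shape): the average of $1/\vert{\skew0\vec r}-{\skew0\vec r}_{\rm {p}}\vert$ over the ball should be close to the point value $1/\vert{\skew0\vec r}_{\rm {e}}-{\skew0\vec r}_{\rm {p}}\vert$. For a uniform ball that does not overlap the spoton, the two are in fact exactly equal, the classical shell-averaging property of $1/r$ potentials.

```python
import math

# Sketch: average of 1/|r - r_p| over a uniform ball of radius
# eps around r_e, versus the point value 1/d, d = |r_e - r_p|.
d = 2.0      # spoton-selecton distance (arbitrary, assumed > eps)
eps = 0.3    # smearing radius (arbitrary)

def ball_average(n=400):
    # spherical coordinates (u, theta) around the selecton center
    total, volume = 0.0, 4*math.pi*eps**3/3
    du, dth = eps/n, math.pi/n
    for i in range(n):
        u = (i + 0.5) * du
        for j in range(n):
            th = (j + 0.5) * dth
            dist = math.sqrt(d*d + u*u - 2*d*u*math.cos(th))
            total += (1.0/dist) * 2*math.pi*math.sin(th) * u*u * du * dth
    return total / volume

avg = ball_average()
assert abs(avg - 1.0/d) < 1e-3
```

Any deviation from $1/d$ caused by the smearing would be of order $\varepsilon^2/d^2 \approx 2\%$; the computed value is far closer than that, confirming the exact agreement.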

Of course, in classical physics you would probably want to know the actual force on say the selecton. To get it, move the origin of the coordinate system to the spoton and rotate it so that the selecton is on the positive $x$-axis. Now give the selecton a small displacement $\partial{x}_{\rm {e}}$ in the $x$-direction. Slowly of course; this is supposed to be selectostatics. Because of energy conservation, the work done by the force ${F_x}_{\rm {e}}$ during this displacement must cause a corresponding small decrease in energy. So:

\begin{displaymath}
{F_x}_{\rm {e}} \,\partial x_{\rm {e}} = - \partial V_{\rm {ep}}
\end{displaymath}

But on the positive $x$-axis, $\vert{\skew0\vec r}_{\rm {e}}-{\skew0\vec r}_{\rm {p}}\vert$ is just the $x$-position of the selecton $x_{\rm {e}}$, so

\begin{displaymath}
{F_x}_{\rm {e}} = - \frac{\partial V_{\rm {ep}}}{\partial x_{\rm {e}}}
= - \frac{s_{\rm {p}}s_{\rm {e}}} {4 \pi \epsilon_1 x_{\rm {e}}^2}
\end{displaymath}

It is seen that if the sarges have equal sign, the force is in the negative $x$-direction, towards the spoton. So sarges of the same sign attract.

More generally, the force on the selecton points towards the spoton if the sarges are of the same sign. It points straight away from the spoton if the sarges are of opposite sign.
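The same force follows from differentiating the Koulomb energy (A.109) numerically (a sketch, with $s_{\rm p} = s_{\rm e} = \epsilon_1 = 1$ and the selecton at an arbitrarily chosen position on the positive $x$-axis): the computed force is negative, i.e. toward the spoton at the origin.

```python
import math

# Sketch: force on the selecton from F = -dV/dx_e, with the spoton
# at the origin and s_p = s_e = epsilon_1 = 1.
def V(x_e):
    return -1.0 / (4 * math.pi * x_e)      # Koulomb energy (A.109)

x_e = 1.5    # selecton position (arbitrary choice)
h = 1e-6
force = -(V(x_e + h) - V(x_e - h)) / (2 * h)
exact = -1.0 / (4 * math.pi * x_e**2)

assert force < 0                    # toward the spoton: attraction
assert abs(force - exact) < 1e-9
```

Flipping the sign of one sarge flips the sign of `V` and hence of `force`, giving repulsion for opposite sarges, in agreement with the statements above.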

The Koulomb energy $V_{\rm {ep}}$ looks almost exactly the same as the Coulomb energy in electrostatics. Recall that the Coulomb energy was used in chapter 4.3 to describe the attraction between the proton and electron in a hydrogen atom. The difference is that the Coulomb energy has no minus sign. That means that while like sarges attract, like charges repel each other. For example, two spotons attract, but two protons repel.

Now a spoton must necessarily create a foton field that is attractive to spotons. Otherwise there would be no field at all in the ground state. And if spotons create fields that attract spotons, then spotons attract. So the Koulomb force is clearly right.

It is the Coulomb force that does not seem to make any sense. Much more will be said about that in later subsections.


A.22.2 Classical selectodynamics

According to the previous subsection the Koulomb energy between a spoton and a selecton is given by

\begin{displaymath}
V_{\rm {ep}} = -
\frac{s_{\rm {p}}s_{\rm {e}}}{4 \pi \epsilon_1 \vert{\skew0\vec r}_{\rm {e}}-{\skew0\vec r}_{\rm {p}}\vert}
\end{displaymath}

However, this result can only be correct in a stationary state like a ground state, or maybe some other energy state.

To see the problem, imagine that the spoton is suddenly given a kick. According to the Koulomb potential given above, the selecton notices that instantly. There is no time in the Koulomb potential, so there is no time delay. But Einstein showed that no observable effect can move faster than the speed of light. So there should be a time delay.

Obviously then, to discuss unsteady evolution will require the full governing equations for selectodynamics. The big question is how to find these equations.

The quantum mechanics in this book is normally based on some Hamiltonian $H$. But there is a more basic quantity for a system than the Hamiltonian. That quantity is called the “Lagrangian” ${\cal L}$. If you can guess the correct Lagrangian of a system, its equations of motion follow. That is very important for quantum field theories. In fact, a lot of what advanced quantum field theories really do is guess Lagrangians.

To get at the Lagrangian for selectodynamics, consider first the motion of the spoton for a given foton field $\varphi$. The Hamiltonian of the spoton by itself is just the energy of the spoton. As discussed in the previous subsection, a spoton has a potential energy of interaction with the given foton field

\begin{displaymath}
E_{\varphi\rm {p}} = - \int \varphi({\skew0\vec r};t) \sigma_{\rm {p}}({\skew0\vec r};t)
{\,\rm d}^3{\skew0\vec r} %
\end{displaymath} (A.110)

Here $\sigma_{\rm {p}}$ was the sarge den­sity of the spo­ton. The fo­ton po­ten­tial and sarge den­sity can now of course also de­pend on time.

How­ever, to dis­cuss the dy­nam­ics of the spo­ton, it is eas­ier to con­sider it a point par­ti­cle lo­cated at a sin­gle mov­ing point ${\skew0\vec r}_{\rm {p}}$. There­fore it will be as­sumed that the sarge den­sity is com­pletely con­cen­trated at that one point. That means that the only value of the fo­ton field of in­ter­est is the value at ${\skew0\vec r}_{\rm {p}}$. And the sarge dis­tri­b­u­tion in­te­grates to the net spo­ton sarge $s_{\rm {p}}$. So the above en­ergy of in­ter­ac­tion be­comes ap­prox­i­mately

\begin{displaymath}
E_{\varphi\rm {p}} \approx - \varphi_{\rm {p}} s_{\rm {p}}
\qquad
\varphi_{\rm {p}} \equiv \varphi({\skew0\vec r}_{\rm {p}};t) %
\end{displaymath} (A.111)

In terms of the com­po­nents of po­si­tion, this can be writ­ten out fully as

\begin{displaymath}
E_{\varphi\rm {p}} \approx - \varphi(r_{\rm {p}}\strut_1,r_{\rm {p}}\strut_2,
r_{\rm {p}}\strut_3;t) s_{\rm {p}}
\end{displaymath}

Note that in this ad­den­dum the po­si­tion com­po­nents are in­di­cated as $r_{\rm {p}}\strut_1$, $r_{\rm {p}}\strut_2$, and $r_{\rm {p}}\strut_3$ in­stead of the more fa­mil­iar $r_{\rm {p}}\strut_x$, $r_{\rm {p}}\strut_y$, and $r_{\rm {p}}\strut_z$ or $x_{\rm {p}}$, $y_{\rm {p}}$, and $z_{\rm {p}}$. That is in or­der that a generic po­si­tion com­po­nent can be in­di­cated by $r_{\rm {p}}\strut_i$ where $i$ can be 1, 2, or 3.

In ad­di­tion to the in­ter­ac­tion en­ergy above there is the ki­netic en­ergy of the spo­ton,

\begin{displaymath}
E_{\rm {p,kin}} = {\textstyle\frac{1}{2}} m_{\rm {p}} \vec v_{\rm {p}}^{\,2}
\end{displaymath}

Here $m_{\rm {p}}$ is the mass of the spo­ton and $\vec{v}_{\rm {p}}$ its ve­loc­ity,

\begin{displaymath}
\vec v_{\rm {p}} \equiv \frac{{\rm d}{\skew0\vec r}_{\rm {p}}}{{\rm d}t}
\end{displaymath}

The ki­netic en­ergy can be writ­ten out in terms of the ve­loc­ity com­po­nents as

\begin{displaymath}
E_{\rm {p,kin}} = {\textstyle\frac{1}{2}} m_{\rm {p}}
\left(v_{\rm {p}}\strut_1^2+v_{\rm {p}}\strut_2^2+v_{\rm {p}}\strut_3^2\right)
\qquad
v_{\rm {p}}\strut_i \equiv \frac{{\rm d}r_{\rm {p}}\strut_i}{{\rm d}t} \mbox{ for } i=1,2,3
\end{displaymath}

Now the Hamil­ton­ian of the spo­ton is the sum of the ki­netic and po­ten­tial en­er­gies. But the La­grangian is the dif­fer­ence be­tween the ki­netic and po­ten­tial en­er­gies:

\begin{displaymath}
\Lag_{\rm {p}}(\vec v_{\rm {p}},{\skew0\vec r}_{\rm {p}})
= {\textstyle\frac{1}{2}} m_{\rm {p}} \vec v_{\rm {p}}^{\,2} + \varphi({\skew0\vec r}_{\rm {p}};t)s_{\rm {p}}
\end{displaymath}

This La­grangian can now be used to find the equa­tion of mo­tion of the spo­ton. This comes about in a some­what weird way. Sup­pose that there is some range of times, from a time $t_1$ to a time $t_2$, dur­ing which you want to know the mo­tion of the spo­ton. (Maybe the spo­ton is at rest at time $t_1$ and be­comes again at rest at time $t_2$.) Sup­pose fur­ther that you now com­pute the so-called ac­tion in­te­gral

\begin{displaymath}
{\cal S}\equiv \int_{t_1}^{t_2} \Lag_{\rm {p}}(\vec v_{\rm {p}},{\skew0\vec r}_{\rm {p}}) {\,\rm d}t
\end{displaymath}

If you use the cor­rect ve­loc­ity and po­si­tion of the spo­ton, you will get some num­ber. But now sup­pose that you use a slightly dif­fer­ent (wrong) spo­ton path. Sup­pose it is dif­fer­ent by a small amount $\delta{\skew0\vec r}_{\rm {p}}$, which of course de­pends on time. You would think that the value of the ac­tion in­te­gral would change by a cor­re­spond­ing small amount. But that is not true. As­sum­ing that the path used in the orig­i­nal in­te­gral was in­deed the right one, and that the change in path is in­fin­i­tes­i­mally small, the ac­tion in­te­gral does not change. Math­e­mat­i­cally

\begin{displaymath}
\delta {\cal S}= 0 \quad\mbox{at the correct path}
\end{displaymath}

Yes, this is again vari­a­tional cal­cu­lus. The ac­tion may not be min­i­mal at the cor­rect spo­ton path, but it is def­i­nitely sta­tion­ary at it.

Prob­a­bly this sounds like a stu­pid math­e­mat­i­cal trick. But in the so-called path in­te­gral ap­proach to quan­tum field the­ory, the ac­tion is cen­tral to the for­mu­la­tion.
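The stationarity of the action can also be seen numerically. The sketch below assumes, purely for illustration, a linear foton potential $\varphi = g x$, so the correct spoton path is uniformly accelerated; it evaluates a discretized action on the true path and on slightly perturbed paths. If the action is stationary, doubling the perturbation should quadruple the change in the action. All parameter values are arbitrary.

```python
import numpy as np

m, s, g = 1.0, 1.0, 2.0     # spoton mass, sarge, potential slope (arbitrary)
T, N = 1.0, 400             # time interval and number of steps
dt = T / N
t = np.linspace(0.0, T, N + 1)

def action(x):              # discretized action for L = m v^2/2 + s g x
    v = np.diff(x) / dt
    return np.sum(0.5 * m * v**2) * dt + np.sum(s * g * x[:-1]) * dt

a_acc = g * s / m                 # acceleration produced by phi = g x
x_true = 0.5 * a_acc * t**2       # correct path, starting from rest
bump = np.sin(np.pi * t / T)      # perturbation vanishing at both ends

S0 = action(x_true)
dS1 = action(x_true + 1e-3 * bump) - S0
dS2 = action(x_true + 2e-3 * bump) - S0
print(dS2 / dS1)                  # close to 4: the change is second order
```

The first-order change vanishes at the correct path, so the change in the action scales with the square of the perturbation size.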

For clas­si­cal physics the ac­tion by it­self is pretty use­less. How­ever, with some ma­nip­u­la­tions, you can get the evo­lu­tion equa­tions for your sys­tem out of it, {A.1}. They are found as

\begin{displaymath}
\frac{{\rm d}}{{\rm d}t}
\left(\frac{\partial{\cal L}}{\partial v_{\rm {p}}\strut_i}\right) =
\left(\frac{\partial{\cal L}}{\partial r_{\rm {p}}\strut_i}
\right) %
\end{displaymath} (A.112)

Here $i$ $\vphantom0\raisebox{1.5pt}{$=$}$ 1, 2, or 3 gives the equa­tion in the $x$, $y$, or $z$ di­rec­tion, re­spec­tively.

Note that for the gov­ern­ing equa­tions it does not mat­ter at all what you take the times $t_1$ and $t_2$ in the ac­tion to be. They are pretty vaguely de­fined any­way. You might want to let them go to mi­nus and plus in­fin­ity to get rid of them.

The next step is to write out the gov­ern­ing equa­tion (A.112) in terms of phys­i­cal quan­ti­ties. To do that cor­rectly, the trick is that the La­grangian must be treated as a func­tion of ve­loc­ity and po­si­tion, as in­de­pen­dent vari­ables. In re­al­ity ve­loc­ity and po­si­tion are not in­de­pen­dent; ve­loc­ity is the de­riv­a­tive of po­si­tion. But when dif­fer­en­ti­at­ing the La­grangian you are sup­posed to for­get about that. Con­sider how this works out for the $x$-​com­po­nent, $i$ $\vphantom0\raisebox{1.5pt}{$=$}$ 1,

\begin{displaymath}
\frac{\partial{\cal L}}{\partial v_{\rm {p}}\strut_1} = m_{\rm {p}} v_{\rm {p}}\strut_1
\qquad
\frac{\partial{\cal L}}{\partial r_{\rm {p}}\strut_1} =
\frac{\partial\varphi(r_{\rm {p}}\strut_1,r_{\rm {p}}\strut_2,r_{\rm {p}}\strut_3;t)}
{\partial r_{\rm {p}}\strut_1} s_{\rm {p}}
\end{displaymath}

Those are simple differentiations, taking the given Lagrangian at face value.

How­ever, when you do the re­main­ing time de­riv­a­tive in (A.112) you have to do it prop­erly, treat­ing the ve­loc­ity as the func­tion of time that it is. That gives the fi­nal equa­tion of mo­tion as

\begin{displaymath}
m_{\rm {p}} \frac{{\rm d}v_{\rm {p}}\strut_1}{{\rm d}t} =
\frac{\partial\varphi(r_{\rm {p}}\strut_1,r_{\rm {p}}\strut_2,r_{\rm {p}}\strut_3;t)}
{\partial r_{\rm {p}}\strut_1} s_{\rm {p}} %
\end{displaymath} (A.113)

Note that the left hand side is mass times ac­cel­er­a­tion in the $x$-​di­rec­tion. So the right hand side must be the se­lec­tic force on the spo­ton. This force is called the Sorentz force. It is seen that the Sorentz force is pro­por­tional to the de­riv­a­tive of the fo­ton po­ten­tial, eval­u­ated at the po­si­tion of the spo­ton. If you com­pare the Sorentz force with the force in elec­tro­sta­t­ics, you see that the force in elec­tro­sta­t­ics has an ad­di­tional mi­nus sign. That re­flects again that equal sarges at­tract, while equal charges re­pel.

So far, it was as­sumed that the fo­ton field was given. But in re­al­ity the fo­ton field is not given, it de­pends on the mo­tion of the spo­ton. To de­scribe the field, its en­er­gies must be added to the La­grangian too. The to­tal en­ergy in the fo­ton field was given in the pre­vi­ous sub­sec­tion as (A.103). Us­ing some short­hand no­ta­tion, this be­comes

\begin{displaymath}
E_\varphi = \frac{\epsilon_1}{2}\int \frac{1}{c^2} \varphi_t^2
+ \sum_{i=1}^3 \varphi_i^2 {\,\rm d}^3{\skew0\vec r}
\end{displaymath}

The short­hand is to in­di­cate de­riv­a­tives by sub­scripts, as in

\begin{displaymath}
\varphi_t \equiv \frac{\partial\varphi}{\partial t}
\qquad
\varphi_i \equiv \frac{\partial\varphi}{\partial r_i}
\end{displaymath}

with $i$ $\vphantom0\raisebox{1.5pt}{$=$}$ 1, 2, or 3 for the $x$, $y$, and $z$ de­riv­a­tives re­spec­tively. For ex­am­ple, $\varphi_1$ would be the par­tial $x$-​de­riv­a­tive of $\varphi$.

Ac­tu­ally, even more con­cise short­hand will be used. If an in­dex like $i$ oc­curs twice in a term, sum­ma­tion over that in­dex is to be un­der­stood. The sum­ma­tion sym­bol will then not be shown. That is called the Ein­stein sum­ma­tion con­ven­tion. So the en­ergy in the fo­ton field will be in­di­cated briefly as

\begin{displaymath}
E_\varphi = \frac{\epsilon_1}{2}
\int \frac{1}{c^2} \varphi_t^2 + \varphi_i^2 {\,\rm d}^3{\skew0\vec r}
\end{displaymath}

(Note that $\varphi_i^2$ $\vphantom0\raisebox{1.5pt}{$=$}$ $\varphi_i\varphi_i$, so $i$ oc­curs twice in the sec­ond term of the in­te­grand.) All this is done as a ser­vice to you, the reader. You are no doubt get­ting tired of hav­ing to look at all these math­e­mat­i­cal sym­bols.
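The summation convention is easy to illustrate numerically. Below, made-up values are used for the three derivatives $\varphi_i$; the term $\varphi_i^2$ then means $\varphi_1^2+\varphi_2^2+\varphi_3^2$, which `numpy.einsum` evaluates directly from the repeated index.

```python
import numpy as np

phi_i = np.array([0.5, -1.0, 2.0])   # made-up values for the three derivatives
explicit = phi_i[0]**2 + phi_i[1]**2 + phi_i[2]**2
implied = np.einsum('i,i->', phi_i, phi_i)   # repeated index i is summed
print(explicit, implied)
```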

Now the first term in the en­ergy above is a time de­riv­a­tive, just like $\vec{v}_{\rm {p}}$ was the time de­riv­a­tive of the spo­ton po­si­tion. So this term has pre­sum­ably the same sign in the La­grangian, while the sign of the other term flips over. That makes the to­tal se­lec­to­dy­namic La­grangian equal to

\begin{displaymath}
\Lag_{\varphi\rm {p}} = \frac{\epsilon_1}{2}
\int \frac{1}{c^2} \varphi_t^2 - \varphi_i^2 {\,\rm d}^3{\skew0\vec r}
+ {\textstyle\frac{1}{2}}m_{\rm {p}}\vec v_{\rm {p}}^2 + \varphi_{\rm {p}}s_{\rm {p}}
\end{displaymath}

The last two terms are as be­fore for a given field.

How­ever, for the fi­nal term it is now de­sir­able to go back to the rep­re­sen­ta­tion of the spo­ton in terms of a sarge den­sity $\sigma_{\rm {p}}$, as in (A.110). The fi­nal term as writ­ten would lead to a nasty delta func­tion in the analy­sis of the field. In the sarge den­sity form the term can be brought in­side the in­te­gral to give the com­plete La­grangian as

\begin{displaymath}
\Lag_{\varphi\rm {p}} = \int \frac{\epsilon_1}{2}\left(
\frac{1}{c^2}\varphi_t^2 - \varphi_i^2\right) + \varphi\sigma_{\rm {p}}
{\,\rm d}^3{\skew0\vec r}+ {\textstyle\frac{1}{2}} m_{\rm {p}} \vec v_{\rm {p}}^2 %
\end{displaymath} (A.114)

Note that there is no longer a sub­script $p$ on $\varphi$; it is the in­te­gra­tion against the sarge den­sity that picks out the value of $\varphi$ at the spo­ton.

An in­te­grand of a spa­tial in­te­gral in a La­grangian is called a “La­grangian den­sity” and in­di­cated by the sym­bol $\pounds $. In this case:

\begin{displaymath}
\pounds = \frac{\epsilon_1}{2}\left(\frac{1}{c^2}\varphi_t^2-\varphi_i^2\right)
+ \varphi\sigma_{\rm {p}} %
\end{displaymath} (A.115)

When dif­fer­en­ti­at­ing this La­grangian den­sity, $\varphi$ and its de­riv­a­tives $\varphi_t$ and $\varphi_i$, (with $i$ $\vphantom0\raisebox{1.5pt}{$=$}$ 1, 2, and 3), are to be con­sid­ered 5 sep­a­rate in­de­pen­dent vari­ables.

The ac­tion prin­ci­ple can read­ily be ex­tended to al­low for La­grangian den­si­ties, {D.37}. The equa­tions of mo­tion for the field are then found to be

\begin{displaymath}
\frac{\partial}{\partial t}
\left(\frac{\partial\pounds }{\partial\varphi_t}\right)
+ \frac{\partial}{\partial r_i}
\left(\frac{\partial\pounds }{\partial\varphi_i}\right) =
\frac{\partial\pounds }{\partial\varphi}
\end{displaymath}

Working this out much like for the equation of motion of the spoton gives, taking $\epsilon_1$ to the other side,

\begin{displaymath}
\frac{1}{c^2}\frac{\partial^2\varphi}{\partial t^2}
- \frac{\partial^2\varphi}{\partial r_i^2}
= \frac{\sigma_{\rm {p}}}{\epsilon_1} %
\end{displaymath} (A.116)

This is the so-called Saxwell wave equa­tion of se­lec­to­dy­nam­ics. If there is also a se­lec­ton, say, its sarge den­sity can sim­ply be added to the spo­ton one in the right hand side.

To check the Saxwell equa­tion, first con­sider the case that the sys­tem is steady, i.e. in­de­pen­dent of time. In that case the Saxwell wave equa­tion be­comes the Pois­son equa­tion of the pre­vi­ous sub­sec­tion as it should. (The sec­ond term is summed over the three Carte­sian di­rec­tions $i$. That gives $\nabla^2\varphi$.) So the spo­ton pro­duces the same steady Koulomb field (A.107) as be­fore. So far, so good.

How about the force on a se­lec­ton in this field? Of course, the force on a se­lec­ton is a Sorentz force of the same form as (A.113),

\begin{displaymath}
F_x\strut_{\rm {e}} =
\frac
{\partial\varphi(r_{\rm {e}}\strut_1,r_{\rm {e}}\strut_2,r_{\rm {e}}\strut_3;t)}
{\partial r_{\rm {e}}\strut_1} s_{\rm {e}} %
\end{displaymath} (A.117)

In the steady case, the relevant potential at the selecton is the selectostatic one (A.107) produced by the spoton as given in the previous subsection. (Strictly speaking you should also include the field produced by the selecton itself. But this self-interaction produces no net force. That is fortunate, because if the selecton really was a point sarge, the self-interaction would be mathematically singular.) Now minus the potential (A.107) times the selecton sarge $s_{\rm {e}}$ gave the energy $V_{\rm {ep}}$ of the spoton-selecton interaction in the previous subsection. And minus the derivative of that gave the force on the selecton. A look at the force above then shows it is the same.

So in the steady case the Saxwell equa­tion com­bined with the Sorentz force does re­pro­duce se­lec­to­sta­t­ics cor­rectly. That means that the given La­grangian (A.114) con­tains all of se­lec­to­sta­t­ics in a sin­gle con­cise math­e­mat­i­cal ex­pres­sion. At the min­i­mum. Neat, isn’t it?

Con­sider next the case that the time de­pen­dence can­not be ig­nored. Then the time de­riv­a­tive in the Saxwell equa­tion (A.116) can­not be ig­nored. In that case the left hand side in the equa­tion is the com­plete un­steady Klein-Gor­don equa­tion. Since there is a nonzero right-hand side, math­e­mat­i­cally the Saxwell equa­tion is an in­ho­mo­ge­neous Klein-Gor­don equa­tion. Now it is known from the the­ory of par­tial dif­fer­en­tial equa­tions that the Klein-Gor­don equa­tion re­spects the speed of light. As an ex­am­ple, imag­ine that at time $t$ = 0 you briefly shake the spo­ton at the ori­gin and then put it back where it was. The right hand side of the Saxwell equa­tion is then again back to what it was. But near the ori­gin, the fo­ton field $\varphi$ will now con­tain ad­di­tional dis­tur­bances. These dis­tur­bances evolve ac­cord­ing to the ho­mo­ge­neous Saxwell equa­tion, i.e. the equa­tion with zero right hand side. And it is easy to check by sub­sti­tu­tion that the ho­mo­ge­neous equa­tion has so­lu­tions of the form

\begin{displaymath}
\varphi = f(x - c t)
\end{displaymath}

Those are waves traveling in the $x$-direction with the speed of light $c$. The wave shape is the arbitrary function $f$ and is preserved in time. And note that the $x$-direction is arbitrary. So waves like this can travel in any direction. The perturbations near the origin caused by shaking the spoton will consist of such waves. Since they travel with the speed of light, they need some time to reach the selecton. The selecton will not notice anything until this happens. However, when the perturbations in the foton field do reach the selecton, they will change the foton field $\varphi$ at the selecton. That then will change the force (A.117) on the selecton.

It fol­lows that se­lec­to­dy­nam­ics, as de­scribed by the La­grangian (A.114), also re­spects the speed of light lim­i­ta­tion.
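The finite propagation speed can be illustrated with a small finite-difference sketch of the homogeneous wave equation in one dimension. A brief shake of the spoton is modeled as an initial bump at the origin; a probe point standing in for the selecton sits some distance away. The grid, bump width, and probe position are all illustrative choices.

```python
import numpy as np

c = 1.0                       # speed of light (arbitrary units)
h = 0.05                      # grid spacing
dt = h / c                    # time step; leapfrog marching is exact here
x = np.arange(-10.0, 10.0, h)
phi = np.exp(-x**2 / 0.1)     # brief 'shake': a bump near the origin
phi_old = phi.copy()          # start from rest

probe = np.argmin(abs(x - 4.0))   # a 'selecton' sitting at x = 4
history = []
for n in range(160):              # march to t = 8
    lap = np.roll(phi, 1) - 2.0*phi + np.roll(phi, -1)
    phi, phi_old = 2.0*phi - phi_old + (c*dt/h)**2 * lap, phi
    history.append(abs(phi[probe]))

t = dt * np.arange(1, 161)
before = max(v for v, tn in zip(history, t) if tn < 2.5)   # well before t = 4/c
after = max(v for v, tn in zip(history, t) if tn >= 2.5)
print(before, after)
```

Before the travel time $4/c$ the field at the probe stays at the level of numerical noise; only after the waves arrive does it change appreciably.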


A.22.3 Quan­tum se­lec­to­sta­t­ics

The pre­vi­ous sub­sec­tions de­rived the Koulomb force be­tween sarged par­ti­cles. This force was due to fo­ton ex­change. While the de­riva­tions used some ideas from quan­tum me­chan­ics, they were clas­si­cal. The ef­fect of the fo­tons took the form of a po­ten­tial $\varphi$ that the sarged par­ti­cles in­ter­acted with. This po­ten­tial was a clas­si­cal field; it had a def­i­nite nu­mer­i­cal value at each point. To be picky, there re­ally was an un­de­ter­mined con­stant in the po­ten­tial $\varphi$. But its gra­di­ent $\nabla\varphi$ pro­duced the fully de­ter­mined Sorentz force per unit sarge (A.117). This force can be ob­served by a sarged spo­ton or se­lec­ton.

How­ever, that very fact vi­o­lates the fun­da­men­tal pos­tu­lates of quan­tum me­chan­ics as for­mu­lated at the be­gin­ning of this book, chap­ter 3.4. Ob­serv­able val­ues should be the eigen­val­ues of Her­mit­ian op­er­a­tors that act on wave func­tions. While the fo­ton po­ten­tial was loosely as­so­ci­ated with a fo­ton wave func­tion, wave func­tions should not be ob­serv­able.

Now if classically every position has its own observable local potential $\varphi$, then in a proper quantum description every position must be associated with its own Hermitian operator $\widehat\varphi$. In the terminology of addendum {A.15.9}, the foton field $\widehat\varphi$ must be a quantum field; an infinite number of operators, one for each position.

The objective in this subsection is to deduce the form of this quantum field, and the type of wave function that it operates on. The results will then be used to verify the Koulomb force between stationary sarges as found in the first subsection. It is imperative to figure out whether like sarges still attract in a proper quantum description.

Do­ing this di­rectly would not be easy. It helps a lot if the field is writ­ten in terms of lin­ear mo­men­tum eigen­states.

In fact, typ­i­cal quan­tum field the­o­ries de­pend very heav­ily on this trick. How­ever, of­ten such the­o­ries use rel­a­tivis­tic com­bined en­ergy-mo­men­tum states in four-di­men­sion­al space-time. This sub­sec­tion will use sim­pler purely spa­tial mo­men­tum states. The ba­sic idea is the same. And it is es­sen­tial for un­der­stand­ing the later Fermi de­riva­tion of the Coulomb po­ten­tial.

Lin­ear mo­men­tum states are com­plex ex­po­nen­tials of the form $e^{{\rm i}{\vec k}\cdot{\skew0\vec r}}$. Here ${\vec k}$ is a con­stant vec­tor called the wave num­ber vec­tor. The mo­men­tum of such a state is given in terms of the wave num­ber vec­tor by the de Broglie re­la­tion as ${\skew0\vec p}$ $\vphantom0\raisebox{1.5pt}{$=$}$ $\hbar{\vec k}$. (It may be noted that the $e^{{\rm i}{\vec k}\cdot{\skew0\vec r}}$ states need an ad­di­tional con­stant to prop­erly nor­mal­ize them, chap­ter 6.18. But for con­cise­ness, in this ad­den­dum that nor­mal­iza­tion con­stant will be ab­sorbed in the con­stants mul­ti­ply­ing the ex­po­nen­tials.)

If a field $\varphi$ is writ­ten in terms of lin­ear mo­men­tum states, its value at any point ${\skew0\vec r}$ is given by:

\begin{displaymath}
\varphi({\skew0\vec r}) = \sum_{{\rm all\ }{\vec k}} c_{{\vec k}}\, e^{{\rm i}{\vec k}\cdot{\skew0\vec r}}
\end{displaymath}

Note that if you know the co­ef­fi­cients $c_{{\vec k}}$ of the mo­men­tum states, it is equiv­a­lent to know­ing the field $\varphi$. Then the field at any point can in prin­ci­ple be found by do­ing the sum.

The ex­pres­sion above as­sumes that the en­tire sys­tem is con­fined to a very large pe­ri­odic box, as in chap­ter 6.17. In in­fi­nite space the sum be­comes an in­te­gral, sec­tion 7.9. That would be much more messy. (But that is the way you will usu­ally find it in a typ­i­cal quan­tum field analy­sis.) The pre­cise val­ues of the wave num­ber vec­tors to sum over for a given pe­ri­odic box were given in chap­ter 6.18 (6.28); they are all points in fig­ure 6.17.

The first sub­sec­tion found the se­lec­to­sta­tic po­ten­tial $\varphi^{\rm {p}}$ that was pro­duced by a spo­ton, (A.107). This po­ten­tial was a clas­si­cal field; it had a def­i­nite nu­mer­i­cal value for each po­si­tion. The first step will be to see how this po­ten­tial looks in terms of mo­men­tum states. While the fi­nal ob­jec­tive is to red­erive the clas­si­cal po­ten­tial us­ing proper quan­tum me­chan­ics, the cor­rect an­swer will need to be rec­og­nized when writ­ten in terms of mo­men­tum states. Not to men­tion that the an­swer will reap­pear in the dis­cus­sion of the Coulomb po­ten­tial. For sim­plic­ity it will be as­sumed that the spo­ton is at the ori­gin.

Ac­cord­ing to the first sub­sec­tion, the clas­si­cal po­ten­tial was the so­lu­tion to a Pois­son equa­tion; a steady Klein-Gor­don equa­tion with forc­ing by the spo­ton:

\begin{displaymath}
- \nabla^2\varphi^{\rm {p}}_{\rm {cl}}
= \frac{s_{\rm {p}}}{\epsilon_1} \psi_{\rm {p}}^*\psi_{\rm {p}}
\end{displaymath}

As a reminder that $\varphi^{\rm {p}}$ is a classical potential, not a quantum one, a subscript cl has been added. Also note that since this is now a quantum description, the spoton sarge density $\sigma_{\rm {p}}$ has been identified as the spoton sarge $s_{\rm {p}}$ times the square magnitude of the spoton wave function $\vert\psi_{\rm {p}}\vert^2$ $\vphantom0\raisebox{1.5pt}{$=$}$ $\psi_{\rm {p}}^*\psi_{\rm {p}}$.

Now the clas­si­cal po­ten­tial is to be writ­ten in the form

\begin{displaymath}
\varphi^{\rm {p}}_{\rm {cl}}({\skew0\vec r})
= \sum_{{\rm all\ }{\vec k}} c_{{\vec k}}\, e^{{\rm i}{\vec k}\cdot{\skew0\vec r}}
\end{displaymath}

To fig­ure out the co­ef­fi­cients $c_{{\vec k}}$, plug it in the Pois­son equa­tion above. That gives

\begin{displaymath}
\sum_{{\rm all}\ {\vec k}} k^2 c_{{\vec k}}\, e^{{\rm i}{\vec k}\cdot{\skew0\vec r}}
= \frac{s_{\rm {p}}}{\epsilon_1} \psi_{\rm {p}}^*\psi_{\rm {p}}
\end{displaymath}

Note that in the left hand side each $\nabla$ produced a factor ${\rm i}{\vec k}$, so that $\nabla^2$ produced $-k^2$; the minus sign in front of $\nabla^2$ then gives $k^2$ total.

Now mul­ti­ply this equa­tion at both sides by some sam­ple com­plex-con­ju­gate mo­men­tum eigen­func­tion $e^{-{\rm i}\underline{\vec k}\cdot{\skew0\vec r}}$ and in­te­grate over the en­tire vol­ume ${\cal V}$ of the pe­ri­odic box. In the left hand side, you only get some­thing nonzero for the term in the sum where ${\vec k}$ $\vphantom0\raisebox{1.5pt}{$=$}$ $\underline{\vec k}$ be­cause eigen­func­tions are or­thog­o­nal. For that term, the ex­po­nen­tials mul­ti­ply to 1. So the re­sult is

\begin{displaymath}
{\underline k}^2 c_{\underline{\vec k}} {\cal V}= \frac{s_{\rm {p}}}{\epsilon_1}
\int e^{-{\rm i}\underline{\vec k}\cdot{\skew0\vec r}} \psi_{\rm {p}}^*\psi_{\rm {p}} {\,\rm d}^3{\skew0\vec r}
\end{displaymath}

Now in the right hand side, as­sume again that the spo­ton is al­most ex­actly at the ori­gin. In other words, as­sume that its wave func­tion is zero ex­cept very close to the ori­gin. In that case, the ex­po­nen­tial in the in­te­gral is ap­prox­i­mately 1 when the spo­ton wave func­tion is not zero. Also, the square wave func­tion in­te­grates to 1. So the re­sult is, af­ter clean up,

\begin{displaymath}
c_{\underline{\vec k}} = \frac{s_{\rm {p}}}{\epsilon_1{\cal V}{\underline k}^2}
\end{displaymath}

This ex­pres­sion ap­plies for any wave num­ber vec­tor $\underline{\vec k}$, so you can leave the un­der­line away. It fully de­ter­mines $\varphi^{\rm {p}}_{\rm {cl}}$ in terms of the mo­men­tum states:
\begin{displaymath}
\varphi^{\rm {p}}_{\rm {cl}}({\skew0\vec r}) = \sum_{{\rm all\ }{\vec k}}
\frac{s_{\rm {p}}}{\epsilon_1{\cal V}k^2} e^{{\rm i}{\vec k}\cdot{\skew0\vec r}} %
\end{displaymath} (A.118)

This so­lu­tion is def­i­nitely one to re­mem­ber. Note in par­tic­u­lar that the co­ef­fi­cients of the mo­men­tum states are a con­stant di­vided by $k^2$. Re­call also that for a unit value of $s_{\rm {p}}$$\raisebox{.5pt}{$/$}$$\epsilon_1$, this so­lu­tion is the fun­da­men­tal so­lu­tion, or Green’s func­tion, of the Pois­son equa­tion with point wise forc­ing at the ori­gin.
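As a sanity check, the momentum-mode sum (A.118) can be evaluated numerically on a periodic box and compared with the $s_{\rm {p}}/4\pi\epsilon_1 r$ form of the steady potential. The box size and grid resolution below are arbitrary choices; potential differences are compared so that the constant offset from the dropped ${\vec k}=0$ mode cancels out.

```python
import numpy as np

N, L = 128, 32.0           # grid points per side and box period (arbitrary)
s_p, eps1 = 1.0, 1.0       # spoton sarge and epsilon_1 (arbitrary units)
V = L**3                   # the box volume, cal-V in the text

k1 = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)     # allowed wave numbers
k2 = (k1**2)[:, None, None] + (k1**2)[None, :, None] + (k1**2)[None, None, :]
k2[0, 0, 0] = 1.0          # placeholder to avoid dividing by zero

c_k = s_p / (eps1 * V * k2)
c_k[0, 0, 0] = 0.0         # the k = 0 mode is left out of the sum

# phi(r) = sum over k of c_k exp(i k . r); ifftn supplies a 1/N^3 factor
phi = np.fft.ifftn(c_k).real * N**3

h = L / N
phi_a, phi_b = phi[int(2 / h), 0, 0], phi[int(4 / h), 0, 0]

# compare potential differences so the periodic-box constant drops out
d_num = phi_a - phi_b
d_exact = s_p / (4 * np.pi * eps1 * 2.0) - s_p / (4 * np.pi * eps1 * 4.0)
print(d_num, d_exact)
```

Away from the origin the mode sum reproduces the Koulomb-magnitude $1/4\pi\epsilon_1 r$ decay to within a few percent, the remaining difference coming from the truncation at the grid's shortest wave length and from the periodic images.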

If the re­quire­ment that the spo­ton wave func­tion is com­pletely at the ori­gin is re­laxed, the in­te­gral in­volv­ing the spo­ton wave func­tion stays:

\begin{displaymath}
\varphi^{\rm {p}}_{\rm {cl}}({\skew0\vec r}) = \sum_{{\rm all\ }{\vec k}}
\frac{s_{\rm {p}}}{\epsilon_1{\cal V}k^2}
{\left\langle\psi_{\rm {p}}\hspace{0.3pt}\right\vert}e^{-{\rm i}{\vec k}\cdot{\skew0\vec r}_{\rm {p}}}{\left\vert\psi_{\rm {p}}\right\rangle}
e^{{\rm i}{\vec k}\cdot{\skew0\vec r}} %
\end{displaymath} (A.119)

where

\begin{displaymath}
{\left\langle\psi_{\rm {p}}\hspace{0.3pt}\right\vert}e^{-{\rm i}{\vec k}\cdot{\skew0\vec r}_{\rm {p}}}{\left\vert\psi_{\rm {p}}\right\rangle}
\equiv \int e^{-{\rm i}{\vec k}\cdot{\skew0\vec r}_{\rm {p}}}
\psi_{\rm {p}}^*({\skew0\vec r}_{\rm {p}})\psi_{\rm {p}}({\skew0\vec r}_{\rm {p}}) {\,\rm d}^3{\skew0\vec r}_{\rm {p}}
\end{displaymath}

Note that the in­te­gra­tion vari­able over the spo­ton wave func­tion has been re­named ${\skew0\vec r}_{\rm {p}}$ to avoid con­fu­sion with the po­si­tion ${\skew0\vec r}$ at which the po­ten­tial is eval­u­ated. The above re­sult is re­ally bet­ter to work with in this sub­sec­tion, since it does not suf­fer from some con­ver­gence is­sues that the Green’s func­tion so­lu­tion has. And it is ex­act for a spo­ton wave func­tion that is some­what spread out.
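The behavior of this matrix element for a spread-out spoton can be illustrated in one dimension with a Gaussian $\vert\psi_{\rm {p}}\vert^2$; the wave number and the widths used below are arbitrary illustrative values. As the width shrinks to zero, the matrix element approaches 1 and the point-sarge result (A.118) is recovered.

```python
import numpy as np

k = 2.0                     # illustrative wave number
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
results = []
for sigma in (1.0, 0.3, 0.05):
    # normalized 1-D Gaussian |psi_p|^2 of standard deviation sigma
    rho = np.exp(-x**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))
    me = np.sum(np.exp(-1j * k * x) * rho) * dx   # <psi_p|e^{-ikx}|psi_p>
    results.append(me.real)    # analytic value: exp(-k^2 sigma^2 / 2)
print(results)
```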

Now the ob­jec­tive is to re­pro­duce this clas­si­cal re­sult us­ing a proper quan­tum field the­ory. And to find the force when a se­lec­ton is added to the sys­tem.

To do so, con­sider ini­tially a sys­tem of fo­tons and a sin­gle spo­ton. The spo­ton will be treated as a non­rel­a­tivis­tic par­ti­cle. Then its wave func­tion $\psi_{\rm {p}}$ de­scribes ex­actly one spo­ton. The spo­ton wave func­tion will be treated as given. Imag­ine some­thing keep­ing the spo­ton in a ground state squeezed around the ori­gin. Maxwell’s de­mon would work. He has not been do­ing much any­way af­ter he failed his thermo test.

Next the fo­tons. Their de­scrip­tion will be done based upon lin­ear mo­men­tum states. Such a state cor­re­sponds to a sin­gle-fo­ton wave func­tion of the form $e^{{\rm i}{\vec k}\cdot{\skew0\vec r}}$.

To keep it sim­ple, for now only a sin­gle mo­men­tum state will be con­sid­ered. In other words, only a sin­gle wave num­ber vec­tor ${\vec k}$ will be con­sid­ered. But there might be mul­ti­ple fo­tons in the state, or even un­cer­tainty in the num­ber of fo­tons.

Of course, at the end of the day the re­sults must still be summed over all val­ues of ${\vec k}$.

Some no­ta­tions are needed now. A sit­u­a­tion in which there are no fo­tons in the con­sid­ered state will be in­di­cated by the “Fock space ket” ${\left\vert\right\rangle}$. If there is one fo­ton in the state, it is in­di­cated by ${\left\vert 1\right\rangle}$, two by ${\left\vert 2\right\rangle}$, etcetera. In the math­e­mat­ics of quan­tum field the­ory, kets are taken to be or­tho­nor­mal, {A.15}:

\begin{displaymath}
{\left\langle i_1\hspace{0.3pt}\right\vert}{\left\vert i_2\right\rangle}
= \left\{\begin{array}{l}
1\mbox{ if }i_1=i_2 \\ 0\mbox{ otherwise}\end{array} \right. %
\end{displaymath} (A.120)

In words, the in­ner prod­uct of kets is 0 un­less the num­bers of fo­tons are equal. Then it is 1.

The ground state wave func­tion for the com­bined spo­ton-fo­tons sys­tem is then as­sumed to be of the form

\begin{displaymath}
\psi_{\varphi\rm {p}} = C_0 \psi_{\rm {p}} {\left\vert\right\rangle}
+ C_1 \psi_{\rm {p}} {\left\vert 1\right\rangle}
+ C_2 \psi_{\rm {p}} {\left\vert 2\right\rangle} + \ldots
\qquad
\vert C_0\vert^2 + \vert C_1\vert^2 + \vert C_2\vert^2 + \ldots = 1\; %
\end{displaymath} (A.121)

That is a lin­ear com­bi­na­tion of sys­tem states with 0, 1, 2, ... fo­tons. So it is as­sumed that there may be un­cer­tainty about the num­ber of fo­tons in the con­sid­ered fo­ton state. The nor­mal­iza­tion con­di­tion for the con­stants ex­presses that the to­tal prob­a­bil­ity of find­ing the sys­tem in some state or the other is 1.

(It may be noted that in typical quantum field theories, a charged relativistic particle would also be described in terms of kets and some quantum field $\widehat\psi$. However, unlike for a photon, for a charged particle $\widehat\psi$ would normally be a complex quantum field. Then $\widehat\psi^*\widehat\psi$ or something along these lines provides a real probability for observing the particle. That resembles the Born interpretation of the nonrelativistic wave function somewhat, especially for a spinless particle. Compare [[17, pp. 49, 136, 144]]. The field $\widehat\psi$ will describe both the particle and its oppositely charged antiparticle. The spoton wave function $\psi_{\rm {p}}$ as used here represents some nonrelativistic limit in which the antiparticle has been approximated away from the field, [[17, pp. 41-45]]. Such a nonrelativistic limit simply does not exist for a real scalar field like the Koulomb one.)

Now, of course, the Hamil­ton­ian is needed. The Hamil­ton­ian de­ter­mines the en­ergy. It con­sists of three parts:

\begin{displaymath}
H = H_{\rm {p}} + H_\varphi + H_{\varphi\rm {p}}
\end{displaymath}

The first part is the Hamil­ton­ian for the spo­ton in iso­la­tion. It con­sists of the ki­netic en­ergy of the spo­ton, as well as the po­ten­tial pro­vided by the fin­gers of the de­mon. By de­f­i­n­i­tion
\begin{displaymath}
H_{\rm {p}} \psi_{\rm {p}} = E_{\rm {p}} \psi_{\rm {p}} %
\end{displaymath} (A.122)

where $E_{\rm {p}}$ is the en­ergy of the spo­ton in iso­la­tion.

The sec­ond part is the Hamil­ton­ian of the free fo­ton field. Each fo­ton in the con­sid­ered state should have an en­ergy $\hbar\omega$ with $\omega$ $\vphantom0\raisebox{1.5pt}{$=$}$ $kc$. That is the en­ergy that you get if you sub­sti­tute the mo­men­tum eigen­func­tion $e^{{\rm i}{\vec k}\cdot{\skew0\vec r}}$ into the Klein-Gor­don eigen­value prob­lem (A.101) for a mass­less par­ti­cle. And if one fo­ton has an en­ergy $\hbar\omega$, then $i$ of them should have en­ergy $i\hbar\omega$, so

\begin{displaymath}
H_\varphi {\left\vert i\right\rangle} = i \hbar\omega {\left\vert i\right\rangle} %
\end{displaymath} (A.123)

Note that spec­i­fy­ing what the Hamil­ton­ian does to each sep­a­rate ket tells you all you need to know about it. (Of­ten there is an ad­di­tional ground state en­ergy shown in the above ex­pres­sion, but that does not make a dif­fer­ence here. It re­flects the choice of the zero of en­ergy.)

Fi­nally, the third part of the to­tal Hamil­ton­ian is the in­ter­ac­tion be­tween the spo­ton and the fo­ton field. This is the tricky one. First re­call the clas­si­cal ex­pres­sion for the in­ter­ac­tion en­ergy. Ac­cord­ing to the pre­vi­ous sub­sec­tion, (A.111), it was $-s_{\rm {p}}\varphi_{\rm {p}}$. Here $\varphi_{\rm {p}}$ was the clas­si­cal fo­ton po­ten­tial, eval­u­ated at the po­si­tion of the spo­ton.

In quan­tum field the­ory, the ob­serv­able field $\varphi$ gets re­placed by a quan­tum field $\widehat\varphi$. The in­ter­ac­tion Hamil­ton­ian then be­comes

\begin{displaymath}
H_{\varphi\rm {p}} = - s_{\rm {p}} \widehat\varphi_{\rm {p}}
\end{displaymath} (A.124)

This Hamil­ton­ian needs to op­er­ate on the wave func­tion (A.121) in­volv­ing the spo­ton wave func­tion and Fock space kets for the fo­tons. The big ques­tion is now: what is that quan­tum field $\widehat\varphi$?

To an­swer that, first note that sarged par­ti­cles can cre­ate and de­stroy fo­tons. The above in­ter­ac­tion Hamil­ton­ian must ex­press that some­how. Af­ter all, it is the Hamil­ton­ian that de­ter­mines the time evo­lu­tion of sys­tems in quan­tum me­chan­ics.

Now in quantum field theories, creation and destruction of particles are accounted for through creation and annihilation operators, {A.15}. A creation operator $\widehat a^\dagger _{\vec k}$ creates a single particle in a momentum state $e^{{\rm i}{\vec k}\cdot{\skew0\vec r}}$. An annihilation operator $\widehat a_{\vec k}$ annihilates a single particle from such a state. More precisely, the operators are defined as

\begin{displaymath}
\widehat a_{\vec k}{\left\vert i\right\rangle} = \sqrt{i} {\left\vert i{-}1\right\rangle}
\qquad
\widehat a^\dagger _{\vec k}{\left\vert i{-}1\right\rangle} = \sqrt{i} {\left\vert i\right\rangle} %
\end{displaymath} (A.125)

Here ${\left\vert i\right\rangle}$ is the Fock-space ket that in­di­cates that there are $i$ fo­tons in the con­sid­ered mo­men­tum state. Ex­cept for the nu­mer­i­cal fac­tor $\sqrt{i}$, the an­ni­hi­la­tion op­er­a­tor takes a fo­ton out of the state. The cre­ation op­er­a­tor puts it back in, adding an­other nu­mer­i­cal fac­tor $\sqrt{i}$.
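If you want to see (A.125) in action, the operators can be represented as matrices on an artificially truncated Fock space. This is only an illustrative sketch; the cutoff nmax is not physical, and the matrix picture is not how the book proceeds.

```python
import numpy as np

# Matrices of the annihilation and creation operators for one momentum mode
# on a truncated Fock space. The cutoff nmax is purely for illustration.
nmax = 6
a = np.diag(np.sqrt(np.arange(1, nmax)), k=1)   # a |i> = sqrt(i) |i-1>
adag = a.T                                       # creation operator: the adjoint

def ket(i):
    # Fock-space ket |i>: i fotons in the considered momentum state
    return np.eye(nmax)[:, i]

assert np.allclose(a @ ket(3), np.sqrt(3) * ket(2))      # takes a foton out
assert np.allclose(adag @ ket(2), np.sqrt(3) * ket(3))   # puts it back in
```

The superdiagonal entries $\sqrt{i}$ are exactly the numerical factors in (A.125).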

Note in­ci­den­tally that the fo­ton field Hamil­ton­ian given ear­lier can now be rewrit­ten as

\begin{displaymath}
H_\varphi = \hbar\omega \widehat a^\dagger _{\vec k}\widehat a_{\vec k} %
\end{displaymath} (A.126)

That is be­cause

\begin{displaymath}
\hbar\omega \widehat a^\dagger _{\vec k}\widehat a_{\vec k} {\left\vert i\right\rangle}
= \hbar\omega \sqrt{i}\,\widehat a^\dagger _{\vec k} {\left\vert i{-}1\right\rangle}
= \hbar\omega \sqrt{i}\sqrt{i}\, {\left\vert i\right\rangle}
= i \hbar\omega {\left\vert i\right\rangle}
= H_\varphi {\left\vert i\right\rangle}
\end{displaymath}

In gen­eral this Hamil­ton­ian will still need to be summed over all val­ues of ${\vec k}$.
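As a quick numerical sanity check on (A.126), in the truncated matrix picture sketched above, $\hbar\omega\,\widehat a^\dagger_{\vec k}\widehat a_{\vec k}$ is diagonal with entries $i\hbar\omega$; the units below are an arbitrary choice for illustration.

```python
import numpy as np

nmax = 6
hbar_omega = 1.0                                 # units chosen so hbar*omega = 1
a = np.diag(np.sqrt(np.arange(1, nmax)), k=1)    # annihilation operator matrix

# Foton-field Hamiltonian for one mode, H = hbar*omega adag a, as in (A.126)
H = hbar_omega * a.T @ a

# H is diagonal with entries i*hbar*omega: i fotons carry energy i*hbar*omega
assert np.allclose(np.diag(H), hbar_omega * np.arange(nmax))
```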

But surely, the cre­ation and an­ni­hi­la­tion of par­ti­cles should also de­pend on where the spo­ton is. Fo­tons in the con­sid­ered state have a spa­tially vary­ing wave func­tion. That should be re­flected in the quan­tum field $\widehat\varphi$ some­how. To find the cor­rect ex­pres­sion, it is eas­i­est to first per­form a suit­able nor­mal­iza­tion of the fo­ton state. Now the full wave func­tion cor­re­spond­ing to the sin­gle-fo­ton mo­men­tum eigen­state in empty space is

\begin{displaymath}
\varphi_{\rm {f}} = C e^{-{\rm i}\omega t} e^{{\rm i}{\vec k}\cdot{\skew0\vec r}}
\end{displaymath}

Here $C$ is some normalization constant to be chosen. The above wave function can be verified by putting it into the Klein-Gordon equation (A.102). The energy of the foton is given in terms of its wave function above as $\hbar\omega$. But the energy in the foton field is also related to the observable field $\varphi$; classical selectostatics gives that relation as (A.103). If you plug the foton wave function above into that classical expression, you do not normally get the correct energy $\hbar\omega$. There is no need for it; the foton wave function is not observable. However, it makes things simpler if you choose $C$ so that the classical energy does equal $\hbar\omega$. That gives an energy-normalized wave function
\begin{displaymath}
\varphi_{\vec k}= \frac{\varepsilon_k}{k} e^{{\rm i}{\vec k}\cdot{\skew0\vec r}}
\qquad
\varepsilon_k \equiv
\sqrt{\frac{\hbar\omega}{\epsilon_1{\cal V}}} %
\end{displaymath} (A.127)

In those terms, the needed quan­tum field turns out to be

\begin{displaymath}
\widehat\varphi
= \frac{1}{\sqrt{2}} (\varphi_{\vec k}\,\widehat a_{\vec k}
+ \varphi_{\vec k}^*\,\widehat a^\dagger_{\vec k})
\qquad
\varepsilon_k \equiv
\sqrt{\frac{\hbar\omega}{\epsilon_1{\cal V}}} %
\end{displaymath} (A.128)

The first term in the right hand side is the nor­mal­ized sin­gle-fo­ton wave func­tion at wave num­ber ${\vec k}$ times the cor­re­spond­ing an­ni­hi­la­tion op­er­a­tor. The sec­ond term is the com­plex-con­ju­gate fo­ton wave func­tion times the cre­ation op­er­a­tor. There is also the usual fac­tor 1$\raisebox{.5pt}{$/$}$$\sqrt{2}$ that ap­pears when you take a lin­ear com­bi­na­tion of two states.

You might of course won­der about that sec­ond term. Math­e­mat­i­cally it is needed to make the op­er­a­tor Her­mit­ian. Re­call that op­er­a­tors in quan­tum me­chan­ics need to be Her­mit­ian to en­sure that ob­serv­able quan­ti­ties have real, rather than com­plex val­ues, chap­ter 2.6. To check whether an op­er­a­tor is Her­mit­ian, you need to check that it is un­changed when you take it to the other side of an in­ner prod­uct. Now the wave func­tion is a nu­mer­i­cal quan­tity that changes into its com­plex con­ju­gate when taken to the other side. And $\widehat a$ changes into $\widehat a^\dagger $ and vice-versa when taken to the other side, {A.15.2}. So each term in $\widehat\varphi$ changes into the other one, leav­ing the sum un­changed. So the op­er­a­tor as shown is in­deed Her­mit­ian.
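The Hermiticity argument can be checked concretely in the truncated matrix picture; the value phi_k below is an arbitrary sample stand-in for the wave function value, since Hermiticity does not depend on it.

```python
import numpy as np

nmax = 8
a = np.diag(np.sqrt(np.arange(1, nmax)), k=1)    # annihilation operator matrix

# phi_k: a sample complex value standing in for the energy-normalized foton
# wave function at one point; its actual value does not matter here.
phi_k = 0.3 - 0.7j
phi_hat = (phi_k * a + np.conj(phi_k) * a.conj().T) / np.sqrt(2)

# Each term changes into the other under conjugate transposition,
# so the sum is unchanged: the field operator is Hermitian.
assert np.allclose(phi_hat, phi_hat.conj().T)
```

Dropping the creation-operator term would make `phi_hat` non-Hermitian, which is the mathematical point of the paragraph above.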

But what to make phys­i­cally of the two terms? One way of think­ing about it is that the ob­served field is real be­cause it does not just in­volve an in­ter­ac­tion with an $e^{{\rm i}({\vec k}\cdot{\skew0\vec r}-{\omega}t)}$ fo­ton, but also with an $e^{-{\rm i}({\vec k}\cdot{\skew0\vec r}-{\omega}t)}$ an­tifo­ton.

In gen­eral, the quan­tum field above would still need to be summed over all wave num­bers ${\vec k}$. (Or in­te­grated over ${\vec k}$ in in­fi­nite space). It may be noted that for given ${\skew0\vec r}$ the sum of the cre­ation op­er­a­tor terms over all ${\vec k}$ can be un­der­stood as a field op­er­a­tor that cre­ates a par­ti­cle at po­si­tion ${\skew0\vec r}$, [34, p. 24]. That is a slightly dif­fer­ent de­f­i­n­i­tion of the cre­ation field op­er­a­tor than given in {A.15.9}, [42, pp. 22]. But for non­rel­a­tivis­tic par­ti­cles (which have nonzero rest mass) it would not make a dif­fer­ence.

With the quantum field $\widehat\varphi$ now identified, the Hamiltonian of the spoton-fotons interaction finally becomes

\begin{displaymath}
H_{\varphi\rm {p}} = - s_{\rm {p}} \widehat\varphi_{\rm {p}}
= - \frac{s_{\rm {p}}}{\sqrt{2}}
\left(\varphi_{\vec k}({\skew0\vec r}_{\rm {p}})\,\widehat a_{\vec k}
+ \varphi_{\vec k}^*({\skew0\vec r}_{\rm {p}})\,\widehat a^\dagger_{\vec k}\right)
\qquad
\varepsilon_k \equiv
\sqrt{\frac{\hbar\omega}{\epsilon_1{\cal V}}} %
\end{displaymath} (A.129)

Note that the spoton has uncertainty in position. The spoton position in the Hamiltonian above is just a possible spoton position. In use it will still get multiplied by the squared spoton wave function magnitude that gives the probability for that position. Still, at face value the interaction of the spoton with the field takes place at the location of the spoton. Interactions in quantum field theories are “local.” At least on macroscopic scales that is needed to satisfy the limitation of the speed of light.

Hav­ing a Hamil­ton­ian al­lows quan­tum se­lec­to­dy­nam­ics to be ex­plored. That will be done to some de­tail for the case of the elec­tro­mag­netic field in sub­se­quent ad­denda. How­ever, here the only ques­tion that will be ad­dressed is whether clas­si­cal se­lec­to­sta­t­ics as de­scribed in the first sub­sec­tion was cor­rect. In par­tic­u­lar, do equal sarges still at­tract in the quan­tum de­scrip­tion?

Se­lec­to­sta­t­ics of the spo­ton-fo­tons sys­tem should cor­re­spond to the ground state of the sys­tem. The ground state has the low­est pos­si­ble en­ergy. You can there­fore find the ground state by find­ing the state of low­est pos­si­ble sys­tem en­ergy. That is the same trick as was used to find the ground states of the hy­dro­gen mol­e­c­u­lar ion and the hy­dro­gen mol­e­cule in chap­ters 4.6 and 5.2. The ex­pec­ta­tion value of the sys­tem en­ergy is de­fined by the in­ner prod­uct

\begin{displaymath}
\left\langle{E}\right\rangle = {\left\langle\psi_{\varphi\rm {p}}\right\vert}
\left(H_{\rm {p}} + H_\varphi + H_{\varphi\rm {p}}\right)
{\left\vert\psi_{\varphi\rm {p}}\right\rangle}
\end{displaymath}

Here the Hamil­to­ni­ans have al­ready been de­scribed above.

Now to find the ground state, the low­est pos­si­ble value of the ex­pec­ta­tion en­ergy above is needed. To get that, the in­ner prod­ucts be­tween the kets in the fac­tors $\psi_{\varphi\rm {p}}$ must be mul­ti­plied out. First ap­ply the Hamil­to­ni­ans (A.122), (A.123), and (A.129) on the wave func­tion $\psi_{\varphi\rm {p}}$, (A.121), us­ing (A.125). Then ap­ply the or­thog­o­nal­ity re­la­tions (A.120) of kets. Do not for­get the com­plex con­ju­gate on the left side of an in­ner prod­uct. That pro­duces

\begin{eqnarray*}
\left\langle{E}\right\rangle & = & E_{\rm {p}} \\
&& \mbox{} + \hbar\omega
\left(\vert C_1\vert^2 + 2 \vert C_2\vert^2 + \ldots\right) \\
&& \mbox{} - \frac{s_{\rm {p}}\varepsilon_k}{\sqrt{2} k}
{\left\langle\psi_{\rm {p}}\right\vert}
e^{{\rm i}{\vec k}\cdot{\skew0\vec r}_{\rm {p}}}
{\left\vert\psi_{\rm {p}}\right\rangle}
\left(C_0^* C_1 + \sqrt{2}C_1^* C_2 + \ldots\right) \\
&& \mbox{} - \frac{s_{\rm {p}}\varepsilon_k}{\sqrt{2} k}
{\left\langle\psi_{\rm {p}}\right\vert}
e^{-{\rm i}{\vec k}\cdot{\skew0\vec r}_{\rm {p}}}
{\left\vert\psi_{\rm {p}}\right\rangle}
\left(C_1^* C_0 + \sqrt{2}C_2^* C_1 + \ldots\right)
\end{eqnarray*}

Here the dots stand for terms in­volv­ing the co­ef­fi­cients $C_3,C_4,\ldots$ of states with three or more fo­tons.

Note that the first term in the right hand side above is the en­ergy $E_{\rm {p}}$ of the spo­ton by it­self. That term is a given con­stant. The ques­tion is what fo­ton states pro­duce the low­est en­ergy for the re­main­ing terms. The an­swer is easy if the spo­ton sarge $s_{\rm {p}}$ is zero. Then the terms in the last two lines are zero. So the sec­ond line shows that $C_1,C_2,\ldots$ must all be zero. Then there are no fo­tons; only the state with zero fo­tons is then left in the sys­tem wave func­tion (A.121).

If the spo­ton sarge is nonzero how­ever, the in­ter­ac­tion terms in the last two lines can lower the en­ergy for suit­able nonzero val­ues of the con­stants $C_1$, $C_2$, .... To sim­plify mat­ters, it will be as­sumed that the spo­ton sarge is nonzero but small. Then so will be the con­stants. In that case only the $C_1$ terms need to be con­sid­ered; the other terms in the last two lines in­volve the prod­uct of two small con­stants, and those can­not com­pete. Fur­ther the nor­mal­iza­tion con­di­tion in (A.121) shows that $\vert C_0\vert$ will be ap­prox­i­mately 1 since even $C_1$ is small. Then $C_0$ may be as­sumed to be 1, be­cause any eigen­func­tion is in­de­ter­mi­nate by a fac­tor of mag­ni­tude 1 any­way.

Fur­ther, since any com­plex num­ber may al­ways be writ­ten as its mag­ni­tude times some ex­po­nen­tial of mag­ni­tude 1, the sec­ond last line of the en­ergy above can be writ­ten as

\begin{displaymath}
- \frac{s_{\rm {p}}\varepsilon_k}{\sqrt{2} k}
\Big\vert {\left\langle\psi_{\rm {p}}\right\vert}
e^{{\rm i}{\vec k}\cdot{\skew0\vec r}_{\rm {p}}}
{\left\vert\psi_{\rm {p}}\right\rangle}
\Big\vert e^{{\rm i}\alpha}\; \vert C_1\vert e^{{\rm i}\beta}
\end{displaymath}

Re­plac­ing ${\rm i}$ every­where by $\vphantom{0}\raisebox{1.5pt}{$-$}$${\rm i}$ gives the cor­re­spond­ing ex­pres­sion for the last line. The com­plete ex­pres­sion for the en­ergy then be­comes

\begin{eqnarray*}
E & = & E_{\rm {p}} + \vert C_1\vert^2 \hbar \omega \\
&& \mbox{} - \frac{s_{\rm {p}}\varepsilon_k}{\sqrt{2} k}
\Big\vert {\left\langle\psi_{\rm {p}}\right\vert}
e^{{\rm i}{\vec k}\cdot{\skew0\vec r}_{\rm {p}}}
{\left\vert\psi_{\rm {p}}\right\rangle}
\Big\vert \vert C_1\vert e^{{\rm i}(\alpha+\beta)} \\
&& \mbox{} - \frac{s_{\rm {p}}\varepsilon_k}{\sqrt{2} k}
\Big\vert {\left\langle\psi_{\rm {p}}\right\vert}
e^{-{\rm i}{\vec k}\cdot{\skew0\vec r}_{\rm {p}}}
{\left\vert\psi_{\rm {p}}\right\rangle}
\Big\vert \vert C_1\vert e^{-{\rm i}(\alpha+\beta)}
\end{eqnarray*}

In the last term, note that the sign of ${\rm i}$ in­side an ab­solute value does not make a dif­fer­ence. Us­ing the Euler for­mula (2.5) on the trail­ing ex­po­nen­tials gives

\begin{displaymath}
E = E_{\rm {p}} +
\vert C_1\vert^2 \hbar \omega - 2 \frac{s_{\rm {p}}\varepsilon_k}{\sqrt{2} k}
\Big\vert {\left\langle\psi_{\rm {p}}\right\vert}
e^{{\rm i}{\vec k}\cdot{\skew0\vec r}_{\rm {p}}}
{\left\vert\psi_{\rm {p}}\right\rangle}
\Big\vert\vert C_1\vert\cos(\alpha+\beta)
\end{displaymath}

But in the ground state, the en­ergy should be min­i­mal. Clearly, that re­quires that the co­sine is at its max­i­mum value 1. So it re­quires tak­ing $C_1$ so that $\beta$ $\vphantom0\raisebox{1.5pt}{$=$}$ $\vphantom{0}\raisebox{1.5pt}{$-$}$$\alpha$.

That still leaves the magnitude $\vert C_1\vert$ to be found. Note that the final terms in the expression above are now of the generic form

\begin{displaymath}
a \vert C_1\vert^2 - 2 b \vert C_1\vert
\end{displaymath}

where $a$ and $b$ are pos­i­tive con­stants. That is a qua­dratic func­tion of $\vert C_1\vert$. By dif­fer­en­ti­a­tion, it is seen that the min­i­mum oc­curs at $\vert C_1\vert$ $\vphantom0\raisebox{1.5pt}{$=$}$ $b$$\raisebox{.5pt}{$/$}$$a$ and has a value $\vphantom{0}\raisebox{1.5pt}{$-$}$$b^2$$\raisebox{.5pt}{$/$}$$a$. Putting in what $a$ and $b$ are then gives

\begin{displaymath}
C_1 = \frac{s_{\rm {p}}\varepsilon_k}{\sqrt{2} k \hbar\omega}
\Big\vert {\left\langle\psi_{\rm {p}}\right\vert}
e^{{\rm i}{\vec k}\cdot{\skew0\vec r}_{\rm {p}}}
{\left\vert\psi_{\rm {p}}\right\rangle}
\Big\vert e^{-{\rm i}\alpha}
\qquad
E - E_{\rm {p}} =
- \frac{s_{\rm {p}}^2\varepsilon_k^2}{2 k^2 \hbar\omega}
\vert{\left\langle\psi_{\rm {p}}\right\vert}
e^{{\rm i}{\vec k}\cdot{\skew0\vec r}_{\rm {p}}}
{\left\vert\psi_{\rm {p}}\right\rangle}\vert^2
\end{displaymath}

The sec­ond part of the en­ergy is the en­ergy low­er­ing achieved by hav­ing a small prob­a­bil­ity $\vert C_1\vert^2$ of a sin­gle fo­ton in the con­sid­ered mo­men­tum state.
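The minimization of the generic quadratic $a\vert C_1\vert^2-2b\vert C_1\vert$ can be verified numerically; the values of $a$ and $b$ below are arbitrary sample positive constants.

```python
import numpy as np

# Generic quadratic a|C1|^2 - 2 b|C1| with sample positive constants.
a, b = 1.7, 0.4
x = np.linspace(0.0, 1.0, 100001)   # candidate values of |C1|
f = a * x**2 - 2 * b * x
i = np.argmin(f)

assert np.isclose(x[i], b / a, atol=1e-4)       # minimum at |C1| = b/a
assert np.isclose(f[i], -b**2 / a, atol=1e-6)   # lowest value is -b^2/a
```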

This en­ergy-low­er­ing still needs to be summed over all states ${\vec k}$ to get the to­tal:

\begin{displaymath}
E - E_{\rm {p}} = - \sum_{{\vec k}} \frac{s_{\rm {p}}^2}{2\epsilon_1{\cal V}k^2}
{\left\langle\psi_{\rm {p}}\right\vert}
e^{{\rm i}{\vec k}\cdot{\skew0\vec r}_{\rm {p}}}
{\left\vert\psi_{\rm {p}}\right\rangle}
{\left\langle\psi_{\rm {p}}\right\vert}
e^{-{\rm i}{\vec k}\cdot{\skew0\vec r}}
{\left\vert\psi_{\rm {p}}\right\rangle}
\end{displaymath}

Note that the fi­nal two in­ner prod­ucts rep­re­sent sep­a­rate in­te­gra­tions. There­fore to avoid con­fu­sion, the sub­script p was dropped from one in­te­gra­tion vari­able. In the sum, the clas­si­cal field (A.119) can now be rec­og­nized:

\begin{displaymath}
E - E_{\rm {p}} =
- \frac{s_{\rm {p}}}{2}
{\left\langle\psi_{\rm {p}}\right\vert} \varphi_{\rm {p,cl}}
{\left\vert\psi_{\rm {p}}\right\rangle}
= - \frac{s_{\rm {p}}}{2}
\int \psi_{\rm {p}}^*({\skew0\vec r})\,
\varphi_{\rm {p,cl}}({\skew0\vec r})\psi_{\rm {p}}({\skew0\vec r}) {\,\rm d}^3{\skew0\vec r}
\end{displaymath}

Ig­nor­ing the dif­fer­ences in no­ta­tion, the en­ergy low­er­ing is ex­actly the same as (A.108) found in the clas­si­cal analy­sis. The clas­si­cal analy­sis, while not re­ally jus­ti­fied, did give the right an­swer.

How­ever now an ac­tual pic­ture of the quan­tum ground state has been ob­tained. It is a quan­tum su­per­po­si­tion of sys­tem states. The most likely state is the one where there are no fo­tons at all. But there are also small prob­a­bil­i­ties for sys­tem states where there is a sin­gle fo­ton in a sin­gle lin­ear mo­men­tum fo­ton state. This pic­ture does as­sume that the spo­ton sarge is small. If that was not true, things would get much more dif­fi­cult.

An­other ques­tion is whether the ob­serv­able val­ues of the fo­ton po­ten­tial are the same as those ob­tained in the clas­si­cal analy­sis. This is ac­tu­ally a trick ques­tion be­cause even the clas­si­cal fo­ton po­ten­tial is not ob­serv­able. There is still an un­de­ter­mined con­stant in it. What is ob­serv­able are the de­riv­a­tives of the po­ten­tial: they give the ob­serv­able se­lec­tic force per unit sarge on sarged par­ti­cles.

Now, in terms of mo­men­tum modes, the de­riv­a­tives of the clas­si­cal po­ten­tial can be found by dif­fer­en­ti­at­ing (A.119). That gives

\begin{displaymath}
\varphi_i\strut_{\rm {p,cl}} = \sum_{{\rm all\ }{\vec k}}
\frac{s_{\rm {p}}}{\epsilon_1{\cal V}k^2}
{\left\langle\psi_{\rm {p}}\right\vert}
e^{-{\rm i}{\vec k}\cdot{\skew0\vec r}_{\rm {p}}}
{\left\vert\psi_{\rm {p}}\right\rangle}
{\rm i}k_i e^{{\rm i}{\vec k}\cdot{\skew0\vec r}}
\end{displaymath}

Re­call again the con­ven­tion in­tro­duced in the pre­vi­ous sub­sec­tion that a sub­script $i$ on $\varphi$ in­di­cates the de­riv­a­tive $\partial$$\raisebox{.5pt}{$/$}$$\partial{r}_i$, where $r_1$, $r_2$, and $r_3$ cor­re­spond to $x$, $y$, and $z$ re­spec­tively. So the above ex­pres­sion gives the se­lec­tic force per unit sarge in the $x$, $y$, or $z$ di­rec­tion, de­pend­ing on whether $i$ is 1, 2, or 3.

The ques­tion is now whether the quan­tum analy­sis pre­dicts the same ob­serv­able forces. Un­for­tu­nately, the an­swer here is no. The ob­serv­able forces have quan­tum un­cer­tainty that the clas­si­cal analy­sis missed. How­ever, the Ehren­fest the­o­rem of chap­ter 7.2.1 sug­gests that the ex­pec­ta­tion forces should still match the clas­si­cal ones above.

The quan­tum ex­pec­ta­tion force per unit sarge in the $i$-​di­rec­tion is given by

\begin{displaymath}
\left\langle{\varphi_i}\right\rangle =
{\left\langle\Psi_{\varphi\rm {p}}\right\vert}
\frac{\partial\widehat\varphi}{\partial r_i}
{\left\vert\Psi_{\varphi\rm {p}}\right\rangle}
\qquad
\Psi_{\varphi\rm {p}} = e^{-{\rm i}E t/\hbar} \psi_{\varphi\rm {p}}
\end{displaymath}

Here $E$ is the ground state en­ergy. Note that in this case the full, time-de­pen­dent wave func­tion $\Psi_{\varphi\rm {p}}$ is used. That is done be­cause in prin­ci­ple an ob­served field could vary in time as well as in space. Sub­sti­tut­ing in the $r_i$-​de­riv­a­tive of the quan­tum field (A.128) gives

\begin{displaymath}
\left\langle{\varphi_i}\right\rangle = \frac{\varepsilon_k}{\sqrt{2} k}
{\left\langle\psi_{\varphi\rm {p}}\right\vert}
\left({\rm i}k_i e^{{\rm i}{\vec k}\cdot{\skew0\vec r}}\,\widehat a_{\vec k}
- {\rm i}k_i e^{-{\rm i}{\vec k}\cdot{\skew0\vec r}}\,\widehat a^\dagger_{\vec k}\right)
{\left\vert\psi_{\varphi\rm {p}}\right\rangle}
\end{displaymath}

Note that here ${\skew0\vec r}$ is not a pos­si­ble po­si­tion of the spo­ton, but a given po­si­tion at which the se­lec­tic force per unit sarge is to be found. Also note that the time de­pen­dent ex­po­nen­tials have dropped out against each other; the ex­pec­ta­tion forces are steady like for the clas­si­cal field.

The above expression can be multiplied out as before. Using the obtained expression for $C_1$, and the fact that ${\left\langle\psi_{\rm {p}}\right.\hspace{-\nulldelimiterspace}}{\left\vert\psi_{\rm {p}}\right\rangle}$ $\vphantom0\raisebox{1.5pt}{$=$}$ 1 because wave functions are normalized, that gives

\begin{displaymath}
\left\langle{\varphi_i}\right\rangle =
\frac{s_{\rm {p}}}{2\epsilon_1{\cal V}k^2}
\left(
{\left\langle\psi_{\rm {p}}\right\vert}
e^{-{\rm i}{\vec k}\cdot{\skew0\vec r}_{\rm {p}}}
{\left\vert\psi_{\rm {p}}\right\rangle}
{\rm i}k_i e^{{\rm i}{\vec k}\cdot{\skew0\vec r}}
- {\left\langle\psi_{\rm {p}}\right\vert}
e^{{\rm i}{\vec k}\cdot{\skew0\vec r}_{\rm {p}}}
{\left\vert\psi_{\rm {p}}\right\rangle}
{\rm i}k_i e^{-{\rm i}{\vec k}\cdot{\skew0\vec r}}
\right)
\end{displaymath}

Summed over all ${\vec k}$, the two terms in the right hand side pro­duce the same re­sult, be­cause op­po­site val­ues of ${\vec k}$ ap­pear equally in the sum­ma­tion. In other words, for every ${\vec k}$ term in the first sum, there is a $\vphantom{0}\raisebox{1.5pt}{$-$}$${\vec k}$ term in the sec­ond sum that pro­duces the same value. And that then means that the ex­pec­ta­tion se­lec­tic forces are the same as the clas­si­cal ones. The clas­si­cal analy­sis got that right, too.
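The pairing of opposite wave numbers can be checked numerically in a one-dimensional sketch. Here g(k) is a stand-in for the inner-product factor involving the spoton wave function, and the sampled distribution and positions are arbitrary; the only property used is that g(-k) is the complex conjugate of g(k), which holds for any real probability distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
r_p = rng.standard_normal(10000)      # sample spoton position distribution (1D)

def g(k):
    # stand-in for the inner-product factor <psi_p| e^{-i k r_p} |psi_p>
    return np.mean(np.exp(-1j * k * r_p))

ks = np.array([0.5, 1.2, 2.0])
ks = np.concatenate([ks, -ks])        # opposite values of k appear equally
x = 0.37                              # position where the force is evaluated

sum1 = sum(g(k) * 1j * k * np.exp(1j * k * x) for k in ks)
sum2 = sum(np.conj(g(k)) * (-1j) * k * np.exp(-1j * k * x) for k in ks)

assert np.allclose(sum1, sum2)        # the two sums give the same result
assert abs(sum1.imag) < 1e-12         # so the expectation force is real
```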

To see that there re­ally is quan­tum un­cer­tainty in the forces, it suf­fices to look at the ex­pec­ta­tion square forces. If there was no un­cer­tainty in the forces, the ex­pec­ta­tion square forces would be just the square of the ex­pec­ta­tion forces. To see that that is not true, it is suf­fi­cient to sim­ply take the spo­ton sarge zero. Then the ex­pec­ta­tion field is zero too. But the ex­pec­ta­tion square field is given by

\begin{displaymath}
\left\langle{\varphi_i^2}\right\rangle = \frac{\varepsilon_k^2}{2 k^2}
{\left\langle\psi_{\varphi\rm {p}}\right\vert}
\left({\rm i}k_i e^{{\rm i}{\vec k}\cdot{\skew0\vec r}}\,\widehat a_{\vec k}
- {\rm i}k_i e^{-{\rm i}{\vec k}\cdot{\skew0\vec r}}\,\widehat a^\dagger_{\vec k}\right)^2
{\left\vert\psi_{\varphi\rm {p}}\right\rangle}
\qquad
\psi_{\varphi\rm {p}} = \psi_{\rm {p}} {\left\vert\right\rangle}
\end{displaymath}

Mul­ti­ply­ing this out gives

\begin{displaymath}
\left\langle{\varphi_i^2}\right\rangle = \frac{\varepsilon_k^2 k_i^2}{2 k^2} =
\frac{\hbar\omega k_i^2}{2\epsilon_1{\cal V}k^2}
\end{displaymath}

Since Planck’s con­stant is not zero, this is not zero ei­ther. So even with­out the spo­ton, a se­lec­tic force mea­sure­ment will give a ran­dom, but nonzero value. The av­er­age of a large num­ber of such force mea­sure­ments will be zero, but not the in­di­vid­ual mea­sure­ments.
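The zero mean but nonzero mean square can be reproduced in the truncated matrix picture. The number 0.6j below is an arbitrary stand-in for the mode function value at the measurement point; only its magnitude matters.

```python
import numpy as np

nmax = 8
a = np.diag(np.sqrt(np.arange(1, nmax)), k=1)    # annihilation operator matrix
vac = np.eye(nmax)[:, 0]                         # ground state: no fotons

# Sample Hermitian field operator; c is an arbitrary stand-in for the
# mode function value at the measurement point.
c = 0.6j
dphi = (c * a + np.conj(c) * a.conj().T) / np.sqrt(2)

mean = vac @ dphi @ vac            # expectation value: zero in vacuum
mean_sq = vac @ dphi @ dphi @ vac  # expectation square: nonzero

assert abs(mean) < 1e-12
assert np.isclose(mean_sq, abs(c)**2 / 2)        # half a foton's worth
```

Individual measurements of such an operator scatter around zero with a nonzero spread, which is the quantum uncertainty described above.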

The above ex­pres­sion can be com­pared with the cor­re­spond­ing $\vert\varphi_{{\vec k},i}\vert^2$ of a sin­gle fo­ton, as given by (A.127). That com­par­i­son in­di­cates that even in the ground state in empty space, there is still half a fo­ton of ran­dom field en­ergy left. Re­call now the Hamil­ton­ian (A.126) for the fo­ton field. Usu­ally, this Hamil­ton­ian would be de­fined as

\begin{displaymath}
H_\varphi = \sum_{{\vec k}} \hbar\omega (\widehat a^\dagger _{\vec k}\widehat a_{\vec k}+{\textstyle\frac{1}{2}})
\end{displaymath}

The ad­di­tional ${\textstyle\frac{1}{2}}$ ex­presses the half fo­ton of en­ergy left in the ground state. The ground state en­ergy does not change the dy­nam­ics. How­ever, it is phys­i­cally re­flected in ran­dom nonzero val­ues if the se­lec­tic field is mea­sured in vac­uum.

The bad news is that if you sum these ground state en­er­gies over all val­ues of ${\vec k}$, you get in­fi­nite en­ergy. The ex­act same thing hap­pens for the pho­tons of the elec­tro­mag­netic field. Quan­tum field the­o­ries are plagued by in­fi­nite re­sults; this “vac­uum en­ergy” is just a sim­ple ex­am­ple. What it re­ally means phys­i­cally is as yet not known ei­ther. More on this can be found in {A.23.4}.

The fi­nal is­sue to be ad­dressed is the at­trac­tion be­tween a spo­ton and a se­lec­ton. That can be an­swered by sim­ply adding the se­lec­ton to the spo­ton-fo­tons analy­sis above, {D.37.2}. The an­swer is that the spo­ton-se­lec­ton in­ter­ac­tion en­ergy is the same as found in the clas­si­cal analy­sis.

So equal sarges still at­tract.


A.22.4 Poin­caré and Ein­stein try to save the uni­verse

The Koulomb universe is a grim place. In selectodynamics, particles with the same sarge attract. So all selectons clump together into one gigantic ball. Assuming that spotons have the opposite sarge, they clump together into another big ball at the other end of the universe. But actually there is no justification to assume that spotons would have a different sarge from selectons. That then means that all matter clumps together into a single gigantic satom. A satom like that will form one gigantic, obscene, black hole. It is hardly conducive to the development of life as we know it.

Un­for­tu­nately, the Koulomb force is based on highly plau­si­ble, ap­par­ently pretty un­avoid­able as­sump­tions. The re­sult­ing force sim­ply makes sense. None of these things can be said about the Coulomb force.

But maybe, just maybe, the Koulomb jug­ger­naut can be tripped up by some le­gal tech­ni­cal­ity. Things like that have hap­pened be­fore.

Now in a time not really that very long ago, there lived a revolutionary of mathematics called Poincaré. Poincaré dreamt of countless shining stars that would sweep through a gigantic, otherwise dark universe. And around these stars there would be planets populated by living beings called observers. But if the stars all moved in random directions, with random speeds, then which star would be the one at rest? Which star would be the king around which the other stars danced? Poincaré thought long and hard about that problem. No! he thundered eventually: “It shall not be. I hereby proclaim that all stars are created equal. Any observer at any star can say at any given time that its star is at rest and that the other stars are moving. On penalty of death, nothing in physics may indicate that observer to be wrong.”

Now nearby lived a young physicist called Einstein who was very lazy. For example, he almost never bothered to write the proper summation symbols in his formulae. Of course, that made it difficult for him to find a well-paying job in some laboratory where they smash spotons into each other. Einstein ended up working in some patent office for little pay. But, fortunately for our story, working in a patent office did give Einstein a fine insight into legal technicalities.

First Ein­stein noted that the Procla­ma­tion of Poin­caré meant that ob­servers at dif­fer­ent stars had to dis­agree se­ri­ously about the lo­ca­tions and times of events. How­ever, it would not be com­plete chaos. The lo­ca­tions and times of events as per­ceived by dif­fer­ent ob­servers would still be re­lated. The re­la­tion would be a trans­for­ma­tion that the fa­mous physi­cist Lorentz had writ­ten down ear­lier, chap­ter 1.2.1 (1.6).

And the Procla­ma­tion of Poin­caré also im­plied that dif­fer­ent ob­servers had to agree about the same laws of physics. So the laws of physics should re­main the same when you change them from one ob­server to the next us­ing the Lorentz trans­for­ma­tion. Nowa­days we would say that the laws of physics should be “Lorentz in­vari­ant.” But at the time, Ein­stein did not want to use the name of Lorentz in vain.

Re­call now the clas­si­cal ac­tion prin­ci­ple of sub­sec­tion A.22.2. The so-called ac­tion in­te­gral had to be un­changed un­der small de­vi­a­tions from the cor­rect physics. The Procla­ma­tion of Poin­caré de­mands that all ob­servers must agree that the ac­tion is un­changed. If the ac­tion is un­changed for an ob­server at one star, but not for one at an­other star, then not all stars are cre­ated equal.

To see what that means re­quires a few fun­da­men­tal facts about spe­cial rel­a­tiv­ity, the the­ory of sys­tems in rel­a­tive mo­tion.

The Lorentz trans­for­ma­tion badly mixes up spa­tial po­si­tions and times of events as seen by dif­fer­ent ob­servers. To deal with that ef­fi­ciently, it is con­ve­nient to com­bine the three spa­tial co­or­di­nates and time into a sin­gle four-di­men­sion­al vec­tor, a four-vec­tor, chap­ter 1.2.4. Time be­comes the “ze­roth co­or­di­nate” that joins the three spa­tial co­or­di­nates. In var­i­ous no­ta­tions, the four-vec­tor looks like

\begin{displaymath}
\kern-1pt{\buildrel\raisebox{-1.5pt}[0pt][0pt]
{\hbox{\hspace{1pt}$\scriptscriptstyle\hookrightarrow$\hspace{0pt}}}\over r}
\kern-1.3pt
= \left(\begin{array}{c} \strut ct \\ \strut x^1 \\ \strut x^2 \\ \strut x^3\end{array}\right)
\equiv
\{x^\mu\}
\to
x^\mu
\end{displaymath}

First of all, note that the ze­roth co­or­di­nate re­ceives an ad­di­tional fac­tor $c$, the speed of light. That is to en­sure that it has units of length just like the other com­po­nents. It has al­ready been noted be­fore that the spa­tial co­or­di­nates $x$, $y$, and $z$ are in­di­cated by $r_1$, $r_2$, and $r_3$ in this ad­den­dum. That al­lows a generic com­po­nent to be in­di­cated by $r_i$ for $i$ $\vphantom0\raisebox{1.5pt}{$=$}$ 1, 2, or 3. Note also that curly brack­ets are a stan­dard math­e­mat­i­cal way of in­di­cat­ing a set or col­lec­tion. So $\{r_i\}$ stands for the set of all three $r_i$ val­ues; in other words, it stands for the com­plete po­si­tion vec­tor ${\skew0\vec r}$. That is the pri­mary no­ta­tion that will be used in this ad­den­dum.

However, in virtually any quantum field book, you will find four-vectors indicated by $x^\mu$. Here $\mu$ is an index that can have the values 0, 1, 2, or 3. (Except that some books make time the fourth component instead of the zeroth.) An $x^\mu$ by itself probably really means $\{x^\mu\}$, in other words, the complete four-vector. Physicists have trouble typing curly brackets, so they leave them away. When more than one index is needed, another Greek symbol will be used, like $x^\nu$. However, $x^i$ would stand for just the spatial components, so for the position vector $\{r_i\}$. The give-away here is that a roman superscript is used. Roman superscript $j$ would mean the same thing as $i$; the spatial components only.

There are sim­i­lar no­ta­tions for the de­riv­a­tives of a func­tion $f$:

\begin{displaymath}
\kern-1pt{\buildrel\raisebox{-1.5pt}[0pt][0pt]
{\hbox{\hspace{1pt}$\scriptscriptstyle\hookrightarrow$\hspace{0pt}}}\over\partial}
\kern-1.3pt f
= \left(\begin{array}{c} \strut f_t/c \\ \strut f_1 \\ \strut f_2 \\ \strut f_3\end{array}\right)
\equiv
\{\partial_\mu f\}
\to
\partial_\mu f
\end{displaymath}

Note again that time de­riv­a­tives in this ad­den­dum are in­di­cated by a sub­script $t$ and spa­tial de­riv­a­tives by a sub­script $i$ for $i$ $\vphantom0\raisebox{1.5pt}{$=$}$ 1, 2, or 3.

Quan­tum field books use $\partial_{\mu}f$ for de­riv­a­tives. They still have prob­lems with typ­ing curly brack­ets, so $\partial_{\mu}f$ by it­self prob­a­bly means the set of all four de­riv­a­tives. Sim­i­larly $\partial_if$ would prob­a­bly mean the spa­tial gra­di­ent $\nabla{f}$.

The fi­nal key fact to re­mem­ber about spe­cial rel­a­tiv­ity is:

In dot prod­ucts be­tween four-vec­tors, the prod­uct of the ze­roth com­po­nents gets a mi­nus sign.
Dot prod­ucts be­tween four-vec­tors are very im­por­tant be­cause all ob­servers agree about the val­ues of these dot prod­ucts. They are Lorentz in­vari­ant. (In non­rel­a­tivis­tic me­chan­ics, all ob­servers agree about the usual dot prod­ucts be­tween spa­tial vec­tors. That is no longer true at rel­a­tivis­tic speeds.)
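The Lorentz invariance of this dot product can be verified numerically for a boost along the $x$-axis; the sketch below assumes the standard matrix form of such a boost and uses arbitrary sample four-vectors.

```python
import numpy as np

def boost(beta):
    # Lorentz boost along the x-axis with speed v = beta*c, acting on (ct, x, y, z)
    g = 1.0 / np.sqrt(1.0 - beta**2)
    L = np.eye(4)
    L[0, 0] = L[1, 1] = g
    L[0, 1] = L[1, 0] = -g * beta
    return L

def minkowski_dot(u, v):
    # the product of the zeroth components gets a minus sign
    return -u[0] * v[0] + u[1:] @ v[1:]

rng = np.random.default_rng(2)
u, v = rng.standard_normal(4), rng.standard_normal(4)
L = boost(0.6)

# both observers agree about the dot product: it is Lorentz invariant
assert np.isclose(minkowski_dot(L @ u, L @ v), minkowski_dot(u, v))
```

Note that the ordinary Euclidean dot product of the same two transformed vectors would not come out invariant.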

One warn­ing. In al­most all mod­ern quan­tum field books, the prod­ucts of the spa­tial com­po­nents get the mi­nus sign in­stead of the time com­po­nents. The pur­pose is to make the rel­a­tivis­tic dot prod­uct in­com­pat­i­ble with the non­rel­a­tivis­tic one. Af­ter all, back­ward com­pat­i­bil­ity is so, well, back­ward. (One source that does use the com­pat­i­ble dot prod­uct is [48]. This is a truly ex­cel­lent book writ­ten by a No­bel Prize win­ning pi­o­neer in quan­tum field the­ory. It may well be the best book on the sub­ject avail­able. Un­for­tu­nately it is also very math­e­mat­i­cal and the en­tire thing spans three vol­umes. Then again, you could cer­tainly live with­out su­per­sym­me­try.)

One other convention might be mentioned. Some books put a factor ${\rm i}$ $\vphantom0\raisebox{1.5pt}{$=$}$ $\sqrt{-1}$ in the zeroth components of four-vectors. That takes care of the minus sign in dot products automatically. But modern quantum field books do not do this.

Armed with this knowl­edge about spe­cial rel­a­tiv­ity, the Koulomb force can now be checked. Ac­tion is de­fined as

\begin{displaymath}
{\cal S}\equiv \int_{t_1}^{t_2} {\cal L}{\,\rm d}t
\end{displaymath}

Here the time range from $t_1$ to $t_2$ should be cho­sen to in­clude the times of in­ter­est. Fur­ther ${\cal L}$ is the so-called La­grangian.

If all ob­servers agree about the value of the ac­tion in se­lec­to­dy­nam­ics, then se­lec­to­dy­nam­ics is Lorentz in­vari­ant. Now the La­grangian of clas­si­cal se­lec­to­dy­nam­ics was of the form, sub­sec­tion A.22.2,

\begin{displaymath}
\Lag_{\varphi\rm {p}} = \int \pounds _\varphi {\,\rm d}^3{\skew0\vec r}
+ {\textstyle\frac{1}{2}} m_{\rm {p}} \vec v_{\rm {p}}^2 + \varphi_{\rm {p}} s_{\rm {p}}
\end{displaymath}

Here the La­grangian den­sity of the fo­ton field $\varphi$ was

\begin{displaymath}
\pounds _\varphi = - \frac{\epsilon_1}{2}
\left(-\frac{1}{c^2}\varphi_t^2 + \varphi_i^2\right)
\end{displaymath}

To this very day, a sum­ma­tion sym­bol may not be used to re­veal to non­physi­cists that the last term needs to be summed over all three val­ues of $i$. That is in honor of the lazy young physi­cist, who tried to save the uni­verse.

Note that the par­en­thet­i­cal term in the La­grangian den­sity is sim­ply the square of the four-vec­tor of de­riv­a­tives of $\varphi$. In­deed, the rel­a­tivis­tic dot prod­uct puts the mi­nus sign in front of the prod­uct of the time de­riv­a­tives. Since all ob­servers agree about dot prod­ucts, they all agree about the val­ues of the La­grangian den­sity. It is Lorentz in­vari­ant.

To be sure, it is the ac­tion and not the La­grangian den­sity that must be Lorentz in­vari­ant. But note that in the ac­tion, the La­grangian den­sity gets in­te­grated over both space and time. Such in­te­grals are the same for any two ob­servers. You can eas­ily check that from the Lorentz trans­for­ma­tion chap­ter 1.2.1 (1.6) by com­put­ing the Ja­co­bian of the ${\rm d}{x}{\rm d}{t}$ in­te­gra­tion be­tween ob­servers.
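The Jacobian check suggested above can be carried out numerically for the $(ct,x)$ plane; the matrix form of the boost is assumed here.

```python
import numpy as np

# Boost matrix in the (ct, x) plane; assumed matrix form of the Lorentz
# transformation of chapter 1.2.1 (1.6).
beta = 0.6
g = 1.0 / np.sqrt(1.0 - beta**2)
L = np.array([[g, -g * beta],
              [-g * beta, g]])

# The dx dt volume element transforms with the Jacobian determinant, which
# is 1: space-time integrals of an invariant density are observer independent.
jac = np.linalg.det(L)
assert np.isclose(jac, 1.0)
```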

(OK, the lim­its of in­te­gra­tion are not re­ally the same for dif­fer­ent ob­servers. One sim­ple way to get around that is to as­sume that the field van­ishes at large neg­a­tive and pos­i­tive times. Then you can in­te­grate over all space-time. A more so­phis­ti­cated ar­gu­ment can be given based on the de­riva­tion of the ac­tion prin­ci­ple {A.1.5}. From that de­riva­tion it can be seen that it suf­fices to con­sider small de­vi­a­tions from the cor­rect physics that are lo­cal­ized in both space and time. It im­plies that the lim­its of in­te­gra­tion in the ac­tion in­te­gral are phys­i­cally ir­rel­e­vant.)

(Note that this sub­sec­tion does no longer men­tion pe­ri­odic boxes. In rel­a­tiv­ity pe­ri­od­ic­ity is not in­de­pen­dent of the ob­server, so the cur­rent ar­gu­ments re­ally need to be done in in­fi­nite space.)

The bot­tom line is that the first, in­te­gral, term in the La­grangian pro­duces a Lorentz-in­vari­ant ac­tion. The sec­ond term in the La­grangian is the nonrel­a­tivis­tic ki­netic en­ergy of the spo­ton. Ob­vi­ously the ac­tion pro­duced by this term will not be Lorentz in­vari­ant. But you can eas­ily fix that up by sub­sti­tut­ing the cor­re­spond­ing rel­a­tivis­tic term as given in chap­ter 1.3.2. So the lack of Lorentz in­vari­ance of this term will sim­ply be ig­nored in this ad­den­dum. If you want, you can con­sider the spo­ton mass to be the mov­ing mass in the re­sult­ing equa­tions of mo­tion.

The fi­nal term in the La­grangian is the prob­lem. It rep­re­sents the spo­ton-fo­tons in­ter­ac­tion. The term by it­self would be Lorentz in­vari­ant, but it gets in­te­grated with re­spect to time. Now in rel­a­tiv­ity time in­ter­vals ${\rm d}{t}$ are not the same for dif­fer­ent ob­servers. So the ac­tion for this term is not Lorentz in­vari­ant. Se­lec­to­dy­nam­ics can­not be cor­rect. The Koulomb jug­ger­naut has been stopped by a small le­gal tech­ni­cal­ity.

(To be sure, any good lawyer would have pointed out that there is no prob­lem if the spo­ton sarge den­sity, in­stead of the spo­ton sarge $s_{\rm {p}}$, is the same for dif­fer­ent ob­servers. But the Koulomb force was so sure about its in­vin­ci­bil­ity that it never both­ered to seek com­pe­tent le­gal coun­sel.)

The ques­tion is now of course how to fix this up. That will hope­fully pro­duce a more ap­peal­ing uni­verse. One in which par­ti­cles like pro­tons and elec­trons have charges $q$ rather than sarges $s$. Where these charges al­low them to in­ter­act with the pho­tons of the elec­tro­mag­netic field. And where these pho­tons as­sure that par­ti­cles with like charges re­pel, rather than at­tract.

Con­sider the form of the prob­lem term in the Koulomb ac­tion:

\begin{displaymath}
\int_{t_1}^{t_2} \varphi_{\rm {p}} s_{\rm {p}} {\,\rm d}t
\end{displaymath}

It seems log­i­cal to try to write this in rel­a­tivis­tic terms, like

\begin{displaymath}
\int_{t_1}^{t_2} \Big(\frac{\varphi_{\rm {p}}}{c}\Big)
\Big(s_{\rm {p}}\, c {\,\rm d}t\Big)
= \int_{t_1}^{t_2} \Big(\frac{\varphi_{\rm {p}}}{c}\Big)
\Big(s_{\rm {p}} {\,\rm d}r_{\rm {p}}\strut_0\Big)
\end{displaymath}

Here ${\rm d}r_{\rm p}\strut_0$ is the zeroth component of the change ${\rm d}\stackrel{\hookrightarrow}{r}_{\rm p}$ in the spoton position four-vector. The product of the two parenthetical factors is definitely not Lorentz invariant. But suppose that you turn each of the factors into a complete four-vector? Dot products are Lorentz invariant. And the four-vector corresponding to ${\rm d}r_{\rm p}\strut_0$ is clearly ${\rm d}\stackrel{\hookrightarrow}{r}_{\rm p}$.

But the pho­ton po­ten­tial must also be­come a four-vec­tor, in­stead of a scalar. That is what it takes to achieve Lorentz in­vari­ance. So elec­tro­dy­nam­ics de­fines a four-vec­tor of po­ten­tials of the form

\begin{displaymath}
\stackrel{\hookrightarrow}{A} =
\left(\begin{array}{c} \varphi/c \\ \strut A^1 \\ \strut A^2 \\ \strut A^3\end{array}\right)
\equiv
\{A^\mu\}
\to
A^\mu
\end{displaymath}

Here $\skew3\vec A$ is the so-called vec­tor po­ten­tial while $\varphi$ is now the elec­tro­sta­tic po­ten­tial.

The in­ter­ac­tion term in the ac­tion now be­comes, re­plac­ing the spo­ton sarge $s_{\rm {p}}$ by mi­nus the pro­ton charge $q_{\rm {p}}$,

\begin{displaymath}
\int_{t_1}^{t_2} \stackrel{\hookrightarrow}{A}_{\rm p}\cdot
\left(-q_{\rm {p}}\right)
\frac{{\rm d}\stackrel{\hookrightarrow}{r}_{\rm p}}{{\rm d}t} {\,\rm d}t
\end{displaymath}

In writing out the dot product, note that the spatial components of ${\rm d}\stackrel{\hookrightarrow}{r}_{\rm p}/{\rm d}t$ are simply the proton velocity components $v_{\rm p}\strut_j$. That gives the interaction term in the action as

\begin{displaymath}
\int_{t_1}^{t_2} \left(-\varphi_{\rm {p}} q_{\rm {p}}
+ A_j\strut_{\rm {p}} q_{\rm {p}} v_{\rm {p}}\strut_j \right) {\,\rm d}t
\end{displaymath}

Once again non­physi­cists may not be told that the sec­ond term in paren­the­ses must be summed over all three val­ues of $j$ since $j$ ap­pears twice.

The in­te­grand above is the in­ter­ac­tion term in the elec­tro­mag­netic La­grangian,

\begin{displaymath}
\Lag_{\rm int} = -\varphi_{\rm {p}} q_{\rm {p}}
+ A_j\strut_{\rm {p}} q_{\rm {p}} v_{\rm {p}}\strut_j
\end{displaymath}

For now at least.

The La­grangian den­sity of the pho­ton field is also needed. Since the pho­ton field is a four-vec­tor rather than a scalar, the self-ev­i­dent elec­tro­mag­netic den­sity is

\begin{displaymath}
\pounds _{\rm seem} = - \frac{\epsilon_0}{2}
\left(
- A_j\strut_t^2 + c^2 A_j\strut_i^2 + \frac{1}{c^2}\varphi_t^2 - \varphi_i^2
\right)
\end{displaymath}

Here the con­stant $\epsilon_0$ is called the “per­mit­tiv­ity of space.” Note that the sec­ond term in paren­the­ses must be summed over both $i$ and $j$. The cu­ri­ous sign pat­tern for the par­en­thet­i­cal terms arises be­cause it in­volves two dot prod­ucts: one from the square four-gra­di­ent (de­riv­a­tives), and one from the square four-po­ten­tial. Sim­ply put, hav­ing elec­tro­sta­tic po­ten­tials is worth a mi­nus sign, and hav­ing time de­riv­a­tives is too.

It might be noted that in prin­ci­ple the proper La­grangian den­sity could be mi­nus the above ex­pres­sion. But a mi­nus sign in a La­grangian does not change the mo­tion. The con­ven­tion is to choose the sign so that the cor­re­spond­ing Hamil­ton­ian de­scribes en­er­gies that can be in­creased by ar­bi­trar­ily large amounts, not low­ered by ar­bi­trar­ily large amounts. Par­ti­cles can have un­lim­ited amounts of pos­i­tive ki­netic en­ergy, not neg­a­tive ki­netic en­ergy.

Still, it does seem wor­ri­some that the proper sign of the La­grangian den­sity is not self-ev­i­dent. But that is­sue will have to wait un­til the next sub­sec­tion.

Col­lect­ing things to­gether, the self-ev­i­dent La­grangian for elec­tro­mag­netic field plus pro­ton is

\begin{displaymath}
\Lag_{\rm seem+p} = \int \pounds _{\rm seem} {\,\rm d}^3{\skew0\vec r}
+ {\textstyle\frac{1}{2}} m_{\rm {p}} v_{\rm {p}}\strut_j^2
- \varphi_{\rm {p}} q_{\rm {p}} + A_j\strut_{\rm {p}} q_{\rm {p}} v_{\rm {p}}\strut_j
\end{displaymath}

Here $\pounds _{\rm {seem}}$ was given above.

The first thing to check now is the equa­tion of mo­tion for the pro­ton. Fol­low­ing sub­sec­tion A.22.2, it can be found from

\begin{displaymath}
\frac{{\rm d}}{{\rm d}t}
\left(\frac{\partial{\cal L}}{\partial v_{\rm {p}}\strut_i}\right) =
\left(\frac{\partial{\cal L}}{\partial r_{\rm {p}}\strut_i}\right)
\end{displaymath}

Sub­sti­tut­ing in the La­grangian above gives

\begin{displaymath}
\frac{{\rm d}}{{\rm d}t}
\left(m_{\rm {p}} v_{\rm {p}}\strut_i + A_i\strut_{\rm {p}} q_{\rm {p}}\right)
= - \varphi_i\strut_{\rm {p}} q_{\rm {p}}
+ {{A_j}_i}\strut_{\rm {p}} q_{\rm {p}}v_{\rm {p}}\strut_j
\end{displaymath}

This can be cleaned up, {D.6}. In short, first an “electric” field $\skew3\vec{\cal E}$ and a “magnetic” field $\skew2\vec{\cal B}$ are defined as, in vector notation,

\begin{displaymath}
\skew3\vec{\cal E}= - \nabla \varphi - \frac{\partial\skew3\vec A}{\partial t}
\qquad \skew2\vec{\cal B}= \nabla \times \skew3\vec A %
\end{displaymath} (A.130)

The in­di­vid­ual com­po­nents are
\begin{displaymath}
{\cal E}_i = - \varphi_i - A_i\strut_t \qquad
{\cal B}_i = A_{\overline{\overline{\imath}}}\strut_{\overline{\imath}}
- A_{\overline{\imath}}\strut_{\overline{\overline{\imath}}} %
\end{displaymath} (A.131)

Here $i$ $\vphantom0\raisebox{1.5pt}{$=$}$ 1, 2, or 3 cor­re­sponds to the $x$, $y$, or $z$ com­po­nents re­spec­tively. Also ${\overline{\imath}}$ fol­lows $i$ in the pe­ri­odic se­quence $\ldots123123\ldots$ and ${\overline{\overline{\imath}}}$ pre­cedes $i$. In these terms, the sim­pli­fied equa­tion of mo­tion of the pro­ton be­comes, in vec­tor no­ta­tion,
\begin{displaymath}
\frac{{\rm d}m_{\rm {p}}\vec v_{\rm {p}}}{{\rm d}t} = q_{\rm {p}}
\left(\skew3\vec{\cal E}_{\rm {p}}+\vec v_{\rm {p}}\times\skew2\vec{\cal B}_{\rm {p}}\right) %
\end{displaymath} (A.132)

The left hand side is mass times ac­cel­er­a­tion. Rel­a­tivis­ti­cally speak­ing, the mass should re­ally be the mov­ing mass here, but OK. The right hand side is known as the Lorentz force.
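To see the Lorentz force do its thing, here is a minimal numeric sketch, assuming Python with numpy is available; the field strength, initial velocity, and step size are simply made up for illustration. It integrates the equation of motion (A.132) for a proton in a uniform magnetic field. Since the magnetic force is perpendicular to the velocity, the speed must stay constant while the proton goes around in circles:

```python
# Minimal numeric illustration of the Lorentz force equation (A.132).
# All numbers are made up for illustration (uniform 1 T field, no E).
import numpy as np

q, m = 1.602e-19, 1.673e-27            # proton charge [C] and mass [kg]
E = np.array([0.0, 0.0, 0.0])          # no electric field
B = np.array([0.0, 0.0, 1.0])          # uniform magnetic field along z [T]

def accel(v):
    # m dv/dt = q (E + v x B), so dv/dt = (q/m)(E + v x B)
    return (q / m) * (E + np.cross(v, B))

v = np.array([1.0e5, 0.0, 0.0])        # initial velocity [m/s]
dt, steps = 1.0e-10, 2000              # small fraction of a gyration period
for _ in range(steps):                 # classical fourth order Runge-Kutta
    k1 = accel(v)
    k2 = accel(v + 0.5 * dt * k1)
    k3 = accel(v + 0.5 * dt * k2)
    k4 = accel(v + dt * k3)
    v = v + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# the magnetic force is perpendicular to v, so the speed cannot change
speed = np.linalg.norm(v)
print(speed)                           # stays very nearly 1.0e5
```

The proton circles at the classical cyclotron frequency $q_{\rm p}{\cal B}/m_{\rm p}$; doubling the magnetic field halves the radius of the circle but leaves the speed alone.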

Note that there are 4 po­ten­tials with 4 de­riv­a­tives each, for a to­tal of 16 de­riv­a­tives. But mat­ter does not ob­serve all 16 in­di­vid­u­ally. Only the 3 com­po­nents of the elec­tric field and the 3 of the mag­netic field are ac­tu­ally ob­served. That sug­gests that there may be changes to the fields that can be made that are not ob­serv­able. Such changes are called gage (or gauge) changes. The name arises from the fact that a gage is a mea­sur­ing de­vice. You and I would then of course say that these changes should be called nongage changes. They are not mea­sur­able. But gage is re­ally short­hand for “Take that, you stu­pid gage.”

Con­sider the most gen­eral form of such gage changes. Given po­ten­tials $\varphi$ and $\skew3\vec A$, equiv­a­lent po­ten­tials can be cre­ated as

\begin{displaymath}
\varphi' = \varphi - \chi_t \qquad \skew3\vec A' = \skew3\vec A+ \nabla\chi
\end{displaymath}

Here $\chi$ can be any func­tion of space and time that you want.

The po­ten­tials $\varphi'$ and $\skew3\vec A'$ give the ex­act same elec­tric and mag­netic fields as $\varphi$ and $\skew3\vec A$. (These claims are eas­ily checked us­ing a bit of vec­tor cal­cu­lus. Use Stokes to show that they are the most gen­eral changes pos­si­ble.)
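If you do not trust your vector calculus, the first claim is also easy to check mechanically. The sketch below, assuming Python with sympy is available, applies the gage transform to completely arbitrary potentials and verifies that the electric and magnetic fields (A.130) come out unchanged; the function names are arbitrary:

```python
# Symbolic check that a gage transform leaves E and B unchanged.
# The potentials and the gage function chi are completely arbitrary.
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
r = (x, y, z)
phi = sp.Function('phi')(t, x, y, z)                        # electrostatic potential
A = [sp.Function('A%d' % i)(t, x, y, z) for i in range(3)]  # vector potential
chi = sp.Function('chi')(t, x, y, z)                        # arbitrary gage function

def fields(p, a):
    """E = -grad(p) - da/dt and B = curl(a), as in (A.130)."""
    E = [-sp.diff(p, r[i]) - sp.diff(a[i], t) for i in range(3)]
    B = [sp.diff(a[(i + 2) % 3], r[(i + 1) % 3])
         - sp.diff(a[(i + 1) % 3], r[(i + 2) % 3]) for i in range(3)]
    return E, B

# transformed potentials: phi' = phi - chi_t and A' = A + grad(chi)
phi2 = phi - sp.diff(chi, t)
A2 = [A[i] + sp.diff(chi, r[i]) for i in range(3)]

E1, B1 = fields(phi, A)
E2, B2 = fields(phi2, A2)
assert all(sp.simplify(E1[i] - E2[i]) == 0 for i in range(3))
assert all(sp.simplify(B1[i] - B2[i]) == 0 for i in range(3))
```

The time derivative of $\nabla\chi$ cancels the gradient of $\chi_t$ in the electric field, and the curl of a gradient is zero, which is all there is to it.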

The fact that you can make un­mea­sur­able changes to the po­ten­tials like that is called the gage (or gauge) prop­erty of the elec­tro­mag­netic field. Non­physi­cists prob­a­bly think it is some­thing you read off from a volt­age gage. Hi­lar­i­ous, isn’t it?

Ob­serv­able or not, the evo­lu­tion equa­tions of the four po­ten­tials are also needed. To find them it is con­ve­nient to spread the pro­ton charge out a bit. That is the same trick as was used in sub­sec­tion A.22.2. For the spread-out charge, a “charge den­sity” $\rho_{\rm {p}}$ can be de­fined as the charge per unit vol­ume. It is also con­ve­nient to de­fine a “cur­rent den­sity” $\vec\jmath_{\rm {p}}$ as the charge den­sity times its ve­loc­ity. Then the pro­ton-pho­tons in­ter­ac­tion terms in the La­grangian are:

\begin{displaymath}
\int\Big(- \varphi \rho_{\rm {p}} + A_j \jmath_{\rm {p}}\strut_j\Big)
{\,\rm d}^3{\skew0\vec r}
= - \varphi_{\rm {p}} q_{\rm {p}}
+ A_j\strut_{\rm {p}} q_{\rm {p}} v_{\rm {p}}\strut_j %
\end{displaymath} (A.133)

Here the right hand side is an ap­prox­i­ma­tion if the pro­ton charge is al­most con­cen­trated at a sin­gle point, or ex­act for a point charge.

The in­ter­ac­tion terms can now be in­cluded in the La­grangian den­sity to give the to­tal La­grangian

\begin{displaymath}
\Lag_{\rm seem+p} = \int \Big(\pounds _{\rm seem} + \pounds _{\rm int}\Big)
{\,\rm d}^3{\skew0\vec r}
+ {\textstyle\frac{1}{2}} m_{\rm {p}} v_{\rm {p}}\strut_j^2 %
\end{displaymath} (A.134)


\begin{displaymath}
\pounds _{\rm {seem}} = - \frac{\epsilon_0}{2}
\left(
- A_j\strut_t^2 + c^2 A_j\strut_i^2
+ \frac{1}{c^2}\varphi_t^2 - \varphi_i^2
\right)
\qquad
\pounds _{\rm int} = - \varphi \rho_{\rm {p}} + A_j \jmath_{\rm {p}}\strut_j
\end{displaymath}

If there are more charged par­ti­cles than just a pro­ton, their charge and cur­rent den­si­ties will com­bine into a net $\rho$ and $\vec\jmath$.

The field equa­tions now fol­low sim­i­larly as in sub­sec­tion A.22.2. For the elec­tro­sta­tic po­ten­tial:

\begin{displaymath}
\frac{\partial}{\partial t}
\left(\frac{\partial\pounds }{\partial\varphi_t}\right)
+ \frac{\partial}{\partial x_i}
\left(\frac{\partial\pounds }{\partial\varphi_i}\right) =
\frac{\partial\pounds }{\partial\varphi}
\end{displaymath}

where $\pounds $ is the com­bined La­grangian den­sity. Worked out and con­verted to vec­tor no­ta­tion, that gives
\begin{displaymath}
\frac{1}{c^2}\frac{\partial^2\varphi}{\partial t^2} - \nabla^2\varphi
= \frac{\rho}{\epsilon_0} %
\end{displaymath} (A.135)

This is the same equa­tion as for the Koulomb po­ten­tial ear­lier.

Sim­i­larly, for the com­po­nents of the vec­tor po­ten­tial

\begin{displaymath}
\frac{\partial}{\partial t}
\left(\frac{\partial\pounds }{\partial A_j\strut_t}\right)
+ \frac{\partial}{\partial x_i}
\left(\frac{\partial\pounds }{\partial A_j\strut_i}\right) =
\frac{\partial\pounds }{\partial A_j}
\end{displaymath}

That gives
\begin{displaymath}
\frac{\partial^2 \skew3\vec A}{\partial t^2} - c^2 \nabla^2 \skew3\vec A
= \frac{\vec\jmath}{\epsilon_0} %
\end{displaymath} (A.136)

The above equa­tions are again Klein-Gor­don equa­tions, so they re­spect the speed of light. And since the ac­tion is now Lorentz in­vari­ant, all ob­servers agree with the evo­lu­tion. That seems very en­cour­ag­ing.

Con­sider now the steady case, with no charge mo­tion. The cur­rent den­sity $\vec\jmath$ is then zero. That leads to zero vec­tor po­ten­tials. Then there is no mag­netic field ei­ther, (A.130).

The steady equa­tion (A.135) for the elec­tro­sta­tic field $\varphi$ is ex­actly the same as the one for the Koulomb po­ten­tial. But note that the elec­tric force per unit charge is now mi­nus the gra­di­ent of the elec­tro­sta­tic po­ten­tial, (A.130) and (A.132). And that means that like charges re­pel, not at­tract. All pro­tons in the uni­verse no longer clump to­gether into one big ball. And nei­ther do elec­trons. That sounds great.

But wait a second. How come protons suddenly manage to create fields that are repulsive to protons? What happened to energy minimization? It seems that all is not yet well in the universe.


A.22.5 Lorenz saves the uni­verse

The pre­vi­ous sub­sec­tion de­rived the self-ev­i­dent equa­tions of elec­tro­mag­net­ics. But there were some wor­ri­some as­pects. A look at the Hamil­ton­ian can clar­ify the prob­lem.

Given the La­grangian (A.134) of the pre­vi­ous sub­sec­tion, the Hamil­ton­ian can be found as, {A.1.5}:

\begin{displaymath}
H_{\rm seem+p} =
\int \left(\frac{\partial\pounds }{\partial\varphi_t} \varphi_t
+ \frac{\partial\pounds }{\partial A_j\strut_t} A_j\strut_t\right)
{\,\rm d}^3{\skew0\vec r}
+ \frac{\partial{\cal L}}{\partial v_{\rm {p}}\strut_j} v_{\rm {p}}\strut_j - {\cal L}\end{displaymath}

That gives
\begin{displaymath}
H_{\rm seem+p} = \frac{\epsilon_0}{2} \int
\left(
A_j\strut_t^2 + c^2 A_j\strut_i^2
- \frac{1}{c^2}\varphi_t^2 - \varphi_i^2
\right) {\,\rm d}^3{\skew0\vec r}
+ {\textstyle\frac{1}{2}} m_{\rm {p}} v_{\rm {p}}\strut_j^2 +
\int \varphi\rho_{\rm {p}}{\,\rm d}^3{\skew0\vec r} %
\end{displaymath} (A.137)

(This would nor­mally still need to be rewrit­ten in terms of canon­i­cal mo­menta, but that is not im­por­tant here.)

Note that the elec­tro­sta­tic po­ten­tial $\varphi$ pro­duces neg­a­tive elec­tro­mag­netic en­ergy. That means that the elec­tro­mag­netic en­ergy can have ar­bi­trar­ily large neg­a­tive val­ues for large enough $\varphi$.

That then an­swers the ques­tion of the pre­vi­ous sub­sec­tion: “How come a pro­ton pro­duces an elec­tro­sta­tic field that re­pels it? What hap­pened to en­ergy min­i­miza­tion?” There is no such thing as en­ergy min­i­miza­tion here. If there is no low­est en­ergy, then there is no ground state. In­stead the uni­verse should evolve to­wards larger and larger elec­tro­sta­tic fields. That would re­lease in­fi­nite amounts of en­ergy. It should blow life as we know it to smithereens. (The so-called sec­ond law of ther­mo­dy­nam­ics says, sim­ply put, that ther­mal en­ergy is eas­ier to put into par­ti­cles than to take out again. See chap­ter 11.)

In fact, the Koulomb force would also pro­duce re­pul­sion be­tween equal sarges, if its field en­ergy was neg­a­tive in­stead of pos­i­tive. Just change the sign of the con­stant $\epsilon_1$ in clas­si­cal se­lec­to­dy­nam­ics. Then its uni­verse should blow up too. Un­like what you will read else­where, the dif­fer­ence be­tween the Koulomb force, (or its more widely known sib­ling, the Yukawa force of {A.42}), and the Coulomb force is not sim­ply that the pho­ton wave func­tion is a four-vec­tor. It is whether neg­a­tive field en­ergy ap­pears in the most straight­for­ward for­mu­la­tion.

As the pre­vi­ous sub­sec­tion noted, you might as­sume that the elec­tro­dy­namic La­grangian, and hence the Hamil­ton­ian, would have the op­po­site sign. But that does not help. In that case the vec­tor po­ten­tials $A_j$ would pro­duce the neg­a­tive en­er­gies. Re­vers­ing the sign of the Hamil­ton­ian is like re­vers­ing the di­rec­tion of time. In ei­ther di­rec­tion, the uni­verse gets blown to smithereens.

To be sure, it is not completely certain that the universe would be blown to smithereens. A negative field energy only says that it is in theory possible to extract limitless amounts of energy out of the field. But you would still need some actual mechanism to do so. There might not be one. Nature might be carefully constrained so that there is no dynamic mechanism to extract the energy.

In that case, there might then be some math­e­mat­i­cal ex­pres­sion for the con­straint. As an­other way to look at that, sup­pose that you would in­deed have a highly un­sta­ble sys­tem. And sup­pose that there is still some­thing rec­og­niz­able left at the end of the first day. Then surely you would ex­pect that what­ever is left is spe­cial in some way. That it obeys some spe­cial math­e­mat­i­cal con­di­tion.

So presumably, the electromagnetic field that we observe obeys some special condition, some constraint. What could this constraint be? Since this is very basic physics, you would guess it to be relatively simple. Certainly it must be Lorentz invariant. The simplest condition that meets this requirement is that the dot product of the four-gradient $\stackrel{\hookrightarrow}{\nabla}$ with the four-potential $\stackrel{\hookrightarrow}{A}$ is zero. Written out that produces the so-called “Lorenz condition:”

\begin{displaymath}
\fbox{$\displaystyle
\frac{1}{c}\frac{\partial\varphi/c}{\partial t} + \nabla\cdot\skew3\vec A= 0
$} %
\end{displaymath} (A.138)

This con­di­tion im­plies that only a very spe­cial sub­set of pos­si­ble so­lu­tions of the Klein-Gor­don equa­tions given in the pre­vi­ous sub­sec­tion is ac­tu­ally ob­served in na­ture.

Please note that the Lorenz con­di­tion is named af­ter the Dan­ish physi­cist Lud­vig Lorenz, not the Dutch physi­cist Hen­drik Lorentz. Al­most all my sources mis­la­bel it the Lorentz con­di­tion. The sav­ior of the uni­verse de­serves more re­spect. Al­ways re­mem­ber: the Lorenz con­di­tion is Lorentz in­vari­ant.

(You might won­der why the first term in the Lorenz con­di­tion does not have the mi­nus sign of dot prod­ucts. One way of think­ing about it is that the four-gra­di­ent in its nat­ural con­di­tion al­ready has a mi­nus sign on the time de­riv­a­tive. Physi­cists call it a co­vari­ant four-vec­tor rather than a con­travari­ant one. A bet­ter way to see it is to grind it out; you can use the Lorentz trans­form (1.6) of chap­ter 1.2.1 to show di­rectly that the above form is the same for dif­fer­ent ob­servers. But those fa­mil­iar with in­dex no­ta­tion will rec­og­nize im­me­di­ately that the Lorenz con­di­tion is Lorentz in­vari­ant from the fact that it equals $\partial_{\mu}A^{\mu}$ $\vphantom0\raisebox{1.5pt}{$=$}$ 0, and that has $\mu$ as both sub­script and su­per­script. See chap­ter 1.2.5 for more.)
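For those who would rather let a machine do the grinding, here is a sketch of the direct check for a boost along the $x$-axis, assuming Python with sympy is available. The sample potentials are arbitrary made-up functions; any smooth ones would do:

```python
# Symbolic check that the Lorenz condition is Lorentz invariant under a
# boost with speed v along the x axis. The sample potentials are made up;
# any smooth functions of t, x, y, z would do.
import sympy as sp

t, x, y, z, tp, xp = sp.symbols('t x y z t_p x_p')
v, c = sp.symbols('v c', positive=True)
g = 1 / sp.sqrt(1 - v**2 / c**2)      # the Lorentz factor gamma

# arbitrary sample potentials
phi = x**2 * t + c * y * z
Ax, Ay, Az = x * t + z**2, x * y, t * z + x**2

# unprimed coordinates in terms of the primed ones (inverse boost)
sub = {t: g * (tp + v * xp / c**2), x: g * (xp + v * tp)}

# the four-potential (phi/c, Ax, Ay, Az) transforms like (ct, x, y, z)
phi_p = (g * (phi - v * Ax)).subs(sub)
Ax_p = (g * (Ax - v * phi / c**2)).subs(sub)
Ay_p, Az_p = Ay.subs(sub), Az.subs(sub)

# the Lorenz quantity, formed by each observer in his own coordinates
L = sp.diff(phi, t) / c**2 + sp.diff(Ax, x) + sp.diff(Ay, y) + sp.diff(Az, z)
L_p = (sp.diff(phi_p, tp) / c**2 + sp.diff(Ax_p, xp)
       + sp.diff(Ay_p, y) + sp.diff(Az_p, z))

# both observers find the same value at corresponding events
assert sp.simplify(sp.expand(L_p - L.subs(sub))) == 0
```

The factors $\gamma^2$ that appear when both the coordinates and the potential components get boosted combine with $1-v^2/c^2$ into 1, which is where the invariance comes from.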

To be sure, the Lorenz con­di­tion can only be true if the in­ter­ac­tion with mat­ter does not pro­duce vi­o­la­tions. To check that, the evo­lu­tion equa­tion for the Lorenz con­di­tion quan­tity can be ob­tained from the Klein-Gor­don equa­tions of the pre­vi­ous sub­sec­tion. In par­tic­u­lar, in vec­tor no­ta­tion take $\partial$$\raisebox{.5pt}{$/$}$$\partial{t}$ (A.135) plus $\nabla$ (A.136) to get

\begin{displaymath}
\left[\frac{\partial^2}{\partial t^2} - c^2 \nabla^2\right]
\left(\frac{1}{c}\frac{\partial\varphi/c}{\partial t} + \nabla\cdot\skew3\vec A\right)
= \frac{1}{\epsilon_0}\left(\frac{\partial\rho}{\partial t}
+ \nabla\cdot\vec\jmath\right) %
\end{displaymath} (A.139)

The parenthetical expression in the left hand side should be zero according to the Lorenz condition. Then the left hand side is zero, and that is only possible if the right hand side is zero too, so

\begin{displaymath}
\frac{\partial\rho}{\partial t} = - \nabla\cdot\vec\jmath
\end{displaymath}

This im­por­tant re­sult is known as “Maxwell’s con­ti­nu­ity equa­tion.” It ex­presses con­ser­va­tion of charge. (To see that, take any ar­bi­trary vol­ume. In­te­grate both sides of the con­ti­nu­ity equa­tion over that vol­ume. The left hand side then be­comes the time de­riv­a­tive of the charge in­side the vol­ume. The right hand side be­comes, us­ing the [di­ver­gence] [Gauss] [Os­tro­grad­sky] the­o­rem, the net in­flow of charge. And if the charge in­side can only change due to in­flow or out­flow, then no charge can be cre­ated out of noth­ing or de­stroyed.) So charge con­ser­va­tion can be seen as a con­se­quence of the need to main­tain the Lorenz con­di­tion.
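The derivation of (A.139) can be confirmed symbolically. The sketch below, assuming Python with sympy is available, takes the charge and current densities as given by the field equations (A.135) and (A.136) for completely arbitrary potentials, and verifies that $\partial\rho/\partial t + \nabla\cdot\vec\jmath$ is $\epsilon_0$ times the wave operator acting on the Lorenz quantity:

```python
# Symbolic check of (A.139): the wave operator acting on the Lorenz
# quantity equals (d rho/dt + div j)/eps0, for arbitrary potentials.
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
c, eps0 = sp.symbols('c epsilon_0', positive=True)
r = (x, y, z)

phi = sp.Function('phi')(t, x, y, z)
A = [sp.Function('A%d' % i)(t, x, y, z) for i in range(3)]

lap = lambda f: sum(sp.diff(f, ri, 2) for ri in r)

# charge and current densities as given by (A.135) and (A.136)
rho = eps0 * (sp.diff(phi, t, 2) / c**2 - lap(phi))
j = [eps0 * (sp.diff(A[i], t, 2) - c**2 * lap(A[i])) for i in range(3)]

# wave operator acting on the Lorenz quantity
lorenz = sp.diff(phi, t) / c**2 + sum(sp.diff(A[i], r[i]) for i in range(3))
lhs = sp.diff(lorenz, t, 2) - c**2 * lap(lorenz)

# (d rho/dt + div j)/eps0
rhs = (sp.diff(rho, t) + sum(sp.diff(j[i], r[i]) for i in range(3))) / eps0

assert sp.simplify(lhs - rhs) == 0
```

Nothing deep happens in the check; it is just the equality of mixed partial derivatives, which is exactly why (A.139) follows from the field equations.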

Note that the Lorenz con­di­tion (A.138) looks math­e­mat­i­cally just like the con­ti­nu­ity equa­tion. It pro­duces con­ser­va­tion of the in­te­grated elec­tro­sta­tic po­ten­tial. In sub­sec­tion A.22.7 it will be ver­i­fied that it is in­deed enough to pro­duce a sta­ble elec­tro­mag­netic field. One with mean­ing­fully de­fined en­er­gies that do not run off to mi­nus in­fin­ity.

Note that charge con­ser­va­tion by it­self is not quite enough to en­sure that the Lorenz con­di­tion is sat­is­fied. How­ever, if in ad­di­tion the Lorenz quan­tity and its time de­riv­a­tive are zero at just a sin­gle time, it is OK. Then (A.139) en­sures that the Lorenz con­di­tion re­mains true for all time.


A.22.6 Gupta-Bleuler con­di­tion

The ideas of the pre­vi­ous sub­sec­tion pro­vide one way to quan­tize the elec­tro­mag­netic field, [[17, 6]].

As already seen in subsection A.22.3 (A.128), in quantum field theory the potentials become quantum fields, i.e. operator fields. For electromagnetics the quantum field four-vector is a bit messier:

\begin{displaymath}
\widehat{\stackrel{\hookrightarrow}{A}} = \sum_{{\vec k}} \sum_{\nu=0}^{3}
\varepsilon_k \stackrel{\hookrightarrow}{e}_{\vec k}^{\,\nu}
\left(e^{{\rm i}{\vec k}\cdot{\skew0\vec r}}\widehat a_{{\vec k}\nu}
+ e^{-{\rm i}{\vec k}\cdot{\skew0\vec r}}\widehat a^\dagger _{{\vec k}\nu}\right)
\end{displaymath}

Since a four-vector has four components, a general four-vector can be written as a linear combination of four chosen basis four-vectors $\stackrel{\hookrightarrow}{e}_{\vec k}^{\,0}$, $\stackrel{\hookrightarrow}{e}_{\vec k}^{\,1}$, $\stackrel{\hookrightarrow}{e}_{\vec k}^{\,2}$, and $\stackrel{\hookrightarrow}{e}_{\vec k}^{\,3}$. (That is much like a general vector in three dimensions can be written as a linear combination of ${\hat\imath}$, ${\hat\jmath}$, and ${\hat k}$.) The four basis vectors physically represent different possible polarizations of the electromagnetic field. That is why they are typically aligned with the momentum of the wave rather than with some Cartesian axis system and its time axis. Note that each polarization vector has its own annihilation operator $\widehat a_{{\vec k}\nu}$ and creation operator $\widehat a^\dagger _{{\vec k}\nu}$. These annihilate respectively create photons with that wave number vector ${\vec k}$ and polarization.

(Elec­tro­mag­netic waves in empty space are spe­cial; for them only two in­de­pen­dent po­lar­iza­tions are pos­si­ble. Or to be pre­cise, even in empty space the Klein-Gor­don equa­tions with Lorenz con­di­tion al­low a third po­lar­iza­tion. But these waves pro­duce no elec­tric and mag­netic fields and con­tain no net elec­tro­mag­netic en­ergy. So they are phys­i­cally ir­rel­e­vant. You can call them “gage equiv­a­lent to the vac­uum.” That sounds bet­ter than ir­rel­e­vant.)

The Lorenz con­di­tion of the pre­vi­ous sub­sec­tion is again needed to get rid of neg­a­tive en­ergy states. The ques­tion is now ex­actly how to phrase the Lorenz con­di­tion in quan­tum terms.

(There is an epidemic among my, highly authoritative, sources that come up with negative norm states without Lorenz condition. Now the present author himself is far from an expert on quantum field theories. But he knows one thing: norms cannot be negative. If you come up with negative norms, it tells you nothing about the physics. It tells you that you are doing the mathematics wrong. I believe the correct argument goes something like this: “Suppose that we can do our usual stupid canonical quantization tricks for this system. Blah blah. That gives negative norm states. Norms cannot be negative. Ergo: we cannot do our usual stupid canonical quantization tricks for this system.” If you properly define the creation and annihilation operators to put photons in negative energy states, there is no mathematical problem. The commutator relation for the negative energy states picks up a minus sign and the norms are positive as they should be. Now the mathematics is sound and you can start worrying about problems in the physics. Like that there are negative energy states. And maybe lack of Lorentz invariance, although the original system is Lorentz invariant, and I do not see what would not be Lorentz invariant about putting particles in the negative energy states.)

The sim­plest idea would be to re­quire that the quan­tum field above sat­is­fies the Lorenz con­di­tion. But the quan­tum field de­ter­mines the dy­nam­ics. Like in the clas­si­cal case, you do not want to change the dy­nam­ics. In­stead you want to throw cer­tain so­lu­tions away. That means that you want to throw cer­tain wave func­tions ${\left\vert\Psi\right\rangle}$ away.

The strict con­di­tion would be to re­quire (in the Heisen­berg pic­ture {A.12})

\begin{displaymath}
\bigg(\frac{1}{c}\frac{\partial\widetilde\varphi/c}{\partial t}
+ \nabla\cdot\widetilde{\skew3\vec A}\bigg) {\left\vert\Psi\right\rangle} = 0
\end{displaymath}

for all phys­i­cally ob­serv­able states ${\left\vert\Psi\right\rangle}$. Here the par­en­thet­i­cal ex­pres­sion is the op­er­a­tor of the Lorenz quan­tity that must be zero. The above re­quire­ment makes ${\left\vert\Psi\right\rangle}$ an eigen­vec­tor of the Lorenz quan­tity with eigen­value zero. Then ac­cord­ing to the rules of quan­tum me­chan­ics, chap­ter 3.4, the only mea­sur­able value of the Lorenz quan­tity is zero.

But the above strict con­di­tion is too re­stric­tive. Not even the vac­uum state with no pho­tons would be phys­i­cally ob­serv­able. That is be­cause the cre­ation op­er­a­tors in $\widehat\varphi$ and $\skew6\widehat{\skew3\vec A}$ will cre­ate nonzero pho­ton states when ap­plied on the vac­uum state. That sug­gests that only the an­ni­hi­la­tion terms should be in­cluded. That then gives the “Gupta-Bleuler con­di­tion:”

\begin{displaymath}
\bigg(\frac{1}{c}\frac{\partial\widetilde\varphi^+\!/c}{\partial t}
+ \nabla\cdot{\widetilde{\skew3\vec A}}^+\bigg) {\left\vert\Psi\right\rangle} = 0
\end{displaymath}

for phys­i­cally ob­serv­able states ${\left\vert\Psi\right\rangle}$. Here the su­per­script $+$ on the quan­tum fields means that only the $\widehat a_{{\vec k}\nu}$ an­ni­hi­la­tion op­er­a­tor terms are in­cluded.

You might of course wonder why the annihilation terms are indicated by a plus sign, instead of the creation terms. After all, it is the creation operators that create more photons. But the plus sign actually stands for the fact that the annihilation terms are associated with an $e^{-{\rm i}{\omega}t}$ time dependence instead of $e^{{\rm i}{\omega}t}$. Yes true, $e^{-{\rm i}{\omega}t}$ has a minus sign, not a plus sign. But $e^{-{\rm i}{\omega}t}$ has the normal sign, and normal is represented by a plus sign. Is not addition more normal than subtraction? Please do not pull at your hair like that, there are less drastic ways to save on professional hair care.

Sim­ply drop­ping the cre­ation terms may seem com­pletely ar­bi­trary. But it ac­tu­ally has some phys­i­cal logic to it. Con­sider the in­ner prod­uct

\begin{displaymath}
{\left\langle\Psi'\hspace{0.3pt}\right\vert}
\bigg(\frac{1}{c}\frac{\partial\widetilde\varphi/c}{\partial t}
+ \nabla\cdot\widetilde{\skew3\vec A}\,\bigg) {\left\vert\Psi\right\rangle} = 0
\end{displaymath}

This is the amount of state ${\left\vert\Psi'\right\rangle}$ pro­duced by ap­ply­ing the Lorenz quan­tity on the phys­i­cally ob­serv­able state ${\left\vert\Psi\right\rangle}$. The strict con­di­tion is equiv­a­lent to say­ing that this in­ner prod­uct must al­ways be zero; no amount of any state may be pro­duced. For the Gupta-Bleuler con­di­tion, the above in­ner prod­uct re­mains zero if ${\left\vert\Psi'\right\rangle}$ is a phys­i­cally ob­serv­able state. (The rea­son is that the cre­ation terms can be taken to the other side of the in­ner prod­uct as an­ni­hi­la­tion terms. Then they pro­duce zero if ${\left\vert\Psi'\right\rangle}$ is phys­i­cally ob­serv­able.) So the Gupta-Bleuler con­di­tion im­plies that no amount of any phys­i­cally ob­serv­able state may be pro­duced by the Lorenz quan­tity.

There are other ways to do quan­ti­za­tion of the elec­tro­mag­netic field. The quan­ti­za­tion fol­low­ing Fermi, as dis­cussed in sub­sec­tion A.22.8, can be con­verted into a mod­ern quan­tum field the­ory. But that turns out to be a very messy process in­deed, [[17, 6]]. The de­riva­tion is es­sen­tially to mess around at length un­til you more or less prove that you can use the Lorenz con­di­tion re­sult in­stead. You might as well start there.

It does turn out that the so-called path-integral formulation of quantum mechanics does a very nice job here, [52, pp. 30ff]. It avoids many of the contortions of canonical quantization like the ones above.

In fact, a pop­u­lar quan­tum field text­book, [34, p. 79], re­fuses to do canon­i­cal quan­ti­za­tion of the elec­tro­mag­netic field at all, call­ing it an awk­ward sub­ject. This book is typ­i­cally used dur­ing the sec­ond year of grad­u­ate study in physics, so it is not that its read­ers are un­so­phis­ti­cated.


A.22.7 The con­ven­tional La­grangian

Re­turn­ing to the clas­si­cal elec­tro­mag­netic field, it still needs to be ex­am­ined whether the Lorenz con­di­tion has made the uni­verse safe for life as we know it.

The an­swer de­pends on the La­grangian, be­cause the La­grangian de­ter­mines the evo­lu­tion of a sys­tem. So far, the La­grangian has been writ­ten in terms of the four po­ten­tials $\varphi$ and $A_j$ (with $j$ = 1, 2, and 3) of the elec­tro­mag­netic field. But re­call that mat­ter does not ob­serve the four po­ten­tials di­rectly. It only no­tices the elec­tric field $\skew3\vec{\cal E}$ and the mag­netic field $\skew2\vec{\cal B}$. So it may help to re­for­mu­late the La­grangian in terms of the elec­tric and mag­netic fields. Con­cen­trat­ing on the ob­served fields is likely to show up more clearly what is ac­tu­ally ob­served.

With a bit of math­e­mat­i­cal ma­nip­u­la­tion, {D.37.3}, the self-ev­i­dent elec­tro­mag­netic La­grangian den­sity can be writ­ten as:

\begin{displaymath}
\pounds _{\rm seem} = \frac{\epsilon_0}{2} \bigg({\cal E}^2 - c^2 {\cal B}^2
- c^2 \bigg\{\frac{1}{c}\frac{\partial\varphi/c}{\partial t}
+\nabla\cdot\skew3\vec A\bigg\}^2\,\bigg) + \ldots
\end{displaymath}

Here the dots stand for terms that do not af­fect the mo­tion. (Since in the ac­tion, La­grangian den­si­ties get in­te­grated over space and time, terms that are pure spa­tial or time de­riv­a­tives in­te­grate away. The quan­ti­ties rel­e­vant to the ac­tion prin­ci­ple van­ish at the lim­its of in­te­gra­tion.)

The term in­side the curly brack­ets is zero ac­cord­ing to the Lorenz con­di­tion (A.138). There­fore, it too does not af­fect the mo­tion. (To be pre­cise, the term does not af­fect the mo­tion be­cause it is squared. By it­self it would af­fect the mo­tion. In the for­mal way in which the La­grangian is dif­fer­en­ti­ated, one power is lost.)

The con­ven­tional La­grangian den­sity is found by dis­re­gard­ing the terms that do not change the mo­tion:

\begin{displaymath}
\pounds _{\rm conem} = \frac{\epsilon_0}{2} \Big({\cal E}^2 - c^2 {\cal B}^2\Big)
\end{displaymath}

So the con­ven­tional La­grangian den­sity of the elec­tro­mag­netic field is com­pletely in terms of the ob­serv­able fields.

As an aside, it might be noted that physi­cists find the above ex­pres­sion too in­tu­itive. So you will find it in quan­tum field books in rel­a­tivis­tic in­dex no­ta­tion as:

\begin{displaymath}
\pounds _{\rm conem} = - \frac{\epsilon_0}{4} F_{\mu\nu}F^{\mu\nu}
\end{displaymath}

Here the “field strength ten­sor” is de­fined by

\begin{displaymath}
F_{\mu\nu} = c \left(\partial_\mu A_\nu - \partial_\nu A_\mu\right)
\qquad \mu=0,1,2,3 \quad \nu=0,1,2,3
\end{displaymath}

Note that the in­dices on each $A$ are sub­scripts in­stead of su­per­scripts as they should be. That means that you must add a mi­nus sign when­ever the in­dex on an $A$ is 0. If you do that cor­rectly, you will find that from the 16 $F_{\mu\nu}$ val­ues, some are zero, while the rest are com­po­nents of the elec­tric or mag­netic fields. To go from $F_{\mu\nu}$ to $F^{\mu\nu}$, you must raise both in­dices, so add a mi­nus sign for each in­dex that is zero. If you do all that the same La­grangian den­sity as be­fore re­sults.
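If you want to check that you did get all the blasted sign changes right, the contraction can be done mechanically. The sketch below, assuming Python with sympy is available, builds $F_{\mu\nu}$ from arbitrary potentials, raises both indices with the metric ${\rm diag}(1,-1,-1,-1)$, and confirms that $-\frac{\epsilon_0}{4}F_{\mu\nu}F^{\mu\nu}$ equals $\frac{\epsilon_0}{2}({\cal E}^2-c^2{\cal B}^2)$:

```python
# Symbolic check that -eps0/4 F_mu_nu F^mu^nu = eps0/2 (E^2 - c^2 B^2).
# The signs come in through the metric eta = diag(1, -1, -1, -1).
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
c, eps0 = sp.symbols('c epsilon_0', positive=True)
r = (x, y, z)

phi = sp.Function('phi')(t, x, y, z)
A = [sp.Function('A%d' % i)(t, x, y, z) for i in range(3)]

eta = sp.diag(1, -1, -1, -1)
Aup = [phi / c] + A                           # contravariant A^mu
Alo = [eta[m, m] * Aup[m] for m in range(4)]  # lower the index: minus on space

def d(f, m):                                  # four-gradient d_mu = d/dx^mu
    return sp.diff(f, t) / c if m == 0 else sp.diff(f, r[m - 1])

F = sp.Matrix(4, 4, lambda m, n: c * (d(Alo[n], m) - d(Alo[m], n)))
Fup = eta * F * eta                           # raise both indices

lagr = -eps0 / 4 * sum(F[m, n] * Fup[m, n]
                       for m in range(4) for n in range(4))

# electric and magnetic fields from the potentials, as in (A.131)
E = [-sp.diff(phi, r[i]) - sp.diff(A[i], t) for i in range(3)]
B = [sp.diff(A[(i + 2) % 3], r[(i + 1) % 3])
     - sp.diff(A[(i + 1) % 3], r[(i + 2) % 3]) for i in range(3)]

target = eps0 / 2 * (sum(e**2 for e in E) - c**2 * sum(b**2 for b in B))
assert sp.simplify(sp.expand(lagr - target)) == 0
```

Along the way you can also see that the $F_{0i}$ components come out as the electric field components, while the $F_{ij}$ components hold the magnetic field.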

Be­cause the con­ven­tional La­grangian den­sity is dif­fer­ent from the self-ev­i­dent one, the field equa­tions (A.135) and (A.136) for the po­ten­tials pick up a few ad­di­tional terms. To find them, re­peat the analy­sis of sub­sec­tion A.22.4 but use the con­ven­tional den­sity above in (A.134). Note that you will need to write the elec­tric and mag­netic fields in terms of the po­ten­tials us­ing (A.131). (Us­ing the field strength ten­sor is ac­tu­ally some­what sim­pler in con­vert­ing to the po­ten­tials. If you can get all the blasted sign changes right, that is.)

Then the con­ven­tional field equa­tions be­come:

\begin{displaymath}
\frac{1}{c^2}\frac{\partial^2\varphi}{\partial t^2} - \nabla^2\varphi
- \frac{1}{c^2}\frac{\partial^2\varphi}{\partial t^2}
- \frac{\partial\nabla\cdot\skew3\vec A}{\partial t}
= \frac{\rho}{\epsilon_0} %
\end{displaymath} (A.140)


\begin{displaymath}
\frac{\partial^2 \skew3\vec A}{\partial t^2} - c^2 \nabla^2 \skew3\vec A
+ \nabla\frac{\partial\varphi}{\partial t}
+ c^2 \nabla(\nabla\cdot\skew3\vec A)
= \frac{\vec\jmath}{\epsilon_0} %
\end{displaymath} (A.141)

Here $\rho$ is again the charge density and $\vec\jmath$ the current density of the charges that are around.

The ad­di­tional terms in each equa­tion above are the two be­fore the equals signs. Note that these ad­di­tional terms are zero on ac­count of the Lorenz con­di­tion. So they do not change the so­lu­tion.

The conventional field equations above are obviously messier than the original ones, even if you cancel the second order time derivatives in (A.140). However, they do have one advantage. If you use these conventional equations, you do not have to worry about satisfying the Lorenz condition. Any solution to the equations will give you the right electric and magnetic fields and so the right motion of the charged particles.

To be sure, the po­ten­tials will be dif­fer­ent if you do not sat­isfy the Lorenz con­di­tion. But the po­ten­tials have no mean­ing of their own. At least not in clas­si­cal elec­tro­mag­net­ics.

To ver­ify that the Lorenz con­di­tion is no longer needed, first re­call the in­de­ter­mi­nacy in the po­ten­tials. As sub­sec­tion A.22.4 dis­cussed, more than one set of po­ten­tials can pro­duce the same elec­tric and mag­netic fields. In par­tic­u­lar, given po­ten­tials $\varphi$ and $\skew3\vec A$, you can cre­ate equiv­a­lent po­ten­tials as

\begin{displaymath}
\varphi' = \varphi - \chi_t \qquad \skew3\vec A' = \skew3\vec A+ \nabla\chi
\end{displaymath}

Here $\chi$ can be any func­tion of space and time that you want. The po­ten­tials $\varphi'$ and $\skew3\vec A'$ give the ex­act same elec­tric and mag­netic fields as $\varphi$ and $\skew3\vec A$. Such a trans­for­ma­tion of po­ten­tials is called a gage trans­form.
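That the observable fields really do not change can be verified symbolically. The sketch below, using the sympy package, applies the gage transform above to arbitrary potentials and checks that ${\cal E}=-\nabla\varphi-\partial\skew3\vec A/\partial t$ and ${\cal B}=\nabla\times\skew3\vec A$ come out the same. The particular gage function $\chi$ is just an arbitrary concrete example:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
r = (x, y, z)
phi = sp.Function('phi')(t, *r)                     # arbitrary scalar potential
A = sp.Matrix([sp.Function(f'A{i}')(t, *r) for i in range(3)])
chi = sp.exp(-t) * sp.sin(x) * sp.cos(y) * z        # an arbitrary gage function

def grad(f): return sp.Matrix([sp.diff(f, v) for v in r])
def curl(V):
    return sp.Matrix([sp.diff(V[2], y) - sp.diff(V[1], z),
                      sp.diff(V[0], z) - sp.diff(V[2], x),
                      sp.diff(V[1], x) - sp.diff(V[0], y)])

# fields from the original potentials
E = -grad(phi) - sp.diff(A, t)
B = curl(A)

# gage transform: phi' = phi - chi_t, A' = A + grad chi
phi2 = phi - sp.diff(chi, t)
A2 = A + grad(chi)
E2 = -grad(phi2) - sp.diff(A2, t)
B2 = curl(A2)

print(sp.simplify(E2 - E))  # zero vector
print(sp.simplify(B2 - B))  # zero vector
```

The time derivatives of $\nabla\chi$ cancel against the gradient of $\chi_t$, and the curl of a gradient is zero.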

Now sup­pose that you have a so­lu­tion $\varphi$ and $\skew3\vec A$ of the con­ven­tional field equa­tions, but it does not sat­isfy the Lorenz con­di­tion. In that case, sim­ply ap­ply a gage trans­form as above to get new fields $\varphi'$ and $\skew3\vec A'$ that do sat­isfy the Lorenz con­di­tion. To do so, write out the Lorenz con­di­tion for the new po­ten­tials,

\begin{displaymath}
\frac{1}{c^2}\frac{\partial\varphi'}{\partial t} + \nabla\cdot\skew3\vec A' =
\frac{1}{c^2}\frac{\partial\varphi}{\partial t}
- \frac{1}{c^2}\frac{\partial^2\chi}{\partial t^2}
+ \nabla\cdot\skew3\vec A+ \nabla^2 \chi
\end{displaymath}

You can al­ways choose the func­tion $\chi$ to make this quan­tity zero. (Note that that gives an in­ho­mo­ge­neous Klein-Gor­don equa­tion for $\chi$.)

Now it turns out that the new po­ten­tials $\varphi'$ and $\skew3\vec A'$ still sat­isfy the con­ven­tional equa­tions. That can be seen by straight sub­sti­tu­tion of the ex­pres­sions for the new po­ten­tials in the con­ven­tional equa­tions. So the new po­ten­tials are per­fectly OK: they sat­isfy both the Lorenz con­di­tion and the con­ven­tional equa­tions. But the orig­i­nal po­ten­tials $\varphi$ and $\skew3\vec A$ pro­duced the ex­act same elec­tric and mag­netic fields. So the orig­i­nal po­ten­tials were OK too.

The evolution equation (A.140) for the electrostatic potential is worth a second look. Because of the definition of the electric field (A.130), it can be written as

\begin{displaymath}
\nabla\cdot\skew3\vec{\cal E}= \frac{\rho}{\epsilon_0} %
\end{displaymath} (A.142)

That is called Maxwell’s first equa­tion, chap­ter 13.2. It ties the charge den­sity to the elec­tric field quite rigidly.

Maxwell’s first equa­tion is a con­se­quence of the Lorenz con­di­tion. It would not be re­quired for the orig­i­nal Klein-Gor­don equa­tions with­out Lorenz con­di­tion. In par­tic­u­lar, it is the Lorenz con­di­tion that al­lows the ad­di­tional two terms in the evo­lu­tion equa­tion (A.140) for the elec­tro­sta­tic po­ten­tial. These then elim­i­nate the sec­ond or­der time de­riv­a­tive from the equa­tion. That then turns the equa­tion from a nor­mal evo­lu­tion equa­tion into a re­stric­tive spa­tial con­di­tion on the elec­tric field.

It may be noted that the other evo­lu­tion equa­tion (A.141) is Maxwell’s fourth equa­tion. Just rewrite it in terms of the elec­tric and mag­netic fields. The other two Maxwell equa­tions fol­low triv­ially from the de­f­i­n­i­tions (A.130) of the elec­tric and mag­netic fields in terms of the po­ten­tials.
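Indeed, the two remaining Maxwell equations require nothing beyond the definitions of the fields in terms of the potentials. A quick symbolic check, with the potentials left as arbitrary functions:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
r = (x, y, z)
phi = sp.Function('phi')(t, *r)
A = sp.Matrix([sp.Function(f'A{i}')(t, *r) for i in range(3)])

def grad(f): return sp.Matrix([sp.diff(f, v) for v in r])
def div(V): return sum(sp.diff(V[i], r[i]) for i in range(3))
def curl(V):
    return sp.Matrix([sp.diff(V[2], y) - sp.diff(V[1], z),
                      sp.diff(V[0], z) - sp.diff(V[2], x),
                      sp.diff(V[1], x) - sp.diff(V[0], y)])

# definitions (A.130) of the fields in terms of the potentials
E = -grad(phi) - sp.diff(A, t)
B = curl(A)

print(sp.simplify(div(B)))                   # 0: no magnetic monopoles
print(sp.simplify(curl(E) + sp.diff(B, t)))  # zero vector: Faraday's law
```

The divergence of a curl and the curl of a gradient vanish identically, whatever the potentials are.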

Since there is no Lorenz con­di­tion for the con­ven­tional equa­tions, it be­comes in­ter­est­ing to find the cor­re­spond­ing Hamil­ton­ian. That should al­low the sta­bil­ity of elec­tro­mag­net­ics to be ex­am­ined more eas­ily.

The Hamil­ton­ian for elec­tro­mag­netic field plus a pro­ton may be found the same way as (A.137) in sub­sec­tion A.22.5, {A.1.5}. Just use the con­ven­tional La­grangian den­sity in­stead. That gives

\begin{displaymath}
H_{\rm conem+p} = \int\Big( \frac{\epsilon_0}{2}
({\cal E}^2 + c^2 {\cal B}^2)
+ \epsilon_0 \skew3\vec{\cal E}\cdot\nabla\varphi
+ \rho_{\rm p} \varphi \Big) {\,\rm d}^3{\skew0\vec r}
+ \sum_{j=1}^3 {\textstyle\frac{1}{2}} m_{\rm {p}} v_{\rm {p}}\strut_j^2
\end{displaymath}

But the proton charge density $\rho_{\rm {p}}$ may be eliminated using Maxwell's first equation above. An additional integration by parts of that term then causes it to drop away against the previous term. That gives the conventional energy as
\begin{displaymath}
E_{\rm conem+p} = \frac{\epsilon_0}{2} \int({\cal E}^2 + c^2 {\cal B}^2)
{\,\rm d}^3{\skew0\vec r}
+ {\textstyle\frac{1}{2}}m_{\rm {p}}\vec v_{\rm {p}}^{\,2} %
\end{displaymath} (A.143)

The first term is the en­ergy in the ob­serv­able fields and the fi­nal term is the ki­netic en­ergy of the pro­ton.

The sim­pli­fied en­ergy above is no longer re­ally a Hamil­ton­ian; you can­not write Hamil­ton’s equa­tions based on it as in {A.1.5}. But it does still give the en­ergy that is con­served.

The en­ergy above is al­ways pos­i­tive. So it can no longer be low­ered by ar­bi­trary amounts. The sys­tem will not blow up. And that then means that the orig­i­nal Klein-Gor­don equa­tions (A.135) and (A.136) for the fields are sta­ble too as long as the Lorenz con­di­tion is sat­is­fied. They pro­duce the same evo­lu­tion. And they sat­isfy the speed of light re­stric­tion and are Lorentz in­vari­ant. Lorenz did it!

Note also the re­mark­able re­sult that the in­ter­ac­tion en­ergy be­tween pro­ton charge and field has dis­ap­peared. The pro­ton can no longer min­i­mize any en­ergy of in­ter­ac­tion be­tween it­self and the field it cre­ates. Maxwell’s first equa­tion is too re­stric­tive. All the pro­ton can try to do is min­i­mize the en­ergy in the elec­tric and mag­netic fields.


A.22.8 Quan­ti­za­tion fol­low­ing Fermi

Quan­tiz­ing the elec­tro­mag­netic field is not easy. The pre­vi­ous sub­sec­tion showed a cou­ple of prob­lems. The gage prop­erty im­plies that the elec­tro­mag­netic po­ten­tials $\varphi$ and $\skew3\vec A$ are in­de­ter­mi­nate. Also, tak­ing the Lorenz con­di­tion into ac­count, the sec­ond or­der time de­riv­a­tive is lost in the Klein-Gor­don equa­tion for the elec­tro­sta­tic po­ten­tial $\varphi$. The equa­tion turns into Maxwell’s first equa­tion,

\begin{displaymath}
\nabla\cdot\skew3\vec{\cal E}= \frac{\rho}{\epsilon_0}
\end{displaymath}

That is not an evo­lu­tion equa­tion but a spa­tial con­straint for the elec­tric field $\skew3\vec{\cal E}$ in terms of the charge den­sity $\rho$.

Various ways to deal with that have been developed. The quantization procedure discussed in this subsection is a simplified version of the one found in Bethe's book, [6, pp. 255-271]. It is due to Fermi, based on earlier work by Dirac and Heisenberg & Pauli. This derivation was a great achievement at the time, and fundamental to more advanced quantum field approaches, [6, p. 266]. Note that all five mentioned physicists received a Nobel Prize in physics at one time or another.

The start­ing point in this dis­cus­sion will be the orig­i­nal po­ten­tials $\varphi$ and $\skew3\vec A$ of sub­sec­tion A.22.4. The ones that sat­is­fied the Klein-Gor­don equa­tions (A.135) and (A.136) as well as the Lorenz con­di­tion (A.138).

It was Fermi who rec­og­nized that you can make things a lot sim­pler for your­self if you write the po­ten­tials as sums of ex­po­nen­tials of the form $e^{{\rm i}{\vec k}\cdot{\skew0\vec r}}$:

\begin{displaymath}
\varphi = \sum_{{\rm all}\ {\vec k}} c_{\vec k}e^{{\rm i}{\vec k}\cdot{\skew0\vec r}}
\qquad
\skew3\vec A= \sum_{{\rm all}\ {\vec k}} \vec d_{\vec k}e^{{\rm i}{\vec k}\cdot{\skew0\vec r}}
\end{displaymath}

That is the same trick as was used in quan­tiz­ing the Koulomb po­ten­tial in sub­sec­tion A.22.3. How­ever, in clas­si­cal me­chan­ics you do not call these ex­po­nen­tials mo­men­tum eigen­states. You call them Fourier modes. The prin­ci­ple is the same. The con­stant vec­tor ${\vec k}$ that char­ac­ter­izes each ex­po­nen­tial is still called the wave num­ber vec­tor. Since the po­ten­tials con­sid­ered here vary with time, the co­ef­fi­cients $c_{\vec k}$ and $\vec{d}_{\vec k}$ are func­tions of time.

Note that the co­ef­fi­cients $\vec{d}_{\vec k}$ are vec­tors. These will have three in­de­pen­dent com­po­nents. So the vec­tor po­ten­tial can be writ­ten more ex­plic­itly as

\begin{displaymath}
\skew3\vec A= \sum_{{\rm all}\ {\vec k}}
\Big( d_{1,{\vec k}\,} \vec e_{1,{\vec k}\,}
+ d_{2,{\vec k}\,} \vec e_{2,{\vec k}\,}
+ d_{3,{\vec k}\,} \vec e_{3,{\vec k}\,} \Big)
e^{{\rm i}{\vec k}\cdot{\skew0\vec r}}
\end{displaymath}

where $\vec{e}_{1,{\vec k}}$, $\vec{e}_{2,{\vec k}}$, and $\vec{e}_{3,{\vec k}}$ are unit vectors. Fermi proposed that the smart thing to do is to take the first of these unit vectors in the same direction as the wave number vector ${\vec k}$. The corresponding electromagnetic waves are called longitudinal. The other two unit vectors should be orthogonal to the first one and to each other. That still leaves a bit of choice in direction. Fortunately, in practice it does not really make a difference exactly how you take them. The corresponding electromagnetic waves are called transverse.

In short, the fields can be writ­ten as

\begin{displaymath}
\varphi = \sum_{{\rm all}\ {\vec k}} c_{{\vec k}\,} e^{{\rm i}{\vec k}\cdot{\skew0\vec r}}
\qquad
\skew3\vec A_\parallel = \sum_{{\rm all}\ {\vec k}}
d_{1,{\vec k}\,} \vec e_{1,{\vec k}\,} e^{{\rm i}{\vec k}\cdot{\skew0\vec r}}
\qquad
\skew3\vec A_\perp = \sum_{{\rm all}\ {\vec k}}
\Big( d_{2,{\vec k}\,} \vec e_{2,{\vec k}\,}
+ d_{3,{\vec k}\,} \vec e_{3,{\vec k}\,} \Big)
e^{{\rm i}{\vec k}\cdot{\skew0\vec r}} %
\end{displaymath} (A.144)

where

\begin{displaymath}
\vec e_{1,{\vec k}\,} = \frac{{\vec k}}{k} \qquad
\vec e_{1,{\vec k}\,} \cdot \vec e_{2,{\vec k}\,} =
\vec e_{1,{\vec k}\,} \cdot \vec e_{3,{\vec k}\,} =
\vec e_{2,{\vec k}\,} \cdot \vec e_{3,{\vec k}\,} = 0
\end{displaymath}

From those ex­pres­sions, and the di­rec­tions of the unit vec­tors, it can be checked by straight sub­sti­tu­tion that the curl of the lon­gi­tu­di­nal po­ten­tial is zero:

\begin{displaymath}
\mathop{\rm curl}\nolimits \skew3\vec A_\parallel \equiv
\nabla\times\skew3\vec A_\parallel = 0 \quad \mbox{(irrotational)}
\end{displaymath}

A vec­tor field with zero curl is called “ir­ro­ta­tional.” (The term can be un­der­stood from fluid me­chan­ics; there the curl of the fluid ve­loc­ity field gives the lo­cal av­er­age an­gu­lar ve­loc­ity of the fluid.)

The same way, it turns out that the divergence of the transverse potential is zero:

\begin{displaymath}
\div\skew3\vec A_\perp \equiv \nabla\cdot\skew3\vec A_\perp = 0 \quad \mbox{(solenoidal)}
\end{displaymath}

A field with zero di­ver­gence is called “so­le­noidal.” (This term can be un­der­stood from mag­ne­to­sta­t­ics; a mag­netic field, like the one pro­duced by a so­le­noid, an elec­tro­mag­net, has zero di­ver­gence.)

To be fair, Fermi did not re­ally dis­cover that it can be smart to take vec­tor fields apart into ir­ro­ta­tional and so­le­noidal parts. That is an old trick known as the “Helmholtz de­com­po­si­tion.”
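The decomposition is easy to carry out numerically with a fast Fourier transform: project each Fourier coefficient of $\skew3\vec A$ onto its wave number vector ${\vec k}$ to get the longitudinal part, and keep the remainder as the transverse part. A sketch, in which a random field on a periodic box stands in for the potential:

```python
import numpy as np

n = 16
k1 = np.fft.fftfreq(n, d=1.0/n) * 2*np.pi
KX, KY, KZ = np.meshgrid(k1, k1, k1, indexing='ij')
K = np.stack([KX, KY, KZ])              # wave number vector of each mode
k2 = np.sum(K**2, axis=0)
k2[0, 0, 0] = 1.0                       # dodge 0/0; the k = 0 mode gets no longitudinal part

rng = np.random.default_rng(0)
A = rng.standard_normal((3, n, n, n))   # an arbitrary real vector field
Ak = np.fft.fftn(A, axes=(1, 2, 3))

# longitudinal part: the component of each Fourier coefficient along k
Ak_par = K * (np.sum(K * Ak, axis=0) / k2)
Ak_perp = Ak - Ak_par

# in Fourier space, divergence becomes i k . and curl becomes i k x
print(np.allclose(np.cross(K, Ak_par, axis=0), 0))  # curl of longitudinal part: zero
print(np.allclose(np.sum(K * Ak_perp, axis=0), 0))  # divergence of transverse part: zero
```

Mode by mode, the longitudinal coefficient is parallel to ${\vec k}$, so its cross product with ${\vec k}$ vanishes; the transverse coefficient is orthogonal to ${\vec k}$, so its dot product with ${\vec k}$ vanishes.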

Since the trans­verse po­ten­tial has no di­ver­gence, the lon­gi­tu­di­nal po­ten­tial is solely re­spon­si­ble for the Lorenz con­di­tion (A.138). The trans­verse po­ten­tial can do what­ever it wants.

The real prob­lem is there­fore with the lon­gi­tu­di­nal po­ten­tial $\skew3\vec A_\parallel$ and the elec­tro­sta­tic po­ten­tial $\varphi$. Bethe [6] deals with these in terms of the Fourier modes. How­ever, that re­quires some fairly so­phis­ti­cated analy­sis. It is ac­tu­ally eas­ier to re­turn to the po­ten­tials them­selves now.

Re­con­sider the ex­pres­sions (A.130) for the elec­tric and mag­netic fields in terms of the po­ten­tials. They show that the elec­tro­sta­tic po­ten­tial pro­duces no mag­netic field. And nei­ther does the lon­gi­tu­di­nal po­ten­tial be­cause it is ir­ro­ta­tional.

They do pro­duce a com­bined elec­tric field $\skew3\vec{\cal E}_{\varphi\parallel}$. But this elec­tric field is ir­ro­ta­tional, be­cause the lon­gi­tu­di­nal po­ten­tial is, and the gra­di­ent $\nabla$ of any scalar func­tion is. That helps, be­cause then the Stokes the­o­rem of cal­cu­lus im­plies that the elec­tric field $\skew3\vec{\cal E}_{\varphi\parallel}$ is mi­nus the gra­di­ent of some scalar po­ten­tial:

\begin{displaymath}
\skew3\vec{\cal E}_{\varphi\parallel} = - \nabla\varphi_{\rm {C}}
\end{displaymath}

Note that nor­mally $\varphi_{\rm {C}}$ is not the same as the elec­tro­sta­tic po­ten­tial $\varphi$, since there is also the lon­gi­tu­di­nal po­ten­tial. To keep them apart, $\varphi_{\rm {C}}$ will be called the Coulomb po­ten­tial.

As far as the di­ver­gence of the elec­tric field $\skew3\vec{\cal E}_{\varphi\parallel}$ is con­cerned, it is the same as the di­ver­gence of the com­plete elec­tric field. The rea­son is that the trans­verse field has no di­ver­gence. And the di­ver­gence of the com­plete elec­tric field is given by Maxwell’s first equa­tion. To­gether these ob­ser­va­tions give

\begin{displaymath}
\skew3\vec{\cal E}_{\varphi\parallel} = - \nabla\varphi_{\rm {C}}
\qquad
\nabla\cdot\skew3\vec{\cal E}_{\varphi\parallel} =
- \nabla^2 \varphi_{\rm {C}} = \frac{\rho}{\epsilon_0}
\end{displaymath}

Note that the fi­nal equa­tion is a Pois­son equa­tion for the Coulomb po­ten­tial.

Now suppose that you replaced the electrostatic potential $\varphi$ with the Coulomb potential $\varphi_{\rm {C}}$ and had no longitudinal potential $\skew3\vec A_\parallel$ at all. It would give the same electric and magnetic fields. And they are the only ones that are observable. They give the forces on the particles. The potentials are just mathematical tools in classical electromagnetics.

So why not? To be sure, the com­bi­na­tion of the Coulomb po­ten­tial $\varphi_{\rm {C}}$ and re­main­ing vec­tor po­ten­tial $\skew3\vec A_\perp$ will no longer sat­isfy the Lorenz con­di­tion. But who cares?

In­stead of the Lorenz con­di­tion, the com­bi­na­tion of Coulomb po­ten­tial plus trans­verse po­ten­tial sat­is­fies the so-called “Coulomb con­di­tion:”

\begin{displaymath}
\fbox{$\displaystyle
\nabla\cdot\skew3\vec A= 0
$}
\end{displaymath} (A.145)

The reason is that now $\skew3\vec A= \skew3\vec A_\perp$ and the transverse vector potential has no divergence. Physicists like to say that the original potentials used the Lorenz gage, while the new ones use the Coulomb gage.

Be­cause the po­ten­tials $\varphi_{\rm {C}}$ and $\skew3\vec A_\perp$ do no longer sat­isfy the Lorenz con­di­tion, the Klein-Gor­don equa­tions (A.135) and (A.136) do no longer ap­ply. But the con­ven­tional equa­tions (A.140) and (A.141) do still ap­ply; they do not need the Lorenz con­di­tion.

Now con­sider the Coulomb po­ten­tial some­what closer. As noted above it sat­is­fies the Pois­son equa­tion

\begin{displaymath}
- \nabla^2 \varphi_{\rm {C}} = \frac{\rho}{\epsilon_0}
\end{displaymath}

The so­lu­tion to this equa­tion was al­ready found in the first sub­sec­tion, (A.107). If the charge dis­tri­b­u­tion $\rho$ con­sists of a to­tal of $I$ point charges, it is
\begin{displaymath}
\varphi_{\rm {C}}({\skew0\vec r};t) = \sum_{i=1}^I \frac{q_i}{4\pi\epsilon_0\vert{\skew0\vec r}-{\skew0\vec r}_i\vert}
\end{displaymath} (A.146)

Here $q_i$ is the charge of point charge num­ber $i$, and ${\skew0\vec r}_i$ its po­si­tion.
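That each term of this sum satisfies the Poisson equation away from its point charge can be checked symbolically; the Laplacian of $1/r$ is zero everywhere except at the charge itself:

```python
import sympy as sp

x, y, z, q, eps0 = sp.symbols('x y z q epsilon_0', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)
phiC = q / (4*sp.pi*eps0*r)   # one term of the Coulomb potential (A.146)

# Laplacian in Cartesian coordinates
lap = sum(sp.diff(phiC, v, 2) for v in (x, y, z))
print(sp.simplify(lap))  # 0  (away from the point charge itself)
```

At the location of the charge itself the Laplacian produces the delta function charge density, which is why the sum solves the Poisson equation with point charges.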

If the charge dis­tri­b­u­tion $\rho$ is smoothly dis­trib­uted, sim­ply take it apart in small point charges $\rho({\underline{\skew0\vec r}};t){\rm d}^3{\underline{\skew0\vec r}}$. That gives

\begin{displaymath}
\varphi_{\rm {C}}({\skew0\vec r};t) = \int_{{\rm all\ }{\underline{\skew0\vec r}}}
\frac{\rho({\underline{\skew0\vec r}};t)}
{4\pi\epsilon_0\vert{\skew0\vec r}-{\underline{\skew0\vec r}}\vert}
{\,\rm d}^3{\underline{\skew0\vec r}}
\end{displaymath} (A.147)

The key point to note here is that the Coulomb po­ten­tial has no life of its own. It is rigidly tied to the po­si­tions of the charges. That then pro­vides the most de­tailed an­swer to the ques­tion: “What hap­pened to en­ergy min­i­miza­tion?” Charged par­ti­cles have no op­tion of min­i­miz­ing any en­ergy of in­ter­ac­tion with the field. Maxwell’s first equa­tion, the Pois­son equa­tion above, forces them to cre­ate a Coulomb field that is re­pul­sive to them. Whether they like it or not.

Note fur­ther that all the me­chan­ics as­so­ci­ated with the Coulomb field is quasi-steady. The Pois­son equa­tion does not de­pend on how fast the charged par­ti­cles evolve. The Coulomb elec­tric field is mi­nus the spa­tial gra­di­ent of the po­ten­tial, so that does not de­pend on the speed of evo­lu­tion ei­ther. And the Coulomb force on the charged par­ti­cles is merely the elec­tric field times the charge.

It is still not ob­vi­ous how to quan­tize the Coulomb po­ten­tial, even though there is no longer a lon­gi­tu­di­nal field. But who cares about the Coulomb po­ten­tial in the first place? The im­por­tant thing is how the charged par­ti­cles are af­fected by it. And the forces on the par­ti­cles caused by the Coulomb po­ten­tial can be com­puted us­ing the elec­tro­sta­tic po­ten­tial en­ergy, {D.37.4},

\begin{displaymath}
V_{\rm C} = {\textstyle\frac{1}{2}} \sum_{i=1}^I
\sum_{\textstyle{{\underline i}=1\atop{\underline i}\ne i}}^I
\frac{q_i q_{{\underline i}}}
{4\pi\epsilon_0\vert{\skew0\vec r}_i-{\skew0\vec r}_{{\underline i}}\vert} %
\end{displaymath} (A.148)

For ex­am­ple, this is the Coulomb po­ten­tial en­ergy that was used to find the en­ergy lev­els of the hy­dro­gen atom in chap­ter 4.3. It can still be used in un­steady mo­tion be­cause every­thing as­so­ci­ated with the Coulomb po­ten­tial is quasi-steady. Sure, it is due to the in­ter­ac­tion of the par­ti­cles with the elec­tro­mag­netic field. But where in the above math­e­mat­i­cal ex­pres­sion does it say elec­tro­mag­netic field? All it con­tains are the co­or­di­nates of the charged par­ti­cles. So what dif­fer­ence does it make where the po­ten­tial en­ergy comes from? Just add the en­ergy above to the Hamil­ton­ian and then pre­tend that there are no elec­tro­sta­tic and lon­gi­tu­di­nal fields.

Incidentally, note the required omission of the terms with ${\underline i} = i$ in the potential energy above. Otherwise you would get infinite energy. In fact, a point charge in classical electromagnetics does have infinite Coulomb energy. Just take any of the point charges and mentally chop it up into two equal parts sitting at the same position. The interaction energy between the halves is infinite.
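For a concrete illustration of (A.148), here is a direct evaluation of the half double sum, skipping the ${\underline i}=i$ terms. The charge values are arbitrary; for a single pair the sum reduces to the familiar $q_1q_2/4\pi\epsilon_0 d$:

```python
import numpy as np

def coulomb_energy(q, r, eps0=8.854e-12):
    """Coulomb energy (A.148): half the double sum, skipping i_ == i."""
    I = len(q)
    V = 0.0
    for i in range(I):
        for i_ in range(I):
            if i_ != i:
                V += 0.5 * q[i]*q[i_] / (4*np.pi*eps0*np.linalg.norm(r[i] - r[i_]))
    return V

q = np.array([1.0e-9, -1.0e-9])                   # two opposite point charges
r = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])  # one meter apart

# the factor 1/2 compensates for counting each pair twice
print(np.isclose(coulomb_energy(q, r),
                 q[0]*q[1]/(4*np.pi*8.854e-12*1.0)))  # True
```

Including the ${\underline i}=i$ terms would divide by zero, the discrete analog of the infinite self-energy just described.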

The is­sue does not ex­ist if the charge is smoothly dis­trib­uted. In that case the Coulomb po­ten­tial en­ergy is, {D.37.4},

\begin{displaymath}
V_{\rm C} = {\textstyle\frac{1}{2}} \int_{{\rm all\ }{\skew0\vec r}}
\int_{{\rm all\ }{\underline{\skew0\vec r}}}
\frac{\rho({\skew0\vec r};t)\,\rho({\underline{\skew0\vec r}};t)}
{4\pi\epsilon_0\vert{\skew0\vec r}-{\underline{\skew0\vec r}}\vert}
{\,\rm d}^3{\skew0\vec r}{\rm d}^3{\underline{\skew0\vec r}} %
\end{displaymath} (A.149)

While the integrand is infinite at ${\underline{\skew0\vec r}} = {\skew0\vec r}$, the integral remains finite.

So the big idea is to throw away the elec­tro­sta­tic and lon­gi­tu­di­nal po­ten­tials and re­place them with the Coulomb en­ergy $V_{\rm {C}}$, ori­gin un­known. Now it is mainly a mat­ter of work­ing out the de­tails.

First, con­sider the Fermi La­grangian. It is found by throw­ing out the elec­tro­sta­tic and lon­gi­tu­di­nal po­ten­tials from the ear­lier La­grangian (A.134) and sub­tract­ing $V_{\rm {C}}$. That gives, us­ing the point charge ap­prox­i­ma­tion (A.133) and in vec­tor no­ta­tion,

\begin{displaymath}
\Lag_{\rm F} = \frac{\epsilon_0}{2}
\int \bigg[\left\vert\skew3\vec A_\perp\strut_t\right\vert^2
- c^2 \sum_{{\underline j}=1}^3
\left\vert\skew3\vec A_\perp\strut_{\underline j}\right\vert^2
\bigg] {\,\rm d}^3{\skew0\vec r}
+ \sum_{i=1}^I \Big[ q_i \vec v_i \cdot \skew3\vec A_\perp\strut_i
+ {\textstyle\frac{1}{2}} m_i \vec v_i^{\,2} \Big] - V_{\rm C}
\end{displaymath} (A.150)

Note that it is now as­sumed that there are $I$ par­ti­cles in­stead of just the sin­gle pro­ton in (A.134). Be­cause $i$ is al­ready used to in­dex the par­ti­cles, ${\underline j}$ is used to in­dex the three di­rec­tions of spa­tial dif­fer­en­ti­a­tion. The Coulomb en­ergy $V_{\rm {C}}$ was al­ready given in (A.148). The ve­loc­ity of par­ti­cle $i$ is $\vec{v}_i$, while $q_i$ is its charge and $m_i$ its mass. The sub­script $i$ on the trans­verse po­ten­tial in the in­ter­ac­tion term in­di­cates that it is eval­u­ated at the lo­ca­tion of par­ti­cle $i$.

You may won­der how you can achieve that only the trans­verse po­ten­tial $\skew3\vec A_\perp$ is left. That would in­deed be dif­fi­cult to do if you work in terms of spa­tial co­or­di­nates. The sim­plest way to han­dle it is to work in terms of the trans­verse waves (A.144). They are trans­verse by con­struc­tion.

The un­knowns are now no longer the val­ues of the po­ten­tial at the in­fi­nitely many pos­si­ble po­si­tions. In­stead the un­knowns are now the co­ef­fi­cients $d_{2,{\vec k}}$ and $d_{3,{\vec k}}$ of the trans­verse waves. Do take into ac­count that since the field is real,

\begin{displaymath}
d_{2,-{\vec k}} = d_{2,{\vec k}}^* \qquad d_{3,-{\vec k}} = d_{3,{\vec k}}^*
\end{displaymath}

So the number of independent variables is half of what it seems. The most straightforward way of handling this is to take the unknowns as the real and imaginary parts of the $d_{2,{\vec k}}$ and $d_{3,{\vec k}}$ for half of the ${\vec k}$ values. For example, you could restrict the ${\vec k}$ values to those for which the first nonzero component is positive. The corresponding unknowns must then describe both the ${\vec k}$ and $-{\vec k}$ waves.
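The reality condition is just the conjugate symmetry of the Fourier coefficients of a real field, as a quick numerical check confirms. The field values below are arbitrary:

```python
import numpy as np

n = 8
rng = np.random.default_rng(2)
A = rng.standard_normal((n, n, n))   # a real field on a periodic box
d = np.fft.fftn(A)                   # its Fourier coefficients d_k

# reality of A implies d_{-k} = d_k^*; indices mod n address the -k mode
kx, ky, kz = 3, 5, 2
print(np.isclose(d[-kx % n, -ky % n, -kz % n],
                 np.conj(d[kx, ky, kz])))  # True
```

Knowing the coefficient of the ${\vec k}$ wave thus fixes the coefficient of the $-{\vec k}$ wave.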

(The ${\vec k}= 0$ terms are awkward. One way to deal with them is to take an adjacent periodic box and reverse the sign of all the charges and fields in it. Then take the two boxes together to be a new bigger periodic box. The net effect of this is to shift the mesh of ${\vec k}$-values of figure 6.17 by half an interval. That means that the ${\vec k}= 0$ terms are gone. And other problems that may arise if you sum over all boxes, like finding the total Coulomb potential, are gone too. Since the change in ${\vec k}$ values becomes zero in the limit of infinite box size, all this really amounts to is simply ignoring the ${\vec k}= 0$ terms.)

The Hamil­ton­ian can be ob­tained just like the ear­lier one (A.137), {A.1.5}. (Or make that {A.1.4}, since the un­knowns, $d_{2,{\vec k}}$ and $d_{3,{\vec k}}$, are now in­dexed by the dis­crete val­ues of the wave num­ber ${\vec k}$.) But this time it re­ally needs to be done right, be­cause this Hamil­ton­ian is sup­posed to be ac­tu­ally used. It is best done in terms of the com­po­nents of the po­ten­tial and ve­loc­ity vec­tors. Us­ing $j$ to in­dex the com­po­nents, the La­grangian be­comes

\begin{displaymath}
\Lag_{\rm F} = \frac{\epsilon_0}{2}
\int \sum_{j=1}^3 \bigg[
\big(A_j\strut_t\big)^2
- c^2 \sum_{{\underline j}=1}^3 \big(A_j\strut_{\underline j}\big)^2
\bigg] {\,\rm d}^3{\skew0\vec r}
+ \sum_{i=1}^I \sum_{j=1}^3 \Big[ q_i v_i\strut_j A_j\strut_i
+ {\textstyle\frac{1}{2}} m_i v_i\strut_j^2 \Big] - V_{\rm C}
\end{displaymath}

Now Hamil­to­ni­ans should not be in terms of par­ti­cle ve­loc­i­ties, de­spite what (A.137) said. Hamil­to­ni­ans should be in terms of canon­i­cal mo­menta, {A.1.4}. The canon­i­cal mo­men­tum cor­re­spond­ing to the ve­loc­ity com­po­nent $v_i\strut_j$ of a par­ti­cle $i$ is de­fined as

\begin{displaymath}
p^{\rm {c}}_i\strut_j \equiv \frac{\partial {\cal L}}{\partial v_i\strut_j}
\end{displaymath}

Dif­fer­en­ti­at­ing the La­grangian above gives

\begin{displaymath}
p^{\rm {c}}_i\strut_j = m_i v_i\strut_j + q_i A_j\strut_i
\end{displaymath}

It is this canonical momentum that in quantum mechanics gets replaced by the operator $\hbar\partial/{\rm i}\partial r_j$. That is important since, as the above expression shows, canonical momentum is not just linear momentum in the presence of an electromagnetic field.
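The differentiation is easily repeated symbolically. The sketch below keeps only the velocity-dependent terms of the Lagrangian for a single particle, since $V_{\rm C}$ and the field terms do not depend on the particle velocity:

```python
import sympy as sp

m, q = sp.symbols('m q', positive=True)
v = sp.Matrix(sp.symbols('v1 v2 v3'))        # particle velocity components
Aperp = sp.Matrix(sp.symbols('A1 A2 A3'))    # transverse potential at the particle

# velocity-dependent particle terms of the Fermi Lagrangian
L = q * v.dot(Aperp) + sp.Rational(1, 2) * m * v.dot(v)

# canonical momentum: derivative of the Lagrangian with respect to velocity
p_canonical = sp.Matrix([sp.diff(L, vj) for vj in v])
print(p_canonical - (m*v + q*Aperp))  # zero vector
```

Each component comes out as $m_i v_i\strut_j + q_i A_j\strut_i$, linear momentum plus the charge times the potential.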

The time de­riv­a­tives of the real and imag­i­nary parts of the co­ef­fi­cients $d_{2,{\vec k}}$ and $d_{3,{\vec k}}$ should be re­placed by sim­i­larly de­fined canon­i­cal mo­menta. How­ever, that turns out to be a mere rescal­ing of these time de­riv­a­tives.

The Hamil­ton­ian then be­comes, fol­low­ing {A.1.4} and in vec­tor no­ta­tion,

 $\displaystyle H_{\rm F}$ $\textstyle =$ $\displaystyle \frac{\epsilon_0}{2}
\int \bigg[\left\vert\skew3\vec A_\perp\strut_t\right\vert^2
+ c^2 \sum_{{\underline j}=1}^3
\left\vert\skew3\vec A_\perp\strut_{\underline j}\right\vert^2 \bigg] {\,\rm d}^3{\skew0\vec r}$   
     $\displaystyle \mbox{} + \sum_{i=1}^I
\frac{({\skew0\vec p}^{\,\rm {c}}_i - q_i \skew3\vec A_\perp\strut_i)^2}{2m_i}
+ {\textstyle\frac{1}{2}} \sum_{i=1}^I
\sum_{\textstyle{{\underline i}=1\atop{\underline i}\ne i}}^I
\frac{q_i q_{{\underline i}}}{4\pi\epsilon_0\vert{\skew0\vec r}_i-{\skew0\vec r}_{{\underline i}}\vert}%
$  (A.151)

Note in particular that the middle term is the kinetic energy of the particles, but in terms of their canonical momenta.

In terms of the waves (A.144), the integral falls apart into separate contributions from each $d_{2,{\vec k}}$ and $d_{3,{\vec k}}$ mode. That is a consequence of the orthogonality of the exponentials, compare the Parseval identity in {A.26}. (Since the exponentials are complex, the absolute values in the integral are now required.) As a result, the equations for different coefficients are only indirectly coupled by the interaction with the charged particles. In particular, it turns out that each coefficient satisfies its own harmonic oscillator equation with forcing by the charged particles, {A.1.4},

\begin{displaymath}
\epsilon_0 {\cal V}(\ddot d_{j,{\vec k}} + k^2c^2 d_{j,{\vec k}})
= \sum_{i=1}^I q_i \vec v_i \cdot \vec e_{j,{\vec k}\,}
e^{-{\rm i}{\vec k}\cdot{\skew0\vec r}_i}
\quad\mbox{for $j$\ = 2 and 3}
\end{displaymath}

If the speed of the particle gets comparable to the speed of light, you may want to use the relativistic energy (1.2):

\begin{displaymath}
\frac{({\skew0\vec p}^{\,\rm {c}}_i - q_i \skew3\vec A_\perp\strut_i)^2}{2m_i}
\quad\longrightarrow\quad
\sqrt{(m_ic^2)^2
+ ({\skew0\vec p}^{\,\rm {c}}_i - q_i \skew3\vec A_\perp\strut_i)^2 c^2}
\end{displaymath}
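As a sanity check, expanding the relativistic energy for small momentum recovers the rest mass energy plus the Newtonian kinetic energy, along with the leading relativistic correction $-p^4/8m^3c^2$:

```python
import sympy as sp

m, c, p = sp.symbols('m c p', positive=True)
E_rel = sp.sqrt((m*c**2)**2 + p**2 * c**2)   # relativistic energy (1.2)

# Taylor expansion for small momentum p
series = sp.series(E_rel, p, 0, 5).removeO()
print(sp.simplify(series
                  - (m*c**2 + p**2/(2*m) - p**4/(8*m**3*c**2))))  # 0
```

So at nonrelativistic speeds the square root reduces to the kinetic energy term of (A.151), apart from the constant rest mass energy.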

Some­times, it is con­ve­nient to as­sume that the sys­tem un­der con­sid­er­a­tion also ex­pe­ri­ences an ex­ter­nal elec­tro­mag­netic field. For ex­am­ple, you might con­sider an atom or atomic nu­cleus in the mag­netic field pro­duced by an elec­tro­mag­net. You prob­a­bly do not want to in­clude every elec­tron in the wires of the elec­tro­mag­net in your model. That would be some­thing else. In­stead you can sim­ply add the vec­tor po­ten­tial $\skew3\vec A_{\rm {ext}}\strut_i$ that they pro­duce to $\skew3\vec A_\perp\strut_i$ in the Hamil­ton­ian. If there is also an ex­ter­nal elec­tro­sta­tic po­ten­tial, add a sep­a­rate term $q_i{\varphi_{\rm {ext}}}_i$ to the Hamil­ton­ian for each par­ti­cle $i$. The ex­ter­nal fields will be so­lu­tions of the ho­mo­ge­neous evo­lu­tion equa­tions (A.140) and (A.141), (i.e. the equa­tions with­out charge and cur­rent den­si­ties). How­ever, the ex­ter­nal fields will not van­ish at in­fin­ity; that is why they can be nonzero with­out charge and cur­rent den­si­ties.

Note that the entire external vector potential is needed, not just the transverse part. The longitudinal part is not included in $V_{\rm {C}}$. Bethe [6, p. 266] also notes that the external field should satisfy the Lorenz condition. No further details are given. However, at least in various simple cases, a gage transform that kills off the Lorenz condition may be applied. See for example the gage property for a pure external field {A.19.5}. In the classical case a gage transform of the external fields does not make a difference either, because it does not change either the Lagrangian equations for the transverse field or those for the particles. Using the Lorenz condition cannot hurt, anyway.

Particle spin, if any, is not included in the above Hamiltonian. At nonrelativistic speeds, its energy can be described by a dot product with the local magnetic field, chapter 13.4.

So far all this was clas­si­cal elec­tro­dy­nam­ics. But the in­ter­ac­tion be­tween the charges and the trans­verse waves can read­ily be quan­tized us­ing es­sen­tially the same pro­ce­dure as used for the Koulomb po­ten­tial in sub­sec­tion A.22.3. The de­tails are worked out in ad­den­dum {A.23} for the fields. It al­lows a rel­a­tivis­tic de­scrip­tion of the emis­sion of elec­tro­mag­netic ra­di­a­tion by atoms and nu­clei, {A.24} and {A.25}.

While the trans­verse field must be quan­tized, the Coulomb po­ten­tial can be taken un­changed into quan­tum me­chan­ics. That was done, for ex­am­ple, for the non­rel­a­tivis­tic hy­dro­gen atom in chap­ter 4.3 and for the rel­a­tivis­tic one in ad­den­dum {D.81}.

Fi­nally, any ex­ter­nal fields are as­sumed to be given; they are not quan­tized ei­ther.

Note that the Fermi quantization is not fully relativistic. In a fully relativistic theory, the particles too should be described by quantum fields. The Fermi quantization does not do that. So even the relativistic hydrogen atom is not quite exact, even though it is orders of magnitude more accurate than the already very accurate nonrelativistic one. The energy levels are still wrong by the so-called Lamb shift, {A.39}. But this is an extremely tiny effect. Little in life is perfect, isn't it?


A.22.9 The Coulomb po­ten­tial and the speed of light

The Coulomb po­ten­tial

\begin{displaymath}
\varphi_{\rm {C}}({\skew0\vec r};t) =
\sum_{i=1}^I \frac{q_i}{4\pi\epsilon_0\vert{\skew0\vec r}- {\skew0\vec r}_i\vert}
\end{displaymath}

does not re­spect the speed of light $c$. Move a charge, and the Coulomb po­ten­tial im­me­di­ately changes every­where in space. How­ever, spe­cial rel­a­tiv­ity says that an event may not af­fect events else­where un­less these events are reach­able by the speed of light. Some­thing else must pre­vent the use of the Coulomb po­ten­tial to trans­mit ob­serv­able ef­fects at a speed greater than that of light.

To un­der­stand what is go­ing on, as­sume that at time zero some charges at the ori­gin are given a well-de­served kick. As men­tioned ear­lier, the Klein-Gor­don equa­tions re­spect the speed of light. There­fore the orig­i­nal po­ten­tials $\varphi$ and $\skew3\vec A$, the ones that sat­is­fied the Klein-Gor­don equa­tions and Lorenz con­di­tion, are un­af­fected by the kick be­yond a dis­tance $ct$ from the ori­gin. The orig­i­nal po­ten­tials do re­spect the speed of light.

The Coulomb po­ten­tial above, how­ever, in­cludes the lon­gi­tu­di­nal part $\skew3\vec A_\parallel$ of the vec­tor po­ten­tial $\skew3\vec A$. As the Coulomb po­ten­tial re­flects, $\skew3\vec A_\parallel$ does change im­me­di­ately all the way up to in­fin­ity. But the trans­verse part $\skew3\vec A_\perp$ also changes im­me­di­ately all the way up to in­fin­ity. Be­yond the limit dic­tated by the speed of light, the two parts of the po­ten­tial ex­actly can­cel each other. As a re­sult, be­yond the speed of light limit, the net vec­tor po­ten­tial $\skew3\vec A$ does not change.

The bot­tom line is

The math­e­mat­ics of the Helmholtz de­com­po­si­tion of $\skew3\vec A$ into $\skew3\vec A_\parallel$ and $\skew3\vec A_\perp$ hides, but of course does not change, the lim­i­ta­tion im­posed by the speed of light.
The lim­i­ta­tion is still there, it is just much more dif­fi­cult to see. The change in cur­rent den­sity $\vec\jmath$ caused by kick­ing the charges near the ori­gin is re­stricted to the im­me­di­ate vicin­ity of the ori­gin. But both the lon­gi­tu­di­nal part $\vec\jmath_\parallel$ and the trans­verse part $\vec\jmath_\perp$ ex­tend all the way to in­fin­ity. And then so do the lon­gi­tu­di­nal and trans­verse po­ten­tials. It is only when you add the two that you see that the sum is zero be­yond the speed of light limit.
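The cancellation can be seen concretely by decomposing a localized current density with a fast Fourier transform: far from the origin, the longitudinal and transverse parts are separately nonzero, but they add up to essentially nothing. In the sketch below a Gaussian current stands in for the kicked charges:

```python
import numpy as np

n = 64
x = np.arange(n) - n//2
X, Y, Z = np.meshgrid(x, x, x, indexing='ij')

# a current density localized near the center of the box, pointing in x
j = np.zeros((3, n, n, n))
j[0] = np.exp(-(X**2 + Y**2 + Z**2) / 2.0)

# Helmholtz decomposition via FFT: project Fourier coefficients onto k
k1 = np.fft.fftfreq(n) * 2*np.pi
KX, KY, KZ = np.meshgrid(k1, k1, k1, indexing='ij')
K = np.stack([KX, KY, KZ])
k2 = np.sum(K**2, axis=0)
k2[0, 0, 0] = 1.0
jk = np.fft.fftn(j, axes=(1, 2, 3))
j_par = np.real(np.fft.ifftn(K * (np.sum(K * jk, axis=0) / k2),
                             axes=(1, 2, 3)))
j_perp = j - j_par

far = X**2 + Y**2 + Z**2 > 25**2   # points far away from the current
print(np.abs(j[0][far]).max() < 1e-12)       # the current itself is zero there
print(np.abs(j_par[0][far]).max() > 1e-9)    # but its longitudinal part is not
print(np.abs(j_perp[0][far]).max() > 1e-9)   # and neither is its transverse part
print(np.allclose((j_par + j_perp)[:, far], 0))  # yet the sum vanishes
```

Each part by itself extends over the whole box, but wherever the actual current is zero, the two parts are exact opposites.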