D.40 Quantization of radiation derivations

This gives various derivations for the addendum of the same name.

It is to be shown first that

\begin{displaymath}
\int_{\rm all} c^2(\skew2\vec{\cal B}_\gamma^{\rm {n}})^2 {\,\rm d}^3{\skew0\vec r}
= - \int_{\rm all} (\skew3\vec{\cal E}_\gamma^{\rm {n}})^2 {\,\rm d}^3{\skew0\vec r}
\qquad (1)
\end{displaymath}
To see that, note from (A.157) that

\begin{displaymath}
c \skew2\vec{\cal B}_\gamma^{\rm {n}} = \frac{1}{{\rm i}k} \nabla \times \skew3\vec{\cal E}_\gamma^{\rm {n}}
\end{displaymath}

so the left-hand integral becomes

\begin{displaymath}
\int_{\rm all} c^2(\skew2\vec{\cal B}_\gamma^{\rm {n}})^2 {\,\rm d}^3{\skew0\vec r}
= - \frac{1}{k^2} \int_{\rm all}
(\nabla \times \skew3\vec{\cal E}_\gamma^{\rm {n}}) \cdot
(\nabla \times \skew3\vec{\cal E}_\gamma^{\rm {n}}) {\,\rm d}^3{\skew0\vec r}
\end{displaymath}

Now the curl, $\nabla\times$, is Hermitian, {D.10}, so the second curl can be pushed in front of the first one. Then curl curl acts as $\vphantom0\raisebox{1.5pt}{$-$}$$\nabla^2$, because $\skew3\vec{\cal E}_\gamma^{\rm {n}}$ is solenoidal and the standard vector identity (D.1) applies. And the eigenvalue problem turns $\vphantom0\raisebox{1.5pt}{$-$}$$\nabla^2$ into $k^2$, which cancels the $1/k^2$ in front of the integral.
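Spelled out, these manipulations give

\begin{displaymath}
- \frac{1}{k^2} \int_{\rm all}
(\nabla \times \skew3\vec{\cal E}_\gamma^{\rm {n}}) \cdot
(\nabla \times \skew3\vec{\cal E}_\gamma^{\rm {n}}) {\,\rm d}^3{\skew0\vec r}
= - \frac{1}{k^2} \int_{\rm all}
\skew3\vec{\cal E}_\gamma^{\rm {n}} \cdot
(\nabla \times \nabla \times \skew3\vec{\cal E}_\gamma^{\rm {n}}) {\,\rm d}^3{\skew0\vec r}
= - \frac{1}{k^2} \int_{\rm all}
\skew3\vec{\cal E}_\gamma^{\rm {n}} \cdot
(k^2 \skew3\vec{\cal E}_\gamma^{\rm {n}}) {\,\rm d}^3{\skew0\vec r}
= - \int_{\rm all} (\skew3\vec{\cal E}_\gamma^{\rm {n}})^2 {\,\rm d}^3{\skew0\vec r}
\end{displaymath}

which is the right-hand side of (1).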

Note incidentally that the additional surface integral in {D.10} is zero even for the photon modes of definite angular momentum, {A.21.7}, because for them either $\skew3\vec{\cal E}_\gamma^{\rm {n}}$ is zero on the surface or $\nabla$ $\times$ $\skew3\vec{\cal E}_\gamma^{\rm {n}}$ is. Also note that the integrals become equal instead of opposite if you push complex conjugates on the first factors in the integrands.

Now the Hamiltonian can be worked out. Using (A.152) and (A.162), it is

\begin{displaymath}
H = {\textstyle\frac{1}{4}} \epsilon_0 \int_{\rm all} \left[
\left(\widehat a \skew3\vec{\cal E}_\gamma^{\rm {n}}
+ \widehat a^\dagger \skew3\vec{\cal E}_\gamma^{\rm {n}*}\right)^2
+ c^2 \left(\widehat a \skew2\vec{\cal B}_\gamma^{\rm {n}}
+ \widehat a^\dagger \skew2\vec{\cal B}_\gamma^{\rm {n}*}\right)^2
\right] {\,\rm d}^3{\skew0\vec r}
\end{displaymath}

When that is multiplied out and integrated, the $(\widehat a)^2$ and $(\widehat a^\dagger )^2$ terms drop out because of (1). The remaining multiplied-out terms in the Hamiltonian produce the stated Hamiltonian after noting the wave function normalization (A.158).
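In particular, multiplying out produces

\begin{displaymath}
H = {\textstyle\frac{1}{4}} \epsilon_0 \left[
\widehat a^2 \int_{\rm all}
\left((\skew3\vec{\cal E}_\gamma^{\rm {n}})^2
+ c^2(\skew2\vec{\cal B}_\gamma^{\rm {n}})^2\right) {\,\rm d}^3{\skew0\vec r}
+ (\widehat a^\dagger)^2 \int_{\rm all}
\left((\skew3\vec{\cal E}_\gamma^{\rm {n}*})^2
+ c^2(\skew2\vec{\cal B}_\gamma^{\rm {n}*})^2\right) {\,\rm d}^3{\skew0\vec r}
+ \left(\widehat a \widehat a^\dagger + \widehat a^\dagger \widehat a\right) \int_{\rm all}
\left(\skew3\vec{\cal E}_\gamma^{\rm {n}} \cdot \skew3\vec{\cal E}_\gamma^{\rm {n}*}
+ c^2 \skew2\vec{\cal B}_\gamma^{\rm {n}} \cdot \skew2\vec{\cal B}_\gamma^{\rm {n}*}\right)
{\,\rm d}^3{\skew0\vec r}
\right]
\end{displaymath}

The first two integrals are zero by (1) and its complex conjugate; the cross-term integral is not, since the integrals become equal instead of opposite when complex conjugates appear on the first factors.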

The final issue is to identify the relationships between the coefficients $D_0$, $D_1$ and $C$ as given in the text. The most important question here is under what circumstances $2\vert D_1\vert$ and $4\vert C\vert^2$ can get very close to the larger value $2D_0$.

The coefficient $D_1$ was defined as

\begin{displaymath}
2D_1 = \sum_i c_{i-1}^* c_{i+1} \sqrt{i} \sqrt{i+1}
\end{displaymath}

To estimate this, consider the infinite-dimensional vectors $\vec{a}$ and $\vec{b}$ with coefficients

\begin{displaymath}
a_i \equiv c_{i-1}\sqrt{i} \qquad b_i \equiv c_{i+1} \sqrt{i+1}
\end{displaymath}

Note that $2D_1$ above is the inner product of these two vectors. And by the Cauchy-Schwarz inequality, an inner product is no larger in magnitude than the product of the lengths of the vectors involved.

\begin{displaymath}
\vert 2D_1\vert = \vert\langle\vec a\vert\vec b\rangle\vert
\mathrel{\raisebox{-.7pt}{$\leqslant$}}
\vert\vec a\vert \, \vert\vec b\vert
= \sqrt{\left[\sum_i \vert c_{i-1}\vert^2 i\right]\left[\sum_i \vert c_{i+1}\vert^2(i+1)\right]}
\end{displaymath}

By changing the notations for the summation indices (letting $i-1\to{i}$ and $i+1\to{i}$), the sums become the expectation values of $i+1$, respectively $i$. So
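Explicitly, calling the shifted summation index $j$ in each case,

\begin{displaymath}
\sum_i \vert c_{i-1}\vert^2 i = \sum_j \vert c_j\vert^2 (j+1)
= \left\langle{i+1}\right\rangle
\qquad
\sum_i \vert c_{i+1}\vert^2 (i+1) = \sum_j \vert c_j\vert^2 j
= \left\langle{i}\right\rangle
\end{displaymath}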

\begin{displaymath}
\vert 2D_1\vert
\mathrel{\raisebox{-.7pt}{$\leqslant$}}
\sqrt{\left\langle{i+1}\right\rangle \left\langle{i}\right\rangle}
\mathrel{\raisebox{-.7pt}{$\leqslant$}}
\sqrt{(\left\langle{i}\right\rangle +{\textstyle\frac{1}{2}})^2}
= 2 D_0
\end{displaymath}

The final equality is by the definition of $D_0$. The second inequality already implies that $\vert D_1\vert$ is always smaller than $D_0$. However, if the expectation value of $i$ is large, that inequality does not give up much.
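To check the second inequality, note that

\begin{displaymath}
\left\langle{i+1}\right\rangle \left\langle{i}\right\rangle
= \left(\left\langle{i}\right\rangle + 1\right)\left\langle{i}\right\rangle
= \left(\left\langle{i}\right\rangle + {\textstyle\frac{1}{2}}\right)^2
- {\textstyle\frac{1}{4}}
< \left(\left\langle{i}\right\rangle + {\textstyle\frac{1}{2}}\right)^2
\end{displaymath}

and the relative size of the $\frac14$ correction shrinks as $\left\langle{i}\right\rangle$ grows.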

In that case, the bigger problem is the inner product between the vectors $\vec{a}$ and $\vec{b}$. Normally it is smaller than the product of the lengths of the vectors. For it to become equal, the two vectors have to be proportional. The coefficients of $\vec{b}$ must be some multiple, call it $B^2e^{2{\rm i}\beta}$, of those of $\vec{a}$:

\begin{displaymath}
c_{i+1} \sqrt{i+1} \approx B^2 e^{2{\rm i}\beta} c_{i-1}\sqrt{i}
\end{displaymath}

For larger values of $i$ the two square roots are about the same. Then the above relationship requires an exponential decay of the coefficients. For small values of $i$, the above relation obviously cannot be satisfied: the needed values of $c_i$ for negative $i$ do not exist. To reduce the effect of this start-up problem, significant coefficients will have to exist for a considerable range of $i$ values.
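Indeed, for large $i$, where the square roots are about equal, repeated application of the relationship gives roughly

\begin{displaymath}
c_{i+1} \approx B^2 e^{2{\rm i}\beta} c_{i-1}
\qquad\Longrightarrow\qquad
\vert c_i\vert \sim B^{i} \times \mbox{constant}
\end{displaymath}

for the even-numbered and the odd-numbered coefficients separately, each chain picking up a factor $B^2$ per two index steps.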

In addition to the above conditions, the coefficient $4\vert C\vert^2$ has to be close to $2D_0$. Here the coefficient $C$ was defined as

\begin{displaymath}
\sqrt{2} C = \sum_i c_{i-1}^* c_{i} \sqrt{i}
\end{displaymath}

Using the same manipulations as for $D_1$, but with

\begin{displaymath}
a_i \equiv c_{i-1}\sqrt{\sqrt{i}} \qquad b_i \equiv c_{i} \sqrt{\sqrt{i}}
\end{displaymath}

gives

\begin{displaymath}
2 \vert C\vert^2
\mathrel{\raisebox{-.7pt}{$\leqslant$}}
\left\langle{\textstyle\sqrt{i+1}}\right\rangle \left\langle{\textstyle\sqrt{i}}\right\rangle
\end{displaymath}

To bound this further, define

\begin{displaymath}
f(x)=\left\langle{\textstyle\sqrt{i+{\textstyle\frac{1}{2}}+x}}\right\rangle
\end{displaymath}
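The point of this definition is that the two factors in the bound on $2\vert C\vert^2$ above are exactly the values of $f$ at $x=\pm\frac{1}{2}$:

\begin{displaymath}
f({\textstyle\frac{1}{2}}) = \left\langle{\textstyle\sqrt{i+1}}\right\rangle
\qquad
f(-{\textstyle\frac{1}{2}}) = \left\langle{\textstyle\sqrt{i}}\right\rangle
\end{displaymath}

so that $2\vert C\vert^2 \mathrel{\raisebox{-.7pt}{$\leqslant$}} f(-\frac{1}{2})f(\frac{1}{2})$.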

By expanding the square root in a Taylor series,

\begin{displaymath}
f(-{\textstyle\frac{1}{2}}) < f(0) - \Delta f \qquad f({\textstyle\frac{1}{2}}) < f(0) + \Delta f
\end{displaymath}

where $\Delta{f}$ is the expectation value of the linear term in the Taylor series; the inequalities express that the square root function has a negative second-order derivative, i.e. is concave.
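In particular, the Taylor series in question is

\begin{displaymath}
\sqrt{i+{\textstyle\frac{1}{2}}+x}
= \sqrt{i+{\textstyle\frac{1}{2}}}
+ \frac{x}{2\sqrt{i+{\textstyle\frac{1}{2}}}}
- \frac{x^2}{8\left(i+{\textstyle\frac{1}{2}}\right)^{3/2}}
+ \ldots
\end{displaymath}

For $x=\pm\frac{1}{2}$ the linear term has expectation value $\pm\Delta f$, and concavity ensures that the full square root stays below this linear approximation.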

\begin{displaymath}
f(-{\textstyle\frac{1}{2}}) f({\textstyle\frac{1}{2}})
< f^2(0) - (\Delta f)^2
< f^2(0)
= \left\langle{\textstyle\sqrt{i+{\textstyle\frac{1}{2}}}}\right\rangle ^2
\end{displaymath}

Since it has already been shown that the expectation value of $i$ must be large, this inequality will be almost an equality anyway.

In any case,

\begin{displaymath}
2\vert C\vert^2 < \left\langle{\textstyle\sqrt{i+{\textstyle\frac{1}{2}}}}\right\rangle ^2
\end{displaymath}

This is less than

\begin{displaymath}
\left\langle{\textstyle\sqrt{i+{\textstyle\frac{1}{2}}}^2}\right\rangle = 2D_0
\end{displaymath}

The big question is now how much smaller it is. To answer that, use the shorthand

\begin{displaymath}
\sqrt{i+{\textstyle\frac{1}{2}}} \equiv x_i = x + x'_i
\end{displaymath}

where $x$ is the expectation value of the square root and $x'_i$ is the deviation from the average. Then, noting that the expectation value of $x'_i$ is zero,

\begin{displaymath}
2 D_0 = \left\langle{(x+x'_i)^2}\right\rangle = \left\langle{x}\right\rangle ^2 + \left\langle{(x'_i)^2}\right\rangle
\end{displaymath}

The second-last term is the bound for $2\vert C\vert^2$ as obtained above. So the only way that $2\vert C\vert^2$ can be close to $2D_0$ is if the final term is relatively small. That means that the deviation of $\sqrt{i+\frac{1}{2}}$ from its expectation value must be relatively small. So the coefficients $c_i$ can only be significant in some limited range around an average value of $i$. In addition, for the vectors $\vec{a}$ and $\vec{b}$ in the earlier estimate for $C$ to be almost proportional,

\begin{displaymath}
c_{i-1} \sqrt{\sqrt{i}} \approx A e^{{\rm i}\alpha} c_i \sqrt{\sqrt{i}}
\end{displaymath}

where $Ae^{{\rm i}\alpha}$ is some constant. That again means an exponential dependence, like for the condition on $D_1$. And $Ae^{{\rm i}\alpha}$ will have to be approximately $Be^{{\rm i}\beta}$. And $A$ will have to be about 1, because otherwise start and end effects will dominate the exponential part. That gives the situation as described in the text.
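Combining this proportionality with $A\approx1$, the significant coefficients behave roughly like

\begin{displaymath}
c_i \approx c_0 \left(A e^{{\rm i}\alpha}\right)^{-i} \qquad A \approx 1
\end{displaymath}

so their magnitudes vary slowly over the significant range of $i$ while their phases vary linearly with $i$.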