D.40 Quantization of radiation derivations

This gives various derivations for the addendum of the same name.

It is to be shown first that

\begin{displaymath}
\int_{\rm all} c^2(\skew2\vec{\cal B}_\gamma^{\rm {n}})^2 {\,\rm d}^3{\skew0\vec r}
= - \int_{\rm all} (\skew3\vec{\cal E}_\gamma^{\rm {n}})^2 {\,\rm d}^3{\skew0\vec r}
\qquad (1)
\end{displaymath}
To see that, note from (A.157) that

\begin{displaymath}
c \skew2\vec{\cal B}_\gamma^{\rm {n}} = \frac{1}{{\rm i}k} \nabla \times \skew3\vec{\cal E}_\gamma^{\rm {n}}
\end{displaymath}

so the left-hand integral becomes

\begin{displaymath}
\int_{\rm all} c^2(\skew2\vec{\cal B}_\gamma^{\rm {n}})^2 {\,\rm d}^3{\skew0\vec r}
= - \frac{1}{k^2} \int_{\rm all}
(\nabla\times\skew3\vec{\cal E}_\gamma^{\rm {n}})\cdot
(\nabla\times\skew3\vec{\cal E}_\gamma^{\rm {n}}) {\,\rm d}^3{\skew0\vec r}
\end{displaymath}

Now the curl, $\nabla\times$, is Hermitian, {D.10}, so the second curl can be pushed in front of the first curl. Then curl curl acts as $-\nabla^2$, because $\skew3\vec{\cal E}_\gamma^{\rm {n}}$ is solenoidal and the standard vector identity (D.1) applies. And the eigenvalue problem turns $-\nabla^2$ into $k^2$, which cancels the $1/k^2$ and leaves the right-hand side of (1).
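Spelled out, the chain of equalities is

\begin{displaymath}
- \frac{1}{k^2} \int_{\rm all}
(\nabla\times\skew3\vec{\cal E}_\gamma^{\rm {n}})\cdot
(\nabla\times\skew3\vec{\cal E}_\gamma^{\rm {n}}) {\,\rm d}^3{\skew0\vec r}
= - \frac{1}{k^2} \int_{\rm all}
\skew3\vec{\cal E}_\gamma^{\rm {n}}\cdot
(\nabla\times\nabla\times\skew3\vec{\cal E}_\gamma^{\rm {n}}) {\,\rm d}^3{\skew0\vec r}
= - \frac{1}{k^2} \int_{\rm all}
\skew3\vec{\cal E}_\gamma^{\rm {n}}\cdot
(k^2 \skew3\vec{\cal E}_\gamma^{\rm {n}}) {\,\rm d}^3{\skew0\vec r}
= - \int_{\rm all} (\skew3\vec{\cal E}_\gamma^{\rm {n}})^2 {\,\rm d}^3{\skew0\vec r}
\end{displaymath}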

Note incidentally that the additional surface integral in {D.10} is zero even for the photon modes of definite angular momentum, {A.21.7}, because for them either $\skew3\vec{\cal E}_\gamma^{\rm {n}}$ is zero on the surface or $\nabla\times\skew3\vec{\cal E}_\gamma^{\rm {n}}$ is. Also note that the integrals become equal instead of opposite if you push complex conjugates on the first factors in the integrands.

Now the Hamiltonian can be worked out. Using (A.152) and (A.162), it is

\begin{displaymath}
H = {\textstyle\frac{1}{4}} \epsilon_0 \int_{\rm all} \left[
\left(\widehat a \skew3\vec{\cal E}_\gamma^{\rm {n}}
+ \widehat a^\dagger \skew3\vec{\cal E}_\gamma^{\rm {n}*}\right)^2
+ c^2 \left(\widehat a \skew2\vec{\cal B}_\gamma^{\rm {n}}
+ \widehat a^\dagger \skew2\vec{\cal B}_\gamma^{\rm {n}*}\right)^2
\right] {\,\rm d}^3{\skew0\vec r}
\end{displaymath}

When that is multiplied out and integrated, the $(\widehat a)^2$ and $(\widehat a^\dagger )^2$ terms drop out because of (1). The remaining cross terms produce the Hamiltonian stated in the addendum after noting the wave function normalization (A.158).
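In more detail, multiplying out while keeping the order of the operators gives

\begin{displaymath}
H = {\textstyle\frac{1}{4}} \epsilon_0 \left[
\widehat a^2 \int_{\rm all} \left[ (\skew3\vec{\cal E}_\gamma^{\rm {n}})^2
+ c^2 (\skew2\vec{\cal B}_\gamma^{\rm {n}})^2 \right] {\,\rm d}^3{\skew0\vec r}
+ (\widehat a^\dagger)^2 \int_{\rm all} \left[ (\skew3\vec{\cal E}_\gamma^{\rm {n}*})^2
+ c^2 (\skew2\vec{\cal B}_\gamma^{\rm {n}*})^2 \right] {\,\rm d}^3{\skew0\vec r}
+ \left(\widehat a\widehat a^\dagger + \widehat a^\dagger\widehat a\right)
\int_{\rm all} \left[ \skew3\vec{\cal E}_\gamma^{\rm {n}}\cdot\skew3\vec{\cal E}_\gamma^{\rm {n}*}
+ c^2 \skew2\vec{\cal B}_\gamma^{\rm {n}}\cdot\skew2\vec{\cal B}_\gamma^{\rm {n}*} \right] {\,\rm d}^3{\skew0\vec r}
\right]
\end{displaymath}

The first two integrals are zero by (1) and its complex conjugate; only the $\widehat a\widehat a^\dagger + \widehat a^\dagger\widehat a$ term survives, to be evaluated using (A.158).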

The final issue is to identify the relationships between the coefficients $D_0$, $D_1$ and $C$ as given in the text. The most important question here is under what circumstances $2\vert D_1\vert$ and $4\vert C\vert^2$ can get very close to the larger value $2D_0$.

The coefficient $D_1$ was defined as

\begin{displaymath}
2D_1 = \sum_i c_{i-1}^* c_{i+1} \sqrt{i} \sqrt{i+1}
\end{displaymath}

To estimate this, consider the infinite-dimensional vectors $\vec{a}$ and $\vec{b}$ with coefficients

\begin{displaymath}
a_i \equiv c_{i-1}\sqrt{i} \qquad b_i \equiv c_{i+1} \sqrt{i+1}
\end{displaymath}

Note that $2D_1$ above is the inner product of these two vectors. And an inner product is no larger in magnitude than the product of the lengths of the vectors involved:

\begin{displaymath}
\vert 2D_1\vert = \vert\big\langle\vec a\big\vert\vec b\big\rangle\vert
\mathrel{\raisebox{-.7pt}{$\leqslant$}}
\vert\vec a\vert\,\vert\vec b\vert
= \sqrt{\left[\sum_i \vert c_{i-1}\vert^2 i\right]
\left[\sum_i \vert c_{i+1}\vert^2(i+1)\right]}
\end{displaymath}

By renaming the summation indices (letting $i-1\to{i}$ in the first sum and $i+1\to{i}$ in the second), the sums become the expectation values of $i+1$, respectively $i$.
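In detail,

\begin{displaymath}
\sum_i \vert c_{i-1}\vert^2 i = \sum_i \vert c_{i}\vert^2 (i+1)
= \big\langle i+1\big\rangle
\qquad
\sum_i \vert c_{i+1}\vert^2 (i+1) = \sum_i \vert c_{i}\vert^2 i
= \big\langle i\big\rangle
\end{displaymath}

So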

\begin{displaymath}
\vert 2D_1\vert
\mathrel{\raisebox{-.7pt}{$\leqslant$}}
\sqrt{\big\langle i+1\big\rangle \big\langle i\big\rangle}
\mathrel{\raisebox{-.7pt}{$\leqslant$}}
\sqrt{(\big\langle i\big\rangle +{\textstyle\frac{1}{2}})^2}
= 2 D_0
\end{displaymath}

The final equality is by the definition of $D_0$. The second inequality already implies that $\vert D_1\vert$ is always smaller than $D_0$. However, if the expectation value of $i$ is large, the difference is relatively small.
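Indeed, writing out the second inequality,

\begin{displaymath}
\big\langle i+1\big\rangle \big\langle i\big\rangle
= \big\langle i\big\rangle^2 + \big\langle i\big\rangle
< \big\langle i\big\rangle^2 + \big\langle i\big\rangle + {\textstyle\frac{1}{4}}
= \left(\big\langle i\big\rangle +{\textstyle\frac{1}{2}}\right)^2
\end{displaymath}

and the deficit ${\textstyle\frac{1}{4}}$ is negligible compared to the right-hand side when $\big\langle i\big\rangle$ is large.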

In that case, the bigger problem is the inner product between the vectors $\vec{a}$ and $\vec{b}$. Normally it is smaller than the product of the lengths of the vectors. For it to become equal, the two vectors have to be proportional. The coefficients of $\vec{b}$ must be some multiple, call it $B^2e^{2{\rm i}\beta}$, of those of $\vec{a}$:

\begin{displaymath}
c_{i+1} \sqrt{i+1} \approx B^2 e^{2{\rm i}\beta} c_{i-1}\sqrt{i}
\end{displaymath}

For larger values of $i$ the square roots are about the same, so the above relationship requires an essentially exponential dependence of the coefficients on $i$. For small values of $i$, the relation obviously cannot be satisfied: the needed values of $c_i$ for negative $i$ do not exist. To reduce the effect of this start-up problem, significant coefficients must exist over a considerable range of $i$ values.
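To see the exponential behavior, ignore the difference between the square roots and iterate the relation $m$ times:

\begin{displaymath}
c_{i+2m} \approx \left(B^2 e^{2{\rm i}\beta}\right)^m c_i
\end{displaymath}

so the magnitudes of the coefficients vary like $B^{2m}$, exponentially in the index.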

In addition to the above conditions, the coefficient $4\vert C\vert^2$ has to be close to $2D_0$. Here the coefficient $C$ was defined as

\begin{displaymath}
\sqrt{2} C = \sum_i c_{i-1}^* c_{i} \sqrt{i}
\end{displaymath}

Using the same manipulations as for $D_1$, but with

\begin{displaymath}
a_i \equiv c_{i-1}\sqrt{\sqrt{i}} \qquad b_i \equiv c_{i} \sqrt{\sqrt{i}}
\end{displaymath}

gives

\begin{displaymath}
2 \vert C\vert^2 = \vert\big\langle\vec a\big\vert\vec b\big\rangle\vert^2
\mathrel{\raisebox{-.7pt}{$\leqslant$}}
\left[\sum_i \vert c_{i-1}\vert^2\sqrt{i}\right]
\left[\sum_i \vert c_{i}\vert^2\sqrt{i}\right]
= \big\langle\textstyle\sqrt{i+1}\big\rangle \big\langle\textstyle\sqrt{i}\big\rangle
\end{displaymath}

To bound this further, define

\begin{displaymath}
f(x)=\big\langle\textstyle\sqrt{i+{\textstyle\frac{1}{2}}+x}\big\rangle
\end{displaymath}
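Note that the two expectation values in the bound for $2\vert C\vert^2$ are values of this function:

\begin{displaymath}
f(-{\textstyle\frac{1}{2}}) = \big\langle\textstyle\sqrt{i}\big\rangle
\qquad
f({\textstyle\frac{1}{2}}) = \big\langle\textstyle\sqrt{i+1}\big\rangle
\end{displaymath}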

By expanding the square root in a Taylor series,

\begin{displaymath}
f(-{\textstyle\frac{1}{2}}) < f(0) - \Delta f \qquad f({\textstyle\frac{1}{2}}) < f(0) + \Delta f
\end{displaymath}

where $\Delta{f}$ is the expectation value of the linear term in the Taylor series; the inequalities express that a square root function has a negative second-order derivative.
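Explicitly, the Taylor series is

\begin{displaymath}
\sqrt{i+{\textstyle\frac{1}{2}}+x} = \sqrt{i+{\textstyle\frac{1}{2}}}
+ \frac{x}{2\sqrt{i+{\textstyle\frac{1}{2}}}} - \ldots
\end{displaymath}

so for $x=\pm{\textstyle\frac{1}{2}}$ the linear term gives
$\Delta f = \big\langle\, 1\big/\big(4\textstyle\sqrt{i+{\textstyle\frac{1}{2}}}\,\big)\big\rangle$.
Multiplying the two expressions above shows that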

\begin{displaymath}
f(-{\textstyle\frac{1}{2}}) f({\textstyle\frac{1}{2}})
< f(0)^2 - (\Delta f)^2 < f(0)^2
= \big\langle\textstyle\sqrt{i+{\textstyle\frac{1}{2}}}\big\rangle ^2
\end{displaymath}

Since it has already been shown that the expectation value of $i$ must be large, $\Delta f$ is relatively small and this inequality will be almost an equality anyway.

In any case,

\begin{displaymath}
2\vert C\vert^2 < \big\langle\textstyle\sqrt{i+{\textstyle\frac{1}{2}}}\big\rangle ^2
\end{displaymath}

This is less than

\begin{displaymath}
\big\langle\big(\textstyle\sqrt{i+{\textstyle\frac{1}{2}}}\,\big)^2\big\rangle = 2D_0
\end{displaymath}

The big question now is how much smaller it is. To answer that, use the shorthand

\begin{displaymath}
\sqrt{i+{\textstyle\frac{1}{2}}} \equiv x_i = x + x'_i
\end{displaymath}

where $x$ is the expectation value of the square root and $x'_i$ is the deviation from the average. Then, noting that the expectation value of $x'_i$ is zero,

\begin{displaymath}
2 D_0 = \big\langle(x+x'_i)^2\big\rangle = \big\langle x\big\rangle ^2 + \big\langle(x'_i)^2\big\rangle
\end{displaymath}

The second-last term is the bound for $2\vert C\vert^2$ as obtained above. So the only way that $2\vert C\vert^2$ can be close to $2D_0$ is if the final term is relatively small. That means that the deviation of the square root from its expectation value must be relatively small, so the coefficients $c_i$ can only be significant in some limited range around an average value of $i$. In addition, for the vectors $\vec{a}$ and $\vec{b}$ in the earlier estimate for $C$ to be almost proportional,

\begin{displaymath}
c_{i} \sqrt{\sqrt{i}} \approx A e^{{\rm i}\alpha} c_{i-1}\sqrt{\sqrt{i}}
\end{displaymath}

where $Ae^{{\rm i}\alpha}$ is some constant. That again means an exponential dependence, like for the condition on $D_1$. And $Ae^{{\rm i}\alpha}$ will have to be approximately $Be^{{\rm i}\beta}$. And $A$ will have to be about 1, because otherwise start and end effects will dominate the exponential part. That gives the situation as described in the text.