13.2 Maxwell’s Equations

Maxwell’s equations are usually not covered in a typical engineering program. While these laws are not directly related to quantum mechanics, they do tend to pop up in nanotechnology. This section intends to give you some of the basic ideas. The description is based on the divergence and curl spatial derivative operators, and on the related Gauss and Stokes theorems commonly found in calculus courses (Calculus III in the US system).

Skipping the first equation for now, the second of Maxwell’s equations comes directly out of the quantum mechanical description of the previous section. Consider the expression for the magnetic field $\skew2\vec{\cal B}$ derived (guessed) there, (13.3). If you take its divergence (premultiply by $\nabla\cdot$), the vector potential $\skew3\vec A$ drops out, since the divergence of any curl is always zero, and you get

\begin{displaymath}
\mbox{Maxwell\rq{}s second equation: }\quad
\nabla\cdot\skew2\vec{\cal B}= 0 %
\end{displaymath} (13.4)

and that is the second of Maxwell’s four beautifully concise equations. (The compact modern notation using divergence and curl is really due to Heaviside and Gibbs, though.)

The first of Maxwell’s equations is a similar expression for the electric field $\skew3\vec{\cal E}$, but its divergence is not zero:

\begin{displaymath}
\mbox{Maxwell\rq{}s first equation: }\quad
\nabla\cdot\skew3\vec{\cal E}= \frac{\rho}{\epsilon_0} %
\end{displaymath} (13.5)

where $\rho$ is the electric charge per unit volume that is present and the constant $\epsilon_0 = 8.85\;10^{-12}$ C$^2$/J m is called the permittivity of space.

Figure 13.1: Relationship of Maxwell’s first equation to Coulomb’s law.

What does it all mean? Well, the first thing to verify is that Maxwell’s first equation is just a very clever way to write Coulomb’s law for the electric field of a point charge. Consider therefore an electric point charge of strength $q$, and imagine this charge surrounded by a translucent sphere of radius $r$, as shown in figure 13.1. By symmetry, the electric field at all points on the spherical surface is radial, and everywhere has the same magnitude ${\cal E} = \vert\skew3\vec{\cal E}\vert$; figure 13.1 shows it for eight selected points.

Now watch what happens if you integrate both sides of Maxwell’s first equation (13.5) over the interior of this sphere. Starting with the right hand side, since the charge density is the charge per unit volume, by definition its integral over the volume is the charge $q$. So the right hand side integrates simply to $q/\epsilon_0$. How about the left hand side? Well, the Gauss, or divergence, theorem of calculus says that the divergence of any vector, $\skew3\vec{\cal E}$ in this case, integrated over the volume of the sphere equals the component of that vector normal to the surface, here the radial field ${\cal E}$, integrated over the surface of the sphere. Since ${\cal E}$ is constant on the surface, and the surface area of a sphere is just $4\pi{r}^2$, the left hand side integrates to $4\pi{r}^2{\cal E}$. So in total, the integrated first Maxwell equation says that $4\pi{r}^2{\cal E} = q/\epsilon_0$. Take the $4\pi{r}^2$ to the other side and there you have the Coulomb electric field of a point charge:

\begin{displaymath}
\mbox{Coulomb\rq{}s law: }\quad
{\cal E}=\frac{q}{4\pi r^2\epsilon_0} %
\end{displaymath} (13.6)

Multiply by $-e$ and you have the electrostatic force on an electron in that field according to the Lorentz equation (13.1). Integrate with respect to $r$ and you have the potential energy $V = -qe/4\pi\epsilon_0{r}$ that has been used earlier to analyze atoms and molecules.
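To put in some numbers, here is a minimal Python sketch that evaluates the Coulomb field (13.6), the Lorentz force on an electron, and the potential energy $V$ at a hypothetical proton-electron distance, taken for illustration to be the Bohr radius; the constants are the usual rounded values:

```python
import math

EPS0 = 8.85e-12       # permittivity of space, C^2/(J m)
E_CHARGE = 1.602e-19  # elementary charge, C
A0 = 5.29e-11         # Bohr radius, m (illustrative distance)

# Coulomb field (13.6) of a proton charge q = e at the Bohr radius:
E_field = E_CHARGE / (4 * math.pi * A0**2 * EPS0)
# Lorentz force on an electron in that field:
F = -E_CHARGE * E_field
# potential energy V = -q e / (4 pi eps0 r):
V = -E_CHARGE**2 / (4 * math.pi * EPS0 * A0)
print(E_field, F, V)  # about 5.1e11 V/m, -8.2e-8 N, -4.4e-18 J (about -27 eV)
```

The potential energy comes out around $-27$ eV, the familiar scale of the hydrogen ground state analysis.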

Of course, all this raises the question, why bother? If Maxwell’s first equation is just a rewrite of Coulomb’s law, why not simply stick with Coulomb’s law in the first place? Well, to describe the electric field at a given point using Coulomb’s law requires you to consider every charge everywhere else. In contrast, Maxwell’s equation only involves local quantities at the given point, to wit, the derivatives of the local electric field and the local charge per unit volume. It so happens that in numerical or analytical work, most of the time it is much more convenient to deal with local quantities, even if those are derivatives, than with global ones.

Figure 13.2: Maxwell’s first equation for a more arbitrary region. The figure to the right includes the field lines through the selected points.

Of course, you can also integrate Maxwell’s first equation over more general regions than a sphere centered around a charge. For example, figure 13.2 shows a sphere with an off-center charge. But the electric field strength is no longer constant over the surface, and the divergence theorem now requires you to integrate the component of the electric field normal to the surface over the surface. Clearly, that does not have much intuitive meaning. However, if you are willing to loosen up a bit on mathematical preciseness, there is a better way to look at it. It is in terms of the electric field lines, the lines that everywhere trace the direction of the electric field. The right figure in figure 13.2 shows the field lines through the selected points; a single charge has radial field lines.
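If you are skeptical that the integrated first equation really holds for regions other than a charge-centered sphere, you can check it numerically. The sketch below integrates the normal component of the Coulomb field (13.6) over the six faces of a cube, with the charge placed off-center inside it; the charge value and positions are made up for illustration. The net flux still comes out as $q/\epsilon_0$:

```python
import math

EPS0 = 8.85e-12              # permittivity of space, C^2/(J m)
q = 1e-9                     # made-up 1 nC point charge
xq, yq, zq = 0.3, -0.2, 0.1  # charge position, off-center inside the cube

def E(x, y, z):
    """Coulomb field (13.6) of the point charge, Cartesian components."""
    dx, dy, dz = x - xq, y - yq, z - zq
    r3 = (dx * dx + dy * dy + dz * dz) ** 1.5
    pref = q / (4 * math.pi * EPS0 * r3)
    return (pref * dx, pref * dy, pref * dz)

def flux_through_cube(half=1.0, n=100):
    """Midpoint-rule integral of the normal field component over the six
    faces of the cube [-half, half]^3."""
    h = 2 * half / n
    total = 0.0
    for i in range(n):
        for jj in range(n):
            u = -half + (i + 0.5) * h
            v = -half + (jj + 0.5) * h
            for s in (1.0, -1.0):
                total += s * E(s * half, u, v)[0] * h * h  # faces x = +-half
                total += s * E(u, s * half, v)[1] * h * h  # faces y = +-half
                total += s * E(u, v, s * half)[2] * h * h  # faces z = +-half
    return total

flux = flux_through_cube()
print(flux, q / EPS0)  # the net flux equals q/eps0, whatever the region
```

Move the charge around inside the cube and the flux does not change; move it outside and the flux drops to zero.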

Figure 13.3: The net number of field lines leaving a region is a measure for the net charge inside that region.

Assume that you draw the field lines densely, more like figure 13.3 say, and moreover, that you make the number of field lines coming out of a charge proportional to the strength of that charge. In that case, the local density of field lines at a point becomes a measure of the strength of the electric field at that point, and in those terms, Maxwell’s integrated first equation says that the net number of field lines leaving a region is proportional to the net charge inside that region. That remains true when you add more charges inside the region. In that case the field lines will no longer be straight, but the net number going out will still be a measure of the net charge inside.

Now consider the question why Maxwell’s second equation says that the divergence of the magnetic field is zero. For the electric field you can shove, say, some electrons in the region to create a net negative charge, or you can shove in some ionized molecules to create a net positive charge. But the magnetic equivalents to such particles, called “magnetic monopoles”, being separate magnetic north pole particles or magnetic south pole particles, simply do not exist, {N.31}. It might appear that your bar magnet has a north pole and a south pole, but if you take it apart into little pieces, you do not end up with north pole pieces and south pole pieces. Each little piece by itself is still a little magnet, with equally strong north and south poles. The only reason the combined magnet seems to have a north pole is that all the microscopic magnets of which it consists have their north poles preferentially pointed in that direction.

Figure 13.4: Since magnetic monopoles do not exist, the net number of magnetic field lines leaving a region is always zero.

If all microscopic magnets have equal strength north and south poles, then the same number of magnetic field lines that come out of the north poles go back into the south poles, as figure 13.4 illustrates. So the net magnetic field lines leaving a given region will be zero; whatever goes out comes back in. True, if you enclose the north pole of a long bar magnet by an imaginary sphere, you can get a pretty good magnetic approximation of the electrical case of figure 13.1. But even then, if you look inside the magnet where it sticks through the spherical surface, the field lines will be found to go in towards the north pole, instead of away from it. You see why Maxwell’s second equation is also called absence of magnetic monopoles. And why, say, electrons can have a net negative charge, but have zero magnetic pole strength; their spin and orbital angular momenta produce equally strong magnetic north and south poles, a magnetic dipole (di meaning two.)
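The zero net flux is easy to check for the standard point-dipole field, whose radial component is proportional to $\cos\theta/r^3$. The sketch below integrates that radial component over a sphere around the dipole; the moment and radius are arbitrary test values, with all constants lumped into the moment:

```python
import math

m, r = 1.0, 2.0  # arbitrary dipole moment (constants lumped in) and radius

def B_r(theta):
    """Radial field component of a point dipole along the z-axis."""
    return 2 * m * math.cos(theta) / r**3

# integrate B_r over the sphere: dA = r^2 sin(theta) dtheta dphi
n = 1000
dtheta = math.pi / n
flux = 2 * math.pi * sum(
    B_r((k + 0.5) * dtheta) * r**2 * math.sin((k + 0.5) * dtheta) * dtheta
    for k in range(n))
print(flux)  # zero: as many field lines go back in as come out
```

The flux from the northern hemisphere is exactly canceled by the flux into the southern one.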

You can get Maxwell’s third equation from the electric field derived in the previous section. If you take its curl, (premultiply by $\nabla\times$), you get rid of the potential $\varphi$, since the curl of any gradient is always zero, and the curl of $\skew3\vec A$ is the magnetic field. So the third of Maxwell’s equations is:

\begin{displaymath}
\mbox{Maxwell\rq{}s third equation: }\quad
\nabla\times\skew3\vec{\cal E}=
-\frac{\partial \skew2\vec{\cal B}}{\partial t} %
\end{displaymath} (13.7)

The curl, $\nabla\times$, is also often indicated as rot.

Figure 13.5: Electric power generation.

Now what does that one mean? Well, the first thing to verify in this case is that this is just a clever rewrite of Faraday's law of induction, governing electric power generation. Assume that you want to create a voltage to drive some load (a bulb or whatever; don’t worry what the load is, just how to get the voltage for it). Just take a piece of copper wire and bend it into a circle, as shown in figure 13.5. If you can create a voltage difference between the ends of the wire, you are in business; just hook your bulb or whatever to the ends of the wire and it will light up. But to get such a voltage, you will need an electric field as shown in figure 13.5, because the voltage difference between the ends is the integral of the electric field strength along the length of the wire. Now Stokes' theorem of calculus says that the electric field strength along the wire integrated over the length of the wire equals the integral of the curl of the electric field strength over the inside of the wire loop, in other words over the imaginary translucent circle in figure 13.5. So to get the voltage, you need a nonzero curl of the electric field on the translucent circle. And Maxwell’s third equation above says that this means a time-varying magnetic field on the translucent circle. Moving the end of a strong magnet closer to the circle should do it, as suggested by figure 13.5. You had better not make that a big bulb unless you wrap the wire around a lot more times to form a spool, but anyway. {N.32}.
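To see Faraday's law in numbers, take a hypothetical uniform magnetic field $B_z = B_0\sin(\omega t)$ through the circle. The induced azimuthal electric field is then $-\frac12 r\,{\rm d}B/{\rm d}t$, and the sketch below checks that its integral along the wire equals minus the time derivative of the magnetic flux through the circle; all values are made up for illustration:

```python
import math

B0, omega, a, t = 0.5, 100.0, 0.1, 0.003  # made-up field, frequency, radius, time

B = lambda t: B0 * math.sin(omega * t)    # uniform B_z through the circle

def dB_dt(t, h=1e-6):
    """Central-difference time derivative of B."""
    return (B(t + h) - B(t - h)) / (2 * h)

E_phi = -(a / 2) * dB_dt(t)               # induced field strength along the wire
voltage = 2 * math.pi * a * E_phi         # line integral around the circle
flux_rate = math.pi * a**2 * dB_dt(t)     # d(flux)/dt through the circle
print(voltage, -flux_rate)                # the two agree: Faraday's law
```

The agreement is exact here because the line integral $2\pi a\,E_\varphi$ and the flux derivative $\pi a^2\,{\rm d}B/{\rm d}t$ are the same quantity, courtesy of Stokes' theorem.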

Maxwell’s fourth and final equation is a similar expression for the curl of the magnetic field:

\begin{displaymath}
\mbox{Maxwell\rq{}s fourth equation: }\quad
c^2 \nabla\times\skew2\vec{\cal B}=
\frac{\vec\jmath}{\epsilon_0}
+\frac{\partial \skew3\vec{\cal E}}{\partial t} %
\end{displaymath} (13.8)

where $\vec\jmath$ is the electric current density, the charge flowing per unit cross sectional area, and $c$ is the speed of light. (It is possible to rescale $\skew2\vec{\cal B}$ by a factor $c$ to get the speed of light to show up equally in the equations for the curl of $\skew3\vec{\cal E}$ and the curl of $\skew2\vec{\cal B}$, but then the Lorentz force law must be adjusted too.)

Figure 13.6: Two ways to generate a magnetic field: using a current (left) or using a varying electric field (right).

The big difference from the third equation is the appearance of the current density $\vec\jmath$. So, there are two ways to create a circulatory magnetic field, as shown in figure 13.6: (1) pass a current through the enclosed circle (the current density integrates over the area of the circle into the current through the circle), and (2) create a varying electric field over the circle, much like was done for the electric field in figure 13.5.
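For the current case, the integrated fourth equation reproduces Ampere's law for a long straight wire: in the scaling of (13.8), the field strength at distance $r$ from the wire is ${\cal B} = I/2\pi\epsilon_0c^2r$, equivalent to the familiar $\mu_0I/2\pi r$. A numerical sketch with a made-up current:

```python
import math

EPS0 = 8.85e-12  # permittivity of space, C^2/(J m)
C = 2.998e8      # speed of light, m/s
I = 2.0          # made-up steady current, A
r = 0.05         # radius of the circle around the wire, m

def B_phi(r):
    """Azimuthal field of a long straight wire, I/(2 pi eps0 c^2 r)."""
    return I / (2 * math.pi * EPS0 * C**2 * r)

# numerical line integral of B around the circle around the wire
n = 1000
circ = sum(B_phi(r) * (2 * math.pi * r / n) for _ in range(n))
print(C**2 * circ, I / EPS0)  # both sides of the integrated fourth equation
```

In a steady state the electric field term drops out, and $c^2$ times the line integral of $\skew2\vec{\cal B}$ around the circle equals $I/\epsilon_0$, whatever the radius of the circle.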

The fact that a current creates a surrounding magnetic field was already known as Ampere's law when Maxwell did his analysis. Maxwell himself, however, added the time derivative of the electric field to the equation to have the mathematics make sense. The problem was that the divergence of any curl must be zero, and by itself, the divergence of the current density in the right hand side of the fourth equation is not zero. Just like the divergence of the electric field is the net number of field lines coming out of a region per unit volume, the divergence of the current density is the net current coming out. And it is perfectly OK for a net charge to flow out of a region: it simply reduces the charge remaining within the region by that amount. This is expressed by the continuity equation:

\begin{displaymath}
\mbox{Maxwell\rq{}s continuity equation: }\quad
\nabla\cdot\vec\jmath = -\frac{\partial \rho}{\partial t} %
\end{displaymath} (13.9)

So Maxwell’s fourth equation without the time derivative of the electric field is mathematically impossible. But after he added it, if you take the divergence of the total right hand side then you do indeed get zero as you should. To check that, use the continuity equation above and the first equation.
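You can check this bookkeeping in a simple one-dimensional example. In rescaled units with $\epsilon_0 = 1$, take a hypothetical charge density that decays by conduction, with the current density chosen to satisfy continuity and the electric field chosen to satisfy the first equation; then the right hand side of the fourth equation indeed has zero divergence:

```python
import math

# hypothetical 1D fields, units rescaled so that eps0 = 1:
rho = lambda x, t: math.cos(x) * math.exp(-t)  # decaying charge density
j   = lambda x, t: math.sin(x) * math.exp(-t)  # current density (continuity)
E   = lambda x, t: math.sin(x) * math.exp(-t)  # dE/dx = rho (first equation)

h = 1e-5
ddx = lambda f, x, t: (f(x + h, t) - f(x - h, t)) / (2 * h)
ddt = lambda f, x, t: (f(x, t + h) - f(x, t - h)) / (2 * h)

x, t = 0.7, 0.2
# continuity (13.9): div j = -d rho/dt
cont_lhs, cont_rhs = ddx(j, x, t), -ddt(rho, x, t)
# full right hand side of the fourth equation, j/eps0 + dE/dt:
rhs = lambda x, t: j(x, t) + ddt(E, x, t)
div_rhs = ddx(rhs, x, t)
print(cont_lhs, cont_rhs)  # equal
print(div_rhs)             # ~ 0, as the divergence of a curl must be
```

Without the $\partial\skew3\vec{\cal E}/\partial t$ term, the divergence of the right hand side would be $\nabla\cdot\vec\jmath \ne 0$, and the equation could not hold.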

In empty space, Maxwell’s equations simplify: there are no charges so both the charge density $\rho$ and the current density $\vec\jmath$ will be zero. In that case, the solutions of Maxwell’s equations are simply combinations of “traveling waves.” A traveling wave takes the form

\begin{displaymath}
\skew3\vec{\cal E}= {\hat k}{\cal E}_0 \cos\Big(\omega(t - y/c)-\alpha\Big)
\qquad
\skew2\vec{\cal B}= {\hat\imath}\frac1c {\cal E}_0 \cos\Big(\omega(t - y/c)-\alpha\Big) %
\end{displaymath} (13.10)

where for simplicity, the $y$-axis of the coordinate system has been aligned with the direction in which the wave travels, and the $z$-axis with the amplitude ${\hat k}{\cal E}_0$ of the electric field of the wave. Such a wave is called “linearly polarized” in the $z$-direction. The constant $\omega$ is the angular frequency of the wave, equal to $2\pi$ times its frequency $\nu$ in cycles per second, and is related to its wavelength $\lambda$ by $\omega\lambda/c = 2\pi$. The constant $\alpha$ is just a phase angle. For these simple waves, the magnetic and electric field must be normal to each other, as well as to the direction of wave propagation.

You can plug the above wave solution into Maxwell’s equations and so verify that it satisfies them all. With more effort and knowledge of Fourier analysis, you can show that such waves are the most general possible solutions of traveling wave form, and that any arbitrary solution is a combination of these waves (if all directions of wave propagation, and of the electric field relative to it, are included).
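Checking the wave against the equations is easy to do numerically too. The sketch below verifies the only nontrivial components of the source-free third and fourth equations for the wave (13.10), using arbitrary nondimensional test values and simple central-difference derivatives:

```python
import math

E0, omega, c, alpha = 1.5, 2.0, 3.0, 0.4  # arbitrary test values

# the wave (13.10): E along z, B along x, traveling in the y-direction
Ez = lambda y, t: E0 * math.cos(omega * (t - y / c) - alpha)
Bx = lambda y, t: (E0 / c) * math.cos(omega * (t - y / c) - alpha)

h = 1e-6
d_dy = lambda f, y, t: (f(y + h, t) - f(y - h, t)) / (2 * h)
d_dt = lambda f, y, t: (f(y, t + h) - f(y, t - h)) / (2 * h)

y, t = 0.8, 0.3
# third equation, x-component: dEz/dy = -dBx/dt
lhs3, rhs3 = d_dy(Ez, y, t), -d_dt(Bx, y, t)
# fourth equation in vacuum, z-component: -c^2 dBx/dy = dEz/dt
lhs4, rhs4 = -c**2 * d_dy(Bx, y, t), d_dt(Ez, y, t)
print(lhs3, rhs3)  # equal
print(lhs4, rhs4)  # equal
```

For this field configuration, the curls have only one nonzero component each, so the two printed pairs are all there is to check; the divergence equations hold trivially, since neither field component depends on its own coordinate.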

The point is that the waves travel with the speed $c$. When Maxwell wrote down his equations, $c$ was just a constant to him, but when the propagation speed of electromagnetic waves matched the experimentally measured speed of light, it was just too much of a coincidence and he correctly concluded that light must be traveling electromagnetic waves.

It was a great victory of mathematical analysis. Long ago, the Greeks had tried to use mathematics to make guesses about the physical world, and it was an abysmal failure. You do not want to hear about it. Only when the Renaissance started measuring how nature really works were the correct laws discovered, for people like Newton and others to put into mathematical form. But here Maxwell successfully amended Ampere's measured law simply because the mathematics did not make sense. Moreover, by deriving how fast electromagnetic waves move, he discovered the fundamental nature of the then mystifying physical phenomenon humans call light.

For those with a knowledge of partial differential equations, separate wave equations for the electric and magnetic fields and their potentials are derived in addendum {A.36}.

An electromagnetic field obviously contains energy; that is how the sun transports heat to our planet. The electromagnetic energy within an otherwise empty volume ${\cal V}$ can be found as

\begin{displaymath}
\fbox{$\displaystyle
E_{\cal V}= {\textstyle\frac{1}{2}} \epsilon_0 \int_{\cal V}
\left(\skew3\vec{\cal E}^2 + c^2\skew2\vec{\cal B}^2\right){\,\rm d}^3{\skew0\vec r}
$} %
\end{displaymath} (13.11)

This is typically derived by comparing the energy from discharging a condenser to the electric field that it initially holds, and from comparing the energy from discharging a coil to the magnetic field it initially holds. That is too much detail for this book.

But at least the result can be made plausible. First note that the time derivative of the energy above can be written as

\begin{displaymath}
\frac{{\rm d}E_{\cal V}}{{\rm d}t} =
- \int_S \epsilon_0 c\,
(\skew3\vec{\cal E}\times c \skew2\vec{\cal B}) \cdot {\vec n}{\,\rm d}S
\end{displaymath}

Here $S$ is the surface of volume ${\cal V}$, and ${\vec n}$ is the unit vector normal to the surface element ${\rm d}{S}$. To verify this expression, bring the time derivative inside the integral in (13.11), then get rid of the time derivatives using Maxwell’s third and fourth laws, use the standard vector identity [40, 20.40], and finally the divergence theorem.

Now suppose you have a finite amount of radiation in otherwise empty space. If the amount of radiation is finite, the field should disappear at infinity. So, taking the volume to be all of space, the integral in the right hand side above will be zero. So $E_{\cal V}$ will be constant. That indicates that $E_{\cal V}$ should be at least a multiple of the energy. After all, what other scalar quantity than energy would be constant? And the factor $\epsilon_0$ is needed because of units. That misses only the factor $\frac12$ in the expression for the energy.

For an arbitrary volume ${\cal V}$, the surface integral must then be the energy outflow through the surface of the volume. That suggests that the energy flow rate per unit area is given by the so-called “Poynting vector”

\begin{displaymath}
\epsilon_0c\,\skew3\vec{\cal E}\times c\skew2\vec{\cal B} %
\end{displaymath} (13.12)

Unfortunately, this argument is flawed. You cannot deduce local values of the energy flow from its integral over an entire closed surface. In particular, you can find other vectors that describe the energy flow without inconsistency. Just add an arbitrary solenoidal vector, a vector whose divergence is zero, to the Poynting vector. For example, adding a multiple of the magnetic field would do it. However, if you look at simple light waves like (13.10), the Poynting vector seems the intuitive choice. This paragraph was included because other books have Poynting vectors and you would be very disappointed if yours did not.
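For the wave (13.10) specifically, the Poynting vector does behave as an energy flow should: it points in the propagation direction, and its time average equals $c$ times the average energy density of (13.11). A sketch in rescaled units with $\epsilon_0 = 1$ and arbitrary test values:

```python
import math

EPS0 = 1.0                    # rescaled units with eps0 = 1
E0, omega, c = 1.5, 2.0, 3.0  # arbitrary test values

def fields(y, t):
    """Electric and magnetic field of the wave (13.10), with alpha = 0."""
    phase = omega * (t - y / c)
    return (0.0, 0.0, E0 * math.cos(phase)), ((E0 / c) * math.cos(phase), 0.0, 0.0)

def poynting(y, t):
    """eps0 c (E x cB), componentwise cross product."""
    E, B = fields(y, t)
    cB = tuple(c * b for b in B)
    return (EPS0 * c * (E[1] * cB[2] - E[2] * cB[1]),
            EPS0 * c * (E[2] * cB[0] - E[0] * cB[2]),
            EPS0 * c * (E[0] * cB[1] - E[1] * cB[0]))

S = poynting(0.4, 1.1)
print(S)  # only the y-component, the propagation direction, is nonzero

# time average of S_y over one period versus c times the average
# energy density eps0/2 (E^2 + c^2 B^2) = eps0 E0^2 / 2:
n, T = 10000, 2 * math.pi / omega
avg_S = sum(poynting(0.4, k * T / n)[1] for k in range(n)) / n
print(avg_S, EPS0 * c * E0**2 / 2)  # equal
```

So at least for these simple waves, the energy in a given volume is swept along at the speed of light, as you would expect.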

You will usually not find Maxwell’s equations in the exact form described here. To explain what is going on inside materials, you would have to account for the electric and magnetic fields of every electron and proton (and neutron!) of the material. That is just an impossible task, so physicists have developed ways to average away all those effects by messing with Maxwell’s equations. But then the messed-up $\skew3\vec{\cal E}$ in one of Maxwell’s equations is no longer the same as the messed-up $\skew3\vec{\cal E}$ in another, and the same for $\skew2\vec{\cal B}$. So physicists rename one messed-up $\skew3\vec{\cal E}$ as, maybe, the electric flux density $\vec{D}$, and a messed up magnetic field as, maybe, the auxiliary field. And they define many other symbols, and even refer to the auxiliary field as being the magnetic field, all to keep engineers out of nanotechnology. Don’t let them! When you need to understand the messed-up Maxwell’s equations, Wikipedia has a list of the countless definitions.