Notations

The below are the simplest possible descriptions of various symbols, just to help you keep reading if you do not remember/know what they stand for. Don't cite them on a math test and then blame this book for your grade.

Watch it. There are so many ad hoc usages of symbols, some will have been overlooked here. Always use common sense first in guessing what a symbol means in a given context.

The quoted values of physical constants are usually taken from NIST CODATA in 2012 or later. The final digit of the listed value is normally doubtful. (It corresponds to the first nonzero digit of the standard deviation.) Numbers ending in triple dots are exact and could be written down to more digits than listed if needed.

$\cdot$
A dot might indicate a multiplication, and also many more prosaic things (punctuation signs, decimal points, ...).

$\times$
Multiplication symbol. May indicate:

$!$
Might be used to indicate a factorial. Example: $5! = 1 \times 2 \times 3 \times 4 \times 5 = 120$.

The function that generalizes $n!$ to noninteger values of $n$ is called the gamma function; $n! = \Gamma(n+1)$. The gamma function generalization is due to, who else, Euler. (However, the fact that $n! = \Gamma(n+1)$ instead of $n! = \Gamma(n)$ is due to the idiocy of Legendre.) In Legendre-resistant notation,

\begin{displaymath}
n!=\int_0^{\infty}t^ne^{-t}{\,\rm d}{t}
\end{displaymath}

Straightforward integration shows that 0! is 1 as it should be, and integration by parts shows that $(n+1)! = (n+1)\,n!$, which ensures that the integral also produces the correct value of $n!$ for any integer value of $n$ higher than 0. The integral, however, exists for any real value of $n$ above $-1$, not just integers. The values of the integral are always positive, tending to positive infinity both for $n\downarrow-1$ (because the integral then blows up at small values of $t$) and for $n\uparrow\infty$ (because the integral then blows up at medium-large values of $t$). In particular, Stirling's formula says that for large positive $n$, $n!$ can be approximated as

\begin{displaymath}
n! \sim \sqrt{2\pi n} n^n e^{-n} \left[1 + \ldots\right]
\end{displaymath}

where the value indicated by the dots becomes negligibly small for large $n$. The function $n!$ can be extended further to any complex value of $n$, except the negative integer values of $n$, where $n!$ is infinite, but it is then no longer positive. Euler's integral can be done for $n = -\frac12$ by making the change of variables $\sqrt{t} = u$, producing the integral $\int_0^\infty 2e^{-u^2}{\,\rm d}u$, or $\int_{-\infty}^{\infty}e^{-u^2}{\,\rm d}u$, which equals $\sqrt{\int_{-\infty}^{\infty}e^{-x^2}{\,\rm d}x\int_{-\infty}^{\infty}e^{-y^2}{\,\rm d}y}$, and the integral under the square root can be done analytically using polar coordinates. The result is that

\begin{displaymath}
(-\frac12)! = \int_{-\infty}^{\infty}e^{-u^2}{\,\rm d}{u} = \sqrt{\pi}
\end{displaymath}

To get $\frac12!$, multiply by $\frac12$, since $n! = n\,(n-1)!$.
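These factorial facts are easy to spot-check numerically. A quick sketch using only the Python standard library (the cutoff $t = 50$ and the tolerances are ad hoc choices, not part of the mathematics):

```python
import math

# n! = Gamma(n+1) for small integers
for n in range(6):
    assert math.factorial(n) == round(math.gamma(n + 1))

# Euler's integral for n = 5, by a crude Riemann sum up to t = 50
# (the integrand decays like e^-t, so the tail beyond 50 is negligible)
dt = 1e-3
integral = sum(t**5 * math.exp(-t) * dt
               for t in (i * dt for i in range(1, 50000)))
assert abs(integral - 120) < 1     # 5! = 120

# Half-integer factorials via the gamma function
assert abs(math.gamma(0.5) - math.sqrt(math.pi)) < 1e-12      # (-1/2)!
assert abs(math.gamma(1.5) - math.sqrt(math.pi) / 2) < 1e-12  # (1/2)!

# Stirling's formula is already within about 0.2% at n = 50
stirling = math.sqrt(2 * math.pi * 50) * 50**50 * math.exp(-50)
print(stirling / math.factorial(50))  # close to 1
```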

A double exclamation mark may mean every second item is skipped, e.g. $5!! = 1 \times 3 \times 5$. In general, $(2n+1)!! = (2n+1)!/2^nn!$. Of course, 5!! should logically mean (5!)!. Logic would indicate that $5\times3\times1$ should be indicated by something like 5!'. But what is logic in physics?
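The double-factorial identity quoted above can be checked in a few lines; a sketch using the standard library, with `double_factorial` a helper name made up for the occasion:

```python
import math

def double_factorial(n):
    # n!! skips every second factor: 5!! = 5 * 3 * 1
    result = 1
    while n > 1:
        result *= n
        n -= 2
    return result

# (2n+1)!! = (2n+1)! / (2^n n!) for small n
for n in range(10):
    assert double_factorial(2 * n + 1) == \
        math.factorial(2 * n + 1) // (2 ** n * math.factorial(n))
print(double_factorial(5))  # 15
```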

$\vert$
May indicate:

$\vert\ldots\rangle$
A ket is used to indicate some state. For example, $\big\vert l\:m\big\rangle $ indicates an angular momentum state with azimuthal quantum number $l$ and magnetic quantum number $m$. Similarly, $\big\vert\frac12\:{-\frac12}\big\rangle $ is the spin-down state of a particle with spin $\frac12$. Other common ones are $\big\vert{\underline x}\big\rangle $ for the position eigenfunction ${\underline x}$, i.e. $\delta(x-{\underline x})$, $\big\vert 1{\rm {s}}\big\rangle $ for the 1s or $\psi_{100}$ hydrogen state, $\big\vert 2{\rm {p}}_z\big\rangle $ for the 2p$_z$ or $\psi_{210}$ state, etcetera. In short, whatever can indicate some state can be pushed into a ket.

$\langle\ldots\vert$
A bra is like a ket $\big\vert\ldots\big\rangle $, but appears on the left side of inner products, instead of the right.

$\uparrow$
Indicates the spin-up state. Mathematically, equals the function ${\uparrow}(S_z)$ which is by definition equal to 1 at $S_z = \frac12\hbar$ and equal to 0 at $S_z = -\frac12\hbar$. A spatial wave function multiplied by $\uparrow$ is a particle in that spatial state with its spin up. For multiple particles, the spins are listed with particle 1 first.

$\downarrow$
Indicates the spin-down state. Mathematically, equals the function ${\downarrow}(S_z)$ which is by definition equal to 0 at $S_z = \frac12\hbar$ and equal to 1 at $S_z = -\frac12\hbar$. A spatial wave function multiplied by $\downarrow$ is a particle in that spatial state with its spin down. For multiple particles, the spins are listed with particle 1 first.

$\sum$
Summation symbol. Example: if in three-dimensional space a vector $\vec{f}$ has components $f_1 = 2$, $f_2 = 1$, $f_3 = 4$, then $\sum_{{\rm {all~}}i}f_i$ stands for $2+1+4 = 7$.

One important thing to remember: the symbol used for the summation index does not make a difference: $\sum_{{\rm {all~}}j}f_j$ is exactly the same as $\sum_{{\rm {all~}}i}f_i$. So freely rename the index, but always make sure that the new name is not already used for something else in the part that it appears in. If you use the same name for two different things, it becomes a mess.

Related to that, $\sum_{{\rm {all~}}i}f_i$ is not something that depends on an index $i$. It is just a combined simple number. Like 7 in the example above. It is commonly said that the summation index sums away.
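The renaming point is easy to see in a short Python sketch; the component values are the ones from the example above:

```python
# Components f_1 = 2, f_2 = 1, f_3 = 4 from the example above
f = {1: 2, 2: 1, 3: 4}

# The name of the summation index makes no difference:
sum_over_i = sum(f[i] for i in f)
sum_over_j = sum(f[j] for j in f)
print(sum_over_i, sum_over_j)  # 7 7: the index "sums away"
```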

$\prod$
(Not to be confused with ${\mit\Pi}$ further down.) Multiplication symbol. Example: if in three-dimensional space a vector $\vec{f}$ has components $f_1 = 2$, $f_2 = 1$, $f_3 = 4$, then $\prod_{{\rm {all~}}i}f_i$ stands for $2\times1\times4 = 8$.

One important thing to remember: the symbol used for the multiplication index does not make a difference: $\prod_{{\rm {all~}}j}f_j$ is exactly the same as $\prod_{{\rm {all~}}i}f_i$. So freely rename the index, but always make sure that the new name is not already used for something else in the part that it appears in. If you use the same name for two different things, it becomes a mess.

Related to that, $\prod_{{\rm {all~}}i}f_i$ is not something that depends on an index $i$. It is just a combined simple number. Like 8 in the example above. It is commonly said that the multiplication index factors away. (By who?)
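Again a short Python sketch, with the same component values as in the summation example:

```python
import math

# Components f_1 = 2, f_2 = 1, f_3 = 4 from the example above
f = {1: 2, 2: 1, 3: 4}
product = math.prod(f[i] for i in f)
print(product)  # 2 * 1 * 4 = 8
```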

$\int$
Integration symbol, the continuous version of the summation symbol. For example,

\begin{displaymath}
\int_{\mbox{\scriptsize all }x} f(x){\,\rm d}x
\end{displaymath}

is the summation of $f(x){\,\rm d}{x}$ over all infinitesimally small fragments ${\rm d}{x}$ that make up the entire $x$-range. For example, $\int_{x=0}^2(2+x){\,\rm d}{x}$ equals $3 \times 2 = 6$; the average value of $2+x$ between $x = 0$ and $x = 2$ is 3, and the sum of all the infinitesimally small segments ${\rm d}{x}$ gives the total length 2 of the range in $x$ from 0 to 2.

One important thing to remember: the symbol used for the integration variable does not make a difference: $\int_{{\rm {all~}}y}f(y){\,\rm d}{y}$ is exactly the same as $\int_{{\rm {all~}}x}f(x){\,\rm d}{x}$. So freely rename the integration variable, but always make sure that the new name is not already used for something else in the part it appears in. If you use the same name for two different things, it becomes a mess.

Related to that, $\int_{{\rm {all~}}x}f(x){\,\rm d}{x}$ is not something that depends on a variable $x$. It is just a combined number. Like 6 in the example above. It is commonly said that the integration variable integrates away.
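The example integral can be mimicked by actually summing small fragments; a midpoint-rule sketch (the number of fragments `n` is an arbitrary choice):

```python
# Midpoint-rule sum of (2 + x) dx over the range 0 <= x <= 2;
# the rule is exact for a linear integrand, so it reproduces 6.
n = 1000
dx = 2.0 / n
integral = sum((2 + (i + 0.5) * dx) * dx for i in range(n))
print(integral)  # 6, up to rounding
```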

$\to$
May indicate:

$\vec{\phantom{a}}$
Vector symbol. An arrow above a letter indicates it is a vector. A vector is a quantity that requires more than one number to be characterized. Typical vectors in physics include position ${\skew0\vec r}$, velocity $\vec{v}$, linear momentum ${\skew0\vec p}$, acceleration $\vec{a}$, force $\vec{F}$, angular momentum $\vec{L}$, etcetera.

$\widehat{\phantom{a}}$
A hat over a letter in this book indicates that it is an operator, which turns functions into other functions.

$'$
May indicate:

$\nabla$
The spatial differentiation operator nabla. In Cartesian coordinates:

\begin{displaymath}
\nabla \equiv
\left(
\frac{\partial}{\partial x},
\frac{\partial}{\partial y},
\frac{\partial}{\partial z}
\right)
= {\hat\imath}\frac{\partial}{\partial x} +
{\hat\jmath}\frac{\partial}{\partial y} +
{\hat k}\frac{\partial}{\partial z}
\end{displaymath}

Nabla can be applied to a scalar function $f$ in which case it gives a vector of partial derivatives called the gradient of the function:

\begin{displaymath}
\mathop{\rm grad}\nolimits f = \nabla f =
{\hat\imath}\frac{\partial f}{\partial x} +
{\hat\jmath}\frac{\partial f}{\partial y} +
{\hat k}\frac{\partial f}{\partial z}
\end{displaymath}

Nabla can be applied to a vector in a dot product multiplication, in which case it gives a scalar function called the divergence of the vector:

\begin{displaymath}
\mathop{\rm div}\nolimits \vec v = \nabla\cdot\vec v =
\frac{\partial v_x}{\partial x} +
\frac{\partial v_y}{\partial y} +
\frac{\partial v_z}{\partial z}
\end{displaymath}

or in index notation

\begin{displaymath}
\mathop{\rm div}\nolimits \vec v = \nabla\cdot\vec v =
\sum_{i=1}^3 \frac{\partial v_i}{\partial x_i}
\end{displaymath}

Nabla can also be applied to a vector in a vectorial product multiplication, in which case it gives a vector function called the curl or rot of the vector. In index notation, the $i$-th component of this vector is

\begin{displaymath}
\left(\mathop{\rm curl}\nolimits \vec v\right)_i =
\frac{\partial v_{{\overline{\overline{\imath}}}}}{\partial x_{{\overline{\imath}}}}
- \frac{\partial v_{{\overline{\imath}}}}{\partial x_{{\overline{\overline{\imath}}}}}
\end{displaymath}

where ${\overline{\imath}}$ is the index following $i$ in the sequence 123123..., and ${\overline{\overline{\imath}}}$ the one preceding it (or the second following it).
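The cyclic-index rule can be verified numerically. The sketch below checks it by central finite differences for a made-up test field $\vec v = (yz,\ x^2,\ xy)$, whose curl works out by hand to $(x,\ 0,\ 2x-z)$; the names `v`, `partial`, and `curl` are illustration helpers, not established notation:

```python
# (curl v)_i = d v_idblbar / d x_ibar  -  d v_ibar / d x_idblbar,
# where ibar follows i cyclically in 1,2,3 and idblbar precedes it.

def v(x):
    # test field: v = (y*z, x**2, x*y)
    return [x[1] * x[2], x[0] ** 2, x[0] * x[1]]

def partial(comp, x, j, h=1e-6):
    # central-difference approximation of d v_comp / d x_j
    xp, xm = list(x), list(x)
    xp[j] += h
    xm[j] -= h
    return (v(xp)[comp] - v(xm)[comp]) / (2 * h)

def curl(x):
    result = []
    for i in range(3):   # 0-based, so ibar = i+1 and idblbar = i+2, mod 3
        ibar, idblbar = (i + 1) % 3, (i + 2) % 3
        result.append(partial(idblbar, x, ibar) - partial(ibar, x, idblbar))
    return result

print(curl([1.0, 2.0, 3.0]))  # exact curl there is (x, 0, 2x - z) = (1, 0, -1)
```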

The operator $\nabla^2$ is called the Laplacian. In Cartesian coordinates:

\begin{displaymath}
\nabla^2 \equiv
\frac{\partial^2}{\partial x^2}+
\frac{\partial^2}{\partial y^2}+
\frac{\partial^2}{\partial z^2}
\end{displaymath}

Sometimes the Laplacian is indicated as $\Delta$. In relativistic index notation it is equal to $\partial_i\partial^i$, with maybe a minus sign depending on who you talk with.

In non-Cartesian coordinates, don't guess; look these operators up in a table book, [40, pp. 124-126]. For example, in spherical coordinates,

\begin{displaymath}
\nabla = {\hat\imath}_r \frac{\partial}{\partial r} +
{\hat\imath}_\theta \frac{1}{r} \frac{\partial}{\partial \theta} +
{\hat\imath}_\phi \frac{1}{r \sin\theta}
\frac{\partial}{\partial \phi} %
\end{displaymath} (N.2)

That allows the gradient of a scalar function $f$, i.e. $\nabla{f}$, to be found immediately. But if you apply $\nabla$ on a vector, you have to be very careful because you also need to differentiate ${\hat\imath}_r$, ${\hat\imath}_\theta$, and ${\hat\imath}_\phi$. In particular, the correct divergence of a vector $\vec{v}$ is
\begin{displaymath}
\nabla \cdot \vec v = \frac{1}{r^2} \frac{\partial r^2 v_r}{\partial r}
+ \frac{1}{r\sin\theta}
\frac{\partial v_\theta \sin\theta}{\partial\theta}
+ \frac{1}{r\sin\theta}
\frac{\partial v_\phi}{\partial\phi} %
\end{displaymath} (N.3)

The curl $\nabla$ $\times$ $\vec{v}$ of the vector is
\begin{displaymath}
\frac{{\hat\imath}_r}{r\sin\theta} \left(
\frac{\partial v_\phi \sin\theta}{\partial\theta}
- \frac{\partial v_\theta}{\partial\phi}
\right)
+ \frac{{\hat\imath}_\theta}{r} \left(
\frac{1}{\sin\theta} \frac{\partial v_r}{\partial\phi}
- \frac{\partial r v_\phi}{\partial r}
\right)
+ \frac{{\hat\imath}_\phi}{r} \left(
\frac{\partial r v_\theta}{\partial r}
- \frac{\partial v_r}{\partial\theta}
\right) %
\end{displaymath} (N.4)

Finally the Laplacian is:
\begin{displaymath}
\nabla^2 = \frac{1}{r^2}
\left\{
\frac{\partial}{\partial r}
\left( r^2 \frac{\partial}{\partial r} \right)
+ \frac{1}{\sin\theta}
\frac{\partial}{\partial \theta}
\left( \sin\theta \frac{\partial}{\partial \theta} \right)
+ \frac{1}{\sin^2\theta}
\frac{\partial^2}{\partial \phi^2}
\right\} %
\end{displaymath} (N.5)

See also spherical coordinates.

Cylindrical coordinates are usually indicated as $r$, $\theta$ and $z$. Here $z$ is the Cartesian coordinate, while $r$ is the distance from the $z$-axis and $\theta$ the angle around the $z$ axis. In two dimensions, i.e. without the $z$ terms, they are usually called polar coordinates. In cylindrical coordinates:

\begin{displaymath}
\nabla = {\hat\imath}_r \frac{\partial}{\partial r} +
{\hat\imath}_\theta \frac{1}{r} \frac{\partial}{\partial \theta} +
{\hat\imath}_z \frac{\partial}{\partial z} %
\end{displaymath} (N.6)


\begin{displaymath}
\nabla \cdot \vec v = \frac{1}{r} \frac{\partial r v_r}{\partial r}
+ \frac{1}{r} \frac{\partial v_\theta}{\partial\theta}
+ \frac{\partial v_z}{\partial z} %
\end{displaymath} (N.7)


\begin{displaymath}
\nabla\times\vec{v} =
{\hat\imath}_r \left(
\frac{1}{r} \frac{\partial v_z}{\partial\theta}
- \frac{\partial v_\theta}{\partial z}
\right)
+ {\hat\imath}_\theta \left(
\frac{\partial v_r}{\partial z}
- \frac{\partial v_z}{\partial r}
\right)
+ \frac{{\hat\imath}_z}{r} \left(
\frac{\partial r v_\theta}{\partial r}
- \frac{\partial v_r}{\partial\theta}
\right) %
\end{displaymath} (N.8)


\begin{displaymath}
\nabla^2 =
\frac{1}{r} \frac{\partial}{\partial r}
\left( r \frac{\partial}{\partial r} \right)
+ \frac{1}{r^2} \frac{\partial^2}{\partial \theta^2}
+ \frac{\partial^2}{\partial z^2} %
\end{displaymath} (N.9)

$\mathop{\Box}\nolimits$
The D'Alembertian is defined as

\begin{displaymath}
\frac{1}{c^2}\frac{\partial^2}{\partial t^2}
- \frac{\partial^2}{\partial x^2}
- \frac{\partial^2}{\partial y^2}
- \frac{\partial^2}{\partial z^2}
\end{displaymath}

where $c$ is a constant called the wave speed. In relativistic index notation, $\mathop{\Box}\nolimits $ is equal to $-\partial_\mu\partial^\mu$.

$^*$
A superscript star normally indicates a complex conjugate. In the complex conjugate of a number, every ${\rm i}$ is changed into a $-{\rm i}$.
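Python's built-in complex numbers make a handy sanity check of the "change every ${\rm i}$ into $-{\rm i}$" rule:

```python
z = 2 + 3j
zc = z.conjugate()
print(zc)  # (2-3j): every i changes sign

# A number times its conjugate is real: its magnitude squared
assert (z * zc).imag == 0
assert abs((z * zc).real - abs(z) ** 2) < 1e-9
```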

$<$
Less than.

$\leqslant$
Less than or equal.

$\langle\ldots\rangle$
May indicate:

$>$
Greater than.

$\geqslant$
Greater than or equal.

$[\ldots]$
May indicate:

$=$
Equals sign. The quantity to the left is the same as the one to the right.

$\equiv$
Emphatic equals sign. Typically means “by definition equal” or “everywhere equal.”

$\approx$
Indicates approximately equal. Read it as “is approximately equal to.”

$\sim$
Indicates approximately equal. Often used when the approximation applies only when something is small or large. Read it as “is approximately equal to” or as “is asymptotically equal to.”

$\propto$
Proportional to. The two sides are equal except for some unknown constant factor.

$\alpha$
(alpha) May indicate:

$\beta$
(beta) May indicate:

$\Gamma$
(Gamma) May indicate:

$\gamma$
(gamma) May indicate:

$\Delta$
(capital delta) May indicate:

$\delta$
(delta) May indicate:

$\partial$
(partial) Indicates a vanishingly small change or interval of the following variable. For example, $\partial{f}$/$\partial{x}$ is the ratio of a vanishingly small change in function $f$ divided by the vanishingly small change in variable $x$ that causes this change in $f$. Such ratios define derivatives, in this case the partial derivative of $f$ with respect to $x$.

Also used in relativistic index notation, chapter 1.2.5.

$\epsilon$
(epsilon) May indicate:

$\varepsilon$
(variant of epsilon) May indicate:

$\eta$
(eta) May be used to indicate a $y$-position of a particle.

$\Theta$
(capital theta) Used in this book to indicate some function of $\theta$ to be determined.

$\theta$
(theta) May indicate:

$\vartheta$
(variant of theta) An alternate symbol for $\theta$.

$\kappa$
(kappa) May indicate:

$\Lambda$
(Lambda) May indicate:

$\lambda$
(lambda) May indicate:

$\mu$
(mu) May indicate:

$\nu$
(nu) May indicate:

$\xi$
(xi) May indicate:

${\mit\Pi}$
(Oblique Pi) (Not to be confused with $\prod$ described higher up.) Parity operator. Replaces ${\skew0\vec r}$ by $-{\skew0\vec r}$. That is equivalent to a mirroring in a mirror through the origin, followed by a $180^\circ$ rotation around the axis normal to the mirror.

$\pi$
(pi) May indicate:

$\tilde\pi$
Canonical momentum density.

$\rho$
(rho) May indicate:

$\sigma$
(sigma) May indicate:

$\tau$
(tau) May indicate:

$\Phi$
(capital phi) May indicate:

$\phi$
(phi) May indicate:

$\varphi$
(variant of phi) May indicate:

$\chi$
(chi) May indicate:

$\Psi$
(capital psi) Upper case psi is used for the wave function.

$\psi$
(psi) Typically used to indicate an energy eigenfunction. Depending on the system, indices may be added to distinguish different ones. In some cases $\psi$ might be used instead of $\Psi$ to indicate a system in an energy eigenstate; let me know and I will change it. A system in an energy eigenstate should be written as $\Psi = c\psi$, not $\psi$, with $c$ a constant of magnitude 1.

$\Omega$
(Omega) May indicate:

$\omega$
(omega) May indicate:

$A$
May indicate:

Å    
Ångstrom. Equal to $10^{-10}$ m.

$a$
May indicate:

$a_0$
May indicate:

absolute    
May indicate:

adiabatic    
An adiabatic process is a process in which there is no heat transfer with the surroundings. If the process is also reversible, it is called isentropic. Typically, these processes are fairly quick, in order not to give heat conduction enough time to do its stuff, but not so excessively quick that they become irreversible.

Adiabatic processes in quantum mechanics are defined quite differently, to keep students on their toes. See chapter 7.1.5. These processes are very slow, to give the system all possible time to adjust to its surroundings. Of course, quantum physicists were not aware that the same term had already been used for a hundred years or so for relatively fast processes. They assumed they had just invented a great new term!

adjoint    
The adjoint $A^H$ or $A^\dagger$ of an operator is the one you get if you take it to the other side of an inner product. (While keeping the value of the inner product the same regardless of whatever two vectors or functions may be involved.) Hermitian operators are self-adjoint; they do not change if you take them to the other side of an inner product. Skew-Hermitian operators just change sign. Unitary operators change into their inverse when taken to the other side of an inner product. Unitary operators generalize rotations of vectors: an inner product of vectors is the same whether you rotate the first vector one way, or the second vector the opposite way. Unitary operators preserve inner products (when applied to both vectors or functions). Fourier transforms are unitary operators on account of the Parseval equality that says that inner products are preserved.

amplitude    
Everything in quantum mechanics is an amplitude. However, most importantly, the quantum amplitude gives the coefficient of a state in a wave function. For example, the usual quantum wave function gives the quantum amplitude that the particle is at the given position.

angle    
Consider two semi-infinite lines extending from a common intersection point. Then the angle between these lines is defined in the following way: draw a unit circle in the plane of the lines and centered at their intersection point. The angle is then the length of the circular arc that is in between the lines. More precisely, this gives the angle in radians, rad. Sometimes an angle is expressed in degrees, where $2\pi$ rad is taken to be $360^\circ$. However, using degrees is usually a very bad idea in science.

In three dimensions, you may be interested in the so-called solid angle $\Omega$ inside a conical surface. This angle is defined in the following way: draw a sphere of unit radius centered at the apex of the conical surface. Then the solid angle is the area of the spherical surface that is inside the cone. Solid angles are in steradians. The cone does not need to be a circular one, (i.e. have a circular cross section), for this to apply. In fact, the most common case is the solid angle corresponding to an infinitesimal element ${\rm d}\theta$ $\times$ ${\rm d}\phi$ of spherical coordinate system angles. In that case the surface of the unit sphere inside the conical surface is approximately rectangular, with sides ${\rm d}\theta$ and $\sin(\theta){\rm d}\phi$. That makes the enclosed solid angle equal to ${\rm d}\Omega = \sin(\theta){\rm d}\theta{\rm d}\phi$.
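Summing ${\rm d}\Omega = \sin(\theta){\rm d}\theta{\rm d}\phi$ over the whole sphere should give the full-sphere solid angle $4\pi$ steradians; a midpoint-rule sketch (the number of steps `n` is an arbitrary choice):

```python
import math

# Sum sin(theta) dtheta over 0 <= theta <= pi, then multiply by the
# full 2*pi range of phi: the total solid angle should be 4*pi.
n = 1000
dtheta = math.pi / n
omega = 2 * math.pi * sum(math.sin((i + 0.5) * dtheta) * dtheta
                          for i in range(n))
print(omega / math.pi)  # close to 4
```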

$B$
May indicate:

${\cal B}$
May indicate:

$b$
May indicate:

basis    
A basis is a minimal set of vectors or functions that you can write all other vectors or functions in terms of. For example, the unit vectors ${\hat\imath}$, ${\hat\jmath}$, and ${\hat k}$ are a basis for normal three-dimensional space. Every three-dimensional vector can be written as a linear combination of the three.

$C$
May indicate:

$^\circ$C    
Degrees Centigrade. A commonly used temperature scale that has the value $-273.15\;^\circ$C instead of zero when systems are in their ground state. Recommendation: use Kelvin (K) instead. However, differences in temperature are the same in Centigrade as in Kelvin.

$c$
May indicate:

Cauchy-Schwarz inequality    
The Cauchy-Schwarz inequality describes a limitation on the magnitude of inner products. In particular, it says that for any $f$ and $g$,

\begin{displaymath}
\vert\big\langle f\big\vert g\big\rangle \vert \le
\sqrt{\big\langle f\big\vert f\big\rangle }\sqrt{\big\langle g\big\vert g\big\rangle }
\end{displaymath}

In words, the magnitude of an inner product $\big\langle f\big\vert g\big\rangle $ is at most the magnitude (i.e. the length or norm) of $f$ times the one of $g$. For example, if $f$ and $g$ are real vectors, the inner product is the dot product and you have $f\cdot{g} = \vert f\vert\vert g\vert\cos\theta$, where $\vert f\vert$ is the length of vector $f$ and $\vert g\vert$ the one of $g$, and $\theta$ is the angle in between the two vectors. Since a cosine is less than one in magnitude, the Cauchy-Schwarz inequality is therefore true for vectors.

But it is true even if $f$ and $g$ are functions. To prove it, first recognize that $\big\langle f\big\vert g\big\rangle $ may in general be a complex number, which according to (2.6) must take the form $e^{{\rm i}\alpha}\vert\big\langle f\big\vert g\big\rangle \vert$, where $\alpha$ is some real number whose value is not important, and that $\big\langle g\big\vert f\big\rangle $ is its complex conjugate $e^{-{\rm i}\alpha}\vert\big\langle f\big\vert g\big\rangle \vert$. Now (yes, this is going to be some convoluted reasoning) look at

\begin{displaymath}
\big\langle f + \lambda e^{-{\rm i}\alpha} g\big\vert f + \lambda e^{-{\rm i}\alpha} g\big\rangle
\end{displaymath}

where $\lambda$ is any real number. The above inner product gives the square magnitude of $f+{\lambda}e^{-{\rm i}\alpha}g$, so it can never be negative. But if you multiply out, you get

\begin{displaymath}
\big\langle f\big\vert f\big\rangle
+ 2 \vert\big\langle f\big\vert g\big\rangle \vert \lambda
+ \big\langle g\big\vert g\big\rangle \lambda^2
\end{displaymath}

and if this quadratic form in $\lambda$ is never negative, its discriminant must be less than or equal to zero:

\begin{displaymath}
\vert\big\langle f\big\vert g\big\rangle \vert^2 \le
\big\langle f\big\vert f\big\rangle \big\langle g\big\vert g\big\rangle
\end{displaymath}

and taking square roots gives the Cauchy-Schwarz inequality.
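The inequality is also easy to spot-check numerically; a sketch for random complex vectors, assuming the physics convention that the left argument of the inner product is conjugated:

```python
import math
import random

def inner(f, g):
    # <f|g> with the left argument conjugated (physics convention)
    return sum(a.conjugate() * b for a, b in zip(f, g))

random.seed(1)
for _ in range(100):
    f = [complex(random.uniform(-1, 1), random.uniform(-1, 1))
         for _ in range(5)]
    g = [complex(random.uniform(-1, 1), random.uniform(-1, 1))
         for _ in range(5)]
    lhs = abs(inner(f, g))
    rhs = math.sqrt(inner(f, f).real) * math.sqrt(inner(g, g).real)
    assert lhs <= rhs + 1e-12   # |<f|g>| <= sqrt(<f|f>) sqrt(<g|g>)
print("no counterexamples found")
```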

Classical    
Can mean any older theory. In this work, most of the time it either means nonquantum or nonrelativistic.

$\cos$
The cosine function, a periodic function oscillating between 1 and $-1$ as shown in [40, pp. 40-]. See also sin.

curl    
The curl of a vector $\vec{v}$ is defined as $\mathop{\rm curl}\nolimits \vec{v} = \mathop{\rm {rot}}\vec{v} = \nabla\times\vec{v}$.

$D$
May indicate:

$\vec{D}$
Primitive (translation) vector of a reciprocal lattice.

${\cal D}$
Density of states.

D    
Often used to indicate a state with two units of orbital angular momentum.

$d$
May indicate:

$\vec{d}$
Primitive (translation) vector of a crystal lattice.

$
\setbox 0=\hbox{${\rm d}$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
Indicates a vanishingly small change or interval of the following variable. For example, ${\rm d}{x}$ can be thought of as a small segment of the $x$-axis.

In three dimensions, ${\rm d}^3{\skew0\vec r}$ $\vphantom0\raisebox{1.5pt}{$\equiv$}$ ${\rm d}{x}{\rm d}{y}{\rm d}{z}$ is an infinitesimal volume element. The symbol $\int$ means that you sum over all such infinitesimal volume elements.

derivative    
A derivative of a function is the ratio of a vanishingly small change in the function to the vanishingly small change in the independent variable that causes it. The derivative of $f(x)$ with respect to $x$ is written as ${\rm d}{f}$/${\rm d}{x}$, or also simply as $f'$. Note that the derivative of function $f(x)$ is again a function of $x$: a ratio $f'$ can be found at every point $x$. The derivative of a function $f(x,y,z)$ with respect to $x$ is written as $\partial{f}$/$\partial{x}$ to indicate that there are other variables, $y$ and $z$, that are held constant.

determinant    
The determinant of a square matrix $A$ is a single number indicated by $\vert A\vert$. If this number is nonzero, $A\vec{v}$ can be any vector $\vec{w}$ for the right choice of $\vec{v}$. Conversely, if the determinant is zero, $A\vec{v}$ can only produce a very limited set of vectors, though if it can produce a vector $\vec{w}$, it can do so for multiple vectors $\vec{v}$.

There is a recursive algorithm that allows you to compute determinants from increasingly bigger matrices in terms of determinants of smaller matrices. For a 1 $\times$ 1 matrix consisting of a single number, the determinant is simply that number:

\begin{displaymath}
\left\vert a_{11} \right\vert = a_{11}
\end{displaymath}

(This determinant should not be confused with the absolute value of the number, which is written the same way. Since you normally do not deal with 1 $\times$ 1 matrices, there is normally no confusion.) For 2 $\times$ 2 matrices, the determinant can be written in terms of 1 $\times$ 1 determinants:

\begin{displaymath}
\left\vert
\begin{array}{ll}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{array}
\right\vert
=
a_{11} \left\vert a_{22} \right\vert
- a_{12} \left\vert a_{21} \right\vert
\end{displaymath}

so the determinant is $a_{11}a_{22}-a_{12}a_{21}$ in short. For 3 $\times$ 3 matrices, you have

\begin{eqnarray*}
\lefteqn{
\left\vert
\begin{array}{lll}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\end{array}
\right\vert
\;=\;} \\
&&
a_{11}
\left\vert
\begin{array}{ll}
a_{22} & a_{23} \\
a_{32} & a_{33}
\end{array}
\right\vert
- a_{12}
\left\vert
\begin{array}{ll}
a_{21} & a_{23} \\
a_{31} & a_{33}
\end{array}
\right\vert
+ a_{13}
\left\vert
\begin{array}{ll}
a_{21} & a_{22} \\
a_{31} & a_{32}
\end{array}
\right\vert
\end{eqnarray*}

and you already know how to work out those 2 $\times$ 2 determinants, so you now know how to do 3 $\times$ 3 determinants. Written out fully:

\begin{displaymath}
a_{11}(a_{22}a_{33}-a_{23}a_{32})
-a_{12}(a_{21}a_{33}-a_{23}a_{31})
+a_{13}(a_{21}a_{32}-a_{22}a_{31})
\end{displaymath}

For 4 $\times$ 4 determinants,

\begin{eqnarray*}
\lefteqn{
\left\vert
\begin{array}{llll}
a_{11} & a_{12} & a_{13} & a_{14} \\
a_{21} & a_{22} & a_{23} & a_{24} \\
a_{31} & a_{32} & a_{33} & a_{34} \\
a_{41} & a_{42} & a_{43} & a_{44}
\end{array}
\right\vert
\;=\;} \\
&&
a_{11}
\left\vert
\begin{array}{lll}
a_{22} & a_{23} & a_{24} \\
a_{32} & a_{33} & a_{34} \\
a_{42} & a_{43} & a_{44}
\end{array}
\right\vert
- a_{12}
\left\vert
\begin{array}{lll}
a_{21} & a_{23} & a_{24} \\
a_{31} & a_{33} & a_{34} \\
a_{41} & a_{43} & a_{44}
\end{array}
\right\vert
\\ && {}
+ a_{13}
\left\vert
\begin{array}{lll}
a_{21} & a_{22} & a_{24} \\
a_{31} & a_{32} & a_{34} \\
a_{41} & a_{42} & a_{44}
\end{array}
\right\vert
- a_{14}
\left\vert
\begin{array}{lll}
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33} \\
a_{41} & a_{42} & a_{43}
\end{array}
\right\vert
\end{eqnarray*}

Etcetera. Note the alternating sign pattern of the terms.

As you might infer from the above, computing a good size determinant takes a large amount of work. Fortunately, it is possible to simplify the matrix to put zeros in suitable locations, and that can cut down the work of finding the determinant greatly. You are allowed to use the following manipulations without seriously affecting the computed determinant:

  1. You can transpose the matrix, i.e. change its columns into its rows.
  2. You can create zeros in a row by subtracting a suitable multiple of another row.
  3. You can also swap rows, as long as you remember that each time that you swap two rows, it will flip over the sign of the computed determinant.
  4. You can also multiply an entire row by a constant, but that will multiply the computed determinant by the same constant.
Applying these tricks in a systematic way, called “Gaussian elimination” or “reduction to lower triangular form”, you can eliminate all matrix coefficients $a_{ij}$ for which $j$ is greater than $i$, and that makes evaluating the determinant pretty much trivial.
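The recursive expansion described above translates directly into code. The sketch below (the function name is made up for illustration) expands along the first row with the alternating sign pattern, and checks the result against the fully written-out 3 $\times$ 3 formula:

```python
def det(a):
    # Recursive cofactor expansion along the first row,
    # with the alternating sign pattern noted above.
    n = len(a)
    if n == 1:
        return a[0][0]
    total = 0
    for j in range(n):
        # minor: delete row 0 and column j
        minor = [row[:j] + row[j+1:] for row in a[1:]]
        total += (-1)**j * a[0][j] * det(minor)
    return total

a = [[2, 1, 3], [0, 4, 1], [5, 2, 6]]
# compare with the fully written-out 3x3 formula
expected = (a[0][0]*(a[1][1]*a[2][2] - a[1][2]*a[2][1])
          - a[0][1]*(a[1][0]*a[2][2] - a[1][2]*a[2][0])
          + a[0][2]*(a[1][0]*a[2][1] - a[1][1]*a[2][0]))
assert det(a) == expected
assert det([[1, 2], [3, 4]]) == -2
```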

div(ergence)    
The divergence of a vector $\vec{v}$ is defined as $\mathop{\rm div}\nolimits \vec{v}$ $\vphantom0\raisebox{1.5pt}{$=$}$ $\nabla\cdot\vec{v}$.

$
\setbox 0=\hbox{$E$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May indicate:

$
\setbox 0=\hbox{${\cal E}$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May indicate:

$
\setbox 0=\hbox{$e$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May indicate:

e    
May indicate

$
\setbox 0=\hbox{$e^{{{\rm i}}ax}$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
Assuming that $a$ is an ordinary real number, and $x$ a real variable, $e^{{{\rm i}}ax}$ is a complex function of magnitude one. The derivative of $e^{{{\rm i}}ax}$ with respect to $x$ is ${{\rm i}}ae^{{{\rm i}}ax}$.
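Both claims, the unit magnitude and the derivative, are easy to confirm numerically; a minimal check with arbitrarily chosen $a$ and $x$:

```python
import cmath

a, x, h = 2.0, 0.7, 1e-6
f = lambda t: cmath.exp(1j * a * t)

# magnitude one for real a and x
assert abs(abs(f(x)) - 1.0) < 1e-12

# central-difference derivative versus the exact i a e^{iax}
numeric = (f(x + h) - f(x - h)) / (2 * h)
exact = 1j * a * f(x)
assert abs(numeric - exact) < 1e-7
```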

eigenvector    
A concept from linear algebra. A vector $\vec{v}$ is an eigenvector of a matrix $A$ if $\vec{v}$ is nonzero and $A\vec{v}$ $\vphantom0\raisebox{1.5pt}{$=$}$ $\lambda\vec{v}$ for some number $\lambda$ called the corresponding eigenvalue.

The basic quantum mechanics section of this book avoids linear algebra completely, and the advanced part almost completely. The few exceptions are almost all two-dimensional matrix eigenvalue problems. In case you did not have any linear algebra, here is the solution: the two-dimensional matrix eigenvalue problem

\begin{displaymath}
\left(\begin{array}{cc} a_{11}&a_{12} \\ a_{21}&a_{22} \end{array}\right)
\vec v = \lambda \vec v
\end{displaymath}

has eigenvalues that are the two roots of the quadratic equation

\begin{displaymath}
\lambda^2 - (a_{11}+a_{22})\lambda + a_{11}a_{22}-a_{12}a_{21} = 0
\end{displaymath}

The corresponding eigenvectors are

\begin{displaymath}
\vec v_1 =
\left(\begin{array}{c} a_{12} \\ \lambda_1-a_{11} \end{array}\right)
\qquad
\vec v_2 =
\left(\begin{array}{c} \lambda_2-a_{22} \\ a_{21} \end{array}\right)
\end{displaymath}

On occasion you may have to swap $\lambda_1$ and $\lambda_2$ to use these formulae. If $\lambda_1$ and $\lambda_2$ are equal, there might not be two eigenvectors that are not multiples of each other; then the matrix is called defective. However, Hermitian matrices are never defective.
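As an illustrative sketch (function and variable names are made up), the quadratic for the eigenvalues and the eigenvector formulae above can be checked by verifying $A\vec{v}$ $=$ $\lambda\vec{v}$ for an example matrix:

```python
import cmath

def eig2(a11, a12, a21, a22):
    # eigenvalues: roots of lambda^2 - (a11+a22) lambda + (a11 a22 - a12 a21) = 0
    tr = a11 + a22
    det = a11 * a22 - a12 * a21
    disc = cmath.sqrt(tr * tr - 4 * det)
    lam1 = (tr + disc) / 2
    lam2 = (tr - disc) / 2
    # eigenvectors from the formulae above
    v1 = (a12, lam1 - a11)
    v2 = (lam2 - a22, a21)
    return lam1, lam2, v1, v2

lam1, lam2, v1, v2 = eig2(2.0, 1.0, 1.0, 3.0)

def check(lam, v):
    # verify A v = lambda v componentwise for the example matrix
    av = (2.0 * v[0] + 1.0 * v[1], 1.0 * v[0] + 3.0 * v[1])
    return abs(av[0] - lam * v[0]) < 1e-12 and abs(av[1] - lam * v[1]) < 1e-12

assert check(lam1, v1) and check(lam2, v2)
```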

See also “matrix” and “determinant.”

eV    
The electron volt, a commonly used unit of energy equal to 1.602,176,57 10$^{-19}$ J.

exponential function    
A function of the form $e^{\ldots}$, also written as $\exp(\ldots)$. See function and $e$.

$
\setbox 0=\hbox{$F$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May indicate:

$
\setbox 0=\hbox{${\cal F}$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
Fock operator.

$
\setbox 0=\hbox{$f$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May indicate:

function    
A mathematical object that associates values with other values. A function $f(x)$ associates every value of $x$ with a value $f$. For example, the function $f(x)$ $\vphantom0\raisebox{1.5pt}{$=$}$ $x^2$ associates $x$ $\vphantom0\raisebox{1.5pt}{$=$}$ 0 with $f$ $\vphantom0\raisebox{1.5pt}{$=$}$ 0, $x$ $\vphantom0\raisebox{1.5pt}{$=$}$ $\frac12$ with $f$ $\vphantom0\raisebox{1.5pt}{$=$}$ $\frac14$, $x$ $\vphantom0\raisebox{1.5pt}{$=$}$ 1 with $f$ $\vphantom0\raisebox{1.5pt}{$=$}$ 1, $x$ $\vphantom0\raisebox{1.5pt}{$=$}$ 2 with $f$ $\vphantom0\raisebox{1.5pt}{$=$}$ 4, $x$ $\vphantom0\raisebox{1.5pt}{$=$}$ 3 with $f$ $\vphantom0\raisebox{1.5pt}{$=$}$ 9, and more generally, any arbitrary value of $x$ with the square of that value $x^2$. Similarly, function $f(x)$ $\vphantom0\raisebox{1.5pt}{$=$}$ $x^3$ associates any arbitrary $x$ with its cube $x^3$, $f(x)$ $\vphantom0\raisebox{1.5pt}{$=$}$ $\sin(x)$ associates any arbitrary $x$ with the sine of that value, etcetera.

One way of thinking of a function is as a procedure that allows you, whenever given a number, to compute another number.

A wave function $\Psi(x,y,z)$ associates each spatial position $(x,y,z)$ with a wave function value. Going beyond mathematics, its square magnitude associates any spatial position with the relative probability of finding the particle near there.

functional    
A functional associates entire functions with single numbers. For example, the expectation energy is mathematically a functional: it associates any arbitrary wave function with a number: the value of the expectation energy if physics is described by that wave function.

$
\setbox 0=\hbox{$G$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May indicate:

$
\setbox 0=\hbox{$g$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May indicate:

Gauss' Theorem    
This theorem, also called divergence theorem or Gauss-Ostrogradsky theorem, says that for a continuously differentiable vector $\vec{v}$,

\begin{displaymath}
\int_V \nabla \cdot \vec v {\,\rm d}V
=
\int_A \vec v \cdot \vec n {\,\rm d}A
\end{displaymath}

where the first integral is over the volume of an arbitrary region and the second integral is over the entire surface area of that region; $\vec{n}$ is the unit vector normal to the surface at each point.
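As an illustration, the theorem can be checked numerically for the hypothetical field $\vec{v}=(x^2,y^2,z^2)$ on the unit cube, using a simple midpoint rule for both integrals:

```python
def v(x, y, z):
    return (x*x, y*y, z*z)

def div_v(x, y, z):
    return 2*x + 2*y + 2*z

n = 40
h = 1.0 / n
mids = [(i + 0.5) * h for i in range(n)]

# volume integral of div v over the unit cube
vol = sum(div_v(x, y, z) for x in mids for y in mids for z in mids) * h**3

# surface integral of v . n over the six faces (outward normals)
flux = 0.0
for a in mids:
    for b in mids:
        flux += (v(1, a, b)[0] - v(0, a, b)[0]) * h * h   # x = 1 and x = 0 faces
        flux += (v(a, 1, b)[1] - v(a, 0, b)[1]) * h * h   # y = 1 and y = 0 faces
        flux += (v(a, b, 1)[2] - v(a, b, 0)[2]) * h * h   # z = 1 and z = 0 faces

# both integrals equal 3 for this field
assert abs(vol - flux) < 1e-9
```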

grad(ient)    
The gradient of a scalar $f$ is defined as $\mathop{\rm grad}\nolimits {f}$ $\vphantom0\raisebox{1.5pt}{$=$}$ $\nabla{f}$.

$
\setbox 0=\hbox{$H$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May indicate:

$
\setbox 0=\hbox{$h$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May indicate:

$
\setbox 0=\hbox{$\hbar$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
The reduced Planck constant, equal to 1.054,571,73 10$^{-34}$ J s. A measure of the uncertainty of nature in quantum mechanics. Multiply by $2\pi$ to get the original Planck constant $h$. For nuclear physics, a frequently helpful value is $\hbar{c}$ $\vphantom0\raisebox{1.5pt}{$=$}$ 197.326,972 MeV fm.

hypersphere    
A hypersphere is the generalization of the normal three-dimensional sphere to $n$-dimensional space. A sphere of radius $R$ in three-dimensional space consists of all points satisfying

\begin{displaymath}
r_1^2 + r_2^2 + r_3^2 \mathrel{\raisebox{-.7pt}{$\leqslant$}}R^2
\end{displaymath}

where $r_1$, $r_2$, and $r_3$ are Cartesian coordinates with origin at the center of the sphere. Similarly a hypersphere in $n$-dimensional space is defined as all points satisfying

\begin{displaymath}
r_1^2 + r_2^2 + \ldots + r_n^2 \mathrel{\raisebox{-.7pt}{$\leqslant$}}R^2
\end{displaymath}

So a two-dimensional hypersphere of radius $R$ is really just a circle of radius $R$. A one-dimensional hypersphere is really just the line segment $\vphantom0\raisebox{1.5pt}{$-$}$$R$ $\raisebox{-.3pt}{$\leqslant$}$ $x$ $\raisebox{-.3pt}{$\leqslant$}$ $R$.

The “volume” ${\cal V}_n$ and surface “area” $A_n$ of an $n$-dimensional hypersphere are given by

\begin{displaymath}
{\cal V}_n = C_n R^n \qquad A_n = n C_n R^{n-1}
\end{displaymath}


\begin{displaymath}
C_n = \left\{
\begin{array}{l}
\strut (2\pi)^{n/2} \big/ (2 \times 4 \times \ldots \times n)
\quad \mbox{if $n$ is even} \\
\strut 2 (2\pi)^{(n-1)/2} \big/ (1 \times 3 \times \ldots \times n)
\quad \mbox{if $n$ is odd}
\end{array}
\right.
\end{displaymath}

(This is readily derived recursively. For a sphere of unit radius, note that the $n$-dimensional volume is an integration of $n{-}1$-dimensional volumes with respect to $r_1$. Then renotate $r_1$ as $\sin\phi$ and look up the resulting integral in a table of integrals. The formula for the area follows because ${\cal V}=\int{A}{\rm d}{r}$, where $r$ is the distance from the origin.) In three dimensions, $C_3=4\pi/3$ according to the above formula. That makes the three-dimensional volume $4{\pi}R^3/3$ equal to the actual volume of the sphere, and the three-dimensional area $4{\pi}R^2$ equal to the actual surface area. On the other hand, in two dimensions $C_2=\pi$. That makes the two-dimensional volume ${\pi}R^2$ really the area of the circle. Similarly the two-dimensional surface area $2{\pi}R$ is really the perimeter of the circle. In one dimension $C_1=2$, so the volume $2R$ is really the length of the interval, and the area 2 is really its number of end points.
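The even/odd formula for $C_n$ is easy to tabulate; the sketch below checks it against the low-dimensional values just discussed, and against the equivalent closed form $C_n = \pi^{n/2}/\Gamma(n/2+1)$:

```python
import math

def C(n):
    # C_n from the even/odd formula above
    if n % 2 == 0:
        prod = 1
        for k in range(2, n + 1, 2):   # 2 x 4 x ... x n
            prod *= k
        return (2 * math.pi)**(n // 2) / prod
    prod = 1
    for k in range(1, n + 1, 2):       # 1 x 3 x ... x n
        prod *= k
    return 2 * (2 * math.pi)**((n - 1) // 2) / prod

assert abs(C(1) - 2) < 1e-12
assert abs(C(2) - math.pi) < 1e-12
assert abs(C(3) - 4 * math.pi / 3) < 1e-12
# closed-form check: C_n = pi^(n/2) / Gamma(n/2 + 1)
assert abs(C(5) - math.pi**2.5 / math.gamma(3.5)) < 1e-12
```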

Often the infinitesimal $n$-dimensional volume element ${\rm d}^n{\skew0\vec r}$ is needed. This is the infinitesimal integration element for integration over all coordinates. It is:

\begin{displaymath}
{\rm d}^n{\skew0\vec r}= {\rm d}r_1 {\rm d}r_2 \ldots {\rm d}r_n = {\rm d}A_n {\rm d}r
\end{displaymath}

Specifically, in two dimensions:

\begin{displaymath}
{\rm d}^2{\skew0\vec r}= {\rm d}r_1 {\rm d}r_2 = {\rm d}x {\rm d}y = (r {\,\rm d}\theta) {\rm d}r
= {\rm d}A_2 {\rm d}r
\end{displaymath}

while in three dimensions:

\begin{displaymath}
{\rm d}^3{\skew0\vec r}= {\rm d}r_1 {\rm d}r_2 {\rm d}r_3 = {\rm d}x {\rm d}y {\rm d}z
= (r^2 \sin\theta {\,\rm d}\theta {\rm d}\phi) {\rm d}r = {\rm d}A_3 {\rm d}r
\end{displaymath}

The expressions in parentheses are ${\rm d}{A}_2$ in polar coordinates, respectively ${\rm d}{A}_3$ in spherical coordinates.

$
\setbox 0=\hbox{$I$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May indicate:

$
\setbox 0=\hbox{$\Im$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
The imaginary part of a complex number. If $c$ $\vphantom0\raisebox{1.5pt}{$=$}$ $c_r+{{\rm i}}c_i$ with $c_r$ and $c_i$ real numbers, then $\Im(c)$ $\vphantom0\raisebox{1.5pt}{$=$}$ $c_i$. Note that $c-c^*$ $\vphantom0\raisebox{1.5pt}{$=$}$ $2{\rm i}\Im(c)$.
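In code, Python’s complex type exposes the same decomposition; a trivial check of the identity $c-c^*$ $=$ $2{\rm i}\,\Im(c)$:

```python
c = 3 + 4j

# imaginary part, and the identity c - c* = 2 i Im(c)
assert c.imag == 4
assert c - c.conjugate() == 2j * c.imag
```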

$
\setbox 0=\hbox{${\cal I}$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May indicate:

$
\setbox 0=\hbox{$i$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May indicate: Not to be confused with ${\rm i}$.

$
\setbox 0=\hbox{${\hat\imath}$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
The unit vector in the $x$-direction.

$
\setbox 0=\hbox{${\rm i}$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
The standard square root of minus one: ${\rm i}$ $\vphantom0\raisebox{1.5pt}{$=$}$ $\sqrt{-1}$, ${\rm i}^2$ $\vphantom0\raisebox{1.5pt}{$=$}$ $\vphantom0\raisebox{1.5pt}{$-$}$1, 1/${\rm i}$ $\vphantom0\raisebox{1.5pt}{$=$}$ $-{\rm i}$, ${\rm i}^*$ $\vphantom0\raisebox{1.5pt}{$=$}$ $-{\rm i}$.

index notation    
A more concise and powerful way of writing vector and matrix components by using a numerical index to indicate the components. For Cartesian coordinates, you might number the coordinates $x$ as 1, $y$ as 2, and $z$ as 3. In that case, a sum like $v_x+v_y+v_z$ can be more concisely written as $\sum_i{v}_i$. And a statement like $v_x$ $\raisebox{.2pt}{$\ne$}$ 0, $v_y$ $\raisebox{.2pt}{$\ne$}$ 0, $v_z$ $\raisebox{.2pt}{$\ne$}$ 0 can be more compactly written as $v_i$ $\raisebox{.2pt}{$\ne$}$ 0. To really see how it simplifies the notations, have a look at the matrix entry. (And that one shows only 2 by 2 matrices. Just imagine 100 by 100 matrices.)

iff    
Emphatic if. Should be read as “if and only if.”

integer    
Integer numbers are the whole numbers: $\ldots,-2,-1,0,1,2,3,4,\ldots$.

inverse    
(Of matrices or operators.) If an operator $A$ converts a vector or function $f$ into a vector or function $g$, then the inverse of the operator $A^{-1}$ converts $g$ back into $f$. For example, the operator 2 converts vectors or functions into two times themselves, and its inverse operator $\frac12$ converts these back into the originals. Some operators do not have inverses. For example, the operator 0 converts all vectors or functions into zero. But given zero, there is no way to figure out what function or vector it came from; the inverse operator does not exist.

irrotational    
A vector $\vec{v}$ is irrotational if its curl $\nabla$ $\times$ $\vec{v}$ is zero.

iso    
Means “equal” or “constant.”

isolated    
An isolated system is one that does not interact with its surroundings in any way. No heat is transferred to or from the surroundings, and no work is done on or by the surroundings.

$
\setbox 0=\hbox{$J$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May indicate:

$
\setbox 0=\hbox{$j$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May indicate:

$
\setbox 0=\hbox{${\hat\jmath}$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
The unit vector in the $y$-direction.

$
\setbox 0=\hbox{$K$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May indicate:

$
\setbox 0=\hbox{${\mathscr K}$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
Thomson (Kelvin) coefficient.

K    
May indicate:

$
\setbox 0=\hbox{$k$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May indicate:

$
\setbox 0=\hbox{${\hat k}$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
The unit vector in the $z$-direction.

$
\setbox 0=\hbox{$k_{\rm B}$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
Boltzmann constant. Equal to 1.380,649 10$^{-23}$ J/K. Relates absolute temperature to a typical unit of heat motion energy.

kmol    
A kilomole refers to 6.022,141,3 10$^{26}$ atoms or molecules. The mass in kg of this many particles is about equal to the number of protons and neutrons in the atom nucleus/molecule nuclei. So a kmol of hydrogen atoms has a mass of about 1 kg, and a kmol of hydrogen molecules about 2 kg. A kmol of helium atoms has a mass of about 4 kg, since helium has two protons and two neutrons in its nucleus. These numbers are not very accurate, not just because the electron masses are ignored, and the free neutron and proton masses are somewhat different, but also because of relativistic effects that cause actual nuclear masses to deviate from the sum of the free proton and neutron masses.
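The statement that the mass in kg of a kmol roughly equals the nucleon count can be checked against the atomic mass constant; the value of $m_{\rm u}$ below is an assumed CODATA value, not one quoted in this entry:

```python
N_A_kmol = 6.0221413e26   # particles per kmol, value quoted above
m_u = 1.66053907e-27      # atomic mass constant in kg (assumed CODATA value)

# a kmol of carbon-12, at 12 nucleons per atom, should mass almost exactly 12 kg
mass_c12_kmol = N_A_kmol * 12 * m_u
assert abs(mass_c12_kmol - 12.0) < 0.01
```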

$
\setbox 0=\hbox{$L$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May indicate:

$
\setbox 0=\hbox{${\cal L}$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
Lagrangian.

L    
The atomic states or orbitals with theoretical Bohr energy $E_2$.

$
\setbox 0=\hbox{$l$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May indicate:

$
\setbox 0=\hbox{$\ell$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May indicate:

$
\setbox 0=\hbox{$\pounds $}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
Lagrangian density. This is best understood in the UK.

$
\setbox 0=\hbox{$\lim$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
Indicates the final result of an approaching process. $\lim_{\varepsilon\to0}$ indicates for practical purposes the value of the following expression when $\varepsilon$ is extremely small.

linear combination    
A very generic concept indicating sums of objects times coefficients. For example, a position vector ${\skew0\vec r}$ in basic physics is the linear combination $x{\hat\imath}+y{\hat\jmath}+z{\hat k}$ with the objects the unit vectors ${\hat\imath}$, ${\hat\jmath}$, and ${\hat k}$ and the coefficients the position coordinates $x$, $y$, and $z$. A linear combination of a set of functions $f_1(x),f_2(x),f_3(x),\ldots,f_n(x)$ would be the function

\begin{displaymath}
c_1 f_1(x) + c_2 f_2(x) + c_3 f_3(x) + \ldots + c_n f_n(x)
\end{displaymath}

where $c_1,c_2,c_3,\ldots,c_n$ are constants, i.e. independent of $x$.

linear dependence    
A set of vectors or functions is linearly dependent if at least one of the set can be expressed in terms of the others. Consider the example of a set of functions $f_1(x),f_2(x),\ldots,f_n(x)$. This set is linearly dependent if

\begin{displaymath}
c_1 f_1(x) + c_2 f_2(x) + c_3 f_3(x) + \ldots + c_n f_n(x) = 0
\end{displaymath}

where at least one of the constants $c_1,c_2,c_3,\ldots,c_n$ is nonzero. To see why, suppose that say $c_2$ is nonzero. Then you can divide by $c_2$ and rearrange to get

\begin{displaymath}
f_2(x) = - \frac{c_1}{c_2} f_1(x) - \frac{c_3}{c_2} f_3(x) - \ldots
- \frac{c_n}{c_2} f_n(x)
\end{displaymath}

That expresses $f_2(x)$ in terms of the other functions.

linear independence    
A set of vectors or functions is linearly independent if none of the set can be expressed in terms of the others. Consider the example of a set of functions $f_1(x),f_2(x),\ldots,f_n(x)$. This set is linearly independent if

\begin{displaymath}
c_1 f_1(x) + c_2 f_2(x) + c_3 f_3(x) + \ldots + c_n f_n(x) = 0
\end{displaymath}

only if every one of the constants $c_1,c_2,c_3,\ldots,c_n$ is zero. To see why, assume that say $f_2(x)$ could be expressed in terms of the others,

\begin{displaymath}
f_2(x) = C_1 f_1(x) + C_3 f_3(x) + \ldots + C_n f_n(x)
\end{displaymath}

Then taking $c_2$ $\vphantom0\raisebox{1.5pt}{$=$}$ 1, $c_1$ $\vphantom0\raisebox{1.5pt}{$=$}$ $-C_1$, $c_3$ $\vphantom0\raisebox{1.5pt}{$=$}$ $-C_3$, ...$c_n$ $\vphantom0\raisebox{1.5pt}{$=$}$ $-C_n$, the condition above would be violated. So $f_2$ cannot be expressed in terms of the others.
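For plain vectors rather than functions, dependence of two 2-component vectors reduces to a zero determinant of the matrix with those vectors as columns; a minimal illustration (the helper name is made up):

```python
def dependent2(v1, v2):
    # two 2-component vectors are linearly dependent exactly when the
    # determinant of the matrix with v1 and v2 as columns is zero
    return v1[0] * v2[1] - v1[1] * v2[0] == 0

assert dependent2((1, 2), (2, 4))        # v2 = 2 v1, so dependent
assert not dependent2((1, 0), (1, 1))    # neither is a multiple of the other
```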

$
\setbox 0=\hbox{$M$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May indicate:

$
\setbox 0=\hbox{${\cal M}$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
Mirror operator.

M    
The atomic states or orbitals with theoretical Bohr energy $E_3$.

$
\setbox 0=\hbox{$m$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May indicate:

matrix    
A table of numbers.

As a simple example, a two-dimensional matrix $A$ is a table of four numbers called $a_{11}$, $a_{12}$, $a_{21}$, and $a_{22}$:

\begin{displaymath}
\left(
\begin{array}{ll}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{array}
\right)
\end{displaymath}

unlike a two-dimensional (ket) vector $\vec{v}$, which would consist of only two numbers $v_1$ and $v_2$ arranged in a column:

\begin{displaymath}
\left(
\begin{array}{l}
v_1 \\
v_2
\end{array}
\right)
\end{displaymath}

(Such a vector can be seen as a rectangular matrix of size 2 $\times$ 1, but let’s not get into that.)

In index notation, a matrix $A$ is a set of numbers $\{a_{ij}\}$ indexed by two indices. The first index $i$ is the row number, the second index $j$ is the column number. A matrix turns a vector $\vec{v}$ into another vector $\vec{w}$ according to the recipe

\begin{displaymath}
w_i = \sum_{\mbox{{\scriptsize all }}j} a_{ij} v_j \quad \mbox{for all $i$}
\end{displaymath}

where $v_j$ stands for “the $j$-th component of vector $\vec{v}$,” and $w_i$ for “the $i$-th component of vector $\vec{w}$.”

As an example, the product of $A$ and $\vec{v}$ above is by definition

\begin{displaymath}
\left(
\begin{array}{ll}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{array}
\right)
\left(
\begin{array}{l}
v_1 \\
v_2
\end{array}
\right)
=
\left(
\begin{array}{l}
a_{11} v_1 + a_{12} v_2 \\
a_{21} v_1 + a_{22} v_2
\end{array}
\right)
\end{displaymath}

which is another two-dimensional ket vector.

Note that in matrix multiplications like the example above, in geometric terms you take dot products between the rows of the first factor and the column of the second factor.

To multiply two matrices together, just think of the columns of the second matrix as separate vectors. For example:

\begin{displaymath}
\left(
\begin{array}{ll}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{array}
\right)
\left(
\begin{array}{ll}
b_{11} & b_{12} \\
b_{21} & b_{22}
\end{array}
\right)
=
\left(
\begin{array}{ll}
a_{11} b_{11} + a_{12} b_{21} & a_{11} b_{12} + a_{12} b_{22} \\
a_{21} b_{11} + a_{22} b_{21} & a_{21} b_{12} + a_{22} b_{22}
\end{array}
\right)
\end{displaymath}

which is another two-dimensional matrix. In index notation, the $ij$ component of the product matrix has value $\sum_ka_{ik}b_{kj}$.
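The two index-notation recipes, $w_i$ $=$ $\sum_j a_{ij}v_j$ and $\sum_ka_{ik}b_{kj}$, are only a few lines of code; a sketch with hypothetical helper names:

```python
def matvec(a, v):
    # w_i = sum_j a_ij v_j, for all i
    return [sum(a[i][j] * v[j] for j in range(len(v))) for i in range(len(a))]

def matmul(a, b):
    # (ab)_ij = sum_k a_ik b_kj : treat each column of b as a separate vector
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
assert matvec(a, [1, 1]) == [3, 7]
assert matmul(a, b) == [[19, 22], [43, 50]]
```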

The zero matrix is like the number zero; it does not change a matrix it is added to and turns whatever it is multiplied with into zero. A zero matrix is zero everywhere. In two dimensions:

\begin{displaymath}
\left(
\begin{array}{ll}
0 & 0 \\
0 & 0
\end{array}
\right)
\end{displaymath}

A unit matrix is the equivalent of the number one for matrices; it does not change the quantity it is multiplied with. A unit matrix is one on its “main diagonal” and zero elsewhere. The 2 by 2 unit matrix is:

\begin{displaymath}
\left(
\begin{array}{ll}
1 & 0 \\
0 & 1
\end{array}
\right)
\end{displaymath}

More generally the coefficients, $\{\delta_{ij}\}$, of a unit matrix are one if $i$ $\vphantom0\raisebox{1.5pt}{$=$}$ $j$ and zero otherwise.

The transpose of a matrix $A$, $A^{\rm {T}}$, is what you get if you switch the two indices. Graphically, it turns its rows into its columns and vice versa. The Hermitian adjoint $A^\dagger$ is what you get if you switch the two indices and then take the complex conjugate of every element. If you want to take a matrix to the other side of an inner product, you will need to change it to its Hermitian adjoint. Hermitian matrices are equal to their Hermitian adjoint, so this does nothing for them.

See also determinant and eigenvector.

metric prefixes    
In the metric system, the prefixes Y, Z, E, P, T, G, M, and k stand for 10$^{i}$ with $i$ $\vphantom0\raisebox{1.5pt}{$=$}$ 24, 21, 18, 15, 12, 9, 6, and 3, respectively. Similarly, d, c, m, $\mu$, n, p, f, a, z, y stand for 10$^{-i}$ with $i$ $\vphantom0\raisebox{1.5pt}{$=$}$ 1, 2, 3, 6, 9, 12, 15, 18, 21, and 24 respectively. For example, 1 ns is 10$^{-9}$ seconds. Corresponding names are yotta, zetta, exa, peta, tera, giga, mega, kilo, deci, centi, milli, micro, nano, pico, femto, atto, zepto, and yocto.
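The prefix table is naturally a dictionary from prefix to power of ten; a small sketch (with `u` standing in for the Greek $\mu$, and the helper name made up):

```python
# metric prefixes as powers of ten, as listed above
prefixes = {
    'Y': 24, 'Z': 21, 'E': 18, 'P': 15, 'T': 12, 'G': 9, 'M': 6, 'k': 3,
    'd': -1, 'c': -2, 'm': -3, 'u': -6,   # 'u' stands in for the Greek mu
    'n': -9, 'p': -12, 'f': -15, 'a': -18, 'z': -21, 'y': -24,
}

def to_si(value, prefix):
    # convert e.g. 1 ns -> 1e-9 s
    return value * 10.0**prefixes[prefix]

assert abs(to_si(1, 'n') - 1e-9) < 1e-22
assert abs(to_si(5, 'k') - 5000.0) < 1e-9
```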

molecular mass    
Typical thermodynamics books for engineers tabulate values of the molecular mass, as a nondimensional number. The bottom line first: these numbers should have been called the molar mass of the substance, for the naturally occurring isotope ratio on earth. And they should have been given units of kg/kmol. That is how you use these numbers in actual computations. So just ignore the fact that what these books really tabulate is officially called the relative molecular mass for the natural isotope ratio.

Don’t blame these textbooks too much for making a mess of things. Physicists have historically bandied about a zillion different names for what is essentially a single number. Terms like “molecular mass,” “relative molecular mass,” “molecular weight,” “atomic mass,” “relative atomic mass,” “atomic weight,” “molar mass,” and “relative molar mass” are basically all names for the same thing.

All of these have values that equal the mass of a molecule relative to a reference value for a single nucleon. So these values are about equal to the number of nucleons (protons and neutrons) in the nuclei of a single molecule. (For an isotope ratio, that becomes the average number of nucleons. Do note that nuclei are sufficiently relativistic that a proton or neutron can be noticeably heavier in one nucleus than another, and that neutrons are a bit heavier than protons even in isolation.) The official reference nucleon weight is defined based on the most common carbon isotope, carbon-12. Since carbon-12 has 6 protons plus 6 neutrons, the reference nucleon weight is taken to be one twelfth of the carbon-12 atomic weight. That is called the unified atomic mass unit (u) or dalton (Da). The atomic mass unit (amu) is an older, virtually identical unit, but physicists and chemists could never quite agree on what its value was. No kidding.

If you want to be politically correct, the deal is as follows. Molecular mass is just what the term says, the mass of a molecule, in mass units. (I found zero evidence in either the IUPAC Gold Book or NIST SP811 for the claim of Wikipedia that it must always be expressed in u.) Molar mass is just what the words say, the mass of a mole. Official SI units are kg/mol, but you will find it in g/mol, equivalent to kg/kmol. (You cannot expect enough brains from international committees to realize that if you define the kg and not the g as unit of mass, then it would be a smart idea to also define kmol instead of mol as unit of particle count.) Simply ignore relative atomic and molecular masses; you do not care about them. (I found zero evidence in either the IUPAC Gold Book or NIST SP811 for the claims of Wikipedia that the molecular mass cannot be an average over isotopes or that the molar mass must be for a natural isotope ratio. In fact, NIST uses molar mass of carbon-12 and specifically includes the possibility of an average in the relative molecular mass.)

See also the atomic mass constant $m_{\rm {u}}$.

$
\setbox 0=\hbox{$N$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May indicate:

N    
May indicate

$
\setbox 0=\hbox{$n$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May indicate: and maybe some other stuff.

n    
May indicate

natural    
Natural numbers are the numbers: $1,2,3,4,\ldots$.

normal    
A normal operator or matrix is one that has orthonormal eigenfunctions or eigenvectors. Since eigenvectors are not orthonormal in general, a normal operator or matrix is abnormal! Another example of a highly confusing term. Such a matrix should have been called orthodiagonalizable or something of the kind. To be fair, the author is not aware of any physicists being involved in this particular term; it may be the mathematicians that are to blame here.

For an operator or matrix $A$ to be normal, it must commute with its Hermitian adjoint, $[A,A^\dagger]$ $\vphantom0\raisebox{1.5pt}{$=$}$ 0. Hermitian matrices are normal since they are equal to their Hermitian adjoint. Skew-Hermitian matrices are normal since they are equal to the negative of their Hermitian adjoint. Unitary matrices are normal because they are the inverse of their Hermitian adjoint.
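The commutator test $[A,A^\dagger]$ $=$ 0 is straightforward to check numerically; below, a Hermitian example passes while a defective shift matrix fails (helper names are illustrative):

```python
def adjoint(a):
    # Hermitian adjoint: transpose, then complex conjugate every element
    n = len(a)
    return [[a[j][i].conjugate() for j in range(n)] for i in range(n)]

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def is_normal(a):
    # A is normal when A A^dagger - A^dagger A = 0
    ad = adjoint(a)
    p, q = matmul(a, ad), matmul(ad, a)
    return all(abs(p[i][j] - q[i][j]) < 1e-12
               for i in range(len(a)) for j in range(len(a)))

hermitian = [[2 + 0j, 1 - 1j], [1 + 1j, 3 + 0j]]
assert is_normal(hermitian)

shift = [[0j, 1 + 0j], [0j, 0j]]   # defective shift matrix, not normal
assert not is_normal(shift)
```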

O    
May indicate the origin of the coordinate system.

opposite    
The opposite of a number $a$ is $-a$. In other words, it is the additive inverse.

$
\setbox 0=\hbox{$P$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May indicate:

$
\setbox 0=\hbox{${\cal P}$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
Particle exchange operator. Exchanges the positions and spins of two identical particles.

$
\setbox 0=\hbox{${\mathscr P}$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
Peltier coefficient.

P    
Often used to indicate a state with one unit of orbital angular momentum.

$
\setbox 0=\hbox{$p$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May indicate:

p    
May indicate

perpendicular bisector    
For two given points $P$ and $Q$, the perpendicular bisector consists of all points $R$ that are equally far from $P$ as they are from $Q$. In two dimensions, the perpendicular bisector is the line that passes through the point exactly half way in between $P$ and $Q$, and that is orthogonal to the line connecting $P$ and $Q$. In three dimensions, the perpendicular bisector is the plane that passes through the point exactly half way in between $P$ and $Q$, and that is orthogonal to the line connecting $P$ and $Q$. In vector notation, the perpendicular bisector of points $P$ and $Q$ is all points $R$ whose radius vector ${\skew0\vec r}$ satisfies the equation:

\begin{displaymath}
({\skew0\vec r}-{\skew0\vec r}_P)\cdot({\skew0\vec r}_Q-{\skew0\vec r}_P)
= {\textstyle\frac{1}{2}}
({\skew0\vec r}_Q-{\skew0\vec r}_P)\cdot({\skew0\vec r}_Q-{\skew0\vec r}_P)
\end{displaymath}

(Note that the halfway point, with ${\skew0\vec r}-{\skew0\vec r}_P$ $\vphantom0\raisebox{1.5pt}{$=$}$ ${\textstyle\frac{1}{2}}({\skew0\vec r}_Q-{\skew0\vec r}_P)$, satisfies this formula, as does the halfway point plus any vector that is normal to $({\skew0\vec r}_Q-{\skew0\vec r}_P)$.)
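The vector equation can be tested directly: any point equal to the halfway point plus a vector normal to ${\skew0\vec r}_Q-{\skew0\vec r}_P$ satisfies it, and other points do not. A minimal sketch (the helper name is made up):

```python
def on_bisector(r, p, q, tol=1e-12):
    # test (r - p).(q - p) == (1/2)(q - p).(q - p),
    # i.e. r is as far from p as it is from q
    d = [qi - pi for qi, pi in zip(q, p)]
    lhs = sum((ri - pi) * di for ri, pi, di in zip(r, p, d))
    return abs(lhs - 0.5 * sum(di * di for di in d)) < tol

p, q = (0.0, 0.0), (2.0, 0.0)
assert on_bisector((1.0, 5.0), p, q)       # halfway point plus a normal vector
assert not on_bisector((0.5, 0.0), p, q)   # closer to p than to q
```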

phase angle    
Any complex number can be written in polar form as $c$ $\vphantom0\raisebox{1.5pt}{$=$}$ $\vert c\vert e^{{\rm i}\alpha}$ where both the magnitude $\vert c\vert$ and the phase angle $\alpha$ are real numbers. Note that when the phase angle varies from zero to $2\pi$, the complex number $c$ varies from positive real to positive imaginary to negative real to negative imaginary and back to positive real. When the complex number is plotted in the complex plane, the phase angle is the direction of the number relative to the origin. The phase angle $\alpha$ is often called the argument, but so is about everything else in mathematics, so that is not very helpful.

In complex time-dependent waves of the form $e^{{\rm i}({\omega}t-\phi)}$, and its real equivalent $\cos({\omega}t-\phi)$, the phase angle $\phi$ gives the angular argument of the wave at time zero.
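
In a language with complex arithmetic, magnitude and phase angle are immediate; the sketch below uses Python's standard `cmath` module (the sample number is an arbitrary choice):

```python
import cmath

# Polar form c = |c| e^{i alpha}: extract magnitude and phase angle,
# then rebuild the original number from them. Sample value is arbitrary.
c = -1.0 + 1.0j
mag, alpha = cmath.polar(c)                 # |c| and phase angle (radians)
rebuilt = mag * cmath.exp(1j * alpha)
print(mag, alpha)                           # sqrt(2) and 3*pi/4
```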

photon    
Unit of electromagnetic radiation (which includes light, x-rays, microwaves, etcetera). A photon has an energy $\hbar\omega$, where $\omega$ is its angular frequency, and a wavelength $2{\pi}c$/$\omega$, where $c$ is the speed of light.
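
As a numerical illustration, the energy of a visible-light photon follows directly from these relations; the 500 nm wavelength and the constant values below are round assumed figures, not quoted from this entry:

```python
import math

# Photon energy E = hbar*omega for green light, lambda ~ 500 nm.
# Constant values below are standard approximate figures (assumptions here).
hbar = 1.054_571_8e-34        # J s
c = 2.997_924_58e8            # m/s
eV = 1.602_176_6e-19          # J
wavelength = 500e-9           # m
omega = 2 * math.pi * c / wavelength    # angular frequency, rad/s
E = hbar * omega                         # photon energy, J
print(E / eV)                            # roughly 2.5 eV
```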

potential    
In order to optimize confusion, pretty much everything in physics that is scalar is called potential. Potential energy is routinely referred to concisely as potential. It is the energy that a particle can pick up from a force field by changing its position. Its unit is the joule. But an electric potential is taken to be per unit charge, which gives it units of volts. Then there are thermodynamic potentials like the chemical potential.

$
\setbox 0=\hbox{$p_x$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
Linear momentum in the $x$-direction. (In the one-dimensional cases at the end of the unsteady evolution chapter, the $x$ subscript is omitted.) Components in the $y$- and $z$-directions are $p_y$ and $p_z$. Classical Newtonian physics has $p_x$ $\vphantom0\raisebox{1.5pt}{$=$}$ $mu$ where $m$ is the mass and $u$ the velocity in the $x$-direction. In quantum mechanics, the possible values of $p_x$ are the eigenvalues of the operator ${\widehat p}_x$ which equals $\hbar\partial$/${\rm i}\partial{x}$. (But which becomes canonical momentum in a magnetic field.)
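
A plane wave $e^{{\rm i}kx}$ makes the eigenvalue statement concrete: applying $\hbar\partial$/${\rm i}\partial{x}$ returns $\hbar k$ times the wave. The sketch below checks this with a finite-difference derivative (units with $\hbar$ $\vphantom0\raisebox{1.5pt}{$=$}$ 1 and the value of $k$ are arbitrary choices):

```python
import cmath

# Check that psi(x) = e^{i k x} is an eigenfunction of
# p_hat = (hbar / i) d/dx, with eigenvalue hbar*k.
# hbar = 1 units and k = 3 are arbitrary choices for the illustration.
hbar, k = 1.0, 3.0

def psi(x):
    return cmath.exp(1j * k * x)

x, h = 0.4, 1e-6
dpsi_dx = (psi(x + h) - psi(x - h)) / (2 * h)   # central difference
p_value = (hbar / 1j) * dpsi_dx / psi(x)        # should be close to hbar*k
print(p_value)
```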

$
\setbox 0=\hbox{$Q$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May indicate:

$
\setbox 0=\hbox{$q$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May indicate:

$
\setbox 0=\hbox{$R$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May indicate:

$
\setbox 0=\hbox{${\cal R}$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
Rotation operator.

$
\setbox 0=\hbox{$\Re$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
The real part of a complex number. If $c$ $\vphantom0\raisebox{1.5pt}{$=$}$ $c_r+{{\rm i}}c_i$ with $c_r$ and $c_i$ real numbers, then $\Re(c)$ $\vphantom0\raisebox{1.5pt}{$=$}$ $c_r$. Note that $c+c^*$ $\vphantom0\raisebox{1.5pt}{$=$}$ $2\Re(c)$.
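
The identity $c+c^*$ $\vphantom0\raisebox{1.5pt}{$=$}$ $2\Re(c)$ is easy to spot-check; the sample value is arbitrary:

```python
# Spot-check Re(c) and the identity c + c* = 2 Re(c) for a sample number.
c = 3.0 - 4.0j
real_part = c.real
identity = c + c.conjugate()     # should equal 2 * Re(c), purely real
print(real_part, identity)
```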

$
\setbox 0=\hbox{$r$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May indicate:

$
\setbox 0=\hbox{${\skew0\vec r}$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
The position vector. In Cartesian coordinates $(x,y,z)$ or $x{\hat\imath}+y{\hat\jmath}+z{\hat k}$. In spherical coordinates $r\hat\imath_r$. Its three Cartesian components may be indicated by $r_1,r_2,r_3$ or by $x,y,z$ or by $x_1,x_2,x_3$.

reciprocal    
The reciprocal of a number $a$ is 1/$a$. In other words, it is the multiplicative inverse.

relativity    
The special theory of relativity accounts for the experimental observation that the speed of light $c$ is the same in all local coordinate systems. It necessarily drops the basic concepts of absolute time and length that were cornerstones in Newtonian physics.

Albert Einstein should be credited with the boldness to squarely face up to the unavoidable where others wavered. However, he should also be credited for the boldness of swiping the basic ideas from Lorentz and Poincaré without giving them proper, or any, credit. The evidence is very strong he was aware of both works, and his various arguments are almost carbon copies of those of Poincaré, but in his paper it looks like it all came from Einstein, with the existence of the earlier works not mentioned. (Note that the general theory of relativity, which is of no interest to this book, is almost surely properly credited to Einstein. But he was a lot less hungry then.)

Relativity implies that a length seen by an observer moving at a speed $v$ is shorter than the one seen by a stationary observer by a factor $\sqrt{1-(v/c)^2}$, assuming the length is in the direction of motion. This is called Lorentz-Fitzgerald contraction. It makes galactic travel somewhat more conceivable because the size of the galaxy will contract for an astronaut in a rocket ship moving close to the speed of light. Relativity also implies that the time that an event takes seems to be longer by a factor $1/\sqrt{1-(v/c)^2}$ if the event is seen by an observer in motion relative to the location where the event occurs. That is called time dilation. Some high-energy particles generated in space move so fast that they reach the surface of the earth even though the trip takes much more time than the particles would last at rest in a laboratory. Their decay time is increased because of their motion. (Of course, as far as the particles themselves see it, the distance to travel is a lot shorter than it seems to be on earth. For them, it is a matter of length contraction.)

The following formulae give the relativistic mass, momentum, and kinetic energy of a particle in motion:

\begin{displaymath}
m= \frac{m_0}{\sqrt{1-(v/c)^2}}
\qquad
p = m v
\qquad
T = mc^2 - m_0c^2
\end{displaymath}

where $m_0$ is the rest mass of the particle, i.e. the mass as measured by an observer to whom the particle seems at rest. The formula for kinetic energy reflects the fact that even if a particle is at rest, it still has an amount of built-in energy equal to $m_0c^2$ left. The total energy of a particle in empty space, being kinetic and rest mass energy, is given by

\begin{displaymath}
E = m c^2 = \sqrt{(m_0c^2)^2 + c^2p^2}
\end{displaymath}

as can be verified by substituting in the expression for the momentum, in terms of the rest mass, and then taking both terms inside the square root under a common denominator. For small linear momentum $p$, the kinetic energy $T$ $\vphantom0\raisebox{1.5pt}{$=$}$ $E-m_0c^2$ can be approximated as $p^2/2m_0$ $\vphantom0\raisebox{1.5pt}{$=$}$ $\frac12m_0v^2$.
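
The consistency of these formulae can also be verified numerically. The sketch below (with $c$ $\vphantom0\raisebox{1.5pt}{$=$}$ 1 units, unit rest mass, and $v$ $\vphantom0\raisebox{1.5pt}{$=$}$ $0.6c$ as arbitrary choices) checks that $mc^2$ agrees with the square root expression, and that at low speed the kinetic energy approaches $\frac12m_0v^2$:

```python
import math

# Relativistic energy consistency check in c = 1 units.
# m0 = 1 and the speeds are arbitrary illustrative choices.
c, m0 = 1.0, 1.0

def energies(v):
    m = m0 / math.sqrt(1.0 - (v / c) ** 2)      # relativistic mass
    p = m * v                                    # momentum
    E_direct = m * c ** 2
    E_from_p = math.sqrt((m0 * c ** 2) ** 2 + (c * p) ** 2)
    T = E_direct - m0 * c ** 2                   # kinetic energy
    return E_direct, E_from_p, T

E_direct, E_from_p, T = energies(0.6 * c)
print(E_direct, E_from_p, T)                     # 1.25, 1.25, 0.25

# At low speed the kinetic energy approaches (1/2) m0 v^2:
_, _, T_slow = energies(0.01 * c)
print(T_slow / (0.5 * m0 * 0.01 ** 2))           # close to 1
```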

Relativity seemed quite a dramatic departure from Newtonian physics when it was first developed. Then quantum mechanics started to emerge...

rot    
The rot of a vector $\vec{v}$ is defined as $\mathop{\rm curl}\nolimits \vec{v}$ $\vphantom0\raisebox{1.5pt}{$\equiv$}$ $\mathop{\rm {rot}}\vec{v}$ $\vphantom0\raisebox{1.5pt}{$\equiv$}$ $\nabla$ $\times$ $\vec{v}$.

$
\setbox 0=\hbox{$S$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May indicate:

$
\setbox 0=\hbox{${\cal S}$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
The action integral of Lagrangian mechanics, {A.1}

$
\setbox 0=\hbox{${\mathscr S}$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
Seebeck coefficient.

S    
Often used to indicate a state of zero orbital angular momentum.

$
\setbox 0=\hbox{$s$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May indicate:

s    
May indicate:

scalar    
A quantity that is not a vector but just a single number.

$
\setbox 0=\hbox{$\sin$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
The sine function, a periodic function oscillating between 1 and -1 as shown in [40, pp. 40-]. Good to remember: $\cos^2\alpha+\sin^2\alpha$ $\vphantom0\raisebox{1.5pt}{$=$}$ 1 and $\sin2\alpha$ $\vphantom0\raisebox{1.5pt}{$=$}$ $2\sin\alpha\cos\alpha$ and $\cos2\alpha$ $\vphantom0\raisebox{1.5pt}{$=$}$ $\cos^2\alpha-\sin^2\alpha$.
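
These identities are easily spot-checked at an arbitrary angle:

```python
import math

# Spot-check the listed trigonometric identities at an arbitrary angle.
a = 0.73
check1 = math.cos(a) ** 2 + math.sin(a) ** 2                      # should be 1
check2 = math.sin(2 * a) - 2 * math.sin(a) * math.cos(a)          # should be 0
check3 = math.cos(2 * a) - (math.cos(a) ** 2 - math.sin(a) ** 2)  # should be 0
print(check1, check2, check3)
```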

solenoidal    
A vector $\vec{v}$ is solenoidal if its divergence $\nabla\cdot\vec{v}$ is zero.

spectrum    
In this book, a spectrum normally means a plot of energy levels along the vertical axis. Often, the horizontal coordinate is used to indicate a second variable, such as the density of states or the particle velocity.

For light (photons), a spectrum can be obtained experimentally by sending the light through a prism. This separates the colors in the light, and each color means a particular energy of the photons.

The word spectrum is also often used in a more general mathematical sense, but not in this book as far as I can remember.

spherical coordinates    
The spherical coordinates $r$, $\theta$, and $\phi$ of an arbitrary point P are defined as

Figure N.3: Spherical coordinates of an arbitrary point P.

In Cartesian coordinates, the unit vectors in the $x$, $y$, and $z$ directions are called ${\hat\imath}$, ${\hat\jmath}$, and ${\hat k}$. Similarly, in spherical coordinates, the unit vectors in the $r$, $\theta$, and $\phi$ directions are called ${\hat\imath}_r$, ${\hat\imath}_\theta$, and ${\hat\imath}_\phi$. Here, say, the $\theta$ direction is defined as the direction of the change in position if you increase $\theta$ by an infinitesimally small amount while keeping $r$ and $\phi$ the same. Note therefore in particular that the direction of ${\hat\imath}_r$ is the same as that of ${\skew0\vec r}$; radially outward.

An arbitrary vector $\vec{v}$ can be decomposed in components $v_r$, $v_\theta$, and $v_\phi$ along these unit vectors. In particular

\begin{displaymath}
\vec v \equiv v_r {\hat\imath}_r + v_\theta {\hat\imath}_\theta + v_\phi {\hat\imath}_\phi
\end{displaymath}

Recall from calculus that in spherical coordinates, a volume integral of an arbitrary function $f$ takes the form

\begin{displaymath}
\int f {\,\rm d}^3{\skew0\vec r}= \int\int\int f r^2 \sin\theta {\,\rm d}r {\rm d}\theta {\rm d}\phi
\end{displaymath}

In other words, the volume element in spherical coordinates is

\begin{displaymath}
{\rm d}V = {\rm d}^3 {\skew0\vec r}= r^2 \sin\theta {\,\rm d}r {\rm d}\theta {\rm d}\phi
\end{displaymath}

Often it is convenient to think of volume integrations as a two-step process: first perform an integration over the angular coordinates $\theta$ and $\phi$. Physically, that integrates over spherical surfaces. Then perform an integration over $r$ to add all the spherical surfaces together. The combined infinitesimal angular integration element

\begin{displaymath}
{\rm d}\Omega = \sin\theta {\rm d}\theta {\rm d}\phi
\end{displaymath}

is called the infinitesimal solid angle ${\rm d}\Omega$. In two-dimensional polar coordinates $r$ and $\theta$, the equivalent would be the infinitesimal polar angle ${\rm d}\theta$. Recall that ${\rm d}\theta$ (in proper radians, of course) equals the arc length of an infinitesimal part of the circle of integration divided by the circle radius. Similarly, ${\rm d}\Omega$ is the surface of an infinitesimal part of the sphere of integration divided by the square of the sphere radius.
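
Both claims can be confirmed by brute-force numerical integration. The sketch below (plain Python midpoint rule; the radius and grid size are arbitrary choices) integrates the volume element over a sphere and the solid angle element over all directions:

```python
import math

# Midpoint-rule integration of the spherical volume element
# dV = r^2 sin(theta) dr dtheta dphi over a sphere of radius R,
# and of the solid angle element d_Omega = sin(theta) dtheta dphi.
# R and the grid size n are arbitrary illustrative choices.
R, n = 2.0, 200
dr, dtheta = R / n, math.pi / n
two_pi = 2 * math.pi            # the phi integration is trivial here

volume = sum(((i + 0.5) * dr) ** 2 * math.sin((j + 0.5) * dtheta)
             for i in range(n) for j in range(n)) * dr * dtheta * two_pi
solid_angle = sum(math.sin((j + 0.5) * dtheta)
                  for j in range(n)) * dtheta * two_pi
print(volume, solid_angle)      # near (4/3) pi R^3 and 4 pi
```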

See the $\nabla$ entry for the gradient operator and Laplacian in spherical coordinates.

Stokes' Theorem    
This theorem, first derived by Kelvin and first published by someone else I cannot recall, says that for any reasonably smoothly varying vector $\vec{v}$,

\begin{displaymath}
\int_A \left(\nabla \times \vec v\right) \cdot \vec n {\,\rm d}A
=
\oint \vec v \cdot {\rm d}\vec r
\end{displaymath}

where the first integral is over any smooth surface area $A$, with $\vec n$ the unit normal to that surface, and the second integral is over the edge of that surface. How did Stokes get his name on it? He tortured his students with it, that’s how!

One important consequence of the Stokes theorem is for vector fields $\vec{v}$ that are irrotational, i.e. that have $\nabla$ $\times$ $\vec{v}$ $\vphantom0\raisebox{1.5pt}{$=$}$ 0. Such fields can be written as

\begin{displaymath}
\vec v = \nabla f \qquad
f({\skew0\vec r}) \equiv
\int_{{\skew0\vec r}_{\rm ref}}^{{\skew0\vec r}}
\vec v({\underline{\skew0\vec r}}) \cdot {\rm d}{\underline{\skew0\vec r}}
\end{displaymath}

Here ${\skew0\vec r}_{\rm {ref}}$ is the position of an arbitrarily chosen reference point, usually the origin. The reason the field $\vec{v}$ can be written this way is the Stokes theorem. Because of the theorem, it does not make a difference along which path from ${\skew0\vec r}_{\rm {ref}}$ to ${\skew0\vec r}$ you integrate. (Any two paths give the same answer, as long as $\vec{v}$ is irrotational everywhere in between the paths.) So the definition of $f$ is unambiguous. And you can verify that the partial derivatives of $f$ give the components of $\vec{v}$ by approaching the final position ${\skew0\vec r}$ in the integration from the corresponding direction.
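
Path independence can be demonstrated numerically. In the sketch below, the field is the gradient of a made-up $f=x^2y+\sin z$ (an arbitrary choice, so $\vec{v}$ is irrotational by construction), and the line integral is evaluated along two different paths from the origin to the same point:

```python
import math

# Line integral of an irrotational field along two different paths.
# The field v = grad f for f = x^2*y + sin(z) is a made-up example,
# irrotational by construction.
def v(x, y, z):
    return (2 * x * y, x * x, math.cos(z))

def line_integral(points, n=2000):
    # midpoint rule along a piecewise-straight path through `points`
    total = 0.0
    for (x0, y0, z0), (x1, y1, z1) in zip(points, points[1:]):
        for k in range(n):
            t = (k + 0.5) / n
            x, y, z = x0 + t * (x1 - x0), y0 + t * (y1 - y0), z0 + t * (z1 - z0)
            vx, vy, vz = v(x, y, z)
            total += (vx * (x1 - x0) + vy * (y1 - y0) + vz * (z1 - z0)) / n
    return total

straight = line_integral([(0, 0, 0), (1, 1, 1)])
along_edges = line_integral([(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)])
print(straight, along_edges)    # both near f(1,1,1) - f(0,0,0) = 1 + sin(1)
```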

symmetry    
A symmetry is an operation under which an object does not change. For example, a human face is almost, but not completely, mirror symmetric: it looks almost the same in a mirror as when seen directly. The electrical field of a single point charge is spherically symmetric; it looks the same from whatever angle you look at it, just like a sphere does. A simple smooth glass (like a glass of water) is cylindrically symmetric; it looks the same whatever way you rotate it around its vertical axis.

$
\setbox 0=\hbox{$T$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May indicate:

$
\setbox 0=\hbox{${\cal T}$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
Translation operator that translates a wave function through space. The amount of translation is usually indicated by a subscript.

$
\setbox 0=\hbox{$t$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May indicate:

temperature    
A measure of the heat motion of the particles making up macroscopic objects. At absolute zero temperature, the particles are in the ground state of lowest possible energy.

triple product    
A product of three vectors. There are two different versions:

$
\setbox 0=\hbox{$U$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May indicate:

$
\setbox 0=\hbox{${\cal U}$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
The time shift operator: ${\cal U}(\tau,t)$ changes the wave function $\Psi(\ldots;t)$ into $\Psi(\ldots;t+\tau)$. If the Hamiltonian is independent of time,

\begin{displaymath}
{\cal U}(\tau,t) = {\cal U}_\tau = e^{-{\rm i}H \tau/\hbar}
\end{displaymath}
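
For a single energy eigenstate, the Hamiltonian reduces to a number $E$, and the time shift operator is just multiplication by the phase $e^{-{\rm i}E\tau/\hbar}$. The sketch below checks this (with $\hbar$ $\vphantom0\raisebox{1.5pt}{$=$}$ 1 and arbitrary values of $E$, $t$, and $\tau$):

```python
import cmath

# For a stationary state of energy E, U(tau) Psi(t) = Psi(t + tau)
# with U(tau) = e^{-i E tau / hbar}. Values below are arbitrary choices.
hbar, E = 1.0, 2.0

def Psi(t):                                  # stationary state time factor
    return cmath.exp(-1j * E * t / hbar)

t, tau = 1.3, 0.7
U_tau = cmath.exp(-1j * E * tau / hbar)
shift_error = abs(U_tau * Psi(t) - Psi(t + tau))
print(shift_error)                           # essentially zero
```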

$
\setbox 0=\hbox{$u$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May indicate:

u    
May indicate the atomic mass constant, equivalent to 1.660,538,92 $\times$ $10^{-27}$ kg or 931.494,06 MeV/$c^2$.

$
\setbox 0=\hbox{$V$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May indicate:

$
\setbox 0=\hbox{${\cal V}$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
Volume.

$
\setbox 0=\hbox{$v$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May indicate:

$
\setbox 0=\hbox{$\vec{v}$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May indicate:

vector    
Simply put, a list of numbers. A vector $\vec{v}$ in index notation is a set of numbers $\{v_i\}$ indexed by an index $i$. In normal three-dimensional Cartesian space, $i$ takes the values 1, 2, and 3, making the vector a list of three numbers, $v_1$, $v_2$, and $v_3$. These numbers are called the three components of $\vec{v}$. The list of numbers can be visualized as a column, and is then called a ket vector, or as a row, in which case it is called a bra vector. This convention indicates how multiplication should be conducted with them. A bra times a ket produces a single number, the dot product or inner product of the vectors:

\begin{displaymath}
\left(1,3,5\right)\left(\begin{array}{c}7\\ 11\\ 13\end{array}\right)
= 1\;7 + 3\;11 + 5\;13 = 105
\end{displaymath}

To turn a ket into a bra for purposes of taking inner products, write the complex conjugates of its components as a row.
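
The rule can be coded directly; the sketch below reproduces the worked example and shows why the conjugation matters for complex kets:

```python
# Bra times ket: conjugate the first vector's components, multiply, and sum.
def inner(bra_from, ket):
    return sum(a.conjugate() * b for a, b in zip(bra_from, ket))

print(inner([1, 3, 5], [7, 11, 13]))    # the worked example: 105
# Conjugation makes the inner product of a complex ket with itself
# real and nonnegative (its norm squared):
print(inner([1j, 2], [1j, 2]))          # (5+0j)
```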

Formal definitions of vectors vary, but real mathematicians will tell you that vectors are objects that can be manipulated in certain ways (addition and multiplication by a scalar). Some physicists define vectors as objects that transform in a certain way under coordinate transformation (one-dimensional tensors); that is not the same thing.

vectorial product    
A vectorial product, or cross product, is a product of vectors that produces another vector. If

\begin{displaymath}
\vec c=\vec a\times\vec b,
\end{displaymath}

it means in index notation that the $i$-th component of vector $\vec{c}$ is

\begin{displaymath}
c_i = a_{{\overline{\imath}}} b_{{\overline{\overline{\imath}}}}
- a_{{\overline{\overline{\imath}}}} b_{{\overline{\imath}}}
\end{displaymath}

where ${\overline{\imath}}$ is the index following $i$ in the sequence 123123..., and ${\overline{\overline{\imath}}}$ the one preceding it. For example, $c_1$ will equal $a_2b_3-a_3b_2$.
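
The cyclic index rule translates directly into code (0-based indices here, so the index following $i$ is $(i+1)\bmod 3$ and the one preceding it is $(i+2)\bmod 3$):

```python
# Cross product from the cyclic index rule: each component i uses the
# index following i and the index preceding i in the cycle 0,1,2,0,1,2,...
def cross(a, b):
    c = []
    for i in range(3):
        nxt, prv = (i + 1) % 3, (i + 2) % 3
        c.append(a[nxt] * b[prv] - a[prv] * b[nxt])
    return c

print(cross([1, 0, 0], [0, 1, 0]))      # x-hat cross y-hat: [0, 0, 1]
```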

W    
May indicate:

$
\setbox 0=\hbox{$w$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May indicate:

$
\setbox 0=\hbox{$\vec{w}$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
Generic vector.

$
\setbox 0=\hbox{$X$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
Used in this book to indicate a function of $x$ to be determined.

$
\setbox 0=\hbox{$x$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May indicate:

$
\setbox 0=\hbox{$Y$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
Used in this book to indicate a function of $y$ to be determined.

$
\setbox 0=\hbox{$Y_l^m$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
Spherical harmonic. Eigenfunction of both angular momentum in the $z$-direction and of total square angular momentum.

$
\setbox 0=\hbox{$y$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May indicate:

$
\setbox 0=\hbox{$Z$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May indicate:

$
\setbox 0=\hbox{$z$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May indicate: