2.7 Additional Points

This subsection describes a few further issues of importance for this book.


2.7.1 Dirac notation

Physicists like to write inner products such as $\big\langle f\big\vert Ag\big\rangle $ in Dirac notation:

\begin{displaymath}
\big\langle f\big\vert A \big\vert g\big\rangle \equiv \big\langle f\big\vert Ag\big\rangle
\end{displaymath}

since this conforms more closely to how you would think of it in linear algebra:

\begin{displaymath}
\begin{array}{ccc}
\big\langle f\big\vert & A & \big\vert g\big\rangle \\
\rule{0pt}{13pt} \mbox{bra} & \mbox{operator} & \mbox{ket}
\end{array}
\end{displaymath}

The various advanced ideas of linear algebra can be extended to operators in this way, but they will not be needed in this book.
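To see the bra-operator-ket reading concretely: in a finite number of dimensions, a ket is just a column of numbers, a bra the complex-conjugate row, and an operator a matrix. The following sketch (my own illustration, using numpy and made-up example numbers) checks that bra-matrix-ket agrees with the inner product $\big\langle f\big\vert Ag\big\rangle$:

```python
import numpy as np

# A ket |g> as a column of numbers, the bra <f| as the conjugated row,
# and the operator A as a matrix.  (Example numbers are made up.)
f = np.array([1.0 + 1.0j, 2.0 - 1.0j])
g = np.array([0.5 + 0.0j, 1.0 + 2.0j])
A = np.array([[1.0, 2.0j],
              [-2.0j, 3.0]])

bra_f = f.conj()                 # <f| : complex conjugate of the ket components
inner_f_Ag = bra_f @ (A @ g)     # <f|Ag> : inner product of f with the vector A g
inner_fAg = bra_f @ A @ g        # <f|A|g> : read as bra times operator times ket

assert np.isclose(inner_f_Ag, inner_fAg)
```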

One thing will be needed in some more advanced addenda, however: the case that the operator $A$ is not Hermitian. In that case, if you want to take $A$ to the other side of the inner product, you must change it into a different operator. That operator is called the “Hermitian conjugate” of $A$. In physics, it is almost always indicated as $A^\dagger$. So, simply by definition,

\begin{displaymath}
\big\langle f\big\vert Ag\big\rangle \equiv \big\langle A^\dagger f\big\vert g\big\rangle
\equiv \int_{\mbox{\scriptsize all }x} (A^\dagger f(x))^* g(x) {\,\rm d}x
\end{displaymath}
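In a finite number of dimensions the Hermitian conjugate is simply the complex-conjugate transpose of the matrix, which allows the defining property to be checked numerically. A quick sketch of my own, using random made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# A deliberately non-Hermitian matrix standing in for the operator A
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
f = rng.standard_normal(n) + 1j * rng.standard_normal(n)
g = rng.standard_normal(n) + 1j * rng.standard_normal(n)

A_dag = A.conj().T  # Hermitian conjugate: the complex-conjugate transpose

lhs = np.vdot(f, A @ g)      # <f|Ag>  (np.vdot conjugates its first argument)
rhs = np.vdot(A_dag @ f, g)  # <A†f|g>
assert np.isclose(lhs, rhs)  # A can be taken to the other side as A†
```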

Then there are some more things that this book will not use. However, you will almost surely encounter these when you read other books on quantum mechanics.

First, the dagger is used much like a generalization of complex conjugate,

\begin{displaymath}
f^\dagger \equiv f^* \qquad \big\vert f\big\rangle ^\dagger \equiv \big\langle f\big\vert
\end{displaymath}

etcetera. Applying a dagger a second time gives the original back. Also, if you work out the dagger on a product, you need to reverse the order of the factors. For example

\begin{displaymath}
\Big(A^\dagger\big\vert f\big\rangle \Big)^\dagger \big\vert g\big\rangle
= \big\langle f\big\vert \big(A^\dagger\big)^\dagger \big\vert g\big\rangle
= \big\langle f\big\vert A \big\vert g\big\rangle
\end{displaymath}

In words, putting $A^\dagger\big\vert f\big\rangle $ into the left side of an inner product gives $\big\langle f\big\vert A$.
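Both rules, the reversal of order and the worded result above, can be verified for matrices and vectors of numbers. A minimal sketch of my own, again with random example values:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
f = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# The dagger of a product reverses the order of the factors: (AB)† = B†A†
assert np.allclose((A @ B).conj().T, B.conj().T @ A.conj().T)

# (A†|f>)† = <f|A : the dagger of the column vector A†f is the row vector f†A
ket = A.conj().T @ f   # A†|f>, a column of numbers
bra = ket.conj()       # its dagger, the conjugated row
assert np.allclose(bra, f.conj() @ A)
```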

The second point will be illustrated for the case of vectors in three dimensions. Such a vector can be written as

\begin{displaymath}
\vec v = {\hat\imath}v_x + {\hat\jmath}v_y + {\hat k}v_z
\end{displaymath}

Here ${\hat\imath}$, ${\hat\jmath}$, and ${\hat k}$ are the three unit vectors in the axial directions. The components can be found using dot products: $v_x = {\hat\imath}\cdot\vec v$, $v_y = {\hat\jmath}\cdot\vec v$, and $v_z = {\hat k}\cdot\vec v$. Substituting these into the expression above gives

\begin{displaymath}
\vec v = {\hat\imath}({\hat\imath}\cdot\vec v) + {\hat\jmath}({\hat\jmath}\cdot\vec v)
+ {\hat k}({\hat k}\cdot\vec v)
\end{displaymath}

Symbolically, you can write this as

\begin{displaymath}
\vec v = ({\hat\imath}{\hat\imath}\cdot + {\hat\jmath}{\hat\jmath}\cdot + {\hat k}{\hat k}\cdot)\vec v
\end{displaymath}

In fact, the operator in parentheses can be defined by saying that for any vector $\vec{v}$, it gives the exact same vector back. Such an operator is called an “identity operator.”

The relation

\begin{displaymath}
({\hat\imath}{\hat\imath}\cdot + {\hat\jmath}{\hat\jmath}\cdot + {\hat k}{\hat k}\cdot) = 1
\end{displaymath}

is called the “completeness relation.” To see why, suppose you leave off the third part of the operator. Then

\begin{displaymath}
({\hat\imath}{\hat\imath}\cdot + {\hat\jmath}{\hat\jmath}\cdot) \vec v = {\hat\imath}v_x + {\hat\jmath}v_y
\end{displaymath}

The $z$-component is gone! The vector $\vec{v}$ gets projected onto the $x,y$-plane. The operator has become a “projection operator” instead of an identity operator by not summing over the complete set of unit vectors.
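Both the completeness relation and the effect of leaving off a term can be checked with numbers. In the sketch below (my own illustration), each term ${\hat e}({\hat e}\cdot{})$ becomes the outer product ${\hat e}{\hat e}^{\rm T}$ as a matrix:

```python
import numpy as np

# The three unit vectors in the axial directions
i_hat = np.array([1.0, 0.0, 0.0])
j_hat = np.array([0.0, 1.0, 0.0])
k_hat = np.array([0.0, 0.0, 1.0])

# Each term e (e . v) is the outer product e e^T acting on v, so the
# operator in parentheses is the sum of the three outer products.
identity = (np.outer(i_hat, i_hat) + np.outer(j_hat, j_hat)
            + np.outer(k_hat, k_hat))
assert np.allclose(identity, np.eye(3))  # the completeness relation

# Leave off the k k. term and the sum is no longer complete:
# it projects onto the x,y-plane instead.
P_xy = np.outer(i_hat, i_hat) + np.outer(j_hat, j_hat)
v = np.array([2.0, -1.0, 5.0])
assert np.allclose(P_xy @ v, [2.0, -1.0, 0.0])  # the z-component is gone
```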

You will almost always find these things in terms of bras and kets. To see how that looks, define

\begin{displaymath}
{\hat\imath}\equiv \big\vert 1\big\rangle \qquad
{\hat\jmath}\equiv \big\vert 2\big\rangle \qquad
{\hat k}\equiv \big\vert 3\big\rangle
\qquad \vec v \equiv \big\vert v\big\rangle
\end{displaymath}

Then

\begin{displaymath}
\big\vert v\big\rangle
= \big\vert 1\big\rangle \big\langle 1\big\vert v\big\rangle
+ \big\vert 2\big\rangle \big\langle 2\big\vert v\big\rangle
+ \big\vert 3\big\rangle \big\langle 3\big\vert v\big\rangle
= \sum_{{\rm all}\ i} \big\vert i\big\rangle \big\langle i\big\vert \big\vert v\big\rangle
\end{displaymath}

so the completeness relation looks like

\begin{displaymath}
\sum_{{\rm all}\ i} \big\vert i\big\rangle \big\langle i\big\vert = 1
\end{displaymath}

If you do not sum over the complete set of kets, you get a projection operator instead of an identity one.


2.7.2 Additional independent variables

In many cases, the functions involved in an inner product may depend on more than a single variable $x$. For example, they might depend on the position $(x,y,z)$ in three-dimensional space.

The rule to deal with that is to ensure that the inner product integrations are over all independent variables. For example, in three spatial dimensions:

\begin{displaymath}
\langle f \vert g\rangle =
\int_{\mbox{\scriptsize all }x} \int_{\mbox{\scriptsize all }y} \int_{\mbox{\scriptsize all }z}
f^*(x,y,z) g(x,y,z) {\,\rm d}x {\,\rm d}y {\,\rm d}z
\end{displaymath}

Note that the time $t$ is a somewhat different variable from the rest, and time is not included in the inner product integrations.
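Such an inner product can be approximated numerically by summing over a grid covering the region where the functions are appreciable. A rough sketch of my own, using a three-dimensional Gaussian as an example function:

```python
import numpy as np

# Riemann-sum approximation of <f|g> over a grid that covers the region
# where the example functions below are appreciable.
x = np.linspace(-6.0, 6.0, 121)
dx = x[1] - x[0]
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

f = np.exp(-(X**2 + Y**2 + Z**2) / 2)      # a 3-D Gaussian (real-valued)
g = X * np.exp(-(X**2 + Y**2 + Z**2) / 2)  # an odd function of x

# <f|g> = integral of f* g over all x, y, and z
inner_fg = np.sum(np.conj(f) * g) * dx**3
assert abs(inner_fg) < 1e-8  # odd integrand in x integrates to (nearly) zero

# <f|f> should approximate the exact integral of exp(-r^2), which is pi^(3/2)
inner_ff = np.sum(np.conj(f) * f) * dx**3
assert np.isclose(inner_ff, np.pi**1.5)
```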