2.3 The Dot, oops, INNER Product

The dot product of vectors is an important tool. It makes it possible to find the length of a vector, by multiplying the vector by itself and taking the square root. It is also used to check whether two vectors are orthogonal: if their dot product is zero, they are. In this subsection, the dot product is defined for complex vectors and functions.

The usual dot product of two vectors $\vec{f}$ and $\vec{g}$ can be found by multiplying components with the same index $i$ together and summing that:

\begin{displaymath}
\vec f \cdot \vec g \equiv f_1 g_1 + f_2 g_2 + f_3 g_3
\end{displaymath}

(The emphatic equal, $\equiv$, is commonly used to indicate “is by definition equal to” or “is always equal to.”) Figure 2.6 shows multiplied components using equal colors.

Figure 2.6: Forming the dot product of two vectors. [Figure: the components $f_i$ and $g_i$ plotted against the index $i$; matching colors mark the components that are multiplied together.]
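For example, for the real vectors $\vec f = (1,2,3)$ and $\vec g = (4,5,6)$,

\begin{displaymath}
\vec f \cdot \vec g = 1\cdot 4 + 2\cdot 5 + 3\cdot 6 = 32
\end{displaymath}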

Note the use of numeric subscripts, $f_1$, $f_2$, and $f_3$, rather than $f_x$, $f_y$, and $f_z$; it means the same thing. Numeric subscripts allow the three-term sum above to be written more compactly as:

\begin{displaymath}
\vec f \cdot \vec g \equiv \sum_{\mbox{\scriptsize all }i} f_i g_i
\end{displaymath}

The $\Sigma$ is called the summation symbol.

The length of a vector $\vec{f}$, indicated by $\vert\vec{f}\vert$ or simply by $f$, is normally computed as

\begin{displaymath}
\vert\vec f\vert = \sqrt{\vec f \cdot \vec f}
= \sqrt{\sum_{\mbox{\scriptsize all }i} f_i^2}
\end{displaymath}

However, this does not work correctly for complex vectors. The difficulty is that terms of the form $f_i^2$ are no longer necessarily positive numbers. For example, ${\rm i}^2 = -1$.

Therefore, it is necessary to use a generalized “inner product” for complex vectors, which puts a complex conjugate on the first vector:

\begin{displaymath}
\fbox{$\displaystyle
\langle\vec f\vert\vec g\rangle
\equiv
\sum_{\mbox{\scriptsize all }i} f^*_i g_i
$}
\end{displaymath} (2.7)

If the vector $\vec{f}$ is real, the complex conjugate does nothing, and the inner product $\langle\vec f\vert\vec g\rangle$ is the same as the dot product $\vec f\cdot\vec g$. Otherwise, in the inner product $\vec{f}$ and $\vec{g}$ are no longer interchangeable; the conjugates are only on the first factor, $\vec{f}$. Interchanging $\vec{f}$ and $\vec{g}$ changes the inner product’s value into its complex conjugate.
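For example, for the complex vectors $\vec f = (1+{\rm i},\,2)$ and $\vec g = ({\rm i},\,3)$,

\begin{displaymath}
\langle\vec f\vert\vec g\rangle = (1-{\rm i}){\rm i} + 2\cdot 3 = 7+{\rm i}
\qquad
\langle\vec g\vert\vec f\rangle = (-{\rm i})(1+{\rm i}) + 3\cdot 2 = 7-{\rm i}
\end{displaymath}

and indeed the two results are each other’s complex conjugates.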

The length of a nonzero vector is now always a positive number:

\begin{displaymath}
\fbox{$\displaystyle
\vert\vec f\vert = \sqrt{\langle\vec f\vert\vec f\rangle}
= \sqrt{\sum_{\mbox{\scriptsize all }i} \vert f_i\vert^2}
$}
\end{displaymath} (2.8)
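For the example vector $\vec f = (1+{\rm i},\,2)$ above,

\begin{displaymath}
\vert\vec f\vert = \sqrt{\vert 1+{\rm i}\vert^2 + \vert 2\vert^2} = \sqrt{2+4} = \sqrt{6}
\end{displaymath}

a real, positive length, as it should be.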

Physicists take the inner product bracket verbally apart as

\begin{displaymath}
\begin{array}{ccc}
\left\langle\vec f\,\right\vert & & \left\vert\,\vec g\right\rangle \\
\mbox{bra} & \makebox[0pt][l]{/}\mbox{c} & \mbox{ket}
\end{array}
\end{displaymath}

and refer to vectors as bras and kets.

The inner product of functions is defined in exactly the same way as for vectors, by multiplying values at the same $x$-position together and summing. But since there are infinitely many $x$ values, the sum becomes an integral:

\begin{displaymath}
\fbox{$\displaystyle
\langle f\vert g\rangle = \int_{\mbox{\scriptsize all }x} f^*(x) g(x) {\,\rm d}x
$}
\end{displaymath} (2.9)
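For example, for the real functions $f(x)=x$ and $g(x)=x^2$ on the interval $0 \leqslant x \leqslant 1$,

\begin{displaymath}
\langle f\vert g\rangle = \int_0^1 x \, x^2 {\,\rm d}x = {\textstyle\frac{1}{4}}
\end{displaymath}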

Figure 2.7 shows multiplied function values using equal colors:

Figure 2.7: Forming the inner product of two functions. [Figure: $f(x)$ and $g(x)$ plotted against $x$; matching colors mark the values that are multiplied together.]

For a function, the equivalent of the length of a vector is called its “norm:”

\begin{displaymath}
\fbox{$\displaystyle
\vert\vert f\vert\vert \equiv \sqrt{\langle f\vert f\rangle}
= \sqrt{\int_{\mbox{\scriptsize all }x} \vert f(x)\vert^2 {\,\rm d}x}
$}
\end{displaymath} (2.10)

The double bars are used to avoid confusion with the absolute value of the function.
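For example, the norm of $f(x)=x$ on the interval $0 \leqslant x \leqslant 1$ is

\begin{displaymath}
\vert\vert f\vert\vert = \sqrt{\int_0^1 x^2 {\,\rm d}x} = \frac{1}{\sqrt{3}}
\end{displaymath}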

A vector or function is called “normalized” if its length or norm is one:

\begin{displaymath}
\fbox{$\displaystyle
\langle f\vert f\rangle = 1 \mbox{ iff $f$ is normalized.}
$}
\end{displaymath} (2.11)

(“iff” should really be read as “if and only if.”)
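For example, the function $f(x)=x$ on the interval $0 \leqslant x \leqslant 1$ is not normalized, since its norm is $1/\sqrt{3}$ rather than one; dividing by the norm produces the normalized function $\sqrt{3}\,x$:

\begin{displaymath}
\langle \sqrt{3}\,x \vert \sqrt{3}\,x \rangle = \int_0^1 3x^2 {\,\rm d}x = 1
\end{displaymath}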

Two vectors, or two functions, $f$ and $g$, are by definition orthogonal if their inner product is zero:

\begin{displaymath}
\fbox{$\displaystyle
\langle f\vert g\rangle = 0 \mbox{ iff $f$ and $g$ are orthogonal.}
$}
\end{displaymath} (2.12)
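For example, the complex vectors $\vec f = (1,\,{\rm i})$ and $\vec g = ({\rm i},\,1)$ are orthogonal:

\begin{displaymath}
\langle\vec f\vert\vec g\rangle = (1)^*\,{\rm i} + ({\rm i})^*\,1 = {\rm i} - {\rm i} = 0
\end{displaymath}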

Sets of vectors or functions that are all mutually orthogonal, and normalized, occur a lot in quantum mechanics. Such sets should be called “orthonormal”, though the less precise term orthogonal is often used instead. This document will refer to them correctly as being orthonormal.

So, a set of functions or vectors $f_1,f_2,f_3,\ldots$ is orthonormal if

\begin{displaymath}
0=
\langle f_1\vert f_2\rangle=\langle f_2\vert f_1\rangle=
\langle f_1\vert f_3\rangle=\langle f_3\vert f_1\rangle=
\langle f_2\vert f_3\rangle=\langle f_3\vert f_2\rangle=
\ldots
\end{displaymath}

and

\begin{displaymath}
1=\langle f_1\vert f_1\rangle=\langle f_2\vert f_2\rangle=\langle f_3\vert f_3\rangle=
\ldots
\end{displaymath}
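For readers who want to check such relations numerically, here is a minimal sketch using Python with the NumPy library; NumPy’s vdot routine conjugates its first argument, so it computes exactly the inner product defined above. (The specific vectors are an illustrative choice, not taken from this text.)

\begin{verbatim}
import numpy as np

# An orthonormal set of two complex vectors.
f1 = np.array([1, 1j]) / np.sqrt(2)
f2 = np.array([1, -1j]) / np.sqrt(2)

# np.vdot conjugates its first argument, so np.vdot(a, b) = <a|b>.
print(np.vdot(f1, f2))  # 0j     : f1 and f2 are orthogonal
print(np.vdot(f1, f1))  # (1+0j) : f1 has unit length
print(np.vdot(f2, f2))  # (1+0j) : f2 has unit length
\end{verbatim}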


Key Points

•	For complex vectors and functions, the normal dot product becomes the inner product.

•	To take an inner product of vectors,
  • take complex conjugates of the components of the first vector;
  • multiply corresponding components of the two vectors together;
  • sum these products.

•	To take an inner product of functions,
  • take the complex conjugate of the first function;
  • multiply the two functions;
  • integrate the product function.

•	To find the length of a vector, take the inner product of the vector with itself, and then a square root.

•	To find the norm of a function, take the inner product of the function with itself, and then a square root.

•	A pair of vectors, or a pair of functions, is orthogonal if their inner product is zero.

•	A set of vectors forms an orthonormal set if every one is orthogonal to all the rest, and every one is of unit length.

•	A set of functions forms an orthonormal set if every one is orthogonal to all the rest, and every one is of unit norm.

2.3 Review Questions
1.

Find the following inner product of the two vectors:

\begin{displaymath}
\left\langle
\left(\begin{array}{c} 1+{\rm i}\\ 2-{\rm i} \end{array}\right)
\right\vert\left.
\left(\begin{array}{c} 2{\rm i}\\ 3 \end{array}\right)
\right\rangle
\end{displaymath}

Solution dot-a

2.

Find the length of the vector

\begin{displaymath}
\left(
\begin{array}{c} 1+{\rm i}\\ 3
\end{array}\right)
\end{displaymath}

Solution dot-b

3.

Find the inner product of the functions $\sin(x)$ and $\cos(x)$ on the interval $0 \leqslant x \leqslant 1$.

Solution dot-c

4.

Show that the functions $\sin(x)$ and $\cos(x)$ are orthogonal on the interval $0 \leqslant x \leqslant 2\pi$.

Solution dot-d

5.

Verify that $\sin(x)$ is not a normalized function on the interval $0 \leqslant x \leqslant 2\pi$, and normalize it by dividing by its norm.

Solution dot-e

6.

Verify that the most general multiple of $\sin(x)$ that is normalized on the interval $0 \leqslant x \leqslant 2\pi$ is $e^{{\rm i}\alpha}\sin(x)/\sqrt{\pi}$, where $\alpha$ is any arbitrary real number. So, using the Euler formula, the following multiples of $\sin(x)$ are all normalized: $\sin(x)/\sqrt{\pi}$ (for $\alpha = 0$), $-\sin(x)/\sqrt{\pi}$ (for $\alpha = \pi$), and ${\rm i}\sin(x)/\sqrt{\pi}$ (for $\alpha = \pi/2$).

Solution dot-f

7.

Show that the functions $e^{4{\rm i}{\pi}x}$ and $e^{6{\rm i}{\pi}x}$ are an orthonormal set on the interval $0 \leqslant x \leqslant 1$.

Solution dot-g