D.18 Eigenfunctions of commuting operators

The fact that two operators that commute have a common set of eigenfunctions can be seen as follows: assume that $\alpha$ is an eigenfunction of $A$ with eigenvalue $a$. Then since $A$ and $B$ commute, $AB\alpha=BA\alpha=aB\alpha$. Comparing start and end, $B\alpha$ must be an eigenfunction of $A$ with eigenvalue $a$, just like $\alpha$ itself is. If there is no degeneracy of the eigenvalue, that must mean that $B\alpha$ equals $\alpha$ or is at least proportional to it. That is the same as saying that $\alpha$ is an eigenfunction of $B$ too. (In the special case that $B\alpha$ is zero, $\alpha$ is still an eigenfunction of $B$, with eigenvalue zero.)
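As a quick numerical check of this argument, the following sketch (a hypothetical example using NumPy, with $3\times3$ matrices built to commute by giving them the same eigenvectors) verifies that when the eigenvalues of $A$ are nondegenerate, every eigenvector of $A$ is automatically an eigenvector of $B$:

```python
import numpy as np

# Hypothetical example: build commuting symmetric matrices A and B by
# giving them the same orthonormal eigenvectors (columns of a random
# orthogonal Q) but different, nondegenerate eigenvalues.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
A = Q @ np.diag([1.0, 2.0, 3.0]) @ Q.T
B = Q @ np.diag([5.0, -1.0, 2.0]) @ Q.T

# A and B commute:
assert np.allclose(A @ B, B @ A)

# Each eigenvector alpha of A satisfies B alpha = b alpha for some b:
eigvals, eigvecs = np.linalg.eigh(A)
for i in range(3):
    alpha = eigvecs[:, i]
    Balpha = B @ alpha
    b = alpha @ Balpha        # Rayleigh quotient gives the eigenvalue of B
    assert np.allclose(Balpha, b * alpha)
```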

If there is degeneracy, the eigenfunctions of $A$ are not unique and you can mess with them until they all do become eigenfunctions of $B$ too. That can be shown assuming that the problem has been approximated by a finite-dimensional one. Then $A$ and $B$ become matrices and the eigenfunctions become eigenvectors. Consider each eigenvalue of $A$ in turn. There will be more than one eigenvector corresponding to a degenerate eigenvalue $a$. Now by completeness, any eigenvector $\beta$ of $B$ can be written as a combination of the eigenvectors of $A$, and more particularly as $\beta=\beta_n+\beta_a$ where $\beta_a$ is a combination of the eigenvectors of $A$ with eigenvalue $a$ and $\beta_n$ a combination of the eigenvectors of $A$ with other eigenvalues.

The vectors $\beta_n$ and $\beta_a$ separately are still eigenvectors of $B$ if nonzero, since as noted above, $B$ converts eigenvectors of $A$ into eigenvectors with the same eigenvalue, or into zero. (For example, if $B\beta_a$ were not $b\beta_a$, $B\beta_n$ would have to make up the difference, and $B\beta_n$ can only produce combinations of the eigenvectors of $A$ that do not have eigenvalue $a$.) Now replace the eigenvector $\beta$ by either $\beta_a$ or $\beta_n$, whichever one is independent of the other eigenvectors of $B$. Doing this for all eigenvectors of $B$, you achieve that each replacement eigenvector of $B$ is either a combination of the eigenvectors of $A$ with eigenvalue $a$ or a combination of the other eigenvectors of $A$. The new eigenvectors of $B$ that are combinations of the eigenvectors of $A$ with eigenvalue $a$ can now be taken as the replacement eigenvectors of $A$ with eigenvalue $a$. They are eigenvectors of both $A$ and $B$. Repeat for all eigenvalues of $A$.
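The degenerate case can also be illustrated numerically. This hypothetical NumPy sketch gives $A$ a doubly degenerate eigenvalue; the basis of the degenerate eigenspace that the eigenvalue routine happens to return need not consist of eigenvectors of $B$. Note that instead of the replacement argument above, the sketch uses the standard direct trick of diagonalizing $B$ restricted to the degenerate eigenspace of $A$; it produces the same kind of common eigenvectors:

```python
import numpy as np

# Hypothetical sketch of the degenerate case: A has a doubly degenerate
# eigenvalue a = 1, so its eigenvectors for that eigenvalue are not unique.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
A = Q @ np.diag([1.0, 1.0, 2.0]) @ Q.T      # eigenvalue 1 is degenerate
B = Q @ np.diag([3.0, -4.0, 7.0]) @ Q.T     # built to commute with A
assert np.allclose(A @ B, B @ A)

# np.linalg.eigh returns some orthonormal basis of the degenerate
# eigenspace of A, not necessarily made up of eigenvectors of B.
eigvals, V = np.linalg.eigh(A)
Va = V[:, np.isclose(eigvals, 1.0)]         # basis of the a = 1 eigenspace

# Since B maps this eigenspace into itself, restrict B to it and
# diagonalize there:
Bsub = Va.T @ B @ Va
bvals, W = np.linalg.eigh(Bsub)
new_vecs = Va @ W                           # replacement eigenvectors

# The replacements are eigenvectors of both A and B:
for k in range(new_vecs.shape[1]):
    v = new_vecs[:, k]
    assert np.allclose(A @ v, 1.0 * v)
    assert np.allclose(B @ v, bvals[k] * v)
```

The key step is that $B$ restricted to the eigenspace of $A$ is still a symmetric matrix, so it has its own complete set of eigenvectors within that space; those are simultaneously eigenvectors of $A$ (with eigenvalue $a$) and of $B$.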

Similar arguments can be used recursively to show that more generally, a set of operators that all commute have a common set of eigenvectors.

The operators do not really have to be Hermitian, just diagonalizable: they must have a complete set of eigenfunctions.

The above derivation assumed that the problem was finite-dimensional, or discretized in some way into a finite-dimensional one like you do in numerical solutions. The latter is open to some suspicion, because even the most accurate numerical approximation is never truly exact. Unfortunately, in the infinite-dimensional case the derivation gets much trickier. However, as the hydrogen atom and harmonic oscillator eigenfunction examples indicate, typical infinite systems in nature do satisfy the relationship anyway.