### D.12 The harmonic oscillator solution

If you really want to know how the harmonic oscillator wave function can be found, here it is. Read at your own risk.

The ODE (ordinary differential equation) to solve is

$$-\frac{\hbar^2}{2m}\frac{{\rm d}^2\psi}{{\rm d}x^2} + \frac{1}{2}\,m\omega^2 x^2\,\psi = E\,\psi$$

where the spring constant $c$ was rewritten as the equivalent expression $m\omega^2$.

Now the first thing you always want to do with this sort of problem is to simplify it as much as possible. In particular, get rid of as many dimensional constants as you can by rescaling the variables: define a new scaled $x$-coordinate $\xi$ and a scaled energy $\epsilon$ by

$$x \equiv \ell\,\xi \qquad E \equiv \tfrac{1}{2}\hbar\omega\,\epsilon$$

If you make these replacements into the ODE above, you can make the coefficients of the two terms in the left hand side equal by choosing $\ell = \sqrt{\hbar/m\omega}$. In that case both terms will have the same net coefficient $\tfrac{1}{2}\hbar\omega$. Then if you cleverly choose $E = \tfrac{1}{2}\hbar\omega\,\epsilon$, the right hand side will have that coefficient too, and you can divide it away and end up with no coefficients at all:

$$-\frac{{\rm d}^2\psi}{{\rm d}\xi^2} + \xi^2\,\psi = \epsilon\,\psi$$

Looks a lot cleaner, doesn't it?

Now examine this equation for large values of $\xi$ (i.e. large $x$). You get approximately

$$\frac{{\rm d}^2\psi}{{\rm d}\xi^2} \approx \xi^2\,\psi$$

If you write the solution as an exponential, you can ballpark that it must take the form

$$\psi \sim e^{\pm\xi^2/2\,+\,\ldots}$$

where the dots indicate terms that are small compared to $\xi^2/2$ for large $\xi$. The form of the solution is important, since $e^{+\xi^2/2}$ becomes infinitely large at large $\xi$. That is unacceptable: the probability of finding the particle cannot become infinitely large at large $x$: the total probability of finding the particle must be one, not infinite. The only solutions that are acceptable are those that behave as $e^{-\xi^2/2}$ for large $\xi$.
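The ballpark is easy to check: $e^{-\xi^2/2}$ satisfies $\psi'' = (\xi^2 - 1)\psi$ exactly, and the $-1$ is negligible next to $\xi^2$ at large $\xi$. A minimal symbolic check, sketched here with sympy (the variable names are mine):

```python
import sympy as sp

xi = sp.symbols('xi', real=True)

# Candidate large-xi behavior: psi ~ exp(-xi^2/2)
psi = sp.exp(-xi**2 / 2)

# Exact second derivative is (xi^2 - 1) psi; for large xi the -1 is
# negligible next to xi^2, so psi'' ~ xi^2 psi as claimed
residual = sp.simplify(sp.diff(psi, xi, 2) - (xi**2 - 1) * psi)
assert residual == 0
```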

Now split off the leading exponential part by defining a new unknown $h(\xi)$ by

$$\psi \equiv h(\xi)\,e^{-\xi^2/2}$$

Substituting this in the ODE and dividing out the exponential, you get:

$$\frac{{\rm d}^2h}{{\rm d}\xi^2} = 2\xi\,\frac{{\rm d}h}{{\rm d}\xi} + (1-\epsilon)\,h$$
Now try to solve this by writing $h$ as a power series (say, a Taylor series):

$$h = \sum_{p} c_p\,\xi^p$$

where the values of $p$ run over whatever the appropriate powers are and the $c_p$ are constants. If you plug this into the ODE, you get

$$\sum_{p} p(p-1)\,c_p\,\xi^{p-2} = \sum_{p} (2p+1-\epsilon)\,c_p\,\xi^p$$

For the two sides to be equal, they must have the same coefficient for every power of $\xi$.

There must be a lowest value of $p$ for which there is a nonzero coefficient $c_p$, for if $p$ took on arbitrarily large negative values, $h$ would blow up strongly at the origin, and the probability to find the particle near the origin would then be infinite. Denote the lowest value of $p$ by $q$. This lowest power produces a power of $\xi^{q-2}$ in the left hand side of the equation above, but there is no corresponding power in the right hand side. So, the coefficient $q(q-1)$ of $\xi^{q-2}$ will need to be zero, and that means $q$ is either 0 or 1. So the power series for $h$ will need to start as either $c_0 + \ldots$ or $c_1\xi + \ldots$. The constant $c_0$ or $c_1$ is allowed to have any nonzero value.

But note that the $c_q\xi^q$ term normally produces a term $(2q+1-\epsilon)\,c_q\,\xi^q$ in the right hand side of the equation above. For the left hand side to have a matching $\xi^q$ term, there will need to be a further term $c_{\bar q}\,\xi^{\bar q}$ in the power series for $h$,

$$h = c_q\,\xi^q + c_{\bar q}\,\xi^{\bar q} + \ldots$$

where $\bar q - 2$ will need to equal $q$, so $\bar q = q + 2$. This term in turn will normally produce a term $(2\bar q+1-\epsilon)\,c_{\bar q}\,\xi^{q+2}$ in the right hand side which will have to be canceled in the left hand side by a $c_{q+4}\,\xi^{q+4}$ term in the power series for $h$. And so on.

So, if the power series starts with $q = 0$, the solution will take the general form

$$h = c_0 + c_2\xi^2 + c_4\xi^4 + c_6\xi^6 + \ldots$$

while if it starts with $q = 1$ you will get

$$h = c_1\xi + c_3\xi^3 + c_5\xi^5 + c_7\xi^7 + \ldots$$

In the first case, you have a symmetric solution, one which remains the same when you flip over the sign of $\xi$, and in the second case you have an antisymmetric solution, one which changes sign when you flip over the sign of $\xi$.

You can find a general formula for the coefficients of the series by making the change in notations $p = \bar p + 2$ in the left-hand-side sum:

$$\sum_{p=q} p(p-1)\,c_p\,\xi^{p-2} = \sum_{\bar p = q-2} (\bar p+2)(\bar p+1)\,c_{\bar p+2}\,\xi^{\bar p}$$

Note that you can start summing at $\bar p = q$ rather than $\bar p = q - 2$, since the first term in the sum is zero anyway. Next note that you can again forget about the difference between $p$ and $\bar p$, because it is just a symbolic summation variable. The symbolic sum writes out to the exact same actual sum whether you call the symbolic summation variable $p$ or $\bar p$.

So for the powers $\xi^p$ in the two sides to be equal, you must have

$$(p+2)(p+1)\,c_{p+2} = (2p+1-\epsilon)\,c_p$$

In particular, for large $p$, by approximation

$$\frac{c_{p+2}}{c_p} \approx \frac{2}{p}$$
Now if you check out the Taylor series of $e^{\xi^2}$ (i.e. the Taylor series of $e^x$ with $x$ replaced by $\xi^2$), you find it satisfies the exact same equation. So, normally the solution blows up something like $e^{\xi^2}$ at large $\xi$. And since $\psi$ was $h\,e^{-\xi^2/2}$, normally $\psi$ takes on the unacceptable form $e^{+\xi^2/2}$. (If you must have rigor here, estimate $h$ in terms of $e^{\alpha\xi^2}$, where $\alpha$ is a number slightly less than one, plus a polynomial. That is enough to show unacceptability of such solutions.)
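The growth argument can be illustrated numerically. Under the recurrence $(p+2)(p+1)\,c_{p+2} = (2p+1-\epsilon)\,c_p$ derived here, the ratio $c_{p+2}/c_p$ tends to $2/p$ for large $p$, and the Taylor coefficients of $e^{\xi^2}$ (which are $1/(p/2)!$ for even $p$) give a ratio $2/(p+2)$ that tends to the same thing. A small Python sketch (the chosen $\epsilon$ and the value of $p$ are arbitrary):

```python
from math import factorial

eps = 5.0    # any fixed scaled energy; it drops out for large p
p = 1000     # a representative "large p"

# c_{p+2}/c_p from the recurrence (p+2)(p+1) c_{p+2} = (2p + 1 - eps) c_p
recurrence_ratio = (2 * p + 1 - eps) / ((p + 2) * (p + 1))

# Same ratio for the Taylor series of exp(xi^2): c_p = 1/(p/2)! for even p
exp_series_ratio = factorial(p // 2) / factorial(p // 2 + 1)

# Both approach 2/p, so h grows like exp(xi^2) unless the series terminates
assert abs(recurrence_ratio / (2 / p) - 1) < 0.01
assert abs(exp_series_ratio / (2 / p) - 1) < 0.01
```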

What are the options for acceptable solutions? The only possibility is that the power series terminates. There must be a highest power $p$, call it $p = n$, whose term in the right hand side is zero:

$$0 = (2n+1-\epsilon)\,c_n$$

In that case, there is no need for a further $c_{n+2}\,\xi^{n+2}$ term; the power series will remain a polynomial of degree $n$. But note that all this requires the scaled energy $\epsilon$ to equal $2n+1$, and the actual energy is therefore $(2n+1)\,\hbar\omega/2$. Different choices for the power $n$ at which the series terminates produce different energies and corresponding eigenfunctions. But they are discrete, since $n$, as any power $p$, must be a nonnegative integer.
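The termination can be demonstrated by just running the recurrence: with $\epsilon = 2n+1$, the coefficient $c_{n+2}$ and everything beyond it vanish, leaving a polynomial of degree $n$. A small Python sketch (the function name and cutoff are mine):

```python
def h_coefficients(n, pmax=14):
    """Power-series coefficients c_p of h for scaled energy eps = 2n + 1,
    from the recurrence (p+2)(p+1) c_{p+2} = (2p + 1 - eps) c_p."""
    eps = 2 * n + 1
    c = [0.0] * (pmax + 1)
    c[n % 2] = 1.0  # the free starting constant c_0 (n even) or c_1 (n odd)
    for p in range(n % 2, pmax - 1, 2):
        c[p + 2] = (2 * p + 1 - eps) * c[p] / ((p + 2) * (p + 1))
    return c

# For eps = 2n + 1 the series terminates: c_p = 0 for every p > n,
# so h is a polynomial of degree n
coeffs = h_coefficients(4)
assert coeffs[4] != 0
assert all(c == 0.0 for p, c in enumerate(coeffs) if p > 4)
```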

With $\epsilon$ identified as $2n+1$, you can find the ODE for $h$ listed in table books, like [41, 29.1], under the name “Hermite's differential equation.” They then identify the polynomial solutions as the so-called “Hermite polynomials $H_n(\xi)$,” except for a normalization factor. To find the normalization factor, i.e. $c_0$ or $c_1$, demand that the total probability of finding the particle anywhere is one, $\int_{-\infty}^{\infty}|\psi|^2\,{\rm d}x = 1$. You should be able to find the value for the appropriate integral in your table book, like [41, 29.15].

Putting it all together, the generic expression for the eigenfunctions can be found to be:

$$\psi_n = \left(\frac{1}{\pi\ell^2}\right)^{1/4} \frac{H_n(\xi)}{\sqrt{2^n\,n!}}\;e^{-\xi^2/2} \qquad \xi = \frac{x}{\ell} \qquad \ell = \sqrt{\frac{\hbar}{m\omega}} \qquad \mbox{(D.4)}$$

where the details of the Hermite polynomials $H_n$ can be found in table books like [41, pp. 167-168]. They are readily evaluated on a computer using the “recurrence relation” you can find there, for as far as computer round-off error allows (up to $n$ about 70).
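As an illustration of that computer evaluation, here is a minimal Python sketch. It assumes the standard three-term recurrence $H_{k+1}(\xi) = 2\xi H_k(\xi) - 2k\,H_{k-1}(\xi)$ (one common form of the table-book relation) and the normalization of (D.4); the function names are mine:

```python
from math import exp, factorial, pi, sqrt

def hermite(n, xi):
    """H_n(xi) from the recurrence H_{k+1} = 2 xi H_k - 2 k H_{k-1}."""
    hk, hkm1 = 1.0, 0.0  # H_0 = 1, and a dummy "H_{-1}" = 0
    for k in range(n):
        hk, hkm1 = 2 * xi * hk - 2 * k * hkm1, hk
    return hk

def psi(n, xi, ell=1.0):
    """Normalized eigenfunction (D.4) at scaled position xi = x / ell."""
    norm = 1.0 / sqrt(sqrt(pi) * ell * 2**n * factorial(n))
    return norm * hermite(n, xi) * exp(-xi**2 / 2)

# Spot checks against the explicit low-order Hermite polynomials
assert hermite(2, 1.5) == 4 * 1.5**2 - 2
assert abs(hermite(3, 0.7) - (8 * 0.7**3 - 12 * 0.7)) < 1e-12
# Ground state at the origin: psi_0(0) = pi**(-1/4)
assert abs(psi(0, 0.0) - pi**-0.25) < 1e-12
```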

Quantum field theory allows a much neater way to find the eigenfunctions. It is explained in addendum {A.15.5} or equivalently in {D.64}.