N.23 Fundamental assumption of statistics

The assumption that all energy eigenstates with the same energy are equally likely is simply stated as an axiom in typical books, [4, p. 92], [18, p. 1], [25, p. 230], [52, p. 177]. Some of these sources quite explicitly suggest that the fact should be self-evident to the reader.

However, why could not an energy eigenstate, call it A, in which all particles have about the same energy, have a wildly different probability from some eigenstate B in which one particle has almost all the energy and the rest have very little? The two wave functions are wildly different. (Note that if the probabilities are only somewhat different, it would not affect various conclusions much, because of the vast numerical superiority of the most probable energy distribution.)
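That "vast numerical superiority" can be made concrete with a toy count that is not in the original text: share Q energy quanta between two sets of N distinguishable harmonic oscillators, for which the multiplicity of q quanta over N oscillators is the standard stars-and-bars count C(q + N − 1, q). All numbers below are made-up illustration values.

```python
from math import comb

# Toy illustration (not from the original text): Q quanta shared between
# two sets of N distinguishable oscillators.
N, Q = 50, 100

def multiplicity(q, n):
    # number of ways to distribute q quanta over n oscillators: C(q+n-1, q)
    return comb(q + n - 1, q)

# joint multiplicity when the first set holds q1 of the Q quanta
omega = [multiplicity(q1, N) * multiplicity(Q - q1, N) for q1 in range(Q + 1)]

even_split = omega[Q // 2]   # energy shared about equally
lopsided   = omega[0]        # one set holds everything
print(f"even split exceeds lopsided by a factor {even_split / lopsided:.3e}")
```

Even for these modest numbers the even split outnumbers the lopsided one by many orders of magnitude, which is why moderate probability differences between individual eigenstates would wash out of the macroscopic conclusions.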

The fact that it does not take any energy to go from one state to the other [18, p. 1] does not imply that the system must spend equal time in each state, or that each state must be equally likely. It is not difficult at all to construct nonlinear systems of evolution equations that conserve energy and in which the system runs exponentially away towards specific states.
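As one made-up example of such a system (not from the text): the pair of equations dx/dt = xy, dy/dt = −xy conserves the "energy" x + y exactly, yet drives y exponentially to zero, so the system runs away to the single state where x holds everything. A minimal numerical sketch:

```python
# Toy nonlinear system (made up for illustration): dx/dt = x*y, dy/dt = -x*y.
# The conserved quantity x + y plays the role of energy, yet the system
# runs away exponentially towards the specific state y = 0.
x, y = 0.5, 0.5
dt, steps = 1e-3, 20_000          # forward Euler up to t = 20
total0 = x + y
for _ in range(steps):
    flow = x * y * dt             # equal and opposite transfers keep x + y exact
    x, y = x + flow, y - flow
print(f"x = {x:.9f}, y = {y:.3e}, drift in x+y = {abs(x + y - total0):.1e}")
```

The transfers are equal and opposite at every step, so the conservation law holds to rounding error while y nevertheless collapses by many orders of magnitude.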

However, the coefficients of the energy eigenfunctions do not satisfy some arbitrary nonlinear system of evolution equations. They evolve according to the Schrödinger equation, and the interactions between the energy eigenstates are determined by a Hamiltonian matrix of coefficients. The Hamiltonian is a Hermitian matrix; it has to be to conserve energy. That means that the coupling constant that allows state A to increase or reduce the probability of state B is just as big as the coupling constant that allows B to increase or reduce the probability of state A. More specifically, the rate of increase of the probability of state A due to state B, and vice versa, is seen to be

$$
\left(\frac{{\rm d}\vert c_A\vert^2}{{\rm d}t}\right)_{\rm due\ to\ B}
= -\left(\frac{{\rm d}\vert c_B\vert^2}{{\rm d}t}\right)_{\rm due\ to\ A}
= \frac{2}{\hbar}\,\Im\left(c_A^*H_{AB}c_B\right)
$$

where $H_{AB}$ is the perturbation Hamiltonian coefficient between A and B. (In the absence of perturbations, the energy eigenfunctions do not interact and $H_{AB} = 0$.) Assuming that the phase of the Hamiltonian coefficient is random compared to the phase difference between A and B, the transferred probability can go at random one way or the other, regardless of which state is initially more likely. Even if A is currently very improbable, it is just as likely to pick up probability from B as B is to pick up probability from A. Also note that eigenfunctions of the same energy are unusually effective in exchanging probability, since their coefficients evolve approximately in phase.
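A minimal numerical sketch of this two-state exchange, in natural units with $\hbar$ = 1; the coupling strength and phases are made-up illustration values. It evolves the coefficients under a Hermitian Hamiltonian and compares the measured probability flow into A against the rate $(2/\hbar)\,\Im(c_A^*H_{AB}c_B)$ that follows from the Schrödinger equation, for two opposite coupling phases:

```python
import numpy as np

hbar = 1.0  # natural units; all numbers below are illustration values

def propagate(c, H, dt):
    # exact short-time evolution c(t+dt) = exp(-i H dt / hbar) c(t)
    w, V = np.linalg.eigh(H)
    return V @ (np.exp(-1j * w * dt / hbar) * (V.conj().T @ c))

def rate_A_due_to_B(cA, cB, HAB):
    # probability flow into A implied by the Schrodinger equation
    return (2.0 / hbar) * np.imag(np.conj(cA) * HAB * cB)

g, dt = 0.03, 1e-4
for phase in (0.5, 0.5 + np.pi):                       # two opposite coupling phases
    HAB = g * np.exp(1j * phase)
    H = np.array([[0.0, HAB], [np.conj(HAB), 0.0]])    # Hermitian: H_BA = H_AB*
    c = np.array([0.1, np.sqrt(0.99)], dtype=complex)  # state A very improbable
    predicted = rate_A_due_to_B(c[0], c[1], HAB)
    c_new = propagate(c, H, dt)
    measured = (abs(c_new[0])**2 - abs(c[0])**2) / dt
    print(f"phase {phase:.2f}: predicted {predicted:+.5f}, measured {measured:+.5f}")
```

Flipping the phase of $H_{AB}$ flips the sign of the flow: the very improbable state A gains probability just as readily as it loses it, while the total probability stays exactly conserved under the unitary evolution.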

This note would argue that under such circumstances, it is simply no longer reasonable to think that the difference in probabilities between eigenstates of the same energy is enough to make a difference. How could energy eigenstates that readily and randomly exchange probability, in either direction, end up in a situation where some eigenstates have absolutely nothing, to incredible precision?

Feynman [18, p. 8] gives an argument based on time-dependent perturbation theory, chapter 11.10. However, time-dependent perturbation theory relies heavily on approximation, and worse, on the measurement wild card. Until scientists, while maybe not agreeing exactly on what measurement is, start laying down rigorous, unambiguous, mathematical ground rules on what measurements can and cannot do, measurement is like astrology: anything goes.