Peeter Joot's (OLD) Blog.

Math, physics, perl, and programming obscurity.

Archive for September, 2011

Time independent perturbation theory with degeneracy

Posted by peeterjoot on September 30, 2011

[Click here for a PDF of this post with nicer formatting (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Time independent perturbation with degeneracy.

In class it was claimed that if we repeated the derivation of the first order perturbation with degenerate states, then we’d get into (divide by zero) trouble if the state we were perturbing had degeneracy. Here I alter the previous derivation to show this explicitly.

The setup

Like the non-degenerate case, we are covering the time independent perturbation methods from section 16.1 of the text [1].

We start with a known Hamiltonian H_0, and alter it with the addition of a “small” perturbation

\begin{aligned}H = H_0 + \lambda H', \qquad \lambda \in [0,1]\end{aligned} \hspace{\stretch{1}}(1.1)

For the original operator, we assume that a complete set of eigenkets and eigenvalues is known

\begin{aligned}H_0 {\lvert {{\psi_{s \alpha}}^{(0)}} \rangle} = {E_s}^{(0)} {\lvert {{\psi_{s \alpha}}^{(0)}} \rangle}\end{aligned} \hspace{\stretch{1}}(1.2)

We seek the perturbed eigensolution

\begin{aligned}H {\lvert {\psi_{s \alpha}} \rangle} = E_{s \alpha} {\lvert {\psi_{s \alpha}} \rangle}\end{aligned} \hspace{\stretch{1}}(1.3)

and assume a perturbative series representation for the energy eigenvalues in the new system

\begin{aligned}E_{s \alpha} = {E_s}^{(0)} + \lambda {E_{s \alpha}}^{(1)} + \lambda^2 {E_{s \alpha}}^{(2)} + \cdots\end{aligned} \hspace{\stretch{1}}(1.4)

Note that we do not assume that the perturbed energy states, if degenerate in the original system, are still degenerate after perturbation.

Given an assumed representation for the new eigenkets in terms of the known basis

\begin{aligned}{\lvert {\psi_{s \alpha}} \rangle} = \sum_{n, \beta} c_{ns;\beta \alpha} {\lvert {{\psi_{n \beta}}^{(0)}} \rangle} \end{aligned} \hspace{\stretch{1}}(1.5)

and a perturbative series representation for the probability coefficients

\begin{aligned}c_{ns;\beta \alpha} = {c_{ns;\beta \alpha}}^{(0)} + \lambda {c_{ns;\beta \alpha}}^{(1)} + \lambda^2 {c_{ns;\beta \alpha}}^{(2)},\end{aligned} \hspace{\stretch{1}}(1.6)

so that

\begin{aligned}{\lvert {\psi_{s \alpha}} \rangle} = \sum_{n, \beta} {c_{ns;\beta \alpha}}^{(0)} {\lvert {{\psi_{n \beta}}^{(0)}} \rangle} +\lambda\sum_{n, \beta} {c_{ns;\beta \alpha}}^{(1)} {\lvert {{\psi_{n \beta}}^{(0)}} \rangle} + \lambda^2\sum_{n, \beta} {c_{ns;\beta \alpha}}^{(2)} {\lvert {{\psi_{n \beta}}^{(0)}} \rangle} + \cdots\end{aligned} \hspace{\stretch{1}}(1.7)

Setting \lambda = 0 requires

\begin{aligned}{c_{ns;\beta \alpha}}^{(0)} = \delta_{ns;\beta \alpha},\end{aligned} \hspace{\stretch{1}}(1.8)

for

\begin{aligned}\begin{aligned}{\lvert {\psi_{s \alpha}} \rangle} &= {\lvert {{\psi_{s \alpha}}^{(0)}} \rangle} +\lambda\sum_{n, \beta} {c_{ns;\beta \alpha}}^{(1)} {\lvert {{\psi_{n \beta}}^{(0)}} \rangle} + \lambda^2\sum_{n, \beta} {c_{ns;\beta \alpha}}^{(2)} {\lvert {{\psi_{n \beta}}^{(0)}} \rangle} + \cdots \\ &=\left(1 + \lambda {c_{ss ; \alpha \alpha}}^{(1)} + \lambda^2 {c_{ss ; \alpha \alpha}}^{(2)} + \cdots\right){\lvert {{\psi_{s \alpha}}^{(0)}} \rangle} + \lambda\sum_{n\beta \ne s\alpha} {c_{ns;\beta \alpha}}^{(1)} {\lvert {{\psi_{n \beta}}^{(0)}} \rangle} +\lambda^2\sum_{n\beta \ne s\alpha} {c_{ns;\beta \alpha}}^{(2)} {\lvert {{\psi_{n \beta}}^{(0)}} \rangle} + \cdots\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.9)

We rescale our kets

\begin{aligned}{\lvert {\bar{\psi}_{s \alpha}} \rangle} ={\lvert {{\psi_{s \alpha}}^{(0)}} \rangle} + \lambda\sum_{n\beta \ne s\alpha} {\bar{c}_{ns;\beta \alpha}}^{(1)} {\lvert {{\psi_{n \beta}}^{(0)}} \rangle} +\lambda^2\sum_{n\beta \ne s\alpha} {\bar{c}_{ns;\beta \alpha}}^{(2)} {\lvert {{\psi_{n \beta}}^{(0)}} \rangle} + \cdots\end{aligned} \hspace{\stretch{1}}(1.10)

where

\begin{aligned}{\bar{c}_{ns;\beta \alpha}}^{(j)} = \frac{{c_{ns;\beta \alpha}}^{(j)}}{1 + \lambda {c_{ss ; \alpha \alpha}}^{(1)} + \lambda^2 {c_{ss ; \alpha \alpha}}^{(2)} + \cdots}\end{aligned} \hspace{\stretch{1}}(1.11)

The normalization of the rescaled kets is then

\begin{aligned}\left\langle{{\bar{\psi}_{s \alpha}}} \vert {{\bar{\psi}_{s \alpha}}}\right\rangle =1+ \lambda^2\sum_{n\beta \ne s\alpha} {\left\lvert{{\bar{c}_{ns ; \beta \alpha}}^{(1)}}\right\rvert}^2+\cdots\equiv \frac{1}{{Z_{s \alpha}}}.\end{aligned} \hspace{\stretch{1}}(1.12)

One can then construct a renormalized ket if desired

\begin{aligned}{\lvert {\bar{\psi}_{s \alpha}} \rangle}_R = Z_{s \alpha}^{1/2} {\lvert {\bar{\psi}_{s \alpha}} \rangle},\end{aligned} \hspace{\stretch{1}}(1.13)

so that

\begin{aligned}({\lvert {\bar{\psi}_{s \alpha}} \rangle}_R)^\dagger {\lvert {\bar{\psi}_{s \alpha}} \rangle}_R = Z_{s \alpha} \left\langle{{\bar{\psi}_{s \alpha}}} \vert {{\bar{\psi}_{s \alpha}}}\right\rangle = 1.\end{aligned} \hspace{\stretch{1}}(1.14)
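
As a small aside (my own note, not from the lecture), inverting 1.12 to lowest order shows that this renormalization is a second order effect

\begin{aligned}Z_{s \alpha} \approx 1 - \lambda^2\sum_{n\beta \ne s\alpha} {\left\lvert{{\bar{c}_{ns ; \beta \alpha}}^{(1)}}\right\rvert}^2.\end{aligned}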

The meat.

We continue by renaming terms in 1.10

\begin{aligned}{\lvert {\bar{\psi}_{s \alpha}} \rangle} ={\lvert {{\psi_{s \alpha}}^{(0)}} \rangle} + \lambda {\lvert {{\psi_{s \alpha}}^{(1)}} \rangle} + \lambda^2 {\lvert {{\psi_{s \alpha}}^{(2)}} \rangle} + \cdots\end{aligned} \hspace{\stretch{1}}(1.15)

where

\begin{aligned}{\lvert {{\psi_{s \alpha}}^{(j)}} \rangle} = \sum_{n\beta \ne s\alpha} {\bar{c}_{ns;\beta \alpha}}^{(j)} {\lvert {{\psi_{n \beta}}^{(0)}} \rangle}.\end{aligned} \hspace{\stretch{1}}(1.16)

Now we act on this with the Hamiltonian

\begin{aligned}H {\lvert {\bar{\psi}_{s \alpha}} \rangle} = E_{s \alpha} {\lvert {\bar{\psi}_{s \alpha}} \rangle},\end{aligned} \hspace{\stretch{1}}(1.17)

or

\begin{aligned}H {\lvert {\bar{\psi}_{s \alpha}} \rangle} - E_{s \alpha} {\lvert {\bar{\psi}_{s \alpha}} \rangle} = 0.\end{aligned} \hspace{\stretch{1}}(1.18)

Expanding this, we have

\begin{aligned}\begin{aligned}&(H_0 + \lambda H') \left({\lvert {{\psi_{s \alpha}}^{(0)}} \rangle} + \lambda {\lvert {{\psi_{s \alpha}}^{(1)}} \rangle} + \lambda^2 {\lvert {{\psi_{s \alpha}}^{(2)}} \rangle} + \cdots\right) \\ &\quad - \left( {E_s}^{(0)} + \lambda {E_{s \alpha}}^{(1)} + \lambda^2 {E_{s \alpha}}^{(2)} + \cdots \right)\left({\lvert {{\psi_{s \alpha}}^{(0)}} \rangle} + \lambda {\lvert {{\psi_{s \alpha}}^{(1)}} \rangle} + \lambda^2 {\lvert {{\psi_{s \alpha}}^{(2)}} \rangle} + \cdots\right)= 0.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.19)

We want to write this as

\begin{aligned}{\lvert {A} \rangle} + \lambda {\lvert {B} \rangle} + \lambda^2 {\lvert {C} \rangle} + \cdots = 0.\end{aligned} \hspace{\stretch{1}}(1.20)

This is

\begin{aligned}\begin{aligned}0 &=\lambda^0(H_0 - E_s^{(0)}) {\lvert {{\psi_{s \alpha}}^{(0)}} \rangle}  \\ &+ \lambda\left((H_0 - E_s^{(0)}) {\lvert {{\psi_{s \alpha}}^{(1)}} \rangle} +(H' - E_{s \alpha}^{(1)}) {\lvert {{\psi_{s \alpha}}^{(0)}} \rangle} \right) \\ &+ \lambda^2\left((H_0 - E_s^{(0)}) {\lvert {{\psi_{s \alpha}}^{(2)}} \rangle} +(H' - E_{s \alpha}^{(1)}) {\lvert {{\psi_{s \alpha}}^{(1)}} \rangle} -E_{s \alpha}^{(2)} {\lvert {{\psi_{s \alpha}}^{(0)}} \rangle} \right) \\ &\cdots\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.21)

So we form

\begin{aligned}{\lvert {A} \rangle} &=(H_0 - E_s^{(0)}) {\lvert {{\psi_{s \alpha}}^{(0)}} \rangle} \\ {\lvert {B} \rangle} &=(H_0 - E_s^{(0)}) {\lvert {{\psi_{s \alpha}}^{(1)}} \rangle} +(H' - E_{s \alpha}^{(1)}) {\lvert {{\psi_{s \alpha}}^{(0)}} \rangle} \\ {\lvert {C} \rangle} &=(H_0 - E_s^{(0)}) {\lvert {{\psi_{s \alpha}}^{(2)}} \rangle} +(H' - E_{s \alpha}^{(1)}) {\lvert {{\psi_{s \alpha}}^{(1)}} \rangle} -E_{s \alpha}^{(2)} {\lvert {{\psi_{s \alpha}}^{(0)}} \rangle},\end{aligned} \hspace{\stretch{1}}(1.22)

and so forth.

Zeroth order in \lambda

Since H_0 {\lvert {{\psi_{s \alpha}}^{(0)}} \rangle} = E_s^{(0)} {\lvert {{\psi_{s \alpha}}^{(0)}} \rangle}, this first condition on {\lvert {A} \rangle} is not much more than a statement that 0 - 0 = 0.

First order in \lambda

How about {\lvert {B} \rangle} = 0? For this to be zero we require that both of the following are simultaneously zero

\begin{aligned}\left\langle{{{\psi_{s \alpha}}^{(0)}}} \vert {{B}}\right\rangle &= 0 \\ \left\langle{{{\psi_{m \beta}}^{(0)}}} \vert {{B}}\right\rangle &= 0, \qquad m \beta \ne s \alpha\end{aligned} \hspace{\stretch{1}}(1.25)

This first condition is

\begin{aligned}{\langle {{\psi_{s \alpha}}^{(0)}} \rvert} (H' - E_{s \alpha}^{(1)}) {\lvert {{\psi_{s \alpha}}^{(0)}} \rangle} = 0.\end{aligned} \hspace{\stretch{1}}(1.27)

With

\begin{aligned}{\langle {{\psi_{m \beta}}^{(0)}} \rvert} H' {\lvert {{\psi_{s \alpha}}^{(0)}} \rangle} \equiv {H_{ms ; \beta \alpha}}',\end{aligned} \hspace{\stretch{1}}(1.28)

this is

\begin{aligned}{H_{ss ; \alpha \alpha}}' = E_{s \alpha}^{(1)}.\end{aligned} \hspace{\stretch{1}}(1.29)
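
Spelled out (filling in a small step), since the zeroth order kets are normalized, 1.27 is just

\begin{aligned}{H_{ss ; \alpha \alpha}}' - E_{s \alpha}^{(1)} \left\langle{{{\psi_{s \alpha}}^{(0)}}} \vert {{{\psi_{s \alpha}}^{(0)}}}\right\rangle = {H_{ss ; \alpha \alpha}}' - E_{s \alpha}^{(1)} = 0.\end{aligned}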

From the second condition we have

\begin{aligned}0 = {\langle {{\psi_{m \beta}}^{(0)}} \rvert} (H_0 - E_s^{(0)}) {\lvert {{\psi_{s \alpha}}^{(1)}} \rangle} +{\langle {{\psi_{m \beta}}^{(0)}} \rvert} (H' - E_{s \alpha}^{(1)}) {\lvert {{\psi_{s \alpha}}^{(0)}} \rangle} \end{aligned} \hspace{\stretch{1}}(1.30)

Utilizing the Hermitian nature of H_0 we can act backwards on {\langle {{\psi_{m \beta}}^{(0)}} \rvert}

\begin{aligned}{\langle {{\psi_{m \beta}}^{(0)}} \rvert} H_0=E_m^{(0)} {\langle {{\psi_{m \beta}}^{(0)}} \rvert}.\end{aligned} \hspace{\stretch{1}}(1.31)

We note that \left\langle{{{\psi_{m \beta}}^{(0)}}} \vert {{{\psi_{s \alpha}}^{(0)}}}\right\rangle = 0, m \beta \ne s \alpha. We can also expand the \left\langle{{{\psi_{m \beta}}^{(0)}}} \vert {{{\psi_{s \alpha}}^{(1)}}}\right\rangle, which is

\begin{aligned}\left\langle{{{\psi_{m \beta}}^{(0)}}} \vert {{{\psi_{s \alpha}}^{(1)}}}\right\rangle ={\langle {{\psi_{m \beta}}^{(0)}} \rvert}\left(\sum_{n\delta \ne s\alpha} {\bar{c}_{ns;\delta \alpha}}^{(1)} {\lvert {{\psi_{n \delta}}^{(0)}} \rangle}\right).\end{aligned}

I found that reducing this sum wasn’t obvious until some actual integers were plugged in. Suppose that s\alpha = 3\,1, and m \beta = 2\,2, then this is

\begin{aligned}\left\langle{{{\psi_{2\,2}}^{(0)}}} \vert {{{\psi_{3\,1}}^{(1)}}}\right\rangle &={\langle {{\psi_{2\,2}}^{(0)}} \rvert}\left(\sum_{n \delta \in \{1\,1, 1\,2, \cdots, 2\,1, 2\,2, 2\,3, \cdots, 3\,2, 3\,3, \cdots \} } {\bar{c}_{n 3; \delta 1}}^{(1)} {\lvert {{\psi_{n \delta}}^{(0)}} \rangle}\right) \\ &={\bar{c}_{2\,3 ; 2\,1}}^{(1)} \left\langle{{{\psi_{2\,2}}^{(0)}}} \vert {{{\psi_{2\,2}}^{(0)}}}\right\rangle \\ &={\bar{c}_{2\,3; 2\,1}}^{(1)}.\end{aligned}

Observe that we can also replace the superscript (1) with (j) in the above manipulation without impacting anything else. That, together with putting back in the abstract indexes, gives the general result

\begin{aligned}\left\langle{{{\psi_{m \beta}}^{(0)}}} \vert {{{\psi_{s \alpha}}^{(j)}}}\right\rangle ={\bar{c}_{ms ; \beta \alpha}}^{(j)}.\end{aligned} \hspace{\stretch{1}}(1.32)

Utilizing this gives us, for m \beta \ne s \alpha,

\begin{aligned}0 = ( E_m^{(0)} - E_s^{(0)}) {\bar{c}_{ms ; \beta \alpha}}^{(1)}+{H_{ms ; \beta \alpha}}' \end{aligned} \hspace{\stretch{1}}(1.33)

Here we see our first sign of the trouble hinted at in lecture 5. Just because m \beta \ne s \alpha does not mean that m \ne s. For example, with m \beta = 1\,1 and s\alpha = 1\,2 we would have

\begin{aligned}E_{1 2}^{(1)} &= {H_{1\,1 ; 2 2}}' \\ {\bar{c}_{1\,1 ; 1 2}}^{(1)}&=\frac{{H_{1\,1 ; 1 2}}' }{ E_1^{(0)} - E_1^{(0)} }\end{aligned} \hspace{\stretch{1}}(1.34)

We’ve got a divide by zero unless additional restrictions are imposed!

If we return to 1.33, we see that for the result to remain valid when m = s (where there is degeneracy for the s state), we require

\begin{aligned}{H_{ss ; \beta \alpha}}' = 0, \qquad \beta \ne \alpha\end{aligned} \hspace{\stretch{1}}(1.36)

(then 1.33 becomes a 0 = 0 equality, and all is still okay)

And summarizing what we learn from our {\lvert {B} \rangle} = 0 conditions we have

\begin{aligned}E_{s \alpha}^{(1)} &= {H_{ss ; \alpha \alpha}}' \\ {\bar{c}_{ms ; \beta \alpha}}^{(1)}&=\frac{{H_{ms ; \beta \alpha}}' }{ E_s^{(0)} - E_m^{(0)} }, \qquad {m \ne s} \\ {H_{ss ; \beta \alpha}}' &= 0, \qquad \beta \ne \alpha\end{aligned} \hspace{\stretch{1}}(1.37)

Second order in \lambda

Doing the same thing for {\lvert {C} \rangle} = 0 we form (or assume)

\begin{aligned}\left\langle{{{\psi_{s \alpha}}^{(0)}}} \vert {{C}}\right\rangle = 0 \end{aligned} \hspace{\stretch{1}}(1.40)

\begin{aligned}0 &= \left\langle{{{\psi_{s \alpha}}^{(0)}}} \vert {{C}}\right\rangle  \\ &={\langle {{\psi_{s \alpha}}^{(0)}} \rvert}\left((H_0 - E_s^{(0)}) {\lvert {{\psi_{s \alpha}}^{(2)}} \rangle} +(H' - E_{s \alpha}^{(1)}) {\lvert {{\psi_{s \alpha}}^{(1)}} \rangle} -E_{s \alpha}^{(2)} {\lvert {{\psi_{s \alpha}}^{(0)}} \rangle}  \right) \\ &=(E_s^{(0)} - E_s^{(0)}) \left\langle{{{\psi_{s \alpha}}^{(0)}}} \vert {{{\psi_{s \alpha}}^{(2)}}}\right\rangle +{\langle {{\psi_{s \alpha}}^{(0)}} \rvert}(H' - E_{s \alpha}^{(1)}) {\lvert {{\psi_{s \alpha}}^{(1)}} \rangle} -E_{s \alpha}^{(2)} \left\langle{{{\psi_{s \alpha}}^{(0)}}} \vert {{{\psi_{s \alpha}}^{(0)}}}\right\rangle \end{aligned}

We need to know what the \left\langle{{{\psi_{s \alpha}}^{(0)}}} \vert {{{\psi_{s \alpha}}^{(1)}}}\right\rangle is, and find that it is zero

\begin{aligned}\left\langle{{{\psi_{s \alpha}}^{(0)}}} \vert {{{\psi_{s \alpha}}^{(1)}}}\right\rangle={\langle {{\psi_{s \alpha}}^{(0)}} \rvert}\sum_{n\beta \ne s\alpha} {\bar{c}_{ns;\beta \alpha}}^{(1)} {\lvert {{\psi_{n \beta}}^{(0)}} \rangle} = 0\end{aligned} \hspace{\stretch{1}}(1.41)

Utilizing that we have

\begin{aligned}E_{s \alpha}^{(2)} &={\langle {{\psi_{s \alpha}}^{(0)}} \rvert} H' {\lvert {{\psi_{s \alpha}}^{(1)}} \rangle}  \\ &={\langle {{\psi_{s \alpha}}^{(0)}} \rvert} H' \sum_{m \beta \ne s \alpha} {\bar{c}_{ms ; \beta \alpha}}^{(1)} {\lvert {{\psi_{m \beta}}^{(0)}} \rangle} \\ &=\sum_{m \beta \ne s \alpha} {\bar{c}_{ms ; \beta \alpha}}^{(1)} {H_{sm ; \alpha \beta}}'\end{aligned}

From 1.37, treating the m = s case carefully, we have

\begin{aligned}E_{s \alpha}^{(2)} =\sum_{\beta \ne \alpha} {\bar{c}_{ss ; \beta \alpha}}^{(1)} {H_{ss ; \alpha \beta}}'+\sum_{m \beta \ne s \alpha, m \ne s} \frac{{H_{ms ; \beta \alpha}}' }{ E_s^{(0)} - E_m^{(0)} }{H_{sm ; \alpha \beta}}'\end{aligned} \hspace{\stretch{1}}(1.42)

Again, only if {H_{ss ; \alpha \beta}}' = 0 for \beta \ne \alpha do we have a result we can use. If that is the case, the first sum is killed without a divide by zero, leaving

\begin{aligned}E_{s \alpha}^{(2)} =\sum_{m \beta \ne s \alpha, m \ne s} \frac{{\left\lvert{{H_{ms ; \beta \alpha}}'}\right\rvert}^2 }{ E_s^{(0)} - E_m^{(0)} }.\end{aligned} \hspace{\stretch{1}}(1.43)

We can now summarize by forming the lowest order terms of the perturbed energy and the corresponding kets

\begin{aligned}E_{s \alpha} &= E_s^{(0)} + \lambda {H_{ss ; \alpha \alpha}}' + \lambda^2 \sum_{m \ne s, m \beta \ne s \alpha} \frac{{\left\lvert{{H_{ms ; \beta \alpha}}'}\right\rvert}^2 }{ E_s^{(0)} - E_m^{(0)} } + \cdots\\ {\lvert {\bar{\psi}_{s \alpha}} \rangle} &= {\lvert {{\psi_{s \alpha}}^{(0)}} \rangle} + \lambda\sum_{m \ne s, m \beta \ne s \alpha} \frac{{H_{ms ; \beta \alpha}}'}{ E_s^{(0)} - E_m^{(0)} } {\lvert {{\psi_{m \beta}}^{(0)}} \rangle}+ \cdots \\ {H_{ss ; \beta \alpha}}' &= 0, \qquad \beta \ne \alpha\end{aligned} \hspace{\stretch{1}}(1.44)

Notational discrepancy: OOPS. It looks like I used a different placement of the indexes for our matrix elements than was used in class.

References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.


vi (vim) trick of the day: replace commas on current line with newlines

Posted by peeterjoot on September 28, 2011

:, ! tr , '\n'
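
To decode that command (my own note): the bare "," range defaults to the current line, and ! filters the range through the shell command tr, which translates each comma into a newline. A pure-vim equivalent, for anybody without tr handy, is a substitute using \r, which inserts a newline in the replacement text:

:s/,/\r/g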


PHY456H1F: Quantum Mechanics II. Lecture 6 (Taught by Prof J.E. Sipe). Interaction picture.

Posted by peeterjoot on September 27, 2011

[Click here for a PDF of this post with nicer formatting (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Disclaimer.

Peeter’s lecture notes from class. May not be entirely coherent.

Interaction picture.

Recap.

Recall our table comparing the Schrödinger and Heisenberg pictures

\begin{aligned}\text{Schr\"{o}dinger picture} &\qquad \text{Heisenberg picture} \\ i \hbar \frac{d}{dt} {\lvert {\psi_s(t)} \rangle} = H {\lvert {\psi_s(t)} \rangle} &\qquad i \hbar \frac{d}{dt} O_H(t) = \left[{O_H},{H}\right] \\ {\langle {\psi_s(t)} \rvert} O_S {\lvert {\psi_s(t)} \rangle} &= {\langle {\psi_H} \rvert} O_H {\lvert {\psi_H} \rangle} \\ {\lvert {\psi_s(0)} \rangle} &= {\lvert {\psi_H} \rangle} \\ O_S &= O_H(0)\end{aligned}

A motivating example.

While fundamental Hamiltonians are independent of time, in a number of common cases we can form approximate Hamiltonians that are time dependent. One such example is that of Coulomb excitations of an atom, as covered in section 18.3 of the text [1], and sketched in the figure below.

Figure: Coulomb interaction of a nucleus and heavy atom.

We consider the interaction of a neutral atom with a nucleus that is heavy enough to be treated classically. From the atom’s point of view, the effects of the heavy nucleus barreling by can be described using a time dependent Hamiltonian. For the atom, that interaction Hamiltonian is

\begin{aligned}H' = \sum_i \frac{ Z e q_i }{{\left\lvert{\mathbf{r}_N(t) - \mathbf{R}_i}\right\rvert}}.\end{aligned} \hspace{\stretch{1}}(2.1)

Here \mathbf{r}_N is the position vector for the heavy nucleus, and \mathbf{R}_i is the position of each charge within the atom, where i ranges over all the internal charges, positive and negative, within the atom.

Placing the origin close to the atom, we can write this interaction Hamiltonian as

\begin{aligned}H'(t) = \not{{\sum_i \frac{Z e q_i}{{\left\lvert{\mathbf{r}_N(t)}\right\rvert}}}}+ \sum_i Z e q_i \mathbf{R}_i \cdot {\left.{{\left(\frac{\partial {}}{\partial {\mathbf{r}}} \frac{1}{{{\left\lvert{ \mathbf{r}_N(t) - \mathbf{r}}\right\rvert}}}\right)}}\right\vert}_{{\mathbf{r} = 0}}\end{aligned} \hspace{\stretch{1}}(2.2)

The first term vanishes because the total charge in our neutral atom is zero. This leaves us with

\begin{aligned}\begin{aligned}H'(t) &= -\sum_i q_i \mathbf{R}_i \cdot {\left.{{\left(-\frac{\partial {}}{\partial {\mathbf{r}}} \frac{ Z e}{{\left\lvert{ \mathbf{r}_N(t) - \mathbf{r}}\right\rvert}}\right)}}\right\vert}_{{\mathbf{r} = 0}} \\ &= - \sum_i q_i \mathbf{R}_i \cdot \mathbf{E}(t),\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.3)

where \mathbf{E}(t) is the electric field at the origin due to the nucleus.

Introducing a dipole moment operator for the atom

\begin{aligned}\boldsymbol{\mu} = \sum_i q_i \mathbf{R}_i,\end{aligned} \hspace{\stretch{1}}(2.4)

the interaction takes the form

\begin{aligned}H'(t) = -\boldsymbol{\mu} \cdot \mathbf{E}(t).\end{aligned} \hspace{\stretch{1}}(2.5)

Here we have a quantum mechanical operator and a classical field taken together. This sort of dipole interaction also occurs when we treat an atom placed into an electromagnetic field, with the field treated classically, as depicted in the figure below.

Figure: atom in a field.

In the figure, we can use the dipole interaction provided \lambda \gg a, where \lambda is the wavelength of the field and a is the “width” of the atom.

Because it is great for examples, we will see this dipole interaction a lot.
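
As a concrete special case (my own addition, not from the lecture), for a one electron atom where the electron has charge -e and position operator \mathbf{R}, the dipole moment operator is just \boldsymbol{\mu} = - e \mathbf{R}, and the interaction takes the form

\begin{aligned}H'(t) = e \mathbf{R} \cdot \mathbf{E}(t).\end{aligned}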

The interaction picture.

Having talked about both the Schrödinger and Heisenberg pictures, we can now move on to describe a hybrid, one where our Hamiltonian has been split into static and time dependent parts

\begin{aligned}H(t) = H_0 + H'(t)\end{aligned} \hspace{\stretch{1}}(2.6)

We will formulate an approach for dealing with problems of this sort called the interaction picture.

This is also covered in section 3.3 of the text, albeit in a much harder to understand fashion (the text appears to try to not pull the result from a magic hat, but the steps to get to the end result are messy). It would probably have been nicer to see it this way instead.

In the Schrödinger picture our dynamics have the form

\begin{aligned}i \hbar \frac{d}{dt} {\lvert {\psi_s(t)} \rangle} = H {\lvert {\psi_s(t)} \rangle}\end{aligned} \hspace{\stretch{1}}(2.7)

How about the Heisenberg picture? We look for a solution

\begin{aligned}{\lvert {\psi_s(t)} \rangle} = U(t, t_0) {\lvert {\psi_s(t_0)} \rangle}.\end{aligned} \hspace{\stretch{1}}(2.8)

We want to find the operator that evolves the state from some initial time t_0 to the arbitrary later state found at time t. Plugging in we have

\begin{aligned}i \hbar \frac{d{{}}}{dt} U(t, t_0) {\lvert {\psi_s(t_0)} \rangle}=H(t) U(t, t_0) {\lvert {\psi_s(t_0)} \rangle}.\end{aligned} \hspace{\stretch{1}}(2.9)

This has to hold for all {\lvert {\psi_s(t_0)} \rangle}, and we can equivalently seek a solution of the operator equation

\begin{aligned}i \hbar \frac{d{{}}}{dt} U(t, t_0) = H(t) U(t, t_0),\end{aligned} \hspace{\stretch{1}}(2.10)

where

\begin{aligned}U(t_0, t_0) = I,\end{aligned} \hspace{\stretch{1}}(2.11)

the identity for the Hilbert space.

Suppose that H(t) were independent of time. We would then find that

\begin{aligned}U(t, t_0) = e^{-i H(t - t_0)/\hbar}.\end{aligned} \hspace{\stretch{1}}(2.12)
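
A quick verification of this (filling in a step): since H commutes with its own exponential, differentiating gives

\begin{aligned}i \hbar \frac{d{{}}}{dt} e^{-i H(t - t_0)/\hbar} = i \hbar \left( \frac{-i H}{\hbar} \right) e^{-i H(t - t_0)/\hbar} = H U(t, t_0),\end{aligned}

with U(t_0, t_0) = I as required.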

If H(t) depends on time, could you guess that

\begin{aligned}U(t, t_0) = e^{-\frac{i}{\hbar} \int_{t_0}^t H(\tau) d\tau}\end{aligned} \hspace{\stretch{1}}(2.13)

holds? No. This may be true when H(t) is a number, but when it is an operator, the Hamiltonian does not necessarily commute with itself at different times

\begin{aligned}\left[{H(t')},{H(t'')}\right] \ne 0.\end{aligned} \hspace{\stretch{1}}(2.14)

So this is wrong in general. As an aside, for numbers, 2.13 can be verified easily. We have

\begin{aligned}i \hbar \left( e^{-\frac{i}{\hbar} \int_{t_0}^t H(\tau) d\tau} \right)'&=i \hbar \left( -\frac{i}{\hbar} \right) \left( \int_{t_0}^t H(\tau) d\tau \right)'e^{-\frac{i}{\hbar} \int_{t_0}^t H(\tau) d\tau } \\ &=\left( H(t) \frac{dt}{dt} - H(t_0) \frac{dt_0}{dt} \right)e^{-\frac{i}{\hbar} \int_{t_0}^t H(\tau) d\tau}  \\ &= H(t) U(t, t_0)\end{aligned}

Expectations

Suppose that we do find U(t, t_0). Then our expectation takes the form

\begin{aligned}{\langle {\psi_s(t)} \rvert} O_s {\lvert {\psi_s(t)} \rangle} = {\langle {\psi_s(t_0)} \rvert} U^\dagger(t, t_0) O_s U(t, t_0) {\lvert {\psi_s(t_0)} \rangle} \end{aligned} \hspace{\stretch{1}}(2.15)

Put

\begin{aligned}{\lvert {\psi_H} \rangle} = {\lvert {\psi_s(t_0)} \rangle},\end{aligned} \hspace{\stretch{1}}(2.16)

and form

\begin{aligned}O_H = U^\dagger(t, t_0) O_s U(t, t_0) \end{aligned} \hspace{\stretch{1}}(2.17)

so that our expectation has the familiar representations

\begin{aligned}{\langle {\psi_s(t)} \rvert} O_s {\lvert {\psi_s(t)} \rangle} ={\langle {\psi_H} \rvert} O_H {\lvert {\psi_H} \rangle} \end{aligned} \hspace{\stretch{1}}(2.18)

New strategy. Interaction picture.

Let’s define

\begin{aligned}U_I(t, t_0) = e^{\frac{i}{\hbar} H_0(t - t_0)} U(t, t_0)\end{aligned} \hspace{\stretch{1}}(2.19)

or

\begin{aligned}U(t, t_0) = e^{-\frac{i}{\hbar} H_0(t - t_0)} U_I(t, t_0).\end{aligned} \hspace{\stretch{1}}(2.20)

Let’s see how this works. We have

\begin{aligned}i \hbar \frac{d{{U_I}}}{dt} &= i \hbar \frac{d{{}}}{dt} \left(e^{\frac{i}{\hbar} H_0(t - t_0)} U(t, t_0)\right) \\ &=-e^{\frac{i}{\hbar} H_0(t - t_0)} H_0 U(t, t_0)+e^{\frac{i}{\hbar} H_0(t - t_0)} \left( i \hbar \frac{d{{}}}{dt} U(t, t_0) \right) \\ &=-e^{\frac{i}{\hbar} H_0(t - t_0)} H_0 U(t, t_0)+e^{\frac{i}{\hbar} H_0(t - t_0)} \left( (H_0 + H'(t)) U(t, t_0) \right) \\ &=e^{\frac{i}{\hbar} H_0(t - t_0)} H'(t) U(t, t_0) \\ &=e^{\frac{i}{\hbar} H_0(t - t_0)} H'(t) e^{-\frac{i}{\hbar} H_0(t - t_0)} U_I(t, t_0).\end{aligned}

Define

\begin{aligned}\bar{H}'(t) =e^{\frac{i}{\hbar} H_0(t - t_0)} H'(t) e^{-\frac{i}{\hbar} H_0(t - t_0)},\end{aligned} \hspace{\stretch{1}}(2.21)

so that our operator equation takes the form

\begin{aligned}i \hbar \frac{d{{}}}{dt} U_I(t, t_0) = \bar{H}'(t) U_I(t, t_0).\end{aligned} \hspace{\stretch{1}}(2.22)

Note that we also have the required identity at the initial time

\begin{aligned}U_I(t_0, t_0) = I.\end{aligned} \hspace{\stretch{1}}(2.23)

Without requiring us to actually find U(t, t_0), all of the dynamics of the time dependent interaction are now embedded in the operator equation for U_I(t, t_0) in terms of \bar{H}'(t), with the simple evolution due to the time independent portion of the Hamiltonian factored out.

Connection with the Schrödinger picture.

In the Schrödinger picture we have

\begin{aligned}{\lvert {\psi_s(t)} \rangle} &= U(t, t_0) {\lvert {\psi_s(t_0)} \rangle}  \\ &=e^{-\frac{i}{\hbar} H_0(t - t_0)} U_I(t, t_0){\lvert {\psi_s(t_0)} \rangle}.\end{aligned}

With a definition of the interaction picture ket as

\begin{aligned}{\lvert {\psi_I} \rangle} = U_I(t, t_0) {\lvert {\psi_s(t_0)} \rangle} = U_I(t, t_0) {\lvert {\psi_H} \rangle},\end{aligned} \hspace{\stretch{1}}(2.24)

the Schrödinger picture is then related to the interaction picture by

\begin{aligned}{\lvert {\psi_s(t)} \rangle} = e^{-\frac{i}{\hbar} H_0(t - t_0)} {\lvert {\psi_I} \rangle}.\end{aligned} \hspace{\stretch{1}}(2.25)

Also, by multiplying 2.22 by our Schrödinger ket, we remove the last vestiges of U_I and U from the dynamical equation for our time dependent interaction

\begin{aligned}i \hbar \frac{d{{}}}{dt} {\lvert {\psi_I} \rangle} = \bar{H}'(t) {\lvert {\psi_I} \rangle}.\end{aligned} \hspace{\stretch{1}}(2.26)

Interaction picture expectation.

Inverting 2.25, we can form an operator expectation, and relate it to the interaction and Schrödinger pictures

\begin{aligned}{\langle {\psi_s(t)} \rvert} O_s {\lvert {\psi_s(t)} \rangle} ={\langle {\psi_I} \rvert} e^{\frac{i}{\hbar} H_0(t - t_0)}O_se^{-\frac{i}{\hbar} H_0(t - t_0)}{\lvert {\psi_I} \rangle} .\end{aligned} \hspace{\stretch{1}}(2.27)

With a definition

\begin{aligned}O_I =e^{\frac{i}{\hbar} H_0(t - t_0)}O_se^{-\frac{i}{\hbar} H_0(t - t_0)},\end{aligned} \hspace{\stretch{1}}(2.28)

we have

\begin{aligned}{\langle {\psi_s(t)} \rvert} O_s {\lvert {\psi_s(t)} \rangle} ={\langle {\psi_I} \rvert} O_I{\lvert {\psi_I} \rangle}.\end{aligned} \hspace{\stretch{1}}(2.29)

As before, the time evolution of our interaction picture operator can be found by taking derivatives of 2.28, for which we find

\begin{aligned}i \hbar \frac{d{{O_I(t)}}}{dt} = \left[{O_I(t)},{H_0}\right]\end{aligned} \hspace{\stretch{1}}(2.30)
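
Filling in that derivative (the steps mirror the Heisenberg picture computation):

\begin{aligned}i \hbar \frac{d{{O_I(t)}}}{dt} &= i \hbar \left( \frac{i H_0}{\hbar} e^{\frac{i}{\hbar} H_0(t - t_0)} O_s e^{-\frac{i}{\hbar} H_0(t - t_0)} + e^{\frac{i}{\hbar} H_0(t - t_0)} O_s e^{-\frac{i}{\hbar} H_0(t - t_0)} \frac{-i H_0}{\hbar} \right) \\ &= -H_0 O_I(t) + O_I(t) H_0 \\ &= \left[{O_I(t)},{H_0}\right].\end{aligned}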

Summarizing the interaction picture.

Given

\begin{aligned}H(t) = H_0 + H'(t),\end{aligned} \hspace{\stretch{1}}(2.31)

and initial time states

\begin{aligned}{\lvert {\psi_I(t_0)} \rangle} ={\lvert {\psi_s(t_0)} \rangle} = {\lvert {\psi_H} \rangle},\end{aligned} \hspace{\stretch{1}}(2.32)

we have

\begin{aligned}{\langle {\psi_s(t)} \rvert} O_s {\lvert {\psi_s(t)} \rangle} ={\langle {\psi_I} \rvert} O_I{\lvert {\psi_I} \rangle},\end{aligned} \hspace{\stretch{1}}(2.33)

where

\begin{aligned}{\lvert {\psi_I} \rangle} = U_I(t, t_0) {\lvert {\psi_s(t_0)} \rangle},\end{aligned} \hspace{\stretch{1}}(2.34)

and

\begin{aligned}i \hbar \frac{d{{}}}{dt} {\lvert {\psi_I} \rangle} = \bar{H}'(t) {\lvert {\psi_I} \rangle},\end{aligned} \hspace{\stretch{1}}(2.35)

or

\begin{aligned}i \hbar \frac{d{{}}}{dt} U_I(t, t_0) &= \bar{H}'(t) U_I(t, t_0) \\ U_I(t_0, t_0) &= I.\end{aligned} \hspace{\stretch{1}}(2.36)

Our interaction picture Hamiltonian is

\begin{aligned}\bar{H}'(t) =e^{\frac{i}{\hbar} H_0(t - t_0)} H'(t) e^{-\frac{i}{\hbar} H_0(t - t_0)},\end{aligned} \hspace{\stretch{1}}(2.38)

and for Schrödinger operators, independent of time, we have the dynamical equation

\begin{aligned}i \hbar \frac{d{{O_I(t)}}}{dt} = \left[{O_I(t)},{H_0}\right]\end{aligned} \hspace{\stretch{1}}(2.39)

Justifying the Taylor expansion above (not class notes).

Multivariable Taylor series

As outlined in section 2.8 (8.10) of [2], we want to derive the multi-variable Taylor expansion for a scalar valued function of some number of variables

\begin{aligned}f(\mathbf{u}) = f(u^1, u^2, \cdots),\end{aligned} \hspace{\stretch{1}}(3.40)

Consider the displacement operation applied to the vector argument

\begin{aligned}f(\mathbf{a} + \mathbf{x}) = {\left.{{f(\mathbf{a} + t \mathbf{x})}}\right\vert}_{{t=1}}.\end{aligned} \hspace{\stretch{1}}(3.41)

We can Taylor expand a single variable function without any trouble, so introduce

\begin{aligned}g(t) = f(\mathbf{a} + t \mathbf{x}),\end{aligned} \hspace{\stretch{1}}(3.42)

where

\begin{aligned}g(1) = f(\mathbf{a} + \mathbf{x}).\end{aligned} \hspace{\stretch{1}}(3.43)

We have

\begin{aligned}g(t) = g(0) + t {\left.{{ \frac{\partial {g}}{\partial {t}} }}\right\vert}_{{t = 0}}+ \frac{t^2}{2!} {\left.{{ \frac{\partial^2 {g}}{\partial {t}^2} }}\right\vert}_{{t = 0}}+ \cdots,\end{aligned} \hspace{\stretch{1}}(3.44)

so that

\begin{aligned}g(1) = g(0) + {\left.{{ \frac{\partial {g}}{\partial {t}} }}\right\vert}_{{t = 0}}+ \frac{1}{2!} {\left.{{ \frac{\partial^2 {g}}{\partial {t}^2} }}\right\vert}_{{t = 0}}+ \cdots.\end{aligned} \hspace{\stretch{1}}(3.45)

The multivariable Taylor series now becomes a plain old application of the chain rule, where we have to evaluate

\begin{aligned}\frac{dg}{dt} &= \frac{d{{}}}{dt} f(a^1 + t x^1, a^2 + t x^2, \cdots) \\ &= \sum_i \frac{\partial {}}{\partial {(a^i + t x^i)}} f(\mathbf{a} + t \mathbf{x}) \frac{\partial {a^i + t x^i}}{\partial {t}},\end{aligned}

so that

\begin{aligned}{\left.{{\frac{dg}{dt} }}\right\vert}_{{t=0}}= \sum_i x^i \left( {\left.{{ \frac{\partial {f}}{\partial {x^i}}}}\right\vert}_{{x^i = a^i}}\right).\end{aligned} \hspace{\stretch{1}}(3.46)

Assuming an Euclidean space we can write this in the notationally more pleasant fashion using a gradient operator for the space

\begin{aligned}{\left.{{\frac{dg}{dt} }}\right\vert}_{{t=0}} = {\left.{{\mathbf{x} \cdot \boldsymbol{\nabla}_{\mathbf{u}} f(\mathbf{u})}}\right\vert}_{{\mathbf{u} = \mathbf{a}}}.\end{aligned} \hspace{\stretch{1}}(3.47)

To handle the higher order terms, we repeat the chain rule application, yielding for example

\begin{aligned}{\left.{{\frac{d^2 f(\mathbf{a} + t \mathbf{x})}{dt^2} }}\right\vert}_{{t=0}} &={\left.{{\frac{d{{}}}{dt} \sum_i x^i \frac{\partial {f(\mathbf{a} + t \mathbf{x})}}{\partial {(a^i + t x^i)}} }}\right\vert}_{{t=0}}\\ &={\left.{{\sum_i x^i \frac{\partial {}}{\partial {(a^i + t x^i)}} \frac{d{{f(\mathbf{a} + t \mathbf{x})}}}{dt}}}\right\vert}_{{t=0}} \\ &={\left.{{(\mathbf{x} \cdot \boldsymbol{\nabla}_{\mathbf{u}})^2 f(\mathbf{u})}}\right\vert}_{{\mathbf{u} = \mathbf{a}}}.\end{aligned}

Thus the Taylor series associated with a vector displacement takes the tidy form

\begin{aligned}f(\mathbf{a} + \mathbf{x}) = \sum_{k=0}^\infty \frac{1}{{k!}} {\left.{{(\mathbf{x} \cdot \boldsymbol{\nabla}_{\mathbf{u}})^k f(\mathbf{u})}}\right\vert}_{{\mathbf{u} = \mathbf{a}}}.\end{aligned} \hspace{\stretch{1}}(3.48)
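
As a sanity check (my own example), apply 3.48 to f(\mathbf{u}) = \mathbf{u} \cdot \mathbf{u}. Here (\mathbf{x} \cdot \boldsymbol{\nabla}_{\mathbf{u}}) f = 2 \mathbf{x} \cdot \mathbf{u}, (\mathbf{x} \cdot \boldsymbol{\nabla}_{\mathbf{u}})^2 f = 2 \mathbf{x} \cdot \mathbf{x}, and all higher powers vanish, so the series terminates

\begin{aligned}f(\mathbf{a} + \mathbf{x}) = \mathbf{a} \cdot \mathbf{a} + 2 \mathbf{a} \cdot \mathbf{x} + \mathbf{x} \cdot \mathbf{x},\end{aligned}

exactly what direct expansion of (\mathbf{a} + \mathbf{x}) \cdot (\mathbf{a} + \mathbf{x}) gives.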

Even more fancy, we can form the operator equation

\begin{aligned}f(\mathbf{a} + \mathbf{x}) = {\left.{{e^{ \mathbf{x} \cdot \boldsymbol{\nabla}_{\mathbf{u}} } f(\mathbf{u})}}\right\vert}_{{\mathbf{u} = \mathbf{a}}}\end{aligned} \hspace{\stretch{1}}(3.49)

Here a dummy variable \mathbf{u} has been retained as an instruction not to differentiate the \mathbf{x} part of the directional derivative in any repeated applications of the \mathbf{x} \cdot \boldsymbol{\nabla} operator.

That notational kludge can be removed by swapping \mathbf{a} and \mathbf{x}

\begin{aligned}f(\mathbf{a} + \mathbf{x}) = \sum_{k=0}^\infty \frac{1}{{k!}} (\mathbf{a} \cdot \boldsymbol{\nabla})^k f(\mathbf{x})=e^{ \mathbf{a} \cdot \boldsymbol{\nabla} } f(\mathbf{x}),\end{aligned} \hspace{\stretch{1}}(3.50)

where \boldsymbol{\nabla} = \boldsymbol{\nabla}_{\mathbf{x}} = ({\partial {}}/{\partial {x^1}}, {\partial {}}/{\partial {x^2}}, ...).

Having derived this (or, for those with lesser degrees of amnesia, recalled it), we can see that 2.2 was a direct application of this, retaining no second order or higher terms.

Our expression used in the interaction Hamiltonian discussion was

\begin{aligned}\frac{1}{{{\left\lvert{\mathbf{r} - \mathbf{R}}\right\rvert}}} \approx \frac{1}{{{\left\lvert{\mathbf{r}}\right\rvert}}} + \mathbf{R} \cdot {\left.{{\left(\frac{\partial {}}{\partial {\mathbf{R}}} \frac{1}{{{\left\lvert{ \mathbf{r} - \mathbf{R}}\right\rvert}}}\right)}}\right\vert}_{{\mathbf{R} = 0}}.\end{aligned} \hspace{\stretch{1}}(3.51)

This has the same structure as above, with some variable substitutions. Evaluating it we have

\begin{aligned}\frac{\partial {}}{\partial {\mathbf{R}}} \frac{1}{{{\left\lvert{ \mathbf{r} - \mathbf{R}}\right\rvert}}}&=\mathbf{e}_i \frac{\partial {}}{\partial {R^i}} ((x^j - R^j)^2)^{-1/2} \\ &=\mathbf{e}_i \left(-\frac{1}{{2}}\right) 2 (x^j - R^j) \frac{\partial {(x^j - R^j)}}{\partial {R^i}} \frac{1}{{{\left\lvert{\mathbf{r} - \mathbf{R}}\right\rvert}^3}} \\ &= \frac{\mathbf{r} - \mathbf{R}}{{\left\lvert{\mathbf{r} - \mathbf{R}}\right\rvert}^3} ,\end{aligned}

and at \mathbf{R} = 0 we have

\begin{aligned}\frac{1}{{{\left\lvert{\mathbf{r} - \mathbf{R}}\right\rvert}}} \approx \frac{1}{{{\left\lvert{\mathbf{r}}\right\rvert}}} + \mathbf{R} \cdot \frac{\mathbf{r}}{{\left\lvert{\mathbf{r}}\right\rvert}^3}.\end{aligned} \hspace{\stretch{1}}(3.52)

We see that this directional derivative produces the classical electric Coulomb field expression for an electrostatic distribution, once we take the \mathbf{r}/{\left\lvert{\mathbf{r}}\right\rvert}^3 and multiply it with the - Z e factor.

With algebra.

A different way to justify the expansion of 2.2 is to consider a Clifford algebra factorization (following notation from [3]) of the absolute vector difference, where \mathbf{R} is considered small.

\begin{aligned}{\left\lvert{\mathbf{r} - \mathbf{R}}\right\rvert}&= \sqrt{ \left(\mathbf{r} - \mathbf{R}\right) \left(\mathbf{r} - \mathbf{R}\right) } \\ &= \sqrt{ \left\langle{{\mathbf{r} \left(1 - \frac{1}{\mathbf{r}} \mathbf{R}\right) \left(1 - \mathbf{R} \frac{1}{\mathbf{r}}\right) \mathbf{r}}}\right\rangle } \\ &= \sqrt{ \left\langle{{\mathbf{r}^2 \left(1 - \frac{1}{\mathbf{r}} \mathbf{R}\right) \left(1 - \mathbf{R} \frac{1}{\mathbf{r}}\right) }}\right\rangle } \\ &= {\left\lvert{\mathbf{r}}\right\rvert} \sqrt{ 1 - 2 \frac{1}{\mathbf{r}} \cdot \mathbf{R} + \left\langle{{\frac{1}{\mathbf{r}} \mathbf{R} \mathbf{R} \frac{1}{\mathbf{r}}}}\right\rangle} \\ &= {\left\lvert{\mathbf{r}}\right\rvert} \sqrt{ 1 - 2 \frac{1}{\mathbf{r}} \cdot \mathbf{R} + \frac{\mathbf{R}^2}{\mathbf{r}^2}}\end{aligned}

Neglecting the \mathbf{R}^2 term, we can then Taylor series expand this scalar expression

\begin{aligned}\frac{1}{{{\left\lvert{\mathbf{r} - \mathbf{R}}\right\rvert}}} \approx\frac{1}{{{\left\lvert{\mathbf{r}}\right\rvert}}} \left( 1 + \frac{1}{\mathbf{r}} \cdot \mathbf{R}\right) =\frac{1}{{{\left\lvert{\mathbf{r}}\right\rvert}}} + \frac{\hat{\mathbf{r}}}{\mathbf{r}^2} \cdot \mathbf{R}=\frac{1}{{{\left\lvert{\mathbf{r}}\right\rvert}}} + \frac{\mathbf{r}}{{\left\lvert{\mathbf{r}}\right\rvert}^3} \cdot \mathbf{R}.\end{aligned} \hspace{\stretch{1}}(3.53)

Observe this is what was found with the multivariable Taylor series expansion too.

References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

[2] D. Hestenes. New Foundations for Classical Mechanics. Kluwer Academic Publishers, 1999.

[3] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.


PHY456H1F: Quantum Mechanics II. Lecture 5 (Taught by Prof J.E. Sipe). Perturbation theory and degeneracy. Review of dynamics

Posted by peeterjoot on September 26, 2011

[Click here for a PDF of this post with nicer formatting (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Disclaimer.

Peeter’s lecture notes from class. May not be entirely coherent.

Issues concerning degeneracy.

When the perturbed state is non-degenerate.

Suppose the state of interest is non-degenerate but others are

FIXME: diagram. states designated by dashes labeled n1, n2, n3 degeneracy \alpha = 3 for energy E_n^{(0)}.

This is no problem except for notation, and if the analysis is repeated we find

\begin{aligned}E_s &= E_s^{(0)} + \lambda {H_{ss}}' + \lambda^2 \sum_{m \ne s, \alpha} \frac{{\left\lvert{{H_{m \alpha ; s}}'}\right\rvert}^2 }{ E_s^{(0)} - E_m^{(0)} } + \cdots\\ {\lvert {\bar{\psi}_s} \rangle} &= {\lvert {{\psi_s}^{(0)}} \rangle} + \lambda\sum_{m \ne s, \alpha} \frac{{H_{m \alpha ; s}}'}{ E_s^{(0)} - E_m^{(0)} } {\lvert {{\psi_{m \alpha}}^{(0)}} \rangle}+ \cdots,\end{aligned} \hspace{\stretch{1}}(2.1)

where

\begin{aligned}{H_{m \alpha ; s}}' ={\langle {{\psi_{m \alpha}}^{(0)}} \rvert} H' {\lvert {{\psi_{s}}^{(0)}} \rangle}\end{aligned} \hspace{\stretch{1}}(2.3)

When the perturbed state is also degenerate.

FIXME: diagram. states designated by dashes labeled n1, n2, n3 degeneracy \alpha = 3 for energy E_n^{(0)}, and states designated by dashes labeled s1, s2, s3 degeneracy \alpha = 3 for energy E_s^{(0)}.

If we just blindly repeat the derivation for the non-degenerate case we would obtain

\begin{aligned}E_s &= E_s^{(0)} + \lambda {H_{s1 ; s1}}' + \lambda^2 \sum_{m \ne s, \alpha} \frac{{\left\lvert{{H_{m \alpha ; s1}}'}\right\rvert}^2 }{ E_s^{(0)} - E_m^{(0)} } + \lambda^2 \sum_{\alpha \ne 1} \frac{{\left\lvert{{H_{s \alpha ; s1}}'}\right\rvert}^2 }{ E_s^{(0)} - E_s^{(0)} } + \cdots\\ {\lvert {\bar{\psi}_s} \rangle} &= {\lvert {{\psi_s}^{(0)}} \rangle} + \lambda\sum_{m \ne s, \alpha} \frac{{H_{m \alpha ; s}}'}{ E_s^{(0)} - E_m^{(0)} } {\lvert {{\psi_{m \alpha}}^{(0)}} \rangle}+ \lambda\sum_{\alpha \ne 1} \frac{{H_{s \alpha ; s1}}'}{ E_s^{(0)} - E_s^{(0)} } {\lvert {{\psi_{s \alpha}}^{(0)}} \rangle}+ \cdots,\end{aligned} \hspace{\stretch{1}}(2.4)

where

\begin{aligned}{H_{m \alpha ; s1}}' ={\langle {{\psi_{m \alpha}}^{(0)}} \rvert} H' {\lvert {{\psi_{s1}}^{(0)}} \rangle}\end{aligned} \hspace{\stretch{1}}(2.6)

Note that the E_s^{(0)} - E_s^{(0)} in the denominators is NOT a typo; it is why we run into trouble. There is one case where a perturbation approach is still possible. That case is if we happen to have

\begin{aligned}{\langle {{\psi_{s \alpha}}^{(0)}} \rvert} H' {\lvert {{\psi_{s1}}^{(0)}} \rangle} = 0, \qquad \alpha \ne 1.\end{aligned} \hspace{\stretch{1}}(2.7)

That may not be obvious, but if one returns to the original derivation, the right terms cancel so that one will not end up with the 0/0 problem.

FIXME: do this derivation.

Diagonalizing the perturbation Hamiltonian.

Suppose that we do not have this special zero condition that allows the perturbation treatment to remain valid. What can we do? It turns out that we can make use of the fact that the perturbation Hamiltonian is Hermitian, and diagonalize the matrix

\begin{aligned}{\langle {{\psi_{s \alpha}}^{(0)}} \rvert} H' {\lvert {{\psi_{s \beta}}^{(0)}} \rangle} \end{aligned} \hspace{\stretch{1}}(2.8)

In the example of a two fold degeneracy, this amounts to us choosing not to work with the states

\begin{aligned}{\lvert {\psi_{s1}^{(0)}} \rangle}, {\lvert {\psi_{s2}^{(0)}} \rangle},\end{aligned} \hspace{\stretch{1}}(2.9)

but with some linear combinations of the two

\begin{aligned}{\lvert {\psi_{sI}^{(0)}} \rangle} &= a_1 {\lvert {\psi_{s1}^{(0)}} \rangle} + b_1 {\lvert {\psi_{s2}^{(0)}} \rangle} \\ {\lvert {\psi_{sII}^{(0)}} \rangle} &= a_2 {\lvert {\psi_{s1}^{(0)}} \rangle} + b_2 {\lvert {\psi_{s2}^{(0)}} \rangle} \end{aligned} \hspace{\stretch{1}}(2.10)

In this new basis, once found, we have

\begin{aligned}{\langle {{\psi_{s \alpha}}^{(0)}} \rvert} H' {\lvert {{\psi_{s \beta}}^{(0)}} \rangle} = \mathcal{H}_\alpha \delta_{\alpha \beta}\end{aligned} \hspace{\stretch{1}}(2.12)

Utilizing this to fix the previous result, if the analysis is repeated correctly one would get

\begin{aligned}E_{s\alpha} &= E_s^{(0)} + \lambda {H_{s\alpha ; s\alpha}}' + \lambda^2 \sum_{m \ne s, \beta} \frac{{\left\lvert{{H_{m \beta ; s \alpha}}'}\right\rvert}^2 }{ E_s^{(0)} - E_m^{(0)} } + \cdots\\ {\lvert {\bar{\psi}_{s \alpha}} \rangle} &= {\lvert {{\psi_{s \alpha}}^{(0)}} \rangle} + \lambda\sum_{m \ne s, \beta} \frac{{H_{m \beta ; s \alpha}}'}{ E_s^{(0)} - E_m^{(0)} } {\lvert {{\psi_{m \beta}}^{(0)}} \rangle}+ \cdots.\end{aligned} \hspace{\stretch{1}}(2.13)

We see that a degenerate state can be split by applying perturbation.
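
As a minimal sketch of that splitting (my own example, not from class), suppose that in a two fold degenerate subspace the perturbation has the matrix representation

\begin{aligned}\begin{bmatrix}{\langle {\psi_{s1}^{(0)}} \rvert} H' {\lvert {\psi_{s1}^{(0)}} \rangle} & {\langle {\psi_{s1}^{(0)}} \rvert} H' {\lvert {\psi_{s2}^{(0)}} \rangle} \\ {\langle {\psi_{s2}^{(0)}} \rvert} H' {\lvert {\psi_{s1}^{(0)}} \rangle} & {\langle {\psi_{s2}^{(0)}} \rvert} H' {\lvert {\psi_{s2}^{(0)}} \rangle}\end{bmatrix}=\begin{bmatrix}0 & \Delta \\ \Delta^{*} & 0\end{bmatrix}.\end{aligned}

Diagonalizing gives eigenvalues \pm {\left\lvert{\Delta}\right\rvert}, so the combinations {\lvert {\psi_{sI}^{(0)}} \rangle}, {\lvert {\psi_{sII}^{(0)}} \rangle} have first order energies E_s^{(0)} \pm \lambda {\left\lvert{\Delta}\right\rvert}: the single unperturbed level splits into two.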

FIXME: do this derivation.

FIXME: diagram. E_s^{(0)} as one energy level without perturbation, and as two distinct levels with perturbation.

I’ll bet that this is the origin of the spectral line splitting, especially given that an atom like hydrogen has degenerate states.

Review of dynamics.

We want to move on to time dependent problems. In general for a time dependent problem, the answer follows provided one has solved for all the perturbed energy eigenvalues. This can be laborious (or not feasible due to infinite sums).

Before doing this, let’s review our dynamics as covered in section 3 of the text [1].

Schrödinger and Heisenberg pictures

Our operator equation in the Schrödinger picture is the familiar

\begin{aligned}i \hbar \frac{d}{dt} {\lvert {\psi_s(t)} \rangle} = H {\lvert {\psi_s(t)} \rangle}\end{aligned} \hspace{\stretch{1}}(3.15)

and most of our operators X, P, \cdots are time independent. Expectation values take the form

\begin{aligned}\left\langle{{O}}\right\rangle(t) = {\langle {\psi_s(t)} \rvert} O_s{\lvert {\psi_s(t)} \rangle}\end{aligned} \hspace{\stretch{1}}(3.16)

where O_s is the operator in the Schrödinger picture, and is not time dependent.

Formally, the time evolution of any state is given by

\begin{aligned}{\lvert {\psi_s(t)} \rangle} = e^{-i H t/\hbar}{\lvert {\psi_s(0)} \rangle} = U(t, 0) {\lvert {\psi_s(0)} \rangle} \end{aligned} \hspace{\stretch{1}}(3.17)

so the expectation of an operator can be written

\begin{aligned}\left\langle{{O}}\right\rangle(t) = {\langle {\psi_s(0)} \rvert} e^{i H t/\hbar}O_se^{-i H t/\hbar}{\lvert {\psi_s(0)} \rangle}.\end{aligned} \hspace{\stretch{1}}(3.18)

With the introduction of the Heisenberg ket

\begin{aligned}{\lvert {\psi_H} \rangle} = {\lvert {\psi_s(0)} \rangle},\end{aligned} \hspace{\stretch{1}}(3.19)

and Heisenberg operators

\begin{aligned}O_H = e^{i H t/\hbar} O_s e^{-i H t/\hbar},\end{aligned} \hspace{\stretch{1}}(3.20)

the expectation evolution takes the form

\begin{aligned}\left\langle{{O}}\right\rangle(t) = {\langle {\psi_H} \rvert} O_H{\lvert {\psi_H} \rangle}.\end{aligned} \hspace{\stretch{1}}(3.21)

Note that because the Hamiltonian commutes with its exponential (it commutes with itself and any power series of itself), the Hamiltonian in the Heisenberg picture is the same as in the Schrödinger picture

\begin{aligned}H_H = e^{i H t/\hbar} H e^{-i H t/\hbar} = H.\end{aligned} \hspace{\stretch{1}}(3.22)

Time evolution and the Commutator

Taking the derivative of 3.20 provides us with the time evolution of any operator in the Heisenberg picture

\begin{aligned}i \hbar \frac{d}{dt} O_H(t) &=i \hbar \frac{d}{dt} \left( e^{i H t/\hbar} O_s e^{-i H t/\hbar}\right) \\ &=i \hbar \left( \frac{i H}{\hbar} e^{i H t/\hbar} O_s e^{-i H t/\hbar}+e^{i H t/\hbar} O_s e^{-i H t/\hbar} \frac{-i H}{\hbar} \right) \\ &=\left( -H O_H+O_H H\right).\end{aligned}

We can write this as a commutator

\begin{aligned}i \hbar \frac{d}{dt} O_H(t) = \left[{O_H},{H}\right].\end{aligned} \hspace{\stretch{1}}(3.23)

Summarizing the two pictures.

\begin{aligned}\text{Schr\"{o}dinger picture} &\qquad \text{Heisenberg picture} \\ i \hbar \frac{d}{dt} {\lvert {\psi_s(t)} \rangle} = H {\lvert {\psi_s(t)} \rangle} &\qquad i \hbar \frac{d}{dt} O_H(t) = \left[{O_H},{H}\right] \\ {\langle {\psi_s(t)} \rvert} O_S {\lvert {\psi_s(t)} \rangle} &= {\langle {\psi_H} \rvert} O_H {\lvert {\psi_H} \rangle} \\ {\lvert {\psi_s(0)} \rangle} &= {\lvert {\psi_H} \rangle} \\ O_S &= O_H(0)\end{aligned}

References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.


PHY456H1F: Quantum Mechanics II. Lecture 4 (Taught by Prof J.E. Sipe). Time independent perturbation theory (continued)

Posted by peeterjoot on September 23, 2011

[Click here for a PDF of this post with nicer formatting (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Disclaimer.

Peeter’s lecture notes from class. May not be entirely coherent.

Time independent perturbation.

The setup

To recap, we were covering the time independent perturbation methods from section 16.1 of the text [1]. We start with a known Hamiltonian H_0, and alter it with the addition of a “small” perturbation

\begin{aligned}H = H_0 + \lambda H', \qquad \lambda \in [0,1]\end{aligned} \hspace{\stretch{1}}(2.1)

For the original operator, we assume that a complete set of eigenkets and eigenvalues is known

\begin{aligned}H_0 {\lvert {{\psi_s}^{(0)}} \rangle} = {E_s}^{(0)} {\lvert {{\psi_s}^{(0)}} \rangle}\end{aligned} \hspace{\stretch{1}}(2.2)

We seek the perturbed eigensolution

\begin{aligned}H {\lvert {\psi_s} \rangle} = E_s {\lvert {\psi_s} \rangle}\end{aligned} \hspace{\stretch{1}}(2.3)

and assume a perturbative series representation for the energy eigenvalues in the new system

\begin{aligned}E_s = {E_s}^{(0)} + \lambda {E_s}^{(1)} + \lambda^2 {E_s}^{(2)} + \cdots\end{aligned} \hspace{\stretch{1}}(2.4)

Given an assumed representation for the new eigenkets in terms of the known basis

\begin{aligned}{\lvert {\psi_s} \rangle} = \sum_n c_{ns} {\lvert {{\psi_n}^{(0)}} \rangle} \end{aligned} \hspace{\stretch{1}}(2.5)

and a perturbative series representation for the probability coefficients

\begin{aligned}c_{ns} = {c_{ns}}^{(0)} + \lambda {c_{ns}}^{(1)} + \lambda^2 {c_{ns}}^{(2)},\end{aligned} \hspace{\stretch{1}}(2.6)

so that

\begin{aligned}{\lvert {\psi_s} \rangle} = \sum_n {c_{ns}}^{(0)} {\lvert {{\psi_n}^{(0)}} \rangle} +\lambda\sum_n {c_{ns}}^{(1)} {\lvert {{\psi_n}^{(0)}} \rangle} + \lambda^2\sum_n {c_{ns}}^{(2)} {\lvert {{\psi_n}^{(0)}} \rangle} + \cdots\end{aligned} \hspace{\stretch{1}}(2.7)

Setting \lambda = 0 requires

\begin{aligned}{c_{ns}}^{(0)} = \delta_{ns},\end{aligned} \hspace{\stretch{1}}(2.8)

for

\begin{aligned}\begin{aligned}{\lvert {\psi_s} \rangle} &= {\lvert {{\psi_s}^{(0)}} \rangle} +\lambda\sum_n {c_{ns}}^{(1)} {\lvert {{\psi_n}^{(0)}} \rangle} + \lambda^2\sum_n {c_{ns}}^{(2)} {\lvert {{\psi_n}^{(0)}} \rangle} + \cdots \\ &=\left(1 + \lambda {c_{ss}}^{(1)} + \lambda^2 {c_{ss}}^{(2)} + \cdots\right){\lvert {{\psi_s}^{(0)}} \rangle} + \lambda\sum_{n \ne s} {c_{ns}}^{(1)} {\lvert {{\psi_n}^{(0)}} \rangle} +\lambda^2\sum_{n \ne s} {c_{ns}}^{(2)} {\lvert {{\psi_n}^{(0)}} \rangle} + \cdots\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.9)

We rescaled our kets

\begin{aligned}{\lvert {\bar{\psi}_s} \rangle} ={\lvert {{\psi_s}^{(0)}} \rangle} + \lambda\sum_{n \ne s} {\bar{c}_{ns}}^{(1)} {\lvert {{\psi_n}^{(0)}} \rangle} +\lambda^2\sum_{n \ne s} {\bar{c}_{ns}}^{(2)} {\lvert {{\psi_n}^{(0)}} \rangle} + \cdots\end{aligned} \hspace{\stretch{1}}(2.10)

where

\begin{aligned}{\bar{c}_{ns}}^{(j)} = \frac{{c_{ns}}^{(j)}}{1 + \lambda {c_{ss}}^{(1)} + \lambda^2 {c_{ss}}^{(2)} + \cdots}\end{aligned} \hspace{\stretch{1}}(2.11)

The normalization of the rescaled kets is then

\begin{aligned}\left\langle{{\bar{\psi}_s}} \vert {{\bar{\psi}_s}}\right\rangle =1+ \lambda^2\sum_{n \ne s} {\left\lvert{{\bar{c}_{ns}}^{(1)}}\right\rvert}^2+\cdots\equiv \frac{1}{{Z_s}}.\end{aligned} \hspace{\stretch{1}}(2.12)

One can then construct a renormalized ket if desired

\begin{aligned}{\lvert {\bar{\psi}_s} \rangle}_R = Z_s^{1/2} {\lvert {\bar{\psi}_s} \rangle},\end{aligned} \hspace{\stretch{1}}(2.13)

so that

\begin{aligned}({\lvert {\bar{\psi}_s} \rangle}_R)^\dagger {\lvert {\bar{\psi}_s} \rangle}_R = Z_s \left\langle{{\bar{\psi}_s}} \vert {{\bar{\psi}_s}}\right\rangle = 1.\end{aligned} \hspace{\stretch{1}}(2.14)

The meat.

That’s as far as we got last time. We continue by renaming terms in 2.10

\begin{aligned}{\lvert {\bar{\psi}_s} \rangle} ={\lvert {{\psi_s}^{(0)}} \rangle} + \lambda {\lvert {{\psi_s}^{(1)}} \rangle} + \lambda^2 {\lvert {{\psi_s}^{(2)}} \rangle} + \cdots\end{aligned} \hspace{\stretch{1}}(2.15)

where

\begin{aligned}{\lvert {{\psi_s}^{(j)}} \rangle} = \sum_{n \ne s} {\bar{c}_{ns}}^{(j)} {\lvert {{\psi_n}^{(0)}} \rangle}.\end{aligned} \hspace{\stretch{1}}(2.16)

Now we act on this with the Hamiltonian

\begin{aligned}H {\lvert {\bar{\psi}_s} \rangle} = E_s {\lvert {\bar{\psi}_s} \rangle},\end{aligned} \hspace{\stretch{1}}(2.17)

or

\begin{aligned}H {\lvert {\bar{\psi}_s} \rangle} - E_s {\lvert {\bar{\psi}_s} \rangle} = 0.\end{aligned} \hspace{\stretch{1}}(2.18)

Expanding this, we have

\begin{aligned}\begin{aligned}&(H_0 + \lambda H') \left({\lvert {{\psi_s}^{(0)}} \rangle} + \lambda {\lvert {{\psi_s}^{(1)}} \rangle} + \lambda^2 {\lvert {{\psi_s}^{(2)}} \rangle} + \cdots\right) \\ &\quad - \left( {E_s}^{(0)} + \lambda {E_s}^{(1)} + \lambda^2 {E_s}^{(2)} + \cdots \right)\left({\lvert {{\psi_s}^{(0)}} \rangle} + \lambda {\lvert {{\psi_s}^{(1)}} \rangle} + \lambda^2 {\lvert {{\psi_s}^{(2)}} \rangle} + \cdots\right)= 0.\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.19)

We want to write this as

\begin{aligned}{\lvert {A} \rangle} + \lambda {\lvert {B} \rangle} + \lambda^2 {\lvert {C} \rangle} + \cdots = 0.\end{aligned} \hspace{\stretch{1}}(2.20)

This is

\begin{aligned}\begin{aligned}0 &=\lambda^0(H_0 - E_s^{(0)}) {\lvert {{\psi_s}^{(0)}} \rangle}  \\ &+ \lambda\left((H_0 - E_s^{(0)}) {\lvert {{\psi_s}^{(1)}} \rangle} +(H' - E_s^{(1)}) {\lvert {{\psi_s}^{(0)}} \rangle} \right) \\ &+ \lambda^2\left((H_0 - E_s^{(0)}) {\lvert {{\psi_s}^{(2)}} \rangle} +(H' - E_s^{(1)}) {\lvert {{\psi_s}^{(1)}} \rangle} -E_s^{(2)} {\lvert {{\psi_s}^{(0)}} \rangle} \right) \\ &\cdots\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.21)

So we form

\begin{aligned}{\lvert {A} \rangle} &=(H_0 - E_s^{(0)}) {\lvert {{\psi_s}^{(0)}} \rangle} \\ {\lvert {B} \rangle} &=(H_0 - E_s^{(0)}) {\lvert {{\psi_s}^{(1)}} \rangle} +(H' - E_s^{(1)}) {\lvert {{\psi_s}^{(0)}} \rangle} \\ {\lvert {C} \rangle} &=(H_0 - E_s^{(0)}) {\lvert {{\psi_s}^{(2)}} \rangle} +(H' - E_s^{(1)}) {\lvert {{\psi_s}^{(1)}} \rangle} -E_s^{(2)} {\lvert {{\psi_s}^{(0)}} \rangle},\end{aligned} \hspace{\stretch{1}}(2.22)

and so forth.

Zeroth order in \lambda

Since H_0 {\lvert {{\psi_s}^{(0)}} \rangle} = E_s^{(0)} {\lvert {{\psi_s}^{(0)}} \rangle}, this first condition on {\lvert {A} \rangle} is not much more than a statement that 0 - 0 = 0.

First order in \lambda

How about {\lvert {B} \rangle} = 0? For this to be zero we require that both of the following are simultaneously zero

\begin{aligned}\left\langle{{{\psi_s}^{(0)}}} \vert {{B}}\right\rangle &= 0 \\ \left\langle{{{\psi_m}^{(0)}}} \vert {{B}}\right\rangle &= 0, \qquad m \ne s\end{aligned} \hspace{\stretch{1}}(2.25)

This first condition is

\begin{aligned}{\langle {{\psi_s}^{(0)}} \rvert} (H' - E_s^{(1)}) {\lvert {{\psi_s}^{(0)}} \rangle} = 0.\end{aligned} \hspace{\stretch{1}}(2.27)

With

\begin{aligned}{\langle {{\psi_m}^{(0)}} \rvert} H' {\lvert {{\psi_s}^{(0)}} \rangle} \equiv {H_{ms}}',\end{aligned} \hspace{\stretch{1}}(2.28)

this is

\begin{aligned}{H_{ss}}' = E_s^{(1)}.\end{aligned} \hspace{\stretch{1}}(2.29)

From the second condition we have

\begin{aligned}0 = {\langle {{\psi_m}^{(0)}} \rvert} (H_0 - E_s^{(0)}) {\lvert {{\psi_s}^{(1)}} \rangle} +{\langle {{\psi_m}^{(0)}} \rvert} (H' - E_s^{(1)}) {\lvert {{\psi_s}^{(0)}} \rangle} \end{aligned} \hspace{\stretch{1}}(2.30)

Utilizing the Hermitian nature of H_0 we can act backwards on {\langle {{\psi_m}^{(0)}} \rvert}

\begin{aligned}{\langle {{\psi_m}^{(0)}} \rvert} H_0=E_m^{(0)} {\langle {{\psi_m}^{(0)}} \rvert}.\end{aligned} \hspace{\stretch{1}}(2.31)

We note that \left\langle{{{\psi_m}^{(0)}}} \vert {{{\psi_s}^{(0)}}}\right\rangle = 0, m \ne s. We can also expand the \left\langle{{{\psi_m}^{(0)}}} \vert {{{\psi_s}^{(1)}}}\right\rangle, which is

\begin{aligned}\left\langle{{{\psi_m}^{(0)}}} \vert {{{\psi_s}^{(1)}}}\right\rangle ={\langle {{\psi_m}^{(0)}} \rvert}\left(\sum_{n \ne s} {\bar{c}_{ns}}^{(1)} {\lvert {{\psi_n}^{(0)}} \rangle}\right).\end{aligned}

I found that reducing this sum wasn’t obvious until some actual integers were plugged in. Suppose that s = 3, and m = 5, then this is

\begin{aligned}\left\langle{{{\psi_5}^{(0)}}} \vert {{{\psi_3}^{(1)}}}\right\rangle &={\langle {{\psi_5}^{(0)}} \rvert}\left(\sum_{n = 0, 1, 2, 4, 5, \cdots} {\bar{c}_{n3}}^{(1)} {\lvert {{\psi_n}^{(0)}} \rangle}\right) \\ &={\bar{c}_{53}}^{(1)} \left\langle{{{\psi_5}^{(0)}}} \vert {{{\psi_5}^{(0)}}}\right\rangle \\ &={\bar{c}_{53}}^{(1)}.\end{aligned}

More generally that is

\begin{aligned}\left\langle{{{\psi_m}^{(0)}}} \vert {{{\psi_s}^{(1)}}}\right\rangle ={\bar{c}_{ms}}^{(1)}.\end{aligned} \hspace{\stretch{1}}(2.32)

Utilizing this gives us

\begin{aligned}0 = ( E_m^{(0)} - E_s^{(0)}) {\bar{c}_{ms}}^{(1)}+{H_{ms}}' \end{aligned} \hspace{\stretch{1}}(2.33)

And summarizing what we learn from our {\lvert {B} \rangle} = 0 conditions we have

\begin{aligned}E_s^{(1)} &= {H_{ss}}' \\ {\bar{c}_{ms}}^{(1)}&=\frac{{H_{ms}}' }{ E_s^{(0)} - E_m^{(0)} }\end{aligned} \hspace{\stretch{1}}(2.34)

Second order in \lambda

Doing the same thing for {\lvert {C} \rangle} = 0 we form (or assume)

\begin{aligned}\left\langle{{{\psi_s}^{(0)}}} \vert {{C}}\right\rangle &= 0 \\ \left\langle{{{\psi_m}^{(0)}}} \vert {{C}}\right\rangle &= 0, \qquad m \ne s\end{aligned} \hspace{\stretch{1}}(2.36)

\begin{aligned}0 &= \left\langle{{{\psi_s}^{(0)}}} \vert {{C}}\right\rangle  \\ &={\langle {{\psi_s}^{(0)}} \rvert}\left((H_0 - E_s^{(0)}) {\lvert {{\psi_s}^{(2)}} \rangle} +(H' - E_s^{(1)}) {\lvert {{\psi_s}^{(1)}} \rangle} -E_s^{(2)} {\lvert {{\psi_s}^{(0)}} \rangle}  \right) \\ &=(E_s^{(0)} - E_s^{(0)}) \left\langle{{{\psi_s}^{(0)}}} \vert {{{\psi_s}^{(2)}}}\right\rangle +{\langle {{\psi_s}^{(0)}} \rvert}(H' - E_s^{(1)}) {\lvert {{\psi_s}^{(1)}} \rangle} -E_s^{(2)} \left\langle{{{\psi_s}^{(0)}}} \vert {{{\psi_s}^{(0)}}}\right\rangle \end{aligned}

We need to know what the \left\langle{{{\psi_s}^{(0)}}} \vert {{{\psi_s}^{(1)}}}\right\rangle is, and find that it is zero

\begin{aligned}\left\langle{{{\psi_s}^{(0)}}} \vert {{{\psi_s}^{(1)}}}\right\rangle={\langle {{\psi_s}^{(0)}} \rvert}\sum_{n \ne s} {\bar{c}_{ns}}^{(1)} {\lvert {{\psi_n}^{(0)}} \rangle} = 0\end{aligned} \hspace{\stretch{1}}(2.38)

Again, suppose that s = 3. Our sum ranges over all n \ne 3, so all the brakets are zero. Utilizing that we have

\begin{aligned}E_s^{(2)} &={\langle {{\psi_s}^{(0)}} \rvert} H' {\lvert {{\psi_s}^{(1)}} \rangle}  \\ &={\langle {{\psi_s}^{(0)}} \rvert} H' \sum_{m \ne s} {\bar{c}_{ms}}^{(1)} {\lvert {{\psi_m}^{(0)}} \rangle} \\ &=\sum_{m \ne s} {\bar{c}_{ms}}^{(1)} {H_{sm}}'\end{aligned}

From 2.34 we have

\begin{aligned}E_s^{(2)} =\sum_{m \ne s} \frac{{H_{ms}}' }{ E_s^{(0)} - E_m^{(0)} }{H_{sm}}'=\sum_{m \ne s} \frac{{\left\lvert{{H_{ms}}'}\right\rvert}^2 }{ E_s^{(0)} - E_m^{(0)} }\end{aligned} \hspace{\stretch{1}}(2.39)

We can now summarize by forming the lowest order terms of the perturbed energy and the corresponding kets

\begin{aligned}E_s &= E_s^{(0)} + \lambda {H_{ss}}' + \lambda^2 \sum_{m \ne s} \frac{{\left\lvert{{H_{ms}}'}\right\rvert}^2 }{ E_s^{(0)} - E_m^{(0)} } + \cdots\\ {\lvert {\bar{\psi}_s} \rangle} &= {\lvert {{\psi_s}^{(0)}} \rangle} + \lambda\sum_{m \ne s} \frac{{H_{ms}}'}{ E_s^{(0)} - E_m^{(0)} } {\lvert {{\psi_m}^{(0)}} \rangle}+ \cdots\end{aligned} \hspace{\stretch{1}}(2.40)

We can continue calculating, but are hopeful that we can stop the calculation without doing more work, even if \lambda = 1. If one supposes that the

\begin{aligned}\sum_{m \ne s} \frac{{H_{ms}}'}{ E_s^{(0)} - E_m^{(0)} } \end{aligned} \hspace{\stretch{1}}(2.42)

term is “small”, then we can hope that truncating the sum will be reasonable for \lambda = 1. This would be the case if

\begin{aligned}{H_{ms}}' \ll {\left\lvert{ E_s^{(0)} - E_m^{(0)} }\right\rvert},\end{aligned} \hspace{\stretch{1}}(2.43)

however, to put some mathematical rigor into making a statement of such smallness takes a lot of work. We are referred to [2]. Incidentally, these are loosely referred to as the first and second testaments, because of the author’s name, and the fact that they came as two volumes historically.

References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

[2] A. Messiah, G.M. Temmer, and J. Potter. Quantum mechanics: two volumes bound as one. Dover Publications New York, 1999.

Posted in Math and Physics Learning.

On C operator precedence.

Posted by peeterjoot on September 23, 2011

Saw the following code fragment today, illustrating a common precedence mistake:

         if ( ( ( A == type ) || ( B == type ) )
              && ( ( 0 == p->size1 ) && ( 0 == p->size2 ) ) ||
                 ( p->size1 < p->size2 ) )

This has the following structure

if ( typeMatches && isZero || isLess )

Because && binds more tightly than ||, that parses as

if ( ( typeMatches && isZero ) || isLess )

but what was (presumably) intended is:

if ( typeMatches && (isZero || isLess) )

That the two differ is easily verified:

#include <stdio.h>

int main()
{
   for ( unsigned typeMatches = 0 ; typeMatches < 2 ; typeMatches++ )
   {
      for ( unsigned zero = 0 ; zero < 2 ; zero++ )
      {
         for ( unsigned less = 0 ; less < 2 ; less++ )
         {
            unsigned r = typeMatches && zero || less ;

            printf("typeMatches: %u; zero: %u; less: %u\t:%u\n", typeMatches, zero, less, r ) ;
         }
      }
   }

   return 0 ;
}
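To make the mis-parse concrete, the loop body could be extended to compute the intended grouping alongside the as-written one (my addition, not part of the original fragment):

            unsigned asWritten  = typeMatches && zero || less ;          /* parses as: (typeMatches && zero) || less */
            unsigned asIntended = typeMatches && ( zero || less ) ;

            if ( asWritten != asIntended )
            {
               printf( "differs: typeMatches: %u; zero: %u; less: %u\n", typeMatches, zero, less ) ;
            }

The two groupings differ in exactly the typeMatches == 0, less == 1 rows, where the as-written expression is true but the intended one is false.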

I’ve seen people argue vehemently that you need to memorize the whole C operator precedence table to avoid errors like this. My preference is simpler code. My head hurt when I looked at this fragment. Wouldn’t it be easier to understand what the author was thinking if some temporary variables were used? Perhaps like so:


            if ( ( A == type ) || ( B == type ) )
            {
               bool foundZero = ( 0 == p->size1 ) && ( 0 == p->size2 ) ;
               bool foundLess = ( p->size1 < p->size2 ) ;

               if ( foundZero || foundLess )
               {
                  // ...
               }
            }

I like variable names (and these could be more specific) used to implicitly comment the code. I think I’d seen that suggestion, along with the suggestion not to let a statement span multiple lines, years ago in one of Code Complete or The Pragmatic Programmer. Both of those are great books that deserve a read by any programmer (I’m due for a re-read myself).

Posted in C/C++ development and debugging.

search within search in vim

Posted by peeterjoot on September 20, 2011

I was asked how to search within a function for something and report an error if not found (as opposed to continuing past that function for the pattern of interest).

I’d never done this. Something I have done a bazillion times is search and replace within a function. Suppose one has the following:

void foo( int c )
{
   for ( int i = 0 ; i < c ; i++ )
   {
      bar( i ) ;
   }
}

Then positioning yourself on the first line of the function, you can do something like:

:,/^}/ s/bar/Blah/

to replace bar’s with Blah’s on all lines in the range from the current line to the next closing curly brace anchored at the beginning of a line (this requires a coding convention where function-closing braces start in column one).

How would you search only within the current function? I tried:

:,/^}/ /bar/

but this appears to take me past my current function, to the next /bar/ match that is NOT in the specified range.

I admit this is something that would have been handy a few times. In particular, we’ve got one source file that’s now 45000 lines long. I’ve gotten messed up a few times by searching for something, and ending up in a completely different function.

Here’s what I came up with:

,/^}/ s/bar//c

Use a prompted search and replace via the /c modifier, and ‘q’uit at the first prompt once a match is found, so that nothing is actually replaced. If no match is found in the range, one gets an error.

This seems like a somewhat abusive use of /c, but works. I’d be curious if there are any other ways?
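One alternative that I think should work, though I haven’t tried it here: visually select the function (say V, then a motion to the closing brace), and then search with

/\%Vbar

since, if I recall correctly, the \%V atom restricts matches to the (last) visual selection (:help /\%V).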

Posted in perl and general scripting hackery.

PHY456H1F: Quantum Mechanics II. Lecture 3 (Taught by Prof J.E. Sipe). Perturbation methods

Posted by peeterjoot on September 19, 2011

[Click here for a PDF of this post with nicer formatting (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Peeter’s lecture notes from class. May not be entirely coherent.

States and wave functions

Suppose we have the following non-degenerate energy eigenstates

\begin{aligned}&\vdots \\ E_1 &\sim {\lvert {\Psi_1} \rangle} \\ E_0 &\sim {\lvert {\Psi_0} \rangle}\end{aligned}

and consider a state that is “very close” to {\lvert {\Psi_n} \rangle}.

\begin{aligned}{\lvert {\Psi} \rangle} = {\lvert {\Psi_n} \rangle} + {\lvert {\delta \Psi_n} \rangle}\end{aligned} \hspace{\stretch{1}}(1.1)

We form a projection onto the {\lvert {\Psi_n} \rangle} “direction”. The part of the deviation perpendicular to that projection will be written {\lvert {\delta \Psi_{n \perp}} \rangle}, as depicted in figure (\ref{fig:qmTwoL3fig1}). This illustration cannot be interpreted literally, but conveys the idea nicely.

[Figure qmTwoL3fig1: Pictorial illustration of ket projections]

For the amount along the projection onto {\lvert {\Psi_n} \rangle} we write

\begin{aligned}\left\langle{{\Psi_n}} \vert {{\delta \Psi_n}}\right\rangle = \delta \alpha\end{aligned} \hspace{\stretch{1}}(1.2)

so that the total deviation from the original state is

\begin{aligned}{\lvert {\delta \Psi_n} \rangle} = \delta \alpha {\lvert {\Psi_n} \rangle} + {\lvert {\delta \Psi_{n \perp}} \rangle} .\end{aligned} \hspace{\stretch{1}}(1.3)

The varied ket is then

\begin{aligned}{\lvert {\Psi} \rangle} = (1 + \delta \alpha ){\lvert {\Psi_n} \rangle} + {\lvert {\delta \Psi_{n \perp}} \rangle} \end{aligned} \hspace{\stretch{1}}(1.4)

where

\begin{aligned}(\delta \alpha)^2, \left\langle{{\delta \Psi_{n \perp}}} \vert {{\delta \Psi_{n \perp}}}\right\rangle  \ll 1\end{aligned} \hspace{\stretch{1}}(1.5)

In terms of these projections the squared magnitude of our ket is

\begin{aligned}\left\langle{{\Psi}} \vert {{\Psi}}\right\rangle &= \Bigl((1 + {\delta \alpha}^{*} ){\langle {\Psi_n} \rvert} + {\langle {\delta \Psi_{n \perp}} \rvert} \Bigr)\Bigl((1 + \delta \alpha ){\lvert {\Psi_n} \rangle} + {\lvert {\delta \Psi_{n \perp}} \rangle} \Bigr) \\ &={\left\lvert{1 + \delta \alpha}\right\rvert}^2 \left\langle{{\Psi_n}} \vert {{\Psi_n}}\right\rangle+ \left\langle{{\delta \Psi_{n \perp}}} \vert {{\delta \Psi_{n \perp}}}\right\rangle  \\ &\quad +(1 + {\delta \alpha}^{*} )\left\langle{{\Psi_n}} \vert {{\delta \Psi_{n \perp}}}\right\rangle +(1 + \delta \alpha )\left\langle{{\delta \Psi_{n \perp}}} \vert {{\Psi_n}}\right\rangle \end{aligned}

Because \left\langle{{\Psi_n}} \vert {{\delta \Psi_{n \perp}}}\right\rangle = 0 (and likewise its conjugate), and taking \left\langle{{\Psi_n}} \vert {{\Psi_n}}\right\rangle = 1, this is

\begin{aligned}\left\langle{{\Psi}} \vert {{\Psi}}\right\rangle= {\left\lvert{1 + \delta \alpha }\right\rvert}^2 + \left\langle{{\delta \Psi_{n \perp}}} \vert {{\delta \Psi_{n \perp}}}\right\rangle.\end{aligned} \hspace{\stretch{1}}(1.6)

Similarly for the energy expectation we have

\begin{aligned}{\langle {\Psi} \rvert} H {\lvert {\Psi} \rangle} &= \Bigl((1 + {\delta \alpha}^{*} ){\langle {\Psi_n} \rvert} + {\langle {\delta \Psi_{n \perp}} \rvert} \Bigr)H\Bigl((1 + \delta \alpha ){\lvert {\Psi_n} \rangle} + {\lvert {\delta \Psi_{n \perp}} \rangle} \Bigr) \\ &={\left\lvert{1 + \delta \alpha}\right\rvert}^2 E_n \left\langle{{\Psi_n}} \vert {{\Psi_n}}\right\rangle+ {\langle {\delta \Psi_{n \perp}} \rvert} H {\lvert {\delta \Psi_{n \perp}} \rangle}  \\ &\quad + (1 + {\delta \alpha}^{*} ) E_n \left\langle{{\Psi_n}} \vert {{\delta \Psi_{n \perp}}}\right\rangle +(1 + \delta \alpha ) E_n \left\langle{{\delta \Psi_{n \perp}}} \vert {{\Psi_n}}\right\rangle \end{aligned}

Or

\begin{aligned}{\langle {\Psi} \rvert} H {\lvert {\Psi} \rangle}= E_n {\left\lvert{1 + \delta \alpha }\right\rvert}^2+{\langle {\delta \Psi_{n \perp}} \rvert} H {\lvert {\delta \Psi_{n \perp}} \rangle}.\end{aligned} \hspace{\stretch{1}}(1.7)

This gives

\begin{aligned}E[\Psi] &= \frac{{\langle {\Psi} \rvert} H {\lvert {\Psi} \rangle}}{\left\langle{{\Psi}} \vert {{\Psi}}\right\rangle} \\ &=\frac{E_n {\left\lvert{1 + \delta \alpha }\right\rvert}^2 + {\langle {\delta \Psi_{n \perp}} \rvert} H {\lvert {\delta \Psi_{n \perp}} \rangle}}{{\left\lvert{1 + \delta \alpha }\right\rvert}^2 + \left\langle{{\delta \Psi_{n \perp}}} \vert {{\delta \Psi_{n \perp}}}\right\rangle } \\ &=\frac{E_n + \frac{{\langle {\delta \Psi_{n \perp}} \rvert} H {\lvert {\delta \Psi_{n \perp}} \rangle} }{{\left\lvert{1 + \delta \alpha }\right\rvert}^2}}{1+\frac{\left\langle{{\delta \Psi_{n \perp}}} \vert {{\delta \Psi_{n \perp}}}\right\rangle }{{\left\lvert{1 + \delta \alpha }\right\rvert}^2}} \\ &=\left( E_n + \frac{{\langle {\delta \Psi_{n \perp}} \rvert} H {\lvert {\delta \Psi_{n \perp}} \rangle} }{{\left\lvert{1 + \delta \alpha }\right\rvert}^2} \right)\left( 1 - \frac{\left\langle{{\delta \Psi_{n \perp}}} \vert {{\delta \Psi_{n \perp}}}\right\rangle }{{\left\lvert{1 + \delta \alpha }\right\rvert}^2}+ \cdots \right) \\ &=E_n\left[1 + \mathcal{O}\left((\delta \Psi_{n \perp})^2\right)\right]\end{aligned}

where

\begin{aligned}(\delta \Psi_{n \perp})^2\sim\left\langle{{\delta \Psi_{n \perp}}} \vert {{\delta \Psi_{n \perp}}}\right\rangle\end{aligned} \hspace{\stretch{1}}(1.8)

[Figure qmTwoL3fig2: Illustration of variation of energy with variation of Hamiltonian]

“small errors” in {\lvert {\Psi} \rangle} don’t lead to large errors in E[\Psi]

It is reasonably easy to get a good estimate of E_0, although it is reasonably hard to get a good estimate of {\lvert {\Psi_0} \rangle}. Both follow from the same insensitivity of E[\Psi] to errors in the state.

Excited states.

\begin{aligned}&\vdots \\ E_2 &\sim {\lvert {\Psi_2} \rangle} \\ E_1 &\sim {\lvert {\Psi_1} \rangle} \\ E_0 &\sim {\lvert {\Psi_0} \rangle}\end{aligned}

Suppose we wanted an estimate of E_1, and that we knew the ground state {\lvert {\Psi_0} \rangle}. For any trial ket {\lvert {\Psi} \rangle} we can form

\begin{aligned}{\lvert {\Psi'} \rangle} = {\lvert {\Psi} \rangle} - {\lvert {\Psi_0} \rangle}  \left\langle{{\Psi_0}} \vert {{\Psi}}\right\rangle\end{aligned} \hspace{\stretch{1}}(2.9)

We are taking out the projection of the ground state from an arbitrary trial function.

For a state written in terms of the basis states, allowing for a degeneracy index \alpha

\begin{aligned}{\lvert {\Psi} \rangle} = c_0 {\lvert {\Psi_0} \rangle}  +\sum_{n> 0, \alpha} c_{n \alpha} {\lvert {\Psi_{n \alpha}} \rangle}\end{aligned} \hspace{\stretch{1}}(2.10)

\begin{aligned}\left\langle{{\Psi_0}} \vert {{\Psi}}\right\rangle = c_0 \end{aligned} \hspace{\stretch{1}}(2.11)

and

\begin{aligned}{\lvert {\Psi'} \rangle} = \sum_{n> 0, \alpha} c_{n \alpha} {\lvert {\Psi_{n \alpha}} \rangle}\end{aligned} \hspace{\stretch{1}}(2.12)

(note that there are some theorems that tell us that the ground state is generally non-degenerate).

\begin{aligned}E[\Psi'] &= \frac{{\langle {\Psi'} \rvert} H {\lvert {\Psi'} \rangle}}{\left\langle{{\Psi'}} \vert {{\Psi'}}\right\rangle}  \\ &=\frac{\sum_{n> 0, \alpha} {\left\lvert{c_{n \alpha}}\right\rvert}^2 E_n}{\sum_{m> 0, \beta} {\left\lvert{c_{m \beta}}\right\rvert}^2 }\ge E_1\end{aligned}
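The final inequality is just the statement that a weighted average of the energies E_n, each at least E_1, is itself at least E_1. Spelled out (my own elaboration)

\begin{aligned}\sum_{n > 0, \alpha} {\left\lvert{c_{n \alpha}}\right\rvert}^2 E_n \ge E_1 \sum_{n > 0, \alpha} {\left\lvert{c_{n \alpha}}\right\rvert}^2.\end{aligned}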

Often we don’t know the exact ground state, although we might have a guess {\lvert {\tilde{\Psi}_0} \rangle}. For that guess we can form

\begin{aligned}{\lvert {\Psi''} \rangle} = {\lvert {\Psi} \rangle} - {\lvert {\tilde{\Psi}_0} \rangle}\left\langle{{\tilde{\Psi}_0}} \vert {{\Psi}}\right\rangle\end{aligned} \hspace{\stretch{1}}(2.13)

but cannot prove that

\begin{aligned}\frac{{\langle {\Psi''} \rvert} H {\lvert {\Psi''} \rangle}}{\left\langle{{\Psi''}} \vert {{\Psi''}}\right\rangle} \ge E_1\end{aligned} \hspace{\stretch{1}}(2.14)

Then

FIXME: missed something here.

\begin{aligned}\frac{{\langle {\Psi'''} \rvert} H {\lvert {\Psi'''} \rangle}}{\left\langle{{\Psi'''}} \vert {{\Psi'''}}\right\rangle} \ge E_1\end{aligned} \hspace{\stretch{1}}(2.15)

Somewhat remarkably, this is often possible. We talked last time about the Hydrogen atom. In that case, you can guess that the first excited state is in the 2s orbital, and therefore orthogonal to the 1s (?) orbital.

Time independent perturbation theory.

See section 16.1 of the text [1].

We can sometimes use this sort of physical insight to help construct a good approximation, provided that we actually have such insight, and that it is good insight in the first place.

Perturbation theory, in contrast, is the no-think (turn the crank) approach.

Here we split our Hamiltonian into two parts

\begin{aligned}H = H_0 + H'\end{aligned} \hspace{\stretch{1}}(3.16)

where H_0 is a Hamiltonian for which we know the energy eigenstates and the eigenkets. The H' is the “perturbation” that is supposed to be small “in some sense”.

Prof Sipe will provide some references later that give a more precise meaning to this “smallness”. From some ad-hoc discussion in class it sounds like one has to consider sequences of operators, and look at the convergence of those sequences (is this L2 measure theory?)

[Figure qmTwoL3fig3: Example of small perturbation from known Hamiltonian]

We’d like to consider a range of problems of the form

\begin{aligned}H = H_0 + \lambda H'\end{aligned} \hspace{\stretch{1}}(3.17)

where

\begin{aligned}\lambda \in [0,1]\end{aligned} \hspace{\stretch{1}}(3.18)

So that when \lambda \rightarrow 0 we have

\begin{aligned}H \rightarrow H_0\end{aligned} \hspace{\stretch{1}}(3.19)

the problem that we already know, but for \lambda \rightarrow 1 we have

\begin{aligned}H = H_0 + H'\end{aligned} \hspace{\stretch{1}}(3.20)

the problem that we’d like to solve.

We are assuming that we know the eigenstates and eigenvalues for H_0. Assuming no degeneracy

\begin{aligned}H_0 {\lvert {\Psi_s^{(0)}} \rangle} = E_s^{(0)}{\lvert {\Psi_s^{(0)}} \rangle} \end{aligned} \hspace{\stretch{1}}(3.21)

We seek

\begin{aligned}(H_0 + H'){\lvert {\Psi_s} \rangle} = E_s{\lvert {\Psi_s} \rangle} \end{aligned} \hspace{\stretch{1}}(3.22)

(this is the \lambda = 1 case).

Once (if) found, when \lambda \rightarrow 0 we will have

\begin{aligned}E_s &\rightarrow E_s^{(0)} \\ {\lvert {\Psi_s} \rangle} &\rightarrow {\lvert {\Psi_s^{(0)}} \rangle}\end{aligned}

We assume a power series representation for the energies

\begin{aligned}E_s = E_s^{(0)}  + \lambda E_s^{(1)} + \frac{\lambda^2}{2} E_s^{(2)} + \cdots\end{aligned} \hspace{\stretch{1}}(3.23)

and an expansion of the perturbed states in terms of the known basis

\begin{aligned}{\lvert {\Psi_s} \rangle} = \sum_n c_{ns} {\lvert {\Psi_n^{(0)}} \rangle}\end{aligned} \hspace{\stretch{1}}(3.24)

This we know we can do because we are assumed to have a complete set of states.

with

\begin{aligned}c_{ns} = c_{ns}^{(0)}  + \lambda c_{ns}^{(1)} + \frac{\lambda^2}{2} c_{ns}^{(2)}\end{aligned} \hspace{\stretch{1}}(3.25)

where

\begin{aligned}c_{ns}^{(0)} = \delta_{ns}\end{aligned} \hspace{\stretch{1}}(3.26)

There’s a subtlety here that will be treated differently from the text. We write

\begin{aligned}{\lvert {\Psi_s} \rangle}&={\lvert {\Psi_s^{(0)}} \rangle}+ \lambda \sum_nc_{ns}^{(1)} {\lvert {\Psi_n^{(0)}} \rangle}+ \frac{\lambda^2}{2} \sum_nc_{ns}^{(2)}{\lvert {\Psi_n^{(0)}} \rangle}+ \cdots \\ &=\left(1 + \lambda c_{ss}^{(1)} + \cdots\right){\lvert {\Psi_s^{(0)}} \rangle}+ \lambda \sum_{n \ne s} c_{ns}^{(1)} {\lvert {\Psi_n^{(0)}} \rangle}+ \cdots\end{aligned}

Take

\begin{aligned}{\lvert {\bar{\Psi}_s} \rangle}&={\lvert {\Psi_s^{(0)}} \rangle}+ \lambda\frac{\sum_{n \ne s} c_{ns}^{(1)} {\lvert {\Psi_n^{(0)}} \rangle}}{1 + \lambda c_{ss}^{(1)}} + \cdots\\ &={\lvert {\Psi_s^{(0)}} \rangle}+ \lambda\sum_{n \ne s} \bar{c}_{ns}^{(1)} {\lvert {\Psi_n^{(0)}} \rangle} + \cdots\end{aligned}

where

\begin{aligned}\bar{c}_{ns}^{(1)}  =\frac{c_{ns}^{(1)} }{1 + \lambda c_{ss}^{(1)}} \end{aligned} \hspace{\stretch{1}}(3.27)

We have:

\begin{aligned}\bar{c}_{ns}^{(1)} &= c_{ns}^{(1)} \\ \bar{c}_{ns}^{(2)} &\ne c_{ns}^{(2)} \end{aligned}

FIXME: I missed something here.

Note that this is no longer normalized.

\begin{aligned}\left\langle{{\bar{\Psi}_s}} \vert {{\bar{\Psi}_s}}\right\rangle \ne 1\end{aligned} \hspace{\stretch{1}}(3.28)
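Explicitly, using the orthonormality of the unperturbed states, the deviation from unit norm shows up at second order in \lambda (my own check, not from the lecture)

\begin{aligned}\left\langle{{\bar{\Psi}_s}} \vert {{\bar{\Psi}_s}}\right\rangle = 1 + \lambda^2 \sum_{n \ne s} {\left\lvert{\bar{c}_{ns}^{(1)}}\right\rvert}^2 + \cdots\end{aligned}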

References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

Posted in Math and Physics Learning.

PHY456H1F, Quantum Mechanics II. My solutions to problem set 1 (ungraded).

Posted by peeterjoot on September 19, 2011

[Click here for a PDF of this post with nicer formatting (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Harmonic oscillator.

Consider

\begin{aligned}H_0 = \frac{P^2}{2m} + \frac{1}{{2}} m \omega^2 X^2\end{aligned} \hspace{\stretch{1}}(1.1)

Since it’s been a while let’s compute the raising and lowering factorization that was used so extensively for this problem.

It was of the form

\begin{aligned}H_0 = (a X - i b P)(a X + i b P) + \cdots\end{aligned} \hspace{\stretch{1}}(1.2)

Why this factorization has an imaginary in it is a good question. It’s not one that is given any sort of rationale in the text ([1]).

It’s clear that we want a = \sqrt{m/2} \omega and b = 1/\sqrt{2m}. The difference is then

\begin{aligned}H_0 - (a X - i b P)(a X + i b P)=- i a b \left[{X},{P}\right]  = - i \frac{\omega}{2} \left[{X},{P}\right]\end{aligned} \hspace{\stretch{1}}(1.3)

That commutator is an i\hbar value, but what was the sign? Let’s compute so we don’t get it wrong

\begin{aligned}\left[{x},{ p}\right] \psi&= -i \hbar \left[{x},{\partial_x}\right] \psi \\ &= -i \hbar ( x \partial_x \psi - \partial_x (x \psi) ) \\ &= -i \hbar ( - \psi ) \\ &= i \hbar \psi\end{aligned}

So we have

\begin{aligned}H_0 =\left(\omega \sqrt{\frac{m}{2}} X - i \sqrt{\frac{1}{2m}} P\right)\left(\omega \sqrt{\frac{m}{2}} X + i \sqrt{\frac{1}{2m}} P\right)+ \frac{\hbar \omega}{2}\end{aligned} \hspace{\stretch{1}}(1.4)

Factoring out an \hbar \omega produces the form of the Hamiltonian that we used before

\begin{aligned}H_0 =\hbar \omega \left(\left(\sqrt{\frac{m \omega}{2 \hbar}} X - i \sqrt{\frac{1}{2m \hbar \omega}} P\right)\left(\sqrt{\frac{m \omega}{2 \hbar}} X + i \sqrt{\frac{1}{2m \hbar \omega}} P\right)+ \frac{1}{{2}}\right).\end{aligned} \hspace{\stretch{1}}(1.5)

The factors were labeled the raising (a^\dagger) and lowering (a) operators respectively, and written

\begin{aligned}H_0 &= \hbar \omega \left( a^\dagger a + \frac{1}{{2}} \right) \\ a &= \sqrt{\frac{m \omega}{2 \hbar}} X + i \sqrt{\frac{1}{2m \hbar \omega}} P \\ a^\dagger &= \sqrt{\frac{m \omega}{2 \hbar}} X - i \sqrt{\frac{1}{2m \hbar \omega}} P.\end{aligned} \hspace{\stretch{1}}(1.6)

Observe that we can find the inverse relations

\begin{aligned}X &= \sqrt{ \frac{\hbar}{2 m \omega} } \left( a + a^\dagger \right) \\ P &= i \sqrt{ \frac{m \hbar \omega}{2} } \left( a^\dagger  - a \right)\end{aligned} \hspace{\stretch{1}}(1.9)

Question
What is a good reason that we chose this particular factorization? For example, a quick computation shows that we could have also picked

\begin{aligned}H_0 = \hbar \omega \left( a a^\dagger - \frac{1}{{2}} \right).\end{aligned} \hspace{\stretch{1}}(1.11)

I don’t know that answer. That said, this second factorization is useful in that it provides the commutator relation between the raising and lowering operators, since equating 1.11 and 1.6 yields

\begin{aligned}\left[{a},{a^\dagger}\right] = 1.\end{aligned} \hspace{\stretch{1}}(1.12)
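This commutator can also be verified directly from the definitions 1.6, together with the \left[{X},{P}\right] = i \hbar computation above (my own cross check)

\begin{aligned}\left[{a},{a^\dagger}\right] &= \left[ \sqrt{\frac{m \omega}{2 \hbar}} X + i \sqrt{\frac{1}{2 m \hbar \omega}} P, \sqrt{\frac{m \omega}{2 \hbar}} X - i \sqrt{\frac{1}{2 m \hbar \omega}} P \right] \\ &= -2 i \sqrt{\frac{m \omega}{2 \hbar}} \sqrt{\frac{1}{2 m \hbar \omega}} \left[{X},{P}\right] \\ &= -\frac{i}{\hbar} (i \hbar) \\ &= 1.\end{aligned}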

If we suppose that we have eigenstates for the operator a^\dagger a of the form

\begin{aligned}a^\dagger a {\lvert {n} \rangle} = \lambda_n {\lvert {n} \rangle},\end{aligned} \hspace{\stretch{1}}(1.13)

then the problem of finding the eigensolution of H_0 reduces to solving this problem. Because H_0 = \hbar \omega ( a^\dagger a + 1/2 ) is a function of a^\dagger a alone, an eigenstate of a^\dagger a is also an eigenstate of H_0. Utilizing 1.12 we then have

\begin{aligned}a^\dagger a ( a {\lvert {n} \rangle} )&= (a a^\dagger - 1 ) a {\lvert {n} \rangle} \\ &= a (a^\dagger a - 1 ) {\lvert {n} \rangle} \\ &= a (\lambda_n - 1 ) {\lvert {n} \rangle} \\ &= (\lambda_n - 1 ) a {\lvert {n} \rangle},\end{aligned}

so we see that a {\lvert {n} \rangle} is an eigenstate of a^\dagger a with eigenvalue \lambda_n - 1.

Similarly for the raising operator

\begin{aligned}a^\dagger a ( a^\dagger {\lvert {n} \rangle} )&=a^\dagger (a  a^\dagger) {\lvert {n} \rangle} \\ &=a^\dagger (a^\dagger a + 1) {\lvert {n} \rangle} \\ &=(\lambda_n + 1) a^\dagger {\lvert {n} \rangle},\end{aligned}

and find that a^\dagger {\lvert {n} \rangle} is also an eigenstate of a^\dagger a with eigenvalue \lambda_n + 1.

Supposing that there is a lowest energy level (because the potential V(x) = m \omega^2 x^2 /2 has a lower bound of zero), the lowering operator must annihilate that lowest state {\lvert {0} \rangle}

\begin{aligned}a {\lvert {0} \rangle} = 0\end{aligned} \hspace{\stretch{1}}(1.14)

Thus

\begin{aligned}a^\dagger a {\lvert {0} \rangle} = 0,\end{aligned} \hspace{\stretch{1}}(1.15)

and

\begin{aligned}\lambda_0 = 0.\end{aligned} \hspace{\stretch{1}}(1.16)

This seems like a small bit of sleight of hand, since it sneakily supplies an integer value to \lambda_0 where up to this point 0 was just a label.

The eigenvalue equation we are trying to solve for the Hamiltonian is

\begin{aligned}H_0 {\lvert {n} \rangle} = E_n {\lvert {n} \rangle}.\end{aligned} \hspace{\stretch{1}}(1.17)

We must then have

\begin{aligned}E_n = \hbar \omega \left(\lambda_n + \frac{1}{{2}} \right) = \hbar \omega \left(n + \frac{1}{{2}} \right)\end{aligned} \hspace{\stretch{1}}(1.18)

Part (a)

We’ve now got enough context to attempt the first part of the question, calculation of

\begin{aligned}{\langle {n} \rvert} X^4 {\lvert {n} \rangle}\end{aligned} \hspace{\stretch{1}}(1.19)

We’ve calculated things like this before, such as

\begin{aligned}{\langle {n} \rvert} X^2 {\lvert {n} \rangle}&=\frac{\hbar}{2 m \omega} {\langle {n} \rvert} (a + a^\dagger)^2 {\lvert {n} \rangle}\end{aligned}

To continue we need an exact relation between {\lvert {n} \rangle} and {\lvert {n \pm 1} \rangle}. Recall that a {\lvert {n} \rangle} was an eigenstate of a^\dagger a with eigenvalue n - 1. This implies that the eigenstates a {\lvert {n} \rangle} and {\lvert {n-1} \rangle} are proportional

\begin{aligned}a {\lvert {n} \rangle} = c_n {\lvert {n - 1} \rangle},\end{aligned} \hspace{\stretch{1}}(1.20)

or

\begin{aligned}{\langle {n} \rvert} a^\dagger a {\lvert {n} \rangle} &= {\left\lvert{c_n}\right\rvert}^2 \left\langle{{n - 1}} \vert {{n-1}}\right\rangle = {\left\lvert{c_n}\right\rvert}^2 \\ {\langle {n} \rvert} a^\dagger a {\lvert {n} \rangle} &= n \left\langle{{n}} \vert {{n}}\right\rangle = n,\end{aligned}

so that {\left\lvert{c_n}\right\rvert}^2 = n. Choosing the phase so that c_n is real and positive, we have

\begin{aligned}a {\lvert {n} \rangle} = \sqrt{n} {\lvert {n - 1} \rangle}.\end{aligned} \hspace{\stretch{1}}(1.21)

Similarly let

\begin{aligned}a^\dagger {\lvert {n} \rangle} = b_n {\lvert {n + 1} \rangle},\end{aligned} \hspace{\stretch{1}}(1.22)

or

\begin{aligned}{\langle {n} \rvert} a a^\dagger {\lvert {n} \rangle} &= {\left\lvert{b_n}\right\rvert}^2 \left\langle{{n + 1}} \vert {{n+1}}\right\rangle = {\left\lvert{b_n}\right\rvert}^2 \\ {\langle {n} \rvert} (1 + a^\dagger a) {\lvert {n} \rangle} &= 1 + n,\end{aligned}

so that {\left\lvert{b_n}\right\rvert}^2 = n + 1. With the same phase choice we have

\begin{aligned}a^\dagger {\lvert {n} \rangle} = \sqrt{n+1} {\lvert {n + 1} \rangle}.\end{aligned} \hspace{\stretch{1}}(1.23)

We can now return to 1.19, and find

\begin{aligned}{\langle {n} \rvert} X^4 {\lvert {n} \rangle}&=\frac{\hbar^2}{4 m^2 \omega^2} {\langle {n} \rvert} (a + a^\dagger)^4 {\lvert {n} \rangle}\end{aligned}

Consider half of this braket

\begin{aligned}(a + a^\dagger)^2 {\lvert {n} \rangle}&=\left( a^2 + (a^\dagger)^2 + a^\dagger a + a a^\dagger \right) {\lvert {n} \rangle} \\ &=\left( a^2 + (a^\dagger)^2 + a^\dagger a + (1 + a^\dagger a) \right) {\lvert {n} \rangle} \\ &=\left( a^2 + (a^\dagger)^2 + 1 + 2 a^\dagger a \right) {\lvert {n} \rangle} \\ &=\sqrt{n}\sqrt{n-1} {\lvert {n-2} \rangle}+\sqrt{n+1}\sqrt{n+2} {\lvert {n + 2} \rangle}+(1 + 2 n) {\lvert {n} \rangle}\end{aligned}

Taking the squared norm, utilizing the Hermitian nature of the X operator, we have

\begin{aligned}{\langle {n} \rvert} X^4 {\lvert {n} \rangle}=\frac{\hbar^2}{4 m^2 \omega^2}\left(n(n-1) + (n+1)(n+2) + (1 + 2n)^2\right)=\frac{\hbar^2}{4 m^2 \omega^2}\left( 6 n^2 + 6 n + 3 \right)\end{aligned} \hspace{\stretch{1}}(1.24)
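As a quick sanity check (my addition), setting n = 0 gives

\begin{aligned}{\langle {0} \rvert} X^4 {\lvert {0} \rangle} = 3 \left( \frac{\hbar}{2 m \omega} \right)^2 = 3 \left( {\langle {0} \rvert} X^2 {\lvert {0} \rangle} \right)^2,\end{aligned}

the familiar \left\langle{{x^4}}\right\rangle = 3 \sigma^4 moment of the Gaussian ground state, whose variance is \sigma^2 = \hbar/2 m \omega.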

Part (b)

Find the ground state energy of the Hamiltonian H = H_0 + \gamma X^2 for \gamma > 0.

The new Hamiltonian has the form

\begin{aligned}H = \frac{P^2}{2m} + \frac{1}{{2}} m \left(\omega^2 + \frac{2 \gamma}{m} \right) X^2 =\frac{P^2}{2m} + \frac{1}{{2}} m {\omega'}^2 X^2,\end{aligned} \hspace{\stretch{1}}(1.25)

where

\begin{aligned}\omega' = \sqrt{ \omega^2 + \frac{2 \gamma}{m} }\end{aligned} \hspace{\stretch{1}}(1.26)

The energy states of the Hamiltonian are thus

\begin{aligned}E_n = \hbar \sqrt{ \omega^2 + \frac{2 \gamma}{m} } \left( n + \frac{1}{{2}} \right)\end{aligned} \hspace{\stretch{1}}(1.27)

and the ground state of the modified Hamiltonian H is thus

\begin{aligned}E_0 = \frac{\hbar}{2} \sqrt{ \omega^2 + \frac{2 \gamma}{m} }\end{aligned} \hspace{\stretch{1}}(1.28)

Part (c)

Find the ground state energy of the Hamiltonian H = H_0 - \alpha X.

With a bit of play, this new Hamiltonian can be factored into

\begin{aligned}H= \hbar \omega \left( b^\dagger b + \frac{1}{{2}} \right) - \frac{\alpha^2}{2 m \omega^2}= \hbar \omega \left( b b^\dagger - \frac{1}{{2}} \right) - \frac{\alpha^2}{2 m \omega^2},\end{aligned} \hspace{\stretch{1}}(1.29)

where

\begin{aligned}b &= \sqrt{\frac{m \omega}{2\hbar}} X + \frac{i P}{\sqrt{2 m \hbar \omega}} - \frac{\alpha}{\omega \sqrt{ 2 m \hbar \omega }} \\ b^\dagger &= \sqrt{\frac{m \omega}{2\hbar}} X - \frac{i P}{\sqrt{2 m \hbar \omega}} - \frac{\alpha}{\omega \sqrt{ 2 m \hbar \omega }}.\end{aligned} \hspace{\stretch{1}}(1.30)
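As a cross check (my own), the same result follows from completing the square in the original Hamiltonian

\begin{aligned}H = \frac{P^2}{2m} + \frac{1}{{2}} m \omega^2 \left( X - \frac{\alpha}{m \omega^2} \right)^2 - \frac{\alpha^2}{2 m \omega^2},\end{aligned}

a harmonic oscillator displaced to \alpha/m \omega^2, with every level lowered by the constant \alpha^2/2 m \omega^2.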

From 1.29 we see that we have the same sort of commutator relationship as in the original Hamiltonian

\begin{aligned}\left[{b},{b^\dagger}\right] = 1,\end{aligned} \hspace{\stretch{1}}(1.32)

and because of this, all the preceding arguments follow unchanged, with the exception that the energy eigenvalues of this Hamiltonian are shifted by a constant

\begin{aligned}H {\lvert {n} \rangle} = \left( \hbar \omega \left( n + \frac{1}{{2}} \right) - \frac{\alpha^2}{2 m \omega^2} \right) {\lvert {n} \rangle},\end{aligned} \hspace{\stretch{1}}(1.33)

where the {\lvert {n} \rangle} states are simultaneous eigenstates of the b^\dagger b operator

\begin{aligned}b^\dagger b {\lvert {n} \rangle} = n {\lvert {n} \rangle}.\end{aligned} \hspace{\stretch{1}}(1.34)

The ground state energy is then

\begin{aligned}E_0 = \frac{\hbar \omega }{2} - \frac{\alpha^2}{2 m \omega^2}.\end{aligned} \hspace{\stretch{1}}(1.35)

This makes sense. A translation of the entire system’s position should not affect its energy level distribution; we have simply set our potential reference point differently, picking up a constant energy adjustment to the entire spectrum.

Hydrogen atom and spherical harmonics.

We are asked to show that for any eigenkets of the hydrogen atom {\lvert {\Phi_{nlm}} \rangle} we have

\begin{aligned}{\langle {\Phi_{nlm}} \rvert} X {\lvert {\Phi_{nlm}} \rangle} ={\langle {\Phi_{nlm}} \rvert} Y {\lvert {\Phi_{nlm}} \rangle} ={\langle {\Phi_{nlm}} \rvert} Z {\lvert {\Phi_{nlm}} \rangle}.\end{aligned} \hspace{\stretch{1}}(2.36)

The summary sheet provides us with the wavefunction

\begin{aligned}\left\langle{\mathbf{r}} \vert {{\Phi_{nlm}}}\right\rangle = \frac{2}{n^2 a_0^{3/2}} \sqrt{\frac{(n-l-1)!}{((n+l)!)^3}} F_{nl}\left( \frac{2r}{n a_0} \right) Y_l^m(\theta, \phi),\end{aligned} \hspace{\stretch{1}}(2.37)

where F_{nl} is a real valued function defined in terms of Laguerre polynomials. Starting with the expectation of the X operator, we have

\begin{aligned}{\langle {\Phi_{nlm}} \rvert} X {\lvert {\Phi_{nlm}} \rangle} &=\int \left\langle{{\Phi_{nlm}}} \vert {{\mathbf{r}'}}\right\rangle {\langle {\mathbf{r}'} \rvert} X {\lvert {\mathbf{r}} \rangle} \left\langle{\mathbf{r}} \vert {{\Phi_{nlm}}}\right\rangle d^3 \mathbf{r} d^3 \mathbf{r}' \\ &=\int \left\langle{{\Phi_{nlm}}} \vert {{\mathbf{r}'}}\right\rangle \delta(\mathbf{r} - \mathbf{r}') r \sin\theta \cos\phi \left\langle{\mathbf{r}} \vert {{\Phi_{nlm}}}\right\rangle d^3 \mathbf{r} d^3 \mathbf{r}' \\ &=\int \Phi_{nlm}^{*}(\mathbf{r}) r \sin\theta \cos\phi \Phi_{nlm}(\mathbf{r}) d^3 \mathbf{r} \\ &\sim\int r^2 dr {\left\lvert{ F_{nl}\left(\frac{2 r}{ n a_0} \right)}\right\rvert}^2 r \int \sin\theta d\theta d\phi{Y_l^m}^{*}(\theta, \phi) \sin\theta \cos\phi Y_l^m(\theta, \phi) \\ \end{aligned}

Recalling that the only \phi dependence in Y_l^m is e^{i m \phi} we can perform the d\phi integration directly, which is

\begin{aligned}\int_{\phi=0}^{2\pi} \cos\phi d\phi e^{-i m \phi} e^{i m \phi} = 0.\end{aligned} \hspace{\stretch{1}}(2.38)

We have the same story for the Y expectation which is

\begin{aligned}{\langle {\Phi_{nlm}} \rvert} Y {\lvert {\Phi_{nlm}} \rangle} \sim\int r^2 dr {\left\lvert{F_{nl}\left( \frac{2 r}{ n a_0} \right)}\right\rvert}^2 r \int \sin\theta d\theta d\phi{Y_l^m}^{*}(\theta, \phi) \sin\theta \sin\phi Y_l^m(\theta, \phi).\end{aligned} \hspace{\stretch{1}}(2.39)

Our \phi integral is then just

\begin{aligned}\int_{\phi=0}^{2\pi} \sin\phi d\phi e^{-i m \phi} e^{i m \phi} = 0,\end{aligned} \hspace{\stretch{1}}(2.40)

also zero. The Z expectation is a slightly different story. There we have

\begin{aligned}\begin{aligned}{\langle {\Phi_{nlm}} \rvert} Z {\lvert {\Phi_{nlm}} \rangle} &\sim\int dr {\left\lvert{F_{nl}\left( \frac{2 r}{ n a_0} \right)}\right\rvert}^2 r^3  \\ &\quad \int_0^{2\pi} d\phi\int_0^\pi \sin \theta d\theta\left( \sin\theta \right)^{-2m}\left( \frac{d^{l - m}}{d (\cos\theta)^{l-m}} \sin^{2l}\theta \right)^2\cos\theta.\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.41)

Within this last integral we can make the substitution

\begin{aligned}u &= \cos\theta \\ \sin\theta d\theta &= - d(\cos\theta) = -du \\ u &\in [1, -1],\end{aligned} \hspace{\stretch{1}}(2.42)

and the integral takes the form

\begin{aligned}-\int_{-1}^1 (-du) \frac{1}{{(1 - u^2)^m}} \left( \frac{d^{l-m}}{d u^{l -m }} (1 - u^2)^l\right)^2 u.\end{aligned} \hspace{\stretch{1}}(2.45)

Here we have the product of two even functions, times one odd function (u), over a symmetric interval, so the end result is zero, completing the problem.

I wasn’t able to see how to exploit the parity result suggested in the problem, but it wasn’t so bad to show these directly.
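(After the fact, here is how I think that parity argument would go: under \mathbf{r} \rightarrow -\mathbf{r}, that is \theta \rightarrow \pi - \theta, \phi \rightarrow \phi + \pi, the spherical harmonics obey

\begin{aligned}Y_l^m(\pi - \theta, \phi + \pi) = (-1)^l Y_l^m(\theta, \phi),\end{aligned}

so {\left\lvert{\Phi_{nlm}(\mathbf{r})}\right\rvert}^2 is parity even, while each of x, y, and z is parity odd, and all three expectation values vanish as integrals of odd functions.)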

Angular momentum operator.

Working with the appropriate expressions in Cartesian components, confirm that L_i {\lvert {\psi} \rangle} = 0 for each component of angular momentum L_i, if \left\langle{\mathbf{r}} \vert {{\psi}}\right\rangle = \psi(\mathbf{r}) is in fact only a function of r = {\left\lvert{\mathbf{r}}\right\rvert}.

In order to proceed, we will have to consider a matrix element, so that we can operate on {\lvert {\psi} \rangle} in position space. For that matrix element, we can proceed to insert complete states, and reduce the problem to a question of wavefunctions. That is

\begin{aligned}{\langle {\mathbf{r}} \rvert} L_i {\lvert {\psi} \rangle}&=\int d^3 \mathbf{r}' {\langle {\mathbf{r}} \rvert} L_i {\lvert {\mathbf{r}'} \rangle} \left\langle{{\mathbf{r}'}} \vert {{\psi}}\right\rangle \\ &=\int d^3 \mathbf{r}' {\langle {\mathbf{r}} \rvert} \epsilon_{i a b} X_a P_b {\lvert {\mathbf{r}'} \rangle} \left\langle{{\mathbf{r}'}} \vert {{\psi}}\right\rangle \\ &=-i \hbar \epsilon_{i a b} \int d^3 \mathbf{r}' x_a \frac{\partial {\psi(\mathbf{r}')}}{\partial {x_b'}} \left\langle{\mathbf{r}} \vert {{\mathbf{r}'}}\right\rangle  \\ &=-i \hbar \epsilon_{i a b} \int d^3 \mathbf{r}' x_a \frac{\partial {\psi(\mathbf{r}')}}{\partial {x_b'}} \delta^3(\mathbf{r} - \mathbf{r}') \\ &=-i \hbar \epsilon_{i a b} x_a \frac{\partial {\psi(\mathbf{r})}}{\partial {x_b}} \end{aligned}

With \psi(\mathbf{r}) = \psi(r) we have

\begin{aligned}{\langle {\mathbf{r}} \rvert} L_i {\lvert {\psi} \rangle}&=-i \hbar \epsilon_{i a b} x_a \frac{\partial {\psi(r)}}{\partial {x_b}}  \\ &=-i \hbar \epsilon_{i a b} x_a \frac{\partial {r}}{\partial {x_b}} \frac{d\psi(r)}{dr}  \\ &=-i \hbar \epsilon_{i a b} x_a \frac{1}{{2}} 2 x_b \frac{1}{{r}} \frac{d\psi(r)}{dr}  \\ \end{aligned}

We are left with a sum of the symmetric product x_a x_b against the antisymmetric tensor \epsilon_{i a b}, so this is zero for each i \in \{1, 2, 3\}.
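Spelled out, relabeling the dummy indices and using the antisymmetry of \epsilon_{i a b}

\begin{aligned}\epsilon_{i a b} x_a x_b = \epsilon_{i b a} x_b x_a = -\epsilon_{i a b} x_a x_b \qquad \Rightarrow \qquad \epsilon_{i a b} x_a x_b = 0.\end{aligned}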

References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

Posted in Math and Physics Learning.

scripted perl undo of a code modification to test using diff.

Posted by peeterjoot on September 15, 2011

We’ve got some code that uses an external product that allows us to put some of our info in a generic structure with integer members name1, name2, name3, name4.

Unfortunately, when we started using this, we didn’t use macros or inlines to encode exactly what we’d chosen to put in these fields, so now we’ve got lots of code that tries to compensate by using comments like so:

               sqmIncrObjMetricsCounter( agtCB,
                                         pCaWobEntry->n.name2, // poolID
                                         pCaWobEntry->n.name3, // objectID
                                         0,
                                         SQLB_OBJTYPE(pCaWobEntry->n.name4), // objectType
                                         1,
                                         &objMetric ) ;

I chose to retrofit in some inline helper methods to eliminate the requirement for adding (and relying on!) comments to describe what was going on. Although I did figure out some partially automated methods, I was unfortunately not clever enough to figure out how to do this code modification in an entirely scripted fashion. I came up with what I thought was a good test strategy, where I was able to write a script to undo my changes, and then run diff against the ancestor in our version control repository to verify that I’d not actually introduced any changes of substance. I thought the method was a neat technique, yet another example of using ‘perl -p -i’ ad-hoc scripting. Here’s a fragment of the undo script:

# undo the by-reference accessor: SAL_ExtractPageNumFromPageName( &x ) => x.name1
s/SAL_ExtractPageNumFromPageName *\( *&([^ \)]+) *\)/$1.name1/g;
# undo the by-pointer accessor: SAL_ExtractPageNumFromPageName( p ) => p->name1
s/SAL_ExtractPageNumFromPageName *\( *([^ \)]+) *\)/$1->name1/g;

# undo the by-reference setter: SAL_AssignPageNumToPageName( &x, v ) => x.name1 = v
s/SAL_AssignPageNumToPageName *\( *&([^ ,]+) *, *([^ \)]+) *\)/$1.name1 = $2/g;
# undo the by-pointer setter: SAL_AssignPageNumToPageName( p, v ) => p->name1 = v
s/SAL_AssignPageNumToPageName *\( *([^ ,]+) *, *([^ \)]+) *\)/$1->name1 = $2/g;
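A run of the undo over the modified files, followed by a comparison against the repository ancestor, then looks something like this (hypothetical file and path names)

perl -p -i undoRetrofit.pl sqmagent.C sqmwob.C
diff sqmagent.C ~/ancestorCopy/sqmagent.C

with an empty diff confirming that the retrofit, composed with its undo, round trips back to the ancestor exactly.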

After running this on all the files I had modified, I was able to verify that every name1 access I had converted to a helper method mapped back to name1, and that I had not introduced any copy and paste errors by using the wrong inline anywhere.

Of course this presumes that I didn’t make a corresponding bug in my undo script, but that’s a much smaller task to review than all the other code in question, and I’m now much more comfortable committing this change to our version control system.

Posted in C/C++ development and debugging.