
PHY456H1F: Quantum Mechanics II. Lecture 4 (Taught by Prof J.E. Sipe). Time independent perturbation theory (continued)

Posted by peeterjoot on September 23, 2011

[Click here for a PDF of this post with nicer formatting (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Disclaimer.

Peeter’s lecture notes from class. May not be entirely coherent.

Time independent perturbation.

The setup

To recap, we were covering the time independent perturbation methods from section 16.1 of the text [1]. We start with a known Hamiltonian H_0, and alter it with the addition of a “small” perturbation

\begin{aligned}H = H_0 + \lambda H', \qquad \lambda \in [0,1]\end{aligned} \hspace{\stretch{1}}(2.1)

For the original operator, we assume that a complete set of eigenvalues and eigenkets is known

\begin{aligned}H_0 {\lvert {{\psi_s}^{(0)}} \rangle} = {E_s}^{(0)} {\lvert {{\psi_s}^{(0)}} \rangle}\end{aligned} \hspace{\stretch{1}}(2.2)

We seek the perturbed eigensolution

\begin{aligned}H {\lvert {\psi_s} \rangle} = E_s {\lvert {\psi_s} \rangle}\end{aligned} \hspace{\stretch{1}}(2.3)

and assume a perturbative series representation for the energy eigenvalues in the new system

\begin{aligned}E_s = {E_s}^{(0)} + \lambda {E_s}^{(1)} + \lambda^2 {E_s}^{(2)} + \cdots\end{aligned} \hspace{\stretch{1}}(2.4)

Given an assumed representation for the new eigenkets in terms of the known basis

\begin{aligned}{\lvert {\psi_s} \rangle} = \sum_n c_{ns} {\lvert {{\psi_n}^{(0)}} \rangle} \end{aligned} \hspace{\stretch{1}}(2.5)

and a perturbative series representation for the expansion coefficients

\begin{aligned}c_{ns} = {c_{ns}}^{(0)} + \lambda {c_{ns}}^{(1)} + \lambda^2 {c_{ns}}^{(2)} + \cdots,\end{aligned} \hspace{\stretch{1}}(2.6)

so that

\begin{aligned}{\lvert {\psi_s} \rangle} = \sum_n {c_{ns}}^{(0)} {\lvert {{\psi_n}^{(0)}} \rangle} +\lambda\sum_n {c_{ns}}^{(1)} {\lvert {{\psi_n}^{(0)}} \rangle} + \lambda^2\sum_n {c_{ns}}^{(2)} {\lvert {{\psi_n}^{(0)}} \rangle} + \cdots\end{aligned} \hspace{\stretch{1}}(2.7)

Setting \lambda = 0 requires

\begin{aligned}{c_{ns}}^{(0)} = \delta_{ns},\end{aligned} \hspace{\stretch{1}}(2.8)

for

\begin{aligned}\begin{aligned}{\lvert {\psi_s} \rangle} &= {\lvert {{\psi_s}^{(0)}} \rangle} +\lambda\sum_n {c_{ns}}^{(1)} {\lvert {{\psi_n}^{(0)}} \rangle} + \lambda^2\sum_n {c_{ns}}^{(2)} {\lvert {{\psi_n}^{(0)}} \rangle} + \cdots \\ &=\left(1 + \lambda {c_{ss}}^{(1)} + \lambda^2 {c_{ss}}^{(2)} + \cdots\right){\lvert {{\psi_s}^{(0)}} \rangle} + \lambda\sum_{n \ne s} {c_{ns}}^{(1)} {\lvert {{\psi_n}^{(0)}} \rangle} +\lambda^2\sum_{n \ne s} {c_{ns}}^{(2)} {\lvert {{\psi_n}^{(0)}} \rangle} + \cdots\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.9)

We rescaled our kets

\begin{aligned}{\lvert {\bar{\psi}_s} \rangle} ={\lvert {{\psi_s}^{(0)}} \rangle} + \lambda\sum_{n \ne s} {\bar{c}_{ns}}^{(1)} {\lvert {{\psi_n}^{(0)}} \rangle} +\lambda^2\sum_{n \ne s} {\bar{c}_{ns}}^{(2)} {\lvert {{\psi_n}^{(0)}} \rangle} + \cdots\end{aligned} \hspace{\stretch{1}}(2.10)

where

\begin{aligned}{\bar{c}_{ns}}^{(j)} = \frac{{c_{ns}}^{(j)}}{1 + \lambda {c_{ss}}^{(1)} + \lambda^2 {c_{ss}}^{(2)} + \cdots}\end{aligned} \hspace{\stretch{1}}(2.11)

The normalization of the rescaled kets is then

\begin{aligned}\left\langle{{\bar{\psi}_s}} \vert {{\bar{\psi}_s}}\right\rangle =1+ \lambda^2\sum_{n \ne s} {\left\lvert{{\bar{c}_{ns}}^{(1)}}\right\rvert}^2+\cdots\equiv \frac{1}{{Z_s}},\end{aligned} \hspace{\stretch{1}}(2.12)

One can then construct a renormalized ket if desired

\begin{aligned}{\lvert {\bar{\psi}_s} \rangle}_R = Z_s^{1/2} {\lvert {\bar{\psi}_s} \rangle},\end{aligned} \hspace{\stretch{1}}(2.13)

so that

\begin{aligned}({\lvert {\bar{\psi}_s} \rangle}_R)^\dagger {\lvert {\bar{\psi}_s} \rangle}_R = Z_s \left\langle{{\bar{\psi}_s}} \vert {{\bar{\psi}_s}}\right\rangle = 1.\end{aligned} \hspace{\stretch{1}}(2.14)

The meat.

That’s as far as we got last time. We continue by renaming terms in 2.10

\begin{aligned}{\lvert {\bar{\psi}_s} \rangle} ={\lvert {{\psi_s}^{(0)}} \rangle} + \lambda {\lvert {{\psi_s}^{(1)}} \rangle} + \lambda^2 {\lvert {{\psi_s}^{(2)}} \rangle} + \cdots\end{aligned} \hspace{\stretch{1}}(2.15)

where

\begin{aligned}{\lvert {{\psi_s}^{(j)}} \rangle} = \sum_{n \ne s} {\bar{c}_{ns}}^{(j)} {\lvert {{\psi_n}^{(0)}} \rangle}.\end{aligned} \hspace{\stretch{1}}(2.16)

Now we act on this with the Hamiltonian

\begin{aligned}H {\lvert {\bar{\psi}_s} \rangle} = E_s {\lvert {\bar{\psi}_s} \rangle},\end{aligned} \hspace{\stretch{1}}(2.17)

or

\begin{aligned}H {\lvert {\bar{\psi}_s} \rangle} - E_s {\lvert {\bar{\psi}_s} \rangle} = 0.\end{aligned} \hspace{\stretch{1}}(2.18)

Expanding this, we have

\begin{aligned}\begin{aligned}&(H_0 + \lambda H') \left({\lvert {{\psi_s}^{(0)}} \rangle} + \lambda {\lvert {{\psi_s}^{(1)}} \rangle} + \lambda^2 {\lvert {{\psi_s}^{(2)}} \rangle} + \cdots\right) \\ &\quad - \left( {E_s}^{(0)} + \lambda {E_s}^{(1)} + \lambda^2 {E_s}^{(2)} + \cdots \right)\left({\lvert {{\psi_s}^{(0)}} \rangle} + \lambda {\lvert {{\psi_s}^{(1)}} \rangle} + \lambda^2 {\lvert {{\psi_s}^{(2)}} \rangle} + \cdots\right)= 0.\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.19)

We want to write this as

\begin{aligned}{\lvert {A} \rangle} + \lambda {\lvert {B} \rangle} + \lambda^2 {\lvert {C} \rangle} + \cdots = 0.\end{aligned} \hspace{\stretch{1}}(2.20)

This is

\begin{aligned}\begin{aligned}0 &=\lambda^0(H_0 - E_s^{(0)}) {\lvert {{\psi_s}^{(0)}} \rangle}  \\ &+ \lambda\left((H_0 - E_s^{(0)}) {\lvert {{\psi_s}^{(1)}} \rangle} +(H' - E_s^{(1)}) {\lvert {{\psi_s}^{(0)}} \rangle} \right) \\ &+ \lambda^2\left((H_0 - E_s^{(0)}) {\lvert {{\psi_s}^{(2)}} \rangle} +(H' - E_s^{(1)}) {\lvert {{\psi_s}^{(1)}} \rangle} -E_s^{(2)} {\lvert {{\psi_s}^{(0)}} \rangle} \right) \\ &\cdots\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.21)

So we form

\begin{aligned}{\lvert {A} \rangle} &=(H_0 - E_s^{(0)}) {\lvert {{\psi_s}^{(0)}} \rangle} \\ {\lvert {B} \rangle} &=(H_0 - E_s^{(0)}) {\lvert {{\psi_s}^{(1)}} \rangle} +(H' - E_s^{(1)}) {\lvert {{\psi_s}^{(0)}} \rangle} \\ {\lvert {C} \rangle} &=(H_0 - E_s^{(0)}) {\lvert {{\psi_s}^{(2)}} \rangle} +(H' - E_s^{(1)}) {\lvert {{\psi_s}^{(1)}} \rangle} -E_s^{(2)} {\lvert {{\psi_s}^{(0)}} \rangle},\end{aligned} \hspace{\stretch{1}}(2.22)

and so forth.

Zeroth order in \lambda

Since H_0 {\lvert {{\psi_s}^{(0)}} \rangle} = E_s^{(0)} {\lvert {{\psi_s}^{(0)}} \rangle}, this zeroth order condition {\lvert {A} \rangle} = 0 is satisfied identically; it is not much more than a statement that 0 - 0 = 0.

First order in \lambda

How about {\lvert {B} \rangle} = 0? For this to be zero we require that both of the following are simultaneously zero

\begin{aligned}\left\langle{{{\psi_s}^{(0)}}} \vert {{B}}\right\rangle &= 0 \\ \left\langle{{{\psi_m}^{(0)}}} \vert {{B}}\right\rangle &= 0, \qquad m \ne s\end{aligned} \hspace{\stretch{1}}(2.25)

This first condition is

\begin{aligned}{\langle {{\psi_s}^{(0)}} \rvert} (H' - E_s^{(1)}) {\lvert {{\psi_s}^{(0)}} \rangle} = 0.\end{aligned} \hspace{\stretch{1}}(2.27)

With

\begin{aligned}{\langle {{\psi_m}^{(0)}} \rvert} H' {\lvert {{\psi_s}^{(0)}} \rangle} \equiv {H_{ms}}',\end{aligned} \hspace{\stretch{1}}(2.28)

this becomes

\begin{aligned}{H_{ss}}' = E_s^{(1)}.\end{aligned} \hspace{\stretch{1}}(2.29)

From the second condition we have

\begin{aligned}0 = {\langle {{\psi_m}^{(0)}} \rvert} (H_0 - E_s^{(0)}) {\lvert {{\psi_s}^{(1)}} \rangle} +{\langle {{\psi_m}^{(0)}} \rvert} (H' - E_s^{(1)}) {\lvert {{\psi_s}^{(0)}} \rangle} \end{aligned} \hspace{\stretch{1}}(2.30)

Utilizing the Hermitian nature of H_0 we can let it act to the left on {\langle {{\psi_m}^{(0)}} \rvert}

\begin{aligned}{\langle {{\psi_m}^{(0)}} \rvert} H_0=E_m^{(0)} {\langle {{\psi_m}^{(0)}} \rvert}.\end{aligned} \hspace{\stretch{1}}(2.31)

We note that \left\langle{{{\psi_m}^{(0)}}} \vert {{{\psi_s}^{(0)}}}\right\rangle = 0, m \ne s. We can also expand the \left\langle{{{\psi_m}^{(0)}}} \vert {{{\psi_s}^{(1)}}}\right\rangle, which is

\begin{aligned}\left\langle{{{\psi_m}^{(0)}}} \vert {{{\psi_s}^{(1)}}}\right\rangle &={\langle {{\psi_m}^{(0)}} \rvert}\left(\sum_{n \ne s} {\bar{c}_{ns}}^{(1)} {\lvert {{\psi_n}^{(0)}} \rangle}\right) \\ \end{aligned}

I found that reducing this sum wasn’t obvious until some actual integers were plugged in. Suppose that s = 3, and m = 5, then this is

\begin{aligned}\left\langle{{{\psi_5}^{(0)}}} \vert {{{\psi_3}^{(1)}}}\right\rangle &={\langle {{\psi_5}^{(0)}} \rvert}\left(\sum_{n = 0, 1, 2, 4, 5, \cdots} {\bar{c}_{n3}}^{(1)} {\lvert {{\psi_n}^{(0)}} \rangle}\right) \\ &={\bar{c}_{53}}^{(1)} \left\langle{{{\psi_5}^{(0)}}} \vert {{{\psi_5}^{(0)}}}\right\rangle \\ &={\bar{c}_{53}}^{(1)}.\end{aligned}

More generally that is

\begin{aligned}\left\langle{{{\psi_m}^{(0)}}} \vert {{{\psi_s}^{(1)}}}\right\rangle ={\bar{c}_{ms}}^{(1)}.\end{aligned} \hspace{\stretch{1}}(2.32)

Utilizing this gives us

\begin{aligned}0 = ( E_m^{(0)} - E_s^{(0)}) {\bar{c}_{ms}}^{(1)}+{H_{ms}}' \end{aligned} \hspace{\stretch{1}}(2.33)

And summarizing what we learn from our {\lvert {B} \rangle} = 0 conditions we have

\begin{aligned}E_s^{(1)} &= {H_{ss}}' \\ {\bar{c}_{ms}}^{(1)}&=\frac{{H_{ms}}' }{ E_s^{(0)} - E_m^{(0)} }\end{aligned} \hspace{\stretch{1}}(2.34)

Second order in \lambda

Doing the same thing for {\lvert {C} \rangle} = 0, we require

\begin{aligned}\left\langle{{{\psi_s}^{(0)}}} \vert {{C}}\right\rangle &= 0 \\ \left\langle{{{\psi_m}^{(0)}}} \vert {{C}}\right\rangle &= 0, \qquad m \ne s\end{aligned} \hspace{\stretch{1}}(2.36)

\begin{aligned}0 &= \left\langle{{{\psi_s}^{(0)}}} \vert {{C}}\right\rangle  \\ &={\langle {{\psi_s}^{(0)}} \rvert}\left((H_0 - E_s^{(0)}) {\lvert {{\psi_s}^{(2)}} \rangle} +(H' - E_s^{(1)}) {\lvert {{\psi_s}^{(1)}} \rangle} -E_s^{(2)} {\lvert {{\psi_s}^{(0)}} \rangle}  \right) \\ &=(E_s^{(0)} - E_s^{(0)}) \left\langle{{{\psi_s}^{(0)}}} \vert {{{\psi_s}^{(2)}}}\right\rangle +{\langle {{\psi_s}^{(0)}} \rvert}(H' - E_s^{(1)}) {\lvert {{\psi_s}^{(1)}} \rangle} -E_s^{(2)} \left\langle{{{\psi_s}^{(0)}}} \vert {{{\psi_s}^{(0)}}}\right\rangle \end{aligned}

We need to know what \left\langle{{{\psi_s}^{(0)}}} \vert {{{\psi_s}^{(1)}}}\right\rangle is, and find that it is zero

\begin{aligned}\left\langle{{{\psi_s}^{(0)}}} \vert {{{\psi_s}^{(1)}}}\right\rangle={\langle {{\psi_s}^{(0)}} \rvert}\sum_{n \ne s} {\bar{c}_{ns}}^{(1)} {\lvert {{\psi_n}^{(0)}} \rangle}\end{aligned} \hspace{\stretch{1}}(2.38)

Again, suppose that s = 3. Our sum ranges over all n \ne 3, so all the brakets are zero. Utilizing that we have

\begin{aligned}E_s^{(2)} &={\langle {{\psi_s}^{(0)}} \rvert} H' {\lvert {{\psi_s}^{(1)}} \rangle}  \\ &={\langle {{\psi_s}^{(0)}} \rvert} H' \sum_{m \ne s} {\bar{c}_{ms}}^{(1)} {\lvert {{\psi_m}^{(0)}} \rangle} \\ &=\sum_{m \ne s} {\bar{c}_{ms}}^{(1)} {H_{sm}}'\end{aligned}

From 2.34 we have

\begin{aligned}E_s^{(2)} =\sum_{m \ne s} \frac{{H_{ms}}' }{ E_s^{(0)} - E_m^{(0)} }{H_{sm}}'=\sum_{m \ne s} \frac{{\left\lvert{{H_{ms}}'}\right\rvert}^2 }{ E_s^{(0)} - E_m^{(0)} }\end{aligned} \hspace{\stretch{1}}(2.39)

We can now summarize by forming the perturbed energy to second order and the corresponding perturbed kets to first order

\begin{aligned}E_s &= E_s^{(0)} + \lambda {H_{ss}}' + \lambda^2 \sum_{m \ne s} \frac{{\left\lvert{{H_{ms}}'}\right\rvert}^2 }{ E_s^{(0)} - E_m^{(0)} } + \cdots\\ {\lvert {\bar{\psi}_s} \rangle} &= {\lvert {{\psi_s}^{(0)}} \rangle} + \lambda\sum_{m \ne s} \frac{{H_{ms}}'}{ E_s^{(0)} - E_m^{(0)} } {\lvert {{\psi_m}^{(0)}} \rangle}+ \cdots\end{aligned} \hspace{\stretch{1}}(2.40)

We can continue calculating, but are hopeful that we can stop the calculation without doing more work, even if \lambda = 1. If one supposes that the

\begin{aligned}\sum_{m \ne s} \frac{{H_{ms}}'}{ E_s^{(0)} - E_m^{(0)} } \end{aligned} \hspace{\stretch{1}}(2.42)

term is “small”, then we can hope that truncating the series will be reasonable even for \lambda = 1. This would be the case if

\begin{aligned}{H_{ms}}' \ll {\left\lvert{ E_s^{(0)} - E_m^{(0)} }\right\rvert},\end{aligned} \hspace{\stretch{1}}(2.43)

however, putting mathematical rigor into a statement of such smallness takes a lot of work. We are referred to [2]. Incidentally, its two volumes are loosely referred to as the first and second testaments, because of the author's name, and because the work historically came as two volumes.
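
Not from the lecture, but 2.40 is easy to spot check numerically. Here is a small Python sketch of my own (the random matrices and the perturbation strength are arbitrary choices) comparing the second order perturbative energies against exact diagonalization:

import numpy as np

# My own sanity check, not from the lecture: compare the second order energies of
# 2.40 against exact diagonalization, for an arbitrary small Hermitian perturbation
# of a non-degenerate H0.
rng = np.random.default_rng(0)
N = 6
E0 = np.arange(N, dtype=float)              # unperturbed energies E_s^(0), non-degenerate
H0 = np.diag(E0)
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
Hp = (A + A.conj().T) / 2                   # a random Hermitian H'
lam = 0.01                                  # "small" perturbation strength

exact = np.linalg.eigvalsh(H0 + lam * Hp)   # exact energies, sorted ascending

approx = np.empty(N)
for s in range(N):
    second = sum(abs(Hp[m, s])**2 / (E0[s] - E0[m]) for m in range(N) if m != s)
    approx[s] = E0[s] + lam * Hp[s, s].real + lam**2 * second

print(np.max(np.abs(np.sort(approx) - exact)))  # should be of order lam^3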

References

[1] B.R. Desai. Quantum Mechanics with Basic Field Theory. Cambridge University Press, 2009.

[2] A. Messiah, G.M. Temmer, and J. Potter. Quantum Mechanics: Two Volumes Bound as One. Dover Publications, New York, 1999.


PHY456H1F, Quantum Mechanics II. My solutions to problem set 1 (ungraded).

Posted by peeterjoot on September 19, 2011

[Click here for a PDF of this post with nicer formatting (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Harmonic oscillator.

Consider

\begin{aligned}H_0 = \frac{P^2}{2m} + \frac{1}{{2}} m \omega^2 X^2\end{aligned} \hspace{\stretch{1}}(1.1)

Since it’s been a while, let’s recompute the raising and lowering operator factorization that was used so extensively for this problem.

It was of the form

\begin{aligned}H_0 = (a X - i b P)(a X + i b P) + \cdots\end{aligned} \hspace{\stretch{1}}(1.2)

Why this factorization has an imaginary factor in it is a good question. It’s not one that is given any sort of rationale in the text [1].

It’s clear that we want a = \sqrt{m/2} \omega and b = 1/\sqrt{2m}. The difference is then

\begin{aligned}H_0 - (a X - i b P)(a X + i b P)=- i a b \left[{X},{P}\right]  = - i \frac{\omega}{2} \left[{X},{P}\right]\end{aligned} \hspace{\stretch{1}}(1.3)

That commutator is an i\hbar value, but what was the sign? Let’s compute so we don’t get it wrong

\begin{aligned}\left[{x},{ p}\right] \psi&= -i \hbar \left[{x},{\partial_x}\right] \psi \\ &= -i \hbar ( x \partial_x \psi - \partial_x (x \psi) ) \\ &= -i \hbar ( - \psi ) \\ &= i \hbar \psi\end{aligned}

So we have

\begin{aligned}H_0 =\left(\omega \sqrt{\frac{m}{2}} X - i \sqrt{\frac{1}{2m}} P\right)\left(\omega \sqrt{\frac{m}{2}} X + i \sqrt{\frac{1}{2m}} P\right)+ \frac{\hbar \omega}{2}\end{aligned} \hspace{\stretch{1}}(1.4)

Factoring out an \hbar \omega produces the form of the Hamiltonian that we used before

\begin{aligned}H_0 =\hbar \omega \left(\left(\sqrt{\frac{m \omega}{2 \hbar}} X - i \sqrt{\frac{1}{2m \hbar \omega}} P\right)\left(\sqrt{\frac{m \omega}{2 \hbar}} X + i \sqrt{\frac{1}{2m \hbar \omega}} P\right)+ \frac{1}{{2}}\right).\end{aligned} \hspace{\stretch{1}}(1.5)

The factors were labeled the raising (a^\dagger) and lowering (a) operators respectively, and written

\begin{aligned}H_0 &= \hbar \omega \left( a^\dagger a + \frac{1}{{2}} \right) \\ a &= \sqrt{\frac{m \omega}{2 \hbar}} X + i \sqrt{\frac{1}{2m \hbar \omega}} P \\ a^\dagger &= \sqrt{\frac{m \omega}{2 \hbar}} X - i \sqrt{\frac{1}{2m \hbar \omega}} P.\end{aligned} \hspace{\stretch{1}}(1.6)

Observe that we can find the inverse relations

\begin{aligned}X &= \sqrt{ \frac{\hbar}{2 m \omega} } \left( a + a^\dagger \right) \\ P &= i \sqrt{ \frac{m \hbar \omega}{2} } \left( a^\dagger  - a \right)\end{aligned} \hspace{\stretch{1}}(1.9)

Question
What is a good reason that we chose this particular factorization? For example, a quick computation shows that we could have also picked

\begin{aligned}H_0 = \hbar \omega \left( a a^\dagger - \frac{1}{{2}} \right).\end{aligned} \hspace{\stretch{1}}(1.11)

I don’t know the answer. That said, this second factorization is useful in that it provides the commutator relation between the raising and lowering operators, since equating 1.6 and 1.11 yields

\begin{aligned}\left[{a},{a^\dagger}\right] = 1.\end{aligned} \hspace{\stretch{1}}(1.12)

If we suppose that we have eigenstates for the operator a^\dagger a of the form

\begin{aligned}a^\dagger a {\lvert {n} \rangle} = \lambda_n {\lvert {n} \rangle},\end{aligned} \hspace{\stretch{1}}(1.13)

then the problem of finding the eigensolution of H_0 reduces to finding these eigenvalues. Because H_0 = \hbar \omega ( a^\dagger a + 1/2 ), an eigenstate of a^\dagger a is also an eigenstate of H_0. Utilizing 1.12 we then have

\begin{aligned}a^\dagger a ( a {\lvert {n} \rangle} )&= (a a^\dagger - 1 ) a {\lvert {n} \rangle} \\ &= a (a^\dagger a - 1 ) {\lvert {n} \rangle} \\ &= a (\lambda_n - 1 ) {\lvert {n} \rangle} \\ &= (\lambda_n - 1 ) a {\lvert {n} \rangle},\end{aligned}

so we see that a {\lvert {n} \rangle} is an eigenstate of a^\dagger a with eigenvalue \lambda_n - 1.

Similarly for the raising operator

\begin{aligned}a^\dagger a ( a^\dagger {\lvert {n} \rangle} )&=a^\dagger (a  a^\dagger) {\lvert {n} \rangle} \\ &=a^\dagger (a^\dagger a + 1) {\lvert {n} \rangle} \\ &=(\lambda_n + 1) a^\dagger {\lvert {n} \rangle},\end{aligned}

and find that a^\dagger {\lvert {n} \rangle} is also an eigenstate of a^\dagger a with eigenvalue \lambda_n + 1.

Supposing that there is a lowest energy level (since the potential V(x) = m \omega^2 x^2 /2 is bounded below by zero), the lowest energy state {\lvert {0} \rangle} must be annihilated by the lowering operator, and we have

\begin{aligned}a {\lvert {0} \rangle} = 0\end{aligned} \hspace{\stretch{1}}(1.14)

Thus

\begin{aligned}a^\dagger a {\lvert {0} \rangle} = 0,\end{aligned} \hspace{\stretch{1}}(1.15)

and

\begin{aligned}\lambda_0 = 0.\end{aligned} \hspace{\stretch{1}}(1.16)

This seems like a small bit of sleight of hand, since it sneakily supplies an integer value to \lambda_0 where up to this point 0 was just a label.

If the eigenvalue equation we are trying to solve for the Hamiltonian is

\begin{aligned}H_0 {\lvert {n} \rangle} = E_n {\lvert {n} \rangle}.\end{aligned} \hspace{\stretch{1}}(1.17)

then we must have

\begin{aligned}E_n = \hbar \omega \left(\lambda_n + \frac{1}{{2}} \right) = \hbar \omega \left(n + \frac{1}{{2}} \right)\end{aligned} \hspace{\stretch{1}}(1.18)

Part (a)

We’ve now got enough context to attempt the first part of the question, calculation of

\begin{aligned}{\langle {n} \rvert} X^4 {\lvert {n} \rangle}\end{aligned} \hspace{\stretch{1}}(1.19)

We’ve calculated things like this before, such as

\begin{aligned}{\langle {n} \rvert} X^2 {\lvert {n} \rangle}&=\frac{\hbar}{2 m \omega} {\langle {n} \rvert} (a + a^\dagger)^2 {\lvert {n} \rangle}\end{aligned}

To continue we need an exact relation between {\lvert {n} \rangle} and {\lvert {n \pm 1} \rangle}. Recall that a {\lvert {n} \rangle} was an eigenstate of a^\dagger a with eigenvalue n - 1. This implies that the eigenstates a {\lvert {n} \rangle} and {\lvert {n-1} \rangle} are proportional

\begin{aligned}a {\lvert {n} \rangle} = c_n {\lvert {n - 1} \rangle},\end{aligned} \hspace{\stretch{1}}(1.20)

or

\begin{aligned}{\langle {n} \rvert} a^\dagger a {\lvert {n} \rangle} &= {\left\lvert{c_n}\right\rvert}^2 \left\langle{{n - 1}} \vert {{n-1}}\right\rangle = {\left\lvert{c_n}\right\rvert}^2 \\ n \left\langle{{n}} \vert {{n}}\right\rangle &= {\left\lvert{c_n}\right\rvert}^2 \\ n &= {\left\lvert{c_n}\right\rvert}^2,\end{aligned}

so that

\begin{aligned}a {\lvert {n} \rangle} = \sqrt{n} {\lvert {n - 1} \rangle}.\end{aligned} \hspace{\stretch{1}}(1.21)

Similarly let

\begin{aligned}a^\dagger {\lvert {n} \rangle} = b_n {\lvert {n + 1} \rangle},\end{aligned} \hspace{\stretch{1}}(1.22)

or

\begin{aligned}{\langle {n} \rvert} a a^\dagger {\lvert {n} \rangle} &= {\left\lvert{b_n}\right\rvert}^2 \left\langle{{n + 1}} \vert {{n+1}}\right\rangle = {\left\lvert{b_n}\right\rvert}^2 \\ {\langle {n} \rvert} (1 + a^\dagger a) {\lvert {n} \rangle} &= {\left\lvert{b_n}\right\rvert}^2 \\ 1 + n &= {\left\lvert{b_n}\right\rvert}^2,\end{aligned}

so that

\begin{aligned}a^\dagger {\lvert {n} \rangle} = \sqrt{n+1} {\lvert {n + 1} \rangle}.\end{aligned} \hspace{\stretch{1}}(1.23)
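
As an aside (my own check, not part of the problem set), 1.12, 1.21 and 1.23 are easy to verify with truncated matrix representations of the ladder operators. A small numpy sketch:

import numpy as np

# My own check, not part of the problem set: build truncated matrix representations
# of a and a^\dagger from 1.21 and 1.23 and verify the commutator 1.12.  The last
# diagonal entry of the commutator is a truncation artifact, so it is excluded.
N = 20
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # <n-1| a |n> = sqrt(n)
adag = a.T                                   # <n+1| a^dagger |n> = sqrt(n+1)

comm = a @ adag - adag @ a
print(np.allclose(comm[:-1, :-1], np.eye(N - 1)))   # True: [a, a^dagger] = 1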

We can now return to 1.19, and find

\begin{aligned}{\langle {n} \rvert} X^4 {\lvert {n} \rangle}&=\frac{\hbar^2}{4 m^2 \omega^2} {\langle {n} \rvert} (a + a^\dagger)^4 {\lvert {n} \rangle}\end{aligned}

Consider half of this braket

\begin{aligned}(a + a^\dagger)^2 {\lvert {n} \rangle}&=\left( a^2 + (a^\dagger)^2 + a^\dagger a + a a^\dagger \right) {\lvert {n} \rangle} \\ &=\left( a^2 + (a^\dagger)^2 + a^\dagger a + (1 + a^\dagger a) \right) {\lvert {n} \rangle} \\ &=\left( a^2 + (a^\dagger)^2 + 1 + 2 a^\dagger a \right) {\lvert {n} \rangle} \\ &=\sqrt{n}\sqrt{n-1} {\lvert {n-2} \rangle}+\sqrt{n+1}\sqrt{n+2} {\lvert {n + 2} \rangle}+{\lvert {n} \rangle}+  2 n {\lvert {n} \rangle}\end{aligned}

Squaring, utilizing the Hermitian nature of the X operator

\begin{aligned}{\langle {n} \rvert} X^4 {\lvert {n} \rangle}=\frac{\hbar^2}{4 m^2 \omega^2}\left(n(n-1) + (n+1)(n+2) + (1 + 2n)^2\right)=\frac{\hbar^2}{4 m^2 \omega^2}\left( 6 n^2 + 6 n + 3 \right)\end{aligned} \hspace{\stretch{1}}(1.24)
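
As a numerical cross check of 1.24 (my own addition, with \hbar = m = \omega = 1 assumed for simplicity), we can build X in a truncated number basis:

import numpy as np

# My own numerical check of 1.24, with hbar = m = omega = 1 assumed.  In these
# units X = (a + a^dagger)/sqrt(2), and <n| X^4 |n> should be (6 n^2 + 6 n + 3)/4.
N = 60                                       # basis much larger than the n of interest
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
X = (a + a.T) / np.sqrt(2.0)
X4 = np.linalg.matrix_power(X, 4)

for n in range(5):
    print(n, X4[n, n], (6 * n**2 + 6 * n + 3) / 4.0)   # the two values should agree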

Part (b)

Find the ground state energy of the Hamiltonian H = H_0 + \gamma X^2 for \gamma > 0.

The new Hamiltonian has the form

\begin{aligned}H = \frac{P^2}{2m} + \frac{1}{{2}} m \left(\omega^2 + \frac{2 \gamma}{m} \right) X^2 =\frac{P^2}{2m} + \frac{1}{{2}} m {\omega'}^2 X^2,\end{aligned} \hspace{\stretch{1}}(1.25)

where

\begin{aligned}\omega' = \sqrt{ \omega^2 + \frac{2 \gamma}{m} }\end{aligned} \hspace{\stretch{1}}(1.26)

The energy states of the Hamiltonian are thus

\begin{aligned}E_n = \hbar \sqrt{ \omega^2 + \frac{2 \gamma}{m} } \left( n + \frac{1}{{2}} \right)\end{aligned} \hspace{\stretch{1}}(1.27)

and the ground state of the modified Hamiltonian H is thus

\begin{aligned}E_0 = \frac{\hbar}{2} \sqrt{ \omega^2 + \frac{2 \gamma}{m} }\end{aligned} \hspace{\stretch{1}}(1.28)
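
A quick numerical sanity check of 1.28 (mine, not part of the problem set; \hbar = m = \omega = 1 assumed) is to diagonalize the new Hamiltonian in a truncated number basis of the original oscillator:

import numpy as np

# My own sanity check, with hbar = m = omega = 1 assumed: diagonalize H0 + gamma X^2
# in a truncated number basis of the original oscillator, and compare the lowest
# eigenvalue against 1.28.
N = 200
gamma = 0.3
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
X = (a + a.T) / np.sqrt(2.0)
H = np.diag(np.arange(N) + 0.5) + gamma * (X @ X)

print(np.linalg.eigvalsh(H)[0], 0.5 * np.sqrt(1.0 + 2.0 * gamma))   # should agree closely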

Part (c)

Find the ground state energy of the Hamiltonian H = H_0 - \alpha X.

With a bit of play, this new Hamiltonian can be factored into

\begin{aligned}H= \hbar \omega \left( b^\dagger b + \frac{1}{{2}} \right) - \frac{\alpha^2}{2 m \omega^2}= \hbar \omega \left( b b^\dagger - \frac{1}{{2}} \right) - \frac{\alpha^2}{2 m \omega^2},\end{aligned} \hspace{\stretch{1}}(1.29)

where

\begin{aligned}b &= \sqrt{\frac{m \omega}{2\hbar}} X + \frac{i P}{\sqrt{2 m \hbar \omega}} - \frac{\alpha}{\omega \sqrt{ 2 m \hbar \omega }} \\ b^\dagger &= \sqrt{\frac{m \omega}{2\hbar}} X - \frac{i P}{\sqrt{2 m \hbar \omega}} - \frac{\alpha}{\omega \sqrt{ 2 m \hbar \omega }}.\end{aligned} \hspace{\stretch{1}}(1.30)

From 1.29 we see that we have the same sort of commutator relationship as in the original Hamiltonian

\begin{aligned}\left[{b},{b^\dagger}\right] = 1,\end{aligned} \hspace{\stretch{1}}(1.32)

and because of this, all the preceding arguments follow unchanged with the exception that the energy eigenstates of this Hamiltonian are shifted by a constant

\begin{aligned}H {\lvert {n} \rangle} = \left( \hbar \omega \left( n + \frac{1}{{2}} \right) - \frac{\alpha^2}{2 m \omega^2} \right) {\lvert {n} \rangle},\end{aligned} \hspace{\stretch{1}}(1.33)

where the {\lvert {n} \rangle} states are simultaneous eigenstates of the b^\dagger b operator

\begin{aligned}b^\dagger b {\lvert {n} \rangle} = n {\lvert {n} \rangle}.\end{aligned} \hspace{\stretch{1}}(1.34)

The ground state energy is then

\begin{aligned}E_0 = \frac{\hbar \omega }{2} - \frac{\alpha^2}{2 m \omega^2}.\end{aligned} \hspace{\stretch{1}}(1.35)

This makes sense. A uniform translation of the position of the entire system should not affect the energy level spacing, but we have set our reference potential differently, and so have this constant energy adjustment to all of the levels.
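
The same kind of numerical check works for this part too (again my own addition, with \hbar = m = \omega = 1 assumed):

import numpy as np

# My own check for part (c), with hbar = m = omega = 1 assumed: the ground state
# energy should be hbar omega/2 - alpha^2/(2 m omega^2) = 1/2 - alpha^2/2.
N = 200
alpha = 0.7
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
X = (a + a.T) / np.sqrt(2.0)
H = np.diag(np.arange(N) + 0.5) - alpha * X

print(np.linalg.eigvalsh(H)[0], 0.5 - alpha**2 / 2.0)   # should agree closely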

Hydrogen atom and spherical harmonics.

We are asked to show that for any eigenkets of the hydrogen atom {\lvert {\Phi_{nlm}} \rangle} we have

\begin{aligned}{\langle {\Phi_{nlm}} \rvert} X {\lvert {\Phi_{nlm}} \rangle} ={\langle {\Phi_{nlm}} \rvert} Y {\lvert {\Phi_{nlm}} \rangle} ={\langle {\Phi_{nlm}} \rvert} Z {\lvert {\Phi_{nlm}} \rangle}.\end{aligned} \hspace{\stretch{1}}(2.36)

The summary sheet provides us with the wavefunction

\begin{aligned}\left\langle{\mathbf{r}} \vert {{\Phi_{nlm}}}\right\rangle = \frac{2}{n^2 a_0^{3/2}} \sqrt{\frac{(n-l-1)!}{((n+l)!)^3}} F_{nl}\left( \frac{2r}{n a_0} \right) Y_l^m(\theta, \phi),\end{aligned} \hspace{\stretch{1}}(2.37)

where F_{nl} is a real valued function defined in terms of Laguerre polynomials. Working with the expectation of the X operator to start with, we have

\begin{aligned}{\langle {\Phi_{nlm}} \rvert} X {\lvert {\Phi_{nlm}} \rangle} &=\int \left\langle{{\Phi_{nlm}}} \vert {{\mathbf{r}'}}\right\rangle {\langle {\mathbf{r}'} \rvert} X {\lvert {\mathbf{r}} \rangle} \left\langle{\mathbf{r}} \vert {{\Phi_{nlm}}}\right\rangle d^3 \mathbf{r} d^3 \mathbf{r}' \\ &=\int \left\langle{{\Phi_{nlm}}} \vert {{\mathbf{r}'}}\right\rangle \delta(\mathbf{r} - \mathbf{r}') r \sin\theta \cos\phi \left\langle{\mathbf{r}} \vert {{\Phi_{nlm}}}\right\rangle d^3 \mathbf{r} d^3 \mathbf{r}' \\ &=\int \Phi_{nlm}^{*}(\mathbf{r}) r \sin\theta \cos\phi \Phi_{nlm}(\mathbf{r}) d^3 \mathbf{r} \\ &\sim\int r^2 dr {\left\lvert{ F_{nl}\left(\frac{2 r}{ n a_0} \right)}\right\rvert}^2 r \int \sin\theta d\theta d\phi{Y_l^m}^{*}(\theta, \phi) \sin\theta \cos\phi Y_l^m(\theta, \phi) \\ \end{aligned}

Recalling that the only \phi dependence in Y_l^m is e^{i m \phi} we can perform the d\phi integration directly, which is

\begin{aligned}\int_{\phi=0}^{2\pi} \cos\phi d\phi e^{-i m \phi} e^{i m \phi} = 0.\end{aligned} \hspace{\stretch{1}}(2.38)

We have the same story for the Y expectation which is

\begin{aligned}{\langle {\Phi_{nlm}} \rvert} Y {\lvert {\Phi_{nlm}} \rangle} \sim\int r^2 dr {\left\lvert{F_{nl}\left( \frac{2 r}{ n a_0} \right)}\right\rvert}^2 r \int \sin\theta d\theta d\phi{Y_l^m}^{*}(\theta, \phi) \sin\theta \sin\phi Y_l^m(\theta, \phi).\end{aligned} \hspace{\stretch{1}}(2.39)

Our \phi integral is then just

\begin{aligned}\int_{\phi=0}^{2\pi} \sin\phi d\phi e^{-i m \phi} e^{i m \phi} = 0,\end{aligned} \hspace{\stretch{1}}(2.40)

also zero. The Z expectation is a slightly different story. There we have

\begin{aligned}\begin{aligned}{\langle {\Phi_{nlm}} \rvert} Z {\lvert {\Phi_{nlm}} \rangle} &\sim\int dr {\left\lvert{F_{nl}\left( \frac{2 r}{ n a_0} \right)}\right\rvert}^2 r^3  \\ &\quad \int_0^{2\pi} d\phi\int_0^\pi \sin \theta d\theta\left( \sin\theta \right)^{-2m}\left( \frac{d^{l - m}}{d (\cos\theta)^{l-m}} \sin^{2l}\theta \right)^2\cos\theta.\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.41)

Within this last integral we can make the substitution

\begin{aligned}u &= \cos\theta \\ \sin\theta d\theta &= - d(\cos\theta) = -du \\ u &\in [1, -1],\end{aligned} \hspace{\stretch{1}}(2.42)

and the integral takes the form

\begin{aligned}-\int_{-1}^1 (-du) \frac{1}{{(1 - u^2)^m}} \left( \frac{d^{l-m}}{d u^{l -m }} (1 - u^2)^l\right)^2 u.\end{aligned} \hspace{\stretch{1}}(2.45)

Here we have the product of two even functions, times one odd function (u), over a symmetric interval, so the end result is zero, completing the problem.

I wasn’t able to see how to exploit the parity result suggested in the problem, but it wasn’t so bad to show these directly.
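
For what it's worth, the \phi integrals 2.38 and 2.40 and the parity integral 2.45 are easy to spot check symbolically for sample values of l and m. A small sympy sketch of my own (the choice l = 2, m = 1 is arbitrary):

from sympy import symbols, integrate, sin, cos, exp, I, pi, diff, simplify

# My own symbolic spot check of 2.38, 2.40 and the parity integral 2.45, for the
# sample values l = 2, m = 1.
phi, u = symbols('phi u', real=True)
l, m = 2, 1

f_x = cos(phi) * exp(-I * m * phi) * exp(I * m * phi)
f_y = sin(phi) * exp(-I * m * phi) * exp(I * m * phi)
print(integrate(simplify(f_x), (phi, 0, 2 * pi)))   # 0, as in 2.38
print(integrate(simplify(f_y), (phi, 0, 2 * pi)))   # 0, as in 2.40

# the u = cos(theta) integrand of 2.45: odd in u, so the integral vanishes
integrand = (diff((1 - u**2)**l, u, l - m))**2 * u / (1 - u**2)**m
print(simplify(integrate(integrand, (u, -1, 1))))   # 0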

Angular momentum operator.

Working with the appropriate expressions in Cartesian components, confirm that L_i {\lvert {\psi} \rangle} = 0 for each component of angular momentum L_i, if \left\langle{\mathbf{r}} \vert {{\psi}}\right\rangle = \psi(\mathbf{r}) is in fact only a function of r = {\left\lvert{\mathbf{r}}\right\rvert}.

In order to proceed, we will have to consider a matrix element, so that we can operate on {\lvert {\psi} \rangle} in position space. For that matrix element, we can proceed to insert complete states, and reduce the problem to a question of wavefunctions. That is

\begin{aligned}{\langle {\mathbf{r}} \rvert} L_i {\lvert {\psi} \rangle}&=\int d^3 \mathbf{r}' {\langle {\mathbf{r}} \rvert} L_i {\lvert {\mathbf{r}'} \rangle} \left\langle{{\mathbf{r}'}} \vert {{\psi}}\right\rangle \\ &=\int d^3 \mathbf{r}' {\langle {\mathbf{r}} \rvert} \epsilon_{i a b} X_a P_b {\lvert {\mathbf{r}'} \rangle} \left\langle{{\mathbf{r}'}} \vert {{\psi}}\right\rangle \\ &=-i \hbar \epsilon_{i a b} \int d^3 \mathbf{r}' x_a {\langle {\mathbf{r}} \rvert} \frac{\partial {\psi(\mathbf{r}')}}{\partial {X_b}} {\lvert {\mathbf{r}'} \rangle}  \\ &=-i \hbar \epsilon_{i a b} \int d^3 \mathbf{r}' x_a \frac{\partial {\psi(\mathbf{r}')}}{\partial {x_b}} \left\langle{\mathbf{r}} \vert {{\mathbf{r}'}}\right\rangle  \\ &=-i \hbar \epsilon_{i a b} \int d^3 \mathbf{r}' x_a \frac{\partial {\psi(\mathbf{r}')}}{\partial {x_b}} \delta^3(\mathbf{r} - \mathbf{r}') \\ &=-i \hbar \epsilon_{i a b} x_a \frac{\partial {\psi(\mathbf{r})}}{\partial {x_b}} \end{aligned}

With \psi(\mathbf{r}) = \psi(r) we have

\begin{aligned}{\langle {\mathbf{r}} \rvert} L_i {\lvert {\psi} \rangle}&=-i \hbar \epsilon_{i a b} x_a \frac{\partial {\psi(r)}}{\partial {x_b}}  \\ &=-i \hbar \epsilon_{i a b} x_a \frac{\partial {r}}{\partial {x_b}} \frac{d\psi(r)}{dr}  \\ &=-i \hbar \epsilon_{i a b} x_a \frac{1}{{2}} 2 x_b \frac{1}{{r}} \frac{d\psi(r)}{dr}  \\ \end{aligned}

We are left with a contraction of the symmetric product x_a x_b with the antisymmetric tensor \epsilon_{i a b}, so this is zero for each i \in \{1, 2, 3\}.
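
This is also easy to spot check symbolically. A small sympy sketch of my own, dropping the -i \hbar factors:

from sympy import symbols, sqrt, Function, diff, simplify

# My own symbolic check, dropping the -i hbar factors: a purely radial wavefunction
# is annihilated by each Cartesian component of the angular momentum operator.
x, y, z = symbols('x y z', real=True)
f = Function('f')
r = sqrt(x**2 + y**2 + z**2)
psi = f(r)

Lx = y * diff(psi, z) - z * diff(psi, y)
Ly = z * diff(psi, x) - x * diff(psi, z)
Lz = x * diff(psi, y) - y * diff(psi, x)
print(simplify(Lx), simplify(Ly), simplify(Lz))   # all three are 0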

References

[1] B.R. Desai. Quantum Mechanics with Basic Field Theory. Cambridge University Press, 2009.


A problem on spherical harmonics.

Posted by peeterjoot on January 10, 2011

[Click here for a PDF of this post with nicer formatting]

Motivation.

This is one of the PHY356 final exam questions that I recall screwing up, and then figuring out after the fact on the drive home. The question actually clarified a difficulty I’d had, but unfortunately I hadn’t had the good luck to attempt a question like it beforehand, which would have helped me sort this out before the exam.

From what I recall the question provided an initial state, with some degeneracy in m, perhaps of the following form

\begin{aligned}{\lvert {\phi(0)} \rangle} = \sqrt{\frac{1}{7}} {\lvert { 12 } \rangle}+\sqrt{\frac{2}{7}} {\lvert { 10 } \rangle}+\sqrt{\frac{4}{7}} {\lvert { 20 } \rangle},\end{aligned} \hspace{\stretch{1}}(1.1)

and a Hamiltonian of the form

\begin{aligned}H = \alpha L_z\end{aligned} \hspace{\stretch{1}}(1.2)

From what I recall of the problem, I am going to reattempt it here now.

Evolved state.

One part of the question was to calculate the evolved state. Application of the time evolution operator gives us

\begin{aligned}{\lvert {\phi(t)} \rangle} = e^{-i \alpha L_z t/\hbar} \left(\sqrt{\frac{1}{7}} {\lvert { 12 } \rangle}+\sqrt{\frac{2}{7}} {\lvert { 10 } \rangle}+\sqrt{\frac{4}{7}} {\lvert { 20 } \rangle} \right).\end{aligned} \hspace{\stretch{1}}(1.3)

Now we note that L_z {\lvert {12} \rangle} = 2 \hbar {\lvert {12} \rangle}, and L_z {\lvert { l 0} \rangle} = 0 {\lvert {l 0} \rangle}, so the exponentials reduce this nicely to just

\begin{aligned}{\lvert {\phi(t)} \rangle} = \sqrt{\frac{1}{7}} e^{ -2 i \alpha t } {\lvert { 12 } \rangle}+\sqrt{\frac{2}{7}} {\lvert { 10 } \rangle}+\sqrt{\frac{4}{7}} {\lvert { 20 } \rangle}.\end{aligned} \hspace{\stretch{1}}(1.4)

Probabilities for L_z measurement outcomes.

I believe we were also asked what the probabilities for the outcomes of a measurement of L_z at this time would be. Here is one place that I think I messed up, and it was really a translation error, going from the English description of the problem to the corresponding math. I’d had trouble with this process a few times in the problems, and managed to blunder through the use of language like “measure” and “outcome”, but I don’t think I really understood how these terms were being used.

What are the outcomes that we measure? We measure operators, but the result of a measurement is the eigenvalue associated with the operator. What are the eigenvalues of the L_z operator? These are the m \hbar values, from the operation L_z {\lvert {l m} \rangle} = m \hbar {\lvert {l m} \rangle}. So, given this initial state, there are really two outcomes that are possible, since we have two distinct eigenvalues. These are 2 \hbar and 0 for m = 2, and m= 0 respectively.

The probability of measuring the “outcome” 2 \hbar is associated with the amplitude \left\langle{{ 1 2 }} \vert {{\phi(t)}}\right\rangle (i.e. it is the absolute square of this value). That is

\begin{aligned}{\left\lvert{ \left\langle{{ 1 2 }} \vert {{\phi(t) }}\right\rangle }\right\rvert}^2 = \frac{1}{7}.\end{aligned} \hspace{\stretch{1}}(1.5)

Now, the only other outcome for a measurement of L_z for this state is a measurement of 0 \hbar, and the probability of this is then just 1 - \frac{1}{7} = \frac{6}{7}. On the exam, I think I listed probabilities for three outcomes, with values \frac{1}{7}, \frac{2}{7}, \frac{4}{7} respectively, but in retrospect that seems blatantly wrong.

Probabilities for \mathbf{L}^2 measurement outcomes.

What are the probabilities for the outcomes for a measurement of \mathbf{L}^2 after this? The first question is really what are the outcomes. That’s really a question of what are the possible eigenvalues of \mathbf{L}^2 that can be measured at this point. Recall that we have

\begin{aligned}\mathbf{L}^2 {\lvert {l m} \rangle} = \hbar^2 l (l + 1) {\lvert {l m} \rangle}\end{aligned} \hspace{\stretch{1}}(1.6)

So for a state that has only l=1,2 contributions before the measurement, the eigenvalues that can be observed for the \mathbf{L}^2 operator are 2 \hbar^2 and 6 \hbar^2 respectively.

For the l=2 case, our probability is 4/7, leaving 3/7 as the probability for measurement of the l=1 (2 \hbar^2) eigenvalue. We can compute this in two ways, and it seems worthwhile to consider both. The first method just reads the l=1 weight directly off the initial state, using the fact that the intermediate L_z measurement (which commutes with \mathbf{L}^2) doesn’t alter the \mathbf{L}^2 probabilities, but that also seems like a bit of a cheat. Consider instead the two possible results of measurement after the L_z observation. When an L_z measurement of 0 \hbar is performed, our state will be left with only the m=0 kets. That is

\begin{aligned}{\lvert {\psi_a} \rangle} = \frac{1}{{\sqrt{3}}} \left( {\lvert {10} \rangle} + \sqrt{2} {\lvert {20} \rangle} \right),\end{aligned} \hspace{\stretch{1}}(1.7)

whereas, when a 2 \hbar measurement of L_z is performed our state would then only have the m=2 contribution, and would be

\begin{aligned}{\lvert {\psi_b} \rangle} = e^{-2 i \alpha t} {\lvert {12 } \rangle}.\end{aligned} \hspace{\stretch{1}}(1.8)

We have two possible ways of measuring the 2 \hbar^2 eigenvalue for \mathbf{L}^2. One is when our state was {\lvert {\psi_a} \rangle} (after the 0 \hbar measurement of L_z), where the resulting state has a {\lvert {10} \rangle} component, and the other is after the m=2 measurement, where our state is left with only the {\lvert {12} \rangle} component.

The resulting probability is then a conditional probability result

\begin{aligned}\frac{6}{7} {\left\lvert{ \left\langle{{10}} \vert {{\psi_a}}\right\rangle }\right\rvert}^2 + \frac{1}{7} {\left\lvert{ \left\langle{{12 }} \vert {{\psi_b}}\right\rangle}\right\rvert}^2 = \frac{3}{7}\end{aligned} \hspace{\stretch{1}}(1.9)

The result is the same, as expected, but this is likely a more convincing argument.
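
As a final arithmetic check (my own addition), the direct reading of the l = 1 probability and the conditional probability calculation of 1.9 can be compared with a few lines of sympy:

from sympy import Rational, sqrt, simplify

# My own arithmetic check of the measurement probabilities: the l = 1 weight read
# directly from the state, and the conditional probability calculation of 1.9,
# should both come out to 3/7.
c12, c10, c20 = sqrt(Rational(1, 7)), sqrt(Rational(2, 7)), sqrt(Rational(4, 7))

direct = c12**2 + c10**2                 # l = 1 content of the initial superposition
p_m0 = c10**2 + c20**2                   # probability of the L_z -> 0 outcome
p_m2 = c12**2                            # probability of the L_z -> 2 hbar outcome
amp_10_given_m0 = c10 / sqrt(p_m0)       # <10 | psi_a> after the m = 0 outcome
conditional = p_m0 * amp_10_given_m0**2 + p_m2 * 1   # |<12|psi_b>|^2 = 1
print(simplify(direct), simplify(conditional))        # both 3/7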
