Peeter Joot's (OLD) Blog.

Math, physics, perl, and programming obscurity.


My submission for PHY356 (Quantum Mechanics I) Problem Set 3.

Posted by peeterjoot on November 30, 2010

[Click here for a PDF of this post with nicer formatting]

Problem 1.

Statement

A particle of mass m is free to move along the x-direction such that V(X)=0. The state of the system is represented by the wavefunction Eq. (4.74)

\begin{aligned}\psi(x,t) = \frac{1}{{\sqrt{2\pi}}} \int_{-\infty}^\infty dk e^{i k x} e^{- i \omega t} f(k)\end{aligned} \hspace{\stretch{1}}(1.1)

with f(k) given by Eq. (4.59).

\begin{aligned}f(k) &= N e^{-\alpha k^2}\end{aligned} \hspace{\stretch{1}}(1.2)

Note that I’ve inserted a 1/\sqrt{2\pi} factor above that isn’t in the text, because otherwise \psi(x,t) will not be unit normalized (assuming f(k) is normalized in wavenumber space).

\begin{itemize}
\item
(a) What is the group velocity associated with this state?
\item
(b) What is the probability for measuring the particle at position x=x_0>0 at time t=t_0>0?
\item
(c) What is the probability per unit length for measuring the particle at position x=x_0>0 at time t=t_0>0?
\item
(d) Explain the physical meaning of the above results.
\end{itemize}

Solution

(a) Group velocity.

To calculate the group velocity we need to know the dependence of \omega on k.

Let’s step back and consider the time evolution action on \psi(x,0). For the free particle case we have

\begin{aligned}H = \frac{\mathbf{p}^2}{2m} = -\frac{\hbar^2}{2m} \partial_{xx}.\end{aligned} \hspace{\stretch{1}}(1.3)

Writing N' = N/\sqrt{2\pi} we have

\begin{aligned}-\frac{i t}{\hbar} H \psi(x,0) &= \frac{i t \hbar }{2m} N' \int_{-\infty}^\infty dk (i k)^2 e^{i k x - \alpha k^2} \\ &= N' \int_{-\infty}^\infty dk \frac{-i t \hbar k^2}{2m} e^{i k x - \alpha k^2}\end{aligned}

Each successive application of -iHt/\hbar will introduce another power of -it\hbar k^2/2 m, so once we sum all the terms of the exponential series U(t) = e^{-iHt/\hbar} we have

\begin{aligned}\psi(x,t) =N' \int_{-\infty}^\infty dk \exp\left( \frac{-i t \hbar k^2}{2m} + i k x - \alpha k^2 \right).\end{aligned} \hspace{\stretch{1}}(1.4)

Comparing with 1.1 we find

\begin{aligned}\omega(k) = \frac{\hbar k^2}{2m}.\end{aligned} \hspace{\stretch{1}}(1.5)

This completes this section of the problem since we are now able to calculate the group velocity

\begin{aligned}v_g = \frac{\partial {\omega(k)}}{\partial {k}} = \frac{\hbar k}{m}.\end{aligned} \hspace{\stretch{1}}(1.6)

(b). What is the probability for measuring the particle at position x=x_0>0 at time t=t_0>0?

In order to evaluate the probability, it looks desirable to evaluate the wave function integral 1.4.
Writing 2 \beta = i/(\alpha + i t \hbar/2m ), the exponent of that integral is

\begin{aligned}-k^2 \left( \alpha + \frac{i t \hbar }{2m} \right) + i k x&=-\left( \alpha + \frac{i t \hbar }{2m} \right) \left( k^2 - \frac{i k x }{\alpha + \frac{i t \hbar }{2m} } \right) \\ &=-\frac{i}{2\beta} \left( (k - x \beta )^2 - x^2 \beta^2 \right)\end{aligned}

The x^2 portion of the exponential

\begin{aligned}\frac{i x^2 \beta^2}{2\beta} = \frac{i x^2 \beta}{2} = - \frac{x^2 }{4 (\alpha + i t \hbar /2m)}\end{aligned}

then comes out of the integral. We can also make a change of variables q = k - x \beta to evaluate the remainder of the Gaussian and are left with

\begin{aligned}\psi(x,t) =N' \sqrt{ \frac{\pi}{\alpha + i t \hbar/2m} } \exp\left( - \frac{x^2 }{4 (\alpha + i t \hbar /2m)} \right).\end{aligned} \hspace{\stretch{1}}(1.7)

Observe that from 1.2 we can compute N = (2 \alpha/\pi)^{1/4}, which could be substituted back into 1.7 if desired.

Our probability density is

\begin{aligned}{\left\lvert{ \psi(x,t) }\right\rvert}^2 &=\frac{1}{{2 \pi}} N^2 {\left\lvert{ \frac{\pi}{\alpha + i t \hbar/2m} }\right\rvert} \exp\left( - \frac{x^2}{4} \left( \frac{1}{{(\alpha + i t \hbar /2m)}} + \frac{1}{{(\alpha - i t \hbar /2m)}} \right) \right) \\ &=\frac{1}{{2 \pi}} \sqrt{\frac{2 \alpha}{\pi} } \frac{\pi}{\sqrt{\alpha^2 + (t \hbar/2m)^2 }} \exp\left( - \frac{x^2}{4} \frac{1}{{\alpha^2 + (t \hbar/2m)^2 }} \left( \alpha - i t \hbar /2m + \alpha + i t \hbar /2m \right)\right)\end{aligned}

With a final regrouping of terms, this is

\begin{aligned}{\left\lvert{ \psi(x,t) }\right\rvert}^2 =\sqrt{\frac{ \alpha }{ 2 \pi (\alpha^2 + (t \hbar/2m)^2) }}\exp\left( - \frac{x^2}{2} \frac{\alpha}{\alpha^2 + (t \hbar/2m)^2 } \right).\end{aligned} \hspace{\stretch{1}}(1.8)

As a sanity check we observe that this integrates to unity for all t as desired. The probability that we find the particle at position x > x_0 is then

\begin{aligned}P_{x>x_0}(t) = \sqrt{\frac{ \alpha }{ 2 \pi (\alpha^2 + (t \hbar/2m)^2) }}\int_{x=x_0}^\infty dx \exp\left( - \frac{x^2}{2} \frac{\alpha}{\alpha^2 + (t \hbar/2m)^2 } \right)\end{aligned} \hspace{\stretch{1}}(1.9)

The only simplification we can make is to rewrite this in terms of the complementary error function

\begin{aligned}\text{erfc}(x) = \frac{2}{\sqrt{\pi}} \int_x^\infty e^{-t^2} dt.\end{aligned} \hspace{\stretch{1}}(1.10)

Writing

\begin{aligned}\beta(t) = \frac{\alpha}{\alpha^2 + (t \hbar/2m)^2 },\end{aligned} \hspace{\stretch{1}}(1.11)

we have

\begin{aligned}P_{x>x_0}(t_0) = \frac{1}{{2}} \text{erfc} \left( \sqrt{\beta(t_0)/2} x_0 \right)\end{aligned} \hspace{\stretch{1}}(1.12)

Sanity checking this result, we note that since \text{erfc}(0) = 1 the probability for finding the particle in the x>0 range is 1/2 as expected.
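This can also be verified numerically by integrating the density 1.8 directly and comparing against the closed form 1.12. A minimal Python sketch using numpy/scipy; the values chosen for \alpha, t_0 and x_0 are arbitrary illustrative assumptions:

```python
import numpy as np
from scipy.special import erfc
from scipy.integrate import quad

hbar, m, alpha, t0, x0 = 1.0, 1.0, 0.7, 0.3, 0.5   # arbitrary illustrative values

beta = alpha / (alpha**2 + (t0 * hbar / (2 * m))**2)      # eq. 1.11

def density(x):
    # |psi(x, t0)|^2 from eq. 1.8, written in terms of beta
    return np.sqrt(beta / (2 * np.pi)) * np.exp(-beta * x**2 / 2)

p_numeric, _ = quad(density, x0, np.inf)          # direct integration, eq. 1.9
p_closed = 0.5 * erfc(np.sqrt(beta / 2) * x0)     # closed form, eq. 1.12
print(p_numeric, p_closed)                        # the two agree
```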

(c). What is the probability per unit length for measuring the particle at position x=x_0>0 at time t=t_0>0?

Taking a unit length interval centered on x_0, this probability is

\begin{aligned}P_{x>x_0+1/2}(t_0) - P_{x>x_0-1/2}(t_0) &=\frac{1}{{2}} \text{erfc}\left( \sqrt{\frac{\beta(t_0)}{2}} \left(x_0+\frac{1}{{2}} \right) \right) -\frac{1}{{2}} \text{erfc}\left( \sqrt{\frac{\beta(t_0)}{2}} \left(x_0-\frac{1}{{2}} \right) \right) \end{aligned} \hspace{\stretch{1}}(1.13)

(d). Explain the physical meaning of the above results.

To get an idea what the group velocity means, observe that we can write our wavefunction 1.1 as

\begin{aligned}\psi(x,t) = \frac{1}{{\sqrt{2\pi}}} \int_{-\infty}^\infty dk e^{i k (x - v_g(k) t/2)} f(k)\end{aligned} \hspace{\stretch{1}}(1.14)

where v_g(k) = \hbar k/m. Each wavenumber component of the Gaussian f(k) propagates with phase velocity \omega/k = v_g(k)/2, while the envelope of the packet as a whole moves at the group velocity evaluated at the peak of f(k). Also recall that in the text it is noted that the time dependent term 1.11 can be expressed in terms of position and momentum uncertainties (\Delta x)^2, and (\Delta p)^2 = \hbar^2 (\Delta k)^2. That is

\begin{aligned}\frac{1}{{\beta(t)}} = (\Delta x)^2 + \frac{(\Delta p)^2}{m^2} t^2 \equiv (\Delta x(t))^2\end{aligned} \hspace{\stretch{1}}(1.15)

This makes it evident that the probability density flattens and spreads over time at a rate equal to the uncertainty of the group velocity \Delta p/m = \Delta v_g (since v_g = \hbar k/m). It is interesting that something as simple as this phase change results in a physically measurable phenomenon. As a direct result of this linear-in-time phase change, we are less able to find the particle localized around its original x = 0 position as more time elapses.
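The identification 1.15 is easy to check numerically: with (\Delta x)^2 = \alpha from 1.8 at t=0, and (\Delta p)^2 = \hbar^2/4\alpha from {\left\lvert{f(k)}\right\rvert}^2 \propto e^{-2\alpha k^2}, the two sides of 1.15 agree for any t. A small Python sketch, with arbitrary illustrative parameter values:

```python
import numpy as np

hbar, m, alpha = 1.0, 1.0, 0.7               # arbitrary illustrative values
dx2 = alpha                                  # (Delta x)^2 at t = 0, from eq. 1.8
dp2 = hbar**2 / (4 * alpha)                  # (Delta p)^2, from |f(k)|^2 ~ e^{-2 alpha k^2}

checks = []
for t in (0.0, 0.5, 2.0):
    lhs = (alpha**2 + (t * hbar / (2 * m))**2) / alpha   # 1/beta(t), from eq. 1.11
    rhs = dx2 + dp2 * t**2 / m**2                        # eq. 1.15
    checks.append(np.isclose(lhs, rhs))
print(checks)   # [True, True, True]
```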

Problem 2.

Statement

A particle with intrinsic angular momentum or spin s=1/2 is prepared in the spin-up state with respect to the z-direction, {\lvert {f} \rangle}={\lvert {z+} \rangle}. Determine

\begin{aligned}\left({\langle {f} \rvert} \left( S_z - {\langle {f} \rvert} S_z {\lvert {f} \rangle} \mathbf{1} \right)^2 {\lvert {f} \rangle} \right)^{1/2}\end{aligned} \hspace{\stretch{1}}(2.16)

and

\begin{aligned}\left({\langle {f} \rvert} \left( S_x - {\langle {f} \rvert} S_x {\lvert {f} \rangle} \mathbf{1} \right)^2 {\lvert {f} \rangle} \right)^{1/2}\end{aligned} \hspace{\stretch{1}}(2.17)

and explain what these relations say about the system.

Solution: Uncertainty of S_z with respect to {\lvert {z+} \rangle}

Noting that S_z {\lvert {f} \rangle} = S_z {\lvert {z+} \rangle} = \hbar/2 {\lvert {z+} \rangle} we have

\begin{aligned}{\langle {f} \rvert} S_z {\lvert {f} \rangle} = \frac{\hbar}{2} \end{aligned} \hspace{\stretch{1}}(2.18)

The average outcome for many measurements of the physical quantity associated with the operator S_z when the system has been prepared in the state {\lvert {f} \rangle} = {\lvert {z+} \rangle} is \hbar/2.

\begin{aligned}\Bigl(S_z - {\langle {f} \rvert} S_z {\lvert {f} \rangle} \mathbf{1} \Bigr) {\lvert {f} \rangle}&= \frac{\hbar}{2} {\lvert {f} \rangle} -\frac{\hbar}{2} {\lvert {f} \rangle} = 0\end{aligned} \hspace{\stretch{1}}(2.19)

We could also compute this from the matrix representations, but it is slightly more work.

Operating once more with S_z - {\langle {f} \rvert} S_z {\lvert {f} \rangle} \mathbf{1} on the zero ket vector still gives us zero, so we have zero in the root for 2.16

\begin{aligned}\left({\langle {f} \rvert} \left( S_z - {\langle {f} \rvert} S_z {\lvert {f} \rangle} \mathbf{1} \right)^2 {\lvert {f} \rangle} \right)^{1/2} = 0\end{aligned} \hspace{\stretch{1}}(2.20)

What does 2.20 say about the state of the system? Given many measurements of the physical quantity associated with the operator V = (S_z - {\langle {f} \rvert} S_z {\lvert {f} \rangle} \mathbf{1})^2, where the initial state of the system is always {\lvert {f} \rangle} = {\lvert {z+} \rangle}, then the average of the measurements of the physical quantity associated with V is zero. We can think of the operator V^{1/2} = S_z - {\langle {f} \rvert} S_z {\lvert {f} \rangle} \mathbf{1} as a representation of the observable, “how different is the measured result from the average {\langle {f} \rvert} S_z {\lvert {f} \rangle}”.

So, given a system prepared in state {\lvert {f} \rangle} = {\lvert {z+} \rangle}, repeated measurements of S_z never find the system any different from its initial spin-up state. There is no uncertainty in the outcome: on average we measure no deviation from spin-up when the system is prepared in the spin-up state.

Solution: Uncertainty of S_x with respect to {\lvert {z+} \rangle}

For this second part of the problem, we note that we can write

\begin{aligned}{\lvert {f} \rangle} = {\lvert {z+} \rangle} = \frac{1}{{\sqrt{2}}} ( {\lvert {x+} \rangle} + {\lvert {x-} \rangle} ).\end{aligned} \hspace{\stretch{1}}(2.21)

So the expectation value of S_x with respect to this state is

\begin{aligned}{\langle {f} \rvert} S_x {\lvert {f} \rangle}&=\frac{1}{{2}}( {\langle {x+} \rvert} + {\langle {x-} \rvert} ) S_x ( {\lvert {x+} \rangle} + {\lvert {x-} \rangle} ) \\ &=\frac{\hbar}{4} ( {\langle {x+} \rvert} + {\langle {x-} \rvert} ) ( {\lvert {x+} \rangle} - {\lvert {x-} \rangle} ) \\ &=\frac{\hbar}{4} ( 1 - 0 + 0 - 1 ) \\ &= 0\end{aligned}

After repeated preparation of the system in state {\lvert {f} \rangle}, the average measurement of the physical quantity associated with operator S_x is zero. In terms of the eigenstates for that operator {\lvert {x+} \rangle} and {\lvert {x-} \rangle} we have equal probability of measuring either given this particular initial system state.

For the variance calculation, this reduces our problem to the calculation of {\langle {f} \rvert} S_x^2 {\lvert {f} \rangle}, which is

\begin{aligned}{\langle {f} \rvert} S_x^2 {\lvert {f} \rangle} &=\frac{1}{{2}} \left( \frac{\hbar}{2} \right)^2 ( {\langle {x+} \rvert} + {\langle {x-} \rvert} ) ( (+1)^2 {\lvert {x+} \rangle} + (-1)^2 {\lvert {x-} \rangle} ) \\ &=\left( \frac{\hbar}{2} \right)^2,\end{aligned}

so for 2.17 we have

\begin{aligned}\left({\langle {f} \rvert} \left( S_x - {\langle {f} \rvert} S_x {\lvert {f} \rangle} \mathbf{1} \right)^2 {\lvert {f} \rangle} \right)^{1/2} = \frac{\hbar}{2}\end{aligned} \hspace{\stretch{1}}(2.22)

The root-mean-square value of the physical quantity associated with operator S_x is found to be \hbar/2 when repeated measurements are performed given a system initially prepared in state {\lvert {f} \rangle} = {\lvert {z+} \rangle}. We saw that the average value for the measurement of that physical quantity itself was zero, showing that we have equal probabilities of measuring either of \pm \hbar/2 for this experiment. A measurement that would show the system was in the x-direction spin-up or spin-down states would find that these states are equi-probable.
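Both uncertainties are easy to confirm with the 2x2 matrix representations of S_z and S_x. A numpy sketch, in units where \hbar = 1:

```python
import numpy as np

hbar = 1.0                                   # units with hbar = 1
Sz = (hbar / 2) * np.array([[1, 0], [0, -1]], dtype=complex)
Sx = (hbar / 2) * np.array([[0, 1], [1, 0]], dtype=complex)
f = np.array([1, 0], dtype=complex)          # |z+>

def uncertainty(S, psi):
    # sqrt( <psi| (S - <S> 1)^2 |psi> )
    avg = np.real(psi.conj() @ S @ psi)
    V = S - avg * np.eye(2)
    return np.sqrt(np.real(psi.conj() @ V @ V @ psi))

print(uncertainty(Sz, f), uncertainty(Sx, f))   # 0.0 and hbar/2 = 0.5
```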

Grading comments.

I lost one mark on the group velocity response. Instead of 1.6 he wanted

\begin{aligned}v_g = {\left. \frac{\partial {\omega(k)}}{\partial {k}} \right\vert}_{k = k_0}= \frac{\hbar k_0}{m} = 0\end{aligned} \hspace{\stretch{1}}(3.23)

since f(k) peaks at k=0.

I’ll have to go back and think about that a bit, because I’m unsure of the last bits of the reasoning there.

I also lost 0.5 and 0.25 (twice) because I didn’t explicitly state that the probability that the particle is at x_0, a specific single point, is zero. I thought that was obvious and didn’t have to be stated, but it appears expressing this explicitly is what he was looking for.

Curiously, one thing that I didn’t lose marks on was the wrong answer for the probability per unit length. What he was actually asking for was the following

\begin{aligned}\lim_{\epsilon \rightarrow 0} \frac{1}{{\epsilon}} \int_{x_0 - \epsilon/2}^{x_0 + \epsilon/2} {\left\lvert{ \Psi(x_0, t_0) }\right\rvert}^2 dx = {\left\lvert{\Psi(x_0, t_0)}\right\rvert}^2\end{aligned} \hspace{\stretch{1}}(3.24)

That’s a whole lot more sensible seeming quantity to calculate than what I did, but I don’t think that I can be faulted too much since the phrase was never used in the text nor in the lectures.


PHY356F: Quantum Mechanics I. Lecture 11 notes. Harmonic Oscillator.

Posted by peeterjoot on November 30, 2010


Setup.

Why study this problem?

It is relevant to describing the oscillation of molecules, quantum states of light, vibrations of the lattice structure of a solid, and so on.

FIXME: projected picture of masses on springs, with a ladle shaped well, approximately Harmonic about the minimum of the bucket.

The problem to solve is the one dimensional Hamiltonian

\begin{aligned}V(X) &= \frac{1}{{2}} K X^2 \\ K &= m \omega^2 \\ H &= \frac{P^2}{2m} + V(X)\end{aligned} \hspace{\stretch{1}}(8.168)

where m is the mass, \omega is the frequency, X is the position operator, and P is the momentum operator. Of these quantities, \omega and m are classical quantities.

This problem can be used to illustrate some of the reasons why we study the different pictures (Heisenberg, Interaction and Schr\"{o}dinger). This is a problem well suited to all of these (FIXME: look up an example of this worked in the interaction picture; the book covers the Heisenberg and Schr\"{o}dinger methods).

We attack this with a non-intuitive, but cool technique. Introduce the raising a^\dagger and lowering a operators:

\begin{aligned}a &= \sqrt{\frac{m \omega}{2 \hbar}} \left( X + i \frac{P}{m\omega} \right) \\ a^\dagger &= \sqrt{\frac{m \omega}{2 \hbar}} \left( X - i \frac{P}{m\omega} \right)\end{aligned} \hspace{\stretch{1}}(8.171)

\paragraph{Question:} are we using the dagger for more than Hermitian conjugation in this case?
\paragraph{Answer:} No, this is precisely the Hermitian conjugation operation.

Solving for X and P in terms of a and a^\dagger, we have

\begin{aligned}a + a^\dagger &= \sqrt{\frac{m \omega}{2 \hbar}} 2 X  \\ a - a^\dagger &= \sqrt{\frac{m \omega}{2 \hbar}} 2 i \frac{P }{m \omega}\end{aligned}

or

\begin{aligned}X &= \sqrt{\frac{\hbar}{2 m \omega}} (a^\dagger + a) \\ P &= i \sqrt{\frac{\hbar m \omega}{2}} (a^\dagger -a)\end{aligned} \hspace{\stretch{1}}(8.173)

Express H in terms of a and a^\dagger

\begin{aligned}H &= \frac{P^2}{2m} + \frac{1}{{2}} K X^2  \\ &= \frac{1}{2m} \left(i \sqrt{\frac{\hbar m \omega}{2}} (a^\dagger -a)\right)^2+ \frac{1}{{2}} m \omega^2\left(\sqrt{\frac{\hbar}{2 m \omega}} (a^\dagger + a) \right)^2 \\ &= \frac{-\hbar \omega}{4} \left(a^\dagger a^\dagger + a^2 - a a^\dagger - a^\dagger a\right)+ \frac{\hbar \omega}{4}\left(a^\dagger a^\dagger + a^2 + a a^\dagger + a^\dagger a\right) \\ \end{aligned}

\begin{aligned}H= \frac{\hbar \omega}{2} \left(a a^\dagger + a^\dagger a\right) = \frac{\hbar \omega}{2} \left(2 a^\dagger a + \left[{a},{a^\dagger}\right]\right) \end{aligned} \hspace{\stretch{1}}(8.175)

Since \left[{X},{P}\right] = i \hbar \mathbf{1}, we can show that \left[{a},{a^\dagger}\right] = \mathbf{1}. Solve for \left[{a},{a^\dagger}\right] as follows

\begin{aligned}i \hbar &=\left[{X},{P}\right] \\ &=\left[{\sqrt{\frac{\hbar}{2 m \omega}} (a^\dagger + a) },{i \sqrt{\frac{\hbar m \omega}{2}} (a^\dagger -a)}\right] \\ &=\sqrt{\frac{\hbar}{2 m \omega}} i \sqrt{\frac{\hbar m \omega}{2}} \left[{a^\dagger + a},{a^\dagger -a}\right] \\ &= \frac{i \hbar}{2}\left(\left[{a^\dagger},{a^\dagger}\right] -\left[{a^\dagger},{a}\right] +\left[{a},{a^\dagger}\right] -\left[{a},{a}\right] \right)  \\ &= \frac{i \hbar}{2}\left(0+2 \left[{a},{a^\dagger}\right] -0\right)\end{aligned}

Comparing LHS and RHS we have as stated

\begin{aligned}\left[{a},{a^\dagger}\right] = \mathbf{1}\end{aligned} \hspace{\stretch{1}}(8.176)

and thus from 8.175 we have

\begin{aligned}H = \hbar \omega \left( a^\dagger a + \frac{\mathbf{1}}{2} \right)\end{aligned} \hspace{\stretch{1}}(8.177)
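Both 8.176 and the spectrum of 8.177 can be sanity checked numerically by representing a as a matrix in the number basis, using the matrix elements a_{n-1,n} = \sqrt{n} derived later in 9.197. A numpy sketch; the truncation size N and the unit choices are arbitrary, and the commutator only fails at the truncation edge:

```python
import numpy as np

N = 10                                        # truncation size (illustrative)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)    # a|n> = sqrt(n)|n-1>, cf. 9.197
ad = a.conj().T
hbar = omega = 1.0

comm = a @ ad - ad @ a
H = hbar * omega * (ad @ a + 0.5 * np.eye(N))
evals = np.sort(np.linalg.eigvalsh(H))

print(np.allclose(comm[:-1, :-1], np.eye(N - 1)))   # [a, a^dagger] = 1 away from the edge
print(evals[:4])                                    # 0.5, 1.5, 2.5, 3.5
```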

Let {\lvert {n} \rangle} be the eigenstate of H so that H{\lvert {n} \rangle} = E_n {\lvert {n} \rangle}. From 8.177 we have

\begin{aligned}H {\lvert {n} \rangle} =\hbar \omega \left( a^\dagger a + \frac{\mathbf{1}}{2} \right) {\lvert {n} \rangle}\end{aligned} \hspace{\stretch{1}}(8.178)

or

\begin{aligned}a^\dagger a {\lvert {n} \rangle} + \frac{{\lvert {n} \rangle}}{2} = \frac{E_n}{\hbar \omega} {\lvert {n} \rangle}\end{aligned} \hspace{\stretch{1}}(8.179)

\begin{aligned}a^\dagger a {\lvert {n} \rangle} = \left( \frac{E_n}{\hbar \omega} - \frac{1}{{2}} \right) {\lvert {n} \rangle} = \lambda_n {\lvert {n} \rangle}\end{aligned} \hspace{\stretch{1}}(8.180)

We wish now to find the eigenstates of the “Number” operator a^\dagger a, which are simultaneously eigenstates of the Hamiltonian operator.

Observe that we have

\begin{aligned}a^\dagger a (a^\dagger {\lvert {n} \rangle} ) &= a^\dagger ( a a^\dagger {\lvert {n} \rangle} ) \\ &= a^\dagger ( \mathbf{1} + a^\dagger a ) {\lvert {n} \rangle}\end{aligned}

where we used \left[{a},{a^\dagger}\right] = a a^\dagger - a^\dagger a = \mathbf{1}.

\begin{aligned}a^\dagger a (a^\dagger {\lvert {n} \rangle} ) &= a^\dagger \left( \mathbf{1} + \frac{E_n}{\hbar\omega} - \frac{\mathbf{1}}{2} \right) {\lvert {n} \rangle} \\ &= a^\dagger \left( \frac{E_n}{\hbar\omega} + \frac{\mathbf{1}}{2} \right) {\lvert {n} \rangle},\end{aligned}

or

\begin{aligned}a^\dagger a (a^\dagger {\lvert {n} \rangle} ) = (\lambda_n + 1) (a^\dagger {\lvert {n} \rangle} )\end{aligned} \hspace{\stretch{1}}(8.181)

The new state a^\dagger {\lvert {n} \rangle} is presumed to lie in the same space, expressible as a linear combination of the basis states in this space. Examining the effect of the operator a^\dagger a on this new state, we find that the eigenvalue (and hence the energy) is shifted, but the state is otherwise unchanged. Any state a^\dagger {\lvert {n} \rangle} is an eigenstate of a^\dagger a, and therefore also an eigenstate of the Hamiltonian.

Play the same game and win big by discovering that

\begin{aligned}a^\dagger a ( a {\lvert {n} \rangle} ) = (\lambda_n -1) (a {\lvert {n} \rangle} )\end{aligned} \hspace{\stretch{1}}(8.182)

There will be some state {\lvert {0} \rangle} such that

\begin{aligned}a {\lvert {0} \rangle} = 0 {\lvert {0} \rangle}\end{aligned} \hspace{\stretch{1}}(8.183)

which implies

\begin{aligned}a^\dagger (a {\lvert {0} \rangle}) = (a^\dagger a) {\lvert {0} \rangle} = 0\end{aligned} \hspace{\stretch{1}}(8.184)

so from 8.180 we have

\begin{aligned}\lambda_0 = 0\end{aligned} \hspace{\stretch{1}}(8.185)

Observe that we can identify \lambda_n = n for

\begin{aligned}\lambda_n = \left( \frac{E_n}{\hbar\omega} - \frac{1}{{2}} \right) = n,\end{aligned} \hspace{\stretch{1}}(8.186)

or

\begin{aligned}\frac{E_n}{\hbar\omega} = n + \frac{1}{{2}}\end{aligned} \hspace{\stretch{1}}(8.187)

or

\begin{aligned}E_n = \hbar \omega \left( n + \frac{1}{{2}} \right)\end{aligned} \hspace{\stretch{1}}(8.188)

where n = 0, 1, 2, \cdots.

We can write

\begin{aligned}\hbar \omega \left( a^\dagger a + \frac{1}{{2}} \mathbf{1} \right) {\lvert {n} \rangle} &= E_n {\lvert {n} \rangle} \\ a^\dagger a {\lvert {n} \rangle} + \frac{1}{{2}} {\lvert {n} \rangle} &= \frac{E_n}{\hbar \omega} {\lvert {n} \rangle} \\ \end{aligned}

or

\begin{aligned}a^\dagger a {\lvert {n} \rangle} = \left( \frac{E_n}{\hbar \omega} - \frac{1}{{2}} \right) {\lvert {n} \rangle} = \lambda_n {\lvert {n} \rangle} = n {\lvert {n} \rangle}\end{aligned} \hspace{\stretch{1}}(8.189)

We call this operator a^\dagger a = N, the number operator, so that

\begin{aligned}N {\lvert {n} \rangle} = n {\lvert {n} \rangle}\end{aligned} \hspace{\stretch{1}}(8.190)

Relating states.

Recall the calculation we performed for

\begin{aligned}L_{+} {\lvert {lm} \rangle} &= C_{+} {\lvert {l, m+1} \rangle} \\ L_{-} {\lvert {lm} \rangle} &= C_{-} {\lvert {l, m-1} \rangle}\end{aligned} \hspace{\stretch{1}}(9.191)

where C_{+} and C_{-} are constants. The next game we are going to play is to work out C_n for the lowering operation

\begin{aligned}a{\lvert {n} \rangle} = C_n {\lvert {n-1} \rangle}\end{aligned} \hspace{\stretch{1}}(9.193)

and the raising operation

\begin{aligned}a^\dagger {\lvert {n} \rangle} = B_n {\lvert {n+1} \rangle}.\end{aligned} \hspace{\stretch{1}}(9.194)

For the Hermitian conjugate of a {\lvert {n} \rangle} we have

\begin{aligned}(a {\lvert {n} \rangle})^\dagger = ( C_n {\lvert {n-1} \rangle} )^\dagger = C_n^{*} {\langle {n-1} \rvert}\end{aligned} \hspace{\stretch{1}}(9.195)

So

\begin{aligned}({\langle {n} \rvert} a^\dagger) (a {\lvert {n} \rangle}) = C_n C_n^{*} \left\langle{{n-1}} \vert {{n-1}}\right\rangle = {\left\lvert{C_n}\right\rvert}^2\end{aligned} \hspace{\stretch{1}}(9.196)

Expanding the LHS we have

\begin{aligned}{\left\lvert{C_n}\right\rvert}^2 &={\langle {n} \rvert} a^\dagger a {\lvert {n} \rangle} \\ &={\langle {n} \rvert} n {\lvert {n} \rangle} \\ &=n \left\langle{{n}} \vert {{n}}\right\rangle \\ &=n \end{aligned}

For

\begin{aligned}C_n = \sqrt{n}\end{aligned} \hspace{\stretch{1}}(9.197)

Similarly

\begin{aligned}({\langle {n} \rvert} a) (a^\dagger {\lvert {n} \rangle}) = B_n B_n^{*} \left\langle{{n+1}} \vert {{n+1}}\right\rangle = {\left\lvert{B_n}\right\rvert}^2\end{aligned} \hspace{\stretch{1}}(9.198)

and

\begin{aligned}{\left\lvert{B_n}\right\rvert}^2 &={\langle {n} \rvert} \underbrace{a a^\dagger}_{a a^\dagger - a^\dagger a = \mathbf{1}} {\lvert {n} \rangle} \\ &={\langle {n} \rvert} \left( \mathbf{1} + a^\dagger a \right) {\lvert {n} \rangle} \\ &=(1 + n) \left\langle{{n}} \vert {{n}}\right\rangle \\ &=1 + n \end{aligned}

for

\begin{aligned}B_n = \sqrt{n + 1}\end{aligned} \hspace{\stretch{1}}(9.199)
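These coefficients can be checked against the matrix representation of a in the number basis. A small numpy sketch; N and the particular n are arbitrary illustrative choices:

```python
import numpy as np

N = 8                                         # truncation (illustrative)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)    # lowering operator, number basis
ad = a.conj().T

def ket(n):
    e = np.zeros(N)
    e[n] = 1.0
    return e

n = 3
ok_lower = np.allclose(a @ ket(n), np.sqrt(n) * ket(n - 1))        # C_n = sqrt(n)
ok_raise = np.allclose(ad @ ket(n), np.sqrt(n + 1) * ket(n + 1))   # B_n = sqrt(n+1)
print(ok_lower, ok_raise)   # True True
```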

Heisenberg picture.

\paragraph{How does the lowering operator a evolve in time?}

\paragraph{A:} Recall that for a general operator A, we have for the time evolution of that operator

\begin{aligned}i \hbar \frac{d A}{dt} = \left[{ A },{H}\right]\end{aligned} \hspace{\stretch{1}}(10.200)

Let’s solve this one.

\begin{aligned}i \hbar \frac{d a}{dt} &= \left[{ a },{H}\right] \\ &= \left[{ a },{ \hbar \omega (a^\dagger a + \mathbf{1}/2) }\right] \\ &= \hbar\omega \left[{ a },{ (a^\dagger a + \mathbf{1}/2) }\right] \\ &= \hbar\omega \left[{ a },{ a^\dagger a }\right] \\ &= \hbar\omega \left( a a^\dagger a - a^\dagger a a \right) \\ &= \hbar\omega \left( (a a^\dagger) a - a^\dagger a a \right) \\ &= \hbar\omega \left( (a^\dagger a + \mathbf{1}) a - a^\dagger a a \right) \\ &= \hbar\omega a \end{aligned}

Even though a is an operator, it undergoes a time evolution, and thinking of it as a function we can solve for a in the differential equation

\begin{aligned}\frac{d a}{dt} = -i \omega a \end{aligned} \hspace{\stretch{1}}(10.201)

This has the solution

\begin{aligned}a = a(0) e^{-i \omega t}\end{aligned} \hspace{\stretch{1}}(10.202)

Here a(0) is an operator, the value of that operator at t = 0. The exponential here is just a scalar (not affected by the operator, so we can put it on either side of the operator as desired).

\paragraph{CHECK:}

\begin{aligned}a' = a(0) \frac{d}{dt} e^{-i \omega t} = a(0) (-i \omega) e^{-i \omega t} = -i \omega a\end{aligned} \hspace{\stretch{1}}(10.203)
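As a further check, conjugating a(0) by the propagator in the truncated number basis reproduces 10.202 exactly, since H is diagonal there. A numpy/scipy sketch; \omega, t and the truncation N are arbitrary illustrative values:

```python
import numpy as np
from scipy.linalg import expm

hbar, omega, t, N = 1.0, 1.3, 0.4, 8          # arbitrary illustrative values
a0 = np.diag(np.sqrt(np.arange(1, N)), k=1)   # a(0) in the number basis
n = np.arange(N)
H = hbar * omega * np.diag(n + 0.5)           # eq. 8.188, diagonal in this basis

# Heisenberg evolution: a(t) = e^{iHt/hbar} a(0) e^{-iHt/hbar}
a_t = expm(1j * H * t / hbar) @ (a0 + 0j) @ expm(-1j * H * t / hbar)
print(np.allclose(a_t, a0 * np.exp(-1j * omega * t)))   # True
```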

A couple of comments on the Schr\"{o}dinger picture.

We don’t do this in class, but it is very similar to the approach of the hydrogen atom. See the text for full details.

In the Schr\"{o}dinger picture,

\begin{aligned}-\frac{\hbar^2}{2m} \frac{d^2 u}{dx^2} + \frac{1}{{2}} m \omega^2 x^2 u = E u\end{aligned} \hspace{\stretch{1}}(11.204)

This is written directly in the wave function representation, but we can relate it to the ket formulation by noting the identification u = u(x) = \left\langle{{x}} \vert {{u}}\right\rangle.

In 11.204, we can switch to dimensionless quantities with

\begin{aligned}\xi = \alpha x\end{aligned} \hspace{\stretch{1}}(11.205)

with

\begin{aligned}\alpha = \sqrt{\frac{m \omega}{\hbar}}\end{aligned} \hspace{\stretch{1}}(11.206)

This gives, with \lambda = 2E/\hbar\omega,

\begin{aligned}\frac{d^2 u}{d\xi^2} + (\lambda - \xi^2) u = 0\end{aligned} \hspace{\stretch{1}}(11.207)

We can use polynomial series expansion methods to solve this, and find that we require a terminating expression, and write this in terms of the Hermite polynomials (courtesy of the clever French once again).

When all is said and done we will get the energy eigenvalues once again

\begin{aligned}E = E_n = \hbar \omega \left( n + \frac{1}{{2}} \right)\end{aligned} \hspace{\stretch{1}}(11.208)
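A finite-difference diagonalization of 11.207 recovers these eigenvalues numerically. A numpy sketch; the grid size and domain half-width are arbitrary assumptions, with u forced to zero at the boundaries:

```python
import numpy as np

# Finite differences for eq. 11.207, rewritten as -u'' + xi^2 u = lambda u
L, M = 6.0, 600                     # domain half-width and grid size (illustrative)
xi = np.linspace(-L, L, M)
h = xi[1] - xi[0]
T = (2 * np.eye(M) - np.eye(M, k=1) - np.eye(M, k=-1)) / h**2   # -d^2/dxi^2
Hmat = T + np.diag(xi**2)
lam = np.sort(np.linalg.eigvalsh(Hmat))[:4]
print(lam)   # approximately [1, 3, 5, 7], i.e. lambda_n = 2n + 1
```

With \lambda = 2E/\hbar\omega this is E_n = \hbar\omega(n + 1/2), as above.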

Back to the Heisenberg picture.

Let us express

\begin{aligned}\left\langle{{x}} \vert {{n}}\right\rangle = u_n(x)\end{aligned} \hspace{\stretch{1}}(12.209)

With

\begin{aligned}a {\lvert {0} \rangle} = 0,\end{aligned} \hspace{\stretch{1}}(12.210)

we have

\begin{aligned}0  =\left( X + i \frac{P}{m \omega} \right) {\lvert {0} \rangle},\end{aligned} \hspace{\stretch{1}}(12.211)

and

\begin{aligned}0 &= {\langle {x} \rvert} \left( X + i \frac{P}{m \omega} \right) {\lvert {0} \rangle} \\ &= {\langle {x} \rvert} X {\lvert {0 } \rangle} + i \frac{1}{m \omega} {\langle {x} \rvert} P {\lvert {0} \rangle} \\ &= x \left\langle{{x}} \vert {{0}}\right\rangle + i \frac{1}{m \omega} {\langle {x} \rvert} P {\lvert {0} \rangle} \\ \end{aligned}

Recall that the matrix element of the momentum operator is

\begin{aligned}{\langle {x'} \rvert} P {\lvert {x} \rangle} = \delta(x - x') \left( -i \hbar \frac{d}{dx} \right)\end{aligned} \hspace{\stretch{1}}(12.212)

\begin{aligned}{\langle {x} \rvert} P {\lvert {0} \rangle} &={\langle {x} \rvert} P \underbrace{\int {\lvert {x'} \rangle} {\langle {x'} \rvert} dx' }_{= \mathbf{1}}{\lvert {0} \rangle} \\ &=\int {\langle {x} \rvert} P {\lvert {x'} \rangle} \left\langle{{x'}} \vert {{0}}\right\rangle dx' \\ &=\int \delta(x - x') \left( -i \hbar \frac{d}{dx} \right)\left\langle{{x'}} \vert {{0}}\right\rangle dx' \\ &=\left( -i \hbar \frac{d}{dx} \right)\left\langle{{x}} \vert {{0}}\right\rangle\end{aligned}

We have then

\begin{aligned}0 =x u_0(x) + \frac{\hbar}{m \omega} \frac{d u_0(x)}{dx}\end{aligned} \hspace{\stretch{1}}(12.213)

NOTE: picture of the solution to this LDE on slide…. but I didn’t look closely enough.


Desai Chapter 9 notes and problems.

Posted by peeterjoot on November 29, 2010


Motivation.

Chapter 9 notes for [1].

Notes

Problems

Problem 2.

Statement.

On the basis of the results already derived for the harmonic oscillator, determine the energy eigenvalues and the ground-state wavefunction for the truncated oscillator

\begin{aligned}V(x) &= \frac{1}{{2}} K x^2 \theta(x)\end{aligned}

Solution.

We require u(0) = 0, so our solutions are limited to the truncated odd harmonic oscillator solutions. The normalization will be different since only the x>0 integration range is significant. Our energy eigenvalues are

\begin{aligned}E_n = \left( n + \frac{1}{{2}} \right) \hbar \omega, n = 1, 3, 5, \cdots\end{aligned} \hspace{\stretch{1}}(3.1)

And its wave function is

\begin{aligned}v_1(x) \propto u_1(x) \theta(x) = A x e^{-\alpha^2 x^2/2} \theta(x)\end{aligned} \hspace{\stretch{1}}(3.2)

where u_1(x) is the first odd wavefunction for the non-truncated oscillator. Normalizing this we find A^2 \sqrt{\pi}/(4 \alpha^3) = 1, or

\begin{aligned}v_1(x) = 2 \left( \frac{\alpha^3}{\sqrt{\pi}}\right)^{1/2} x e^{-\alpha^2 x^2/2} \theta(x)\end{aligned} \hspace{\stretch{1}}(3.3)
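A quick numerical check of the normalization over the x > 0 half-line. A Python sketch using numpy/scipy; the value of \alpha is an arbitrary assumption:

```python
import numpy as np
from scipy.integrate import quad

alpha = 1.1   # arbitrary illustrative value

def v1(x):
    # eq. 3.3; the theta(x) factor restricts support to x > 0
    return 2 * (alpha**3 / np.sqrt(np.pi))**0.5 * x * np.exp(-alpha**2 * x**2 / 2)

norm, _ = quad(lambda x: v1(x)**2, 0, np.inf)
print(norm)   # 1.0
```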

Problem 3.

Statement.

Show that for the harmonic oscillator in the state {\lvert {n} \rangle}, the following uncertainty product holds.

\begin{aligned}\Delta x \Delta p = \left( n + \frac{1}{{2}} \right) \hbar\end{aligned} \hspace{\stretch{1}}(3.4)

Solution.

I tried this first explicitly with the first two wave functions

\begin{aligned}u_0(x) &= \left(\frac{\alpha^2}{\pi}\right)^{1/4} e^{- \alpha^2 x^2/2} \\ u_1(x) &= \sqrt{2 \alpha^2} \left(\frac{\alpha^2}{\pi}\right)^{1/4} x e^{- \alpha^2 x^2/2}\end{aligned} \hspace{\stretch{1}}(3.5)

For the {\lvert {0} \rangle} state we find easily that \left\langle{{X}}\right\rangle = 0

\begin{aligned}{\langle {0} \rvert} X {\lvert {0} \rangle} &=\int dx {\langle {0} \rvert} X {\lvert {x} \rangle} \left\langle{{x}} \vert {{0}}\right\rangle \\ &=\int dx x {\left\lvert{\left\langle{{x}} \vert {{0}}\right\rangle}\right\rvert}^2 \\ &=\int dx x {\left\lvert{u_0(x)}\right\rvert}^2 \\ &\propto\int dx x e^{-\alpha^2 x^2} \end{aligned}

and this is zero since we are integrating an odd function over an even range (presuming that we take the principal value of the integral).

For the {\lvert {1} \rangle} state we have

\begin{aligned}{\langle {1} \rvert} X {\lvert {1} \rangle} \propto\int dx x^3 e^{-\alpha^2 x^2} = 0\end{aligned}

Since each u_n(x) is a polynomial times a e^{-\alpha^2 x^2/2} factor we have \left\langle{{X}}\right\rangle = 0 for all states {\lvert {n} \rangle}.

The momentum expectation values for states {\lvert {0} \rangle} and {\lvert {1} \rangle} are also fairly simple to compute. We have

\begin{aligned}{\langle {n} \rvert} P {\lvert {n} \rangle} &=\int dx {\langle {n} \rvert} P {\lvert {x} \rangle}\left\langle{{x}} \vert {{n}}\right\rangle \\ &=\int dx' dx \left\langle{{n}} \vert {{x'}}\right\rangle {\langle {x'} \rvert} P {\lvert {x} \rangle} \left\langle{{x}} \vert {{n}}\right\rangle \\ &=-i \hbar \int dx' dx u_n^{*}(x') \delta(x-x') \frac{\partial {}}{\partial {x}} u_n(x) \\ &=-i \hbar \int dx u_n^{*}(x) \frac{\partial {}}{\partial {x}} u_n(x) \\ \end{aligned}

For the {\lvert {0} \rangle} state our derivative is odd since a factor of x is brought down, and we are again integrating an odd function over an even range. For the {\lvert {1} \rangle} case our derivative is proportional to

\begin{aligned}\frac{\partial {}}{\partial {x}} u_1(x) \propto\frac{\partial {}}{\partial {x}} \left( x e^{-\alpha^2 x^2/2 } \right)=\left( 1 - \alpha^2 x^2 \right) e^{-\alpha^2 x^2/2 } \end{aligned}

Again, this is an even function, while u_1(x) is odd, so we have zero. Noting that we can express each u_n(x) in terms of Hermite polynomials

\begin{aligned}u_n(x) &= \left( \frac{ \alpha}{\sqrt{\pi} 2^n n!} \right)^{1/2} H_n(\alpha x) e^{ -\alpha^2 x^2/2}\end{aligned} \hspace{\stretch{1}}(3.7)

where H_{2n}(x) is even and H_{2n-1}(x) is odd, we note that this expectation value will always be zero since we will have an even times odd function in the integration kernel.

Knowing that the position and momentum expectation values are zero reduces this problem to the calculation of the product {\langle {n} \rvert} X^2 {\lvert {n} \rangle} {\langle {n} \rvert} P^2 {\lvert {n} \rangle}. Either of these expectation values is again not too hard to compute for n=0,1. However, we now have to keep track of the proportionality constants. As expected this yields

\begin{aligned}{\langle {0} \rvert} X^2 {\lvert {0} \rangle} {\langle {0} \rvert} P^2 {\lvert {0} \rangle} &= \hbar^2/4  \\ {\langle {1} \rvert} X^2 {\lvert {1} \rangle} {\langle {1} \rvert} P^2 {\lvert {1} \rangle} &= 9 \hbar^2/4\end{aligned} \hspace{\stretch{1}}(3.8)

These are respectively

\begin{aligned}\Delta x \Delta p &= \left( 0 + \frac{1}{{2}} \right) \hbar \\ \Delta x \Delta p &= \left( 1 + \frac{1}{{2}} \right) \hbar\end{aligned} \hspace{\stretch{1}}(3.10)

However, these integrals were only straightforward (albeit tedious) to calculate because we had explicit representations for u_0(x) and u_1(x). For the general wave function, what we have to work with is either the Hermite polynomial representation of 3.7 or the derivative form

\begin{aligned}u_n(x) &= (-1)^n\left( \frac{ \alpha}{\sqrt{\pi} 2^n n!} \right)^{1/2} e^{ \alpha^2 x^2/2}\frac{d^n}{d (\alpha x)^n}e^{ -\alpha^2 x^2}\end{aligned} \hspace{\stretch{1}}(3.12)

Expanding this explicitly for arbitrary n isn’t going to be feasible. We can reduce the scope of the problem by trying to be lazy and see how some work can be avoided. One possible trick is noting that we can express the squared momentum expectation in terms of the Hamiltonian

\begin{aligned}{\langle {n} \rvert} P^2 {\lvert {n} \rangle}&={\langle {n} \rvert} 2m \left( H - \frac{1}{{2}} m \omega^2 X^2 \right) {\lvert {n} \rangle} \\ &=\left( n + \frac{1}{{2}} \right) 2 m \hbar \omega- m^2 \omega^2 {\langle {n} \rvert} X^2 {\lvert {n} \rangle} \\ &=\left( n + \frac{1}{{2}} \right) 2 \hbar^2 \alpha^2 - \hbar^2 \alpha^4 {\langle {n} \rvert} X^2 {\lvert {n} \rangle} \\ \end{aligned}

So we can get away with only calculating {\langle {n} \rvert} X^2 {\lvert {n} \rangle}, an exercise in integration by parts

\begin{aligned}{\langle {n} \rvert} X^2 {\lvert {n} \rangle}&=\frac{ \alpha}{\sqrt{\pi} 2^n n!} \int dx x^2e^{ \alpha^2 x^2}\left(\frac{d^n}{d (\alpha x)^n}e^{ -\alpha^2 x^2}\right)^2 \\ &=\frac{ 1 }{\alpha^2 \sqrt{\pi} 2^n n!} \int dy y^2 e^{ y^2}\left(\frac{d^n}{d y^n}e^{ -y^2 }\right)^2 \\ &=\frac{ 1 }{\alpha^2 \sqrt{\pi} 2^n n!} \int dy \frac{1}{{2}} y \frac{ d}{dy} e^{ y^2}\left(\frac{d^n}{d y^n}e^{ -y^2 }\right)^2 \\ &=\frac{ 1 }{\alpha^2 \sqrt{\pi} 2^n n!} \frac{1}{{-2}}\int dy e^{ y^2}\frac{d}{dy} \left( y \left(\frac{d^n}{d y^n}e^{ -y^2 }\right)^2 \right)\\ &=\frac{ 1 }{\alpha^2 \sqrt{\pi} 2^n n!} \frac{1}{{-2}}\int dy e^{ y^2}\left( \left(\frac{d^n}{d y^n}e^{ -y^2 }\right)^2 + 2 y \frac{d^n}{d y^n}e^{ -y^2 }\frac{d^{n+1}}{d y^{n+1}}e^{ -y^2 }\right)\\ &=-\frac{1}{{2 \alpha^2}}-\frac{ 1 }{\alpha^2 \sqrt{\pi} 2^n n!} \frac{1}{{2}}\int dy \frac{d}{dy} e^{ y^2}\frac{d^n}{d y^n}e^{ -y^2 }\frac{d^{n+1}}{d y^{n+1}}e^{ -y^2 }\\ &=-\frac{1}{{2 \alpha^2}}+\frac{ 1 }{\alpha^2 \sqrt{\pi} 2^n n!} \frac{1}{{2}}\int dy e^{ y^2}\left(\frac{d^{n+1}}{d y^{n+1}}e^{ -y^2 }\frac{d^{n+1}}{d y^{n+1}}e^{ -y^2 }+\frac{d^n}{d y^n}e^{ -y^2 }\frac{d^{n+2}}{d y^{n+2}}e^{ -y^2 }\right)\\ \end{aligned}

The second term in this remaining integral is proportional to \left\langle{{n}} \vert {{n+2}}\right\rangle = 0, which leaves us with

\begin{aligned}{\langle {n} \rvert} X^2 {\lvert {n} \rangle}=-\frac{1}{{2 \alpha^2}} + \frac{n+1}{\alpha^2} = \frac{1}{{\alpha^2}}\left( n + \frac{1}{{2}} \right)\end{aligned} \hspace{\stretch{1}}(3.13)

Our squared momentum expectation value is then

\begin{aligned}{\langle {n} \rvert} P^2 {\lvert {n} \rangle}&=\left( n + \frac{1}{{2}} \right) 2 \hbar^2 \alpha^2 - \hbar^2 \alpha^4 {\langle {n} \rvert} X^2 {\lvert {n} \rangle} \\ &=\left( n + \frac{1}{{2}} \right) \hbar^2 \alpha^2 \end{aligned}

This completes the problem, and we are left with

\begin{aligned}\Delta x \Delta p = \left( n + \frac{1}{{2}} \right) \hbar.\end{aligned} \hspace{\stretch{1}}(3.14)
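As a sanity check on 3.14 (not part of the original derivation; numpy assumed available), the uncertainty products can be evaluated numerically for the first few n, working in units where \hbar = \alpha = 1:

```python
import numpy as np
from math import factorial, pi, sqrt

# Oscillator eigenfunctions u_n(x) = H_n(x) e^{-x^2/2} / sqrt(sqrt(pi) 2^n n!)
# in units where hbar = alpha = 1, so Delta x Delta p should come out to n + 1/2.
x = np.linspace(-12.0, 12.0, 8001)
dx = x[1] - x[0]

def u(n):
    c = np.zeros(n + 1)
    c[n] = 1.0
    Hn = np.polynomial.hermite.hermval(x, c)  # physicists' Hermite H_n
    return Hn * np.exp(-x**2 / 2) / sqrt(sqrt(pi) * 2**n * factorial(n))

products = []
for n in range(4):
    un = u(n)
    x2 = np.sum(x**2 * un**2) * dx            # <n| X^2 |n>
    p2 = np.sum(np.gradient(un, dx)**2) * dx  # <n| P^2 |n> = integral of (u_n')^2
    products.append(sqrt(x2 * p2))            # Delta x Delta p in units of hbar

print(products)  # each value approximately n + 1/2: 0.5, 1.5, 2.5, 3.5
```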

Problem 4.

Statement.

Consider the following two-dimensional harmonic oscillator problem:

\begin{aligned}-\frac{\hbar^2}{2m} \frac{\partial^2 u}{\partial x^2}-\frac{\hbar^2}{2m} \frac{\partial^2 u}{\partial y^2}+ \frac{1}{{2}} K_1 x^2 u+ \frac{1}{{2}} K_2 y^2 u= E u\end{aligned} \hspace{\stretch{1}}(3.15)

where (x,y) are the coordinates of the particle. Use the separation of variables technique to obtain the energy eigenvalues. Discuss the degeneracy in the eigenvalues if K_1 = K_2.

Solution.

Write u = A(x) B(y). Substituting and dividing throughout by u we have

\begin{aligned}\left( -\frac{\hbar^2}{2m} \frac{A''}{A} + \frac{1}{{2}} K_1 x^2 \right)+\left( -\frac{\hbar^2}{2m} \frac{B''}{B} + \frac{1}{{2}} K_2 y^2 \right)= E\end{aligned} \hspace{\stretch{1}}(3.16)

Introducing a pair of constants E_1, E_2, one for each of the independent terms, we have

\begin{aligned}H_1 A &= -\frac{\hbar^2}{2m} A'' + \frac{1}{{2}} K_1 x^2 A = E_1 A \\ H_2 B &= -\frac{\hbar^2}{2m} B'' + \frac{1}{{2}} K_2 y^2 B = E_2 B \\ H &= H_1 + H_2 \\ E  &= E_1 + E_2\end{aligned} \hspace{\stretch{1}}(3.17)

For each of these equations we have a set of quantized eigenvalues and can write

\begin{aligned}E_{1m} &= \left(m + \frac{1}{{2}}\right) \hbar \sqrt{\frac{K_1}{m}} \\ E_{2n} &= \left(n + \frac{1}{{2}}\right) \hbar \sqrt{\frac{K_2}{m}} \\ H_1 A_m(x) &= E_{1m} A_m(x) \\ H_2 B_n(y) &= E_{2n} B_n(y)\end{aligned} \hspace{\stretch{1}}(3.21)

The complete eigenstates are then

\begin{aligned}u_{mn}(x,y) &= A_m(x) B_n(y)\end{aligned} \hspace{\stretch{1}}(3.25)

with total energy satisfying

\begin{aligned}H u_{mn}(x,y) &=\frac{\hbar}{\sqrt{m}} \left( \left(m + \frac{1}{{2}}\right) \sqrt{K_1} + \left(n + \frac{1}{{2}}\right) \sqrt{K_2} \right) u_{mn}(x,y)\end{aligned} \hspace{\stretch{1}}(3.26)

A general state requires a double sum over the possible combinations of states \Psi = \sum_{mn} c_{mn} u_{mn}, however if K_1 = K_2 = K, we cannot distinguish between u_{mn} and u_{nm} based on the energy eigenvalues

\begin{aligned}H u_{mn}(x,y) &= \hbar\sqrt{\frac{K}{m}} \left( m + n + 1 \right) u_{mn}(x,y) = H u_{nm}(x,y)\end{aligned} \hspace{\stretch{1}}(3.27)

In this case, any wave function of the form \Psi = \sum_{m+ n = \text{constant}} c_{mn} u_{mn} is also an energy eigenstate. This multiplicity of distinct eigenstates sharing a single energy eigenvalue is the degeneracy to be discussed.
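The degeneracy for K_1 = K_2 can be made concrete by counting states per level (a small illustrative sketch, not from the text): the level with N = m + n contains N + 1 distinct eigenstates u_{mn}.

```python
from collections import Counter

# For K_1 = K_2 the energy of u_{mn} depends only on N = m + n,
# E_N = hbar sqrt(K/m) (N + 1), so count the pairs (m, n) in each level.
levels = Counter(m + n for m in range(6) for n in range(6))
degeneracy = {N: levels[N] for N in range(6)}  # counts are complete for N <= 5
print(degeneracy)  # {0: 1, 1: 2, 2: 3, 3: 4, 4: 5, 5: 6}
```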

Problem 5,6.

Statement.

Consider now a variation on Problem 4 in which we have a coupled oscillator with the potential given by

\begin{aligned}V(x,y) = \frac{1}{{2}} K \Bigl( x^2 + y^2 + 2 \lambda x y \Bigr)\end{aligned} \hspace{\stretch{1}}(3.28)

Obtain the energy eigenvalues by changing variables (x,y) to (x', y') such that the new potential is quadratic in (x', y'), without the coupling term.

Solution.

This has the look of a diagonalization problem, so we write the potential in matrix form

\begin{aligned}V(x,y)= \frac{1}{{2}} K\begin{bmatrix}x & y\end{bmatrix}\begin{bmatrix}1 & \lambda \\ \lambda & 1\end{bmatrix}\begin{bmatrix}x \\  y\end{bmatrix} = \frac{1}{{2}} K \tilde{X} M X\end{aligned} \hspace{\stretch{1}}(3.29)

The similarity transformation required is

\begin{aligned}M = \frac{1}{{\sqrt{2}}}\begin{bmatrix}1 & 1 \\ 1 & -1\end{bmatrix}\begin{bmatrix}1+ \lambda & 0 \\ 0 & 1 - \lambda\end{bmatrix}\frac{1}{{\sqrt{2}}}\begin{bmatrix}1 & 1 \\ 1 & -1\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.30)

Our change of variables is therefore

\begin{aligned}X' =\frac{1}{{\sqrt{2}}}\begin{bmatrix}1 & 1 \\ 1 & -1\end{bmatrix}X=\frac{1}{{\sqrt{2}}}\begin{bmatrix}x + y \\ x - y\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.31)

Our Laplacian should also remain diagonal under this orthogonal transformation, but we can verify this by expanding out the partials explicitly

\begin{aligned}\frac{\partial {}}{\partial {x}} &=\frac{\partial {x'}}{\partial {x}}\frac{\partial {}}{\partial {x'}}+\frac{\partial {y'}}{\partial {x}}\frac{\partial {}}{\partial {y'}} = \frac{1}{{\sqrt{2}}} \left( \frac{\partial {}}{\partial {x'}} + \frac{\partial {}}{\partial {y'}} \right) \\ \frac{\partial {}}{\partial {y}} &=\frac{\partial {x'}}{\partial {y}}\frac{\partial {}}{\partial {x'}} +\frac{\partial {y'}}{\partial {y}}\frac{\partial {}}{\partial {y'}}= \frac{1}{{\sqrt{2}}}\left( \frac{\partial {}}{\partial {x'}} - \frac{\partial {}}{\partial {y'}} \right)\end{aligned} \hspace{\stretch{1}}(3.32)

Squaring and summing we have

\begin{aligned}\frac{\partial^2}{\partial x^2} +\frac{\partial^2}{\partial y^2}&=\frac{1}{{2}} \left( \frac{\partial {}}{\partial {x'}} + \frac{\partial {}}{\partial {y'}} \right)^2+\frac{1}{{2}} \left( \frac{\partial {}}{\partial {x'}} - \frac{\partial {}}{\partial {y'}} \right)^2=\frac{\partial^2}{\partial {x'}^2} +\frac{\partial^2}{\partial {y'}^2}\end{aligned} \hspace{\stretch{1}}(3.34)

Our transformed Hamiltonian operator is thus

\begin{aligned}-\frac{\hbar^2}{2m} \frac{\partial^2 u}{\partial {x'}^2}-\frac{\hbar^2}{2m} \frac{\partial^2 u}{\partial {y'}^2}+ \frac{1}{{2}} K(1+\lambda) {x'}^2 u+ \frac{1}{{2}} K(1-\lambda) {y'}^2 u= E u\end{aligned} \hspace{\stretch{1}}(3.35)

So, provided {\left\lvert{\lambda}\right\rvert} < 1, the energy eigenvalue equation is given by 3.26 with K_1 = K(1+ \lambda), and K_2 = K(1 -\lambda).
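The diagonalization 3.30 is easy to confirm numerically for a sample \lambda (a sketch assuming numpy; not in the original post):

```python
import numpy as np

lam = 0.3  # any |lambda| < 1 will do for the check
M = np.array([[1.0, lam],
              [lam, 1.0]])
S = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2.0)  # symmetric orthogonal, S^{-1} = S

D = S @ M @ S  # should be diag(1 + lambda, 1 - lambda), matching 3.30
print(np.round(D, 12))
```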

Problem 7.

Statement.

Consider two coupled harmonic oscillators in one dimension of natural length a and spring constant K connecting three particles located at x_1, x_2, and x_3. The corresponding Schrödinger equation is given as

\begin{aligned}-\frac{\hbar^2}{2m} \frac{\partial^2 u}{\partial {x_1}^2}-\frac{\hbar^2}{2m} \frac{\partial^2 u}{\partial {x_2}^2}-\frac{\hbar^2}{2m} \frac{\partial^2 u}{\partial {x_3}^2}+ \frac{K}{2}\left((x_2 - x_1 - a)^2+(x_3 - x_2 - a)^2\right) u= E u\end{aligned} \hspace{\stretch{1}}(3.36)

Obtain the energy eigenvalues using the matrix method.

Solution.

Let’s start with an initial simplifying substitution to get rid of the factors of a. Write

\begin{aligned}r_1 &= x_1 + a \\ r_2 &= x_2 \\ r_3 &= x_3 - a\end{aligned} \hspace{\stretch{1}}(3.37)

These were picked so that the differences in our quadratic terms involve only factors of r_k

\begin{aligned}x_2 - x_1 - a &= r_2 - r_1 \\ x_3 - x_2 - a &= r_3 - r_2\end{aligned} \hspace{\stretch{1}}(3.40)

Schrödinger’s equation is now

\begin{aligned}-\frac{\hbar^2}{2m} \frac{\partial^2 u}{\partial {r_1}^2}-\frac{\hbar^2}{2m} \frac{\partial^2 u}{\partial {r_2}^2}-\frac{\hbar^2}{2m} \frac{\partial^2 u}{\partial {r_3}^2}+ \frac{K}{2}\left((r_2 - r_1)^2+(r_3 - r_2)^2\right) u= E u\end{aligned} \hspace{\stretch{1}}(3.42)

Putting our potential into matrix form, we have

\begin{aligned}V(r_1, r_2, r_3) &=\frac{K}{2}\left((r_2 - r_1)^2+(r_3 - r_2)^2\right)=\frac{K}{2}\begin{bmatrix}r_1 & r_2 & r_3\end{bmatrix}\begin{bmatrix}1 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 1\end{bmatrix}\begin{bmatrix}r_1 \\  r_2 \\  r_3\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.43)

This symmetric matrix, let’s call it M

\begin{aligned}M=\begin{bmatrix}1 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 1\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.44)

has eigenvalues 0,1,3, with orthonormal eigenvectors

\begin{aligned}e_0 &=\frac{1}{{\sqrt{3}}}\begin{bmatrix}1 \\ 1 \\ 1\end{bmatrix} \\ e_1 &=\frac{1}{{\sqrt{2}}}\begin{bmatrix}1 \\ 0 \\ -1\end{bmatrix} \\ e_3 &=\frac{1}{{\sqrt{6}}}\begin{bmatrix}1 \\ -2 \\ 1\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.45)

Writing

\begin{aligned}U = [e_0 e_1 e_3]=\begin{bmatrix}\frac{1}{{\sqrt{3}}} & \frac{1}{{\sqrt{2}}}  & \frac{1}{{\sqrt{6}}}  \\ \frac{1}{{\sqrt{3}}} & 0  & -\frac{2}{\sqrt{6}}  \\ \frac{1}{{\sqrt{3}}} & -\frac{1}{{\sqrt{2}}}  & \frac{1}{{\sqrt{6}}}\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.48)

\begin{aligned}M = U\begin{bmatrix}0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 3\end{bmatrix}\tilde{U}=U D \tilde{U}\end{aligned} \hspace{\stretch{1}}(3.49)

Writing R' = \tilde{U} R, and \boldsymbol{\nabla}' = \tilde{U} \boldsymbol{\nabla}, we see that the Laplacian has no mixed partial terms after transformation

\begin{aligned}\boldsymbol{\nabla}' \cdot \boldsymbol{\nabla}' &= (\tilde{U} \boldsymbol{\nabla})^{\tilde{}} \tilde{U} \boldsymbol{\nabla} \\ &= \tilde{\boldsymbol{\nabla} } \boldsymbol{\nabla} \\ &= \boldsymbol{\nabla} \cdot \boldsymbol{\nabla}\end{aligned}

Schrödinger’s equation is then just

\begin{aligned}\left( -\frac{\hbar^2}{2m} {\boldsymbol{\nabla}'}^2 + \frac{K}{2} \tilde{R'} D R' \right) u = E u\end{aligned} \hspace{\stretch{1}}(3.50)

Or

\begin{aligned}-\frac{\hbar^2}{2m} \frac{\partial^2 u}{\partial {r_1'}^2}-\frac{\hbar^2}{2m} \frac{\partial^2 u}{\partial {r_2'}^2}-\frac{\hbar^2}{2m} \frac{\partial^2 u}{\partial {r_3'}^2}+ \frac{K}{2}\left({r_2'}^2+3 {r_3'}^2\right) u= E u\end{aligned} \hspace{\stretch{1}}(3.51)

Separation of variables provides us with one free particle wave equation, and two harmonic oscillator equations

\begin{aligned}-\frac{\hbar^2}{2m} \frac{\partial^2 u_1}{\partial {r_1'}^2} &= E_1 u_1 \\ -\frac{\hbar^2}{2m} \frac{\partial^2 u_2}{\partial {r_2'}^2} + \frac{K}{2} {r_2'}^2 u_2 &= E_2 u_2 \\ -\frac{\hbar^2}{2m} \frac{\partial^2 u_3}{\partial {r_3'}^2} + \frac{3 K}{2} {r_3'}^2 u_3 &= E_3 u_3\end{aligned} \hspace{\stretch{1}}(3.52)

We can borrow the harmonic oscillator energy eigenvalues from Problem 4 again with K_1 = K, and K_2 = 3 K.
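The eigenvalues 0, 1, 3 and the decomposition M = U D \tilde{U} of 3.49 can be double-checked numerically (a sketch assuming numpy; not in the original post):

```python
import numpy as np

M = np.array([[ 1.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])
# columns are the orthonormal eigenvectors e_0, e_1, e_3 of 3.45
U = np.array([[1/np.sqrt(3),  1/np.sqrt(2),  1/np.sqrt(6)],
              [1/np.sqrt(3),  0.0,          -2/np.sqrt(6)],
              [1/np.sqrt(3), -1/np.sqrt(2),  1/np.sqrt(6)]])

evals = np.linalg.eigvalsh(M)  # ascending order
D = U.T @ M @ U                # should be diag(0, 1, 3)
print(np.round(evals, 12))
print(np.round(D, 12))
```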

Problem 8.

Statement.

As a variation of Problem 7 assume that the middle particle at x_2 has a different mass M. Reduce this problem to the form of Problem 7 by a scale change in x_2 and then use the matrix method to obtain the energy eigenvalues.

Solution.

We write \sqrt{M} x_2 = \sqrt{m} x_2', x_1 + a = x_1', x_3 - a = x_3', and then Schrödinger’s equation takes the form

\begin{aligned}\left( -\frac{\hbar^2}{2m} {\boldsymbol{\nabla}'}^2 + V(X') \right) u &= E u \end{aligned} \hspace{\stretch{1}}(3.55)

\begin{aligned}V(X') = \frac{K}{2} \left( \left( \sqrt{\frac{m}{M}} x_2' - x_1' \right)^2+\left( -\sqrt{\frac{m}{M}} x_2' + x_3' \right)^2\right)\end{aligned} \hspace{\stretch{1}}(3.56)

With \mu = \sqrt{m/M}, we have

\begin{aligned}V(X') = \frac{K}{2} \tilde{X'}\begin{bmatrix}1 & -\mu & 0 \\ -\mu & 2 \mu^2 & -\mu \\ 0 & -\mu & 1\end{bmatrix}X'\end{aligned} \hspace{\stretch{1}}(3.57)

We find that this symmetric matrix has eigenvalues 0, 1, 1 + 2 \mu^2, and eigenvectors

\begin{aligned}e_0 &=\frac{1}{{\sqrt{1 + 2 \mu^2}}}\begin{bmatrix}\mu \\  1 \\  \mu\end{bmatrix} \\ e_1 &=\frac{1}{{\sqrt{2}}}\begin{bmatrix}1 \\  0 \\  -1\end{bmatrix} \\ e_{1+ 2 \mu^2} &=\frac{1}{{\sqrt{2 + 4 \mu^2}}}\begin{bmatrix}1 \\  -2 \mu \\  1\end{bmatrix} \end{aligned} \hspace{\stretch{1}}(3.58)

The rest of the problem is now no different than the tail end of Problem 7, and we end up with K_1 = K, K_2 = (1 + 2 \mu^2) K.
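Again, a quick numeric confirmation of the claimed eigenvalues 0, 1, 1 + 2\mu^2 (numpy assumed; the value \mu^2 = 1/2 is picked arbitrarily for the check):

```python
import numpy as np

mu2 = 0.5              # mu^2 = m/M, chosen arbitrarily
mu = np.sqrt(mu2)
M = np.array([[1.0,    -mu,   0.0],
              [-mu, 2*mu2,    -mu],
              [0.0,    -mu,   1.0]])

evals = np.linalg.eigvalsh(M)  # ascending order
print(np.round(evals, 12))     # expect [0, 1, 1 + 2 mu^2] = [0, 1, 2]
```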

References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

Posted in Math and Physics Learning. | Tagged: , , | Leave a Comment »

Notes and problems for Desai Chapter VI.

Posted by peeterjoot on November 29, 2010

[Click here for a PDF of this post with nicer formatting]

Motivation.

Chapter VI notes for [1].

Notes

section 6.5, interaction with orbital angular momentum

He states that we take

\begin{aligned}\mathbf{A} = \frac{1}{{2}} (\mathbf{B} \times \mathbf{r})\end{aligned} \hspace{\stretch{1}}(2.1)

and that this reproduces the gauge condition \boldsymbol{\nabla} \cdot \mathbf{A} = 0, and the requirement \boldsymbol{\nabla} \times \mathbf{A} = \mathbf{B}.

These seem to imply that \mathbf{B} is constant, which also accounts for the fact that he writes \boldsymbol{\mu} \cdot \mathbf{L} = \mathbf{L} \cdot \boldsymbol{\mu}.

Consider the gauge condition first, by expanding the divergence of a cross product

\begin{aligned}\boldsymbol{\nabla} \cdot (\mathbf{F} \times \mathbf{G})&=\left\langle{{ \boldsymbol{\nabla} -I \frac{ \mathbf{F} \mathbf{G} - \mathbf{G} \mathbf{F} }{2} }}\right\rangle \\ &=-\frac{1}{{2}} \left\langle{{ I \boldsymbol{\nabla} \mathbf{F} \mathbf{G} - I \boldsymbol{\nabla} \mathbf{G} \mathbf{F} }}\right\rangle \\ &=-\frac{1}{{2}} \left\langle{{I \mathbf{G}(\stackrel{ \rightarrow }{\boldsymbol{\nabla}} \mathbf{F})  - I \mathbf{F} (\stackrel{ \rightarrow }{\boldsymbol{\nabla}} \mathbf{G})+I (\mathbf{G} \stackrel{ \leftarrow }{\boldsymbol{\nabla}}) \mathbf{F} - I (\mathbf{F} \stackrel{ \leftarrow }{\boldsymbol{\nabla}}) \mathbf{G}}}\right\rangle \\ &=-\frac{1}{{2}} \left\langle{{I \mathbf{G}(\stackrel{ \rightarrow }{\boldsymbol{\nabla}} \wedge \mathbf{F})  - I \mathbf{F} (\stackrel{ \rightarrow }{\boldsymbol{\nabla}} \wedge \mathbf{G})+I (\mathbf{G} \wedge \stackrel{ \leftarrow }{\boldsymbol{\nabla}}) \mathbf{F} - I (\mathbf{F} \wedge \stackrel{ \leftarrow }{\boldsymbol{\nabla}}) \mathbf{G}}}\right\rangle \\ &=\frac{1}{{2}} \left\langle{{\mathbf{G} (\stackrel{ \rightarrow }{\boldsymbol{\nabla}} \times \mathbf{F})  - \mathbf{F} (\stackrel{ \rightarrow }{\boldsymbol{\nabla}} \times \mathbf{G})+(\mathbf{G} \times \stackrel{ \leftarrow }{\boldsymbol{\nabla}}) \mathbf{F} - (\mathbf{F} \times \stackrel{ \leftarrow }{\boldsymbol{\nabla}}) \mathbf{G}}}\right\rangle \\ &=\frac{1}{{2}} \left(\mathbf{G} \cdot (\boldsymbol{\nabla} \times \mathbf{F})  - \mathbf{F} \cdot (\boldsymbol{\nabla} \times \mathbf{G})-\mathbf{F} \cdot (\boldsymbol{\nabla} \times \mathbf{G})  + \mathbf{G} \cdot (\boldsymbol{\nabla} \times \mathbf{F} )\right) \\ \end{aligned}

This gives us

\begin{aligned}\boldsymbol{\nabla} \cdot (\mathbf{F} \times \mathbf{G})&=\mathbf{G} \cdot (\boldsymbol{\nabla} \times \mathbf{F})  - \mathbf{F} \cdot (\boldsymbol{\nabla} \times \mathbf{G})\end{aligned} \hspace{\stretch{1}}(2.2)

With \mathbf{A} = (\mathbf{B} \times \mathbf{r})/2 we then have

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{A} =\frac{1}{{2}} \mathbf{r} \cdot (\boldsymbol{\nabla} \times \mathbf{B})  - \frac{1}{{2}} \mathbf{B} \cdot (\boldsymbol{\nabla} \times \mathbf{r})=\frac{1}{{2}} \mathbf{r} \cdot (\boldsymbol{\nabla} \times \mathbf{B})\end{aligned} \hspace{\stretch{1}}(2.3)

Unless \boldsymbol{\nabla} \times \mathbf{B} is always perpendicular to \mathbf{r} we can only have a zero divergence when \mathbf{B} is constant.

Now, let’s look at \boldsymbol{\nabla} \times \mathbf{A}. We need another auxiliary identity

\begin{aligned}\boldsymbol{\nabla} \times (\mathbf{F} \times \mathbf{G})&=-I \boldsymbol{\nabla} \wedge (\mathbf{F} \times \mathbf{G}) \\ &=-\frac{1}{{2}} {\left\langle{{I \stackrel{ \rightarrow }{\boldsymbol{\nabla}} (\mathbf{F} \times \mathbf{G})- I (\mathbf{F} \times \mathbf{G}) \stackrel{ \leftarrow }{\boldsymbol{\nabla}}}}\right\rangle}_{1} \\ &=\frac{1}{{2}} \left(-\stackrel{ \rightarrow }{\boldsymbol{\nabla}} \cdot (\mathbf{F} \wedge \mathbf{G})+ (\mathbf{F} \wedge \mathbf{G}) \cdot \stackrel{ \leftarrow }{\boldsymbol{\nabla}}\right) \\ &=\frac{1}{{2}} \left(-(\stackrel{ \rightarrow }{\boldsymbol{\nabla}} \cdot \mathbf{F}) \mathbf{G}+(\stackrel{ \rightarrow }{\boldsymbol{\nabla}} \cdot \mathbf{G}) \mathbf{F}+ \mathbf{F} (\mathbf{G} \cdot \stackrel{ \leftarrow }{\boldsymbol{\nabla}} )- \mathbf{G} (\mathbf{F} \cdot \stackrel{ \leftarrow }{\boldsymbol{\nabla}} )\right)\\ &=\frac{1}{{2}} \left(-(\boldsymbol{\nabla} \cdot \mathbf{F}) \mathbf{G}+(\boldsymbol{\nabla} \cdot \mathbf{G}) \mathbf{F}+ (\boldsymbol{\nabla} \cdot \mathbf{G} ) \mathbf{F}- (\boldsymbol{\nabla} \cdot \mathbf{F} ) \mathbf{G}\right)\end{aligned}

Here the gradients are all still acting on both \mathbf{F} and \mathbf{G}. Expanding this out by chain rule we have

\begin{aligned}2 \boldsymbol{\nabla} \times (\mathbf{F} \times \mathbf{G})=&-(\mathbf{F} \cdot \boldsymbol{\nabla}) \mathbf{G}-\mathbf{G} (\boldsymbol{\nabla} \cdot \mathbf{F})   +\mathbf{F} (\boldsymbol{\nabla} \cdot \mathbf{G})+(\mathbf{G} \cdot \boldsymbol{\nabla} ) \mathbf{F}  \\ \quad&+\mathbf{F} (\boldsymbol{\nabla} \cdot \mathbf{G} )+ (\mathbf{G} \cdot \boldsymbol{\nabla} ) \mathbf{F}  - (\mathbf{F} \cdot \boldsymbol{\nabla} ) \mathbf{G}- \mathbf{G} (\boldsymbol{\nabla} \cdot \mathbf{F} )\end{aligned}

or

\begin{aligned}\boldsymbol{\nabla} \times (\mathbf{F} \times \mathbf{G})&=\mathbf{F} (\boldsymbol{\nabla} \cdot \mathbf{G}) -(\mathbf{F} \cdot \boldsymbol{\nabla}) \mathbf{G}+(\mathbf{G} \cdot \boldsymbol{\nabla} ) \mathbf{F}  -\mathbf{G} (\boldsymbol{\nabla} \cdot \mathbf{F})\end{aligned} \hspace{\stretch{1}}(2.4)

With \mathbf{F} = \mathbf{B}/2, and \mathbf{G} = \mathbf{r}, we have

\begin{aligned}\boldsymbol{\nabla} \times \mathbf{A}&=\frac{1}{{2}}\mathbf{B} (\boldsymbol{\nabla} \cdot \mathbf{r}) -\frac{1}{{2}}(\mathbf{B} \cdot \boldsymbol{\nabla}) \mathbf{r}+\frac{1}{{2}}(\mathbf{r} \cdot \boldsymbol{\nabla} ) \mathbf{B}  -\frac{1}{{2}}\mathbf{r} (\boldsymbol{\nabla} \cdot \mathbf{B})\end{aligned}

We note that \boldsymbol{\nabla} \cdot \mathbf{r} = 3, and

\begin{aligned}(\mathbf{B} \cdot \boldsymbol{\nabla} ) \mathbf{r} &=B_k \partial_k x_m \mathbf{e}_m \\ &=B_k \delta_{km} \mathbf{e}_m \\ &=\mathbf{B}\end{aligned}

If \mathbf{B} is constant, we have

\begin{aligned}\boldsymbol{\nabla} \times \mathbf{A} = \frac{3\mathbf{B}}{2} - \frac{\mathbf{B}}{2} = \mathbf{B},\end{aligned} \hspace{\stretch{1}}(2.5)

as desired. Now, this would all likely be a lot more intuitive if one started with a constant \mathbf{B} and derived from that what the vector potential was. That’s probably also worth thinking about.
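For completeness, both properties \boldsymbol{\nabla} \cdot \mathbf{A} = 0 and \boldsymbol{\nabla} \times \mathbf{A} = \mathbf{B} for constant \mathbf{B} can be verified symbolically (a sketch assuming sympy is available; not part of the original notes):

```python
import sympy as sp

x, y, z, B1, B2, B3 = sp.symbols('x y z B1 B2 B3')
B = sp.Matrix([B1, B2, B3])   # constant field: components independent of x, y, z
r = sp.Matrix([x, y, z])
A = B.cross(r) / 2            # A = (B x r)/2

def curl(F):
    return sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                      sp.diff(F[0], z) - sp.diff(F[2], x),
                      sp.diff(F[1], x) - sp.diff(F[0], y)])

div_A = sp.diff(A[0], x) + sp.diff(A[1], y) + sp.diff(A[2], z)
print(sp.simplify(div_A))        # 0
print(sp.simplify(curl(A) - B))  # zero vector
```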

Problems

Problem 1.

Statement.

Solution.

TODO.

Problem 3.

Statement.

Solution.

TODO.

Problem 3.

Statement.

Solution.

TODO.

References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.


Hydrogen like atom, and Laguerre polynomials.

Posted by peeterjoot on November 29, 2010

[Click here for a PDF of this post with nicer formatting]

Motivation.

For the hydrogen atom, after some variable substitutions the radial part of the Schrödinger equation takes the form

\begin{aligned}\frac{d^2 R_l}{d\rho^2} + \frac{2}{\rho} \frac{d R_l}{d\rho} + \left( \frac{\lambda}{\rho} - \frac{l(l+1)}{\rho^2} - \frac{1}{{4}} \right) R_l = 0\end{aligned} \hspace{\stretch{1}}(1.1)

In [1] it is argued that the functions R_l are of the form

\begin{aligned}R_l = \rho^s L(\rho) e^{-\rho/2}\end{aligned} \hspace{\stretch{1}}(1.2)

where L is a polynomial in \rho, specifically Laguerre polynomials. Let’s look at some of those details a bit more closely.

Guts

The first part of the argument comes from considering the \rho \rightarrow \infty case, where Schrödinger’s equation is approximately

\begin{aligned}\frac{d^2 R_l}{d\rho^2} - \frac{1}{{4}} R_l \approx 0.\end{aligned} \hspace{\stretch{1}}(2.3)

This large \rho approximation has solutions e^{\pm \rho/2}, and we take the negative sign case as physically meaningful in order for the wave function to be normalizable.

Next it is argued that polynomial multiples of this will also be approximate solutions. Utilizing a monomial multiple of the decreasing exponential as a trial solution, let’s compute how this fits into the radial Schrödinger equation 1.1 above. Write

\begin{aligned}R_l = \rho^s e^{-\rho/2}\end{aligned} \hspace{\stretch{1}}(2.4)

The derivatives are

\begin{aligned}R_l' &= \rho^{s-1} \left( s -\frac{\rho}{2}\right) e^{-\rho/2} \\ R_l'' &=\rho^{s-2}\left( s (s-1) -s \rho +\frac{1}{4} \rho^2\right)e^{-\rho/2}\end{aligned}

and substitution yields

\begin{aligned}\rho^{s-2}e^{-\rho/2}\left((s - \rho) (s+1)+\lambda \rho- l(l+1)\right)\end{aligned} \hspace{\stretch{1}}(2.5)

There are two things that this can show. The first is that for \rho \rightarrow \infty this produces a polynomial with degree s-2 and s-1 terms multiplied by the exponential, and we have approximately

\begin{aligned}\rho^{s-1}e^{-\rho/2}(\lambda - s - 1)\end{aligned} \hspace{\stretch{1}}(2.6)

The s-1 term will dominate the polynomial, but the exponential dominates all, approaching zero as \rho \rightarrow \infty, just as the bare e^{-\rho/2} approximate solution does. This confirms that this polynomial multiplied exponential still has the desired behavior in the large \rho limit. Also observe that in the limit of small \rho we have approximately

\begin{aligned}\rho^{s-2}e^{-\rho/2}\left(s (s+1) - l(l+1)\right)\end{aligned} \hspace{\stretch{1}}(2.7)

Since \rho^{s-2} \rightarrow \infty as \rho \rightarrow 0, we require either a different trial solution, or s=l to have a normalizable wavefunction.

Before settling on s=l let’s compute the derivatives for a more general trial function, of the form 1.2, and substitute those. After a bit of computation we find

\begin{aligned}R_l' = \rho^{s-1} e^{-\rho/2} \left( \left( s - \frac{\rho}{2} \right) L + \rho L'\right)\end{aligned} \hspace{\stretch{1}}(2.8)

\begin{aligned}R_l'' = \rho^{s-2} e^{-\rho/2} \left(\left( s(s-1) - s \rho + \frac{\rho^2}{4} \right) L+\left( 2 s \rho -\rho^2 \right) L'+ \rho^2 L''\right)\end{aligned} \hspace{\stretch{1}}(2.9)

Putting these together and substitution back into 1.1 yields

\begin{aligned}0 = \rho^{s-2} e^{-\rho/2} \left(L \left( (s-\rho)(s+1) + \rho \lambda -l (l+1)\right)+\rho L' \left( 2 (s+1) -\rho \right)+ \rho^2 L''\right)\end{aligned} \hspace{\stretch{1}}(2.10)

In the \rho \rightarrow 0 limit, where the \rho^{s-2} terms dominate, 2.10 becomes

\begin{aligned}0 \approx\rho^{s-2} L \left(s(s+1) - l(l+1)\right)\end{aligned} \hspace{\stretch{1}}(2.11)

Again, this provides the s=l or s = -(l+1) possibilities from the text, and we discard s=-(l+1) due to non-normalizability. A side question. How does one solve integer equations like this?
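As for that side question, the condition s(s+1) = l(l+1) is just a quadratic in s, which a computer algebra system solves directly (a sympy sketch, not from the text):

```python
import sympy as sp

s = sp.Symbol('s')
l = sp.Symbol('l', positive=True)

# s(s + 1) = l(l + 1) is quadratic in s; the roots are s = l and s = -(l + 1)
sols = sp.solve(sp.Eq(s * (s + 1), l * (l + 1)), s)
print(sols)
```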

What remains?

With s=l killing off the \rho^{s-2} terms, what is our differential equation for L?

\begin{aligned}0 =\rho L''+L' \left( 2 (l+1) -\rho \right)+L \left( \lambda - (l+1) \right)\end{aligned} \hspace{\stretch{1}}(2.12)

Comparing this to [2] we have something pretty close to the stated differential equation for the Laguerre polynomial. Ours is of the form

\begin{aligned}0 =\rho L''+L' \left( m + 1 -\rho \right)+L n,\end{aligned} \hspace{\stretch{1}}(2.13)

where the differential equation in the wikipedia article has m=0. No change of variables involving a scalar multiplicative factor for \rho appears to be able to get it into that form, and I am guessing this is the differential equation for the associated Laguerre polynomial (something not stated in the wikipedia article).

Let’s derive the recurrence relations for the coefficients, and work out the first few such polynomials to compare. Plugging in a polynomial of the form

\begin{aligned}L = \sum_{k=0}^r a_k \rho^{k},\end{aligned} \hspace{\stretch{1}}(2.14)

where a_r is assumed to be non-zero. We also assume that this polynomial is not an infinite series (ruling out the infinite series with convergence arguments is covered nicely in the text).

Substituting into 2.13, we have

\begin{aligned}0 &= \sum_{k=0}^r a_k\left(k (k-1) \rho^{k-1}+ k (m+1) \rho^{k-1}- k \rho^{k}+ n \rho^{k}\right) \\ &=\sum_{{k'}=1}^r\rho^{{k'}-1}a_{k'}{k'} \left({k'}-1 +(m +1)\right)+\sum_{k=0}^r\rho^{k}a_k\left(- k + n \right) \\ &=\sum_{{k}=0}^{r-1}\rho^{k}a_{k+1}(k+1) \left(k +(m+1)\right)+\sum_{k=0}^r\rho^{k}a_k\left(- k + n \right) \\ &=\sum_{{k}=0}^{r-1}\rho^{k}\Bigl(a_{k+1} (k+1) (k + m + 1)+a_k (n -k)\Bigr)+a_r (n-r) \rho^{r}\end{aligned}

Observe first that since we have assumed a_r \ne 0, we must have r=n. Requiring termwise equality with zero gives us the recurrence relation between the coefficients, for k \in [0,n-1]

\begin{aligned}a_{k+1} = a_k \frac{k - n}{ (k+1) (k + m + 1) }.\end{aligned} \hspace{\stretch{1}}(2.15)

Repeated application shows the pattern for these coefficients, and with a_0=1 we have

\begin{aligned}a_1 &= -\frac{n-0}{(1)(m+1)} \\ a_2 &= \frac{(n-1)(n-0)}{(2)(1)(m+2)(m+1)} \\ a_3 &= -\frac{(n-2)(n-1)(n-0)}{(3)(2)(1)(m+3)(m+2)(m+1)},\end{aligned}

With

\begin{aligned}a_k &= \frac{(-1)^k (n-(k-1))\cdots(n-1)(n-0)}{k!(m+k)\cdots(m+2)(m+1)} \\ &= \frac{(-1)^k n! m!}{k!(m+k)!(n-(k-1) -1)!},\end{aligned}

Or

\begin{aligned}a_k = \frac{(-1)^k n! m!}{k!(m+k)!(n-k)!}.\end{aligned} \hspace{\stretch{1}}(2.16)
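The recurrence 2.15 and the closed form 2.16 can be cross-checked exactly with rational arithmetic (illustrative only; the n, m values below are arbitrary):

```python
from fractions import Fraction
from math import factorial

def coeffs_recurrence(n, m):
    # a_{k+1} = a_k (k - n)/((k + 1)(k + m + 1)), with a_0 = 1   (eq. 2.15)
    a = [Fraction(1)]
    for k in range(n):
        a.append(a[k] * Fraction(k - n, (k + 1) * (k + m + 1)))
    return a

def coeff_closed(n, m, k):
    # a_k = (-1)^k n! m! / (k! (m + k)! (n - k)!)                (eq. 2.16)
    return Fraction((-1)**k * factorial(n) * factorial(m),
                    factorial(k) * factorial(m + k) * factorial(n - k))

n, m = 5, 3
agree = all(coeffs_recurrence(n, m)[k] == coeff_closed(n, m, k)
            for k in range(n + 1))
print(agree)  # True
```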

Forming the complete series, we can get at the form of the associated Laguerre polynomials in the wikipedia article without too much trouble

\begin{aligned}L_n^m(\rho) &\propto 1 + \sum_{k=1}^n \frac{(-1)^k}{k!} \frac{n! m!}{(n-k)!(m+k)!} \rho^k \\ &\propto \frac{(n+m)!}{n!m!} + \sum_{k=1}^n \frac{(-1)^k}{k!} \frac{(n+m)!}{(n-k)!(m+k)!} \rho^k.\end{aligned}

Dropping the proportionality, this simplifies to just

\begin{aligned}L_n^m(\rho) = \sum_{k=0}^n \frac{(-1)^k}{k!} \binom{n+m}{m+k} \rho^k\end{aligned} \hspace{\stretch{1}}(2.17)

This isn’t necessarily the form of the polynomials used in the text. To see if that is the case, we need to check the normalization.

According to the wikipedia article we have for the associated Laguerre polynomials as defined above

\begin{aligned}\int_0^{\infty}\rho^m e^{-\rho} L_n^{m}(\rho)L_{n'}^{m}(\rho)d\rho = \frac{(n+m)!}{n!}\delta_{n,{n'}}\end{aligned} \hspace{\stretch{1}}(2.18)

whereas in the text we have

\begin{aligned}\int_0^{\infty}\rho^{2l + 2} e^{-\rho} \left( L_{n+l}^{2l + 1}(\rho) \right)^2 d\rho = \frac{2n ((n+l)!)^3}{(n-l-1)!}.\end{aligned} \hspace{\stretch{1}}(2.19)
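The wikipedia orthogonality relation 2.18 can be spot-checked numerically using the closed form 2.17 together with Gauss–Laguerre quadrature (a sketch assuming numpy; the specific n, n', m values are arbitrary):

```python
import numpy as np
from math import comb, factorial

def L(n, m, rho):
    # associated Laguerre polynomial, closed form 2.17
    return sum((-1)**k / factorial(k) * comb(n + m, m + k) * rho**k
               for k in range(n + 1))

# Gauss-Laguerre rule: sum w_i f(x_i) = integral_0^inf e^{-x} f(x) dx,
# exact here since the non-exponential part of the integrand is a polynomial.
nodes, weights = np.polynomial.laguerre.laggauss(40)

def inner(n1, n2, m):
    return float(np.sum(weights * nodes**m * L(n1, m, nodes) * L(n2, m, nodes)))

m = 3
print(round(inner(2, 2, m), 6))    # (n + m)!/n! = 5!/2! = 60.0
print(abs(inner(2, 4, m)) < 1e-8)  # True (orthogonality for n != n')
```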

It seems clear that two different notations are being used. In this physical context of wave functions we want the normalization defined by

\begin{aligned}1 = \int_0^\infty \rho^2 R_l^2(\rho) d\rho = \int_0^\infty \rho^{2l + 2} e^{-\rho} L^2(\rho) d\rho\end{aligned} \hspace{\stretch{1}}(2.20)

Using the wikipedia notation, with

\begin{aligned}L(\rho) = A L_n^{2l+1},\end{aligned} \hspace{\stretch{1}}(2.21)

we want

\begin{aligned}1 &= \int \rho^{2l + 2} e^{-\rho} L^2(\rho) d\rho \\ &= A^2 \sum_{a,b=0}^n \frac{(-1)^{a+b}}{a!b!} \binom{n+2l+1}{2l+1+a}\binom{n+2l+1}{2l+1+b}\int_0^\infty d\rho \rho^{2l + 2 + a + b} e^{-\rho} \end{aligned}

Since \int_0^\infty d\rho \rho^{a} e^{-\rho} = \Gamma(a+1) = a!, we have (writing m = 2l+1)

\begin{aligned}1 = A^2 \sum_{a,b=0}^n \frac{(-1)^{a+b}}{a!b!} \binom{n+m}{m+a}\binom{n+m}{m+b}(m + 1 + a + b)!\end{aligned} \hspace{\stretch{1}}(2.22)

It looks like there is probably some way to simplify this, and if so we’d be able to map the notation used (without definition) in the text to the notation used in the wikipedia article. If we don’t care about that, nor the specifics of the normalization constant, then there’s not too much more to say.

This is an ugly kind of place to leave things, but that’s enough for today. It’s too bad that the text isn’t just more explicit, and it’s probably best to refer elsewhere for any more detail. With no specifics about the functions themselves in any form, one has to do that anyways.

References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

[2] Wikipedia. Laguerre polynomials — wikipedia, the free encyclopedia, 2010. [Online; accessed 29-November-2010]. http://en.wikipedia.org/w/index.php?title=Laguerre_polynomials&oldid=386787645.


PHY356F: Quantum Mechanics I. Lecture 10 notes. Hydrogen atom.

Posted by peeterjoot on November 23, 2010

[Click here for a PDF of this post with nicer formatting]

Introduce the center of mass coordinates.

We’ll want to solve this using the formalism we’ve discussed. The general problem is a proton, positively charged, with a nearby negative charge (the electron).

Our equation to solve is

\begin{aligned}\left(-\frac{\hbar^2}{2 m_1} \boldsymbol{\nabla}_1^2-\frac{\hbar^2}{2 m_2} \boldsymbol{\nabla}_2^2\right)\bar{u}(\mathbf{r}_1, \mathbf{r}_2) + V(\mathbf{r}_1, \mathbf{r}_2)\bar{u}(\mathbf{r}_1, \mathbf{r}_2)=E \bar{u}(\mathbf{r}_1, \mathbf{r}_2).\end{aligned} \hspace{\stretch{1}}(6.123)

Here \left( -\frac{\hbar^2}{2 m_1} \boldsymbol{\nabla}_1^2 -\frac{\hbar^2}{2 m_2} \boldsymbol{\nabla}_2^2 \right) is the total kinetic energy term.
For hydrogen we can consider the potential to be the Coulomb potential energy function that depends only on \mathbf{r}_1 - \mathbf{r}_2. We can transform this using a center of mass transformation. Introduce the centre of mass coordinate and relative coordinate vectors

\begin{aligned}\mathbf{R} &= \frac{m_1 \mathbf{r}_1 + m_2 \mathbf{r}_2}{ m_1 + m_2 } \\ \mathbf{r} &= \mathbf{r}_1 - \mathbf{r}_2.\end{aligned} \hspace{\stretch{1}}(6.124)

The notation \boldsymbol{\nabla}_k^2 represents the Laplacian with respect to the position of the k’th particle, so that if \mathbf{r}_1 = (x_1, y_1, z_1) is the position of the first particle, the Laplacian for this is:

\begin{aligned}\boldsymbol{\nabla}_1^2=\frac{\partial^2}{\partial x_1^2}+\frac{\partial^2}{\partial y_1^2}+\frac{\partial^2}{\partial z_1^2}\end{aligned} \hspace{\stretch{1}}(6.126)

Here \mathbf{R} is the center of mass coordinate, and \mathbf{r} is the relative coordinate. With this transformation we can reduce the problem to a single coordinate PDE.

We set \bar{u}(\mathbf{r}_1, \mathbf{r}_2) = u(\mathbf{r}) U(\mathbf{R}) and E = E_{rel} + E_{cm}, and get

\begin{aligned}-\frac{\hbar^2}{2\mu} {\boldsymbol{\nabla}_{\mathbf{r}}}^2 u(\mathbf{r}) + V(\mathbf{r}) u(\mathbf{r}) = E_{rel} u(\mathbf{r})\end{aligned} \hspace{\stretch{1}}(6.127)

and

\begin{aligned}-\frac{\hbar^2}{2M} {\boldsymbol{\nabla}_{\mathbf{R}}}^2 U(\mathbf{R}) = E_{cm} U(\mathbf{R})\end{aligned} \hspace{\stretch{1}}(6.128)

where M = m_1 + m_2 is the total mass, and \mu = m_1 m_2/M is the reduced mass.
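As a quick numeric aside (not from the lecture; the mass values below are assumed approximate SI values), the reduced mass relation 1/\mu = 1/m_1 + 1/m_2 shows that for hydrogen \mu is very nearly the electron mass:

```python
# Reduced mass sanity check for hydrogen: mu = m1 m2 / (m1 + m2).
# Mass values are assumed (approximate CODATA SI values).
m_e = 9.10938e-31   # electron mass, kg
m_p = 1.67262e-27   # proton mass, kg

M = m_e + m_p             # total mass
mu = m_e * m_p / M        # reduced mass

# equivalent form: 1/mu = 1/m1 + 1/m2
assert abs(1.0 / mu - (1.0 / m_e + 1.0 / m_p)) < 1e-6 * (1.0 / mu)

# for hydrogen the reduced mass is within about 0.05% of the electron mass
print(mu / m_e)
```

This is why hydrogen energy levels computed with the bare electron mass are already accurate to a few parts in ten thousand.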

Aside: WHY do we care (slide of Hydrogen line spectrum shown)? This all started because when people looked at the spectrum of the hydrogen atom, a continuous spectrum was not found. Instead what was found was a set of quantized frequencies. All this abstract Hilbert space notation, with its bras and kets, is a way of representing observable phenomena.

Also note that we have the same sort of problems in electrodynamics and mechanics, so we are able to recycle this sort of work, either applying it in those problems later, or using those techniques here.

In Electromagnetism these are the problems involving the solution to

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{E} = 0\end{aligned} \hspace{\stretch{1}}(6.129)

or for

\begin{aligned}\mathbf{E} = - \boldsymbol{\nabla} \Phi\end{aligned} \hspace{\stretch{1}}(6.130)

\begin{aligned}\boldsymbol{\nabla}^2 \Phi = 0,\end{aligned} \hspace{\stretch{1}}(6.131)

where \mathbf{E} is the electric field and \Phi is the electric potential.

We need to solve 6.127 for u(\mathbf{r}). In spherical coordinates

\begin{aligned}-\frac{\hbar^2}{2\mu} \frac{1}{{r}} \frac{d^2}{dr^2} ( r R_l ) + \left( V(\mathbf{r}) + \frac{\hbar^2 l(l+1) }{2\mu r^2} \right) R_l = E R_l\end{aligned} \hspace{\stretch{1}}(6.132)

where

\begin{aligned}u(\mathbf{r}) = R_l(r) Y_{lm}(\theta, \phi)\end{aligned} \hspace{\stretch{1}}(6.133)

This all follows by the separation of variables technique that we’ll use here, in E and M, in PDEs, and so forth.

FIXME: picture drawn. \theta is measured down from the \mathbf{e}_3 axis to the position \mathbf{r}, and \phi is measured in the x,y plane, in the \mathbf{e}_1 to \mathbf{e}_2 orientation.

For the hydrogen atom, we have

\begin{aligned}V(\mathbf{r}) = - \frac{Z e^2}{r}\end{aligned} \hspace{\stretch{1}}(6.134)

Here is what this looks like.

We introduce

\begin{aligned}\rho &= \alpha r \\ \alpha &= \sqrt{\frac{-8 \mu E}{\hbar^2}} \\ \lambda &= \frac{2 \mu Z e^2}{\hbar^2 \alpha} \\ \frac{2 \mu (- E) }{\hbar^2 \alpha^2 } &= \frac{1}{{4}}\end{aligned} \hspace{\stretch{1}}(6.135)

and write

\begin{aligned}\frac{d^2 R_l}{d\rho^2} + \frac{2}{\rho} \frac{d R_l}{d\rho} + \left( \frac{\lambda}{\rho} - \frac{l(l+1)}{\rho^2} - \frac{1}{{4}} \right) R_l = 0\end{aligned} \hspace{\stretch{1}}(6.139)

Large \rho limit.

For \rho \rightarrow \infty, 6.139 becomes

\begin{aligned}\frac{d^2 R_l}{d\rho^2} - \frac{1}{{4}} R_l = 0\end{aligned} \hspace{\stretch{1}}(6.140)

which implies solutions of the form

\begin{aligned}R_l(\rho) = e^{\pm \rho/2}\end{aligned} \hspace{\stretch{1}}(6.141)

but keep R_l(\rho) = e^{-\rho/2} and note that R_l(\rho) = F(\rho)e^{-\rho/2} is also a solution in the limit of \rho \rightarrow \infty, where F(\rho) is a polynomial.

Let F(\rho) = \rho^s L(\rho) where L(\rho) = a_0 + a_1 \rho + \cdots a_\nu \rho^\nu + \cdots.

Small \rho limit.

We also want to consider the small \rho limit, and piece together the information that we find. Think about the following. The small \rho \rightarrow 0 or r \rightarrow 0 limit gives

\begin{aligned}\frac{d^2 R_l}{d\rho^2} - \frac{l(l+1)}{\rho^2} R_l = 0\end{aligned} \hspace{\stretch{1}}(6.142)

\paragraph{Question:} Is this correct?

Not always. We will also think about the l=0 case later (where the \lambda/\rho term would probably need to be retained.)

We need:

\begin{aligned}\frac{d^2 R_l}{d\rho^2} + \frac{2}{\rho} \frac{d R_l}{d\rho} - \frac{l(l+1)}{\rho^2} R_l = 0\end{aligned} \hspace{\stretch{1}}(6.143)

Instead of using 6.142 as in the text, we substitute R_l = \rho^s into the above to find

\begin{aligned}s(s-1) \rho^{s-2} + 2 s \rho^{s-2} - l(l+1) \rho^{s-2} &= 0 \\ \left( s(s-1) + 2 s - l(l+1) \right) \rho^{s-2} &= 0\end{aligned} \hspace{\stretch{1}}(6.144)

For this equality to hold for all \rho we need

\begin{aligned}s(s-1) + 2 s - l(l+1) = 0\end{aligned} \hspace{\stretch{1}}(6.146)

This has solutions s = l and s = -(l+1), and we need the non-negative choice s = l for normalizability at the origin, which implies

\begin{aligned}R_l(\rho) = \rho^l L(\rho) e^{-\rho/2}.\end{aligned} \hspace{\stretch{1}}(6.147)

Now we need to find what restrictions we must have on L(\rho). Recall that we have L(\rho) = \sum a_\nu \rho^\nu. Substitution into 6.139 gives

\begin{aligned}\rho \frac{d^2 L}{d\rho^2} + \left( 2(l+1) - \rho \right) \frac{d L}{d \rho} + (\lambda - l - 1) L = 0\end{aligned} \hspace{\stretch{1}}(6.148)

We get

\begin{aligned}A_0 + A_1 \rho + \cdots A_\nu \rho^\nu + \cdots = 0\end{aligned} \hspace{\stretch{1}}(6.149)

For this to be valid for all \rho,

\begin{aligned}a_{\nu+1} \left( (\nu+1)(\nu+ 2l + 2)\right)-a_{\nu} \left( \nu - \lambda + l + 1\right)=0\end{aligned} \hspace{\stretch{1}}(6.150)

or

\begin{aligned}\frac{a_{\nu+1}}{ a_{\nu} } =\frac{ \nu - \lambda + l + 1 }{ (\nu+1)(\nu+ 2l + 2) }\end{aligned} \hspace{\stretch{1}}(6.151)

For large \nu we have

\begin{aligned}\frac{a_{\nu+1}}{ a_{\nu} } =\frac{1}{{\nu+1}}\rightarrow \frac{1}{{\nu}}\end{aligned} \hspace{\stretch{1}}(6.152)

Recall that for the exponential Taylor series we have

\begin{aligned}e^\rho = 1 + \rho + \frac{\rho^2}{2!} + \cdots\end{aligned} \hspace{\stretch{1}}(6.153)

for which we have

\begin{aligned}\frac{a_{\nu+1}}{a_\nu} \rightarrow \frac{1}{{\nu}}\end{aligned} \hspace{\stretch{1}}(6.154)

So an un-terminated series L(\rho) behaves like e^\rho, and we would have

\begin{aligned}R_l(\rho) = \rho^l L(\rho) e^{-\rho/2} \rightarrow \rho^l e^\rho e^{-\rho/2} = \rho^l e^{\rho/2}\end{aligned} \hspace{\stretch{1}}(6.155)

This is divergent, so for normalizable solutions we require L(\rho) to be a polynomial of a finite number of terms.

The polynomial L(\rho) must stop at \nu = n', and we must have

\begin{aligned}a_{\nu+1} = a_{n' +1} = 0\end{aligned} \hspace{\stretch{1}}(6.156)

\begin{aligned}a_{n'} \ne 0\end{aligned} \hspace{\stretch{1}}(6.157)

From 6.150 we have

\begin{aligned}a_{n'} \left( n' - \lambda + l + 1\right)=0\end{aligned} \hspace{\stretch{1}}(6.158)

so we require

\begin{aligned}n' = \lambda - l - 1\end{aligned} \hspace{\stretch{1}}(6.159)

Let \lambda = n, an integer, with n' = 0, 1, 2, \cdots, so that n = n' + l + 1. For n = 1, 2, \cdots this gives

\begin{aligned}l \le n-1\end{aligned} \hspace{\stretch{1}}(6.160)
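The truncation condition can be checked directly from the recurrence 6.151. Here’s a small sketch (with an arbitrary a_0 = 1) that generates the series coefficients and shows they vanish past \nu = n' = \lambda - l - 1 when \lambda = n is an integer:

```python
def series_coeffs(lam, l, nmax=10):
    """Coefficients a_0 .. a_nmax from a_{nu+1}/a_nu = (nu - lam + l + 1)/((nu+1)(nu + 2l + 2))."""
    a = [1.0]  # arbitrary normalization a_0 = 1
    for nu in range(nmax):
        a.append(a[-1] * (nu - lam + l + 1) / ((nu + 1.0) * (nu + 2 * l + 2)))
    return a

# lambda = n = 3, l = 1, so the last nonzero coefficient is a_{n'} with n' = n - l - 1 = 1
a = series_coeffs(lam=3, l=1)
print(a[:4])   # a_2, a_3, ... are identically zero
```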

If

\begin{aligned}\lambda = n = \frac{2 \mu Z e^2 }{\hbar^2 \alpha}\end{aligned} \hspace{\stretch{1}}(6.161)

we have

\begin{aligned}E = E_n = - \frac{Z^2 e^2 }{2 a_0} \frac{1}{{n^2}}\end{aligned} \hspace{\stretch{1}}(6.162)

where a_0 = \hbar^2/\mu e^2 is the Bohr radius, and \alpha = \sqrt{-8 \mu E/\hbar^2}. In the lecture m was originally used for the reduced mass. I’ve switched to \mu earlier so that it cannot be mixed up with the azimuthal quantum number m associated with L_z Y_{lm} = m \hbar Y_{lm}.
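To connect this back to the observed line spectrum, here’s a numeric sketch of E_n (the Rydberg energy and h c values below are assumed constants, not taken from the text), including the photon from the n=2 to n=1 transition:

```python
RYDBERG_EV = 13.6057   # assumed hydrogen ground state binding energy, eV
HC_EV_NM = 1239.84     # assumed value of h*c, eV * nm

def E_n(n, Z=1):
    """Bound state energy E_n = -Z^2 e^2/(2 a_0 n^2), expressed in eV."""
    return -RYDBERG_EV * Z**2 / n**2

# photon emitted in the n=2 -> n=1 transition (Lyman-alpha)
dE = E_n(2) - E_n(1)            # about 10.2 eV
wavelength_nm = HC_EV_NM / dE   # about 121.5 nm, ultraviolet
print(dE, wavelength_nm)
```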

PICTURE ON BOARD. Energy level transitions on 1/n^2 graph with differences between n=2 to n=1 shown, and photon emitted as a result of the n=2 to n=1 transition.

From Chapter 4 and the story of the spherical harmonics, for a given l, the quantum number m varies between -l and l in integer steps. The radial part of the solution of this separtion of variables problem becomes

\begin{aligned}R_l = \rho^l L(\rho) e^{-\rho/2}\end{aligned} \hspace{\stretch{1}}(6.163)

where the functions L(\rho) are the Laguerre polynomials, and our complete wavefunction is

\begin{aligned}u_{nlm}(r, \theta, \phi) = R_l(\rho) Y_{lm}(\theta, \phi)\end{aligned} \hspace{\stretch{1}}(6.164)

\begin{aligned}n &= 1, 2, \cdots \\ l &= 0, 1, 2, \cdots, n-1 \\ m &= -l, -l+1, \cdots 0, 1, 2, \cdots, l-1, l\end{aligned} \hspace{\stretch{1}}(6.165)
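The allowed (n, l, m) combinations in 6.165 can be enumerated directly; counting them reproduces the familiar n^2 degeneracy of each energy level (ignoring spin). A minimal sketch:

```python
def states(n):
    """All (n, l, m) triples allowed for principal quantum number n."""
    return [(n, l, m) for l in range(n) for m in range(-l, l + 1)]

# sum over l of (2l+1) equals n^2
for n in (1, 2, 3, 4):
    assert len(states(n)) == n**2
print(states(2))
```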

Note that for n=1, l=0, R_{10} \propto e^{-r/a_0}, as graphed here.

Posted in Math and Physics Learning. | Tagged: , , , , , , , , , | Leave a Comment »

Believed to be typos in Desai’s QM Text

Posted by peeterjoot on November 19, 2010

[Click here for a PDF of this post with nicer formatting]

Vatche says that the root cause of what I’ve identified as a typo is in some cases incorrect, and that he’s going through the text carefully himself too.

Chapter I.

\begin{itemize}
\item Page 7. Text before (1.43). \alpha instead of a used.
\item Page 19. Equation (1.122). \daggers omitted after first equality.
\end{itemize}

Chapter II.

\begin{itemize}
\item Page 40. Text before (2.137). Reference to equation (2.133) should be (2.135)
\item Page 53. Is the “Also show that” here correct? I get a different answer.
\end{itemize}

Chapter III.

\begin{itemize}
\item Page 61. Equation (3.51). 1/\hbar missing.
\item Page 66. Equation (3.92). -(d/dt {\langle {\alpha} \rvert}) {\lvert {\alpha} \rangle} should be +{\lvert {\alpha} \rangle} d/dt {\langle {\alpha} \rvert}.
\item Page 66. Equation (3.93). H on wrong side of {\langle {\alpha} \rvert}
\end{itemize}

Chapter IV.

\begin{itemize}
\item Page 81. Equation (4.52). Should be -2\alpha in the exponent.
\item Page 82. Equation (4.67). 2\alpha in the denominator of the normalization should be \alpha'/\pi.
\item Page 83. Equation (4.74). A normalized wave function isn’t required for the discussion, but if that was intended, a 1/\sqrt{2\pi} factor is missing.
\item Page 86. Equation (4.99). Extra brace in the exponent.
\item Page 87. Equation (4.106). Extra brace in the exponent.
\item Page 89. Equation (4.129), (4.130). \lambda - m^2/... should be \lambda + ...
\item Page 93. Equation (4.169). conjugation missing for Y_{lm}. Y_{l'm'} is missing prime on the l index.
\item Page 95. Second line of text. Language choice? “We now implement”. perhaps utilize would be better?
\item Page 95. Text before (4.193). i is in bold.
\item Page 96. Text before (4.196). i is in bold.
\item Page 97. (4.205). i is in bold.
\item Page 97. (4.207-209). \mathbf{i}, and \mathbf{j}s aren’t in bold like \mathbf{k}
\item Page 101. (4.239-240). The approach here is unclear. FIXME: incorporate lecture notes from class that did this using braket notation.
\item Page 102. (4.248-249). Commas missing to separate l, and m\pm 1 in the kets.
\end{itemize}

Chapter V.

\begin{itemize}
\item Page 113. (5.86). One \sigma isn’t in bold.
\item Page 114. (5.100). \chi is in bold.
\item Page 115. Text before (5.106). \alpha in bold.
\item Page 118. Switch of notation in problem 5 for ensemble averages. [S_i] used instead of \left\langle{{S_i}}\right\rangle_{\text{av}}.
\end{itemize}

Chapter VI.

\begin{itemize}
\item Page 120. \phi in bold. A not in bold.
\item Page 123. (6.26). 1/i \hbar factor missing on RHS.
\item Page 124. Text before (6.37). You say canonical momenta P_k, but call these mechanical momenta on prev page.
\item Page 125. (6.41). Some \psis are in bold.
\item Page 126. (6.49). There’s no mention that \mathbf{B} is constant, leaving it unclear how the gauge condition and how the curl of \mathbf{A} reproduces \mathbf{B}. This would also help clarify how you are able to write \boldsymbol{\mu} \cdot \mathbf{B} = \mathbf{B} \cdot \boldsymbol{\mu}.
\item Page 128. (6.65). \boldsymbol{\mu} \cdot \mathbf{L} should be \boldsymbol{\mu} \cdot \mathbf{B}.
\item Page 129. (6.75). \boldsymbol{\mu} \cdot \mathbf{L} should be \boldsymbol{\mu} \cdot \mathbf{B}.
\end{itemize}

Posted in Math and Physics Learning. | Tagged: , , , , | 1 Comment »

stack layout on amd64

Posted by peeterjoot on November 19, 2010

I’ve got a raw stack dump to deal with in a stack corruption issue, but didn’t know the stack layout for the Linux amd64 ABI. On AIX we find nice pairs of stackframe-address/instruction-address values, and I kind of expected to see that on Linux too, but didn’t. To see how this works, I compiled a simple program and walked through a call in the debugger.

Prior to a call instruction, I’ve got the following in my stack:

(gdb) x/5gx $rsp
0x7fffffffcac8: 0x0000000000400845      0x00002aaaaabc7000
0x7fffffffcad8: 0x0000000000400834      0x00007fff00000007
0x7fffffffcae8: 0x00007fff00000008

And the program is sitting at the following location:

(gdb) p /x $rip
$3 = 0x4006fc
(gdb) disassemble
Dump of assembler code for function _Z4bar1v:
0x00000000004006fc <_Z4bar1v+0>:        sub    $0x8,%rsp
0x0000000000400700 <_Z4bar1v+4>:        callq  0x40070c <_Z4bar2v>
0x0000000000400705 <_Z4bar1v+9>:        add    $0x8,%rsp
0x0000000000400709 <_Z4bar1v+13>:       retq
End of assembler dump.

One more instruction step gets me to the call point:

(gdb) stepi
0x0000000000400700 in bar1() ()

And my stack contents now contain:

(gdb) p $rsp
$4 = (void *) 0x7fffffffcac0
(gdb) x/5gx $rsp
0x7fffffffcac0: 0x00002aaa00000002      0x0000000000400845
0x7fffffffcad0: 0x00002aaaaabc7000      0x0000000000400834
0x7fffffffcae0: 0x00007fff00000007

The stack pointer had been decremented. Is this a stack frame allocated for the bar1() function itself, or a decrement in preparation for a save of the return address value?

One more instruction step gets me into the new function. This appears to automatically decrement my stack pointer $rsp by 8 more bytes, and I now have the return address (0x400705) in the first 64 bits on the stack:

(gdb) stepi
0x000000000040070c in bar2() ()
(gdb) p /x $rip
$5 = 0x40070c

(gdb) x/5gx $rsp
0x7fffffffcab8: 0x0000000000400705      0x00002aaa00000002
0x7fffffffcac8: 0x0000000000400845      0x00002aaaaabc7000
0x7fffffffcad8: 0x0000000000400834

Does the Linux (amd64) ABI require every function to allocate a stack frame, even if it doesn’t use it? I see this in bar1()’s code despite compiling with -O. It appears that the ret instruction also pops from the stack, incrementing $rsp as it branches to that address. It also looks like what we require to find our return address is the pre-decrement value of $rsp (or to calculate that), so to navigate the stack manually in a corrupted stack dump we’ve got to disassemble to know where to look for the next return address. That’s much messier than on AIX, where we can look for the longest chain of stackframe/saved-link-address pairs to attempt to find a pre-corruption point in the code, to try to pinpoint where things went wrong.

Posted in C/C++ development and debugging. | Tagged: , , , , | Leave a Comment »

PHY356F: Quantum Mechanics I. Lecture 9 — Bound states.

Posted by peeterjoot on November 16, 2010

My notes from Lecture 9, November 16, 2010. Taught by Prof. Vatche Deyirmenjian.

[Click here for a PDF of this post with nicer formatting]

Motivation. Motivation for today’s physics is Solar Cell technology and quantum dots.

Problem:

What are the eigenvalues and eigenvectors for an electron trapped in a 1D potential well?

MODEL.

Quantum state \lvert {\Psi} \rangle describes the particle. What V(X) should we choose? Try a quantum well with infinite barriers first.

These spherical quantum dots are like quantum wells. When you trap electrons at this scale you’ll get energy quantization.

VISUALIZE.

Draw a picture for V(X) with infinite spikes at \pm a. (ie: figure 8.1 in the text).

SOLVE.

First task is to solve the time independent Schr\”{o}dinger equation.

\begin{aligned}H {\lvert {\Psi} \rangle} = E {\lvert {\Psi} \rangle}\end{aligned} \hspace{\stretch{1}}(5.98)

derivable from

\begin{aligned}H {\lvert {\Psi} \rangle} = i \hbar \frac{\partial {}}{\partial {t}} {\lvert {\Psi} \rangle}\end{aligned} \hspace{\stretch{1}}(5.99)

In the position representation, we project {\langle {x} \rvert} onto H {\lvert {\Psi} \rangle} and solve for \left\langle{{x}} \vert {{\Psi}}\right\rangle = \Psi(x). For the problems in Chapter 8,

\begin{aligned}H = \frac{\mathbf{P}^2}{2m} + V(X,Y,Z),\end{aligned} \hspace{\stretch{1}}(5.100)

where

\begin{aligned}P &= \text{momentum operator} \\ X &= \text{position operator} \\ m &= \text{electron mass}\end{aligned}

We should be careful to be strict about the notation, and not interchange the operators and their specific representations (ie: not interchanging “little-x” and “big-x”) as we see in the text in this chapter.

Here the potential energy operator V(X,Y,Z) is time independent.

If i \hbar \frac{d{\lvert {\Psi} \rangle}}{dt} = H {\lvert {\Psi} \rangle} and H is time independent then {\lvert {\Psi} \rangle} = {\lvert {u} \rangle} e^{-i E t/\hbar} implies

\begin{aligned}i \hbar \frac{ -i E }{\hbar} {\lvert {u} \rangle} e^{-i E t/\hbar} = H {\lvert {u} \rangle} e^{-i E t/\hbar}\end{aligned}

or

\begin{aligned}E {\lvert {u} \rangle} = H {\lvert {u} \rangle}\end{aligned} \hspace{\stretch{1}}(5.101)

Here E is the energy eigenvalue, and {\lvert {u} \rangle} is the energy eigenstate. Our differential equation now becomes

\begin{aligned}-\frac{\hbar^2 }{2m} \frac{d^2 u(x)}{dx^2} + V(x) u(x) = E u(x)\end{aligned} \hspace{\stretch{1}}(5.102)

where V(x) = 0 for {\left\lvert{x}\right\rvert} < a. We won't find anything like this for real, but this is our first approximation to the quantum dot.

Our differential equation in the well is now

\begin{aligned}-\frac{\hbar^2 }{2m} \frac{d^2 u(x)}{dx^2} = E u(x)\end{aligned} \hspace{\stretch{1}}(5.103)

or with \alpha = \sqrt{2m E/\hbar^2}

\begin{aligned}\frac{d^2 u(x)}{dx^2} = -\frac{2 m E}{\hbar^2} u(x) = - \alpha^2 u(x)\end{aligned} \hspace{\stretch{1}}(5.104)

Our solution for {\left\lvert{x}\right\rvert} < a is then

\begin{aligned}u(x) = A \cos \alpha x + B \sin\alpha x\end{aligned} \hspace{\stretch{1}}(5.105)

and for {\left\lvert{x}\right\rvert} > a we have u(x) = 0 since V(x) = \infty.

Setting u(a) = u(-a) = 0 we have

\begin{aligned}A \cos \alpha a + B \sin\alpha a &= 0 \\ A \cos \alpha a - B \sin\alpha a &= 0\end{aligned}

Type I.

B=0, A \cos\alpha a = 0. For A \ne 0 we must have

\begin{aligned}\cos \alpha a = 0\end{aligned}

or \alpha a = n \frac{\pi}{2}, where n = 1, 3, 5, ..., so our solution is

\begin{aligned}u(x) = A \cos \left( \frac{n \pi}{2 a} x \right) \end{aligned} \hspace{\stretch{1}}(5.106)

Type II.

A=0, B \sin\alpha a = 0. For B \ne 0 we must have

\begin{aligned}\sin \alpha a = 0\end{aligned}

or \alpha a = n \frac{\pi}{2}, where n = 2, 4, 6, ..., so our solution is

\begin{aligned}u(x) = B \sin \left( \frac{n \pi}{2 a} x \right) \end{aligned} \hspace{\stretch{1}}(5.107)

Via determinant

We could also write

\begin{aligned}\begin{bmatrix}\cos \alpha a & \sin\alpha a \\ \cos \alpha a & - \sin\alpha a \end{bmatrix}\begin{bmatrix}A \\ B\end{bmatrix}= 0\end{aligned}

and then must have zero determinant, or

\begin{aligned}-2 \sin\alpha a \cos\alpha a = -\sin 2 \alpha a = 0\end{aligned} \hspace{\stretch{1}}(5.108)

so we must have

\begin{aligned}2 \alpha a = n \pi\end{aligned}

or

\begin{aligned}\alpha = \frac{n \pi}{2a}\end{aligned}

regardless of A and B. We can then determine the solutions 5.106, and 5.107 simply by noting that this value for \alpha kills off either the sine or cosine terms of 5.105 depending on whether n is even or odd.
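A quick numeric check of this (not from the lecture): the determinant of the matching matrix reduces to -\sin 2\alpha a, which vanishes exactly when \alpha a is an integer multiple of \pi/2. A sketch:

```python
import math

def matching_det(alpha_a):
    """Determinant of [[cos, sin], [cos, -sin]] evaluated at alpha*a."""
    return -math.cos(alpha_a) * math.sin(alpha_a) - math.sin(alpha_a) * math.cos(alpha_a)

# vanishes at alpha a = n pi / 2 ...
for n in range(1, 6):
    assert abs(matching_det(n * math.pi / 2)) < 1e-12

# ... and equals -sin(2 alpha a) in general
x = 0.3
assert abs(matching_det(x) - (-math.sin(2 * x))) < 1e-12
```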

CHECK.

\begin{aligned}u_n(x) &= A \cos \left( \frac{n \pi}{2 a} x \right) \\ u_n(x) &= B \sin \left( \frac{n \pi}{2 a} x \right) \end{aligned}

satisfy the time independent Schr\”{o}dinger equation, and the corresponding eigenvalues follow from

\begin{aligned}\alpha = \sqrt{\frac{2 m E}{\hbar^2}},\end{aligned}

or

\begin{aligned}E = \frac{\hbar^2 \alpha^2}{2m} = \frac{\hbar^2}{2m} \left( \frac{n \pi}{2a} \right)^2 \end{aligned}

for n = 1, 2, 3, \cdots.
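To tie this back to the quantum dot motivation, here’s a numeric sketch (SI constant values assumed, and the well half-width a = 1\,\text{nm} chosen arbitrarily) of the level spacing for a trapped electron; note the n^2 scaling:

```python
import math

HBAR = 1.0546e-34   # assumed value, J s
M_E = 9.10938e-31   # assumed electron mass, kg
EV = 1.60218e-19    # J per eV
a = 1e-9            # half-width of the well (2 nm wide), m

def E_n(n):
    """Infinite well energies E_n = (hbar^2 / 2m) (n pi / 2a)^2, in joules."""
    return (HBAR**2 / (2 * M_E)) * (n * math.pi / (2 * a))**2

print(E_n(1) / EV)                        # roughly 0.09 eV for a well this size
assert abs(E_n(3) / E_n(1) - 9) < 1e-9    # energies scale as n^2
```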

On the derivative of u at the boundaries

Integrating

\begin{aligned}-\frac{\hbar^2 }{2m} \frac{d^2 u(x)}{dx^2} + V(x) u(x) = E u(x),\end{aligned} \hspace{\stretch{1}}(5.109)

over [a-\epsilon,a+\epsilon] we have

\begin{aligned}-\frac{\hbar^2 }{2m} &\int_{a-\epsilon}^{a+\epsilon}\frac{d^2 u(x)}{dx^2} dx+ \int_{a-\epsilon}^{a+\epsilon}V(x) u(x) dx = \int_{a-\epsilon}^{a+\epsilon}E u(x) dx \\ -\frac{\hbar^2 }{2m} &\left.\frac{du}{dx}\right\vert_{a-\epsilon}^{a+\epsilon} + 0 = 0\end{aligned} \hspace{\stretch{1}}(5.110)

which gives us

\begin{aligned}\left.\frac{du}{dx}\right\vert_{a + \epsilon}-\left.\frac{du}{dx}\right\vert_{a - \epsilon} = 0\end{aligned} \hspace{\stretch{1}}(5.112)

or

\begin{aligned}\left.\frac{du}{dx}\right\vert_{a + \epsilon}&=\left.\frac{du}{dx}\right\vert_{a - \epsilon} \end{aligned} \hspace{\stretch{1}}(5.113)

We can infer how the derivative behaves over the potential discontinuity: in the limit where \epsilon \rightarrow 0 we must have continuity of the wavefunction despite the potential discontinuity.

From this sort of analysis, which is potential dependent, we see that for this infinite well potential the derivative must be continuous at the boundary.

Problem:

non-infinite step well potential.

Given a zero potential in the well {\left\lvert{x}\right\rvert} < a

\begin{aligned}-\frac{\hbar^2 }{2m} \frac{d^2 u(x)}{dx^2} + 0 = E u(x),\end{aligned} \hspace{\stretch{1}}(5.114)

and outside of the well {\left\lvert{x}\right\rvert} > a

\begin{aligned}-\frac{\hbar^2 }{2m} \frac{d^2 u(x)}{dx^2} + V_0 u(x) = E u(x)\end{aligned} \hspace{\stretch{1}}(5.115)

Inside of the well, we have the solution worked out previously, with \alpha = \sqrt{2m E/\hbar^2}

\begin{aligned}u(x) &= A \cos\alpha x + B \sin\alpha x \end{aligned} \hspace{\stretch{1}}(5.116)

Then we have outside of the well the same form

\begin{aligned}-\frac{\hbar^2 }{2m} \frac{d^2 u(x)}{dx^2} = (E - V_0 )u(x) \end{aligned} \hspace{\stretch{1}}(5.117)

With \beta = \sqrt{ 2m (V_0 - E)/\hbar^2}, this is

\begin{aligned}\frac{d^2 u(x)}{dx^2} = \beta^2 u(x) \end{aligned} \hspace{\stretch{1}}(5.118)

If V_0 - E > 0, we have V_0 > E, and the states are “bound” or “localized” in the well.

Our solutions for this V_0 > E case are then

\begin{aligned}u(x) &= D e^{\beta x} \\ u(x) &= C e^{-\beta x}\end{aligned} \hspace{\stretch{1}}(5.119)

for x \le -a, and x \ge a respectively.

Question: Why can we not have

\begin{aligned}u(x) = D e^{\beta x} + C e^{-\beta x}\end{aligned} \hspace{\stretch{1}}(5.121)

for x \le -a?

Answer: As x \rightarrow -\infty we would then have

\begin{aligned}u(x) \rightarrow C e^{-\beta x} \rightarrow \infty\end{aligned}

This is a non-physical solution, and we discard it based on our normalization requirement.

Our total solution, in the regions x < -a, {\left\lvert{x}\right\rvert} < a, and x > a respectively, is

\begin{aligned}u_1(x) &= D e^{\beta x} \\ u_2(x) &= A \cos\alpha x + B \sin\alpha x \\ u_3(x) &= C e^{-\beta x}\end{aligned}

To find the coefficients, set u_1(-a) = u_2(-a), u_2(a) = u_3(a), u_1'(-a) = u_2'(-a), u_2'(a) = u_3'(a), and NORMALIZE u(x).
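Carrying out that matching for the even-parity (cosine) solutions leads to the transcendental condition \alpha \tan \alpha a = \beta; that condition isn’t derived in these notes, it’s the standard textbook result, so take it as an assumption here. In dimensionless form, with z = \alpha a and z_0 = a\sqrt{2 m V_0}/\hbar, the ground state can be found by bisection. A sketch:

```python
import math

z0 = 4.0   # dimensionless well strength z0 = a sqrt(2 m V0)/hbar, chosen arbitrarily

def f(z):
    # even-parity matching condition: z tan z = sqrt(z0^2 - z^2), with z = alpha*a
    return z * math.tan(z) - math.sqrt(z0 * z0 - z * z)

# for this z0 the ground state root lies in (0, pi/2); bisect on the sign change
lo, hi = 1e-6, math.pi / 2 - 1e-6
assert f(lo) < 0 < f(hi)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(mid) > 0:
        hi = mid
    else:
        lo = mid
z = 0.5 * (lo + hi)

# a bound state: 0 < E < V0, since E/V0 = (z/z0)^2
print(z, (z / z0)**2)
```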

Now, how about the V_0 < E case? Outside the well (for example x < -a), with k = \sqrt{2 m (E - V_0)/\hbar^2}, our equation is

\begin{aligned}\frac{d^2 u(x)}{dx^2} = - \frac{2m}{\hbar^2} (E - V_0) u(x) = - k^2 u(x)\end{aligned} \hspace{\stretch{1}}(5.122)

We no longer have quantized energy for such a solution. These correspond to the “unbound” or “continuum” states. Even though we do not have quantized energy we still have quantum effects. Our solution becomes

\begin{aligned}u_1(x) &= C_2 e^{i k x} +D_2 e^{-i k x}  \\ u_2(x) &= A e^{i \alpha x} +B e^{-i \alpha x}  \\ u_3(x) &= C_3 e^{i k x} \end{aligned}

Question. Why is there no e^{-i k x} term in u_3(x)?

Answer. We could include one, but it is not physically relevant here. This is because we associate e^{ikx} with an incoming wave, with a reflected e^{-ikx} wave in the x < -a interval, both e^{\pm i \alpha x} terms in the {\left\lvert{x}\right\rvert} < a region, and only a transmitted e^{i k x} wave in the x > a region.

FIXME: scan picture: 9.1 in my notebook.

Observe that this is not normalizable as is. We require “delta-function” normalization. What we can do is ask about current densities. How much passes through the barrier, and so forth.

Note to self. We probably really want to consider a wave packet of states, something like:

\begin{aligned}\Psi_1(x) &= \int dk f_1(k) e^{i k x} \\ \Psi_2(x) &= \int d\alpha f_2(\alpha) e^{i \alpha x} \\ \Psi_3(x) &= \int dk f_3(k) e^{i k x}\end{aligned}

Then we’d have something that we can normalize. Play with this later.

Setup for next week’s hydrogen atom lecture.

We’ll want to solve this using the formalism we’ve discussed. The general problem is a proton, positively charged, with a nearby negative charge (the electron).

Our equation to solve is

\begin{aligned}\left(-\frac{\hbar^2}{2 m_1} \boldsymbol{\nabla}_1^2-\frac{\hbar^2}{2 m_2} \boldsymbol{\nabla}_2^2\right)u(\mathbf{r}_1, \mathbf{r}_2) + V(\mathbf{r}_1, \mathbf{r}_2)u(\mathbf{r}_1, \mathbf{r}_2)=E u(\mathbf{r}_1, \mathbf{r}_2).\end{aligned} \hspace{\stretch{1}}(6.123)

Here \left( -\frac{\hbar^2}{2 m_1} \boldsymbol{\nabla}_1^2 -\frac{\hbar^2}{2 m_2} \boldsymbol{\nabla}_2^2 \right) is the total kinetic energy term. For hydrogen we can consider the potential to be the Coulomb potential energy function that depends only on \mathbf{r}_1 - \mathbf{r}_2. We can transform this using a center of mass transformation. Introduce the centre of mass coordinate and relative coordinate vectors

\begin{aligned}\mathbf{R} &= \frac{m_1 \mathbf{r}_1 + m_2 \mathbf{r}_2}{ m_1 + m_2 } \\ \mathbf{r} &= \mathbf{r}_1 - \mathbf{r}_2\end{aligned} \hspace{\stretch{1}}(6.124)

With this transformation we can reduce the problem to a single coordinate PDE.

Posted in Math and Physics Learning. | Tagged: , , , , , , , | Leave a Comment »

My submission for PHY356 (Quantum Mechanics I) Problem Set II.

Posted by peeterjoot on November 16, 2010

[Click here for a PDF of this post with nicer formatting]

Problem 1.

A particle of mass m is free to move along the x-direction such that V(X)=0. Express the time evolution operator U(t,t_0) defined by Eq. (2.166) using the momentum eigenstates {\lvert {p} \rangle} with delta-function normalization. Find {\langle {x} \rvert} U(t,t_0) {\lvert {x'} \rangle}, where {\lvert {x} \rangle} and {\lvert {x'} \rangle} are position eigenstates. What is the physical meaning of this expression?

Momentum matrix element.

We can expand the time evolution operator in series

\begin{aligned}U(t,t_0) &= e^{-i H(t-t_0)/\hbar} \\ &= e^{ -i P^2 (t-t_0)/ 2m \hbar } \\ &= 1 + \sum_{k=1}^\infty \frac{1}{{k!}} \left( -i \frac{P^2 (t-t_0)}{2m \hbar} \right)^k.\end{aligned}

We can now evaluate the momentum matrix element {\langle {p} \rvert} U(t,t_0) {\lvert {p'} \rangle}, which will essentially require the value of {\langle {p} \rvert} P^{2k} {\lvert {p'} \rangle}. That is

\begin{aligned}{\langle {p} \rvert} P^{2k} {\lvert {p'} \rangle}&= {\langle {p} \rvert} P^{2k-1} P {\lvert {p'} \rangle} \\ &= {\langle {p} \rvert} P^{2k-1} {\lvert {p'} \rangle} p' \\ &= \cdots \\ &= \left\langle{{p}} \vert {{p'}}\right\rangle (p')^{2k}.\end{aligned}

The momentum matrix element is therefore reduced to

\begin{aligned}{\langle {p} \rvert} U(t,t_0) {\lvert {p'} \rangle}&=\left\langle{{p}} \vert {{p'}}\right\rangle \exp\left( -i \frac{p^2 (t-t_0)}{2m \hbar} \right)= \delta(p-p') \exp\left( -i \frac{p^2 (t-t_0)}{2m \hbar} \right)\end{aligned} \hspace{\stretch{1}}(1.1)

Position matrix element.

For the position matrix element we have a similar sum

\begin{aligned}{\langle {x} \rvert} U(t,t_0) {\lvert {x'} \rangle} &= \left\langle{{x}} \vert {{x'}}\right\rangle + \sum_{k=1}^\infty \frac{1}{{k!}} {\langle {x} \rvert} \left( -i \frac{P^2 (t-t_0)}{2m \hbar} \right)^k {\lvert {x'} \rangle},\end{aligned}

and require {\langle {x} \rvert} P^{2k} {\lvert {x'} \rangle} to continue. That is

\begin{aligned}{\langle {x} \rvert} P^{2k} {\lvert {x'} \rangle}&=\int dx''{\langle {x} \rvert} P^{2k-1} {\lvert {x''} \rangle}{\langle {x''} \rvert} P {\lvert {x'} \rangle} \\ &=\int dx''{\langle {x} \rvert} P^{2k-1} {\lvert {x''} \rangle} \delta(x''-x') (-i\hbar) \frac{d}{dx'} \\ &={\langle {x} \rvert} P^{2k-1} {\lvert {x'} \rangle} (-i\hbar) \frac{d}{dx'} \\ &= \cdots \\ &= \left\langle{{x}} \vert {{x'}}\right\rangle \left( (-i\hbar) \frac{d}{dx'} \right)^{2k}\end{aligned}

Our position matrix element is therefore the differential operator

\begin{aligned}{\langle {x} \rvert} U(t,t_0) {\lvert {x'} \rangle} &=\left\langle{{x}} \vert {{x'}}\right\rangle \exp\left( \frac{i (t-t_0)\hbar}{2m} \frac{d^2}{d{x'}^2} \right)=\delta(x-x') \exp\left( \frac{i (t-t_0)\hbar}{2m} \frac{d^2}{d{x'}^2} \right)\end{aligned} \hspace{\stretch{1}}(1.2)

Physical interpretation of the position matrix element operator.

Finally, we need to determine the physical meaning of such a matrix element operator.

With the delta function that this matrix element operator includes it really only takes on a meaning with a convolution integral. The simplest such integral would be

\begin{aligned}\int dx' {\langle {x} \rvert} U {\lvert {x'} \rangle} \left\langle{{x'}} \vert {{\phi_0}}\right\rangle &={\langle {x} \rvert} U {\lvert {\phi_0} \rangle} \\ &=\left\langle{{x}} \vert {{\phi(t)}}\right\rangle \\ &=\phi(x,t),\end{aligned}

or

\begin{aligned}\phi(x,t) = \int dx' {\langle {x} \rvert} U {\lvert {x'} \rangle} \phi(x',0)\end{aligned}

The LHS has a physical meaning, and in the absolute square

\begin{aligned}\int_{x_0}^{x_0+ \Delta x} {\left\lvert{\phi(x,t)}\right\rvert}^2 dx,\end{aligned} \hspace{\stretch{1}}(1.3)

provides the probability that the particle will be found in the region [x_0, x_0+ \Delta x].

If we ignore the absolute square requirement and think of the (presumed normalized) wave function \phi(x,t) more loosely as representing a probability directly, then we can in turn give a meaning to the matrix element {\langle {x} \rvert} U {\lvert {x'} \rangle} for the time evolution operator. It acts as an operator valued weighting function, giving the probability that a particle initially at position x' will be found at position x at time t. This probability is indirect since we need to absolute square and sum over a finite interval to obtain the probability of finding the particle in that interval.

Observe that the convolution integral above is a summation over all x', so we can think of this as adding the probabilities that the particle was at each point x' to arrive at the total probability for finding it at the new location x. The time evolution operator matrix element provides the weighting in this conditional probability.

In 1.2 we found that the time evolution operator’s matrix element is a differential operator in the position representation. In the general case this means that the probability weighting is not just numeric, since the operation of the matrix element on the initial time wavefunction can produce wavefunctions for additional states. In some special cases, we may find that this weighting is strictly numeric, and one such example is the Gaussian wave packet \phi(x',0) = e^{-a{x'}^2}. Application of the differential operations then produces polynomial weighted multiples of the original Gaussian. In this special case we can write

\begin{aligned}\phi(x,t) = \int dx' {\langle {x} \rvert} U {\lvert {x'} \rangle} \phi(x',0) = \int dx' K(x,x',t) \phi(x',0) \end{aligned}

Where K(x,x',t) is a plain numeric valued function (in fact another exponential), and now just provides a numerical weighting for the conditional probability for the particle to move from x' to x in time t. In [1], this K(x,x',t) is called the Propagator function. It is perhaps justifiable to also call our similar operator valued matrix element a Propagator.
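As a numeric illustration of this picture (a discretized sketch in natural units \hbar = m = 1, with the grid and packet parameters chosen arbitrarily), we can expand a Gaussian packet in momentum modes, advance each mode by its phase e^{-i\omega t} with \omega = \hbar k^2/2m, and verify that the resulting time evolution is unitary, i.e. the norm is preserved:

```python
import cmath, math

HBAR = M = 1.0
L, N = 40.0, 256                       # box size and grid points (arbitrary choices)
dx = L / N
xs = [-L / 2 + i * dx for i in range(N)]
dk = 2 * math.pi / L
ks = [dk * (j - N // 2) for j in range(N)]

# normalized Gaussian packet psi(x, 0)
sigma = 1.0
psi0 = [(2 * math.pi * sigma**2)**-0.25 * math.exp(-x * x / (4 * sigma**2)) for x in xs]

# f(k) = (1/sqrt(2 pi)) \int psi(x,0) e^{-ikx} dx, discretized as a Riemann sum
f = [sum(psi0[i] * cmath.exp(-1j * k * xs[i]) for i in range(N)) * dx / math.sqrt(2 * math.pi)
     for k in ks]

# evolve each mode: f(k) -> f(k) e^{-i omega t}, with omega = hbar k^2 / 2m
t = 2.0
ft = [f[j] * cmath.exp(-1j * HBAR * ks[j]**2 * t / (2 * M)) for j in range(N)]

# psi(x, t) = (1/sqrt(2 pi)) \int f(k) e^{ikx} e^{-i omega t} dk
psit = [sum(ft[j] * cmath.exp(1j * ks[j] * x) for j in range(N)) * dk / math.sqrt(2 * math.pi)
        for x in xs]

norm0 = sum(abs(p)**2 for p in psi0) * dx
normt = sum(abs(p)**2 for p in psit) * dx
print(norm0, normt)   # both very close to 1: the packet spreads but the norm is conserved
```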

My grade.

I got full marks on this assignment. There’s apparently another way to do part of the first question on the position representation, and I was instructed by the TA to see the posted solution, which is not yet available.

References

[1] R. Liboff. Introductory quantum mechanics. Cambridge: Addison-Wesley Press, Inc, 2003.

Posted in Math and Physics Learning. | Tagged: , , , , , , , | Leave a Comment »