Peeter Joot's (OLD) Blog.

Math, physics, perl, and programming obscurity.


PHY456H1F: Quantum Mechanics II. Lecture L23 (Taught by Prof J.E. Sipe). 3D Scattering.

Posted by peeterjoot on December 4, 2011

[Click here for a PDF of this post with nicer formatting and figures if the post had any (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Disclaimer.

Peeter’s lecture notes from class. May not be entirely coherent.

3D Scattering.

READING: section 20, and section 4.8 of our text [1].

We continue to consider scattering off of a positive potential as depicted in figure (\ref{fig:qmTwoL23:qmTwoL23fig1})
\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.4\textheight]{qmTwoL23fig1}
\caption{Radially bounded potential.}
\end{figure}

Here we have V(r) = 0 for r > r_0. The wave function

\begin{aligned}e^{i k \hat{\mathbf{n}} \cdot \mathbf{r}}\end{aligned} \hspace{\stretch{1}}(2.1)

is found to be a solution of the free particle Schrödinger equation.

\begin{aligned}- \frac{\hbar^2}{2\mu} \boldsymbol{\nabla}^2e^{i k \hat{\mathbf{n}} \cdot \mathbf{r}} = \frac{\hbar^2 \mathbf{k}^2}{2 \mu}e^{i k \hat{\mathbf{n}} \cdot \mathbf{r}}\end{aligned} \hspace{\stretch{1}}(2.2)

Seeking a post scattering solution away from the potential

What other solutions can be found for r > r_0, where our potential V(r) = 0? We are looking for \Phi(\mathbf{r}) such that

\begin{aligned}- \frac{\hbar^2}{2\mu} \boldsymbol{\nabla}^2\Phi(\mathbf{r}) = \frac{\hbar^2 \mathbf{k}^2}{2 \mu}\Phi(\mathbf{r})\end{aligned} \hspace{\stretch{1}}(2.3)

What can we find?

We split our Laplacian into radial and angular components as we did for the hydrogen atom

\begin{aligned}- \frac{\hbar^2}{2\mu r} \frac{\partial^2 }{\partial {{r}}^2} (r \Phi(\mathbf{r})) +\frac{\mathcal{L}^2}{2 \mu r^2}\Phi(\mathbf{r})=E \Phi(\mathbf{r}),\end{aligned} \hspace{\stretch{1}}(2.4)

where

\begin{aligned}\mathcal{L}^2 = -\hbar^2 \left(\frac{\partial^2 }{\partial {{\theta}}^2}+ \frac{1}{{\tan\theta}} \frac{\partial }{\partial {{\theta}}}+ \frac{1}{{\sin^2\theta}} \frac{\partial^2 }{\partial {{\phi}}^2}\right)\end{aligned} \hspace{\stretch{1}}(2.5)

Assuming a solution of

\begin{aligned}\Phi(\mathbf{r}) = R(r) Y_l^m(\theta, \phi),\end{aligned} \hspace{\stretch{1}}(2.6)

and noting that

\begin{aligned}\mathcal{L}^2 Y_l^m(\theta, \phi) = \hbar^2 l (l+1) Y_l^m(\theta, \phi),\end{aligned} \hspace{\stretch{1}}(2.7)

we find that our radial equation becomes

\begin{aligned}- \frac{\hbar^2}{2 \mu r} \frac{\partial^2 }{\partial {{r}}^2} (r R(r))+\frac{\hbar^2 l (l+1)}{2 \mu r^2}R(r)=E R(r)=\frac{\hbar^2 k^2}{2\mu} R(r).\end{aligned} \hspace{\stretch{1}}(2.8)

Writing

\begin{aligned}R(r) = \frac{u(r)}{r},\end{aligned} \hspace{\stretch{1}}(2.9)

we have

\begin{aligned}- \frac{\hbar^2}{2 \mu r} \frac{\partial^2 {{u(r)}}}{\partial {{r}}^2}+\frac{\hbar^2 l (l+1)}{2 \mu r^2}\frac{u(r)}{r}=\frac{\hbar^2 k^2}{2\mu} \frac{u(r)}{r},\end{aligned} \hspace{\stretch{1}}(2.10)

or

\begin{aligned}\left( \frac{d^2}{dr^2} + k^2 -\frac{l (l+1) }{r^2} \right) u(r) = 0\end{aligned} \hspace{\stretch{1}}(2.11)

Writing \rho = k r, we have

\begin{aligned}\left( \frac{d^2}{d\rho^2} + 1 -\frac{l (l+1) }{\rho^2} \right) u(r) = 0\end{aligned} \hspace{\stretch{1}}(2.12)

The radial equation and its solution.

With a last substitution of u(r) = U( k r ) = U(\rho), and introducing an explicit l suffix on our eigenfunction U(\rho) we have

\begin{aligned}\left( -\frac{d^2}{d\rho^2} +\frac{l (l+1) }{\rho^2} \right) U_l(\rho) = U_l(\rho).\end{aligned} \hspace{\stretch{1}}(2.13)

We could not have done this for the hydrogen atom bound states, where the allowed energies formed a discrete set. Here E = \hbar^2 k^2/2 \mu can be any non-negative value.

Making one final substitution, U_l(\rho) = \rho f_l(\rho) we can rewrite 2.13 as

\begin{aligned}\left( \rho^2 \frac{d^2}{d\rho^2} + 2 \rho \frac{d}{d\rho} + (\rho^2 - l(l+1)) \right) f_l = 0.\end{aligned} \hspace{\stretch{1}}(2.14)

This is the spherical Bessel equation of order l, and its solutions are the spherical Bessel and spherical Neumann functions of order l, which are

\begin{subequations}

\begin{aligned}j_l(\rho) = (-\rho)^l \left( \frac{1}{{\rho}} \frac{d}{d\rho} \right)^l \left( \frac{\sin\rho}{\rho} \right)\end{aligned} \hspace{\stretch{1}}(2.15a)

\begin{aligned}n_l(\rho) = (-\rho)^l \left( \frac{1}{{\rho}} \frac{d}{d\rho} \right)^l \left( -\frac{\cos\rho}{\rho} \right).\end{aligned} \hspace{\stretch{1}}(2.15b)

\end{subequations}

We can easily calculate

\begin{aligned}U_0(\rho) &= \rho j_0(\rho) = \sin\rho \\ U_1(\rho) &= \rho j_1(\rho) = -\cos\rho + \frac{\sin\rho}{\rho}\end{aligned} \hspace{\stretch{1}}(2.16)

and can plug these into 2.13 to verify that they are a solution. A more general proof looks a bit trickier.
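Rather than grinding through the substitution by hand, here is a quick numerical sanity check (a sketch of mine, not part of the lecture) that U_0 and U_1 satisfy 2.13, approximating the second derivative with central differences:

```python
import math

def U(l, rho):
    # U_l = rho * j_l(rho), using the closed forms computed above
    if l == 0:
        return math.sin(rho)
    if l == 1:
        return -math.cos(rho) + math.sin(rho) / rho
    raise ValueError("only l = 0, 1 implemented here")

def residual(l, rho, h=1e-4):
    # central-difference estimate of U'' + (1 - l(l+1)/rho^2) U, which
    # should vanish if U_l solves the radial equation 2.13
    d2 = (U(l, rho + h) - 2 * U(l, rho) + U(l, rho - h)) / h**2
    return d2 + (1 - l * (l + 1) / rho**2) * U(l, rho)

for l in (0, 1):
    for rho in (0.5, 1.0, 2.0, 5.0):
        assert abs(residual(l, rho)) < 1e-5
```

The residuals vanish to the accuracy of the finite-difference approximation, as expected.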

Observe that the Neumann functions are less well behaved at the origin. To calculate the first few Bessel and Neumann functions we first compute

\begin{aligned}\frac{1}{{\rho}} \frac{d}{d\rho} \frac{\sin\rho}{\rho}&= \frac{1}{{\rho}} \left(\frac{\cos\rho}{\rho}-\frac{\sin\rho}{\rho^2}\right) \\ &=\frac{\cos\rho}{\rho^2}-\frac{\sin\rho}{\rho^3}\end{aligned}

\begin{aligned}\left( \frac{1}{{\rho}} \frac{d}{d\rho} \right)^2 \frac{\sin\rho}{\rho}&= \frac{1}{{\rho}} \left(-\frac{\sin\rho}{\rho^2}-2\frac{\cos\rho}{\rho^3}-\frac{\cos\rho}{\rho^3}+3\frac{\sin\rho}{\rho^4}\right) \\ &=\sin\rho\left(-\frac{1}{\rho^3}+\frac{3}{\rho^5}\right)-3\frac{\cos\rho}{\rho^4}\end{aligned}

and

\begin{aligned}\frac{1}{{\rho}} \frac{d}{d\rho} -\frac{\cos\rho}{\rho}&= \frac{1}{{\rho}} \left(\frac{\sin\rho}{\rho}+\frac{\cos\rho}{\rho^2}\right) \\ &=\frac{\sin\rho}{\rho^2}+\frac{\cos\rho}{\rho^3}\end{aligned}

\begin{aligned}\left( \frac{1}{{\rho}} \frac{d}{d\rho} \right)^2 -\frac{\cos\rho}{\rho}&= \frac{1}{{\rho}} \left(\frac{\cos\rho}{\rho^2}-2\frac{\sin\rho}{\rho^3}-\frac{\sin\rho}{\rho^3}-3\frac{\cos\rho}{\rho^4}\right) \\ &=\cos\rho\left(\frac{1}{\rho^3}-\frac{3}{\rho^5}\right)-3\frac{\sin\rho}{\rho^4}\end{aligned}

so we find

\begin{aligned}\begin{array}{l l l l}j_0(\rho) &= \frac{\sin\rho}{\rho} 					& n_0(\rho) &= -\frac{\cos\rho}{\rho} 	\\ j_1(\rho) &= \frac{\sin\rho}{\rho^2} -\frac{\cos\rho}{\rho} 		& n_1(\rho) &= -\frac{\cos\rho}{\rho^2} -\frac{\sin\rho}{\rho} \\ j_2(\rho) &= \sin\rho \left(-\frac{1}{\rho} + \frac{3}{\rho^3} \right) +\cos\rho \left(-\frac{3}{\rho^2} \right)& n_2(\rho) &= \cos\rho \left(\frac{1}{\rho} - \frac{3}{\rho^3} \right) +\sin\rho \left(-\frac{3}{\rho^2} \right)\end{array}\end{aligned} \hspace{\stretch{1}}(2.18)

Observe that our radial functions R(r) are proportional to these Bessel and Neumann functions

\begin{aligned}R(r)&= \frac{u(r)}{r}  \\ &= \frac{U(kr)}{r}  \\ &=\left\{\begin{array}{l}\frac{j_l(\rho) \rho}{r} \\ \frac{n_l(\rho) \rho}{r}\end{array}\right. \\ &=\left\{\begin{array}{l}\frac{j_l(\rho) k \not{{r}}}{\not{{r}}} \\ \frac{n_l(\rho) k \not{{r}}}{\not{{r}}}\end{array}\right.\end{aligned}

Or

\begin{aligned}R(r) \sim j_l(\rho), n_l(\rho).\end{aligned} \hspace{\stretch{1}}(2.19)

Limits of spherical Bessel and Neumann functions

With n!! denoting the double factorial, like factorial but skipping every other term

\begin{aligned}n!! = n(n-2)(n-4) \cdots,\end{aligned} \hspace{\stretch{1}}(2.20)

we can show that in the limit as \rho \rightarrow 0 we have

\begin{subequations}

\begin{aligned}j_l(\rho) \rightarrow \frac{\rho^l}{(2 l + 1)!!} \end{aligned} \hspace{\stretch{1}}(2.21a)

\begin{aligned}n_l(\rho) \rightarrow -\frac{(2 l - 1)!!}{\rho^{(l+1)}},\end{aligned} \hspace{\stretch{1}}(2.21b)

\end{subequations}

(for the l=0 case, note that (-1)!! = 1 by definition).
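These limits can be checked numerically against the closed forms tabulated in 2.18 (a sketch of mine, not from the lecture), for l = 0, 1, 2:

```python
import math

def dfact(n):
    # double factorial, with (-1)!! = 1 by definition
    return 1 if n <= 0 else n * dfact(n - 2)

def j(l, r):
    # closed forms of 2.18
    s, c = math.sin(r), math.cos(r)
    return [s/r, s/r**2 - c/r, s*(-1/r + 3/r**3) - 3*c/r**2][l]

def n(l, r):
    s, c = math.sin(r), math.cos(r)
    return [-c/r, -c/r**2 - s/r, c*(1/r - 3/r**3) - 3*s/r**2][l]

rho = 0.05
for l in range(3):
    # j_l -> rho^l/(2l+1)!!  and  n_l -> -(2l-1)!!/rho^(l+1)
    assert abs(j(l, rho) * dfact(2*l + 1) / rho**l - 1) < 1e-2
    assert abs(n(l, rho) * rho**(l + 1) / dfact(2*l - 1) + 1) < 1e-2
```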

Comparing this to our explicit expansion for j_1(\rho) in 2.18, where each term individually appears to have a singular dependence for small \rho, it is not obvious that this limit holds. To verify it we need to start with the power series expansion of \sin\rho/\rho, which is well behaved at \rho = 0; the result then follows (done later).

It is apparently also possible to show that as \rho \rightarrow \infty we have

\begin{subequations}

\begin{aligned}j_l(\rho) \rightarrow \frac{1}{{\rho}} \sin\left( \rho - \frac{l \pi}{2} \right) \end{aligned} \hspace{\stretch{1}}(2.22a)

\begin{aligned}n_l(\rho) \rightarrow -\frac{1}{{\rho}} \cos\left( \rho - \frac{l \pi}{2} \right).\end{aligned} \hspace{\stretch{1}}(2.22b)

\end{subequations}
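These asymptotic forms can also be compared numerically against the closed forms of 2.18 at a large argument (my check, not from the lecture):

```python
import math

def j(l, r):
    # closed forms of 2.18
    s, c = math.sin(r), math.cos(r)
    return [s/r, s/r**2 - c/r, s*(-1/r + 3/r**3) - 3*c/r**2][l]

def n(l, r):
    s, c = math.sin(r), math.cos(r)
    return [-c/r, -c/r**2 - s/r, c*(1/r - 3/r**3) - 3*s/r**2][l]

rho = 100.0
for l in range(3):
    # j_l -> sin(rho - l pi/2)/rho  and  n_l -> -cos(rho - l pi/2)/rho
    assert abs(j(l, rho) - math.sin(rho - l * math.pi / 2) / rho) < 1e-3
    assert abs(n(l, rho) + math.cos(rho - l * math.pi / 2) / rho) < 1e-3
```

The agreement improves like 1/\rho, consistent with the subleading terms in 2.18.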

Back to our problem.

For r > r_0 we can construct (for fixed k) a superposition of the spherical functions

\begin{aligned}\sum_l \sum_m \left( A_l j_l( k r ) + B_l n_l(k r) \right) Y_l^m(\theta, \phi)\end{aligned} \hspace{\stretch{1}}(2.23)

we want outgoing waves, and as r \rightarrow \infty, we have

\begin{subequations}

\begin{aligned}j_l(k r) \rightarrow \frac{\sin\left(kr - \frac{l \pi}{2}\right)}{k r} \end{aligned} \hspace{\stretch{1}}(2.24a)

\begin{aligned}n_l(k r) \rightarrow -\frac{\cos\left(kr - \frac{l \pi}{2}\right)}{k r}\end{aligned} \hspace{\stretch{1}}(2.24b)

\end{subequations}

Putting A_l/B_l = -i for a given l, we have

\begin{aligned}-i\frac{\sin\left(kr - \frac{l \pi}{2}\right)}{k r}-\frac{\cos\left(kr - \frac{l \pi}{2}\right)}{k r}= -\frac{1}{{k r}} e^{i (k r - \pi l/2)},\end{aligned} \hspace{\stretch{1}}(2.25)

so that (absorbing the sign into B_l) the superposition takes the asymptotic form

\begin{aligned}\sum_l\sum_m B_l\frac{1}{{k r}} e^{i (k r - \pi l/2)} Y_l^m(\theta, \phi).\end{aligned} \hspace{\stretch{1}}(2.26)

Making this choice to achieve outgoing waves, and using e^{-i \pi l/2} = (-i)^l, we have another wave function that satisfies our Hamiltonian equation

\begin{aligned}\frac{e^{i k r}}{k r}\sum_l\sum_m(-i)^lB_lY_l^m(\theta, \phi).\end{aligned} \hspace{\stretch{1}}(2.27)

The B_l coefficients will depend on V(r) for the incident wave e^{i \mathbf{k} \cdot \mathbf{r}}. Suppose we encapsulate that dependence in a helper function f_\mathbf{k}(\theta, \phi) and write

\begin{aligned}\frac{e^{i k r}}{r} f_\mathbf{k}(\theta, \phi)\end{aligned} \hspace{\stretch{1}}(2.28)

We seek a solution \psi_\mathbf{k}(\mathbf{r})

\begin{aligned}\left( - \frac{\hbar^2}{2\mu} \boldsymbol{\nabla}^2+ V(\mathbf{r})\right)\psi_\mathbf{k}(\mathbf{r}) = \frac{\hbar^2 \mathbf{k}^2}{2 \mu}\psi_\mathbf{k}(\mathbf{r}),\end{aligned} \hspace{\stretch{1}}(2.29)

where as r \rightarrow \infty

\begin{aligned}\psi_\mathbf{k}(\mathbf{r}) \rightarrow e^{i \mathbf{k} \cdot \mathbf{r}} + \frac{e^{i k r}}{r} f_\mathbf{k}(\theta, \phi).\end{aligned} \hspace{\stretch{1}}(2.30)

Note that for r < r_0 (and generally for any finite r) \psi_\mathbf{k}(\mathbf{r}) is much more complicated. This is the analogue of the one dimensional plane wave result

\begin{aligned}\psi(x) = e^{i k x} + \beta_k e^{-i k x}\end{aligned} \hspace{\stretch{1}}(2.31)

Scattering geometry and nomenclature.

We can think classically first, and imagine a scattering of a stream of particles barraging a target as in
figure (\ref{fig:qmTwoL23:qmTwoL23fig2})

\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.3\textheight]{qmTwoL23fig2}
\caption{Scattering cross section.}
\end{figure}

Here we assume that d\Omega is far enough away that it includes no non-scattering particles.

Write P for the number density

\begin{aligned}P = \frac{\text{number of particles}}{\text{unit volume}},\end{aligned} \hspace{\stretch{1}}(3.32)

and

\begin{aligned}J = P v_0 =\frac{\text{Number of particles flowing through}}{\text{a unit area in unit time}}\end{aligned} \hspace{\stretch{1}}(3.33)

We want to count the rate of particles per unit time dN through this solid angle d\Omega and write

\begin{aligned}dN = J \left( \frac{d \sigma(\Omega)}{d\Omega} \right) d\Omega.\end{aligned} \hspace{\stretch{1}}(3.34)

The factor

\begin{aligned}\frac{d \sigma(\Omega)}{d\Omega},\end{aligned} \hspace{\stretch{1}}(3.35)

is called the differential cross section, and has “units” of

\begin{aligned}\frac{\text{area}}{\text{steradians}}\end{aligned} \hspace{\stretch{1}}(3.36)

(recalling that steradians are radian like measures of solid angle [2]).

The total number of particles through the volume per unit time is then

\begin{aligned}\int J \frac{d \sigma(\Omega)}{d\Omega} d\Omega= J \int \frac{d \sigma(\Omega)}{d\Omega} d\Omega= J \sigma\end{aligned} \hspace{\stretch{1}}(3.37)

where \sigma is the total cross section and has units of area. The cross section \sigma is the effective area required to collect all the scattered particles. It characterizes the scattering, but isn't necessarily entirely geometrical. For example, in photon scattering we may have frequency matching with an atomic resonance, finding \sigma \sim \lambda^2, which can be much bigger than the actual geometric area involved.
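As a concrete illustration (my example, not from the lecture), the total cross section follows by numerically integrating a differential cross section over solid angle. Here I use the classical Thomson form d\sigma/d\Omega = (r_0^2/2)(1 + \cos^2\theta) for photon scattering off a free electron, with the classical electron radius r_0 set to 1, which should integrate to \sigma = 8 \pi r_0^2 / 3:

```python
import math

r0 = 1.0
def dsigma_dOmega(theta):
    # Thomson differential cross section (azimuthally symmetric)
    return 0.5 * r0**2 * (1 + math.cos(theta)**2)

# d(Omega) = sin(theta) d(theta) d(phi); the phi integral contributes 2*pi
N = 100000
sigma = 0.0
for i in range(N):
    theta = (i + 0.5) * math.pi / N   # midpoint rule in theta
    sigma += 2 * math.pi * dsigma_dOmega(theta) * math.sin(theta) * (math.pi / N)

exact = 8 * math.pi * r0**2 / 3
assert abs(sigma - exact) < 1e-4 * exact
```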

Appendix

Q: Are Bessel and Neumann functions orthogonal?

Answer: There is an orthogonality relation, but it is not one of plain old multiplication.

Curious about this, I find an orthogonality condition in [3]

\begin{aligned}\int_0^\infty J_\alpha(z) J_\beta(z) \frac{dz}{z} = \frac{2}{\pi} \frac{\sin\left(\frac{\pi}{2}\left( \alpha - \beta\right) \right) }{\alpha^2 - \beta^2},\end{aligned} \hspace{\stretch{1}}(4.38)

from which we find for the spherical Bessel functions

\begin{aligned}\int_0^\infty j_l(\rho) j_m(\rho) d\rho =\frac{\sin\left(\frac{\pi}{2}\left( l - m \right) \right) }{(l+ 1/2)^2 - (m + 1/2)^2}.\end{aligned} \hspace{\stretch{1}}(4.39)

Is this a satisfactory orthogonality integral? At a glance it doesn’t appear to be well behaved for l = m, but perhaps the limit can be taken?

Deriving the large limit Bessel and Neumann function approximations.

For 2.22 we are referred to any “good book on electromagnetism” for details. I thought that perhaps the weighty [4] would be such a book, but it also leaves out the details. In section 16.1 the spherical Bessel and Neumann functions are related to the plain old Bessel functions with

\begin{subequations}

\begin{aligned}j_l(x) = \sqrt{\frac{\pi}{2x} } J_{l+1/2}(x) \end{aligned} \hspace{\stretch{1}}(4.40a)

\begin{aligned}n_l(x) = \sqrt{\frac{\pi}{2x} } N_{l+1/2}(x)\end{aligned} \hspace{\stretch{1}}(4.40b)

\end{subequations}

Referring back to section 3.7 of that text where the limiting forms of the Bessel functions are given

\begin{subequations}

\begin{aligned}J_\nu(x) \rightarrow \sqrt{\frac{2}{\pi x}} \cos\left(x - \frac{\nu\pi}{2} - \frac{\pi}{4} \right) \end{aligned} \hspace{\stretch{1}}(4.41a)

\begin{aligned}N_\nu(x) \rightarrow \sqrt{\frac{2}{\pi x}} \sin\left(x - \frac{\nu\pi}{2} - \frac{\pi}{4} \right)\end{aligned} \hspace{\stretch{1}}(4.41b)

\end{subequations}

This does give us our desired identities, but there’s no hint in the text how one would derive 4.41 from the power series that was computed by solving the Bessel equation.

Deriving the small limit Bessel and Neumann function approximations.

Writing the \text{sinc} function in series form

\begin{aligned}\frac{\sin x}{x} = \sum_{k=0}^\infty (-1)^k \frac{x^{2k}}{(2k + 1)!},\end{aligned} \hspace{\stretch{1}}(4.42)

we can differentiate easily

\begin{aligned}\begin{aligned}\frac{1}{{x}} \frac{d}{dx} \frac{\sin x}{x}&= \sum_{k=1}^\infty (-1)^k (2k) \frac{x^{2k-2}}{(2k + 1)!} \\ &= (-1) \sum_{k=0}^\infty (-1)^k (2k + 2) \frac{x^{2k}}{(2k + 3)!} \\ &= (-1) \sum_{k=0}^\infty (-1)^k \frac{1}{{2k + 3}} \frac{x^{2k}}{(2k + 1)!} \\ \end{aligned}\end{aligned} \hspace{\stretch{1}}(4.43)

Performing the derivative operation a second time we find

\begin{aligned}\begin{aligned}\left(\frac{1}{{x}} \frac{d}{dx}\right)^2 \frac{\sin x}{x}&= (-1) \sum_{k=1}^\infty (-1)^k \frac{1}{{2k + 3}} (2k) \frac{x^{2k-2}}{(2k + 1)!} \\ &= \sum_{k=0}^\infty (-1)^k \frac{1}{{2k + 5}} \frac{1}{{2k + 3}} \frac{x^{2k}}{(2k + 1)!}\end{aligned}\end{aligned} \hspace{\stretch{1}}(4.44)

It appears reasonable to form the inductive hypotheses

\begin{aligned}\left(\frac{1}{{x}} \frac{d}{dx}\right)^l \frac{\sin x}{x}= (-1)^l\sum_{k=0}^\infty (-1)^k \frac{(2k+1)!!}{(2(k + l) + 1)!!}\frac{x^{2k}}{(2k + 1)!},\end{aligned} \hspace{\stretch{1}}(4.45)

and this proves to be correct. We find then that the spherical Bessel function has the power series expansion of

\begin{aligned}j_l(x) =\sum_{k=0}^\infty (-1)^k \frac{(2k+1)!!}{(2(k + l) + 1)!!}\frac{x^{2k + l}}{(2k + 1)!}\end{aligned} \hspace{\stretch{1}}(4.46)

and from this the Bessel function limit of 2.21a follows immediately.
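As a check (mine, not part of the lecture), the power series 4.46 can be summed numerically and compared against the closed forms tabulated in 2.18:

```python
import math

def dfact(n):
    # double factorial, with (-1)!! = 1 by definition
    return 1 if n <= 0 else n * dfact(n - 2)

def j_series(l, x, terms=30):
    # the power series 4.46: sum_k (-1)^k (2k+1)!!/(2(k+l)+1)!! x^{2k+l}/(2k+1)!
    return sum((-1)**k * dfact(2*k + 1) / dfact(2*(k + l) + 1)
               * x**(2*k + l) / math.factorial(2*k + 1)
               for k in range(terms))

def j_closed(l, x):
    # closed forms of 2.18
    s, c = math.sin(x), math.cos(x)
    return [s/x, s/x**2 - c/x, s*(-1/x + 3/x**3) - 3*c/x**2][l]

for l in range(3):
    for x in (0.3, 1.0, 2.5):
        assert abs(j_series(l, x) - j_closed(l, x)) < 1e-10
```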

Finding the matching induction series for the Neumann functions is a bit harder. It's not really any more difficult to write down, but it is harder to put into a tidy form.

We find

\begin{aligned}-\frac{\cos x}{x} &= - \sum_{k=0}^\infty (-1)^k \frac{x^{2k-1}}{(2k)!} \\ \frac{1}{{x}} \frac{d}{dx} \left( -\frac{\cos x}{x} \right) &= - \sum_{k=0}^\infty (-1)^k (2k-1) \frac{x^{2k-3}}{(2k)!} \\ \left( \frac{1}{{x}} \frac{d}{dx} \right)^2 \left( -\frac{\cos x}{x} \right) &= - \sum_{k=0}^\infty (-1)^k (2k-1)(2k-3) \frac{x^{2k-5}}{(2k)!}\end{aligned} \hspace{\stretch{1}}(4.47)

The general expression, after a bit of messing around (and I got it wrong the first time), can be found to be

\begin{aligned}\begin{aligned}\left( \frac{1}{{x}} \frac{d}{dx} \right)^l-\frac{\cos x}{x} &= (-1)^{l+1}\sum_{k=0}^{l-1} \prod_{j=0}^{l-1}  {\left\lvert{ 2(k-j)-1}\right\rvert} \frac{x^{2(k-l)-1}}{(2k)!} \\ &\quad +(-1)^{l+1}\sum_{k=0}^\infty (-1)^k \frac{(2(k+l)-1)!!}{(2k - 1)!!}\frac{x^{2k-1}}{(2(k + l))!}.\end{aligned}\end{aligned} \hspace{\stretch{1}}(4.50)

We really only need the lowest order term (which dominates for small x) to confirm the small limit 2.21b of the Neumann function, and this follows immediately.

For completeness, we note that the series expansion of the Neumann function is

\begin{aligned}\begin{aligned}n_l(x)&= -\sum_{k=0}^{l-1} \prod_{j=0}^{l-1}  {\left\lvert{ 2(k-j)-1}\right\rvert} \frac{x^{2 k -l -1}}{(2k)!} \\ &\quad -\sum_{k=0}^\infty (-1)^k \frac{(2 k + 3 l - 1)!!}{(2k - 1)!!}\frac{x^{2k-1}}{(2(k + l))!}.\end{aligned}\end{aligned} \hspace{\stretch{1}}(4.51)

Verifying the solution to the spherical Bessel equation.

One way to verify that 2.15a is a solution to the Bessel equation 2.14 as claimed should be to substitute the series expression and verify that we get zero. Another way is to solve this equation directly. We have a regular singular point at the origin, so we look for solutions of the form

\begin{aligned}f = x^r \sum_{k=0}^\infty a_k x^k\end{aligned} \hspace{\stretch{1}}(4.52)

Writing our differential operator as

\begin{aligned}L = x^2 \frac{d^2}{dx^2} + 2 x \frac{d}{dx} + x^2 - l(l+1),\end{aligned} \hspace{\stretch{1}}(4.53)

we get

\begin{aligned}0 &= L f \\ &= \sum_{k=0}^\infty a_k \Bigl( (k+r)(k+r-1) + 2 (k + r) - l (l+1) \Bigr) x^{k + r} + \sum_{k=0}^\infty a_k x^{k + r + 2} \\ &= a_0 \Bigl( r( r + 1) - l(l + 1)\Bigr) x^r \\ &\quad +a_1 \Bigl( (r+ 1)( r + 2) - l(l + 1)\Bigr) x^{r+1} \\ &\quad +\sum_{k=2}^\infty \Bigl( a_k \bigl( (k+r)(k+r-1) + 2 (k + r) - l (l+1) \bigr) + a_{k-2} \Bigr) x^{k + r} \end{aligned}

Since we require this to be zero for all x including non-zero values, we must have constraints on r. Assuming first that a_0 is non-zero we must then have

\begin{aligned}0 = r( r + 1) - l(l + 1).\end{aligned} \hspace{\stretch{1}}(4.54)

One solution is obviously r = l; the other root of this quadratic in r is r = -l-1. Restricting attention first to r = l, we must have a_1 = 0 since for non-negative l we have (l+1)(l+2) - l(l+1) = 2(l+1) \ne 0. Thus for non-zero a_0 we find that our function is of the form

\begin{aligned}f = \sum_k a_{2k} x^{2k + l}.\end{aligned} \hspace{\stretch{1}}(4.55)

It doesn’t matter that we started with a_0 \ne 0. If we instead start with a_1 \ne 0 we find that we must have r = l-1, -l-2, so end up with exactly the same functional form as 4.55. It ends up slightly simpler if we start with 4.55 instead, since we now know that we don’t have any odd powered a_k‘s to deal with. Doing so we find

\begin{aligned}0 &= L f \\ &= \sum_{k=0}^\infty a_{2k} \Bigl((2k + l)(2k + l - 1) + 2(2k + l) - l (l+1)\Bigr) x^{2k + l} + a_{2k} x^{2k + l + 2} \\ &=\sum_{k=1}^\infty \Bigl(a_{2k} 2k (2 (k+l) + 1) + a_{2(k-1)}\Bigr) x^{2k + l} \end{aligned}

We find

\begin{aligned}\frac{a_{2k} }{a_{2(k-1)}}= \frac{-1}{2k (2 (k+l) + 1) }.\end{aligned} \hspace{\stretch{1}}(4.56)

Proceeding recursively, we find

\begin{aligned}f = a_0 (2 l + 1)!! \sum_{k=0}^\infty \frac{(-1)^k}{(2k)!! (2 (k+l) + 1)!!} x^{2k + l}.\end{aligned} \hspace{\stretch{1}}(4.57)

With a_0 = 1/(2l + 1)!! and the observation that

\begin{aligned}\frac{1}{{(2k)!!}} = \frac{(2k + 1)!!}{(2k+1)!},\end{aligned} \hspace{\stretch{1}}(4.58)

we have f = j_l(x) as given in 4.46.
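The double factorial identity 4.58 used in that last step is easy to verify numerically (my check): since (2k)!! (2k+1)!! = (2k+1)!, dividing through gives 1/(2k)!! = (2k+1)!!/(2k+1)!.

```python
import math

def dfact(n):
    # double factorial, with (-1)!! = 1 by definition
    return 1 if n <= 0 else n * dfact(n - 2)

# even and odd double factorials interleave to give the plain factorial
for k in range(10):
    assert dfact(2*k) * dfact(2*k + 1) == math.factorial(2*k + 1)
```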

If we do the same for the r = -l-1 case, we find

\begin{aligned}\frac{a_{2k} }{a_{2(k-1)}}= \frac{-1}{2k (2 (k-l) - 1) },\end{aligned} \hspace{\stretch{1}}(4.59)

and find

\begin{aligned}\frac{a_{2k}}{a_0} = \frac{(-1)^k}{(2k)!! (2(k-l) -1)(2(k-l)-3)\cdots(-2l + 1)}.\end{aligned} \hspace{\stretch{1}}(4.60)

Flipping signs around, we can rewrite this as

\begin{aligned}\frac{a_{2k} }{a_0}= \frac{1}{(2k)!! (2(l-k) + 1) (2(l-k) + 3) \cdots (2 l - 1)}.\end{aligned} \hspace{\stretch{1}}(4.61)

For those values of l > k we can write this as

\begin{aligned}\frac{a_{2k} }{a_0}= \frac{(2(l-k)-1)!!}{(2k)!! (2 l - 1)!!}.\end{aligned} \hspace{\stretch{1}}(4.62)

Comparing to the small limit 2.21b, the k=0 term, we find that we must have

\begin{aligned}\frac{a_0}{(2 l - 1)!!} = -1.\end{aligned} \hspace{\stretch{1}}(4.63)

After some play we find

\begin{aligned}a_{2k}= \left\{\begin{array}{l l}-\frac{(2(l-k)-1)!!}{ (2k)!!  } & \quad \mbox{if } l \ge k \\ \frac{(-1)^{k-l+1}}{ (2k)!! (2 (k-l) -1)!! } & \quad \mbox{if } l \le k. \\ \end{array}\right.\end{aligned} \hspace{\stretch{1}}(4.64)

Putting this all together we have

\begin{aligned}n_l(x) =-\sum_{0 \le k \le l}(2(l-k)-1)!!\frac{x^{2k -l -1}}{(2k)!!}-\sum_{l < k}\frac{(-1)^{k-l}} { (2 (k-l) -1)!! }\frac{x^{2k -l -1}}{(2k)!!}\end{aligned} \hspace{\stretch{1}}(4.65)

FIXME: check that this matches the series calculated earlier 4.51.

References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

[2] Wikipedia. Steradian — wikipedia, the free encyclopedia [online]. 2011. [Online; accessed 4-December-2011]. http://en.wikipedia.org/w/index.php?title=Steradian&oldid=462086182.

[3] Wikipedia. Bessel function — wikipedia, the free encyclopedia [online]. 2011. [Online; accessed 4-December-2011]. http://en.wikipedia.org/w/index.php?title=Bessel_function&oldid=461096228.

[4] JD Jackson. Classical Electrodynamics. John Wiley and Sons, 2nd edition, 1975.



PHY456H1F: Quantum Mechanics II. Lecture 19 (Taught by Prof J.E. Sipe). Rotations of operators.

Posted by peeterjoot on November 16, 2011


Rotations of operators.

READING: section 28 [1].

Rotating with U[M] as in figure (\ref{fig:qmTwoL19:qmTwoL19fig1})
\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{qmTwoL19fig1}
\caption{Rotating a state centered at F}
\end{figure}

\begin{aligned}\tilde{r}_i = \sum_j M_{ij} \bar{r}_j\end{aligned} \hspace{\stretch{1}}(2.1)

\begin{aligned}{\left\langle {\psi} \right\rvert} R_i {\left\lvert {\psi} \right\rangle} = \bar{r}_i\end{aligned} \hspace{\stretch{1}}(2.2)

\begin{aligned}\tilde{r}_i &= {\left\langle {\psi} \right\rvert} U^\dagger[M] R_i U[M] {\left\lvert {\psi} \right\rangle} \\ &= \sum_j M_{ij} \bar{r}_j \\ &= \sum_j M_{ij} {\left\langle {\psi} \right\rvert} R_j {\left\lvert {\psi} \right\rangle}\end{aligned}

So

\begin{aligned}U^\dagger[M] R_i U[M] = \sum_j M_{ij} R_j\end{aligned} \hspace{\stretch{1}}(2.3)

Any three operators V_x, V_y, V_z that transform according to

\begin{aligned}U^\dagger[M] V_i U[M] = \sum_j M_{ij} V_j\end{aligned} \hspace{\stretch{1}}(2.4)

form the components of a vector operator.

Infinitesimal rotations

Consider infinitesimal rotations, where we can show that

\begin{aligned}\left[{V_i},{J_j}\right] = i \hbar \sum_k \epsilon_{ijk} V_k\end{aligned} \hspace{\stretch{1}}(2.5)

Note that for V_i = J_i we recover the familiar commutator rules for angular momentum, but this also holds for operators \mathbf{R}, \mathbf{P}, \mathbf{J}, …
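As a quick numerical sanity check (mine, not from the lecture), the V_i = J_i case of this commutator can be verified in the spin-1/2 representation, where J_i = (\hbar/2)\sigma_i, with \hbar set to 1:

```python
# Check [J_i, J_j] = i hbar eps_{ijk} J_k for spin-1/2, with hbar = 1.
# Matrices are 2x2 nested lists of complex numbers.

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def sub(A, B):
    return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

def scal(c, A):
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]
sz = [[1, 0], [0, -1]]
J = [scal(0.5, s) for s in (sx, sy, sz)]

def eps(i, j, k):
    # Levi-Civita symbol on indices 0, 1, 2
    return ((i - j) * (j - k) * (k - i)) // 2

for i in range(3):
    for j in range(3):
        lhs = sub(mul(J[i], J[j]), mul(J[j], J[i]))
        rhs = [[sum(1j * eps(i, j, k) * J[k][a][b] for k in range(3))
                for b in range(2)] for a in range(2)]
        assert all(abs(lhs[a][b] - rhs[a][b]) < 1e-12
                   for a in range(2) for b in range(2))
```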

Note that

\begin{aligned}U^\dagger[M] = U[M^{-1}] = U[M^\text{T}],\end{aligned} \hspace{\stretch{1}}(2.6)

so

\begin{aligned}U[M] V_i U^\dagger[M] = U^\dagger[M^\text{T}] V_i U[M^\text{T}] = \sum_j M_{ji} V_j,\end{aligned} \hspace{\stretch{1}}(2.7)

so

\begin{aligned}{\left\langle {\psi} \right\rvert} V_i {\left\lvert {\psi} \right\rangle}={\left\langle {\psi} \right\rvert}U^\dagger[M] \Bigl( U[M] V_i U^\dagger[M] \Bigr) U[M]{\left\lvert {\psi} \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.8)

In the same way, suppose we have nine operators

\begin{aligned}\tau_{ij}, \qquad i, j = x, y, z\end{aligned} \hspace{\stretch{1}}(2.9)

that transform according to

\begin{aligned}U[M] \tau_{ij} U^\dagger[M] = \sum_{lm} M_{li} M_{mj} \tau_{lm}\end{aligned} \hspace{\stretch{1}}(2.10)

then we will call these the components of a (Cartesian) second rank tensor operator. Suppose that we have an operator S that transforms as

\begin{aligned}U[M] S U^\dagger[M] = S\end{aligned} \hspace{\stretch{1}}(2.11)

Then we will call S a scalar operator.

A problem.

This all looks good, but it is really not satisfactory. There is a problem.

Suppose that we have such a Cartesian tensor operator, and look at the trace

\begin{aligned}\sum_i \tau_{ii}&=\sum_iU[M] \tau_{ii} U^\dagger[M]  \\ &= \sum_i\sum_{lm} M_{li} M_{mi} \tau_{lm} \\ &= \sum_i\sum_{lm} M_{li} M_{im}^\text{T} \tau_{lm} \\ &= \sum_{lm} \delta_{lm} \tau_{lm} \\ &= \sum_{l} \tau_{ll} \end{aligned}

We see buried inside these Cartesian tensors of higher rank there is some simplicity embedded (in this case trace invariance). Who knows what other relationships are also there? We want to work with and extract the buried simplicities, and we will find that the Cartesian way of expressing these tensors is horribly inefficient. What is a representation that doesn’t have any excess information, and is in some sense minimal?
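The trace invariance derived above is easy to confirm numerically (my sketch): with \tau'_{ij} = \sum_{lm} M_{li} M_{mj} \tau_{lm}, i.e. \tau' = M^\text{T} \tau M for an orthogonal M, the trace is unchanged.

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

t = math.pi / 7  # an arbitrary rotation angle, here about the z axis
M = [[math.cos(t), -math.sin(t), 0],
     [math.sin(t),  math.cos(t), 0],
     [0, 0, 1]]

tau = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]  # arbitrary tensor
tau_p = matmul(transpose(M), matmul(tau, M))               # transformed components

trace = lambda A: sum(A[i][i] for i in range(3))
assert abs(trace(tau_p) - trace(tau)) < 1e-12
```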

How do we extract these buried simplicities?

Recall

\begin{aligned}U[M] {\left\lvert {j m''} \right\rangle} \end{aligned} \hspace{\stretch{1}}(2.12)

gives a linear combination of the {\left\lvert {j m'} \right\rangle}.

\begin{aligned}U[M] {\left\lvert {j m''} \right\rangle} &=\sum_{m'} {\left\lvert {j m'} \right\rangle} {\left\langle {j m'} \right\rvert} U[M] {\left\lvert {j m''} \right\rangle}  \\ &=\sum_{m'} {\left\lvert {j m'} \right\rangle} D^{(j)}_{m' m''}[M] \\ \end{aligned}

We've talked before about how these D^{(j)}_{m' m''}[M] form a representation of the rotation group. These are in fact (not proved here) an irreducible representation.

Look at each element of D^{(j)}_{m' m''}[M]. These are matrices that differ according to which rotation M is chosen. For any given element there is some M for which that element is nonzero; no element is zero for every possible M. There are more formal ways to express this in a group theory context, but this is a physical way to think about it.

Think of these as the basis vectors for some eigenket of J^2.

\begin{aligned}{\left\lvert {\psi} \right\rangle} &= \sum_{m''} {\left\lvert {j m''} \right\rangle} \left\langle{{j m''}} \vert {{\psi}}\right\rangle \\ &= \sum_{m''} \bar{a}_{m''} {\left\lvert {j m''} \right\rangle}\end{aligned}

where

\begin{aligned}\bar{a}_{m''} = \left\langle{{j m''}} \vert {{\psi}}\right\rangle \end{aligned} \hspace{\stretch{1}}(2.13)

So

\begin{aligned}U[M] {\left\lvert {\psi} \right\rangle} &= \sum_{m'} U[M] {\left\lvert {j m'} \right\rangle} \left\langle{{j m'}} \vert {{\psi}}\right\rangle \\ &= \sum_{m'} U[M] {\left\lvert {j m'} \right\rangle} \bar{a}_{m'} \\ &= \sum_{m', m''} {\left\lvert {j m''} \right\rangle} {\left\langle {j m''} \right\rvert}U[M] {\left\lvert {j m'} \right\rangle} \bar{a}_{m'} \\ &= \sum_{m', m''} {\left\lvert {j m''} \right\rangle} D^{(j)}_{m'', m'}\bar{a}_{m'} \\ &= \sum_{m''} \tilde{a}_{m''} {\left\lvert {j m''} \right\rangle} \end{aligned}

where

\begin{aligned}\tilde{a}_{m''} = \sum_{m'} D^{(j)}_{m'', m'} \bar{a}_{m'} \\ \end{aligned} \hspace{\stretch{1}}(2.14)

Recall that

\begin{aligned}\tilde{r}_i = \sum_j M_{ij} \bar{r}_j\end{aligned} \hspace{\stretch{1}}(2.15)

Define (2k + 1) operators {T_k}^q, q = k, k-1, \cdots -k as the elements of a spherical tensor of rank k if

\begin{aligned}U[M] {T_k}^q U^\dagger[M] = \sum_{q'} D^{(k)}_{q' q} {T_k}^{q'}\end{aligned} \hspace{\stretch{1}}(2.16)

Here we are looking for a better way to organize things, and it will turn out (not to be proved) that this will be an irreducible way to represent things.

Examples.

We want to work though some examples of spherical tensors, and how they relate to Cartesian tensors. To do this, a motivating story needs to be told.

Let’s suppose that {\left\lvert {\psi} \right\rangle} is a ket for a single particle. Perhaps we are talking about an electron without spin, and write

\begin{aligned}\left\langle{\mathbf{r}} \vert {{\psi}}\right\rangle &= Y_{lm}(\theta, \phi) f(r) \\ &= \sum_{m''} \bar{a}_{m''} Y_{l m''}(\theta, \phi) \end{aligned}

for \bar{a}_{m''} = \delta_{m'' m} and after dropping f(r). So

\begin{aligned}{\left\langle {\mathbf{r}} \right\rvert} U[M] {\left\lvert {\psi} \right\rangle} =\sum_{m''} \sum_{m'} D^{(j)}_{m'' m'} \bar{a}_{m'} Y_{l m''}(\theta, \phi) \end{aligned} \hspace{\stretch{1}}(2.17)

We are writing this in this particular way to make a point. Now also assume that

\begin{aligned}\left\langle{\mathbf{r}} \vert {{\psi}}\right\rangle = Y_{lm}(\theta, \phi)\end{aligned} \hspace{\stretch{1}}(2.18)

so we find

\begin{aligned}{\left\langle {\mathbf{r}} \right\rvert} U[M] {\left\lvert {\psi} \right\rangle} &=\sum_{m''} Y_{l m''}(\theta, \phi) D^{(j)}_{m'' m} \\ &\equiv Y'_{l m}(\theta, \phi) \end{aligned}

\begin{aligned}Y_{l m}(\theta, \phi)  = Y_{lm}(x, y, z)\end{aligned} \hspace{\stretch{1}}(2.19)

so

\begin{aligned}Y'_{l m}(x, y, z)= \sum_{m''} Y_{l m''}(x, y, z)D^{(j)}_{m'' m} \end{aligned} \hspace{\stretch{1}}(2.20)

Now consider the spherical harmonic as an operator Y_{l m}(X, Y, Z)

\begin{aligned}U[M] Y_{lm}(X, Y, Z) U^\dagger[M] =\sum_{m''} Y_{l m''}(X, Y, Z)D^{(j)}_{m'' m} \end{aligned} \hspace{\stretch{1}}(2.21)

So this is a way to generate spherical tensor operators of rank 0, 1, 2, \cdots.

References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.


PHY456H1F, Quantum Mechanics II. My solutions to problem set 1 (ungraded).

Posted by peeterjoot on September 19, 2011


Harmonic oscillator.

Consider

\begin{aligned}H_0 = \frac{P^2}{2m} + \frac{1}{{2}} m \omega^2 X^2\end{aligned} \hspace{\stretch{1}}(1.1)

Since it’s been a while let’s compute the raising and lowering factorization that was used so extensively for this problem.

It was of the form

\begin{aligned}H_0 = (a X - i b P)(a X + i b P) + \cdots\end{aligned} \hspace{\stretch{1}}(1.2)

Why this factorization has an imaginary factor in it is a good question. It's not one that is given any sort of rationale in the text ([1]).

It’s clear that we want a = \sqrt{m/2} \omega and b = 1/\sqrt{2m}. The difference is then

\begin{aligned}H_0 - (a X - i b P)(a X + i b P)=- i a b \left[{X},{P}\right]  = - i \frac{\omega}{2} \left[{X},{P}\right]\end{aligned} \hspace{\stretch{1}}(1.3)

That commutator is an i\hbar value, but what was the sign? Let’s compute so we don’t get it wrong

\begin{aligned}\left[{x},{ p}\right] \psi&= -i \hbar \left[{x},{\partial_x}\right] \psi \\ &= -i \hbar ( x \partial_x \psi - \partial_x (x \psi) ) \\ &= -i \hbar ( - \psi ) \\ &= i \hbar \psi\end{aligned}
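This sign check can also be automated (a sketch of mine): representing p = -i\hbar \partial_x with \hbar = 1, we act with x \partial_x - \partial_x (x \cdot) on polynomial coefficients.

```python
# A polynomial is a list of coefficients c, representing sum_k c[k] x^k.

def d_dx(c):
    # derivative: coefficient of x^{k-1} is k*c[k]
    return [k * c[k] for k in range(1, len(c))] or [0]

def times_x(c):
    # multiplication by x shifts coefficients up by one
    return [0] + list(c)

def x_del_commutator(c):
    # compute (x psi' - (x psi)') as a coefficient list; then
    # [x, p] psi = -i hbar * (this result)
    a = times_x(d_dx(c))
    b = d_dx(times_x(c))
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    return [a[k] - b[k] for k in range(n)]

psi = [3, 1, 4, 1, 5]  # an arbitrary polynomial
# x psi' - (x psi)' = -psi, so [x, p] psi = (-i)(-psi) = +i psi: [x, p] = +i hbar
assert x_del_commutator(psi) == [-c for c in psi]
```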

So we have

\begin{aligned}H_0 =\left(\omega \sqrt{\frac{m}{2}} X - i \sqrt{\frac{1}{2m}} P\right)\left(\omega \sqrt{\frac{m}{2}} X + i \sqrt{\frac{1}{2m}} P\right)+ \frac{\hbar \omega}{2}\end{aligned} \hspace{\stretch{1}}(1.4)

Factoring out an \hbar \omega produces the form of the Hamiltonian that we used before

\begin{aligned}H_0 =\hbar \omega \left(\left(\sqrt{\frac{m \omega}{2 \hbar}} X - i \sqrt{\frac{1}{2m \hbar \omega}} P\right)\left(\sqrt{\frac{m \omega}{2 \hbar}} X + i \sqrt{\frac{1}{2m \hbar \omega}} P\right)+ \frac{1}{{2}}\right).\end{aligned} \hspace{\stretch{1}}(1.5)

The factors were labeled the raising (a^\dagger) and lowering (a) operators respectively, and written

\begin{aligned}H_0 &= \hbar \omega \left( a^\dagger a + \frac{1}{{2}} \right) \\ a &= \sqrt{\frac{m \omega}{2 \hbar}} X + i \sqrt{\frac{1}{2m \hbar \omega}} P \\ a^\dagger &= \sqrt{\frac{m \omega}{2 \hbar}} X - i \sqrt{\frac{1}{2m \hbar \omega}} P.\end{aligned} \hspace{\stretch{1}}(1.6)

Observe that we can find the inverse relations

\begin{aligned}X &= \sqrt{ \frac{\hbar}{2 m \omega} } \left( a + a^\dagger \right) \\ P &= i \sqrt{ \frac{m \hbar \omega}{2} } \left( a^\dagger  - a \right)\end{aligned} \hspace{\stretch{1}}(1.9)

Question
What is a good reason that we chose this particular factorization? For example, a quick computation shows that we could have also picked

\begin{aligned}H_0 = \hbar \omega \left( a a^\dagger - \frac{1}{{2}} \right).\end{aligned} \hspace{\stretch{1}}(1.11)

I don’t know the answer. That said, this second factorization is useful in that it provides the commutator relation between the raising and lowering operators, since equating 1.6 and 1.11 yields

\begin{aligned}\left[{a},{a^\dagger}\right] = 1.\end{aligned} \hspace{\stretch{1}}(1.12)
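As an aside (not part of the problem), 1.12 can be spot checked numerically with truncated matrix representations of the ladder operators; the only wrinkle is the bottom corner entry of the commutator, which is a truncation artifact:

```python
import numpy as np

N = 10
# lowering operator in a truncated Fock basis: a|n> = sqrt(n)|n-1>
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
comm = a @ a.T - a.T @ a   # [a, a^dagger]
# identity, except for the last diagonal entry (truncation artifact)
assert np.allclose(comm[:N-1, :N-1], np.eye(N-1))
assert np.isclose(comm[N-1, N-1], -(N-1))
```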

If we suppose that we have eigenstates for the operator a^\dagger a of the form

\begin{aligned}a^\dagger a {\lvert {n} \rangle} = \lambda_n {\lvert {n} \rangle},\end{aligned} \hspace{\stretch{1}}(1.13)

then the problem of finding the eigensolutions of H_0 reduces to this one. Since a^\dagger a differs from H_0/\hbar \omega only by the constant 1/2, an eigenstate of a^\dagger a is also an eigenstate of H_0. Utilizing 1.12 we then have

\begin{aligned}a^\dagger a ( a {\lvert {n} \rangle} )&= (a a^\dagger - 1 ) a {\lvert {n} \rangle} \\ &= a (a^\dagger a - 1 ) {\lvert {n} \rangle} \\ &= a (\lambda_n - 1 ) {\lvert {n} \rangle} \\ &= (\lambda_n - 1 ) a {\lvert {n} \rangle},\end{aligned}

so we see that a {\lvert {n} \rangle} is an eigenstate of a^\dagger a with eigenvalue \lambda_n - 1.

Similarly for the raising operator

\begin{aligned}a^\dagger a ( a^\dagger {\lvert {n} \rangle} )&=a^\dagger (a  a^\dagger) {\lvert {n} \rangle} \\ &=a^\dagger (a^\dagger a + 1) {\lvert {n} \rangle} \\ &=a^\dagger (\lambda_n + 1) {\lvert {n} \rangle},\end{aligned}

and find that a^\dagger {\lvert {n} \rangle} is also an eigenstate of a^\dagger a with eigenvalue \lambda_n + 1.

Supposing that there is a lowest energy level (because the potential V(x) = m \omega^2 x^2 /2 has a lower bound of zero), then for the lowest energy state {\lvert {0} \rangle}, operation by a must give zero

\begin{aligned}a {\lvert {0} \rangle} = 0\end{aligned} \hspace{\stretch{1}}(1.14)

Thus

\begin{aligned}a^\dagger a {\lvert {0} \rangle} = 0,\end{aligned} \hspace{\stretch{1}}(1.15)

and

\begin{aligned}\lambda_0 = 0.\end{aligned} \hspace{\stretch{1}}(1.16)

This seems like a small bit of sleight of hand, since it sneakily supplies an integer value to \lambda_0, where up to this point 0 was just a label.

If the eigenvalue equation we are trying to solve for the Hamiltonian is

\begin{aligned}H_0 {\lvert {n} \rangle} = E_n {\lvert {n} \rangle},\end{aligned} \hspace{\stretch{1}}(1.17)

then we must have

\begin{aligned}E_n = \hbar \omega \left(\lambda_n + \frac{1}{{2}} \right) = \hbar \omega \left(n + \frac{1}{{2}} \right)\end{aligned} \hspace{\stretch{1}}(1.18)
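As a quick sanity check (my own aside, not part of the text), the n + 1/2 spectrum can be verified by discretizing H_0 on a grid and diagonalizing, with \hbar = m = \omega = 1; the grid size and box length here are arbitrary choices:

```python
import numpy as np

# Discretize H0 = p^2/2m + m w^2 x^2/2 (hbar = m = omega = 1) on a grid
N, L = 1000, 16.0
x = np.linspace(-L/2, L/2, N)
dx = x[1] - x[0]
# kinetic energy via the central second difference
T = (2*np.eye(N) - np.diag(np.ones(N-1), 1) - np.diag(np.ones(N-1), -1)) / (2*dx**2)
H = T + np.diag(x**2 / 2)
E = np.linalg.eigvalsh(H)[:4]
# lowest levels should be n + 1/2 up to discretization error
assert np.allclose(E, [0.5, 1.5, 2.5, 3.5], atol=2e-3)
```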

Part (a)

We’ve now got enough context to attempt the first part of the question, calculation of

\begin{aligned}{\langle {n} \rvert} X^4 {\lvert {n} \rangle}\end{aligned} \hspace{\stretch{1}}(1.19)

We’ve calculated things like this before, such as

\begin{aligned}{\langle {n} \rvert} X^2 {\lvert {n} \rangle}&=\frac{\hbar}{2 m \omega} {\langle {n} \rvert} (a + a^\dagger)^2 {\lvert {n} \rangle}\end{aligned}

To continue we need an exact relation between {\lvert {n} \rangle} and {\lvert {n \pm 1} \rangle}. Recall that a {\lvert {n} \rangle} was an eigenstate of a^\dagger a with eigenvalue n - 1. This implies that the eigenstates a {\lvert {n} \rangle} and {\lvert {n-1} \rangle} are proportional

\begin{aligned}a {\lvert {n} \rangle} = c_n {\lvert {n - 1} \rangle},\end{aligned} \hspace{\stretch{1}}(1.20)

or

\begin{aligned}{\left\lvert{c_n}\right\rvert}^2 = {\left\lvert{c_n}\right\rvert}^2 \left\langle{{n - 1}} \vert {{n-1}}\right\rangle = {\langle {n} \rvert} a^\dagger a {\lvert {n} \rangle} = n \left\langle{{n}} \vert {{n}}\right\rangle = n,\end{aligned}

so that

\begin{aligned}a {\lvert {n} \rangle} = \sqrt{n} {\lvert {n - 1} \rangle}.\end{aligned} \hspace{\stretch{1}}(1.21)

Similarly let

\begin{aligned}a^\dagger {\lvert {n} \rangle} = b_n {\lvert {n + 1} \rangle},\end{aligned} \hspace{\stretch{1}}(1.22)

or

\begin{aligned}{\left\lvert{b_n}\right\rvert}^2 = {\left\lvert{b_n}\right\rvert}^2 \left\langle{{n + 1}} \vert {{n+1}}\right\rangle = {\langle {n} \rvert} a a^\dagger {\lvert {n} \rangle} = {\langle {n} \rvert} (1 + a^\dagger a) {\lvert {n} \rangle} = 1 + n,\end{aligned}

so that

\begin{aligned}a^\dagger {\lvert {n} \rangle} = \sqrt{n+1} {\lvert {n + 1} \rangle}.\end{aligned} \hspace{\stretch{1}}(1.23)

We can now return to 1.19, and find

\begin{aligned}{\langle {n} \rvert} X^4 {\lvert {n} \rangle}&=\frac{\hbar^2}{4 m^2 \omega^2} {\langle {n} \rvert} (a + a^\dagger)^4 {\lvert {n} \rangle}\end{aligned}

Consider half of this braket

\begin{aligned}(a + a^\dagger)^2 {\lvert {n} \rangle}&=\left( a^2 + (a^\dagger)^2 + a^\dagger a + a a^\dagger \right) {\lvert {n} \rangle} \\ &=\left( a^2 + (a^\dagger)^2 + a^\dagger a + (1 + a^\dagger a) \right) {\lvert {n} \rangle} \\ &=\left( a^2 + (a^\dagger)^2 + 1 + 2 a^\dagger a \right) {\lvert {n} \rangle} \\ &=\sqrt{n}\sqrt{n-1} {\lvert {n-2} \rangle}+\sqrt{n+1}\sqrt{n+2} {\lvert {n + 2} \rangle}+{\lvert {n} \rangle}+  2 n {\lvert {n} \rangle}\end{aligned}

Squaring, utilizing the Hermitian nature of the X operator

\begin{aligned}{\langle {n} \rvert} X^4 {\lvert {n} \rangle}=\frac{\hbar^2}{4 m^2 \omega^2}\left(n(n-1) + (n+1)(n+2) + (1 + 2n)^2\right)=\frac{\hbar^2}{4 m^2 \omega^2}\left( 6 n^2 + 6 n + 3 \right)\end{aligned} \hspace{\stretch{1}}(1.24)
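This matrix element is easy to check numerically with truncated matrix representations of a and a^\dagger (a quick aside; with \hbar = m = \omega = 1 the prefactor is 1/4):

```python
import numpy as np

N = 40
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # a|n> = sqrt(n)|n-1>
X = (a + a.T) / np.sqrt(2)                   # hbar = m = omega = 1
X4 = np.linalg.matrix_power(X, 4)
for n in range(6):
    # expected (hbar/(2 m omega))^2-style prefactor: (6 n^2 + 6 n + 3)/4
    assert abs(X4[n, n] - (6*n**2 + 6*n + 3)/4) < 1e-10
```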

Part (b)

Find the ground state energy of the Hamiltonian H = H_0 + \gamma X^2 for \gamma > 0.

The new Hamiltonian has the form

\begin{aligned}H = \frac{P^2}{2m} + \frac{1}{{2}} m \left(\omega^2 + \frac{2 \gamma}{m} \right) X^2 =\frac{P^2}{2m} + \frac{1}{{2}} m {\omega'}^2 X^2,\end{aligned} \hspace{\stretch{1}}(1.25)

where

\begin{aligned}\omega' = \sqrt{ \omega^2 + \frac{2 \gamma}{m} }\end{aligned} \hspace{\stretch{1}}(1.26)

The energy states of the Hamiltonian are thus

\begin{aligned}E_n = \hbar \sqrt{ \omega^2 + \frac{2 \gamma}{m} } \left( n + \frac{1}{{2}} \right)\end{aligned} \hspace{\stretch{1}}(1.27)

and the ground state of the modified Hamiltonian H is thus

\begin{aligned}E_0 = \frac{\hbar}{2} \sqrt{ \omega^2 + \frac{2 \gamma}{m} }\end{aligned} \hspace{\stretch{1}}(1.28)

Part (c)

Find the ground state energy of the Hamiltonian H = H_0 - \alpha X.

With a bit of play, this new Hamiltonian can be factored into

\begin{aligned}H= \hbar \omega \left( b^\dagger b + \frac{1}{{2}} \right) - \frac{\alpha^2}{2 m \omega^2}= \hbar \omega \left( b b^\dagger - \frac{1}{{2}} \right) - \frac{\alpha^2}{2 m \omega^2},\end{aligned} \hspace{\stretch{1}}(1.29)

where

\begin{aligned}b &= \sqrt{\frac{m \omega}{2\hbar}} X + \frac{i P}{\sqrt{2 m \hbar \omega}} - \frac{\alpha}{\omega \sqrt{ 2 m \hbar \omega }} \\ b^\dagger &= \sqrt{\frac{m \omega}{2\hbar}} X - \frac{i P}{\sqrt{2 m \hbar \omega}} - \frac{\alpha}{\omega \sqrt{ 2 m \hbar \omega }}.\end{aligned} \hspace{\stretch{1}}(1.30)

From 1.29 we see that we have the same sort of commutator relationship as in the original Hamiltonian

\begin{aligned}\left[{b},{b^\dagger}\right] = 1,\end{aligned} \hspace{\stretch{1}}(1.32)

and because of this, all the preceding arguments follow unchanged, with the exception that the energy eigenvalues of this Hamiltonian are shifted by a constant

\begin{aligned}H {\lvert {n} \rangle} = \left( \hbar \omega \left( n + \frac{1}{{2}} \right) - \frac{\alpha^2}{2 m \omega^2} \right) {\lvert {n} \rangle},\end{aligned} \hspace{\stretch{1}}(1.33)

where the {\lvert {n} \rangle} states are simultaneous eigenstates of the b^\dagger b operator

\begin{aligned}b^\dagger b {\lvert {n} \rangle} = n {\lvert {n} \rangle}.\end{aligned} \hspace{\stretch{1}}(1.34)

The ground state energy is then

\begin{aligned}E_0 = \frac{\hbar \omega }{2} - \frac{\alpha^2}{2 m \omega^2}.\end{aligned} \hspace{\stretch{1}}(1.35)

This makes sense. A translation of the entire position of the system should not affect the energy level distribution of the system, but we have set our reference potential differently, and have this constant energy adjustment to the entire system.
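Both of these ground state energies (Parts b and c) can be spot checked by diagonalizing the perturbed Hamiltonians in a truncated number-state basis (a numerical aside; \hbar = m = \omega = 1, and the values of \gamma, \alpha are arbitrary test choices):

```python
import numpy as np

N = 300
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
X = (a + a.T) / np.sqrt(2)                 # hbar = m = omega = 1
H0 = np.diag(np.arange(N) + 0.5)

gamma, alpha = 0.4, 0.3                    # arbitrary test values
E0_b = np.linalg.eigvalsh(H0 + gamma * (X @ X))[0]   # H0 + gamma X^2
E0_c = np.linalg.eigvalsh(H0 - alpha * X)[0]         # H0 - alpha X

assert abs(E0_b - 0.5*np.sqrt(1 + 2*gamma)) < 1e-8   # matches (1.28)
assert abs(E0_c - (0.5 - alpha**2/2)) < 1e-8         # matches (1.35)
```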

Hydrogen atom and spherical harmonics.

We are asked to show that for any eigenkets of the hydrogen atom {\lvert {\Phi_{nlm}} \rangle} we have

\begin{aligned}{\langle {\Phi_{nlm}} \rvert} X {\lvert {\Phi_{nlm}} \rangle} ={\langle {\Phi_{nlm}} \rvert} Y {\lvert {\Phi_{nlm}} \rangle} ={\langle {\Phi_{nlm}} \rvert} Z {\lvert {\Phi_{nlm}} \rangle}.\end{aligned} \hspace{\stretch{1}}(2.36)

The summary sheet provides us with the wavefunction

\begin{aligned}\left\langle{\mathbf{r}} \vert {{\Phi_{nlm}}}\right\rangle = \frac{2}{n^2 a_0^{3/2}} \sqrt{\frac{(n-l-1)!}{((n+l)!)^3}} F_{nl}\left( \frac{2r}{n a_0} \right) Y_l^m(\theta, \phi),\end{aligned} \hspace{\stretch{1}}(2.37)

where F_{nl} is a real valued function defined in terms of Laguerre polynomials. Working with the expectation of the X operator to start with, we have

\begin{aligned}{\langle {\Phi_{nlm}} \rvert} X {\lvert {\Phi_{nlm}} \rangle} &=\int \left\langle{{\Phi_{nlm}}} \vert {{\mathbf{r}'}}\right\rangle {\langle {\mathbf{r}'} \rvert} X {\lvert {\mathbf{r}} \rangle} \left\langle{\mathbf{r}} \vert {{\Phi_{nlm}}}\right\rangle d^3 \mathbf{r} d^3 \mathbf{r}' \\ &=\int \left\langle{{\Phi_{nlm}}} \vert {{\mathbf{r}'}}\right\rangle \delta(\mathbf{r} - \mathbf{r}') r \sin\theta \cos\phi \left\langle{\mathbf{r}} \vert {{\Phi_{nlm}}}\right\rangle d^3 \mathbf{r} d^3 \mathbf{r}' \\ &=\int \Phi_{nlm}^{*}(\mathbf{r}) r \sin\theta \cos\phi \Phi_{nlm}(\mathbf{r}) d^3 \mathbf{r} \\ &\sim\int r^2 dr {\left\lvert{ F_{nl}\left(\frac{2 r}{ n a_0} \right)}\right\rvert}^2 r \int \sin\theta d\theta d\phi{Y_l^m}^{*}(\theta, \phi) \sin\theta \cos\phi Y_l^m(\theta, \phi) \\ \end{aligned}

Recalling that the only \phi dependence in Y_l^m is e^{i m \phi} we can perform the d\phi integration directly, which is

\begin{aligned}\int_{\phi=0}^{2\pi} \cos\phi d\phi e^{-i m \phi} e^{i m \phi} = 0.\end{aligned} \hspace{\stretch{1}}(2.38)

We have the same story for the Y expectation which is

\begin{aligned}{\langle {\Phi_{nlm}} \rvert} Y {\lvert {\Phi_{nlm}} \rangle} \sim\int r^2 dr {\left\lvert{F_{nl}\left( \frac{2 r}{ n a_0} \right)}\right\rvert}^2 r \int \sin\theta d\theta d\phi{Y_l^m}^{*}(\theta, \phi) \sin\theta \sin\phi Y_l^m(\theta, \phi).\end{aligned} \hspace{\stretch{1}}(2.39)

Our \phi integral is then just

\begin{aligned}\int_{\phi=0}^{2\pi} \sin\phi d\phi e^{-i m \phi} e^{i m \phi} = 0,\end{aligned} \hspace{\stretch{1}}(2.40)

also zero. The Z expectation is a slightly different story. There we have

\begin{aligned}\begin{aligned}{\langle {\Phi_{nlm}} \rvert} Z {\lvert {\Phi_{nlm}} \rangle} &\sim\int dr {\left\lvert{F_{nl}\left( \frac{2 r}{ n a_0} \right)}\right\rvert}^2 r^3  \\ &\quad \int_0^{2\pi} d\phi\int_0^\pi \sin \theta d\theta\left( \sin\theta \right)^{-2m}\left( \frac{d^{l - m}}{d (\cos\theta)^{l-m}} \sin^{2l}\theta \right)^2\cos\theta.\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.41)

Within this last integral we can make the substitution

\begin{aligned}u &= \cos\theta \\ \sin\theta d\theta &= - d(\cos\theta) = -du \\ u &\in [1, -1],\end{aligned} \hspace{\stretch{1}}(2.42)

and the integral takes the form

\begin{aligned}-\int_{-1}^1 (-du) \frac{1}{{(1 - u^2)^m}} \left( \frac{d^{l-m}}{d u^{l -m }} (1 - u^2)^l\right)^2 u.\end{aligned} \hspace{\stretch{1}}(2.45)

Here we have the product of two even functions, times one odd function (u), over a symmetric interval, so the end result is zero, completing the problem.
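The parity argument for 2.45 can also be spot checked symbolically for a few specific (l, m) pairs (a quick aside using sympy, not part of the original problem):

```python
import sympy as sp

u = sp.symbols('u')
# the integrand of 2.45: even * even * odd over a symmetric interval
for l, m in [(1, 0), (2, 0), (2, 1), (3, 2)]:
    integrand = sp.cancel(u * sp.diff((1 - u**2)**l, u, l - m)**2
                          / (1 - u**2)**m)
    assert sp.integrate(integrand, (u, -1, 1)) == 0
```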

I wasn’t able to see how to exploit the parity result suggested in the problem, but it wasn’t so bad to show these directly.

Angular momentum operator.

Working with the appropriate expressions in Cartesian components, confirm that L_i {\lvert {\psi} \rangle} = 0 for each component of angular momentum L_i, if \left\langle{\mathbf{r}} \vert {{\psi}}\right\rangle = \psi(\mathbf{r}) is in fact only a function of r = {\left\lvert{\mathbf{r}}\right\rvert}.

In order to proceed, we will have to consider a matrix element, so that we can operate on {\lvert {\psi} \rangle} in position space. For that matrix element, we can proceed to insert complete states, and reduce the problem to a question of wavefunctions. That is

\begin{aligned}{\langle {\mathbf{r}} \rvert} L_i {\lvert {\psi} \rangle}&=\int d^3 \mathbf{r}' {\langle {\mathbf{r}} \rvert} L_i {\lvert {\mathbf{r}'} \rangle} \left\langle{{\mathbf{r}'}} \vert {{\psi}}\right\rangle \\ &=\int d^3 \mathbf{r}' {\langle {\mathbf{r}} \rvert} \epsilon_{i a b} X_a P_b {\lvert {\mathbf{r}'} \rangle} \left\langle{{\mathbf{r}'}} \vert {{\psi}}\right\rangle \\ &=-i \hbar \epsilon_{i a b} \int d^3 \mathbf{r}' x_a {\langle {\mathbf{r}} \rvert} \frac{\partial {\psi(\mathbf{r}')}}{\partial {X_b}} {\lvert {\mathbf{r}'} \rangle}  \\ &=-i \hbar \epsilon_{i a b} \int d^3 \mathbf{r}' x_a \frac{\partial {\psi(\mathbf{r}')}}{\partial {x_b}} \left\langle{\mathbf{r}} \vert {{\mathbf{r}'}}\right\rangle  \\ &=-i \hbar \epsilon_{i a b} \int d^3 \mathbf{r}' x_a \frac{\partial {\psi(\mathbf{r}')}}{\partial {x_b}} \delta^3(\mathbf{r} - \mathbf{r}') \\ &=-i \hbar \epsilon_{i a b} x_a \frac{\partial {\psi(\mathbf{r})}}{\partial {x_b}} \end{aligned}

With \psi(\mathbf{r}) = \psi(r) we have

\begin{aligned}{\langle {\mathbf{r}} \rvert} L_i {\lvert {\psi} \rangle}&=-i \hbar \epsilon_{i a b} x_a \frac{\partial {\psi(r)}}{\partial {x_b}}  \\ &=-i \hbar \epsilon_{i a b} x_a \frac{\partial {r}}{\partial {x_b}} \frac{d\psi(r)}{dr}  \\ &=-i \hbar \epsilon_{i a b} x_a \frac{1}{{2}} 2 x_b \frac{1}{{r}} \frac{d\psi(r)}{dr}  \\ \end{aligned}

We are left with a sum of the symmetric product x_a x_b contracted with the antisymmetric tensor \epsilon_{i a b}, so this is zero for each i \in \{1, 2, 3\}.
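This conclusion can also be verified directly in coordinates with a symbolic computation (an aside; \hbar = 1, and f is an arbitrary radial function):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)
psi = sp.Function('f')(r)          # an arbitrary radial wavefunction
coords = (x, y, z)
# L_i = -i eps_{iab} x_a d/dx_b; components (y dz - z dy), (z dx - x dz), (x dy - y dx)
for a_, b_ in [(1, 2), (2, 0), (0, 1)]:
    Li = -sp.I * (coords[a_]*sp.diff(psi, coords[b_])
                  - coords[b_]*sp.diff(psi, coords[a_]))
    assert sp.simplify(Li) == 0
```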

References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.


A problem on spherical harmonics.

Posted by peeterjoot on January 10, 2011


Motivation.

One of the PHY356 exam questions from the final I recall screwing up on, and figuring out after the fact on the drive home. The question actually clarified a difficulty I’d had, but unfortunately I hadn’t had the good luck to work through such a question beforehand, which would have helped me figure this out before the exam.

From what I recall the question provided an initial state, with some degeneracy in m, perhaps of the following form

\begin{aligned}{\lvert {\phi(0)} \rangle} = \sqrt{\frac{1}{7}} {\lvert { 12 } \rangle}+\sqrt{\frac{2}{7}} {\lvert { 10 } \rangle}+\sqrt{\frac{4}{7}} {\lvert { 20 } \rangle},\end{aligned} \hspace{\stretch{1}}(1.1)

and a Hamiltonian of the form

\begin{aligned}H = \alpha L_z\end{aligned} \hspace{\stretch{1}}(1.2)

From what I recall of the problem, I am going to reattempt it here now.

Evolved state.

One part of the question was to calculate the evolved state. Application of the time evolution operator gives us

\begin{aligned}{\lvert {\phi(t)} \rangle} = e^{-i \alpha L_z t/\hbar} \left(\sqrt{\frac{1}{7}} {\lvert { 12 } \rangle}+\sqrt{\frac{2}{7}} {\lvert { 10 } \rangle}+\sqrt{\frac{4}{7}} {\lvert { 20 } \rangle} \right).\end{aligned} \hspace{\stretch{1}}(1.3)

Now we note that L_z {\lvert {12} \rangle} = 2 \hbar {\lvert {12} \rangle}, and L_z {\lvert { l 0} \rangle} = 0 {\lvert {l 0} \rangle}, so the exponentials reduce this nicely to just

\begin{aligned}{\lvert {\phi(t)} \rangle} = \sqrt{\frac{1}{7}} e^{ -2 i \alpha t } {\lvert { 12 } \rangle}+\sqrt{\frac{2}{7}} {\lvert { 10 } \rangle}+\sqrt{\frac{4}{7}} {\lvert { 20 } \rangle}.\end{aligned} \hspace{\stretch{1}}(1.4)

Probabilities for L_z measurement outcomes.

I believe we were also asked what the probabilities for the outcomes of a measurement of L_z at this time would be. Here is one place that I think I messed up, and it is really a translation error, going from the english description of the problem to the math description of the same. I’d had trouble with this process a few times in the problems, and managed to blunder through despite the use of language like “measure” and “outcome”, but I don’t think I really understood how these terms were used properly.

What are the outcomes that we measure? We measure operators, but the result of a measurement is the eigenvalue associated with the operator. What are the eigenvalues of the L_z operator? These are the m \hbar values, from the operation L_z {\lvert {l m} \rangle} = m \hbar {\lvert {l m} \rangle}. So, given this initial state, there are really two outcomes that are possible, since we have two distinct eigenvalues. These are 2 \hbar and 0 for m = 2, and m= 0 respectively.

The probability of measuring the outcome 2 \hbar is the absolute square of the amplitude \left\langle{{ 1 2 }} \vert {{\phi(t)}}\right\rangle. That is

\begin{aligned}{\left\lvert{ \left\langle{{ 1 2 }} \vert {{\phi(t) }}\right\rangle }\right\rvert}^2 = \frac{1}{7}.\end{aligned} \hspace{\stretch{1}}(1.5)

Now, the only other outcome for a measurement of L_z for this state is a measurement of 0 \hbar, and the probability of this is then just 1 - \frac{1}{7} = \frac{6}{7}. On the exam, I think I listed probabilities for three outcomes, with values \frac{1}{7}, \frac{2}{7}, \frac{4}{7} respectively, but in retrospect that seems blatantly wrong.

Probabilities for \mathbf{L}^2 measurement outcomes.

What are the probabilities for the outcomes for a measurement of \mathbf{L}^2 after this? The first question is really what are the outcomes. That’s really a question of what are the possible eigenvalues of \mathbf{L}^2 that can be measured at this point. Recall that we have

\begin{aligned}\mathbf{L}^2 {\lvert {l m} \rangle} = \hbar^2 l (l + 1) {\lvert {l m} \rangle}\end{aligned} \hspace{\stretch{1}}(1.6)

So for a state that has only l=1,2 contributions before the measurement, the eigenvalues that can be observed for the \mathbf{L}^2 operator are 2 \hbar^2 and 6 \hbar^2 respectively.

For the l=2 case, our probability is 4/7, leaving 3/7 as the probability for measurement of the l=1 (2 \hbar^2) eigenvalue. We can compute this two ways, and it seems worthwhile to consider both. This first method makes use of the fact that the L_z operator leaves the state vector intact, but it also seems like a bit of a cheat. Consider instead two possible results of measurement after the L_z observation. When an L_z measurement of 0 \hbar is performed our state will be left with only the m=0 kets. That is

\begin{aligned}{\lvert {\psi_a} \rangle} = \frac{1}{{\sqrt{3}}} \left( {\lvert {10} \rangle} + \sqrt{2} {\lvert {20} \rangle} \right),\end{aligned} \hspace{\stretch{1}}(1.7)

whereas, when a 2 \hbar measurement of L_z is performed our state would then only have the m=2 contribution, and would be

\begin{aligned}{\lvert {\psi_b} \rangle} = e^{-2 i \alpha t} {\lvert {12 } \rangle}.\end{aligned} \hspace{\stretch{1}}(1.8)

We have two possible ways of measuring the 2 \hbar^2 eigenvalue for \mathbf{L}^2. One is when our state was {\lvert {\psi_a} \rangle}, which has a {\lvert {10} \rangle} component, and the other is after the m=2 measurement, where our state is left with a {\lvert {12} \rangle} component.

The resulting probability is then a conditional probability result

\begin{aligned}\frac{6}{7} {\left\lvert{ \left\langle{{10}} \vert {{\psi_a}}\right\rangle }\right\rvert}^2 + \frac{1}{7} {\left\lvert{ \left\langle{{12 }} \vert {{\psi_b}}\right\rangle}\right\rvert}^2 = \frac{3}{7}\end{aligned} \hspace{\stretch{1}}(1.9)

The result is the same, as expected, but this is likely a more convincing argument.
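The bookkeeping above is easy to double check with exact rational arithmetic (a quick aside; the phases drop out of all of the probabilities):

```python
from fractions import Fraction as F

# squared amplitudes of the initial state, for |12>, |10>, |20>
p12, p10, p20 = F(1, 7), F(2, 7), F(4, 7)

# direct: P(l = 1) is the total weight of the l = 1 kets
direct = p12 + p10

# conditional: route through the intermediate L_z measurement
p_m0 = p10 + p20                  # P(m = 0) = 6/7
p_l1_given_m0 = p10 / p_m0        # |<10|psi_a>|^2 = 1/3
conditional = p_m0 * p_l1_given_m0 + p12 * 1

assert direct == conditional == F(3, 7)
```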


Some worked problems from old PHY356 exams.

Posted by peeterjoot on January 9, 2011


Motivation.

Some of the old exam questions that I did for preparation for the exam I liked, and thought I’d write up some of them for potential future reference.

Questions from the Dec 2007 PHY355H1F exam.

1b. Parity operator.

\paragraph{Q:} If \Pi is the parity operator, defined by \Pi {\lvert {x} \rangle} = {\lvert {-x} \rangle}, where {\lvert {x} \rangle} is the eigenket of the position operator X with eigenvalue x, and P is the momentum operator conjugate to X, show (carefully) that \Pi P \Pi = -P.

\paragraph{A:}

Consider the matrix element {\langle {-x'} \rvert} \left[{\Pi},{P}\right] {\lvert {x} \rangle}. This is

\begin{aligned}{\langle {-x'} \rvert} \left[{\Pi},{P}\right] {\lvert {x} \rangle}&={\langle {-x'} \rvert} \Pi P - P \Pi {\lvert {x} \rangle} \\ &={\langle {-x'} \rvert} \Pi P {\lvert {x} \rangle} - {\langle {-x'} \rvert} P \Pi {\lvert {x} \rangle} \\ &={\langle {x'} \rvert} P {\lvert {x} \rangle} - {\langle {-x'} \rvert} P {\lvert {-x} \rangle} \\ &=- i \hbar \left(\delta(x'-x) \frac{\partial {}}{\partial {x}}-\underbrace{\delta(-x -(-x'))}_{= \delta(x'-x) = \delta(x-x')} \frac{\partial {}}{\partial {-x}}\right) \\ &=- 2 i \hbar \delta(x'-x) \frac{\partial {}}{\partial {x}} \\ &=2 {\langle {x'} \rvert} P {\lvert {x} \rangle} \\ &=2 {\langle {-x'} \rvert} \Pi P {\lvert {x} \rangle} \\ \end{aligned}

We’ve taken advantage of the Hermitian property of P and \Pi here, and can rearrange for

\begin{aligned}{\langle {-x'} \rvert} \Pi P - P \Pi - 2 \Pi P {\lvert {x} \rangle} = 0\end{aligned} \hspace{\stretch{1}}(2.1)

Since this is true for all {\langle {-x'} \rvert} and {\lvert {x} \rangle} we have

\begin{aligned}\Pi P + P \Pi = 0.\end{aligned} \hspace{\stretch{1}}(2.2)

Right multiplication by \Pi and rearranging we have

\begin{aligned}\Pi P \Pi = - P \Pi \Pi = - P.\end{aligned} \hspace{\stretch{1}}(2.3)
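As an aside, the operator identity \Pi P \Pi = -P can also be spot checked numerically, using a periodic-grid discretization of P and reversal of the sample order for \Pi (both discretization choices are mine, not from the exam):

```python
import numpy as np

N = 64
# central-difference momentum operator on a periodic grid (hbar = 1, dx = 1)
D = np.diag(np.ones(N-1), 1) - np.diag(np.ones(N-1), -1)
D[0, -1], D[-1, 0] = -1.0, 1.0     # periodic wrap-around
P = -0.5j * D
Pi = np.fliplr(np.eye(N))          # parity: reverse the sample order
assert np.allclose(Pi @ P @ Pi, -P)   # Pi P Pi = -P
assert np.allclose(P.conj().T, P)     # P is Hermitian, as used above
```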

1f. Free particle propagator.

\paragraph{Q:} For a free particle moving in one-dimension, the propagator (i.e. the coordinate representation of the evolution operator),

\begin{aligned}G(x,x';t) = {\langle {x} \rvert} U(t) {\lvert {x'} \rangle}\end{aligned} \hspace{\stretch{1}}(2.4)

is given by

\begin{aligned}G(x,x';t) = \sqrt{\frac{m}{2 \pi i \hbar t}} e^{i m (x-x')^2/ (2 \hbar t)}.\end{aligned} \hspace{\stretch{1}}(2.5)

\paragraph{A:}

This problem is actually fairly straightforward, but it is nice to work it having had a similar problem set question where we were asked about this time evolution operator matrix element (i.e. what its physical meaning is). Here we have a concrete example of the form of this matrix element.

Proceeding directly, we have

\begin{aligned}{\langle {x} \rvert} U {\lvert {x'} \rangle}&=\int \left\langle{x} \vert {p'}\right\rangle {\langle {p'} \rvert} U {\lvert {p} \rangle} \left\langle{p} \vert {x'}\right\rangle dp dp' \\ &=\int u_{p'}(x) {\langle {p'} \rvert} e^{-i P^2 t/(2 m \hbar)} {\lvert {p} \rangle} u_p^{*}(x') dp dp' \\ &=\int u_{p'}(x) e^{-i p^2 t/(2 m \hbar)} \delta(p-p') u_p^{*}(x') dp dp' \\ &=\int u_{p}(x) e^{-i p^2 t/(2 m \hbar)} u_p^{*}(x') dp \\ &=\frac{1}{(\sqrt{2 \pi \hbar})^2} \int e^{i p (x-x')/\hbar} e^{-i p^2 t/(2 m \hbar)} dp \\ &=\frac{1}{2 \pi \hbar} \int e^{i p (x-x')/\hbar} e^{-i p^2 t/(2 m \hbar)} dp \\ &=\frac{1}{2 \pi} \int e^{i k (x-x')} e^{-i \hbar k^2 t/(2 m)} dk \\ &=\frac{1}{2 \pi} \int dk e^{- \left(k^2 \frac{ i \hbar t}{2m} - i k (x-x')\right)} \\ &=\frac{1}{2 \pi} \int dk e^{- \frac{ i \hbar t}{2m}\left(k - i \frac{2m}{i \hbar t}\frac{(x-x')}{2} \right)^2- \frac{i^2 2 m (x-x')^2}{4 i \hbar t} } \\ &=\frac{1}{2 \pi}  \sqrt{\pi} \sqrt{\frac{2m}{i \hbar t}}e^{\frac{ i m (x-x')^2}{2 \hbar t}},\end{aligned}

which is the desired result. Now, let’s look at how this would be used. We can express our time evolved state using this matrix element by introducing an identity

\begin{aligned}\left\langle{{x}} \vert {{\psi(t)}}\right\rangle &={\langle {x} \rvert} U {\lvert {\psi(0)} \rangle} \\ &=\int dx' {\langle {x} \rvert} U {\lvert {x'} \rangle} \left\langle{{x'}} \vert {{\psi(0)}}\right\rangle \\ &=\sqrt{\frac{m}{2 \pi i \hbar t}} \int dx' e^{i m (x-x')^2/ (2 \hbar t)}\left\langle{{x'}} \vert {{\psi(0)}}\right\rangle \\ \end{aligned}

This gives us

\begin{aligned}\psi(x, t)=\sqrt{\frac{m}{2 \pi i \hbar t}} \int dx' e^{i m (x-x')^2/ (2 \hbar t)} \psi(x', 0)\end{aligned} \hspace{\stretch{1}}(2.6)

However, note that our free particle wave function at time zero is

\begin{aligned}\psi(x, 0) = \frac{e^{i p x/\hbar}}{\sqrt{2 \pi \hbar}}\end{aligned} \hspace{\stretch{1}}(2.7)

So the convolution integral 2.6 does not exist for this state. We likely have to require that the initial state be not a single momentum eigenstate, but instead a superposition of a continuous set of such states (a wave packet in position or momentum space, related by Fourier transforms). That is

\begin{aligned}\psi(x, 0) &= \frac{1}{{\sqrt{2 \pi \hbar}}} \int \hat{\psi}(p, 0) e^{i p x/\hbar} dp \\ \hat{\psi}(p, 0) &= \frac{1}{{\sqrt{2 \pi \hbar}}} \int \psi(x'', 0) e^{-i p x''/\hbar} dx''\end{aligned} \hspace{\stretch{1}}(2.8)

The time evolution of this wave packet is then determined by the propagator, and is

\begin{aligned}\psi(x,t) =\sqrt{\frac{m}{2 \pi i \hbar t}} \frac{1}{{\sqrt{2 \pi \hbar}}} \int dx' dpe^{i m (x-x')^2/ (2 \hbar t)}\hat{\psi}(p, 0) e^{i p x'/\hbar} ,\end{aligned} \hspace{\stretch{1}}(2.10)

or in terms of the position space wave packet evaluated at time zero

\begin{aligned}\psi(x,t) =\sqrt{\frac{m}{2 \pi i \hbar t}}\frac{1}{{2 \pi}}\int dx' dx'' dke^{i m (x-x')^2/ (2 \hbar t)}e^{i k (x' - x'')} \psi(x'', 0)\end{aligned} \hspace{\stretch{1}}(2.11)

We see that the propagator also ends up with a Fourier transform structure, and we have

\begin{aligned}\psi(x,t) &= \int dx' U(x, x' ; t) \psi(x', 0) \\ U(x, x' ; t) &=\sqrt{\frac{m}{2 \pi i \hbar t}}\frac{1}{{2 \pi}}\int du dke^{i m (x - x' - u)^2/ (2 \hbar t)}e^{i k u }\end{aligned} \hspace{\stretch{1}}(2.12)

Does that Fourier transform exist? I’d not be surprised if it ended up with a delta function representation. I’ll hold off attempting to evaluate and reduce it until another day.
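One check that requires no integration at all: the propagator 2.5 should satisfy the free particle Schrödinger equation in x and t, which is easily verified symbolically (a quick aside):

```python
import sympy as sp

x, xp, t, m, hbar = sp.symbols("x x' t m hbar", positive=True)
G = sp.sqrt(m / (2*sp.pi*sp.I*hbar*t)) \
    * sp.exp(sp.I*m*(x - xp)**2 / (2*hbar*t))
# i hbar dG/dt = -(hbar^2/2m) d^2 G/dx^2
lhs = sp.I * hbar * sp.diff(G, t)
rhs = -hbar**2/(2*m) * sp.diff(G, x, 2)
assert sp.simplify(lhs - rhs) == 0
```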

4. Hydrogen atom.

This problem deals with the hydrogen atom, with an initial ket

\begin{aligned}{\lvert {\psi(0)} \rangle} = \frac{1}{{\sqrt{3}}} {\lvert {100} \rangle}+\frac{1}{{\sqrt{3}}} {\lvert {210} \rangle}+\frac{1}{{\sqrt{3}}} {\lvert {211} \rangle},\end{aligned} \hspace{\stretch{1}}(2.14)

where

\begin{aligned}\left\langle{\mathbf{r}} \vert {{100}}\right\rangle = \Phi_{100}(\mathbf{r}),\end{aligned} \hspace{\stretch{1}}(2.15)

etc.

\paragraph{Q: (a)}

If no measurement is made until time t = t_0,

\begin{aligned}t_0 = \frac{\pi \hbar}{ \frac{3}{4} (13.6 \text{eV}) } = \frac{ 4 \pi \hbar }{ 3 E_I},\end{aligned} \hspace{\stretch{1}}(2.16)

what is the ket {\lvert {\psi(t)} \rangle} just before the measurement is made?

\paragraph{A:}

Our time evolved state is

\begin{aligned}{\lvert {\psi(t_0)} \rangle} = \frac{1}{{\sqrt{3}}} e^{-i E_1 t_0 /\hbar } {\lvert {100} \rangle}+\frac{1}{{\sqrt{3}}} e^{- i E_2 t_0/\hbar } ({\lvert {210} \rangle} + {\lvert {211} \rangle}).\end{aligned} \hspace{\stretch{1}}(2.17)

Also observe that this initial time was picked to make the exponential values come out nicely, and we have

\begin{aligned}\frac{E_n t_0 }{\hbar} &= - \frac{E_I \pi \hbar }{\frac{3}{4} E_I n^2 \hbar} \\ &= - \frac{4 \pi }{ 3 n^2 },\end{aligned}

so our time evolved state is just

\begin{aligned}{\lvert {\psi(t_0)} \rangle} = \frac{1}{{\sqrt{3}}} e^{i 4 \pi / 3} {\lvert {100} \rangle}+\frac{1}{{\sqrt{3}}} e^{ i \pi / 3 } ({\lvert {210} \rangle} + {\lvert {211} \rangle}).\end{aligned} \hspace{\stretch{1}}(2.18)

\paragraph{Q: (b)}

Suppose that at time t_0 an L_z measurement is made, and the outcome 0 is recorded. What is the appropriate ket \psi_{\text{after}}(t_0) right after the measurement?

\paragraph{A:}

A measurement with outcome 0, means that the L_z operator measurement found the state at that point to be the eigenstate for L_z eigenvalue 0. Recall that if {\lvert {\phi} \rangle} is an eigenstate of L_z we have

\begin{aligned}L_z {\lvert {\phi} \rangle} = m \hbar {\lvert {\phi} \rangle},\end{aligned} \hspace{\stretch{1}}(2.19)

so a measurement of L_z with outcome zero means that we have m=0. Our measurement of L_z at time t_0 therefore filters out all but the m=0 states and our new state is proportional to the projection over all m=0 states as follows

\begin{aligned}{\lvert {\psi_{\text{after}}(t_0)} \rangle}&\propto \left( \sum_{n l} {\lvert {n l 0} \rangle}{\langle {n l 0} \rvert} \right) {\lvert {\psi(t_0)} \rangle}  \\ &\propto \left( {\lvert {1 0 0} \rangle}{\langle {1 0 0} \rvert} +{\lvert {2 1 0} \rangle}{\langle {2 1 0} \rvert} \right) {\lvert {\psi(t_0)} \rangle}  \\ &= \frac{1}{{\sqrt{3}}} e^{i 4 \pi / 3} {\lvert {100} \rangle}+\frac{1}{{\sqrt{3}}} e^{ i \pi / 3 } {\lvert {210} \rangle} \end{aligned}

A final normalization yields

\begin{aligned}{\lvert {\psi_{\text{after}}(t_0)} \rangle}= \frac{1}{{\sqrt{2}}} ({\lvert {210} \rangle} - {\lvert {100} \rangle})\end{aligned} \hspace{\stretch{1}}(2.20)

\paragraph{Q: (c)}

Right after this L_z measurement, what is {\left\lvert{\psi_{\text{after}}(t_0)}\right\rvert}^2?

\paragraph{A:}

Our amplitude is

\begin{aligned}\left\langle{\mathbf{r}} \vert {{\psi_{\text{after}}(t_0)}}\right\rangle&= \frac{1}{{\sqrt{2}}} (\left\langle{\mathbf{r}} \vert {{210}}\right\rangle - \left\langle{\mathbf{r}} \vert {{100}}\right\rangle) \\ &= \frac{1}{{\sqrt{2 \pi a_0^3}}}\left(\frac{r}{4\sqrt{2} a_0} e^{-r/2a_0} \cos\theta-e^{-r/a_0}\right) \\ &= \frac{1}{{\sqrt{2 \pi a_0^3}}}e^{-r/2 a_0} \left(\frac{r}{4\sqrt{2} a_0} \cos\theta-e^{-r/2 a_0}\right),\end{aligned}

so the probability density is

\begin{aligned}{\left\lvert{\left\langle{\mathbf{r}} \vert {{\psi_{\text{after}}(t_0)}}\right\rangle}\right\rvert}^2= \frac{1}{{2 \pi a_0^3}}e^{-r/a_0} \left(\frac{r}{4\sqrt{2} a_0} \cos\theta-e^{-r/2 a_0}\right)^2 \end{aligned} \hspace{\stretch{1}}(2.21)
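As a sanity check, this post-measurement wavefunction should still be normalized to unity, which can be verified symbolically (an aside; a_0 is left symbolic):

```python
import sympy as sp

r, th, ph, a = sp.symbols('r theta phi a_0', positive=True)
# (1/sqrt(2))(psi_210 - psi_100), as in the amplitude above
psi = (1/sp.sqrt(2*sp.pi*a**3)) * sp.exp(-r/(2*a)) \
      * (r*sp.cos(th)/(4*sp.sqrt(2)*a) - sp.exp(-r/(2*a)))
norm = sp.integrate(psi**2 * r**2 * sp.sin(th),
                    (ph, 0, 2*sp.pi), (th, 0, sp.pi), (r, 0, sp.oo))
assert sp.simplify(norm - 1) == 0
```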

\paragraph{Q: (d)}

If then a position measurement is made immediately, which if any components of the expectation value of \mathbf{R} will be nonvanishing? Justify your answer.

\paragraph{A:}

The expectation value of this vector valued operator with respect to a radial state {\lvert {\psi} \rangle} = \sum_{nlm} a_{nlm} {\lvert {nlm} \rangle} can be expressed as

\begin{aligned}\left\langle{\mathbf{R}}\right\rangle = \sum_{i=1}^3 \mathbf{e}_i \sum_{nlm, n'l'm'} a_{nlm}^{*} a_{n'l'm'} {\langle {nlm} \rvert} X_i{\lvert {n'l'm'} \rangle},\end{aligned} \hspace{\stretch{1}}(2.22)

where X_1 = X = R \sin\Theta \cos\Phi, X_2 = Y = R \sin\Theta \sin\Phi, X_3 = Z = R \cos\Theta.

Consider one of the matrix elements, and expand this by introducing an identity twice

\begin{aligned}{\langle {nlm} \rvert} X_i {\lvert {n'l'm'} \rangle}&=\int r^2 \sin\theta dr d\theta d\phi \, {r'}^2 \sin\theta' dr' d\theta' d\phi'\left\langle{{nlm}} \vert {{r \theta \phi}}\right\rangle {\langle {r \theta \phi} \rvert} X_i {\lvert {r' \theta' \phi' } \rangle}\left\langle{{r' \theta' \phi'}} \vert {{n'l'm'}}\right\rangle \\ &=\int r^2 \sin\theta dr d\theta d\phi \, {r'}^2 \sin\theta' dr' d\theta' d\phi'R_{nl}(r) Y_{lm}^{*}(\theta,\phi)\delta^3(\mathbf{x} - \mathbf{x}') x_iR_{n'l'}(r') Y_{l'm'}(\theta',\phi')\\ &=\int r^2 \sin\theta dr d\theta d\phi \, {r'}^2 \sin\theta' dr' d\theta' d\phi'R_{nl}(r) Y_{lm}^{*}(\theta,\phi) \\ &\qquad \frac{1}{{r'}^2 \sin\theta'} \delta(r-r') \delta(\theta - \theta') \delta(\phi-\phi')x_iR_{n'l'}(r') Y_{l'm'}(\theta',\phi')\\ &=\int r^2 \sin\theta dr d\theta d\phi \, dr' d\theta' d\phi'R_{nl}(r) Y_{lm}^{*}(\theta,\phi) \delta(r-r') \delta(\theta - \theta') \delta(\phi-\phi')x_iR_{n'l'}(r') Y_{l'm'}(\theta',\phi')\\ &=\int r^2 \sin\theta dr d\theta d\phi \, R_{nl}(r) R_{n'l'}(r) Y_{lm}^{*}(\theta,\phi) Y_{l'm'}(\theta,\phi)x_i\\ \end{aligned}

Because our state has only m=0 contributions, the only \phi dependence in the X and Y components of \mathbf{R} comes from those components themselves. For X we integrate \int_0^{2\pi} \cos\phi d\phi = 0, and for Y we integrate \int_0^{2\pi} \sin\phi d\phi = 0, so these terms vanish. The expectation value of \mathbf{R} for this state therefore lies entirely along the z axis.
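This selection rule is easy to confirm symbolically. A quick sympy sketch (not part of the original exam solution) of the azimuthal integrals that control each component:

```python
import sympy as sp

phi = sp.symbols('phi', real=True)

# For m = m' = 0 the spherical harmonics carry no phi dependence, so the
# only phi factors in the X and Y matrix elements are cos(phi) and sin(phi):
Ix = sp.integrate(sp.cos(phi), (phi, 0, 2 * sp.pi))  # X component
Iy = sp.integrate(sp.sin(phi), (phi, 0, 2 * sp.pi))  # Y component
Iz = sp.integrate(1, (phi, 0, 2 * sp.pi))            # Z component has no phi factor
```

Only the Z integral survives, which is exactly the statement that the expectation value lies on the z axis.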

Questions from the Dec 2008 PHY355H1F exam.

1b. Trace invariance for unitary transformation.

\paragraph{Q:} Show that the trace of an operator is invariant under unitary transforms, i.e. if A' = U^\dagger A U, where U is a unitary operator, prove \text{Tr}(A') = \text{Tr}(A).

\paragraph{A:}

The bulk of this question is really to show that the trace is invariant under cyclic permutation of a product of operators (unless this property is assumed). To show that we start with the definition of the trace

\begin{aligned}\text{Tr}(AB) &= \sum_n {\langle {n} \rvert} A B {\lvert {n} \rangle} \\ &= \sum_{n m} {\langle {n} \rvert} A {\lvert {m} \rangle} {\langle {m} \rvert} B {\lvert {n} \rangle} \\ &= \sum_{n m} {\langle {m} \rvert} B {\lvert {n} \rangle} {\langle {n} \rvert} A {\lvert {m} \rangle} \\ &= \sum_{m} {\langle {m} \rvert} B A {\lvert {m} \rangle}.\end{aligned}

Thus we have

\begin{aligned}\text{Tr}(A B) = \text{Tr}( B A ).\end{aligned} \hspace{\stretch{1}}(3.23)

For the unitarily transformed operator we have

\begin{aligned}\text{Tr}(A') &= \text{Tr}( U^\dagger A U ) \\ &= \text{Tr}( U^\dagger (A U) ) \\ &= \text{Tr}( (A U) U^\dagger ) \\ &= \text{Tr}( A (U U^\dagger) ) \\ &= \text{Tr}( A ) \qquad \square\end{aligned}
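Both the cyclic property and the unitary invariance are easy to spot check numerically. A numpy sketch (random matrices and seed are arbitrary choices, not from the exam):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# cyclic property: Tr(AB) = Tr(BA)
assert np.isclose(np.trace(A @ B), np.trace(B @ A))

# a unitary U from the QR decomposition of a random complex matrix
U, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
assert np.allclose(U.conj().T @ U, np.eye(n))

# Tr(U^dagger A U) = Tr(A)
assert np.isclose(np.trace(U.conj().T @ A @ U), np.trace(A))
```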

1d. Determinant of an exponential operator in terms of trace.

\paragraph{Q:} If A is an Hermitian operator, show that

\begin{aligned}\text{Det}( \exp A ) = \exp ( \text{Tr}(A) )\end{aligned} \hspace{\stretch{1}}(3.24)

where the Determinant (\text{Det}) of an operator is the product of all its eigenvalues.

\paragraph{A:}

The eigenvalue definition given in the question provides the starting point. We write the exponential in its series form

\begin{aligned}e^A = 1 + \sum_{k=1}^\infty \frac{1}{{k!}} A^k\end{aligned} \hspace{\stretch{1}}(3.25)

Now, suppose that we have the following eigenvalue relationships for A

\begin{aligned}A {\lvert {n} \rangle} = \lambda_n {\lvert {n} \rangle}.\end{aligned} \hspace{\stretch{1}}(3.26)

From this the exponential is

\begin{aligned}e^A {\lvert {n} \rangle} &= {\lvert {n} \rangle} + \sum_{k=1}^\infty \frac{1}{{k!}} A^k {\lvert {n} \rangle} \\ &= {\lvert {n} \rangle} + \sum_{k=1}^\infty \frac{1}{{k!}} (\lambda_n)^k {\lvert {n} \rangle} \\ &= e^{\lambda_n} {\lvert {n} \rangle}.\end{aligned}

We see that the eigenstates of e^A are those of A, with eigenvalues e^{\lambda_n}.

By the definition of the determinant given we have

\begin{aligned}\text{Det}( e^A ) &= \prod_n e^{\lambda_n} \\ &= e^{\sum_n \lambda_n} \\ &= e^{\text{Tr}(A)} \qquad \square\end{aligned}
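The identity is easy to confirm numerically for a random Hermitian matrix. In this numpy sketch the matrix exponential is built directly from the spectral decomposition, mirroring the derivation above:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (M + M.conj().T) / 2                     # Hermitian by construction

w, V = np.linalg.eigh(A)                     # real eigenvalues, unitary V
expA = V @ np.diag(np.exp(w)) @ V.conj().T   # e^A via the spectral decomposition

det_expA = np.linalg.det(expA)
assert np.isclose(det_expA, np.exp(np.trace(A)))
```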

1e. Eigenvectors of the Harmonic oscillator creation operator.

\paragraph{Q:} Prove that the only eigenvector of the Harmonic oscillator creation operator is {\lvert {\text{null}} \rangle}.

\paragraph{A:}

Recall that the creation (raising) operator was given by

\begin{aligned}a^\dagger = \sqrt{\frac{m \omega}{2 \hbar}} X - \frac{ i }{\sqrt{2 m \omega \hbar} } P= \frac{1}{{ \alpha \sqrt{2} }} X - \frac{ i \alpha }{\sqrt{2} \hbar } P,\end{aligned} \hspace{\stretch{1}}(3.27)

where \alpha = \sqrt{\hbar/m \omega}. Now assume that a^\dagger {\lvert {\phi} \rangle} = \lambda {\lvert {\phi} \rangle} so that

\begin{aligned}{\langle {x} \rvert} a^\dagger {\lvert {\phi} \rangle} = {\langle {x} \rvert} \lambda {\lvert {\phi} \rangle}.\end{aligned} \hspace{\stretch{1}}(3.28)

Write \left\langle{{x}} \vert {{\phi}}\right\rangle = \phi(x), and expand the LHS using 3.27 to find

\begin{aligned}\lambda \phi(x) &= {\langle {x} \rvert} a^\dagger {\lvert {\phi} \rangle}  \\ &= {\langle {x} \rvert} \left( \frac{1}{{ \alpha \sqrt{2} }} X - \frac{ i \alpha }{\sqrt{2} \hbar } P \right) {\lvert {\phi} \rangle} \\ &= \frac{x \phi(x)}{ \alpha \sqrt{2} } - \frac{ i \alpha }{\sqrt{2} \hbar } (-i\hbar)\frac{\partial {}}{\partial {x}} \phi(x) \\ &= \frac{x \phi(x)}{ \alpha \sqrt{2} } - \frac{ \alpha }{\sqrt{2} } \frac{\partial {\phi(x)}}{\partial {x}}.\end{aligned}

As usual write \xi = x/\alpha, and rearrange. This gives us

\begin{aligned}\frac{\partial {\phi}}{\partial {\xi}} +\sqrt{2} \lambda \phi - \xi \phi = 0.\end{aligned} \hspace{\stretch{1}}(3.29)

Observe that this can be viewed as a homogeneous LDE of the form

\begin{aligned}\frac{\partial {\phi}}{\partial {\xi}} - \xi \phi = 0,\end{aligned} \hspace{\stretch{1}}(3.30)

augmented by a forcing term \sqrt{2}\lambda \phi. The homogeneous equation has the solution \phi = A e^{\xi^2/2}, so for the complete equation we assume a solution

\begin{aligned}\phi(\xi) = A(\xi) e^{\xi^2/2}.\end{aligned} \hspace{\stretch{1}}(3.31)

Since \phi' = (A' + A \xi) e^{\xi^2/2}, we produce a LDE of

\begin{aligned}0 &= (A' + A \xi -\xi A + \sqrt{2} \lambda A ) e^{\xi^2/2} \\ &= (A' + \sqrt{2} \lambda A ) e^{\xi^2/2},\end{aligned}

or

\begin{aligned}0 = A' + \sqrt{2} \lambda A.\end{aligned} \hspace{\stretch{1}}(3.32)

This has solution A = B e^{-\sqrt{2} \lambda \xi}, so our solution for 3.29 is

\begin{aligned}\phi(\xi) = B e^{\xi^2/2 - \sqrt{2} \lambda \xi} = B' e^{ (\xi - \lambda \sqrt{2} )^2/2}.\end{aligned} \hspace{\stretch{1}}(3.33)

This wave function is an inverted Gaussian, with a minimum at \xi = \lambda\sqrt{2} and unbounded growth for large {\left\lvert{\xi}\right\rvert}. It is not normalizable, since \int {\left\lvert{\phi}\right\rvert}^2 < \infty requires B' = 0 for any \lambda. Since \left\langle{{\xi}} \vert {{\phi}}\right\rangle = \phi(\xi) = 0, we must also have {\lvert {\phi} \rangle} = 0, completing the exercise.
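The candidate eigenfunction can be checked against the ODE symbolically. A sympy sketch (with B = 1):

```python
import sympy as sp

xi, lam = sp.symbols('xi lam', real=True)

# claimed solution of phi' + sqrt(2) lam phi - xi phi = 0, with B = 1
phi = sp.exp(xi**2 / 2 - sp.sqrt(2) * lam * xi)

residual = sp.simplify(phi.diff(xi) + sp.sqrt(2) * lam * phi - xi * phi)
# residual == 0 confirms the solution; the exp(+xi^2/2) growth is what
# makes it unnormalizable for every lam.
```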

2. Two level quantum system.

Consider a two-level quantum system, with basis states \{{\lvert {a} \rangle}, {\lvert {b} \rangle}\}. Suppose that the Hamiltonian for this system is given by

\begin{aligned}H = \frac{\hbar \Delta}{2} ( {\lvert {b} \rangle}{\langle {b} \rvert}- {\lvert {a} \rangle}{\langle {a} \rvert})+ i \frac{\hbar \Omega}{2} ( {\lvert {a} \rangle}{\langle {b} \rvert}- {\lvert {b} \rangle}{\langle {a} \rvert})\end{aligned} \hspace{\stretch{1}}(3.34)

where \Delta and \Omega are real positive constants.

\paragraph{Q: (a)} Find the energy eigenvalues and the normalized energy eigenvectors (expressed in terms of the \{{\lvert {a} \rangle}, {\lvert {b} \rangle}\} basis). Write the time evolution operator U(t) = e^{-i H t/\hbar} using these eigenvectors.

\paragraph{A:}

The eigenvalue part of this problem is probably easier to do in matrix form. Let

\begin{aligned}{\lvert {a} \rangle} &= \begin{bmatrix}1 \\ 0\end{bmatrix} \\ {\lvert {b} \rangle} &= \begin{bmatrix}0 \\ 1\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.35)

Our Hamiltonian is then

\begin{aligned}H = \frac{\hbar}{2} \begin{bmatrix}-\Delta & i \Omega \\ -i \Omega & \Delta\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.37)

Computing \det( H - \lambda I ) = 0, we get

\begin{aligned}\lambda = \pm \frac{\hbar}{2} \sqrt{ \Delta^2 + \Omega^2 }.\end{aligned} \hspace{\stretch{1}}(3.38)

Let \delta = \sqrt{ \Delta^2 + \Omega^2 }. Our normalized eigenvectors are found to be

\begin{aligned}{\lvert {\pm} \rangle} = \frac{1}{{\sqrt{ 2 \delta (\delta \pm \Delta)} }}\begin{bmatrix}i \Omega \\ \Delta \pm \delta\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.39)

In terms of {\lvert {a} \rangle} and {\lvert {b} \rangle}, we then have

\begin{aligned}{\lvert {\pm} \rangle} = \frac{1}{{\sqrt{ 2 \delta (\delta \pm \Delta)} }}\left(i \Omega {\lvert {a} \rangle}+ (\Delta \pm \delta) {\lvert {b} \rangle} \right).\end{aligned} \hspace{\stretch{1}}(3.40)
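These eigenvalues and eigenvectors are simple to verify numerically. A numpy sketch with sample values \Delta = 0.6, \Omega = 0.8 and \hbar = 1 (so that \delta = 1):

```python
import numpy as np

Delta, Omega = 0.6, 0.8
delta = np.hypot(Delta, Omega)               # sqrt(Delta^2 + Omega^2)

H = 0.5 * np.array([[-Delta, 1j * Omega],    # hbar = 1
                    [-1j * Omega, Delta]])

w, _ = np.linalg.eigh(H)
assert np.allclose(np.sort(w), [-delta / 2, delta / 2])

# the claimed normalized eigenvector for eigenvalue +delta/2
plus = np.array([1j * Omega, Delta + delta]) / np.sqrt(2 * delta * (delta + Delta))
assert np.isclose(np.linalg.norm(plus), 1.0)
assert np.allclose(H @ plus, (delta / 2) * plus)
```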

Note that our Hamiltonian has a simple form in this basis. That is

\begin{aligned}H = \frac{\delta \hbar}{2} ({\lvert {+} \rangle}{\langle {+} \rvert} - {\lvert {-} \rangle}{\langle {-} \rvert} )\end{aligned} \hspace{\stretch{1}}(3.41)

Observe that once we do the diagonalization, we have a Hamiltonian that appears to have the form of a scaled projector for an open Stern-Gerlach apparatus.

Observe that the diagonalized Hamiltonian also gives the time evolution operator a simple form, which is, by inspection

\begin{aligned}U(t) = e^{-i t \frac{\delta}{2}} {\lvert {+} \rangle}{\langle {+} \rvert} + e^{i t \frac{\delta}{2}} {\lvert {-} \rangle}{\langle {-} \rvert}.\end{aligned} \hspace{\stretch{1}}(3.42)

Since we are asked for this in terms of {\lvert {a} \rangle}, and {\lvert {b} \rangle}, the projectors {\lvert {\pm} \rangle}{\langle {\pm} \rvert} are required. These are

\begin{aligned}{\lvert {\pm} \rangle}{\langle {\pm} \rvert} &= \frac{1}{{2 \delta (\delta \pm \Delta)}}\Bigl( i \Omega {\lvert {a} \rangle} + (\Delta \pm \delta) {\lvert {b} \rangle} \Bigr)\Bigl( -i \Omega {\langle {a} \rvert} + (\Delta \pm \delta) {\langle {b} \rvert} \Bigr) \\ \end{aligned}

\begin{aligned}{\lvert {\pm} \rangle}{\langle {\pm} \rvert} = \frac{1}{{2 \delta (\delta \pm \Delta)}}\Bigl(\Omega^2 {\lvert {a} \rangle}{\langle {a} \rvert}+(\Delta \pm \delta)^2 {\lvert {b} \rangle}{\langle {b} \rvert}+i \Omega (\Delta \pm \delta) ({\lvert {a} \rangle}{\langle {b} \rvert}-{\lvert {b} \rangle}{\langle {a} \rvert})\Bigr)\end{aligned} \hspace{\stretch{1}}(3.43)

Substitution into 3.42 and a fair amount of algebra leads to

\begin{aligned}U(t) = \cos(\delta t/2) \Bigl( {\lvert {a} \rangle}{\langle {a} \rvert} + {\lvert {b} \rangle}{\langle {b} \rvert} \Bigr)+ i \frac{\Delta}{\delta} \sin(\delta t/2) \Bigl( {\lvert {a} \rangle}{\langle {a} \rvert} - {\lvert {b} \rangle}{\langle {b} \rvert} \Bigr)+ \frac{\Omega}{\delta} \sin(\delta t/2) \Bigl( {\lvert {a} \rangle}{\langle {b} \rvert} - {\lvert {b} \rangle}{\langle {a} \rvert} \Bigr).\end{aligned} \hspace{\stretch{1}}(3.44)
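As a numerical cross check of 3.42 (a numpy sketch with sample values, \hbar = 1), exponentiating H directly produces \cos(\delta t/2) on the identity part, i (\Delta/\delta) \sin(\delta t/2) on the {\lvert {a} \rangle}{\langle {a} \rvert} - {\lvert {b} \rangle}{\langle {b} \rvert} part, and (\Omega/\delta) \sin(\delta t/2) on the {\lvert {a} \rangle}{\langle {b} \rvert} - {\lvert {b} \rangle}{\langle {a} \rvert} part:

```python
import numpy as np

Delta, Omega, t = 0.6, 0.8, 1.3
delta = np.hypot(Delta, Omega)

H = 0.5 * np.array([[-Delta, 1j * Omega],
                    [-1j * Omega, Delta]])              # hbar = 1

w, V = np.linalg.eigh(H)
U_exact = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T  # e^{-iHt}

c, s = np.cos(delta * t / 2), np.sin(delta * t / 2)
U_closed = np.array([[c + 1j * (Delta / delta) * s, (Omega / delta) * s],
                     [-(Omega / delta) * s, c - 1j * (Delta / delta) * s]])
assert np.allclose(U_exact, U_closed)
```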

Note that, while a bit cumbersome, we can also verify that we recover the original Hamiltonian from 3.41 and 3.43.

\paragraph{Q: (b)}

Suppose that the initial state of the system at time t = 0 is {\lvert {\phi(0)} \rangle}= {\lvert {b} \rangle}. Find an expression for the state at some later time t > 0, {\lvert {\phi(t)} \rangle}.

\paragraph{A:}

Most of the work is already done. Computation of {\lvert {\phi(t)} \rangle} = U(t) {\lvert {\phi(0)} \rangle} follows from 3.44

\begin{aligned}{\lvert {\phi(t)} \rangle} =\left( \cos(\delta t/2) - i \frac{\Delta}{\delta} \sin(\delta t/2) \right) {\lvert {b} \rangle}+ \frac{\Omega}{\delta} \sin(\delta t/2) {\lvert {a} \rangle}.\end{aligned} \hspace{\stretch{1}}(3.45)

\paragraph{Q: (c)}

Suppose that an observable, specified by the operator X = {\lvert {a} \rangle}{\langle {b} \rvert} + {\lvert {b} \rangle}{\langle {a} \rvert}, is measured for this system. What is the probability that, at time t, the result 1 is obtained? Plot this probability as a function of time, showing the maximum and minimum values of the function, and the corresponding values of t.

\paragraph{A:}

The language of questions like these attempts to bring some physics into the mathematics. The phrase “the result 1 is obtained” is really a statement that after the measurement the system is found in the eigenstate of X with eigenvalue 1.

We can calculate the eigenvalues for this operator easily enough and find them to be \pm 1. For the positive eigenvalue we can also compute the eigenstate to be

\begin{aligned}{\lvert {X+} \rangle} = \frac{1}{{\sqrt{2}}} \Bigl( {\lvert {a} \rangle} + {\lvert {b} \rangle} \Bigr).\end{aligned} \hspace{\stretch{1}}(3.46)

The probability for this measurement outcome is then the computation of

\begin{aligned}{\left\lvert{\frac{1}{{\sqrt{2}}}\left\langle{{ (a + b)}} \vert {{\phi(t)}}\right\rangle}\right\rvert}^2\end{aligned} \hspace{\stretch{1}}(3.47)

From 3.45 we find this probability to be

\begin{aligned}{\left\lvert{\frac{1}{{\sqrt{2}}}\left\langle{{ (a + b)}} \vert {{\phi(t)}}\right\rangle}\right\rvert}^2&=\frac{1}{{2}} \left(\left(\cos(\delta t/2) + \frac{\Omega}{\delta} \sin(\delta t/2)\right)^2+ \frac{ \Delta^2 \sin^2(\delta t/2)}{\delta^2}\right) \\ &=\frac{1}{{2}} \left( 1 + \frac{\Omega}{\delta} \sin (\delta t) \right)\end{aligned}

This is a sinusoidal oscillation about 1/2 with period 2 \pi/\delta, with maximum (1 + \Omega/\delta)/2 at \delta t = \pi/2 + 2 \pi n, and minimum (1 - \Omega/\delta)/2 at \delta t = 3\pi/2 + 2 \pi n. I’d attempted a rough sketch of this on paper, but won’t bother scanning it here or describing it further.
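The same result can be obtained by evolving {\lvert {b} \rangle} numerically and projecting onto ({\lvert {a} \rangle} + {\lvert {b} \rangle})/\sqrt{2}. A numpy sketch (sample values \Delta = 0.6, \Omega = 0.8, \hbar = 1) shows the probability tracing out (1 + (\Omega/\delta) \sin \delta t)/2:

```python
import numpy as np

Delta, Omega = 0.6, 0.8
delta = np.hypot(Delta, Omega)
H = 0.5 * np.array([[-Delta, 1j * Omega],
                    [-1j * Omega, Delta]])             # hbar = 1
w, V = np.linalg.eigh(H)

b = np.array([0.0, 1.0])                               # initial state |b>
xplus = np.array([1.0, 1.0]) / np.sqrt(2)              # eigenstate of X, eigenvalue +1

for t in np.linspace(0.0, 2 * np.pi / delta, 9):
    U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T  # e^{-iHt}
    P = abs(xplus.conj() @ (U @ b)) ** 2
    assert np.isclose(P, 0.5 * (1 + (Omega / delta) * np.sin(delta * t)))
```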

\paragraph{Q: (d)}

Suppose an experimenter has control over the values of the parameters \Delta and \Omega. Explain how she might prepare the state ({\lvert {a} \rangle} + {\lvert {b} \rangle})/\sqrt{2}.

\paragraph{A:}

For this part of the question I wasn’t sure what approach to take. I thought perhaps this linear combination of states could be made to equal one of the energy eigenstates, and if one could prepare the system in that state, then for certain values of \Omega and \Delta one would then have this desired state.

To get there I note that we can express the states {\lvert {a} \rangle}, and {\lvert {b} \rangle} in terms of the eigenstates by inverting

\begin{aligned}\begin{bmatrix}{\lvert {+} \rangle} \\ {\lvert {-} \rangle} \\ \end{bmatrix}=\frac{1}{{\sqrt{2\delta}}}\begin{bmatrix}\frac{i \Omega}{\sqrt{\delta + \Delta}} & \sqrt{\delta + \Delta} \\ \frac{i \Omega}{\sqrt{\delta - \Delta}} & -\sqrt{\delta - \Delta}\end{bmatrix}\begin{bmatrix}{\lvert {a} \rangle} \\ {\lvert {b} \rangle} \\ \end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.48)

Skipping all the algebra one finds

\begin{aligned}\begin{bmatrix}{\lvert {a} \rangle} \\ {\lvert {b} \rangle} \\ \end{bmatrix}=\frac{1}{{\sqrt{2\delta}}}\begin{bmatrix}-i\sqrt{\delta - \Delta} & -i\sqrt{\delta + \Delta} \\ \frac{\Omega}{\sqrt{\delta - \Delta}} &-\frac{\Omega}{\sqrt{\delta + \Delta}} \end{bmatrix}\begin{bmatrix}{\lvert {+} \rangle} \\ {\lvert {-} \rangle} \\ \end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.49)

Unfortunately, this doesn’t seem helpful. I find

\begin{aligned}\frac{1}{{\sqrt{2}}} ( {\lvert {a} \rangle} + {\lvert {b} \rangle} ) = \frac{1}{{2\sqrt{\delta}}}\left(\frac{{\lvert {+} \rangle}}{\sqrt{\delta - \Delta}}( \Omega - i (\delta - \Delta) )-\frac{{\lvert {-} \rangle}}{\sqrt{\delta + \Delta}}( \Omega + i (\delta + \Delta) )\right)\end{aligned} \hspace{\stretch{1}}(3.50)

There’s no obvious way to pick \Omega and \Delta to leave just {\lvert {+} \rangle} or {\lvert {-} \rangle}. When I did this on paper originally I got a different answer for this sum, but looking at it now, I can’t see how I managed to get that answer (it had no factors of i in the result as the one above does).

3. One dimensional harmonic oscillator.

Consider a one-dimensional harmonic oscillator with the Hamiltonian

\begin{aligned}H = \frac{1}{{2m}}P^2 + \frac{1}{{2}} m \omega^2 X^2\end{aligned} \hspace{\stretch{1}}(3.51)

Denote the ground state of the system by {\lvert {0} \rangle}, the first excited state by {\lvert {1} \rangle} and so on.

\paragraph{Q: (a)}
Evaluate {\langle {n} \rvert} X {\lvert {n} \rangle} and {\langle {n} \rvert} X^2 {\lvert {n} \rangle} for arbitrary {\lvert {n} \rangle}.

\paragraph{A:}

Writing X in terms of the raising and lowering operators we have

\begin{aligned}X = \frac{\alpha}{\sqrt{2}} (a^\dagger + a),\end{aligned} \hspace{\stretch{1}}(3.52)

so \left\langle{{X}}\right\rangle is proportional to

\begin{aligned}{\langle {n} \rvert} a^\dagger + a {\lvert {n} \rangle} = \sqrt{n+1} \left\langle{{n}} \vert {{n+1}}\right\rangle + \sqrt{n} \left\langle{{n}} \vert {{n-1}}\right\rangle = 0.\end{aligned} \hspace{\stretch{1}}(3.53)

For \left\langle{{X^2}}\right\rangle we have

\begin{aligned}\left\langle{{X^2}}\right\rangle&=\frac{\alpha^2}{2}{\langle {n} \rvert} (a^\dagger + a)(a^\dagger + a) {\lvert {n} \rangle} \\ &=\frac{\alpha^2}{2}{\langle {n} \rvert} (a^\dagger + a) \left( \sqrt{n+1} {\lvert {n+1} \rangle} + \sqrt{n} {\lvert {n-1} \rangle}\right)  \\ &=\frac{\alpha^2}{2}{\langle {n} \rvert} \Bigl( (n+1) {\lvert {n} \rangle} + \sqrt{n(n-1)} {\lvert {n-2} \rangle}+ \sqrt{(n+1)(n+2)} {\lvert {n+2} \rangle} + n {\lvert {n} \rangle} \Bigr).\end{aligned}

We are left with just

\begin{aligned}\left\langle{{X^2}}\right\rangle = \frac{\hbar}{2 m \omega} (2n + 1).\end{aligned} \hspace{\stretch{1}}(3.54)
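Both results can be spot checked with truncated matrix representations of the ladder operators. A numpy sketch with \alpha = 1; only low n values are tested, since the last rows of a truncated matrix are unreliable:

```python
import numpy as np

N = 12
a = np.diag(np.sqrt(np.arange(1, N)), k=1)    # annihilation operator, truncated
ad = a.T                                      # creation operator (real matrix)

alpha = 1.0                                   # alpha = sqrt(hbar / (m omega))
X = (alpha / np.sqrt(2)) * (ad + a)
X2 = X @ X

for n in range(N - 2):                        # stay away from the truncation edge
    assert np.isclose(X[n, n], 0.0)                            # <n|X|n> = 0
    assert np.isclose(X2[n, n], (alpha**2 / 2) * (2 * n + 1))  # <n|X^2|n>
```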

\paragraph{Q: (b)}

Suppose that at t=0 the system is prepared in the state

\begin{aligned}{\lvert {\psi(0)} \rangle} = \frac{1}{{\sqrt{2}}} ( {\lvert {0} \rangle} + i {\lvert {1} \rangle} ).\end{aligned} \hspace{\stretch{1}}(3.55)

If a measurement of position X were performed immediately, sketch the probability distribution P(x) that a particle would be found within dx of x. Justify how you construct the sketch.

\paragraph{A:}

The probability that we started in state {\lvert {\psi(0)} \rangle} and ended up in position x is governed by the amplitude \left\langle{{x}} \vert {{\psi(0)}}\right\rangle, and the probability of being within an interval \Delta x, surrounding the point x is given by

\begin{aligned}\int_{x'=x-\Delta x/2}^{x+\Delta x/2} {\left\lvert{ \left\langle{{x'}} \vert {{\psi(0)}}\right\rangle }\right\rvert}^2 dx'.\end{aligned} \hspace{\stretch{1}}(3.56)

In the limit as \Delta x \rightarrow 0, this is just the squared amplitude itself evaluated at the point x, so we are interested in the quantity

\begin{aligned}{\left\lvert{ \left\langle{{x}} \vert {{\psi(0)}}\right\rangle }\right\rvert}^2  = \frac{1}{{2}} {\left\lvert{ \left\langle{{x}} \vert {{0}}\right\rangle + i \left\langle{{x}} \vert {{1}}\right\rangle }\right\rvert}^2.\end{aligned} \hspace{\stretch{1}}(3.57)

We are given these wave functions in the supplemental formulas. Namely,

\begin{aligned}\left\langle{{x}} \vert {{0}}\right\rangle &= \psi_0(x) = \frac{e^{-x^2/2\alpha^2}}{ \sqrt{\alpha \sqrt{\pi}}} \\ \left\langle{{x}} \vert {{1}}\right\rangle &= \psi_1(x) = \frac{e^{-x^2/2\alpha^2} 2 x }{ \alpha \sqrt{2 \alpha \sqrt{\pi}}}.\end{aligned} \hspace{\stretch{1}}(3.58)

Substituting these into 3.57 we have

\begin{aligned}{\left\lvert{ \left\langle{{x}} \vert {{\psi(0)}}\right\rangle }\right\rvert}^2 = \frac{1}{{2}} e^{-x^2/\alpha^2}\frac{1}{{ \alpha \sqrt{\pi}}}{\left\lvert{ 1 + \frac{2 i x}{\alpha \sqrt{2} } }\right\rvert}^2=\frac{e^{-x^2/\alpha^2}}{ 2\alpha \sqrt{\pi}}\left( 1 + \frac{2 x^2}{\alpha^2 } \right).\end{aligned} \hspace{\stretch{1}}(3.60)

This distribution is symmetric, with a local minimum at x = 0 and maxima at x = \pm \alpha/\sqrt{2}.
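As a consistency check, this density integrates to unity. A sympy sketch:

```python
import sympy as sp

x = sp.symbols('x', real=True)
alpha = sp.symbols('alpha', positive=True)

# probability density from 3.60
P = sp.exp(-x**2 / alpha**2) * (1 + 2 * x**2 / alpha**2) / (2 * alpha * sp.sqrt(sp.pi))
total = sp.simplify(sp.integrate(P, (x, -sp.oo, sp.oo)))
```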

\paragraph{Q: (c)}

Now suppose the state given in (b) above were allowed to evolve for a time t, determine the expectation value of X and \Delta X at that time.

\paragraph{A:}

Our time evolved state is

\begin{aligned}U(t) {\lvert {\psi(0)} \rangle} = \frac{1}{{\sqrt{2}}}\left(e^{-i \hbar \omega \left( 0 + \frac{1}{{2}} \right) t/\hbar } {\lvert {0} \rangle}+ i e^{-i \hbar \omega \left( 1 + \frac{1}{{2}} \right) t/\hbar } {\lvert {1} \rangle}\right)=\frac{1}{{\sqrt{2}}}\left(e^{-i \omega t/2 } {\lvert {0} \rangle}+ i e^{- 3 i \omega t/2 } {\lvert {1} \rangle}\right).\end{aligned} \hspace{\stretch{1}}(3.61)

The position expectation is therefore

\begin{aligned}{\langle {\psi(t)} \rvert} X {\lvert {\psi(t)} \rangle}&= \frac{\alpha}{2 \sqrt{2}}\left(e^{i \omega t/2 } {\langle {0} \rvert}- i e^{ 3 i \omega t/2 } {\langle {1} \rvert}\right)(a^\dagger + a)\left(e^{-i \omega t/2 } {\lvert {0} \rangle}+ i e^{- 3 i \omega t/2 } {\lvert {1} \rangle}\right) \\ \end{aligned}

We have already demonstrated that {\langle {n} \rvert} X {\lvert {n} \rangle} = 0, so we must only expand the cross terms, but those are just {\langle {0} \rvert} a^\dagger + a {\lvert {1} \rangle} = 1. This leaves

\begin{aligned}{\langle {\psi(t)} \rvert} X {\lvert {\psi(t)} \rangle}= \frac{\alpha}{2 \sqrt{2}}\left( -i e^{i \omega t} + i e^{-i \omega t} \right)=\sqrt{\frac{\hbar}{2 m \omega}} \sin(\omega t)\end{aligned} \hspace{\stretch{1}}(3.62)

For the squared position expectation

\begin{aligned}{\langle {\psi(t)} \rvert} X^2 {\lvert {\psi(t)} \rangle}&= \frac{\alpha^2}{4}\left(e^{i \omega t/2 } {\langle {0} \rvert}- i e^{ 3 i \omega t/2 } {\langle {1} \rvert}\right)(a^\dagger + a)^2\left(e^{-i \omega t/2 } {\lvert {0} \rangle}+ i e^{- 3 i \omega t/2 } {\lvert {1} \rangle}\right) \\ &=\frac{1}{{2}} ( {\langle {0} \rvert} X^2 {\lvert {0} \rangle} + {\langle {1} \rvert} X^2 {\lvert {1} \rangle} )+ i \frac{\alpha^2 }{4} ( - e^{ i \omega t} {\langle {1} \rvert} (a^\dagger + a)^2 {\lvert {0} \rangle}+ e^{ -i \omega t} {\langle {0} \rvert} (a^\dagger + a)^2 {\lvert {1} \rangle})\end{aligned}

Noting that (a^\dagger + a) {\lvert {0} \rangle} = {\lvert {1} \rangle}, and (a^\dagger + a)^2 {\lvert {0} \rangle} = (a^\dagger + a){\lvert {1} \rangle} = \sqrt{2} {\lvert {2} \rangle} + {\lvert {0} \rangle}, so we see the last two terms are zero. The first two we can evaluate using our previous result 3.54 which was \left\langle{{X^2}}\right\rangle = \frac{\alpha^2}{2} (2n + 1). This leaves

\begin{aligned}{\langle {\psi(t)} \rvert} X^2 {\lvert {\psi(t)} \rangle} = \alpha^2 \end{aligned} \hspace{\stretch{1}}(3.63)

Since \left\langle{{X}}\right\rangle^2 = \alpha^2 \sin^2(\omega t)/2, we have

\begin{aligned}(\Delta X)^2 = \left\langle{{X^2}}\right\rangle - \left\langle{{X}}\right\rangle^2 = \alpha^2 \left(1 - \frac{1}{{2}} \sin^2(\omega t) \right)\end{aligned} \hspace{\stretch{1}}(3.64)
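These time dependent expectations can be checked by evolving the truncated oscillator numerically. A numpy sketch with \alpha = 1 and \hbar = \omega = 1, in which \left\langle{{X}}\right\rangle tracks \sin(\omega t)/\sqrt{2} and \left\langle{{X^2}}\right\rangle stays fixed at \alpha^2:

```python
import numpy as np

N = 30
a = np.diag(np.sqrt(np.arange(1, N)), k=1)     # truncated annihilation operator
X = (a + a.T) / np.sqrt(2)                     # alpha = 1
energies = np.arange(N) + 0.5                  # E_n / (hbar omega)

psi0 = np.zeros(N, dtype=complex)
psi0[0] = 1 / np.sqrt(2)
psi0[1] = 1j / np.sqrt(2)                      # (|0> + i|1>)/sqrt(2)

for wt in np.linspace(0.0, 6.0, 13):
    psi = np.exp(-1j * energies * wt) * psi0   # diagonal time evolution
    x_mean = (psi.conj() @ X @ psi).real
    x2_mean = (psi.conj() @ X @ X @ psi).real
    assert np.isclose(x_mean, np.sin(wt) / np.sqrt(2))
    assert np.isclose(x2_mean, 1.0)
```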

\paragraph{Q: (d)}

Now suppose that initially the system were prepared in the ground state {\lvert {0} \rangle}, and then the resonance frequency is changed abruptly from \omega to \omega' so that the Hamiltonian becomes

\begin{aligned}H = \frac{1}{{2m}}P^2 + \frac{1}{{2}} m {\omega'}^2 X^2.\end{aligned} \hspace{\stretch{1}}(3.65)

Immediately, an energy measurement is performed; what is the probability of obtaining the result E = \hbar \omega' (3/2)?

\paragraph{A:}

This energy measurement E = \hbar \omega' (3/2) = \hbar \omega' (1 + 1/2), corresponds to an observation of state {\lvert {1'} \rangle}, after an initial observation of {\lvert {0} \rangle}. The probability of such a measurement is

\begin{aligned}{\left\lvert{ \left\langle{{1'}} \vert {{0}}\right\rangle }\right\rvert}^2\end{aligned} \hspace{\stretch{1}}(3.66)

Note that

\begin{aligned}\left\langle{{1'}} \vert {{0}}\right\rangle &=\int dx \left\langle{{1'}} \vert {{x}}\right\rangle\left\langle{{x}} \vert {{0}}\right\rangle \\ &=\int dx \psi_{1'}^{*}(x) \psi_0(x)\end{aligned}

The wave functions above are

\begin{aligned}\psi_{1'}(x) &= \frac{ 2 x e^{-x^2/2 {\alpha'}^2 }}{ \alpha' \sqrt{ 2 \alpha' \sqrt{\pi} } } \\ \psi_{0}(x) &= \frac{ e^{-x^2/2 {\alpha}^2 } } { \sqrt{ \alpha \sqrt{\pi} } } \end{aligned} \hspace{\stretch{1}}(3.67)

Putting the pieces together we have

\begin{aligned}\left\langle{{1'}} \vert {{0}}\right\rangle &=\frac{2 }{ \alpha' \sqrt{ 2 \alpha' \alpha \pi } }\int dxx e^{-\frac{x^2}{2}\left( \frac{1}{{{\alpha'}^2}} + \frac{1}{{\alpha^2}} \right) }\end{aligned} \hspace{\stretch{1}}(3.69)

Since the integrand is odd over a symmetric interval, this evaluates to zero, and we conclude that the probability of measuring the specified energy is zero when the system is initially prepared in the ground state associated with the original Hamiltonian. Intuitively this makes some sense, if one thinks of the Fourier coefficient problem: one cannot construct an even function from linear combinations of purely odd functions.
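The vanishing overlap follows from parity alone, which sympy confirms directly (a sketch with symbolic widths for the two oscillators):

```python
import sympy as sp

x = sp.symbols('x', real=True)
a0, ap = sp.symbols('alpha alpha_p', positive=True)   # widths for omega and omega'

# integrand of <1'|0> up to constant factors: odd in x
integrand = x * sp.exp(-(x**2 / 2) * (1 / ap**2 + 1 / a0**2))
overlap = sp.integrate(integrand, (x, -sp.oo, sp.oo))
```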


My submission for PHY356 (Quantum Mechanics I) Problem Set 4.

Posted by peeterjoot on December 7, 2010

[Click here for a PDF of this post with nicer formatting]

Grading notes.

The pdf version above has been adjusted with some grading commentary. [Click here for the PDF of the original submission, as found below.]

Problem 1.

Statement

Is it possible to derive the eigenvalues and eigenvectors presented in Section 8.2 from those in Section 8.1.2? What does this say about the potential energy operator in these two situations?

For reference, 8.1.2 was a finite potential well, with V(x) = V_0 for {\left\lvert{x}\right\rvert} > a, and zero in the interior of the well. This had trigonometric solutions in the interior, and died off exponentially past the boundary of the well.

On the other hand, 8.2 was a delta function potential V(x) = -g \delta(x), which had the solution u(x) = \sqrt{\beta} e^{-\beta {\left\lvert{x}\right\rvert}}, where \beta = m g/\hbar^2.

Solution

The pair of figures in the text [1] for these potentials doesn’t make it clear that there are possibly any similarities. The attractive delta function potential isn’t illustrated (although the delta function is, but with opposite sign), and the scaling and the reference energy levels are different. Let’s illustrate these using the same reference energy level and sign conventions to make the similarities more obvious.

\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.4\textheight]{FiniteWellPotential}
\caption{8.1.2 Finite Well potential (with energy shifted downwards by V_0)}
\end{figure}

\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.4\textheight]{deltaFunctionPotential}
\caption{8.2 Delta function potential.}
\end{figure}

The physics isn’t changed by picking a different point for the reference energy level, so let’s compare the two potentials, and their solutions using V(x) = 0 outside of the well for both cases. The method used to solve the finite well problem in the text is hard to follow, so re-doing this from scratch in a slightly tidier way doesn’t hurt.

Schr\”{o}dinger’s equation for the finite well, in the {\left\lvert{x}\right\rvert} > a region is

\begin{aligned}-\frac{\hbar^2}{2m} u'' = E u = - E_B u,\end{aligned} \hspace{\stretch{1}}(2.1)

where a positive bound state energy E_B = -E > 0 has been introduced.

Writing

\begin{aligned}\beta = \sqrt{\frac{2 m E_B}{\hbar^2}},\end{aligned} \hspace{\stretch{1}}(2.2)

the wave functions outside of the well are

\begin{aligned}u(x) =\left\{\begin{array}{l l}u(-a) e^{\beta(x+a)} &\quad \mbox{$x < -a$} \\ u(a) e^{-\beta(x-a)} &\quad \mbox{$x > a$} \\ \end{array}\right.\end{aligned} \hspace{\stretch{1}}(2.3)

Within the well Schr\”{o}dinger’s equation is

\begin{aligned}-\frac{\hbar^2}{2m} u'' - V_0 u = E u = - E_B u,\end{aligned} \hspace{\stretch{1}}(2.4)

or

\begin{aligned}u'' = - \frac{2m}{\hbar^2} (V_0 - E_B) u,\end{aligned} \hspace{\stretch{1}}(2.5)

Noting that the bound state energies are the E_B < V_0 values, let \alpha^2 = 2m (V_0 - E_B)/\hbar^2, so that the solutions are of the form

\begin{aligned}u(x) = A e^{i\alpha x} + B e^{-i\alpha x}.\end{aligned} \hspace{\stretch{1}}(2.6)

As was done for the wave functions outside of the well, the normalization constants can be expressed in terms of the values of the wave functions on the boundary. That provides a pair of equations to solve

\begin{aligned}\begin{bmatrix}u(a) \\ u(-a)\end{bmatrix}=\begin{bmatrix}e^{i \alpha a} & e^{-i \alpha a} \\ e^{-i \alpha a} & e^{i \alpha a}\end{bmatrix}\begin{bmatrix}A \\ B\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(2.7)

Inverting this and substitution back into 2.6 yields

\begin{aligned}u(x) &=\begin{bmatrix}e^{i\alpha x} & e^{-i\alpha x}\end{bmatrix}\begin{bmatrix}A \\ B\end{bmatrix} \\ &=\begin{bmatrix}e^{i\alpha x} & e^{-i\alpha x}\end{bmatrix}\frac{1}{{e^{2 i \alpha a} - e^{-2 i \alpha a}}}\begin{bmatrix}e^{i \alpha a} & -e^{-i \alpha a} \\ -e^{-i \alpha a} & e^{i \alpha a}\end{bmatrix}\begin{bmatrix}u(a) \\ u(-a)\end{bmatrix} \\ &=\begin{bmatrix}\frac{\sin(\alpha (a + x))}{\sin(2 \alpha a)} &\frac{\sin(\alpha (a - x))}{\sin(2 \alpha a)}\end{bmatrix}\begin{bmatrix}u(a) \\ u(-a)\end{bmatrix}.\end{aligned}

Expanding the last of these matrix products the wave function is close to completely specified.

\begin{aligned}u(x) =\left\{\begin{array}{l l}u(-a) e^{\beta(x+a)} & \quad \mbox{$x < -a$} \\ u(a) \frac{\sin(\alpha (a + x))}{\sin(2 \alpha a)} +u(-a) \frac{\sin(\alpha (a - x))}{\sin(2 \alpha a)} & \quad \mbox{${\left\lvert{x}\right\rvert} \le a$} \\ u(a) e^{-\beta(x-a)} & \quad \mbox{$x > a$} \\ \end{array}\right.\end{aligned} \hspace{\stretch{1}}(2.8)

There are still two unspecified constants u(\pm a) and the constraints on E_B have not been determined (both \alpha and \beta are functions of that energy level). It should be possible to eliminate at least one of the u(\pm a) by computing the wavefunction normalization, and since the well is being narrowed the \alpha term will not be relevant. Since only the vanishingly narrow case where a \rightarrow 0, x \in [-a,a] is of interest, the wave function in that interval approaches

\begin{aligned}u(x) \rightarrow \frac{1}{{2}} (u(a) + u(-a)) + \frac{x}{2a} ( u(a) - u(-a) ) \rightarrow \frac{1}{{2}} (u(a) + u(-a)).\end{aligned} \hspace{\stretch{1}}(2.9)

Since no discontinuity is expected this is just u(a) = u(-a). Let’s write \lim_{a\rightarrow 0} u(a) = A for short, and the limited width well wave function becomes

\begin{aligned}u(x) =\left\{\begin{array}{l l}A e^{\beta x} & \quad \mbox{$x < 0$} \\ A e^{-\beta x} & \quad \mbox{$x > 0$} \\ \end{array}\right.\end{aligned} \hspace{\stretch{1}}(2.10)

This is now the same form as the delta function potential, and normalization also gives A = \sqrt{\beta}.

One task remains before the attractive delta function potential can be considered a limiting case for the finite well, since the relation between a, V_0, and g has not been established. To do so integrate the Schr\”{o}dinger equation over the infinitesimal range [-a,a]. This was done in the text for the delta function potential, and that provided the relation

\begin{aligned}\beta = \frac{mg}{\hbar^2}\end{aligned} \hspace{\stretch{1}}(2.11)

For the finite well this is

\begin{aligned}-\frac{\hbar^2}{2m} \int_{-a}^a u'' - V_0 \int_{-a}^a u = -E_B \int_{-a}^a u\end{aligned} \hspace{\stretch{1}}(2.12)

In the limit as a \rightarrow 0 this is

\begin{aligned}\frac{\hbar^2}{2m} (u'(a) - u'(-a)) + V_0 2 a u(0) = 2 E_B a u(0).\end{aligned} \hspace{\stretch{1}}(2.13)

Some care is required with the V_0 a term since a \rightarrow 0 as V_0 \rightarrow \infty, but the E_B term is unambiguously killed, leaving

\begin{aligned}\frac{\hbar^2}{2m} u(0) (-2\beta e^{-\beta a}) = -V_0 2 a u(0).\end{aligned} \hspace{\stretch{1}}(2.14)

The exponential vanishes in the limit and leaves

\begin{aligned}\beta = \frac{m (2 a) V_0}{\hbar^2}\end{aligned} \hspace{\stretch{1}}(2.15)

Comparing to 2.11 from the attractive delta function completes the problem. The conclusion is that when the finite well is narrowed with a \rightarrow 0, also letting V_0 \rightarrow \infty such that the absolute area of the well g = (2 a) V_0 is maintained, the finite potential well produces exactly the attractive delta function wave function and associated bound state energy.
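This limit can also be explored numerically (a sketch with \hbar = m = 1). The even bound state of a well of half-width a and depth V_0 = g/(2a) satisfies the standard transcendental condition \alpha \tan(\alpha a) = \beta, and its energy approaches the delta function result E_B = g^2/2 as a \rightarrow 0. The bisection solver here is illustrative scaffolding, not part of the original submission:

```python
import math

def bound_state_energy(a, g):
    """Even-parity bound state energy of a well of half-width a, depth V0 = g/(2a)."""
    V0 = g / (2 * a)
    def f(E):
        alpha = math.sqrt(2 * (V0 - E))
        return alpha * math.tan(alpha * a) - math.sqrt(2 * E)
    lo, hi = 1e-9 * V0, (1 - 1e-9) * V0   # f > 0 at lo, f < 0 at hi
    for _ in range(200):                  # bisection
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

g = 1.0
E = bound_state_energy(1e-5, g)
assert abs(E - g**2 / 2) < 1e-3           # delta well: E_B = m g^2 / (2 hbar^2)
```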

Problem 2.

Statement

For the hydrogen atom, determine {\langle {nlm} \rvert}(1/R){\lvert {nlm} \rangle} and 1/{\langle {nlm} \rvert}R{\lvert {nlm} \rangle} such that (nlm)=(211) and R is the radial position operator (X^2+Y^2+Z^2)^{1/2}. What do these quantities represent physically and are they the same?

Solution

Both of the computation tasks for the hydrogen like atom require expansion of a braket of the following form

\begin{aligned}{\langle {nlm} \rvert} A(R) {\lvert {nlm} \rangle},\end{aligned} \hspace{\stretch{1}}(3.16)

where A(R) = R = (X^2 + Y^2 + Z^2)^{1/2} or A(R) = 1/R.

The spherical representation of the identity resolution is required to convert this braket into integral form

\begin{aligned}\mathbf{1} = \int r^2 \sin\theta dr d\theta d\phi {\lvert { r \theta \phi} \rangle}{\langle { r \theta \phi} \rvert},\end{aligned} \hspace{\stretch{1}}(3.17)

where the spherical wave function is given by the braket \left\langle{{ r \theta \phi}} \vert {{nlm}}\right\rangle = R_{nl}(r) Y_{lm}(\theta,\phi).

Additionally, the radial form of the delta function will be required, which is

\begin{aligned}\delta(\mathbf{x} - \mathbf{x}') = \frac{1}{{r^2 \sin\theta}} \delta(r - r') \delta(\theta - \theta') \delta(\phi - \phi')\end{aligned} \hspace{\stretch{1}}(3.18)

Two applications of the identity operator to the braket yield

\begin{aligned}{\langle {nlm} \rvert} A(R) {\lvert {nlm} \rangle}&={\langle {nlm} \rvert} \mathbf{1} A(R) \mathbf{1} {\lvert {nlm} \rangle} \\ &=\int dr d\theta d\phi dr' d\theta' d\phi'r^2 \sin\theta {r'}^2 \sin\theta' \left\langle{{nlm}} \vert {{ r \theta \phi}}\right\rangle{\langle { r \theta \phi} \rvert} A(R) {\lvert { r' \theta' \phi'} \rangle}\left\langle{{ r' \theta' \phi'}} \vert {{nlm}}\right\rangle \\ &=\int dr d\theta d\phi dr' d\theta' d\phi'r^2 \sin\theta {r'}^2 \sin\theta' R_{nl}(r) Y_{lm}^{*}(\theta, \phi){\langle { r \theta \phi} \rvert} A(R) {\lvert { r' \theta' \phi'} \rangle}R_{nl}(r') Y_{lm}(\theta', \phi')\end{aligned}

To continue, an assumption about the matrix element {\langle { r \theta \phi} \rvert} A(R) {\lvert { r' \theta' \phi'} \rangle} is required. It seems reasonable that this would be

\begin{aligned}{\langle { r \theta \phi} \rvert} A(R) {\lvert { r' \theta' \phi'} \rangle} = \delta(\mathbf{x} - \mathbf{x}') A(r) = \frac{1}{{r^2 \sin\theta}} \delta(r-r') \delta(\theta -\theta')\delta(\phi-\phi') A(r).\end{aligned} \hspace{\stretch{1}}(3.19)

The braket can now be written completely in integral form as

\begin{aligned}{\langle {nlm} \rvert} A(R) {\lvert {nlm} \rangle} &=\int dr d\theta d\phi dr' d\theta' d\phi'r^2 \sin\theta {r'}^2 \sin\theta' R_{nl}(r) Y_{lm}^{*}(\theta, \phi)\frac{1}{{r^2 \sin\theta}} \delta(r-r') \delta(\theta -\theta')\delta(\phi-\phi') A(r)R_{nl}(r') Y_{lm}(\theta', \phi') \\ &=\int dr d\theta d\phi dr' d\theta' d\phi'{r'}^2 \sin\theta' R_{nl}(r) Y_{lm}^{*}(\theta, \phi)\delta(r-r') \delta(\theta -\theta')\delta(\phi-\phi') A(r)R_{nl}(r') Y_{lm}(\theta', \phi')\end{aligned}

Application of the delta functions then reduces the integral. Since the only \theta and \phi dependence is in the (orthonormal) Y_{lm} terms, the angular integrals drop out

\begin{aligned}{\langle {nlm} \rvert} A(R) {\lvert {nlm} \rangle}&=\int dr d\theta d\phi r^2 \sin\theta R_{nl}(r) Y_{lm}^{*}(\theta, \phi)A(r)R_{nl}(r) Y_{lm}(\theta, \phi) \\ &=\int dr r^2 R_{nl}(r) A(r)R_{nl}(r) \underbrace{\int\sin\theta d\theta d\phi Y_{lm}^{*}(\theta, \phi)Y_{lm}(\theta, \phi) }_{=1}\\ \end{aligned}

This leaves just the radial wave functions in the integral

\begin{aligned}{\langle {nlm} \rvert} A(R) {\lvert {nlm} \rangle}=\int dr r^2 R_{nl}^2(r) A(r)\end{aligned} \hspace{\stretch{1}}(3.20)

As a consistency check, observe that with A(r) = 1 this integral evaluates to 1 according to equation (8.274) in the text, so we can think of (r R_{nl}(r))^2 as the radial probability density.

The problem asks specifically for these expectation values for the {\lvert {211} \rangle} state. For that state the radial wavefunction is found in (8.277) as

\begin{aligned}R_{21}(r) = \left(\frac{Z}{2a_0}\right)^{3/2} \frac{ Z r }{a_0 \sqrt{3}} e^{-Z r/2 a_0}\end{aligned} \hspace{\stretch{1}}(3.21)

The braket can now be written explicitly

\begin{aligned}{\langle {21m} \rvert} A(R) {\lvert {21m} \rangle}=\frac{1}{{24}} \left(\frac{ Z }{a_0 } \right)^5\int_0^\infty dr r^4 e^{-Z r/ a_0}A(r)\end{aligned} \hspace{\stretch{1}}(3.22)

Now, let’s consider the two functions A(r) separately. First for A(r) = r we have

\begin{aligned}{\langle {21m} \rvert} R {\lvert {21m} \rangle}&=\frac{1}{{24}} \left(\frac{ Z }{a_0 } \right)^5\int_0^\infty dr r^5 e^{-Z r/ a_0} \\ &=\frac{ a_0 }{ 24 Z } \int_0^\infty du u^5 e^{-u} \\ \end{aligned}

The last integral evaluates to 120, leaving

\begin{aligned}{\langle {21m} \rvert} R {\lvert {21m} \rangle}=\frac{ 5 a_0 }{ Z }.\end{aligned} \hspace{\stretch{1}}(3.23)

The expectation value of the radial position in this {\lvert {21m} \rangle} state is proportional to the Bohr radius. For the hydrogen atom, where Z=1, the average of repeated measurements of the physical quantity associated with the operator R is 5 times the Bohr radius for the n=2, l=1 states.

Our problem actually asks for the inverse of this expectation value, and for reference this is

\begin{aligned}1/ {\langle {21m} \rvert} R {\lvert {21m} \rangle}=\frac{ Z }{ 5 a_0 } \end{aligned} \hspace{\stretch{1}}(3.24)

Performing the same task for A(R) = 1/R

\begin{aligned}{\langle {21m} \rvert} 1/R {\lvert {21m} \rangle}&=\frac{1}{{24}} \left(\frac{ Z }{a_0 } \right)^5\int_0^\infty dr r^3e^{-Z r/ a_0} \\ &=\frac{1}{{24}} \frac{ Z }{ a_0 } \int_0^\infty du u^3e^{-u}.\end{aligned}

This last integral has value 6, and we have the second part of the computational task complete

\begin{aligned}{\langle {21m} \rvert} 1/R {\lvert {21m} \rangle} = \frac{1}{{4}} \frac{ Z }{ a_0 } \end{aligned} \hspace{\stretch{1}}(3.25)
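As a numerical sanity check (my own addition, not part of the problem), both brakets can be verified by integrating the {\lvert {21m} \rangle} radial density directly. This is a sketch, assuming a_0 = Z = 1 so that the expected results are \left\langle{{R}}\right\rangle = 5 and \left\langle{{1/R}}\right\rangle = 1/4.

```python
# Numerical check of the |21m> radial expectation values, using the R_21
# radial function quoted above; a sketch with a_0 = Z = 1 assumed.
import math

a0, Z = 1.0, 1.0

def R21(r):
    # R_21(r) = (Z/2a_0)^{3/2} (Z r / (a_0 sqrt(3))) e^{-Z r/(2 a_0)}
    return (Z / (2 * a0))**1.5 * (Z * r / (a0 * math.sqrt(3))) * math.exp(-Z * r / (2 * a0))

def radial_expectation(A, rmax=200.0, n=200000):
    # <A(R)> = int_0^inf dr r^2 R_21(r)^2 A(r), by the trapezoid rule;
    # r = 0 is skipped (the integrand vanishes there even for A(r) = 1/r)
    h = rmax / n
    total = 0.0
    for i in range(1, n + 1):
        r = i * h
        w = 0.5 if i == n else 1.0
        total += w * r**2 * R21(r)**2 * A(r) * h
    return total

norm = radial_expectation(lambda r: 1.0)       # normalization, should be ~1
r_avg = radial_expectation(lambda r: r)        # should be ~5 a_0/Z
inv_r_avg = radial_expectation(lambda r: 1/r)  # should be ~Z/(4 a_0)
```

The three numbers reproduce the normalization check, 3.23, and 3.25 to quadrature accuracy.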

This answers the question of whether 3.24 and 3.25 are equal: they are not.

Still remaining for this problem is the question of what these quantities represent physically.

The quantity {\langle {nlm} \rvert} R {\lvert {nlm} \rangle} is the expectation value for the radial position of the particle measured from the center of mass of the system. This is the average outcome for many measurements of this radial distance when the system is prepared in the state {\lvert {nlm} \rangle} prior to each measurement.

Interestingly, the physical quantity that we associate with the operator R has a different measured average than the inverse of the expectation value of the inverted operator 1/R. Regardless, we have a physical (observable) quantity associated with the operator 1/R, and when the system is prepared in state {\lvert {21m} \rangle} prior to each measurement, the average outcome of many measurements of this physical quantity is {\langle {21m} \rvert} 1/R {\lvert {21m} \rangle} = Z/(n^2 a_0), a quantity inversely proportional to the Bohr radius.

ASIDE: Comparing to the general case.

As a confirmation of the results obtained, we can check 3.24 and 3.25 against the general form of the expectation values \left\langle{{R^s}}\right\rangle for various powers s of the radial position operator. These can be found in locations such as farside.ph.utexas.edu, which gives them for Z=1 (without proof), and in [2] (where these and some harder-looking expectation values are left as an exercise for the reader to prove). Both of those give:

\begin{aligned}\left\langle{{R}}\right\rangle &= \frac{a_0}{2} ( 3 n^2 -l (l+1) ) \\ \left\langle{{1/R}}\right\rangle &= \frac{1}{n^2 a_0} \end{aligned} \hspace{\stretch{1}}(3.26)

It is curious to me that in the general expectation values noted in 3.26 there is an l quantum number dependence for \left\langle{{R}}\right\rangle, but only an n quantum number dependence for \left\langle{{1/R}}\right\rangle. It is not obvious to me why this would be the case.
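The general formulas in 3.26 can also be evaluated for the n=2, l=1 state to confirm the specific results above. A small sketch, with Z = 1 and a_0 = 1 as those formulas assume:

```python
# Cross-check of the general hydrogenic expectation values quoted in 3.26,
# evaluated for a few states; a sketch with Z = 1, a_0 = 1 assumed.
a0 = 1.0

def r_avg(n, l):
    # <R> = (a_0/2) (3 n^2 - l(l+1)); depends on both n and l
    return (a0 / 2) * (3 * n**2 - l * (l + 1))

def inv_r_avg(n, l):
    # <1/R> = 1/(n^2 a_0); note the absence of any l dependence
    return 1.0 / (n**2 * a0)

# The n=2, l=1 values should reproduce 3.23 and 3.25 (with Z=1),
# and 1/<R> should differ from <1/R>
check_r = r_avg(2, 1)          # expect 5 a_0
check_inv_r = inv_r_avg(2, 1)  # expect 1/(4 a_0)
```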

References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

[2] R. Liboff. Introductory quantum mechanics. Cambridge: Addison-Wesley Press, Inc, 2003.

Posted in Math and Physics Learning. | Tagged: , , , , , , , , , , | Leave a Comment »

PHY356F: Quantum Mechanics I. Lecture 10 notes. Hydrogen atom.

Posted by peeterjoot on November 23, 2010

[Click here for a PDF of this post with nicer formatting]

Introduce the center of mass coordinates.

We’ll want to solve this using the formalism we’ve discussed. The general problem is a proton, positively charged, with a nearby negative charge (the electron).

Our equation to solve is

\begin{aligned}\left(-\frac{\hbar^2}{2 m_1} \boldsymbol{\nabla}_1^2-\frac{\hbar^2}{2 m_2} \boldsymbol{\nabla}_2^2\right)\bar{u}(\mathbf{r}_1, \mathbf{r}_2) + V(\mathbf{r}_1, \mathbf{r}_2)\bar{u}(\mathbf{r}_1, \mathbf{r}_2)=E \bar{u}(\mathbf{r}_1, \mathbf{r}_2).\end{aligned} \hspace{\stretch{1}}(6.123)

Here \left( -\frac{\hbar^2}{2 m_1} \boldsymbol{\nabla}_1^2 -\frac{\hbar^2}{2 m_2} \boldsymbol{\nabla}_2^2 \right) is the total kinetic energy term.
For hydrogen we can consider the potential to be the Coulomb potential energy function that depends only on \mathbf{r}_1 - \mathbf{r}_2. We can transform this using a center of mass transformation. Introduce the center of mass coordinate and relative coordinate vectors

\begin{aligned}\mathbf{R} &= \frac{m_1 \mathbf{r}_1 + m_2 \mathbf{r}_2}{ m_1 + m_2 } \\ \mathbf{r} &= \mathbf{r}_1 - \mathbf{r}_2.\end{aligned} \hspace{\stretch{1}}(6.124)

The notation \boldsymbol{\nabla}_k^2 represents the Laplacian for the position of the k’th particle, so that if \mathbf{r}_1 = (x_1, y_1, z_1) is the position of the first particle, the Laplacian for this is:

\begin{aligned}\boldsymbol{\nabla}_1^2=\frac{\partial^2}{\partial x_1^2}+\frac{\partial^2}{\partial y_1^2}+\frac{\partial^2}{\partial z_1^2}\end{aligned} \hspace{\stretch{1}}(6.126)

Here \mathbf{R} is the center of mass coordinate, and \mathbf{r} is the relative coordinate. With this transformation we can reduce the problem to a single coordinate PDE.

We set \bar{u}(\mathbf{r}_1, \mathbf{r}_2) = u(\mathbf{r}) U(\mathbf{R}) and E = E_{rel} + E_{cm}, and get

\begin{aligned}-\frac{\hbar^2}{2\mu} {\boldsymbol{\nabla}_{\mathbf{r}}}^2 u(\mathbf{r}) + V(\mathbf{r}) u(\mathbf{r}) = E_{rel} u(\mathbf{r})\end{aligned} \hspace{\stretch{1}}(6.127)

and

\begin{aligned}-\frac{\hbar^2}{2M} {\boldsymbol{\nabla}_{\mathbf{R}}}^2 U(\mathbf{R}) = E_{cm} U(\mathbf{R})\end{aligned} \hspace{\stretch{1}}(6.128)

where M = m_1 + m_2 is the total mass, and \mu = m_1 m_2/M is the reduced mass.

Aside: WHY do we care (slide of Hydrogen line spectrum shown)? This all started because when people looked at the spectrum for the hydrogen atom, a continuous spectrum was not found; instead, quantized frequencies were found. All this abstract Hilbert space notation with its bras and kets is a way of representing observable phenomena.

Also note that we have the same sort of problems in electrodynamics and mechanics, so we are able to recycle this sort of work, either applying it in those problems later, or using those techniques here.

In Electromagnetism these are the problems involving the solution to

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{E} = 0\end{aligned} \hspace{\stretch{1}}(6.129)

or for

\begin{aligned}\mathbf{E} = - \boldsymbol{\nabla} \Phi\end{aligned} \hspace{\stretch{1}}(6.130)

\begin{aligned}\boldsymbol{\nabla}^2 \Phi = 0,\end{aligned} \hspace{\stretch{1}}(6.131)

where \mathbf{E} is the electric field and \Phi is the electric potential.

We need to solve 6.127 for u(\mathbf{r}). In spherical coordinates

\begin{aligned}-\frac{\hbar^2}{2m} \frac{1}{{r}} \frac{d^2}{dr^2} ( r R_l ) + \left( V(\mathbf{r}) + \frac{\hbar^2 l(l+1)}{2m r^2} \right) R_l = E R_l\end{aligned} \hspace{\stretch{1}}(6.132)

where

\begin{aligned}u(\mathbf{r}) = R_l(\mathbf{r}) Y_{lm}(\theta, \phi)\end{aligned} \hspace{\stretch{1}}(6.133)

This all follows by the separation of variables technique that we’ll use here, in E and M, in PDEs, and so forth.

FIXME: picture drawn. Theta measured down from \mathbf{e}_3 axis to the position \mathbf{r} and \phi measured in the x,y plane measured in the \mathbf{e}_1 to \mathbf{e}_2 orientation.

For the hydrogen atom, we have

\begin{aligned}V(\mathbf{r}) = - \frac{Z e^2}{r}\end{aligned} \hspace{\stretch{1}}(6.134)

Here is what this looks like.

We introduce

\begin{aligned}\rho &= \alpha r \\ \alpha &= \sqrt{\frac{-8 m E}{\hbar^2}} \\ \lambda &= \frac{2 m Z e^2}{\hbar^2 \alpha} \\ \frac{2 m (- E) }{\hbar^2 \alpha^2 } &= \frac{1}{{4}}\end{aligned} \hspace{\stretch{1}}(6.135)

and write

\begin{aligned}\frac{d^2 R_l}{d\rho^2} + \frac{2}{\rho} \frac{d R_l}{d\rho} + \left( \frac{\lambda}{\rho} - \frac{l(l+1)}{\rho^2} - \frac{1}{{4}} \right) R_l = 0\end{aligned} \hspace{\stretch{1}}(6.139)

Large \rho limit.

For \rho \rightarrow \infty, 6.139 becomes

\begin{aligned}\frac{d^2 R_l}{d\rho^2} - \frac{1}{{4}} R_l = 0\end{aligned} \hspace{\stretch{1}}(6.140)

which implies solutions of the form

\begin{aligned}R_l(\rho) = e^{\pm \rho/2}\end{aligned} \hspace{\stretch{1}}(6.141)

but we keep only the normalizable R_l(\rho) = e^{-\rho/2}, and note that R_l(\rho) = F(\rho)e^{-\rho/2}, where F(\rho) is a polynomial, is also a solution in the limit of \rho \rightarrow \infty.

Let F(\rho) = \rho^s L(\rho) where L(\rho) = a_0 + a_1 \rho + \cdots a_\nu \rho^\nu + \cdots.

Small \rho limit.

We also want to consider the small \rho limit, and piece together the information that we find. Think about the following. The small \rho \rightarrow 0 or r \rightarrow 0 limit gives

\begin{aligned}\frac{d^2 R_l}{d\rho^2} - \frac{l(l+1)}{\rho^2} R_l = 0\end{aligned} \hspace{\stretch{1}}(6.142)

\paragraph{Question:} Is this correct?

Not always. We will also think about the l=0 case later (where \lambda/\rho would probably need to be retained.)

We need:

\begin{aligned}\frac{d^2 R_l}{d\rho^2} + \frac{2}{\rho} \frac{d R_l}{d\rho} - \frac{l(l+1)}{\rho^2} R_l = 0\end{aligned} \hspace{\stretch{1}}(6.143)

Instead of using 6.142 as in the text, we must substitute R_l = \rho^s into the above to find

\begin{aligned}s(s-1) \rho^{s-2} + 2 s \rho^{s-2} - l(l+1) \rho^{s-2} &= 0 \\ \left( s(s-1) + 2 s - l(l+1) \right) \rho^{s-2} &= 0\end{aligned} \hspace{\stretch{1}}(6.144)

For this equality to hold for all \rho we need

\begin{aligned}s(s-1) + 2 s - l(l+1) = 0\end{aligned} \hspace{\stretch{1}}(6.146)

This has solutions s = l and s = -(l+1), and we need s non-negative for normalizability, which implies

\begin{aligned}R_l(\rho) = \rho^l L(\rho) e^{-\rho/2}.\end{aligned} \hspace{\stretch{1}}(6.147)

Now we need to find what restrictions we must have on L(\rho). Recall that we have L(\rho) = \sum a_\nu \rho^\nu. Substitution into 6.139 gives

\begin{aligned}\rho \frac{d^2 L}{d\rho^2} + \left( 2(l+1) - \rho \right) \frac{d L}{d \rho} + (\lambda - l - 1) L = 0\end{aligned} \hspace{\stretch{1}}(6.148)

Collecting powers of \rho, we get an equation of the form

\begin{aligned}A_0 + A_1 \rho + \cdots A_\nu \rho^\nu + \cdots = 0\end{aligned} \hspace{\stretch{1}}(6.149)

For this to be valid for all \rho,

\begin{aligned}a_{\nu+1} \left( (\nu+1)(\nu+ 2l + 2)\right)-a_{\nu} \left( \nu - \lambda + l + 1\right)=0\end{aligned} \hspace{\stretch{1}}(6.150)

or

\begin{aligned}\frac{a_{\nu+1}}{ a_{\nu} } =\frac{ \nu - \lambda + l + 1 }{ (\nu+1)(\nu+ 2l + 2) }\end{aligned} \hspace{\stretch{1}}(6.151)

For large \nu we have

\begin{aligned}\frac{a_{\nu+1}}{ a_{\nu} } =\frac{1}{{\nu+1}}\rightarrow \frac{1}{{\nu}}\end{aligned} \hspace{\stretch{1}}(6.152)

Recall that for the exponential Taylor series we have

\begin{aligned}e^\rho = 1 + \rho + \frac{\rho^2}{2!} + \cdots\end{aligned} \hspace{\stretch{1}}(6.153)

for which we have

\begin{aligned}\frac{a_{\nu+1}}{a_\nu} \rightarrow \frac{1}{{\nu}}\end{aligned} \hspace{\stretch{1}}(6.154)

L(\rho) is behaving like e^\rho, and if we had that

\begin{aligned}R_l(\rho) = \rho^l L(\rho) e^{-\rho/2} \rightarrow \rho^l e^\rho e^{-\rho/2} = \rho^l e^{\rho/2}\end{aligned} \hspace{\stretch{1}}(6.155)

This is divergent, so for normalizable solutions we require L(\rho) to be a polynomial of a finite number of terms.

The polynomial L(\rho) must stop at \nu = n', and we must have

\begin{aligned}a_{\nu+1} = a_{n' +1} = 0\end{aligned} \hspace{\stretch{1}}(6.156)

\begin{aligned}a_{n'} \ne 0\end{aligned} \hspace{\stretch{1}}(6.157)

From 6.150 we have

\begin{aligned}a_{n'} \left( n' - \lambda + l + 1\right)=0\end{aligned} \hspace{\stretch{1}}(6.158)

so we require

\begin{aligned}n' = \lambda - l - 1\end{aligned} \hspace{\stretch{1}}(6.159)

Let \lambda = n, an integer, with n' = 0, 1, 2, \cdots, so that n' + l + 1 = n implies for n = 1, 2, \cdots

\begin{aligned}l \le n-1\end{aligned} \hspace{\stretch{1}}(6.160)
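The termination of the series can be illustrated concretely. A sketch (my own, not from the lecture), iterating the recursion 6.151 with exact rational arithmetic: for integer \lambda = n > l the ratio hits zero at \nu = n - l - 1 and every subsequent coefficient vanishes, leaving a finite polynomial.

```python
# Iterate the series recursion a_{nu+1}/a_nu = (nu - lambda + l + 1) /
# ((nu+1)(nu + 2l + 2)) and observe termination for integer lambda = n > l.
from fractions import Fraction

def series_coefficients(lam, l, nmax=10):
    a = [Fraction(1)]  # a_0 = 1; the overall normalization is arbitrary
    for nu in range(nmax):
        ratio = Fraction(nu - lam + l + 1, (nu + 1) * (nu + 2 * l + 2))
        a.append(a[-1] * ratio)
    return a

# n = 2, l = 0: the polynomial should stop after a_1 (n' = n - l - 1 = 1)
coeffs = series_coefficients(lam=2, l=0)
```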

If

\begin{aligned}\lambda = n = \frac{2 m Z e^2 }{\hbar^2 \alpha}\end{aligned} \hspace{\stretch{1}}(6.161)

we have

\begin{aligned}E = E_n = - \frac{Z^2 e^2 }{2 a_0} \frac{1}{{n^2}}\end{aligned} \hspace{\stretch{1}}(6.162)

where a_0 = \hbar^2/m e^2 is the Bohr radius, and \alpha = \sqrt{-8 m E/\hbar^2}. In the lecture m was originally used for the reduced mass. I’ve switched to \mu earlier so that this cannot be mixed up with this use of m for the azimuthal quantum number associated with L_z Y_{lm} = m \hbar Y_{lm}.

PICTURE ON BOARD. Energy level transitions on 1/n^2 graph with differences between n=2 to n=1 shown, and photon emitted as a result of the n=2 to n=1 transition.
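Those level differences can be put in numbers. A sketch (mine, not from the lecture), assuming the conventional value e^2/(2 a_0) \approx 13.6057 \text{eV} for the Rydberg energy, a constant taken from outside this post:

```python
# Hydrogen energy level sketch (Z = 1), assuming the Rydberg energy
# e^2/(2 a_0) ~ 13.6057 eV (an outside numerical value, not from the notes).
RYDBERG_EV = 13.6057

def E_n(n):
    # E_n = -(Z^2 e^2 / 2 a_0) / n^2, in eV for Z = 1
    return -RYDBERG_EV / n**2

# energy of the photon emitted in the n=2 -> n=1 transition (~10.2 eV)
photon_2_to_1 = E_n(2) - E_n(1)
```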

From Chapter 4 and the story of the spherical harmonics, for a given l, the quantum number m varies between -l and l in integer steps. The radial part of the solution of this separation of variables problem becomes

\begin{aligned}R_l = \rho^l L(\rho) e^{-\rho/2}\end{aligned} \hspace{\stretch{1}}(6.163)

where the functions L(\rho) are the Laguerre polynomials, and our complete wavefunction is

\begin{aligned}u_{nlm}(r, \theta, \phi) = R_l(\rho) Y_{lm}(\theta, \phi)\end{aligned} \hspace{\stretch{1}}(6.164)

\begin{aligned}n &= 1, 2, \cdots \\ l &= 0, 1, 2, \cdots, n-1 \\ m &= -l, -l+1, \cdots 0, 1, 2, \cdots, l-1, l\end{aligned} \hspace{\stretch{1}}(6.165)

Note that for n=1, l=0, R_{10} \propto e^{-r/a_0}, as graphed here.
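The constraints 6.165 can be enumerated directly; doing so shows that for each n there are n^2 states (ignoring spin). A small sketch:

```python
# Enumerate the allowed (n, l, m) triples of 6.165 and count them;
# the count for each n should come out to n^2 (spin ignored).
def states(n):
    # l runs over 0 .. n-1, and m over -l .. l for each l
    return [(n, l, m) for l in range(n) for m in range(-l, l + 1)]

degeneracies = {n: len(states(n)) for n in (1, 2, 3, 4)}
```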

Posted in Math and Physics Learning. | Tagged: , , , , , , , , , | Leave a Comment »

Oct 12, PHY356F lecture notes.

Posted by peeterjoot on October 12, 2010

Today as an experiment I tried taking live notes in latex in class (previously I’d not taken any notes since the lectures followed the text so closely).

[Click here for a PDF of this post with nicer formatting]

Oct 12.

Review. What have we learned?

Chapter 1.

Information about systems comes from vectors and operators. Express the vector {\lvert {\phi} \rangle} describing the system in terms of the eigenvectors {\lvert {a_n} \rangle}, n \in 1,2,3,\cdots, of some operator A. What are the coefficients c_n? Act on both sides by {\langle {a_m} \rvert} to find

\begin{aligned}\left\langle{{a_m}} \vert {{\phi}}\right\rangle &= \sum_n c_n \underbrace{\left\langle{{a_m}} \vert {{a_n}}\right\rangle}_{\text{Kronecker delta}}  \\ &= \sum_n c_n \delta_{mn} \\ &= c_m\end{aligned}

\begin{aligned}c_m = \left\langle{{a_m}} \vert {{\phi}}\right\rangle\end{aligned}

Analogy

\begin{aligned}\mathbf{v} = \sum_i v_i \mathbf{e}_i \end{aligned}

\begin{aligned}\mathbf{e}_1 \cdot \mathbf{v} = \sum_i v_i \mathbf{e}_1 \cdot \mathbf{e}_i = v_1\end{aligned}

Physical information comes from the probability for obtaining a measurement of the physical entity associated with operator A. The probability of obtaining outcome a_m, an eigenvalue of A, is {\left\lvert{c_m}\right\rvert}^2
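The expansion and probability rule above can be made concrete with a small finite-dimensional sketch (my own illustration, with an assumed 3-dimensional state vector):

```python
# Discrete analogue of c_m = <a_m|phi>: expand a normalized state in an
# orthonormal eigenbasis and read off the outcome probabilities |c_m|^2.
import math

basis = [
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
]
phi = [1 / math.sqrt(2), 1j / 2, 1 / 2]  # a normalized example state

def inner(a, b):
    # <a|b>, with complex conjugation on the bra side
    return sum(x.conjugate() * y for x, y in zip(a, b))

c = [inner(a_m, phi) for a_m in basis]   # expansion coefficients c_m
probs = [abs(cm)**2 for cm in c]         # outcome probabilities
```

Since the basis is orthonormal and phi is normalized, the probabilities sum to one.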

Chapter 2.

Deal with operators that have continuous eigenvalues and eigenvectors.

We now express

\begin{aligned}{\lvert {\phi} \rangle} = \int dk f(k) {\lvert {k} \rangle}\end{aligned}

Here the coefficients f(k) are analogous to c_n.

Now if we project onto k'

\begin{aligned}\left\langle{{k'}} \vert {{\phi}}\right\rangle &= \int dk f(k) \underbrace{\left\langle{{k'}} \vert {{k}}\right\rangle}_{\text{Dirac delta}} \\ &= \int dk f(k) \delta(k' -k) \\ &= f(k') \end{aligned}

Unlike the discrete case, {\left\lvert{f(k')}\right\rvert}^2 is not a probability; it is the probability density for obtaining outcome k'.

Example 2.

\begin{aligned}{\lvert {\phi} \rangle} = \int dk f(k) {\lvert {k} \rangle}\end{aligned}

Now if we project x onto both sides

\begin{aligned}\left\langle{{x}} \vert {{\phi}}\right\rangle &= \int dk f(k) \left\langle{{x}} \vert {{k}}\right\rangle \\ \end{aligned}

With \left\langle{{x}} \vert {{k}}\right\rangle = u_k(x)

\begin{aligned}\phi(x) &\equiv \left\langle{{x}} \vert {{\phi}}\right\rangle \\ &= \int dk f(k) u_k(x)  \\ &= \int dk f(k) \frac{1}{{\sqrt{L}}} e^{ikx}\end{aligned}

This is with periodic boundary value conditions for the normalization. Normalization over an infinite interval is also possible.

\begin{aligned}\phi(x) &= \frac{1}{{\sqrt{L}}} \int dk f(k) e^{ikx}\end{aligned}

Multiply both sides by e^{-ik'x}/\sqrt{L} and integrate. This is analogous to multiplying {\lvert {\phi} \rangle} = \int f(k) {\lvert {k} \rangle} dk by {\langle {k'} \rvert}. We get

\begin{aligned}\int \phi(x) \frac{1}{{\sqrt{L}}} e^{-ik'x} dx&= \frac{1}{{L}} \iint dk f(k) e^{i(k-k')x} dx \\ &= \int dk f(k) \Bigl( \frac{1}{{L}} \int e^{i(k-k')x} dx \Bigr) \\ &= \int dk f(k) \delta(k-k') \\ &= f(k')\end{aligned}

\begin{aligned}f(k') &=\int \phi(x) \frac{1}{{\sqrt{L}}} e^{-ik'x} dx\end{aligned}

We can talk about the state vector in terms of its position basis \phi(x) or in momentum space via Fourier transformation. These are equivalent, just expressed differently. The question of interpretation in terms of probabilities works out the same: either way we look at the probability density.
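The coefficient extraction above can be checked numerically on a periodic interval. A sketch (my own, assuming L = 2\pi and a hand-picked set of nonzero f(k)): build \phi(x) as a finite superposition of e^{ikx}/\sqrt{L}, then project a single coefficient back out with the conjugate exponential.

```python
# Recover f(k') from phi(x) on a periodic interval of length L = 2 pi,
# discretizing the projection integral as a Riemann sum.
import cmath, math

L = 2 * math.pi
true_f = {1: 0.5, 3: 2.0, -2: 1.5}  # assumed example coefficients f(k)

def phi(x):
    # phi(x) = sum_k f(k) e^{ikx} / sqrt(L)
    return sum(fk * cmath.exp(1j * k * x) / math.sqrt(L) for k, fk in true_f.items())

def recover(kp, n=2000):
    # f(k') = int phi(x) e^{-ik'x} / sqrt(L) dx over [0, L)
    h = L / n
    return sum(phi(i * h) * cmath.exp(-1j * kp * i * h) / math.sqrt(L) * h
               for i in range(n))
```

For integer k the discrete sum of e^{i(k-k')x} over equally spaced points vanishes exactly unless k = k', so the recovery is essentially exact.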

The quantity

\begin{aligned}{\lvert {\phi} \rangle} = \int dk f(k) {\lvert {k} \rangle}\end{aligned}

is also called a wave packet state since it involves a superposition of many states {\lvert {k} \rangle}. Example: See Fig 4.1 (Gaussian wave packet, with {\left\lvert{\phi}\right\rvert}^2 as the height). This wave packet is a snapshot of the wave function amplitude at one specific time instant. The evolution of this wave packet is governed by the Hamiltonian, which brings us to chapter 3.

Chapter 3.

For

\begin{aligned}{\lvert {\phi} \rangle} = \int dk f(k) {\lvert {k} \rangle}\end{aligned}

How do we find {\lvert {\phi(t)} \rangle}, the time evolved state? Here we have the option of choosing which of the pictures (Schr\”{o}dinger, Heisenberg, interaction) we deal with. Since the Heisenberg picture deals with time evolved operators, and the interaction picture splits the evolution between states and operators, neither of these is required to answer this question. Consider the Schr\”{o}dinger picture, which gives

\begin{aligned}{\lvert {\phi(t)} \rangle} = \int dk f(k) {\lvert {k} \rangle} e^{-i E_k t/\hbar}\end{aligned}

where E_k is the eigenvalue of the Hamiltonian operator H.

STRONG SEEMING HINT: If looking for additional problems and homework, consider in detail the time evolution of the Gaussian wave packet state.

Chapter 4.

For three dimensions with V(x,y,z) = 0

\begin{aligned}H &= \frac{1}{{2m}} \mathbf{p}^2 \\ \mathbf{p} &= \sum_i p_i \mathbf{e}_i \\ \end{aligned}

In the position representation, where

\begin{aligned}p_i &= -i \hbar \frac{d}{dx_i}\end{aligned}

the Schr\”{o}dinger equation is

\begin{aligned}H u(x,y,z) &= E u(x,y,z) \\ H &= -\frac{\hbar^2}{2m} \boldsymbol{\nabla}^2 \\ &= -\frac{\hbar^2}{2m} \left( \frac{\partial^2}{\partial {x}^2}+\frac{\partial^2}{\partial {y}^2}+\frac{\partial^2}{\partial {z}^2}\right) \end{aligned}

Separation of variables assumes it is possible to let

\begin{aligned}u(x,y,z) = X(x) Y(y) Z(z)\end{aligned}

(these capital letters are functions, not operators).

\begin{aligned}-\frac{\hbar^2}{2m} \left( YZ \frac{\partial^2 X}{\partial {x}^2}+ XZ \frac{\partial^2 Y}{\partial {y}^2}+ XY \frac{\partial^2 Z}{\partial {z}^2}\right)&= E X Y Z\end{aligned}

Dividing as usual by XYZ we have

\begin{aligned}-\frac{\hbar^2}{2m} \left( \frac{1}{{X}} \frac{\partial^2 X}{\partial {x}^2}+ \frac{1}{{Y}} \frac{\partial^2 Y}{\partial {y}^2}+ \frac{1}{{Z}} \frac{\partial^2 Z}{\partial {z}^2} \right)&= E \end{aligned}

The curious thing is that we have these three derivative terms summing to an energy E that is independent of x, y, z, so each term must separately be constant. We can separate these into three individual equations

\begin{aligned}-\frac{\hbar^2}{2m} \frac{1}{{X}} \frac{\partial^2 X}{\partial {x}^2} &= E_1 \\ -\frac{\hbar^2}{2m} \frac{1}{{Y}} \frac{\partial^2 Y}{\partial {y}^2} &= E_2 \\ -\frac{\hbar^2}{2m} \frac{1}{{Z}} \frac{\partial^2 Z}{\partial {z}^2} &= E_3\end{aligned}

or

\begin{aligned}\frac{\partial^2 X}{\partial {x}^2} &= \left( - \frac{2m E_1}{\hbar^2} \right) X  \\ \frac{\partial^2 Y}{\partial {y}^2} &= \left( - \frac{2m E_2}{\hbar^2} \right) Y  \\ \frac{\partial^2 Z}{\partial {z}^2} &= \left( - \frac{2m E_3}{\hbar^2} \right) Z \end{aligned}

We have then

\begin{aligned}X(x) = C_1 e^{i k_1 x}\end{aligned}

with

\begin{aligned}E_1 &= \frac{\hbar^2 k_1^2 }{2m} = \frac{p_1^2}{2m} \\ E_2 &= \frac{\hbar^2 k_2^2 }{2m} = \frac{p_2^2}{2m} \\ E_3 &= \frac{\hbar^2 k_3^2 }{2m} = \frac{p_3^2}{2m} \end{aligned}

We are free to use any sort of normalization procedure we wish (periodic boundary conditions, Dirac delta function normalization over all space, …)
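The separated solutions are easy to spot-check. A finite difference sketch (my own, with units \hbar = m = 1 assumed) verifying that X(x) = e^{i k_1 x} satisfies X'' = -k_1^2 X, which is the separated equation with E_1 = \hbar^2 k_1^2/2m:

```python
# Finite-difference check that X(x) = e^{i k1 x} satisfies X'' = -k1^2 X,
# i.e. the separated free-particle equation with hbar = m = 1 (a sketch).
import cmath

k1 = 1.7     # an arbitrary example wave number
h = 1e-3     # finite-difference step

def X(x):
    return cmath.exp(1j * k1 * x)

x0 = 0.3
# central second difference approximates X''(x0) to O(h^2)
second_deriv = (X(x0 + h) - 2 * X(x0) + X(x0 - h)) / h**2
residual = abs(second_deriv + k1**2 * X(x0))  # should be ~0
```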

Angular momentum.

HOMEWORK: go through the steps to understand how to formulate \boldsymbol{\nabla}^2 in spherical polar coordinates. This is a lot of work, but is good practice and background for dealing with the Hydrogen atom, something with spherical symmetry that is most naturally analyzed in the spherical polar coordinates.

In spherical coordinates (We won’t go through this here, but it is good practice) with

\begin{aligned}x &= r \sin\theta \cos\phi \\ y &= r \sin\theta \sin\phi \\ z &= r \cos\theta\end{aligned}

we have with u = u(r,\theta, \phi)

\begin{aligned}-\frac{\hbar^2}{2m} \left( \frac{1}{{r}} \partial_{rr} (r u) +  \frac{1}{{r^2 \sin\theta}} \partial_\theta (\sin\theta \partial_\theta u) + \frac{1}{{r^2 \sin^2\theta}} \partial_{\phi\phi} u \right)&= E u\end{aligned}

We see the start of a separation of variables attack with u = R(r) Y(\theta, \phi). We end up with

\begin{aligned}-\frac{\hbar^2}{2m} \left( \frac{r}{R} (r R')' +  \frac{1}{{Y \sin\theta}} \partial_\theta (\sin\theta \partial_\theta Y) + \frac{1}{{Y \sin^2\theta}} \partial_{\phi\phi} Y \right) &= E\end{aligned}

\begin{aligned}r (r R')' + \left( \frac{2m E}{\hbar^2} r^2 - \lambda \right) R &= 0\end{aligned}

\begin{aligned}\frac{1}{{Y \sin\theta}} \partial_\theta (\sin\theta \partial_\theta Y) + \frac{1}{{Y \sin^2\theta}} \partial_{\phi\phi} Y &= -\lambda\end{aligned}

Application of separation of variables again, with Y = P(\theta) Q(\phi) gives us

\begin{aligned}\frac{1}{{P \sin\theta}} \partial_\theta (\sin\theta \partial_\theta P) + \frac{1}{{Q \sin^2\theta}} \partial_{\phi\phi} Q &= -\lambda \end{aligned}

\begin{aligned}\frac{\sin\theta}{P } \partial_\theta (\sin\theta \partial_\theta P) +\lambda  \sin^2\theta+ \frac{1}{{Q }} \partial_{\phi\phi} Q &= 0\end{aligned}

\begin{aligned}\frac{\sin\theta}{P } \partial_\theta (\sin\theta \partial_\theta P) + \lambda \sin^2\theta - \mu &= 0 \\ \frac{1}{{Q }} \partial_{\phi\phi} Q &= -\mu\end{aligned}

or

\begin{aligned}\frac{1}{P \sin\theta} \partial_\theta (\sin\theta \partial_\theta P) +\lambda -\frac{\mu}{\sin^2\theta} &= 0\end{aligned} \hspace{\stretch{1}}(1.1)

\begin{aligned}\partial_{\phi\phi} Q &= -\mu Q\end{aligned} \hspace{\stretch{1}}(1.2)

The equation for P can be solved using the Legendre function P_l^m(\cos\theta) where \lambda = l(l+1) and l is an integer

Replacing \mu with m^2, where m is an integer

\begin{aligned}\frac{d^2 Q}{d\phi^2} &= -m^2 Q\end{aligned}

Imposing a periodic boundary condition Q(\phi) = Q(\phi + 2\pi), where (m = 0, \pm 1, \pm 2, \cdots) we have

\begin{aligned}Q &= \frac{1}{{\sqrt{2\pi}}} e^{im\phi}\end{aligned}

There is the overall solution u(r,\theta,\phi) = R(r) Y(\theta, \phi) for a free particle. The functions Y(\theta, \phi) are

\begin{aligned}Y_{lm}(\theta, \phi) &= N \left( \frac{1}{{\sqrt{2\pi}}} e^{im\phi} \right) \underbrace{ P_l^m(\cos\theta) }_{ -l \le m \le l }\end{aligned}

where N is a normalization constant, and m = 0, \pm 1, \pm 2, \cdots. Y_{lm} is an eigenstate of the \mathbf{L}^2 operator and L_z (two for the price of one). There’s no specific reason for the direction z, but it is the direction picked out of convention.
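The azimuthal factor alone already shows why m must be an integer. A sketch (mine), checking that Q(\phi) = e^{im\phi}/\sqrt{2\pi} is 2\pi periodic only for integer m, and that it is normalized on [0, 2\pi):

```python
# Check periodicity and normalization of the azimuthal factor
# Q(phi) = e^{i m phi} / sqrt(2 pi) (a sketch).
import cmath, math

def Q(m, phi):
    return cmath.exp(1j * m * phi) / math.sqrt(2 * math.pi)

def norm_sq(m, n=5000):
    # int_0^{2 pi} |Q|^2 dphi, by a Riemann sum; should be 1
    h = 2 * math.pi / n
    return sum(abs(Q(m, i * h))**2 * h for i in range(n))

periodic = abs(Q(2, 0.4) - Q(2, 0.4 + 2 * math.pi)) < 1e-12      # integer m
not_periodic = abs(Q(0.5, 0.4) - Q(0.5, 0.4 + 2 * math.pi)) > 0.1  # non-integer m
```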

Angular momentum is given by

\begin{aligned}\mathbf{L} = \mathbf{r} \times \mathbf{p}\end{aligned}

where

\begin{aligned}\mathbf{R} = x \hat{\mathbf{x}} + y\hat{\mathbf{y}} + z\hat{\mathbf{z}}\end{aligned}

and

\begin{aligned}\mathbf{p} = p_x \hat{\mathbf{x}} + p_y\hat{\mathbf{y}} + p_z\hat{\mathbf{z}}\end{aligned}

The important thing to remember is that the aim of following all the math is to show that

\begin{aligned}\mathbf{L}^2 Y_{lm} = \hbar^2 l (l+1) Y_{lm}\end{aligned}

and simultaneously

\begin{aligned}\mathbf{L}_z Y_{lm} = \hbar m Y_{lm}\end{aligned}

Part of the solution involves working with \left[{L_z},{L_{+}}\right], and \left[{L_z},{L_{-}}\right], where

\begin{aligned}L_{+} &= L_x + i L_y \\ L_{-} &= L_x - i L_y\end{aligned}

An exercise (not in the book) is to evaluate

\begin{aligned}\left[{L_z},{L_{+}}\right] &= L_z L_x + i L_z L_y - L_x L_z - i L_y L_z \end{aligned} \hspace{\stretch{1}}(1.3)

where

\begin{aligned}\left[{L_x},{L_y}\right]  &= i \hbar L_z \\ \left[{L_y},{L_z}\right]  &= i \hbar L_x \\ \left[{L_z},{L_x}\right]  &= i \hbar L_y\end{aligned} \hspace{\stretch{1}}(1.4)

Substitution back in 1.3 we have

\begin{aligned}\left[{L_z},{L_{+}}\right] &=\left[{L_z},{L_x}\right] + i \left[{L_z},{L_y}\right]  \\ &=i \hbar ( L_y - i L_x ) \\ &=\hbar ( i L_y +  L_x ) \\ &=\hbar L_{+}\end{aligned}
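The abstract commutator result can also be checked concretely in a matrix representation. A sketch (my own, with \hbar = 1): the standard 3 \times 3 matrices for l = 1 are an assumed outside ingredient, not derived in the lecture.

```python
# Verify [L_z, L_+] = hbar L_+ in the standard l = 1 matrix representation,
# with hbar = 1 (a sketch; the matrices are assumed, not from the notes).
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matsub(A, B):
    return [[A[i][j] - B[i][j] for j in range(3)] for i in range(3)]

# L_z = diag(1, 0, -1); L_+ raises m by one with matrix elements sqrt(2)
Lz = [[1, 0, 0], [0, 0, 0], [0, 0, -1]]
s2 = math.sqrt(2)
Lp = [[0, s2, 0], [0, 0, s2], [0, 0, 0]]

commutator = matsub(matmul(Lz, Lp), matmul(Lp, Lz))  # should equal L_+
```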

Posted in Math and Physics Learning. | Tagged: , , , , , , , , | Leave a Comment »

Notes and problems for Desai chapter IV.

Posted by peeterjoot on October 12, 2010

[Click here for a PDF of this post with nicer formatting]

Notes.

Chapter IV notes and problems for [1].

There’s a lot of magic related to the spherical Harmonics in this chapter, with identities pulled out of the Author’s butt. It would be nice to work through that, but need a better reference to work from (or skip ahead to chapter 26 where some of this is apparently derived).

Other stuff pending background derivation and verification are

\begin{itemize}
\item Antisymmetric tensor summation identity.

\begin{aligned}\sum_i \epsilon_{ijk} \epsilon_{iab} = \delta_{ja} \delta_{kb} - \delta_{jb}\delta_{ka}\end{aligned} \hspace{\stretch{1}}(1.1)

This is obviously the coordinate equivalent of the dot product of two bivectors

\begin{aligned}(\mathbf{e}_j \wedge \mathbf{e}_k) \cdot (\mathbf{e}_a \wedge \mathbf{e}_b) &=( (\mathbf{e}_j \wedge \mathbf{e}_k) \cdot \mathbf{e}_a ) \cdot \mathbf{e}_b =\delta_{ka}\delta_{jb} - \delta_{ja}\delta_{kb}\end{aligned} \hspace{\stretch{1}}(1.2)

We can prove 1.1 by expanding the LHS of 1.2 in coordinates

\begin{aligned}(\mathbf{e}_j \wedge \mathbf{e}_k) \cdot (\mathbf{e}_a \wedge \mathbf{e}_b)&= \sum_{ie} \left\langle{{\epsilon_{ijk} \mathbf{e}_j \mathbf{e}_k \epsilon_{eab} \mathbf{e}_a \mathbf{e}_b}}\right\rangle \\ &=\sum_{ie}\epsilon_{ijk} \epsilon_{eab}\left\langle{{(\mathbf{e}_i \mathbf{e}_i) \mathbf{e}_j \mathbf{e}_k (\mathbf{e}_e \mathbf{e}_e) \mathbf{e}_a \mathbf{e}_b}}\right\rangle \\ &=\sum_{ie}\epsilon_{ijk} \epsilon_{eab}\left\langle{{\mathbf{e}_i \mathbf{e}_e I^2}}\right\rangle \\ &=-\sum_{ie} \epsilon_{ijk} \epsilon_{eab} \delta_{ie} \\ &=-\sum_i\epsilon_{ijk} \epsilon_{iab}\qquad\square\end{aligned}

\item Question on raising and lowering arguments.

How equation (4.240) was arrived at is not clear. In (4.239) he writes

\begin{aligned}\int_0^{2\pi} \int_0^{\pi} d\theta d\phi(L_{-} Y_{lm})^\dagger L_{-} Y_{lm} \sin\theta\end{aligned}

Shouldn’t that Hermitian conjugation be just complex conjugation? If so, one would have

\begin{aligned}\int_0^{2\pi} \int_0^{\pi} d\theta d\phi L_{-}^{*} Y_{lm}^{*}L_{-} Y_{lm} \sin\theta\end{aligned}

How does he end up with the L_{-} and the Y_{lm}^{*} interchanged? What justifies this commutation?

A much clearer discussion of this can be found in The operators L_{\pm}, where Dirac notation is used for the normalization discussion.

\item Another question on raising and lowering arguments.

The reasoning leading to (4.238) isn’t clear to me. I fail to see how the L_{-} commutation with \mathbf{L}^2 implies this?

\end{itemize}
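The antisymmetric tensor summation identity from the first item above can also be brute-force verified numerically over all index values, a nice sanity check for the geometric algebra derivation. A sketch:

```python
# Brute-force check of sum_i eps_{ijk} eps_{iab} = d_{ja} d_{kb} - d_{jb} d_{ka}
# over all index combinations in three dimensions.
def eps(i, j, k):
    # Levi-Civita symbol
    if (i, j, k) in {(0, 1, 2), (1, 2, 0), (2, 0, 1)}:
        return 1
    if (i, j, k) in {(0, 2, 1), (2, 1, 0), (1, 0, 2)}:
        return -1
    return 0

def delta(i, j):
    return 1 if i == j else 0

identity_holds = all(
    sum(eps(i, j, k) * eps(i, a, b) for i in range(3))
    == delta(j, a) * delta(k, b) - delta(j, b) * delta(k, a)
    for j in range(3) for k in range(3) for a in range(3) for b in range(3)
)
```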

Problems

Problem 1.

Statement.

Write down the free particle Schr\”{o}dinger equation for two dimensions in (i) Cartesian and (ii) polar coordinates. Obtain the corresponding wavefunction.

Cartesian case.

For the Cartesian coordinates case we have

\begin{aligned}H = -\frac{\hbar^2}{2m} (\partial_{xx} + \partial_{yy}) = i \hbar \partial_t\end{aligned} \hspace{\stretch{1}}(2.3)

Application of separation of variables with \Psi = XYT gives

\begin{aligned}-\frac{\hbar^2}{2m} \left( \frac{X''}{X} +\frac{Y''}{Y} \right) = i \hbar \frac{T'}{T} = E .\end{aligned} \hspace{\stretch{1}}(2.4)

Immediately, we have the time dependence

\begin{aligned}T \propto e^{-i E t/\hbar},\end{aligned} \hspace{\stretch{1}}(2.5)

with the PDE reduced to

\begin{aligned}\frac{X''}{X} +\frac{Y''}{Y} = - \frac{2m E}{\hbar^2}.\end{aligned} \hspace{\stretch{1}}(2.6)

Introducing separate independent constants

\begin{aligned}\frac{X''}{X} &= a^2 \\ \frac{Y''}{Y} &= b^2 \end{aligned} \hspace{\stretch{1}}(2.7)

provides the pre-normalized wave function and the constraints on the constants

\begin{aligned}\Psi &= C e^{ax}e^{by}e^{-iE t/\hbar} \\ a^2 + b^2 &= -\frac{2 m E}{\hbar^2}.\end{aligned} \hspace{\stretch{1}}(2.9)

Rectangular normalization.

We are now ready to apply normalization constraints. One possibility is a rectangular periodicity requirement.

\begin{aligned}e^{ax} &= e^{a(x + \lambda_x)} \\ e^{by} &= e^{b(y + \lambda_y)} ,\end{aligned} \hspace{\stretch{1}}(2.11)

or

\begin{aligned}a\lambda_x &= 2 \pi i m \\ b\lambda_y &= 2 \pi i n.\end{aligned} \hspace{\stretch{1}}(2.13)

This provides a more explicit form for the energy expression (note that the index m in the numerator is an integer, not to be confused with the mass in the denominator)

\begin{aligned}E_{mn} &= \frac{1}{{2m}} 4 \pi^2 \hbar^2 \left( \frac{m^2}{{\lambda_x}^2}+\frac{n^2}{{\lambda_y}^2}\right).\end{aligned} \hspace{\stretch{1}}(2.15)

We can also add in the area normalization using

\begin{aligned}\left\langle{{\psi}} \vert {{\phi}}\right\rangle &= \int_{x=0}^{\lambda_x} dx\int_{y=0}^{\lambda_y} dy \psi^{*}(x,y) \phi(x,y).\end{aligned} \hspace{\stretch{1}}(2.16)

Our eigenfunctions are now completely specified

\begin{aligned}u_{mn}(x,y,t) &= \frac{1}{{\sqrt{\lambda_x \lambda_y}}}e^{2 \pi i m x/\lambda_x}e^{2 \pi i n y/\lambda_y}e^{-iE_{mn} t/\hbar}.\end{aligned} \hspace{\stretch{1}}(2.17)
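Orthonormality of these eigenfunctions under the inner product 2.16 can be spot-checked numerically. A sketch, where the box dimensions and grid resolution are arbitrary choices of mine:

```python
import numpy as np

lx, ly = 1.0, 2.0  # arbitrary box dimensions lambda_x, lambda_y

def u(m, n, x, y):
    # spatial part of u_{mn}; the time phase cancels in any inner product
    return np.exp(2j*np.pi*m*x/lx)*np.exp(2j*np.pi*n*y/ly)/np.sqrt(lx*ly)

# midpoint-rule inner product over the rectangle [0, lx] x [0, ly]
N = 400
xs = (np.arange(N) + 0.5)*lx/N
ys = (np.arange(N) + 0.5)*ly/N
X, Y = np.meshgrid(xs, ys, indexing='ij')
dA = (lx/N)*(ly/N)

def inner(f, g):
    return np.sum(np.conj(f)*g)*dA

assert abs(inner(u(1, 2, X, Y), u(1, 2, X, Y)) - 1) < 1e-9  # unit norm
assert abs(inner(u(1, 2, X, Y), u(0, 1, X, Y))) < 1e-9      # orthogonality
```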

The interesting thing about this solution is that we can make arbitrary linear combinations

\begin{aligned}f(x,y) = \sum_{mn} a_{mn} u_{mn}\end{aligned} \hspace{\stretch{1}}(2.18)

and then “solve” for a_{mn}, for an arbitrary f(x,y) by taking inner products

\begin{aligned}a_{mn} = \left\langle{{u_{mn}}} \vert {{f}}\right\rangle =\int_{x=0}^{\lambda_x} dx \int_{y=0}^{\lambda_y} dy f(x,y) u_{mn}^{*}(x,y).\end{aligned} \hspace{\stretch{1}}(2.19)

This gives the appearance that any function f(x,y) is a solution, but the equality of 2.18 only applies for functions in the span of this function vector space. The procedure works for arbitrary square integrable functions f(x,y), but the equality really means that the RHS will be the periodic extension of f(x,y).

Infinite space normalization.

An alternate normalization is possible by using the Fourier transform normalization, in which we substitute

\begin{aligned}\frac{2 \pi m }{\lambda_x} &= k_x \\ \frac{2 \pi n }{\lambda_y} &= k_y \end{aligned} \hspace{\stretch{1}}(2.20)

Our inner product is now

\begin{aligned}\left\langle{{\psi}} \vert {{\phi}}\right\rangle &= \int_{-\infty}^{\infty} dx\int_{-\infty}^{\infty} dy \psi^{*}(x,y) \phi(x,y).\end{aligned} \hspace{\stretch{1}}(2.22)

And the corresponding normalized wavefunction and associated energy constant E are

\begin{aligned}u_{\mathbf{k}}(x,y,t) &= \frac{1}{{2\pi}}e^{i k_x x}e^{i k_y y}e^{-iE t/\hbar} = \frac{1}{{2\pi}}e^{i \mathbf{k} \cdot \mathbf{x}}e^{-iE t/\hbar} \\ E &= \frac{\hbar^2 \mathbf{k}^2 }{2m}\end{aligned} \hspace{\stretch{1}}(2.23)

Now via this Fourier inner product we are able to construct a solution from any square integrable function. Again, this will not be
an exact equality since the Fourier transform has the effect of averaging across discontinuities.

Polar case.

In polar coordinates our gradient is

\begin{aligned}\boldsymbol{\nabla} &= \hat{\mathbf{r}} \partial_r + \frac{\hat{\boldsymbol{\theta}}}{r} \partial_\theta.\end{aligned} \hspace{\stretch{1}}(2.25)

with

\begin{aligned}\hat{\mathbf{r}} &= \mathbf{e}_1 e^{\mathbf{e}_1 \mathbf{e}_2 \theta} \\ \hat{\boldsymbol{\theta}} &= \mathbf{e}_2 e^{\mathbf{e}_1 \mathbf{e}_2 \theta} .\end{aligned} \hspace{\stretch{1}}(2.26)

Squaring the gradient for the Laplacian we’ll need the partials, which are

\begin{aligned}\partial_r \hat{\mathbf{r}} &= 0 \\ \partial_r \hat{\boldsymbol{\theta}} &= 0 \\ \partial_\theta \hat{\mathbf{r}} &= \hat{\boldsymbol{\theta}} \\ \partial_\theta \hat{\boldsymbol{\theta}} &= -\hat{\mathbf{r}}.\end{aligned}

The Laplacian is therefore

\begin{aligned}\boldsymbol{\nabla}^2 &= \left(\hat{\mathbf{r}} \partial_r + \frac{\hat{\boldsymbol{\theta}}}{r} \partial_\theta\right) \cdot \left(\hat{\mathbf{r}} \partial_r + \frac{\hat{\boldsymbol{\theta}}}{r} \partial_\theta\right) \\ &= \partial_{rr} + \frac{\hat{\boldsymbol{\theta}}}{r} \cdot \partial_\theta \left( \hat{\mathbf{r}} \partial_r \right) + \frac{\hat{\boldsymbol{\theta}}}{r} \cdot \partial_\theta \left( \frac{\hat{\boldsymbol{\theta}}}{r} \partial_\theta \right) \\ &= \partial_{rr} + \frac{\hat{\boldsymbol{\theta}}}{r} \cdot (\partial_\theta \hat{\mathbf{r}}) \partial_r + \frac{\hat{\boldsymbol{\theta}}}{r} \cdot \frac{\hat{\boldsymbol{\theta}}}{r} \partial_{\theta\theta} + \frac{\hat{\boldsymbol{\theta}}}{r} \cdot (\partial_\theta \hat{\boldsymbol{\theta}}) \frac{1}{{r}} \partial_\theta .\end{aligned}

Evaluating the derivatives we have

\begin{aligned}\boldsymbol{\nabla}^2 = \partial_{rr} + \frac{1}{{r}} \partial_r + \frac{1}{r^2} \partial_{\theta\theta},\end{aligned} \hspace{\stretch{1}}(2.28)

and are now prepared to move on to the solution of the Hamiltonian H = -(\hbar^2/2m) \boldsymbol{\nabla}^2. With separation of variables again using \Psi = R(r) \Theta(\theta) T(t) we have

\begin{aligned}-\frac{\hbar^2}{2m} \left( \frac{R''}{R} + \frac{R'}{rR} + \frac{1}{{r^2}} \frac{\Theta''}{\Theta} \right) = i \hbar \frac{T'}{T} = E.\end{aligned} \hspace{\stretch{1}}(2.29)
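The polar Laplacian 2.28 used here is easy to get wrong, so a sanity check against the Cartesian form is worthwhile. A sympy sketch using one arbitrary test function (my choice) rather than a general proof:

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
x, y = sp.symbols('x y', real=True)

F = r**2*sp.sin(theta)      # arbitrary smooth test function in polar form
G = y*sp.sqrt(x**2 + y**2)  # the same function written in Cartesian form

lap_polar = sp.diff(F, r, 2) + sp.diff(F, r)/r + sp.diff(F, theta, 2)/r**2
lap_cart = sp.diff(G, x, 2) + sp.diff(G, y, 2)

# convert the Cartesian result back to polar variables and compare
back = lap_cart.subs({x: r*sp.cos(theta), y: r*sp.sin(theta)})
assert sp.simplify(lap_polar - back) == 0
```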

Rearranging to separate the \Theta term we have

\begin{aligned}\frac{r^2 R''}{R} + \frac{r R'}{R} + \frac{2 m E}{\hbar^2} r^2 = -\frac{\Theta''}{\Theta} = \lambda^2.\end{aligned} \hspace{\stretch{1}}(2.30)

The angular solutions, with \lambda constrained to integer values by the requirement that \Theta be single valued, are given by

\begin{aligned}\Theta = \frac{1}{{\sqrt{2\pi}}} e^{i \lambda \theta}\end{aligned} \hspace{\stretch{1}}(2.31)

where the normalization is given by

\begin{aligned}\left\langle{{\psi}} \vert {{\phi}}\right\rangle &= \int_{0}^{2 \pi} d\theta \psi^{*}(\theta) \phi(\theta).\end{aligned} \hspace{\stretch{1}}(2.32)

And the radial part by the solution of the ODE

\begin{aligned}r^2 R'' + r R' + \left( \frac{2 m E}{\hbar^2} r^2 - \lambda^2 \right) R = 0\end{aligned} \hspace{\stretch{1}}(2.33)
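With k^2 = 2 m E/\hbar^2 this is Bessel’s equation of order \lambda, so R \propto J_\lambda(k r). A quick numeric residual check of that claim (a sketch; the sample k and \lambda values are arbitrary):

```python
import numpy as np
from scipy.special import jv

k, lam = 2.0, 3.0             # sample wavenumber and separation constant
r = np.linspace(0.5, 5.0, 50)
h = 1e-4                      # step for central finite differences

R = lambda rr: jv(lam, k*rr)
Rp = (R(r + h) - R(r - h))/(2*h)
Rpp = (R(r + h) - 2*R(r) + R(r - h))/h**2

# residual of r^2 R'' + r R' + (k^2 r^2 - lam^2) R = 0
residual = r**2*Rpp + r*Rp + (k**2*r**2 - lam**2)*R(r)
assert np.max(np.abs(residual)) < 1e-4
```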

Problem 2.

Statement.

Use the orthogonality property of P_l(\cos\theta)

\begin{aligned}\int_{-1}^1 dx P_l(x) P_{l'}(x) = \frac{2}{2l+1} \delta_{l l'},\end{aligned} \hspace{\stretch{1}}(2.34)

confirm that at least the first two terms of (4.171)

\begin{aligned}e^{i k r \cos\theta} = \sum_{l=0}^\infty (2l + 1) i^l j_l(kr) P_l(\cos\theta)\end{aligned} \hspace{\stretch{1}}(2.35)

are correct.

Solution.

Taking the inner product using the integral of 2.34 we have

\begin{aligned}\int_{-1}^1 dx e^{i k r x} P_{l'}(x) = 2 i^{l'} j_{l'}(kr) \end{aligned} \hspace{\stretch{1}}(2.36)

To confirm the first two terms we need

\begin{aligned}P_0(x) &= 1 \\ P_1(x) &= x \\ j_0(\rho) &= \frac{\sin\rho}{\rho} \\ j_1(\rho) &= \frac{\sin\rho}{\rho^2} - \frac{\cos\rho}{\rho}.\end{aligned} \hspace{\stretch{1}}(2.37)

On the LHS for l'=0 we have

\begin{aligned}\int_{-1}^1 dx e^{i k r x} = 2 \frac{\sin{kr}}{kr}\end{aligned} \hspace{\stretch{1}}(2.41)

On the LHS for l'=1 note that

\begin{aligned}\int dx x e^{i k r x} &= \int dx x \frac{d}{dx} \frac{e^{i k r x}}{ikr} \\ &= x \frac{e^{i k r x}}{ikr} - \frac{e^{i k r x}}{(ikr)^2}.\end{aligned}

So, integration over [-1,1] gives us

\begin{aligned}\int_{-1}^1 dx\, x\, e^{i k r x} =  -2i \frac{\cos{kr}}{kr} + 2i \frac{1}{{(kr)^2}} \sin{kr}.\end{aligned} \hspace{\stretch{1}}(2.42)

Now compare to the RHS for l'=0, which is

\begin{aligned}2 j_0(kr) = 2 \frac{\sin{kr}}{kr},\end{aligned} \hspace{\stretch{1}}(2.43)

which matches 2.41. For l'=1 we have

\begin{aligned}2 i j_1(kr) = 2i \frac{1}{{kr}} \left( \frac{\sin{kr}}{kr} - \cos{kr} \right),\end{aligned} \hspace{\stretch{1}}(2.44)

which in turn matches 2.42, completing the exercise.
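Both matches can also be confirmed numerically. A sketch using scipy quadrature, with kr = 3.7 an arbitrary test point:

```python
import numpy as np
from scipy.special import spherical_jn, eval_legendre
from scipy.integrate import quad

kr = 3.7  # arbitrary value of k r

for l in (0, 1):
    re, _ = quad(lambda u: np.cos(kr*u)*eval_legendre(l, u), -1, 1)
    im, _ = quad(lambda u: np.sin(kr*u)*eval_legendre(l, u), -1, 1)
    lhs = re + 1j*im                     # int_{-1}^1 e^{i k r u} P_l(u) du
    rhs = 2*(1j**l)*spherical_jn(l, kr)  # 2 i^l j_l(k r)
    assert abs(lhs - rhs) < 1e-8
```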

Problem 3.

Statement.

Obtain the commutation relations \left[{L_i},{L_j}\right] by calculating the vector \mathbf{L} \times \mathbf{L} using the definition \mathbf{L} = \mathbf{r} \times \mathbf{p} directly instead of introducing a differential operator.

Solution.

Expressing the product \mathbf{L} \times \mathbf{L} in determinant form sheds some light on this question. That is

\begin{aligned}\begin{vmatrix} \mathbf{e}_1 & \mathbf{e}_2 & \mathbf{e}_3 \\  L_1 & L_2 & L_3 \\  L_1 & L_2 & L_3\end{vmatrix}&= \mathbf{e}_1 \left[{L_2},{L_3}\right] +\mathbf{e}_2 \left[{L_3},{L_1}\right] +\mathbf{e}_3 \left[{L_1},{L_2}\right]= \frac{1}{{2}} \mathbf{e}_i \epsilon_{ijk} \left[{L_j},{L_k}\right]\end{aligned} \hspace{\stretch{1}}(2.45)

We see that evaluating this cross product in turn requires evaluation of the set of commutators. We can do that with the canonical commutator relationships directly using L_i = \epsilon_{ijk} r_j p_k like so

\begin{aligned}\left[{L_i},{L_j}\right]&=\epsilon_{imn} r_m p_n \epsilon_{jab} r_a p_b- \epsilon_{jab} r_a p_b \epsilon_{imn} r_m p_n \\ &=\epsilon_{imn} \epsilon_{jab} r_m (p_n r_a) p_b- \epsilon_{jab} \epsilon_{imn} r_a (p_b r_m) p_n \\ &=\epsilon_{imn} \epsilon_{jab} r_m (r_a p_n -i \hbar \delta_{an}) p_b- \epsilon_{jab} \epsilon_{imn} r_a (r_m p_b - i \hbar \delta_{mb}) p_n \\ &=\epsilon_{imn} \epsilon_{jab} (r_m r_a p_n p_b - r_a r_m p_b p_n )- i \hbar ( \epsilon_{imn} \epsilon_{jnb} r_m p_b - \epsilon_{jam} \epsilon_{imn} r_a p_n ).\end{aligned}

The first two terms cancel, and we can employ (4.179) to eliminate the antisymmetric tensors from the last two terms

\begin{aligned}\left[{L_i},{L_j}\right]&=i \hbar ( \epsilon_{nim} \epsilon_{njb} r_m p_b - \epsilon_{mja} \epsilon_{min} r_a p_n ) \\ &=i \hbar ( (\delta_{ij} \delta_{mb} -\delta_{ib} \delta_{mj}) r_m p_b - (\delta_{ji} \delta_{an} -\delta_{jn} \delta_{ai}) r_a p_n ) \\ &=i \hbar (\delta_{ij} \delta_{mb} r_m p_b - \delta_{ji} \delta_{an} r_a p_n - \delta_{ib} \delta_{mj} r_m p_b + \delta_{jn} \delta_{ai} r_a p_n ) \\ &=i \hbar (\delta_{ij} r_m p_m- \delta_{ji} r_a p_a- r_j p_i+ r_i p_j ) \\ &=i \hbar ( r_i p_j - r_j p_i ).\end{aligned}

since the first two Kronecker delta terms cancel. For k \ne i,j, this is i\hbar (\mathbf{r} \times \mathbf{p})_k, so we can write

\begin{aligned}\mathbf{L} \times \mathbf{L} &= \frac{1}{{2}} i\hbar \mathbf{e}_k \epsilon_{kij} ( r_i p_j - r_j p_i ) = i\hbar \mathbf{e}_k \epsilon_{kij} r_i p_j = i\hbar \mathbf{L}.\end{aligned} \hspace{\stretch{1}}(2.46)

In [2], the commutator relationships are summarized as \mathbf{L} \times \mathbf{L} = i \hbar \mathbf{L}, instead of using the antisymmetric tensor form (4.224)

\begin{aligned}\left[{L_i},{L_j}\right] &= i \hbar \epsilon_{ijk} L_k\end{aligned} \hspace{\stretch{1}}(2.47)

as is done here in Desai. Both say the same thing.
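As an independent cross-check (this time in the coordinate representation, so complementary to the purely algebraic calculation above), the commutator can be verified by letting the operators act on an arbitrary function. A sympy sketch for one commutator; the other two follow by cyclic symmetry:

```python
import sympy as sp

x, y, z, hbar = sp.symbols('x y z hbar')
f = sp.Function('f')(x, y, z)

# coordinate representation L_i = -i hbar epsilon_{ijk} x_j d/dx_k
def Lx(g): return -sp.I*hbar*(y*sp.diff(g, z) - z*sp.diff(g, y))
def Ly(g): return -sp.I*hbar*(z*sp.diff(g, x) - x*sp.diff(g, z))
def Lz(g): return -sp.I*hbar*(x*sp.diff(g, y) - y*sp.diff(g, x))

# [L_x, L_y] f = i hbar L_z f for arbitrary f
assert sp.simplify(sp.expand(Lx(Ly(f)) - Ly(Lx(f)) - sp.I*hbar*Lz(f))) == 0
```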

Problem 4.

Statement.

Solution.

TODO.

Problem 5.

Statement.

A free particle is moving along a path of radius R. Express the Hamiltonian in terms of the derivatives involving the polar angle of the particle and write down the Schr\”{o}dinger equation. Determine the wavefunction and the energy eigenvalues of the particle.

Solution.

In classical mechanics our Lagrangian for this system is

\begin{aligned}\mathcal{L} = \frac{1}{{2}} m R^2 \dot{\theta}^2,\end{aligned} \hspace{\stretch{1}}(2.48)

with the canonical momentum

\begin{aligned}p_\theta = \frac{\partial {\mathcal{L}}}{\partial {\dot{\theta}}} = m R^2 \dot{\theta}.\end{aligned} \hspace{\stretch{1}}(2.49)

Thus the classical Hamiltonian is

\begin{aligned}H = \frac{1}{{2m R^2}} {p_\theta}^2.\end{aligned} \hspace{\stretch{1}}(2.50)

By analogy the QM Hamiltonian operator will therefore be

\begin{aligned}H = -\frac{\hbar^2}{2m R^2} \partial_{\theta\theta}.\end{aligned} \hspace{\stretch{1}}(2.51)

For \Psi = \Theta(\theta) T(t), separation of variables gives us

\begin{aligned}-\frac{\hbar^2}{2m R^2} \frac{\Theta''}{\Theta} = i \hbar \frac{T'}{T} = E,\end{aligned} \hspace{\stretch{1}}(2.52)

from which we have

\begin{aligned}T &\propto e^{-i E t/\hbar} \\ \Theta &\propto e^{ \pm i \sqrt{2m E} R \theta/\hbar }.\end{aligned} \hspace{\stretch{1}}(2.53)

Requiring single valued \Theta, equal at any multiples of 2\pi, we have

\begin{aligned}e^{ \pm i \sqrt{2m E} R (\theta + 2\pi)/\hbar } = e^{ \pm i \sqrt{2m E} R \theta/\hbar },\end{aligned}

or

\begin{aligned}\pm \sqrt{2m E} \frac{R}{\hbar} 2\pi = 2 \pi n,\end{aligned}

for integer n. Suffixing the energy values with this index we have

\begin{aligned}E_n = \frac{n^2 \hbar^2}{2 m R^2}.\end{aligned} \hspace{\stretch{1}}(2.55)

Allowing both positive and negative integer values for n we have

\begin{aligned}\Psi = \frac{1}{{\sqrt{2\pi}}} e^{i n \theta} e^{-i E_n t/\hbar},\end{aligned} \hspace{\stretch{1}}(2.56)

where the normalization was a result of the use of a [0,2\pi] inner product over the angles

\begin{aligned}\left\langle{{\psi}} \vert {{\phi}}\right\rangle \equiv \int_0^{2\pi} \psi^{*}(\theta) \phi(\theta) d\theta.\end{aligned} \hspace{\stretch{1}}(2.57)
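The solution can be checked symbolically end to end. A sympy sketch (the concrete n = 3 used for the single-valuedness check is an arbitrary choice):

```python
import sympy as sp

theta, t = sp.symbols('theta t', real=True)
hbar, m, R = sp.symbols('hbar m R', positive=True)
n = sp.symbols('n', integer=True)

E_n = n**2*hbar**2/(2*m*R**2)
Psi = sp.exp(sp.I*n*theta)*sp.exp(-sp.I*E_n*t/hbar)/sp.sqrt(2*sp.pi)

# time dependent Schrodinger equation for H = -(hbar^2/(2 m R^2)) d^2/dtheta^2
lhs = -hbar**2/(2*m*R**2)*sp.diff(Psi, theta, 2)
rhs = sp.I*hbar*sp.diff(Psi, t)
assert sp.simplify(lhs - rhs) == 0

# single valued under theta -> theta + 2 pi (checked at n = 3)
P3 = Psi.subs(n, 3)
assert sp.simplify(sp.expand(P3.subs(theta, theta + 2*sp.pi) - P3)) == 0

# unit norm over [0, 2 pi]
assert sp.simplify(sp.integrate(sp.conjugate(Psi)*Psi, (theta, 0, 2*sp.pi)) - 1) == 0
```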

Problem 6.

Statement.

Determine \left[{L_i},{r}\right] and \left[{L_i},{\mathbf{r}}\right].

Solution.

Since L_i contain only \theta and \phi partials, \left[{L_i},{r}\right] = 0. For the position vector, however, we have an angular dependence, and are left to evaluate \left[{L_i},{\mathbf{r}}\right] = r \left[{L_i},{\hat{\mathbf{r}}}\right]. We’ll need the partials for \hat{\mathbf{r}}. We have

\begin{aligned}\hat{\mathbf{r}} &= \mathbf{e}_3 e^{I \hat{\boldsymbol{\phi}} \theta} \\ \hat{\boldsymbol{\phi}} &= \mathbf{e}_2 e^{\mathbf{e}_1 \mathbf{e}_2 \phi} \\ I &= \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3\end{aligned} \hspace{\stretch{1}}(2.58)

Evaluating the partials we have

\begin{aligned}\partial_\theta \hat{\mathbf{r}} = \hat{\mathbf{r}} I \hat{\boldsymbol{\phi}}\end{aligned}

With

\begin{aligned}\hat{\boldsymbol{\theta}} &= \tilde{R} \mathbf{e}_1 R \\ \hat{\boldsymbol{\phi}} &= \tilde{R} \mathbf{e}_2 R \\ \hat{\mathbf{r}} &= \tilde{R} \mathbf{e}_3 R\end{aligned} \hspace{\stretch{1}}(2.61)

where \tilde{R} R = 1, and \hat{\boldsymbol{\theta}} \hat{\boldsymbol{\phi}} \hat{\mathbf{r}} = \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3, we have

\begin{aligned}\partial_\theta \hat{\mathbf{r}} &= \tilde{R} \mathbf{e}_3 \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3 \mathbf{e}_2 R = \tilde{R} \mathbf{e}_1 R = \hat{\boldsymbol{\theta}}\end{aligned} \hspace{\stretch{1}}(2.64)

For the \phi partial we have

\begin{aligned}\partial_\phi \hat{\mathbf{r}}&= \mathbf{e}_3 \sin\theta I \hat{\boldsymbol{\phi}} \mathbf{e}_1 \mathbf{e}_2 \\ &= \sin\theta \hat{\boldsymbol{\phi}}\end{aligned}

We are now prepared to evaluate the commutators. Starting with the easiest we have

\begin{aligned}\left[{L_z},{\hat{\mathbf{r}}}\right] \Psi&=-i \hbar (\partial_\phi \hat{\mathbf{r}} \Psi - \hat{\mathbf{r}} \partial_\phi \Psi ) \\ &=-i \hbar (\partial_\phi \hat{\mathbf{r}}) \Psi  \\ \end{aligned}

So we have

\begin{aligned}\left[{L_z},{\hat{\mathbf{r}}}\right]&=-i \hbar \sin\theta \hat{\boldsymbol{\phi}}\end{aligned} \hspace{\stretch{1}}(2.65)

Observe that by virtue of the product rule, only the action of the partials on \hat{\mathbf{r}} itself contributes, and all the partials applied to \Psi cancel out due to the commutator differences. That simplifies the remaining commutator evaluations. For reference the polar form of L_x, and L_y are

\begin{aligned}L_x &= -i \hbar (-S_\phi \partial_\theta - C_\phi \cot\theta \partial_\phi) \\ L_y &= -i \hbar (C_\phi \partial_\theta - S_\phi \cot\theta \partial_\phi),\end{aligned} \hspace{\stretch{1}}(2.66)

where the sines and cosines are written with S, and C respectively for short.

We therefore have

\begin{aligned}\left[{L_x},{\hat{\mathbf{r}}}\right]&= -i \hbar (-S_\phi (\partial_\theta \hat{\mathbf{r}}) - C_\phi \cot\theta (\partial_\phi \hat{\mathbf{r}}) ) \\ &= -i \hbar (-S_\phi \hat{\boldsymbol{\theta}} - C_\phi \cot\theta S_\theta \hat{\boldsymbol{\phi}} ) \\ &= -i \hbar (-S_\phi \hat{\boldsymbol{\theta}} - C_\phi C_\theta \hat{\boldsymbol{\phi}} ) \\ \end{aligned}

and

\begin{aligned}\left[{L_y},{\hat{\mathbf{r}}}\right]&= -i \hbar (C_\phi (\partial_\theta \hat{\mathbf{r}}) - S_\phi \cot\theta (\partial_\phi \hat{\mathbf{r}})) \\ &= -i \hbar (C_\phi \hat{\boldsymbol{\theta}} - S_\phi C_\theta \hat{\boldsymbol{\phi}} ).\end{aligned}

Adding back in the factor of r, and summarizing we have

\begin{aligned}\left[{L_i},{r}\right] &= 0 \\ \left[{L_x},{\mathbf{r}}\right] &= -i \hbar r (-\sin\phi \hat{\boldsymbol{\theta}} - \cos\phi \cos\theta \hat{\boldsymbol{\phi}} ) \\ \left[{L_y},{\mathbf{r}}\right] &= -i \hbar r (\cos\phi \hat{\boldsymbol{\theta}} - \sin\phi \cos\theta \hat{\boldsymbol{\phi}} ) \\ \left[{L_z},{\mathbf{r}}\right] &= -i \hbar r \sin\theta \hat{\boldsymbol{\phi}}\end{aligned} \hspace{\stretch{1}}(2.68)
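The L_z result is easy to cross-check in Cartesian components: with \hat{\boldsymbol{\phi}} = (-\sin\phi, \cos\phi, 0) we have -i \hbar r \sin\theta \hat{\boldsymbol{\phi}} = i\hbar (y, -x, 0). A sympy sketch of that check:

```python
import sympy as sp

x, y, z, hbar = sp.symbols('x y z hbar')
f = sp.Function('f')(x, y, z)

def Lz(g):
    # coordinate representation L_z = -i hbar (x d/dy - y d/dx)
    return -sp.I*hbar*(x*sp.diff(g, y) - y*sp.diff(g, x))

# [L_z, x_i] acting on a test function f
comms = [sp.simplify(Lz(xi*f) - xi*Lz(f)) for xi in (x, y, z)]

# expected components of -i hbar r sin(theta) phi_hat, i.e. i hbar (y, -x, 0)
expected = [sp.I*hbar*y*f, -sp.I*hbar*x*f, 0]
assert all(sp.simplify(c - e) == 0 for c, e in zip(comms, expected))
```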

Problem 7.

Statement.

Show that

\begin{aligned}e^{-i\pi L_x /\hbar } {\lvert {l,m} \rangle} = {\lvert {l,m-1} \rangle}\end{aligned} \hspace{\stretch{1}}(2.72)

Solution.

TODO.

References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

[2] R. Liboff. Introductory quantum mechanics. 2003.

Posted in Math and Physics Learning. | Tagged: , , , , , , , , , , , , , , , , , , | Leave a Comment »

Spherical harmonic Eigenfunctions by application of the raising operator.

Posted by peeterjoot on August 23, 2009

[Click here for a PDF of this post with nicer formatting]

Motivation

In Bohm’s QT ([1]), the following spherical harmonic eigenfunctions of the raising operator are found

\begin{aligned}\psi_l^{l-s} = \frac{e^{i(l-s)\phi}}{(1-\zeta^2)^{(l-s)/2}} \frac{\partial^s}{\partial \zeta^s} (1-\zeta^2)^l \end{aligned} \quad\quad\quad(1)

This (unnormalized) result (with \zeta = \cos\theta) is valid for s \in [0,l]. As an exercise, do this by applying the raising operator to \psi_l^{-l}. This should help verify the result (unproven, or unclear if proven) that the \psi_l^m and \psi_l^{-m} eigenfunctions differ only by a sign in the \phi phase term.

Guts

The starting point, with C for \cos and S for \sin, will be equations (15) from the text

\begin{aligned}L_z/\hbar &= -i \partial_\phi \\ L_x/\hbar &= i (S_\phi \partial_\theta + \cot\theta C_\phi \partial_\phi) \\ L_y/\hbar &= -i (C_\phi \partial_\theta - \cot\theta S_\phi \partial_\phi) \end{aligned}

From these the raising and lowering operators (setting \hbar=1) are respectively

\begin{aligned}L_x \pm iL_y&= i (S_\phi \partial_\theta + \cot\theta C_\phi \partial_\phi)\pm (C_\phi \partial_\theta - \cot\theta S_\phi \partial_\phi) \\ &= e^{\pm i\phi} (\pm \partial_\theta + i \cot\theta \partial_\phi ) \end{aligned}

So, if we are after solutions to

\begin{aligned}(L_x \pm iL_y) \psi_l^{\pm l} = 0 \end{aligned} \quad\quad\quad(2)

and require of these \psi_l^{\pm l} = e^{\pm i l \phi} f_l^{\pm l}(\theta), then we want solutions of

\begin{aligned}(\pm \partial_\theta \pm i^2 l \cot\theta ) f_l^{\pm l} = 0 \end{aligned} \quad\quad\quad(3)

or

\begin{aligned}\pm (\partial_\theta - l \cot\theta ) f_l^{\pm l} = 0 \end{aligned} \quad\quad\quad(4)

What I wanted to demonstrate to myself, namely that the \theta dependence is the same for \psi_l^m as it is for \psi_l^{-m}, is therefore true from (4) for the first case, with m=l. We’ll need to apply the raising operator to \psi_l^{-l} to verify that this is the case for the rest of the indexes m.

To continue we need to integrate for f_l^{\pm l}

\begin{aligned}\int \frac{d f}{f} = l \int \cot\theta d\theta \end{aligned}

Which integrates to

\begin{aligned}\ln(f) = l \ln(\sin\theta) + \ln(\kappa) \end{aligned}

Exponentiating we have

\begin{aligned}f = \kappa (\sin\theta)^l \end{aligned}

and have

\begin{aligned}\psi_l^{\pm l} = e^{\pm i l \phi} (\sin\theta)^l \end{aligned} \quad\quad\quad(5)
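That these extremal states are really annihilated by the corresponding ladder operators can be checked directly. A sympy sketch using the concrete value l = 3 (arbitrary) and \hbar = 1:

```python
import sympy as sp

theta, phi = sp.symbols('theta phi', real=True)
l = 3  # concrete sample value

def ladder(sign, g):
    # L_x +- i L_y = e^{+-i phi} (+- d/dtheta + i cot(theta) d/dphi), hbar = 1
    return sp.exp(sign*sp.I*phi)*(sign*sp.diff(g, theta)
                                  + sp.I*sp.cot(theta)*sp.diff(g, phi))

psi_top = sp.exp(sp.I*l*phi)*sp.sin(theta)**l       # psi_l^{+l}
psi_bottom = sp.exp(-sp.I*l*phi)*sp.sin(theta)**l   # psi_l^{-l}

assert sp.simplify(ladder(+1, psi_top)) == 0     # raising kills the top state
assert sp.simplify(ladder(-1, psi_bottom)) == 0  # lowering kills the bottom state
```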

We are now set to apply the raising operator to \psi_l^{-l}.

\begin{aligned}(L_x + iL_y) \psi_l^{-l} &=e^{i\phi} (\partial_\theta + i \cot\theta \partial_\phi) \psi_l^{-l} \\ &=e^{i\phi} (\partial_\theta + i \cot\theta (-i l)) \psi_l^{-l} \\ &=e^{i\phi} (\partial_\theta + l \cot\theta ) \psi_l^{-l} \\  \end{aligned}

Now comes the sneaky trick from the text used in the lowering application argument. I’m not sure how to guess this one, but playing it backwards we find the differential operator above

\begin{aligned}\frac{1}{{(\sin\theta)^l}} \frac{\partial {}}{\partial {\theta}} \left( (\sin\theta)^l \psi_l^{\pm l} \right)&=\frac{1}{{(\sin\theta)^l}} \left( l (\sin\theta)^{l-1} \cos\theta + (\sin\theta)^l \partial_\theta \right) \psi_l^{\pm l} \\ &=\frac{1}{{(\sin\theta)^l}} \left( l (\sin\theta)^{l} \cot\theta + (\sin\theta)^l \partial_\theta \right) \psi_l^{\pm l} \\  \end{aligned}

That gives the sneaky identity

\begin{aligned}\frac{1}{{(\sin\theta)^l}} \frac{\partial {}}{\partial {\theta}} \left( (\sin\theta)^l \psi_l^{\pm l} \right)&=\left( l \cot\theta + \partial_\theta \right) \psi_l^{\pm l}  \end{aligned} \quad\quad\quad(6)

Back substitution gives

\begin{aligned}(L_x + iL_y) \psi_l^{-l} &=e^{i\phi} \frac{1}{{(\sin\theta)^l}} \frac{\partial {}}{\partial {\theta}} \left( (\sin\theta)^l \psi_l^{-l} \right) \\  \end{aligned}

so that

\begin{aligned}\psi_l^{1-l}&=e^{i(1-l)\phi} \frac{1}{{(\sin\theta)^l}} \frac{\partial {}}{\partial {\theta}} (\sin\theta)^{2l} \end{aligned} \quad\quad\quad(7)

For a second raising operator application we have

\begin{aligned}(L_x + iL_y) \psi_l^{1-l} &=e^{i\phi} (\partial_\theta + i \cot\theta \partial_\phi) \psi_l^{1-l} \\ &=e^{i\phi} (\partial_\theta + i \cot\theta (-i)(l-1)) \psi_l^{1-l} \\ &=e^{i\phi} (\partial_\theta + (l-1)\cot\theta ) \psi_l^{1-l} \\  \end{aligned}

A second application of the sneaky identity (6) gives us

\begin{aligned}\psi_l^{2-l} &=e^{i\phi} \frac{1}{{(\sin\theta)^{l-1}}} \frac{\partial {}}{\partial {\theta}} \left( (\sin\theta)^{l-1} \psi_l^{1-l} \right) \\ &=e^{i(2-l)\phi} \frac{1}{{(\sin\theta)^{l-1}}} \frac{\partial {}}{\partial {\theta}} \left( (\sin\theta)^{l-1} \frac{1}{{(\sin\theta)^l}} \frac{\partial {}}{\partial {\theta}} (\sin\theta)^{2l}\right) \\ &=e^{i(2-l)\phi} \frac{1}{{(\sin\theta)^{l-1}}} \frac{\partial {}}{\partial {\theta}} \left( \frac{1}{{\sin\theta}} \frac{\partial {}}{\partial {\theta}} (\sin\theta)^{2l}\right) \\ &=e^{i(2-l)\phi} \frac{\sin\theta}{(\sin\theta)^{l-1}} \frac{1}{{\sin\theta}}\frac{\partial {}}{\partial {\theta}} \left( \frac{1}{{\sin\theta}} \frac{\partial {}}{\partial {\theta}} (\sin\theta)^{2l}\right) \\  \end{aligned}

This gives

\begin{aligned}\psi_l^{2-l} &=e^{i(2-l)\phi} \frac{1}{{(\sin\theta)^{l-2}}} \left( \frac{1}{{\sin\theta}}\frac{\partial {}}{\partial {\theta}} \right)^2 (\sin\theta)^{2l}  \end{aligned} \quad\quad\quad(8)

A comparison with \psi_l^{1-l} from (7) shows that the induction hypothesis is

\begin{aligned}\psi_l^{s-l} &=e^{i(s-l)\phi} \frac{1}{{(\sin\theta)^{l-s}}} \left( \frac{1}{{\sin\theta}}\frac{\partial {}}{\partial {\theta}} \right)^s (\sin\theta)^{2l}  \end{aligned}

The induction.

The induction step, starting with a cut-and-paste-regex replacement of the index,

\begin{aligned}(L_x + iL_y) \psi_l^{(s-1)-l} &=e^{i\phi} (\partial_\theta + i \cot\theta \partial_\phi) \psi_l^{(s-1)-l} \\ &=e^{i\phi} (\partial_\theta + i \cot\theta (-i)(l -(s-1))) \psi_l^{(s-1)-l} \\ &=e^{i\phi} (\partial_\theta + (l -(s-1))\cot\theta ) \psi_l^{(s-1)-l} \\  \end{aligned}

Another application of the sneaky identity (6) gives us

\begin{aligned}\psi_l^{s-l} &=e^{i\phi} \frac{1}{{(\sin\theta)^{l-(s-1)}}} \frac{\partial {}}{\partial {\theta}} \left( (\sin\theta)^{l-(s-1)} \psi_l^{(s-1)-l} \right) \\ &=e^{i\phi} \frac{1}{{(\sin\theta)^{l-(s-1)}}} \frac{\partial {}}{\partial {\theta}} \left( (\sin\theta)^{l-(s-1)} e^{i(s-1-l)\phi} \frac{1}{{(\sin\theta)^{l-(s-1)}}} \left( \frac{1}{{\sin\theta}}\frac{\partial {}}{\partial {\theta}} \right)^{s-1} (\sin\theta)^{2l} \right) \\ &=e^{i(s-l)\phi}\frac{\sin\theta}{(\sin\theta)^{l-(s-1)}} \frac{1}{{\sin\theta}} \frac{\partial {}}{\partial {\theta}} \left( \left( \frac{1}{{\sin\theta}}\frac{\partial {}}{\partial {\theta}} \right)^{s-1} (\sin\theta)^{2l} \right) \\  \end{aligned}

This completes the induction, arriving at the negative index equivalent of Bohm’s equation (46). As claimed in the text, this differs from the positive index result only by the sign in the \phi exponential

\begin{aligned}\psi_l^{s-l} &=e^{i(s-l)\phi} \frac{1}{{(\sin\theta)^{l-s}}} \left( \frac{1}{{\sin\theta}}\frac{\partial {}}{\partial {\theta}} \right)^s (\sin\theta)^{2l}  \end{aligned} \quad\quad\quad(10)
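As a final sanity check, the \theta dependence of (10), with m = s - l, should be proportional to the associated Legendre function P_l^{l-s}(\cos\theta). A sketch verifying numerically that the ratio is constant (the l = 3, s = 2 values and the sample angles are arbitrary choices of mine):

```python
import sympy as sp

theta = sp.symbols('theta')
l, s = 3, 2  # sample values; this state has m = s - l = -1

# theta dependent factor of psi_l^{s-l} from (10)
T = sp.sin(theta)**(2*l)
for _ in range(s):
    T = sp.diff(T, theta)/sp.sin(theta)  # one application of (1/sin) d/dtheta
T = T/sp.sin(theta)**(l - s)

P = sp.assoc_legendre(l, l - s, sp.cos(theta))  # P_l^{|m|}(cos theta)

# the ratio T/P should be a nonzero constant across theta in (0, pi)
vals = [(T/P).subs(theta, v).evalf() for v in (0.3, 0.9, 1.4, 2.0)]
assert all(abs(v - vals[0]) < 1e-9 for v in vals)
```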

References

[1] D. Bohm. Quantum Theory. Courier Dover Publications, 1989.
