Peeter Joot's (OLD) Blog.

Math, physics, perl, and programming obscurity.

Archive for November, 2011

PHY456H1F: Quantum Mechanics II. Lecture 22 (Taught by Prof J.E. Sipe). Scattering (cont.)

Posted by peeterjoot on November 30, 2011

[Click here for a PDF of this post with nicer formatting and figures if the post had any (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Disclaimer.

Peeter’s lecture notes from class. May not be entirely coherent.

Scattering. Recap

READING: section 19, section 20 of the text [1].

We used a positive potential of the form of figure (\ref{fig:qmTwoL22:qmTwoL22fig1})
\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{qmTwoL22fig1}
\caption{A bounded positive potential.}
\end{figure}

\begin{aligned}-\frac{\hbar^2}{2 \mu} \frac{\partial^2 {{\psi_k(x)}}}{\partial {{x}}^2} + V(x) \psi_k(x) = \frac{\hbar^2 k^2}{2 \mu} \psi_k(x)\end{aligned} \hspace{\stretch{1}}(2.1)

for x \ge x_3

\begin{aligned}\psi_k(x) = C e^{i k x}\end{aligned} \hspace{\stretch{1}}(2.2)

\begin{aligned}\phi_k(x) = \frac{d{{\psi_k(x)}}}{dx}\end{aligned} \hspace{\stretch{1}}(2.3)

for x \ge x_3

\begin{aligned}\phi_k(x) = i k C e^{i k x}\end{aligned} \hspace{\stretch{1}}(2.4)

\begin{aligned}\frac{d{{\psi_k(x)}}}{dx} &= \phi_k(x) \\ -\frac{\hbar^2}{2 \mu} \frac{d{{\phi_k(x)}}}{dx} &= - V(x) \psi_k(x) + \frac{\hbar^2 k^2}{2 \mu} \psi_k(x)\end{aligned} \hspace{\stretch{1}}(2.5)

We integrate these equations back to x_1.

For x \le x_1

\begin{aligned}\psi_k(x) = A e^{i k x} + B e^{-i k x},\end{aligned} \hspace{\stretch{1}}(2.7)

where both A and B are proportional to C, dependent on k.

There are cases where we can solve this analytically (one of these is on our problem set).

Alternatively, write as (so long as A \ne 0)

\begin{aligned}\psi_k(x)\rightarrow\left\{\begin{array}{l l}e^{i k x} + \beta_k e^{-i k x} &\quad \mbox{for } x < x_1 \\ \gamma_k e^{i k x} &\quad \mbox{for } x > x_2\end{array}\right.\end{aligned} \hspace{\stretch{1}}(2.8)
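Comparing the two branches of 2.8 with 2.7 and 2.2 identifies the coefficients as ratios (a small step filled in here):

\begin{aligned}\beta_k = \frac{B}{A}, \qquad \gamma_k = \frac{C}{A}.\end{aligned}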

We now want to consider, side by side, the problem with no potential in the interval of interest and the problem with our window bounded potential, as in figure (\ref{fig:qmTwoL22:qmTwoL22fig3})

\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{qmTwoL22fig3}
\caption{Wave packet in free space and with positive potential.}
\end{figure}

where we model our particle as a wave packet which, as we found, has for t_{\text{initial}} < 0 the Fourier transform description

\begin{aligned}\psi(x, t_{\text{initial}}) = \int \frac{dk}{\sqrt{2 \pi}} \alpha(k, t_{\text{initial}}) e^{i k x}\end{aligned} \hspace{\stretch{1}}(2.9)

Returning to the same coefficients, we build the solution of the Schr\"{o}dinger equation for the problem with the potential out of the asymptotic states 2.8.

For x \le x_1,

\begin{aligned}\psi(x, t) = \psi_i(x, t) + \psi_r(x, t)\end{aligned} \hspace{\stretch{1}}(2.10)

where as illustrated in figure (\ref{fig:qmTwoL22:qmTwoL22fig4})
\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{qmTwoL22fig4}
\caption{Reflection and transmission of wave packet.}
\end{figure}

\begin{aligned}\psi_i(x, t) &= \int \frac{dk}{\sqrt{2 \pi}} \alpha(k, t_{\text{initial}}) e^{i k x} \\ \psi_r(x, t) &= \int \frac{dk}{\sqrt{2 \pi}} \alpha(k, t_{\text{initial}}) \beta_k e^{-i k x}.\end{aligned} \hspace{\stretch{1}}(2.11)

For x > x_2

\begin{aligned}\psi(x, t) = \psi_t(x, t)\end{aligned} \hspace{\stretch{1}}(2.13)

and

\begin{aligned}\psi_t(x, t) = \int \frac{dk}{\sqrt{2 \pi}} \alpha(k, t_{\text{initial}}) \gamma_k e^{i k x}\end{aligned} \hspace{\stretch{1}}(2.14)

Look at

\begin{aligned}\psi_r(x, t) = \chi(-x, t)\end{aligned} \hspace{\stretch{1}}(2.15)

where

\begin{aligned}\begin{aligned}\chi(x, t)&= \int \frac{dk}{\sqrt{2 \pi}} \alpha(k, t_{\text{initial}}) \beta_k e^{i k x} \\ &\approx\beta_{k_0} \int \frac{dk}{\sqrt{2 \pi}} \alpha(k, t_{\text{initial}}) e^{i k x}\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.16)

For t = t_{\text{initial}}, \chi(x, t_{\text{initial}}) is (to this approximation) just \beta_{k_0} times the incident packet, and is therefore nonzero only where that packet is localized, far to the left for x < x_1.

Its mirror image \psi_r(x, t_{\text{initial}}) = \chi(-x, t_{\text{initial}}) is therefore localized far to the right, so for x < x_1

\begin{aligned}\psi_r(x, t_{\text{initial}}) = 0\end{aligned} \hspace{\stretch{1}}(2.17)

In the same way, for x > x_2

\begin{aligned}\psi_t(x, t_{\text{initial}}) = 0.\end{aligned} \hspace{\stretch{1}}(2.18)

What hasn’t been proved is that the wavefunction is also zero in the [x_1, x_2] interval.

Summarizing

For t = t_{\text{initial}}

\begin{aligned}\psi(x, t_{\text{initial}})=\left\{\begin{array}{l l}\int \frac{dk}{\sqrt{2 \pi}} \alpha(k, t_{\text{initial}}) e^{i k x} &\quad \mbox{for } x < x_1 \\ 0 &\quad \mbox{for } x > x_2 \mbox{ (and actually also for } x > x_1 \mbox{ (unproven))}\end{array}\right.\end{aligned} \hspace{\stretch{1}}(2.19)

for t = t_{\text{final}}

\begin{aligned}\psi(x, t_{\text{final}})\rightarrow\left\{\begin{array}{l l}\int \frac{dk}{\sqrt{2 \pi}} \beta_k \alpha(k, t_{\text{final}}) e^{-i k x} &\quad \mbox{for } x < x_1 \\ \int \frac{dk}{\sqrt{2 \pi}} \gamma_k \alpha(k, t_{\text{final}}) e^{i k x} &\quad \mbox{for } x > x_2 \end{array}\right.\end{aligned} \hspace{\stretch{1}}(2.20)

Probability of reflection is

\begin{aligned}\int {\left\lvert{\psi_r(x, t_{\text{final}})}\right\rvert}^2 dx\end{aligned} \hspace{\stretch{1}}(2.21)

If we have a sufficiently localized packet, we can form a first order approximation around the peak of \beta_k (FIXME: or is this a sufficiently localized response to the potential on reflection?)

\begin{aligned}\psi_r(x, t_{\text{final}}) \approx \beta_{k_0}\int \frac{dk}{\sqrt{2 \pi}} \alpha(k, t_{\text{final}}) e^{-i k x},\end{aligned} \hspace{\stretch{1}}(2.22)

so

\begin{aligned}\int {\left\lvert{\psi_r(x, t_{\text{final}})}\right\rvert}^2 dx\approx {\left\lvert{\beta_{k_0}}\right\rvert}^2 \equiv R\end{aligned} \hspace{\stretch{1}}(2.23)
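The last approximation can be justified with Parseval's theorem together with the normalization \int dk {\left\lvert{\alpha(k, t)}\right\rvert}^2 = 1 (a step added here, not from the lecture):

\begin{aligned}\int {\left\lvert{\psi_r(x, t_{\text{final}})}\right\rvert}^2 dx\approx{\left\lvert{\beta_{k_0}}\right\rvert}^2 \int dx {\left\lvert{ \int \frac{dk}{\sqrt{2 \pi}} \alpha(k, t_{\text{final}}) e^{-i k x} }\right\rvert}^2={\left\lvert{\beta_{k_0}}\right\rvert}^2 \int dk {\left\lvert{\alpha(k, t_{\text{final}})}\right\rvert}^2={\left\lvert{\beta_{k_0}}\right\rvert}^2.\end{aligned}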

Probability of transmission is

\begin{aligned}\int {\left\lvert{\psi_t(x, t_{\text{final}})}\right\rvert}^2 dx\end{aligned} \hspace{\stretch{1}}(2.24)

Again, assuming a small spread in \gamma_k, with \gamma_k \approx \gamma_{k_0} for some k_0

\begin{aligned}\psi_t(x, t_{\text{final}}) \approx \gamma_{k_0}\int \frac{dk}{\sqrt{2 \pi}} \alpha(k, t_{\text{final}}) e^{i k x},\end{aligned} \hspace{\stretch{1}}(2.25)

we have for x > x_2

\begin{aligned}\int {\left\lvert{\psi_t(x, t_{\text{final}})}\right\rvert}^2 dx\approx {\left\lvert{\gamma_{k_0}}\right\rvert}^2 \equiv T.\end{aligned} \hspace{\stretch{1}}(2.26)

By constructing the wave packets in this fashion we get as a side effect the solution of the scattering problem.

The states

\begin{aligned}\psi_k(x) \rightarrow \left\{\begin{array}{l l} e^{i k x} + \beta_k e^{-i k x} &\quad \mbox{for } x < x_1 \\ \gamma_k e^{i k x} &\quad \mbox{for } x > x_2\end{array}\right.\end{aligned}

are called asymptotic in states. They acquire physical applicability only once we have built wave packets out of them.

Moving to 3D

For a potential V(\mathbf{r}) \approx 0 for r > r_0 as in figure (\ref{fig:qmTwoL22:qmTwoL22fig5})

\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{qmTwoL22fig5}
\caption{Radially bounded spherical potential.}
\end{figure}

From 1D we’ve learned to build up solutions from time independent solutions (non normalizable). Consider an incident wave

\begin{aligned}e^{i \mathbf{k} \cdot \mathbf{r}} = e^{i k \hat{\mathbf{n}} \cdot \mathbf{r}}\end{aligned} \hspace{\stretch{1}}(3.27)

This is a solution of the time independent Schr\"{o}dinger equation

\begin{aligned}-\frac{\hbar^2}{2 \mu} \boldsymbol{\nabla}^2 e^{i \mathbf{k} \cdot \mathbf{r}} = Ee^{i \mathbf{k} \cdot \mathbf{r}},\end{aligned} \hspace{\stretch{1}}(3.28)

where

\begin{aligned}E = \frac{\hbar^2 \mathbf{k}^2}{2 \mu}.\end{aligned} \hspace{\stretch{1}}(3.29)
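As a quick check (added here), each gradient component acting on the plane wave just pulls down a factor of i k_j, so

\begin{aligned}-\frac{\hbar^2}{2 \mu} \boldsymbol{\nabla}^2 e^{i \mathbf{k} \cdot \mathbf{r}} = -\frac{\hbar^2}{2 \mu} (i \mathbf{k}) \cdot (i \mathbf{k}) e^{i \mathbf{k} \cdot \mathbf{r}} = \frac{\hbar^2 \mathbf{k}^2}{2 \mu} e^{i \mathbf{k} \cdot \mathbf{r}},\end{aligned}

consistent with 3.28 and 3.29.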

In the presence of a potential we expect scattered waves. We'll next be identifying the nature of these solutions.

References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.


Ontario government accessibility education module.

Posted by peeterjoot on November 28, 2011

I’ve just completed the Ontario government accessibility education module that’s mandatory for all IBM employees. Ironically, it is delivered in what seems like a condescending fashion, stating many things that seem obvious.

To demonstrate this, consider the inverse of some of these points:

  • Assume the individual can’t see you.
  • Assume what a person can or cannot do.
  • Be inflexible.
  • Disrespect personal space.
  • Do not be confident and never reassure.
  • Do not speak directly to your customer.
  • Do not take the time to get to know your customer’s needs.
  • Exercise impatience.
  • Help before you ask.
  • If you’re giving directions or providing any information, be imprecise and undescriptive.
  • Interrupt or finish your customer’s sentences.
  • Lean over them and on any assistive devices.
  • Leave the individual in awkward, dangerous or undignified positions.
  • Leave your customer in the middle of a room.
  • Make sure your customer does not understand what you’ve said.
  • Move items, such as canes and walkers, out of the person’s reach.
  • Provide information in a way that does not work for your customer.
  • Shout.
  • Use obscure and incomprehensible language.
  • Walk away without saying good-bye.

I think that many of the people who actually need the tips given aren't going to be helped at all by them, since needing them in the first place likely indicates a failure to observe one's environment.

That said, I think I still did learn some things from the module.  One is that most people who are legally blind are not fully blind.  I also admit that I don’t understand all of the points.  For example, if I encountered somebody with a deafblind impairment who was accompanied by an “Intervenor”, I would guess that I’d address my communication at the Intervenor, because I could communicate with that individual.  Perhaps that point was meant only for Intervenors for less severe communication issues, such as communication with somebody deaf accompanied by somebody who signs for them?

I’d also guess that the tendency to want to help causes many people to violate the “Ask before you help” point without them even realizing it, but if the individual doesn’t mention it when it happens, being told to ask first is probably not enough.  That must be frustrating to an involuntarily “helped” individual.


Believed to be typos in Desai’s QM Text

Posted by peeterjoot on November 25, 2011

[Click here for a PDF of this post with nicer formatting and figures if the post had any (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Here are a few more typos, in addition to those noted previously, spotted in our QM text [1], from chapters we’ve covered in this term’s class.

Chapter 17.

\begin{itemize}
\item Page 297. (17.38). Appears to be off by a factor of 2, since (\sin^2 x)' = 2 \sin x \cos x = \sin(2 x).
\item Page 311. (17.134). d/dt' missing on the H_{ss}' term in the integral.
\item Page 311. (17.136). First term (non-integral part) should be negated.
\item Page 312. (17.144). Sign on \lambda before sum positive instead of negative.
\item Page 313. (17.149). 1/\hbar missing.
\item Page 313. (17.152,17.154). extra bra around the bra.
\item Page 313. (17.153). bra missing on \phi_n
\end{itemize}

Chapter 24.

\begin{itemize}
\item Page 450. (24.6). k^2(x) should be k^2(x) u.
\item Page 452. (24.18). In the E < V case should be 1/\sqrt{\kappa} instead of 1/\sqrt{k} (although what's in the text is strictly still correct since it only changes the phase of the wavefunction).

\item Page 455. (24.40). RHS should be multiplied by \hbar.

\item Page 460. (24.71). \psi should be \phi.
\item Page 460. Third paragraph. (\mathbf{r}_1 - \mathbf{r}) should be (\mathbf{r}_1 - \mathbf{r}_2).
\item Page 460. (24.76). The integral should be 1/3, not 5 \pi/32. This messes up some of the subsequent stuff, unless there is also another compensating error. Note that one can check this easily since the derivative of -1/(3 (1+x)^3) is (1 + x)^{-4}.
\end{itemize}

Chapter 25.

\begin{itemize}
\item Page 471. (25.18). i subscripts missing on \mathcal{L} and \dot{y}^2.
\end{itemize}

Chapter 26.

\begin{itemize}
\item Page 486. (26.60). \mathbf{n} \times \mathbf{r} \cdot \boldsymbol{\nabla} ought to have braces and read (\mathbf{n} \times \mathbf{r}) \cdot \boldsymbol{\nabla}.
\item Page 487. (26.67). 0 in the 3,3 position should be 1.
\item Page 489. before (26.84). “For rotations about the imaginary axis” was probably meant to read “about the i’th axis”.
\item Page 495. (26.149,26.150). Looks like \hbar‘s are missing (esp. compared to 26.144-145).
\item Page 495. (26.150). J_y off by -1. (J_x + iJ_y \ne J_{+}) as is.
\item Page 496. (26.154). An extra Y_{l'm} in the integral, in between Y_{l' m'} and the (\theta, \phi).
\item Page 498. (26.175). e^{i\phi} should be e^{-i\phi} in the first line.
\item Page 498. (26.178). An \hbar factor has been lost in either (26.178) or (26.179).
\item Page 499. (26.190). minor: \mathbf{j} should be \mathbf{J}.
\item Page 450. (26.192). minor: \mathbf{j} should be \mathbf{J}, and R should be \hbar.
\end{itemize}

Chapter 27.

\begin{itemize}
\item Page 503. (27.8). minor: bold \chi. R(\theta, \phi) probably meant to be R(\chi).
\item Page 504. (27.20). Missing \hbar E_n factor on LHS.
\item Page 507. before (27.53). minor: Velocity v missing bold.
\item Page 510. before (27.78). minor: periods in the two kets should be commas.
\item Page 510. (27.80). \sigma_y and \sigma_z should be interchanged (if \alpha is the polar angle then \hat{\mathbf{n}} = \hat{\mathbf{y}} for that rotation, and \hat{\mathbf{n}} = \hat{\mathbf{z}} for the rotation in the x,y plane). The \hbar‘s here should also be dropped.
\item Page 511. (27.81). Same as 27.80.
\item Page 511. (27.83). \hbar‘s should be dropped.
\item Page 514. (27.109). minor: dot instead of cdot.
\item Page 515. (27.117). Same error as in (26.149-150). \hbar‘s missing, and wrong sign on J_y.
\end{itemize}

Chapter 28.

This chapter is written as if \hbar = 1, without a statement that this is being done.
\begin{itemize}
\item Page 518. (28.4). \hbar missing. Also in text following, eigenvalue should be m \hbar = \hbar (m_1 + m_2).
\item Page 519. (28.9). \hbar missing LHS.
\item Page 519. (28.11). \hbar missing (after each equality). In the text following, the m and j(j+1) eigenvalues should be multiplied by \hbar and \hbar^2 respectively.
\item Page 519. following (28.14). \hbar missing in J_{-} equality.
\item Page 520. (28.15). \hbar missing for two factors after last =.
\item Page 520. (28.16). \hbar missing LHS.
\item Page 520. (28.21). \downarrow \downarrow should be \uparrow \downarrow.
\item Page 521. following (28.25). Chapter 2 should read Chapter 5.
\item Page 522. (28.31). Notational inconsistency. {\left\lvert {j_1 j_2 jm} \right\rangle} should read {\left\lvert {j_1 j_2, j m} \right\rangle}
\item Page 522. (28.31). Extra \vert between braket and ket.
\item Page 523. (28.36). Notational inconsistency. \left\langle{{m_1, m_2}} \vert {{jm -1}}\right\rangle should read \left\langle{{m_1, m_2}} \vert {{j,m -1}}\right\rangle
\item Page 525. following (28.52). l(l+1), s(s+1), j(j+1) eigenvalues all missing \hbar^2.
\item Page 525. (28.53). LHS missing \hbar^2.
\item Page 525. (28.54). LHS missing \hbar^2.
\item Page 525. (28.57). RHS missing \hbar.
\item Page 525. (28.58). RHS missing \hbar. m \pm \frac{1}{{2}} should be m \pm 1.
\item Page 526. (28.60). \hbar^2 missing from both terms.
\item Page 526. (28.61). In first term \sqrt{l + m_1 + 1} should be \sqrt{(l + m_1 + 1)(l - m_1)}.
\end{itemize}

Chapter 29.

This chapter is written as if \hbar = 1, without a statement that this is being done.
\begin{itemize}
\item Page 531. (29.23). \hbar missing from second two lines.
\item Page 533. (29.25). \hbar should multiply all.
\item Page 533. (29.26). \hbar should multiply all (RHS).
\item Page 533. (29.29). \hbar should multiply all (RHS).
\item Page 533. (29.30). \hbar should multiply all (RHS).
\item Page 533. (29.31). \hbar should multiply all (RHS).
\item Page 536. (29.59). \hbar should multiply RHS.
\item Page 536. (29.60). \hbar should multiply RHS. {\left\lvert {j m + 1} \right\rangle} should be {\left\lvert {j, m+1} \right\rangle}.
\item Page 536. (29.61). \hbar should multiply RHS. {\left\lvert {j'm' - 1} \right\rangle} should be {\left\lvert {j', m'-1} \right\rangle}.
\item Page 537. (29.65). \left\langle{{j'm' - 1}} \vert {{m, q}}\right\rangle should be \left\langle{{j', m'-1}} \vert {{m, q}}\right\rangle.
\end{itemize}

References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.


PHY456H1F: Quantum Mechanics II. Lecture 21 (Taught by Prof J.E. Sipe). Scattering theory

Posted by peeterjoot on November 24, 2011

[Click here for a PDF of this post with nicer formatting and figures if the post had any (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Disclaimer.

Peeter’s lecture notes from class. May not be entirely coherent.

Scattering theory.

READING: section 19, section 20 of the text [1].

Here’s (\ref{fig:qmTwoL21:qmTwoL21Fig1}) a simple classical picture of a two particle scattering collision

\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{qmTwoL21Fig1}
\caption{classical collision of particles.}
\end{figure}

We will focus on point particle elastic collisions (no energy lost in the collision). With particles of mass m_1 and m_2 we write for the total and reduced mass respectively

\begin{aligned}M = m_1 + m_2\end{aligned} \hspace{\stretch{1}}(2.1)

\begin{aligned}\frac{1}{{\mu}} = \frac{1}{{m_1}} + \frac{1}{{m_2}},\end{aligned} \hspace{\stretch{1}}(2.2)

so that the interaction due to a potential V(\mathbf{r}_1 - \mathbf{r}_2) that depends on the difference in position \mathbf{r} = \mathbf{r}_1 - \mathbf{r}_2 has, in the center of mass frame, the Hamiltonian

\begin{aligned}H = \frac{\mathbf{p}^2}{2 \mu} + V(\mathbf{r})\end{aligned} \hspace{\stretch{1}}(2.3)

In the classical picture we would investigate the scattering radius r_0 associated with the impact parameter \rho as depicted in figure (\ref{fig:qmTwoL21:qmTwoL21Fig2})

\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{qmTwoL21Fig2}
\caption{Classical scattering radius and impact parameter.}
\end{figure}

1D QM scattering. No potential wave packet time evolution.

Now lets move to the QM picture where we assume that we have a particle that can be represented as a wave packet as in figure (\ref{fig:qmTwoL21:qmTwoL21Fig3})
\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{qmTwoL21Fig3}
\caption{Wave packet for a particle wavefunction \Re(\psi(x,0))}
\end{figure}

First, without any potential (V(x) = 0), let’s consider the evolution. Our position and momentum space representations are related by

\begin{aligned}\int {\left\lvert{\psi(x, t)}\right\rvert}^2 dx = 1 = \int {\left\lvert{\overline{\psi}(p, t)}\right\rvert}^2 dp,\end{aligned} \hspace{\stretch{1}}(2.4)

and by Fourier transform

\begin{aligned}\psi(x, t) = \int \frac{dp}{\sqrt{2 \pi \hbar}} \overline{\psi}(p, t) e^{i p x/\hbar}.\end{aligned} \hspace{\stretch{1}}(2.5)

Schr\"{o}dinger’s equation takes the form

\begin{aligned}i \hbar \frac{\partial {\psi(x,t)}}{\partial {t}} = - \frac{\hbar^2}{2 \mu} \frac{\partial^2 {{\psi(x, t)}}}{\partial {{x}}^2},\end{aligned} \hspace{\stretch{1}}(2.6)

or more simply in momentum space

\begin{aligned}i \hbar \frac{\partial {\overline{\psi}(p,t)}}{\partial {t}} = \frac{p^2}{2 \mu} \overline{\psi}(p, t).\end{aligned} \hspace{\stretch{1}}(2.7)

Rearranging to integrate we have

\begin{aligned}\frac{\partial {\overline{\psi}}}{\partial {t}} = -\frac{i p^2}{2 \mu \hbar} \overline{\psi},\end{aligned} \hspace{\stretch{1}}(2.8)

and integrating

\begin{aligned}\ln \overline{\psi} = -\frac{i p^2 t}{2 \mu \hbar} + \ln C,\end{aligned} \hspace{\stretch{1}}(2.9)

or

\begin{aligned}\overline{\psi} = C e^{-\frac{i p^2 t}{2 \mu \hbar}} = \overline{\psi}(p, 0) e^{-\frac{i p^2 t}{2 \mu \hbar}}.\end{aligned} \hspace{\stretch{1}}(2.10)

Time evolution in momentum space for the free particle changes only the phase of the wavefunction, leaving the momentum probability density of the particle unchanged.

Fourier transforming, we find our position space wavefunction to be

\begin{aligned}\psi(x, t) = \int \frac{dp}{\sqrt{2 \pi \hbar}} \overline{\psi}(p, 0) e^{i p x/\hbar} e^{-i p^2 t/2 \mu \hbar}.\end{aligned} \hspace{\stretch{1}}(2.11)

To clean things up, write

\begin{aligned}p = \hbar k,\end{aligned} \hspace{\stretch{1}}(2.12)

for

\begin{aligned}\psi(x, t) = \int \frac{dk}{\sqrt{2 \pi}} a(k, 0) e^{i k x} e^{-i \hbar k^2 t/2 \mu},\end{aligned} \hspace{\stretch{1}}(2.13)

where

\begin{aligned}a(k, 0) = \sqrt{\hbar} \overline{\psi}(p, 0).\end{aligned} \hspace{\stretch{1}}(2.14)

Putting

\begin{aligned}a(k, t) = a(k, 0) e^{ -i \hbar k^2 t/2 \mu},\end{aligned} \hspace{\stretch{1}}(2.15)

we have

\begin{aligned}\psi(x, t) = \int \frac{dk}{\sqrt{2 \pi}} a(k, t) e^{i k x} \end{aligned} \hspace{\stretch{1}}(2.16)

Observe that we have

\begin{aligned}\int dk {\left\lvert{ a(k, t)}\right\rvert}^2 = \int dp {\left\lvert{ \overline{\psi}(p, t)}\right\rvert}^2 = 1.\end{aligned} \hspace{\stretch{1}}(2.17)

A Gaussian wave packet

Suppose that we have, as depicted in figure (\ref{fig:qmTwoL21:qmTwoL21Fig4})
\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{qmTwoL21Fig4}
\caption{Gaussian wave packet.}
\end{figure}

a Gaussian wave packet of the form

\begin{aligned}\psi(x, 0) = \frac{1}{ (\pi \Delta^2)^{1/4}} e^{i k_0 x} e^{- x^2/2 \Delta^2}.\end{aligned} \hspace{\stretch{1}}(2.18)

This is actually a minimum uncertainty packet with

\begin{aligned}\Delta x &= \frac{\Delta}{\sqrt{2}} \\ \Delta p &= \frac{\hbar}{\Delta \sqrt{2}}.\end{aligned} \hspace{\stretch{1}}(2.19)
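As a check on the minimum uncertainty claim (added here), the product of these widths saturates the Heisenberg bound exactly:

\begin{aligned}\Delta x \, \Delta p = \frac{\Delta}{\sqrt{2}} \, \frac{\hbar}{\Delta \sqrt{2}} = \frac{\hbar}{2}.\end{aligned}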

Taking Fourier transforms we have

\begin{aligned}a(k, 0) &= \left(\frac{\Delta^2}{\pi}\right)^{1/4} e^{-(k - k_0)^2 \Delta^2/2} \\ a(k, t) &= \left(\frac{\Delta^2}{\pi}\right)^{1/4} e^{-(k - k_0)^2 \Delta^2/2} e^{ -i \hbar k^2 t/ 2\mu} \equiv \alpha(k, t)\end{aligned} \hspace{\stretch{1}}(2.21)
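The first of these follows from the standard Gaussian integral \int dx \, e^{-x^2/2 \Delta^2} e^{i q x} = \sqrt{2 \pi} \Delta \, e^{-q^2 \Delta^2/2}, a step filled in here rather than taken from the lecture:

\begin{aligned}a(k, 0) = \int \frac{dx}{\sqrt{2 \pi}} \psi(x, 0) e^{-i k x}= \frac{1}{\sqrt{2 \pi} (\pi \Delta^2)^{1/4}} \int dx \, e^{-x^2/2 \Delta^2} e^{i (k_0 - k) x}= \left(\frac{\Delta^2}{\pi}\right)^{1/4} e^{-(k - k_0)^2 \Delta^2/2}.\end{aligned}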

For t > 0 our wave packet will start moving and spreading as in figure (\ref{fig:qmTwoL21:qmTwoL21Fig5})
\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{qmTwoL21Fig5}
\caption{moving spreading Gaussian packet.}
\end{figure}

With a potential.

Now “switch on” a potential, still assuming a wave packet representation for the particle. With a positive (repulsive) potential as in figure (\ref{fig:qmTwoL21:qmTwoL21Fig6}), at a time long before the interaction of the wave packet with the potential we can visualize the packet as heading towards the barrier.

\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{qmTwoL21Fig6}
\caption{QM wave packet prior to interaction with repulsive potential.}
\end{figure}

After some time long after the interaction, classically for this sort of potential where the particle kinetic energy is less than the barrier “height”, we would have total reflection. In the QM case, we’ve seen before that we will have a reflected and a transmitted portion of the wave packet as depicted in figure (\ref{fig:qmTwoL21:qmTwoL21Fig7})
\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{qmTwoL21Fig7}
\caption{QM wave packet long after interaction with repulsive potential.}
\end{figure}

Even if the particle kinetic energy is greater than the barrier height, as in figure (\ref{fig:qmTwoL21:qmTwoL21Fig8}), we can still have a reflected component.
\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{qmTwoL21Fig8}
\caption{Kinetic energy greater than potential energy.}
\end{figure}

This is even true for a negative potential as depicted in figure (\ref{fig:qmTwoL21:qmTwoL21Fig9})!

\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{qmTwoL21Fig9}
\caption{Reflection for a negative potential.}
\end{figure}

Consider the probability for the particle to be found anywhere long after the interaction. Summing over the transmitted and reflected wave functions, we have

\begin{aligned}1 &= \int {\left\lvert{\psi_r + \psi_t}\right\rvert}^2 \\ &= \int {\left\lvert{\psi_r}\right\rvert}^2  + \int {\left\lvert{\psi_t}\right\rvert}^2 + 2 \Re \int \psi_r^{*} \psi_t\end{aligned}

Observe that long after the interaction the cross terms in the probabilities will vanish, because the reflected and transmitted packets are non-overlapping, leaving just the probability densities of the transmitted and reflected portions independently.

We define

\begin{aligned}T &= \int {\left\lvert{\psi_t(x, t)}\right\rvert}^2 dx \\ R &= \int {\left\lvert{\psi_r(x, t)}\right\rvert}^2 dx.\end{aligned} \hspace{\stretch{1}}(2.23)

The objective of most of our scattering problems will be the calculation of these probabilities and the comparisons of their ratios.

Question: can we have more than one wave packet reflect off? Yes; we could have multiple wave packets for both the reflected and the transmitted portions. For example, if the potential has some internal structure there could be internal reflections before anything emerges on either side, and things could get quite messy.

Considering the time independent case temporarily.

We are going to work through something that is going to seem at first to be completely unrelated. We will (eventually) see that this can be applied to this problem, so a bit of patience will be required.

We will be using the time independent Schr\"{o}dinger equation

\begin{aligned}- \frac{\hbar^2}{2 \mu} \psi_k''(x) + V(x) \psi_k(x) = E \psi_k(x),\end{aligned} \hspace{\stretch{1}}(3.25)

where we have added a subscript k to our wave function with the intention (later) of allowing this to vary. For “future use” we define for k > 0

\begin{aligned}E = \frac{\hbar^2 k^2}{2 \mu}.\end{aligned} \hspace{\stretch{1}}(3.26)

Consider a potential as in figure (\ref{fig:qmTwoL21:qmTwoL21Fig10}), where V(x) = 0 for x > x_2 and x < x_1.

\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{qmTwoL21Fig10}
\caption{potential zero outside of a specific region.}
\end{figure}

We won't have bound states here (repulsive potential). There will be many possible solutions, but we want to look for a solution that is of the form

\begin{aligned}\psi_k(x) = C e^{i k x}, \qquad x > x_2\end{aligned} \hspace{\stretch{1}}(3.27)

Supposing that x = x_3 > x_2, we have

\begin{aligned}\psi_k(x_3) = C e^{i k x_3}\end{aligned} \hspace{\stretch{1}}(3.28)

\begin{aligned}{\left.{{\frac{d\psi_k}{dx}}}\right\vert}_{{x = x_3}} = i k C e^{i k x_3} \equiv \phi_k(x_3)\end{aligned} \hspace{\stretch{1}}(3.29)

\begin{aligned}{\left.{{\frac{d^2\psi_k}{dx^2}}}\right\vert}_{{x = x_3}} = -k^2 C e^{i k x_3} \end{aligned} \hspace{\stretch{1}}(3.30)

Defining

\begin{aligned}\phi_k(x) = \frac{d\psi_k}{dx},\end{aligned} \hspace{\stretch{1}}(3.31)

we write Schr\"{o}dinger’s equation as a pair of coupled first order equations

\begin{aligned}\frac{d\psi_k}{dx} &= \phi_k(x) \\ -\frac{\hbar^2}{2 \mu} \frac{d\phi_k(x)}{dx} &= - V(x) \psi_k(x) + \frac{\hbar^2 k^2}{2\mu} \psi_k(x).\end{aligned} \hspace{\stretch{1}}(3.32)

At this x = x_3 specifically, we “know” both \phi_k(x_3) and \psi_k(x_3) and have

\begin{aligned}{\left.{{\frac{d\psi_k}{dx}}}\right\vert}_{{x_3}} &= \phi_k(x_3) \\ -\frac{\hbar^2}{2 \mu} {\left.{{\frac{d\phi_k(x)}{dx}}}\right\vert}_{{x_3}} &= - V(x_3) \psi_k(x_3) + \frac{\hbar^2 k^2}{2\mu} \psi_k(x_3),\end{aligned} \hspace{\stretch{1}}(3.34)

This allows us to find both

\begin{aligned}&{\left.{{\frac{d\psi_k(x)}{dx}}}\right\vert}_{{x_3}} \\ &{\left.{{\frac{d\phi_k(x)}{dx}}}\right\vert}_{{x_3}} \end{aligned} \hspace{\stretch{1}}(3.36)

then proceed to numerically calculate \phi_k(x) and \psi_k(x) at neighboring points x = x_3 + \epsilon. Essentially, this allows us to numerically integrate backwards from x_3 to find the wave function at previous points for any sort of potential.
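To make the procedure concrete, here is a minimal numerical sketch (mine, not from the lecture) in C. It Euler-steps the pair of equations 3.32 backwards from x_3 to x_1 for a made-up square barrier, working in units where \hbar = 2\mu = 1 (so E = k^2), and then extracts A and B by matching \psi_k = A e^{ikx} + B e^{-ikx} in the potential-free region x \le x_1. The barrier, endpoints and step size are arbitrary illustrative choices.

#include <complex.h>
#include <math.h>
#include <stdio.h>

/* Illustrative square barrier, nonzero only inside (x1, x2) = (1, 2). */
static double V( double x )
{
   return ( x > 1.0 && x < 2.0 ) ? 2.0 : 0.0 ;
}

int main( void )
{
   double k = 1.0, x1 = 1.0, x3 = 2.5, dx = 1.0e-5 ;
   double x = x3 ;

   /* For x >= x3 the solution is psi_k(x) = C e^{i k x} ; pick C = 1. */
   double complex C = 1.0 ;
   double complex psi = C * cexp( I * k * x ) ;
   double complex phi = I * k * psi ;   /* phi = d psi / dx */

   /* Euler-step the coupled equations 3.32 backwards from x3 to x1.
      With hbar = 2 mu = 1, d phi / dx = ( V(x) - k^2 ) psi. */
   while ( x > x1 )
   {
      double complex dpsi = phi ;
      double complex dphi = ( V( x ) - k * k ) * psi ;

      psi -= dpsi * dx ;
      phi -= dphi * dx ;
      x -= dx ;
   }

   /* For x <= x1 the potential is zero, so psi = A e^{i k x} + B e^{-i k x}
      and phi = i k ( A e^{i k x} - B e^{-i k x} ) ; solve for A and B. */
   double complex A = 0.5 * ( psi + phi / ( I * k ) ) * cexp( -I * k * x ) ;
   double complex B = 0.5 * ( psi - phi / ( I * k ) ) * cexp( I * k * x ) ;

   printf( "R = %g, T = %g, R + T = %g\n",
           pow( cabs( B / A ), 2 ),
           pow( cabs( C / A ), 2 ),
           pow( cabs( B / A ), 2 ) + pow( cabs( C / A ), 2 ) ) ;

   return 0 ;
}

For these values (E = 1, barrier height 2) the printed ratios |B/A|^2 and |C/A|^2 (the R and T of the Lecture 22 notes above) should sum to approximately one, which is a handy sanity check on the step size.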

References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.


On conditions for Clebsch-Gordan coefficients to be zero

Posted by peeterjoot on November 23, 2011

[Click here for a PDF of this post with nicer formatting and figures if the post had any (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Motivation.

In section 28.2 of the text [1] is a statement that the Clebsch-Gordan coefficient

\begin{aligned}\left\langle{{m_1 m_2}} \vert {{jm}}\right\rangle\end{aligned} \hspace{\stretch{1}}(1.1)

is zero unless m = m_1 + m_2. It appeared that this was related to the action of J_z, but how exactly wasn’t obvious to me. In tutorial today we hashed through this. Here are the details behind this statement.

Recap on notation.

We are taking an arbitrary two particle ket and decomposing it utilizing an insertion of a complete set of states

\begin{aligned}{\left\lvert {jm} \right\rangle} = \sum_{m_1' m_2'} \Bigl({\left\lvert {j_1 m_1'} \right\rangle} {\left\lvert {j_2 m_2'} \right\rangle}{\left\langle {j_1 m_1'} \right\rvert} {\left\langle {j_2 m_2'} \right\rvert}\Bigr){\left\lvert {jm} \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.2)

with j_1 and j_2 fixed, this is written with the shorthand

\begin{aligned}{\left\lvert {j_1 m_1} \right\rangle} {\left\lvert {j_2 m_2} \right\rangle} &= {\left\lvert {m_1 m_2} \right\rangle} \\ {\left\langle {j_1 m_1} \right\rvert} {\left\langle {j_2 m_2} \right\rvert} {\left\lvert {jm} \right\rangle} &= \left\langle{{m_1 m_2}} \vert {{jm}}\right\rangle,\end{aligned} \hspace{\stretch{1}}(2.3)

so that we write

\begin{aligned}{\left\lvert {jm} \right\rangle} = \sum_{m_1' m_2'} {\left\lvert {m_1' m_2'} \right\rangle} \left\langle{{m_1' m_2'}} \vert {{jm}}\right\rangle\end{aligned} \hspace{\stretch{1}}(2.5)

The J_z action.

We have two ways that we can apply the operator J_z to {\left\lvert {jm} \right\rangle}. One is using the sum above, for which we find

\begin{aligned}J_z {\left\lvert {jm} \right\rangle} &= \sum_{m_1' m_2'} J_z {\left\lvert {m_1' m_2'} \right\rangle} \left\langle{{m_1' m_2'}} \vert {{jm}}\right\rangle \\ &= \hbar \sum_{m_1' m_2'} (m_1' + m_2') {\left\lvert {m_1' m_2'} \right\rangle} \left\langle{{m_1' m_2'}} \vert {{jm}}\right\rangle \\ \end{aligned}

We can also act directly on {\left\lvert {jm} \right\rangle} and then insert a complete set of states

\begin{aligned}J_z {\left\lvert {jm} \right\rangle} &=\sum_{m_1' m_2'} {\left\lvert {m_1' m_2'} \right\rangle}{\left\langle {m_1' m_2'} \right\rvert}J_z {\left\lvert {jm} \right\rangle} \\ &=\hbar m\sum_{m_1' m_2'} {\left\lvert {m_1' m_2'} \right\rangle}\left\langle{{m_1' m_2'}} \vert {{jm}}\right\rangle \\ \end{aligned}

This provides us with the identity

\begin{aligned}m\sum_{m_1' m_2'} {\left\lvert {m_1' m_2'} \right\rangle}\left\langle{{m_1' m_2'}} \vert {{jm}}\right\rangle = \sum_{m_1' m_2'} (m_1' + m_2') {\left\lvert {m_1' m_2'} \right\rangle} \left\langle{{m_1' m_2'}} \vert {{jm}}\right\rangle \end{aligned} \hspace{\stretch{1}}(3.6)

This equality must be valid for any {\left\lvert {jm} \right\rangle}, and since all the kets {\left\lvert {m_1' m_2'} \right\rangle} are linearly independent, we must have for any m_1', m_2'

\begin{aligned}(m - m_1' - m_2') \left\langle{{m_1' m_2'}} \vert {{jm}}\right\rangle {\left\lvert {m_1' m_2'} \right\rangle} = 0\end{aligned} \hspace{\stretch{1}}(3.7)

We have two ways to get this zero. One of them is the condition m = m_1' + m_2', and the other is for the CG coefficient \left\langle{{m_1' m_2'}} \vert {{jm}}\right\rangle to be zero whenever m \ne m_1' + m_2'.

It’s not a difficult argument, but one that wasn’t clear from a read of the text (at least to me).

References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.


PHY456H1F: Quantum Mechanics II. Lecture 20 (Taught by Prof J.E. Sipe). Spherical tensors.

Posted by peeterjoot on November 23, 2011

[Click here for a PDF of this post with nicer formatting and figures if the post had any (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Disclaimer.

Peeter’s lecture notes from class. May not be entirely coherent.

Spherical tensors (cont).

READING: section 29 of [1].

Definition: any (2k + 1) operators T(k, q), q = -k, \cdots, k, are the elements of a spherical tensor of rank k if

\begin{aligned}U[M] T(k, q) U^{-1}[M]= \sum_{q'} T(k, q') D^{(k)}_{q q'}\end{aligned} \hspace{\stretch{1}}(2.1)

where D^{(k)}_{q q'} was the matrix element of the rotation operator

\begin{aligned}D^{(k)}_{q q'} = {\left\langle {k q'} \right\rvert} U[M] {\left\lvert {k q} \right\rangle}.\end{aligned} \hspace{\stretch{1}}(2.2)

So, if we have a Cartesian vector operator with components V_x, V_y, V_z then we can construct a corresponding spherical vector operator

\begin{aligned}\begin{array}{l l l}T(1, 1) &= - \frac{V_x + i V_y}{\sqrt{2}} &\equiv V_{+1} \\ T(1, 0) &= V_z &\equiv V_0 \\ T(1, -1) &= \frac{V_x - i V_y}{\sqrt{2}} &\equiv V_{-1}\end{array}.\end{aligned} \hspace{\stretch{1}}(2.3)

By considering infinitesimal rotations we can come up with the commutation relations of these operators with the angular momentum operators

\begin{aligned}\left[{J_{\pm}},{T(k, q)}\right] &= \hbar \sqrt{(k \mp q)(k \pm q + 1)} T(k, q \pm 1) \\ \left[{J_{z}},{T(k, q)}\right] &= \hbar q T(k, q)\end{aligned} \hspace{\stretch{1}}(2.4)

Note that the text in (29.15) defines these, whereas in class these were considered consequences of 2.1, once infinitesimal rotations were used.
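As a consistency check (mine, not from class), combining the spherical components 2.3 with the vector operator commutator \left[{V_i},{J_j}\right] = i \hbar \sum_k \epsilon_{ijk} V_k from the Lecture 19 notes reproduces the q = +1, J_z case of 2.4 directly:

\begin{aligned}\left[{J_z},{V_{+1}}\right]= -\frac{1}{\sqrt{2}} \Bigl( \left[{J_z},{V_x}\right] + i \left[{J_z},{V_y}\right] \Bigr)= -\frac{1}{\sqrt{2}} \Bigl( i \hbar V_y + \hbar V_x \Bigr)= \hbar (+1) V_{+1}.\end{aligned}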

Recall that these match our angular momentum raising and lowering identities

\begin{aligned}J_{\pm} {\left\lvert {k q} \right\rangle} &= \hbar \sqrt{(k \mp q)(k \pm q + 1)} {\left\lvert {k, q \pm 1} \right\rangle} \\ J_{z} {\left\lvert {k q} \right\rangle} &= \hbar q {\left\lvert {k, q} \right\rangle}.\end{aligned} \hspace{\stretch{1}}(2.6)

Consider two problems

\begin{aligned}\begin{array}{l l l}T(k, q)						&\leftrightarrow &{\left\lvert {k q} \right\rangle} \\ \left[{J_{\pm}},{T(k, q)}\right] 		&\leftrightarrow &J_{\pm} {\left\lvert {k q} \right\rangle} \\ \left[{J_{z}},{T(k, q)}\right] 			&\leftrightarrow &J_{z} {\left\lvert {k q} \right\rangle}\end{array}\end{aligned} \hspace{\stretch{1}}(2.8)

We have a correspondence between the spherical tensors and angular momentum kets

\begin{aligned}\begin{array}{l l l}T_1(k_1, q_1), \quad q_1 = -k_1, \cdots, k_1 	&\qquad \leftrightarrow &\qquad {\left\lvert {k_1 q_1} \right\rangle}, \quad q_1 = -k_1, \cdots, k_1 \\ T_2(k_2, q_2), \quad q_2 = -k_2, \cdots, k_2	&\qquad \leftrightarrow &\qquad {\left\lvert {k_2 q_2} \right\rangle}, \quad q_2 = -k_2, \cdots, k_2 \\ \end{array}\end{aligned} \hspace{\stretch{1}}(2.9)

So, as we can write for angular momentum

\begin{aligned}{\left\lvert {kq} \right\rangle} &= \sum_{q_1, q_2} {\left\lvert {k_1, q_1} \right\rangle}{\left\lvert {k_2, q_2} \right\rangle}\underbrace{\left\langle{{ k_1 q_1 k_2 q_2 }} \vert {{ k q}}\right\rangle}_{\text{These are the C.G coefficients}}  \\ {\left\lvert {k_1 q_1 ; k_2 q_2} \right\rangle}&=\sum_{k, q'}{\left\lvert {k q'} \right\rangle} \left\langle{{ k q'}} \vert {{ k_1 q_1 k_2 q_2 }}\right\rangle \end{aligned}

We also have for spherical tensors

\begin{aligned}T(k, q) &= \sum_{q_1, q_2} T_1(k_1, q_1)T_2(k_2, q_2)\left\langle{{ k_1 q_1 k_2 q_2 }} \vert {{ k q}}\right\rangle	\\ T_1(k_1, q_1)T_2(k_2, q_2)&=\sum_{k, q'}T(k, q') \left\langle{{ k q'}} \vert {{ k_1 q_1 k_2 q_2 }}\right\rangle \end{aligned}

We can form eigenstates {\left\lvert {kq} \right\rangle} of (\text{total angular momentum})^2 and the z-component of the total angular momentum.
FIXME: this won’t be proven, but we are strongly suggested to try this ourselves.

\begin{aligned}\begin{array}{l l l}\text{spherical tensor (3)} 				&\leftrightarrow &\text{Cartesian vector (3)} \\ (\text{spherical vector})(\text{spherical vector})	&\leftrightarrow		 &\text{Cartesian tensor}\end{array}\end{aligned} \hspace{\stretch{1}}(2.10)

We can check the dimensions for a spherical tensor decomposition into rank 0, rank 1 and rank 2 tensors.

\begin{aligned}\begin{array}{l l l}\text{spherical tensor rank } 0 & (1) & (\text{Cartesian vector})(\text{Cartesian vector}) \\ \text{spherical tensor rank } 1 & (3) & (3)(3) \\ \text{spherical tensor rank } 2 & (5) & 9 \\ \hline\text{dimension check sum} & 9 & \\ \end{array}\end{aligned} \hspace{\stretch{1}}(2.11)

Or in the direct product and sum shorthand

\begin{aligned}1 \otimes 1 = 0 \oplus 1 \oplus 2\end{aligned} \hspace{\stretch{1}}(2.12)

Note that this is just like problem 4 in problem set 10 where we calculated the CG kets for the 1 \otimes 1 = 0 \oplus 1 \oplus 2 decomposition starting from kets {\left\lvert {1 m} \right\rangle}{\left\lvert {1 m'} \right\rangle}.

\begin{aligned}\begin{array}{l l l}{\left\lvert {22} \right\rangle}		&				& 		\\ {\left\lvert {21} \right\rangle}		&{\left\lvert {11} \right\rangle} 			& 		\\ {\left\lvert {20} \right\rangle}		&{\left\lvert {10} \right\rangle} 			&{\left\lvert {00} \right\rangle} 	\\ {\left\lvert {2\overline{1}} \right\rangle}	&{\left\lvert {1\overline{1}} \right\rangle} 		& 		\\ {\left\lvert {2\overline{2}} \right\rangle}	&				&\end{array}\end{aligned} \hspace{\stretch{1}}(2.13)

Example.

How about a Cartesian tensor of rank 3?

\begin{aligned}A_{ijk}\end{aligned} \hspace{\stretch{1}}(2.14)

\begin{aligned}1 \otimes 1 \otimes 1  &=1 \otimes ( 0 \oplus 1 \oplus 2) \\ &=(1 \otimes 0) \oplus (1 \otimes 1) \oplus (1 \otimes 2) \\ &=1 \oplus (0 \oplus 1 \oplus 2) \oplus (3 \oplus 2 \oplus 1),\end{aligned}

with the dimension count 3 + 1 + 3 + 5 + 7 + 5 + 3 = 27.

Why bother?

Consider a tensor operator T(k, q) and an eigenket of angular momentum {\left\lvert {\alpha j m} \right\rangle}, where \alpha is a degeneracy index.

Look at how T(k, q) {\left\lvert {\alpha j m} \right\rangle} transforms:

\begin{aligned}U[M] T(k, q) {\left\lvert {\alpha j m} \right\rangle}&=U[M] T(k, q) U^\dagger[M] U[M] {\left\lvert {\alpha j m} \right\rangle} \\ &=\sum_{q' m'} D^{(k)}_{q q'} D^{(j)}_{m m'} T(k, q') {\left\lvert {\alpha j m'} \right\rangle} \end{aligned}

This transforms like {\left\lvert {k q} \right\rangle} \otimes {\left\lvert {j m} \right\rangle}. We can say immediately

\begin{aligned}{\left\langle {\alpha' j' m'} \right\rvert} T(k, q) {\left\lvert {\alpha j m} \right\rangle} = 0 \end{aligned} \hspace{\stretch{1}}(2.15)

unless

\begin{aligned}{\left\lvert{k - j}\right\rvert} &\le j' \le k + j \\ m' &= m + q\end{aligned} \hspace{\stretch{1}}(2.16)

This is the “selection rule”.

Examples.

\begin{itemize}
\item Scalar T(0, 0)

\begin{aligned}{\left\langle {\alpha' j' m'} \right\rvert} T(0, 0) {\left\lvert {\alpha j m} \right\rangle} = 0 ,\end{aligned} \hspace{\stretch{1}}(2.18)

unless j = j' and m = m'.

\item V_x, V_y, V_z. What are the non-vanishing matrix elements?

\begin{aligned}V_x = \frac{ V_{-1} - V_{+1}}{\sqrt{2}}, \cdots\end{aligned} \hspace{\stretch{1}}(2.19)

\begin{aligned}{\left\langle {\alpha' j' m'} \right\rvert} V_{x, y} {\left\lvert {\alpha j m} \right\rangle} = 0 ,\end{aligned} \hspace{\stretch{1}}(2.20)

unless

\begin{aligned}{\left\lvert{j - 1}\right\rvert} &\le j' \le j + 1  \\ m' &= m \pm 1\end{aligned} \hspace{\stretch{1}}(2.21)

\begin{aligned}{\left\langle {\alpha' j' m'} \right\rvert} V_{z} {\left\lvert {\alpha j m} \right\rangle} = 0 ,\end{aligned} \hspace{\stretch{1}}(2.23)

unless

\begin{aligned}{\left\lvert{j - 1}\right\rvert} &\le j' \le j + 1  \\ m' &= m  \end{aligned} \hspace{\stretch{1}}(2.24)

\end{itemize}

Very generally one can prove (this is the Wigner-Eckart theorem, section 29.3 in the text)

\begin{aligned}{\left\langle {\alpha_2 j_2 m_2} \right\rvert} T(k, q) {\left\lvert {\alpha_1 j_1 m_1} \right\rangle}={\left\langle {\alpha_2 j_2 } \right\rvert} T(k) {\left\lvert {\alpha_1 j_1} \right\rangle} \cdot\left\langle{{j_2 m_2}} \vert {{k q ; j_1 m_1}}\right\rangle \end{aligned} \hspace{\stretch{1}}(2.26)

where we split into a “reduced matrix element” describing the “physics”, and the CG coefficient for “geometry” respectively.

References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.


a handy multithreading debugging technique: a local variable controlled semi-infinite loop.

Posted by peeterjoot on November 17, 2011

I had a three thread timing hole scenario that I wanted to confirm with the debugger. Adding blocks of code to selected points like this turned out to be really handy:


   {
      volatile int loop = 1 ;
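      // Park any thread that enters this block: it keeps looping until a debugger clears 'loop'.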
      while (loop)
      {
         loop = 1 ;

         sleep(1) ;
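         // The sleep keeps the parked thread from spinning at 100% CPU while it waits.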
      }
   }

Because the variable loop is local, I could have two different functions paused where I wanted them, and once I break on the sleep line, I can let each go, at exactly the right point in time, with a debugger command like

(gdb) p loop=0

(assigns a value of zero to the loop variable after switching to the thread of interest). The gdb ‘set scheduler-locking on/off’, ‘info threads’ and ‘thread N’ commands are also very handy for this sort of race condition debugging (this one was actually debugged by code inspection, but I wanted to see it in action to confirm that I had it right).

I suppose that I could have done this with a thread specific breakpoint. I wonder if that’s also possible (probably). I’ll have to try that next time, but hopefully I don’t have to look at race conditions like today’s for quite a while!
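(For reference, and untested in this particular scenario: gdb does support a thread qualifier on breakpoints, something like

(gdb) break myfile.c:42 thread 3

where the file, line and thread number are made-up placeholders; such a breakpoint only stops the specified thread.)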


A collection of quantum two (PHY456H1F) notes.

Posted by peeterjoot on November 16, 2011

Here’s a complete collection of all the QM II notes I’ve made so far this term. Included are (possibly corrected) versions of all the following individual bits of class and personal study notes:

Nov 16, 2011 Rotations of operators.

Nov 12, 2011 The Clebsch-Gordan convention for the basis elements of summed generalized angular momentum

Nov 11, 2011 Second order time evolution for the coefficients of an initially pure ket with an adiabatically changing Hamiltonian.

Nov 9, 2011 Two spin systems and angular momentum.

Nov 7, 2011 Degeneracy and diagonalization

Nov 6, 2011 Review of approximation results.

Nov 2, 2011 Hydrogen atom with spin, and two spin systems.

Oct 31, 2011 Rotation operator in spin space

Oct 28, 2011 WKB method and Stark shift.

Oct 27, 2011 A different derivation of the adiabatic perturbation coefficient equation

Oct 26, 2011 Representation of two state kets and Pauli spin matrices.

Oct 24, 2011 Spin and spinors (cont.)

Oct 19, 2011 WKB Method

Oct 17, 2011 Spin and Spinors

Oct 12, 2011 phy456 Problem set 4, problem 2 notes.

Oct 10, 2011 Fermi’s golden rule (cont.)

Oct 9, 2011 Simple entanglement example.

Oct 5, 2011 Adiabatic perturbation theory (cont.)

Oct 3, 2011 Time dependent perturbation (cont.)

Sept 29, 2011 Helium atom ground state energy estimation notes.

Sept 26, 2011 Interaction picture.

Sept 25, 2011 Time dependent perturbation

Sept 23, 2011 Perturbation theory and degeneracy. Review of dynamics.

Sept 21, 2011 Time independent perturbation theory (continued)

Sept 19, 2011 Perturbation methods

Sept 12, 2011 My solutions to problem set 1 (ungraded).

Sept 12, 2011 Approximate methods.

Sept 12, 2011 Review: Composite systems

Sept 1, 2011 Curious problem using the variational method to find the ground state energy of the Harmonic oscillator.


PHY456H1F: Quantum Mechanics II. Lecture 19 (Taught by Prof J.E. Sipe). Rotations of operators.

Posted by peeterjoot on November 16, 2011

[Click here for a PDF of this post with nicer formatting and figures if the post had any (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Disclaimer.

Peeter’s lecture notes from class. May not be entirely coherent.

Rotations of operators.

READING: section 28 [1].

Rotating with U[M] as in figure (\ref{fig:qmTwoL19:qmTwoL19fig1})
\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{qmTwoL19fig1}
\caption{Rotating a state centered at F}
\end{figure}

\begin{aligned}\tilde{r}_i = \sum_j M_{ij} \bar{r}_j\end{aligned} \hspace{\stretch{1}}(2.1)

\begin{aligned}{\left\langle {\psi} \right\rvert} R_i {\left\lvert {\psi} \right\rangle} = \bar{r}_i\end{aligned} \hspace{\stretch{1}}(2.2)

\begin{aligned}{\left\langle {\psi} \right\rvert} U^\dagger[M] R_i U[M] {\left\lvert {\psi} \right\rangle}&= \tilde{r}_i = \sum_j M_{ij} \bar{r}_j \\ &={\left\langle {\psi} \right\rvert} \Bigl( U^\dagger[M] R_i U[M] \Bigr) {\left\lvert {\psi} \right\rangle}\end{aligned}

So

\begin{aligned}U^\dagger[M] R_i U[M] = \sum_j M_{ij} R_j\end{aligned} \hspace{\stretch{1}}(2.3)

Any three operators V_x, V_y, V_z that transform according to

\begin{aligned}U^\dagger[M] V_i U[M] = \sum_j M_{ij} V_j\end{aligned} \hspace{\stretch{1}}(2.4)

form the components of a vector operator.

Infinitesimal rotations

Consider infinitesimal rotations, where we can show that

\begin{aligned}\left[{V_i},{J_j}\right] = i \hbar \sum_k \epsilon_{ijk} V_k\end{aligned} \hspace{\stretch{1}}(2.5)

Note that for V_i = J_i we recover the familiar commutator rules for angular momentum, but this also holds for operators \mathbf{R}, \mathbf{P}, \mathbf{J}, …
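A sketch of where this comes from (my reconstruction, assuming the usual convention U[M] = e^{-i \theta \hat{\mathbf{n}} \cdot \mathbf{J}/\hbar}): for a rotation by an infinitesimal angle \epsilon about \hat{\mathbf{n}} the rotation matrix is M_{ij} \approx \delta_{ij} + \epsilon \sum_k \epsilon_{ikj} n_k, so that to first order in \epsilon equation 2.4 reads

\begin{aligned}V_i + \frac{i \epsilon}{\hbar} \left[{\hat{\mathbf{n}} \cdot \mathbf{J}},{V_i}\right]= V_i + \epsilon \sum_{j k} \epsilon_{ijk} n_j V_k,\end{aligned}

and choosing \hat{\mathbf{n}} along the j direction recovers \left[{V_i},{J_j}\right] = i \hbar \sum_k \epsilon_{ijk} V_k.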

Note that

\begin{aligned}U^\dagger[M] = U[M^{-1}] = U[M^\text{T}],\end{aligned} \hspace{\stretch{1}}(2.6)

so

\begin{aligned}U[M] V_i U^\dagger[M] = U^\dagger[M^\text{T}] V_i U[M^\text{T}] = \sum_j M_{ji} V_j\end{aligned} \hspace{\stretch{1}}(2.7)

so

\begin{aligned}{\left\langle {\psi} \right\rvert} V_i {\left\lvert {\psi} \right\rangle}={\left\langle {\psi} \right\rvert}U^\dagger[M] \Bigl( U[M] V_i U^\dagger[M] \Bigr) U[M]{\left\lvert {\psi} \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.8)

In the same way, suppose we have nine operators

\begin{aligned}\tau_{ij}, \qquad i, j = x, y, z\end{aligned} \hspace{\stretch{1}}(2.9)

that transform according to

\begin{aligned}U[M] \tau_{ij} U^\dagger[M] = \sum_{lm} M_{li} M_{mj} \tau_{lm}\end{aligned} \hspace{\stretch{1}}(2.10)

then we will call these the components of a (Cartesian) second rank tensor operator. Suppose that we have an operator S that transforms as

\begin{aligned}U[M] S U^\dagger[M] = S\end{aligned} \hspace{\stretch{1}}(2.11)

Then we will call S a scalar operator.

A problem.

This all looks good, but it is really not satisfactory. There is a problem.

Suppose that we have a Cartesian tensor operator like this, and let’s look at the quantity

\begin{aligned}\sum_i U[M] \tau_{ii} U^\dagger[M]  &= \sum_i\sum_{lm} M_{li} M_{mi} \tau_{lm} \\ &= \sum_i\sum_{lm} M_{li} M_{im}^\text{T} \tau_{lm} \\ &= \sum_{lm} \delta_{lm} \tau_{lm} \\ &= \sum_{l} \tau_{ll} = \sum_i \tau_{ii}. \end{aligned}

We see buried inside these Cartesian tensors of higher rank there is some simplicity embedded (in this case trace invariance). Who knows what other relationships are also there? We want to work with and extract the buried simplicities, and we will find that the Cartesian way of expressing these tensors is horribly inefficient. What is a representation that doesn’t have any excess information, and is in some sense minimal?

How do we extract these buried simplicities?

Recall

\begin{aligned}U[M] {\left\lvert {j m''} \right\rangle} \end{aligned} \hspace{\stretch{1}}(2.12)

gives a linear combination of the {\left\lvert {j m'} \right\rangle}.

\begin{aligned}U[M] {\left\lvert {j m''} \right\rangle} &=\sum_{m'} {\left\lvert {j m'} \right\rangle} {\left\langle {j m'} \right\rvert} U[M] {\left\lvert {j m''} \right\rangle}  \\ &=\sum_{m'} {\left\lvert {j m'} \right\rangle} D^{(j)}_{m' m''}[M] \\ \end{aligned}

We’ve talked before about how these D^{(j)}_{m' m''}[M] form a representation of the rotation group. These are in fact (not proved here) an irreducible representation.

Look at each element of D^{(j)}_{m' m''}[M]. These are matrices and will be different according to which rotation M is chosen. For any given element there is some M for which it is nonzero; there is no element of this matrix that is zero for all possible M. There are more formal ways to think about this in a group theory context, but this is a physical way to think about it.

Think of these as the basis vectors for some eigenket of J^2.

\begin{aligned}{\left\lvert {\psi} \right\rangle} &= \sum_{m''} {\left\lvert {j m''} \right\rangle} \left\langle{{j m''}} \vert {{\psi}}\right\rangle \\ &= \sum_{m''} \bar{a}_{m''} {\left\lvert {j m''} \right\rangle}\end{aligned}

where

\begin{aligned}\bar{a}_{m''} = \left\langle{{j m''}} \vert {{\psi}}\right\rangle \end{aligned} \hspace{\stretch{1}}(2.13)

So

\begin{aligned}U[M] {\left\lvert {\psi} \right\rangle} &= \sum_{m'} U[M] {\left\lvert {j m'} \right\rangle} \left\langle{{j m'}} \vert {{\psi}}\right\rangle \\ &= \sum_{m'} U[M] {\left\lvert {j m'} \right\rangle} \bar{a}_{m'} \\ &= \sum_{m', m''} {\left\lvert {j m''} \right\rangle} {\left\langle {j m''} \right\rvert}U[M] {\left\lvert {j m'} \right\rangle} \bar{a}_{m'} \\ &= \sum_{m', m''} {\left\lvert {j m''} \right\rangle} D^{(j)}_{m'', m'}\bar{a}_{m'} \\ &= \sum_{m''} \tilde{a}_{m''} {\left\lvert {j m''} \right\rangle} \end{aligned}

where

\begin{aligned}\tilde{a}_{m''} = \sum_{m'} D^{(j)}_{m'', m'} \bar{a}_{m'} \\ \end{aligned} \hspace{\stretch{1}}(2.14)

Recall that

\begin{aligned}\tilde{r}_i = \sum_j M_{ij} \bar{r}_j\end{aligned} \hspace{\stretch{1}}(2.15)

Define (2k + 1) operators {T_k}^q, q = k, k-1, \cdots -k as the elements of a spherical tensor of rank k if

\begin{aligned}U[M] {T_k}^q U^\dagger[M] = \sum_{q'} D^{(k)}_{q' q} {T_k}^{q'}\end{aligned} \hspace{\stretch{1}}(2.16)

Here we are looking for a better way to organize things, and it will turn out (not to be proved) that this will be an irreducible way to represent things.

Examples.

We want to work though some examples of spherical tensors, and how they relate to Cartesian tensors. To do this, a motivating story needs to be told.

Let’s suppose that {\left\lvert {\psi} \right\rangle} is a ket for a single particle. Perhaps we are talking about an electron without spin, and write

\begin{aligned}\left\langle{\mathbf{r}} \vert {{\psi}}\right\rangle &= Y_{lm}(\theta, \phi) f(r) \\ &= \sum_{m''} \bar{a}_{m''} Y_{l m''}(\theta, \phi) \end{aligned}

for \bar{a}_{m''} = \delta_{m'' m} and after dropping f(r). So

\begin{aligned}{\left\langle {\mathbf{r}} \right\rvert} U[M] {\left\lvert {\psi} \right\rangle} =\sum_{m''} \sum_{m'} D^{(j)}_{m'' m'} \bar{a}_{m'} Y_{l m''}(\theta, \phi) \end{aligned} \hspace{\stretch{1}}(2.17)

We are writing this in this particular way to make a point. Now also assume that

\begin{aligned}\left\langle{\mathbf{r}} \vert {{\psi}}\right\rangle = Y_{lm}(\theta, \phi)\end{aligned} \hspace{\stretch{1}}(2.18)

so we find

\begin{aligned}{\left\langle {\mathbf{r}} \right\rvert} U[M] {\left\lvert {\psi} \right\rangle} &=\sum_{m''} Y_{l m''}(\theta, \phi) D^{(j)}_{m'' m} \\ &\equiv Y'_{l m}(\theta, \phi) \end{aligned}

\begin{aligned}Y_{l m}(\theta, \phi)  = Y_{lm}(x, y, z)\end{aligned} \hspace{\stretch{1}}(2.19)

so

\begin{aligned}Y'_{l m}(x, y, z)= \sum_{m''} Y_{l m''}(x, y, z)D^{(j)}_{m'' m} \end{aligned} \hspace{\stretch{1}}(2.20)

Now consider the spherical harmonic as an operator Y_{l m}(X, Y, Z)

\begin{aligned}U[M] Y_{lm}(X, Y, Z) U^\dagger[M] =\sum_{m''} Y_{l m''}(X, Y, Z)D^{(j)}_{m'' m} \end{aligned} \hspace{\stretch{1}}(2.21)

So this is a way to generate spherical tensor operators of rank 0, 1, 2, \cdots.

References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.


PHY456H1F: Quantum Mechanics II. Lecture 18 (Taught by Prof J.E. Sipe). The Clebsch-Gordan convention for the basis elements of summed generalized angular momentum

Posted by peeterjoot on November 14, 2011

[Click here for a PDF of this post with nicer formatting and figures if the post had any (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Disclaimer.

Peeter’s lecture notes from class. May not be entirely coherent.

Recap.

Recall our table

\begin{aligned}\begin{array}{| l | l | l | l | l |} \hline j = & j_1 + j_2				& j_1 + j_2 -1 				& \cdots 	& j_1 - j_2 \\ \hline \hline  & {\left\lvert {j_1 + j_2, j_1 + j_2} \right\rangle}	 	&					& 		& \\ \hline  & {\left\lvert {j_1 + j_2, j_1 + j_2 - 1} \right\rangle}	& {\left\lvert {j_1 + j_2 - 1, j_1 + j_2 - 1} \right\rangle}	& 		& \\ \hline  & \vdots 	 			& \vdots				& \ddots	& \\ \hline  & \vdots 	 			& \vdots				& 		& {\left\lvert {j_1 - j_2, j_1 - j_2} \right\rangle} \\ \hline  & \vdots 	 			& \vdots				& 		& \vdots \\ \hline  & \vdots 	 			& \vdots				& 		& {\left\lvert {j_1 - j_2, -(j_1 - j_2)} \right\rangle} \\ \hline  & \vdots 	 			& \vdots				& 		& \\ \hline  & {\left\lvert {j_1 + j_2, -(j_1 + j_2) + 1} \right\rangle}	& {\left\lvert {j_1 + j_2 - 1, -(j_1 + j_2) + 1} \right\rangle}	& 		& \\ \hline  & {\left\lvert {j_1 + j_2, -(j_1 + j_2)} \right\rangle}	&					& 		&  \\ \hline \end{array}\end{aligned} \hspace{\stretch{1}}(2.1)

First column

Let’s start with computation of the kets in the lower positions of the first column, which we will obtain by successive application of the lowering operator to the state

\begin{aligned}{\left\lvert {j_1 + j_2, j_1 + j_2} \right\rangle} = {\left\lvert {j_1 j_1} \right\rangle} \otimes {\left\lvert {j_2 j_2} \right\rangle}.\end{aligned} \hspace{\stretch{1}}(2.2)

Recall that our lowering operator was found to be (or defined as)

\begin{aligned}J_{-} {\left\lvert {j, m} \right\rangle} = \sqrt{(j+m)(j-m+1)} \hbar {\left\lvert {j, m-1} \right\rangle},\end{aligned} \hspace{\stretch{1}}(2.3)

so that application of the lowering operator gives us

\begin{aligned}{\left\lvert {j_1 + j_2, j_1 + j_2 -1} \right\rangle} &= \frac{J_{-} {\left\lvert {j_1 j_1} \right\rangle} \otimes {\left\lvert {j_2 j_2} \right\rangle}}{\left(2 (j_1+ j_2)\right)^{1/2} \hbar} \\ &=\frac{(J_{1-} + J_{2-}) {\left\lvert {j_1 j_1} \right\rangle} \otimes {\left\lvert {j_2 j_2} \right\rangle}}{\left(2 (j_1+ j_2)\right)^{1/2} \hbar} \\ &=\frac{\left( \sqrt{(j_1 + j_1)(j_1 - j_1 + 1)} \hbar {\left\lvert {j_1(j_1 - 1)} \right\rangle} \right) \otimes {\left\lvert {j_2 j_2} \right\rangle}}{\left(2 (j_1+ j_2)\right)^{1/2} \hbar} \\ &\quad+\frac{{\left\lvert {j_1 j_1} \right\rangle} \otimes \left(\sqrt{(j_2 + j_2)(j_2 - j_2 + 1)} \hbar {\left\lvert {j_2(j_2 -1)} \right\rangle}\right)}{\left(2 (j_1+ j_2)\right)^{1/2} \hbar} \\ &=\left(\frac{j_1}{j_1 + j_2}\right)^{1/2}{\left\lvert {j_1 (j_1-1)} \right\rangle} \otimes {\left\lvert {j_2 j_2} \right\rangle}+\left(\frac{j_2}{j_1 + j_2}\right)^{1/2}{\left\lvert {j_1 j_1} \right\rangle} \otimes {\left\lvert {j_2 (j_2-1)} \right\rangle} \\ \end{aligned}

Proceeding iteratively would allow us to finish off this column.

Second column

Moving on to the second column, the top most element in the table

\begin{aligned}{\left\lvert {j_1 + j_2 - 1, j_1 + j_2 -1} \right\rangle} ,\end{aligned} \hspace{\stretch{1}}(2.4)

can only be made up of {\left\lvert {j_1 m_1} \right\rangle} \otimes {\left\lvert {j_2 m_2} \right\rangle} with m_1 + m_2 = j_1 + j_2 -1. There are two possibilities

\begin{aligned}\begin{array}{l l l l}m_1 &= j_1 	& m_2 &= j_2 - 1 \\ m_1 &= j_1 - 1  & m_2 &= j_2\end{array}\end{aligned} \hspace{\stretch{1}}(2.5)

So for some A and B to be determined we must have

\begin{aligned}{\left\lvert {j_1 + j_2 - 1, j_1 + j_2 -1} \right\rangle} =A{\left\lvert {j_1 j_1} \right\rangle} \otimes {\left\lvert {j_2 (j_2-1)} \right\rangle}+B{\left\lvert {j_1 (j_1-1)} \right\rangle} \otimes {\left\lvert {j_2 j_2} \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.6)

Observe that these are the same kets that we ended up with by application of the lowering operator on the topmost element of the first column in our table. Since {\left\lvert {j_1 + j_2, j_1 + j_2 -1} \right\rangle} and {\left\lvert {j_1 + j_2 - 1, j_1 + j_2 -1} \right\rangle} are orthogonal, we can construct our ket for the top of the second column by just seeking such an orthonormal superposition. Consider for example

\begin{aligned}0 &=(a {\left\langle {b} \right\rvert} + c {\left\langle {d} \right\rvert})( A {\left\lvert {b} \right\rangle} + C {\left\lvert {d} \right\rangle}) \\ &=a A + c C\end{aligned}

With A = 1 we find that C = -a/c, so we have

\begin{aligned}A {\left\lvert {b} \right\rangle} + C {\left\lvert {d} \right\rangle} &= {\left\lvert {b} \right\rangle} - \frac{a}{c} {\left\lvert {d} \right\rangle}  \\ &\sim c {\left\lvert {b} \right\rangle} - a {\left\lvert {d} \right\rangle}  \\ \end{aligned}

So we find, for real a and c that

\begin{aligned}0 = (a {\left\langle {b} \right\rvert} + c {\left\langle {d} \right\rvert})( c {\left\lvert {b} \right\rangle} - a {\left\lvert {d} \right\rangle}),\end{aligned} \hspace{\stretch{1}}(2.7)

for any orthonormal pair of kets {\left\lvert {b} \right\rangle} and {\left\lvert {d} \right\rangle}. Using this we find

\begin{aligned}{\left\lvert {j_1 + j_2 - 1, j_1 + j_2 -1} \right\rangle} =\left(\frac{j_1}{j_1 + j_2}\right)^{1/2}{\left\lvert {j_1 j_1} \right\rangle} \otimes {\left\lvert {j_2 (j_2-1)} \right\rangle}-\left(\frac{j_2}{j_1 + j_2}\right)^{1/2}{\left\lvert {j_1 (j_1-1)} \right\rangle} \otimes {\left\lvert {j_2 j_2} \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.8)

This will work, although we could also multiply by any phase factor if desired. Such a choice of phase factors is essentially just a convention.

The Clebsch-Gordan convention

This is the convention we will use, where we

\begin{itemize}
\item choose the coefficients to be real.
\item require the coefficient of the m_1 = j_1 term to be \ge 0
\end{itemize}

This gives us the first state in the second column, and we can proceed to iterate with the lowering operator to generate the rest of that column.
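As an illustration (my addition), for j_1 = j_2 = 1/2 the second column contains only this single state, and 2.8 together with the convention above gives

\begin{aligned}{\left\lvert {0, 0} \right\rangle} = \frac{1}{{\sqrt{2}}} {\left\lvert {+-} \right\rangle} - \frac{1}{{\sqrt{2}}} {\left\lvert {-+} \right\rangle},\end{aligned}

the antisymmetric singlet, with the coefficient of the m_1 = j_1 = 1/2 term positive as required.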

Moving on to the third column

\begin{aligned}{\left\lvert {j_1 + j_2 - 2, j_1 + j_2 -2} \right\rangle} \end{aligned} \hspace{\stretch{1}}(2.9)

can only be made up of {\left\lvert {j_1 m_1} \right\rangle} \otimes {\left\lvert {j_2 m_2} \right\rangle} with m_1 + m_2 = j_1 + j_2 -2. There are now three possibilities

\begin{aligned}\begin{array}{l l l l}m_1 &= j_1	 &  m_2 &= j_2 - 2 \\ m_1 &= j_1 - 2  &  m_2 &= j_2 \\ m_1 &= j_1 - 1  &  m_2 &= j_2 - 1\end{array}\end{aligned} \hspace{\stretch{1}}(2.10)

and two orthogonality conditions (against the m = j_1 + j_2 - 2 kets already constructed in the first and second columns), plus normalization and the sign convention. This is enough to determine the topmost ket in the third column.
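For example (quoting a standard result rather than deriving it here), for j_1 = j_2 = 1 the top of the third column is the j = 0 state

\begin{aligned}{\left\lvert {0, 0} \right\rangle} = \frac{1}{{\sqrt{3}}}\left({\left\lvert {1, 1} \right\rangle} \otimes {\left\lvert {1, -1} \right\rangle}-{\left\lvert {1, 0} \right\rangle} \otimes {\left\lvert {1, 0} \right\rangle}+{\left\lvert {1, -1} \right\rangle} \otimes {\left\lvert {1, 1} \right\rangle}\right),\end{aligned}

which is orthogonal to both the j = 2 and j = 1 states with m = 0, normalized, and has a positive coefficient on the m_1 = j_1 term.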

We can formally write

\begin{aligned}{\left\lvert {jm ; j_1 j_2} \right\rangle} = \sum_{m_1, m_2}{\left\lvert { j_1 m_1, j_2 m_2} \right\rangle}\left\langle{{ j_1 m_1, j_2 m_2}} \vert {{jm ; j_1 j_2}}\right\rangle \end{aligned} \hspace{\stretch{1}}(2.11)

where

\begin{aligned}{\left\lvert { j_1 m_1, j_2 m_2} \right\rangle} = {\left\lvert {j_1 m_1} \right\rangle} \otimes {\left\lvert {j_2 m_2} \right\rangle},\end{aligned} \hspace{\stretch{1}}(2.12)

and

\begin{aligned}\left\langle{{ j_1 m_1, j_2 m_2}} \vert {{jm ; j_1 j_2}}\right\rangle \end{aligned} \hspace{\stretch{1}}(2.13)

are the Clebsch-Gordan coefficients, sometimes written as

\begin{aligned}\left\langle{{ j_1 m_1, j_2 m_2 }} \vert {{ jm }}\right\rangle\end{aligned} \hspace{\stretch{1}}(2.14)
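For instance (my addition, anticipating the spin one-half example in 2.16 below), the nonzero coefficients for j_1 = j_2 = 1/2 are

\begin{aligned}\left\langle{{ \tfrac{1}{2} \tfrac{1}{2}, \tfrac{1}{2} (-\tfrac{1}{2}) }} \vert {{ 1 0 }}\right\rangle &= \left\langle{{ \tfrac{1}{2} (-\tfrac{1}{2}), \tfrac{1}{2} \tfrac{1}{2} }} \vert {{ 1 0 }}\right\rangle = \frac{1}{{\sqrt{2}}} \\ \left\langle{{ \tfrac{1}{2} \tfrac{1}{2}, \tfrac{1}{2} (-\tfrac{1}{2}) }} \vert {{ 0 0 }}\right\rangle &= -\left\langle{{ \tfrac{1}{2} (-\tfrac{1}{2}), \tfrac{1}{2} \tfrac{1}{2} }} \vert {{ 0 0 }}\right\rangle = \frac{1}{{\sqrt{2}}},\end{aligned}

along with \left\langle{{ \tfrac{1}{2} \tfrac{1}{2}, \tfrac{1}{2} \tfrac{1}{2} }} \vert {{ 1 1 }}\right\rangle = \left\langle{{ \tfrac{1}{2} (-\tfrac{1}{2}), \tfrac{1}{2} (-\tfrac{1}{2}) }} \vert {{ 1 (-1) }}\right\rangle = 1.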

Properties
\begin{enumerate}
\item \left\langle{{ j_1 m_1, j_2 m_2 }} \vert {{ jm }}\right\rangle \ne 0 only if \lvert j_1 - j_2 \rvert \le j \le j_1 + j_2

This is sometimes called the triangle inequality, depicted in figure (\ref{fig:qmTwoL18:qmTwoL18fig1})

\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{qmTwoL18fig1}
\caption{Angular momentum triangle inequality.}
\end{figure}

\item \left\langle{{ j_1 m_1, j_2 m_2 }} \vert {{ jm }}\right\rangle \ne 0 only if m = m_1 + m_2.

\item Real (convention).

\item \left\langle{{ j_1 j_1, j_2 (j - j_1) }} \vert {{ j j }}\right\rangle positive (convention again).

\item Proved in the text; it follows that (a worked check appears after this list)

\begin{aligned}\left\langle{{ j_1 m_1, j_2 m_2 }} \vert {{ j m }}\right\rangle=(-1)^{j_1 + j_2 - j}\left\langle{{ j_1 (-m_1), j_2 (-m_2) }} \vert {{ j (-m) }}\right\rangle\end{aligned} \hspace{\stretch{1}}(2.15)

\end{enumerate}
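As the promised check of the last property (my addition), take j_1 = j_2 = 1/2, j = m = 0, and m_1 = -m_2 = 1/2:

\begin{aligned}\left\langle{{ \tfrac{1}{2} \tfrac{1}{2}, \tfrac{1}{2} (-\tfrac{1}{2}) }} \vert {{ 0 0 }}\right\rangle = \frac{1}{{\sqrt{2}}}= (-1)^{\frac{1}{2} + \frac{1}{2} - 0} \left( -\frac{1}{{\sqrt{2}}} \right)= (-1)^{\frac{1}{2} + \frac{1}{2} - 0}\left\langle{{ \tfrac{1}{2} (-\tfrac{1}{2}), \tfrac{1}{2} \tfrac{1}{2} }} \vert {{ 0 0 }}\right\rangle,\end{aligned}

consistent with 2.15.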

Note that the \left\langle{{ j_1 m_1, j_2 m_2 }} \vert {{ j m }}\right\rangle are all real. So, they can be assembled into an orthogonal matrix. Example

\begin{aligned}\begin{bmatrix}{\left\lvert {11} \right\rangle} \\ {\left\lvert {10} \right\rangle} \\ {\left\lvert {\overline{11}} \right\rangle} \\ {\left\lvert {00} \right\rangle}\end{bmatrix}=\begin{bmatrix}1 & 0 & 0 & 0 \\ 0 & \frac{1}{{\sqrt{2}}} & \frac{1}{{\sqrt{2}}} & 0 \\ 0 & 0 & 0 & 1 \\ 0 & \frac{1}{{\sqrt{2}}} & \frac{-1}{\sqrt{2}} & 0 \\ \end{bmatrix}\begin{bmatrix}{\left\lvert {++} \right\rangle} \\ {\left\lvert {+-} \right\rangle} \\ {\left\lvert {-+} \right\rangle} \\ {\left\lvert {--} \right\rangle}\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(2.16)
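Because the matrix is orthogonal, its transpose inverts the change of basis, so the product kets can be read off as superpositions of the total angular momentum kets. For example (my addition)

\begin{aligned}{\left\lvert {+-} \right\rangle} = \frac{1}{{\sqrt{2}}} \left( {\left\lvert {10} \right\rangle} + {\left\lvert {00} \right\rangle} \right),\qquad{\left\lvert {-+} \right\rangle} = \frac{1}{{\sqrt{2}}} \left( {\left\lvert {10} \right\rangle} - {\left\lvert {00} \right\rangle} \right).\end{aligned}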

Example. Electrons

Consider the special case of an electron, a spin 1/2 particle with s = 1/2 and m_s = \pm 1/2, for which we have

\begin{aligned}\mathbf{J} = \mathbf{L} + \mathbf{S}\end{aligned} \hspace{\stretch{1}}(2.17)

and product kets

\begin{aligned}{\left\lvert {lm} \right\rangle} \otimes {\left\lvert {\frac{1}{2} m_s} \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.18)

The possible values of j are l \pm 1/2, corresponding to the decomposition

\begin{aligned}l \otimes \frac{1}{2} = \left(l + \frac{1}{2}\right)\oplus\left(l - \frac{1}{2}\right)\end{aligned} \hspace{\stretch{1}}(2.19)
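A quick dimension count (my addition) confirms this decomposition: the product space has dimension 2(2l+1), matching the sum of the dimensions of the two irreducible pieces

\begin{aligned}2 (2 l + 1) = \left( 2 \left(l + \frac{1}{2}\right) + 1 \right) + \left( 2 \left(l - \frac{1}{2}\right) + 1 \right) = (2 l + 2) + 2 l.\end{aligned}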

Our table representation is then

\begin{aligned}\begin{array}{| l | l |} \hline j = l + \frac{1}{2} & j = l - \frac{1}{2} \\ \hline \hline {\left\lvert {l + \frac{1}{2}, l + \frac{1}{2}} \right\rangle} & \\ \hline {\left\lvert {l + \frac{1}{2}, l + \frac{1}{2} - 1} \right\rangle} & {\left\lvert {l - \frac{1}{2}, l - \frac{1}{2}} \right\rangle} \\ \hline \vdots & \vdots \\ \hline & {\left\lvert {l - \frac{1}{2}, -(l - \frac{1}{2})} \right\rangle} \\ \hline {\left\lvert {l + \frac{1}{2}, -(l + \frac{1}{2})} \right\rangle} & \\ \hline \end{array}\end{aligned} \hspace{\stretch{1}}(2.20)

Here {\left\lvert {l + \frac{1}{2}, m} \right\rangle}

can only have contributions from

\begin{aligned}{\left\lvert {l, m-\frac{1}{2}} \right\rangle} &\otimes {\left\lvert {\frac{1}{2}\frac{1}{2}} \right\rangle} \\ {\left\lvert {l, m+\frac{1}{2}} \right\rangle} &\otimes {\left\lvert {\frac{1}{2}\overline{\frac{1}{2}}} \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.21)

and {\left\lvert {l - \frac{1}{2}, m} \right\rangle} can only have contributions from the same two. Using this, together with the conventions above, we can work out (as in section 28, page 524, of the text [1])

\begin{aligned}{\left\lvert {l\pm \frac{1}{2}, m} \right\rangle} =\frac{1}{{\sqrt{2 l + 1}}}\left(\pm \left(l + \frac{1}{2} \pm m\right)^{1/2}{\left\lvert {l, m - \frac{1}{2}} \right\rangle} \otimes {\left\lvert {\frac{1}{2}\frac{1}{2}} \right\rangle}+ \left(l + \frac{1}{2} \mp m\right)^{1/2}{\left\lvert {l, m + \frac{1}{2}} \right\rangle} \otimes {\left\lvert {\frac{1}{2} \overline{\frac{1}{2}}} \right\rangle}\right)\end{aligned} \hspace{\stretch{1}}(2.23)
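As a check of this formula (my addition), at the stretched value m = l + 1/2 the second coefficient vanishes and the first becomes unity, so

\begin{aligned}{\left\lvert {l + \frac{1}{2}, l + \frac{1}{2}} \right\rangle} = \frac{\left(l + \frac{1}{2} + l + \frac{1}{2}\right)^{1/2}}{\sqrt{2 l + 1}} {\left\lvert {l, l} \right\rangle} \otimes {\left\lvert {\frac{1}{2}\frac{1}{2}} \right\rangle}= {\left\lvert {l, l} \right\rangle} \otimes {\left\lvert {\frac{1}{2}\frac{1}{2}} \right\rangle},\end{aligned}

the maximal state of the first column, as expected.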

Tensor operators

READING: section 29 of the text [1].

Recall how we characterized a rotation

\begin{aligned}\mathbf{r} \rightarrow \mathcal{R}(\mathbf{r}).\end{aligned} \hspace{\stretch{1}}(3.24)

Here we are using an active rotation as depicted in figure (\ref{fig:qmTwoL18:qmTwoL18fig2})

\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{qmTwoL18fig2}
\caption{Active rotation.}
\end{figure}

Suppose that

\begin{aligned}{\begin{bmatrix}\mathcal{R}(\mathbf{r})\end{bmatrix}}_i= \sum_j M_{ij} r_j\end{aligned} \hspace{\stretch{1}}(3.25)

so that

\begin{aligned}U = e^{-i \theta \hat{\mathbf{n}} \cdot \mathbf{J}/\hbar}\end{aligned} \hspace{\stretch{1}}(3.26)

rotates kets in the same way, with \hat{\mathbf{n}} and \theta the axis and angle of the rotation. Rotation of a state is depicted in figure (\ref{fig:qmTwoL18:qmTwoL18fig3})
\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{qmTwoL18fig3}
\caption{Rotating a wavefunction.}
\end{figure}
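For concreteness (an illustration of my own, not from the lecture), a rotation by \theta about \hat{\mathbf{z}} has

\begin{aligned}M =\begin{bmatrix}\cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1\end{bmatrix},\qquad U[M] = e^{-i \theta J_z/\hbar},\end{aligned}

which for a spin one-half system, where \mathbf{J} = (\hbar/2) \boldsymbol{\sigma}, reduces to U[M] = \cos(\theta/2) I - i \sin(\theta/2) \sigma_z.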

We rotate a ket

\begin{aligned}{\left\lvert {\psi} \right\rangle}\end{aligned} \hspace{\stretch{1}}(3.27)

using the prescription

\begin{aligned}{\left\lvert {\psi'} \right\rangle} = e^{-i \theta \hat{\mathbf{n}} \cdot \mathbf{J}/\hbar} {\left\lvert {\psi} \right\rangle}\end{aligned} \hspace{\stretch{1}}(3.28)

and write

\begin{aligned}{\left\lvert {\psi'} \right\rangle} = U[M] {\left\lvert {\psi} \right\rangle}\end{aligned} \hspace{\stretch{1}}(3.29)

Now look at

\begin{aligned}{\left\langle {\psi} \right\rvert} \mathcal{O} {\left\lvert {\psi} \right\rangle}\end{aligned} \hspace{\stretch{1}}(3.30)

and compare with

\begin{aligned}{\left\langle {\psi'} \right\rvert} \mathcal{O} {\left\lvert {\psi'} \right\rangle}={\left\langle {\psi} \right\rvert} \underbrace{U^\dagger[M] \mathcal{O} U[M]}_{{*}} {\left\lvert {\psi} \right\rangle}\end{aligned} \hspace{\stretch{1}}(3.31)

We’ll be looking in more detail at {*}.
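As a preview of what that structure looks like (my addition, anticipating the vector operator definition of section 29), if \mathcal{O} is a component V_i of a vector operator then

\begin{aligned}U^\dagger[M] V_i U[M] = \sum_j M_{ij} V_j,\end{aligned}

so rotating the states is equivalent to rotating the operator's components with the same matrix M; tensor operators generalize this transformation rule.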

References

[1] B.R. Desai. Quantum Mechanics with Basic Field Theory. Cambridge University Press, 2009.
