Peeter Joot's (OLD) Blog.

Math, physics, perl, and programming obscurity.

Posts Tagged ‘angular momentum operator’

On conditions for Clebsch-Gordan coefficients to be zero

Posted by peeterjoot on November 23, 2011

[Click here for a PDF of this post with nicer formatting and figures if the post had any (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]


In section 28.2 of the text [1] is a statement that the Clebsch-Gordan coefficient

\begin{aligned}\left\langle{{m_1 m_2}} \vert {{jm}}\right\rangle\end{aligned} \hspace{\stretch{1}}(1.1)

vanishes unless m = m_1 + m_2. It appeared that it was related to the operation of J_z, but how exactly wasn’t obvious to me. In tutorial today we hashed through this. Here are the details behind this statement.

Recap on notation.

We are taking an arbitrary two particle ket and decomposing it utilizing an insertion of a complete set of states

\begin{aligned}{\left\lvert {jm} \right\rangle} = \sum_{m_1' m_2'} \Bigl({\left\lvert {j_1 m_1'} \right\rangle} {\left\lvert {j_2 m_2'} \right\rangle}{\left\langle {j_1 m_1'} \right\rvert} {\left\langle {j_2 m_2'} \right\rvert}\Bigr){\left\lvert {jm} \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.2)

With j_1 and j_2 fixed, this is written with the shorthand

\begin{aligned}{\left\lvert {j_1 m_1} \right\rangle} {\left\lvert {j_2 m_2} \right\rangle} &= {\left\lvert {m_1 m_2} \right\rangle} \\ {\left\langle {j_1 m_1} \right\rvert} {\left\langle {j_2 m_2} \right\rvert} {\left\lvert {jm} \right\rangle} &= \left\langle{{m_1 m_2}} \vert {{jm}}\right\rangle,\end{aligned} \hspace{\stretch{1}}(2.3)

so that we write

\begin{aligned}{\left\lvert {jm} \right\rangle} = \sum_{m_1' m_2'} {\left\lvert {m_1' m_2'} \right\rangle} \left\langle{{m_1' m_2'}} \vert {{jm}}\right\rangle\end{aligned} \hspace{\stretch{1}}(2.5)
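Here’s a quick numpy sanity check of my own (not from the text), taking the smallest interesting case of two spin one half particles, where the product space is four dimensional:

```python
import numpy as np

# The insertion of a complete set of states is the statement that the
# product-basis projectors sum to the identity. For two spin-1/2 particles
# the ordered basis is {|++>, |+->, |-+>, |-->}.
basis = np.eye(4)                                   # columns are the four product kets
completeness = sum(np.outer(ket, ket) for ket in basis.T)
assert np.allclose(completeness, np.eye(4))
```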

The J_z action.

We have two ways that we can apply the operator J_z to {\left\lvert {jm} \right\rangle}. One is using the sum above, for which we find

\begin{aligned}J_z {\left\lvert {jm} \right\rangle} &= \sum_{m_1' m_2'} J_z {\left\lvert {m_1' m_2'} \right\rangle} \left\langle{{m_1' m_2'}} \vert {{jm}}\right\rangle \\ &= \hbar \sum_{m_1' m_2'} (m_1' + m_2') {\left\lvert {m_1' m_2'} \right\rangle} \left\langle{{m_1' m_2'}} \vert {{jm}}\right\rangle \\ \end{aligned}

We can also act directly on {\left\lvert {jm} \right\rangle} and then insert a complete set of states

\begin{aligned}J_z {\left\lvert {jm} \right\rangle} &=\sum_{m_1' m_2'} {\left\lvert {m_1' m_2'} \right\rangle}{\left\langle {m_1' m_2'} \right\rvert}J_z {\left\lvert {jm} \right\rangle} \\ &=\hbar m\sum_{m_1' m_2'} {\left\lvert {m_1' m_2'} \right\rangle}\left\langle{{m_1' m_2'}} \vert {{jm}}\right\rangle \\ \end{aligned}

This provides us with the identity

\begin{aligned}m\sum_{m_1' m_2'} {\left\lvert {m_1' m_2'} \right\rangle}\left\langle{{m_1' m_2'}} \vert {{jm}}\right\rangle = \sum_{m_1' m_2'} (m_1' + m_2') {\left\lvert {m_1' m_2'} \right\rangle} \left\langle{{m_1' m_2'}} \vert {{jm}}\right\rangle \end{aligned} \hspace{\stretch{1}}(3.6)

This equality must be valid for any {\left\lvert {jm} \right\rangle}, and since all the kets {\left\lvert {m_1' m_2'} \right\rangle} are linearly independent, we must have for any m_1', m_2'

\begin{aligned}(m - m_1' - m_2') \left\langle{{m_1' m_2'}} \vert {{jm}}\right\rangle = 0\end{aligned} \hspace{\stretch{1}}(3.7)

We have two ways to get this zero. One is the condition m = m_1' + m_2', and the other is for the Clebsch-Gordan coefficient \left\langle{{m_1' m_2'}} \vert {{jm}}\right\rangle to be zero whenever m \ne m_1' + m_2'.
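Here’s a small numeric illustration of mine (hbar = 1, two spin one half particles): J_z is diagonal in the product basis with eigenvalue m_1 + m_2, so a |jm> ket can only have support on product kets with m_1 + m_2 = m.

```python
import numpy as np

sz = np.diag([0.5, -0.5])                 # single particle S_z, hbar = 1
I2 = np.eye(2)
Jz = np.kron(sz, I2) + np.kron(I2, sz)    # J_z = J_1z + J_2z on the product space

# J_z |m1 m2> = (m1 + m2) |m1 m2> for every product ket, ordered {++, +-, -+, --}
assert np.allclose(np.diag(Jz), [1.0, 0.0, 0.0, -1.0])

# the coupled |j = 1, m = 0> ket
ket_10 = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2)
assert np.allclose(Jz @ ket_10, np.zeros(4))   # eigenvalue m = 0
assert ket_10[0] == 0.0                        # <++|1 0> = 0 since 1/2 + 1/2 != 0
```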

It’s not a difficult argument, but one that wasn’t clear from a read of the text (at least to me).


[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

Posted in Math and Physics Learning.

PHY456H1F: Quantum Mechanics II. Lecture 18 (Taught by Prof J.E. Sipe). The Clebsch-Gordan convention for the basis elements of summed generalized angular momentum

Posted by peeterjoot on November 14, 2011



Peeter’s lecture notes from class. May not be entirely coherent.


Recall our table

\begin{aligned}\begin{array}{| l | l | l | l | l |} \hline j = & j_1 + j_2 & j_1 + j_2 - 1 & \cdots & j_1 - j_2 \\ \hline \hline & {\left\lvert {j_1 + j_2, j_1 + j_2} \right\rangle} & & & \\ \hline & {\left\lvert {j_1 + j_2, j_1 + j_2 - 1} \right\rangle} & {\left\lvert {j_1 + j_2 - 1, j_1 + j_2 - 1} \right\rangle} & & \\ \hline & \vdots & \vdots & & {\left\lvert {j_1 - j_2, j_1 - j_2} \right\rangle} \\ \hline & \vdots & \vdots & & \vdots \\ \hline & \vdots & \vdots & & {\left\lvert {j_1 - j_2, -(j_1 - j_2)} \right\rangle} \\ \hline & {\left\lvert {j_1 + j_2, -(j_1 + j_2) + 1} \right\rangle} & {\left\lvert {j_1 + j_2 - 1, -(j_1 + j_2 - 1)} \right\rangle} & & \\ \hline & {\left\lvert {j_1 + j_2, -(j_1 + j_2)} \right\rangle} & & & \\ \hline \end{array}\end{aligned} \hspace{\stretch{1}}(2.1)

First column

Let’s start with computation of the kets in the lowest position of the first column, which we will obtain by successive application of the lowering operator to the state

\begin{aligned}{\left\lvert {j_1 + j_2, j_1 + j_2} \right\rangle} = {\left\lvert {j_1 j_1} \right\rangle} \otimes {\left\lvert {j_2 j_2} \right\rangle}.\end{aligned} \hspace{\stretch{1}}(2.2)

Recall that our lowering operator was found to be (or defined as)

\begin{aligned}J_{-} {\left\lvert {j, m} \right\rangle} = \sqrt{(j+m)(j-m+1)} \hbar {\left\lvert {j, m-1} \right\rangle},\end{aligned} \hspace{\stretch{1}}(2.3)

so that application of the lowering operator gives us

\begin{aligned}{\left\lvert {j_1 + j_2, j_1 + j_2 -1} \right\rangle} &= \frac{J_{-} {\left\lvert {j_1 j_1} \right\rangle} \otimes {\left\lvert {j_2 j_2} \right\rangle}}{\left(2 (j_1+ j_2)\right)^{1/2} \hbar} \\ &=\frac{(J_{1-} + J_{2-}) {\left\lvert {j_1 j_1} \right\rangle} \otimes {\left\lvert {j_2 j_2} \right\rangle}}{\left(2 (j_1+ j_2)\right)^{1/2} \hbar} \\ &=\frac{\left( \sqrt{(j_1 + j_1)(j_1 - j_1 + 1)} \hbar {\left\lvert {j_1(j_1 - 1)} \right\rangle} \right) \otimes {\left\lvert {j_2 j_2} \right\rangle}}{\left(2 (j_1+ j_2)\right)^{1/2} \hbar} \\ &\quad+\frac{{\left\lvert {j_1 j_1} \right\rangle} \otimes \left(\sqrt{(j_2 + j_2)(j_2 - j_2 + 1)} \hbar {\left\lvert {j_2(j_2 -1)} \right\rangle}\right)}{\left(2 (j_1+ j_2)\right)^{1/2} \hbar} \\ &=\left(\frac{j_1}{j_1 + j_2}\right)^{1/2}{\left\lvert {j_1 (j_1-1)} \right\rangle} \otimes {\left\lvert {j_2 j_2} \right\rangle}+\left(\frac{j_2}{j_1 + j_2}\right)^{1/2}{\left\lvert {j_1 j_1} \right\rangle} \otimes {\left\lvert {j_2 (j_2-1)} \right\rangle} \\ \end{aligned}

Proceeding iteratively would allow us to finish off this column.
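For the simplest case j_1 = j_2 = 1/2 this first lowering step can be checked numerically (my own check, hbar = 1):

```python
import numpy as np

# In the {|+>, |->} basis, J_- |1/2, 1/2> = |1/2, -1/2> (the sqrt factor is 1).
jminus = np.array([[0.0, 0.0], [1.0, 0.0]])
I2 = np.eye(2)
Jminus = np.kron(jminus, I2) + np.kron(I2, jminus)   # J_- = J_1- + J_2-

top = np.array([1.0, 0.0, 0.0, 0.0])      # |j1+j2, j1+j2> = |++>, basis {++, +-, -+, --}
lowered = Jminus @ top
lowered /= np.linalg.norm(lowered)        # same as dividing by sqrt(2 (j1 + j2)) hbar

# the closed form with j1 = j2 = 1/2: equal sqrt(1/2) weights on |-+> and |+->
assert np.allclose(lowered, [0.0, np.sqrt(0.5), np.sqrt(0.5), 0.0])
```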

Second column

Moving on to the second column, the top most element in the table

\begin{aligned}{\left\lvert {j_1 + j_2 - 1, j_1 + j_2 -1} \right\rangle} ,\end{aligned} \hspace{\stretch{1}}(2.4)

can only be made up of {\left\lvert {j_1 m_1} \right\rangle} \otimes {\left\lvert {j_2 m_2} \right\rangle} with m_1 + m_2 = j_1 + j_2 -1. There are two possibilities

\begin{aligned}\begin{array}{l l l l}m_1 &= j_1 	& m_2 &= j_2 - 1 \\ m_1 &= j_1 - 1  & m_2 &= j_2\end{array}\end{aligned} \hspace{\stretch{1}}(2.5)

So for some A and B to be determined we must have

\begin{aligned}{\left\lvert {j_1 + j_2 - 1, j_1 + j_2 -1} \right\rangle} =A{\left\lvert {j_1 j_1} \right\rangle} \otimes {\left\lvert {j_2 (j_2-1)} \right\rangle}+B{\left\lvert {j_1 (j_1-1)} \right\rangle} \otimes {\left\lvert {j_2 j_2} \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.6)

Observe that these are the same kets that we ended up with by application of the lowering operator on the topmost element of the first column in our table. Since {\left\lvert {j_1 + j_2, j_1 + j_2 -1} \right\rangle} and {\left\lvert {j_1 + j_2 - 1, j_1 + j_2 -1} \right\rangle} are orthogonal, we can construct our ket for the top of the second column by just seeking such an orthonormal superposition. Consider for example

\begin{aligned}0 &=(a {\left\langle {b} \right\rvert} + c {\left\langle {d} \right\rvert})( A {\left\lvert {b} \right\rangle} + C {\left\lvert {d} \right\rangle}) \\ &=a A + c C\end{aligned}

With A = 1 we find that C = -a/c, so we have

\begin{aligned}A {\left\lvert {b} \right\rangle} + C {\left\lvert {d} \right\rangle} &= {\left\lvert {b} \right\rangle} - \frac{a}{c} {\left\lvert {d} \right\rangle}  \\ &\sim c {\left\lvert {b} \right\rangle} - a {\left\lvert {d} \right\rangle}  \\ \end{aligned}

So we find, for real a and c that

\begin{aligned}0 = (a {\left\langle {b} \right\rvert} + c {\left\langle {d} \right\rvert})( c {\left\lvert {b} \right\rangle} - a {\left\lvert {d} \right\rangle}),\end{aligned} \hspace{\stretch{1}}(2.7)

for any orthonormal pair of kets {\left\lvert {b} \right\rangle} and {\left\lvert {d} \right\rangle}. Using this we find

\begin{aligned}{\left\lvert {j_1 + j_2 - 1, j_1 + j_2 -1} \right\rangle} =\left(\frac{j_1}{j_1 + j_2}\right)^{1/2}{\left\lvert {j_1 j_1} \right\rangle} \otimes {\left\lvert {j_2 (j_2-1)} \right\rangle}-\left(\frac{j_2}{j_1 + j_2}\right)^{1/2}{\left\lvert {j_1 (j_1-1)} \right\rangle} \otimes {\left\lvert {j_2 j_2} \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.8)

This will work, although we could also multiply by any phase factor if desired. Such a choice of phase factors is essentially just a convention.
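Numerically (my own check, with components written on the ordered pair of product kets that contribute), the swap-and-negate construction produces a unit ket orthogonal to the lowered first-column state for any j_1, j_2:

```python
import numpy as np

# Components on the ordered pair (|j1 (j1-1)> x |j2 j2>, |j1 j1> x |j2 (j2-1)>).
for j1, j2 in [(0.5, 0.5), (1.0, 0.5), (1.5, 1.0)]:
    s = j1 + j2
    lowered = np.array([np.sqrt(j1 / s), np.sqrt(j2 / s)])   # |j1+j2, j1+j2-1>
    # swap the weights and flip one sign to get the orthogonal unit ket, with
    # the overall sign chosen so the m1 = j1 coefficient is positive:
    second = np.array([-np.sqrt(j2 / s), np.sqrt(j1 / s)])   # |j1+j2-1, j1+j2-1>
    assert np.isclose(lowered @ second, 0.0)   # orthogonal
    assert np.isclose(second @ second, 1.0)    # unit norm
```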

The Clebsch-Gordan convention

This is the convention we will use, where we

- choose the coefficients to be real, and
- require the coefficient of the m_1 = j_1 term to be \ge 0.

This gives us the first state in the second column, and we can proceed to iterate using the lowering operators to get all those values.

Moving on to the third column

\begin{aligned}{\left\lvert {j_1 + j_2 - 2, j_1 + j_2 -2} \right\rangle} \end{aligned} \hspace{\stretch{1}}(2.9)

can only be made up of {\left\lvert {j_1 m_1} \right\rangle} \otimes {\left\lvert {j_2 m_2} \right\rangle} with m_1 + m_2 = j_1 + j_2 -2. There are now three possibilities

\begin{aligned}\begin{array}{l l l l}m_1 &= j_1	 &  m_2 &= j_2 - 2 \\ m_1 &= j_1 - 2  &  m_2 &= j_2 \\ m_1 &= j_1 - 1  &  m_2 &= j_2 - 1\end{array}\end{aligned} \hspace{\stretch{1}}(2.10)

and two orthogonality conditions, plus the conventions. This is enough to determine the ket at the top of the third column.

We can formally write

\begin{aligned}{\left\lvert {jm ; j_1 j_2} \right\rangle} = \sum_{m_1, m_2}{\left\lvert { j_1 m_1, j_2 m_2} \right\rangle}\left\langle{{ j_1 m_1, j_2 m_2}} \vert {{jm ; j_1 j_2}}\right\rangle \end{aligned} \hspace{\stretch{1}}(2.11)


where

\begin{aligned}{\left\lvert { j_1 m_1, j_2 m_2} \right\rangle} = {\left\lvert {j_1 m_1} \right\rangle} \otimes {\left\lvert {j_2 m_2} \right\rangle},\end{aligned} \hspace{\stretch{1}}(2.12)

and the coefficients

\begin{aligned}\left\langle{{ j_1 m_1, j_2 m_2}} \vert {{jm ; j_1 j_2}}\right\rangle \end{aligned} \hspace{\stretch{1}}(2.13)

are the Clebsch-Gordan coefficients, sometimes written as

\begin{aligned}\left\langle{{ j_1 m_1, j_2 m_2 }} \vert {{ jm }}\right\rangle\end{aligned} \hspace{\stretch{1}}(2.14)

These coefficients have the following properties:

- \left\langle{{ j_1 m_1, j_2 m_2 }} \vert {{ jm }}\right\rangle \ne 0 only if {\left\lvert {j_1 - j_2} \right\rvert} \le j \le j_1 + j_2.

This is sometimes called the triangle inequality. (Figure: angular momentum triangle inequality.)

- \left\langle{{ j_1 m_1, j_2 m_2 }} \vert {{ jm }}\right\rangle \ne 0 only if m = m_1 + m_2.

- The coefficients are real (convention).

- \left\langle{{ j_1 j_1, j_2 (j - j_1) }} \vert {{ j j }}\right\rangle is positive (again a convention).

- As proved in the text, it follows that

\begin{aligned}\left\langle{{ j_1 m_1, j_2 m_2 }} \vert {{ j m }}\right\rangle=(-1)^{j_1 + j_2 - j}\left\langle{{ j_1 (-m_1), j_2 (-m_2) }} \vert {{ j (-m) }}\right\rangle\end{aligned} \hspace{\stretch{1}}(2.15)


Note that the \left\langle{{ j_1 m_1, j_2 m_2 }} \vert {{ j m }}\right\rangle are all real, so they can be assembled into an orthogonal matrix. For example,

\begin{aligned}\begin{bmatrix}{\left\lvert {11} \right\rangle} \\ {\left\lvert {10} \right\rangle} \\ {\left\lvert {\overline{11}} \right\rangle} \\ {\left\lvert {00} \right\rangle}\end{bmatrix}=\begin{bmatrix}1 & 0 & 0 & 0 \\ 0 & \frac{1}{{\sqrt{2}}} & \frac{1}{{\sqrt{2}}} & 0 \\ 0 & 0 & 0 & 1 \\ 0 & \frac{1}{{\sqrt{2}}} & \frac{-1}{\sqrt{2}} & 0 \\ \end{bmatrix}\begin{bmatrix}{\left\lvert {++} \right\rangle} \\ {\left\lvert {+-} \right\rangle} \\ {\left\lvert {-+} \right\rangle} \\ {\left\lvert {--} \right\rangle}\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(2.16)
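It is easy to check numerically that this example matrix really is orthogonal (my own check):

```python
import numpy as np

# the Clebsch-Gordan matrix for two spin-1/2 particles, as above
r = 1 / np.sqrt(2)
M = np.array([[1, 0, 0, 0],
              [0, r, r, 0],
              [0, 0, 0, 1],
              [0, r, -r, 0]])
assert np.allclose(M @ M.T, np.eye(4))   # rows are orthonormal
assert np.allclose(M.T @ M, np.eye(4))   # columns are orthonormal too
```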

Example. Electrons

Consider the special case of an electron, a spin one half particle with s = 1/2 and m_s = \pm 1/2, for which we have

\begin{aligned}\mathbf{J} = \mathbf{L} + \mathbf{S}\end{aligned} \hspace{\stretch{1}}(2.17)

\begin{aligned}{\left\lvert {lm} \right\rangle} \otimes {\left\lvert {\frac{1}{2} m_s} \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.18)

The possible values of j are l \pm 1/2, so that

\begin{aligned}l \otimes \frac{1}{2} = \left(l + \frac{1}{2}\right)\oplus\left(l - \frac{1}{2}\right)\end{aligned} \hspace{\stretch{1}}(2.19)
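The dimensions balance on the two sides of this decomposition, which makes for a quick sanity check (mine, not from the text):

```python
# dimensions on both sides of l x 1/2 = (l + 1/2) + (l - 1/2), for l >= 1
for l in range(1, 10):
    product_dim = (2 * l + 1) * 2
    coupled_dim = int(2 * (l + 0.5) + 1) + int(2 * (l - 0.5) + 1)
    assert product_dim == coupled_dim
```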

Our table representation is then

\begin{aligned}\begin{array}{| l | l | l |} \hline j = & l + \frac{1}{2} & l - \frac{1}{2} \\ \hline \hline & {\left\lvert {l + \frac{1}{2}, l + \frac{1}{2}} \right\rangle} & \\ \hline & {\left\lvert {l + \frac{1}{2}, l + \frac{1}{2} - 1} \right\rangle} & {\left\lvert {l - \frac{1}{2}, l - \frac{1}{2}} \right\rangle} \\ \hline & \vdots & \vdots \\ \hline & & {\left\lvert {l - \frac{1}{2}, -(l - \frac{1}{2})} \right\rangle} \\ \hline & {\left\lvert {l + \frac{1}{2}, -(l + \frac{1}{2})} \right\rangle} & \\ \hline \end{array}\end{aligned} \hspace{\stretch{1}}(2.20)

Here {\left\lvert {l + \frac{1}{2}, m} \right\rangle}

can only have contributions from

\begin{aligned}{\left\lvert {l, m-\frac{1}{2}} \right\rangle} &\otimes {\left\lvert {\frac{1}{2}\frac{1}{2}} \right\rangle} \\ {\left\lvert {l, m+\frac{1}{2}} \right\rangle} &\otimes {\left\lvert {\frac{1}{2}\overline{\frac{1}{2}}} \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.21)

and {\left\lvert {l - \frac{1}{2}, m} \right\rangle} can only have contributions from the same two. Using this and the conventions above, we can work out (as in section 28, page 524 of our text [1])

\begin{aligned}{\left\lvert {l\pm \frac{1}{2}, m} \right\rangle} =\frac{1}{{\sqrt{2 l + 1}}}\left(\pm \left(l + \frac{1}{2} \pm m\right)^{1/2}{\left\lvert {l, m - \frac{1}{2}} \right\rangle} \otimes {\left\lvert {\frac{1}{2}\frac{1}{2}} \right\rangle}+ \left(l + \frac{1}{2} \mp m\right)^{1/2}{\left\lvert {l, m + \frac{1}{2}} \right\rangle} \otimes {\left\lvert {\frac{1}{2} \overline{\frac{1}{2}}} \right\rangle}\right)\end{aligned} \hspace{\stretch{1}}(2.23)
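As a sanity check of mine (hbar = 1, l = 1), we can verify the j = l + 1/2 case by building J = L + S explicitly and confirming that the stated superposition is an eigenket of J^2 with eigenvalue j(j + 1) = 15/4:

```python
import numpy as np

l = 1
ms = np.arange(l, -l - 1, -1)                       # m = 1, 0, -1
Lz = np.diag(ms).astype(complex)
Lp = np.zeros((2 * l + 1, 2 * l + 1), dtype=complex)
for i, m in enumerate(ms[1:], start=1):             # L+ |l m> = sqrt((l-m)(l+m+1)) |l m+1>
    Lp[i - 1, i] = np.sqrt((l - m) * (l + m + 1))
Lx, Ly = (Lp + Lp.conj().T) / 2, (Lp - Lp.conj().T) / 2j

Sx = np.array([[0, 1], [1, 0]]) / 2                 # hbar = 1 spin-1/2 operators
Sy = np.array([[0, -1j], [1j, 0]]) / 2
Sz = np.array([[1, 0], [0, -1]]) / 2
I3, I2 = np.eye(3), np.eye(2)

Jx = np.kron(Lx, I2) + np.kron(I3, Sx)
Jy = np.kron(Ly, I2) + np.kron(I3, Sy)
Jz = np.kron(Lz, I2) + np.kron(I3, Sz)
J2 = Jx @ Jx + Jy @ Jy + Jz @ Jz

# upper sign of the closed form, l = 1, m = 1/2: sqrt(2/3) |1 0>|up> + sqrt(1/3) |1 1>|down>,
# basis ordered (m_l, m_s) = (1,up), (1,down), (0,up), (0,down), (-1,up), (-1,down)
ket = np.zeros(6, dtype=complex)
ket[2], ket[1] = np.sqrt(2 / 3), np.sqrt(1 / 3)
assert np.allclose(Jz @ ket, 0.5 * ket)                  # m = 1/2
assert np.allclose(J2 @ ket, (3 / 2) * (5 / 2) * ket)    # j (j + 1) for j = 3/2
```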

Tensor operators

READING: section 29 of the text.

Recall how we characterized a rotation

\begin{aligned}\mathbf{r} \rightarrow \mathcal{R}(\mathbf{r}).\end{aligned} \hspace{\stretch{1}}(3.24)

Here we are using an active rotation. (Figure: active rotation.)

Suppose that

\begin{aligned}{\begin{bmatrix}\mathcal{R}(\mathbf{r})\end{bmatrix}}_i= \sum_j M_{ij} r_j\end{aligned} \hspace{\stretch{1}}(3.25)

so that

\begin{aligned}U = e^{-i \theta \hat{\mathbf{n}} \cdot \mathbf{J}/\hbar}\end{aligned} \hspace{\stretch{1}}(3.26)

rotates in the same way. (Figure: rotating a wavefunction.)

We rotate a ket

\begin{aligned}{\left\lvert {\psi} \right\rangle}\end{aligned} \hspace{\stretch{1}}(3.27)

using the prescription

\begin{aligned}{\left\lvert {\psi'} \right\rangle} = e^{-i \theta \hat{\mathbf{n}} \cdot \mathbf{J}/\hbar} {\left\lvert {\psi} \right\rangle}\end{aligned} \hspace{\stretch{1}}(3.28)

and write

\begin{aligned}{\left\lvert {\psi'} \right\rangle} = U[M] {\left\lvert {\psi} \right\rangle}\end{aligned} \hspace{\stretch{1}}(3.29)

Now look at

\begin{aligned}{\left\langle {\psi} \right\rvert} \mathcal{O} {\left\lvert {\psi} \right\rangle}\end{aligned} \hspace{\stretch{1}}(3.30)

and compare with

\begin{aligned}{\left\langle {\psi'} \right\rvert} \mathcal{O} {\left\lvert {\psi'} \right\rangle}={\left\langle {\psi} \right\rvert} \underbrace{U^\dagger[M] \mathcal{O} U[M]}_{{*}} {\left\lvert {\psi} \right\rangle}\end{aligned} \hspace{\stretch{1}}(3.31)

We’ll be looking at the {*} term in more detail.


[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.


PHY456H1F: Quantum Mechanics II. Lecture 16 (Taught by Prof J.E. Sipe). Hydrogen atom with spin, and two spin systems.

Posted by peeterjoot on November 2, 2011




The hydrogen atom with spin.

READING: what chapter of [1]?

For a spinless hydrogen atom, the Hamiltonian was

\begin{aligned}H = H_{\text{CM}} \otimes H_{\text{rel}}\end{aligned} \hspace{\stretch{1}}(2.1)

where we have independent Hamiltonians for the motion of the center of mass and for the relative motion of the electron and the proton.

The basis kets for these could be designated {\left\lvert {\mathbf{p}_\text{CM}} \right\rangle} and {\left\lvert {\mathbf{p}_\text{rel}} \right\rangle} respectively.

Now we want to augment this, treating

\begin{aligned}H = H_{\text{CM}} \otimes H_{\text{rel}} \otimes H_{\text{s}}\end{aligned} \hspace{\stretch{1}}(2.2)

where H_{\text{s}} is the Hamiltonian for the spin of the electron. We are neglecting the spin of the proton, but that could also be included (this turns out to be a lesser effect).

We’ll introduce a Hamiltonian including the dynamics of the relative motion and the electron spin

\begin{aligned}H_{\text{rel}} \otimes H_{\text{s}}\end{aligned} \hspace{\stretch{1}}(2.3)

Covering the Hilbert space for this system we’ll use basis kets

\begin{aligned}{\left\lvert {nlm\pm} \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.4)

\begin{aligned}\begin{aligned}{\left\lvert {nlm+} \right\rangle} &\rightarrow \begin{bmatrix}\left\langle{{\mathbf{r}+}} \vert {{nlm+}}\right\rangle \\ \left\langle{{\mathbf{r}-}} \vert {{nlm+}}\right\rangle \\ \end{bmatrix}=\begin{bmatrix}\Phi_{nlm}(\mathbf{r}) \\ 0\end{bmatrix} \\ {\left\lvert {nlm-} \right\rangle} &\rightarrow \begin{bmatrix}\left\langle{{\mathbf{r}+}} \vert {{nlm-}}\right\rangle \\ \left\langle{{\mathbf{r}-}} \vert {{nlm-}}\right\rangle \\ \end{bmatrix}=\begin{bmatrix}0 \\ \Phi_{nlm}(\mathbf{r}) \end{bmatrix}.\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.5)

Here \mathbf{r} should be understood to really mean \mathbf{r}_\text{rel}. Our full Hamiltonian, after introducing a magnetic perturbation, is

\begin{aligned}H = \frac{P_\text{CM}^2}{2M} + \left(\frac{P_\text{rel}^2}{2\mu}-\frac{e^2}{R_\text{rel}}\right)- \boldsymbol{\mu}_0 \cdot \mathbf{B}- \boldsymbol{\mu}_s \cdot \mathbf{B}\end{aligned} \hspace{\stretch{1}}(2.6)


where

\begin{aligned}M = m_\text{proton} + m_\text{electron},\end{aligned} \hspace{\stretch{1}}(2.7)

and

\begin{aligned}\frac{1}{{\mu}} = \frac{1}{{m_\text{proton}}} + \frac{1}{{m_\text{electron}}}.\end{aligned} \hspace{\stretch{1}}(2.8)

For a uniform magnetic field

\begin{aligned}\boldsymbol{\mu}_0 &= \left( -\frac{e}{2 m c} \right) \mathbf{L} \\ \boldsymbol{\mu}_s &= g \left( -\frac{e}{2 m c} \right) \mathbf{S}\end{aligned} \hspace{\stretch{1}}(2.9)

We also have higher order terms (higher order multipoles) and relativistic corrections (like spin orbit coupling [2]).

Two spins.

READING: section 28 of [1].

Example: Consider two electrons, one in each of two quantum dots.

\begin{aligned}H = H_{1} \otimes H_{2}\end{aligned} \hspace{\stretch{1}}(3.11)

where H_1 and H_2 are both spin Hamiltonians on their respective 2D Hilbert spaces. Our complete Hilbert space is thus a 4D space.

We’ll write

\begin{aligned}\begin{aligned}{\left\lvert {+} \right\rangle}_1 \otimes {\left\lvert {+} \right\rangle}_2 &= {\left\lvert {++} \right\rangle} \\ {\left\lvert {+} \right\rangle}_1 \otimes {\left\lvert {-} \right\rangle}_2 &= {\left\lvert {+-} \right\rangle} \\ {\left\lvert {-} \right\rangle}_1 \otimes {\left\lvert {+} \right\rangle}_2 &= {\left\lvert {-+} \right\rangle} \\ {\left\lvert {-} \right\rangle}_1 \otimes {\left\lvert {-} \right\rangle}_2 &= {\left\lvert {--} \right\rangle} \end{aligned}\end{aligned} \hspace{\stretch{1}}(3.12)

We can introduce

\begin{aligned}\mathbf{S}_1 &= \mathbf{S}_1^{(1)} \otimes I^{(2)} \\ \mathbf{S}_2 &= I^{(1)} \otimes \mathbf{S}_2^{(2)}\end{aligned} \hspace{\stretch{1}}(3.13)

Here we “promote” each of the individual spin operators to spin operators in the complete Hilbert space.

We write

\begin{aligned}S_{1z}{\left\lvert {++} \right\rangle} &= \frac{\hbar}{2} {\left\lvert {++} \right\rangle} \\ S_{1z}{\left\lvert {+-} \right\rangle} &= \frac{\hbar}{2} {\left\lvert {+-} \right\rangle}\end{aligned} \hspace{\stretch{1}}(3.15)


and

\begin{aligned}\mathbf{S} = \mathbf{S}_1 + \mathbf{S}_2,\end{aligned} \hspace{\stretch{1}}(3.17)

for the full spin angular momentum operator. The z component of this operator is

\begin{aligned}S_z = S_{1z} + S_{2z}\end{aligned} \hspace{\stretch{1}}(3.18)

\begin{aligned}S_z{\left\lvert {++} \right\rangle} &= (S_{1z} + S_{2z}) {\left\lvert {++} \right\rangle} = \left( \frac{\hbar}{2} +\frac{\hbar}{2} \right) {\left\lvert {++} \right\rangle} = \hbar {\left\lvert {++} \right\rangle} \\  S_z{\left\lvert {+-} \right\rangle} &= (S_{1z} + S_{2z}) {\left\lvert {+-} \right\rangle} = \left( \frac{\hbar}{2} -\frac{\hbar}{2} \right) {\left\lvert {+-} \right\rangle} = 0 \\ S_z{\left\lvert {-+} \right\rangle} &= (S_{1z} + S_{2z}) {\left\lvert {-+} \right\rangle} = \left( -\frac{\hbar}{2} +\frac{\hbar}{2} \right) {\left\lvert {-+} \right\rangle} = 0 \\ S_z{\left\lvert {--} \right\rangle} &= (S_{1z} + S_{2z}) {\left\lvert {--} \right\rangle} = \left( -\frac{\hbar}{2} -\frac{\hbar}{2} \right) {\left\lvert {--} \right\rangle} = -\hbar {\left\lvert {--} \right\rangle} \end{aligned} \hspace{\stretch{1}}(3.19)

So, we find that {\left\lvert {x x} \right\rangle} are all eigenkets of S_z. These will also all be eigenkets of \mathbf{S}_1^2 = S_{1x}^2 +S_{1y}^2 +S_{1z}^2 since we have

\begin{aligned}S_1^2 {\left\lvert {x x} \right\rangle} &= \hbar^2 \left(\frac{1}{{2}}\right) \left(1 + \frac{1}{{2}}\right) {\left\lvert {x x} \right\rangle} = \frac{3}{4} \hbar^2 {\left\lvert {x x} \right\rangle} \\ S_2^2 {\left\lvert {x x} \right\rangle} &= \hbar^2 \left(\frac{1}{{2}}\right) \left(1 + \frac{1}{{2}}\right) {\left\lvert {x x} \right\rangle} = \frac{3}{4} \hbar^2 {\left\lvert {x x} \right\rangle} \end{aligned} \hspace{\stretch{1}}(3.23)

\begin{aligned}\begin{aligned}S^2 &= (\mathbf{S}_1+\mathbf{S}_2) \cdot(\mathbf{S}_1+\mathbf{S}_2)  \\ &= S_1^2 + S_2^2 + 2 \mathbf{S}_1 \cdot \mathbf{S}_2\end{aligned}\end{aligned} \hspace{\stretch{1}}(3.25)

Are all the product kets also eigenkets of S^2? Calculate

\begin{aligned}S^2 {\left\lvert {+-} \right\rangle} &= (S_1^2 + S_2^2 + 2 \mathbf{S}_1 \cdot \mathbf{S}_2) {\left\lvert {+-} \right\rangle} \\ &=\left(\frac{3}{4}\hbar^2+\frac{3}{4}\hbar^2\right){\left\lvert {+-} \right\rangle}+ 2 S_{1x} S_{2x} {\left\lvert {+-} \right\rangle} + 2 S_{1y} S_{2y} {\left\lvert {+-} \right\rangle} + 2 S_{1z} S_{2z} {\left\lvert {+-} \right\rangle} \end{aligned}

For the z mixed terms, we have

\begin{aligned}2 S_{1z} S_{2z} {\left\lvert {+-} \right\rangle}  = 2 \left(\frac{\hbar}{2}\right)\left(-\frac{\hbar}{2}\right){\left\lvert {+-} \right\rangle}\end{aligned} \hspace{\stretch{1}}(3.26)


so that

\begin{aligned}S^2{\left\lvert {+-} \right\rangle} = \hbar^2 {\left\lvert {+-} \right\rangle} + 2 S_{1x} S_{2x} {\left\lvert {+-} \right\rangle} + 2 S_{1y} S_{2y} {\left\lvert {+-} \right\rangle} \end{aligned} \hspace{\stretch{1}}(3.27)

Since we have set our spin direction in the z direction with

\begin{aligned}{\left\lvert {+} \right\rangle} &\rightarrow \begin{bmatrix}1 \\ 0\end{bmatrix} \\ {\left\lvert {-} \right\rangle} &\rightarrow \begin{bmatrix}0 \\ 1 \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.28)

We have

\begin{aligned}S_x{\left\lvert {+} \right\rangle} &\rightarrow \frac{\hbar}{2} \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}\begin{bmatrix}1 \\ 0\end{bmatrix} =\frac{\hbar}{2}\begin{bmatrix}0 \\ 1 \end{bmatrix}=\frac{\hbar}{2} {\left\lvert {-} \right\rangle} \\ S_x{\left\lvert {-} \right\rangle} &\rightarrow \frac{\hbar}{2} \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}\begin{bmatrix}0 \\ 1 \end{bmatrix} =\frac{\hbar}{2}\begin{bmatrix}1  \\ 0 \end{bmatrix}=\frac{\hbar}{2} {\left\lvert {+} \right\rangle} \\ S_y{\left\lvert {+} \right\rangle} &\rightarrow \frac{\hbar}{2} \begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix}\begin{bmatrix}1  \\ 0 \end{bmatrix} =\frac{i\hbar}{2}\begin{bmatrix}0  \\ 1 \end{bmatrix}=\frac{i\hbar}{2} {\left\lvert {-} \right\rangle} \\ S_y{\left\lvert {-} \right\rangle} &\rightarrow \frac{\hbar}{2} \begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix}\begin{bmatrix}0  \\ 1 \end{bmatrix} =\frac{-i\hbar}{2}\begin{bmatrix}1  \\ 0 \end{bmatrix}=-\frac{i\hbar}{2} {\left\lvert {+} \right\rangle} \\ \end{aligned}

With these we arrive at the action of S^2 on our mixed composite state

\begin{aligned}S^2{\left\lvert {+-} \right\rangle} = \hbar^2 ({\left\lvert {+-} \right\rangle} + {\left\lvert {-+} \right\rangle} ).\end{aligned} \hspace{\stretch{1}}(3.30)
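Equivalently (a numerical check of mine, with hbar = 1), we can promote the Pauli-matrix representations to the product space and let S^2 act on |+-> directly:

```python
import numpy as np

# single particle spin-1/2 operators, hbar = 1
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

# S^2 = sum over components of (S_1i + S_2i)^2 on the product space
S2 = sum((np.kron(s, I2) + np.kron(I2, s)) @ (np.kron(s, I2) + np.kron(I2, s))
         for s in (sx, sy, sz))

plusminus = np.array([0, 1, 0, 0], dtype=complex)    # |+-> in the basis {++, +-, -+, --}
minusplus = np.array([0, 0, 1, 0], dtype=complex)
assert np.allclose(S2 @ plusminus, plusminus + minusplus)   # S^2 |+-> = |+-> + |-+>
```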

For the action on the {\left\lvert {++} \right\rangle} state we have

\begin{aligned}S^2 {\left\lvert {++} \right\rangle} &=\left(\frac{3}{4}\hbar^2 +\frac{3}{4}\hbar^2\right){\left\lvert {++} \right\rangle} + 2 \frac{\hbar^2}{4} {\left\lvert {--} \right\rangle} + 2 i^2 \frac{\hbar^2}{4} {\left\lvert {--} \right\rangle} +2 \left(\frac{\hbar}{2}\right)\left(\frac{\hbar}{2}\right){\left\lvert {++} \right\rangle} \\ &=2 \hbar^2 {\left\lvert {++} \right\rangle} \\ \end{aligned}

and on the {\left\lvert {--} \right\rangle} state we have

\begin{aligned}S^2 {\left\lvert {--} \right\rangle} &=\left(\frac{3}{4}\hbar^2 +\frac{3}{4}\hbar^2\right){\left\lvert {--} \right\rangle} + 2 \frac{(-\hbar)^2}{4} {\left\lvert {++} \right\rangle} + 2 i^2 \frac{\hbar^2}{4} {\left\lvert {++} \right\rangle} +2 \left(-\frac{\hbar}{2}\right)\left(-\frac{\hbar}{2}\right){\left\lvert {--} \right\rangle} \\ &=2 \hbar^2 {\left\lvert {--} \right\rangle} \end{aligned}

All of this can be assembled into a tidier matrix form

\begin{aligned}S^2\rightarrow \hbar^2\begin{bmatrix}2 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 2 \\ \end{bmatrix},\end{aligned} \hspace{\stretch{1}}(3.31)

where the matrix is taken with respect to the (ordered) basis

\begin{aligned}\{{\left\lvert {++} \right\rangle},{\left\lvert {+-} \right\rangle},{\left\lvert {-+} \right\rangle},{\left\lvert {--} \right\rangle}\}.\end{aligned} \hspace{\stretch{1}}(3.32)


We also have

\begin{aligned}\left[{S^2},{S_z}\right] &= 0 \\ \left[{S_i},{S_j}\right] &= i \hbar \sum_k \epsilon_{ijk} S_k\end{aligned} \hspace{\stretch{1}}(3.33)

It should be possible to find eigenkets of S^2 and S_z

\begin{aligned}S^2 {\left\lvert {s m_s} \right\rangle} &= s(s+1)\hbar^2 {\left\lvert {s m_s} \right\rangle} \\ S_z {\left\lvert {s m_s} \right\rangle} &= \hbar m_s {\left\lvert {s m_s} \right\rangle} \end{aligned} \hspace{\stretch{1}}(3.35)

An orthonormal set of eigenkets of S^2 and S_z is found to be

\begin{aligned}\begin{array}{l l}{\left\lvert {++} \right\rangle} & \mbox{s = 1 and m_s = 1} \\ \frac{1}{{\sqrt{2}}} \left( {\left\lvert {+-} \right\rangle} + {\left\lvert {-+} \right\rangle} \right) & \mbox{s = 1 and m_s = 0} \\ {\left\lvert {--} \right\rangle} & \mbox{s = 1 and m_s = -1} \\ \frac{1}{{\sqrt{2}}} \left( {\left\lvert {+-} \right\rangle} - {\left\lvert {-+} \right\rangle} \right) & \mbox{s = 0 and m_s = 0}\end{array}\end{aligned} \hspace{\stretch{1}}(3.37)

The first three kets here can be grouped into a triplet in a 3D Hilbert space, whereas the last is treated as a singlet in a 1D Hilbert space.
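These assignments are easy to verify against the matrix 3.31 (my own check, hbar = 1):

```python
import numpy as np

S2 = np.array([[2, 0, 0, 0],
               [0, 1, 1, 0],
               [0, 1, 1, 0],
               [0, 0, 0, 2]], dtype=float)     # matrix 3.31, basis {++, +-, -+, --}
triplet0 = np.array([0, 1, 1, 0]) / np.sqrt(2)
singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)
assert np.allclose(S2 @ triplet0, 2 * triplet0)   # s = 1: eigenvalue s(s+1) = 2
assert np.allclose(S2 @ singlet, np.zeros(4))     # s = 0: eigenvalue 0
assert np.isclose(triplet0 @ singlet, 0.0)        # the two kets are orthogonal
```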

Form a grouping

\begin{aligned}H = H_1 \otimes H_2\end{aligned} \hspace{\stretch{1}}(3.38)

We can write

\begin{aligned}\frac{1}{{2}} \otimes \frac{1}{{2}} = 1 \oplus 0\end{aligned} \hspace{\stretch{1}}(3.39)

where the 1 and 0 here refer to the spin index s.

Other examples

Consider, perhaps, the l=5 state of the hydrogen atom

\begin{aligned}J_1^2 {\left\lvert {j_1 m_1} \right\rangle} &= j_1(j_1+1)\hbar^2 {\left\lvert {j_1 m_1} \right\rangle} \\ J_{1z} {\left\lvert {j_1 m_1} \right\rangle} &= \hbar m_1 {\left\lvert {j_1 m_1} \right\rangle} \end{aligned} \hspace{\stretch{1}}(3.40)

\begin{aligned}J_2^2 {\left\lvert {j_2 m_2} \right\rangle} &= j_2(j_2+1)\hbar^2 {\left\lvert {j_2 m_2} \right\rangle} \\ J_{2z} {\left\lvert {j_2 m_2} \right\rangle} &= \hbar m_2 {\left\lvert {j_2 m_2} \right\rangle} \end{aligned} \hspace{\stretch{1}}(3.42)

Consider the Hilbert space spanned by {\left\lvert {j_1 m_1} \right\rangle} \otimes {\left\lvert {j_2 m_2} \right\rangle}, a (2 j_1 + 1)(2 j_2 + 1) dimensional space. How do we find the eigenkets of J^2 and J_z?
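Although we haven't constructed those eigenkets yet, a dimension count already shows that the coupled kets can account for the whole product space, since \sum_{j = {\left\lvert {j_1 - j_2} \right\rvert}}^{j_1 + j_2} (2 j + 1) = (2 j_1 + 1)(2 j_2 + 1). A small check of mine:

```python
def dim_check(j1, j2):
    # work with doubled j values so half-integer spins stay integer
    two_jmin = int(round(abs(2 * j1 - 2 * j2)))
    two_jmax = int(round(2 * j1 + 2 * j2))
    total = sum(two_j + 1 for two_j in range(two_jmin, two_jmax + 1, 2))
    return total == int(round((2 * j1 + 1) * (2 * j2 + 1)))

assert dim_check(0.5, 0.5)   # 2 x 2 = 3 + 1
assert dim_check(1, 0.5)     # 3 x 2 = 4 + 2
assert dim_check(5, 0.5)     # the l = 5 hydrogen example: 11 x 2 = 12 + 10
assert dim_check(2, 1)       # 5 x 3 = 7 + 5 + 3
```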


[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

[2] Wikipedia. Spin–orbit interaction — Wikipedia, The Free Encyclopedia [online]. 2011. [Online; accessed 2-November-2011].


PHY456H1F: Quantum Mechanics II. Lecture 15 (Taught by Prof J.E. Sipe). Rotation operator in spin space

Posted by peeterjoot on October 31, 2011




Rotation operator in spin space.

We can formally expand our rotation operator in Taylor series

\begin{aligned}e^{-i \theta \hat{\mathbf{n}} \cdot \mathbf{S}/\hbar}= I +\left(-i \theta \hat{\mathbf{n}} \cdot \mathbf{S}/\hbar\right)+\frac{1}{{2!}}\left(-i \theta \hat{\mathbf{n}} \cdot \mathbf{S}/\hbar\right)^2+\frac{1}{{3!}}\left(-i \theta \hat{\mathbf{n}} \cdot \mathbf{S}/\hbar\right)^3+ \cdots\end{aligned} \hspace{\stretch{1}}(2.1)


In particular, for a spin one half particle, where \mathbf{S} \rightarrow \hbar \boldsymbol{\sigma}/2, we have

\begin{aligned}e^{-i \theta \hat{\mathbf{n}} \cdot \boldsymbol{\sigma}/2}&= I +\left(-i \theta \hat{\mathbf{n}} \cdot \boldsymbol{\sigma}/2\right)+\frac{1}{{2!}}\left(-i \theta \hat{\mathbf{n}} \cdot \boldsymbol{\sigma}/2\right)^2+\frac{1}{{3!}}\left(-i \theta \hat{\mathbf{n}} \cdot \boldsymbol{\sigma}/2\right)^3+ \cdots \\ &=\sigma_0 +\left(\frac{-i \theta}{2}\right) (\hat{\mathbf{n}} \cdot \boldsymbol{\sigma})+\frac{1}{{2!}} \left(\frac{-i \theta}{2}\right)^2 (\hat{\mathbf{n}} \cdot \boldsymbol{\sigma})^2+\frac{1}{{3!}} \left(\frac{-i \theta}{2}\right)^3 (\hat{\mathbf{n}} \cdot \boldsymbol{\sigma})^3+ \cdots \\ &=\sigma_0 +\left(\frac{-i \theta}{2}\right) (\hat{\mathbf{n}} \cdot \boldsymbol{\sigma})+\frac{1}{{2!}} \left(\frac{-i \theta}{2}\right)^2 \sigma_0+\frac{1}{{3!}} \left(\frac{-i \theta}{2}\right)^3 (\hat{\mathbf{n}} \cdot \boldsymbol{\sigma}) + \cdots \\ &=\sigma_0 \left( 1 - \frac{1}{{2!}}\left(\frac{\theta}{2}\right)^2 + \cdots \right) - i (\hat{\mathbf{n}} \cdot \boldsymbol{\sigma}) \left( \frac{\theta}{2} - \frac{1}{{3!}}\left(\frac{\theta}{2}\right)^3 + \cdots \right) \\ &=\cos(\theta/2) \sigma_0 - i \sin(\theta/2) (\hat{\mathbf{n}} \cdot \boldsymbol{\sigma})\end{aligned}

where we’ve used the fact that (\hat{\mathbf{n}} \cdot \boldsymbol{\sigma})^2 = \sigma_0.

So our representation of the rotation operator in spin space is

\begin{aligned}\begin{aligned}e^{-i \theta \hat{\mathbf{n}} \cdot \mathbf{S}/\hbar} &\rightarrow \cos(\theta/2) \sigma_0 - i \sin(\theta/2) (\hat{\mathbf{n}} \cdot \boldsymbol{\sigma}) \\ &=\cos(\theta/2) \sigma_0 - i \sin(\theta/2) \left(n_x \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix} + n_y \begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix} + n_z \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} \right) \\ &=\begin{bmatrix}\cos(\theta/2) -i n_z \sin(\theta/2) & -i (n_x -i n_y) \sin(\theta/2) \\ -i (n_x + i n_y) \sin(\theta/2) & \cos(\theta/2) +i n_z \sin(\theta/2) \end{bmatrix}\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.2)

Note that, in particular,

\begin{aligned}e^{-2 \pi i \hat{\mathbf{n}} \cdot \mathbf{S}/\hbar} \rightarrow \cos\pi \sigma_0 - i \sin\pi (\hat{\mathbf{n}} \cdot \boldsymbol{\sigma}) = -\sigma_0\end{aligned} \hspace{\stretch{1}}(2.3)

A full 2 \pi “rotation” returns the ket, but multiplied by a phase factor of -1.

This can be done in general for other values of the spin, s = 1/2, 3/2, 5/2, \cdots.
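This closed form is easy to spot check numerically. Here's a small sketch (my addition, not part of the lecture, using numpy and scipy) comparing a brute force matrix exponential against the half angle closed form, and checking that a full 2 \pi turn gives -\sigma_0:

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices and the 2x2 identity
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
s0 = np.eye(2, dtype=complex)

def closed_form(theta, n_hat):
    """cos(theta/2) sigma_0 - i sin(theta/2) (n . sigma)."""
    n_dot_sigma = n_hat[0]*sx + n_hat[1]*sy + n_hat[2]*sz
    return np.cos(theta/2)*s0 - 1j*np.sin(theta/2)*n_dot_sigma

theta = 0.7
n_hat = np.array([1.0, 2.0, -0.5])
n_hat /= np.linalg.norm(n_hat)
n_dot_sigma = n_hat[0]*sx + n_hat[1]*sy + n_hat[2]*sz

# brute force series summation (expm) vs the closed form
err = np.max(np.abs(expm(-1j*theta*n_dot_sigma/2) - closed_form(theta, n_hat)))

# a full 2 pi turn is not the identity, but -sigma_0
err_full_turn = np.max(np.abs(closed_form(2*np.pi, n_hat) + s0))
```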

Unfortunate interjection by me

I mentioned the half angle rotation operator that requires a half angle operator sandwich. Prof. Sipe thought I might be talking about a Heisenberg picture representation, where we have something like this in expectation values

\begin{aligned}{\left\lvert {\psi'} \right\rangle} = e^{-i \theta \hat{\mathbf{n}} \cdot \mathbf{J}/\hbar} {\left\lvert {\psi} \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.4)

so that

\begin{aligned}{\left\langle {\psi'} \right\rvert}\mathcal{O}{\left\lvert {\psi'} \right\rangle} = {\left\langle {\psi} \right\rvert} e^{i \theta \hat{\mathbf{n}} \cdot \mathbf{J}/\hbar} \mathcal{O}e^{-i \theta \hat{\mathbf{n}} \cdot \mathbf{J}/\hbar} {\left\lvert {\psi} \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.5)

However, what I was referring to, was that a general rotation of a vector in a Pauli matrix basis

\begin{aligned}R(\sum a_k \sigma_k) = R( \mathbf{a} \cdot \boldsymbol{\sigma})\end{aligned} \hspace{\stretch{1}}(2.6)

can be expressed by sandwiching the Pauli vector representation by two half angle rotation operators like our spin 1/2 operators from class today

\begin{aligned}R( \mathbf{a} \cdot \boldsymbol{\sigma}) = e^{-\theta \hat{\mathbf{u}} \cdot \boldsymbol{\sigma} \hat{\mathbf{v}} \cdot \boldsymbol{\sigma}/2} \mathbf{a} \cdot \boldsymbol{\sigma} e^{\theta \hat{\mathbf{u}} \cdot \boldsymbol{\sigma} \hat{\mathbf{v}} \cdot \boldsymbol{\sigma}/2}\end{aligned} \hspace{\stretch{1}}(2.7)

where \hat{\mathbf{u}} and \hat{\mathbf{v}} are two orthogonal unit vectors that define the oriented plane that we are rotating in.

For example, rotating in the x-y plane, with \hat{\mathbf{u}} = \hat{\mathbf{x}} and \hat{\mathbf{v}} = \hat{\mathbf{y}}, we have

\begin{aligned}R( \mathbf{a} \cdot \boldsymbol{\sigma}) = e^{-\theta \sigma_1 \sigma_2/2} (a_1 \sigma_1 + a_2 \sigma_2 + a_3 \sigma_3) e^{\theta \sigma_1 \sigma_2/2} \end{aligned} \hspace{\stretch{1}}(2.8)

Observe that these exponentials commute with \sigma_3, leaving

\begin{aligned}R( \mathbf{a} \cdot \boldsymbol{\sigma}) &= (a_1 \sigma_1 + a_2 \sigma_2) e^{\theta \sigma_1 \sigma_2} +  a_3 \sigma_3 \\ &= (a_1 \sigma_1 + a_2 \sigma_2) (\cos\theta + \sigma_1 \sigma_2 \sin\theta)+a_3 \sigma_3 \\ &= \sigma_1 (a_1 \cos\theta - a_2 \sin\theta)+ \sigma_2 (a_2 \cos\theta + a_1 \sin\theta)+ \sigma_3 (a_3)\end{aligned}

yielding our usual coordinate rotation matrix. Expressed in terms of the unit normal to that plane, we can form the bivector for the plane by multiplying the normal with the unit spatial volume element I = \sigma_1 \sigma_2 \sigma_3. For example:

\begin{aligned}\sigma_1 \sigma_2 \sigma_3( \sigma_3 )=\sigma_1 \sigma_2 \end{aligned} \hspace{\stretch{1}}(2.9)

We can in general write a spatial rotation in a Pauli basis representation as a sandwich of half angle rotation matrix exponentials

\begin{aligned}R( \mathbf{a} \cdot \boldsymbol{\sigma}) = e^{-I \theta (\hat{\mathbf{n}} \cdot \boldsymbol{\sigma})/2} (\mathbf{a} \cdot \boldsymbol{\sigma})e^{I \theta (\hat{\mathbf{n}} \cdot \boldsymbol{\sigma})/2} \end{aligned} \hspace{\stretch{1}}(2.10)

when \hat{\mathbf{n}} \cdot \mathbf{a} = 0 we get the complex-number-like single sided rotation exponential (since \mathbf{a} \cdot \boldsymbol{\sigma} anticommutes with \hat{\mathbf{n}} \cdot \boldsymbol{\sigma} in that case, allowing the left exponential to be flipped to the right with a sign change in the exponent)

\begin{aligned}R( \mathbf{a} \cdot \boldsymbol{\sigma}) = (\mathbf{a} \cdot \boldsymbol{\sigma} )e^{I \theta (\hat{\mathbf{n}} \cdot \boldsymbol{\sigma})} \end{aligned} \hspace{\stretch{1}}(2.11)
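As a numerical sanity check of the sandwich (numpy/scipy again, my addition), we can rotate \mathbf{a} \cdot \boldsymbol{\sigma} in the x-y plane and compare against the usual coordinate rotation:

```python
import numpy as np
from scipy.linalg import expm

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

theta = 0.3
a = np.array([1.0, 2.0, 3.0])
a_sigma = a[0]*s1 + a[1]*s2 + a[2]*s3

# half angle sandwich: e^{-theta s1 s2 / 2} (a . sigma) e^{theta s1 s2 / 2}
sandwich = expm(-theta*(s1 @ s2)/2) @ a_sigma @ expm(theta*(s1 @ s2)/2)

# the usual coordinate rotation of a in the x-y plane
a_rot = np.array([a[0]*np.cos(theta) - a[1]*np.sin(theta),
                  a[1]*np.cos(theta) + a[0]*np.sin(theta),
                  a[2]])
expected = a_rot[0]*s1 + a_rot[1]*s2 + a_rot[2]*s3
err = np.max(np.abs(sandwich - expected))
```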

I believe it was pointed out in one of [1] or [2] that rotations expressed in terms of half angle Pauli matrices have caused some confusion for students of quantum mechanics, because a 2 \pi “rotation” generates only half of a full spatial rotation. It was argued that this sort of confusion can be avoided if one observes that these half angle rotation exponentials are exactly what we require for general spatial rotations, and that a pair of half angle operators is required to produce a full spatial rotation.

The book [1] takes this a lot further, and produces a formulation of spin operators that is devoid of the normal scalar imaginary i (using the Clifford algebra spatial unit volume element instead), and also does not assume a specific matrix representation of the spin operators. They argue that this leads to some subtleties associated with interpretation, but at the time I was attempting to read that text I did not know enough QM to appreciate what they were doing, and haven’t had time to attempt a new study of that content.

Spin dynamics

At least classically, the angular momentum of charged objects is associated with a magnetic moment as illustrated in figure (\ref{fig:qmTwoL15:qmTwoL15fig1})

\caption{Magnetic moment due to steady state current}

\begin{aligned}\boldsymbol{\mu} = I A \mathbf{e}_\perp\end{aligned} \hspace{\stretch{1}}(3.12)

In our scheme, following the (cgs?) text conventions of [3], where the \mathbf{E} and \mathbf{B} have the same units, we write

\begin{aligned}\boldsymbol{\mu} = \frac{I A}{c} \mathbf{e}_\perp\end{aligned} \hspace{\stretch{1}}(3.13)

For a charge moving in a circle as in figure (\ref{fig:qmTwoL15:qmTwoL15fig2})
\caption{Charge moving in circle.}

\begin{aligned}\begin{aligned}I &= \frac{\text{charge}}{\text{time}} \\ &= \frac{\text{distance}}{\text{time}} \frac{\text{charge}}{\text{distance}} \\ &= \frac{q v}{ 2 \pi r}\end{aligned}\end{aligned} \hspace{\stretch{1}}(3.14)

so the magnetic moment is

\begin{aligned}\begin{aligned}\mu &= \frac{q v}{ 2 \pi r} \frac{\pi r^2}{c}  \\ &= \frac{q }{ 2 m c } (m v r) \\ &= \gamma L\end{aligned}\end{aligned} \hspace{\stretch{1}}(3.15)

Here \gamma = q/(2 m c) is the gyromagnetic ratio.

Recall that we have a torque, as shown in figure (\ref{fig:qmTwoL15:qmTwoL15fig3})
\caption{Induced torque in the presence of a magnetic field.}

\begin{aligned}\mathbf{T} = \boldsymbol{\mu} \times \mathbf{B}\end{aligned} \hspace{\stretch{1}}(3.16)

tending to line up \boldsymbol{\mu} with \mathbf{B}. The energy is then

\begin{aligned}-\boldsymbol{\mu} \cdot \mathbf{B}\end{aligned} \hspace{\stretch{1}}(3.17)

Also recall that this torque leads to precession as shown in figure (\ref{fig:qmTwoL15:qmTwoL15fig4})

\begin{aligned}\frac{d{\mathbf{L}}}{dt} = \mathbf{T} = \gamma \mathbf{L} \times \mathbf{B},\end{aligned} \hspace{\stretch{1}}(3.18)

\caption{Precession due to torque.}

with precession frequency

\begin{aligned}\boldsymbol{\omega} = - \gamma \mathbf{B}.\end{aligned} \hspace{\stretch{1}}(3.19)

For a current due to a moving electron

\begin{aligned}\gamma = -\frac{e}{2 m c} < 0\end{aligned} \hspace{\stretch{1}}(3.20)

where we write the charge on the electron as -e.
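Plugging in CGS values (the constants below are typed in by hand, so treat them as approximate; they are my addition) gives the classical orbital precession rate per unit field:

```python
# approximate CGS constants (entered by hand, not from the notes)
e = 4.80320425e-10   # electron charge magnitude [esu]
m = 9.1093837e-28    # electron mass [g]
c = 2.99792458e10    # speed of light [cm/s]

gamma_magnitude = e / (2*m*c)   # |gamma| = e/(2 m c)

# precession (Larmor) rate omega = |gamma| B, per gauss of applied field:
omega_per_gauss = gamma_magnitude * 1.0   # roughly 8.8e6 rad/s per gauss
```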

Question: Is this true only for steady state currents? Yes, this is only true for steady state currents.

For the translational motion of an electron, even if it is not moving in a steady way, regardless of its dynamics

\begin{aligned}\boldsymbol{\mu}_0 = - \frac{e}{2 m c} \mathbf{L}\end{aligned} \hspace{\stretch{1}}(3.21)

Now, back to quantum mechanics: we turn \boldsymbol{\mu}_0 into a dipole moment operator, and \mathbf{L} is “promoted” to an angular momentum operator.

\begin{aligned}H_{\text{int}} = - \boldsymbol{\mu}_0 \cdot \mathbf{B}\end{aligned} \hspace{\stretch{1}}(3.22)

What about the “spin”?


\begin{aligned}\boldsymbol{\mu}_s = \gamma_s \mathbf{S}\end{aligned} \hspace{\stretch{1}}(3.23)

we write this as

\begin{aligned}\boldsymbol{\mu}_s = g \left( -\frac{e}{ 2 m c} \right)\mathbf{S}\end{aligned} \hspace{\stretch{1}}(3.24)

so that

\begin{aligned}\gamma_s = - \frac{g e}{ 2 m c} \end{aligned} \hspace{\stretch{1}}(3.25)

Experimentally, one finds to very good approximation

\begin{aligned}g = 2\end{aligned} \hspace{\stretch{1}}(3.26)

There was a lot of trouble with this in early quantum mechanics where people got things wrong, and canceled the wrong factors of 2.

In fact, Dirac’s relativistic theory for the electron predicts g=2.

When this is measured experimentally, one does not find exactly g=2; accounting for the difference requires a theory that also incorporates photon creation and destruction, and the interaction of the electron with such (virtual) photons. We get

\begin{aligned}\begin{aligned}g_{\text{theory}} &= 2 \left(1.001159652140 (\pm 28)\right) \\ g_{\text{experimental}} &= 2 \left(1.0011596521884 (\pm 43)\right)\end{aligned}\end{aligned} \hspace{\stretch{1}}(3.27)

Richard Feynman compared the precision of quantum mechanics, referring to this measurement, “to predicting a distance as great as the width of North America to an accuracy of one human hair’s breadth”.


[1] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

[2] D. Hestenes. New Foundations for Classical Mechanics. Kluwer Academic Publishers, 1999.

[3] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

Posted in Math and Physics Learning. | Tagged: , , , , , , , , , , , | Leave a Comment »

PHY456H1F: Quantum Mechanics II. Lecture 14 (Taught by Prof J.E. Sipe). Representation of two state kets and Pauli spin matrices.

Posted by peeterjoot on October 26, 2011

[Click here for a PDF of this post with nicer formatting and figures if the post had any (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]


Peeter’s lecture notes from class. May not be entirely coherent.

Representation of kets.

Reading: section 5.1 – section 5.9 and section 26 in [1].

We found the representations of the spin operators

\begin{aligned}S_x &\rightarrow \frac{\hbar}{2} \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix} \\ S_y &\rightarrow \frac{\hbar}{2} \begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix} \\ S_z &\rightarrow \frac{\hbar}{2} \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(2.1)

How about kets? For example for {\left\lvert {\chi} \right\rangle} \in H_s

\begin{aligned}{\left\lvert {\chi} \right\rangle} \rightarrow \begin{bmatrix}\left\langle{{+}} \vert {{\chi}}\right\rangle \\ \left\langle{{-}} \vert {{\chi}}\right\rangle\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(2.4)


\begin{aligned}{\left\lvert {+} \right\rangle} &\rightarrow \begin{bmatrix}1 \\ 0\end{bmatrix} \\ {\left\lvert {-} \right\rangle} &\rightarrow \begin{bmatrix}0 \\ 1\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(2.5)

So, for example

\begin{aligned}S_y{\left\lvert {+} \right\rangle} \rightarrow \frac{\hbar}{2} \begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix}\begin{bmatrix}1 \\ 0\end{bmatrix} =\frac{i\hbar}{2}\begin{bmatrix}0 \\ 1\end{bmatrix} \end{aligned} \hspace{\stretch{1}}(2.7)
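This sort of matrix arithmetic is trivial to verify with numpy (my addition, working in units where \hbar = 1):

```python
import numpy as np

hbar = 1.0  # work in units where hbar = 1
Sy = (hbar/2) * np.array([[0, -1j], [1j, 0]])
ket_plus = np.array([1, 0], dtype=complex)
ket_minus = np.array([0, 1], dtype=complex)

# S_y |+> = (i hbar / 2) |->
err = np.max(np.abs(Sy @ ket_plus - (1j*hbar/2)*ket_minus))
```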

Kets in H_o \otimes H_s

\begin{aligned}{\left\lvert {\psi} \right\rangle} \rightarrow \begin{bmatrix}\left\langle{{\mathbf{r}+}} \vert {{\psi}}\right\rangle \\ \left\langle{{\mathbf{r}-}} \vert {{\psi}}\right\rangle\end{bmatrix}=\begin{bmatrix}\psi_{+}(\mathbf{r}) \\ \psi_{-}(\mathbf{r})\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(2.8)

This is a “spinor”


\begin{aligned}\begin{aligned}\left\langle{{\mathbf{r} \pm}} \vert {{\psi}}\right\rangle&= \psi_{\pm}(\mathbf{r}) \\ \begin{bmatrix}\psi_{+}(\mathbf{r}) \\ \psi_{-}(\mathbf{r})\end{bmatrix}&= \psi_{+}(\mathbf{r}) \begin{bmatrix}1 \\ 0\end{bmatrix}+\psi_{-}(\mathbf{r}) \begin{bmatrix}0 \\ 1 \end{bmatrix}\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.9)


\begin{aligned}\left\langle{{\psi}} \vert {{\psi}}\right\rangle = 1\end{aligned} \hspace{\stretch{1}}(2.10)


\begin{aligned}\begin{aligned}I &= I_o \otimes I_s \\ &= \int d^3 \mathbf{r} {\left\lvert {\mathbf{r}} \right\rangle}{\left\langle {\mathbf{r}} \right\rvert} \otimes \left( {\left\lvert {{+}} \right\rangle}{\left\langle {{+}} \right\rvert}+{\left\lvert {{-}} \right\rangle}{\left\langle {{-}} \right\rvert}\right) \\ &=\int d^3 \mathbf{r} {\left\lvert {\mathbf{r}} \right\rangle}{\left\langle {\mathbf{r}} \right\rvert} \otimes \sum_{\sigma=\pm} {\left\lvert {{\sigma}} \right\rangle}{\left\langle {{\sigma}} \right\rvert} \\ &=\sum_{\sigma = \pm} \int d^3 \mathbf{r} {\left\lvert {{\mathbf{r} \sigma}} \right\rangle}{\left\langle {{\mathbf{r} \sigma}} \right\rvert} \end{aligned}\end{aligned} \hspace{\stretch{1}}(2.11)


\begin{aligned}\begin{aligned}{\left\langle {\psi} \right\rvert} I {\left\lvert {\psi} \right\rangle} &= \sum_{\sigma = \pm} \int d^3 \mathbf{r} \left\langle{{\psi}} \vert {{\mathbf{r} \sigma}}\right\rangle \left\langle{{\mathbf{r} \sigma}} \vert {{\psi}}\right\rangle  \\ &= \int d^3 \mathbf{r} \left( {\left\lvert{\psi_{+}(\mathbf{r})}\right\rvert}^2+{\left\lvert{\psi_{-}(\mathbf{r})}\right\rvert}^2\right)\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.12)


\begin{aligned}\begin{aligned}{\left\lvert {\psi} \right\rangle} &= I {\left\lvert {\psi} \right\rangle} \\ &=\int d^3 \mathbf{r} \sum_{\sigma = \pm} {\left\lvert {\mathbf{r} \sigma} \right\rangle}\left\langle{{\mathbf{r} \sigma}} \vert {{\psi}}\right\rangle \\ &=\sum_{\sigma = \pm} \int d^3 \mathbf{r} \psi_\sigma(\mathbf{r}) {\left\lvert {\mathbf{r} \sigma} \right\rangle} \\ &=\sum_{\sigma = \pm} \left(\int d^3 \mathbf{r} \psi_\sigma(\mathbf{r}) {\left\lvert {\mathbf{r}} \right\rangle}\right)\otimes {\left\lvert {\sigma} \right\rangle} \end{aligned}\end{aligned} \hspace{\stretch{1}}(2.13)

The bracketed integral is a ket in H_o; let’s call it

\begin{aligned}{\left\lvert {\psi_\sigma} \right\rangle} = \int d^3 \mathbf{r} \psi_\sigma(\mathbf{r}) {\left\lvert {\mathbf{r}} \right\rangle},\end{aligned} \hspace{\stretch{1}}(2.14)


\begin{aligned}{\left\lvert {\psi} \right\rangle} = {\left\lvert {\psi_{+}} \right\rangle} {\left\lvert {+} \right\rangle} + {\left\lvert {\psi_{-}} \right\rangle} {\left\lvert {-} \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.15)

where the direct product \otimes is implied.

We can form a ket in H_s as

\begin{aligned}\left\langle{\mathbf{r}} \vert {{\psi}}\right\rangle = \psi_{+}(\mathbf{r}) {\left\lvert {+} \right\rangle} + \psi_{-}(\mathbf{r}) {\left\lvert {-} \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.16)

An operator O_o which acts on H_o alone can be promoted to O_o \otimes I_s, which is now an operator that acts on H_o \otimes H_s. We are sometimes a little cavalier in notation and leave this off, but we should remember this.

\begin{aligned}O_o {\left\lvert {\psi} \right\rangle} = (O_o {\left\lvert {\psi+} \right\rangle}) {\left\lvert {+} \right\rangle}+ (O_o {\left\lvert {\psi-} \right\rangle}) {\left\lvert {-} \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.17)

and likewise

\begin{aligned}O_s {\left\lvert {\psi} \right\rangle} = {\left\lvert {\psi+} \right\rangle} (O_s {\left\lvert {+} \right\rangle})+{\left\lvert {\psi-} \right\rangle} (O_s {\left\lvert {-} \right\rangle})\end{aligned} \hspace{\stretch{1}}(2.18)


\begin{aligned}O_o O_s {\left\lvert {\psi} \right\rangle} = (O_o {\left\lvert {\psi+} \right\rangle}) (O_s {\left\lvert {+} \right\rangle})+(O_o {\left\lvert {\psi-} \right\rangle}) (O_s {\left\lvert {-} \right\rangle})\end{aligned} \hspace{\stretch{1}}(2.19)
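For finite dimensional toy spaces the direct product structure is just a Kronecker product. A small numpy sketch (my addition, using a 3-dimensional stand-in for H_o) verifying that the promoted operator acting on a product state factors as above:

```python
import numpy as np

rng = np.random.default_rng(0)
O_o = rng.normal(size=(3, 3))                    # operator on a toy 3d "H_o"
O_s = np.array([[0, 1], [1, 0]], dtype=complex)  # operator on H_s (sigma_x)

psi_o = rng.normal(size=3)                       # ket in H_o
chi = np.array([0.6, 0.8], dtype=complex)        # ket in H_s

# promoted operator and product state on H_o (x) H_s
lhs = np.kron(O_o, O_s) @ np.kron(psi_o, chi)

# acting on each factor separately, then forming the product
rhs = np.kron(O_o @ psi_o, O_s @ chi)
err = np.max(np.abs(lhs - rhs))
```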

Suppose we want to rotate a ket, we do this with a full angular momentum operator

\begin{aligned}e^{-i \theta \hat{\mathbf{n}} \cdot \mathbf{J}/\hbar} {\left\lvert {\psi} \right\rangle}=e^{-i \theta \hat{\mathbf{n}} \cdot \mathbf{L}/\hbar} e^{-i \theta \hat{\mathbf{n}} \cdot \mathbf{S}/\hbar} {\left\lvert {\psi} \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.20)

(recalling that \mathbf{L} and \mathbf{S} commute)


\begin{aligned}e^{-i \theta \hat{\mathbf{n}} \cdot \mathbf{J}/\hbar} {\left\lvert {\psi} \right\rangle}=(e^{-i \theta \hat{\mathbf{n}} \cdot \mathbf{L}/\hbar} {\left\lvert {\psi+} \right\rangle}) (e^{-i \theta \hat{\mathbf{n}} \cdot \mathbf{S}/\hbar} {\left\lvert {+} \right\rangle})+(e^{-i \theta \hat{\mathbf{n}} \cdot \mathbf{L}/\hbar} {\left\lvert {\psi-} \right\rangle}) (e^{-i \theta \hat{\mathbf{n}} \cdot \mathbf{S}/\hbar} {\left\lvert {-} \right\rangle})\end{aligned} \hspace{\stretch{1}}(2.21)

A simple example.

\begin{aligned}{\left\lvert {\psi} \right\rangle} = {\left\lvert {\psi_+} \right\rangle} {\left\lvert {+} \right\rangle}+{\left\lvert {\psi_-} \right\rangle} {\left\lvert {-} \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.22)


\begin{aligned}{\left\lvert {\psi_+} \right\rangle} &= \alpha {\left\lvert {\psi_0} \right\rangle} \\ {\left\lvert {\psi_-} \right\rangle} &= \beta {\left\lvert {\psi_0} \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.23)


\begin{aligned}{\left\lvert{\alpha}\right\rvert}^2 + {\left\lvert{\beta}\right\rvert}^2 = 1\end{aligned} \hspace{\stretch{1}}(2.25)


\begin{aligned}{\left\lvert {\psi} \right\rangle} = {\left\lvert {\psi_0} \right\rangle} {\left\lvert {\chi} \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.26)


\begin{aligned}{\left\lvert {\chi} \right\rangle} = \alpha {\left\lvert {+} \right\rangle} + \beta {\left\lvert {-} \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.27)


\begin{aligned}\left\langle{{\psi}} \vert {{\psi}}\right\rangle = 1,\end{aligned} \hspace{\stretch{1}}(2.28)

\begin{aligned}\left\langle{{\psi_0}} \vert {{\psi_0}}\right\rangle \left\langle{{\chi}} \vert {{\chi}}\right\rangle  = 1\end{aligned} \hspace{\stretch{1}}(2.29)


\begin{aligned}\left\langle{{\psi_0}} \vert {{\psi_0}}\right\rangle = 1\end{aligned} \hspace{\stretch{1}}(2.30)

We are going to concentrate on the unentangled state of 2.26.

How about with

\begin{aligned}{\left\lvert{\alpha}\right\rvert}^2 = 1, \beta = 0\end{aligned} \hspace{\stretch{1}}(2.31)

{\left\lvert {\chi} \right\rangle} is an eigenket of S_z with eigenvalue \hbar/2.

\begin{aligned}{\left\lvert{\beta}\right\rvert}^2 = 1, \alpha = 0\end{aligned} \hspace{\stretch{1}}(2.32)

{\left\lvert {\chi} \right\rangle} is an eigenket of S_z with eigenvalue -\hbar/2.

What is {\left\lvert {\chi} \right\rangle} if it is an eigenket of \hat{\mathbf{n}} \cdot \mathbf{S}?

FIXME: F1: standard spherical projection picture, with \hat{\mathbf{n}} projected down onto the x,y plane at angle \phi and at an angle \theta from the z axis.

The eigenvalues will still be \pm \hbar/2 since there is nothing special about the z direction.

\begin{aligned}\begin{aligned}\hat{\mathbf{n}} \cdot \mathbf{S} &= n_x S_x+n_y S_y+n_z S_z \\ &\rightarrow\frac{\hbar}{2} \begin{bmatrix}n_z & n_x - i n_y \\ n_x + i n_y & -n_z\end{bmatrix} \\ &=\frac{\hbar}{2} \begin{bmatrix}\cos\theta & \sin\theta e^{-i\phi} \\ \sin\theta e^{i\phi} & -\cos\theta\end{bmatrix}\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.33)

To find the eigenkets we diagonalize this, and we find representations of the eigenkets are

\begin{aligned}{\left\lvert {\hat{\mathbf{n}}+} \right\rangle} &\rightarrow \begin{bmatrix}\cos\left(\frac{\theta}{2}\right) e^{-i\phi/2} \\ \sin\left(\frac{\theta}{2}\right) e^{i\phi/2} \end{bmatrix} \\ {\left\lvert {\hat{\mathbf{n}}-} \right\rangle} &\rightarrow \begin{bmatrix}-\sin\left(\frac{\theta}{2}\right) e^{-i\phi/2} \\ \cos\left(\frac{\theta}{2}\right) e^{i\phi/2} \end{bmatrix},\end{aligned} \hspace{\stretch{1}}(2.34)

with eigenvalues \hbar/2 and -\hbar/2 respectively.
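These eigenkets can be checked directly (a numpy sketch, my addition, with \hbar = 1):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

theta, phi = 0.9, 1.3
n = np.array([np.sin(theta)*np.cos(phi),
              np.sin(theta)*np.sin(phi),
              np.cos(theta)])
nS = (n[0]*sx + n[1]*sy + n[2]*sz)/2   # n . S with hbar = 1

up = np.array([np.cos(theta/2)*np.exp(-1j*phi/2),
               np.sin(theta/2)*np.exp(1j*phi/2)])
down = np.array([-np.sin(theta/2)*np.exp(-1j*phi/2),
                 np.cos(theta/2)*np.exp(1j*phi/2)])

err_up = np.max(np.abs(nS @ up - 0.5*up))        # eigenvalue +hbar/2
err_down = np.max(np.abs(nS @ down + 0.5*down))  # eigenvalue -hbar/2
```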

So in the abstract notation, tossing the specific representation, we have

\begin{aligned}{\left\lvert {\hat{\mathbf{n}}+} \right\rangle} &= \cos\left(\frac{\theta}{2}\right) e^{-i\phi/2} {\left\lvert {+} \right\rangle}+\sin\left(\frac{\theta}{2}\right) e^{i\phi/2}  {\left\lvert {-} \right\rangle} \\ {\left\lvert {\hat{\mathbf{n}}-} \right\rangle} &= -\sin\left(\frac{\theta}{2}\right) e^{-i\phi/2} {\left\lvert {+} \right\rangle}+\cos\left(\frac{\theta}{2}\right) e^{i\phi/2}  {\left\lvert {-} \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.36)

Representation of two state kets

Every ket

\begin{aligned}{\left\lvert {\chi} \right\rangle} \rightarrow \begin{bmatrix}\alpha \\ \beta\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.38)

for which

\begin{aligned}{\left\lvert{\alpha}\right\rvert}^2 + {\left\lvert{\beta}\right\rvert}^2 = 1\end{aligned} \hspace{\stretch{1}}(3.39)

can be written in the form 2.34 for some \theta and \phi, neglecting an overall phase factor.

For any ket in H_s, that ket is “spin up” in some direction.

FIXME: show this.

Pauli spin matrices.

It is useful to write

\begin{aligned}S_x &= \frac{\hbar}{2} \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix} \equiv \frac{\hbar}{2} \sigma_x \\ S_y &= \frac{\hbar}{2} \begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix} \equiv \frac{\hbar}{2} \sigma_y \\ S_z &= \frac{\hbar}{2} \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} \equiv \frac{\hbar}{2} \sigma_z \end{aligned} \hspace{\stretch{1}}(4.40)


\begin{aligned}\sigma_x &= \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix} \\ \sigma_y &= \begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix} \\ \sigma_z &= \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(4.43)

These are the Pauli spin matrices.

Interesting properties.


\begin{aligned}\left\{{\sigma_i},{\sigma_j}\right\} = \sigma_i \sigma_j + \sigma_j \sigma_i = 0, \qquad \mbox{if } i \ne j\end{aligned} \hspace{\stretch{1}}(4.46)


\begin{aligned}\sigma_x \sigma_y = i \sigma_z\end{aligned} \hspace{\stretch{1}}(4.47)

(and cyclic permutations)


\begin{aligned}\text{Tr}(\sigma_i) = 0\end{aligned} \hspace{\stretch{1}}(4.48)


\begin{aligned}(\hat{\mathbf{n}} \cdot \boldsymbol{\sigma})^2 = \sigma_0\end{aligned} \hspace{\stretch{1}}(4.49)


\begin{aligned}\hat{\mathbf{n}} \cdot \boldsymbol{\sigma} \equiv n_x \sigma_x + n_y \sigma_y + n_z \sigma_z,\end{aligned} \hspace{\stretch{1}}(4.50)


\begin{aligned}\sigma_0 = \begin{bmatrix}1 & 0 \\ 0 & 1\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(4.51)

(note \text{Tr}(\sigma_0) \ne 0)


\begin{aligned}\left\{{\sigma_i},{\sigma_j}\right\} &= 2 \delta_{ij} \sigma_0 \\ \left[{\sigma_x},{\sigma_y}\right] &= 2 i \sigma_z\end{aligned} \hspace{\stretch{1}}(4.52)

(and cyclic permutations of the latter).

Can combine these to show that

\begin{aligned}(\mathbf{A} \cdot \boldsymbol{\sigma})(\mathbf{B} \cdot \boldsymbol{\sigma})=(\mathbf{A} \cdot \mathbf{B}) \sigma_0 + i (\mathbf{A} \times \mathbf{B}) \cdot \boldsymbol{\sigma}\end{aligned} \hspace{\stretch{1}}(4.54)

where \mathbf{A} and \mathbf{B} are vectors (or more generally operators that commute with the \boldsymbol{\sigma} matrices).
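This identity is also easy to spot check numerically (numpy, my addition):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
s0 = np.eye(2, dtype=complex)

def dot_sigma(v):
    """v . sigma for a 3-vector v."""
    return v[0]*sx + v[1]*sy + v[2]*sz

A = np.array([1.0, -2.0, 0.5])
B = np.array([0.3, 4.0, -1.0])

# (A . sigma)(B . sigma) = (A . B) sigma_0 + i (A x B) . sigma
lhs = dot_sigma(A) @ dot_sigma(B)
rhs = np.dot(A, B)*s0 + 1j*dot_sigma(np.cross(A, B))
err = np.max(np.abs(lhs - rhs))
```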


\begin{aligned}\text{Tr}(\sigma_i \sigma_j) = 2 \delta_{ij}\end{aligned} \hspace{\stretch{1}}(4.55)


\begin{aligned}\text{Tr}(\sigma_\alpha \sigma_\beta) = 2 \delta_{\alpha \beta},\end{aligned} \hspace{\stretch{1}}(4.56)

where \alpha, \beta = 0, x, y, z

Note that any complex 2 \times 2 matrix M can be written as

\begin{aligned}\begin{aligned}M &= \sum_\alpha m_\alpha \sigma_\alpha \\   &=\begin{bmatrix}m_0 + m_z & m_x - i m_y \\ m_x + i m_y & m_0 - m_z\end{bmatrix}\end{aligned}\end{aligned} \hspace{\stretch{1}}(4.57)

for any four complex numbers m_0, m_x, m_y, m_z, with the coefficients recovered as


\begin{aligned}m_\beta = \frac{1}{{2}} \text{Tr}(M \sigma_\beta).\end{aligned} \hspace{\stretch{1}}(4.58)
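A numerical check of this decomposition (numpy, my addition): build a random complex matrix, extract the coefficients by trace, and reconstruct:

```python
import numpy as np

sigmas = [np.eye(2, dtype=complex),                      # sigma_0
          np.array([[0, 1], [1, 0]], dtype=complex),     # sigma_x
          np.array([[0, -1j], [1j, 0]], dtype=complex),  # sigma_y
          np.array([[1, 0], [0, -1]], dtype=complex)]    # sigma_z

rng = np.random.default_rng(1)
M = rng.normal(size=(2, 2)) + 1j*rng.normal(size=(2, 2))

# m_beta = (1/2) Tr(M sigma_beta)
m = [0.5*np.trace(M @ s) for s in sigmas]
reconstructed = sum(c*s for c, s in zip(m, sigmas))
err = np.max(np.abs(M - reconstructed))
```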


[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

Posted in Math and Physics Learning. | Tagged: , , , | Leave a Comment »

PHY456H1F: Quantum Mechanics II. Lecture 11 (Taught by Prof J.E. Sipe). Spin and Spinors

Posted by peeterjoot on October 17, 2011

[Click here for a PDF of this post with nicer formatting and figures if the post had any (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]


Peeter’s lecture notes from class. May not be entirely coherent.


Covered in section 26 of the text [1].

Example: Time translation

\begin{aligned}{\lvert {\psi(t)} \rangle} = e^{-i H t/\hbar} {\lvert {\psi(0)} \rangle} .\end{aligned} \hspace{\stretch{1}}(2.1)

The Hamiltonian “generates” evolution (or translation) in time.

Example: Spatial translation

\begin{aligned}{\lvert {\mathbf{r} + \mathbf{a}} \rangle} = e^{-i \mathbf{a} \cdot \mathbf{P}/\hbar} {\lvert {\mathbf{r}} \rangle}.\end{aligned} \hspace{\stretch{1}}(2.2)

\caption{Vector translation.}


\mathbf{P} is the operator that generates translations. Written out, we have

\begin{aligned}\begin{aligned}e^{-i \mathbf{a} \cdot \mathbf{P}/\hbar} &= e^{- i (a_x P_x + a_y P_y + a_z P_z)/\hbar} \\ &= e^{- i a_x P_x/\hbar}e^{- i a_y P_y/\hbar}e^{- i a_z P_z/\hbar},\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.3)

where the factorization was possible because P_x, P_y, and P_z commute

\begin{aligned}\left[{P_i},{P_j}\right] = 0,\end{aligned} \hspace{\stretch{1}}(2.4)

for any i, j (including i = j, as I dumbly questioned in class … this is a commutator, so \left[{P_i},{P_i}\right] = P_i P_i - P_i P_i = 0).

The fact that the P_i commute means that successive translations can be done in any order and have the same result.

In class we were rewarded with a graphic demo of translation component commutation as Professor Sipe pulled a giant wood carving of a cat (or tiger?) out from beside the desk and proceeded to translate it around on the desk in two different orders, with the cat ending up in the same place each time.

Exponential commutation.

Note that in general

\begin{aligned}e^{A + B} \ne e^A e^B,\end{aligned} \hspace{\stretch{1}}(2.5)

unless \left[{A},{B}\right] = 0. To show this one can compare

\begin{aligned}\begin{aligned}e^{A + B} &= 1 + A + B + \frac{1}{{2}}(A + B)^2 + \cdots \\ &= 1 + A + B + \frac{1}{{2}}(A^2 + A B + BA + B^2) + \cdots \\ \end{aligned}\end{aligned} \hspace{\stretch{1}}(2.6)


\begin{aligned}\begin{aligned}e^A e^B &= \left(1 + A + \frac{1}{{2}}A^2 + \cdots\right)\left(1 + B + \frac{1}{{2}}B^2 + \cdots\right) \\ &= 1 + A + B + \frac{1}{{2}}( A^2 + 2 A B + B^2 ) + \cdots\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.7)

Comparing the second order (for example) we see that we must have for equality

\begin{aligned}A B + B A = 2 A B,\end{aligned} \hspace{\stretch{1}}(2.8)


\begin{aligned}B A = A B,\end{aligned} \hspace{\stretch{1}}(2.9)


\begin{aligned}\left[{A},{B}\right] = 0\end{aligned} \hspace{\stretch{1}}(2.10)
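A quick matrix check of both cases (numpy/scipy, my addition), using a commuting diagonal pair and the non-commuting pair \sigma_x, \sigma_z:

```python
import numpy as np
from scipy.linalg import expm

# commuting pair: diagonal matrices, where e^{A+B} = e^A e^B holds
A = np.diag([1.0, 2.0])
B = np.diag([-0.5, 3.0])
err_commuting = np.max(np.abs(expm(A + B) - expm(A) @ expm(B)))

# non-commuting pair: sigma_x and sigma_z, where it fails
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
err_noncommuting = np.max(np.abs(expm(X + Z) - expm(X) @ expm(Z)))
```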

Translating a ket

If we consider the quantity

\begin{aligned}e^{-i \mathbf{a} \cdot \mathbf{P}/\hbar} {\lvert {\psi} \rangle} = {\lvert {\psi'} \rangle} ,\end{aligned} \hspace{\stretch{1}}(2.11)

does this ket “translated” by \mathbf{a} make any sense? The vector \mathbf{a} lives in a 3D space and our ket {\lvert {\psi} \rangle} lives in Hilbert space. A quantity like this deserves some careful thought and is the subject of some such thought in the Interpretations of Quantum mechanics course. For now, we can think of the operator and ket as a “gadget” that prepares a state.

A student in class pointed out that {\lvert {\psi} \rangle} can be dependent on many degrees of freedom, for example, the positions of eight different particles. This translation gadget in such a case acts on the whole kit and kaboodle.

Now consider the matrix element

\begin{aligned}\left\langle{\mathbf{r}} \vert {{\psi'}}\right\rangle = {\langle {\mathbf{r}} \rvert} e^{-i \mathbf{a} \cdot \mathbf{P}/\hbar} {\lvert {\psi} \rangle}.\end{aligned} \hspace{\stretch{1}}(2.12)

Note that

\begin{aligned}{\langle {\mathbf{r}} \rvert} e^{-i \mathbf{a} \cdot \mathbf{P}/\hbar} &= \left( e^{i \mathbf{a} \cdot \mathbf{P}/\hbar} {\lvert {\mathbf{r}} \rangle} \right)^\dagger \\ &= \left( {\lvert {\mathbf{r} - \mathbf{a}} \rangle} \right)^\dagger,\end{aligned}


\begin{aligned}\left\langle{\mathbf{r}} \vert {{\psi'}}\right\rangle = \left\langle{{\mathbf{r} -\mathbf{a}}} \vert {{\psi}}\right\rangle,\end{aligned} \hspace{\stretch{1}}(2.13)


\begin{aligned}\psi'(\mathbf{r}) = \psi(\mathbf{r} - \mathbf{a})\end{aligned} \hspace{\stretch{1}}(2.14)

This is what we expect of a translated function, as illustrated in figure (\ref{fig:qmTwoL11:qmTwoL11fig2})
\caption{Active spatial translation.}
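The shift property \psi'(\mathbf{r}) = \psi(\mathbf{r} - \mathbf{a}) can be demonstrated in one dimension by applying e^{-i a P/\hbar} in the momentum representation, where P is just multiplication by p (a discretized FFT sketch, my addition, with \hbar = 1; the grid parameters are arbitrary):

```python
import numpy as np

# 1d grid; translate a Gaussian by a using exp(-i a p)
N, box, a = 512, 40.0, 3.0
x = np.linspace(-box/2, box/2, N, endpoint=False)
p = 2*np.pi*np.fft.fftfreq(N, d=x[1] - x[0])  # angular momentum-space grid

psi = np.exp(-x**2)                           # psi(x)
psi_shifted = np.fft.ifft(np.exp(-1j*a*p) * np.fft.fft(psi))

# compare against psi(x - a)
err = np.max(np.abs(psi_shifted - np.exp(-(x - a)**2)))
```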

Example: Spatial rotation

We’ve been introduced to the angular momentum operator

\begin{aligned}\mathbf{L} = \mathbf{R} \times \mathbf{P},\end{aligned} \hspace{\stretch{1}}(2.15)


\begin{aligned}L_x &= Y P_z - Z P_y \\ L_y &= Z P_x - X P_z \\ L_z &= X P_y - Y P_x.\end{aligned} \hspace{\stretch{1}}(2.16)

We also found that

\begin{aligned}\left[{L_i},{L_j}\right] = i \hbar \sum_k \epsilon_{ijk} L_k.\end{aligned} \hspace{\stretch{1}}(2.19)

These non-zero commutators show that the components of angular momentum do not commute.


\begin{aligned}{\lvert {\mathcal{R}(\mathbf{r})} \rangle} = e^{-i \theta \hat{\mathbf{n}} \cdot \mathbf{L}/\hbar}{\lvert {\mathbf{r}} \rangle} .\end{aligned} \hspace{\stretch{1}}(2.20)

This is the vector that we get by actively rotating the vector \mathbf{r} by an angle \theta counterclockwise about \hat{\mathbf{n}}, as in figure (\ref{fig:qmTwoL11:qmTwoL11fig3})

\caption{Active vector rotations}

An active rotation rotates the vector, leaving the coordinate system fixed, whereas a passive rotation is one for which the coordinate system is rotated, and the vector is left fixed.

Note that rotations do not commute. Suppose that we have a pair of rotations as in figure (\ref{fig:qmTwoL11:qmTwoL11fig4})
\caption{A example pair of non-commuting rotations.}

Again, we get the graphic demo, with Professor Sipe rotating the big wooden cat sculpture. Did he bring that in to class just to make this point? (Too bad I missed the first couple minutes of the lecture.)

Rather amusingly, he points out that most things in life do not commute. We get much different results if we apply the operations of putting water into the teapot and turning on the stove in different orders.

Rotating a ket

With a rotation gadget

\begin{aligned}{\lvert {\psi'} \rangle} = e^{-i \theta \hat{\mathbf{n}} \cdot \mathbf{L}/\hbar }{\lvert {\psi} \rangle},\end{aligned} \hspace{\stretch{1}}(2.21)

we can form the matrix element

\begin{aligned}\left\langle{\mathbf{r}} \vert {{\psi'}}\right\rangle = {\langle {\mathbf{r}} \rvert} e^{-i \theta \hat{\mathbf{n}} \cdot \mathbf{L}/\hbar }{\lvert {\psi} \rangle}.\end{aligned} \hspace{\stretch{1}}(2.22)

In this we have

\begin{aligned}{\langle {\mathbf{r}} \rvert} e^{-i \theta \hat{\mathbf{n}} \cdot \mathbf{L}/\hbar }&=\left( e^{i \theta \hat{\mathbf{n}} \cdot \mathbf{L}/\hbar } {\lvert {\mathbf{r}} \rangle} \right)^\dagger \\ &=\left( {\lvert {\mathcal{R}^{-1}(\mathbf{r}) } \rangle} \right)^\dagger,\end{aligned}


\begin{aligned}\left\langle{\mathbf{r}} \vert {{\psi'}}\right\rangle = \left\langle{{\mathcal{R}^{-1}(\mathbf{r}) }} \vert {{\psi}}\right\rangle,\end{aligned} \hspace{\stretch{1}}(2.23)


\begin{aligned}\psi'(\mathbf{r}) = \psi( \mathcal{R}^{-1}(\mathbf{r}) )\end{aligned} \hspace{\stretch{1}}(2.24)


Recall what you did last year, where H, \mathbf{P}, and \mathbf{L} were defined mechanically. We found

\item H generates time evolution (or translation in time).
\item \mathbf{P} generates spatial translation.
\item \mathbf{L} generates spatial rotation.

For our mechanical definitions we have

\begin{aligned}\left[{P_i},{P_j}\right] = 0,\end{aligned} \hspace{\stretch{1}}(3.25)


\begin{aligned}\left[{L_i},{L_j}\right] = i \hbar \sum_k \epsilon_{ijk} L_k.\end{aligned} \hspace{\stretch{1}}(3.26)

These are the relations that show us the way translations and rotations combine. We want to move up to a higher plane, a new level of abstraction. To do so we define H as the operator that generates time evolution. If we have a theory that covers the behaviour of how anything evolves in time, H encodes the rules for this time evolution.

Define \mathbf{P} as the operator that generates translations in space.

Define \mathbf{J} as the operator that generates rotations in space.

In order that these match expectations, we require

\begin{aligned}\left[{P_i},{P_j}\right] = 0,\end{aligned} \hspace{\stretch{1}}(3.27)


\begin{aligned}\left[{J_i},{J_j}\right] = i \hbar \sum_k \epsilon_{ijk} J_k.\end{aligned} \hspace{\stretch{1}}(3.28)

In the simple theory of a spinless particle we have

\begin{aligned}\mathbf{J} \equiv \mathbf{L} = \mathbf{R} \times \mathbf{P}.\end{aligned} \hspace{\stretch{1}}(3.29)

We actually need a generalization of this, since it is, in fact, not good enough, even for low energy physics.

Many component wave functions.

We are free to construct tuples of spatial vector functions like

\begin{aligned}\begin{bmatrix}\Psi_I(\mathbf{r}, t) \\ \Psi_{II}(\mathbf{r}, t)\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(3.30)


\begin{aligned}\begin{bmatrix}\Psi_I(\mathbf{r}, t) \\ \Psi_{II}(\mathbf{r}, t) \\ \Psi_{III}(\mathbf{r}, t)\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(3.31)


We will see that these behave qualitatively differently than one component wave functions. Note also that we aren't necessarily considering multiple particle wave functions here; this can be just one particle whose description requires three functions over \mathbb{R}^{3} (i.e. we are moving in on spin).

Question: Do these live in the same vector space?
Answer: We will get to this.

A classical analogy.

“There are only bad analogies, since if they were good they'd be describing the same thing. We can, however, produce some useful bad analogies.”

- A temperature field

\begin{aligned}T(\mathbf{r})\end{aligned} \hspace{\stretch{1}}(3.32)

- Electric field

\begin{aligned}\begin{bmatrix}E_x(\mathbf{r}) \\ E_y(\mathbf{r}) \\ E_z(\mathbf{r}) \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.33)


These behave in a much different way. Consider first rotating a scalar field like T(\mathbf{r}), as in figure (\ref{fig:qmTwoL11:qmTwoL11fig5}) (rotated temperature (scalar) field).

Suppose we have a temperature field generated by, say, a match. Rotating the match above, we have

\begin{aligned}T'(\mathbf{r}) = T(\mathcal{R}^{-1}(\mathbf{r})).\end{aligned} \hspace{\stretch{1}}(3.34)

Compare this to the rotation of an electric field, perhaps one produced by a capacitor, as in figure (\ref{fig:qmTwoL11:qmTwoL11fig6}) (rotating a capacitor's electric field).

Is it true that we have

\begin{aligned}\begin{bmatrix}E_x'(\mathbf{r}) \\ E_y'(\mathbf{r}) \\ E_z'(\mathbf{r}) \end{bmatrix}\stackrel{?}{=}\begin{bmatrix}E_x(\mathcal{R}^{-1}(\mathbf{r})) \\ E_y(\mathcal{R}^{-1}(\mathbf{r})) \\ E_z(\mathcal{R}^{-1}(\mathbf{r})) \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.35)

No, because the components get mixed, as well as the positions at which those components are evaluated.
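A uniform (capacitor-like) field makes this vivid. Here's a small numeric aside of my own: since a constant field takes the same value everywhere, the naive rule E(\mathcal{R}^{-1}(\mathbf{r})) changes nothing, while the correct rule E'(\mathbf{r}) = \mathcal{R} E(\mathcal{R}^{-1}(\mathbf{r})) also rotates the components:

```python
import numpy as np

# Rotate a uniform field E(r) = (1, 0) by 90 degrees.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

def E(r):
    return np.array([1.0, 0.0])  # constant, capacitor-like field

r = np.array([0.5, 2.0])
naive = E(R.T @ r)        # positions remapped only: still (1, 0)
correct = R @ E(R.T @ r)  # components rotated too: now (0, 1)

assert np.allclose(naive, [1.0, 0.0])
assert np.allclose(correct, [0.0, 1.0])
```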

We will work with many component wave functions, some of which will behave like vectors, and we will have to develop the methods and language to tackle this.


[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

Posted in Math and Physics Learning. | Tagged: , , , , , , , , | Leave a Comment »

PHY456H1F, Quantum Mechanics II. My solutions to problem set 1 (ungraded).

Posted by peeterjoot on September 19, 2011

[Click here for a PDF of this post with nicer formatting (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Harmonic oscillator.


\begin{aligned}H_0 = \frac{P^2}{2m} + \frac{1}{{2}} m \omega^2 X^2\end{aligned} \hspace{\stretch{1}}(1.1)

Since it's been a while, let's compute the raising and lowering operator factorization that was used so extensively for this problem.

It was of the form

\begin{aligned}H_0 = (a X - i b P)(a X + i b P) + \cdots\end{aligned} \hspace{\stretch{1}}(1.2)

Why this factorization has an imaginary unit in it is a good question. It's not one that is given any sort of rationale in the text [1].

It’s clear that we want a = \sqrt{m/2} \omega and b = 1/\sqrt{2m}. The difference is then

\begin{aligned}H_0 - (a X - i b P)(a X + i b P)=- i a b \left[{X},{P}\right]  = - i \frac{\omega}{2} \left[{X},{P}\right]\end{aligned} \hspace{\stretch{1}}(1.3)

That commutator is an i\hbar value, but what was the sign? Let’s compute so we don’t get it wrong

\begin{aligned}\left[{x},{ p}\right] \psi&= -i \hbar \left[{x},{\partial_x}\right] \psi \\ &= -i \hbar ( x \partial_x \psi - \partial_x (x \psi) ) \\ &= -i \hbar ( - \psi ) \\ &= i \hbar \psi\end{aligned}
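That sign can also be confirmed symbolically (a sympy aside of my own), with p = -i \hbar \partial_x acting on a test function:

```python
import sympy as sp

# Check [x, p] psi = i hbar psi for p = -i hbar d/dx.
x, hbar = sp.symbols('x hbar', positive=True)
psi = sp.Function('psi')(x)

p = lambda f: -sp.I * hbar * sp.diff(f, x)
commutator = x * p(psi) - p(x * psi)

assert sp.simplify(commutator - sp.I * hbar * psi) == 0
```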

So we have

\begin{aligned}H_0 =\left(\omega \sqrt{\frac{m}{2}} X - i \sqrt{\frac{1}{2m}} P\right)\left(\omega \sqrt{\frac{m}{2}} X + i \sqrt{\frac{1}{2m}} P\right)+ \frac{\hbar \omega}{2}\end{aligned} \hspace{\stretch{1}}(1.4)

Factoring out an \hbar \omega produces the form of the Hamiltonian that we used before

\begin{aligned}H_0 =\hbar \omega \left(\left(\sqrt{\frac{m \omega}{2 \hbar}} X - i \sqrt{\frac{1}{2m \hbar \omega}} P\right)\left(\sqrt{\frac{m \omega}{2 \hbar}} X + i \sqrt{\frac{1}{2m \hbar \omega}} P\right)+ \frac{1}{{2}}\right).\end{aligned} \hspace{\stretch{1}}(1.5)

The factors were labeled the raising (a^\dagger) and lowering (a) operators respectively, and written

\begin{aligned}H_0 &= \hbar \omega \left( a^\dagger a + \frac{1}{{2}} \right) \\ a &= \sqrt{\frac{m \omega}{2 \hbar}} X + i \sqrt{\frac{1}{2m \hbar \omega}} P \\ a^\dagger &= \sqrt{\frac{m \omega}{2 \hbar}} X - i \sqrt{\frac{1}{2m \hbar \omega}} P.\end{aligned} \hspace{\stretch{1}}(1.6)

Observe that we can find the inverse relations

\begin{aligned}X &= \sqrt{ \frac{\hbar}{2 m \omega} } \left( a + a^\dagger \right) \\ P &= i \sqrt{ \frac{m \hbar \omega}{2} } \left( a^\dagger  - a \right)\end{aligned} \hspace{\stretch{1}}(1.9)

What is a good reason that we chose this particular factorization? For example, a quick computation shows that we could have also picked

\begin{aligned}H_0 = \hbar \omega \left( a a^\dagger - \frac{1}{{2}} \right).\end{aligned} \hspace{\stretch{1}}(1.11)

I don’t know that answer. That said, this second factorization is useful in that it provides the commutator relation between the raising and lowering operators, since subtracting 1.11 and 1.6 yields

\begin{aligned}\left[{a},{a^\dagger}\right] = 1.\end{aligned} \hspace{\stretch{1}}(1.12)
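This commutator is also easy to spot check numerically (my own aside) in a truncated number basis, where the truncation spoils only the last diagonal entry:

```python
import numpy as np

# a|n> = sqrt(n)|n-1> as an N x N matrix; [a, a^dagger] should be the identity
# everywhere except the bottom-right corner, which is a truncation artifact.
N = 12
a = np.diag(np.sqrt(np.arange(1.0, N)), k=1)
comm = a @ a.T - a.T @ a

assert np.allclose(comm[:-1, :-1], np.eye(N - 1))
assert np.isclose(comm[-1, -1], -(N - 1))  # artifact of truncating the basis
```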

If we suppose that we have eigenstates for the operator a^\dagger a of the form

\begin{aligned}a^\dagger a {\lvert {n} \rangle} = \lambda_n {\lvert {n} \rangle},\end{aligned} \hspace{\stretch{1}}(1.13)

then the problem of finding the eigensolution of H_0 reduces to solving this problem. Since H_0 = \hbar \omega \left( a^\dagger a + \frac{1}{{2}} \right), an eigenstate of a^\dagger a is also an eigenstate of H_0. Utilizing 1.12 we then have

\begin{aligned}a^\dagger a ( a {\lvert {n} \rangle} )&= (a a^\dagger - 1 ) a {\lvert {n} \rangle} \\ &= a (a^\dagger a - 1 ) {\lvert {n} \rangle} \\ &= a (\lambda_n - 1 ) {\lvert {n} \rangle} \\ &= (\lambda_n - 1 ) a {\lvert {n} \rangle},\end{aligned}

so we see that a {\lvert {n} \rangle} is an eigenstate of a^\dagger a with eigenvalue \lambda_n - 1.

Similarly for the raising operator

\begin{aligned}a^\dagger a ( a^\dagger {\lvert {n} \rangle} )&=a^\dagger (a  a^\dagger) {\lvert {n} \rangle} \\ &=a^\dagger (a^\dagger a + 1) {\lvert {n} \rangle} \\ &=a^\dagger (\lambda_n + 1) {\lvert {n} \rangle} \\ &=(\lambda_n + 1) \left( a^\dagger {\lvert {n} \rangle} \right),\end{aligned}

and find that a^\dagger {\lvert {n} \rangle} is also an eigenstate of a^\dagger a with eigenvalue \lambda_n + 1.

Supposing that there is a lowest energy level (because the potential V(x) = m \omega^2 x^2 /2 has a lower bound of zero), let {\lvert {0} \rangle} be that lowest energy state. Operating on it with the lowering operator must produce zero, since otherwise a {\lvert {0} \rangle} would be a state of still lower energy

\begin{aligned}a {\lvert {0} \rangle} = 0\end{aligned} \hspace{\stretch{1}}(1.14)


\begin{aligned}a^\dagger a {\lvert {0} \rangle} = 0,\end{aligned} \hspace{\stretch{1}}(1.15)


\begin{aligned}\lambda_0 = 0.\end{aligned} \hspace{\stretch{1}}(1.16)

This seems like a small bit of sleight of hand, since it sneakily supplies a numeric value to \lambda_0, where up to this point 0 was just a label.

If the eigenvalue equation we are trying to solve for the Hamiltonian is

\begin{aligned}H_0 {\lvert {n} \rangle} = E_n {\lvert {n} \rangle}.\end{aligned} \hspace{\stretch{1}}(1.17)

then we must have

\begin{aligned}E_n = \hbar \omega \left(\lambda_n + \frac{1}{{2}} \right) = \hbar \omega \left(n + \frac{1}{{2}} \right)\end{aligned} \hspace{\stretch{1}}(1.18)

Part (a)

We’ve now got enough context to attempt the first part of the question, calculation of

\begin{aligned}{\langle {n} \rvert} X^4 {\lvert {n} \rangle}\end{aligned} \hspace{\stretch{1}}(1.19)

We’ve calculated things like this before, such as

\begin{aligned}{\langle {n} \rvert} X^2 {\lvert {n} \rangle}&=\frac{\hbar}{2 m \omega} {\langle {n} \rvert} (a + a^\dagger)^2 {\lvert {n} \rangle}\end{aligned}

To continue we need an exact relation between {\lvert {n} \rangle} and {\lvert {n \pm 1} \rangle}. Recall that a {\lvert {n} \rangle} was an eigenstate of a^\dagger a with eigenvalue n - 1. This implies that the eigenstates a {\lvert {n} \rangle} and {\lvert {n-1} \rangle} are proportional

\begin{aligned}a {\lvert {n} \rangle} = c_n {\lvert {n - 1} \rangle},\end{aligned} \hspace{\stretch{1}}(1.20)


\begin{aligned}{\langle {n} \rvert} a^\dagger a {\lvert {n} \rangle} &= {\left\lvert{c_n}\right\rvert}^2 \left\langle{{n - 1}} \vert {{n-1}}\right\rangle = {\left\lvert{c_n}\right\rvert}^2 \\ n \left\langle{{n}} \vert {{n}}\right\rangle &= {\left\lvert{c_n}\right\rvert}^2 \\ n &= {\left\lvert{c_n}\right\rvert}^2,\end{aligned}

so that

\begin{aligned}a {\lvert {n} \rangle} = \sqrt{n} {\lvert {n - 1} \rangle}.\end{aligned} \hspace{\stretch{1}}(1.21)

Similarly let

\begin{aligned}a^\dagger {\lvert {n} \rangle} = b_n {\lvert {n + 1} \rangle},\end{aligned} \hspace{\stretch{1}}(1.22)


\begin{aligned}{\langle {n} \rvert} a a^\dagger {\lvert {n} \rangle} &= {\left\lvert{b_n}\right\rvert}^2 \left\langle{{n + 1}} \vert {{n+1}}\right\rangle = {\left\lvert{b_n}\right\rvert}^2 \\ {\langle {n} \rvert} (1 + a^\dagger a) {\lvert {n} \rangle} &= {\left\lvert{b_n}\right\rvert}^2 \\ 1 + n &= {\left\lvert{b_n}\right\rvert}^2,\end{aligned}

so that

\begin{aligned}a^\dagger {\lvert {n} \rangle} = \sqrt{n+1} {\lvert {n + 1} \rangle}.\end{aligned} \hspace{\stretch{1}}(1.23)

We can now return to 1.19, and find

\begin{aligned}{\langle {n} \rvert} X^4 {\lvert {n} \rangle}&=\frac{\hbar^2}{4 m^2 \omega^2} {\langle {n} \rvert} (a + a^\dagger)^4 {\lvert {n} \rangle}\end{aligned}

Consider half of this braket

\begin{aligned}(a + a^\dagger)^2 {\lvert {n} \rangle}&=\left( a^2 + (a^\dagger)^2 + a^\dagger a + a a^\dagger \right) {\lvert {n} \rangle} \\ &=\left( a^2 + (a^\dagger)^2 + a^\dagger a + (1 + a^\dagger a) \right) {\lvert {n} \rangle} \\ &=\left( a^2 + (a^\dagger)^2 + 1 + 2 a^\dagger a \right) {\lvert {n} \rangle} \\ &=\sqrt{n}\sqrt{n-1} {\lvert {n-2} \rangle}+\sqrt{n+1}\sqrt{n+2} {\lvert {n + 2} \rangle}+{\lvert {n} \rangle}+  2 n {\lvert {n} \rangle}\end{aligned}

Squaring, utilizing the Hermitian nature of the X operator

\begin{aligned}{\langle {n} \rvert} X^4 {\lvert {n} \rangle}=\frac{\hbar^2}{4 m^2 \omega^2}\left(n(n-1) + (n+1)(n+2) + (1 + 2n)^2\right)=\frac{\hbar^2}{4 m^2 \omega^2}\left( 6 n^2 + 6 n + 3 \right)\end{aligned} \hspace{\stretch{1}}(1.24)
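As a numeric cross-check (my own, in units \hbar = m = \omega = 1 so the prefactor is 1/4), the diagonal of X^4 in a truncated number basis should follow 3(2n^2 + 2n + 1)/4:

```python
import numpy as np

# Build X = (a + a^dagger)/sqrt(2) in a truncated number basis and compare
# the diagonal of X^4 against (1/4)(6 n^2 + 6 n + 3).
N = 60
a = np.diag(np.sqrt(np.arange(1.0, N)), k=1)
X = (a + a.T) / np.sqrt(2.0)
X4 = np.linalg.matrix_power(X, 4)

for n in range(5):
    assert np.isclose(X4[n, n], 0.25 * (6 * n**2 + 6 * n + 3))
```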

Part (b)

Find the ground state energy of the Hamiltonian H = H_0 + \gamma X^2 for \gamma > 0.

The new Hamiltonian has the form

\begin{aligned}H = \frac{P^2}{2m} + \frac{1}{{2}} m \left(\omega^2 + \frac{2 \gamma}{m} \right) X^2 =\frac{P^2}{2m} + \frac{1}{{2}} m {\omega'}^2 X^2,\end{aligned} \hspace{\stretch{1}}(1.25)


\begin{aligned}\omega' = \sqrt{ \omega^2 + \frac{2 \gamma}{m} }\end{aligned} \hspace{\stretch{1}}(1.26)

The energy states of the Hamiltonian are thus

\begin{aligned}E_n = \hbar \sqrt{ \omega^2 + \frac{2 \gamma}{m} } \left( n + \frac{1}{{2}} \right)\end{aligned} \hspace{\stretch{1}}(1.27)

and the ground state of the modified Hamiltonian H is thus

\begin{aligned}E_0 = \frac{\hbar}{2} \sqrt{ \omega^2 + \frac{2 \gamma}{m} }\end{aligned} \hspace{\stretch{1}}(1.28)

Part (c)

Find the ground state energy of the Hamiltonian H = H_0 - \alpha X.

With a bit of play, this new Hamiltonian can be factored into

\begin{aligned}H= \hbar \omega \left( b^\dagger b + \frac{1}{{2}} \right) - \frac{\alpha^2}{2 m \omega^2}= \hbar \omega \left( b b^\dagger - \frac{1}{{2}} \right) - \frac{\alpha^2}{2 m \omega^2},\end{aligned} \hspace{\stretch{1}}(1.29)


\begin{aligned}b &= \sqrt{\frac{m \omega}{2\hbar}} X + \frac{i P}{\sqrt{2 m \hbar \omega}} - \frac{\alpha}{\omega \sqrt{ 2 m \hbar \omega }} \\ b^\dagger &= \sqrt{\frac{m \omega}{2\hbar}} X - \frac{i P}{\sqrt{2 m \hbar \omega}} - \frac{\alpha}{\omega \sqrt{ 2 m \hbar \omega }}.\end{aligned} \hspace{\stretch{1}}(1.30)

From 1.29 we see that we have the same sort of commutator relationship as in the original Hamiltonian

\begin{aligned}\left[{b},{b^\dagger}\right] = 1,\end{aligned} \hspace{\stretch{1}}(1.32)

and because of this, all the preceding arguments follow unchanged with the exception that the energy eigenstates of this Hamiltonian are shifted by a constant

\begin{aligned}H {\lvert {n} \rangle} = \left( \hbar \omega \left( n + \frac{1}{{2}} \right) - \frac{\alpha^2}{2 m \omega^2} \right) {\lvert {n} \rangle},\end{aligned} \hspace{\stretch{1}}(1.33)

where the {\lvert {n} \rangle} states are simultaneous eigenstates of the b^\dagger b operator

\begin{aligned}b^\dagger b {\lvert {n} \rangle} = n {\lvert {n} \rangle}.\end{aligned} \hspace{\stretch{1}}(1.34)

The ground state energy is then

\begin{aligned}E_0 = \frac{\hbar \omega }{2} - \frac{\alpha^2}{2 m \omega^2}.\end{aligned} \hspace{\stretch{1}}(1.35)

This makes sense. A translation of the entire system should not affect its energy level distribution; we have simply set our reference potential differently, and have this constant energy adjustment to the entire spectrum.
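As a quick numeric check of this ground state energy (my own aside, in units \hbar = m = \omega = 1), diagonalizing H = H_0 - \alpha X in a truncated number basis reproduces \hbar \omega/2 - \alpha^2/(2 m \omega^2):

```python
import numpy as np

# H = hbar omega (a^dagger a + 1/2) - alpha X in a truncated number basis.
N = 80
alpha = 0.3
a = np.diag(np.sqrt(np.arange(1.0, N)), k=1)
X = (a + a.T) / np.sqrt(2.0)
H = np.diag(np.arange(N) + 0.5) - alpha * X

E0 = np.linalg.eigvalsh(H)[0]
assert np.isclose(E0, 0.5 - alpha**2 / 2, atol=1e-8)
```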

Hydrogen atom and spherical harmonics.

We are asked to show that for any eigenkets of the hydrogen atom {\lvert {\Phi_{nlm}} \rangle} we have

\begin{aligned}{\langle {\Phi_{nlm}} \rvert} X {\lvert {\Phi_{nlm}} \rangle} ={\langle {\Phi_{nlm}} \rvert} Y {\lvert {\Phi_{nlm}} \rangle} ={\langle {\Phi_{nlm}} \rvert} Z {\lvert {\Phi_{nlm}} \rangle}.\end{aligned} \hspace{\stretch{1}}(2.36)

The summary sheet provides us with the wavefunction

\begin{aligned}\left\langle{\mathbf{r}} \vert {{\Phi_{nlm}}}\right\rangle = \frac{2}{n^2 a_0^{3/2}} \sqrt{\frac{(n-l-1)!}{((n+l)!)^3}} F_{nl}\left( \frac{2r}{n a_0} \right) Y_l^m(\theta, \phi),\end{aligned} \hspace{\stretch{1}}(2.37)

where F_{nl} is a real valued function defined in terms of Laguerre polynomials. Working with the expectation of the X operator to start with, we have

\begin{aligned}{\langle {\Phi_{nlm}} \rvert} X {\lvert {\Phi_{nlm}} \rangle} &=\int \left\langle{{\Phi_{nlm}}} \vert {{\mathbf{r}'}}\right\rangle {\langle {\mathbf{r}'} \rvert} X {\lvert {\mathbf{r}} \rangle} \left\langle{\mathbf{r}} \vert {{\Phi_{nlm}}}\right\rangle d^3 \mathbf{r} d^3 \mathbf{r}' \\ &=\int \left\langle{{\Phi_{nlm}}} \vert {{\mathbf{r}'}}\right\rangle \delta(\mathbf{r} - \mathbf{r}') r \sin\theta \cos\phi \left\langle{\mathbf{r}} \vert {{\Phi_{nlm}}}\right\rangle d^3 \mathbf{r} d^3 \mathbf{r}' \\ &=\int \Phi_{nlm}^{*}(\mathbf{r}) r \sin\theta \cos\phi \Phi_{nlm}(\mathbf{r}) d^3 \mathbf{r} \\ &\sim\int r^2 dr {\left\lvert{ F_{nl}\left(\frac{2 r}{ n a_0} \right)}\right\rvert}^2 r \int \sin\theta d\theta d\phi{Y_l^m}^{*}(\theta, \phi) \sin\theta \cos\phi Y_l^m(\theta, \phi) \\ \end{aligned}

Recalling that the only \phi dependence in Y_l^m is e^{i m \phi} we can perform the d\phi integration directly, which is

\begin{aligned}\int_{\phi=0}^{2\pi} \cos\phi d\phi e^{-i m \phi} e^{i m \phi} = 0.\end{aligned} \hspace{\stretch{1}}(2.38)

We have the same story for the Y expectation which is

\begin{aligned}{\langle {\Phi_{nlm}} \rvert} Y {\lvert {\Phi_{nlm}} \rangle} \sim\int r^2 dr {\left\lvert{F_{nl}\left( \frac{2 r}{ n a_0} \right)}\right\rvert}^2 r \int \sin\theta d\theta d\phi{Y_l^m}^{*}(\theta, \phi) \sin\theta \sin\phi Y_l^m(\theta, \phi).\end{aligned} \hspace{\stretch{1}}(2.39)

Our \phi integral is then just

\begin{aligned}\int_{\phi=0}^{2\pi} \sin\phi d\phi e^{-i m \phi} e^{i m \phi} = 0,\end{aligned} \hspace{\stretch{1}}(2.40)

also zero. The Z expectation is a slightly different story. There we have

\begin{aligned}\begin{aligned}{\langle {\Phi_{nlm}} \rvert} Z {\lvert {\Phi_{nlm}} \rangle} &\sim\int dr {\left\lvert{F_{nl}\left( \frac{2 r}{ n a_0} \right)}\right\rvert}^2 r^3  \\ &\quad \int_0^{2\pi} d\phi\int_0^\pi \sin \theta d\theta\left( \sin\theta \right)^{-2m}\left( \frac{d^{l - m}}{d (\cos\theta)^{l-m}} \sin^{2l}\theta \right)^2\cos\theta.\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.41)

Within this last integral we can make the substitution

\begin{aligned}u &= \cos\theta \\ \sin\theta d\theta &= - d(\cos\theta) = -du \\ u &\in [1, -1],\end{aligned} \hspace{\stretch{1}}(2.42)

and the integral takes the form

\begin{aligned}-\int_{-1}^1 (-du) \frac{1}{{(1 - u^2)^m}} \left( \frac{d^{l-m}}{d u^{l -m }} (1 - u^2)^l\right)^2 u.\end{aligned} \hspace{\stretch{1}}(2.45)

Here we have the product of two even functions, times one odd function (u), over a symmetric interval, so the end result is zero, completing the problem.
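This parity argument can be spot checked symbolically for a sample (l, m), say l = 2, m = 1 (my own sympy aside); the integrand reduces to an odd polynomial, and the symmetric integral vanishes:

```python
import sympy as sp

# The u-integral for (l, m) = (2, 1): odd integrand over [-1, 1].
u = sp.symbols('u')
l, m = 2, 1
integrand = sp.cancel(u * sp.diff((1 - u**2)**l, u, l - m)**2 / (1 - u**2)**m)

assert sp.integrate(integrand, (u, -1, 1)) == 0
```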

I wasn’t able to see how to exploit the parity result suggested in the problem, but it wasn’t so bad to show these directly.

Angular momentum operator.

Working with the appropriate expressions in Cartesian components, confirm that L_i {\lvert {\psi} \rangle} = 0 for each component of angular momentum L_i, if \left\langle{\mathbf{r}} \vert {{\psi}}\right\rangle = \psi(\mathbf{r}) is in fact only a function of r = {\left\lvert{\mathbf{r}}\right\rvert}.

In order to proceed, we will have to consider a matrix element, so that we can operate on {\lvert {\psi} \rangle} in position space. For that matrix element, we can proceed to insert complete states, and reduce the problem to a question of wavefunctions. That is

\begin{aligned}{\langle {\mathbf{r}} \rvert} L_i {\lvert {\psi} \rangle}&=\int d^3 \mathbf{r}' {\langle {\mathbf{r}} \rvert} L_i {\lvert {\mathbf{r}'} \rangle} \left\langle{{\mathbf{r}'}} \vert {{\psi}}\right\rangle \\ &=\int d^3 \mathbf{r}' {\langle {\mathbf{r}} \rvert} \epsilon_{i a b} X_a P_b {\lvert {\mathbf{r}'} \rangle} \left\langle{{\mathbf{r}'}} \vert {{\psi}}\right\rangle \\ &=-i \hbar \epsilon_{i a b} \int d^3 \mathbf{r}' x_a \frac{\partial {}}{\partial {x_b}} \left\langle{\mathbf{r}} \vert {{\mathbf{r}'}}\right\rangle \psi(\mathbf{r}')  \\ &=-i \hbar \epsilon_{i a b} \int d^3 \mathbf{r}' x_a \frac{\partial {}}{\partial {x_b}} \delta^3(\mathbf{r} - \mathbf{r}') \psi(\mathbf{r}') \\ &=-i \hbar \epsilon_{i a b} x_a \frac{\partial {\psi(\mathbf{r})}}{\partial {x_b}} \end{aligned}

With \psi(\mathbf{r}) = \psi(r) we have

\begin{aligned}{\langle {\mathbf{r}} \rvert} L_i {\lvert {\psi} \rangle}&=-i \hbar \epsilon_{i a b} x_a \frac{\partial {\psi(r)}}{\partial {x_b}}  \\ &=-i \hbar \epsilon_{i a b} x_a \frac{\partial {r}}{\partial {x_b}} \frac{d\psi(r)}{dr}  \\ &=-i \hbar \epsilon_{i a b} x_a \frac{1}{{2}} 2 x_b \frac{1}{{r}} \frac{d\psi(r)}{dr}  \\ \end{aligned}

We are left with a sum of the symmetric product x_a x_b contracted with the antisymmetric tensor \epsilon_{i a b}, so this is zero for all i \in \{1, 2, 3\}.
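The same conclusion follows from a direct symbolic computation (a sympy aside of my own), here for the L_z component acting on a purely radial function:

```python
import sympy as sp

# L_z psi(r) = -i hbar (x d/dy - y d/dx) psi(r) should vanish identically.
x, y, z, hbar = sp.symbols('x y z hbar')
r = sp.sqrt(x**2 + y**2 + z**2)
psi = sp.Function('psi')(r)

Lz_psi = -sp.I * hbar * (x * sp.diff(psi, y) - y * sp.diff(psi, x))
assert sp.simplify(Lz_psi) == 0
```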


[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

Posted in Math and Physics Learning. | Tagged: , , , , , , , , , , | Leave a Comment »

A problem on spherical harmonics.

Posted by peeterjoot on January 10, 2011

[Click here for a PDF of this post with nicer formatting]


One of the PHY356 exam questions from the final I recall screwing up on, and figuring it out after the fact on the drive home. The question actually clarified a difficulty I’d had, but unfortunately I hadn’t had the good luck to perform such a question, to help figure this out before the exam.

From what I recall the question provided an initial state, with some degeneracy in m, perhaps of the following form

\begin{aligned}{\lvert {\phi(0)} \rangle} = \sqrt{\frac{1}{7}} {\lvert { 12 } \rangle}+\sqrt{\frac{2}{7}} {\lvert { 10 } \rangle}+\sqrt{\frac{4}{7}} {\lvert { 20 } \rangle},\end{aligned} \hspace{\stretch{1}}(1.1)

and a Hamiltonian of the form

\begin{aligned}H = \alpha L_z\end{aligned} \hspace{\stretch{1}}(1.2)

From what I recall of the problem, I am going to reattempt it here now.

Evolved state.

One part of the question was to calculate the evolved state. Application of the time evolution operator gives us

\begin{aligned}{\lvert {\phi(t)} \rangle} = e^{-i \alpha L_z t/\hbar} \left(\sqrt{\frac{1}{7}} {\lvert { 12 } \rangle}+\sqrt{\frac{2}{7}} {\lvert { 10 } \rangle}+\sqrt{\frac{4}{7}} {\lvert { 20 } \rangle} \right).\end{aligned} \hspace{\stretch{1}}(1.3)

Now we note that L_z {\lvert {12} \rangle} = 2 \hbar {\lvert {12} \rangle}, and L_z {\lvert { l 0} \rangle} = 0 {\lvert {l 0} \rangle}, so the exponentials reduce this nicely to just

\begin{aligned}{\lvert {\phi(t)} \rangle} = \sqrt{\frac{1}{7}} e^{ -2 i \alpha t } {\lvert { 12 } \rangle}+\sqrt{\frac{2}{7}} {\lvert { 10 } \rangle}+\sqrt{\frac{4}{7}} {\lvert { 20 } \rangle}.\end{aligned} \hspace{\stretch{1}}(1.4)

Probabilities for L_z measurement outcomes.

I believe we were also asked what the probabilities for the outcomes of a measurement of L_z at this time would be. Here is one place that I think I messed up, and it is really a translation error: getting from the English description of the problem to the math description of the same. I'd had trouble with this process a few times in the problems, and managed to blunder through use of language like “measure” and “outcome”, but I don't think I really understood how these terms were used properly.

What are the outcomes that we measure? We measure operators, but the result of a measurement is the eigenvalue associated with the operator. What are the eigenvalues of the L_z operator? These are the m \hbar values, from the operation L_z {\lvert {l m} \rangle} = m \hbar {\lvert {l m} \rangle}. So, given this initial state, there are really two outcomes that are possible, since we have two distinct eigenvalues. These are 2 \hbar and 0 for m = 2, and m= 0 respectively.

A measurement of the “outcome” 2 \hbar, will be the probability associated with the amplitude \left\langle{{ 1 2 }} \vert {{\phi(t)}}\right\rangle (ie: the absolute square of this value). That is

\begin{aligned}{\left\lvert{ \left\langle{{ 1 2 }} \vert {{\phi(t) }}\right\rangle }\right\rvert}^2 = \frac{1}{7}.\end{aligned} \hspace{\stretch{1}}(1.5)

Now, the only other outcome for a measurement of L_z for this state is a measurement of 0 \hbar, and the probability of this is then just 1 - \frac{1}{7} = \frac{6}{7}. On the exam, I think I listed probabilities for three outcomes, with values \frac{1}{7}, \frac{2}{7}, \frac{4}{7} respectively, but in retrospect that seems blatantly wrong.

Probabilities for \mathbf{L}^2 measurement outcomes.

What are the probabilities for the outcomes for a measurement of \mathbf{L}^2 after this? The first question is really what are the outcomes. That’s really a question of what are the possible eigenvalues of \mathbf{L}^2 that can be measured at this point. Recall that we have

\begin{aligned}\mathbf{L}^2 {\lvert {l m} \rangle} = \hbar^2 l (l + 1) {\lvert {l m} \rangle}\end{aligned} \hspace{\stretch{1}}(1.6)

So for a state that has only l=1,2 contributions before the measurement, the eigenvalues that can be observed for the \mathbf{L}^2 operator are 2 \hbar^2 and 6 \hbar^2 respectively.

For the l=2 case, our probability is 4/7, leaving 3/7 as the probability for measurement of the l=1 (2 \hbar^2) eigenvalue. We can compute this two ways, and it seems worthwhile to consider both. This first method makes use of the fact that the L_z operator leaves the state vector intact, but it also seems like a bit of a cheat. Consider instead two possible results of measurement after the L_z observation. When an L_z measurement of 0 \hbar is performed our state will be left with only the m=0 kets. That is

\begin{aligned}{\lvert {\psi_a} \rangle} = \frac{1}{{\sqrt{3}}} \left( {\lvert {10} \rangle} + \sqrt{2} {\lvert {20} \rangle} \right),\end{aligned} \hspace{\stretch{1}}(1.7)

whereas, when a 2 \hbar measurement of L_z is performed our state would then only have the m=2 contribution, and would be

\begin{aligned}{\lvert {\psi_b} \rangle} = e^{-2 i \alpha t} {\lvert {12 } \rangle}.\end{aligned} \hspace{\stretch{1}}(1.8)

We have two possible ways of measuring the 2 \hbar^2 eigenvalue for \mathbf{L}^2. One is when our state was {\lvert {\psi_a} \rangle}, where the resulting state has a {\lvert {10} \rangle} component, and the other is after the m=2 measurement, where our state is left with a {\lvert {12} \rangle} component.

The resulting probability is then a conditional probability result

\begin{aligned}\frac{6}{7} {\left\lvert{ \left\langle{{10}} \vert {{\psi_a}}\right\rangle }\right\rvert}^2 + \frac{1}{7} {\left\lvert{ \left\langle{{12 }} \vert {{\psi_b}}\right\rangle}\right\rvert}^2 = \frac{3}{7}\end{aligned} \hspace{\stretch{1}}(1.9)

The result is the same, as expected, but this is likely a more convincing argument.
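The bookkeeping is simple enough to do exactly with rational arithmetic (a trivial aside of mine):

```python
from fractions import Fraction

# P(Lz = 0) = 6/7 leaves |psi_a> with |<10|psi_a>|^2 = 1/3;
# P(Lz = 2 hbar) = 1/7 leaves |psi_b> with |<12|psi_b>|^2 = 1.
p_l1 = Fraction(6, 7) * Fraction(1, 3) + Fraction(1, 7) * 1
assert p_l1 == Fraction(3, 7)
```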

Posted in Math and Physics Learning. | Tagged: , , , , , , , , , | Leave a Comment »

Some worked problems from old PHY356 exams.

Posted by peeterjoot on January 9, 2011

[Click here for a PDF of this post with nicer formatting]


Some of the old exam questions that I did for preparation for the exam I liked, and thought I’d write up some of them for potential future reference.

Questions from the Dec 2007 PHY355H1F exam.

1b. Parity operator.

\paragraph{Q:} If \Pi is the parity operator, defined by \Pi {\lvert {x} \rangle} = {\lvert {-x} \rangle}, where {\lvert {x} \rangle} is the eigenket of the position operator X with eigenvalue x, and P is the momentum operator conjugate to X, show (carefully) that \Pi P \Pi = -P.


Consider the matrix element {\langle {-x'} \rvert} \left[{\Pi},{P}\right] {\lvert {x} \rangle}. This is

\begin{aligned}{\langle {-x'} \rvert} \left[{\Pi},{P}\right] {\lvert {x} \rangle}&={\langle {-x'} \rvert} \Pi P - P \Pi {\lvert {x} \rangle} \\ &={\langle {-x'} \rvert} \Pi P {\lvert {x} \rangle} - {\langle {-x'} \rvert} P \Pi {\lvert {x} \rangle} \\ &={\langle {x'} \rvert} P {\lvert {x} \rangle} - {\langle {-x'} \rvert} P {\lvert {-x} \rangle} \\ &=- i \hbar \left(\delta(x'-x) \frac{\partial {}}{\partial {x}}-\underbrace{\delta(-x' -(-x))}_{= \delta(x'-x) = \delta(x-x')} \frac{\partial {}}{\partial {(-x)}}\right) \\ &=- 2 i \hbar \delta(x'-x) \frac{\partial {}}{\partial {x}} \\ &=2 {\langle {x'} \rvert} P {\lvert {x} \rangle} \\ &=2 {\langle {-x'} \rvert} \Pi P {\lvert {x} \rangle} \\ \end{aligned}

We’ve taken advantage of the Hermitian property of P and \Pi here, and can rearrange for

\begin{aligned}{\langle {-x'} \rvert} \Pi P - P \Pi - 2 \Pi P {\lvert {x} \rangle} = 0\end{aligned} \hspace{\stretch{1}}(2.1)

Since this is true for all {\langle {-x'} \rvert} and {\lvert {x} \rangle} we have

\begin{aligned}\Pi P + P \Pi = 0.\end{aligned} \hspace{\stretch{1}}(2.2)

Right multiplication by \Pi and rearranging we have

\begin{aligned}\Pi P \Pi = - P \Pi \Pi = - P.\end{aligned} \hspace{\stretch{1}}(2.3)
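This anticommutation can also be seen in a finite-difference sketch (my own aside): on a grid symmetric about the origin, parity is index reversal and the central-difference momentum picks up a sign under conjugation by it:

```python
import numpy as np

# Central-difference momentum P and grid-reversal parity Pi on N points.
N, h, hbar = 21, 0.1, 1.0
D = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2 * h)
P = -1j * hbar * D
Pi = np.eye(N)[::-1]  # x -> -x on the grid

assert np.allclose(Pi @ P @ Pi, -P)
```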

1f. Free particle propagator.

\paragraph{Q:} For a free particle moving in one-dimension, the propagator (i.e. the coordinate representation of the evolution operator),

\begin{aligned}G(x,x';t) = {\langle {x} \rvert} U(t) {\lvert {x'} \rangle}\end{aligned} \hspace{\stretch{1}}(2.4)

is given by

\begin{aligned}G(x,x';t) = \sqrt{\frac{m}{2 \pi i \hbar t}} e^{i m (x-x')^2/ (2 \hbar t)}.\end{aligned} \hspace{\stretch{1}}(2.5)


This problem is actually fairly straightforward, but it is nice to work it having had a similar problem set question where we were asked about this time evolution operator matrix element (i.e. what its physical meaning is). Here we have a concrete example of the form of this matrix element.
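Before grinding through the integrals, here's a symbolic sanity check of my own that the quoted G really does satisfy the free-particle Schrodinger equation i \hbar \partial_t G = -(\hbar^2/2m) \partial_{xx} G:

```python
import sympy as sp

# The free-particle propagator and the Schrodinger equation it should solve.
x, xp = sp.symbols('x xp', real=True)  # xp stands in for x'
t, m, hbar = sp.symbols('t m hbar', positive=True)
G = sp.sqrt(m / (2 * sp.pi * sp.I * hbar * t)) * \
    sp.exp(sp.I * m * (x - xp)**2 / (2 * hbar * t))

lhs = sp.I * hbar * sp.diff(G, t)
rhs = -hbar**2 / (2 * m) * sp.diff(G, x, 2)
assert sp.simplify(sp.expand(lhs - rhs)) == 0
```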

Proceeding directly, we have

\begin{aligned}{\langle {x} \rvert} U {\lvert {x'} \rangle}&=\int \left\langle{x} \vert {p'}\right\rangle {\langle {p'} \rvert} U {\lvert {p} \rangle} \left\langle{p} \vert {x'}\right\rangle dp dp' \\ &=\int u_{p'}(x) {\langle {p'} \rvert} e^{-i P^2 t/(2 m \hbar)} {\lvert {p} \rangle} u_p^{*}(x') dp dp' \\ &=\int u_{p'}(x) e^{-i p^2 t/(2 m \hbar)} \delta(p-p') u_p^{*}(x') dp dp' \\ &=\int u_{p}(x) e^{-i p^2 t/(2 m \hbar)} u_p^{*}(x') dp \\ &=\frac{1}{(\sqrt{2 \pi \hbar})^2} \int e^{i p (x-x')/\hbar} e^{-i p^2 t/(2 m \hbar)} dp \\ &=\frac{1}{2 \pi \hbar} \int e^{i p (x-x')/\hbar} e^{-i p^2 t/(2 m \hbar)} dp \\ &=\frac{1}{2 \pi} \int e^{i k (x-x')} e^{-i \hbar k^2 t/(2 m)} dk \\ &=\frac{1}{2 \pi} \int dk e^{- \left(k^2 \frac{ i \hbar t}{2m} - i k (x-x')\right)} \\ &=\frac{1}{2 \pi} \int dk e^{- \frac{ i \hbar t}{2m}\left(k - i \frac{2m}{i \hbar t}\frac{(x-x')}{2} \right)^2- \frac{i^2 2 m (x-x')^2}{4 i \hbar t} } \\ &=\frac{1}{2 \pi}  \sqrt{\pi} \sqrt{\frac{2m}{i \hbar t}}e^{\frac{ i m (x-x')^2}{2 \hbar t}},\end{aligned}

which is the desired result. Now, let’s look at how this would be used. We can express our time evolved state using this matrix element by introducing an identity

\begin{aligned}\left\langle{{x}} \vert {{\psi(t)}}\right\rangle &={\langle {x} \rvert} U {\lvert {\psi(0)} \rangle} \\ &=\int dx' {\langle {x} \rvert} U {\lvert {x'} \rangle} \left\langle{{x'}} \vert {{\psi(0)}}\right\rangle \\ &=\sqrt{\frac{m}{2 \pi i \hbar t}} \int dx' e^{i m (x-x')^2/ (2 \hbar t)}\left\langle{{x'}} \vert {{\psi(0)}}\right\rangle \\ \end{aligned}

This gives us

\begin{aligned}\psi(x, t)=\sqrt{\frac{m}{2 \pi i \hbar t}} \int dx' e^{i m (x-x')^2/ (2 \hbar t)} \psi(x', 0)\end{aligned} \hspace{\stretch{1}}(2.6)

However, note that our free particle wave function at time zero is

\begin{aligned}\psi(x, 0) = \frac{e^{i p x/\hbar}}{\sqrt{2 \pi \hbar}}\end{aligned} \hspace{\stretch{1}}(2.7)

So the convolution integral 2.6 does not exist. We likely have to require that the state be not a single momentum eigenstate, but instead a superposition of a continuous set of such states (a wave packet in position or momentum space, related by Fourier transform). That is

\begin{aligned}\psi(x, 0) &= \frac{1}{{\sqrt{2 \pi \hbar}}} \int \hat{\psi}(p, 0) e^{i p x/\hbar} dp \\ \hat{\psi}(p, 0) &= \frac{1}{{\sqrt{2 \pi \hbar}}} \int \psi(x'', 0) e^{-i p x''/\hbar} dx''\end{aligned} \hspace{\stretch{1}}(2.8)

The time evolution of this wave packet is then determined by the propagator, and is

\begin{aligned}\psi(x,t) =\sqrt{\frac{m}{2 \pi i \hbar t}} \frac{1}{{\sqrt{2 \pi \hbar}}} \int dx' dpe^{i m (x-x')^2/ (2 \hbar t)}\hat{\psi}(p, 0) e^{i p x'/\hbar} ,\end{aligned} \hspace{\stretch{1}}(2.10)

or in terms of the position space wave packet evaluated at time zero

\begin{aligned}\psi(x,t) =\sqrt{\frac{m}{2 \pi i \hbar t}}\frac{1}{{2 \pi}}\int dx' dx'' dke^{i m (x-x')^2/ (2 \hbar t)}e^{i k (x' - x'')} \psi(x'', 0)\end{aligned} \hspace{\stretch{1}}(2.11)

We see that the propagator also ends up with a Fourier transform structure, and we have

\begin{aligned}\psi(x,t) &= \int dx' U(x, x' ; t) \psi(x', 0) \\ U(x, x' ; t) &=\sqrt{\frac{m}{2 \pi i \hbar t}}\frac{1}{{2 \pi}}\int du dke^{i m (x - x' - u)^2/ (2 \hbar t)}e^{i k u }\end{aligned} \hspace{\stretch{1}}(2.12)

Does that Fourier transform exist? I’d not be surprised if it ended up with a delta function representation. I’ll hold off attempting to evaluate and reduce it until another day.

4. Hydrogen atom.

This problem deals with the hydrogen atom, with an initial ket

\begin{aligned}{\lvert {\psi(0)} \rangle} = \frac{1}{{\sqrt{3}}} {\lvert {100} \rangle}+\frac{1}{{\sqrt{3}}} {\lvert {210} \rangle}+\frac{1}{{\sqrt{3}}} {\lvert {211} \rangle},\end{aligned} \hspace{\stretch{1}}(2.14)


\begin{aligned}\left\langle{\mathbf{r}} \vert {{100}}\right\rangle = \Phi_{100}(\mathbf{r}),\end{aligned} \hspace{\stretch{1}}(2.15)


\paragraph{Q: (a)}

If no measurement is made until time t = t_0,

\begin{aligned}t_0 = \frac{\pi \hbar}{ \frac{3}{4} (13.6 \text{eV}) } = \frac{ 4 \pi \hbar }{ 3 E_I},\end{aligned} \hspace{\stretch{1}}(2.16)

what is the ket {\lvert {\psi(t)} \rangle} just before the measurement is made?


Our time evolved state is

\begin{aligned}{\lvert {\psi(t_0)} \rangle} = \frac{1}{{\sqrt{3}}} e^{-i E_1 t_0 /\hbar } {\lvert {100} \rangle}+\frac{1}{{\sqrt{3}}} e^{- i E_2 t_0/\hbar } ({\lvert {210} \rangle} + {\lvert {211} \rangle}).\end{aligned} \hspace{\stretch{1}}(2.17)

Also observe that this initial time was picked to make the exponential values come out nicely, and we have

\begin{aligned}\frac{E_n t_0 }{\hbar} &= - \frac{E_I \pi \hbar }{\frac{3}{4} E_I n^2 \hbar} \\ &= - \frac{4 \pi }{ 3 n^2 },\end{aligned}

so our time evolved state is just

\begin{aligned}{\lvert {\psi(t_0)} \rangle} = \frac{1}{{\sqrt{3}}} e^{i 4 \pi / 3} {\lvert {100} \rangle}+\frac{1}{{\sqrt{3}}} e^{ i \pi / 3 } ({\lvert {210} \rangle} + {\lvert {211} \rangle}).\end{aligned} \hspace{\stretch{1}}(2.18)

\paragraph{Q: (b)}

Suppose that at time t_0 an L_z measurement is made, and the outcome 0 is recorded. What is the appropriate ket \psi_{\text{after}}(t_0) right after the measurement?


A measurement with outcome 0, means that the L_z operator measurement found the state at that point to be the eigenstate for L_z eigenvalue 0. Recall that if {\lvert {\phi} \rangle} is an eigenstate of L_z we have

\begin{aligned}L_z {\lvert {\phi} \rangle} = m \hbar {\lvert {\phi} \rangle},\end{aligned} \hspace{\stretch{1}}(2.19)

so a measurement of L_z with outcome zero means that we have m=0. Our measurement of L_z at time t_0 therefore filters out all but the m=0 states and our new state is proportional to the projection over all m=0 states as follows

\begin{aligned}{\lvert {\psi_{\text{after}}(t_0)} \rangle}&\propto \left( \sum_{n l} {\lvert {n l 0} \rangle}{\langle {n l 0} \rvert} \right) {\lvert {\psi(t_0)} \rangle}  \\ &\propto \left( {\lvert {1 0 0} \rangle}{\langle {1 0 0} \rvert} +{\lvert {2 1 0} \rangle}{\langle {2 1 0} \rvert} \right) {\lvert {\psi(t_0)} \rangle}  \\ &= \frac{1}{{\sqrt{3}}} e^{i 4 \pi / 3} {\lvert {100} \rangle}+\frac{1}{{\sqrt{3}}} e^{ i \pi / 3 } {\lvert {210} \rangle} \end{aligned}

A final normalization yields

\begin{aligned}{\lvert {\psi_{\text{after}}(t_0)} \rangle}= \frac{1}{{\sqrt{2}}} ({\lvert {210} \rangle} - {\lvert {100} \rangle})\end{aligned} \hspace{\stretch{1}}(2.20)

\paragraph{Q: (c)}

Right after this L_z measurement, what is the probability density {\left\lvert{\left\langle{\mathbf{r}} \vert {{\psi_{\text{after}}(t_0)}}\right\rangle}\right\rvert}^2?


Our amplitude is

\begin{aligned}\left\langle{\mathbf{r}} \vert {{\psi_{\text{after}}(t_0)}}\right\rangle&= \frac{1}{{\sqrt{2}}} (\left\langle{\mathbf{r}} \vert {{210}}\right\rangle - \left\langle{\mathbf{r}} \vert {{100}}\right\rangle) \\ &= \frac{1}{{\sqrt{2 \pi a_0^3}}}\left(\frac{r}{4\sqrt{2} a_0} e^{-r/2a_0} \cos\theta-e^{-r/a_0}\right) \\ &= \frac{1}{{\sqrt{2 \pi a_0^3}}}e^{-r/2 a_0} \left(\frac{r}{4\sqrt{2} a_0} \cos\theta-e^{-r/2 a_0}\right),\end{aligned}

so the probability density is

\begin{aligned}{\left\lvert{\left\langle{\mathbf{r}} \vert {{\psi_{\text{after}}(t_0)}}\right\rangle}\right\rvert}^2= \frac{1}{{2 \pi a_0^3}}e^{-r/a_0} \left(\frac{r}{4\sqrt{2} a_0} \cos\theta-e^{-r/2 a_0}\right)^2 \end{aligned} \hspace{\stretch{1}}(2.21)

\paragraph{Q: (d)}

If then a position measurement is made immediately, which if any components of the expectation value of \mathbf{R} will be nonvanishing? Justify your answer.


The expectation value of this vector valued operator with respect to a radial state {\lvert {\psi} \rangle} = \sum_{nlm} a_{nlm} {\lvert {nlm} \rangle} can be expressed as

\begin{aligned}\left\langle{\mathbf{R}}\right\rangle = \sum_{i=1}^3 \mathbf{e}_i \sum_{nlm, n'l'm'} a_{nlm}^{*} a_{n'l'm'} {\langle {nlm} \rvert} X_i{\lvert {n'l'm'} \rangle},\end{aligned} \hspace{\stretch{1}}(2.22)

where X_1 = X = R \sin\Theta \cos\Phi, X_2 = Y = R \sin\Theta \sin\Phi, X_3 = Z = R \cos\Theta.

Consider one of the matrix elements, and expand this by introducing an identity twice

\begin{aligned}{\langle {nlm} \rvert} X_i {\lvert {n'l'm'} \rangle}&=\int r^2 \sin\theta dr d\theta d\phi \,{r'}^2 \sin\theta' dr' d\theta' d\phi'\left\langle{{nlm}} \vert {{r \theta \phi}}\right\rangle {\langle {r \theta \phi} \rvert} X_i {\lvert {r' \theta' \phi' } \rangle}\left\langle{{r' \theta' \phi'}} \vert {{n'l'm'}}\right\rangle \\ &=\int r^2 \sin\theta dr d\theta d\phi \,{r'}^2 \sin\theta' dr' d\theta' d\phi'\,R_{nl}(r) Y_{lm}^{*}(\theta,\phi)\,\delta^3(\mathbf{x} - \mathbf{x}')\, x_i\,R_{n'l'}(r') Y_{l'm'}(\theta',\phi')\\ &=\int r^2 \sin\theta dr d\theta d\phi \,{r'}^2 \sin\theta' dr' d\theta' d\phi'\,R_{nl}(r) Y_{lm}^{*}(\theta,\phi) \frac{\delta(r-r') \delta(\theta - \theta') \delta(\phi-\phi')}{{r'}^2 \sin\theta'}\, x_i\,R_{n'l'}(r') Y_{l'm'}(\theta',\phi')\\ &=\int r^2 \sin\theta dr d\theta d\phi \,dr' d\theta' d\phi'\,R_{nl}(r) Y_{lm}^{*}(\theta,\phi)\, \delta(r-r') \delta(\theta - \theta') \delta(\phi-\phi')\, x_i\,R_{n'l'}(r') Y_{l'm'}(\theta',\phi')\\ &=\int r^2 \sin\theta dr d\theta d\phi\, R_{nl}(r) R_{n'l'}(r)\, Y_{lm}^{*}(\theta,\phi) Y_{l'm'}(\theta,\phi)\, x_i\\ \end{aligned}

Because our state has only m=0 contributions, the only \phi dependence for the X and Y components of \mathbf{R} comes from those components themselves. For X we therefore integrate \int_0^{2\pi} \cos\phi d\phi = 0, and for Y we integrate \int_0^{2\pi} \sin\phi d\phi = 0, so these terms vanish. The expectation value of \mathbf{R} for this state therefore lies entirely along the z axis.

Questions from the Dec 2008 PHY355H1F exam.

1b. Trace invariance for unitary transformation.

\paragraph{Q:} Show that the trace of an operator is invariant under unitary transforms, i.e. if A' = U^\dagger A U, where U is a unitary operator, prove \text{Tr}(A') = \text{Tr}(A).


The bulk of this question is really to show that the trace is invariant under cyclic permutation of the operators (unless that property is assumed known). To show this we start with the definition of the trace

\begin{aligned}\text{Tr}(AB) &= \sum_n {\langle {n} \rvert} A B {\lvert {n} \rangle} \\ &= \sum_{n m} {\langle {n} \rvert} A {\lvert {m} \rangle} {\langle {m} \rvert} B {\lvert {n} \rangle} \\ &= \sum_{n m} {\langle {m} \rvert} B {\lvert {n} \rangle} {\langle {n} \rvert} A {\lvert {m} \rangle} \\ &= \sum_{m} {\langle {m} \rvert} B A {\lvert {m} \rangle}.\end{aligned}

Thus we have

\begin{aligned}\text{Tr}(A B) = \text{Tr}( B A ).\end{aligned} \hspace{\stretch{1}}(3.23)

For the unitarily transformed operator we have

\begin{aligned}\text{Tr}(A') &= \text{Tr}( U^\dagger A U ) \\ &= \text{Tr}( U^\dagger (A U) ) \\ &= \text{Tr}( (A U) U^\dagger ) \\ &= \text{Tr}( A (U U^\dagger) ) \\ &= \text{Tr}( A ) \qquad \square\end{aligned}
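As a quick numerical sanity check (not part of the original exam solution), both the cyclic property and the resulting unitary invariance are easy to confirm with NumPy; the matrix size and random seed below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random complex matrix A and a random unitary U (from a QR decomposition).
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
U, _ = np.linalg.qr(rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))

A_prime = U.conj().T @ A @ U

# Tr(A') should equal Tr(A) to machine precision.
print(np.allclose(np.trace(A_prime), np.trace(A)))  # True
```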

1d. Determinant of an exponential operator in terms of trace.

\paragraph{Q:} If A is an Hermitian operator, show that

\begin{aligned}\text{Det}( \exp A ) = \exp ( \text{Tr}(A) )\end{aligned} \hspace{\stretch{1}}(3.24)

where the Determinant (\text{Det}) of an operator is the product of all its eigenvalues.


The eigenvalues clue in the question provides the starting point. We write the exponential in its series form

\begin{aligned}e^A = 1 + \sum_{k=1}^\infty \frac{1}{{k!}} A^k\end{aligned} \hspace{\stretch{1}}(3.25)

Now, suppose that we have the following eigenvalue relationships for A

\begin{aligned}A {\lvert {n} \rangle} = \lambda_n {\lvert {n} \rangle}.\end{aligned} \hspace{\stretch{1}}(3.26)

From this the exponential is

\begin{aligned}e^A {\lvert {n} \rangle} &= {\lvert {n} \rangle} + \sum_{k=1}^\infty \frac{1}{{k!}} A^k {\lvert {n} \rangle} \\ &= {\lvert {n} \rangle} + \sum_{k=1}^\infty \frac{1}{{k!}} (\lambda_n)^k {\lvert {n} \rangle} \\ &= e^{\lambda_n} {\lvert {n} \rangle}.\end{aligned}

We see that the eigenstates of e^A are those of A, with eigenvalues e^{\lambda_n}.

By the definition of the determinant given we have

\begin{aligned}\text{Det}( e^A ) &= \Pi_n e^{\lambda_n} \\ &= e^{\sum_n \lambda_n} \\ &= e^{\text{Tr}(A)} \qquad \square\end{aligned}

1e. Eigenvectors of the Harmonic oscillator creation operator.

\paragraph{Q:} Prove that the only eigenvector of the Harmonic oscillator creation operator is {\lvert {\text{null}} \rangle}.


Recall that the creation (raising) operator was given by

\begin{aligned}a^\dagger = \sqrt{\frac{m \omega}{2 \hbar}} X - \frac{ i }{\sqrt{2 m \omega \hbar} } P= \frac{1}{{ \alpha \sqrt{2} }} X - \frac{ i \alpha }{\sqrt{2} \hbar } P,\end{aligned} \hspace{\stretch{1}}(3.27)

where \alpha = \sqrt{\hbar/m \omega}. Now assume that a^\dagger {\lvert {\phi} \rangle} = \lambda {\lvert {\phi} \rangle} so that

\begin{aligned}{\langle {x} \rvert} a^\dagger {\lvert {\phi} \rangle} = {\langle {x} \rvert} \lambda {\lvert {\phi} \rangle}.\end{aligned} \hspace{\stretch{1}}(3.28)

Write \left\langle{{x}} \vert {{\phi}}\right\rangle = \phi(x), and expand the LHS using 3.27

\begin{aligned}\lambda \phi(x) &= {\langle {x} \rvert} a^\dagger {\lvert {\phi} \rangle}  \\ &= {\langle {x} \rvert} \left( \frac{1}{{ \alpha \sqrt{2} }} X - \frac{ i \alpha }{\sqrt{2} \hbar } P \right) {\lvert {\phi} \rangle} \\ &= \frac{x \phi(x)}{ \alpha \sqrt{2} } - \frac{ i \alpha }{\sqrt{2} \hbar } (-i\hbar)\frac{\partial {}}{\partial {x}} \phi(x) \\ &= \frac{x \phi(x)}{ \alpha \sqrt{2} } - \frac{ \alpha }{\sqrt{2} } \frac{\partial {\phi(x)}}{\partial {x}}.\end{aligned}

As usual write \xi = x/\alpha, and rearrange. This gives us

\begin{aligned}\frac{\partial {\phi}}{\partial {\xi}} +\sqrt{2} \lambda \phi - \xi \phi = 0.\end{aligned} \hspace{\stretch{1}}(3.29)

Observe that this can be viewed as a homogeneous LDE of the form

\begin{aligned}\frac{\partial {\phi}}{\partial {\xi}} - \xi \phi = 0,\end{aligned} \hspace{\stretch{1}}(3.30)

augmented by the additional term \sqrt{2}\lambda \phi. The homogeneous equation has the solution \phi = A e^{\xi^2/2}, so for the complete equation we assume a solution

\begin{aligned}\phi(\xi) = A(\xi) e^{\xi^2/2}.\end{aligned} \hspace{\stretch{1}}(3.31)

Since \phi' = (A' + A \xi) e^{\xi^2/2}, we produce a LDE of

\begin{aligned}0 &= (A' + A \xi -\xi A + \sqrt{2} \lambda A ) e^{\xi^2/2} \\ &= (A' + \sqrt{2} \lambda A ) e^{\xi^2/2},\end{aligned}


\begin{aligned}0 = A' + \sqrt{2} \lambda A.\end{aligned} \hspace{\stretch{1}}(3.32)

This has solution A = B e^{-\sqrt{2} \lambda \xi}, so our solution for 3.29 is

\begin{aligned}\phi(\xi) = B e^{\xi^2/2 - \sqrt{2} \lambda \xi} = B' e^{ (\xi - \lambda \sqrt{2} )^2/2}.\end{aligned} \hspace{\stretch{1}}(3.33)

This wave function is a Gaussian with positive exponent, growing without bound away from its minimum at \xi = \lambda\sqrt{2} (for real \lambda). It is therefore unnormalizable: we require B' = 0 for any \lambda if \int {\left\lvert{\phi}\right\rvert}^2 < \infty. Since \left\langle{{\xi}} \vert {{\phi}}\right\rangle = \phi(\xi) = 0, we must also have {\lvert {\phi} \rangle} = 0, completing the exercise.

2. Two level quantum system.

Consider a two-level quantum system, with basis states \{{\lvert {a} \rangle}, {\lvert {b} \rangle}\}. Suppose that the Hamiltonian for this system is given by

\begin{aligned}H = \frac{\hbar \Delta}{2} ( {\lvert {b} \rangle}{\langle {b} \rvert}- {\lvert {a} \rangle}{\langle {a} \rvert})+ i \frac{\hbar \Omega}{2} ( {\lvert {a} \rangle}{\langle {b} \rvert}- {\lvert {b} \rangle}{\langle {a} \rvert})\end{aligned} \hspace{\stretch{1}}(3.34)

where \Delta and \Omega are real positive constants.

\paragraph{Q: (a)} Find the energy eigenvalues and the normalized energy eigenvectors (expressed in terms of the \{{\lvert {a} \rangle}, {\lvert {b} \rangle}\} basis). Write the time evolution operator U(t) = e^{-i H t/\hbar} using these eigenvectors.


The eigenvalue part of this problem is probably easier to do in matrix form. Let

\begin{aligned}{\lvert {a} \rangle} &= \begin{bmatrix}1 \\ 0\end{bmatrix} \\ {\lvert {b} \rangle} &= \begin{bmatrix}0 \\ 1\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.35)

Our Hamiltonian is then

\begin{aligned}H = \frac{\hbar}{2} \begin{bmatrix}-\Delta & i \Omega \\ -i \Omega & \Delta\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.37)

Computing \det{H - \lambda I} = 0, we get

\begin{aligned}\lambda = \pm \frac{\hbar}{2} \sqrt{ \Delta^2 + \Omega^2 }.\end{aligned} \hspace{\stretch{1}}(3.38)

Let \delta = \sqrt{ \Delta^2 + \Omega^2 }. Our normalized eigenvectors are found to be

\begin{aligned}{\lvert {\pm} \rangle} = \frac{1}{{\sqrt{ 2 \delta (\delta \pm \Delta)} }}\begin{bmatrix}i \Omega \\ \Delta \pm \delta\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.39)

In terms of {\lvert {a} \rangle} and {\lvert {b} \rangle}, we then have

\begin{aligned}{\lvert {\pm} \rangle} = \frac{1}{{\sqrt{ 2 \delta (\delta \pm \Delta)} }}\left(i \Omega {\lvert {a} \rangle}+ (\Delta \pm \delta) {\lvert {b} \rangle} \right).\end{aligned} \hspace{\stretch{1}}(3.40)
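Numerically (with arbitrary test values for \Delta and \Omega, and \hbar = 1 for convenience), the claimed eigenpairs check out:

```python
import numpy as np

hbar = 1.0
Delta, Omega = 0.7, 1.3                      # arbitrary positive test values
delta = np.sqrt(Delta**2 + Omega**2)

H = (hbar / 2) * np.array([[-Delta, 1j * Omega],
                           [-1j * Omega, Delta]])

# Claimed eigenvectors |±> = (iΩ, Δ ± δ)/sqrt(2δ(δ ± Δ)), eigenvalues ±(hbar δ)/2
plus = np.array([1j * Omega, Delta + delta]) / np.sqrt(2 * delta * (delta + Delta))
minus = np.array([1j * Omega, Delta - delta]) / np.sqrt(2 * delta * (delta - Delta))

print(np.allclose(H @ plus, (hbar * delta / 2) * plus))     # True
print(np.allclose(H @ minus, -(hbar * delta / 2) * minus))  # True
print(np.isclose(np.linalg.norm(plus), 1), np.isclose(np.linalg.norm(minus), 1))
```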

Note that our Hamiltonian has a simple form in this basis. That is

\begin{aligned}H = \frac{\delta \hbar}{2} ({\lvert {+} \rangle}{\langle {+} \rvert} - {\lvert {-} \rangle}{\langle {-} \rvert} )\end{aligned} \hspace{\stretch{1}}(3.41)

Observe that once we do the diagonalization, we have a Hamiltonian that takes the form of a scaled difference of projectors, like that of an open Stern-Gerlach apparatus.

Observe that the diagonalized Hamiltonian operator makes the time evolution operator’s form also simple, which is, by inspection

\begin{aligned}U(t) = e^{-i t \frac{\delta}{2}} {\lvert {+} \rangle}{\langle {+} \rvert} + e^{i t \frac{\delta}{2}} {\lvert {-} \rangle}{\langle {-} \rvert}.\end{aligned} \hspace{\stretch{1}}(3.42)

Since we are asked for this in terms of {\lvert {a} \rangle}, and {\lvert {b} \rangle}, the projectors {\lvert {\pm} \rangle}{\langle {\pm} \rvert} are required. These are

\begin{aligned}{\lvert {\pm} \rangle}{\langle {\pm} \rvert} &= \frac{1}{{2 \delta (\delta \pm \Delta)}}\Bigl( i \Omega {\lvert {a} \rangle} + (\Delta \pm \delta) {\lvert {b} \rangle} \Bigr)\Bigl( -i \Omega {\langle {a} \rvert} + (\Delta \pm \delta) {\langle {b} \rvert} \Bigr) \\ \end{aligned}

\begin{aligned}{\lvert {\pm} \rangle}{\langle {\pm} \rvert} = \frac{1}{{2 \delta (\delta \pm \Delta)}}\Bigl(\Omega^2 {\lvert {a} \rangle}{\langle {a} \rvert}+(\Delta \pm \delta)^2 {\lvert {b} \rangle}{\langle {b} \rvert}+i \Omega (\Delta \pm \delta) ({\lvert {a} \rangle}{\langle {b} \rvert}-{\lvert {b} \rangle}{\langle {a} \rvert})\Bigr)\end{aligned} \hspace{\stretch{1}}(3.43)

Substitution into 3.42 and a fair amount of algebra leads to

\begin{aligned}U(t) = \cos(\delta t/2) \Bigl( {\lvert {a} \rangle}{\langle {a} \rvert} + {\lvert {b} \rangle}{\langle {b} \rvert} \Bigr)+ i \sin(\delta t/2) \Bigl( \frac{\Delta}{\delta} ({\lvert {a} \rangle}{\langle {a} \rvert} - {\lvert {b} \rangle}{\langle {b} \rvert}) -i \frac{\Omega}{\delta} ({\lvert {a} \rangle}{\langle {b} \rvert} - {\lvert {b} \rangle}{\langle {a} \rvert} )\Bigr).\end{aligned} \hspace{\stretch{1}}(3.44)

Note that, while a bit cumbersome, we can also verify that we can recover the original Hamiltonian from 3.41 and 3.43.
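Rather than grinding through that algebra by hand, the closed form can be sanity checked numerically. The check below uses U(t) = \cos(\delta t/2) \mathbf{1} - (2i/\hbar\delta) \sin(\delta t/2) H, which follows directly from H^2 = (\hbar \delta/2)^2 \mathbf{1}; the parameter values are arbitrary:

```python
import numpy as np

hbar = 1.0
Delta, Omega = 0.7, 1.3                      # arbitrary test values
delta = np.sqrt(Delta**2 + Omega**2)
H = (hbar / 2) * np.array([[-Delta, 1j * Omega],
                           [-1j * Omega, Delta]])
t = 0.9

# Exact e^{-iHt/hbar} via the eigendecomposition of the Hermitian H.
w, V = np.linalg.eigh(H)
U_exact = V @ np.diag(np.exp(-1j * w * t / hbar)) @ V.conj().T

# Closed form, using H^2 = (hbar*delta/2)^2 * identity.
U_closed = np.cos(delta * t / 2) * np.eye(2) \
    - (2j / (hbar * delta)) * np.sin(delta * t / 2) * H

print(np.allclose(U_exact, U_closed))  # True
```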

\paragraph{Q: (b)}

Suppose that the initial state of the system at time t = 0 is {\lvert {\phi(0)} \rangle}= {\lvert {b} \rangle}. Find an expression for the state at some later time t > 0, {\lvert {\phi(t)} \rangle}.


Most of the work is already done. Computation of {\lvert {\phi(t)} \rangle} = U(t) {\lvert {\phi(0)} \rangle} follows from 3.44

\begin{aligned}{\lvert {\phi(t)} \rangle} =\left( \cos(\delta t/2) - i \frac{\Delta}{\delta} \sin(\delta t/2) \right) {\lvert {b} \rangle}+ \frac{\Omega}{\delta} \sin(\delta t/2) {\lvert {a} \rangle}.\end{aligned} \hspace{\stretch{1}}(3.45)

\paragraph{Q: (c)}

Suppose that an observable, specified by the operator X = {\lvert {a} \rangle}{\langle {b} \rvert} + {\lvert {b} \rangle}{\langle {a} \rvert}, is measured for this system. What is the probabilbity that, at time t, the result 1 is obtained? Plot this probability as a function of time, showing the maximum and minimum values of the function, and the corresponding values of t.


The language of questions like these attempts to bring some physics into the mathematics. The phrase “the result 1 is obtained” is really a statement that after the measurement the system is found in the eigenstate of X with eigenvalue 1.

We can calculate the eigenvalues for this operator easily enough and find them to be \pm 1. For the positive eigenvalue we can also compute the eigenvector to be

\begin{aligned}{\lvert {X+} \rangle} = \frac{1}{{\sqrt{2}}} \Bigl( {\lvert {a} \rangle} + {\lvert {b} \rangle} \Bigr).\end{aligned} \hspace{\stretch{1}}(3.46)

The probability for this measurement is then given by the squared amplitude

\begin{aligned}{\left\lvert{\frac{1}{{\sqrt{2}}}\left\langle{{ (a + b)}} \vert {{\phi(t)}}\right\rangle}\right\rvert}^2\end{aligned} \hspace{\stretch{1}}(3.47)

From 3.45 we find this probability to be

\begin{aligned}{\left\lvert{\frac{1}{{\sqrt{2}}}\left\langle{{ (a + b)}} \vert {{\phi(t)}}\right\rangle}\right\rvert}^2&=\frac{1}{{2}} \left(\left(\cos(\delta t/2) + \frac{\Omega}{\delta} \sin(\delta t/2)\right)^2+ \frac{ \Delta^2 \sin^2(\delta t/2)}{\delta^2}\right) \\ &=\frac{1}{{2}} \left( 1 + \frac{\Omega}{\delta} \sin(\delta t) \right)\end{aligned}

This probability oscillates sinusoidally about the mean value 1/2, with amplitude \Omega/2\delta and period 2 \pi/\delta: the maxima (1 + \Omega/\delta)/2 occur at \delta t = \pi/2 + 2 \pi k, and the minima (1 - \Omega/\delta)/2 at \delta t = 3\pi/2 + 2 \pi k. I'd attempted a rough sketch of this on paper, but won't bother scanning it here.
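As a check on this probability (my own numerical addition, with arbitrary parameter values), one can evolve {\lvert {b} \rangle} with the exact matrix exponential and compare against the closed form (1 + (\Omega/\delta) \sin \delta t)/2:

```python
import numpy as np

hbar = 1.0
Delta, Omega = 0.7, 1.3                      # arbitrary test values
delta = np.sqrt(Delta**2 + Omega**2)
H = (hbar / 2) * np.array([[-Delta, 1j * Omega],
                           [-1j * Omega, Delta]])
w, V = np.linalg.eigh(H)

b = np.array([0.0, 1.0])                     # initial state |b>
x_plus = np.array([1.0, 1.0]) / np.sqrt(2)   # X eigenvector with eigenvalue +1

ts = np.linspace(0.0, 4 * np.pi / delta, 200)
P = np.array([abs(x_plus @ (V @ np.diag(np.exp(-1j * w * t / hbar)) @ V.conj().T @ b))**2
              for t in ts])

print(np.isclose(P[0], 0.5))                                             # P(0) = 1/2
print(np.allclose(P, 0.5 * (1 + (Omega / delta) * np.sin(delta * ts))))  # True
```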

\paragraph{Q: (d)}

Suppose an experimenter has control over the values of the parameters \Delta and \Omega. Explain how she might prepare the state ({\lvert {a} \rangle} + {\lvert {b} \rangle})/\sqrt{2}.


For this part of the question I wasn’t sure what approach to take. I thought perhaps this linear combination of states could be made to equal one of the energy eigenstates, and if one could prepare the system in that state, then for certain values of \Omega and \Delta one would then have this desired state.

To get there I note that we can express the states {\lvert {a} \rangle}, and {\lvert {b} \rangle} in terms of the eigenstates by inverting

\begin{aligned}\begin{bmatrix}{\lvert {+} \rangle} \\ {\lvert {-} \rangle} \\ \end{bmatrix}=\frac{1}{{\sqrt{2\delta}}}\begin{bmatrix}\frac{i \Omega}{\sqrt{\delta + \Delta}} & \sqrt{\delta + \Delta} \\ \frac{i \Omega}{\sqrt{\delta - \Delta}} & -\sqrt{\delta - \Delta}\end{bmatrix}\begin{bmatrix}{\lvert {a} \rangle} \\ {\lvert {b} \rangle} \\ \end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.48)

Skipping all the algebra one finds

\begin{aligned}\begin{bmatrix}{\lvert {a} \rangle} \\ {\lvert {b} \rangle} \\ \end{bmatrix}=\frac{1}{{\sqrt{2\delta}}}\begin{bmatrix}-i\sqrt{\delta - \Delta} & -i\sqrt{\delta + \Delta} \\ \frac{\Omega}{\sqrt{\delta - \Delta}} &-\frac{\Omega}{\sqrt{\delta + \Delta}} \end{bmatrix}\begin{bmatrix}{\lvert {+} \rangle} \\ {\lvert {-} \rangle} \\ \end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.49)

Unfortunately, this doesn’t seem helpful. I find

\begin{aligned}\frac{1}{{\sqrt{2}}} ( {\lvert {a} \rangle} + {\lvert {b} \rangle} ) = \frac{1}{{2\sqrt{\delta}}} \left( \frac{{\lvert {+} \rangle}}{\sqrt{\delta - \Delta}}( \Omega - i (\delta - \Delta) )-\frac{{\lvert {-} \rangle}}{\sqrt{\delta + \Delta}}( \Omega + i (\delta + \Delta) ) \right)\end{aligned} \hspace{\stretch{1}}(3.50)

There’s no obvious way to pick \Omega and \Delta to leave just {\lvert {+} \rangle} or {\lvert {-} \rangle}. When I did this on paper originally I got a different answer for this sum, but looking at it now, I can’t see how I managed to get that answer (it had no factors of i in the result as the one above does).

3. One dimensional harmonic oscillator.

Consider a one-dimensional harmonic oscillator with the Hamiltonian

\begin{aligned}H = \frac{1}{{2m}}P^2 + \frac{1}{{2}} m \omega^2 X^2\end{aligned} \hspace{\stretch{1}}(3.51)

Denote the ground state of the system by {\lvert {0} \rangle}, the first excited state by {\lvert {1} \rangle} and so on.

\paragraph{Q: (a)}
Evaluate {\langle {n} \rvert} X {\lvert {n} \rangle} and {\langle {n} \rvert} X^2 {\lvert {n} \rangle} for arbitrary {\lvert {n} \rangle}.


Writing X in terms of the raising and lowering operators we have

\begin{aligned}X = \frac{\alpha}{\sqrt{2}} (a^\dagger + a),\end{aligned} \hspace{\stretch{1}}(3.52)

so \left\langle{{X}}\right\rangle is proportional to

\begin{aligned}{\langle {n} \rvert} a^\dagger + a {\lvert {n} \rangle} = \sqrt{n+1} \left\langle{{n}} \vert {{n+1}}\right\rangle + \sqrt{n} \left\langle{{n}} \vert {{n-1}}\right\rangle = 0.\end{aligned} \hspace{\stretch{1}}(3.53)

For \left\langle{{X^2}}\right\rangle we have

\begin{aligned}\left\langle{{X^2}}\right\rangle&=\frac{\alpha^2}{2}{\langle {n} \rvert} (a^\dagger + a)(a^\dagger + a) {\lvert {n} \rangle} \\ &=\frac{\alpha^2}{2}{\langle {n} \rvert} (a^\dagger + a) \left( \sqrt{n+1} {\lvert {n+1} \rangle} + \sqrt{n} {\lvert {n-1} \rangle}\right)  \\ &=\frac{\alpha^2}{2}{\langle {n} \rvert} \Bigl( (n+1) {\lvert {n} \rangle} + \sqrt{n(n-1)} {\lvert {n-2} \rangle}+ \sqrt{(n+1)(n+2)} {\lvert {n+2} \rangle} + n {\lvert {n} \rangle} \Bigr).\end{aligned}

We are left with just

\begin{aligned}\left\langle{{X^2}}\right\rangle = \frac{\hbar}{2 m \omega} (2n + 1).\end{aligned} \hspace{\stretch{1}}(3.54)
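These matrix elements can be cross-checked with explicit truncated matrix representations of the ladder operators (a numerical aside, with the arbitrary unit choice \hbar = m = \omega = 1):

```python
import numpy as np

N = 60                                  # truncated Fock-space dimension
hbar = m = omega = 1.0
alpha = np.sqrt(hbar / (m * omega))

# <m|a|n> = sqrt(n) delta_{m,n-1}: sqrt(1..N-1) on the superdiagonal.
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
X = (alpha / np.sqrt(2)) * (a.T + a)
X2 = X @ X

n = np.arange(10)                       # stay well below the truncation edge
print(np.allclose(np.diag(X)[:10], 0))                              # <n|X|n> = 0
print(np.allclose(np.diag(X2)[:10], (alpha**2 / 2) * (2 * n + 1)))  # <n|X^2|n>
```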

\paragraph{Q: (b)}

Suppose that at t=0 the system is prepared in the state

\begin{aligned}{\lvert {\psi(0)} \rangle} = \frac{1}{{\sqrt{2}}} ( {\lvert {0} \rangle} + i {\lvert {1} \rangle} ).\end{aligned} \hspace{\stretch{1}}(3.55)

If a measurement of position X were performed immediately, sketch the probability distribution P(x) that a particle would be found within dx of x. Justify how you construct the sketch.


The probability that we started in state {\lvert {\psi(0)} \rangle} and ended up in position x is governed by the amplitude \left\langle{{x}} \vert {{\psi(0)}}\right\rangle, and the probability of being within an interval \Delta x, surrounding the point x is given by

\begin{aligned}\int_{x'=x-\Delta x/2}^{x+\Delta x/2} {\left\lvert{ \left\langle{{x'}} \vert {{\psi(0)}}\right\rangle }\right\rvert}^2 dx'.\end{aligned} \hspace{\stretch{1}}(3.56)

In the limit as \Delta x \rightarrow 0, this is just the squared amplitude itself evaluated at the point x, so we are interested in the quantity

\begin{aligned}{\left\lvert{ \left\langle{{x}} \vert {{\psi(0)}}\right\rangle }\right\rvert}^2  = \frac{1}{{2}} {\left\lvert{ \left\langle{{x}} \vert {{0}}\right\rangle + i \left\langle{{x}} \vert {{1}}\right\rangle }\right\rvert}^2.\end{aligned} \hspace{\stretch{1}}(3.57)

We are given these wave functions in the supplemental formulas. Namely,

\begin{aligned}\left\langle{{x}} \vert {{0}}\right\rangle &= \psi_0(x) = \frac{e^{-x^2/2\alpha^2}}{ \sqrt{\alpha \sqrt{\pi}}} \\ \left\langle{{x}} \vert {{1}}\right\rangle &= \psi_1(x) = \frac{e^{-x^2/2\alpha^2} 2 x }{ \alpha \sqrt{2 \alpha \sqrt{\pi}}}.\end{aligned} \hspace{\stretch{1}}(3.58)

Substituting these into 3.57 we have

\begin{aligned}{\left\lvert{ \left\langle{{x}} \vert {{\psi(0)}}\right\rangle }\right\rvert}^2 = \frac{1}{{2}} e^{-x^2/\alpha^2}\frac{1}{{ \alpha \sqrt{\pi}}}{\left\lvert{ 1 + \frac{2 i x}{\alpha \sqrt{2} } }\right\rvert}^2=\frac{e^{-x^2/\alpha^2}}{ 2\alpha \sqrt{\pi}}\left( 1 + \frac{2 x^2}{\alpha^2 } \right).\end{aligned} \hspace{\stretch{1}}(3.60)

This distribution has a local minimum at x = 0 and two symmetric maxima at x = \pm \alpha/\sqrt{2}, giving a double-humped curve centered on the origin.
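A small numerical check of this density (my addition, taking \alpha = 1 for convenience) confirms both the normalization and the hump locations:

```python
import numpy as np

alpha = 1.0
x = np.linspace(-8, 8, 16001)
P = np.exp(-x**2 / alpha**2) / (2 * alpha * np.sqrt(np.pi)) * (1 + 2 * x**2 / alpha**2)

# Trapezoid rule; the density should integrate to 1.
total = np.sum((P[:-1] + P[1:]) / 2) * (x[1] - x[0])
print(np.isclose(total, 1.0, atol=1e-5))

# The humps sit at x = ±alpha/sqrt(2).
print(np.isclose(abs(x[np.argmax(P)]), alpha / np.sqrt(2), atol=x[1] - x[0]))
```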

\paragraph{Q: (c)}

Now suppose the state given in (b) above were allowed to evolve for a time t; determine the expectation value of X and \Delta X at that time.


Our time evolved state is

\begin{aligned}U(t) {\lvert {\psi(0)} \rangle} = \frac{1}{{\sqrt{2}}}\left(e^{-i \hbar \omega \left( 0 + \frac{1}{{2}} \right) t/\hbar } {\lvert {0} \rangle}+ i e^{-i \hbar \omega \left( 1 + \frac{1}{{2}} \right) t/\hbar } {\lvert {1} \rangle}\right)=\frac{1}{{\sqrt{2}}}\left(e^{-i \omega t/2 } {\lvert {0} \rangle}+ i e^{- 3 i \omega t/2 } {\lvert {1} \rangle}\right).\end{aligned} \hspace{\stretch{1}}(3.61)

The position expectation is therefore

\begin{aligned}{\langle {\psi(t)} \rvert} X {\lvert {\psi(t)} \rangle}&= \frac{\alpha}{2 \sqrt{2}}\left(e^{i \omega t/2 } {\langle {0} \rvert}- i e^{ 3 i \omega t/2 } {\langle {1} \rvert}\right)(a^\dagger + a)\left(e^{-i \omega t/2 } {\lvert {0} \rangle}+ i e^{- 3 i \omega t/2 } {\lvert {1} \rangle}\right) \\ \end{aligned}

We have already demonstrated that {\langle {n} \rvert} X {\lvert {n} \rangle} = 0, so we must only expand the cross terms, but those are just {\langle {0} \rvert} a^\dagger + a {\lvert {1} \rangle} = 1. This leaves

\begin{aligned}{\langle {\psi(t)} \rvert} X {\lvert {\psi(t)} \rangle}= \frac{\alpha}{2 \sqrt{2}}\left( -i e^{i \omega t} + i e^{-i \omega t} \right)=\sqrt{\frac{\hbar}{2 m \omega}} \sin(\omega t)\end{aligned} \hspace{\stretch{1}}(3.62)

For the squared position expectation

\begin{aligned}{\langle {\psi(t)} \rvert} X^2 {\lvert {\psi(t)} \rangle}&= \frac{\alpha^2}{4 (2)}\left(e^{i \omega t/2 } {\langle {0} \rvert}- i e^{ 3 i \omega t/2 } {\langle {1} \rvert}\right)(a^\dagger + a)^2\left(e^{-i \omega t/2 } {\lvert {0} \rangle}+ i e^{- 3 i \omega t/2 } {\lvert {1} \rangle}\right) \\ &=\frac{1}{{2}} ( {\langle {0} \rvert} X^2 {\lvert {0} \rangle} + {\langle {1} \rvert} X^2 {\lvert {1} \rangle} )+ i \frac{\alpha^2 }{8} ( - e^{ i \omega t} {\langle {1} \rvert} (a^\dagger + a)^2 {\lvert {0} \rangle}+ e^{ -i \omega t} {\langle {0} \rvert} (a^\dagger + a)^2 {\lvert {1} \rangle})\end{aligned}

Noting that (a^\dagger + a) {\lvert {0} \rangle} = {\lvert {1} \rangle}, and (a^\dagger + a)^2 {\lvert {0} \rangle} = (a^\dagger + a){\lvert {1} \rangle} = \sqrt{2} {\lvert {2} \rangle} + {\lvert {0} \rangle}, so we see the last two terms are zero. The first two we can evaluate using our previous result 3.54 which was \left\langle{{X^2}}\right\rangle = \frac{\alpha^2}{2} (2n + 1). This leaves

\begin{aligned}{\langle {\psi(t)} \rvert} X^2 {\lvert {\psi(t)} \rangle} = \alpha^2 \end{aligned} \hspace{\stretch{1}}(3.63)

Since \left\langle{{X}}\right\rangle^2 = \alpha^2 \sin^2(\omega t)/2, we have

\begin{aligned}(\Delta X)^2 = \left\langle{{X^2}}\right\rangle - \left\langle{{X}}\right\rangle^2 = \alpha^2 \left(1 - \frac{1}{{2}} \sin^2(\omega t) \right)\end{aligned} \hspace{\stretch{1}}(3.64)
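The same truncated-matrix machinery gives a check of the time dependence (a numerical aside, with the arbitrary unit choice \hbar = m = \omega = 1): \left\langle{{X(t)}}\right\rangle = (\alpha/\sqrt{2}) \sin \omega t, while \left\langle{{X^2}}\right\rangle stays constant at \alpha^2.

```python
import numpy as np

N = 40
hbar = m = omega = 1.0
alpha = np.sqrt(hbar / (m * omega))

a = np.diag(np.sqrt(np.arange(1, N)), k=1)
X = (alpha / np.sqrt(2)) * (a.T + a)
E = hbar * omega * (np.arange(N) + 0.5)      # oscillator energies

psi0 = np.zeros(N, complex)
psi0[0], psi0[1] = 1 / np.sqrt(2), 1j / np.sqrt(2)   # (|0> + i|1>)/sqrt(2)

def evolve(t):
    # Energy eigenbasis makes time evolution a phase per component.
    return np.exp(-1j * E * t / hbar) * psi0

ts = np.linspace(0.0, 10.0, 7)
x_mean = np.array([(evolve(t).conj() @ X @ evolve(t)).real for t in ts])
x2_mean = np.array([(evolve(t).conj() @ X @ X @ evolve(t)).real for t in ts])

print(np.allclose(x_mean, (alpha / np.sqrt(2)) * np.sin(omega * ts)))  # True
print(np.allclose(x2_mean, alpha**2))                                  # True
```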

\paragraph{Q: (d)}

Now suppose that initially the system were prepared in the ground state {\lvert {0} \rangle}, and then the resonance frequency is changed abruptly from \omega to \omega' so that the Hamiltonian becomes

\begin{aligned}H = \frac{1}{{2m}}P^2 + \frac{1}{{2}} m {\omega'}^2 X^2.\end{aligned} \hspace{\stretch{1}}(3.65)

Immediately, an energy measurement is performed; what is the probability of obtaining the result E = \hbar \omega' (3/2)?


This energy measurement E = \hbar \omega' (3/2) = \hbar \omega' (1 + 1/2), corresponds to an observation of state {\lvert {1'} \rangle}, after an initial observation of {\lvert {0} \rangle}. The probability of such a measurement is

\begin{aligned}{\left\lvert{ \left\langle{{1'}} \vert {{0}}\right\rangle }\right\rvert}^2\end{aligned} \hspace{\stretch{1}}(3.66)

Note that

\begin{aligned}\left\langle{{1'}} \vert {{0}}\right\rangle &=\int dx \left\langle{{1'}} \vert {{x}}\right\rangle\left\langle{{x}} \vert {{0}}\right\rangle \\ &=\int dx \psi_{1'}^{*}(x) \psi_0(x) \\ \end{aligned}

The wave functions above are

\begin{aligned}\psi_{1'}(x) &= \frac{ 2 x e^{-x^2/2 {\alpha'}^2 }}{ \alpha' \sqrt{ 2 \alpha' \sqrt{\pi} } } \\ \psi_{0}(x) &= \frac{ e^{-x^2/2 {\alpha}^2 } } { \sqrt{ \alpha \sqrt{\pi} } } \end{aligned} \hspace{\stretch{1}}(3.67)

Putting the pieces together we have

\begin{aligned}\left\langle{{1'}} \vert {{0}}\right\rangle &=\frac{2 }{ \alpha' \sqrt{ 2 \alpha' \alpha \pi } }\int dxx e^{-\frac{x^2}{2}\left( \frac{1}{{{\alpha'}^2}} + \frac{1}{{\alpha^2}} \right) }\end{aligned} \hspace{\stretch{1}}(3.69)

Since this is an odd integrand over a symmetric range, the integral evaluates to zero, and we conclude that the probability of measuring the specified energy is zero when the system is initially prepared in the ground state associated with the original Hamiltonian. Intuitively this makes some sense if one thinks of the Fourier coefficient problem: one cannot construct an even function from linear combinations of purely odd functions.
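This vanishing overlap is easy to confirm numerically (my addition; the value of \alpha' below is an arbitrary stand-in for the new frequency):

```python
import numpy as np

alpha = 1.0       # ground-state width for the original omega
alphap = 0.6      # width alpha' for the new omega' (arbitrary test value)

x = np.linspace(-30.0, 30.0, 20001)
psi0 = np.exp(-x**2 / (2 * alpha**2)) / np.sqrt(alpha * np.sqrt(np.pi))
psi1p = 2 * x * np.exp(-x**2 / (2 * alphap**2)) / (alphap * np.sqrt(2 * alphap * np.sqrt(np.pi)))

f = psi1p * psi0                                  # odd integrand
overlap = np.sum((f[:-1] + f[1:]) / 2) * (x[1] - x[0])   # trapezoid rule
print(np.isclose(overlap, 0.0, atol=1e-9))        # True: <1'|0> = 0
```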

Posted in Math and Physics Learning.

Notes for Desai Chapter 26

Posted by peeterjoot on December 9, 2010

[Click here for a PDF of this post with nicer formatting]


Chapter 26 notes for [1].


Trig relations.

To verify equations 26.3-5 in the text it’s worth noting that

\begin{aligned}\cos(a + b) &= \Re( e^{ia} e^{ib} ) \\ &= \Re( (\cos a + i \sin a)( \cos b + i \sin b) ) \\ &= \cos a \cos b - \sin a \sin b\end{aligned}


\begin{aligned}\sin(a + b) &= \Im( e^{ia} e^{ib} ) \\ &= \Im( (\cos a + i \sin a)( \cos b + i \sin b) ) \\ &= \cos a \sin b + \sin a \cos b\end{aligned}

So, for

\begin{aligned}x &= \rho \cos\alpha \\ y &= \rho \sin\alpha \end{aligned} \hspace{\stretch{1}}(2.1)

the transformed coordinates are

\begin{aligned}x' &= \rho \cos(\alpha + \phi) \\ &= \rho (\cos \alpha \cos \phi - \sin \alpha \sin \phi) \\ &= x \cos \phi - y \sin \phi\end{aligned}


\begin{aligned}y' &= \rho \sin(\alpha + \phi) \\ &= \rho (\cos \alpha \sin \phi + \sin \alpha \cos \phi) \\ &= x \sin \phi + y \cos \phi \\ \end{aligned}

This allows us to read off the rotation matrix. Without all the messy trig, we can also derive this matrix with geometric algebra.

\begin{aligned}\mathbf{v}' &= e^{- \mathbf{e}_1 \mathbf{e}_2 \phi/2 } \mathbf{v} e^{ \mathbf{e}_1 \mathbf{e}_2 \phi/2 } \\ &= v_3 \mathbf{e}_3 + (v_1 \mathbf{e}_1 + v_2 \mathbf{e}_2) e^{ \mathbf{e}_1 \mathbf{e}_2 \phi } \\ &= v_3 \mathbf{e}_3 + (v_1 \mathbf{e}_1 + v_2 \mathbf{e}_2) (\cos \phi + \mathbf{e}_1 \mathbf{e}_2 \sin\phi) \\ &= v_3 \mathbf{e}_3 + \mathbf{e}_1 (v_1 \cos\phi - v_2 \sin\phi)+ \mathbf{e}_2 (v_2 \cos\phi + v_1 \sin\phi)\end{aligned}

Here we use the Pauli-matrix like identities

\begin{aligned}\mathbf{e}_k^2 &= 1 \\ \mathbf{e}_i \mathbf{e}_j &= -\mathbf{e}_j \mathbf{e}_i,\quad i\ne j\end{aligned} \hspace{\stretch{1}}(2.3)

and also note that \mathbf{e}_3 commutes with the bivector for the x,y plane \mathbf{e}_1 \mathbf{e}_2. We can also read off the rotation matrix from this.

Infinitesimal transformations.

Recall that in the problems of Chapter 5, one representation of the spin one matrices was calculated [2]. Since the choice of the basis vectors was arbitrary in that exercise, we ended up with a different representation. For S_x, S_y, S_z as found in (26.20) and (26.23) we can also verify easily that we have eigenvalues 0, \pm \hbar. We can also show that our spin kets in this non-diagonal representation have the following column matrix representations:

\begin{aligned}{\lvert {1,\pm 1} \rangle}_x &=\frac{1}{{\sqrt{2}}} \begin{bmatrix}0 \\ 1 \\ \pm i\end{bmatrix} \\ {\lvert {1,0} \rangle}_x &=\begin{bmatrix}1 \\ 0 \\ 0 \end{bmatrix} \\ {\lvert {1,\pm 1} \rangle}_y &=\frac{1}{{\sqrt{2}}} \begin{bmatrix}\pm i \\ 0 \\ 1 \end{bmatrix} \\ {\lvert {1,0} \rangle}_y &=\begin{bmatrix}0 \\ 1 \\ 0 \end{bmatrix} \\ {\lvert {1,\pm 1} \rangle}_z &=\frac{1}{{\sqrt{2}}} \begin{bmatrix}1 \\ \pm i \\ 0\end{bmatrix} \\ {\lvert {1,0} \rangle}_z &=\begin{bmatrix}0 \\ 0 \\ 1\end{bmatrix} \end{aligned} \hspace{\stretch{1}}(2.5)
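These representations can be checked mechanically. The code below (a numerical aside, with \hbar = 1 as an arbitrary unit choice) builds the matrices from (S_i)_{jk} = -i \hbar \epsilon_{ijk}, verifies the eigenvalues, and checks one of the listed kets:

```python
import numpy as np

hbar = 1.0
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0       # Levi-Civita symbol

# (S_i)_{jk} = -i hbar eps_{ijk}
S = [-1j * hbar * eps[i] for i in range(3)]

# Every component has eigenvalues 0, ±hbar.
print(all(np.allclose(np.linalg.eigvalsh(Si), [-hbar, 0.0, hbar]) for Si in S))

# e.g. |1,+1>_z = (1, i, 0)/sqrt(2) as listed above.
ket = np.array([1, 1j, 0]) / np.sqrt(2)
print(np.allclose(S[2] @ ket, hbar * ket))       # True
```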

Verifying the commutator relations.

Given the (summation convention) matrix representation for the spin one operators

\begin{aligned}(S_i)_{jk} = - i \hbar \epsilon_{ijk},\end{aligned} \hspace{\stretch{1}}(2.11)

let’s demonstrate the commutator relation of (26.25).

\begin{aligned}{\left[{S_i},{S_j}\right]}_{rs} &=(S_i S_j - S_j S_i)_{rs} \\ &=\sum_t (S_i)_{rt} (S_j)_{ts} - (S_j)_{rt} (S_i)_{ts} \\ &=(-i\hbar)^2 \sum_t \epsilon_{irt} \epsilon_{jts} - \epsilon_{jrt} \epsilon_{its} \\ &=-(-i\hbar)^2 \sum_t \epsilon_{tir} \epsilon_{tjs} - \epsilon_{tjr} \epsilon_{tis} \\ \end{aligned}

Now we can employ the summation rule for products of antisymmetric tensors summed over one free index (4.179)

\begin{aligned}\sum_i \epsilon_{ijk} \epsilon_{iab}= \delta_{ja}\delta_{kb}-\delta_{jb}\delta_{ka}.\end{aligned} \hspace{\stretch{1}}(2.12)

Continuing we get

\begin{aligned}{\left[{S_i},{S_j}\right]}_{rs} &=-(-i\hbar)^2 \left(\delta_{ij}\delta_{rs}-\delta_{is}\delta_{rj}-\delta_{ji}\delta_{rs}+\delta_{js}\delta_{ri} \right) \\ &=(-i\hbar)^2 \left( \delta_{is}\delta_{jr}-\delta_{ir} \delta_{js}\right)\\ &=(-i\hbar)^2 \sum_t \epsilon_{tij} \epsilon_{tsr}\\ &=i\hbar \sum_t \epsilon_{tij} (S_t)_{rs}\qquad\square\end{aligned}
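The same index gymnastics can be verified numerically for all nine (i, j) pairs (my addition, again with \hbar = 1):

```python
import numpy as np

hbar = 1.0
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

S = [-1j * hbar * eps[i] for i in range(3)]

# [S_i, S_j] = i hbar eps_{tij} S_t, summed over t
ok = all(
    np.allclose(S[i] @ S[j] - S[j] @ S[i],
                1j * hbar * sum(eps[t, i, j] * S[t] for t in range(3)))
    for i in range(3) for j in range(3)
)
print(ok)  # True
```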

General infinitesimal rotation.

Equation (26.26) gives, for an infinitesimal counterclockwise rotation about the unit rotation axis \mathbf{n},

\begin{aligned}\mathbf{V}' = \mathbf{V} + \epsilon \mathbf{n} \times \mathbf{V}.\end{aligned} \hspace{\stretch{1}}(2.13)

Let’s derive this using the geometric algebra rotation expression for the same

\begin{aligned}\mathbf{V}' &=e^{-I\mathbf{n} \alpha/2}\mathbf{V} e^{I\mathbf{n} \alpha/2} \\ &=e^{-I\mathbf{n} \alpha/2}\left((\mathbf{V} \cdot \mathbf{n})\mathbf{n}+(\mathbf{V} \wedge \mathbf{n})\mathbf{n}\right)e^{I\mathbf{n} \alpha/2} \\ &=(\mathbf{V} \cdot \mathbf{n})\mathbf{n}+(\mathbf{V} \wedge \mathbf{n})\mathbf{n} e^{I\mathbf{n} \alpha}\end{aligned}

We note that I\mathbf{n}, and thus the exponential, commutes with \mathbf{n} and with the projection component in the normal direction. Similarly, I\mathbf{n} anticommutes with (\mathbf{V} \wedge \mathbf{n}) \mathbf{n}. This leaves us with

\begin{aligned}\mathbf{V}' &=(\mathbf{V} \cdot \mathbf{n})\mathbf{n}+(\mathbf{V} \wedge \mathbf{n})\mathbf{n}\left( \cos \alpha + I \mathbf{n} \sin\alpha \right)\end{aligned}

For \alpha = \epsilon \rightarrow 0, this is

\begin{aligned}\mathbf{V}' &=(\mathbf{V} \cdot \mathbf{n})\mathbf{n}+(\mathbf{V} \wedge \mathbf{n})\mathbf{n}( 1 + I \mathbf{n} \epsilon) \\ &=(\mathbf{V} \cdot \mathbf{n})\mathbf{n} +(\mathbf{V} \wedge \mathbf{n})\mathbf{n}+\epsilon I^2(\mathbf{V} \times \mathbf{n})\mathbf{n}^2 \\ &=\mathbf{V}+ \epsilon (\mathbf{n} \times \mathbf{V}) \qquad\square\end{aligned}
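The same first order behavior can be checked numerically using the matrix exponential of the cross product generator. A sketch, assuming numpy and scipy; the axis and vector are arbitrary random choices:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = rng.normal(size=3)
n /= np.linalg.norm(n)          # unit rotation axis
V = rng.normal(size=3)

# generator K with K @ V = n x V, so expm(alpha * K) rotates by alpha about n
K = np.array([[0, -n[2], n[1]],
              [n[2], 0, -n[0]],
              [-n[1], n[0], 0]])

for epsilon in [1e-3, 1e-4]:
    exact = expm(epsilon * K) @ V
    first_order = V + epsilon * np.cross(n, V)
    # the error of the first order approximation is O(epsilon^2)
    assert np.linalg.norm(exact - first_order) < 10 * epsilon**2
print("V' = V + eps n x V verified to first order")
```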

Position and angular momentum commutator.

Equation (26.71) is

\begin{aligned}\left[{x_i},{L_j}\right] = i \hbar \epsilon_{ijk} x_k.\end{aligned} \hspace{\stretch{1}}(2.14)

Let’s derive this. Recall that we have for the position-momentum commutator

\begin{aligned}\left[{x_i},{p_j}\right] = i \hbar \delta_{ij},\end{aligned} \hspace{\stretch{1}}(2.15)

and for each of the angular momentum operator components we have

\begin{aligned}L_m = \epsilon_{mab} x_a p_b.\end{aligned} \hspace{\stretch{1}}(2.16)

The commutator of interest is thus

\begin{aligned}\left[{x_i},{L_j}\right] &= x_i \epsilon_{jab} x_a p_b -\epsilon_{jab} x_a p_b x_i \\ &= \epsilon_{jab} x_a\left(x_i p_b -p_b x_i \right) \\ &=\epsilon_{jab} x_a i \hbar \delta_{ib} \\ &=i \hbar \epsilon_{jai} x_a \\ &=i \hbar \epsilon_{ija} x_a \qquad\square\end{aligned}
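This commutator can also be verified symbolically by letting p_j = -i \hbar \partial/\partial x_j act on a test wavefunction. A sketch assuming sympy; f is an arbitrary test function:

```python
import sympy as sp
from itertools import product

hbar = sp.symbols('hbar', positive=True)
X = sp.symbols('x0 x1 x2')
f = sp.Function('f')(*X)

def p(j, g):
    # momentum operator p_j = -i hbar d/dx_j acting on a wavefunction g
    return -sp.I * hbar * sp.diff(g, X[j])

def L(j, g):
    # angular momentum L_j = eps_{jab} x_a p_b acting on g
    return sum(sp.LeviCivita(j, a, b) * X[a] * p(b, g)
               for a, b in product(range(3), repeat=2))

for i, j in product(range(3), repeat=2):
    commutator = X[i] * L(j, f) - L(j, X[i] * f)      # [x_i, L_j] f
    expected = sp.I * hbar * sum(sp.LeviCivita(i, j, k) * X[k]
                                 for k in range(3)) * f
    assert sp.expand(commutator - expected) == 0
print("[x_i, L_j] = i hbar eps_{ijk} x_k verified")
```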

A note on the angular momentum operator exponential sandwiches.

In (26.73-74) we have

\begin{aligned}e^{i \epsilon L_z/\hbar} x e^{-i \epsilon L_z/\hbar} = x + \frac{i \epsilon}{\hbar} \left[{L_z},{x}\right]\end{aligned} \hspace{\stretch{1}}(2.17)

Observe that

\begin{aligned}\left[{x},{\left[{L_z},{x}\right]}\right] = 0\end{aligned} \hspace{\stretch{1}}(2.18)

so from the first two terms of (10.99)

\begin{aligned}e^{A} B e^{-A}= B + \left[{A},{B}\right]+\frac{1}{{2}} \left[{A},{\left[{A},{B}\right]}\right] + \cdots\end{aligned} \hspace{\stretch{1}}(2.19)

we get the desired result to first order in \epsilon (the remaining terms are O(\epsilon^2)).
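The structure of this operator expansion is easy to see numerically: for a small A, truncating after the \frac{1}{2}[A,[A,B]] term leaves only a third order error. A sketch with arbitrary random matrices, assuming numpy and scipy:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
B = rng.normal(size=(4, 4))
A0 = rng.normal(size=(4, 4))
A0 /= np.linalg.norm(A0)        # normalize so that |A| = eps below

def comm(X, Y):
    return X @ Y - Y @ X

for eps_ in [1e-2, 1e-3]:
    A = eps_ * A0
    sandwich = expm(A) @ B @ expm(-A)
    series = B + comm(A, B) + comm(A, comm(A, B)) / 2
    # dropping the [A,[A,[A,B]]]/3! and higher terms leaves an O(eps^3) error
    assert np.linalg.norm(sandwich - series) < 100 * eps_**3
print("e^A B e^{-A} = B + [A,B] + [A,[A,B]]/2 + O(A^3) verified")
```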

Trace relation to the determinant.

Going from (26.90) to (26.91) we appear to have a mystery identity

\begin{aligned}\det \left( \mathbf{1} + \mu \mathbf{A} \right) = 1 + \mu \text{Tr} \mathbf{A}\end{aligned} \hspace{\stretch{1}}(2.20)

According to wikipedia, under derivative of a determinant [3], this holds to first order in \mu, and follows from Jacobi's formula for the derivative of a determinant. Someday I should really get around to studying determinants in depth, and will take this one for granted for now.
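A quick numerical check of this first order identity, assuming numpy and an arbitrary random matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(5, 5))

for mu in [1e-3, 1e-4]:
    exact = np.linalg.det(np.eye(5) + mu * A)
    approx = 1 + mu * np.trace(A)
    # the identity holds to first order; the error is O(mu^2)
    assert abs(exact - approx) < 1e3 * mu**2
print("det(1 + mu A) = 1 + mu Tr A + O(mu^2) verified")
```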


[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

[2] Peeter Joot. Notes and problems for Desai Chapter V. [online].

[3] Wikipedia. Determinant — wikipedia, the free encyclopedia [online]. 2010. [Online; accessed 10-December-2010].
