
# Posts Tagged ‘quantum mechanics’

## Updated notes compilations for phy356 and phy456 (QM I & II)

Posted by peeterjoot on July 1, 2012

Here are two updates of class notes compilations for Quantum Mechanics.

The QM I notes updates are strictly cosmetic (the book template has been updated to that of classicthesis since the notes were originally posted). The chapters in the QM II notes are reorganized a bit, grouping things by topic instead of by lecture date.

## Second order time evolution for the coefficients of an initially pure ket with an adiabatically changing Hamiltonian.

Posted by peeterjoot on November 6, 2011

[Click here for a PDF of this post with nicer formatting and figures if the post had any (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

# Motivation.

In lecture 9, Prof Sipe developed the equations governing the evolution of the coefficients of a given state for an adiabatically changing Hamiltonian. He also indicated that we could do an approximation, finding the evolution of an initially pure state in powers of $\lambda$ (like we did for the solutions of a time-independent perturbed Hamiltonian $H = H_0 + \lambda H'$). I tried doing that a couple of times and always ended up going in circles. I’ll show that here, and also develop an expansion in time up to second order as an alternative, which appears to work out nicely.

# Review.

We assumed that an adiabatically changing Hamiltonian was known with instantaneous eigenkets governed by

\begin{aligned}H(t) {\left\lvert {\hat{\psi}_n(t)} \right\rangle} = \hbar \omega_n(t) {\left\lvert {\hat{\psi}_n(t)} \right\rangle} \end{aligned} \hspace{\stretch{1}}(2.1)

The problem was to determine the time evolutions of the coefficients $\bar{b}_n(t)$ of some state ${\left\lvert {\psi(t)} \right\rangle}$, and this was found to be

\begin{aligned}{\left\lvert {\psi(t)} \right\rangle} &= \sum_n \bar{b}_n(t) e^{-i \gamma_n(t)} {\left\lvert {\hat{\psi}_n(t)} \right\rangle} \\ \gamma_s(t) &= \int_0^t dt' (\omega_s(t') - \Gamma_s(t')) \\ \Gamma_s(t) &= i {\left\langle {\hat{\psi}_s(t)} \right\rvert} \frac{d{{}}}{dt} {\left\lvert {\hat{\psi}_s(t)} \right\rangle} \end{aligned} \hspace{\stretch{1}}(2.2)

where the $\bar{b}_s(t)$ coefficients must satisfy the coupled set of linear differential equations

\begin{aligned}\frac{d{{\bar{b}_s(t)}}}{dt} = - \sum_{n \ne s} \bar{b}_n(t) e^{i \gamma_{sn}(t) } {\left\langle {\hat{\psi}_s(t)} \right\rvert} \frac{d{{}}}{dt} {\left\lvert {\hat{\psi}_n(t)} \right\rangle},\end{aligned} \hspace{\stretch{1}}(2.5)

where

\begin{aligned}\gamma_{sn}(t) = \gamma_{s}(t) - \gamma_{n}(t).\end{aligned} \hspace{\stretch{1}}(2.6)

Solving these in general doesn’t look terribly fun, but perhaps we can find an explicit solution for all the $\bar{b}_s$‘s, if we simplify the problem somewhat. Suppose that our initial state is found to be in the $m$th energy level at the time before we start switching on the changing Hamiltonian.

\begin{aligned}{\left\lvert {\psi(0)} \right\rangle} = \bar{b}_m(0) {\left\lvert {\hat{\psi}_m(0)} \right\rangle}.\end{aligned} \hspace{\stretch{1}}(2.7)

We therefore require (up to a phase factor)

\begin{aligned}\begin{array}{l l}\bar{b}_m(0) = 1 & \\ \bar{b}_s(0) = 0 & \quad \mbox{if $s \ne m$}.\end{array}\end{aligned} \hspace{\stretch{1}}(2.8)

Equivalently we can write

\begin{aligned}\bar{b}_s(0) = \delta_{ms}\end{aligned} \hspace{\stretch{1}}(2.9)

# Going in circles with a $\lambda$ expansion.

In class it was hinted that we could try a $\lambda$ expansion of the following form to determine a solution for the $\bar{b}_s$ coefficients at later times

\begin{aligned}\bar{b}_s(t) = \delta_{ms} + \lambda \bar{b}^{(1)}_s(t) + \cdots\end{aligned} \hspace{\stretch{1}}(3.10)

I wasn’t able to figure out how to make that work. Trying this to first order and plugging in, we find

\begin{aligned}\lambda \frac{d{{}}}{dt} \bar{b}^{(1)}_s(t) = - \sum_{n \ne s} ( \delta_{mn} + \lambda \bar{b}^{(1)}_n(t) ) e^{i \gamma_{sn}(t) } {\left\langle {\hat{\psi}_s(t)} \right\rvert} \frac{d{{}}}{dt} {\left\lvert {\hat{\psi}_n(t)} \right\rangle},\end{aligned} \hspace{\stretch{1}}(3.11)

equating powers of $\lambda$ yields two equations

\begin{aligned}\frac{d{{}}}{dt} \bar{b}_s^{(1)}(t) &= - \sum_{n \ne s} \bar{b}^{(1)}_n(t) e^{i \gamma_{sn}(t) } {\left\langle {\hat{\psi}_s(t)} \right\rvert} \frac{d{{}}}{dt} {\left\lvert {\hat{\psi}_n(t)} \right\rangle} \\ 0 &= - \sum_{n \ne s} \delta_{mn} e^{i \gamma_{sn}(t) } {\left\langle {\hat{\psi}_s(t)} \right\rvert} \frac{d{{}}}{dt} {\left\lvert {\hat{\psi}_n(t)} \right\rangle}.\end{aligned} \hspace{\stretch{1}}(3.12)

Observe that the first identity is exactly what we started with in 2.5, just with the $\bar{b}_n$’s replaced by $\bar{b}^{(1)}_n$’s. Worse, the second equation is only satisfied for $s = m$, and for $s \ne m$ we have

\begin{aligned}0 = - e^{i \gamma_{sm}(t) } {\left\langle {\hat{\psi}_s(t)} \right\rvert} \frac{d{{}}}{dt} {\left\lvert {\hat{\psi}_m(t)} \right\rangle}.\end{aligned} \hspace{\stretch{1}}(3.14)

So this $\lambda$ power series only appears to work if we somehow had ${\left\lvert {\hat{\psi}_s(t)} \right\rangle}$ always orthogonal to the derivative of ${\left\lvert {\hat{\psi}_m(t)} \right\rangle}$. Perhaps this could be arranged if the Hamiltonian were also expanded in powers of $\lambda$, but such a beastie seems foreign to the problem. Note that we don’t even have any explicit dependence on the Hamiltonian in the final $\bar{b}_n$ differential equations, as we’d probably need for such an expansion to work out.

# A Taylor series expansion in time.

What we can do is to expand the $\bar{b}_n$‘s in a power series parametrized by time. That is, again, assuming we started with energy equal to $\hbar \omega_m$, form

\begin{aligned}\bar{b}_s(t) = \delta_{sm} + \frac{t}{1!} \left( {\left.{{ \frac{d{{}}}{dt}\bar{b}_s(t) }}\right\vert}_{{t=0}} \right)+ \frac{t^2}{2!} \left( {\left.{{ \frac{d^2}{dt^2} \bar{b}_s(t) }}\right\vert}_{{t=0}} \right)+ \cdots\end{aligned} \hspace{\stretch{1}}(4.15)

The first order term we can grab right from 2.5 and find

\begin{aligned}{\left.{{\frac{d{{\bar{b}_s(t)}}}{dt}}}\right\vert}_{{t=0}} &= - \sum_{n \ne s} \bar{b}_n(0) {\left.{{{\left\langle {\hat{\psi}_s(t)} \right\rvert} \frac{d{{}}}{dt} {\left\lvert {\hat{\psi}_n(t)} \right\rangle}}}\right\vert}_{{t=0}} \\ &= - \sum_{n \ne s} \delta_{nm}{\left.{{{\left\langle {\hat{\psi}_s(t)} \right\rvert} \frac{d{{}}}{dt} {\left\lvert {\hat{\psi}_n(t)} \right\rangle}}}\right\vert}_{{t=0}} \\ &=\left\{\begin{array}{l l}0 & \quad \mbox{$s = m$} \\ - {\left.{{{\left\langle {\hat{\psi}_s(t)} \right\rvert} \frac{d{{}}}{dt} {\left\lvert {\hat{\psi}_m(t)} \right\rangle}}}\right\vert}_{{t=0}} & \quad \mbox{$s \ne m$} \\ \end{array}\right.\end{aligned}

Let’s write

\begin{aligned}{\left\lvert {n} \right\rangle} &= {\left\lvert {\hat{\psi}_n(0)} \right\rangle} \\ {\left\lvert {n'} \right\rangle} &= {\left.{{ \frac{d{{}}}{dt}{\left\lvert {\hat{\psi}_n(t)} \right\rangle} }}\right\vert}_{{t=0}}\end{aligned} \hspace{\stretch{1}}(4.16)

So we can write

\begin{aligned}{\left.{{\frac{d{{\bar{b}_s(t)}}}{dt}}}\right\vert}_{{t=0}} =- (1 - \delta_{sm}) \left\langle{{s}} \vert {{m'}}\right\rangle,\end{aligned} \hspace{\stretch{1}}(4.18)

so that, to first order in time, our approximation for the coefficient is

\begin{aligned}\bar{b}_s(t) =\delta_{sm} - t (1 - \delta_{sm}) \left\langle{{s}} \vert {{m'}}\right\rangle.\end{aligned} \hspace{\stretch{1}}(4.19)
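As a sanity check on 4.19 (my own numerical experiment, not part of the original notes), consider a made-up two-level Hamiltonian $H(t) = \sigma_z + t \sigma_x$ with $\hbar = 1$; the function names here are mine. The braket $\left\langle{{s}} \vert {{m'}}\right\rangle$ is evaluated by finite differences and compared against first order perturbation theory for the instantaneous eigenkets.

```python
import numpy as np

# Toy model (my own, not from the notes): H(t) = sigma_z + t sigma_x, hbar = 1.
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

def instantaneous_kets(t):
    """Columns are the instantaneous eigenkets of H(t), ordered by eigenvalue,
    with signs fixed so the kets vary smoothly near t = 0."""
    w, v = np.linalg.eigh(sz + t * sx)
    for j in range(2):
        k = np.argmax(np.abs(v[:, j]))
        v[:, j] *= np.sign(v[k, j])
    return v

# <s|m'> at t = 0 by central finite difference; take m as the upper level
# (column 1) and s as the lower level (column 0).
h = 1e-5
mprime = (instantaneous_kets(h)[:, 1] - instantaneous_kets(-h)[:, 1]) / (2 * h)
s_mprime = instantaneous_kets(0.0)[:, 0] @ mprime

# First order perturbation theory for this H gives
# |m(t)> ~ |m> + t <s|sigma_x|m>/(E_m - E_s) |s> = |m> + (t/2)|s>,
# so <s|m'> = 1/2, and 4.19 then predicts b_s(t) ~ -t/2 for small t.
print(s_mprime)  # ~ 0.5
```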

Let’s do the second order term too. For that we have

\begin{aligned}{\left.{{\frac{d^2}{dt^2} \bar{b}_s(t)}}\right\vert}_{{t=0}} &= - \sum_{n \ne s} {\left.{{\left(\left(\frac{d{{}}}{dt} \bar{b}_n(t) +\delta_{nm} i \frac{d{{\gamma_{sn}(t)}}}{dt}\right)\left\langle{{s}} \vert {{n'}}\right\rangle+\delta_{nm} \frac{d{{}}}{dt} \left( {\left\langle {\hat{\psi}_s(t)} \right\rvert} \frac{d{{}}}{dt} {\left\lvert {\hat{\psi}_n(t)} \right\rangle} \right) \right)}}\right\vert}_{{t=0}}\end{aligned}

For the $\gamma_{sn}$ derivative we note that

\begin{aligned}{\left.{{\frac{d{{}}}{dt} \gamma_s(t)}}\right\vert}_{{t=0}} = \omega_s(0) - i\left\langle{{s}} \vert {{s'}}\right\rangle,\end{aligned} \hspace{\stretch{1}}(4.20)

So we have

\begin{aligned}{\left.{{\frac{d^2}{dt^2} \bar{b}_s(t)}}\right\vert}_{{t=0}} &= - \sum_{n \ne s} \Bigl(- (1 - \delta_{nm}) \left\langle{{n}} \vert {{m'}}\right\rangle+\delta_{nm} i (\omega_{sn}(0) - i\left\langle{{s}} \vert {{s'}}\right\rangle + i\left\langle{{n}} \vert {{n'}}\right\rangle)\Bigr)\left\langle{{s}} \vert {{n'}}\right\rangle+\delta_{nm} \Bigl( \left\langle{{s'}} \vert {{n'}}\right\rangle+\left\langle{{s}} \vert {{n''}}\right\rangle\Bigr)\end{aligned}

Again for $s = m$, all terms are killed. That’s somewhat surprising, but it suggests that we will need to normalize the coefficients after the perturbation calculation, since the $s = m$ coefficient remains at unity.

For $s \ne m$ we have

\begin{aligned}{\left.{{\frac{d^2}{dt^2} \bar{b}_s(t)}}\right\vert}_{{t=0}} &= \sum_{n \ne s} \Bigl(\left\langle{{n}} \vert {{m'}}\right\rangle-\delta_{nm} i (\omega_{sn}(0) - i\left\langle{{s}} \vert {{s'}}\right\rangle + i\left\langle{{n}} \vert {{n'}}\right\rangle)\Bigr)\left\langle{{s}} \vert {{n'}}\right\rangle-\delta_{nm} \Bigl( \left\langle{{s'}} \vert {{n'}}\right\rangle+\left\langle{{s}} \vert {{n''}}\right\rangle\Bigr) \\ &= -i (\omega_{sm}(0) - i\left\langle{{s}} \vert {{s'}}\right\rangle + i\left\langle{{m}} \vert {{m'}}\right\rangle)\left\langle{{s}} \vert {{m'}}\right\rangle-\Bigl( \left\langle{{s'}} \vert {{m'}}\right\rangle+\left\langle{{s}} \vert {{m''}}\right\rangle\Bigr) +\sum_{n \ne s} \left\langle{{n}} \vert {{m'}}\right\rangle \left\langle{{s}} \vert {{n'}}\right\rangle.\end{aligned}

So we have, for $s \ne m$

\begin{aligned}{\left.{{\frac{d^2}{dt^2} \bar{b}_s(t)}}\right\vert}_{{t=0}} = (\left\langle{{m}} \vert {{m'}}\right\rangle - \left\langle{{s}} \vert {{s'}}\right\rangle ) \left\langle{{s}} \vert {{m'}}\right\rangle-i \omega_{sm}(0) \left\langle{{s}} \vert {{m'}}\right\rangle-\left\langle{{s'}} \vert {{m'}}\right\rangle-\left\langle{{s}} \vert {{m''}}\right\rangle+\sum_{n \ne s} \left\langle{{n}} \vert {{m'}}\right\rangle \left\langle{{s}} \vert {{n'}}\right\rangle.\end{aligned} \hspace{\stretch{1}}(4.21)

It’s not particularly illuminating to look at, but it is possible to compute, and we can use it to form a second order approximate solution for our perturbed state.

\begin{aligned}\begin{aligned}\bar{b}_s(t) &=\delta_{sm} - t (1 - \delta_{sm}) \left\langle{{s}} \vert {{m'}}\right\rangle \\ &+(1 - \delta_{sm})\left((\left\langle{{m}} \vert {{m'}}\right\rangle - \left\langle{{s}} \vert {{s'}}\right\rangle ) \left\langle{{s}} \vert {{m'}}\right\rangle-i \omega_{sm}(0) \left\langle{{s}} \vert {{m'}}\right\rangle-\left\langle{{s'}} \vert {{m'}}\right\rangle-\left\langle{{s}} \vert {{m''}}\right\rangle+\sum_{n \ne s} \left\langle{{n}} \vert {{m'}}\right\rangle \left\langle{{s}} \vert {{n'}}\right\rangle\right) \frac{t^2}{2}\end{aligned}\end{aligned} \hspace{\stretch{1}}(4.22)

## PHY456H1F: Quantum Mechanics II. Lecture 16 (Taught by Prof J.E. Sipe). Hydrogen atom with spin, and two spin systems.

Posted by peeterjoot on November 2, 2011

[Click here for a PDF of this post with nicer formatting and figures if the post had any (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

# Disclaimer.

Peeter’s lecture notes from class. May not be entirely coherent.

# The hydrogen atom with spin.

READING: what chapter of [1] ?

For a spinless hydrogen atom, the Hamiltonian was

\begin{aligned}H = H_{\text{CM}} \otimes H_{\text{rel}}\end{aligned} \hspace{\stretch{1}}(2.1)

where we have independent Hamiltonians for the motion of the center of mass and for the relative motion of the electron and the proton.

The basis kets for these could be designated ${\left\lvert {\mathbf{p}_\text{CM}} \right\rangle}$ and ${\left\lvert {\mathbf{p}_\text{rel}} \right\rangle}$ respectively.

Now we want to augment this, treating

\begin{aligned}H = H_{\text{CM}} \otimes H_{\text{rel}} \otimes H_{\text{s}}\end{aligned} \hspace{\stretch{1}}(2.2)

where $H_{\text{s}}$ is the Hamiltonian for the spin of the electron. We are neglecting the spin of the proton, but that could also be included (this turns out to be a lesser effect).

We’ll introduce a Hamiltonian including the dynamics of the relative motion and the electron spin

\begin{aligned}H_{\text{rel}} \otimes H_{\text{s}}\end{aligned} \hspace{\stretch{1}}(2.3)

Covering the Hilbert space for this system we’ll use basis kets

\begin{aligned}{\left\lvert {nlm\pm} \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.4)

\begin{aligned}\begin{aligned}{\left\lvert {nlm+} \right\rangle} &\rightarrow \begin{bmatrix}\left\langle{{\mathbf{r}+}} \vert {{nlm+}}\right\rangle \\ \left\langle{{\mathbf{r}-}} \vert {{nlm+}}\right\rangle \\ \end{bmatrix}=\begin{bmatrix}\Phi_{nlm}(\mathbf{r}) \\ 0\end{bmatrix} \\ {\left\lvert {nlm-} \right\rangle} &\rightarrow \begin{bmatrix}\left\langle{{\mathbf{r}+}} \vert {{nlm-}}\right\rangle \\ \left\langle{{\mathbf{r}-}} \vert {{nlm-}}\right\rangle \\ \end{bmatrix}=\begin{bmatrix}0 \\ \Phi_{nlm}(\mathbf{r}) \end{bmatrix}.\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.5)

Here $\mathbf{r}$ should be understood to really mean $\mathbf{r}_\text{rel}$. Our full Hamiltonian, after introducing a magnetic perturbation, is

\begin{aligned}H = \frac{P_\text{CM}^2}{2M} + \left(\frac{P_\text{rel}^2}{2\mu}-\frac{e^2}{R_\text{rel}}\right)- \boldsymbol{\mu}_0 \cdot \mathbf{B}- \boldsymbol{\mu}_s \cdot \mathbf{B}\end{aligned} \hspace{\stretch{1}}(2.6)

where

\begin{aligned}M = m_\text{proton} + m_\text{electron},\end{aligned} \hspace{\stretch{1}}(2.7)

and

\begin{aligned}\frac{1}{{\mu}} = \frac{1}{{m_\text{proton}}} + \frac{1}{{m_\text{electron}}}.\end{aligned} \hspace{\stretch{1}}(2.8)

For a uniform magnetic field

\begin{aligned}\boldsymbol{\mu}_0 &= \left( -\frac{e}{2 m c} \right) \mathbf{L} \\ \boldsymbol{\mu}_s &= g \left( -\frac{e}{2 m c} \right) \mathbf{S}\end{aligned} \hspace{\stretch{1}}(2.9)

We also have higher order terms (higher order multipoles) and relativistic corrections (like spin orbit coupling [2]).

# Two spins.

Example: Consider two electrons, 1 in each of 2 quantum dots.

\begin{aligned}H = H_{1} \otimes H_{2}\end{aligned} \hspace{\stretch{1}}(3.11)

where $H_1$ and $H_2$ are both spin Hamiltonians on their respective 2D Hilbert spaces. Our complete Hilbert space is thus a 4D space.

We’ll write

\begin{aligned}\begin{aligned}{\left\lvert {+} \right\rangle}_1 \otimes {\left\lvert {+} \right\rangle}_2 &= {\left\lvert {++} \right\rangle} \\ {\left\lvert {+} \right\rangle}_1 \otimes {\left\lvert {-} \right\rangle}_2 &= {\left\lvert {+-} \right\rangle} \\ {\left\lvert {-} \right\rangle}_1 \otimes {\left\lvert {+} \right\rangle}_2 &= {\left\lvert {-+} \right\rangle} \\ {\left\lvert {-} \right\rangle}_1 \otimes {\left\lvert {-} \right\rangle}_2 &= {\left\lvert {--} \right\rangle} \end{aligned}\end{aligned} \hspace{\stretch{1}}(3.12)

Can introduce

\begin{aligned}\mathbf{S}_1 &= \mathbf{S}_1^{(1)} \otimes I^{(2)} \\ \mathbf{S}_2 &= I^{(1)} \otimes \mathbf{S}_2^{(2)}\end{aligned} \hspace{\stretch{1}}(3.13)

Here we “promote” each of the individual spin operators to spin operators in the complete Hilbert space.
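Numerically, this promotion is just a Kronecker product. As a small illustration (my own, not from the lecture; $\hbar = 1$, names mine), with numpy:

```python
import numpy as np

hbar = 1.0
Sz = (hbar / 2) * np.array([[1.0, 0.0], [0.0, -1.0]])  # single spin S_z
I2 = np.eye(2)

# Promote the single particle operators to the 4D product space.
S1z = np.kron(Sz, I2)  # S_z^(1) (x) I^(2)
S2z = np.kron(I2, Sz)  # I^(1) (x) S_z^(2)

# With kets built via np.kron, the basis ordering is {|++>, |+->, |-+>, |-->}.
up = np.array([1.0, 0.0])
dn = np.array([0.0, 1.0])
pm = np.kron(up, dn)  # |+->

print(np.allclose(S1z @ pm, (hbar / 2) * pm))   # True, matching 3.15
print(np.allclose(S2z @ pm, -(hbar / 2) * pm))  # True
```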

We write

\begin{aligned}S_{1z}{\left\lvert {++} \right\rangle} &= \frac{\hbar}{2} {\left\lvert {++} \right\rangle} \\ S_{1z}{\left\lvert {+-} \right\rangle} &= \frac{\hbar}{2} {\left\lvert {+-} \right\rangle}\end{aligned} \hspace{\stretch{1}}(3.15)

Write

\begin{aligned}\mathbf{S} = \mathbf{S}_1 + \mathbf{S}_2,\end{aligned} \hspace{\stretch{1}}(3.17)

for the full spin angular momentum operator. The $z$ component of this operator is

\begin{aligned}S_z = S_{1z} + S_{2z}\end{aligned} \hspace{\stretch{1}}(3.18)

\begin{aligned}S_z{\left\lvert {++} \right\rangle} &= (S_{1z} + S_{2z}) {\left\lvert {++} \right\rangle} = \left( \frac{\hbar}{2} +\frac{\hbar}{2} \right) {\left\lvert {++} \right\rangle} = \hbar {\left\lvert {++} \right\rangle} \\ S_z{\left\lvert {+-} \right\rangle} &= (S_{1z} + S_{2z}) {\left\lvert {+-} \right\rangle} = \left( \frac{\hbar}{2} -\frac{\hbar}{2} \right) {\left\lvert {+-} \right\rangle} = 0 \\ S_z{\left\lvert {-+} \right\rangle} &= (S_{1z} + S_{2z}) {\left\lvert {-+} \right\rangle} = \left( -\frac{\hbar}{2} +\frac{\hbar}{2} \right) {\left\lvert {-+} \right\rangle} = 0 \\ S_z{\left\lvert {--} \right\rangle} &= (S_{1z} + S_{2z}) {\left\lvert {--} \right\rangle} = \left( -\frac{\hbar}{2} -\frac{\hbar}{2} \right) {\left\lvert {--} \right\rangle} = -\hbar {\left\lvert {--} \right\rangle} \end{aligned} \hspace{\stretch{1}}(3.19)

So, we find that ${\left\lvert {x x} \right\rangle}$ are all eigenkets of $S_z$. These will also all be eigenkets of $\mathbf{S}_1^2 = S_{1x}^2 +S_{1y}^2 +S_{1z}^2$ since we have

\begin{aligned}S_1^2 {\left\lvert {x x} \right\rangle} &= \hbar^2 \left(\frac{1}{{2}}\right) \left(1 + \frac{1}{{2}}\right) {\left\lvert {x x} \right\rangle} = \frac{3}{4} \hbar^2 {\left\lvert {x x} \right\rangle} \\ S_2^2 {\left\lvert {x x} \right\rangle} &= \hbar^2 \left(\frac{1}{{2}}\right) \left(1 + \frac{1}{{2}}\right) {\left\lvert {x x} \right\rangle} = \frac{3}{4} \hbar^2 {\left\lvert {x x} \right\rangle} \end{aligned} \hspace{\stretch{1}}(3.23)

\begin{aligned}\begin{aligned}S^2 &= (\mathbf{S}_1+\mathbf{S}_2) \cdot(\mathbf{S}_1+\mathbf{S}_2) \\ &= S_1^2 + S_2^2 + 2 \mathbf{S}_1 \cdot \mathbf{S}_2\end{aligned}\end{aligned} \hspace{\stretch{1}}(3.25)

Are all the product kets also eigenkets of $S^2$? Calculate

\begin{aligned}S^2 {\left\lvert {+-} \right\rangle} &= (S_1^2 + S_2^2 + 2 \mathbf{S}_1 \cdot \mathbf{S}_2) {\left\lvert {+-} \right\rangle} \\ &=\left(\frac{3}{4}\hbar^2+\frac{3}{4}\hbar^2\right){\left\lvert {+-} \right\rangle}+ 2 S_{1x} S_{2x} {\left\lvert {+-} \right\rangle} + 2 S_{1y} S_{2y} {\left\lvert {+-} \right\rangle} + 2 S_{1z} S_{2z} {\left\lvert {+-} \right\rangle} \end{aligned}

For the $z$ mixed terms, we have

\begin{aligned}2 S_{1z} S_{2z} {\left\lvert {+-} \right\rangle} = 2 \left(\frac{\hbar}{2}\right)\left(-\frac{\hbar}{2}\right){\left\lvert {+-} \right\rangle}\end{aligned} \hspace{\stretch{1}}(3.26)

So

\begin{aligned}S^2{\left\lvert {+-} \right\rangle} = \hbar^2 {\left\lvert {+-} \right\rangle} + 2 S_{1x} S_{2x} {\left\lvert {+-} \right\rangle} + 2 S_{1y} S_{2y} {\left\lvert {+-} \right\rangle} \end{aligned} \hspace{\stretch{1}}(3.27)

Since we have set our spin quantization axis along the $z$ direction with

\begin{aligned}{\left\lvert {+} \right\rangle} &\rightarrow \begin{bmatrix}1 \\ 0\end{bmatrix} \\ {\left\lvert {-} \right\rangle} &\rightarrow \begin{bmatrix}0 \\ 1 \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.28)

We have

\begin{aligned}S_x{\left\lvert {+} \right\rangle} &\rightarrow \frac{\hbar}{2} \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}\begin{bmatrix}1 \\ 0\end{bmatrix} =\frac{\hbar}{2}\begin{bmatrix}0 \\ 1 \end{bmatrix}=\frac{\hbar}{2} {\left\lvert {-} \right\rangle} \\ S_x{\left\lvert {-} \right\rangle} &\rightarrow \frac{\hbar}{2} \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}\begin{bmatrix}0 \\ 1 \end{bmatrix} =\frac{\hbar}{2}\begin{bmatrix}1 \\ 0 \end{bmatrix}=\frac{\hbar}{2} {\left\lvert {+} \right\rangle} \\ S_y{\left\lvert {+} \right\rangle} &\rightarrow \frac{\hbar}{2} \begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix}\begin{bmatrix}1 \\ 0 \end{bmatrix} =\frac{i\hbar}{2}\begin{bmatrix}0 \\ 1 \end{bmatrix}=\frac{i\hbar}{2} {\left\lvert {-} \right\rangle} \\ S_y{\left\lvert {-} \right\rangle} &\rightarrow \frac{\hbar}{2} \begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix}\begin{bmatrix}0 \\ 1 \end{bmatrix} =\frac{-i\hbar}{2}\begin{bmatrix}1 \\ 0 \end{bmatrix}=-\frac{i\hbar}{2} {\left\lvert {+} \right\rangle} \\ \end{aligned}

we are able to arrive at the action of $S^2$ on our mixed composite state

\begin{aligned}S^2{\left\lvert {+-} \right\rangle} = \hbar^2 ({\left\lvert {+-} \right\rangle} + {\left\lvert {-+} \right\rangle} ).\end{aligned} \hspace{\stretch{1}}(3.30)

For the action on the ${\left\lvert {++} \right\rangle}$ state we have

\begin{aligned}S^2 {\left\lvert {++} \right\rangle} &=\left(\frac{3}{4}\hbar^2 +\frac{3}{4}\hbar^2\right){\left\lvert {++} \right\rangle} + 2 \frac{\hbar^2}{4} {\left\lvert {--} \right\rangle} + 2 i^2 \frac{\hbar^2}{4} {\left\lvert {--} \right\rangle} +2 \left(\frac{\hbar}{2}\right)\left(\frac{\hbar}{2}\right){\left\lvert {++} \right\rangle} \\ &=2 \hbar^2 {\left\lvert {++} \right\rangle} \\ \end{aligned}

and on the ${\left\lvert {--} \right\rangle}$ state we have

\begin{aligned}S^2 {\left\lvert {--} \right\rangle} &=\left(\frac{3}{4}\hbar^2 +\frac{3}{4}\hbar^2\right){\left\lvert {--} \right\rangle} + 2 \frac{(-\hbar)^2}{4} {\left\lvert {++} \right\rangle} + 2 i^2 \frac{\hbar^2}{4} {\left\lvert {++} \right\rangle} +2 \left(-\frac{\hbar}{2}\right)\left(-\frac{\hbar}{2}\right){\left\lvert {--} \right\rangle} \\ &=2 \hbar^2 {\left\lvert {--} \right\rangle} \end{aligned}

All of this can be assembled into a tidier matrix form

\begin{aligned}S^2\rightarrow \hbar^2\begin{bmatrix}2 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 2 \\ \end{bmatrix},\end{aligned} \hspace{\stretch{1}}(3.31)

where the matrix is taken with respect to the (ordered) basis

\begin{aligned}\{{\left\lvert {++} \right\rangle},{\left\lvert {+-} \right\rangle},{\left\lvert {-+} \right\rangle},{\left\lvert {--} \right\rangle}\}.\end{aligned} \hspace{\stretch{1}}(3.32)
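As a check (mine, not from the lecture; $\hbar = 1$), the matrix 3.31 can be reproduced numerically by building the promoted spin components with Kronecker products of Pauli matrices:

```python
import numpy as np

hbar = 1.0
# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

# Promoted components S_1i = (hbar/2) sigma_i (x) I, S_2i = I (x) (hbar/2) sigma_i
S1 = [np.kron((hbar / 2) * s, I2) for s in (sx, sy, sz)]
S2 = [np.kron(I2, (hbar / 2) * s) for s in (sx, sy, sz)]

# S^2 = (S_1 + S_2) . (S_1 + S_2), summed over the x, y, z components
S_sq = sum((a + b) @ (a + b) for a, b in zip(S1, S2))

# Matrix in the ordered basis {|++>, |+->, |-+>, |-->}, matching 3.31:
# [[2 0 0 0]
#  [0 1 1 0]
#  [0 1 1 0]
#  [0 0 0 2]] * hbar^2
print(np.real(S_sq) / hbar**2)
```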

However,

\begin{aligned}\left[{S^2},{S_z}\right] &= 0 \\ \left[{S_i},{S_j}\right] &= i \hbar \sum_k \epsilon_{ijk} S_k\end{aligned} \hspace{\stretch{1}}(3.33)

so it should be possible to find simultaneous eigenkets of $S^2$ and $S_z$

\begin{aligned}S^2 {\left\lvert {s m_s} \right\rangle} &= s(s+1)\hbar^2 {\left\lvert {s m_s} \right\rangle} \\ S_z {\left\lvert {s m_s} \right\rangle} &= \hbar m_s {\left\lvert {s m_s} \right\rangle} \end{aligned} \hspace{\stretch{1}}(3.35)

An orthonormal set of eigenkets of $S^2$ and $S_z$ is found to be

\begin{aligned}\begin{array}{l l}{\left\lvert {++} \right\rangle} & \mbox{$s = 1$ and $m_s = 1$} \\ \frac{1}{{\sqrt{2}}} \left( {\left\lvert {+-} \right\rangle} + {\left\lvert {-+} \right\rangle} \right) & \mbox{$s = 1$ and $m_s = 0$} \\ {\left\lvert {--} \right\rangle} & \mbox{$s = 1$ and $m_s = -1$} \\ \frac{1}{{\sqrt{2}}} \left( {\left\lvert {+-} \right\rangle} - {\left\lvert {-+} \right\rangle} \right) & \mbox{$s = 0$ and $m_s = 0$}\end{array}\end{aligned} \hspace{\stretch{1}}(3.37)

The first three kets here can be grouped into a triplet spanning a 3D Hilbert space, whereas the last is a singlet in a 1D Hilbert space.
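Numerically diagonalizing the matrix of 3.31 recovers this triplet/singlet split (again my own check, with $\hbar = 1$):

```python
import numpy as np

hbar = 1.0
# S^2 in the ordered basis {|++>, |+->, |-+>, |-->}, from 3.31.
S_sq = hbar**2 * np.array([
    [2.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 1.0, 0.0],
    [0.0, 1.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 2.0],
])

w, v = np.linalg.eigh(S_sq)
print(np.sort(w))  # ~ [0. 2. 2. 2.]: one s = 0 singlet plus an s = 1 triplet

# The zero eigenvalue eigenket is the singlet (|+-> - |-+>)/sqrt(2),
# up to an overall sign.
singlet = v[:, np.argmin(w)]
target = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
print(np.allclose(abs(singlet @ target), 1.0))  # True
```

The $s(s+1)\hbar^2$ eigenvalues $2\hbar^2$ and $0$ correspond to $s = 1$ and $s = 0$ respectively, as in 3.35.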

Form a grouping

\begin{aligned}H = H_1 \otimes H_2\end{aligned} \hspace{\stretch{1}}(3.38)

Can write

\begin{aligned}\frac{1}{{2}} \otimes \frac{1}{{2}} = 1 \oplus 0\end{aligned} \hspace{\stretch{1}}(3.39)

where the $1$ and $0$ here refer to the spin index $s$.

## Other examples

Consider, perhaps, the $l=5$ state of the hydrogen atom

\begin{aligned}J_1^2 {\left\lvert {j_1 m_1} \right\rangle} &= j_1(j_1+1)\hbar^2 {\left\lvert {j_1 m_1} \right\rangle} \\ J_{1z} {\left\lvert {j_1 m_1} \right\rangle} &= \hbar m_1 {\left\lvert {j_1 m_1} \right\rangle} \end{aligned} \hspace{\stretch{1}}(3.40)

\begin{aligned}J_2^2 {\left\lvert {j_2 m_2} \right\rangle} &= j_2(j_2+1)\hbar^2 {\left\lvert {j_2 m_2} \right\rangle} \\ J_{2z} {\left\lvert {j_2 m_2} \right\rangle} &= \hbar m_2 {\left\lvert {j_2 m_2} \right\rangle} \end{aligned} \hspace{\stretch{1}}(3.42)

Consider the Hilbert space spanned by ${\left\lvert {j_1 m_1} \right\rangle} \otimes {\left\lvert {j_2 m_2} \right\rangle}$, a $(2 j_1 + 1)(2 j_2 + 1)$ dimensional space. How to find the eigenkets of $J^2$ and $J_z$?
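A useful first consistency check is dimension counting: the irreducible blocks $j = {\left\lvert {j_1 - j_2} \right\rvert}, \ldots, j_1 + j_2$ must add up to the product dimension $(2 j_1 + 1)(2 j_2 + 1)$. A little sketch of that bookkeeping (my own, not from lecture; half-integer spins handled with exact fractions):

```python
from fractions import Fraction

def product_dim(j1, j2):
    """Dimension of the product space |j1 m1> (x) |j2 m2>."""
    return int((2 * j1 + 1) * (2 * j2 + 1))

def sum_dim(j1, j2):
    """Total dimension of the irreducible blocks j = |j1-j2| ... j1+j2."""
    j1, j2 = Fraction(j1), Fraction(j2)
    j = abs(j1 - j2)
    total = 0
    while j <= j1 + j2:
        total += int(2 * j + 1)
        j += 1
    return total

half = Fraction(1, 2)
# Two spin-1/2's: 2 x 2 = 3 + 1, the triplet plus singlet above.
print(product_dim(half, half), sum_dim(half, half))  # 4 4
# An l = 5 orbital state with electron spin: 11 x 2 = 12 + 10.
print(product_dim(5, half), sum_dim(5, half))  # 22 22
```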

# References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

[2] Wikipedia. Spin–orbit interaction — Wikipedia, the free encyclopedia [online]. 2011. [Online; accessed 2-November-2011]. http://en.wikipedia.org/w/index.php?title=Spin%E2%80%93orbit_interaction&oldid=451606718.

## A different derivation of the adiabatic perturbation coefficient equation

Posted by peeterjoot on October 27, 2011

[Click here for a PDF of this post with nicer formatting and figures if the post had any (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

# Motivation.

Professor Sipe’s adiabatic perturbation treatment and that of the text [1] (sections 17.5.1 and 17.5.2) use different notation for $\gamma_m$ and take slightly different approaches. We can find Prof Sipe’s final result with a bit less work if a hybrid of the two methods is used.

# Guts

Our starting point is the same: we have a time dependent, slowly varying Hamiltonian

\begin{aligned}H = H(t),\end{aligned} \hspace{\stretch{1}}(2.1)

where our perturbation starts at some specific time from a given initial state

\begin{aligned}H(t) = H_0, \qquad t \le 0.\end{aligned} \hspace{\stretch{1}}(2.2)

We assume that instantaneous eigenkets can be found, satisfying

\begin{aligned}H(t) {\left\lvert {n(t)} \right\rangle} = E_n(t) {\left\lvert {n(t)} \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.3)

Here I’ll use ${\left\lvert {n} \right\rangle} \equiv {\left\lvert {n(t)} \right\rangle}$ instead of the ${\left\lvert {\hat{\psi}_n(t)} \right\rangle}$ that we used in class because it’s easier to write.

Now suppose that we have some arbitrary state, expressed in terms of the instantaneous basis kets ${\left\lvert {n} \right\rangle}$

\begin{aligned}{\left\lvert {\psi} \right\rangle} = \sum_n \bar{b}_n(t) e^{-i\alpha_n + i \gamma_n} {\left\lvert {n} \right\rangle},\end{aligned} \hspace{\stretch{1}}(2.4)

where

\begin{aligned}\alpha_n(t) = \frac{1}{{\hbar}} \int_0^t dt' E_n(t'),\end{aligned} \hspace{\stretch{1}}(2.5)

and $\gamma_n$ (using the notation in the text, not in class) is to be determined.

For this state, we have at the time just before the perturbation

\begin{aligned}{\left\lvert {\psi(0)} \right\rangle} = \sum_n \bar{b}_n(0) e^{-i\alpha_n(0) + i \gamma_n(0)} {\left\lvert {n(0)} \right\rangle}.\end{aligned} \hspace{\stretch{1}}(2.6)

The question to answer is: How does this particular state evolve?

Another question, for those that don’t like sneaky bastard derivations, is where did that magic factor of $e^{-i\alpha_n}$ come from in our superposition state? We will see after we start taking derivatives that this is exactly what is needed to cancel the $H(t){\left\lvert {n} \right\rangle}$ term in Schrödinger’s equation.

Proceeding to plug into the evolution identity we have

\begin{aligned}0 &={\left\langle {m} \right\rvert} \left( i \hbar \frac{d{{}}}{dt} - H(t) \right) {\left\lvert {\psi} \right\rangle} \\ &={\left\langle {m} \right\rvert} \left(\sum_n e^{-i \alpha_n + i \gamma_n}(i \hbar) \left(\frac{d{{\bar{b}_n}}}{dt}+ \bar{b}_n \left(-i \not{{\frac{E_n}{\hbar}}} + i \dot{\gamma}_n \right)\right) {\left\lvert {n} \right\rangle}+ i \hbar \bar{b}_n \frac{d{{}}}{dt} {\left\lvert {n} \right\rangle}- \not{{E_n \bar{b}_n {\left\lvert {n} \right\rangle}}} \right)\\ &=e^{-i \alpha_m + i \gamma_m}(i \hbar) \frac{d{{\bar{b}_m}}}{dt}+e^{-i \alpha_m + i \gamma_m}(i \hbar) i \dot{\gamma}_m \bar{b}_m+ i \hbar \sum_n \bar{b}_n {\left\langle {m} \right\rvert} \frac{d{{}}}{dt} {\left\lvert {n} \right\rangle}e^{-i \alpha_n + i \gamma_n} \\ &\sim\frac{d{{\bar{b}_m}}}{dt}+i \dot{\gamma}_m \bar{b}_m+ \sum_n e^{-i \alpha_n + i \gamma_n}e^{i \alpha_m - i \gamma_m}\bar{b}_n {\left\langle {m} \right\rvert} \frac{d{{}}}{dt} {\left\lvert {n} \right\rangle} \\ &=\frac{d{{\bar{b}_m}}}{dt}+i \dot{\gamma}_m \bar{b}_m+ \bar{b}_m {\left\langle {m} \right\rvert} \frac{d{{}}}{dt} {\left\lvert {m} \right\rangle}+\sum_{n \ne m} e^{-i \alpha_n + i \gamma_n}e^{i \alpha_m - i \gamma_m}\bar{b}_n {\left\langle {m} \right\rvert} \frac{d{{}}}{dt} {\left\lvert {n} \right\rangle}\end{aligned}

We are free to pick $\gamma_m$ to kill the second and third terms

\begin{aligned}0 =i \dot{\gamma}_m \bar{b}_m+ \bar{b}_m {\left\langle {m} \right\rvert} \frac{d{{}}}{dt} {\left\lvert {m} \right\rangle},\end{aligned} \hspace{\stretch{1}}(2.7)

or

\begin{aligned}\dot{\gamma}_m = i {\left\langle {m} \right\rvert} \frac{d{{}}}{dt} {\left\lvert {m} \right\rangle},\end{aligned} \hspace{\stretch{1}}(2.8)

which after integration is

\begin{aligned}\gamma_m(t)= i \int_0^t dt' {\left\langle {m(t')} \right\rvert} \frac{d}{dt'} {\left\lvert {m(t')} \right\rangle},\end{aligned} \hspace{\stretch{1}}(2.9)

as in class we can observe that this is a purely real function. We are left with

\begin{aligned}\frac{d{{\bar{b}_m}}}{dt}=-\sum_{n \ne m} \bar{b}_n e^{-i \alpha_{nm} + i \gamma_{nm}}{\left\langle {m} \right\rvert} \frac{d{{}}}{dt} {\left\lvert {n} \right\rangle} ,\end{aligned} \hspace{\stretch{1}}(2.10)

where

\begin{aligned}\alpha_{nm} &= \alpha_{n} -\alpha_m \\ \gamma_{nm} &= \gamma_{n} -\gamma_m \end{aligned} \hspace{\stretch{1}}(2.11)

The task is now to find solutions for these $\bar{b}_m$ coefficients, and we can refer to the class notes for that without change.
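As an aside (my own check, not part of the derivation), the reality of $\gamma_m$ is easy to verify numerically: normalization forces ${\left\langle {m} \right\rvert} \frac{d{{}}}{dt} {\left\lvert {m} \right\rangle}$ to be purely imaginary, so $i$ times it is real. With a made-up normalized ket:

```python
import numpy as np

# A made-up normalized ket |m(t)> = (cos t, e^{2it} sin t).
def ket(t):
    return np.array([np.cos(t), np.exp(2j * t) * np.sin(t)])

# <m| d/dt |m> at a sample time, by central finite difference.
t0, h = 0.7, 1e-6
mdot = (ket(t0 + h) - ket(t0 - h)) / (2 * h)
braket = np.vdot(ket(t0), mdot)

# d/dt <m|m> = 0 forces <m|m'> to be purely imaginary, so
# i <m|m'>, and hence gamma_m, is purely real.
print(abs(braket.real))    # ~ 0
print((1j * braket).real)  # ~ -2 sin(0.7)^2 ~ -0.830, a real number
```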

# References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

## Believed to be typos in Desai’s QM Text

Posted by peeterjoot on June 19, 2011

[Click here for a PDF of this post with nicer formatting (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

I have found some of the obvious stuff in my reading of Desai’s text. Prof.\ Vatche Deyirmenjian, who teaches our PHY356 course, has pointed out still more (and pointed out where I’d identified the wrong source for some typos).

# Chapter 1.

\begin{itemize}
\item Page 1. Prof.\ Deyirmenjian: The Hermitian, not complex conjugate, of ${\lvert {} \rangle}$ is ${\langle {} \rvert}$.
\item Page 5-6. Prof.\ Deyirmenjian: Change the ${*}$ in (1.26), (1.31), and (1.33) to a dagger.
\item Page 7. Text before (1.43). $\alpha$ instead of $a$ used.
\item Page 19. Equation (1.122). $\dagger$s omitted after first equality.
\end{itemize}

# Chapter 2.

\begin{itemize}
\item Page 40. Text before (2.137). Reference to equation (2.133) should be (2.135)
\item Page 53. Is the “Also show that” here correct? I get a different answer.
\end{itemize}

# Chapter 3.

\begin{itemize}
\item Page 61. Equation (3.51). $1/\hbar$ missing.
\item Page 62. Equation (3.58). Prof.\ Deyirmenjian: Remove the $U_I$ operators from Eq. (3.58)
\item Page 66. Equation (3.92). $-(d/dt {\langle {\alpha} \rvert}) {\lvert {\alpha} \rangle}$ should be $+{\lvert {\alpha} \rangle} d/dt {\langle {\alpha} \rvert}$.
\item Page 66. Equation (3.93). $H$ on wrong side of ${\langle {\alpha} \rvert}$
\item Page 74,76. Prof.\ Deyirmenjian: remove the extra brackets from Eq (4.9) and (4.21).
\item Page 79. Prof.\ Deyirmenjian: “The probability of finding this particle” should read “The probability density for this state at point x is”
\end{itemize}

# Chapter 4.

\begin{itemize}
\item Page 81. Equation (4.52). Should be $-2\alpha$ in the exponent.
\item Page 82. Equation (4.65). Prof.\ Deyirmenjian: a $1/\sqrt{2\pi}$ is missing before the integral. Note that without this (4.67) appears incorrect (off by a factor of $\sqrt{2\pi}$), but the error is really just in (4.65).
\item Page 82. Equation (4.67). Prof.\ Deyirmenjian: the negative sign should appear inside the large square brackets.

\item Page 83. Equation (4.74). A normalized wave function isn’t required for the discussion, but if that was intended, a $1/\sqrt{2\pi}$ factor is missing.
\item Page 83-84. Prof.\ Deyirmenjian: In (4.67) and (4.77), the derivative should be evaluated at $k=k_0$.
\item Page 86. Equation (4.99). Extra brace in the exponent.
\item Page 87. Equation (4.106). Extra brace in the exponent.
\item Page 89. Equation (4.124-4.130). Prof.\ Deyirmenjian: $C e^{\pm \sqrt{\mu}\phi}$ is not a solution to (4.122). This should be $Q(\phi) = C e^{i \sqrt{\mu} \phi}$ and (4.126) should be $\sqrt{\mu} = m$. This fixes the apparent error in sign in equations 4.129 and 4.130 which are correct as is.
\item Page 92. Equation (4.158). Prof.\ Deyirmenjian: should read $P_l(1) = 1$.
\item Page 93. Equation (4.169). conjugation missing for $Y_{lm}$. $Y_{l'm'}$ is missing prime on the $l$ index.
\item Page 95. Second line of text. Language choice: “We now implement”. Perhaps “utilize” would be better?
\item Page 95. Text before (4.193). $i$ is in bold.
\item Page 96. Text before (4.196). $i$ is in bold.
\item Page 97. (4.205). $i$ is in bold.
\item Page 97. (4.207-209). $\mathbf{i}$, and $\mathbf{j}$s aren’t in bold like $\mathbf{k}$
\item Page 101. (4.245). The right side should read $Y_{l,m+1}$
\item Page 101. (4.239-240). The approach here is unclear. FIXME: incorporate lecture notes from class that did this using braket notation.
\item Page 102. (4.248-249). Commas missing to separate $l$, and $m\pm 1$ in the kets.
\end{itemize}

# Chapter 5.

\begin{itemize}
\item Page 109. (5.49). Remove bold font in right hand side state ${\lvert {\chi_{n+}} \rangle}$.
\item Page 113. (5.86). One $\sigma$ isn’t in bold.
\item Page 114. (5.100). $\chi$ is in bold.
\item Page 115. Text before (5.106). $\alpha$ in bold.
\item Page 118. Switch of notation in problem 5 for ensemble averages. $[S_i]$ used instead of $\left\langle{{S_i}}\right\rangle_{\text{av}}$.
\end{itemize}

# Chapter 6.

\begin{itemize}
\item Page 120. $\phi$ in bold. $A$ not in bold.
\item Page 123. (6.26). $1/i \hbar$ factor missing on RHS.
\item Page 124. Text before (6.37). You say canonical momenta $P_k$, but call these mechanical momenta on prev page.
\item Page 125. (6.41). Some $\psi$s are in bold.
\item Page 126. (6.49). There’s no mention that $\mathbf{B}$ is constant, leaving it unclear how the gauge condition holds and how the curl of $\mathbf{A}$ reproduces $\mathbf{B}$. This would also help clarify how you are able to write $\boldsymbol{\mu} \cdot \mathbf{B} = \mathbf{B} \cdot \boldsymbol{\mu}$.
\item Page 128. (6.65). $\boldsymbol{\mu} \cdot \mathbf{L}$ should be $\boldsymbol{\mu} \cdot \mathbf{B}$.
\item Page 129. (6.75). $\boldsymbol{\mu} \cdot \mathbf{L}$ should be $\boldsymbol{\mu} \cdot \mathbf{B}$.
\item Page 130. (6.80). integral looks like it should be $\int_{\mathbf{r}' = \mathbf{r}_0}^\mathbf{r} \frac{e}{c \hbar} \mathbf{A}(\mathbf{r}') \cdot d\mathbf{r}'$. ie: Clarify bounds, and add a factor of $c$ in the denominator which is required for the cancellation of (6.82).
\item Page 131. (6.81,6.86). Factors of $c$s should be with each of the $\hbar$s.
\item Page 131. Problem 1. bold missing on $\mathbf{E}$.
\end{itemize}

# Chapter 8.

\begin{itemize}
\item Page 143. (8.58). $\beta$ should be negated.
\item Page 159. (8.6.3). Two references to Chapter 2 should be Chapter 4.
\item Page 160. (8.199). Want $\hbar^2$ not $\hbar$ in expression for $k$.
\item Page 162. (Fig 8.9). Figure is backwards compared to text (a bump instead of a well).
\item Page 165. (8.235). Extra $R_l$ factor inside parens.
\end{itemize}

# Chapter 9.

\begin{itemize}
\item Page 174. (9.5). Have $\hbar/2m\omega$ instead of $\hbar m \omega/2$ in expression for $P$.
\item Page 181. (9.57). Factor of two missing. Want $\frac{\alpha}{2 \sqrt{\pi}}$.
\item Page 186. (Problem 10). The sequencing of the text and problems is off. The Green’s function technique isn’t introduced until chapter 10.
\end{itemize}

# Chapter 10.

\begin{itemize}
\item Page 189. (10.22). It would be nice to have a reference to the appendix (ie: 10.100) for the chapter so that this identity isn’t pulled out of a magic hat.
\item Page 192. (10.44, 10.45). $2 \alpha {\alpha^{*}}'$ should be $\alpha {\alpha^{*}}' + \alpha' \alpha^{*}$
\item Page 193. (10.51). Application (slowly, step by step explicitly) of 10.100 to expand the $e^{\frac{i}{\hbar}(p_0 X - x_0 P)}$ in the braket gives

\begin{aligned}{\langle {x} \rvert} e^{\frac{i}{\hbar}(p_0 X - x_0 P)} {\lvert {0} \rangle}&={\langle {x} \rvert} e^{\frac{i}{\hbar}p_0 X }e^{-\frac{i}{\hbar}x_0 P}e^{-\frac{i}{2\hbar}x_0 p_0 \left[{X},{P}\right]}{\lvert {0} \rangle} \\ &={\langle {x} \rvert} e^{\frac{i}{\hbar}p_0 X }e^{-\frac{i}{\hbar}x_0 P}e^{\frac{x_0 p_0}{2} }{\lvert {0} \rangle} \\ &=e^{\frac{x_0 p_0}{2} }{\langle {x} \rvert} e^{\frac{i}{\hbar}p_0 X} e^{-\frac{i}{\hbar}x_0 P}{\lvert {0} \rangle} \\ &=e^{\frac{x_0 p_0}{2} }\left({\langle {0} \rvert} e^{\frac{i}{\hbar}x_0 P}e^{-\frac{i}{\hbar}p_0 X} {\lvert {x} \rangle}\right)^{*} \\ &=e^{\frac{x_0 p_0}{2} }\left({\langle {0} \rvert} e^{\frac{i}{\hbar}x_0 P}{\lvert {x} \rangle}e^{-\frac{i}{\hbar}p_0 x} \right)^{*} \\ &=e^{\frac{x_0 p_0}{2} } e^{\frac{i}{\hbar}p_0 x} \left({\langle {0} \rvert} e^{\frac{i}{\hbar}x_0 P}{\lvert {x} \rangle}\right)^{*} \\ &=e^{\frac{x_0 p_0}{2} } e^{\frac{i}{\hbar}p_0 x} \left(\left\langle{{0}} \vert {{x - x_0}}\right\rangle\right)^{*} \\ &=e^{\frac{x_0 p_0}{2} } e^{\frac{i}{\hbar}p_0 x} \left\langle{{x - x_0}} \vert {{0}}\right\rangle \\ &=e^{\frac{x_0 p_0}{2} } e^{\frac{i}{\hbar}p_0 x} \psi_0(x - x_0, 0)\end{aligned}

This is the same as (10.51) with the exception of a real scalar constant $e^{ x_0 p_0/2}$ multiplying the wave function. Because of this I think that (10.51) should be a proportionality statement, and not an equality as in

\begin{aligned}{\langle {x} \rvert} e^{\frac{i}{\hbar}(p_0 X - x_0 P)} {\lvert {0} \rangle} \propto e^{\frac{i}{\hbar}p_0 x} \psi_0(x - x_0, 0)\end{aligned}

\item Page 196. (text after 10.76). Looks like reference to Chapter 9, should be Chapter 9 problem 5.

\item Page 197. (text after 10.85). Reference to Chapter 1 should be Chapter 2.
\end{itemize}

# Chapter 26.

\begin{itemize}
\item Page 486. (26.60). $\mathbf{n} \times \mathbf{r} \cdot \boldsymbol{\nabla}$ ought to have braces and read $(\mathbf{n} \times \mathbf{r}) \cdot \boldsymbol{\nabla}$.
\item Page 496. (26.154). Remove $Y_{l'm}(\theta, \phi)$ term from the integral.
\end{itemize}

# Chapter 31.

\begin{itemize}
\item Page 562. (31.56). $T' = L T \tilde{M}$ is given for a mixed tensor representation. This is $T^\mu_{.\nu}$. The other mixed representation $T_{\mu}^{.\nu}$ transforms as $T' = M T \tilde{L}$.
\end{itemize}

# Chapter 32.

\begin{itemize}
\item Page 575. minor: $E t - \mathbf{p} . \mathbf{r}$ written instead of $E t - \mathbf{p} \cdot \mathbf{r}$
\item Page 576. minor: $\boldsymbol{\nabla} . \mathbf{j}$ instead of $\boldsymbol{\nabla} \cdot \mathbf{j}$
\item Page 577. (32.23). $\mathbf{j}$ accidentally includes the divergence.
\item Page 579. (32.35). Sign missing in exponential. Should be $e^{-i k \cdot x}$ not $e^{i k \cdot x}$.
\item Page 579. $\hbar \omega_k$ is the energy of the particle, not $\omega_k$. There’s also an $\hbar$ missing in the expression for $\omega_k$. That is $\omega_k = \sqrt{ c^2 \mathbf{k}^2 + m_0^2 c^4/\hbar^2}$.
\item Page 580. (32.40). The factor of $g$ presumed constant ought to be incorporated into $\chi$ if this is to be consistent with (32.45), which follows.
\item Page 583. (32.70). sign error. negate integral.
\item Page 584. (32.74). Sign errors in both the square root and the subsequent approximation, which should be $p_4 = \pm \sqrt{ \mathbf{p}^2 + m^2 - i \epsilon} \approx \pm (\omega_p - i \epsilon')$. (I’ve added the approximate equality for the second part since it wasn’t specified, which I found confusing.)
\item Page 584. (32.75-76). There are multiple sign errors in these equations, which should be

\begin{aligned}\frac{1}{{p^2 - m^2 + i\epsilon}} &\approx \frac{1}{{(p_4 - \omega_p + i\epsilon')(p_4 + \omega_p - i\epsilon')}} \\ &\approx\frac{1}{{2 \omega_p}}\left(\frac{1}{{p_4 - \omega_p + i\epsilon'}}-\frac{1}{{p_4 + \omega_p - i\epsilon'}}\right)\end{aligned}

Note that an attempt to confirm (32.76) yields

\begin{aligned}\frac{1}{{p_4 - \omega_p + i\epsilon'}}-\frac{1}{{p_4 + \omega_p - i\epsilon'}}=\frac{2 \omega_p - 2 i \epsilon'}{ p^2 - m^2 + i \epsilon + \epsilon^2/4(\mathbf{p}^2 + m^2)}\end{aligned}

So we need approximations twice for the “equality”.

\item Page 584. (before 32.78). minor: bold script used for $\mathbf{p} \cdot \mathbf{r}$ on the second line of the change of variables.
\item Page 585. (32.82, 32.83). minor: $p_n . x$ instead of $p_n \cdot x$.
\item Page 585. (32.82, 32.83). wrong normalization? wouldn’t we want $1/\sqrt{2 \omega_{p_n}}$.
\item Page 585. (32.84). notation switch? $0n$ index whereas $n0$ used above? What are the definitions of $\psi_{0n}$ that allow the integral to be converted to a sum?
\item Page 586. (32.88). minor: $i \mathbf{k} .$ instead of $i \mathbf{k} \cdot$
\item Page 586. (32.88). I calculate a negated $G(\mathbf{k})$ from (32.88). Guessing that (32.88) and (32.93 on pg 587) were intended to be negated as done earlier (for example in (32.57)).
\item Page 587. (32.93). minor: $i \mathbf{k} .$ instead of $i \mathbf{k} \cdot$
\item Page 588-589. (32.104-105). This substitution doesn’t appear to work?

\begin{aligned}&(\boldsymbol{\nabla}^2 - \mu^2)(\phi' e^{-\mu r}) \\ &= \frac{1}{{r^2}} \frac{\partial {}}{\partial {r}} \left( r^2 \frac{\partial {}}{\partial {r}} \left( \phi' e^{-\mu r} \right) \right) - \mu^2 \phi' e^{-\mu r} \\ &= \frac{1}{{r^2}} \frac{\partial {}}{\partial {r}} \left( r^2 \left( \frac{\partial {\phi'}}{\partial {r}} e^{-\mu r} -\mu \phi' e^{-\mu r} \right) \right) - \mu^2 \phi' e^{-\mu r} \\ &= \frac{\partial {}}{\partial {r}} \left( \left( \frac{\partial {\phi'}}{\partial {r}} e^{-\mu r} -\mu \phi' e^{-\mu r} \right) \right) + 2 \frac{1}{{r}} \left( \frac{\partial {\phi'}}{\partial {r}} e^{-\mu r} -\mu \phi' e^{-\mu r} \right) - \mu^2 \phi' e^{-\mu r} \\ &= \frac{\partial^2 {{\phi'}}}{\partial {{r}}^2} e^{-\mu r} -\mu \frac{\partial {\phi'}}{\partial {r}} e^{-\mu r} -\mu \left( \left( \frac{\partial {\phi'}}{\partial {r}} e^{-\mu r} - {\mu \phi' e^{-\mu r}} \right) \right) + 2 \frac{1}{{r}} \left( \frac{\partial {\phi'}}{\partial {r}} e^{-\mu r} -\mu \phi' e^{-\mu r} \right) - {\mu^2 \phi' e^{-\mu r}} \\ &=\frac{\partial^2 {{\phi'}}}{\partial {{r}}^2} e^{-\mu r} -\mu \frac{\partial {\phi'}}{\partial {r}} e^{-\mu r} -\mu \frac{\partial {\phi'}}{\partial {r}} e^{-\mu r} + 2 \frac{1}{{r}} \left( \frac{\partial {\phi'}}{\partial {r}} e^{-\mu r} -\mu \phi' e^{-\mu r} \right) \\ &=\left( \frac{\partial^2 {{\phi'}}}{\partial {{r}}^2} + 2 \frac{1}{{r}} \frac{\partial {\phi'}}{\partial {r}} \right) e^{-\mu r} -2 \mu \frac{\partial {\phi'}}{\partial {r}} e^{-\mu r} -2 \mu \frac{1}{{r}} \phi' e^{-\mu r} \\ &=(\boldsymbol{\nabla}^2 \phi') e^{-\mu r}- 2 \mu \left( \frac{\partial {\phi'}}{\partial {r}} + \frac{1}{{r}} \phi' \right) e^{-\mu r} \\ &=(\boldsymbol{\nabla}^2 \phi') e^{-\mu r}- 2 \frac{\mu}{r} \left( \frac{\partial {(r \phi')}}{\partial {r}} \right) e^{-\mu r} \\ \end{aligned}

There’s an extra term here that doesn’t show up in (32.105) with this transformation. Can that be argued away somehow?

\item Page 589. (32.107). minor: $i \mathbf{k}_f .$ instead of $i \mathbf{k}_f \cdot$
\item Page 589. (32.108). minor: $i \mathbf{q} .$ instead of $i \mathbf{q} \cdot$
\item Page 590. (after 32.118). minor: $i \mathbf{q} .$ instead of $i \mathbf{q} \cdot$
\item Page 591. (32.124). minor: $i \mathbf{q} .$ instead of $i \mathbf{q} \cdot$
\end{itemize}
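As a cross-check on two of the claims in the list above, the partial-fraction expansion proposed for (32.75-76) and the radial-Laplacian expansion worked out for (32.104-105) can both be verified numerically. This is a pure-Python sketch; all numeric values ($p_4$, $\omega_p$, $\epsilon'$, $\mu$, and the test function $\phi'$) are arbitrary choices of mine, not quantities from the text.

```python
import math

# Check 1: the proposed (32.75-76) partial fractions. With test values p4, w
# and small eps', compare 1/((p4 - w + i eps')(p4 + w - i eps')) against
# (1/(2 w)) * (1/(p4 - w + i eps') - 1/(p4 + w - i eps')).
p4, w, epsp = 1.7, 2.3, 1e-6          # arbitrary test values
lhs = 1.0 / ((p4 - w + 1j * epsp) * (p4 + w - 1j * epsp))
rhs = (1.0 / (2 * w)) * (1.0 / (p4 - w + 1j * epsp) - 1.0 / (p4 + w - 1j * epsp))
pf_err = abs(lhs - rhs) / abs(lhs)    # nonzero only because 1/(2w) drops eps'

# Check 2: the (32.104-105) substitution. For an arbitrary smooth test function
# phi'(r) = exp(-r), verify by central differences that
#   (lap - mu^2)(phi' e^{-mu r}) = (lap phi') e^{-mu r} - (2 mu/r) d(r phi')/dr e^{-mu r}
# where lap f = (1/r^2) d/dr (r^2 df/dr), i.e. the extra term really is there.
mu, r0, h = 0.7, 1.3, 1e-4

def phi(r):
    return math.exp(-r)               # arbitrary smooth test function

def g(r):
    return phi(r) * math.exp(-mu * r)

def radial_lap(f, r):
    # (1/r^2) d/dr ( r^2 df/dr ) via nested central differences
    def rsq_df(r):
        return r * r * (f(r + h) - f(r - h)) / (2 * h)
    return (rsq_df(r + h) - rsq_df(r - h)) / (2 * h) / (r * r)

lhs2 = radial_lap(g, r0) - mu ** 2 * g(r0)
d_rphi = ((r0 + h) * phi(r0 + h) - (r0 - h) * phi(r0 - h)) / (2 * h)
rhs2 = radial_lap(phi, r0) * math.exp(-mu * r0) \
    - (2 * mu / r0) * d_rphi * math.exp(-mu * r0)
lap_err = abs(lhs2 - rhs2)
```

Both residuals come out at the level of the small parameter or the finite-difference step, supporting the signs claimed above.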

NOTE: up to commit efc6cd3bfee91f43949016f8ba851de273e4fa8d of these notes emailed to Desai May 10, 2011.

# Chapter 33.

\begin{itemize}
\item Page 597. (before 33.5). Last paragraph references chapter 12. Chapter 32 meant here? (or chapter 4).
\end{itemize}

# Chapter 35.

\begin{itemize}
\item Page 635. (35.46). minor: Bold on gamma.
\item Page 636. (35.50). minor: Bold on gamma.
\item Page 636-638. (35.51-35.58). minor: incomplete notation switch. This chapter uses $E_p$ instead of ${\left\lvert{E}\right\rvert}$, but many formulas on these pages continue to use the ${\left\lvert{E}\right\rvert}$ notation from chapter 33, even mixing the two in some places.
\item Page 643. (35.107). $e_\mu^{\nu}$ should be $e_\mu^{.\nu}$. There are also some missing positional indicators in (35.105) and (35.106).
\item Page 643. (35.114). $\bar{\psi}'(x')\psi(x')$ should be $\bar{\psi}'(x')\psi'(x')$.
\item Page 645. (35.134). Wrong sign on $\gamma_5$. It should be $-i \gamma^1 \gamma^2 \gamma^3 \gamma^4$.
\end{itemize}

# Chapter 36.

\begin{itemize}
\item Page 648. (36.16-17). $\hbar$s should be omitted for consistency.
\item Page 648. (36.16-17). It appears that the $- e \boldsymbol{\sigma}'$ should be $+ e \boldsymbol{\sigma}'$.
\end{itemize}

# References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.


## Collection of PHY356H1F course notes from 2010 Quantum Physics I course at UofT.

Posted by peeterjoot on January 10, 2011

For anybody inclined, I have now assembled all of my reading notes, problems, and lecture notes from the PHY356H1F course that I recently completed at the University of Toronto.

I have separated this from any of my other notes collections since it was fairly self contained. This includes the following individual postings:

Jan 9, 2011 A problem on spherical harmonics.

Jan 4, 2011 Some worked problems from old PHY356 exams.
Some worked problems from old PHY356 exams.

Dec 9, 2010 Notes for Desai Chapter 26.
Notes for Desai Chapter 26.

Dec 7, 2010 PHY356F lecture notes.
PHY356F lecture notes.

Nov 25, 2010 PHY356 Problem Set 5.
PHY356 Problem Set 5.

Nov 24, 2010 Hydrogen like atom, and Laguerre polynomials.
Playing with the math of Laguerre polynomials for the Hydrogen like atom.

Nov 20, 2010 Desai Chapter 10 notes and problems.
Desai Chapter 10 notes and problems.

Nov 19, 2010 Desai Chapter 9 notes and problems.
Desai Chapter 9 notes and problems.

Nov 16, 2010 PHY356 Problem Set 4.
PHY356 Problem Set 4.

Oct 31, 2010 Believed to be typos in Desai’s QM Text
Believed typos, some confirmed, in Desai’s QM book.

Oct 23, 2010 PHY356 Problem Set III.
My submission for problem set III.

Oct 23, 2010 PHY356 Problem Set II.
A couple more problems from my QM1 course.

Oct 22, 2010 Classical Electrodynamic gauge interaction.
Momentum and Energy transformation to derive Lorentz force law from a free particle Hamiltonian.

Oct 18, 2010 Notes and problems for Desai Chapter VI.
Notes and problems for Desai Chapter VI.

Oct 18, 2010 Notes and problems for Desai Chapter V.
Problems from Desai Chapter V.

Oct 10, 2010 Notes and problems for Desai chapter IV.
Chapter IV Notes and problems for Desai’s “Quantum Mechanics and Introductory Field Theory” text.

Oct 7, 2010 PHY356 Problem Set I.
A couple problems from my QM1 course.

Oct 1, 2010 Notes and problems for Desai chapter III.
Chapter III Notes and problems for Desai’s “Quantum Mechanics and Introductory Field Theory” text.

Sept 27, 2010 Unitary exponential sandwich
Unitary transformation using anticommutators.

Sept 19, 2010 Desai Chapter II notes and problems.
Chapter II Notes and problems for Desai’s “Quantum Mechanics and Introductory Field Theory” text.

July 23, 2010 Dirac Notation Ponderings.
Chapter 1 solutions and some associated notes.

## My submission for PHY356 (Quantum Mechanics I) Problem Set 4.

Posted by peeterjoot on December 7, 2010

# Problem 1.

## Statement

Is it possible to derive the eigenvalues and eigenvectors presented in Section 8.2 from those in Section 8.1.2? What does this say about the potential energy operator in these two situations?

For reference, 8.1.2 was a finite potential well, $V(x) = V_0, {\left\lvert{x}\right\rvert} > a$, and zero in the interior of the well. This had trigonometric solutions in the interior, dying off exponentially past the boundary of the well.

On the other hand, 8.2 was a delta function potential $V(x) = -g \delta(x)$, which had the solution $u(x) = \sqrt{\beta} e^{-\beta {\left\lvert{x}\right\rvert}}$, where $\beta = m g/\hbar^2$.

## Solution

The pair of figures in the text [1] for these potentials doesn’t make it clear that there are possibly any similarities. The attractive delta function potential isn’t illustrated (although the delta function is, but with opposite sign), and the scaling and the reference energy levels are different. Let’s illustrate these using the same reference energy level and sign conventions to make the similarities more obvious.

\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.4\textheight]{FiniteWellPotential}
\caption{8.1.2 Finite Well potential (with energy shifted downwards by $V_0$)}
\end{figure}

\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.4\textheight]{deltaFunctionPotential}
\caption{8.2 Delta function potential.}
\end{figure}

The physics isn’t changed by picking a different point for the reference energy level, so let’s compare the two potentials, and their solutions using $V(x) = 0$ outside of the well for both cases. The method used to solve the finite well problem in the text is hard to follow, so re-doing this from scratch in a slightly tidier way doesn’t hurt.

Schr\”{o}dinger’s equation for the finite well, in the ${\left\lvert{x}\right\rvert} > a$ region is

\begin{aligned}-\frac{\hbar^2}{2m} u'' = E u = - E_B u,\end{aligned} \hspace{\stretch{1}}(2.1)

where a positive bound state energy $E_B = -E > 0$ has been introduced.

Writing

\begin{aligned}\beta = \sqrt{\frac{2 m E_B}{\hbar^2}},\end{aligned} \hspace{\stretch{1}}(2.2)

the wave functions outside of the well are

\begin{aligned}u(x) =\left\{\begin{array}{l l}u(-a) e^{\beta(x+a)} &\quad \mbox{$x < -a$} \\ u(a) e^{-\beta(x-a)} &\quad \mbox{$x > a$} \\ \end{array}\right.\end{aligned} \hspace{\stretch{1}}(2.3)

Within the well Schr\”{o}dinger’s equation is

\begin{aligned}-\frac{\hbar^2}{2m} u'' - V_0 u = E u = - E_B u,\end{aligned} \hspace{\stretch{1}}(2.4)

or

\begin{aligned}u'' = - \frac{2m}{\hbar^2} (V_0 - E_B) u,\end{aligned} \hspace{\stretch{1}}(2.5)

Noting that the bound state energies are the $E_B < V_0$ values, let $\alpha^2 = 2m (V_0 - E_B)/\hbar^2$, so that the solutions are of the form

\begin{aligned}u(x) = A e^{i\alpha x} + B e^{-i\alpha x}.\end{aligned} \hspace{\stretch{1}}(2.6)

As was done for the wave functions outside of the well, the normalization constants can be expressed in terms of the values of the wave functions on the boundary. That provides a pair of equations to solve

\begin{aligned}\begin{bmatrix}u(a) \\ u(-a)\end{bmatrix}=\begin{bmatrix}e^{i \alpha a} & e^{-i \alpha a} \\ e^{-i \alpha a} & e^{i \alpha a}\end{bmatrix}\begin{bmatrix}A \\ B\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(2.7)

Inverting this and substituting back into 2.6 yields

\begin{aligned}u(x) &=\begin{bmatrix}e^{i\alpha x} & e^{-i\alpha x}\end{bmatrix}\begin{bmatrix}A \\ B\end{bmatrix} \\ &=\begin{bmatrix}e^{i\alpha x} & e^{-i\alpha x}\end{bmatrix}\frac{1}{{e^{2 i \alpha a} - e^{-2 i \alpha a}}}\begin{bmatrix}e^{i \alpha a} & -e^{-i \alpha a} \\ -e^{-i \alpha a} & e^{i \alpha a}\end{bmatrix}\begin{bmatrix}u(a) \\ u(-a)\end{bmatrix} \\ &=\begin{bmatrix}\frac{\sin(\alpha (a + x))}{\sin(2 \alpha a)} &\frac{\sin(\alpha (a - x))}{\sin(2 \alpha a)}\end{bmatrix}\begin{bmatrix}u(a) \\ u(-a)\end{bmatrix}.\end{aligned}

Expanding the last of these matrix products the wave function is close to completely specified.

\begin{aligned}u(x) =\left\{\begin{array}{l l}u(-a) e^{\beta(x+a)} & \quad \mbox{$x < -a$} \\ u(a) \frac{\sin(\alpha (a + x))}{\sin(2 \alpha a)} +u(-a) \frac{\sin(\alpha (a - x))}{\sin(2 \alpha a)} & \quad \mbox{${\left\lvert{x}\right\rvert} < a$} \\ u(a) e^{-\beta(x-a)} & \quad \mbox{$x > a$} \\ \end{array}\right.\end{aligned} \hspace{\stretch{1}}(2.8)
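As a quick sanity check on the matrix inversion above, here’s a numeric verification (pure Python, with arbitrary test values for $\alpha$, $a$, and the boundary values, none taken from the problem) that the interior solution reproduces $u(\pm a)$ and satisfies $u'' = -\alpha^2 u$:

```python
import math

# Arbitrary test values: alpha, well half-width a, boundary values u(a), u(-a).
alpha, a = 1.1, 0.8
ua, uma = 0.3, 0.7

def u(x):
    # interior solution obtained from the matrix inversion
    s = math.sin(2 * alpha * a)
    return ua * math.sin(alpha * (a + x)) / s + uma * math.sin(alpha * (a - x)) / s

h, x = 1e-5, 0.2
u_xx = (u(x + h) - 2 * u(x) + u(x - h)) / h ** 2   # numeric second derivative

bc_err = max(abs(u(a) - ua), abs(u(-a) - uma))      # boundary conditions
ode_err = abs(u_xx + alpha ** 2 * u(x))             # u'' = -alpha^2 u inside
```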

There are still two unspecified constants $u(\pm a)$ and the constraints on $E_B$ have not been determined (both $\alpha$ and $\beta$ are functions of that energy level). It should be possible to eliminate at least one of the $u(\pm a)$ by computing the wavefunction normalization, and since the well is being narrowed the $\alpha$ term will not be relevant. Since only the vanishingly narrow case where $a \rightarrow 0, x \in [-a,a]$ is of interest, the wave function in that interval approaches

\begin{aligned}u(x) \rightarrow \frac{1}{{2}} (u(a) + u(-a)) + \frac{x}{2a} ( u(a) - u(-a) ) \rightarrow \frac{1}{{2}} (u(a) + u(-a)).\end{aligned} \hspace{\stretch{1}}(2.9)

Since no discontinuity is expected this is just $u(a) = u(-a)$. Let’s write $\lim_{a\rightarrow 0} u(a) = A$ for short, and the limited width well wave function becomes

\begin{aligned}u(x) =\left\{\begin{array}{l l}A e^{\beta x} & \quad \mbox{$x < 0$} \\ A e^{-\beta x} & \quad \mbox{$x > 0$} \\ \end{array}\right.\end{aligned} \hspace{\stretch{1}}(2.10)

This is now the same form as the delta function potential, and normalization also gives $A = \sqrt{\beta}$.

One task remains before the attractive delta function potential can be considered a limiting case for the finite well, since the relation between $a, V_0$, and $g$ has not been established. To do so integrate the Schr\”{o}dinger equation over the infinitesimal range $[-a,a]$. This was done in the text for the delta function potential, and that provided the relation

\begin{aligned}\beta = \frac{mg}{\hbar^2}\end{aligned} \hspace{\stretch{1}}(2.11)

For the finite well this is

\begin{aligned}-\frac{\hbar^2}{2m} \int_{-a}^a u'' - V_0 \int_{-a}^a u = -E_B \int_{-a}^a u.\end{aligned} \hspace{\stretch{1}}(2.12)

In the limit as $a \rightarrow 0$ this is

\begin{aligned}\frac{\hbar^2}{2m} (u'(a) - u'(-a)) + V_0 2 a u(0) = 2 E_B a u(0).\end{aligned} \hspace{\stretch{1}}(2.13)

Some care is required with the $V_0 a$ term since $a \rightarrow 0$ as $V_0 \rightarrow \infty$, but the $E_B$ term is unambiguously killed, leaving

\begin{aligned}\frac{\hbar^2}{2m} u(0) (-2\beta e^{-\beta a}) = -V_0 2 a u(0).\end{aligned} \hspace{\stretch{1}}(2.14)

The exponential vanishes in the limit and leaves

\begin{aligned}\beta = \frac{m (2 a) V_0}{\hbar^2}\end{aligned} \hspace{\stretch{1}}(2.15)

Comparing to 2.11 from the attractive delta function completes the problem. The conclusion is that when the finite well is narrowed with $a \rightarrow 0$, also letting $V_0 \rightarrow \infty$ such that the absolute area of the well $g = (2 a) V_0$ is maintained, the finite potential well produces exactly the attractive delta function wave function and associated bound state energy.
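This limiting argument can also be checked numerically. The sketch below (pure Python, in units $\hbar = m = 1$, with $g = 1$ so the delta-well bound state energy should be $E_B = m g^2/2\hbar^2 = 1/2$) solves the standard even-parity matching condition for the symmetric finite well, $\alpha \tan(\alpha a) = \beta$, by bisection while holding $g = 2 a V_0$ fixed:

```python
import math

g = 1.0   # fixed area 2*a*V0 of the well; the delta-well limit predicts E_B = g^2/2

def bound_energy(a):
    # Bisect the even-parity condition alpha*tan(alpha*a) = beta,
    # with alpha^2 = 2*(V0 - E), beta^2 = 2*E (hbar = m = 1).
    V0 = g / (2 * a)
    def f(E):
        alpha = math.sqrt(2 * (V0 - E))
        return alpha * math.tan(alpha * a) - math.sqrt(2 * E)
    lo, hi = 1e-12, V0 - 1e-12    # f > 0 at lo, f < 0 at hi for a narrow well
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

E_narrow = bound_energy(0.005)    # narrow, deep well: expect E_B near 0.5
```

Shrinking $a$ further drives the bound state energy toward the delta-well value, as the limiting argument requires.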

# Problem 2.

## Statement

For the hydrogen atom, determine ${\langle {nlm} \rvert}(1/R){\lvert {nlm} \rangle}$ and $1/{\langle {nlm} \rvert}R{\lvert {nlm} \rangle}$ such that $(nlm)=(211)$ and $R$ is the radial position operator $(X^2+Y^2+Z^2)^{1/2}$. What do these quantities represent physically and are they the same?

## Solution

Both of the computation tasks for the hydrogen like atom require expansion of a braket of the following form

\begin{aligned}{\langle {nlm} \rvert} A(R) {\lvert {nlm} \rangle},\end{aligned} \hspace{\stretch{1}}(3.16)

where $A(R) = R = (X^2 + Y^2 + Z^2)^{1/2}$ or $A(R) = 1/R$.

The spherical representation of the identity resolution is required to convert this braket into integral form

\begin{aligned}\mathbf{1} = \int r^2 \sin\theta dr d\theta d\phi {\lvert { r \theta \phi} \rangle}{\langle { r \theta \phi} \rvert},\end{aligned} \hspace{\stretch{1}}(3.17)

where the spherical wave function is given by the braket $\left\langle{{ r \theta \phi}} \vert {{nlm}}\right\rangle = R_{nl}(r) Y_{lm}(\theta,\phi)$.

Additionally, the radial form of the delta function will be required, which is

\begin{aligned}\delta(\mathbf{x} - \mathbf{x}') = \frac{1}{{r^2 \sin\theta}} \delta(r - r') \delta(\theta - \theta') \delta(\phi - \phi')\end{aligned} \hspace{\stretch{1}}(3.18)

Two applications of the identity operator to the braket yield

\begin{aligned}&{\langle {nlm} \rvert} A(R) {\lvert {nlm} \rangle} \\ &={\langle {nlm} \rvert} \mathbf{1} A(R) \mathbf{1} {\lvert {nlm} \rangle} \\ &=\int dr d\theta d\phi dr' d\theta' d\phi' r^2 \sin\theta {r'}^2 \sin\theta' \left\langle{{nlm}} \vert {{ r \theta \phi}}\right\rangle{\langle { r \theta \phi} \rvert} A(R) {\lvert { r' \theta' \phi'} \rangle}\left\langle{{ r' \theta' \phi'}} \vert {{nlm}}\right\rangle \\ &=\int dr d\theta d\phi dr' d\theta' d\phi' r^2 \sin\theta {r'}^2 \sin\theta' R_{nl}(r) Y_{lm}^{*}(\theta, \phi){\langle { r \theta \phi} \rvert} A(R) {\lvert { r' \theta' \phi'} \rangle}R_{nl}(r') Y_{lm}(\theta', \phi')\end{aligned}

To continue an assumption about the matrix element ${\langle { r \theta \phi} \rvert} A(R) {\lvert { r' \theta' \phi'} \rangle}$ is required. It seems reasonable that this would be

\begin{aligned}{\langle { r \theta \phi} \rvert} A(R) {\lvert { r' \theta' \phi'} \rangle} = \delta(\mathbf{x} - \mathbf{x}') A(r) = \frac{1}{{r^2 \sin\theta}} \delta(r-r') \delta(\theta -\theta')\delta(\phi-\phi') A(r).\end{aligned} \hspace{\stretch{1}}(3.19)

The braket can now be written completely in integral form as

\begin{aligned}&{\langle {nlm} \rvert} A(R) {\lvert {nlm} \rangle} \\ &=\int dr d\theta d\phi dr' d\theta' d\phi' r^2 \sin\theta {r'}^2 \sin\theta' R_{nl}(r) Y_{lm}^{*}(\theta, \phi)\frac{1}{{r^2 \sin\theta}} \delta(r-r') \delta(\theta -\theta')\delta(\phi-\phi') A(r)R_{nl}(r') Y_{lm}(\theta', \phi') \\ &=\int dr d\theta d\phi dr' d\theta' d\phi' {r'}^2 \sin\theta' R_{nl}(r) Y_{lm}^{*}(\theta, \phi)\delta(r-r') \delta(\theta -\theta')\delta(\phi-\phi') A(r)R_{nl}(r') Y_{lm}(\theta', \phi')\end{aligned}

Application of the delta functions then reduces the integral. Since the only $\theta$ and $\phi$ dependence is in the (orthonormal) $Y_{lm}$ terms, those drop out

\begin{aligned}{\langle {nlm} \rvert} A(R) {\lvert {nlm} \rangle}&=\int dr d\theta d\phi r^2 \sin\theta R_{nl}(r) Y_{lm}^{*}(\theta, \phi)A(r)R_{nl}(r) Y_{lm}(\theta, \phi) \\ &=\int dr r^2 R_{nl}(r) A(r)R_{nl}(r) \underbrace{\int\sin\theta d\theta d\phi Y_{lm}^{*}(\theta, \phi)Y_{lm}(\theta, \phi) }_{=1}\\ \end{aligned}

This leaves just the radial wave functions in the integral

\begin{aligned}{\langle {nlm} \rvert} A(R) {\lvert {nlm} \rangle}=\int dr r^2 R_{nl}^2(r) A(r)\end{aligned} \hspace{\stretch{1}}(3.20)

As a consistency check, observe that with $A(r) = 1$, this integral evaluates to 1 according to equation (8.274) in the text, so we can think of $(r R_{nl}(r))^2$ as the radial probability density.
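As a numeric sanity check of the orthonormality integral used above (assuming the standard normalization convention for the spherical harmonics), here’s a pure-Python midpoint-rule evaluation for $Y_{11}$, for which ${\left\lvert{Y_{11}}\right\rvert}^2 = \frac{3}{8\pi}\sin^2\theta$:

```python
import math

# Midpoint-rule evaluation of int sin(theta) dtheta dphi |Y_11|^2 over
# theta in [0, pi], phi in [0, 2 pi]; orthonormality says this should be 1.
N = 400
dtheta = math.pi / N
dphi = 2 * math.pi / N
total = 0.0
for i in range(N):
    theta = (i + 0.5) * dtheta
    y2 = 3.0 / (8.0 * math.pi) * math.sin(theta) ** 2   # |Y_11(theta, phi)|^2
    for j in range(N):
        total += y2 * math.sin(theta) * dtheta * dphi
```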

The problem asks specifically for these expectation values for the ${\lvert {211} \rangle}$ state. For that state the radial wavefunction is found in (8.277) as

\begin{aligned}R_{21}(r) = \left(\frac{Z}{2a_0}\right)^{3/2} \frac{ Z r }{a_0 \sqrt{3}} e^{-Z r/2 a_0}\end{aligned} \hspace{\stretch{1}}(3.21)

The braket can now be written explicitly

\begin{aligned}{\langle {21m} \rvert} A(R) {\lvert {21m} \rangle}=\frac{1}{{24}} \left(\frac{ Z }{a_0 } \right)^5\int_0^\infty dr\, r^4 e^{-Z r/ a_0}A(r)\end{aligned} \hspace{\stretch{1}}(3.22)

Now, let’s consider the two functions $A(r)$ separately. First for $A(r) = r$ we have

\begin{aligned}{\langle {21m} \rvert} R {\lvert {21m} \rangle}&=\frac{1}{{24}} \left(\frac{ Z }{a_0 } \right)^5\int_0^\infty dr\, r^5 e^{-Z r/ a_0} \\ &=\frac{ a_0 }{ 24 Z } \int_0^\infty du\, u^5 e^{-u}\end{aligned}

The last integral evaluates to $120$, leaving

\begin{aligned}{\langle {21m} \rvert} R {\lvert {21m} \rangle}=\frac{ 5 a_0 }{ Z }.\end{aligned} \hspace{\stretch{1}}(3.23)

The expectation value associated with this ${\lvert {21m} \rangle}$ state for the radial position is found to be proportional to the Bohr radius. For the hydrogen atom where $Z=1$ this average value for repeated measurements of the physical quantity associated with the operator $R$ is found to be 5 times the Bohr radius for $n=2, l=1$ states.

Our problem actually asks for the inverse of this expectation value, and for reference this is

\begin{aligned}1/ {\langle {21m} \rvert} R {\lvert {21m} \rangle}=\frac{ Z }{ 5 a_0 } \end{aligned} \hspace{\stretch{1}}(3.24)

Performing the same task for $A(R) = 1/R$

\begin{aligned}{\langle {21m} \rvert} 1/R {\lvert {21m} \rangle}&=\frac{1}{{24}} \left(\frac{ Z }{a_0 } \right)^5\int_0^\infty dr\, r^3 e^{-Z r/ a_0} \\ &=\frac{1}{{24}} \frac{ Z }{ a_0 } \int_0^\infty du\, u^3 e^{-u}.\end{aligned}

This last integral has value $6$, and we have the second part of the computational task complete

\begin{aligned}{\langle {21m} \rvert} 1/R {\lvert {21m} \rangle} = \frac{1}{{4}} \frac{ Z }{ a_0 } \end{aligned} \hspace{\stretch{1}}(3.25)

This answers the question of whether 3.24 and 3.25 are equal: they are not.
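These two numbers are easy to verify numerically. The following pure-Python sketch (with $Z = a_0 = 1$) integrates the ${\lvert {21m} \rangle}$ radial density $\frac{1}{24} r^4 e^{-r}$ by the midpoint rule:

```python
import math

# Midpoint-rule integration of the |21m> radial density (Z = a_0 = 1):
# expect norm = 1, <R> = 5, <1/R> = 1/4, and note 1/<R> = 0.2 != 0.25.
N, rmax = 200000, 60.0
dr = rmax / N
norm = exp_R = exp_invR = 0.0
for i in range(N):
    r = (i + 0.5) * dr
    w = r ** 4 * math.exp(-r) / 24.0 * dr   # (r R_21(r))^2 dr
    norm += w
    exp_R += r * w
    exp_invR += w / r
```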

Still remaining for this problem is the question of the what these quantities represent physically.

The quantity ${\langle {nlm} \rvert} R {\lvert {nlm} \rangle}$ is the expectation value for the radial position of the particle measured from the center of mass of the system. This is the average outcome for many measurements of this radial distance when the system is prepared in the state ${\lvert {nlm} \rangle}$ prior to each measurement.

Interestingly, the physical quantity that we associate with the operator $R$ has a different measurable value than the inverse of the expectation value for the inverted operator $1/R$. Regardless, we have a physical (observable) quantity associated with the operator $1/R$, and when the system is prepared in state ${\lvert {21m} \rangle}$ prior to each measurement, the average outcome of many measurements of this physical quantity is ${\langle {21m} \rvert} 1/R {\lvert {21m} \rangle} = Z/(n^2 a_0)$, a quantity inversely proportional to the Bohr radius.

## ASIDE: Comparing to the general case.

As a confirmation of the results obtained, we can check 3.24 and 3.25 against the general form of the expectation values $\left\langle{{R^s}}\right\rangle$ for various powers $s$ of the radial position operator. These can be found in locations such as farside.ph.utexas.edu, which gives them for $Z=1$ (without proof), and in [2] (where these and harder-looking expectation values are left as an exercise for the reader). Both sources give:

\begin{aligned}\left\langle{{R}}\right\rangle &= \frac{a_0}{2} ( 3 n^2 -l (l+1) ) \\ \left\langle{{1/R}}\right\rangle &= \frac{1}{n^2 a_0} \end{aligned} \hspace{\stretch{1}}(3.26)

It is curious that the general expectation value $\left\langle{{R}}\right\rangle$ in 3.26 depends on the $l$ quantum number, while $\left\langle{{1/R}}\right\rangle$ depends only on $n$. It is not obvious to me why this would be the case.
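The general formulas can be spot-checked the same way for a different state. Taking the $n=1, l=0$ ground state with $Z = a_0 = 1$ (radial function $R_{10}(r) = 2 e^{-r}$, a standard result not derived in this problem), 3.26 predicts $\left\langle{{R}}\right\rangle = \frac{3}{2} a_0$ and $\left\langle{{1/R}}\right\rangle = 1/a_0$:

```python
import math

# Midpoint-rule check of (3.26) for the hydrogen ground state (Z = a_0 = 1):
# radial density r^2 R_10^2 = 4 r^2 e^{-2r}; expect <R> = 1.5 and <1/R> = 1.
N, rmax = 200000, 50.0
dr = rmax / N
exp_R = exp_invR = 0.0
for i in range(N):
    r = (i + 0.5) * dr
    w = 4.0 * r ** 2 * math.exp(-2.0 * r) * dr
    exp_R += r * w
    exp_invR += w / r
```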

# References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

[2] R. Liboff. Introductory quantum mechanics. Cambridge: Addison-Wesley Press, Inc, 2003.

## Notes and problems for Desai Chapter V.

Posted by peeterjoot on November 8, 2010

# Motivation.

Chapter V notes for [1].

# Problems

## Problem 1.

### Statement.

Obtain $S_x, S_y, S_z$ for spin 1 in the representation in which $S_z$ and $S^2$ are diagonal.

### Solution.

For spin 1, we have

\begin{aligned}S^2 = 1 (1+1) \hbar^2 \mathbf{1}\end{aligned} \hspace{\stretch{1}}(3.1)

and are interested in the states ${\lvert {1,-1} \rangle}$, ${\lvert {1, 0} \rangle}$, and ${\lvert {1,1} \rangle}$. If, like angular momentum, we assume that for $m_s = -1,0,1$ we have

\begin{aligned}S_z {\lvert {1,m_s} \rangle} = m_s \hbar {\lvert {1, m_s} \rangle}\end{aligned} \hspace{\stretch{1}}(3.2)

and introduce a column matrix representations for the kets as follows

\begin{aligned}{\lvert {1,1} \rangle} &=\begin{bmatrix}1 \\ 0 \\ 0\end{bmatrix} \\ {\lvert {1,0} \rangle} &=\begin{bmatrix}0 \\ 1 \\ 0\end{bmatrix} \\ {\lvert {1,-1} \rangle} &=\begin{bmatrix}0 \\ 0 \\ 1\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(3.3)

then we have, by inspection

\begin{aligned}S_z &= \hbar\begin{bmatrix}1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.6)

Note that, like the Pauli matrices, and unlike angular momentum, the spin states ${\lvert {-1, m_s} \rangle}, {\lvert {0, m_s} \rangle}$ have not been considered. Do those have any physical interpretation?

That question aside, we can proceed as in the text, introducing the ladder operators

\begin{aligned}S_{\pm} &= S_x \pm i S_y,\end{aligned} \hspace{\stretch{1}}(3.7)

to determine the values of $S_x$ and $S_y$ indirectly. We find

\begin{aligned}\left[{S_{+}},{S_{-}}\right] &= 2 \hbar S_z \\ \left[{S_{+}},{S_{z}}\right] &= -\hbar S_{+} \\ \left[{S_{-}},{S_{z}}\right] &= \hbar S_{-}.\end{aligned} \hspace{\stretch{1}}(3.8)

Let

\begin{aligned}S_{+} &=\begin{bmatrix}a & b & c \\ d & e & f \\ g & h & i\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.11)

Imposing the equality $\left[{S_{z}},{S_{+}}\right]/\hbar = S_{+}$, we find

\begin{aligned}\begin{bmatrix}0 & b & 2 c \\ -d & 0 & f \\ -2g & -h & 0\end{bmatrix}&=\begin{bmatrix}a & b & c \\ d & e & f \\ g & h & i\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(3.12)

so we must have

\begin{aligned}S_{+} &=\begin{bmatrix}0 & b & 0 \\ 0 & 0 & f \\ 0 & 0 & 0\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.13)

Furthermore, from $\left[{S_{+}},{S_{-}}\right] = 2 \hbar S_z$, we find

\begin{aligned}\begin{bmatrix}{\left\lvert{b}\right\rvert}^2 & 0 & 0 \\ 0 & {\left\lvert{f}\right\rvert}^2 - {\left\lvert{b}\right\rvert}^2 & 0 \\ 0 & 0 & -{\left\lvert{f}\right\rvert}^2\end{bmatrix} &= 2 \hbar^2\begin{bmatrix}1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.14)

We must have ${\left\lvert{b}\right\rvert}^2 = {\left\lvert{f}\right\rvert}^2 = 2 \hbar^2$. We could pick any
$b = \sqrt{2} \hbar e^{i\phi}$ and $f = \sqrt{2} \hbar e^{i\theta}$, but having no reason for a non-zero phase we try

\begin{aligned}S_{+}&=\sqrt{2} \hbar\begin{bmatrix}0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.15)

Putting all the pieces back together, with $S_x = (S_{+} + S_{-})/2$, and $S_y = (S_{+} - S_{-})/2i$, we finally have

\begin{aligned}S_x &=\frac{\hbar}{\sqrt{2}}\begin{bmatrix}0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0\end{bmatrix} \\ S_y &=\frac{\hbar}{\sqrt{2} i}\begin{bmatrix}0 & 1 & 0 \\ -1 & 0 & 1 \\ 0 & -1 & 0\end{bmatrix} \\ S_z &=\hbar\begin{bmatrix}1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.16)

A quick calculation verifies that we have $S_x^2 + S_y^2 + S_z^2 = 2 \hbar^2 \mathbf{1}$, as expected.
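That quick calculation is easy to automate. Here is a minimal numeric spot-check of the spin 1 matrices 3.16 (a sketch using numpy, working in units where $\hbar = 1$; the variable names are mine, not the text's):

```python
import numpy as np

hbar = 1.0  # assumption: work in units with hbar = 1

# spin 1 matrices from 3.16
Sx = (hbar / np.sqrt(2)) * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Sy = (hbar / (np.sqrt(2) * 1j)) * np.array([[0, 1, 0], [-1, 0, 1], [0, -1, 0]], dtype=complex)
Sz = hbar * np.diag([1.0, 0.0, -1.0]).astype(complex)

# S^2 = s(s+1) hbar^2 I = 2 hbar^2 I for s = 1
S2 = Sx @ Sx + Sy @ Sy + Sz @ Sz
assert np.allclose(S2, 2 * hbar**2 * np.eye(3))

# the angular momentum commutator [Sx, Sy] = i hbar Sz also holds
assert np.allclose(Sx @ Sy - Sy @ Sx, 1j * hbar * Sz)
```

The same two assertions also hold for the spin $1/2$ matrices $S_k = (\hbar/2)\sigma_k$, with $2\hbar^2$ replaced by $(3/4)\hbar^2$.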

## Problem 2.

### Statement.

Obtain eigensolution for operator $A = a \sigma_y + b \sigma_z$. Call the eigenstates ${\lvert {1} \rangle}$ and ${\lvert {2} \rangle}$, and determine the probabilities that they will correspond to $\sigma_x = +1$.

### Solution.

The first part is straightforward, and we have

\begin{aligned}A &= a \begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix} + b \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} \\ &=\begin{bmatrix}b & -i a \\ ia & -b\end{bmatrix}.\end{aligned}

Taking ${\left\lvert{A - \lambda I}\right\rvert} = 0$ we get

\begin{aligned}\lambda &= \pm \sqrt{a^2 + b^2},\end{aligned} \hspace{\stretch{1}}(3.19)

with eigenvectors proportional to

\begin{aligned}{\lvert {\pm} \rangle} &=\begin{bmatrix}i a \\ b \mp \sqrt{a^2 + b^2}\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.20)

The normalization constant is $1/\sqrt{2 (a^2 + b^2) \mp 2 b \sqrt{a^2 + b^2}}$. Now we can call these ${\lvert {1} \rangle}$ and ${\lvert {2} \rangle}$, but what does the last part of the question mean? What’s meant by $\sigma_x = +1$?

“I think it means that the result of a measurement of the x component of spin is $+1$. This corresponds to the eigenvalue of $\sigma_x$ being $+1$. The spin operator $S_x$ has eigenvalue $+\hbar/2$”.

Aside: Question to consider later. Is it significant that ${\langle {1} \rvert} \sigma_x {\lvert {1} \rangle} = {\langle {2} \rvert} \sigma_x {\lvert {2} \rangle} = 0$?

So, how do we translate this into a mathematical statement?

First let’s recall a couple of details. The Pauli operator associated with the x component of spin has the matrix representation

\begin{aligned}\sigma_x = \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.21)

This has eigenvalues $\pm 1$, with eigenstates $(1,\pm 1)/\sqrt{2}$. When the x component of spin is observed to be $+1$, the state of the system is then

\begin{aligned}{\lvert {x+} \rangle} =\frac{1}{{\sqrt{2}}}\begin{bmatrix}1 \\ 1\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.22)

Let’s look at the ways that this state can be formed as linear combinations of our states ${\lvert {1} \rangle}$, and ${\lvert {2} \rangle}$. That is

\begin{aligned}\frac{1}{{\sqrt{2}}}\begin{bmatrix}1 \\ 1\end{bmatrix}&=\alpha {\lvert {1} \rangle}+ \beta {\lvert {2} \rangle},\end{aligned} \hspace{\stretch{1}}(3.23)

or

\begin{aligned}\begin{bmatrix}1 \\ 1\end{bmatrix}&=\frac{\alpha}{\sqrt{(a^2 + b^2) - b \sqrt{a^2 + b^2}}}\begin{bmatrix}i a \\ b - \sqrt{a^2 + b^2}\end{bmatrix}+\frac{\beta}{\sqrt{(a^2 + b^2) + b \sqrt{a^2 + b^2}}}\begin{bmatrix}i a \\ b + \sqrt{a^2 + b^2}\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.24)

Letting $c = \sqrt{a^2 + b^2}$, this is

\begin{aligned}\begin{bmatrix}1 \\ 1\end{bmatrix}&=\frac{\alpha}{\sqrt{c^2 - b c}}\begin{bmatrix}i a \\ b - c\end{bmatrix}+\frac{\beta}{\sqrt{c^2 + b c}}\begin{bmatrix}i a \\ b + c\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.25)

We can solve for $\alpha$ and $\beta$ with Cramer’s rule, yielding

\begin{aligned}\begin{vmatrix}1 & i a \\ 1 & b - c\end{vmatrix}&=\frac{\beta}{\sqrt{c^2 + b c}}\begin{vmatrix}i a & i a \\ b + c & b - c\end{vmatrix} \\ \begin{vmatrix}1 & i a \\ 1 & b + c\end{vmatrix}&=\frac{\alpha}{\sqrt{c^2 - b c}}\begin{vmatrix}i a & i a \\ b - c & b + c\end{vmatrix},\end{aligned}

or

\begin{aligned}\alpha &= \frac{(b + c - ia)\sqrt{c^2 - b c}}{2 i a c} \\ \beta &= \frac{(b - c - ia)\sqrt{c^2 + b c}}{-2 i a c} \end{aligned} \hspace{\stretch{1}}(3.26)

It is ${\left\lvert{\alpha}\right\rvert}^2$ and ${\left\lvert{\beta}\right\rvert}^2$ that are probabilities, and after a bit of algebra we find that those are

\begin{aligned}{\left\lvert{\alpha}\right\rvert}^2 = {\left\lvert{\beta}\right\rvert}^2 = \frac{1}{{2}},\end{aligned} \hspace{\stretch{1}}(3.28)
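This 50/50 result holds for any $a$ and $b$ (with $a \ne 0$), and is easy to spot-check numerically. A minimal sketch using numpy, with sample values of my own choosing:

```python
import numpy as np

sigma_y = np.array([[0, -1j], [1j, 0]])
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

a, b = 0.7, 1.3  # assumption: arbitrary sample values, not from the text
A = a * sigma_y + b * sigma_z

# eigenvalues are +/- sqrt(a^2 + b^2), per 3.19
evals, evecs = np.linalg.eigh(A)  # columns of evecs are |1>, |2>
assert np.allclose(np.abs(evals), np.hypot(a, b))

x_plus = np.array([1, 1]) / np.sqrt(2)  # sigma_x = +1 eigenstate, per 3.22

# probability of the sigma_x = +1 outcome is 1/2 in either eigenstate of A
for k in (0, 1):
    p = abs(np.vdot(x_plus, evecs[:, k])) ** 2
    assert np.isclose(p, 0.5)
```

Since `np.linalg.eigh` returns eigenvectors only up to a phase, the check is done on the modulus squared, which is phase independent.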

so if the x spin of the system is measured as $+1$, we have a 50% probability of finding the system in either of the states ${\lvert {1} \rangle}$ or ${\lvert {2} \rangle}$.

Is that what the question was asking? I think that I’ve actually got it backwards. I think that the question was asking for the probability of finding state ${\lvert {x+} \rangle}$ (measuring a spin 1 value for $\sigma_x$) given the state ${\lvert {1} \rangle}$ or ${\lvert {2} \rangle}$. So, suppose that we have

\begin{aligned}\mu_{+} {\lvert {x+} \rangle} + \nu_{+} {\lvert {x-} \rangle} &= {\lvert {1} \rangle} \\ \mu_{-} {\lvert {x+} \rangle} + \nu_{-} {\lvert {x-} \rangle} &= {\lvert {2} \rangle},\end{aligned} \hspace{\stretch{1}}(3.29)

or (considering both cases simultaneously),

\begin{aligned}\mu_{\pm}\begin{bmatrix}1 \\ 1\end{bmatrix}+ \nu_{\pm}\begin{bmatrix}1 \\ -1\end{bmatrix}&= \frac{1}{{\sqrt{ c^2 \mp b c }}} \begin{bmatrix}i a \\ b \mp c\end{bmatrix} \\ \implies \\ \mu_{\pm}\begin{vmatrix}1 & 1 \\ 1 & -1\end{vmatrix}&= \frac{1}{{\sqrt{ c^2 \mp b c }}} \begin{vmatrix}i a & 1 \\ b \mp c & -1\end{vmatrix},\end{aligned}

or

\begin{aligned}\mu_{\pm} &= \frac{ia + b \mp c}{2 \sqrt{c^2 \mp bc}}.\end{aligned} \hspace{\stretch{1}}(3.31)

Unsurprisingly, this mirrors the previous scenario, and we find that we have a probability ${\left\lvert{\mu}\right\rvert}^2 = 1/2$ of measuring a spin 1 value for $\sigma_x$ when the state of the operator $A$ has been measured as $\pm \sqrt{a^2 + b^2}$ (ie: in the states ${\lvert {1} \rangle}$ or ${\lvert {2} \rangle}$ respectively).

No measurement of the operator $A = a \sigma_y + b\sigma_z$ gives a biased prediction of the value of $\sigma_x$. Loosely, this seems to justify calling these operators orthogonal. This is consistent with the geometric antisymmetry of the spin components, where we have $\sigma_y \sigma_x = -\sigma_x \sigma_y$, just like two orthogonal vectors under the Clifford product.

## Problem 3.

### Statement.
Obtain the expectation values of $S_x, S_y, S_z$ for the case of a spin $1/2$ particle with the spin pointed in the direction of a vector with azimuthal angle $\beta$ and polar angle $\alpha$.

### Solution.

Let’s work with $\sigma_k$ instead of $S_k$ to eliminate the $\hbar/2$ factors. Before considering the expectation values in the arbitrary spin orientation, let’s consider just the expectation values for $\sigma_k$. Introducing a matrix representation (assumed normalized) for a reference state

\begin{aligned}{\lvert {\psi} \rangle} &= \begin{bmatrix}a \\ b\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(3.32)

we find

\begin{aligned}{\langle {\psi} \rvert} \sigma_x {\lvert {\psi} \rangle}&=\begin{bmatrix}a^{*} & b^{*}\end{bmatrix}\begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}\begin{bmatrix}a \\ b\end{bmatrix}= a^{*} b + b^{*} a\\ {\langle {\psi} \rvert} \sigma_y {\lvert {\psi} \rangle}&=\begin{bmatrix}a^{*} & b^{*}\end{bmatrix}\begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix}\begin{bmatrix}a \\ b\end{bmatrix}= - i a^{*} b + i b^{*} a \\ {\langle {\psi} \rvert} \sigma_z {\lvert {\psi} \rangle}&=\begin{bmatrix}a^{*} & b^{*}\end{bmatrix}\begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix}\begin{bmatrix}a \\ b\end{bmatrix}= a^{*} a - b^{*} b \end{aligned} \hspace{\stretch{1}}(3.33)

Each of these expectation values is real, as expected from the Hermitian nature of $\sigma_k$. We also find that

\begin{aligned}\sum_{k=1}^3 {{\langle {\psi} \rvert} \sigma_k {\lvert {\psi} \rangle}}^2 &= ({\left\lvert{a}\right\rvert}^2 + {\left\lvert{b}\right\rvert}^2)^2 = 1.\end{aligned} \hspace{\stretch{1}}(3.36)

So a vector formed with the expectation values as components is a unit vector. This doesn’t seem too unexpected given the section on projection operators in the text, where it was stated that ${\langle {\chi} \rvert} \boldsymbol{\sigma} {\lvert {\chi} \rangle} = \mathbf{p}$, where $\mathbf{p}$ was a unit vector, and this seems similar.
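The unit-norm observation 3.36 is easy to spot-check numerically for a random spinor. A minimal sketch using numpy (the seed and variable names are mine, not the text's):

```python
import numpy as np

# the three Pauli matrices
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

rng = np.random.default_rng(0)

# a random normalized spinor (a, b)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

# the vector of expectation values <psi| sigma_k |psi> (each is real)
n = np.array([np.vdot(psi, s @ psi).real for s in sigma])

# 3.36: the expectation values form a unit vector
assert np.isclose(n @ n, 1.0)
```

Note that `np.vdot` conjugates its first argument, which is exactly the $\langle \psi \rvert$ half of the bracket.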
Let’s now consider the arbitrarily oriented spin operator $\boldsymbol{\sigma} \cdot \mathbf{n}$, and look at its expectation value. With $\mathbf{n}$ as the rotated image of $\hat{\mathbf{z}}$ by an azimuthal angle $\beta$ and polar angle $\alpha$, we have

\begin{aligned}\mathbf{n} = (\sin\alpha \cos\beta,\sin\alpha \sin\beta,\cos\alpha),\end{aligned} \hspace{\stretch{1}}(3.37)

that is

\begin{aligned}\boldsymbol{\sigma} \cdot \mathbf{n} &= \sin\alpha \cos\beta \sigma_x + \sin\alpha \sin\beta \sigma_y + \cos\alpha \sigma_z \end{aligned} \hspace{\stretch{1}}(3.38)

The $k = x,y,z$ projections of this operator

\begin{aligned}\frac{1}{{2}} \text{Tr} \left( \sigma_k (\boldsymbol{\sigma} \cdot \mathbf{n}) \right) \sigma_k\end{aligned} \hspace{\stretch{1}}(3.39)

are just the Pauli matrices scaled by the components of $\mathbf{n}$

\begin{aligned}\frac{1}{{2}} \text{Tr} \left( \sigma_x (\boldsymbol{\sigma} \cdot \mathbf{n}) \right) \sigma_x &= \sin\alpha \cos\beta \sigma_x \\ \frac{1}{{2}} \text{Tr} \left( \sigma_y (\boldsymbol{\sigma} \cdot \mathbf{n}) \right) \sigma_y &= \sin\alpha \sin\beta \sigma_y \\ \frac{1}{{2}} \text{Tr} \left( \sigma_z (\boldsymbol{\sigma} \cdot \mathbf{n}) \right) \sigma_z &= \cos\alpha \sigma_z,\end{aligned} \hspace{\stretch{1}}(3.40)

so our $S_k$ expectation values are by inspection

\begin{aligned}{\langle {\psi} \rvert} S_x {\lvert {\psi} \rangle} &= \frac{\hbar}{2} \sin\alpha \cos\beta ( a^{*} b + b^{*} a ) \\ {\langle {\psi} \rvert} S_y {\lvert {\psi} \rangle} &= \frac{\hbar}{2} \sin\alpha \sin\beta ( - i a^{*} b + i b^{*} a ) \\ {\langle {\psi} \rvert} S_z {\lvert {\psi} \rangle} &= \frac{\hbar}{2} \cos\alpha ( a^{*} a - b^{*} b )\end{aligned} \hspace{\stretch{1}}(3.43)

Is this correct? While $(\boldsymbol{\sigma} \cdot \mathbf{n})^2 = \mathbf{n}^2 = I$ is a unit norm operator, we find that the expectation values of the coordinates of $\boldsymbol{\sigma} \cdot \mathbf{n}$ cannot be viewed as the coordinates of a unit vector.
Let’s consider a specific case, with $\mathbf{n} = (0,0,1)$, where the spin operator is oriented along the $z$-direction ($\alpha = 0$). That gives us

\begin{aligned}\boldsymbol{\sigma} \cdot \mathbf{n} = \sigma_z,\end{aligned} \hspace{\stretch{1}}(3.46)

so the expectation values of $S_k$ are

\begin{aligned}\left\langle{{S_x}}\right\rangle &= 0 \\ \left\langle{{S_y}}\right\rangle &= 0 \\ \left\langle{{S_z}}\right\rangle &= \frac{\hbar}{2} ( a^{*} a - b^{*} b )\end{aligned} \hspace{\stretch{1}}(3.47)

Given this, it seems reasonable that from 3.43 we find

\begin{aligned}\sum_k {{\langle {\psi} \rvert} S_k {\lvert {\psi} \rangle}}^2 \ne \hbar^2/4,\end{aligned} \hspace{\stretch{1}}(3.50)

(since we don’t have any reason to believe that $( a^{*} a - b^{*} b )^2 = 1$ holds in general). The most general statement we can make about these expectation values (an average observed value for the measurement of the operator) is that

\begin{aligned}{\left\lvert{\left\langle{{S_k}}\right\rangle}\right\rvert} \le \frac{\hbar}{2},\end{aligned} \hspace{\stretch{1}}(3.51)

with equality for specific states and orientations only.

## Problem 4.

### Statement.

Take the azimuthal angle, $\beta = 0$, so that the spin is in the x-z plane at an angle $\alpha$ with respect to the z-axis, and the unit vector is $\mathbf{n} = (\sin\alpha, 0, \cos\alpha)$. Write

\begin{aligned}{\lvert {\chi_{n+}} \rangle} = {\lvert {+\alpha} \rangle}\end{aligned} \hspace{\stretch{1}}(3.52)

for this case. Show that the probability that it is in the spin-up state in the direction $\theta$ with respect to the z-axis is

\begin{aligned}{\left\lvert{ \left\langle{{+\theta}} \vert {{+\alpha}}\right\rangle }\right\rvert}^2 = \cos^2 \frac{\alpha - \theta}{2}\end{aligned} \hspace{\stretch{1}}(3.53)

Also obtain the expectation value of $\boldsymbol{\sigma} \cdot \mathbf{n}$ with respect to the state ${\lvert {+\theta} \rangle}$.

### Solution.
For this orientation we have

\begin{aligned}\boldsymbol{\sigma} \cdot \mathbf{n}&=\sin\alpha \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix} + \cos\alpha \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix}=\begin{bmatrix}\cos\alpha & \sin\alpha \\ \sin\alpha & -\cos\alpha\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.54)

Confirmation that our eigenvalues are $\pm 1$ is simple, and our eigenstate for the $+1$ eigenvalue is found to be

\begin{aligned}{\lvert {+\alpha} \rangle} \propto \begin{bmatrix}\sin\alpha \\ 1 - \cos\alpha\end{bmatrix}= \begin{bmatrix}2 \sin\alpha/2 \cos\alpha/2 \\ 2 \sin^2 \alpha/2\end{bmatrix}\propto\begin{bmatrix}\cos \alpha/2 \\ \sin\alpha/2 \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.55)

This last has unit norm, so we can write

\begin{aligned}{\lvert {+\alpha} \rangle} =\begin{bmatrix}\cos \alpha/2 \\ \sin\alpha/2 \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.56)

If the state has been measured to be

\begin{aligned}{\lvert {\phi} \rangle} = 1 {\lvert {+\alpha} \rangle} + 0 {\lvert {-\alpha} \rangle},\end{aligned} \hspace{\stretch{1}}(3.57)

then the probability of a second measurement obtaining ${\lvert {+\theta} \rangle}$ is

\begin{aligned}{\left\lvert{ \left\langle{{+\theta}} \vert {{\phi}}\right\rangle }\right\rvert}^2&={\left\lvert{ \left\langle{{+\theta}} \vert {{+\alpha}}\right\rangle }\right\rvert}^2 .\end{aligned} \hspace{\stretch{1}}(3.58)

Expanding just the inner product first, we have

\begin{aligned}\left\langle{{+\theta}} \vert {{+\alpha}}\right\rangle &=\begin{bmatrix}C_{\theta/2} & S_{\theta/2} \end{bmatrix}\begin{bmatrix}C_{\alpha/2} \\ S_{\alpha/2} \end{bmatrix} \\ &=S_{\theta/2} S_{\alpha/2} + C_{\theta/2} C_{\alpha/2} \\ &= \cos\left( \frac{\theta - \alpha}{2} \right)\end{aligned}

So our probability of measuring the spin up state ${\lvert {+\theta} \rangle}$, given that the state was known to have been in the spin up state ${\lvert {+\alpha} \rangle}$, is

\begin{aligned}{\left\lvert{ \left\langle{{+\theta}} \vert {{+\alpha}}\right\rangle }\right\rvert}^2 = \cos^2\left( \frac{\theta - \alpha}{2} \right)\end{aligned} \hspace{\stretch{1}}(3.59)

Finally, the expectation value for $\boldsymbol{\sigma} \cdot \mathbf{n}$ with respect to ${\lvert {+\theta} \rangle}$ is

\begin{aligned}\begin{bmatrix}C_{\theta/2} & S_{\theta/2} \end{bmatrix}\begin{bmatrix}C_\alpha & S_\alpha \\ S_\alpha & -C_\alpha\end{bmatrix}\begin{bmatrix}C_{\theta/2} \\ S_{\theta/2} \end{bmatrix} &=\begin{bmatrix}C_{\theta/2} & S_{\theta/2} \end{bmatrix}\begin{bmatrix}C_\alpha C_{\theta/2} + S_\alpha S_{\theta/2} \\ S_\alpha C_{\theta/2} - C_\alpha S_{\theta/2} \end{bmatrix} \\ &=C_{\theta/2} C_\alpha C_{\theta/2} + C_{\theta/2} S_\alpha S_{\theta/2} + S_{\theta/2} S_\alpha C_{\theta/2} - S_{\theta/2} C_\alpha S_{\theta/2} \\ &=C_\alpha ( C_{\theta/2}^2 -S_{\theta/2}^2 )+ 2 S_\alpha S_{\theta/2} C_{\theta/2} \\ &= C_\alpha C_\theta+ S_\alpha S_\theta \\ &= \cos( \alpha - \theta )\end{aligned}

Sanity checking this, we observe that we have $+1$ as desired for the $\alpha = \theta$ case.

## Problem 5.

### Statement.

Consider an arbitrary density matrix, $\rho$, for a spin $1/2$ system. Express each matrix element in terms of the ensemble averages $[S_i]$ where $i = x,y,z$.

### Solution.
Let’s omit the spin direction temporarily and write for the density matrix

\begin{aligned}\rho &= w_{+} {\lvert {+} \rangle}{\langle {+} \rvert}+w_{-} {\lvert {-} \rangle}{\langle {-} \rvert} \\ &=w_{+} {\lvert {+} \rangle}{\langle {+} \rvert}+(1 - w_{+}){\lvert {-} \rangle}{\langle {-} \rvert} \\ &={\lvert {-} \rangle}{\langle {-} \rvert} +w_{+} ({\lvert {+} \rangle}{\langle {+} \rvert} -{\lvert {-} \rangle}{\langle {-} \rvert})\end{aligned}

For the ensemble average (no sum over repeated indexes) we have

\begin{aligned}[S] = \left\langle{{S}}\right\rangle_{av} &= w_{+} {\langle {+} \rvert} S {\lvert {+} \rangle} +w_{-} {\langle {-} \rvert} S {\lvert {-} \rangle} \\ &= \frac{\hbar}{2}( w_{+} -w_{-} ) \\ &= \frac{\hbar}{2}( w_{+} -(1 - w_{+}) ) \\ &= \hbar w_{+} - \frac{\hbar}{2}\end{aligned}

This gives us

\begin{aligned}w_{+} = \frac{1}{{\hbar}} [S] + \frac{1}{{2}},\end{aligned}

and our density matrix becomes

\begin{aligned}\rho &=\frac{1}{{2}} ( {\lvert {+} \rangle}{\langle {+} \rvert} +{\lvert {-} \rangle}{\langle {-} \rvert} )+\frac{1}{{\hbar}} [S] ({\lvert {+} \rangle}{\langle {+} \rvert} -{\lvert {-} \rangle}{\langle {-} \rvert}) \\ &=\frac{1}{{2}} I+\frac{1}{{\hbar}} [S] ({\lvert {+} \rangle}{\langle {+} \rvert} -{\lvert {-} \rangle}{\langle {-} \rvert}) \\ \end{aligned}

Utilizing

\begin{aligned}{\lvert {x+} \rangle} &= \frac{1}{{\sqrt{2}}}\begin{bmatrix}1 \\ 1\end{bmatrix} \\ {\lvert {x-} \rangle} &= \frac{1}{{\sqrt{2}}}\begin{bmatrix}1 \\ -1\end{bmatrix} \\ {\lvert {y+} \rangle} &= \frac{1}{{\sqrt{2}}}\begin{bmatrix}1 \\ i\end{bmatrix} \\ {\lvert {y-} \rangle} &= \frac{1}{{\sqrt{2}}}\begin{bmatrix}1 \\ -i\end{bmatrix} \\ {\lvert {z+} \rangle} &= \begin{bmatrix}1 \\ 0\end{bmatrix} \\ {\lvert {z-} \rangle} &= \begin{bmatrix}0 \\ 1\end{bmatrix}\end{aligned}

we can easily find

\begin{aligned}{\lvert {x+} \rangle}{\langle {x+} \rvert} -{\lvert {x-} \rangle}{\langle {x-} \rvert} &= \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix} = \sigma_x \\ {\lvert {y+} \rangle}{\langle {y+} \rvert} -{\lvert {y-} \rangle}{\langle {y-} \rvert} &= \begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix} = \sigma_y \\ {\lvert {z+} \rangle}{\langle {z+} \rvert} -{\lvert {z-} \rangle}{\langle {z-} \rvert} &= \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} = \sigma_z\end{aligned}

So we can write the density matrix in terms of any of the ensemble averages as

\begin{aligned}\rho =\frac{1}{{2}} I+\frac{1}{{\hbar}} [S_i] \sigma_i=\frac{1}{{2}} (I + [\sigma_i] \sigma_i )\end{aligned}

Alternatively, defining $\mathbf{P}_i = [\sigma_i] \mathbf{e}_i$, for any of the directions $i = 1,2,3$ we can write

\begin{aligned}\rho = \frac{1}{{2}} (I + \boldsymbol{\sigma} \cdot \mathbf{P}_i )\end{aligned} \hspace{\stretch{1}}(3.60)

In equation (5.109) we had a similar result in terms of the polarization vector $\mathbf{P} = {\langle {\alpha} \rvert} \boldsymbol{\sigma} {\lvert {\alpha} \rangle}$, and the individual weights $w_\alpha$ and $w_\beta$, but we see here that this $(w_\alpha - w_\beta)\mathbf{P}$ factor can be written exclusively in terms of the ensemble average. Actually, this is also a result in the text, down in (5.113), but we see it here in a more concrete form, having picked specific spin directions.

## Problem 6.

### Statement.

If a Hamiltonian is given by $\boldsymbol{\sigma} \cdot \mathbf{n}$ where $\mathbf{n} = (\sin\alpha\cos\beta, \sin\alpha\sin\beta, \cos\alpha)$, determine the time evolution operator as a 2 x 2 matrix. If a state at $t = 0$ is given by

\begin{aligned}{\lvert {\phi(0)} \rangle} = \begin{bmatrix}a \\ b\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(3.61)

then obtain ${\lvert {\phi(t)} \rangle}$.

### Solution.

Before diving into the meat of the problem, observe that a tidy factorization of the Hamiltonian is possible as a composition of rotations.
That is

\begin{aligned}H &= \boldsymbol{\sigma} \cdot \mathbf{n} \\ &= \sin\alpha \sigma_1 ( \cos\beta + \sigma_1 \sigma_2 \sin\beta ) + \cos\alpha \sigma_3 \\ &= \sigma_3 \left(\cos\alpha + \sin\alpha \sigma_3 \sigma_1 e^{ i \sigma_3 \beta }\right) \\ &= \sigma_3 \exp\left( \alpha i \sigma_2 \exp\left( \beta i \sigma_3 \right)\right)\end{aligned}

So we have for the time evolution operator

\begin{aligned}U(\Delta t) &=\exp( -i \Delta t H /\hbar )= \exp \left(- \frac{\Delta t}{\hbar} i \sigma_3 \exp\Bigl( \alpha i \sigma_2 \exp\left( \beta i \sigma_3 \right)\Bigr)\right).\end{aligned} \hspace{\stretch{1}}(3.62)

Does this really help? I guess not, but it is nice and tidy.

Returning to the specifics of the problem, we note that squaring the Hamiltonian produces the identity matrix

\begin{aligned}(\boldsymbol{\sigma} \cdot \mathbf{n})^2 &= I \mathbf{n}^2 = I.\end{aligned} \hspace{\stretch{1}}(3.63)

This allows us to exponentiate $H$ by inspection, utilizing

\begin{aligned}e^{i \mu (\boldsymbol{\sigma} \cdot \mathbf{n}) } = I \cos\mu + i (\boldsymbol{\sigma} \cdot \mathbf{n}) \sin\mu\end{aligned} \hspace{\stretch{1}}(3.64)

Writing $\sin\mu = S_\mu$, and $\cos\mu = C_\mu$, we have

\begin{aligned}\boldsymbol{\sigma} \cdot \mathbf{n} &=\begin{bmatrix}C_\alpha & S_\alpha e^{-i\beta} \\ S_\alpha e^{i\beta} & -C_\alpha\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(3.65)

and thus

\begin{aligned}U(\Delta t) = \exp( -i \Delta t H /\hbar )=\begin{bmatrix}C_{\Delta t/\hbar} -i S_{\Delta t/\hbar} C_\alpha & -i S_{\Delta t/\hbar} S_\alpha e^{-i\beta} \\ -i S_{\Delta t/\hbar} S_\alpha e^{i\beta} & C_{\Delta t/\hbar} + i S_{\Delta t/\hbar} C_\alpha\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.66)

Note that as a sanity check we can calculate that $U(\Delta t) U(\Delta t)^\dagger = 1$ as expected.
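That unitarity check, and the closed form 3.66 itself, can be verified numerically by comparing against exponentiation through the eigen-decomposition of $H$. A minimal sketch using numpy (sample angles and time step are my own choices):

```python
import numpy as np

hbar = 1.0
alpha, beta, dt = 0.9, 0.4, 1.7  # assumption: arbitrary sample values

# H = sigma . n as a matrix, from 3.65
H = np.array([[np.cos(alpha), np.sin(alpha) * np.exp(-1j * beta)],
              [np.sin(alpha) * np.exp(1j * beta), -np.cos(alpha)]])

# closed form 3.66, using H^2 = I:  U = cos(dt/hbar) I - i sin(dt/hbar) H
mu = dt / hbar
U = np.cos(mu) * np.eye(2) - 1j * np.sin(mu) * H

# unitarity: U U^dagger = I
assert np.allclose(U @ U.conj().T, np.eye(2))

# compare against exp(-i mu H) built from the eigen-decomposition of H
w, V = np.linalg.eigh(H)
U_exact = V @ np.diag(np.exp(-1j * mu * w)) @ V.conj().T
assert np.allclose(U, U_exact)
```

The eigen-decomposition route works because $H$ is Hermitian, so `np.linalg.eigh` applies and $f(H) = V f(\Lambda) V^\dagger$.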
Now for $\Delta t = t$, we have

\begin{aligned}U(t,0) \begin{bmatrix}a \\ b\end{bmatrix}&=\begin{bmatrix}a C_{t/\hbar} -a i S_{t/\hbar} C_\alpha - b i S_{t/\hbar} S_\alpha e^{-i\beta} \\ -a i S_{t/\hbar} S_\alpha e^{i\beta} + b C_{t/\hbar} + b i S_{t/\hbar} C_\alpha\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.67)

It doesn’t seem terribly illuminating to multiply this all out, but we can factor the result slightly to tidy it up. That gives us

\begin{aligned}U(t,0) \begin{bmatrix}a \\ b\end{bmatrix}&=\cos(t/\hbar)\begin{bmatrix}a \\ b\end{bmatrix}- i \sin(t/\hbar) \cos\alpha\begin{bmatrix}a \\ -b\end{bmatrix}- i\sin(t/\hbar) \sin\alpha\begin{bmatrix}b e^{-i\beta} \\ a e^{i \beta}\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.68)

## Problem 7.

### Statement.

Consider a system of spin $1/2$ particles in a mixed ensemble containing a mixture of 25% of the particles in the state ${\lvert {z+} \rangle}$ and 75% in the state ${\lvert {x-} \rangle}$. Obtain the density matrix and the ensemble averages.

### Solution.

We have

\begin{aligned}\rho &= \frac{1}{4} {\lvert {z+} \rangle}{\langle {z+} \rvert}+\frac{3}{4} {\lvert {x-} \rangle}{\langle {x-} \rvert} \\ &=\frac{1}{{4}} \begin{bmatrix}1 \\ 0\end{bmatrix}\begin{bmatrix}1 & 0\end{bmatrix}+\frac{3}{4} \frac{1}{{2}}\begin{bmatrix}1 \\ -1\end{bmatrix}\begin{bmatrix}1 & -1\end{bmatrix} \\ &=\frac{1}{{4}} \left(\frac{1}{{2}}\begin{bmatrix}2 & 0 \\ 0 & 0\end{bmatrix}+\frac{3}{2}\begin{bmatrix}1 & -1 \\ -1 & 1\end{bmatrix}\right) \\ \end{aligned}

Giving us

\begin{aligned}\rho =\frac{1}{{8}}\begin{bmatrix}5 & -3 \\ -3 & 3\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(3.69)

Note that we can also factor the identity out of this for

\begin{aligned}\rho &=\frac{1}{{2}}\begin{bmatrix}5/4 & -3/4 \\ -3/4 & 3/4\end{bmatrix}\\ &=\frac{1}{{2}}\left(I +\begin{bmatrix}1/4 & -3/4 \\ -3/4 & -1/4\end{bmatrix}\right),\end{aligned}

which is just

\begin{aligned}\rho = \frac{1}{{2}} \left( I + \frac{1}{{4}} \sigma_z -\frac{3}{4} \sigma_x \right)\end{aligned} \hspace{\stretch{1}}(3.70)

Recall that the ensemble average is related to the trace of the density and operator product
\begin{aligned}\text{Tr}( \rho A )&=\sum_\beta {\langle {\beta} \rvert} \rho A {\lvert {\beta} \rangle} \\ &=\sum_{\beta} {\langle {\beta} \rvert} \left( \sum_\alpha w_\alpha {\lvert {\alpha} \rangle}{\langle {\alpha} \rvert} \right) A {\lvert {\beta} \rangle} \\ &=\sum_{\alpha, \beta} w_\alpha \left\langle{{\beta}} \vert {{\alpha}}\right\rangle{\langle {\alpha} \rvert} A {\lvert {\beta} \rangle} \\ &=\sum_{\alpha, \beta} w_\alpha {\langle {\alpha} \rvert} A {\lvert {\beta} \rangle} \left\langle{{\beta}} \vert {{\alpha}}\right\rangle\\ &=\sum_{\alpha} w_\alpha {\langle {\alpha} \rvert} A \left( \sum_\beta {\lvert {\beta} \rangle} {\langle {\beta} \rvert} \right) {\lvert {\alpha} \rangle}\\ &=\sum_\alpha w_\alpha {\langle {\alpha} \rvert} A {\lvert {\alpha} \rangle}\end{aligned}

But this, by definition of the ensemble average, is just

\begin{aligned}\text{Tr}( \rho A )&=\left\langle{{A}}\right\rangle_{\text{av}}.\end{aligned} \hspace{\stretch{1}}(3.71)

We can use this to compute the ensemble averages of the Pauli matrices

\begin{aligned}\left\langle{{\sigma_x}}\right\rangle_{\text{av}} &= \text{Tr} \left(\frac{1}{{8}}\begin{bmatrix}5 & -3 \\ -3 & 3\end{bmatrix}\begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}\right) = -\frac{3}{4} \\ \left\langle{{\sigma_y}}\right\rangle_{\text{av}} &= \text{Tr} \left(\frac{1}{{8}}\begin{bmatrix}5 & -3 \\ -3 & 3\end{bmatrix}\begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix}\right) = 0 \\ \left\langle{{\sigma_z}}\right\rangle_{\text{av}} &= \text{Tr} \left(\frac{1}{{8}}\begin{bmatrix}5 & -3 \\ -3 & 3\end{bmatrix}\begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix}\right) = \frac{1}{4} \\ \end{aligned}

We can also find these without the explicit matrix multiplication, from 3.70

\begin{aligned}\left\langle{{\sigma_x}}\right\rangle_{\text{av}} &= \text{Tr} \frac{1}{{2}}\left(\sigma_x + \frac{1}{{4}} \sigma_z \sigma_x -\frac{3}{4} \sigma_x^2\right) = -\frac{3}{4} \\ \left\langle{{\sigma_y}}\right\rangle_{\text{av}} &= \text{Tr} \frac{1}{{2}}\left(\sigma_y + \frac{1}{{4}} \sigma_z \sigma_y -\frac{3}{4} \sigma_x \sigma_y\right) = 0 \\ \left\langle{{\sigma_z}}\right\rangle_{\text{av}} &= \text{Tr} \frac{1}{{2}}\left(\sigma_z + \frac{1}{{4}} \sigma_z^2 -\frac{3}{4} \sigma_x \sigma_z\right) = \frac{1}{{4}},\end{aligned}

(where to do so we observe that $\text{Tr} \sigma_i \sigma_j = 0$ for $i\ne j$, $\text{Tr} \sigma_i = 0$, and $\text{Tr} \sigma_i^2 = 2$.)

We see that the traces of the density operator and Pauli matrix products act very much like dot products, extracting out the ensemble averages, which end up very much like the magnitudes of the projections in each of the directions.

## Problem 8.

### Statement.

Show that the quantity $\boldsymbol{\sigma} \cdot \mathbf{p} V(r) \boldsymbol{\sigma} \cdot \mathbf{p}$, when simplified, has a term proportional to $\mathbf{L} \cdot \boldsymbol{\sigma}$.

### Solution.

Consider the operation

\begin{aligned}\boldsymbol{\sigma} \cdot \mathbf{p} V(r) \Psi&=- i \hbar \sigma_k \partial_k V(r) \Psi \\ &=- i \hbar \sigma_k (\partial_k V(r)) \Psi + V(r) (\boldsymbol{\sigma} \cdot \mathbf{p} ) \Psi \\ \end{aligned}

With $r = \sqrt{\sum_j x_j^2}$, we have

\begin{aligned}\partial_k V(r) = \frac{1}{{2}}\frac{1}{{r}} 2 x_k \frac{\partial {V(r)}}{\partial {r}} = \frac{x_k}{r} \frac{\partial {V(r)}}{\partial {r}},\end{aligned}

which gives us the commutator

\begin{aligned}\left[{ \boldsymbol{\sigma} \cdot \mathbf{p}},{V(r)}\right]&=- \frac{i \hbar}{r} \frac{\partial {V(r)}}{\partial {r}} (\boldsymbol{\sigma} \cdot \mathbf{x}) \end{aligned} \hspace{\stretch{1}}(3.72)

Inserting this into the operator in question, we have

\begin{aligned}\boldsymbol{\sigma} \cdot \mathbf{p} V(r) \boldsymbol{\sigma} \cdot \mathbf{p} =- \frac{i \hbar}{r} \frac{\partial {V(r)}}{\partial {r}} (\boldsymbol{\sigma} \cdot \mathbf{x}) (\boldsymbol{\sigma} \cdot \mathbf{p} ) + V(r) (\boldsymbol{\sigma} \cdot \mathbf{p} )^2\end{aligned} \hspace{\stretch{1}}(3.73)

With decomposition of the $(\boldsymbol{\sigma} \cdot \mathbf{x}) (\boldsymbol{\sigma} 
\cdot \mathbf{p} )$ into symmetric and antisymmetric components, we should have in the second term our $\boldsymbol{\sigma} \cdot \mathbf{L}$

\begin{aligned}(\boldsymbol{\sigma} \cdot \mathbf{x}) (\boldsymbol{\sigma} \cdot \mathbf{p} )=\frac{1}{{2}} \left\{{\boldsymbol{\sigma} \cdot \mathbf{x}},{\boldsymbol{\sigma} \cdot \mathbf{p}}\right\}+\frac{1}{{2}} \left[{\boldsymbol{\sigma} \cdot \mathbf{x}},{\boldsymbol{\sigma} \cdot \mathbf{p}}\right],\end{aligned} \hspace{\stretch{1}}(3.74)

where we expect $\boldsymbol{\sigma} \cdot \mathbf{L} \propto \left[{\boldsymbol{\sigma} \cdot \mathbf{x}},{\boldsymbol{\sigma} \cdot \mathbf{p}}\right]$. Alternately, in components

\begin{aligned}(\boldsymbol{\sigma} \cdot \mathbf{x}) (\boldsymbol{\sigma} \cdot \mathbf{p} )&=\sigma_k x_k \sigma_j p_j \\ &=x_k p_k I + \sum_{j\ne k} \sigma_k \sigma_j x_k p_j \\ &=x_k p_k I + i \sum_m \epsilon_{kjm} \sigma_m x_k p_j \\ &=I (\mathbf{x} \cdot \mathbf{p}) + i (\boldsymbol{\sigma} \cdot \mathbf{L})\end{aligned}

## Problem 9.

### Statement.

### Solution.

TODO.

# References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

## PHY356F: Quantum Mechanics I. Lecture 8 — Making Sense of Quantum Mechanics

Posted by peeterjoot on November 3, 2010

My notes from Lecture 8, November 2, 2010. Taught by Prof. Vatche Deyirmenjian.

[Click here for a PDF of this post with nicer formatting]

## Discussion

Desai: “Quantum Theory is a linear theory …”

We can discuss SHM without using sines and cosines or complex exponentials, say, only using polynomials, but it would be HARD to do so, and much more work. We want the framework of Hilbert space, linear operators and all the rest to make our life easier.

Dirac: “Mathematics is only a tool, and one should learn the … (FIXME: LOOKUP)”

You have to be able to understand the concepts and apply the concepts as well as the mathematics.

Deyirmenjian: “Think before you compute.”

Joke: With his name included it is the 3Ds.
There’s a lot of information included in the question, so read it carefully.

Q: The equation $A {\lvert {a_n} \rangle} = a_n {\lvert {a_n} \rangle}$ for operator $A$, eigenvalue $a_n$, $n = 1,2$ and eigenvector ${\lvert {a_n} \rangle}$ that is identified by the eigenvalue $a_n$ says that

\begin{itemize}
\item (a) measuring the physical quantity associated with $A$ gives result $a_n$
\item (b) $A$ acting on the state ${\lvert {a_n} \rangle}$ gives outcome $a_n$
\item (c) the possible outcomes of measuring the physical quantity associated with $A$ are the eigenvalues $a_n$
\item (d) Quantum mechanics is hard.
\end{itemize}

${\lvert {a_n} \rangle}$ is a vector in a vector space or Hilbert space identified by some quantum number $a_n, n \in 1,2, \cdots$. The $a_n$ values could be expressions.

Example: angular momentum is described by states ${\lvert {lm} \rangle}$, with $l = 0,1,2,\cdots$ and $m = 0, \pm 1, \cdots, \pm l$. Recall that the problem is

\begin{aligned}\mathbf{L}^2 {\lvert {lm} \rangle} &= l(l+1) \hbar^2 {\lvert {lm} \rangle} \\ L_z {\lvert {lm} \rangle} &= m \hbar {\lvert {lm} \rangle}\end{aligned} \hspace{\stretch{1}}(4.72)

We have respectively eigenvalues $l(l+1)\hbar^2$, and $m \hbar$.

A: The answer is (c). $a_n$ isn’t a measurement itself; these represent possibilities. Contrast this to classical mechanics, where time evolution is given without probabilities

\begin{aligned}\mathbf{F}_{\text{net}} &= m \mathbf{a} \\ \mathbf{x}(0), \mathbf{x}'(0) &\implies \mathbf{x}(t), \mathbf{x}'(t)\end{aligned} \hspace{\stretch{1}}(4.74)

The eigenvalues are the possible outcomes, but we only know statistically that these are the possibilities.

(a) and (b) are incorrect because we do not know what the initial state is, nor what the final outcome is. We also can’t say “gives result $a_n$”. That statement is too strong! We wouldn’t say that $A$ acting on a pure state ${\lvert {a_n} \rangle}$ “gives” a measurement outcome.

Q:
If the state of the system is ${\lvert {\psi} \rangle} = {\lvert {a_5} \rangle}$, the probability of measuring outcome $a_5$ is \begin{itemize} \item (a) $a_5$ \item (b) $a_5^2$ \item (c) $\left\langle{{a_5}} \vert {{\psi}}\right\rangle = \left\langle{{a_5}} \vert {{a_5}}\right\rangle = 1$. \item (d) ${\left\lvert{\left\langle{{a_5}} \vert {{\psi}}\right\rangle}\right\rvert}^2 = {\left\lvert{\left\langle{{a_5}} \vert {{a_5}}\right\rangle}\right\rvert}^2 = {\left\lvert{1}\right\rvert}^2 = 1$. \end{itemize} A: (d) The eigenvalue equation doesn’t say anything about any specific outcome. We want to talk about probability amplitudes. When the system is prepared in a particular pure eigenstate, then we have a guarantee that the probability of measuring that state is unity. We wouldn’t say (c) because a probability is the absolute square of the complex probability amplitude $\left\langle{{a_n}} \vert {{\Psi}}\right\rangle$; only the squared magnitude has the probability interpretation. The probability of outcome $a_n$, given initial state ${\lvert {\Psi} \rangle}$, is ${\left\lvert{\left\langle{{a_n}} \vert {{\Psi}}\right\rangle}\right\rvert}^2$. Wave function collapse: When you make a measurement of the physical quantity associated with $A$, then the state of the system will be ${\lvert {a_5} \rangle}$. The state is not the number (eigenvalue) $a_5$. Example: SGZ. With a “spin-up” measurement in the z-direction, the state of the system is ${\lvert {z+} \rangle}$. The state before the measurement, by the magnet, was ${\lvert {\Psi} \rangle}$. After the measurement, the state describing the system is ${\lvert {\phi} \rangle} = {\lvert {z+} \rangle}$. The measurement outcome is $+\frac{\hbar}{2}$ for the spin angular momentum along the z-direction. FIXME: SGZ picture here. There is an interaction between the magnet and the silver atoms coming out of the oven. Before that interaction we have a state described by ${\lvert {\Psi} \rangle}$. After the measurement, we have a new state ${\lvert {\phi} \rangle}$.
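As an aside, the probability rule discussed above is easy to sanity check numerically. A minimal sketch using numpy (the two-state vectors here are illustrative assumptions, not anything from the lecture):

```python
import numpy as np

# Hypothetical 2-state system: |a5> is an eigenstate of some Hermitian A.
a5 = np.array([1.0, 0.0])   # eigenstate |a_5>
psi = a5.copy()             # system prepared in the pure state |psi> = |a_5>

# The probability of outcome a_5 is |<a_5|psi>|^2, not <a_5|psi> itself.
prob = abs(np.vdot(a5, psi)) ** 2
print(prob)                 # 1.0 for a pure eigenstate

# For a general normalized superposition the probability is less than one.
phi = (a5 + np.array([0.0, 1.0])) / np.sqrt(2)
print(abs(np.vdot(a5, phi)) ** 2)   # ~0.5
```

For real (as opposed to complex) amplitudes the conjugation in `np.vdot` is a no-op, but it is the right habit: the amplitude is complex in general, and only its squared magnitude is a probability.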
We call this the collapse of the wave function. In a future course (QM interpretations) the language used and the interpretations associated with this language can be discussed. Q: Express Hermitian operator $A$ in terms of its eigenvectors. Q: The above question is vague because \begin{itemize} \item (a) The eigenvectors may form a discrete set. \item (b) The eigenvectors may form a continuous set. \item (c) The eigenvectors may not form a complete set. \item (d) The eigenvectors are not given. \end{itemize} A: None of the above. A Hermitian operator is guaranteed to have a complete set of eigenvectors. The spectrum may also be partly discrete and partly continuous (example: the complete spin wave function). discrete: \begin{aligned}A &= A \mathbf{1} \\ &= A \left( \sum_n {\lvert {a_n} \rangle} {\langle {a_n} \rvert} \right) \\ &= \sum_n (A {\lvert {a_n} \rangle} ){\langle {a_n} \rvert} \\ &= \sum_n (a_n {\lvert {a_n} \rangle}) {\langle {a_n} \rvert} \\ &= \sum_n a_n {\lvert {a_n} \rangle} {\langle {a_n} \rvert}\end{aligned} continuous: \begin{aligned}A &= A \mathbf{1} \\ &= A \left( \int d\alpha {\lvert {\alpha} \rangle} {\langle {\alpha} \rvert} \right) \\ &= \int d\alpha (A {\lvert {\alpha} \rangle} ){\langle {\alpha} \rvert} \\ &= \int d\alpha (\alpha {\lvert {\alpha} \rangle}) {\langle {\alpha} \rvert} \\ &= \int d\alpha \alpha {\lvert {\alpha} \rangle} {\langle {\alpha} \rvert}\end{aligned} An example is the position eigenstate ${\lvert {x} \rangle}$, eigenstate of the Hermitian operator $X$. Here $\alpha$ is a continuous label, with integration replacing the discrete summation.
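The discrete resolution $A = \sum_n a_n {\lvert {a_n} \rangle} {\langle {a_n} \rvert}$ can be verified numerically for any finite Hermitian matrix. A small sketch with numpy (the example matrix is an arbitrary assumption):

```python
import numpy as np

# An arbitrary 2x2 Hermitian matrix standing in for A.
A = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])

# eigh returns real eigenvalues a_n and orthonormal eigenvectors |a_n> (columns of V).
a, V = np.linalg.eigh(A)

# Rebuild A from its spectral decomposition: sum_n a_n |a_n><a_n|.
A_rebuilt = sum(a[n] * np.outer(V[:, n], V[:, n].conj()) for n in range(len(a)))

print(np.allclose(A, A_rebuilt))   # True
```

Each term `np.outer(V[:, n], V[:, n].conj())` is the matrix representation of the outer product ${\lvert {a_n} \rangle} {\langle {a_n} \rvert}$.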
general case with both discrete and continuous: \begin{aligned}A &= A \mathbf{1} \\ &= A \left( \sum_n {\lvert {a_n} \rangle} {\langle {a_n} \rvert} + \int d\alpha {\lvert {\alpha} \rangle} {\langle {\alpha} \rvert} \right) \\ &= \sum_n \left(A {\lvert {a_n} \rangle} \right){\langle {a_n} \rvert} + \int d\alpha \left(A {\lvert {\alpha} \rangle} \right){\langle {\alpha} \rvert} \\ &= \sum_n \left(a_n {\lvert {a_n} \rangle}\right) {\langle {a_n} \rvert} + \int d\alpha \left(\alpha {\lvert {\alpha} \rangle}\right) {\langle {\alpha} \rvert} \\ &= \sum_n a_n {\lvert {a_n} \rangle} {\langle {a_n} \rvert} + \int d\alpha \alpha {\lvert {\alpha} \rangle} {\langle {\alpha} \rvert}\end{aligned} Problem Solving \begin{itemize} \item MODEL — Quantum, linear vector space \item VISUALIZE — Operators can have discrete, continuous or both discrete and continuous eigenvectors. \item SOLVE — Use the identity operator. \item CHECK — Does the above expression give $A {\lvert {a_n} \rangle} = a_n {\lvert {a_n} \rangle}$? \end{itemize} Check \begin{aligned}A {\lvert {a_m} \rangle}&= \sum_n a_n {\lvert {a_n} \rangle} \left\langle{{a_n}} \vert {{a_m}}\right\rangle + \int d\alpha \alpha {\lvert {\alpha} \rangle} \left\langle{{\alpha}} \vert {{a_m}}\right\rangle \\ &= \sum_n a_n {\lvert {a_n} \rangle} \delta_{nm} \\ &= a_m {\lvert {a_m} \rangle}\end{aligned} What remains to be shown, used above, is that the continuous and discrete eigenvectors are mutually orthonormal, so that the continuous term vanishes. He has an example vector space, not yet discussed. Q: what is ${\langle {\Psi_1} \rvert} A {\lvert {\Psi_1} \rangle}$, where $A$ is a Hermitian operator, and ${\lvert {\Psi_1} \rangle}$ is a general state? A: ${\langle {\Psi_1} \rvert} A {\lvert {\Psi_1} \rangle} =$ the average outcome for many measurements of the physical quantity associated with $A$, when the system is prepared in state ${\lvert {\Psi_1} \rangle}$ prior to each measurement. Q: What if the preparation is ${\lvert {\Psi_2} \rangle}$?
This isn’t necessarily an eigenstate of $A$; it is some linear combination of eigenstates, a general state. A: ${\langle {\Psi_2} \rvert} A {\lvert {\Psi_2} \rangle} =$ the average of the physical quantity associated with $A$, but with the preparation ${\lvert {\Psi_2} \rangle}$, not ${\lvert {\Psi_1} \rangle}$. Q: What if our initial state is a little bit of ${\lvert {\Psi_1} \rangle}$, a little bit of ${\lvert {\Psi_2} \rangle}$, and a little bit of ${\lvert {\Psi_N} \rangle}$? That is, how do we describe what comes out of the oven in the SG experiment? That spin is a statistical mixture; we can understand it only as a statistical mix. This is a physically relevant problem. A: To describe that statistical situation we have the following. \begin{aligned}\left\langle{{A}}\right\rangle_{\text{average}} = \sum_j w_j {\langle {\Psi_j} \rvert} A {\lvert {\Psi_j} \rangle}\end{aligned} \hspace{\stretch{1}}(4.76) We sum up all the expectation values modified by statistical weighting factors. These $w_j$‘s are statistical weighting factors for a preparation associated with ${\lvert {\Psi_j} \rangle}$: real numbers that sum to unity. Note that these states ${\lvert {\Psi_j} \rangle}$ are not necessarily orthonormal.
With insertion of the identity operator we have \begin{aligned}\left\langle{{A}}\right\rangle_{\text{average}}&= \sum_j w_j {\langle {\Psi_j} \rvert} \mathbf{1} A {\lvert {\Psi_j} \rangle} \\ &= \sum_j w_j {\langle {\Psi_j} \rvert} \left( \sum_n {\lvert {a_n} \rangle} {\langle {a_n} \rvert} \right) A {\lvert {\Psi_j} \rangle} \\ &= \sum_j \sum_n w_j \left\langle{{\Psi_j}} \vert {{a_n}}\right\rangle {\langle {a_n} \rvert} A {\lvert {\Psi_j} \rangle} \\ &= \sum_j \sum_n w_j {\langle {a_n} \rvert} A {\lvert {\Psi_j} \rangle} \left\langle{{\Psi_j}} \vert {{a_n}}\right\rangle \\ &= \sum_n {\langle {a_n} \rvert} A \left( \sum_j w_j {\lvert {\Psi_j} \rangle} {\langle {\Psi_j} \rvert} \right) {\lvert {a_n} \rangle} \\ \end{aligned} This inner bit is called the density operator $\rho$ \begin{aligned}\rho &\equiv \sum_j w_j {\lvert {\Psi_j} \rangle} {\langle {\Psi_j} \rvert}\end{aligned} \hspace{\stretch{1}}(4.77) Returning to the average we have \begin{aligned}\left\langle{{A}}\right\rangle_{\text{average}} = \sum_n {\langle {a_n} \rvert} A \rho {\lvert {a_n} \rangle} \equiv \text{Tr}(A \rho)\end{aligned} \hspace{\stretch{1}}(4.78) The trace of an operator $A$ is \begin{aligned}\text{Tr}(A) = \sum_j {\langle {a_j} \rvert} A {\lvert {a_j} \rangle} = \sum_j A_{jj}\end{aligned} \hspace{\stretch{1}}(4.79) ## Section 5.9, Projection operator. Returning to the last lecture. From chapter 1, we have \begin{aligned}P_n = {\lvert {a_n} \rangle} {\langle {a_n} \rvert}\end{aligned} \hspace{\stretch{1}}(4.80) is called the projection operator. This is physically relevant. This takes a general state and gives you the component of that state associated with that eigenvector.
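Before going further with projectors, the trace formula $\left\langle{{A}}\right\rangle_{\text{average}} = \text{Tr}(A \rho)$ derived above lends itself to a quick numerical check. A sketch with numpy, using hypothetical weights and (non-orthogonal) preparation states:

```python
import numpy as np

# A Hermitian observable A (arbitrary example values).
A = np.array([[1.0, 0.5],
              [0.5, -1.0]])

# Two normalized, not necessarily orthogonal, preparation states.
psi1 = np.array([1.0, 0.0])
psi2 = np.array([1.0, 1.0]) / np.sqrt(2)

# Statistical weights, real and summing to unity.
w = [0.25, 0.75]

# Direct weighted average of the expectation values <Psi_j|A|Psi_j>.
avg_direct = sum(wj * np.vdot(p, A @ p).real for wj, p in zip(w, [psi1, psi2]))

# Density operator rho = sum_j w_j |Psi_j><Psi_j|, then <A> = Tr(A rho).
rho = sum(wj * np.outer(p, p.conj()) for wj, p in zip(w, [psi1, psi2]))
avg_trace = np.trace(A @ rho).real

print(np.isclose(avg_direct, avg_trace))   # True
```

The two computations agree, illustrating that the density operator packages the whole statistical preparation into a single operator.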
Observe \begin{aligned}P_n {\lvert {\phi} \rangle} ={\lvert {a_n} \rangle} \left\langle{{a_n}} \vert {{\phi}}\right\rangle =\underbrace{\left\langle{{a_n}} \vert {{\phi}}\right\rangle}_{\text{coefficient}} {\lvert {a_n} \rangle}\end{aligned} \hspace{\stretch{1}}(4.81) Example: Projection operator for the ${\lvert {z+} \rangle}$ state \begin{aligned}P_{z+} = {\lvert {z+} \rangle} {\langle {z+} \rvert}\end{aligned} \hspace{\stretch{1}}(4.82) We see that the density operator \begin{aligned}\rho &\equiv \sum_j w_j {\lvert {\Psi_j} \rangle} {\langle {\Psi_j} \rvert},\end{aligned} \hspace{\stretch{1}}(4.83) can be written in terms of projection operators \begin{aligned}{\lvert {\Psi_j} \rangle} {\langle {\Psi_j} \rvert} = \text{Projection operator for state } {\lvert {\Psi_j} \rangle}\end{aligned} The projection operator is like a dot product, determining the quantity of a state that lies in the direction of another state. Q: What is the projection operator for spin-up along the z-direction? A: \begin{aligned}P_{z+} = {\lvert {z+} \rangle}{\langle {z+} \rvert}\end{aligned} \hspace{\stretch{1}}(4.84) Or in matrix form with \begin{aligned}{\lvert {z+} \rangle} &\leftrightarrow \begin{bmatrix}1 \\ 0\end{bmatrix} \\ {\lvert {z-} \rangle} &\leftrightarrow \begin{bmatrix}0 \\ 1\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(4.85) so \begin{aligned}P_{z+} = {\lvert {z+} \rangle}{\langle {z+} \rvert} =\begin{bmatrix}1 \\ 0\end{bmatrix}\begin{bmatrix}1 & 0\end{bmatrix}=\begin{bmatrix}1 & 0 \\ 0 & 0\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(4.87) Q: A harder problem. What is $P_\chi$, where \begin{aligned}{\lvert {\chi} \rangle} =\begin{bmatrix}c_1 \\ c_2\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(4.88) Note: We want normalized states, with $\left\langle{{\chi}} \vert {{\chi}}\right\rangle = {\left\lvert{c_1}\right\rvert}^2 + {\left\lvert{c_2}\right\rvert}^2 = 1$.
A: \begin{aligned}P_{\chi} = {\lvert {\chi} \rangle}{\langle {\chi} \rvert} =\begin{bmatrix}c_1 \\ c_2\end{bmatrix}\begin{bmatrix}c_1^{*} & c_2^{*}\end{bmatrix}=\begin{bmatrix}c_1 c_1^{*} & c_1 c_2^{*} \\ c_2 c_1^{*} & c_2 c_2^{*}\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(4.89) Observe that this has the defining property of a projection operator: its square is itself \begin{aligned}({\lvert {\chi} \rangle}{\langle {\chi} \rvert}) ({\lvert {\chi} \rangle}{\langle {\chi} \rvert})&= {\lvert {\chi} \rangle} (\left\langle{{\chi}} \vert {{\chi}}\right\rangle ){\langle {\chi} \rvert} \\ &= {\lvert {\chi} \rangle} {\langle {\chi} \rvert}\end{aligned} Q: Show that $P_{\chi} = a_0 \mathbf{1} + \mathbf{a} \cdot \boldsymbol{\sigma}$, where $\mathbf{a} = (a_x, a_y, a_z)$ and $\boldsymbol{\sigma} = (\sigma_x, \sigma_y, \sigma_z)$. A: See Section 5.9. Note the following about computing $(\boldsymbol{\sigma} \cdot \mathbf{a})^2$. \begin{aligned}(\boldsymbol{\sigma} \cdot \mathbf{a})^2&=(a_x \sigma_x+ a_y \sigma_y+ a_z \sigma_z)(a_x \sigma_x+ a_y \sigma_y+ a_z \sigma_z) \\ &=a_x a_x \sigma_x \sigma_x+a_x a_y \sigma_x \sigma_y+a_x a_z \sigma_x \sigma_z+a_y a_x \sigma_y \sigma_x+a_y a_y \sigma_y \sigma_y+a_y a_z \sigma_y \sigma_z+a_z a_x \sigma_z \sigma_x+a_z a_y \sigma_z \sigma_y+a_z a_z \sigma_z \sigma_z \\ &= (a_x^2 + a_y^2 + a_z^2) I+ a_x a_y ( \sigma_x \sigma_y + \sigma_y \sigma_x)+ a_y a_z ( \sigma_y \sigma_z + \sigma_z \sigma_y)+ a_z a_x ( \sigma_z \sigma_x + \sigma_x \sigma_z) \\ &= {\left\lvert{\mathbf{a}}\right\rvert}^2 I\end{aligned} So we have \begin{aligned}(\boldsymbol{\sigma} \cdot \mathbf{a})^2 = (\mathbf{a} \cdot \mathbf{a}) \mathbf{1} \equiv \mathbf{a}^2\end{aligned} \hspace{\stretch{1}}(4.90) Where the matrix representations \begin{aligned}\sigma_x &\leftrightarrow \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix} \\ \sigma_y &\leftrightarrow \begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix} \\ \sigma_z &\leftrightarrow \begin{bmatrix} 1 & 0 \\ 0 & -1 \\
\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(4.91) would be used to show that \begin{aligned}\sigma_x^2 = \sigma_y^2 = \sigma_z^2 = I\end{aligned} \hspace{\stretch{1}}(4.94) and \begin{aligned}\sigma_x \sigma_y &= -\sigma_y \sigma_x \\ \sigma_y \sigma_z &= -\sigma_z \sigma_y \\ \sigma_z \sigma_x &= -\sigma_x \sigma_z\end{aligned} \hspace{\stretch{1}}(4.95) ## New version of my Geometric Algebra notes compilation posted. Posted by peeterjoot on October 31, 2010 New versions of my Geometric Algebra Notes and my miscellaneous non-Geometric Algebra physics notes are now posted. Changes since the last posting likely include the incorporation of the following individual notes: Oct 30, 2010 Multivector commutators and Lorentz boosts. Use of the commutator and anticommutator to find the components of a multivector that are affected by a Lorentz boost. Utilize this to boost the electrodynamic field bivector, and show how the introduction of a small velocity perpendicular to an electrostatic field results in a specific magnetic field. ie. consider the magnetic field seen by the electron as it orbits a proton. Oct 23, 2010 PHY356 Problem Set II. A couple more problems from my QM1 course. Oct 22, 2010 Classical Electrodynamic gauge interaction. Momentum and Energy transformation to derive the Lorentz force law from a free particle Hamiltonian. Oct 20, 2010 Derivation of the spherical polar Laplacian A derivation of the spherical polar Laplacian. Oct 10, 2010 Notes and problems for Desai chapter IV. Chapter IV Notes and problems for Desai’s “Quantum Mechanics and Introductory Field Theory” text. Oct 7, 2010 PHY356 Problem Set I. A couple problems from my QM1 course. Oct 1, 2010 Notes and problems for Desai chapter III. Chapter III Notes and problems for Desai’s “Quantum Mechanics and Introductory Field Theory” text. Sept 27, 2010 Unitary exponential sandwich Unitary transformation using anticommutators. Sept 19, 2010 Desai Chapter II notes and problems.
Chapter II Notes and problems for Desai’s “Quantum Mechanics and Introductory Field Theory” text. July 27, 2010 Rotations using matrix exponentials Calculating the exponential form for a unitary operator. A unitary operator can be expressed as the exponential of a Hermitian operator. Show how this can be calculated for the matrix representation of an operator. Explicitly calculating this matrix for a plane rotation yields one of the Pauli spin matrices. While not unitary, the same procedure can be used to calculate such a rotation-like angle for a Lorentz boost, and we also find that the result can be expressed in terms of one of the Pauli spin matrices. July 23, 2010 Dirac Notation Ponderings. Chapter 1 solutions and some associated notes. June 25, 2010 More problems from Liboff chapter 4 Liboff problems 4.11, 4.12, 4.14 June 19, 2010 Hoop and spring oscillator problem. A linear approximation to a hoop and spring problem. May 31, 2010 Infinite square well wavefunction. A QM problem from Liboff chapter 4. May 30, 2010 On commutation of exponentials Show that commutation of exponentials occurs if the exponentiated terms also commute. May 29, 2010 Fourier transformation of the Pauli QED wave equation (Take I). Unsuccessful attempt to find a solution to the Pauli QM Hamiltonian using Fourier transforms. Also try to figure out the notation from the Feynman book where I saw this. May 28, 2010 Errata for Feynman’s Quantum Electrodynamics (Addison-Wesley)? My collection of errata notes for some Feynman lecture notes on QED compiled by a student. May 23, 2010 Effect of sinusoid operators Liboff, problem 3.19. May 23, 2010 Time evolution of some wave functions Liboff, problem 3.14. May 15, 2010 Center of mass of a toroidal segment. Calculate the volume element for a toroidal segment, and then the center of mass. This is a nice application of bivector rotation exponentials. Mar 7, 2010 Newton’s method for intersection of curves in a plane. Refresh my memory on Newton’s method.
Then take the same idea and apply it to finding the intersection of two arbitrary curves in a plane. This provides a nice example for the use of the wedge product in linear system solutions. Curiously, the more general result for the iteration of an intersection estimate is tidier and prettier than that of a curve with a line. Mar 3, 2010 Notes on Goldstein’s Routh’s procedure. Puzzle through Routh’s procedure as outlined in Goldstein. Feb 19, 2010 1D forced harmonic oscillator. Quick solution of the non-homogeneous problem. Solve the one dimensional harmonic oscillator problem using matrix methods. Jan 1, 2010 Integrating the equation of motion for a one dimensional problem. Solve for time for an arbitrary one dimensional potential. Dec 21, 2009 Energy and momentum for assumed Fourier transform solutions to the homogeneous Maxwell equation. Fourier transform instead of series treatment of the previous, determining the Hamiltonian-like energy expression for a wave packet. Dec 16, 2009 Electrodynamic field energy for vacuum. Apply the previous complex energy momentum tensor results to the calculation that Bohm does in his QM book for the vacuum energy of a periodic electromagnetic field. I’d tried to do this a couple of times using complex exponentials and never really gotten it right, because of attempting to use the pseudoscalar as the imaginary for the phasors instead of introducing a completely separate commuting imaginary. The end result is an energy expression for the volume element that has the structure of a mechanical Hamiltonian. Dec 13, 2009 Energy and momentum for Complex electric and magnetic field phasors. Work out the conservation equations for the energy and Poynting vectors in a complex representation. This fills in some gaps in Jackson, but tackles the problem from a GA starting point. Dec 6, 2009 Jacobians and spherical polar gradient. Dec 1, 2009 Polar form for the gradient and Laplacian.
Explore a chain rule derivation of the polar form of the Laplacian, and the validity of my old first year Professor’s statements about the divergence of the gradient being the only way to express the general Laplacian. His insistence that grad dot grad is not generally valid is reconciled here with reality, and the key is that the action on the unit vectors also has to be considered. Nov 15, 2009 Force free relativistic motion. Nov 11, 2009 question on elliptic function paper. Nov 4, 2009 Spherical polar pendulum for one and multiple masses (Take II) The constraints required to derive the equations of motion from a bivector parameterized Lagrangian for the multiple spherical pendulum make the problem considerably more complicated than would be the case with a plain scalar parameterization. Take the previous multiple spherical pendulum and rework it with only scalar spherical polar angles. I later rework this once more, removing all the geometric algebra, which simplifies it further. Oct 27, 2009 Spherical polar pendulum for one and multiple masses, and multivector Euler-Lagrange formulation. Derive the multivector Euler-Lagrange relationships. These were given in Doran/Lasenby but I did not understand it there. Apply these to the multiple spherical pendulum with the Lagrangian expressed in terms of a bivector angle containing all the phi dependence and a scalar polar angle. Sept 26, 2009 Hamiltonian notes. Sept 24, 2009 Electromagnetic Gauge invariance. Show the gauge invariance of the Lorentz force equations. Start with the four vector representation since these transformation relations are simpler there, and then show the invariance in the explicit space and time representation. Sept 22, 2009 Lorentz force from Lagrangian (non-covariant) Show that the non-covariant Lagrangian from Jackson does produce the Lorentz force law (an exercise for the reader). Sept 20, 2009 Spherical Polar unit vectors in exponential form.
An exponential representation of spherical polar unit vectors. This was observed when considering the bivector form of the angular momentum operator, and is reiterated here independent of any quantum mechanical context. Sept 13, 2009 Relativistic classical proton electron interaction. An attempt to set up (but not yet solve) the equations for relativistically correct proton electron interaction. Sept 10, 2009 Decoding the Merced Florez article. Sept 6, 2009 bivectorSelectWrong Sept 6, 2009 Bivector grades of the squared angular momentum operator. The squared angular momentum operator can potentially have scalar, bivector, and (four) pseudoscalar components (depending on the dimension of the space). Here just the bivector grades of that product are calculated. With this the complete factorization of the Laplacian can be obtained. Sept 5, 2009 Maxwell Lagrangian, rotation of coordinates. Sept 4, 2009 Translation and rotation Noether field currents. Review Lagrangian field concepts. Derive the field versions of the Euler-Lagrange equations. Calculate the conserved current and conservation law, a divergence, for a Lagrangian with a single parameter symmetry (such as rotation or boost by a scalar angle or rapidity). Next, spacetime symmetries are considered, starting with the question of the symmetry’s existence, then a calculation of the canonical energy momentum tensor and its associated divergence relation. Next an attempt to use a similar procedure to calculate a conserved current for an incremental spacetime translation. A divergence relation is found, but it is not a conservation relationship, having a nonzero difference of energy momentum tensors. Aug 31, 2009 Generator of rotations in arbitrary dimensions. Similar to the exponential translation operator, the exponential operator that generates rotations is derived. Geometric Algebra is used (with an attempt to make this somewhat understandable without a lot of GA background).
Explicit coordinate expansion is also covered, as well as a comparison to how the same derivation technique could be done with matrix only methods. The results obtained apply to Euclidean and other metrics, and also to all dimensions, both 2D and greater or equal to 3D (unlike the cross product form). Aug 20, 2009 Introduction to Geometric Algebra. Aug 16, 2009 Graphical representation of Spherical Harmonics for $l = 1$ Observations that the first set of spherical harmonic associated Legendre eigenfunctions have a natural representation as projections from rotated spherical polar rotation points. Aug 14, 2009 (INCOMPLETE) Geometry of Maxwell radiation solutions After having some trouble with pseudoscalar phasor representations of the wave equation, step back and examine the geometry that these require. Find that the use of $I \hat{\mathbf{z}}$ for the imaginary means that only transverse solutions can be encoded.

Aug 11, 2009 Dot product of vector and bivector

Aug 11, 2009 Dot product of vector and blade

Aug 10, 2009 Covariant Maxwell equation in media
Formulate the Maxwell equation in media (from Jackson) without an explicit spacetime split.