# Peeter Joot's (OLD) Blog.

# Posts Tagged ‘spin’

## Final version of my phy452.pdf notes posted

Posted by peeterjoot on September 5, 2013

I’d intended to rework the exam problems over the summer and make that the last update to my stat mech notes. However, I ended up studying world events and some other non-mainstream ideas intensively over the summer, and never got around to that final update.

Since I’m starting a new course (condensed matter) soon, I’ll end up having to focus on that, and have now posted a final version of my notes as is.

September 05, 2013 Large volume fermi gas density

April 30, 2013 Ultra relativistic spin zero condensation temperature

April 24, 2013 Low temperature Fermi gas chemical potential

## Summary of statistical mechanics relations and helpful formulas (cheat sheet fodder)

Posted by peeterjoot on April 29, 2013

[Click here for a PDF of this post with nicer formatting (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Central limit theorem

If $\left\langle{{x}}\right\rangle = \mu$ and $\sigma^2 = \left\langle{{x^2}}\right\rangle - \left\langle{{x}}\right\rangle^2$, and $X = \sum x$, then in the limit

\begin{aligned}\lim_{N \rightarrow \infty} P(X)= \frac{1}{{\sigma \sqrt{2 \pi N}}} \exp\left( - \frac{ (X - N \mu)^2}{2 N \sigma^2} \right)\end{aligned} \hspace{\stretch{1}}(1.0.1a)

\begin{aligned}\left\langle{{X}}\right\rangle = N \mu\end{aligned} \hspace{\stretch{1}}(1.0.1b)

\begin{aligned}\left\langle{{X^2}}\right\rangle - \left\langle{{X}}\right\rangle^2 = N \sigma^2\end{aligned} \hspace{\stretch{1}}(1.0.1c)
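These moments are easy to spot-check numerically. A minimal sketch, summing uniform variates ($\mu = 1/2$, $\sigma^2 = 1/12$); the sample sizes below are illustrative only:

```python
import random
import statistics

# Sums of N uniform(0, 1) variates: mu = 1/2, sigma^2 = 1/12.
# Check <X> ~ N mu and var(X) ~ N sigma^2 for many sampled sums.
random.seed(1)
N, trials = 100, 20000

sums = [sum(random.random() for _ in range(N)) for _ in range(trials)]
mean_X = statistics.fmean(sums)
var_X = statistics.pvariance(sums)
```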

Binomial distribution

\begin{aligned}P_N(X) = \left\{\begin{array}{l l}\left(\frac{1}{{2}}\right)^N \frac{N!}{\left(\frac{N-X}{2}\right)!\left(\frac{N+X}{2}\right)!}& \quad \mbox{if X and N have same parity} \\ 0 & \quad \mbox{otherwise} \end{array},\right.\end{aligned} \hspace{\stretch{1}}(1.0.2)

where $X$ is, for example, the number of heads minus the number of tails in $N$ coin tosses.
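The same-parity expression above is just a recast binomial coefficient, $P_N(X) = \binom{N}{(N+X)/2}/2^N$, which can be checked directly; `P` below is a hypothetical helper name:

```python
from math import comb

# P_N(X) for X = (#heads - #tails) in N fair flips; nonzero only
# when X and N share parity and |X| <= N.
def P(N, X):
    if abs(X) > N or (N - X) % 2 != 0:
        return 0.0
    return comb(N, (N + X) // 2) / 2.0**N

N = 8
total = sum(P(N, X) for X in range(-N, N + 1))  # should be 1
```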

Generating function

Given the Fourier transform of a probability distribution $\tilde{P}(k)$ we have

\begin{aligned}{\left.{{ \frac{\partial^n}{\partial k^n} \tilde{P}(k) }}\right\vert}_{{k = 0}}= (-i)^n \left\langle{{x^n}}\right\rangle\end{aligned} \hspace{\stretch{1}}(1.0.3)

Handy mathematics

\begin{aligned}\ln( 1 + x ) = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \cdots\end{aligned} \hspace{\stretch{1}}(1.0.4)

\begin{aligned}N! \approx \sqrt{ 2 \pi N} N^N e^{-N}\end{aligned} \hspace{\stretch{1}}(1.0.5)

\begin{aligned}\ln N! \approx \frac{1}{{2}} \ln 2 \pi -N + \left( N + \frac{1}{{2}} \right)\ln N \approx N \ln N - N\end{aligned} \hspace{\stretch{1}}(1.0.6)
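Both Stirling forms are easy to compare against $\ln N!$ computed with `lgamma`; a quick sketch at an arbitrary $N$:

```python
from math import lgamma, log, pi

# lgamma(N + 1) = ln N! is the reference value; compare both
# approximations quoted above at N = 50.
N = 50
exact = lgamma(N + 1)
full = 0.5 * log(2 * pi) - N + (N + 0.5) * log(N)   # full Stirling form
crude = N * log(N) - N                              # leading order only
```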

\begin{aligned}\text{erf}(z) = \frac{2}{\sqrt{\pi}} \int_0^z e^{-t^2} dt\end{aligned} \hspace{\stretch{1}}(1.0.7)

\begin{aligned}\Gamma(\alpha) = \int_0^\infty dy e^{-y} y^{\alpha - 1}\end{aligned} \hspace{\stretch{1}}(1.0.8)

\begin{aligned}\Gamma(\alpha + 1) = \alpha \Gamma(\alpha)\end{aligned} \hspace{\stretch{1}}(1.0.9)

\begin{aligned}\Gamma\left( 1/2 \right) = \sqrt{\pi}\end{aligned} \hspace{\stretch{1}}(1.0.10)

\begin{aligned}\zeta(s) = \sum_{k=1}^{\infty} k^{-s}\end{aligned} \hspace{\stretch{1}}(1.0.10)

\begin{aligned}\begin{aligned}\zeta(3/2) &\approx 2.61238 \\ \zeta(2) &\approx 1.64493 \\ \zeta(5/2) &\approx 1.34149 \\ \zeta(3) &\approx 1.20206\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.12)

\begin{aligned}\Gamma(z) \Gamma(1-z) = \frac{\pi}{\sin(\pi z)}\end{aligned} \hspace{\stretch{1}}(1.0.12)

\begin{aligned}P(x, t) = \int_{-\infty}^\infty \frac{dk}{2 \pi} \tilde{P}(k, t) \exp\left( i k x \right)\end{aligned} \hspace{\stretch{1}}(1.0.14a)

\begin{aligned}\tilde{P}(k, t) = \int_{-\infty}^\infty dx P(x, t) \exp\left( -i k x \right)\end{aligned} \hspace{\stretch{1}}(1.0.14b)

Heaviside theta

\begin{aligned}\Theta(x) = \left\{\begin{array}{l l}1 & \quad x \ge 0 \\ 0 & \quad x < 0\end{array}\right.\end{aligned} \hspace{\stretch{1}}(1.0.15a)

\begin{aligned}\frac{d\Theta}{dx} = \delta(x)\end{aligned} \hspace{\stretch{1}}(1.0.15b)

\begin{aligned}\sum_{m = -l}^l a^m=\frac{a^{l + 1/2} - a^{-(l+1/2)}}{a^{1/2} - a^{-1/2}}\end{aligned} \hspace{\stretch{1}}(1.0.16a)

\begin{aligned}\sum_{m = -l}^l e^{b m}=\frac{\sinh(b(l + 1/2))}{\sinh(b/2)}\end{aligned} \hspace{\stretch{1}}(1.0.16b)
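A quick numeric check of the hyperbolic closed form against the direct sum (integer $l$ assumed; the test points are arbitrary):

```python
from math import exp, sinh

# sum_{m=-l}^{l} e^{b m} versus sinh(b (l + 1/2)) / sinh(b / 2)
def direct(l, b):
    return sum(exp(b * m) for m in range(-l, l + 1))

def closed(l, b):
    return sinh(b * (l + 0.5)) / sinh(b / 2.0)

checks = [(direct(l, b), closed(l, b)) for l, b in [(1, 0.3), (5, 0.7), (10, 0.1)]]
```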

\begin{aligned}\int_{-\infty}^\infty q^{2 N} e^{-a q^2} dq=\frac{(2 N - 1)!!}{(2a)^N} \sqrt{\frac{\pi}{a}}\end{aligned} \hspace{\stretch{1}}(1.0.17a)

\begin{aligned}\int_{-\infty}^\infty e^{-a q^2} dq=\sqrt{\frac{\pi}{a}}\end{aligned} \hspace{\stretch{1}}(1.0.17b)

\begin{aligned}\binom{-\left\lvert {m} \right\rvert}{k} = (-1)^k \frac{\left\lvert {m} \right\rvert}{\left\lvert {m} \right\rvert + k} \binom{\left\lvert {m} \right\rvert+k}{\left\lvert {m} \right\rvert}\end{aligned} \hspace{\stretch{1}}(1.0.18)

\begin{aligned}\int_0^\infty d\epsilon \frac{\epsilon^3}{e^{\beta \epsilon} - 1} =\frac{\pi ^4}{15 \beta ^4},\end{aligned} \hspace{\stretch{1}}(1.0.19)
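After scaling $x = \beta \epsilon$, the Bose integral above reduces to $\int_0^\infty x^3/(e^x - 1)\, dx = \pi^4/15$, which a simple midpoint rule confirms:

```python
from math import exp, pi

# Midpoint rule for int_0^infty x^3 / (e^x - 1) dx; the integrand
# behaves like x^2 near 0 and dies off like x^3 e^{-x} at the tail.
def f(x):
    return x**3 / (exp(x) - 1.0)

n, upper = 100000, 50.0
h = upper / n
integral = h * sum(f((i + 0.5) * h) for i in range(n))
```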

volume of a sphere in $m$ dimensions

\begin{aligned}V_m= \frac{ \pi^{m/2} R^{m} }{ \Gamma\left( m/2 + 1 \right)}\end{aligned} \hspace{\stretch{1}}(1.0.20)
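As a sanity check, this gamma function form reproduces the familiar low dimensional volumes ($V_1 = 2R$, $V_2 = \pi R^2$, $V_3 = 4 \pi R^3/3$):

```python
from math import gamma, pi

# V_m = pi^{m/2} R^m / Gamma(m/2 + 1)
def V(m, R=1.0):
    return pi**(m / 2.0) * R**m / gamma(m / 2.0 + 1.0)
```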

area of ellipse

\begin{aligned}A = \pi a b\end{aligned} \hspace{\stretch{1}}(1.0.21)

Radius of gyration of a 3D polymer

With radius $a$, we have

\begin{aligned}r_N \approx a \sqrt{N}\end{aligned} \hspace{\stretch{1}}(1.0.22)

Velocity random walk

Find

\begin{aligned}\mathcal{P}_{N_{\mathrm{c}}}(\mathbf{v}) \propto e^{-\frac{(\mathbf{v} - \mathbf{v}_0)^2}{2 N_{\mathrm{c}}}}\end{aligned} \hspace{\stretch{1}}(1.0.23)

Random walk

1D Random walk

\begin{aligned}\mathcal{P}( x, t ) = \frac{1}{{2}} \mathcal{P}(x + \delta x, t - \delta t)+\frac{1}{{2}} \mathcal{P}(x - \delta x, t - \delta t)\end{aligned} \hspace{\stretch{1}}(1.0.24)

\begin{aligned}\frac{\partial {\mathcal{P}}}{\partial {t}}(x, t) =\frac{1}{{2}} \frac{(\delta x)^2}{\delta t}\frac{\partial^2 {{\mathcal{P}}}}{\partial {{x}}^2}(x, t) = D \frac{\partial^2 {{\mathcal{P}}}}{\partial {{x}}^2}(x, t) = -\frac{\partial {J}}{\partial {x}},\end{aligned} \hspace{\stretch{1}}(1.0.25)

The probability current is related to the density gradient by Fick’s law

\begin{aligned}J = -D \frac{\partial {\mathcal{P}}}{\partial {x}},\end{aligned} \hspace{\stretch{1}}(1.0.26)

with which we can cast the probability diffusion identity into a continuity equation form

\begin{aligned}\frac{\partial {\mathcal{P}}}{\partial {t}} + \frac{\partial {J}}{\partial {x}} = 0 \end{aligned} \hspace{\stretch{1}}(1.0.27)
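The diffusion equation above implies $\left\langle{{x^2}}\right\rangle = 2 D t$ for a walk started at the origin, with $D = (\delta x)^2/(2 \delta t)$; a simulation sketch (step sizes and counts are illustrative):

```python
import random
import statistics

# Discrete walk: steps of +-dx every dt, so D = dx^2 / (2 dt) and the
# variance of the endpoint should grow as 2 D t.
random.seed(2)
dx, dt, steps, walkers = 1.0, 1.0, 400, 2000
D = dx * dx / (2.0 * dt)
t = steps * dt

finals = []
for _ in range(walkers):
    x = 0.0
    for _ in range(steps):
        x += dx if random.random() < 0.5 else -dx
    finals.append(x)

var_x = statistics.pvariance(finals)   # expect ~ 2 D t = 400
```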

In 3D (with the Maxwell distribution frictional term), this takes the form

\begin{aligned}\mathbf{j} = -D \boldsymbol{\nabla}_\mathbf{v} c(\mathbf{v}, t) - \eta \mathbf{v} c(\mathbf{v}, t)\end{aligned} \hspace{\stretch{1}}(1.0.28a)

\begin{aligned}\frac{\partial {}}{\partial {t}} c(\mathbf{v}, t) + \boldsymbol{\nabla}_\mathbf{v} \cdot \mathbf{j}(\mathbf{v}, t) = 0\end{aligned} \hspace{\stretch{1}}(1.0.28b)

Maxwell distribution

Add a frictional term to the velocity space diffusion current

\begin{aligned}j_v = -D \frac{\partial {c}}{\partial {v}}(v, t) - \eta v c(v).\end{aligned} \hspace{\stretch{1}}(1.0.29)

For steady state, the continuity equation $0 = \frac{dc}{dt} = -\frac{\partial {j_v}}{\partial {v}}$ leads to

\begin{aligned}c(v) \propto \exp\left(- \frac{\eta v^2}{2 D}\right).\end{aligned} \hspace{\stretch{1}}(1.0.30)

We also find

\begin{aligned}\left\langle{{v^2}}\right\rangle = \frac{D}{\eta},\end{aligned} \hspace{\stretch{1}}(1.0.31)

and identify

\begin{aligned}\frac{1}{{2}} m \left\langle{{\mathbf{v}^2}}\right\rangle = \frac{1}{{2}} m \left( \frac{D}{\eta} \right) = \frac{1}{{2}} k_{\mathrm{B}} T\end{aligned} \hspace{\stretch{1}}(1.0.32)

Hamilton’s equations

\begin{aligned}\frac{\partial {H}}{\partial {p}} = \dot{x}\end{aligned} \hspace{\stretch{1}}(1.0.33a)

\begin{aligned}\frac{\partial {H}}{\partial {x}} = -\dot{p}\end{aligned} \hspace{\stretch{1}}(1.0.33b)

SHO

\begin{aligned}H = \frac{p^2}{2m} + \frac{1}{{2}} k x^2\end{aligned} \hspace{\stretch{1}}(1.0.34a)

\begin{aligned}\omega^2 = \frac{k}{m}\end{aligned} \hspace{\stretch{1}}(1.0.34b)

Quantum energy eigenvalues

\begin{aligned}E_n = \left( n + \frac{1}{{2}} \right) \hbar \omega\end{aligned} \hspace{\stretch{1}}(1.0.35)

Liouville’s theorem

\begin{aligned}\frac{d{{\rho}}}{dt} = \frac{\partial {\rho}}{\partial {t}} + \dot{x} \frac{\partial {\rho}}{\partial {x}} + \dot{p} \frac{\partial {\rho}}{\partial {p}}= \cdots = \frac{\partial {\rho}}{\partial {t}} + \frac{\partial {\left( \dot{x} \rho \right)}}{\partial {x}} + \frac{\partial {\left( \dot{p} \rho \right)}}{\partial {p}} = \frac{\partial {\rho}}{\partial {t}} + \boldsymbol{\nabla}_{x,p} \cdot (\rho \dot{x}, \rho \dot{p})= \frac{\partial {\rho}}{\partial {t}} + \boldsymbol{\nabla} \cdot \mathbf{J}= 0,\end{aligned} \hspace{\stretch{1}}(1.0.36)

Regardless of whether we have a steady state system, if we sit on a region of phase space volume, the probability density in that neighbourhood will be constant.

Ergodic

A system for which the trajectories sweep out all accessible phase space. Together with Liouville’s theorem, this lets us treat any given small phase space volume as equally probable to its time evolved image, and switch from time averaging to ensemble averaging.

Thermodynamics

\begin{aligned}dE = T dS - P dV + \mu dN\end{aligned} \hspace{\stretch{1}}(1.0.37a)

\begin{aligned}\frac{1}{{T}} = \left({\partial {S}}/{\partial {E}}\right)_{{N,V}}\end{aligned} \hspace{\stretch{1}}(1.0.37b)

\begin{aligned}\frac{P}{T} = \left({\partial {S}}/{\partial {V}}\right)_{{N,E}}\end{aligned} \hspace{\stretch{1}}(1.0.37c)

\begin{aligned}-\frac{\mu}{T} = \left({\partial {S}}/{\partial {N}}\right)_{{V,E}}\end{aligned} \hspace{\stretch{1}}(1.0.37d)

\begin{aligned}P = - \left({\partial {E}}/{\partial {V}}\right)_{{N,S}}= - \left({\partial {F}}/{\partial {V}}\right)_{{N,T}}\end{aligned} \hspace{\stretch{1}}(1.0.37e)

\begin{aligned}\mu = \left({\partial {E}}/{\partial {N}}\right)_{{V,S}} = \left({\partial {F}}/{\partial {N}}\right)_{{V,T}}\end{aligned} \hspace{\stretch{1}}(1.0.37f)

\begin{aligned}T = \left({\partial {E}}/{\partial {S}}\right)_{{N,V}}\end{aligned} \hspace{\stretch{1}}(1.0.37g)

\begin{aligned}F = E - TS\end{aligned} \hspace{\stretch{1}}(1.0.37h)

\begin{aligned}G = F + P V = E - T S + P V = \mu N\end{aligned} \hspace{\stretch{1}}(1.0.37i)

\begin{aligned}H = E + P V = G + T S\end{aligned} \hspace{\stretch{1}}(1.0.37j)

\begin{aligned}C_{\mathrm{V}} = T \left({\partial {S}}/{\partial {T}}\right)_{{N,V}} = \left({\partial {E}}/{\partial {T}}\right)_{{N,V}} = - T \left( \frac{\partial^2 {{F}}}{\partial {{T}}^2} \right)_{N,V}\end{aligned} \hspace{\stretch{1}}(1.0.37k)

\begin{aligned}C_{\mathrm{P}} = T \left({\partial {S}}/{\partial {T}}\right)_{{N,P}} = \left({\partial {H}}/{\partial {T}}\right)_{{N,P}}\end{aligned} \hspace{\stretch{1}}(1.0.37l)

\begin{aligned}\underbrace{dE}_{\text{Change in energy}}=\underbrace{d W}_{\text{work done on the system}}+\underbrace{d Q}_{\text{Heat supplied to the system}}\end{aligned} \hspace{\stretch{1}}(1.0.38)

Example (work on gas): $d W = -P dV$. Adiabatic: $d Q = 0$. Cyclic: $dE = 0$.

Microstates

\begin{aligned}\beta = \frac{1}{k_{\mathrm{B}} T}\end{aligned} \hspace{\stretch{1}}(1.0.39)

\begin{aligned}S = k_{\mathrm{B}} \ln \Omega \end{aligned} \hspace{\stretch{1}}(1.0.40)

\begin{aligned}\Omega(N, V, E) = \frac{1}{h^{3N} N!} \int_V d\mathbf{x}_1 \cdots d\mathbf{x}_N \int d\mathbf{p}_1 \cdots d\mathbf{p}_N \delta \left(E - \frac{\mathbf{p}_1^2}{2 m} \cdots - \frac{\mathbf{p}_N^2}{2 m}\right)=\frac{V^N}{h^{3N} N!}\int d\mathbf{p}_1 \cdots d\mathbf{p}_N \delta \left(E - \frac{\mathbf{p}_1^2}{2m} \cdots - \frac{\mathbf{p}_N^2}{2m}\right)\end{aligned} \hspace{\stretch{1}}(1.0.41)

\begin{aligned}\Omega = \frac{d\gamma}{dE}\end{aligned} \hspace{\stretch{1}}(1.0.42)

\begin{aligned}\gamma=\frac{V^N}{h^{3N} N!}\int d\mathbf{p}_1 \cdots d\mathbf{p}_N \Theta \left(E - \frac{\mathbf{p}_1^2}{2m} \cdots - \frac{\mathbf{p}_N^2}{2m}\right)\end{aligned} \hspace{\stretch{1}}(1.0.43)

quantum

\begin{aligned}\gamma = \sum_i \Theta(E - \epsilon_i)\end{aligned} \hspace{\stretch{1}}(1.0.44)

Ideal gas

\begin{aligned}\Omega = \frac{V^N}{N!} \frac{1}{{h^{3N}}} \frac{( 2 \pi m E)^{3 N/2 }}{E} \frac{1}{\Gamma( 3N/2 ) }\end{aligned} \hspace{\stretch{1}}(1.0.45)

\begin{aligned}S_{\mathrm{ideal}} = k_{\mathrm{B}} \left(N \ln \frac{V}{N} + \frac{3 N}{2} \ln \left( \frac{4 \pi m E }{3 N h^2} \right) + \frac{5 N}{2} \right)\end{aligned} \hspace{\stretch{1}}(1.0.46)
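One property worth verifying in this Sackur-Tetrode style expression is extensivity: doubling $(N, V, E)$ doubles $S$. A sketch with the illustrative unit choice $k_{\mathrm{B}} = m = h = 1$:

```python
from math import log, pi

# S/k_B = N ln(V/N) + (3N/2) ln(4 pi m E / (3 N h^2)) + 5N/2,
# written with k_B = m = h = 1 (an arbitrary unit convention).
def S(N, V, E):
    return N * log(V / N) + 1.5 * N * log(4 * pi * E / (3 * N)) + 2.5 * N

s1 = S(100.0, 1.0, 50.0)
s2 = S(200.0, 2.0, 100.0)   # all arguments doubled
```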

Quantum free particle in a box

\begin{aligned}\Psi_{n_1, n_2, n_3}(x, y, z) = \left( \frac{2}{L} \right)^{3/2} \sin\left( \frac{ n_1 \pi x}{L} \right)\sin\left( \frac{ n_2 \pi y}{L} \right)\sin\left( \frac{ n_3 \pi z}{L} \right)\end{aligned} \hspace{\stretch{1}}(1.0.47a)

\begin{aligned}\epsilon_{n_1, n_2, n_3} = \frac{h^2}{8 m L^2} \left( n_1^2 + n_2^2 + n_3^2 \right)\end{aligned} \hspace{\stretch{1}}(1.0.47b)

\begin{aligned}\epsilon_k = \frac{\hbar^2 k^2}{2m},\end{aligned} \hspace{\stretch{1}}(1.0.47c)

Spin

magnetization

\begin{aligned}\mu = \frac{\partial {F}}{\partial {B}}\end{aligned} \hspace{\stretch{1}}(1.0.48)

moment per particle

\begin{aligned}m = \mu/N\end{aligned} \hspace{\stretch{1}}(1.0.49)

spin matrices

\begin{aligned}\sigma_x = \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(1.0.50a)

\begin{aligned}\sigma_y = \begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(1.0.50b)

\begin{aligned}\sigma_z = \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(1.0.50c)

$l \ge 0, -l \le m \le l$

\begin{aligned}\mathbf{L}^2 {\left\lvert {lm} \right\rangle} = l(l+1)\hbar^2 {\left\lvert {lm} \right\rangle}\end{aligned} \hspace{\stretch{1}}(1.0.51a)

\begin{aligned}L_z {\left\lvert {l m} \right\rangle} = \hbar m {\left\lvert {l m} \right\rangle}\end{aligned} \hspace{\stretch{1}}(1.0.51b)

\begin{aligned}\mathbf{S}^2 {\left\lvert {S m} \right\rangle} = S(S + 1) \hbar^2 {\left\lvert {S m} \right\rangle}\end{aligned} \hspace{\stretch{1}}(1.0.51c)

Canonical ensemble

classical

\begin{aligned}\Omega(N, E) = \frac{ V }{ h^3 N} \int d\mathbf{p}_1 e^{\frac{S}{k_{\mathrm{B}}}(N, E)}e^{-\frac{1}{{k_{\mathrm{B}}}} \left( \frac{\partial {S}}{\partial {N}} \right)_{E, V} }e^{-\frac{\mathbf{p}_1^2}{2m k_{\mathrm{B}}}\left( \frac{\partial {S}}{\partial {E}} \right)_{N, V}}\end{aligned} \hspace{\stretch{1}}(1.0.53)

quantum

\begin{aligned}\Omega(E) \approx\sum_{m \in \text{subsystem}} e^{\frac{1}{{k_{\mathrm{B}}}} S(E)}e^{-\beta \mathcal{E}_m}\end{aligned} \hspace{\stretch{1}}(1.0.54a)

\begin{aligned}Z = \sum_m e^{-\beta \mathcal{E}_m} = \text{Tr} \left( e^{-\beta \hat{H}_{\text{subsystem}}} \right)\end{aligned} \hspace{\stretch{1}}(1.0.54b)

\begin{aligned}\left\langle{{E}}\right\rangle = \frac{\int He^{- \beta H }}{\int e^{- \beta H }}\end{aligned} \hspace{\stretch{1}}(1.0.55a)

\begin{aligned}\left\langle{{E^2}}\right\rangle = \frac{\int H^2e^{- \beta H }}{\int e^{- \beta H }}\end{aligned} \hspace{\stretch{1}}(1.0.55b)

\begin{aligned}Z \equiv \frac{1}{{h^{3N} N!}}\int e^{- \beta H }\end{aligned} \hspace{\stretch{1}}(1.0.55c)

\begin{aligned}\left\langle{{E}}\right\rangle = -\frac{1}{{Z}} \frac{\partial {Z}}{\partial {\beta}} = - \frac{\partial {\ln Z}}{\partial {\beta}} =\frac{\partial {(\beta F)}}{\partial {\beta}}\end{aligned} \hspace{\stretch{1}}(1.0.55d)

\begin{aligned}\sigma_{\mathrm{E}}^2= \left\langle{{E^2}}\right\rangle - \left\langle{{E}}\right\rangle^2 =\frac{\partial^2 {{\ln Z}}}{\partial {{\beta}}^2} = k_{\mathrm{B}} T^2 \frac{\partial {\left\langle{{E}}\right\rangle}}{\partial {T}}= k_{\mathrm{B}} T^2 C_{\mathrm{V}} \propto N\end{aligned} \hspace{\stretch{1}}(1.0.55e)

\begin{aligned}Z = e^{-\beta (\left\langle{{E}}\right\rangle - T S) } = e^{-\beta F}\end{aligned} \hspace{\stretch{1}}(1.0.55f)

\begin{aligned}F = \left\langle{{E}}\right\rangle - T S = -k_{\mathrm{B}} T \ln Z\end{aligned} \hspace{\stretch{1}}(1.0.55g)

Grand Canonical ensemble

\begin{aligned}S = - k_{\mathrm{B}} \sum_{r,s} P_{r,s} \ln P_{r,s}\end{aligned} \hspace{\stretch{1}}(1.0.56)

\begin{aligned}P_{r, s} = \frac{e^{-\alpha N_r - \beta E_s}}{Z_{\mathrm{G}}}\end{aligned} \hspace{\stretch{1}}(1.0.57a)

\begin{aligned}Z_{\mathrm{G}} = \sum_{r,s} e^{-\alpha N_r - \beta E_s} = \sum_{r,s} z^{N_r} e^{-\beta E_s} = \sum_{N_r} z^{N_r} Z_{N_r}\end{aligned} \hspace{\stretch{1}}(1.0.57b)

\begin{aligned}z = e^{-\alpha} = e^{\mu \beta}\end{aligned} \hspace{\stretch{1}}(1.0.57c)

\begin{aligned}q = \ln Z_{\mathrm{G}} = P V \beta\end{aligned} \hspace{\stretch{1}}(1.0.57d)

\begin{aligned}\left\langle{{H}}\right\rangle = -\left({\partial {q}}/{\partial {\beta}}\right)_{{z,V}} = k_{\mathrm{B}} T^2 \left({\partial {q}}/{\partial {\mu}}\right)_{{z,V}} = \sum_\epsilon \frac{\epsilon}{z^{-1} e^{\beta \epsilon} \pm 1}\end{aligned} \hspace{\stretch{1}}(1.0.57e)

\begin{aligned}\left\langle{{N}}\right\rangle = z \left({\partial {q}}/{\partial {z}}\right)_{{V,T}} = \sum_\epsilon \frac{1}{{z^{-1} e^{\beta\epsilon} \pm 1}}\end{aligned} \hspace{\stretch{1}}(1.0.57f)

\begin{aligned}F = - k_{\mathrm{B}} T \ln \frac{ Z_{\mathrm{G}} }{z^N}\end{aligned} \hspace{\stretch{1}}(1.0.57g)

\begin{aligned}\left\langle{{n_\epsilon}}\right\rangle = -\frac{1}{{\beta}} \left({\partial {q}}/{\partial {\epsilon}}\right)_{{z, T, \text{other} \epsilon}} = \frac{1}{{z^{-1} e^{\beta \epsilon} \pm 1}}\end{aligned} \hspace{\stretch{1}}(1.0.57h)

\begin{aligned}\text{var}(N) = \frac{1}{{\beta}} \left({\partial {\left\langle{{N}}\right\rangle}}/{\partial {\mu}}\right)_{{V, T}}, \qquad \text{var}(n_\epsilon) = - \frac{1}{{\beta}} \left({\partial {\left\langle{{n_\epsilon}}\right\rangle}}/{\partial {\epsilon}}\right)_{{z,T}} = \frac{z^{-1} e^{\beta \epsilon}}{\left( z^{-1} e^{\beta \epsilon} \pm 1 \right)^2}\end{aligned} \hspace{\stretch{1}}(1.0.57i)

\begin{aligned}\mathcal{P} \propto e^{\frac{\mu}{k_{\mathrm{B}} T} N_S}e^{-\frac{E_S}{k_{\mathrm{B}} T} }\end{aligned} \hspace{\stretch{1}}(1.0.59a)

\begin{aligned}Z_{\mathrm{G}}= \sum_{N=0}^\infty e^{\beta \mu N}\sum_{n_k, \sum n_m = N} e^{-\beta \sum_m n_m \epsilon_m}=\prod_{k} \left( \sum_{n_k} e^{-\beta(\epsilon_k - \mu) n_k} \right)\end{aligned} \hspace{\stretch{1}}(1.0.59b)

\begin{aligned}Z_{\mathrm{G}}^{\mathrm{QM}} = {\text{Tr}}_{\{\text{energy}, N\}} \left( e^{ -\beta (\hat{H} - \mu \hat{N} ) } \right)\end{aligned} \hspace{\stretch{1}}(1.0.59c)

\begin{aligned}P V = \frac{2}{3} U\end{aligned} \hspace{\stretch{1}}(1.0.60a)

\begin{aligned}f_\nu^\pm(z) = \frac{1}{{\Gamma(\nu)}} \int_0^\infty dx \frac{x^{\nu - 1}}{z^{-1} e^x \pm 1}\end{aligned} \hspace{\stretch{1}}(1.0.60b)

\begin{aligned}f_\nu^\pm(z \approx 0) =z\mp\frac{z^{2}}{2^\nu}+\frac{z^{3}}{3^\nu}\mp\frac{z^{4}}{4^\nu}+ \cdots \end{aligned} \hspace{\stretch{1}}(1.0.60c)
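The small $z$ expansion can be compared against direct quadrature of the defining integral; a sketch for fermions with $\nu = 3/2$ and an arbitrary small fugacity:

```python
from math import exp, gamma

# f_nu^+(z) = (1/Gamma(nu)) int_0^infty x^{nu-1} / (z^{-1} e^x + 1) dx,
# compared with the alternating series z - z^2/2^nu + z^3/3^nu - ...
nu, z = 1.5, 0.1

def integrand(x):
    return x**(nu - 1.0) / (exp(x) / z + 1.0)

n, upper = 100000, 40.0
h = upper / n
integral = (h / gamma(nu)) * sum(integrand((i + 0.5) * h) for i in range(n))

series = sum((-1)**(k + 1) * z**k / k**nu for k in range(1, 40))
```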

\begin{aligned}z \frac{d f_\nu^{\pm}(z) }{dz} = f_{\nu-1}^{\pm}(z)\end{aligned} \hspace{\stretch{1}}(1.0.61)

\begin{aligned}\frac{d f_{3/2}^{\pm}(z) }{dT} = -\frac{3}{2T} f_{3/2}^{\pm}(z)\end{aligned} \hspace{\stretch{1}}(1.0.62)

Fermions

\begin{aligned}\sum_{n_k = 0}^1 e^{-\beta(\epsilon_k - \mu) n_k}=1 + e^{-\beta(\epsilon_k - \mu)}\end{aligned} \hspace{\stretch{1}}(1.0.63)

\begin{aligned}N = (2 S + 1) V \int_0^{k_{\mathrm{F}}} \frac{4 \pi k^2 dk}{(2 \pi)^3}\end{aligned} \hspace{\stretch{1}}(1.0.64)

\begin{aligned}k_{\mathrm{F}} = \left( \frac{ 6 \pi^2 \rho }{2 S + 1} \right)^{1/3}\end{aligned} \hspace{\stretch{1}}(1.0.65a)

\begin{aligned}\epsilon_{\mathrm{F}} = \frac{\hbar^2}{2m} \left( \frac{6 \pi^2 \rho}{2 S + 1} \right)^{2/3}\end{aligned} \hspace{\stretch{1}}(1.0.65b)

\begin{aligned}\mu = \epsilon_{\mathrm{F}} - \frac{\pi^2}{12} \frac{(k_{\mathrm{B}} T)^2}{\epsilon_{\mathrm{F}}} + \cdots \end{aligned} \hspace{\stretch{1}}(1.0.65c)

\begin{aligned}\lambda \equiv \frac{h}{\sqrt{2 \pi m k_{\mathrm{B}} T}}\end{aligned} \hspace{\stretch{1}}(1.0.65d)

\begin{aligned}\frac{N}{V}=\frac{g}{\lambda^3} f_{3/2}(z)=\frac{g}{\lambda^3} \left( e^{\beta \mu} - \frac{e^{2 \beta \mu}}{2^{3/2}} + \cdots \right) \end{aligned} \hspace{\stretch{1}}(1.0.68)

(so $n = \frac{g}{\lambda^3} e^{\beta \mu}$ for large temperatures)

\begin{aligned}P \beta = \frac{g}{\lambda^3} f_{5/2}(z)\end{aligned} \hspace{\stretch{1}}(1.0.69a)

\begin{aligned}U= \frac{3}{2} N k_{\mathrm{B}} T \frac{f_{5/2}(z)}{f_{3/2}(z) }.\end{aligned} \hspace{\stretch{1}}(1.0.69b)

\begin{aligned}f_\nu^+(e^y) \approx\frac{y^\nu}{\Gamma(\nu + 1)}\left( 1 + 2 \nu \sum_{j = 1, 3, 5, \cdots } (\nu-1) \cdots (\nu - j) \left( 1 - 2^{-j} \right) \frac{\zeta(j+1)}{ y^{j + 1} } \right)\end{aligned} \hspace{\stretch{1}}(1.0.69c)

\begin{aligned}\frac{C}{N} = \frac{\pi^2}{2} k_{\mathrm{B}} \frac{ k_{\mathrm{B}} T}{\epsilon_{\mathrm{F}}}\end{aligned} \hspace{\stretch{1}}(1.0.71a)

\begin{aligned}A = N k_{\mathrm{B}} T \left( \ln z - \frac{f_{5/2}(z)}{f_{3/2}(z)} \right)\end{aligned} \hspace{\stretch{1}}(1.0.71b)

Bosons

\begin{aligned}Z_{\mathrm{G}} = \prod_\epsilon \frac{1}{{ 1 - z e^{-\beta \epsilon} }}\end{aligned} \hspace{\stretch{1}}(1.0.72)

\begin{aligned}P \beta = \frac{1}{{\lambda^3}} g_{5/2}(z)\end{aligned} \hspace{\stretch{1}}(1.0.73)

\begin{aligned}U = \frac{3}{2} k_{\mathrm{B}} T \frac{V}{\lambda^3} g_{5/2}(z)\end{aligned} \hspace{\stretch{1}}(1.0.74)

\begin{aligned}N_e = N - N_0 = N \left( \frac{T}{T_c} \right)^{3/2}\end{aligned} \hspace{\stretch{1}}(1.0.75)

For $T < T_c$, $z = 1$.

\begin{aligned}g_\nu(1) = \zeta(\nu).\end{aligned} \hspace{\stretch{1}}(1.0.76)

\begin{aligned}\sum_{n_k = 0}^\infty e^{-\beta(\epsilon_k - \mu) n_k} =\frac{1}{{1 - e^{-\beta(\epsilon_k - \mu)}}}\end{aligned} \hspace{\stretch{1}}(1.0.77)

\begin{aligned}f_\nu^-( e^{-\alpha} ) = \frac{ \Gamma(1 - \nu)}{ \alpha^{1 - \nu} } + \cdots \end{aligned} \hspace{\stretch{1}}(1.0.78)

\begin{aligned}\rho \lambda^3 = g_{3/2}(z) \le \zeta(3/2) \approx 2.612\end{aligned} \hspace{\stretch{1}}(1.0.79a)

\begin{aligned}k_{\mathrm{B}} T_{\mathrm{c}} = \left( \frac{\rho}{\zeta(3/2)} \right)^{2/3} \frac{ 2 \pi \hbar^2}{m}\end{aligned} \hspace{\stretch{1}}(1.0.79b)
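Plugging numbers into this $T_{\mathrm{c}}$ formula gives the classic order-of-magnitude estimate. A sketch assuming roughly the number density of liquid helium-4 (mass density $\sim 145\,\mathrm{kg/m^3}$ and atomic mass $6.6465 \times 10^{-27}\,\mathrm{kg}$ are the assumed inputs), for which the textbook result is a transition temperature of about $3\,\mathrm{K}$:

```python
from math import pi

# Ideal Bose gas condensation temperature at a helium-4-like density.
hbar = 1.054571817e-34   # J s
kB = 1.380649e-23        # J / K
m = 6.6465e-27           # kg (helium-4 atom)
rho = 145.0 / m          # number density, 1/m^3 (assumed mass density)
zeta_3_2 = 2.612

Tc = (rho / zeta_3_2)**(2.0 / 3.0) * 2.0 * pi * hbar**2 / (m * kB)
```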

BEC

\begin{aligned}\rho= \rho_{\mathbf{k} = 0}+ \frac{1}{{\lambda^3}} g_{3/2}(z)\end{aligned} \hspace{\stretch{1}}(1.0.80a)

\begin{aligned}\rho_0 = \rho \left(1 - \left( \frac{T}{T_{\mathrm{c}}} \right)^{3/2}\right)\end{aligned} \hspace{\stretch{1}}(1.0.80b)

\begin{aligned}\frac{E}{V} \propto \left( k_{\mathrm{B}} T \right)^{5/2}\end{aligned} \hspace{\stretch{1}}(1.0.81a)

\begin{aligned}\frac{C}{V} \propto \left( k_{\mathrm{B}} T \right)^{3/2}\end{aligned} \hspace{\stretch{1}}(1.0.81b)

\begin{aligned}\frac{S}{N k_{\mathrm{B}}} = \frac{5}{2} \frac{g_{5/2}}{g_{3/2}} - \ln z \Theta(T - T_c)\end{aligned} \hspace{\stretch{1}}(1.0.81c)

Density of states

Low velocities

\begin{aligned}N_1(\epsilon)=V \frac{m \hbar}{\hbar^2 \sqrt{ 2 m \epsilon}}\end{aligned} \hspace{\stretch{1}}(1.0.82a)

\begin{aligned}N_2(\epsilon)=V \frac{m}{\hbar^2}\end{aligned} \hspace{\stretch{1}}(1.0.82b)

\begin{aligned}N_3(\epsilon)=V \left( \frac{2 m}{\hbar^2} \right)^{3/2} \frac{1}{{4 \pi^2}} \sqrt{\epsilon}\end{aligned} \hspace{\stretch{1}}(1.0.82c)

relativistic

\begin{aligned}\mathcal{D}_1(\epsilon)=\frac{2 L}{ c h } \frac{ \sqrt{ \epsilon^2 - \left( m c^2 \right)^2} }{\epsilon}\end{aligned} \hspace{\stretch{1}}(1.0.83a)

\begin{aligned}\mathcal{D}_2(\epsilon)=\frac{2 \pi A}{ (c h)^2 } \frac{ \epsilon^2 - \left( m c^2 \right)^2 }{ \epsilon }\end{aligned} \hspace{\stretch{1}}(1.0.83b)

\begin{aligned}\mathcal{D}_3(\epsilon)=\frac{4 \pi V}{ (c h)^3 } \frac{\left( \epsilon^2 - \left( m c^2 \right)^2 \right)^{3/2}}{\epsilon}\end{aligned} \hspace{\stretch{1}}(1.0.83c)

## An updated compilation of notes, for ‘PHY452H1S Basic Statistical Mechanics’, Taught by Prof. Arun Paramekanti

Posted by peeterjoot on March 27, 2013

Here’s my second update of my notes compilation for this course, including all of the following:

March 27, 2013 Fermi gas

March 26, 2013 Fermi gas thermodynamics

March 23, 2013 Relativistic generalization of statistical mechanics

March 21, 2013 Kittel Zipper problem

March 18, 2013 Pathria chapter 4 diatomic molecule problem

March 17, 2013 Gibbs sum for a two level system

March 16, 2013 open system variance of N

March 16, 2013 probability forms of entropy

March 14, 2013 Grand Canonical/Fermion-Bosons

March 13, 2013 Quantum anharmonic oscillator

March 12, 2013 Grand canonical ensemble

March 11, 2013 Heat capacity of perturbed harmonic oscillator

March 10, 2013 Langevin small approximation

March 10, 2013 Addition of two one half spins

March 10, 2013 Midterm II reflection

March 07, 2013 Thermodynamic identities

March 06, 2013 Temperature

March 05, 2013 Interacting spin

plus everything detailed in the description of my first update and before.

## Midterm II reflection, take II, with approximate anharmonic oscillator solution

Posted by peeterjoot on March 11, 2013


## Question: Perturbation of classical harmonic oscillator (2013 midterm II p2)

Consider a single particle perturbation of a classical simple harmonic oscillator Hamiltonian

\begin{aligned}H = \frac{1}{{2}} m \omega^2 \left( {x^2 + y^2} \right) + \frac{1}{{2 m}} \left( {p_x^2 + p_y^2} \right) + a x^4 + b y^6\end{aligned} \hspace{\stretch{1}}(1.0.12)

Calculate the canonical partition function, mean energy and specific heat of this system.

This problem can be attempted in two ways, the first of which was how I did it on the midterm, differentiating under the integral sign, leaving the integrals in exact form, but not evaluated explicitly in any way.

Alternately, by Taylor expanding around $a = 0$ and $b = 0$ with those as the variables in the Taylor expansion (as now done in the Pathria 3.29 problem), we can form a solution in short order. Given my low midterm mark, it seems very likely that this was what was expected.

Performing a two variable Taylor expansion of $Z$, about $(c, d) = (0, 0)$ we have

\begin{aligned}Z \approx\frac{2 \pi m}{\beta}\int dx dye^{- \beta m \omega^2 x^2/2}e^{- \beta m \omega^2 y^2/2}\left( 1 - \beta a x^4 - \beta b y^6 \right)=\frac{2 \pi m}{\beta}\frac{ 2 \pi}{\beta m \omega^2}\left( 1 - \beta a \frac{3!!}{(\beta m \omega^2)^2} - \beta b \frac{5!!}{(\beta m \omega^2)^3} \right),\end{aligned} \hspace{\stretch{1}}(1.0.22)

or

\begin{aligned}\boxed{Z \approx\frac{(2 \pi/\omega)^2}{\beta^2}\left( 1 - \frac{3 a }{\beta (m \omega^2)^2} - \frac{15 b }{\beta^2 (m \omega^2)^3} \right).}\end{aligned} \hspace{\stretch{1}}(1.0.23)
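This approximation is easy to cross-check by direct quadrature of the original configuration integrals; a sketch with $m = \omega = \beta = 1$ and deliberately small couplings (all values illustrative):

```python
from math import exp, pi

# Compare the Taylor-expanded Z against direct numerical integration
# of Z = (2 pi m / beta) * int e^{-beta f(x)} dx * int e^{-beta g(y)} dy.
m = omega = beta = 1.0
a, b = 1e-3, 1e-4   # small perturbations so the expansion applies

def quad(f, lo, hi, n=20000):
    h = (hi - lo) / n
    return h * sum(f(lo + (i + 0.5) * h) for i in range(n))

Ix = quad(lambda x: exp(-beta * (0.5 * m * omega**2 * x * x + a * x**4)), -10, 10)
Iy = quad(lambda y: exp(-beta * (0.5 * m * omega**2 * y * y + b * y**6)), -10, 10)
Z_exact = (2 * pi * m / beta) * Ix * Iy

Z_approx = (2 * pi / omega)**2 / beta**2 * (
    1 - 3 * a / (beta * (m * omega**2)**2) - 15 * b / (beta**2 * (m * omega**2)**3)
)
```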

Now we can calculate the average energy

\begin{aligned}\left\langle{{H}}\right\rangle = - \frac{\partial {}}{\partial {\beta}}\ln Z= - \frac{\partial {}}{\partial {\beta}}\left( -2 \ln \beta + \ln \left( 1 - \frac{3 a }{\beta (m \omega^2)^2} - \frac{15 b }{\beta^2 (m \omega^2)^3} \right) \right)=\frac{2}{\beta}-\frac{ \frac{3 a }{\beta^2 (m \omega^2)^2}+ \frac{30 b }{\beta^3 (m \omega^2)^3}}{ 1 - \frac{3 a }{\beta (m \omega^2)^2} - \frac{15 b }{\beta^2 (m \omega^2)^3}}.\end{aligned} \hspace{\stretch{1}}(1.0.24)

Dropping the $a$, $b$ terms of the denominator above, we have

\begin{aligned}\boxed{\left\langle{{H}}\right\rangle=\frac{2}{\beta}- \frac{3 a }{\beta^2 (m \omega^2)^2}- \frac{30 b }{\beta^3 (m \omega^2)^3}.}\end{aligned} \hspace{\stretch{1}}(1.0.25)

The heat capacity follows immediately

\begin{aligned}\boxed{C_{\mathrm{V}} = \frac{1}{{k_{\mathrm{B}}}} \frac{\partial {\left\langle{{H}}\right\rangle}}{\partial {T}}= 2 - \frac{6 a k_{\mathrm{B}} T}{(m \omega^2)^2} - \frac{90 k_{\mathrm{B}}^2 T^2 b }{(m \omega^2)^3}.}\end{aligned} \hspace{\stretch{1}}(1.0.26)

## Midterm II reflection

Posted by peeterjoot on March 10, 2013


Here’s some reflection about this Thursday’s midterm, redoing the problems without the mad scramble. I don’t think my results are too different from what I did in the midterm, even doing them casually now, but I’ll have to see after grading if these solutions are good.

## Question: Magnetic field spin level splitting (2013 midterm II p1)

A particle with spin $S$ has $2 S + 1$ states $-S, -S + 1, \cdots, S-1, S$. When exposed to a magnetic field, state splitting results in energy $E_m = \hbar m B$. Calculate the partition function, and use it to find the magnetization as a function of temperature. A “sum the geometric series” hint was given.

Our partition function is

\begin{aligned}Z &= \sum_{m = -S}^S e^{-\hbar \beta m B} \\ &= e^{\hbar \beta S B}\sum_{m = -S}^S e^{-\hbar \beta (m + S) B} \\ &= e^{\hbar \beta S B}\sum_{n = 0}^{2 S} e^{-\hbar \beta n B}.\end{aligned} \hspace{\stretch{1}}(1.0.1)

Writing

\begin{aligned}a = e^{-\hbar \beta B},\end{aligned} \hspace{\stretch{1}}(1.0.2)

that is

\begin{aligned}Z &= a^{-S}\sum_{n = 0}^{2 S} a^n \\ &= a^{-S} \frac{ a^{2 S + 1} - 1 }{a - 1} \\ &= \frac{ a^{S + 1} - a^{-S} }{a - 1} \\ &= \frac{ a^{S + 1/2} - a^{-S - 1/2} }{a^{1/2} - a^{-1/2}}.\end{aligned} \hspace{\stretch{1}}(1.0.3)

Substitution of $a$ gives us

\begin{aligned}\boxed{Z = \frac{ \sinh( \hbar \beta B (S + 1/2) ) }{ \sinh( \hbar \beta B /2 ) }.}\end{aligned} \hspace{\stretch{1}}(1.0.4)
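This closed form is easy to verify against the direct sum over $m$; writing $x$ for the combination $\hbar \beta B$ (the test values below are arbitrary):

```python
from math import exp, sinh

# Z as a direct sum over m = -S .. S versus the sinh ratio closed form.
def Z_direct(S, x):
    return sum(exp(-m * x) for m in range(-S, S + 1))

def Z_closed(S, x):
    return sinh(x * (S + 0.5)) / sinh(0.5 * x)

pairs = [(Z_direct(S, x), Z_closed(S, x)) for S, x in [(1, 0.5), (2, 0.2), (5, 1.0)]]
```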

To calculate the magnetization $M$, I used

\begin{aligned}M = -\left\langle{{H}}\right\rangle/B.\end{aligned} \hspace{\stretch{1}}(1.0.5)

which is how [1] defines magnetization for a spin system. It was pointed out to me after the test that magnetization was defined differently in class as

\begin{aligned}\mu = \frac{\partial {F}}{\partial {B}}.\end{aligned} \hspace{\stretch{1}}(1.0.6)

These are, up to a sign, identical, at least in this case, since we have $\beta$ and $B$ travelling together in the partition function. In terms of the average energy

\begin{aligned}M &= -\frac{\left\langle{{H}}\right\rangle}{B} \\ &= \frac{1}{{B}} \frac{\partial {}}{\partial {\beta}} \ln Z(\beta B) \\ &= \frac{1}{{Z B}} \frac{\partial {}}{\partial {\beta}}Z(\beta B) \\ &= \frac{1}{{Z}} \frac{\partial {}}{\partial {(\beta B)}} Z(\beta B)\end{aligned} \hspace{\stretch{1}}(1.0.7)

Compare this to the in-class definition of magnetization

\begin{aligned}\mu &= \frac{\partial {F}}{\partial {B}} \\ &= \frac{\partial {}}{\partial {B}} \left( - k_{\mathrm{B}} T \ln Z(\beta B) \right) \\ &= -\frac{\partial {}}{\partial {B}} \frac{\ln Z (\beta B)}{\beta} \\ &= -\frac{1}{{\beta Z}} \frac{\partial {}}{\partial {B}} Z(\beta B) \\ &= -\frac{1}{{Z}} \frac{\partial {}}{\partial {(\beta B)}} Z(\beta B).\end{aligned} \hspace{\stretch{1}}(1.0.8)

For this derivative we have

\begin{aligned}\frac{\partial {}}{\partial {(\beta B)}} \ln Z &= \frac{\partial {}}{\partial {(\beta B)}} \ln \frac{ \sinh( \hbar \beta B (S + 1/2) ) }{ \sinh( \hbar \beta B /2 ) } \\ &= \frac{\partial {}}{\partial {(\beta B)}} \left( \ln \sinh( \hbar \beta B (S + 1/2) ) - \ln \sinh( \hbar \beta B /2 ) \right) \\ &= \frac{\hbar }{2}\left( (2 S + 1) \coth( \hbar \beta B (S + 1/2) ) - \coth( \hbar \beta B /2 ) \right).\end{aligned} \hspace{\stretch{1}}(1.0.9)

Since eq. (1.0.8) shows that $\mu$ is just the negative of this logarithmic derivative (no further factor of $1/Z$ is required), this gives us

\begin{aligned}\boxed{\mu = -\frac{\hbar }{2}\left( (2 S + 1) \coth( \hbar \beta B (S + 1/2) ) - \coth( \hbar \beta B /2 ) \right).}\end{aligned} \hspace{\stretch{1}}(1.0.10)
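A finite difference of $\ln Z$ provides a numeric check of the logarithmic derivative computed in eq. (1.0.9), taking $\hbar = 1$ and arbitrary test values for $S$ and $x = \beta B$:

```python
from math import cosh, log, sinh

# Compare d ln Z / dx from a central difference against the coth form.
S, x, h = 1, 1.0, 1e-6

def lnZ(x):
    return log(sinh(x * (S + 0.5)) / sinh(0.5 * x))

def coth(y):
    return cosh(y) / sinh(y)

fd = (lnZ(x + h) - lnZ(x - h)) / (2 * h)
analytic = 0.5 * ((2 * S + 1) * coth(x * (S + 0.5)) - coth(0.5 * x))
```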

I got something like this on the midterm, but recall doing it somewhat differently.

## Question: Perturbation of classical harmonic oscillator (2013 midterm II p2)

Consider a single particle perturbation of a classical simple harmonic oscillator Hamiltonian

\begin{aligned}H = \frac{1}{{2}} m \omega^2 \left( x^2 + y^2 \right) + \frac{1}{{2 m}} \left( p_x^2 + p_y^2 \right) + a x^4 + b y^6\end{aligned} \hspace{\stretch{1}}(1.0.12)

Calculate the canonical partition function, mean energy and specific heat of this system.

There were some instructions about the form to put the integrals in.

The canonical partition function is

\begin{aligned}Z &= \int dx dy dp_x dp_y e^{-\beta H} \\ &= \int dx e^{-\beta \left( \frac{1}{{2}} m \omega^2 x^2 + a x^4 \right)}\int dy e^{-\beta \left( \frac{1}{{2}} m \omega^2 y^2 + b y^6 \right)}\int dp_x dp_y e^{-\beta p_x^2/2 m} e^{-\beta p_y^2/2 m}.\end{aligned} \hspace{\stretch{1}}(1.0.13)

With

\begin{aligned}u = \sqrt{\frac{\beta}{2m}} p_x\end{aligned} \hspace{\stretch{1}}(1.0.14a)

\begin{aligned}v = \sqrt{\frac{\beta}{2m}} p_y,\end{aligned} \hspace{\stretch{1}}(1.0.14b)

the momentum integrals are

\begin{aligned}\int dp_x dp_y e^{-\beta p_x^2/2 m} e^{-\beta p_y^2/2 m} \\ &= \frac{2m}{\beta}\int du \, dv \, e^{- u^2 - v^2} \\ &= \frac{m}{\beta}2 \pi\int 2 r dr e^{- r^2} \\ &= \frac{2 \pi m}{\beta}.\end{aligned} \hspace{\stretch{1}}(1.0.15)

Writing

\begin{aligned}f(x) = \frac{1}{{2}} m \omega^2 x^2 + a x^4\end{aligned} \hspace{\stretch{1}}(1.0.16a)

\begin{aligned}g(y) = \frac{1}{{2}} m \omega^2 y^2 + b y^6,\end{aligned} \hspace{\stretch{1}}(1.0.16b)

we have

\begin{aligned}\boxed{Z = \frac{2 \pi m}{\beta}\int dx e^{- \beta f(x)}\int dy e^{- \beta g(y)}.}\end{aligned} \hspace{\stretch{1}}(1.0.17)

The mean energy is

\begin{aligned}\left\langle{{H}}\right\rangle &= \frac{\int H e^{-\beta H}}{\int e^{-\beta H}} \\ &= -\frac{\partial {}}{\partial {\beta}} \ln \int e^{-\beta H} \\ &= \frac{\partial {}}{\partial {\beta}} \left( \ln \beta -\ln \int dx e^{- \beta f(x)} -\ln \int dy e^{- \beta g(y)} \right) \\ &= \frac{1}{{\beta}} + \frac{\int dx f(x) e^{- \beta f(x)}}{\int dx e^{- \beta f(x)}}+ \frac{\int dy g(y) e^{- \beta g(y)}}{\int dy e^{- \beta g(y)}}.\end{aligned} \hspace{\stretch{1}}(1.0.18)

The specific heat follows by differentiating once more

\begin{aligned}C_{\mathrm{V}} &= \frac{\partial {\left\langle{{H}}\right\rangle}}{\partial {T}} \\ &= \frac{\partial {\beta}}{\partial {T}}\frac{\partial {\left\langle{{H}}\right\rangle}}{\partial {\beta}} \\ &= -\frac{1}{{k_{\mathrm{B}} T^2}}\frac{\partial {\left\langle{{H}}\right\rangle}}{\partial {\beta}} \\ &= -k_{\mathrm{B}} \beta^2\frac{\partial {\left\langle{{H}}\right\rangle}}{\partial {\beta}} \\ &= - k_{\mathrm{B}} \beta^2\left( -\frac{1}{{\beta^2}} + \frac{\partial {}}{\partial {\beta}} \left( \frac{ \int dx f(x) e^{- \beta f(x)} } { \int dx e^{- \beta f(x)} } + \frac{ \int dy g(y) e^{- \beta g(y)} } { \int dy e^{- \beta g(y)} } \right) \right).\end{aligned} \hspace{\stretch{1}}(1.0.19)

Differentiating the integral terms we have, for example,

\begin{aligned}\frac{\partial {}}{\partial {\beta}} \frac{\int dx f(x) e^{- \beta f(x)}}{\int dx e^{- \beta f(x)}}=-\frac{\int dx f^2(x) e^{- \beta f(x)}}{\int dx e^{- \beta f(x)}}+\left( \frac{ \int dx f(x) e^{- \beta f(x)} } { \int dx e^{- \beta f(x)} } \right)^2,\end{aligned} \hspace{\stretch{1}}(1.0.20)

so that the specific heat is

\begin{aligned}\boxed{C_{\mathrm{V}} =k_{\mathrm{B}} \left(1 + \beta^2 \left(\frac{\int dx f^2(x) e^{- \beta f(x)}}{\int dx e^{- \beta f(x)}}-\left( \frac{ \int dx f(x) e^{- \beta f(x)} } { \int dx e^{- \beta f(x)} } \right)^2\right)+ \beta^2 \left(\frac{\int dy g^2(y) e^{- \beta g(y)}}{\int dy e^{- \beta g(y)}}-\left( \frac{ \int dy g(y) e^{- \beta g(y)} } { \int dy e^{- \beta g(y)} } \right)^2\right)\right).}\end{aligned} \hspace{\stretch{1}}(1.0.21)
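The averages appearing here are all one-dimensional integrals, so they are easy to evaluate numerically. A minimal sketch (Python rather than the Mathematica used elsewhere in these notes; $m = \omega = \beta = 1$ and $a = 0.1$ are illustrative values), checking the $a = 0$ equipartition limit $\left\langle f \right\rangle = 1/(2\beta)$:

```python
import math

def avg(h, beta, L=10.0, n=20001):
    """<h> under the Boltzmann weight e^{-beta h(x)} on [-L, L], trapezoidal rule."""
    dx = 2.0 * L / (n - 1)
    num = den = 0.0
    for i in range(n):
        x = -L + i * dx
        w = (0.5 if i in (0, n - 1) else 1.0) * math.exp(-beta * h(x))
        num += w * h(x)
        den += w
    return num / den

beta = 1.0
f_quadratic = lambda x: 0.5 * x * x              # the a = 0 limit of f(x)
f_quartic = lambda x: 0.5 * x * x + 0.1 * x**4   # illustrative a = 0.1

# equipartition check: each quadratic degree of freedom carries 1/(2 beta)
assert abs(avg(f_quadratic, beta) - 0.5) < 1e-6
# the quartic term stiffens the confining potential, pulling <f> below 1/(2 beta)
assert avg(f_quartic, beta) < 0.5
```

The same quadrature evaluates $\left\langle f^2 \right\rangle$ and $\left\langle g^2 \right\rangle$, so the boxed specific heat can be tabulated for any $a$, $b$ the same way.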

That’s as far as I took this problem. There was a discussion after the midterm with Eric about Taylor expansion of these integrals. That’s not something that I tried.


## Spin down of coffee in a bottomless cup.

Posted by peeterjoot on April 25, 2012

[Click here for a PDF of this post with nicer formatting (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

# Motivation.

Here’s a variation of a problem outlined in section 2 of [1], which examined the time evolution of a fluid with initial rotational motion after the (cylindrical) rotation driver stops, later describing this as the spin down of a cup of tea. I’ll work the problem in more detail than in the text, and also make two refinements.

• I drink coffee and not tea.
• I stir my coffee in the interior of the cup and not on the outer edge.

Because of the second point I’ll model my stir stick as a rotating cylinder in the cup and not by somebody spinning the cup itself to stir the tea. This only changes the solution for the steady state part of the problem.

# Guts

We’ll work in cylindrical coordinates following the conventions of figure (1).

Figure 1: Fluid flow in nested cylinders.

We’ll assume a solution with an azimuthal velocity, where both the pressure and the velocity depend only on the radial coordinate.

\begin{aligned}\mathbf{u} = u(r) \hat{\boldsymbol{\phi}}.\end{aligned} \hspace{\stretch{1}}(2.1)

\begin{aligned}p = p(r)\end{aligned} \hspace{\stretch{1}}(2.2)

Let’s first verify that this meets the non-compressible condition that eliminates the $\mu \boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{u})$ term from Navier-Stokes

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{u}&=\left(\hat{\mathbf{r}} \partial_r + \frac{\hat{\boldsymbol{\phi}}}{r} \partial_\phi + \hat{\mathbf{z}} \partial_z\right) \cdot \left(u \hat{\boldsymbol{\phi}}\right) \\ &=\hat{\boldsymbol{\phi}} \cdot\left(\hat{\mathbf{r}} \partial_r u + \frac{\hat{\boldsymbol{\phi}}}{r} \partial_\phi u + \hat{\mathbf{z}} \partial_z u\right) +u\left(\hat{\mathbf{r}} \cdot \partial_r \hat{\boldsymbol{\phi}} + \frac{\hat{\boldsymbol{\phi}}}{r} \cdot \partial_\phi \hat{\boldsymbol{\phi}} + \hat{\mathbf{z}} \cdot \partial_z \hat{\boldsymbol{\phi}}\right) \\ &=\hat{\boldsymbol{\phi}} \cdot \hat{\mathbf{r}} \partial_r u +u\frac{\hat{\boldsymbol{\phi}}}{r} \cdot \left(-\hat{\mathbf{r}}\right) \\ &= 0.\end{aligned}

Good. Now let’s express each of the terms of Navier-Stokes in cylindrical form. Our time dependence is

\begin{aligned}\rho \partial_t u(r, t) \hat{\boldsymbol{\phi}}=\rho \hat{\boldsymbol{\phi}} \partial_t u.\end{aligned} \hspace{\stretch{1}}(2.3)

Our inertial term is

\begin{aligned}\begin{aligned}\rho (\mathbf{u} \cdot \boldsymbol{\nabla}) \mathbf{u}&=\frac{\rho u}{r} \partial_\phi (u \hat{\boldsymbol{\phi}}) \\ &=\frac{\rho u^2}{r} (-\hat{\mathbf{r}}).\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.4)

Our pressure term is

\begin{aligned}-\boldsymbol{\nabla} p=-\hat{\mathbf{r}} \partial_r p,\end{aligned} \hspace{\stretch{1}}(2.5)

and our Laplacian term is

\begin{aligned}\begin{aligned}\mu \boldsymbol{\nabla}^2 \mathbf{u}&=\mu \left( \frac{1}{{r}} \partial_r ( r \partial_r) + \frac{1}{{r^2}} \partial_{\phi\phi} + \partial_{z z}\right)u(r) \hat{\boldsymbol{\phi}} \\ &=\mu \left( \frac{\hat{\boldsymbol{\phi}}}{r} \partial_r ( r \partial_r u) + \frac{-\hat{\boldsymbol{\phi}} u}{r^2} \right).\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.6)

Putting things together, we find that Navier-Stokes takes the form

\begin{aligned}\rho \hat{\boldsymbol{\phi}} \partial_t u+\frac{\rho u^2}{r} (-\hat{\mathbf{r}})=-\hat{\mathbf{r}} \partial_r p+\mu \left( \frac{\hat{\boldsymbol{\phi}}}{r} \partial_r ( r \partial_r u) + \frac{-\hat{\boldsymbol{\phi}} u}{r^2} \right),\end{aligned} \hspace{\stretch{1}}(2.7)

which nicely splits into separate equations for the $\hat{\boldsymbol{\phi}}$ and $\hat{\mathbf{r}}$ directions respectively

\begin{aligned}\frac{1}{{\nu}} \partial_t u=\frac{1}{r} \partial_r ( r \partial_r u) - \frac{u}{r^2}\end{aligned} \hspace{\stretch{1}}(2.8a)

\begin{aligned}\frac{\rho u^2}{r}=\partial_r p.\end{aligned} \hspace{\stretch{1}}(2.8b)

Before $t = 0$ we seek the steady state, the solution of

\begin{aligned}r \partial_r ( r \partial_r u) - u = 0.\end{aligned} \hspace{\stretch{1}}(2.9)

We’ve seen that

\begin{aligned}u(r) = A r + \frac{B}{r}\end{aligned} \hspace{\stretch{1}}(2.10)

is the general solution, and can now fit this to the boundary value constraints. For the interior portion of the cup we have

\begin{aligned}{\left.{{A r + \frac{B}{r}}}\right\vert}_{{r = 0}} = 0\end{aligned} \hspace{\stretch{1}}(2.11)

so $B = 0$ is required. For the interface of the “stir-stick” (moving fast enough that we can consider it having a cylindrical effect) at $r = R_1$ we have

\begin{aligned}A R_1 = \Omega R_1,\end{aligned} \hspace{\stretch{1}}(2.12)

so the interior portion of our steady state coffee velocity is just

\begin{aligned}\mathbf{u} = \Omega r \hat{\boldsymbol{\phi}}.\end{aligned} \hspace{\stretch{1}}(2.13)
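That $u = A r + B/r$ really solves 2.9 can be spot-checked with a finite-difference residual; a quick sketch (the constants $A = 2$, $B = 3$ are arbitrary):

```python
def residual(u, r, h=1e-4):
    # finite-difference residual of r d/dr ( r du/dr ) - u = 0
    du = lambda x: (u(x + h) - u(x - h)) / (2.0 * h)
    g = lambda x: x * du(x)               # r du/dr
    return r * (g(r + h) - g(r - h)) / (2.0 * h) - u(r)

u = lambda r: 2.0 * r + 3.0 / r           # A = 2, B = 3: arbitrary constants
for r in (0.5, 1.0, 2.0):
    assert abs(residual(u, r)) < 1e-5
```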

Between the cup edge and the stir-stick we have to solve

\begin{aligned}A R_1 + \frac{B}{R_1} &= \Omega R_1 \\ A R_2 + \frac{B}{R_2} &= 0,\end{aligned} \hspace{\stretch{1}}(2.14)

or

\begin{aligned}A R_1^2 + B &= \Omega R_1^2 \\ A R_2^2 + B &= 0.\end{aligned} \hspace{\stretch{1}}(2.16)

Subtracting we find

\begin{aligned}A = -\frac{\Omega R_1^2}{R_2^2 - R_1^2}\end{aligned} \hspace{\stretch{1}}(2.18a)

\begin{aligned}B = \frac{\Omega R_1^2 R_2^2}{R_2^2 - R_1^2},\end{aligned} \hspace{\stretch{1}}(2.18b)

so our steady state coffee flow is

\begin{aligned}\mathbf{u} =\left\{\begin{array}{l l}\Omega r \hat{\boldsymbol{\phi}}& \quad \mbox{$r \in [0, R_1]$} \\ \frac{\Omega R_1^2}{R_2^2 - R_1^2} \left( \frac{R_2^2}{r} -r \right)\hat{\boldsymbol{\phi}}& \quad \mbox{$r \in [R_1, R_2]$} \\ \end{array}\right.\end{aligned} \hspace{\stretch{1}}(2.19)
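This profile is easy to sanity check numerically: the two branches must agree at the stir-stick radius $R_1$, and the no-slip condition must hold at the cup wall $R_2$. A minimal sketch (the values of $\Omega$, $R_1$, $R_2$ are arbitrary):

```python
Omega, R1, R2 = 2.0, 0.3, 1.0                    # arbitrary stir rate and radii
A = -Omega * R1**2 / (R2**2 - R1**2)             # eq. (2.18a)
B = Omega * R1**2 * R2**2 / (R2**2 - R1**2)      # eq. (2.18b)

u_outer = lambda r: A * r + B / r                # flow between stick and wall

assert abs(u_outer(R1) - Omega * R1) < 1e-12     # matches rigid rotation at r = R1
assert abs(u_outer(R2)) < 1e-12                  # no-slip at the cup wall r = R2
```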

## Time evolution.

We can use a separation of variables technique with $u(r, t) = R(r) T(t)$ to find

\begin{aligned}\frac{1}{{\nu}} \frac{T'}{T} = \frac{1}{{R}} \left( \frac{1}{r} \partial_r ( r \partial_r R) - \frac{R}{r^2}\right)= -\lambda^2,\end{aligned} \hspace{\stretch{1}}(2.20)

which gives us

\begin{aligned}T \propto e^{-\lambda^2 \nu t},\end{aligned} \hspace{\stretch{1}}(2.21)

and $R$ specified by

\begin{aligned}0 = r^2 \frac{d^2 R}{dr^2} + r \frac{d R}{dr} + R \left( r^2 \lambda^2 - 1 \right).\end{aligned} \hspace{\stretch{1}}(2.22)

Checking [2] (9.1.1) we see that this can be put into the standard form of the Bessel equation if we eliminate the $\lambda$ term. We can do that writing $z = r \lambda$, $\mathcal{R}(z) = R(z/\lambda)$ and noting that $r d/dr = z d/dz$ and $r^2 d^2/dr^2 = z^2 d^2/dz^2$, which gives us

\begin{aligned}0 = z^2 \frac{d^2 \mathcal{R}}{dz^2} + z \frac{d \mathcal{R}}{dz} + \mathcal{R} \left( z^2 - 1 \right).\end{aligned} \hspace{\stretch{1}}(2.23)

The solutions are

\begin{aligned}\mathcal{R}(z) = J_{\pm 1}(z), Y_{\pm 1}(z).\end{aligned} \hspace{\stretch{1}}(2.24)

From (9.1.5) of the handbook we see that the plus and minus variations are linearly dependent since $J_{-1}(z) = -J_1(z)$ and $Y_{-1}(z) = -Y_1(z)$, and from (9.1.8) that $Y_1(z)$ is infinite at the origin, so our general solution has to be of the form

\begin{aligned}\mathbf{u}(r, t) = \hat{\boldsymbol{\phi}} \sum_\lambda c_\lambda e^{-\lambda^2 \nu t} J_{1}(r \lambda).\end{aligned} \hspace{\stretch{1}}(2.25)

In the text, I see that the transformation $\lambda \rightarrow \lambda/a$ (where $a$ was the radius of the cup) is made so that the Bessel function parameter was dimensionless. We can do that too but write

\begin{aligned}\mathbf{u}(r, t) = \hat{\boldsymbol{\phi}} \sum_\lambda c_\lambda e^{-\frac{\lambda^2}{R_2^2} \nu t} J_{1}\left(\lambda \frac{r}{R_2}\right).\end{aligned} \hspace{\stretch{1}}(2.26)

Our boundary value constraint is that we require this to match 2.19 at $t = 0$. Let’s write $R_2 = R$, $R_1 = a R$, $z = r/R$, so that we are working in the unit circle with $z \in [0, 1]$. Our boundary problem can now be expressed as

\begin{aligned}\frac{1}{{\Omega R}} \sum_\lambda c_\lambda J_{1}\left(\lambda z\right)=\left\{\begin{array}{l l}z & \quad \mbox{$z \in [0, a]$} \\ \frac{a^2}{1 - a^2} \left( \frac{1}{{z}} - z\right)& \quad \mbox{$z \in [a, 1]$} \\ \end{array}\right.\end{aligned} \hspace{\stretch{1}}(2.27)

Let’s pull the $\Omega R$ factor into $c_\lambda$ and state the problem to be solved as

\begin{aligned}\mathbf{u}(r, t) = \Omega R \hat{\boldsymbol{\phi}} \sum_{i=1}^n c_i e^{-\frac{\lambda_i^2}{R^2} \nu t} J_{1}\left(\lambda_i \frac{r}{R}\right)\end{aligned} \hspace{\stretch{1}}(2.28a)

\begin{aligned}\sum_{i = 1}^n c_i J_{1}\left(\lambda_i z\right) = \phi(z)\end{aligned} \hspace{\stretch{1}}(2.28b)

\begin{aligned}\phi(z) = \left\{\begin{array}{l l}z & \quad \mbox{$z \in [0, a]$} \\ \frac{a^2}{1 - a^2} \left( \frac{1}{{z}} - z\right)& \quad \mbox{$z \in [a, 1]$} \\ \end{array}\right..\end{aligned} \hspace{\stretch{1}}(2.28c)

Looking at section 2.7 of [3] it appears the solutions for $c_i$ can be obtained from

\begin{aligned}c_i = \frac{\int_0^1 z \phi(z) J_1(\lambda_i z) dz}{\int_0^1 z J_1^2(\lambda_i z) dz},\end{aligned} \hspace{\stretch{1}}(2.29)

where $\lambda_i$ are the zeros of $J_1$.
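The zeros $\lambda_i$ can be computed without a special-function library, using the integral representation $J_1(x) = \frac{1}{\pi} \int_0^\pi \cos(\theta - x \sin\theta) \, d\theta$; the first zero is $\lambda_1 \approx 3.8317$. A quick sketch:

```python
import math

def J1(x, n=2000):
    # Bessel J_1 via the integral representation (1/pi) int_0^pi cos(t - x sin t) dt,
    # evaluated with the midpoint rule
    h = math.pi / n
    return sum(math.cos((k + 0.5) * h - x * math.sin((k + 0.5) * h))
               for k in range(n)) * h / math.pi

def bisect(f, lo, hi, tol=1e-10):
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
    return 0.5 * (lo + hi)

lam1 = bisect(J1, 3.0, 4.5)   # first zero of J_1
assert abs(lam1 - 3.8317) < 1e-3
```

With $J_1$ in hand, the $c_i$ quadratures above can be done with any one-dimensional integration rule.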

To get a feel for these, a plot of the first few fitting functions is shown in figure (2).

Figure 2: First four zero crossing Bessel functions.

Using Mathematica in bottomlessCoffee.cdf, these coefficients were calculated for $a = 0.6$. The $n = 1, 3, 5$ approximations to the fitting function are plotted with a comparison to the steady state velocity profile in figure (3).

Figure 3: Bessel function fitting for the steady state velocity profile for n = 1, 3, 5.

As indicated in the text, the spin down is way too slow to match reality (this can be seen visually in the worksheet by animating it).

# References

[1] D.J. Acheson. Elementary fluid dynamics. Oxford University Press, USA, 1990.

[2] M. Abramowitz and I.A. Stegun. Handbook of mathematical functions with formulas, graphs, and mathematical tables, volume 55. Dover Publications, 1964.

[3] H. Sagan. Boundary and eigenvalue problems in mathematical physics. Dover Publications, 1989.

## PHY456H1F: Quantum Mechanics II. Lecture 17 (Taught by Prof J.E. Sipe). Two spin systems and angular momentum.

Posted by peeterjoot on November 10, 2011


# Disclaimer.

Peeter’s lecture notes from class. May not be entirely coherent.

# More on two spin systems.

READING: Covering section 26.5 of the text [1].

\begin{aligned}\frac{1}{{2}} \otimes \frac{1}{{2}} = 1 \oplus 0\end{aligned} \hspace{\stretch{1}}(2.1)

where $1$ is a triplet state for $s=1$ and $0$ the “singlet” state with $s=0$. We want to consider the angular momentum of the entire system

\begin{aligned}j_1 \otimes j_2 = ?\end{aligned} \hspace{\stretch{1}}(2.2)

Why bother? Often it is true that

\begin{aligned}\left[{H},{\mathbf{J}}\right] = 0,\end{aligned} \hspace{\stretch{1}}(2.3)

so, in that case, the eigenstates of the total angular momentum are also energy eigenstates, so considering the angular momentum problem can help in finding these energy eigenstates.

Rotation operator

\begin{aligned}e^{-i \theta \hat{\mathbf{n}} \cdot \mathbf{J}/\hbar}\end{aligned} \hspace{\stretch{1}}(2.4)

\begin{aligned}\hat{\mathbf{n}} \cdot \mathbf{J} = n_x J_x + n_y J_y + n_z J_z\end{aligned} \hspace{\stretch{1}}(2.5)

Recall the definitions of the raising or lowering operators

\begin{aligned}J_\pm = J_x \pm i J_y,\end{aligned} \hspace{\stretch{1}}(2.6)

or

\begin{aligned}J_x &= \frac{1}{{2}} (J_{+} + J_{-}) \\ J_y &= \frac{1}{{2i}} (J_{+} - J_{-})\end{aligned} \hspace{\stretch{1}}(2.7)

We have

\begin{aligned}\hat{\mathbf{n}} \cdot \mathbf{J} = n_x \frac{1}{{2}} (J_{+} + J_{-})+ n_y \frac{1}{{2i}} (J_{+} - J_{-})+ n_z J_z,\end{aligned} \hspace{\stretch{1}}(2.9)

and

\begin{aligned}J_\pm {\left\lvert {j m} \right\rangle} = \hbar \Bigl((j \mp m)(j \pm m + 1)\Bigr)^{1/2}{\left\lvert {j, m \pm 1} \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.10)

So

\begin{aligned}{\left\langle {j' m'} \right\rvert} e^{-i \theta \hat{\mathbf{n}} \cdot \mathbf{J}/\hbar} {\left\lvert {j m} \right\rangle} = 0\end{aligned} \hspace{\stretch{1}}(2.11)

unless $j = j'$.

\begin{aligned}{\left\langle {j m'} \right\rvert} e^{-i \theta \hat{\mathbf{n}} \cdot \mathbf{J}/\hbar} {\left\lvert {j m} \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.12)

is a $(2j + 1) \times (2 j+ 1)$ matrix.
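The ladder-operator matrix elements of 2.10 can be verified mechanically: the coefficients $\sqrt{(j \mp m)(j \pm m + 1)}$ are fixed by the commutator $\left[{J_{+}},{J_{-}}\right] = 2 \hbar J_z$. A small stdlib-only sketch ($\hbar = 1$):

```python
import math

def ladder_matrices(two_j):
    """J_z and J_+ (hbar = 1) in the |j m> basis, m = -j .. j, with two_j = 2j."""
    n = two_j + 1
    j = two_j / 2.0
    Jz = [[(-j + k) if k == l else 0.0 for l in range(n)] for k in range(n)]
    Jp = [[0.0] * n for _ in range(n)]
    for k in range(n - 1):
        m = -j + k
        # J_+ |j m> = sqrt((j - m)(j + m + 1)) |j, m + 1>
        Jp[k + 1][k] = math.sqrt((j - m) * (j + m + 1))
    return Jz, Jp

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

# the coefficients must satisfy [J_+, J_-] = 2 J_z (hbar = 1)
for two_j in (1, 2, 3, 4):
    Jz, Jp = ladder_matrices(two_j)
    Jm = [list(row) for row in zip(*Jp)]   # J_- is the transpose (real entries)
    pm, mp = matmul(Jp, Jm), matmul(Jm, Jp)
    n = two_j + 1
    for r in range(n):
        for c in range(n):
            assert abs(pm[r][c] - mp[r][c] - 2.0 * Jz[r][c]) < 1e-12
```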

Combining rotations

\begin{aligned}{\left\langle {j m'} \right\rvert} e^{-i \theta_b \hat{\mathbf{n}}_a \cdot \mathbf{J}/\hbar}e^{-i \theta_a \hat{\mathbf{n}}_b \cdot \mathbf{J}/\hbar} {\left\lvert {j m} \right\rangle}=\sum_{m''}{\left\langle {j m'} \right\rvert} e^{-i \theta_b \hat{\mathbf{n}}_a \cdot \mathbf{J}/\hbar}{\left\lvert {j m''} \right\rangle} {\left\langle {j m''} \right\rvert}e^{-i \theta_a \hat{\mathbf{n}}_b \cdot \mathbf{J}/\hbar} {\left\lvert {j m} \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.13)

If

\begin{aligned}e^{-i \theta \hat{\mathbf{n}} \cdot \mathbf{J}/\hbar}=e^{-i \theta_b \hat{\mathbf{n}}_a \cdot \mathbf{J}/\hbar}e^{-i \theta_a \hat{\mathbf{n}}_b \cdot \mathbf{J}/\hbar}\end{aligned} \hspace{\stretch{1}}(2.14)

(something that may be hard to compute but possible), then

\begin{aligned}{\left\langle {j m'} \right\rvert} e^{-i \theta \hat{\mathbf{n}} \cdot \mathbf{J}/\hbar} {\left\lvert {j m} \right\rangle}=\sum_{m''}{\left\langle {j m'} \right\rvert} e^{-i \theta_b \hat{\mathbf{n}}_a \cdot \mathbf{J}/\hbar}{\left\lvert {j m''} \right\rangle} {\left\langle {j m''} \right\rvert}e^{-i \theta_a \hat{\mathbf{n}}_b \cdot \mathbf{J}/\hbar} {\left\lvert {j m} \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.15)

For fixed $j$, the matrices ${\left\langle {j m'} \right\rvert} e^{-i \theta \hat{\mathbf{n}} \cdot \mathbf{J}/\hbar} {\left\lvert {j m} \right\rangle}$ form a representation of the rotation group. The $(2 j + 1)$ representations are irreducible. (This won’t be proven).

Some of the matrices may contain big blocks of zeros, but they cannot be simplified any further?

Back to the two particle system

\begin{aligned}j_1 \otimes j_2 = ?\end{aligned} \hspace{\stretch{1}}(2.16)

If we use

\begin{aligned}{\left\lvert {j_1 m_1} \right\rangle} \otimes {\left\lvert {j_2 m_2} \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.17)

and pick a particular $j_1$ and $j_2$, then

\begin{aligned}{\left\langle {j_1 m_1' ; j_2 m_2'} \right\rvert} e^{-i \theta \hat{\mathbf{n}} \cdot \mathbf{J}/\hbar} {\left\lvert {j_1 m_1 ; j_2 m_2} \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.18)

is also a representation of the rotation group, but these sorts of matrices can be simplified a lot. This basis of dimensionality $(2 j_1 + 1)(2 j_2 + 1)$ is reducible.

A lot of this is motivation, and we still want a representation of $j_1 \otimes j_2$.

Recall that

\begin{aligned}\frac{1}{{2}} \otimes \frac{1}{{2}} = 1 \oplus 0= \left(\frac{1}{{2}} + \frac{1}{{2}} \right) \oplus \left(\frac{1}{{2}} - \frac{1}{{2}} \right)\end{aligned} \hspace{\stretch{1}}(2.19)

We might guess that, for $j_1 \ge j_2$,

\begin{aligned}j_1 \otimes j_2 = \left( j_1 + j_2 \right) \oplus \left( j_1 + j_2 - 1 \right) \oplus \cdots\left( j_1 - j_2 \right)\end{aligned} \hspace{\stretch{1}}(2.20)

Suppose that this is right. Then

\begin{aligned}5 \otimes \frac{1}{{2}} = \frac{11}{2} \oplus \frac{9}{2}\end{aligned} \hspace{\stretch{1}}(2.21)

Check for dimensions.

\begin{aligned}\begin{array}{l l l l l l l l l}1 &\otimes &1 &= &2 &\oplus &1 &\oplus &0 \\ 3 & \times &3 &= &5 & + &3 & + &1\end{array}\end{aligned} \hspace{\stretch{1}}(2.22)

Q: What was this $\oplus$?

It was just made up. We are creating a shorthand to say that we have a number of different basis states for each of the groupings. I need an example!

Check for dimensions in general

\begin{aligned}(2 j_1 + 1)(2 j_2 + 1) \stackrel{?}{=} \sum_{j = j_1 - j_2}^{j_1 + j_2} (2 j + 1).\end{aligned} \hspace{\stretch{1}}(2.23)

We find

\begin{aligned}\sum_{j = j_1 - j_2}^{j_1 + j_2} (2 j+ 1) &= \sum_{j=0}^{j_1 + j_2} (2 j + 1) -\sum_{j=0}^{j_1 - j_2 - 1} (2 j + 1) \\ &=(2 j_1 + 1)(2 j_2 + 1) \end{aligned}

Using

\begin{aligned}\sum_{n=0}^N n = \frac{N(N+1)}{2}\end{aligned} \hspace{\stretch{1}}(2.24)

\begin{aligned}j_1 \otimes j_2= (j_1 + j_2) \oplus(j_1 + j_2 - 1) \oplus\cdots(j_1 - j_2) \end{aligned} \hspace{\stretch{1}}(2.25)

In fact, this is correct. Proof “by construction” to follow.
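The dimension identity can also be checked by brute force; a sketch, carrying each angular momentum as the integer $2j$ so that half-integer values stay exact:

```python
def coupled_dims(two_j1, two_j2):
    # total dimension of j1 (x) j2 decomposed as j = |j1 - j2| .. j1 + j2,
    # working with doubled values two_j = 2j so half-integers stay integral
    return sum(two_j + 1
               for two_j in range(abs(two_j1 - two_j2), two_j1 + two_j2 + 1, 2))

# must reproduce the product dimension (2 j1 + 1)(2 j2 + 1)
for two_j1 in range(11):
    for two_j2 in range(11):
        assert coupled_dims(two_j1, two_j2) == (two_j1 + 1) * (two_j2 + 1)
```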

\begin{aligned}{\left\lvert {j_1 m_1} \right\rangle} \otimes{\left\lvert {j_2 m_2} \right\rangle} \end{aligned} \hspace{\stretch{1}}(2.26)

\begin{aligned}J^2 {\left\lvert {j m} \right\rangle} &= j (j+1) \hbar^2 {\left\lvert {j m} \right\rangle} \\ J_z {\left\lvert {j m} \right\rangle} &= m \hbar {\left\lvert {j m} \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.27)

which we also denote by

\begin{aligned}{\left\lvert {j m ; j_1 j_2} \right\rangle},\end{aligned} \hspace{\stretch{1}}(2.29)

but will often omit the $; j_1 j_2$ portion.

With

\begin{aligned} \begin{array}{| l | l | l | l | l |} \hline j = & j_1 + j_2 & j_1 + j_2 -1 & \cdots & j_1 - j_2 \\ \hline \hline & \left\lvert j_1 + j_2, j_1 + j_2 \right\rangle & & & \\ \hline & \left\lvert j_1 + j_2, j_1 + j_2 - 1 \right\rangle & \left\lvert j_1 + j_2 - 1, j_1 + j_2 - 1 \right\rangle & & \\ \hline & & \left\lvert j_1 + j_2 - 1, j_1 + j_2 - 2 \right\rangle & & \\ \hline & \vdots & & & \left\lvert j_1 - j_2, j_1 - j_2 \right\rangle \\ \hline & \vdots & & & \vdots \\ \hline & \vdots & & & \left\lvert j_1 - j_2, -(j_1 - j_2) \right\rangle \\ \hline & \vdots & & & \\ \hline & \left\lvert j_1 + j_2, -(j_1 + j_2 - 1) \right\rangle & \left\lvert j_1 + j_2 -1, -(j_1 + j_2 - 1) \right\rangle & & \\ \hline & \left\lvert j_1 + j_2, -(j_1 + j_2) \right\rangle & & & \\ \hline \end{array} \end{aligned} \hspace{\stretch{1}}(2.30)

Look at

\begin{aligned}{\left\lvert {j_1 + j_2, j_1 + j_2} \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.31)

\begin{aligned}J_z{\left\lvert {j_1 + j_2, j_1 + j_2} \right\rangle}= (j_1 + j_2) \hbar{\left\lvert {j_1 + j_2, j_1 + j_2} \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.32)

\begin{aligned}J_z \Bigl({\left\lvert {j_1 m_1} \right\rangle} \otimes {\left\lvert {j_2 m_2} \right\rangle} \Bigr)= (m_1 + m_2) \hbar\Bigl({\left\lvert {j_1 m_1} \right\rangle} \otimes {\left\lvert {j_2 m_2} \right\rangle} \Bigr)\end{aligned} \hspace{\stretch{1}}(2.33)

we must have

\begin{aligned}{\left\lvert {j_1 + j_2, j_1 + j_2} \right\rangle}= e^{i\phi}\Bigl({\left\lvert {j_1 j_1} \right\rangle} \otimes {\left\lvert {j_2 j_2} \right\rangle} \Bigr)\end{aligned} \hspace{\stretch{1}}(2.34)

So ${\left\lvert {j_1 + j_2, j_1 + j_2} \right\rangle}$ must be a superposition of states ${\left\lvert {j_1 m_1} \right\rangle} \otimes {\left\lvert {j_2 m_2} \right\rangle}$ with $m_1 + m_2 = j_1 + j_2$. Choosing $e^{i\phi} = 1$ is called the Condon-Shortley convention.

\begin{aligned}{\left\lvert {j_1 + j_2, j_1 + j_2} \right\rangle}= {\left\lvert {j_1 j_1} \right\rangle} \otimes {\left\lvert {j_2 j_2} \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.35)

We now move down the column.

\begin{aligned}J_{-} {\left\lvert {j_1 + j_2, j_1 + j_2} \right\rangle}=\hbar\Bigl(2 (j_1 + j_2)\Bigr)^{1/2}{\left\lvert {j_1 + j_2, j_1 + j_2 - 1} \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.36)

So

\begin{aligned}{\left\lvert {j_1 + j_2, j_1 + j_2 - 1} \right\rangle}&=\frac{J_{-} {\left\lvert {j_1 + j_2, j_1 + j_2} \right\rangle}}{\hbar\Bigl(2 (j_1 + j_2)\Bigr)^{1/2}} \\ &=\frac{(J_{1-} + J_{2-}) {\left\lvert {j_1 j_1} \right\rangle} \otimes {\left\lvert {j_2 j_2} \right\rangle}}{\hbar\Bigl(2 (j_1 + j_2)\Bigr)^{1/2}}\end{aligned}

# References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

## PHY456H1F: Quantum Mechanics II. Lecture 16 (Taught by Prof J.E. Sipe). Hydrogen atom with spin, and two spin systems.

Posted by peeterjoot on November 2, 2011


# Disclaimer.

Peeter’s lecture notes from class. May not be entirely coherent.

# The hydrogen atom with spin.

READING: what chapter of [1] ?

For a spinless hydrogen atom, the Hamiltonian was

\begin{aligned}H = H_{\text{CM}} \otimes H_{\text{rel}}\end{aligned} \hspace{\stretch{1}}(2.1)

where we have independent Hamiltonians for the motion of the center of mass and the relative motion of the electron to the proton.

The basis kets for these could be designated ${\left\lvert {\mathbf{p}_\text{CM}} \right\rangle}$ and ${\left\lvert {\mathbf{p}_\text{rel}} \right\rangle}$ respectively.

Now we want to augment this, treating

\begin{aligned}H = H_{\text{CM}} \otimes H_{\text{rel}} \otimes H_{\text{s}}\end{aligned} \hspace{\stretch{1}}(2.2)

where $H_{\text{s}}$ is the Hamiltonian for the spin of the electron. We are neglecting the spin of the proton, but that could also be included (this turns out to be a lesser effect).

We’ll introduce a Hamiltonian including the dynamics of the relative motion and the electron spin

\begin{aligned}H_{\text{rel}} \otimes H_{\text{s}}\end{aligned} \hspace{\stretch{1}}(2.3)

Covering the Hilbert space for this system we’ll use basis kets

\begin{aligned}{\left\lvert {nlm\pm} \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.4)

\begin{aligned}\begin{aligned}{\left\lvert {nlm+} \right\rangle} &\rightarrow \begin{bmatrix}\left\langle{{\mathbf{r}+}} \vert {{nlm+}}\right\rangle \\ \left\langle{{\mathbf{r}-}} \vert {{nlm+}}\right\rangle \\ \end{bmatrix}=\begin{bmatrix}\Phi_{nlm}(\mathbf{r}) \\ 0\end{bmatrix} \\ {\left\lvert {nlm-} \right\rangle} &\rightarrow \begin{bmatrix}\left\langle{{\mathbf{r}+}} \vert {{nlm-}}\right\rangle \\ \left\langle{{\mathbf{r}-}} \vert {{nlm-}}\right\rangle \\ \end{bmatrix}=\begin{bmatrix}0 \\ \Phi_{nlm}(\mathbf{r}) \end{bmatrix}.\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.5)

Here $\mathbf{r}$ should be understood to really mean $\mathbf{r}_\text{rel}$. Our full Hamiltonian, after introducing a magnetic perturbation, is

\begin{aligned}H = \frac{P_\text{CM}^2}{2M} + \left(\frac{P_\text{rel}^2}{2\mu}-\frac{e^2}{R_\text{rel}}\right)- \boldsymbol{\mu}_0 \cdot \mathbf{B}- \boldsymbol{\mu}_s \cdot \mathbf{B}\end{aligned} \hspace{\stretch{1}}(2.6)

where

\begin{aligned}M = m_\text{proton} + m_\text{electron},\end{aligned} \hspace{\stretch{1}}(2.7)

and

\begin{aligned}\frac{1}{{\mu}} = \frac{1}{{m_\text{proton}}} + \frac{1}{{m_\text{electron}}}.\end{aligned} \hspace{\stretch{1}}(2.8)

For a uniform magnetic field

\begin{aligned}\boldsymbol{\mu}_0 &= \left( -\frac{e}{2 m c} \right) \mathbf{L} \\ \boldsymbol{\mu}_s &= g \left( -\frac{e}{2 m c} \right) \mathbf{S}\end{aligned} \hspace{\stretch{1}}(2.9)

We also have higher order terms (higher order multipoles) and relativistic corrections (like spin orbit coupling [2]).

# Two spins.

Example: Consider two electrons, one in each of two quantum dots.

\begin{aligned}H = H_{1} \otimes H_{2}\end{aligned} \hspace{\stretch{1}}(3.11)

where $H_1$ and $H_2$ are both spin Hamiltonians for respective 2D Hilbert spaces. Our complete Hilbert space is thus a 4D space.

We’ll write

\begin{aligned}\begin{aligned}{\left\lvert {+} \right\rangle}_1 \otimes {\left\lvert {+} \right\rangle}_2 &= {\left\lvert {++} \right\rangle} \\ {\left\lvert {+} \right\rangle}_1 \otimes {\left\lvert {-} \right\rangle}_2 &= {\left\lvert {+-} \right\rangle} \\ {\left\lvert {-} \right\rangle}_1 \otimes {\left\lvert {+} \right\rangle}_2 &= {\left\lvert {-+} \right\rangle} \\ {\left\lvert {-} \right\rangle}_1 \otimes {\left\lvert {-} \right\rangle}_2 &= {\left\lvert {--} \right\rangle} \end{aligned}\end{aligned} \hspace{\stretch{1}}(3.12)

Can introduce

\begin{aligned}\mathbf{S}_1 &= \mathbf{S}_1^{(1)} \otimes I^{(2)} \\ \mathbf{S}_2 &= I^{(1)} \otimes \mathbf{S}_2^{(2)}\end{aligned} \hspace{\stretch{1}}(3.13)

Here we “promote” each of the individual spin operators to spin operators in the complete Hilbert space.

We write

\begin{aligned}S_{1z}{\left\lvert {++} \right\rangle} &= \frac{\hbar}{2} {\left\lvert {++} \right\rangle} \\ S_{1z}{\left\lvert {+-} \right\rangle} &= \frac{\hbar}{2} {\left\lvert {+-} \right\rangle}\end{aligned} \hspace{\stretch{1}}(3.15)

Write

\begin{aligned}\mathbf{S} = \mathbf{S}_1 + \mathbf{S}_2,\end{aligned} \hspace{\stretch{1}}(3.17)

for the full spin angular momentum operator. The $z$ component of this operator is

\begin{aligned}S_z = S_{1z} + S_{2z}\end{aligned} \hspace{\stretch{1}}(3.18)

\begin{aligned}S_z{\left\lvert {++} \right\rangle} &= (S_{1z} + S_{2z}) {\left\lvert {++} \right\rangle} = \left( \frac{\hbar}{2} +\frac{\hbar}{2} \right) {\left\lvert {++} \right\rangle} = \hbar {\left\lvert {++} \right\rangle} \\ S_z{\left\lvert {+-} \right\rangle} &= (S_{1z} + S_{2z}) {\left\lvert {+-} \right\rangle} = \left( \frac{\hbar}{2} -\frac{\hbar}{2} \right) {\left\lvert {+-} \right\rangle} = 0 \\ S_z{\left\lvert {-+} \right\rangle} &= (S_{1z} + S_{2z}) {\left\lvert {-+} \right\rangle} = \left( -\frac{\hbar}{2} +\frac{\hbar}{2} \right) {\left\lvert {-+} \right\rangle} = 0 \\ S_z{\left\lvert {--} \right\rangle} &= (S_{1z} + S_{2z}) {\left\lvert {--} \right\rangle} = \left( -\frac{\hbar}{2} -\frac{\hbar}{2} \right) {\left\lvert {--} \right\rangle} = -\hbar {\left\lvert {--} \right\rangle} \end{aligned} \hspace{\stretch{1}}(3.19)

So, we find that ${\left\lvert {x x} \right\rangle}$ are all eigenkets of $S_z$. These will also all be eigenkets of $\mathbf{S}_1^2 = S_{1x}^2 +S_{1y}^2 +S_{1z}^2$ since we have

\begin{aligned}S_1^2 {\left\lvert {x x} \right\rangle} &= \hbar^2 \left(\frac{1}{{2}}\right) \left(1 + \frac{1}{{2}}\right) {\left\lvert {x x} \right\rangle} = \frac{3}{4} \hbar^2 {\left\lvert {x x} \right\rangle} \\ S_2^2 {\left\lvert {x x} \right\rangle} &= \hbar^2 \left(\frac{1}{{2}}\right) \left(1 + \frac{1}{{2}}\right) {\left\lvert {x x} \right\rangle} = \frac{3}{4} \hbar^2 {\left\lvert {x x} \right\rangle} \end{aligned} \hspace{\stretch{1}}(3.23)

\begin{aligned}\begin{aligned}S^2 &= (\mathbf{S}_1+\mathbf{S}_2) \cdot(\mathbf{S}_1+\mathbf{S}_2) \\ &= S_1^2 + S_2^2 + 2 \mathbf{S}_1 \cdot \mathbf{S}_2\end{aligned}\end{aligned} \hspace{\stretch{1}}(3.25)

Are all the product kets also eigenkets of $S^2$? Calculate

\begin{aligned}S^2 {\left\lvert {+-} \right\rangle} &= (S_1^2 + S_2^2 + 2 \mathbf{S}_1 \cdot \mathbf{S}_2) {\left\lvert {+-} \right\rangle} \\ &=\left(\frac{3}{4}\hbar^2+\frac{3}{4}\hbar^2\right){\left\lvert {+-} \right\rangle}+ 2 S_{1x} S_{2x} {\left\lvert {+-} \right\rangle} + 2 S_{1y} S_{2y} {\left\lvert {+-} \right\rangle} + 2 S_{1z} S_{2z} {\left\lvert {+-} \right\rangle} \end{aligned}

For the $z$ mixed terms, we have

\begin{aligned}2 S_{1z} S_{2z} {\left\lvert {+-} \right\rangle} = 2 \left(\frac{\hbar}{2}\right)\left(-\frac{\hbar}{2}\right){\left\lvert {+-} \right\rangle}\end{aligned} \hspace{\stretch{1}}(3.26)

So

\begin{aligned}S^2{\left\lvert {+-} \right\rangle} = \hbar^2 {\left\lvert {+-} \right\rangle} + 2 S_{1x} S_{2x} {\left\lvert {+-} \right\rangle} + 2 S_{1y} S_{2y} {\left\lvert {+-} \right\rangle} \end{aligned} \hspace{\stretch{1}}(3.27)

Since we have chosen the $z$ direction as our spin quantization axis, with

\begin{aligned}{\left\lvert {+} \right\rangle} &\rightarrow \begin{bmatrix}1 \\ 0\end{bmatrix} \\ {\left\lvert {-} \right\rangle} &\rightarrow \begin{bmatrix}0 \\ 1 \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.28)

We have

\begin{aligned}S_x{\left\lvert {+} \right\rangle} &\rightarrow \frac{\hbar}{2} \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}\begin{bmatrix}1 \\ 0\end{bmatrix} =\frac{\hbar}{2}\begin{bmatrix}0 \\ 1 \end{bmatrix}=\frac{\hbar}{2} {\left\lvert {-} \right\rangle} \\ S_x{\left\lvert {-} \right\rangle} &\rightarrow \frac{\hbar}{2} \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}\begin{bmatrix}0 \\ 1 \end{bmatrix} =\frac{\hbar}{2}\begin{bmatrix}1 \\ 0 \end{bmatrix}=\frac{\hbar}{2} {\left\lvert {+} \right\rangle} \\ S_y{\left\lvert {+} \right\rangle} &\rightarrow \frac{\hbar}{2} \begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix}\begin{bmatrix}1 \\ 0 \end{bmatrix} =\frac{i\hbar}{2}\begin{bmatrix}0 \\ 1 \end{bmatrix}=\frac{i\hbar}{2} {\left\lvert {-} \right\rangle} \\ S_y{\left\lvert {-} \right\rangle} &\rightarrow \frac{\hbar}{2} \begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix}\begin{bmatrix}0 \\ 1 \end{bmatrix} =\frac{-i\hbar}{2}\begin{bmatrix}1 \\ 0 \end{bmatrix}=-\frac{i\hbar}{2} {\left\lvert {+} \right\rangle} \\ \end{aligned}

With these, we are able to arrive at the action of $S^2$ on our mixed composite state

\begin{aligned}S^2{\left\lvert {+-} \right\rangle} = \hbar^2 ({\left\lvert {+-} \right\rangle} + {\left\lvert {-+} \right\rangle} ).\end{aligned} \hspace{\stretch{1}}(3.30)

For the action on the ${\left\lvert {++} \right\rangle}$ state we have

\begin{aligned}S^2 {\left\lvert {++} \right\rangle} &=\left(\frac{3}{4}\hbar^2 +\frac{3}{4}\hbar^2\right){\left\lvert {++} \right\rangle} + 2 \frac{\hbar^2}{4} {\left\lvert {--} \right\rangle} + 2 i^2 \frac{\hbar^2}{4} {\left\lvert {--} \right\rangle} +2 \left(\frac{\hbar}{2}\right)\left(\frac{\hbar}{2}\right){\left\lvert {++} \right\rangle} \\ &=2 \hbar^2 {\left\lvert {++} \right\rangle} \\ \end{aligned}

and on the ${\left\lvert {--} \right\rangle}$ state we have

\begin{aligned}S^2 {\left\lvert {--} \right\rangle} &=\left(\frac{3}{4}\hbar^2 +\frac{3}{4}\hbar^2\right){\left\lvert {--} \right\rangle} + 2 \frac{(-\hbar)^2}{4} {\left\lvert {++} \right\rangle} + 2 i^2 \frac{\hbar^2}{4} {\left\lvert {++} \right\rangle} +2 \left(-\frac{\hbar}{2}\right)\left(-\frac{\hbar}{2}\right){\left\lvert {--} \right\rangle} \\ &=2 \hbar^2 {\left\lvert {--} \right\rangle} \end{aligned}

All of this can be assembled into a tidier matrix form

\begin{aligned}S^2\rightarrow \hbar^2\begin{bmatrix}2 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 2 \\ \end{bmatrix},\end{aligned} \hspace{\stretch{1}}(3.31)

where the matrix is taken with respect to the (ordered) basis

\begin{aligned}\{{\left\lvert {++} \right\rangle},{\left\lvert {+-} \right\rangle},{\left\lvert {-+} \right\rangle},{\left\lvert {--} \right\rangle}\}.\end{aligned} \hspace{\stretch{1}}(3.32)
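As a quick numerical sanity check of this matrix (a sketch using numpy, which is not part of the original notes), we can build $S^2 = (\mathbf{S}_1 + \mathbf{S}_2)^2$ from Kronecker products of the spin $1/2$ operators, working in units where $\hbar = 1$:

```python
import numpy as np

hbar = 1.0  # work in units where hbar = 1

# Spin-1/2 operators S_i = (hbar/2) sigma_i, from the Pauli matrices
sx = (hbar / 2) * np.array([[0, 1], [1, 0]], dtype=complex)
sy = (hbar / 2) * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = (hbar / 2) * np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Embed in the composite space: S1_i = s_i (x) I, S2_i = I (x) s_i,
# so the basis ordering is {|++>, |+->, |-+>, |-->}
S1 = [np.kron(s, I2) for s in (sx, sy, sz)]
S2 = [np.kron(I2, s) for s in (sx, sy, sz)]

# Total spin components and S^2 = sum_i (S1_i + S2_i)^2
Stot = [a + b for a, b in zip(S1, S2)]
S_squared = sum(s @ s for s in Stot)

print(np.real(S_squared))  # matches eq. (3.31) with hbar = 1
```

The ordering of the Kronecker factors matters: with $\left\lvert + \right\rangle \rightarrow (1, 0)^\mathrm{T}$, `np.kron` produces exactly the ordered basis of eq. (3.32).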

However,

\begin{aligned}\left[{S^2},{S_z}\right] &= 0 \\ \left[{S_i},{S_j}\right] &= i \hbar \sum_k \epsilon_{ijk} S_k\end{aligned} \hspace{\stretch{1}}(3.33)

It should therefore be possible to find simultaneous eigenkets of $S^2$ and $S_z$

\begin{aligned}S^2 {\left\lvert {s m_s} \right\rangle} &= s(s+1)\hbar^2 {\left\lvert {s m_s} \right\rangle} \\ S_z {\left\lvert {s m_s} \right\rangle} &= \hbar m_s {\left\lvert {s m_s} \right\rangle} \end{aligned} \hspace{\stretch{1}}(3.35)

An orthonormal set of eigenkets of $S^2$ and $S_z$ is found to be

\begin{aligned}\begin{array}{l l}{\left\lvert {++} \right\rangle} & \mbox{$s = 1$ and $m_s = 1$} \\ \frac{1}{{\sqrt{2}}} \left( {\left\lvert {+-} \right\rangle} + {\left\lvert {-+} \right\rangle} \right) & \mbox{$s = 1$ and $m_s = 0$} \\ {\left\lvert {--} \right\rangle} & \mbox{$s = 1$ and $m_s = -1$} \\ \frac{1}{{\sqrt{2}}} \left( {\left\lvert {+-} \right\rangle} - {\left\lvert {-+} \right\rangle} \right) & \mbox{$s = 0$ and $m_s = 0$}\end{array}\end{aligned} \hspace{\stretch{1}}(3.37)

The first three kets here can be grouped into a triplet in a 3D Hilbert space, whereas the last is treated as a singlet in a 1D Hilbert space.
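We can also verify the triplet and singlet combinations directly against the $4 \times 4$ matrix of eq. (3.31). This is an illustrative numpy sketch, not part of the original notes, again with $\hbar = 1$:

```python
import numpy as np

hbar = 1.0
# S^2 in the (ordered) product basis {|++>, |+->, |-+>, |-->}, eq. (3.31)
S2 = hbar**2 * np.array([[2, 0, 0, 0],
                         [0, 1, 1, 0],
                         [0, 1, 1, 0],
                         [0, 0, 0, 2]], dtype=float)

# The s = 1, m_s = 0 triplet member and the s = 0 singlet
triplet_0 = np.array([0, 1, 1, 0]) / np.sqrt(2)   # (|+-> + |-+>)/sqrt(2)
singlet   = np.array([0, 1, -1, 0]) / np.sqrt(2)  # (|+-> - |-+>)/sqrt(2)

# Eigenvalues s(s+1) hbar^2: 2 hbar^2 for s = 1, and 0 for s = 0
print(S2 @ triplet_0)  # 2 * triplet_0
print(S2 @ singlet)    # the zero vector
```

The kets $\left\lvert ++ \right\rangle$ and $\left\lvert -- \right\rangle$ are already eigenkets with eigenvalue $2\hbar^2$, since the first and last rows of the matrix are diagonal.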

Forming the grouping

\begin{aligned}H = H_1 \otimes H_2,\end{aligned} \hspace{\stretch{1}}(3.38)

we can write

\begin{aligned}\frac{1}{{2}} \otimes \frac{1}{{2}} = 1 \oplus 0\end{aligned} \hspace{\stretch{1}}(3.39)

where the $1$ and $0$ here refer to the spin index $s$.

## Other examples

Consider, perhaps, the $l=5$ state of the hydrogen atom

\begin{aligned}J_1^2 {\left\lvert {j_1 m_1} \right\rangle} &= j_1(j_1+1)\hbar^2 {\left\lvert {j_1 m_1} \right\rangle} \\ J_{1z} {\left\lvert {j_1 m_1} \right\rangle} &= \hbar m_1 {\left\lvert {j_1 m_1} \right\rangle} \end{aligned} \hspace{\stretch{1}}(3.40)

\begin{aligned}J_2^2 {\left\lvert {j_2 m_2} \right\rangle} &= j_2(j_2+1)\hbar^2 {\left\lvert {j_2 m_2} \right\rangle} \\ J_{2z} {\left\lvert {j_2 m_2} \right\rangle} &= \hbar m_2 {\left\lvert {j_2 m_2} \right\rangle} \end{aligned} \hspace{\stretch{1}}(3.42)

Consider the Hilbert space spanned by ${\left\lvert {j_1 m_1} \right\rangle} \otimes {\left\lvert {j_2 m_2} \right\rangle}$, a $(2 j_1 + 1)(2 j_2 + 1)$ dimensional space. How do we find the eigenkets of $J^2$ and $J_z$?

# References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

[2] Wikipedia. Spin–orbit interaction — Wikipedia, the free encyclopedia [online]. 2011. [Online; accessed 2-November-2011]. http://en.wikipedia.org/w/index.php?title=Spin\%E2\%80\%93orbit_interaction&oldid=451606718.

## PHY456H1F: Quantum Mechanics II. Lecture 13 (Taught by Prof J.E. Sipe). Spin and spinors (cont.)

Posted by peeterjoot on October 24, 2011

[Click here for a PDF of this post with nicer formatting and figures if the post had any (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

# Disclaimer.

Peeter’s lecture notes from class. May not be entirely coherent.

# Multiple wavefunction spaces.

Reading: See section 26.5 in the text [1].

We identified

\begin{aligned}\psi(\mathbf{r}) = \left\langle{{ \mathbf{r}}} \vert {{\psi}}\right\rangle\end{aligned} \hspace{\stretch{1}}(2.1)

with improper basis kets

\begin{aligned}{\left\lvert {\mathbf{r}} \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.2)

Now introduce many function spaces

\begin{aligned}\begin{bmatrix}\psi_1(\mathbf{r}) \\ \psi_2(\mathbf{r}) \\ \vdots \\ \psi_\gamma(\mathbf{r})\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(2.3)

with improper (unnormalizable) basis kets

\begin{aligned}{\left\lvert {\mathbf{r} \alpha} \right\rangle}, \qquad \alpha \in \{ 1, 2, \cdots, \gamma \}\end{aligned} \hspace{\stretch{1}}(2.4)

\begin{aligned}\psi_\alpha(\mathbf{r}) = \left\langle{{ \mathbf{r}\alpha}} \vert {{\psi}}\right\rangle\end{aligned} \hspace{\stretch{1}}(2.5)

for an abstract ket ${\left\lvert {\psi} \right\rangle}$

We will try taking this Hilbert space

\begin{aligned}H = H_o \otimes H_s\end{aligned} \hspace{\stretch{1}}(2.6)

where $H_o$ is the Hilbert space of “scalar” QM (“o” for orbital and translational motion), associated with kets ${\left\lvert {\mathbf{r}} \right\rangle}$, and $H_s$ is the Hilbert space associated with the $\gamma$ components ${\left\lvert {\alpha} \right\rangle}$. This latter space we will label the “spin” or “internal physics” space (class suggestion: or perhaps intrinsic). It is unconnected with translational motion.

We build up the basis kets for $H$ by direct products

\begin{aligned}{\left\lvert {\mathbf{r} \alpha} \right\rangle} = {\left\lvert {\mathbf{r}} \right\rangle} \otimes {\left\lvert {\alpha} \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.7)

Now, for a rotated ket we seek a general angular momentum operator $\mathbf{J}$ such that

\begin{aligned}{\left\lvert {\psi'} \right\rangle} = e^{-i \theta \hat{\mathbf{n}} \cdot \mathbf{J}/\hbar} {\left\lvert {\psi} \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.8)

where

\begin{aligned}\mathbf{J} = \mathbf{L} + \mathbf{S},\end{aligned} \hspace{\stretch{1}}(2.9)

where $\mathbf{L}$ acts over kets in $H_o$, “orbital angular momentum”, and $\mathbf{S}$ is the “spin angular momentum”, acting on kets in $H_s$.

Strictly speaking this would be written as direct products involving the respective identities

\begin{aligned}\mathbf{J} = \mathbf{L} \otimes I_s + I_o \otimes \mathbf{S}.\end{aligned} \hspace{\stretch{1}}(2.10)

We require

\begin{aligned}\left[{J_i},{J_j}\right] = i \hbar \sum \epsilon_{i j k} J_k\end{aligned} \hspace{\stretch{1}}(2.11)

Since $\mathbf{L}$ and $\mathbf{S}$ act over separate Hilbert spaces, these operators commute

\begin{aligned}\left[{L_i},{S_j}\right] = 0\end{aligned} \hspace{\stretch{1}}(2.12)

We also know that

\begin{aligned}\left[{L_i},{L_j}\right] = i \hbar \sum \epsilon_{i j k} L_k\end{aligned} \hspace{\stretch{1}}(2.13)

so

\begin{aligned}\left[{S_i},{S_j}\right] = i \hbar \sum \epsilon_{i j k} S_k, \end{aligned} \hspace{\stretch{1}}(2.14)

as expected. We could, in principle, have more complicated operators, where this would not be true. This is a proposal of sorts. Given such a definition of operators, let’s see where we can go with it.

For matrix elements of $\mathbf{L}$ we have

\begin{aligned}{\left\langle {\mathbf{r}} \right\rvert} L_x {\left\lvert {\mathbf{r}'} \right\rangle} = -i \hbar \left( y \frac{\partial {}}{\partial {z}}-z \frac{\partial {}}{\partial {y}} \right) \delta(\mathbf{r}- \mathbf{r}')\end{aligned} \hspace{\stretch{1}}(2.15)

What are the matrix elements of ${\left\langle {\alpha} \right\rvert} S_i {\left\lvert {\alpha'} \right\rangle}$? From the commutation relationships we know

\begin{aligned}\sum_{\alpha'' = 1}^\gamma {\left\langle {\alpha} \right\rvert} S_i {\left\lvert {\alpha''} \right\rangle}{\left\langle {\alpha''} \right\rvert} S_j {\left\lvert {\alpha'} \right\rangle}-\sum_{\alpha'' = 1}^\gamma {\left\langle {\alpha} \right\rvert} S_j {\left\lvert {\alpha''} \right\rangle}{\left\langle {\alpha''} \right\rvert} S_i {\left\lvert {\alpha'} \right\rangle}=i \hbar \sum_k \epsilon_{ijk} {\left\langle {\alpha} \right\rvert} S_k {\left\lvert {\alpha'} \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.16)

We see that our matrix elements are tightly constrained by our choice of commutation relationships. We have $\gamma^2$ such matrix elements for each $S_i$, and it turns out that it is possible to choose (or find) matrix elements that satisfy these constraints.

The ${\left\langle {\alpha} \right\rvert} S_i {\left\lvert {\alpha'} \right\rangle}$ matrix elements that satisfy these constraints are found by imposing the commutation relations

\begin{aligned}\left[{S_i},{S_j}\right] = i \hbar \sum \epsilon_{i j k} S_k, \end{aligned} \hspace{\stretch{1}}(2.17)

and with

\begin{aligned}S^2 = \sum_j S_j^2,\end{aligned} \hspace{\stretch{1}}(2.18)

(this is just a definition). We find

\begin{aligned}\left[{S^2},{S_i}\right] = 0\end{aligned} \hspace{\stretch{1}}(2.19)

and seeking eigenkets

\begin{aligned}S^2 {\left\lvert {s m_s} \right\rangle} &= s(s+1) \hbar^2 {\left\lvert {s m_s} \right\rangle} \\ S_z {\left\lvert {s m_s} \right\rangle} &= \hbar m_s {\left\lvert {s m_s} \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.20)

We find solutions for $s = 1/2, 1, 3/2, 2, \cdots$, where $m_s \in \{-s, \cdots, s\}$; i.e., there are $2 s + 1$ possible vectors ${\left\lvert {s m_s} \right\rangle}$ for a given $s$.

\begin{aligned}s = \frac{1}{{2}} &\implies \gamma = 2 \\ s = 1 &\implies \gamma = 3 \\ s = \frac{3}{2} &\implies \gamma = 4 \end{aligned}

We start with the algebra (mathematically the Lie algebra), and one can compute the Hilbert spaces that are consistent with these algebraic constraints.

We assume that for any type of given particle $S$ is fixed, where this has to do with the nature of the particle.

\begin{aligned}s = \frac{1}{{2}} &\qquad \text{A spin $1/2$ particle} \\ s = 1 &\qquad \text{A spin $1$ particle} \\ s = \frac{3}{2} &\qquad \text{A spin $3/2$ particle}\end{aligned}

$S$ is fixed once we decide that we are talking about a specific type of particle.

A non-relativistic particle in this framework has two nondynamical quantities. One is the mass $m$ and we now introduce a new invariant, the spin $s$ of the particle.

This has been introduced as a kind of strategy. It is something that we are going to try, and it turns out that it works, agreeing well with experiment.

In 1939 Wigner asked, “what constraints do I get if I combine the constraints of quantum mechanics with special relativity?” It turns out that in the non-relativistic limit, we get just this.

There’s a subtlety here, because we get into some logical trouble with the photon with a rest mass of zero ($m = 0$ is certainly allowed as a value of our invariant $m$ above). We can’t stop or slow down a photon, so orbital angular momentum is only a conceptual idea. Really, the orbital angular momentum and the spin angular momentum cannot be separated out for a photon, so talking of a spin $1$ particle really means spin as in $\mathbf{J}$, and not spin as in $\mathbf{L}$.

## Spin $1/2$ particles

Reading: See section 26.6 in the text [1].

Let’s start talking about the simplest case. This includes electrons, the other leptons, and quarks (integer spin particles like photons and the weakly interacting W and Z bosons are excluded).

\begin{aligned}s &= \frac{1}{{2}} \\ m_s &= \pm \frac{1}{{2}}\end{aligned} \hspace{\stretch{1}}(2.22)

states

\begin{aligned}{\left\lvert {s m_s} \right\rangle} = {\left\lvert { \frac{1}{{2}}, \frac{1}{{2}} } \right\rangle},{\left\lvert { \frac{1}{{2}}, -\frac{1}{{2}} } \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.24)

Note there is a convention

\begin{aligned}{\left\lvert { \frac{1}{{2}} \bar{\frac{1}{{2}}} } \right\rangle} &= {\left\lvert { \frac{1}{{2}}, -\frac{1}{{2}} } \right\rangle} \\ {\left\lvert { \frac{1}{{2}} \frac{1}{{2}} } \right\rangle} &= {\left\lvert { \frac{1}{{2}}, \frac{1}{{2}} } \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.25)

\begin{aligned}\begin{aligned}S^2 {\left\lvert {\frac{1}{{2}} m_s} \right\rangle} &= \frac{1}{{2}} \left( \frac{1}{{2}} + 1 \right) \hbar^2 {\left\lvert {\frac{1}{{2}} m_s} \right\rangle} \\ &=\frac{3}{4} \hbar^2 {\left\lvert {\frac{1}{{2}} m_s} \right\rangle} \\ \end{aligned}\end{aligned} \hspace{\stretch{1}}(2.27)

\begin{aligned}S_z {\left\lvert {\frac{1}{{2}} m_s} \right\rangle} = m_s \hbar {\left\lvert {\frac{1}{{2}} m_s} \right\rangle} \end{aligned} \hspace{\stretch{1}}(2.28)

For shorthand

\begin{aligned}{\left\lvert { \frac{1}{{2}} \frac{1}{{2}} } \right\rangle} &= {\left\lvert { + } \right\rangle} \\ {\left\lvert { \frac{1}{{2}} \bar{\frac{1}{{2}}} } \right\rangle} &= {\left\lvert { - } \right\rangle}\end{aligned} \hspace{\stretch{1}}(2.29)

\begin{aligned}S^2 \rightarrow \frac{3}{4} \hbar^2 \begin{bmatrix}1 & 0 \\ 0 & 1\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(2.31)

\begin{aligned}S_z \rightarrow \frac{\hbar}{2}\begin{bmatrix}1 & 0 \\ 0 & -1\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(2.32)

One can easily work out from the commutation relationships that

\begin{aligned}S_x \rightarrow \frac{\hbar}{2}\begin{bmatrix}0 & 1 \\ 1 & 0\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(2.33)

\begin{aligned}S_y \rightarrow \frac{\hbar}{2}\begin{bmatrix}0 & -i \\ i & 0\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(2.34)
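As a check of the claim that these matrices follow from the commutation relationships, here is a small numpy sketch (not part of the original notes; $\hbar = 1$ units) verifying that the spin $1/2$ matrices above satisfy both the commutator algebra and $S^2 = \frac{3}{4}\hbar^2 I$:

```python
import numpy as np

hbar = 1.0
Sx = (hbar / 2) * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = (hbar / 2) * np.array([[0, -1j], [1j, 0]], dtype=complex)
Sz = (hbar / 2) * np.array([[1, 0], [0, -1]], dtype=complex)

def comm(a, b):
    """Matrix commutator [a, b] = a b - b a."""
    return a @ b - b @ a

# [S_i, S_j] = i hbar sum_k epsilon_{ijk} S_k, checked cyclically
assert np.allclose(comm(Sx, Sy), 1j * hbar * Sz)
assert np.allclose(comm(Sy, Sz), 1j * hbar * Sx)
assert np.allclose(comm(Sz, Sx), 1j * hbar * Sy)

# S^2 = (3/4) hbar^2 I, i.e. s(s+1) hbar^2 for s = 1/2
S_squared = Sx @ Sx + Sy @ Sy + Sz @ Sz
assert np.allclose(S_squared, (3 / 4) * hbar**2 * np.eye(2))
print("spin-1/2 algebra verified")
```

The same checks pass for the $3 \times 3$ spin $1$ or $4 \times 4$ spin $3/2$ representations, with $S^2 = s(s+1)\hbar^2 I$ in each case.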

We’ll start with adding $\mathbf{L}$ into the mix on Wednesday.

# References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

## PHY456H1F: Quantum Mechanics II. Lecture 11 (Taught by Prof J.E. Sipe). Spin and Spinors

Posted by peeterjoot on October 17, 2011

[Click here for a PDF of this post with nicer formatting and figures if the post had any (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

# Disclaimer.

Peeter’s lecture notes from class. May not be entirely coherent.

# Generators.

Covered in section 26 of the text [1].

## Example: Time translation

\begin{aligned}{\lvert {\psi(t)} \rangle} = e^{-i H t/\hbar} {\lvert {\psi(0)} \rangle} .\end{aligned} \hspace{\stretch{1}}(2.1)

The Hamiltonian “generates” evolution (or translation) in time.

## Example: Spatial translation

\begin{aligned}{\lvert {\mathbf{r} + \mathbf{a}} \rangle} = e^{-i \mathbf{a} \cdot \mathbf{P}/\hbar} {\lvert {\mathbf{r}} \rangle}.\end{aligned} \hspace{\stretch{1}}(2.2)

\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{qmTwoL11fig1}
\caption{Vector translation.}

\end{figure}

$\mathbf{P}$ is the operator that generates translations. Written out, we have

\begin{aligned}\begin{aligned}e^{-i \mathbf{a} \cdot \mathbf{P}/\hbar} &= e^{- i (a_x P_x + a_y P_y + a_z P_z)/\hbar} \\ &= e^{- i a_x P_x/\hbar}e^{- i a_y P_y/\hbar}e^{- i a_z P_z/\hbar},\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.3)

where the factorization was possible because $P_x$, $P_y$, and $P_z$ commute

\begin{aligned}\left[{P_i},{P_j}\right] = 0,\end{aligned} \hspace{\stretch{1}}(2.4)

for any $i, j$ (including $i = j$, as I dumbly questioned in class … this is a commutator, so $\left[{P_i},{P_i}\right] = P_i P_i - P_i P_i = 0$).

The fact that the $P_i$ commute means that successive translations can be done in any order and have the same result.

In class we were rewarded with a graphic demo of translation component commutation as Professor Sipe pulled a giant wood carving of a cat (or tiger?) out from beside the desk and proceeded to translate it around on the desk in two different orders, with the cat ending up in the same place each time.

### Exponential commutation.

Note that in general

\begin{aligned}e^{A + B} \ne e^A e^B,\end{aligned} \hspace{\stretch{1}}(2.5)

unless $\left[{A},{B}\right] = 0$. To show this one can compare

\begin{aligned}\begin{aligned}e^{A + B} &= 1 + A + B + \frac{1}{{2}}(A + B)^2 + \cdots \\ &= 1 + A + B + \frac{1}{{2}}(A^2 + A B + BA + B^2) + \cdots \\ \end{aligned}\end{aligned} \hspace{\stretch{1}}(2.6)

and

\begin{aligned}\begin{aligned}e^A e^B &= \left(1 + A + \frac{1}{{2}}A^2 + \cdots\right)\left(1 + B + \frac{1}{{2}}B^2 + \cdots\right) \\ &= 1 + A + B + \frac{1}{{2}}( A^2 + 2 A B + B^2 ) + \cdots\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.7)

Comparing the second order (for example) we see that we must have for equality

\begin{aligned}A B + B A = 2 A B,\end{aligned} \hspace{\stretch{1}}(2.8)

or

\begin{aligned}B A = A B,\end{aligned} \hspace{\stretch{1}}(2.9)

or

\begin{aligned}\left[{A},{B}\right] = 0\end{aligned} \hspace{\stretch{1}}(2.10)
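This is easy to see numerically. The following sketch (my addition, assuming scipy is available for the matrix exponential) compares $e^{A+B}$ with $e^A e^B$ for a non-commuting and a commuting pair:

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential

# A non-commuting pair: [A, B] = A B - B A is nonzero
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])
print(np.allclose(expm(A + B), expm(A) @ expm(B)))  # False

# Diagonal matrices commute, so the exponential factors
C = np.diag([1.0, 2.0])
D = np.diag([3.0, -1.0])
print(np.allclose(expm(C + D), expm(C) @ expm(D)))  # True
```

For the pair above, $e^A = I + A$ and $e^B = I + B$ exactly (both are nilpotent), while $e^{A+B}$ involves all powers of $A + B$, which is why the products differ.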

### Translating a ket

If we consider the quantity

\begin{aligned}e^{-i \mathbf{a} \cdot \mathbf{P}/\hbar} {\lvert {\psi} \rangle} = {\lvert {\psi'} \rangle} ,\end{aligned} \hspace{\stretch{1}}(2.11)

does this ket “translated” by $\mathbf{a}$ make any sense? The vector $\mathbf{a}$ lives in a 3D space and our ket ${\lvert {\psi} \rangle}$ lives in Hilbert space. A quantity like this deserves some careful thought and is the subject of some such thought in the Interpretations of Quantum mechanics course. For now, we can think of the operator and ket as a “gadget” that prepares a state.

A student in class pointed out that ${\lvert {\psi} \rangle}$ can be dependent on many degrees of freedom, for example, the positions of eight different particles. This translation gadget in such a case acts on the whole kit and kaboodle.

Now consider the matrix element

\begin{aligned}\left\langle{\mathbf{r}} \vert {{\psi'}}\right\rangle = {\langle {\mathbf{r}} \rvert} e^{-i \mathbf{a} \cdot \mathbf{P}/\hbar} {\lvert {\psi} \rangle}.\end{aligned} \hspace{\stretch{1}}(2.12)

Note that

\begin{aligned}{\langle {\mathbf{r}} \rvert} e^{-i \mathbf{a} \cdot \mathbf{P}/\hbar} &= \left( e^{i \mathbf{a} \cdot \mathbf{P}/\hbar} {\lvert {\mathbf{r}} \rangle} \right)^\dagger \\ &= \left( {\lvert {\mathbf{r} - \mathbf{a}} \rangle} \right)^\dagger,\end{aligned}

so

\begin{aligned}\left\langle{\mathbf{r}} \vert {{\psi'}}\right\rangle = \left\langle{{\mathbf{r} -\mathbf{a}}} \vert {{\psi}}\right\rangle,\end{aligned} \hspace{\stretch{1}}(2.13)

or

\begin{aligned}\psi'(\mathbf{r}) = \psi(\mathbf{r} - \mathbf{a})\end{aligned} \hspace{\stretch{1}}(2.14)

This is what we expect of a translated function, as illustrated in figure (\ref{fig:qmTwoL11:qmTwoL11fig2})
\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{qmTwoL11fig2}
\caption{Active spatial translation.}
\end{figure}
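The relation $\psi'(\mathbf{r}) = \psi(\mathbf{r} - \mathbf{a})$ can be demonstrated numerically in one dimension, since on plane waves $P \rightarrow \hbar k$, so $e^{-i a P/\hbar}$ acts in Fourier space as multiplication by $e^{-i k a}$. This is an illustrative sketch of my own, not from the lecture:

```python
import numpy as np

# Sampled periodic "wavefunction" on a grid of N points over [0, 2 pi)
N = 256
dx = 2 * np.pi / N
x = np.arange(N) * dx
psi = np.exp(np.cos(x))  # any smooth periodic function

# Translate by a = m grid spacings using the Fourier representation
# of exp(-i a P / hbar): multiply each mode by exp(-i k a)
m = 17
a = m * dx
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
psi_shifted = np.fft.ifft(np.fft.fft(psi) * np.exp(-1j * k * a))

# psi'(x) = psi(x - a), i.e. the samples rolled forward by m points
assert np.allclose(psi_shifted.real, np.roll(psi, m))
print("translation operator acts as psi(x - a)")
```

Choosing the shift as an integer number of grid points just makes the comparison against `np.roll` exact; the Fourier-space operator handles arbitrary real $a$ equally well.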

## Example: Spatial rotation

We’ve been introduced to the angular momentum operator

\begin{aligned}\mathbf{L} = \mathbf{R} \times \mathbf{P},\end{aligned} \hspace{\stretch{1}}(2.15)

where

\begin{aligned}L_x &= Y P_z - Z P_y \\ L_y &= Z P_x - X P_z \\ L_z &= X P_y - Y P_x.\end{aligned} \hspace{\stretch{1}}(2.16)

We also found that

\begin{aligned}\left[{L_i},{L_j}\right] = i \hbar \sum_k \epsilon_{ijk} L_k.\end{aligned} \hspace{\stretch{1}}(2.19)

These non-zero commutators show that the components of angular momentum do not commute.

Define

\begin{aligned}{\lvert {\mathcal{R}(\mathbf{r})} \rangle} = e^{-i \theta \hat{\mathbf{n}} \cdot \mathbf{L}/\hbar}{\lvert {\mathbf{r}} \rangle} .\end{aligned} \hspace{\stretch{1}}(2.20)

This is the vector that we get by actively rotating the vector $\mathbf{r}$ by an angle $\theta$ counterclockwise about $\hat{\mathbf{n}}$, as in figure (\ref{fig:qmTwoL11:qmTwoL11fig3})

\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{qmTwoL11fig3}
\caption{Active vector rotations}
\end{figure}

An active rotation rotates the vector, leaving the coordinate system fixed, whereas a passive rotation is one for which the coordinate system is rotated, and the vector is left fixed.

Note that rotations do not commute. Suppose that we have a pair of rotations as in figure (\ref{fig:qmTwoL11:qmTwoL11fig4})
\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{qmTwoL11fig4}
\caption{A example pair of non-commuting rotations.}
\end{figure}

Again, we get the graphic demo, with Professor Sipe rotating the big wooden cat sculpture. Did he bring that in to class just to make this point? (Too bad I missed the first couple of minutes of the lecture.)

Rather amusingly, he points out that most things in life do not commute. We get much different results if we apply the operations of putting water into the teapot and turning on the stove in different orders.

### Rotating a ket

\begin{aligned}{\lvert {\psi'} \rangle} = e^{-i \theta \hat{\mathbf{n}} \cdot \mathbf{L}/\hbar }{\lvert {\psi} \rangle},\end{aligned} \hspace{\stretch{1}}(2.21)

we can form the matrix element

\begin{aligned}\left\langle{\mathbf{r}} \vert {{\psi'}}\right\rangle = {\langle {\mathbf{r}} \rvert} e^{-i \theta \hat{\mathbf{n}} \cdot \mathbf{L}/\hbar }{\lvert {\psi} \rangle}.\end{aligned} \hspace{\stretch{1}}(2.22)

In this we have

\begin{aligned}{\langle {\mathbf{r}} \rvert} e^{-i \theta \hat{\mathbf{n}} \cdot \mathbf{L}/\hbar }&=\left( e^{i \theta \hat{\mathbf{n}} \cdot \mathbf{L}/\hbar } {\lvert {\mathbf{r}} \rangle} \right)^\dagger \\ &=\left( {\lvert {\mathcal{R}^{-1}(\mathbf{r}) } \rangle} \right)^\dagger,\end{aligned}

so

\begin{aligned}\left\langle{\mathbf{r}} \vert {{\psi'}}\right\rangle = \left\langle{{\mathcal{R}^{-1}(\mathbf{r}) }} \vert {{\psi}}\right\rangle,\end{aligned} \hspace{\stretch{1}}(2.23)

or

\begin{aligned}\psi'(\mathbf{r}) = \psi( \mathcal{R}^{-1}(\mathbf{r}) )\end{aligned} \hspace{\stretch{1}}(2.24)

# Generalizations.

Recall what you did last year, where $H$, $\mathbf{P}$, and $\mathbf{L}$ were defined mechanically. We found

\begin{itemize}
\item $H$ generates time evolution (or translation in time).
\item $\mathbf{P}$ generates spatial translation.
\item $\mathbf{L}$ generates spatial rotation.
\end{itemize}

For our mechanical definitions we have

\begin{aligned}\left[{P_i},{P_j}\right] = 0,\end{aligned} \hspace{\stretch{1}}(3.25)

and

\begin{aligned}\left[{L_i},{L_j}\right] = i \hbar \sum_k \epsilon_{ijk} L_k.\end{aligned} \hspace{\stretch{1}}(3.26)

These are the relations that show us the way translations and rotations combine. We want to move up to a higher plane, a new level of abstraction. To do so we define $H$ as the operator that generates time evolution. If we have a theory that covers the behaviour of how anything evolves in time, $H$ encodes the rules for this time evolution.

Define $\mathbf{P}$ as the operator that generates translations in space.

Define $\mathbf{J}$ as the operator that generates rotations in space.

In order that these match expectations, we require

\begin{aligned}\left[{P_i},{P_j}\right] = 0,\end{aligned} \hspace{\stretch{1}}(3.27)

and

\begin{aligned}\left[{J_i},{J_j}\right] = i \hbar \sum_k \epsilon_{ijk} J_k.\end{aligned} \hspace{\stretch{1}}(3.28)

In the simple theory of a spinless particle we have

\begin{aligned}\mathbf{J} \equiv \mathbf{L} = \mathbf{R} \times \mathbf{P}.\end{aligned} \hspace{\stretch{1}}(3.29)

We actually need a generalization of this, since it is, in fact, not good enough, even for low energy physics.

## Many component wave functions.

We are free to construct tuples of spatial vector functions like

\begin{aligned}\begin{bmatrix}\Psi_I(\mathbf{r}, t) \\ \Psi_{II}(\mathbf{r}, t)\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(3.30)

or

\begin{aligned}\begin{bmatrix}\Psi_I(\mathbf{r}, t) \\ \Psi_{II}(\mathbf{r}, t) \\ \Psi_{III}(\mathbf{r}, t)\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(3.31)

etc.

We will see that these behave qualitatively differently than one component wave functions. We also don’t have to be considering multiple particle wave functions; this can be just one particle that requires three functions in $\mathbb{R}^{3}$ to describe it (i.e., we are moving in on spin).

Question: Do these live in the same vector space?
Answer: We will get to this.

### A classical analogy.

“There are only bad analogies, since if they were good they’d be describing the same thing. We can, however, produce some useful bad analogies.”

\begin{enumerate}
\item A temperature field

\begin{aligned}T(\mathbf{r})\end{aligned} \hspace{\stretch{1}}(3.32)

\item Electric field

\begin{aligned}\begin{bmatrix}E_x(\mathbf{r}) \\ E_y(\mathbf{r}) \\ E_z(\mathbf{r}) \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.33)

\end{enumerate}

These behave in a much different way. If we rotate a scalar field like $T(\mathbf{r})$ as in figure (\ref{fig:qmTwoL11:qmTwoL11fig5})
\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{qmTwoL11fig5}
\caption{Rotated temperature (scalar) field}
\end{figure}

Suppose we have a temperature field generated by, say, a match. Rotating the match above, we have

\begin{aligned}T'(\mathbf{r}) = T(\mathcal{R}^{-1}(\mathbf{r})).\end{aligned} \hspace{\stretch{1}}(3.34)

Compare this to the rotation of an electric field, perhaps one produced by a capacitor, as in figure (\ref{fig:qmTwoL11:qmTwoL11fig6})

\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{qmTwoL11fig6}
\caption{Rotating a capacitance electric field}
\end{figure}

Is it true that we have

\begin{aligned}\begin{bmatrix}E_x'(\mathbf{r}) \\ E_y'(\mathbf{r}) \\ E_z'(\mathbf{r}) \end{bmatrix}\stackrel{?}{=}\begin{bmatrix}E_x(\mathcal{R}^{-1}(\mathbf{r})) \\ E_y(\mathcal{R}^{-1}(\mathbf{r})) \\ E_z(\mathcal{R}^{-1}(\mathbf{r})) \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(3.35)

No. Because the components get mixed as well as the positions at which those components are evaluated.
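The correct transformation rule for a vector field is $\mathbf{E}'(\mathbf{r}) = \mathcal{R}\, \mathbf{E}(\mathcal{R}^{-1}(\mathbf{r}))$: both the argument and the components rotate. A numpy sketch of my own (with a hypothetical field $\mathbf{E}(\mathbf{r}) = (-y, x, 0)$ chosen for illustration) shows the difference from the scalar-style rule:

```python
import numpy as np

theta = 0.7
# Rotation about the z axis; its inverse is the transpose
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0,              0,             1]])

def E(r):
    # Illustrative vector field E(r) = (-y, x, 0) = z-hat cross r
    return np.array([-r[1], r[0], 0.0])

r = np.array([1.0, 2.0, 0.5])

rotated_field = Rz @ E(Rz.T @ r)  # rotate components AND argument
naive = E(Rz.T @ r)               # scalar-style rule: argument only

# This particular field is invariant under rotations about z, so the
# full rule reproduces E(r) exactly, while the scalar rule does not
assert np.allclose(rotated_field, E(r))
assert not np.allclose(naive, E(r))
print("vector field rotation mixes components, not just positions")
```

The field $\hat{\mathbf{z}} \times \mathbf{r}$ was picked because its equivariance makes the check exact; for a generic field the two rules simply disagree.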

We will work with many component wave functions, some of which will behave like vectors, and will have to develop the methods and language to tackle this.

# References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.