# Peeter Joot's (OLD) Blog.


# Posts Tagged ‘hamiltonian’

## An updated compilation of notes, for ‘PHY452H1S Basic Statistical Mechanics’, Taught by Prof. Arun Paramekanti

Posted by peeterjoot on March 27, 2013

Here’s my second update of my notes compilation for this course, including all of the following:

March 27, 2013 Fermi gas

March 26, 2013 Fermi gas thermodynamics

March 23, 2013 Relativistic generalization of statistical mechanics

March 21, 2013 Kittel Zipper problem

March 18, 2013 Pathria chapter 4 diatomic molecule problem

March 17, 2013 Gibbs sum for a two level system

March 16, 2013 open system variance of N

March 16, 2013 probability forms of entropy

March 14, 2013 Grand Canonical/Fermion-Bosons

March 13, 2013 Quantum anharmonic oscillator

March 12, 2013 Grand canonical ensemble

March 11, 2013 Heat capacity of perturbed harmonic oscillator

March 10, 2013 Langevin small approximation

March 10, 2013 Addition of two one half spins

March 10, 2013 Midterm II reflection

March 07, 2013 Thermodynamic identities

March 06, 2013 Temperature

March 05, 2013 Interacting spin

plus everything detailed in the description of my first update and before.

## PHY452H1S Basic Statistical Mechanics. Lecture 15: Grand Canonical/Fermion-Bosons. Taught by Prof. Arun Paramekanti

Posted by peeterjoot on March 14, 2013

[Click here for a PDF of this post with nicer formatting (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

# Disclaimer

Peeter’s lecture notes from class. May not be entirely coherent.

# Grand Canonical/Fermion-Bosons

It was mentioned that in three dimensions we are confined to either Fermi or Bose statistics, and that two dimensions is a rich subject in its own right (in two dimensions, interchanging two particles is not equivalent to one particle cycling around the other and ending up in the same place, so the exchange statistics can be richer).

Definitions

1. Fermion. Antisymmetric under exchange. $n_k = 0, 1$
2. Boson. Symmetric under exchange. $n_k = 0, 1, 2, \cdots$

In either case our energies are

\begin{aligned}\epsilon_k = \frac{\hbar^2 k^2}{2m},\end{aligned} \hspace{\stretch{1}}(1.2.1)

For Fermions we’ll have occupation filling of the form fig. 1.1, where there can be only one particle at any given site (an energy level for that value of momentum). For Bosonic systems as in fig. 1.2, we don’t have a restriction of only one particle for each state, and can have any given number of particles for each value of momentum.

Fig 1.1: Fermionic energy level filling for free particle in a box

Fig 1.2: Bosonic free particle in a box energy level filling

Our Hamiltonian is

\begin{aligned}H = \sum_k \hat{n}_k \epsilon_k,\end{aligned} \hspace{\stretch{1}}(1.2.2)

where we have a number operator

\begin{aligned}N = \sum \hat{n}_k,\end{aligned} \hspace{\stretch{1}}(1.2.3)

such that

\begin{aligned}\left[{N},{H}\right] = 0.\end{aligned} \hspace{\stretch{1}}(1.2.4)

The grand canonical partition function is

\begin{aligned}Z_{\mathrm{G}} = \sum_{N=0}^\infty e^{\beta \mu N}\sum_{n_k, \sum n_k = N} e^{-\beta \sum_k n_k \epsilon_k}.\end{aligned} \hspace{\stretch{1}}(1.2.5)

While the inner sum is constrained by $\sum_k n_k = N$, the outer sum over all $N$ removes that constraint, so we are effectively summing freely over all occupations $n_k$, and can write

\begin{aligned}Z_{\mathrm{G}} &= \sum_{n_k}e^{\beta \mu \sum_k n_k}e^{-\beta \sum_k n_k \epsilon_k} \\ &= \sum_{n_k} \left( \prod_k e^{-\beta(\epsilon_k - \mu) n_k} \right) \\ &= \prod_{k} \left( \sum_{n_k} e^{-\beta(\epsilon_k - \mu) n_k} \right).\end{aligned} \hspace{\stretch{1}}(1.2.6)

Fermions

\begin{aligned}\sum_{n_k = 0}^1 e^{-\beta(\epsilon_k - \mu) n_k} = 1 + e^{-\beta(\epsilon_k - \mu)}\end{aligned} \hspace{\stretch{1}}(1.2.7)

Bosons

\begin{aligned}\sum_{n_k = 0}^\infty e^{-\beta(\epsilon_k - \mu) n_k} = \frac{1}{{1 - e^{-\beta(\epsilon_k - \mu)}}}\end{aligned} \hspace{\stretch{1}}(1.2.8)

(Convergence of this geometric series requires $\epsilon_k - \mu > 0$.)

Our grand partition functions are then

\begin{aligned}Z_{\mathrm{G}}^f = \prod_k \left( 1 + e^{-\beta(\epsilon_k - \mu)} \right)\end{aligned} \hspace{\stretch{1}}(1.0.9a)

\begin{aligned}Z_{\mathrm{G}}^b = \prod_k \frac{1}{{ 1 - e^{-\beta(\epsilon_k - \mu)} }}\end{aligned} \hspace{\stretch{1}}(1.0.9b)
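As a quick sanity check (my own, not part of the lecture), we can verify this factorization numerically: brute-force summation over occupation numbers for a few hypothetical energy levels should match the product forms above.

```python
import math
from itertools import product

beta, mu = 1.0, -0.5
eps = [0.3, 0.7, 1.2]   # hypothetical single particle energies (my choice)

# Fermions: n_k restricted to {0, 1}; brute force over all occupations
zf_brute = sum(
    math.exp(-beta * sum(n * (e - mu) for n, e in zip(ns, eps)))
    for ns in product([0, 1], repeat=len(eps))
)
zf_prod = math.prod(1 + math.exp(-beta * (e - mu)) for e in eps)

# Bosons: n_k = 0, 1, 2, ...; truncate where the terms are negligible
zb_brute = sum(
    math.exp(-beta * sum(n * (e - mu) for n, e in zip(ns, eps)))
    for ns in product(range(60), repeat=len(eps))
)
zb_prod = math.prod(1 / (1 - math.exp(-beta * (e - mu))) for e in eps)

print(zf_brute, zf_prod)  # equal: the constrained double sum factorizes
print(zb_brute, zb_prod)  # equal, since epsilon_k - mu > 0 here
```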

We can use these to compute the average number of particles

\begin{aligned}\left\langle{{n_k^f}}\right\rangle = \frac{1 \times 0 + e^{-\beta(\epsilon_k - \mu)} \times 1}{ 1 + e^{-\beta(\epsilon_k - \mu)} }=\frac{1}{{ e^{\beta(\epsilon_k - \mu)} + 1 }}\end{aligned} \hspace{\stretch{1}}(1.0.10)

\begin{aligned}\left\langle{{n_k^b}}\right\rangle = \frac{1 \times 0 + e^{-\beta(\epsilon_k - \mu)} \times 1+e^{-2 \beta(\epsilon_k - \mu)} \times 2 + \cdots}{ 1+e^{-\beta(\epsilon_k - \mu)} +e^{-2 \beta(\epsilon_k - \mu)} + \cdots }\end{aligned} \hspace{\stretch{1}}(1.0.11)

The exponential of the chemical potential over temperature,

\begin{aligned}e^{\beta \mu} \equiv z,\end{aligned} \hspace{\stretch{1}}(1.0.12)

is called the fugacity. The denominator has the form

\begin{aligned}D = 1 + z e^{-\beta \epsilon_k}+ z^2 e^{-2 \beta \epsilon_k} + \cdots,\end{aligned} \hspace{\stretch{1}}(1.0.13)

so we see that

\begin{aligned}z \frac{\partial {D}}{\partial {z}} = z e^{-\beta \epsilon_k}+ 2 z^2 e^{-2 \beta \epsilon_k}+ 3 z^3 e^{-3 \beta \epsilon_k}+ \cdots\end{aligned} \hspace{\stretch{1}}(1.0.14)

Thus the numerator of eq. 1.0.11 is

\begin{aligned}z \frac{\partial {D}}{\partial {z}},\end{aligned} \hspace{\stretch{1}}(1.0.15)

and

\begin{aligned}\left\langle{{n_k^b}}\right\rangle &= \frac{z \frac{\partial {D_k}}{\partial {z}} }{D_k} \\ &= z \frac{\partial {}}{\partial {z}} \ln D_k \\ &= \cdots \\ &= \frac{1}{{ e^{\beta(\epsilon_k - \mu)} - 1}}\end{aligned} \hspace{\stretch{1}}(1.0.16)
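The occupation number formulas can likewise be checked numerically against the explicit (truncated) sums; the energy, temperature, and chemical potential below are arbitrary choices of mine.

```python
import math

beta, eps_k, mu = 1.7, 1.0, 0.2   # arbitrary values (mine)
x = math.exp(-beta * (eps_k - mu))

# Fermion: only n = 0 and n = 1 contribute to the average
n_f = (0 * 1 + 1 * x) / (1 + x)

# Boson: truncated geometric series for numerator and denominator
n_b = sum(n * x**n for n in range(400)) / sum(x**n for n in range(400))

print(n_f, 1 / (math.exp(beta * (eps_k - mu)) + 1))  # Fermi-Dirac
print(n_b, 1 / (math.exp(beta * (eps_k - mu)) - 1))  # Bose-Einstein
```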

What is the density $\rho$?

For Fermions

\begin{aligned}\rho = \frac{N}{V} =\frac{1}{{V}} \sum_{\mathbf{k}}\frac{1}{{ e^{\beta(\epsilon_\mathbf{k} - \mu)} + 1}}\end{aligned} \hspace{\stretch{1}}(1.0.17)

Using a “particle in a box” quantization where $k_\alpha = 2 \pi m_\alpha/L$, in a $d$-dimensional space, we can approximate this as

\begin{aligned}\boxed{\rho = \int \frac{d^d k}{(2 \pi)^d}\frac{1}{{ e^{\beta(\epsilon_k - \mu)} + 1}}.}\end{aligned} \hspace{\stretch{1}}(1.0.18)

This integral is actually difficult to evaluate. For $T \rightarrow 0$ ($\beta \rightarrow \infty$), we have

\begin{aligned}n_k = \Theta(\mu - \epsilon_k).\end{aligned} \hspace{\stretch{1}}(1.0.19)

This is illustrated in fig. 1.3, where we also show the smearing that occurs as temperature increases.

Fig 1.3: Occupation numbers for different energies
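A small numeric illustration (mine) of this limit: as $T \rightarrow 0$ the Fermi-Dirac occupation approaches the step function $\Theta(\mu - \epsilon_k)$, with states just below $\mu$ filled and states just above it empty.

```python
import math

def n_fermi(eps, mu, T, kB=1.0):
    """Fermi-Dirac occupation 1/(exp((eps - mu)/(kB T)) + 1)."""
    return 1.0 / (math.exp((eps - mu) / (kB * T)) + 1.0)

mu = 1.0
for T in (1.0, 0.1, 0.001):
    # occupation just below and just above the chemical potential
    print(T, n_fermi(0.9, mu, T), n_fermi(1.1, mu, T))
```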

With

\begin{aligned}E_{\mathrm{F}} = \mu(T = 0),\end{aligned} \hspace{\stretch{1}}(1.0.20)

we want to ask what is the radius of the ball for which

\begin{aligned}\epsilon_k = E_{\mathrm{F}}\end{aligned} \hspace{\stretch{1}}(1.0.21)

or

\begin{aligned}E_{\mathrm{F}} = \frac{\hbar^2 k_{\mathrm{F}}^2}{2m},\end{aligned} \hspace{\stretch{1}}(1.0.22)

so that

\begin{aligned}k_{\mathrm{F}} = \sqrt{\frac{2 m E_{\mathrm{F}}}{\hbar^2}},\end{aligned} \hspace{\stretch{1}}(1.0.23)

so that our density where $\epsilon_k = \mu$ is

\begin{aligned}\rho &= \int_{k \le k_{\mathrm{F}}} \frac{d^3 k}{(2 \pi)^3} \times 1 \\ &= \frac{1}{{(2\pi)^3}} 4 \pi \int_0^{k_{\mathrm{F}}} k^2 dk \\ &= \frac{4 \pi}{3} k_{\mathrm{F}}^3 \frac{1}{{(2 \pi)^3}},\end{aligned} \hspace{\stretch{1}}(1.0.24)

so that

\begin{aligned}k_{\mathrm{F}} = (6 \pi^2 \rho)^{1/3}.\end{aligned} \hspace{\stretch{1}}(1.0.25)

Our chemical potential at zero temperature is then

\begin{aligned}\mu(T = 0) = \frac{\hbar^2}{2m} (6 \pi^2 \rho)^{2/3}.\end{aligned} \hspace{\stretch{1}}(1.0.26)
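As a numerical cross-check (my own, in convenient units $\hbar = 2m = 1$ so that $\epsilon_k = k^2$), evaluating the density integral at low temperature recovers the Fermi sphere result $\rho = k_{\mathrm{F}}^3/(6\pi^2)$; with $\mu = 1$ we expect $k_{\mathrm{F}} = 1$.

```python
import math

def fermi(x):
    # overflow-safe Fermi factor 1/(e^x + 1)
    return 0.0 if x > 700 else 1.0 / (math.exp(x) + 1.0)

beta, mu = 400.0, 1.0        # low temperature; mu = 1 means k_F = 1
npts, kmax = 30000, 3.0
dk = kmax / npts

rho = 0.0
for i in range(npts):
    k = (i + 0.5) * dk       # midpoint rule in the radial coordinate
    rho += k * k * fermi(beta * (k * k - mu)) * dk
rho /= 2 * math.pi**2        # radial form of int d^3k/(2 pi)^3

print(rho, 1 / (6 * math.pi**2))  # nearly equal
```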

Note that

\begin{aligned}\rho^{-1/3} = \mbox{interparticle spacing}.\end{aligned} \hspace{\stretch{1}}(1.0.27)

We can convince ourselves that the chemical potential must have the form shown in fig. 1.4.

Fig 1.4: Large negative chemical potential at high temperatures

Given large negative chemical potential at high temperatures our number distribution will have the form

\begin{aligned}\left\langle{{n_k}}\right\rangle \approx e^{-\beta (\epsilon_k - \mu)} \propto e^{-\beta \epsilon_k}\end{aligned} \hspace{\stretch{1}}(1.0.28)

We see that the classical Boltzmann distribution is recovered for high temperatures.

We can also calculate the chemical potential at high temperatures. We’ll find that this has the form

\begin{aligned}e^{\beta \mu} = \frac{4}{3} \rho \lambda_T^3,\end{aligned} \hspace{\stretch{1}}(1.0.29)

where the quantity $\lambda_T$ is called the thermal de Broglie wavelength,

\begin{aligned}\lambda_T = \sqrt{\frac{ 2 \pi \hbar^2}{m k_{\mathrm{B}} T}}.\end{aligned} \hspace{\stretch{1}}(1.0.30)
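As a concrete order-of-magnitude example (my own, with rounded SI constants, not taken from the post), the thermal de Broglie wavelength of an electron at room temperature works out to a few nanometres.

```python
import math

# rounded SI constants (assumed values)
hbar = 1.0546e-34   # J s
k_B = 1.3807e-23    # J / K
m_e = 9.109e-31     # kg, electron mass
T = 300.0           # K

lam = math.sqrt(2 * math.pi * hbar**2 / (m_e * k_B * T))
print(lam)  # roughly 4.3e-9 m
```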

## Addition of two one half spins

Posted by peeterjoot on March 10, 2013


In class an example of interacting spin was given where the Hamiltonian included a dot product of two spins

\begin{aligned}H = \mathbf{S}_1 \cdot \mathbf{S}_2.\end{aligned} \hspace{\stretch{1}}(1.0.1)

The energy eigenvalues for this Hamiltonian were derived by using the trick to rewrite this in terms of just squared spin operators

\begin{aligned}H = \frac{(\mathbf{S}_1 + \mathbf{S}_2)^2 - \mathbf{S}_1^2 - \mathbf{S}_2^2}{2}.\end{aligned} \hspace{\stretch{1}}(1.0.2)

For each of these terms we can calculate the total energy eigenvalues from

\begin{aligned}\mathbf{S}^2 \Psi = \hbar^2 S (S + 1) \Psi,\end{aligned} \hspace{\stretch{1}}(1.0.3)

where $S$ takes on the values of the total spin for the (possibly composite) spin operator. Thinking about the spin operators in their matrix representation, it's not obvious to me that we can just add the total spins, so that if $\mathbf{S}_1$ and $\mathbf{S}_2$ are the spin operators for the two respective particles, the total system has a spin operator $\mathbf{S} = \mathbf{S}_1 + \mathbf{S}_2$ (really $\mathbf{S} = \mathbf{S}_1 \otimes I_2 + I_2 \otimes \mathbf{S}_2$, since each spin operator acts only on its own particle).

Let’s develop a bit of intuition on this, by calculating the energy eigenvalues of $\mathbf{S}_1 \cdot \mathbf{S}_2$ using Pauli matrices.

First, let's look at how each of the Pauli matrices operates on the $S_z$ eigenvectors

\begin{aligned}\sigma_x {\left\lvert {+} \right\rangle} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix} \begin{bmatrix}1 \\ 0\end{bmatrix}=\begin{bmatrix}0 \\ 1 \end{bmatrix}= {\left\lvert {-} \right\rangle}\end{aligned} \hspace{\stretch{1}}(1.0.4a)

\begin{aligned}\sigma_x {\left\lvert {-} \right\rangle} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix} \begin{bmatrix}0 \\ 1\end{bmatrix}=\begin{bmatrix}1 \\ 0 \end{bmatrix}= {\left\lvert {+} \right\rangle}\end{aligned} \hspace{\stretch{1}}(1.0.4b)

\begin{aligned}\sigma_y {\left\lvert {+} \right\rangle} = \begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix} \begin{bmatrix}1 \\ 0\end{bmatrix}=\begin{bmatrix}0 \\ i \end{bmatrix}= i {\left\lvert {-} \right\rangle}\end{aligned} \hspace{\stretch{1}}(1.0.4c)

\begin{aligned}\sigma_y {\left\lvert {-} \right\rangle} = \begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix} \begin{bmatrix}0 \\ 1\end{bmatrix}=\begin{bmatrix}-i \\ 0 \end{bmatrix}= -i {\left\lvert {+} \right\rangle}\end{aligned} \hspace{\stretch{1}}(1.0.4d)

\begin{aligned}\sigma_z {\left\lvert {+} \right\rangle} = \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} \begin{bmatrix}1 \\ 0\end{bmatrix}=\begin{bmatrix}1 \\ 0 \end{bmatrix}= {\left\lvert {+} \right\rangle}\end{aligned} \hspace{\stretch{1}}(1.0.4e)

\begin{aligned}\sigma_z {\left\lvert {-} \right\rangle} = \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} \begin{bmatrix}0 \\ 1\end{bmatrix}=-\begin{bmatrix}0 \\ 1 \end{bmatrix}= -{\left\lvert {-} \right\rangle}\end{aligned} \hspace{\stretch{1}}(1.0.4f)

Summarizing, these are

\begin{aligned}\sigma_x {\left\lvert {\pm} \right\rangle} = {\left\lvert {\mp} \right\rangle}\end{aligned} \hspace{\stretch{1}}(1.0.5a)

\begin{aligned}\sigma_y {\left\lvert {\pm} \right\rangle} = \pm i {\left\lvert {\mp} \right\rangle}\end{aligned} \hspace{\stretch{1}}(1.0.5b)

\begin{aligned}\sigma_z {\left\lvert {\pm} \right\rangle} = \pm {\left\lvert {\pm} \right\rangle}\end{aligned} \hspace{\stretch{1}}(1.0.5c)

For convenience, let's avoid any sort of direct product notation, with the composite operations defined implicitly by

\begin{aligned}\left( S_{1k} \otimes S_{2k} \right)\left( {\left\lvert {\alpha} \right\rangle} \otimes {\left\lvert {\beta} \right\rangle} \right)=S_{1k} S_{2k} {\left\lvert {\alpha \beta} \right\rangle}=\left( S_{1k} {\left\lvert {\alpha} \right\rangle} \right) \otimes\left( S_{2k} {\left\lvert {\beta} \right\rangle} \right).\end{aligned} \hspace{\stretch{1}}(1.0.6)

Now let’s compute all the various operations

\begin{aligned}\begin{aligned}\sigma_{1x} \sigma_{2x} {\left\lvert {++} \right\rangle} &= {\left\lvert {--} \right\rangle} \\ \sigma_{1x} \sigma_{2x} {\left\lvert {--} \right\rangle} &= {\left\lvert {++} \right\rangle} \\ \sigma_{1x} \sigma_{2x} {\left\lvert {+-} \right\rangle} &= {\left\lvert {-+} \right\rangle} \\ \sigma_{1x} \sigma_{2x} {\left\lvert {-+} \right\rangle} &= {\left\lvert {+-} \right\rangle}\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.7a)

\begin{aligned}\begin{aligned}\sigma_{1y} \sigma_{2y} {\left\lvert {++} \right\rangle} &= i^2 {\left\lvert {--} \right\rangle} \\ \sigma_{1y} \sigma_{2y} {\left\lvert {--} \right\rangle} &= (-i)^2 {\left\lvert {++} \right\rangle} \\ \sigma_{1y} \sigma_{2y} {\left\lvert {+-} \right\rangle} &= i (-i) {\left\lvert {-+} \right\rangle} \\ \sigma_{1y} \sigma_{2y} {\left\lvert {-+} \right\rangle} &= (-i) i {\left\lvert {+-} \right\rangle}\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.7b)

\begin{aligned}\begin{aligned}\sigma_{1z} \sigma_{2z} {\left\lvert {++} \right\rangle} &= {\left\lvert {++} \right\rangle} \\ \sigma_{1z} \sigma_{2z} {\left\lvert {--} \right\rangle} &= (-1)^2 {\left\lvert {--} \right\rangle} \\ \sigma_{1z} \sigma_{2z} {\left\lvert {+-} \right\rangle} &= -{\left\lvert {+-} \right\rangle} \\ \sigma_{1z} \sigma_{2z} {\left\lvert {-+} \right\rangle} &= -{\left\lvert {-+} \right\rangle}\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.7c)

Tabulating first the action of the sum of the $x$ and $y$ operators we have

\begin{aligned}\begin{aligned}\left( \sigma_{1x} \sigma_{2x} + \sigma_{1y} \sigma_{2y} \right) {\left\lvert {++} \right\rangle} &= 0 \\ \left( \sigma_{1x} \sigma_{2x} + \sigma_{1y} \sigma_{2y} \right) {\left\lvert {--} \right\rangle} &= 0 \\ \left( \sigma_{1x} \sigma_{2x} + \sigma_{1y} \sigma_{2y} \right) {\left\lvert {+-} \right\rangle} &= 2 {\left\lvert {-+} \right\rangle} \\ \left( \sigma_{1x} \sigma_{2x} + \sigma_{1y} \sigma_{2y} \right) {\left\lvert {-+} \right\rangle} &= 2 {\left\lvert {+-} \right\rangle}\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.8)

so that, writing $\boldsymbol{\sigma}_1 \cdot \boldsymbol{\sigma}_2 = \sum_k \sigma_{1k} \sigma_{2k}$ (and recalling $\mathbf{S}_1 \cdot \mathbf{S}_2 = (\hbar^2/4) \boldsymbol{\sigma}_1 \cdot \boldsymbol{\sigma}_2$), we have

\begin{aligned}\begin{aligned}\boldsymbol{\sigma}_1 \cdot \boldsymbol{\sigma}_2 {\left\lvert {++} \right\rangle} &= {\left\lvert {++} \right\rangle} \\ \boldsymbol{\sigma}_1 \cdot \boldsymbol{\sigma}_2 {\left\lvert {--} \right\rangle} &= {\left\lvert {--} \right\rangle} \\ \boldsymbol{\sigma}_1 \cdot \boldsymbol{\sigma}_2 {\left\lvert {+-} \right\rangle} &= 2 {\left\lvert {-+} \right\rangle} - {\left\lvert {+-} \right\rangle} \\ \boldsymbol{\sigma}_1 \cdot \boldsymbol{\sigma}_2 {\left\lvert {-+} \right\rangle} &= 2 {\left\lvert {+-} \right\rangle} - {\left\lvert {-+} \right\rangle}\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.9)

Now we are set to write out the Hamiltonian matrix. Doing this with respect to the basis $\beta = \{ {\left\lvert {++} \right\rangle}, {\left\lvert {--} \right\rangle}, {\left\lvert {+-} \right\rangle}, {\left\lvert {-+} \right\rangle} \}$, we have

\begin{aligned}H &= \mathbf{S}_1 \cdot \mathbf{S}_2 \\ &= \begin{bmatrix}\left\langle ++ \right\rvert H \left\lvert ++ \right\rangle & \left\langle ++ \right\rvert H \left\lvert -- \right\rangle & \left\langle ++ \right\rvert H \left\lvert +- \right\rangle & \left\langle ++ \right\rvert H \left\lvert -+ \right\rangle \\ \left\langle -- \right\rvert H \left\lvert ++ \right\rangle & \left\langle -- \right\rvert H \left\lvert -- \right\rangle & \left\langle -- \right\rvert H \left\lvert +- \right\rangle & \left\langle -- \right\rvert H \left\lvert -+ \right\rangle \\ \left\langle +- \right\rvert H \left\lvert ++ \right\rangle & \left\langle +- \right\rvert H \left\lvert -- \right\rangle & \left\langle +- \right\rvert H \left\lvert +- \right\rangle & \left\langle +- \right\rvert H \left\lvert -+ \right\rangle \\ \left\langle -+ \right\rvert H \left\lvert ++ \right\rangle & \left\langle -+ \right\rvert H \left\lvert -- \right\rangle & \left\langle -+ \right\rvert H \left\lvert +- \right\rangle & \left\langle -+ \right\rvert H \left\lvert -+ \right\rangle \end{bmatrix} \\ &= \frac{\hbar^2}{4} \begin{bmatrix}1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 2 \\ 0 & 0 & 2 & -1 \end{bmatrix} \end{aligned} \hspace{\stretch{1}}(1.0.10)

Two of the eigenvalues we can read off by inspection, and for the other two need to solve

\begin{aligned}0 =\begin{vmatrix}-\hbar^2/4 - \lambda & \hbar^2/2 \\ \hbar^2/2 & -\hbar^2/4 - \lambda\end{vmatrix}= (\hbar^2/4 + \lambda)^2 - (\hbar^2/2)^2\end{aligned} \hspace{\stretch{1}}(1.0.11)

or

\begin{aligned}\lambda = -\frac{\hbar^2}{4} \pm \frac{\hbar^2}{2} = \frac{\hbar^2}{4}, -\frac{3 \hbar^2}{4}.\end{aligned} \hspace{\stretch{1}}(1.0.12)

These are the last of the triplet energy eigenvalues and the singlet value that we expected from the spin addition method. The eigenvector for the $\hbar^2/4$ eigenvalue (within the $\{ {\left\lvert {+-} \right\rangle}, {\left\lvert {-+} \right\rangle} \}$ subspace) is given by the solution of

\begin{aligned}0 =\frac{\hbar^2}{2}\begin{bmatrix}-1 & 1 \\ 1 & -1\end{bmatrix}\begin{bmatrix}a \\ b\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(1.0.13)

So the eigenvector is

\begin{aligned}\frac{1}{{\sqrt{2}}} \left( {\left\lvert {+-} \right\rangle} + {\left\lvert {-+} \right\rangle} \right)\end{aligned} \hspace{\stretch{1}}(1.0.14)

For our $-3\hbar^2/4$ eigenvalue we seek

\begin{aligned}0 =\frac{\hbar^2}{2}\begin{bmatrix}1 & 1 \\ 1 & 1\end{bmatrix}\begin{bmatrix}a \\ b\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(1.0.15)

So the eigenvector is

\begin{aligned}\frac{1}{{\sqrt{2}}} \left( {\left\lvert {+-} \right\rangle} - {\left\lvert {-+} \right\rangle} \right)\end{aligned} \hspace{\stretch{1}}(1.0.16)

An orthonormal basis with respective eigenvalues $\hbar^2/4 (\times 3), -3\hbar^2/4$ is thus given by

\begin{aligned}\beta' = \left\{{\left\lvert {++} \right\rangle},{\left\lvert {--} \right\rangle},\frac{1}{{\sqrt{2}}} \left( {\left\lvert {+-} \right\rangle} + {\left\lvert {-+} \right\rangle} \right),\frac{1}{{\sqrt{2}}} \left( {\left\lvert {+-} \right\rangle} - {\left\lvert {-+} \right\rangle} \right)\right\}.\end{aligned} \hspace{\stretch{1}}(1.0.17)

Let’s use this to confirm that for $H = (\mathbf{S}_1 + \mathbf{S}_2)^2$, the two spin $1/2$ particles have a combined spin given by

\begin{aligned}S(S + 1) \hbar^2.\end{aligned} \hspace{\stretch{1}}(1.0.18)

With

\begin{aligned}(\mathbf{S}_1 + \mathbf{S}_2)^2 = \mathbf{S}_1^2 + \mathbf{S}_2^2 + 2 \mathbf{S}_1 \cdot \mathbf{S}_2,\end{aligned} \hspace{\stretch{1}}(1.0.19)

we have for the $\hbar^2/4$ energy eigenstate of $\mathbf{S}_1 \cdot \mathbf{S}_2$

\begin{aligned}2 \hbar^2 \frac{1}{{2}} \left( 1 + \frac{1}{{2}} \right) + 2 \frac{\hbar^2}{4} = 2 \hbar^2,\end{aligned} \hspace{\stretch{1}}(1.0.20)

and for the $-3\hbar^2/4$ energy eigenstate of $\mathbf{S}_1 \cdot \mathbf{S}_2$

\begin{aligned}2 \hbar^2 \frac{1}{{2}} \left( 1 + \frac{1}{{2}} \right) + 2 \left( - \frac{3 \hbar^2}{4} \right) = 0.\end{aligned} \hspace{\stretch{1}}(1.0.21)

We get the $2 \hbar^2$ and $0$ eigenvalues respectively as expected.
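All of the above can be cross-checked numerically by building the operators as explicit Kronecker products (my own check, in units $\hbar = 1$ with $\mathbf{S} = \boldsymbol{\sigma}/2$): the eigenvalues of $\mathbf{S}_1 \cdot \mathbf{S}_2$ should be $1/4$ (three times) and $-3/4$, and those of $(\mathbf{S}_1 + \mathbf{S}_2)^2$ should be $S(S+1) = 2$ (three times) and $0$.

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2

# embed each spin in the two particle (tensor product) space
S1 = [np.kron(s, I2) for s in (sx, sy, sz)]
S2 = [np.kron(I2, s) for s in (sx, sy, sz)]

H = sum(a @ b for a, b in zip(S1, S2))                 # S_1 . S_2
Ssq = sum((a + b) @ (a + b) for a, b in zip(S1, S2))   # (S_1 + S_2)^2

print(np.linalg.eigvalsh(H))    # -3/4 once, 1/4 three times
print(np.linalg.eigvalsh(Ssq))  # 0 once, S(S+1) = 2 three times
```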

## PHY452H1S Basic Statistical Mechanics. Problem Set 5: Temperature

Posted by peeterjoot on March 10, 2013


## Question: Polymer stretching – “entropic forces” (2013 problem set 5, p1)

Consider a toy model of a polymer in one dimension, made of $N$ steps (amino acids) of unit length, going left or right like a random walk. Let one end of this polymer be at the origin and the other end be at a point $X = \sqrt{N}$ (viz. the rms size of the polymer), so $1 \ll X \ll N$. We have previously calculated the number of configurations corresponding to this condition (approximate the binomial distribution by a Gaussian).

### Part a

Using this, find the entropy of this polymer as $S = k_{\mathrm{B}} \ln \Omega$. The free energy of this polymer, even in the absence of any other interactions, thus has an entropic contribution, $F = -T S$. If we stretch this polymer, we expect to have fewer available configurations, and thus a smaller entropy and a higher free energy.

### Part b

Find the change in free energy of this polymer if we stretch this polymer from its end being at $X$ to a larger distance $X + \Delta X$.

### Part c

Show that the change in free energy is linear in the displacement for small $\Delta X$, and hence find the temperature dependent “entropic spring constant” of this polymer. (This entropic force is important to overcome for packing DNA into the nucleus, and in many biological processes.)

Typo correction (via email):
You need to show that the change in free energy is quadratic in the displacement $\Delta X$, not linear in $\Delta X$. The force is linear in $\Delta X$. (Exactly as for a “spring”.)

### Entropy.

In lecture 2 probabilities for the sums of fair coin tosses were considered. Assigning $\pm 1$ to the events $Y_k$ for heads and tails coin tosses respectively, a random variable $Y = \sum_k Y_k$ for the total of $N$ such events was found to have the form

\begin{aligned}P_N(Y) = \left\{\begin{array}{l l}\left(\frac{1}{{2}}\right)^N \frac{N!}{\left(\frac{N-Y}{2}\right)!\left(\frac{N+Y}{2}\right)!}& \quad \mbox{if Y and N have same parity} \\ 0& \quad \mbox{otherwise} \end{array}\right.\end{aligned} \hspace{\stretch{1}}(1.1.1)

For an individual coin toss we have the averages $\left\langle{{Y_1}}\right\rangle = 0$, and $\left\langle{{Y_1^2}}\right\rangle = 1$, so the central limit theorem provides us with a large $N$ Gaussian approximation for this distribution

\begin{aligned}P_N(Y) \approx\frac{2}{\sqrt{2 \pi N}} \exp\left( -\frac{Y^2}{2N} \right).\end{aligned} \hspace{\stretch{1}}(1.1.2)
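As a sanity check (mine, not part of the problem set), the Gaussian approximation can be compared directly against the exact binomial form for a moderately large $N$:

```python
import math

N, Y = 1000, 20   # same parity, with 1 << Y << N
# exact: (1/2)^N N! / (((N-Y)/2)! ((N+Y)/2)!)
exact = math.comb(N, (N + Y) // 2) / 2**N
gauss = 2 / math.sqrt(2 * math.pi * N) * math.exp(-Y**2 / (2 * N))
print(exact, gauss)  # agree to within about a percent
```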

This fair coin toss problem can also be thought of as describing the coordinate of the end point of a one dimensional polymer whose beginning point is fixed at the origin. Writing $\Omega(N, Y)$ for the total number of configurations that have an end point at coordinate $Y$ we have

\begin{aligned}P_N(Y) = \frac{\Omega(N, Y)}{2^N}.\end{aligned} \hspace{\stretch{1}}(1.1.3)

From this, the total number of configurations that have, say, length $X = \left\lvert {Y} \right\rvert$, in the large $N$ Gaussian approximation, is

\begin{aligned}\Omega(N, X) &= 2^N \left( P_N(+X) +P_N(-X) \right) \\ &= \frac{2^{N + 2}}{\sqrt{2 \pi N}} \exp\left( -\frac{X^2}{2N} \right).\end{aligned} \hspace{\stretch{1}}(1.1.4)

The entropy associated with a one dimensional polymer of length $X$ is therefore

\begin{aligned}S_N(X) &= - k_{\mathrm{B}} \frac{X^2}{2N} + k_{\mathrm{B}} \ln \frac{2^{N + 2}}{\sqrt{2 \pi N}} \\ &= - k_{\mathrm{B}} \frac{X^2}{2N} + \text{constant}.\end{aligned} \hspace{\stretch{1}}(1.1.5)

Writing $S_0$ for this constant the free energy is

\begin{aligned}\boxed{F = U - T S = U + k_{\mathrm{B}} T \frac{X^2}{2N} - S_0 T.}\end{aligned} \hspace{\stretch{1}}(1.1.6)

### Change in free energy.

At constant temperature, stretching the polymer from its end being at $X$ to a larger distance $X + \Delta X$, results in a free energy change of

\begin{aligned}\Delta F &= F( X + \Delta X ) - F(X) \\ &= \frac{k_{\mathrm{B}} T}{2N} \left( (X + \Delta X)^2 - X^2 \right) \\ &= \frac{k_{\mathrm{B}} T}{2N} \left( 2 X \Delta X + (\Delta X)^2 \right)\end{aligned} \hspace{\stretch{1}}(1.1.7)

If $\Delta X$ is assumed small, our constant temperature change in free energy $\Delta F \approx (\partial F/\partial X)_T \Delta X$ is

\begin{aligned}\boxed{\Delta F = \frac{k_{\mathrm{B}} T}{N} X \Delta X.}\end{aligned} \hspace{\stretch{1}}(1.1.8)

### Temperature dependent spring constant.

I found the statement and subsequent correction of the problem statement somewhat confusing. To figure this all out, I thought it was reasonable to step back and relate free energy to the entropic force explicitly.

Consider temporarily a general thermodynamic system, for which we have by definition free energy and thermodynamic identity respectively

\begin{aligned}F = U - T S,\end{aligned} \hspace{\stretch{1}}(1.0.9a)

\begin{aligned}dU = T dS - P dV.\end{aligned} \hspace{\stretch{1}}(1.0.9b)

The differential of the free energy is

\begin{aligned}dF &= dU - T dS - S dT \\ &= -P dV - S dT \\ &= \left( \frac{\partial {F}}{\partial {T}} \right)_V dT+\left( \frac{\partial {F}}{\partial {V}} \right)_T dV.\end{aligned} \hspace{\stretch{1}}(1.0.10)

Forming the wedge product with $dT$, we arrive at the two form

\begin{aligned}0 &= \left( \left( P + \left( \frac{\partial {F}}{\partial {V}} \right)_T \right) dV + \left( S + \left( \frac{\partial {F}}{\partial {T}} \right)_V \right) dT \right)\wedge dT \\ &= \left( P + \left( \frac{\partial {F}}{\partial {V}} \right)_T \right) dV \wedge dT,\end{aligned} \hspace{\stretch{1}}(1.0.11)

This provides the relation between free energy and the “pressure” for the system

\begin{aligned}P = - \left( \frac{\partial {F}}{\partial {V}} \right)_T.\end{aligned} \hspace{\stretch{1}}(1.0.12)

For a system with a constant cross section $\Delta A$, $dV = \Delta A dX$, so the force associated with the system is

\begin{aligned}f &= P \Delta A \\ &= - \frac{1}{{\Delta A}} \left( \frac{\partial {F}}{\partial {X}} \right)_T \Delta A,\end{aligned} \hspace{\stretch{1}}(1.0.13)

or

\begin{aligned}f = - \left( \frac{\partial {F}}{\partial {X}} \right)_T.\end{aligned} \hspace{\stretch{1}}(1.0.14)

Okay, now we have a relation between the force and the rate of change of the free energy

\begin{aligned}f(X) = -\frac{k_{\mathrm{B}} T}{N} X.\end{aligned} \hspace{\stretch{1}}(1.0.15)

Our temperature dependent “entropic spring constant” in analogy with $f = -k X$, is therefore

\begin{aligned}\boxed{k = \frac{k_{\mathrm{B}} T}{N}.}\end{aligned} \hspace{\stretch{1}}(1.0.16)
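A quick numerical check (my own toy numbers) that the quadratic free energy really does produce a linear restoring force with this spring constant:

```python
kBT, N, X, dX = 1.0, 10_000, 100.0, 1e-3   # toy values (mine)

def F(x):
    # entropic part of the free energy, F = k_B T x^2 / (2 N)
    return kBT * x * x / (2 * N)

f_numeric = -(F(X + dX) - F(X - dX)) / (2 * dX)  # central difference force
f_formula = -(kBT / N) * X                       # f = -k X with k = k_B T / N
print(f_numeric, f_formula)
```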

## Question: Independent one-dimensional harmonic oscillators (2013 problem set 5, p2)

Consider a set of $N$ independent classical harmonic oscillators, each having a frequency $\omega$.

### Part a

Find the canonical partition function at a temperature $T$ for this system of oscillators, keeping track of correction factors of the Planck constant. (Note that the oscillators are distinguishable, and we do not need a $1/N!$ correction factor.)

### Part b

Using this, derive the mean energy and the specific heat at temperature $T$.

### Part c

For quantum oscillators, the partition function of each oscillator is simply $\sum_n e^{-\beta E_n}$ where $E_n$ are the (discrete) energy levels given by $(n + 1/2)\hbar \omega$, with $n = 0,1,2,\cdots$. Hence, find the canonical partition function for $N$ independent distinguishable quantum oscillators, and find the mean energy and specific heat at temperature $T$.

### Part d

Show that the quantum results go over into the classical results at high temperature $k_{\mathrm{B}} T \gg \hbar \omega$, and comment on why this makes sense.

### Part e

Also find the low temperature behavior of the specific heat in both classical and quantum cases when $k_{\mathrm{B}} T \ll \hbar \omega$.

### Classical partition function

For a single particle in one dimension our partition function is

\begin{aligned}Z_1 = \frac{1}{{h}} \int dp dq e^{-\beta \left( \frac{1}{{2 m}} p^2 + \frac{1}{{2}} m \omega^2 q^2 \right)},\end{aligned} \hspace{\stretch{1}}(1.0.17)

with

\begin{aligned}a = \sqrt{\frac{\beta}{2 m}} p\end{aligned} \hspace{\stretch{1}}(1.0.18a)

\begin{aligned}b = \sqrt{\frac{\beta m}{2}} \omega q,\end{aligned} \hspace{\stretch{1}}(1.0.18b)

we have

\begin{aligned}Z_1 &= \frac{1}{{h \omega}} \sqrt{\frac{2 m}{\beta}} \sqrt{\frac{2}{\beta m}} \int da db e^{-a^2 - b^2} \\ &= \frac{2}{\beta h \omega}2 \pi \int_0^\infty r e^{-r^2} dr \\ &= \frac{2 \pi}{\beta h \omega} \\ &= \frac{1}{\beta \hbar \omega}.\end{aligned} \hspace{\stretch{1}}(1.0.19)

So for $N$ distinguishable classical one dimensional harmonic oscillators we have

\begin{aligned}\boxed{Z_N(T) = Z_1^N = \left( \frac{k_{\mathrm{B}} T}{\hbar \omega} \right)^N.}\end{aligned} \hspace{\stretch{1}}(1.0.20)
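This result is easy to verify numerically (my own check, in units $\hbar = m = \omega = 1$ so that $h = 2\pi$): a direct midpoint-rule evaluation of the phase space integral gives $Z_1 = 1/(\beta \hbar \omega) = 1$.

```python
import math

beta, m, w = 1.0, 1.0, 1.0
h = 2 * math.pi            # since hbar = 1
L, n = 10.0, 800           # integration box [-L, L] and grid size
d = 2 * L / n
grid = [-L + (i + 0.5) * d for i in range(n)]

# the Gaussian integrals over p and q factorize
Ip = sum(math.exp(-beta * p * p / (2 * m)) * d for p in grid)
Iq = sum(math.exp(-beta * m * w * w * q * q / 2) * d for q in grid)
Z1 = Ip * Iq / h
print(Z1)  # ~1 = 1/(beta hbar omega)
```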

### Classical mean energy and heat capacity

From the free energy

\begin{aligned}F = -k_{\mathrm{B}} T \ln Z_N = N k_{\mathrm{B}} T \ln (\beta \hbar \omega),\end{aligned} \hspace{\stretch{1}}(1.0.21)

we can compute the mean energy

\begin{aligned}U &= \frac{1}{{k_{\mathrm{B}}}} \frac{\partial {}}{\partial {\beta}} \left( \frac{F}{T} \right) \\ &= N \frac{\partial {}}{\partial {\beta}} \ln (\beta \hbar \omega) \\ &= \frac{N }{\beta},\end{aligned} \hspace{\stretch{1}}(1.0.22)

or

\begin{aligned}\boxed{U = N k_{\mathrm{B}} T.}\end{aligned} \hspace{\stretch{1}}(1.0.23)

The specific heat follows immediately

\begin{aligned}\boxed{C_{\mathrm{V}} = \frac{\partial {U}}{\partial {T}} = N k_{\mathrm{B}}.}\end{aligned} \hspace{\stretch{1}}(1.0.24)

### Quantum partition function, mean energy and heat capacity

For a single one dimensional quantum oscillator, our partition function is

\begin{aligned}Z_1 &= \sum_{n = 0}^\infty e^{-\beta \hbar \omega \left( n + \frac{1}{{2}} \right)} \\ &= e^{-\beta \hbar \omega/2}\sum_{n = 0}^\infty e^{-\beta \hbar \omega n} \\ &= \frac{e^{-\beta \hbar \omega/2}}{1 - e^{-\beta \hbar \omega}} \\ &= \frac{1}{e^{\beta \hbar \omega/2} - e^{-\beta \hbar \omega/2}} \\ &= \frac{1}{{2 \sinh(\beta \hbar \omega/2)}}.\end{aligned} \hspace{\stretch{1}}(1.0.25)

Assuming distinguishable quantum oscillators, our $N$ particle partition function is

\begin{aligned}\boxed{Z_N(\beta) = \frac{1}{{2^N \sinh^N(\beta \hbar \omega/2)}}.}\end{aligned} \hspace{\stretch{1}}(1.0.26)

This time we don’t add the $1/\hbar$ correction factor, nor the $N!$ indistinguishability correction factor.

Our free energy is

\begin{aligned}F = N k_{\mathrm{B}} T \ln \left( 2 \sinh(\beta \hbar \omega/2) \right),\end{aligned} \hspace{\stretch{1}}(1.0.27)

our mean energy is

\begin{aligned}U &= \frac{1}{{k_{\mathrm{B}}}} \frac{\partial {}}{\partial {\beta}} \frac{F}{T} \\ &= N \frac{\partial {}}{\partial {\beta}}\ln \sinh(\beta \hbar \omega/2) \\ &= N \frac{\cosh( \beta \hbar \omega/2 )}{\sinh(\beta \hbar \omega/2)} \frac{\hbar \omega}{2},\end{aligned} \hspace{\stretch{1}}(1.0.28)

or

\begin{aligned}\boxed{U(T)= \frac{N \hbar \omega}{2} \coth \left( \frac{\hbar \omega}{2 k_{\mathrm{B}} T} \right).}\end{aligned} \hspace{\stretch{1}}(1.0.29)

This is plotted in fig. 1.1.

Fig 1.1: Mean energy for N one dimensional quantum harmonic oscillators
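The closed form can be checked (my own verification, in units where $\hbar \omega = 1$) against a direct, truncated sum over the energy levels $E_n = (n + 1/2)\hbar\omega$:

```python
import math

beta = 2.0                         # arbitrary inverse temperature (my choice)
E = [n + 0.5 for n in range(500)]  # E_n = (n + 1/2) hbar omega, truncated
w = [math.exp(-beta * e) for e in E]
Z = sum(w)
U = sum(e * wi for e, wi in zip(E, w)) / Z   # mean energy per oscillator
U_closed = 0.5 / math.tanh(beta * 0.5)       # (hbar omega/2) coth(beta hbar omega/2)
print(U, U_closed)  # agree
```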

With $\coth'(x) = -1/\sinh^2(x)$, our specific heat is

\begin{aligned}C_{\mathrm{V}} &= \frac{\partial {U}}{\partial {T}} \\ &= \frac{N \hbar \omega}{2} \frac{-1}{\sinh^2 \left( \frac{\hbar \omega}{2 k_{\mathrm{B}} T} \right)} \frac{\hbar \omega}{2 k_{\mathrm{B}}} \left( \frac{-1}{T^2} \right),\end{aligned} \hspace{\stretch{1}}(1.0.30)

or

\begin{aligned}\boxed{C_{\mathrm{V}} = N k_{\mathrm{B}}\left( \frac{\hbar \omega}{2 k_{\mathrm{B}} T \sinh \left( \frac{\hbar \omega}{2 k_{\mathrm{B}} T} \right) } \right)^2.}\end{aligned} \hspace{\stretch{1}}(1.0.31)

### Classical limits

In the high temperature limit $1 \gg \hbar \omega/k_{\mathrm{B}} T$, we have

\begin{aligned}\cosh \left( \frac{\hbar \omega}{2 k_{\mathrm{B}} T} \right)\approx 1\end{aligned} \hspace{\stretch{1}}(1.0.32)

\begin{aligned}\sinh \left( \frac{\hbar \omega}{2 k_{\mathrm{B}} T} \right)\approx \frac{\hbar \omega}{2 k_{\mathrm{B}} T},\end{aligned} \hspace{\stretch{1}}(1.0.33)

so

\begin{aligned}U \approx N \frac{\not{{\hbar \omega}}}{\not{{2}}} \frac{\not{{2}} k_{\mathrm{B}} T}{\not{{\hbar \omega}}},\end{aligned} \hspace{\stretch{1}}(1.0.34)

or

\begin{aligned}U(T) \approx N k_{\mathrm{B}} T,\end{aligned} \hspace{\stretch{1}}(1.0.35)

matching the classical result of eq. 1.0.23. Similarly from the quantum specific heat result of eq. 1.0.31, we have

\begin{aligned}C_{\mathrm{V}}(T) \approx N k_{\mathrm{B}}\left( \frac{\hbar \omega}{2 k_{\mathrm{B}} T \left( \frac{\hbar \omega}{2 k_{\mathrm{B}} T} \right) } \right)^2= N k_{\mathrm{B}}.\end{aligned} \hspace{\stretch{1}}(1.0.36)

This matches our classical result from eq. 1.0.24. We expect this equivalence at high temperatures since our quantum harmonic partition function eq. 1.0.26 is approximately

\begin{aligned}Z_N \approx \left( \frac{2}{\beta \hbar \omega} \right)^N.\end{aligned} \hspace{\stretch{1}}(1.0.37)

This differs from the classical partition function only by a factor of $2^N$. While that shifts the free energy by $-N k_{\mathrm{B}} T \ln 2$, it doesn’t change the mean energy since ${\partial {(N \ln 2)}}/{\partial {\beta}} = 0$. At high temperatures the mean energy is large enough that the quantum nature of the system has no significant effect.

### Low temperature limits

For the classical case the heat capacity was constant ($C_{\mathrm{V}} = N k_{\mathrm{B}}$), all the way down to zero temperature. For the quantum case the heat capacity drops to zero at low temperatures. We can see that via l’Hôpital’s rule. With $x = \hbar \omega \beta/2$ the low temperature limit is

\begin{aligned}\lim_{T \rightarrow 0} C_{\mathrm{V}} &= N k_{\mathrm{B}} \lim_{x \rightarrow \infty} \frac{x^2}{\sinh^2 x} \\ &= N k_{\mathrm{B}} \lim_{x \rightarrow \infty} \frac{2x }{2 \sinh x \cosh x} \\ &= N k_{\mathrm{B}} \lim_{x \rightarrow \infty} \frac{1 }{\cosh^2 x + \sinh^2 x} \\ &= N k_{\mathrm{B}} \lim_{x \rightarrow \infty} \frac{1 }{\cosh (2 x) } \\ &= 0.\end{aligned} \hspace{\stretch{1}}(1.0.38)

We also see this in the plot of fig. 1.2.

Fig 1.2: Specific heat for N quantum oscillators
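These closed forms, and the limits just discussed, can be checked numerically against a direct sum over the oscillator levels. A minimal sketch, assuming $N = 1$ and units where $\hbar \omega = k_{\mathrm{B}} = 1$ (an arbitrary normalization, not part of the notes):

```python
import math

# Check U(T) and C_V against a direct sum over the oscillator levels
# E_n = hbar omega (n + 1/2).  Units with hbar omega = k_B = 1 and N = 1
# are an arbitrary normalization for this sketch.

def U_direct(T, nmax=2000):
    """Mean energy <E> = sum_n E_n e^{-beta E_n} / Z, summed numerically."""
    beta = 1.0 / T
    Z = sum(math.exp(-beta * (n + 0.5)) for n in range(nmax))
    return sum((n + 0.5) * math.exp(-beta * (n + 0.5)) for n in range(nmax)) / Z

def U_closed(T):
    """Closed form U = (hbar omega/2) coth(hbar omega/(2 k_B T)), eq. 1.0.29."""
    return 0.5 / math.tanh(0.5 / T)

def C_V(T):
    """Closed form C_V = N k_B (x/sinh x)^2 with x = hbar omega/(2 k_B T), eq. 1.0.31."""
    x = 0.5 / T
    return (x / math.sinh(x)) ** 2

for T in (0.2, 1.0, 5.0):
    assert abs(U_direct(T) - U_closed(T)) < 1e-9

assert abs(U_closed(50.0) - 50.0) < 0.1   # classical U -> N k_B T at high T
assert abs(C_V(50.0) - 1.0) < 1e-3        # classical C_V -> N k_B at high T
assert C_V(0.05) < 1e-6                   # quantum freeze out at low T
```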

## Question: Quantum electric dipole (2013 problem set 5, p3)

A quantum electric dipole at a fixed space point has its energy determined by two parts – a part which comes from its angular motion and a part coming from its interaction with an applied electric field $\mathcal{E}$. This leads to a quantum Hamiltonian

\begin{aligned}H = \frac{\mathbf{L} \cdot \mathbf{L}}{2 I} - \mu \mathcal{E} L_z,\end{aligned} \hspace{\stretch{1}}(1.0.39)

where $I$ is the moment of inertia, and we have assumed an electric field $\mathcal{E} = \mathcal{E} \hat{\mathbf{z}}$. This Hamiltonian has eigenstates described by spherical harmonics $Y_{l, m}(\theta, \phi)$, with $m$ taking on $2l+1$ possible integral values, $m = -l, -l + 1, \cdots, l -1, l$. The corresponding eigenvalues are

\begin{aligned}\lambda_{l, m} = \frac{l(l+1) \hbar^2}{2I} - \mu \mathcal{E} m \hbar.\end{aligned} \hspace{\stretch{1}}(1.0.40)

(Recall that $l$ is the total angular momentum eigenvalue, while $m$ is the eigenvalue corresponding to $L_z$.)

### Part a

Schematically sketch these eigenvalues as a function of $\mathcal{E}$ for $l = 0,1,2$.

### Part b

Find the quantum partition function, assuming only $l = 0$ and $l = 1$ contribute to the sum.

### Part c

Using this partition function, find the average dipole moment $\mu \left\langle{{L_z}}\right\rangle$ as a function of the electric field and temperature for small electric fields, commenting on its behavior at very high temperature and very low temperature.

### Part d

Estimate the temperature above which discarding higher angular momentum states, with $l \ge 2$, is not a good approximation.

### Sketch the energy eigenvalues

Let’s summarize the values of the energy eigenvalues $\lambda_{l,m}$ for $l = 0, 1, 2$ before attempting to plot them.

$l = 0$

For $l = 0$, the azimuthal quantum number can only take the value $m = 0$, so we have

\begin{aligned}\lambda_{0,0} = 0.\end{aligned} \hspace{\stretch{1}}(1.0.41)

$l = 1$

For $l = 1$ we have

\begin{aligned}\frac{l(l+1)}{2} = 1(2)/2 = 1,\end{aligned} \hspace{\stretch{1}}(1.0.42)

so we have

\begin{aligned}\lambda_{1,0} = \frac{\hbar^2}{I} \end{aligned} \hspace{\stretch{1}}(1.0.43a)

\begin{aligned}\lambda_{1,\pm 1} = \frac{\hbar^2}{I} \mp \mu \mathcal{E} \hbar.\end{aligned} \hspace{\stretch{1}}(1.0.43b)

$l = 2$

For $l = 2$ we have

\begin{aligned}\frac{l(l+1)}{2} = 2(3)/2 = 3,\end{aligned} \hspace{\stretch{1}}(1.0.44)

so we have

\begin{aligned}\lambda_{2,0} = \frac{3 \hbar^2}{I} \end{aligned} \hspace{\stretch{1}}(1.0.45a)

\begin{aligned}\lambda_{2,\pm 1} = \frac{3 \hbar^2}{I} \mp \mu \mathcal{E} \hbar\end{aligned} \hspace{\stretch{1}}(1.0.45b)

\begin{aligned}\lambda_{2,\pm 2} = \frac{3 \hbar^2}{I} \mp 2 \mu \mathcal{E} \hbar.\end{aligned} \hspace{\stretch{1}}(1.0.45c)

These are sketched as a function of $\mathcal{E}$ in fig. 1.3.

Fig 1.3: Energy eigenvalues for l = 0,1, 2

### Partition function

Our partition function, in general, is

\begin{aligned}Z &= \sum_{l = 0}^\infty \sum_{m = -l}^l e^{-\lambda_{l,m} \beta} \\ &= \sum_{l = 0}^\infty \exp\left( -\frac{l (l+1) \hbar^2 \beta}{2 I} \right)\sum_{m = -l}^l e^{ m \mu \hbar \mathcal{E} \beta}.\end{aligned} \hspace{\stretch{1}}(1.0.46)

Dropping all but $l = 0, 1$ terms this is

\begin{aligned}Z \approx 1 + e^{-\hbar^2 \beta/I} \left( 1 + e^{- \mu \hbar \mathcal{E} \beta } + e^{ \mu \hbar \mathcal{E} \beta} \right),\end{aligned} \hspace{\stretch{1}}(1.0.47)

or

\begin{aligned}\boxed{Z \approx 1 + e^{-\hbar^2 \beta/I} (1 + 2 \cosh\left( \mu \hbar \mathcal{E} \beta \right)).}\end{aligned} \hspace{\stretch{1}}(1.0.48)
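This truncated partition function can be sanity checked against a direct sum over the $l \le 1$ states. A small sketch, using units where $\hbar = I = \mu = 1$ (an arbitrary choice for illustration):

```python
import math

# Direct check of the truncated dipole partition function
#   Z ≈ 1 + e^{-hbar^2 beta / I} (1 + 2 cosh(mu hbar E beta)),
# using units where hbar = I = mu = 1 (an arbitrary sketch choice).

def Z_direct(beta, E, lmax=1):
    """Sum e^{-beta lambda_{l,m}} over l <= lmax, m = -l..l."""
    total = 0.0
    for l in range(lmax + 1):
        for m in range(-l, l + 1):
            lam = l * (l + 1) / 2.0 - E * m   # eigenvalues, eq. 1.0.40
            total += math.exp(-beta * lam)
    return total

def Z_closed(beta, E):
    return 1.0 + math.exp(-beta) * (1.0 + 2.0 * math.cosh(E * beta))

for beta in (0.5, 1.0, 2.0):
    for E in (0.0, 0.1, 1.0):
        assert abs(Z_direct(beta, E) - Z_closed(beta, E)) < 1e-12
```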

### Average dipole moment

For the average dipole moment, computing the thermal average over the energy eigenstates, we have

\begin{aligned}Z \left\langle{{ \mu L_z }}\right\rangle &= \sum_{l = 0}^\infty \sum_{m = -l}^l {\left\langle {l m} \right\rvert} \mu L_z {\left\lvert {l m} \right\rangle} e^{-\beta \lambda_{l, m}} \\ &= \sum_{l = 0}^\infty \sum_{m = -l}^l \mu {\left\langle {l m} \right\rvert} m \hbar {\left\lvert {l m} \right\rangle} e^{-\beta \lambda_{l, m}} \\ &= \mu \hbar \sum_{l = 0}^\infty \exp\left( -\frac{l (l+1) \hbar^2 \beta}{2 I} \right)\sum_{m = -l}^l m e^{ \mu m \hbar \mathcal{E} \beta} \\ &= \mu \hbar \sum_{l = 0}^\infty \exp\left( -\frac{l (l+1) \hbar^2 \beta}{2 I} \right)\sum_{m = 1}^l m \left( e^{ \mu m \hbar \mathcal{E} \beta} -e^{-\mu m \hbar \mathcal{E} \beta} \right) \\ &= 2 \mu \hbar \sum_{l = 0}^\infty \exp\left( -\frac{l (l+1) \hbar^2 \beta}{2 I} \right)\sum_{m = 1}^l m \sinh (\mu m \hbar \mathcal{E} \beta).\end{aligned} \hspace{\stretch{1}}(1.0.49)

For the cap of $l = 1$ we have

\begin{aligned}\left\langle{{ \mu L_z }}\right\rangle \approx\frac{2 \mu \hbar }{Z}\left( 1 (0) + e^{-\hbar^2 \beta/ I} \sinh (\mu \hbar \mathcal{E} \beta) \right)\approx2 \mu \hbar \frac{e^{-\hbar^2 \beta/ I} \sinh (\mu \hbar \mathcal{E} \beta) }{1 + e^{-\hbar^2 \beta/I} \left( 1 + 2 \cosh( \mu \hbar \mathcal{E} \beta) \right)},\end{aligned} \hspace{\stretch{1}}(1.0.50)

or

\begin{aligned}\boxed{\left\langle{{ \mu L_z }}\right\rangle \approx\frac{2 \mu \hbar \sinh (\mu \hbar \mathcal{E} \beta) }{e^{\hbar^2 \beta/I} + 1 + 2 \cosh\left( \mu \hbar \mathcal{E} \beta \right)}.}\end{aligned} \hspace{\stretch{1}}(1.0.51)

This is plotted in fig. 1.4.

Fig 1.4: Dipole moment
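The boxed moment eq. 1.0.51 can also be verified against a brute force thermal average over the four $l \le 1$ states. A sketch, again using units where $\hbar = I = \mu = 1$ (an arbitrary choice for illustration):

```python
import math

# Brute force check of the l <= 1 dipole moment
#   <mu L_z> ≈ 2 mu hbar sinh(mu hbar E beta)
#              / (e^{hbar^2 beta/I} + 1 + 2 cosh(mu hbar E beta)),
# with hbar = I = mu = 1 (arbitrary sketch normalization).

def moment_direct(beta, E):
    """Thermal average of m over the four l = 0, 1 states."""
    num = den = 0.0
    for l in (0, 1):
        for m in range(-l, l + 1):
            w = math.exp(-beta * (l * (l + 1) / 2.0 - E * m))
            num += m * w    # <l m| L_z |l m> = m hbar
            den += w
    return num / den

def moment_closed(beta, E):
    return 2.0 * math.sinh(E * beta) / (math.exp(beta) + 1.0 + 2.0 * math.cosh(E * beta))

for beta in (0.5, 2.0):
    for E in (0.05, 0.5):
        assert abs(moment_direct(beta, E) - moment_closed(beta, E)) < 1e-12
```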

For high temperatures $\mu \hbar \mathcal{E} \beta \ll 1$ or $k_{\mathrm{B}} T \gg \mu \hbar \mathcal{E}$, expanding the hyperbolic sine and cosines to first and second order respectively and the exponential to first order we have

\begin{aligned}\left\langle{{ \mu L_z }}\right\rangle &\approx 2 \mu \hbar \frac{ \frac{\mu \hbar \mathcal{E}}{k_{\mathrm{B}} T}}{ 4 + \frac{\hbar^2}{I k_{\mathrm{B}} T} + \left( \frac{\mu \hbar \mathcal{E}}{k_{\mathrm{B}} T} \right)^2}=\frac{2 (\mu \hbar)^2 \mathcal{E} k_{\mathrm{B}} T}{4 (k_{\mathrm{B}} T)^2 + \hbar^2 k_{\mathrm{B}} T/I + (\mu \hbar \mathcal{E})^2 } \\ &\approx\frac{(\mu \hbar)^2 \mathcal{E}}{2 k_{\mathrm{B}} T}.\end{aligned} \hspace{\stretch{1}}(1.0.52)

Our dipole moment tends to zero inversely with temperature. These two approximations are plotted along with the full temperature range result in fig. 1.5.

Fig 1.5: High temperature approximations to dipole moments

For low temperatures $k_{\mathrm{B}} T \ll \mu \hbar \mathcal{E}$, where $\mu \hbar \mathcal{E} \beta \gg 1$ we have

\begin{aligned}\left\langle{{ \mu L_z }}\right\rangle \approx\frac{ \mu \hbar e^{\mu \hbar \mathcal{E} \beta} }{ e^{\hbar^2 \beta/I} + e^{\mu \hbar \mathcal{E} \beta} }=\frac{ \mu \hbar }{ 1 + e^{ (\hbar^2/I - \mu \hbar \mathcal{E}) \beta } }.\end{aligned} \hspace{\stretch{1}}(1.0.53)

Provided the electric field is small enough (which means here that $\mathcal{E} < \hbar/(\mu I)$) this will look something like fig. 1.6.

Fig 1.6: Low temperature dipole moment behavior

### Approximation validation

In order to validate the approximation, let’s first put the partition function and the numerator of the dipole moment into tidier closed forms by evaluating the sums over the azimuthal indices $m$. First let’s sum the exponentials for the partition function, making a change of summation index $n = m + l$

\begin{aligned}\sum_{m = -l}^l a^m &= a^{-l} \sum_{n=0}^{2l} a^n \\ &= a^{-l} \frac{a^{2l + 1} - 1}{a - 1} \\ &= \frac{a^{l + 1} - a^{-l}}{a - 1} \\ &= \frac{a^{l + 1/2} - a^{-(l+1/2)}}{a^{1/2} - a^{-1/2}}.\end{aligned} \hspace{\stretch{1}}(1.0.54)

With a substitution of $a = e^b$, we have

\begin{aligned}\boxed{\sum_{m = -l}^l e^{b m}=\frac{\sinh(b(l + 1/2))}{\sinh(b/2)}.}\end{aligned} \hspace{\stretch{1}}(1.0.55)
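A quick numerical spot check of this identity, over a handful of $l$ and $b$ values:

```python
import math

# Spot check of sum_{m=-l}^{l} e^{b m} = sinh(b(l + 1/2)) / sinh(b/2).

for l in range(0, 6):
    for b in (0.1, 0.7, 2.0):
        lhs = sum(math.exp(b * m) for m in range(-l, l + 1))
        rhs = math.sinh(b * (l + 0.5)) / math.sinh(b / 2.0)
        assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(rhs))
```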

Now we can sum the azimuthal exponentials for the dipole moment. This sum is of the form

\begin{aligned}\sum_{m = -l}^l m a^m &= a \left( \sum_{m = 1}^l + \sum_{m = -l}^{-1} \right)m a^{m-1} \\ &= a \frac{d}{da}\sum_{m = 1}^l\left( a^{m} + a^{-m} \right) \\ &= a \frac{d}{da}\left( \sum_{m = -l}^l a^m - \not{{1}} \right) \\ &= a \frac{d}{da}\left( \frac{a^{l + 1/2} - a^{-(l+1/2)}}{a^{1/2} - a^{-1/2}} \right).\end{aligned} \hspace{\stretch{1}}(1.0.56)

With $a = e^{b}$, so that $a \, db/da = 1$, we have

\begin{aligned}a \frac{d}{da} = a \frac{db}{da} \frac{d}{db} = \frac{d}{db},\end{aligned} \hspace{\stretch{1}}(1.0.57)

so that

\begin{aligned}\sum_{m = -l}^l m e^{b m}= \frac{d}{db}\left( \frac{ \sinh(b(l + 1/2)) }{ \sinh(b/2) } \right).\end{aligned} \hspace{\stretch{1}}(1.0.58)

With a little help from Mathematica to simplify that result we have

\begin{aligned}\boxed{\sum_{m = -l}^l m e^{b m}=\frac{l \sinh(b (l+1)) - (l+1) \sinh(b l) }{2 \sinh^2(b/2)}.}\end{aligned} \hspace{\stretch{1}}(1.0.59)
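This Mathematica-simplified identity is also easy to spot check numerically:

```python
import math

# Spot check of
#   sum_{m=-l}^{l} m e^{b m}
#     = (l sinh(b(l+1)) - (l+1) sinh(b l)) / (2 sinh^2(b/2)).

for l in range(0, 6):
    for b in (0.1, 0.7, 2.0):
        lhs = sum(m * math.exp(b * m) for m in range(-l, l + 1))
        rhs = (l * math.sinh(b * (l + 1)) - (l + 1) * math.sinh(b * l)) \
              / (2.0 * math.sinh(b / 2.0) ** 2)
        assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(rhs))
```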

We can now express the average dipole moment with only sums over radial indices $l$

\begin{aligned}\left\langle{{ \mu L_z }}\right\rangle &= \mu \hbar \frac{ \sum_{l = 0}^\infty \exp\left( -\frac{l (l+1) \hbar^2 \beta}{2 I} \right) \sum_{m = -l}^l m e^{ \mu m \hbar \mathcal{E} \beta}}{ \sum_{l = 0}^\infty \exp\left( -\frac{l (l+1) \hbar^2 \beta}{2 I} \right) \sum_{m = -l}^l e^{ m \mu \hbar \mathcal{E} \beta}} \\ &= \mu \hbar\frac{ \sum_{l = 0}^\infty \exp\left( -\frac{l (l+1) \hbar^2 \beta}{2 I} \right) \frac { l \sinh(\mu \hbar \mathcal{E} \beta (l+1)) - (l+1) \sinh(\mu \hbar \mathcal{E} \beta l) } { 2 \sinh^2(\mu \hbar \mathcal{E} \beta/2) }}{\sum_{l = 0}^\infty \exp\left( -\frac{l (l+1) \hbar^2 \beta}{2 I} \right) \frac { \sinh(\mu \hbar \mathcal{E} \beta(l + 1/2)) } { \sinh(\mu \hbar \mathcal{E} \beta/2) }}.\end{aligned} \hspace{\stretch{1}}(1.0.60)

So our average dipole moment is

\begin{aligned}\boxed{\left\langle{{ \mu L_z }}\right\rangle = \frac{\mu \hbar }{2 \sinh(\mu \hbar \mathcal{E} \beta/2)}\frac{ \sum_{l = 0}^\infty \exp\left( -\frac{l (l+1) \hbar^2 \beta}{2 I} \right)\left( l \sinh(\mu \hbar \mathcal{E} \beta (l+1)) - (l+1) \sinh(\mu \hbar \mathcal{E} \beta l) \right)}{ \sum_{l = 0}^\infty \exp\left( -\frac{l (l+1) \hbar^2 \beta}{2 I} \right) \sinh(\mu \hbar \mathcal{E} \beta(l + 1/2))}.}\end{aligned} \hspace{\stretch{1}}(1.0.61)

The hyperbolic sine in the denominator from the partition function and the difference of hyperbolic sines in the numerator both grow fast. This is illustrated in fig. 1.7.

Fig 1.7: Hyperbolic sine plots for dipole moment

Let’s look at the order of these hyperbolic sines for large arguments. For the numerator we have a difference of the form

\begin{aligned}x \sinh( x + 1 ) - (x + 1) \sinh ( x ) &= \frac{1}{{2}} \left( x \left( e^{x + 1} - e^{-x - 1} \right) -(x +1 ) \left( e^{x } - e^{-x } \right) \right)\approx\frac{1}{{2}} \left( x e^{x + 1} -(x +1 ) e^{x } \right) \\ &= \frac{1}{{2}} \left( x e^{x} ( e - 1 ) - e^x \right) \\ &= O(x e^x).\end{aligned} \hspace{\stretch{1}}(1.0.62)

For the hyperbolic sine from the partition function we have for large $x$

\begin{aligned}\sinh( x + 1/2) = \frac{1}{{2}} \left( e^{x + 1/2} - e^{-x - 1/2} \right)\approx \frac{\sqrt{e}}{2} e^{x}= O(e^x).\end{aligned} \hspace{\stretch{1}}(1.0.63)

While these hyperbolic sines increase without bound as $l$ increases, the $\mathbf{L}^2$ contribution to these sums supplies a negative quadratic dependence on $l$ in the exponential weights. Provided the temperature is small enough, that suppression dominates the growth of the hyperbolic sines, and the $l \ge 2$ terms can be neglected. We wish for that exponential factor to dominate for all $l$. That is

\begin{aligned}\frac{l(l+1) \hbar^2}{2 I k_{\mathrm{B}} T} \gg 1,\end{aligned} \hspace{\stretch{1}}(1.0.64)

or

\begin{aligned}T \ll \frac{l(l+1) \hbar^2}{2 I k_{\mathrm{B}}}.\end{aligned} \hspace{\stretch{1}}(1.0.65)

Observe that the RHS of this inequality, for $l = 1, 2, 3, 4, \cdots$ satisfies

\begin{aligned}\frac{\hbar^2 }{I k_{\mathrm{B}}}<\frac{3 \hbar^2 }{I k_{\mathrm{B}}}<\frac{6 \hbar^2 }{I k_{\mathrm{B}}}<\frac{10 \hbar^2 }{I k_{\mathrm{B}}}< \cdots\end{aligned} \hspace{\stretch{1}}(1.0.66)

So, for small electric fields, our approximation should be valid provided our temperature is constrained by

\begin{aligned}\boxed{T \ll \frac{\hbar^2 }{I k_{\mathrm{B}}}.}\end{aligned} \hspace{\stretch{1}}(1.0.67)

## An updated compilation of notes, for ‘PHY452H1S Basic Statistical Mechanics’, Taught by Prof. Arun Paramekanti

Posted by peeterjoot on March 3, 2013

That compilation now all of the following too (no further updates will be made to any of these) :

February 28, 2013 Rotation of diatomic molecules

February 28, 2013 Helmholtz free energy

February 26, 2013 Statistical and thermodynamic connection

February 24, 2013 Ideal gas

February 16, 2013 One dimensional well problem from Pathria chapter II

February 15, 2013 1D pendulum problem in phase space

February 14, 2013 Continuing review of thermodynamics

February 13, 2013 Lightning review of thermodynamics

February 11, 2013 Cartesian to spherical change of variables in 3d phase space

February 10, 2013 n SHO particle phase space volume

February 10, 2013 Change of variables in 2d phase space

February 10, 2013 Some problems from Kittel chapter 3

February 07, 2013 Midterm review, thermodynamics

February 06, 2013 Limit of unfair coin distribution, the hard way

February 05, 2013 Ideal gas and SHO phase space volume calculations

February 03, 2013 One dimensional random walk

February 02, 2013 1D SHO phase space

February 02, 2013 Application of the central limit theorem to a product of random vars

January 31, 2013 Liouville’s theorem questions on density and current

January 30, 2013 State counting

## PHY452H1S Basic Statistical Mechanics. Lecture 11: Statistical and thermodynamic connection. Taught by Prof. Arun Paramekanti

Posted by peeterjoot on February 27, 2013

# Disclaimer

Peeter’s lecture notes from class. May not be entirely coherent.

# Connections between statistical and thermodynamic views

• “Heat”. Disorganized energy.
• $S_{\text{Statistical entropy}}$. This is the microscopic (Boltzmann) form of the thermodynamic entropy.

# Ideal gas

\begin{aligned}H = \sum_{i = 1}^N \frac{\mathbf{p}_i^2}{2m}\end{aligned} \hspace{\stretch{1}}(1.3.1)

\begin{aligned}\Omega(E) = \frac{1}{{h^{3N} N!}}\int d\mathbf{x}_1 d\mathbf{x}_2 \cdots d\mathbf{x}_Nd\mathbf{p}_1 d\mathbf{p}_2 \cdots d\mathbf{p}_N\delta( E - H )\end{aligned} \hspace{\stretch{1}}(1.3.2)

Let’s isolate the contribution of the Hamiltonian from a single particle and all the rest

\begin{aligned}H = \frac{\mathbf{p}_1^2}{2m}+\sum_{i \ne 1}^N \frac{\mathbf{p}_i^2}{2m}=\frac{\mathbf{p}_1^2}{2m}+H'\end{aligned} \hspace{\stretch{1}}(1.3.3)

so that the number of states in the phase space volume in the phase space region associated with the energy is

\begin{aligned}\Omega(N, E) &= \frac{V^N}{h^{3N} N!}\int d\mathbf{p}_1\int d\mathbf{p}_2 d\mathbf{p}_3 \cdots d\mathbf{p}_N\delta( E - H' - H_1) \\ &= \frac{V^{N-1}}{h^{3(N-1)} (N-1)!} \frac{V}{h^3 N}\int d\mathbf{p}_1\int d\mathbf{p}_2 d\mathbf{p}_3 \cdots d\mathbf{p}_N\delta( E - H' - H_1) \\ &= \frac{ V }{ h^3 N} \int d\mathbf{p}_1 \Omega( N-1, E - H_1 )\end{aligned} \hspace{\stretch{1}}(1.3.4)

With entropy defined by

\begin{aligned}S = k_{\mathrm{B}} \ln \Omega,\end{aligned} \hspace{\stretch{1}}(1.3.5)

we have

\begin{aligned}\Omega( N-1, E - H_1 ) = \exp\left( \frac{1}{k_{\mathrm{B}}} S \left( N-1, E - \frac{\mathbf{p}_1^2}{2m} \right) \right)\end{aligned} \hspace{\stretch{1}}(1.3.6)

so that

\begin{aligned}\Omega(N, E) =\frac{ V }{ h^3 N} \int d\mathbf{p}_1 \exp\left( \frac{1}{k_{\mathrm{B}}} S \left( N-1, E - \frac{\mathbf{p}_1^2}{2m} \right) \right)\end{aligned} \hspace{\stretch{1}}(1.3.7)

For $N \gg 1$ and $E \gg \mathbf{p}_1^2/2m$, the exponential can be approximated by

\begin{aligned}\exp\left( \frac{1}{k_{\mathrm{B}}} S \left( N-1, E - \frac{\mathbf{p}_1^2}{2m} \right) \right)= \exp\left( \frac{1}{k_{\mathrm{B}}} \left( S(N, E) - \left( \frac{\partial {S}}{\partial {N}} \right)_{E, V} - \frac{\mathbf{p}_1^2}{2m} \left( \frac{\partial {S}}{\partial {E}} \right)_{N, V} \right) \right),\end{aligned} \hspace{\stretch{1}}(1.3.8)

so that

\begin{aligned}\Omega(N, E) = \underbrace{\frac{ V }{ h^3 N} e^{S(N, E)/k_{\mathrm{B}}}e^{-\frac{1}{{k_{\mathrm{B}}}}\left( \frac{\partial {S}}{\partial {N}} \right)_{E, V}}}_{B}\int d\mathbf{p}_1 e^{-\frac{\mathbf{p}_1^2}{2m k_{\mathrm{B}}}\left( \frac{\partial {S}}{\partial {E}} \right)_{N, V}}.\end{aligned} \hspace{\stretch{1}}(1.3.9)

or

\begin{aligned}\Omega(N, E) = B\int d\mathbf{p}_1 e^{-\frac{\mathbf{p}_1^2}{2m k_{\mathrm{B}}}\left( \frac{\partial {S}}{\partial {E}} \right)_{N, V}}.\end{aligned} \hspace{\stretch{1}}(1.3.10)

Since $\left( {\partial {S}}/{\partial {E}} \right)_{N, V} = 1/T$, the momentum distribution for the single particle is

\begin{aligned}\mathcal{P}(\mathbf{p}_1) \propto e^{-\frac{\mathbf{p}_1^2}{2m k_{\mathrm{B}} T}}.\end{aligned} \hspace{\stretch{1}}(1.3.11)

This is the Maxwell distribution.

# Non-ideal gas. General classical system

Fig 1: Partitioning out a subset of a larger system

Breaking the system into a subsystem $1$ and the reservoir $2$ so that with

\begin{aligned}H = H_1 + H_2\end{aligned} \hspace{\stretch{1}}(1.4.12)

we have

\begin{aligned}\Omega(N, V, E) &= \int d\{x_1\}d\{p_1\}d\{x_2\}d\{p_2\}\delta( E - H_1 - H_2 ) \frac{1}{{ h^{3N_1} N_1! h^{3 N_2} N_2!}} \\ &\propto \int d\{x_1\}d\{p_1\}e^{\frac{1}{{k_{\mathrm{B}}}} S(E - H_1, N - N_1)}\end{aligned} \hspace{\stretch{1}}(1.4.13)

\begin{aligned}\Omega(N, V, E) \sim \int d\{x_1\}d\{p_1\}\underbrace{e^{\frac{1}{{k_{\mathrm{B}}}}S(E, N)}e^{-\frac{N_1 }{k_{\mathrm{B}}}\left( \frac{\partial {S}}{\partial {N}} \right)_{E, V}}}_{\text{``environment'', or ``heat bath''}}e^{-\frac{H_1 }{k_{\mathrm{B}}}\left( \frac{\partial {S}}{\partial {E}} \right)_{N, V}}\end{aligned} \hspace{\stretch{1}}(1.4.14)

\begin{aligned}H_1 = \sum_{i \in 1} \frac{\mathbf{p}_i^2}{2m}+\sum_{i, j \in 1} V(\mathbf{x}_i - \mathbf{x}_j)+ \sum_{i \in 1} \Phi(\mathbf{x}_i)\end{aligned} \hspace{\stretch{1}}(1.4.15)

\begin{aligned}\mathcal{P} \propto e^{-\frac{H( \{x_1\} \{p_1\} ) }{k_{\mathrm{B}} T} }\end{aligned} \hspace{\stretch{1}}(1.4.16)

and for the subsystem

\begin{aligned}\mathcal{P}_1 =\frac{e^{-\frac{H_1}{k_{\mathrm{B}} T} }}{\int d\{x_1\}d\{p_1\}e^{-\frac{H_1}{k_{\mathrm{B}} T} }}\end{aligned} \hspace{\stretch{1}}(1.4.17)

# Canonical ensemble

Can we use results for this subvolume to infer results for the entire system? Suppose we break the system into a number of smaller subsystems as in fig. 2.

Fig 2: Larger system partitioned into many small subsystems

\begin{aligned}\underbrace{(N, V, E)}_{\text{microcanonical}}\rightarrow (N, V, T)\end{aligned} \hspace{\stretch{1}}(1.5.18)

We’d have to understand how large the differences between the energy fluctuations of the different subsystems are. We’ve already assumed that we have minimal long range interactions since we’ve treated the subsystem $1$ above in isolation. With $\beta = 1/(k_{\mathrm{B}} T)$ the average energy is

\begin{aligned}\left\langle{{E}}\right\rangle = \frac{\int d\{x_1\}d\{p_1\}He^{- \beta H }}{\int d\{x_1\}d\{p_1\}e^{- \beta H }}\end{aligned} \hspace{\stretch{1}}(1.5.19)

\begin{aligned}\left\langle{{E^2}}\right\rangle = \frac{\int d\{x_1\}d\{p_1\}H^2e^{- \beta H }}{\int d\{x_1\}d\{p_1\}e^{- \beta H }}\end{aligned} \hspace{\stretch{1}}(1.5.20)

We define the partition function

\begin{aligned}Z \equiv \frac{1}{{h^{3N} N!}}\int d\{x_1\}d\{p_1\}e^{- \beta H }.\end{aligned} \hspace{\stretch{1}}(1.5.21)

Observe that the derivative of $Z$ is

\begin{aligned}\frac{\partial {Z}}{\partial {\beta}} = -\frac{1}{{h^{3N} N!}}\int d\{x_1\}d\{p_1\}He^{- \beta H },\end{aligned} \hspace{\stretch{1}}(1.5.22)

allowing us to express the average energy compactly in terms of the partition function

\begin{aligned}\left\langle{{E}}\right\rangle = -\frac{1}{{Z}} \frac{\partial {Z}}{\partial {\beta}} = - \frac{\partial {\ln Z}}{\partial {\beta}}.\end{aligned} \hspace{\stretch{1}}(1.5.23)

Taking second derivatives we find the variance of the energy

\begin{aligned}\frac{\partial^2 {{\ln Z}}}{\partial {{\beta}}^2} &=\frac{\partial {}}{\partial {\beta}}\frac{\int d\{x_1\}d\{p_1\}(-H)e^{- \beta H }}{\int d\{x_1\}d\{p_1\}e^{- \beta H }} \\ &= \frac{\int d\{x_1\}d\{p_1\}(-H)^2e^{- \beta H }}{\int d\{x_1\}d\{p_1\}e^{- \beta H }}-\frac{\left( \int d\{x_1\} d\{p_1\} (-H) e^{- \beta H } \right)^2}{\left( \int d\{x_1\} d\{p_1\} e^{- \beta H } \right)^2} \\ &= \left\langle{{E^2}}\right\rangle - \left\langle{{E}}\right\rangle^2 \\ &= \sigma_{\mathrm{E}}^2\end{aligned} \hspace{\stretch{1}}(1.5.24)

We also have

\begin{aligned}\sigma_{\mathrm{E}}^2 &= -\frac{\partial {\left\langle{{E}}\right\rangle}}{\partial {\beta}} \\ &= \frac{\partial {\left\langle{{E}}\right\rangle}}{\partial {T}} \frac{\partial {T}}{\partial {\beta}} \\ &= -\frac{\partial {\left\langle{{E}}\right\rangle}}{\partial {T}} \frac{\partial {}}{\partial {\beta}} \frac{1}{{k_{\mathrm{B}} \beta}} \\ &= \frac{\partial {\left\langle{{E}}\right\rangle}}{\partial {T}} \frac{1}{{k_{\mathrm{B}} \beta^2}} \\ &= k_{\mathrm{B}} T^2 \frac{\partial {\left\langle{{E}}\right\rangle}}{\partial {T}}\end{aligned} \hspace{\stretch{1}}(1.5.25)

Recalling that the heat capacity was defined by

\begin{aligned}C_V = \frac{\partial {\left\langle{{E}}\right\rangle}}{\partial {T}},\end{aligned} \hspace{\stretch{1}}(1.5.26)

we have

\begin{aligned}\sigma_{\mathrm{E}}^2 = k_{\mathrm{B}} T^2 C_V \propto N\end{aligned} \hspace{\stretch{1}}(1.5.27)

\begin{aligned}\frac{\sigma_{\mathrm{E}}}{\left\langle{{E}}\right\rangle} \propto \frac{1}{{\sqrt{N}}}\end{aligned} \hspace{\stretch{1}}(1.5.28)
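The chain of identities $\sigma_{\mathrm{E}}^2 = {\partial^2 \ln Z}/{\partial \beta^2} = k_{\mathrm{B}} T^2 \, {\partial \left\langle E \right\rangle}/{\partial T}$ can be spot checked numerically on any discrete spectrum. A minimal sketch, with an arbitrary three level toy spectrum and $k_{\mathrm{B}} = 1$ (sketch choices, not from the lecture):

```python
import math

# Spot check sigma_E^2 = d^2 ln Z / d beta^2 = k_B T^2 dU/dT for a
# discrete spectrum.  The three level spectrum and k_B = 1 are
# arbitrary choices for this sketch.

levels = [0.0, 1.0, 2.5]

def lnZ(beta):
    return math.log(sum(math.exp(-beta * e) for e in levels))

def avg(beta, f):
    """Thermal average of f(E) over the spectrum."""
    Z = sum(math.exp(-beta * e) for e in levels)
    return sum(f(e) * math.exp(-beta * e) for e in levels) / Z

beta = 0.7
var_direct = avg(beta, lambda e: e * e) - avg(beta, lambda e: e) ** 2

# sigma_E^2 as the second derivative of ln Z (central difference)
h = 1e-4
var_lnZ = (lnZ(beta + h) - 2 * lnZ(beta) + lnZ(beta - h)) / h ** 2
assert abs(var_direct - var_lnZ) < 1e-5

# sigma_E^2 = k_B T^2 dU/dT, with U = <E> and T = 1/(k_B beta)
T, dT = 1.0 / beta, 1e-5
dU_dT = (avg(1.0 / (T + dT), lambda e: e) - avg(1.0 / (T - dT), lambda e: e)) / (2 * dT)
assert abs(var_direct - T * T * dU_dT) < 1e-5
```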

## 1D pendulum problem in phase space

Posted by peeterjoot on February 15, 2013

Problem 2.6 in [1] asks for some analysis of the (presumably small angle) pendulum problem in phase space, including relating the energy and period of the system to the area $A$ enclosed by a phase space trajectory. With coordinates as in fig. 1.1, our Lagrangian is

Fig 1.1: 1d pendulum

\begin{aligned}\mathcal{L} = \frac{1}{{2}} m l^2 \dot{\theta}^2 - g m l ( 1 - \cos\theta ).\end{aligned} \hspace{\stretch{1}}(1.0.1)

As a sign check we find for small $\theta$ from the Euler-Lagrange equations $\ddot{\theta} = -(g/l) \theta$ as expected. For the Hamiltonian, we need the canonical momentum

\begin{aligned}p_\theta = \frac{\partial {\mathcal{L}}}{\partial {\dot{\theta}}} = m l^2 \dot{\theta}.\end{aligned} \hspace{\stretch{1}}(1.0.2)

Observe that this canonical momentum does not have dimensions of momentum, but that of angular momentum ($m l \dot{\theta} \times l$).

Our Hamiltonian is

\begin{aligned}H = \frac{1}{{2 m l^2}} p_\theta^2 + g m l ( 1 - \cos\theta ).\end{aligned} \hspace{\stretch{1}}(1.0.3)

Hamilton’s equations for this system, in matrix form are

\begin{aligned}\frac{d{{}}}{dt}\begin{bmatrix}\theta \\ p_\theta\end{bmatrix}=\begin{bmatrix}\frac{\partial {H}}{\partial {p_\theta}} \\ -\frac{\partial {H}}{\partial {\theta}} \end{bmatrix}=\begin{bmatrix}p_\theta/m l^2 \\ - g m l \sin\theta\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(1.0.4)

With $\omega = \sqrt{g/l}$, it is convenient to non-dimensionalize this

\begin{aligned}\frac{d{{}}}{dt}\begin{bmatrix}\theta \\ p_\theta/ \omega m l^2\end{bmatrix}=\omega\begin{bmatrix}p_\theta/\omega m l^2 \\ - \sin\theta\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(1.0.5)

Now we can make the small angle approximation. Writing

\begin{aligned}\mathbf{u} = \begin{bmatrix}\theta \\ p_\theta/ \omega m l^2\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(1.0.6a)

\begin{aligned}i = \begin{bmatrix}0 & 1 \\ -1 & 0\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(1.0.6b)

Our pendulum equation is reduced to

\begin{aligned}\mathbf{u}' = i \omega \mathbf{u},\end{aligned} \hspace{\stretch{1}}(1.0.7)

with a solution that we can read off by inspection

\begin{aligned}\mathbf{u} = e^{i \omega t} \mathbf{u}_0=\begin{bmatrix}\cos\omega t & \sin\omega t \\ -\sin\omega t & \cos \omega t\end{bmatrix}\mathbf{u}_0\end{aligned} \hspace{\stretch{1}}(1.0.8)

Let’s put the initial phase space point into polar form

\begin{aligned}\mathbf{u}_0^2= \theta_0^2 + \frac{p_0^2}{\omega^2 m^2 l^4}= \frac{2}{\omega^2 m l^2}\left( { \frac{p_0^2}{2 m l^2} + \frac{1}{{2}} \omega^2 m l^2 \theta_0^2 } \right)=\frac{2}{g m l}\left( { \frac{p_0^2}{2 m l^2} + \frac{1}{{2}} g m l \theta_0^2 } \right)\end{aligned} \hspace{\stretch{1}}(1.0.9)

This doesn’t appear to be an exact match for eq. 1.0.3, but we can write for small $\theta_0$

\begin{aligned}1 - \cos\theta_0=2 \sin^2 \left( { \frac{\theta_0}{2} } \right)\approx2 \left( { \frac{\theta_0}{2} } \right)^2=\frac{\theta_0^2}{2}.\end{aligned} \hspace{\stretch{1}}(1.0.10)

This shows that we can rewrite our initial conditions as

\begin{aligned}\mathbf{u}_0 = \sqrt{ \frac{2 E}{g m l} }e^{i \phi }\begin{bmatrix}1 \\ 0\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(1.0.11)

where

\begin{aligned}\tan \phi = -\frac{ p_0 }{ \omega m l^2 \theta_0 }.\end{aligned} \hspace{\stretch{1}}(1.0.12)

Our time evolution in phase space is given by

\begin{aligned}\begin{bmatrix}\theta(t) \\ p_\theta(t)\end{bmatrix}=\sqrt{ \frac{2 E}{g m l} }\begin{bmatrix}\cos(\omega t + \phi) \\ - \omega m l^2\sin(\omega t + \phi)\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(1.0.13)

or

\begin{aligned}\boxed{\begin{bmatrix}\theta(t) \\ p_\theta(t)\end{bmatrix}=\frac{1}{{\omega l}}\sqrt{ \frac{2 E}{m} }\begin{bmatrix}\cos(\omega t + \phi) \\ - \omega m l^2\sin(\omega t + \phi)\end{bmatrix}.}\end{aligned} \hspace{\stretch{1}}(1.0.14)

This is plotted in fig. 1.2.

Fig 1.2: Phase space trajectory for small angle pendulum

The area of this ellipse is

\begin{aligned}A = \pi \frac{1}{{\omega^2 l^2}} \frac{2 E}{m} \omega m l^2 = \frac{2 \pi}{\omega} E.\end{aligned} \hspace{\stretch{1}}(1.0.15)

With $\tau$ for the period of the trajectory, this is

\begin{aligned}A = \tau E.\end{aligned} \hspace{\stretch{1}}(1.0.16)

As a final note, observe that the oriented integral $\oint p_\theta d\theta$ from problem 2.5 of the text is also this area. This is a general property, which can be seen geometrically in fig. 1.3, where the counterclockwise oriented integral $\oint p dq$ gives the negative of the area. The integrals along the $c_4, c_1$ paths give the area under the blob, whereas the integrals along the other paths, where the sense is opposite, give the complete area under the top boundary. Since they are oppositely sensed, adding them gives just the area of the blob.

Fig 1.3: Area from oriented integral along path

Let’s do this $\oint p_\theta d\theta$ integral for the pendulum phase trajectories. With

\begin{aligned}\theta = \frac{1}{{\omega l}} \sqrt{\frac{2 E}{m}} \cos(\omega t + \phi)\end{aligned} \hspace{\stretch{1}}(1.0.17a)

\begin{aligned}p_\theta = -m l \sqrt{\frac{2 E}{m}} \sin(\omega t + \phi)\end{aligned} \hspace{\stretch{1}}(1.0.17b)

We have

\begin{aligned}\oint p_\theta d\theta = \frac{m l}{\omega l} \frac{2 E}{m} \int_0^{2\pi/\omega} \sin^2( \omega t + \phi) \omega dt= 2 E \int_0^{2\pi/\omega} \frac{ 1 - \cos\left( { 2(\omega t + \phi) } \right) }{2} dt= E \frac{2 \pi}{\omega} = E \tau.\end{aligned} \hspace{\stretch{1}}(1.0.18)
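The equality of the enclosed area and $\tau E$ can also be verified numerically, sampling the phase space ellipse of eq. 1.0.17 and applying the shoelace formula. Here $m = l = g = 1$ and $E = 0.3$ are arbitrary sketch values:

```python
import math

# Cross check A = tau E: sample the phase space ellipse
#   theta(t) = (1/(omega l)) sqrt(2E/m) cos(omega t + phi)
#   p(t)     = -m l sqrt(2E/m) sin(omega t + phi)
# and compute its area with the shoelace formula.
# m = l = g = 1 and E = 0.3 are arbitrary sketch values.

m = l = g = 1.0
omega = math.sqrt(g / l)
E = 0.3

theta_amp = math.sqrt(2 * E / m) / (omega * l)
p_amp = m * l * math.sqrt(2 * E / m)

n = 20000
area = 0.0
for k in range(n):
    t0 = 2 * math.pi * k / n
    t1 = 2 * math.pi * (k + 1) / n
    x0, y0 = theta_amp * math.cos(t0), -p_amp * math.sin(t0)
    x1, y1 = theta_amp * math.cos(t1), -p_amp * math.sin(t1)
    area += 0.5 * (x0 * y1 - x1 * y0)   # shoelace contribution

tau = 2 * math.pi / omega
assert abs(abs(area) - tau * E) < 1e-6   # A = tau E
```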

# References

[1] RK Pathria. Statistical mechanics. Butterworth Heinemann, Oxford, UK, 1996.

## The continuity equation for phase space density

Posted by peeterjoot on February 5, 2013

Thinking back to the origin of the 3D continuity equation from fluid mechanics, we used the geometrical argument that any change to the mass in a volume had to leave through the boundary. This was

\begin{aligned}\frac{\partial }{\partial t} \int_V \rho dV = -\int_{\partial V} \left( \rho \mathbf{u} \right) \cdot \hat{\mathbf{n}} dA= -\int_{V} \boldsymbol{\nabla} \cdot \left( \rho \mathbf{u} \right) dV.\end{aligned} \hspace{\stretch{1}}(1.0.1)

We used the divergence theorem above, allowing us to write, provided the volume is fixed,

\begin{aligned}0 =\int_V \left( \frac{\partial {\rho}}{\partial t} + \boldsymbol{\nabla} \cdot \left( \rho \mathbf{u} \right) \right)dV,\end{aligned} \hspace{\stretch{1}}(1.0.2)

or

\begin{aligned}0 =\frac{\partial {\rho}}{\partial t} + \boldsymbol{\nabla} \cdot \left( \rho \mathbf{u} \right).\end{aligned} \hspace{\stretch{1}}(1.0.3)

Consider the following phase space picture (Fig1).  The time evolution of any individual particle (or set of particles that lie in the same element of phase space) is directed along $(\dot{q}, \dot{p})$. So, the phase space density leaving through the surface is in proportion to the normal component of $\mathbf{j} = \rho (\dot{q}, \dot{p})$ (red in the figure).

Fig1: Phase space current

With this geometrical picture in mind, and a basis $\mathbf{e}_{i_{\alpha}}$ for the positions $q_{i_\alpha}$ and $\mathbf{f}_{i_\alpha}$ for the momentum components $p_{i_\alpha}$, the $6N$ dimensional phase space equivalent of 1.0.1 is

\begin{aligned}\frac{\partial }{\partial t} \int_{V_{6N}} \rho d^{3N} q d^{3N} p &= -\int_{\partial V_{6N}} \rho \sum_{i_\alpha} \left( \mathbf{e}_{i_\alpha} \dot{q}_{i_\alpha} + \mathbf{f}_{i_\alpha} \dot{p}_{i_\alpha} \right) \cdot \hat{\mathbf{n}} dA \\ &= -\int_{V_{6N}} d^{3N} q d^{3N} p \sum_{i_\alpha} \left( \frac{ \partial \left( \rho \dot{q}_{i_\alpha} \right) }{\partial q_{i_\alpha} } + \frac{ \partial \left( \rho \dot{p}_{i_\alpha} \right) }{\partial p_{i_\alpha} } \right)\end{aligned} \hspace{\stretch{1}}(1.0.4)

Here $dA$ is the surface area element for the phase space and $\hat{\mathbf{n}}$ is the unit normal to this surface. We have to assume the existence of a divergence theorem for the $6N$ dimensional space.

We can now regroup, and find for the integrand

\begin{aligned} 0 = \frac{\partial \rho}{\partial t} + \sum_{i_\alpha} \left( \frac{\partial \left( \rho \dot{q}_{i_\alpha} \right) }{ \partial q_{i_\alpha} } + \frac{\partial \left( \rho \dot{p}_{i_\alpha} \right) }{ \partial p_{i_\alpha} } \right),\end{aligned} \hspace{\stretch{1}}(1.0.5)

which is the continuity equation. The assumptions that we have to make are that the flow of the density in phase space through the surface is proportional to the projection of the vector $\rho (\dot{q}_{i_\alpha}, \dot{p}_{i_\alpha})$ onto the surface normal, and then use the same old arguments (extended to a $6N$ dimensional space) as we did for the continuity equation for 3D masses.
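For Hamiltonian flow specifically, the phase space velocity field $(\dot{q}, \dot{p}) = (\partial H/\partial p, -\partial H/\partial q)$ is divergence free, which is what reduces this continuity equation to Liouville’s theorem for the density. A finite difference spot check for a single pendulum Hamiltonian ($m = l = g = 1$, arbitrary sketch values):

```python
import math

# The phase space velocity field of Hamiltonian flow is divergence free:
# d(qdot)/dq + d(pdot)/dp = 0, since qdot = dH/dp and pdot = -dH/dq.
# Spot check for the pendulum H = p^2/(2 m l^2) + g m l (1 - cos theta),
# with m = l = g = 1 as arbitrary sketch values.

m = l = g = 1.0

def qdot(theta, p):
    return p / (m * l * l)                 # dH/dp

def pdot(theta, p):
    return -g * m * l * math.sin(theta)    # -dH/dtheta

h = 1e-6
for theta, p in ((0.3, 0.1), (1.2, -0.7), (-2.0, 0.4)):
    div = (qdot(theta + h, p) - qdot(theta - h, p)) / (2 * h) \
        + (pdot(theta, p + h) - pdot(theta, p - h)) / (2 * h)
    assert abs(div) < 1e-9
```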

## 1D SHO phase space

Posted by peeterjoot on February 2, 2013

[Click here for a PDF of this post with nicer formatting (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Let’s review the 1D SHO to get a better feel for the ideas of phase space. Given a spring and mass system

\begin{aligned}F = - k x = -(\boldsymbol{\nabla} \phi)_x\end{aligned} \hspace{\stretch{1}}(1.0.1)

our potential is

\begin{aligned}\phi = \frac{1}{{2}} k x^2,\end{aligned} \hspace{\stretch{1}}(1.0.2)

so that our Hamiltonian is

\begin{aligned}H = \frac{1}{{2m}}{p^2} + \frac{1}{{2}} k x^2.\end{aligned} \hspace{\stretch{1}}(1.0.3)

Hamilton’s equations follow from $H = p \dot{x} - \mathcal{L}$

\begin{aligned}\frac{\partial {H}}{\partial {p}} = \dot{x}\end{aligned} \hspace{\stretch{1}}(1.0.4a)

\begin{aligned}\frac{\partial {H}}{\partial {x}} = -\dot{p}.\end{aligned} \hspace{\stretch{1}}(1.0.4b)

For the SHO this is

\begin{aligned}\frac{d{{}}}{dt}\begin{bmatrix}p \\ x\end{bmatrix}=\begin{bmatrix}-\frac{\partial {H}}{\partial {x}} \\ \frac{\partial {H}}{\partial {p}} \end{bmatrix}=\begin{bmatrix} - k x \\ p/m\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(1.0.5)

It’s convenient to non-dimensionalize this. Using $\omega = \sqrt{k/m}$, which has dimensions of $1/T$, we form

\begin{aligned}\frac{d{{}}}{dt}\begin{bmatrix}p/m \\ \omega x\end{bmatrix}=\begin{bmatrix} - (k/m) x \\ (\omega) p/m\end{bmatrix}=\omega\begin{bmatrix} - \omega x \\ p/m\end{bmatrix}=\omega\begin{bmatrix}0 & -1 \\ 1 & 0\end{bmatrix}\begin{bmatrix}p/m \\ \omega x\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(1.0.6)

With definitions

\begin{aligned}i = \begin{bmatrix}0 & -1 \\ 1 & 0\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(1.0.7a)

\begin{aligned}\mathbf{x} =\begin{bmatrix}p/m \\ \omega x\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(1.0.7b)

the SHO Hamilton’s equations are just

\begin{aligned}\boxed{\mathbf{x}' = i \omega \mathbf{x}.}\end{aligned} \hspace{\stretch{1}}(1.0.8)

The solution follows immediately

\begin{aligned}\mathbf{x} = e^{i \omega t} \mathbf{x}_0.\end{aligned} \hspace{\stretch{1}}(1.0.9)

We expect this matrix exponential to have the structure of a rotation matrix, so let's write it out explicitly to see that structure

\begin{aligned}e^{i \omega t} = I \cos(\omega t)+ i \sin(\omega t)=\begin{bmatrix}1 & 0 \\ 0 & 1\end{bmatrix}\cos(\omega t)+ \begin{bmatrix}0 & -1 \\ 1 & 0\end{bmatrix}\sin(\omega t)=\begin{bmatrix}\cos(\omega t) & - \sin(\omega t) \\ \sin(\omega t) & \cos(\omega t)\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(1.0.10)

In this non-dimensionalized phase space, with $p/m$ on the horizontal axis and $\omega x$ on the vertical axis, this is a counterclockwise rotation. The (squared) radius of the rotation is

\begin{aligned}(p_0/m)^2+(\omega x_0)^2 = \frac{2}{m}\left( {\frac{p_0^2}{2m}+ \frac{1}{{2}} \omega^2 m x_0^2} \right)= \frac{2}{m}\left( {\frac{p_0^2}{2m}+ \frac{1}{{2}} k x_0^2} \right)=\frac{2 E}{m}.\end{aligned} \hspace{\stretch{1}}(1.0.11)
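As a quick numerical sanity check (my own, with arbitrary sample values), exponentiating the matrix "$i$" of 1.0.7a should reproduce the rotation matrix of 1.0.10:

```python
import numpy as np
from scipy.linalg import expm

# the 2x2 "imaginary unit" matrix from 1.0.7a
i = np.array([[0.0, -1.0],
              [1.0,  0.0]])
omega, t = 2.0, 0.3  # arbitrary sample values

# rotation matrix of 1.0.10
rotation = np.array([[np.cos(omega * t), -np.sin(omega * t)],
                     [np.sin(omega * t),  np.cos(omega * t)]])

print(np.allclose(expm(i * omega * t), rotation))  # True
```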

It makes sense to put the initial position in phase space in polar form too. We can write

\begin{aligned}\begin{bmatrix}p_0/m \\ \omega x_0\end{bmatrix}=\sqrt{\frac{2 E}{m}}e^{i \theta}\begin{bmatrix}1 \\ 0\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(1.0.12)

where

\begin{aligned}\theta = \arctan \left( { \omega m \frac{x_0}{p_0} } \right).\end{aligned} \hspace{\stretch{1}}(1.0.13)

Now the non-dimensionalized phase space solution takes the particularly simple form

\begin{aligned}\boxed{\mathbf{x} = \sqrt{ \frac{2 E}{m} } e^{i (\omega t + \theta)} \begin{bmatrix}1 \\ 0\end{bmatrix}.}\end{aligned} \hspace{\stretch{1}}(1.0.14)

Removing the non-dimensionalization

Written explicitly, our momentum and position trace out elliptical trajectories

\begin{aligned}p = p_0 \cos(\omega t) - \omega m x_0 \sin(\omega t) \end{aligned} \hspace{\stretch{1}}(1.0.15a)

\begin{aligned}x = \frac{p_0}{m \omega} \sin(\omega t) + x_0 \cos(\omega t).\end{aligned} \hspace{\stretch{1}}(1.0.15b)

With the initial phase space point specified as a rotation from the momentum axis as in 1.0.13, this is just

\begin{aligned}p = \sqrt{ \frac{2 E}{m} } m \cos(\omega t + \theta) = \sqrt{ 2 m E } \cos(\omega t + \theta) \end{aligned} \hspace{\stretch{1}}(1.0.16a)

\begin{aligned}x = \sqrt{ \frac{2 E}{m} } \frac{1}{{\omega}} \sin(\omega t + \theta) = \sqrt{ \frac{2 E}{k} } \sin(\omega t + \theta) \end{aligned} \hspace{\stretch{1}}(1.0.16b)

This is plotted in (Fig 1)

Fig 1: A SHO phase space trajectory

Observe that the rotation angle $\theta$ doesn’t specify a geometric rotation of the ellipse. Instead, it is a function of the starting point of the elliptical trajectory through phase space.
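A quick numerical check (with sample values of my own) that the explicit trajectory 1.0.16 conserves the energy $H = p^2/2m + k x^2/2$ at every instant:

```python
import numpy as np

# arbitrary illustrative parameters
m, k, E, theta = 2.0, 3.0, 5.0, 0.4
omega = np.sqrt(k / m)
t = np.linspace(0.0, 10.0, 500)

# the trajectory of 1.0.16
p = np.sqrt(2 * m * E) * np.cos(omega * t + theta)
x = np.sqrt(2 * E / k) * np.sin(omega * t + theta)

H = p**2 / (2 * m) + k * x**2 / 2
print(np.allclose(H, E))  # True: energy is constant along the ellipse
```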

Aside. Complex representation of phase space points

It’s interesting to note that we can also work in a complex representation of phase space, instead of a matrix picture (for this 1D SHO problem).

\begin{aligned}\frac{d{{}}}{dt} \left( {\frac{p}{m} + i \omega x} \right)=\omega \left( {- \omega x + i \frac{p}{m}} \right)=i \omega \left( {\frac{p}{m} + i \omega x} \right).\end{aligned} \hspace{\stretch{1}}(1.0.17)

Writing

\begin{aligned}z = \frac{p}{m} + i \omega x, \end{aligned} \hspace{\stretch{1}}(1.0.18)

Hamilton’s equations take the form

\begin{aligned}z' = i \omega z.\end{aligned} \hspace{\stretch{1}}(1.0.19)

Again, we can read off the solution by inspection

\begin{aligned}z = e^{i \omega t} z_0.\end{aligned} \hspace{\stretch{1}}(1.0.20)
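The two pictures can be compared numerically. Here is a small sketch of my own (sample values arbitrary) confirming that the complex evolution 1.0.20 matches the matrix solution 1.0.9:

```python
import numpy as np

# arbitrary illustrative parameters and initial conditions
m, k = 1.5, 2.0
omega = np.sqrt(k / m)
p0, x0 = 0.7, -0.4
t = 1.1

# complex representation: z = p/m + i omega x, evolved per 1.0.20
z = (p0 / m + 1j * omega * x0) * np.exp(1j * omega * t)

# matrix representation: x(t) = e^{i omega t} x_0 with the 2x2 rotation generator
c, s = np.cos(omega * t), np.sin(omega * t)
vec = np.array([[c, -s], [s, c]]) @ np.array([p0 / m, omega * x0])

print(np.allclose([z.real, z.imag], vec))  # True: both pictures agree
```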

## PHY456H1F: Quantum Mechanics II. Lecture 21 (Taught by Prof J.E. Sipe). Scattering theory

Posted by peeterjoot on November 24, 2011

[Click here for a PDF of this post with nicer formatting and figures if the post had any (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

# Disclaimer.

Peeter’s lecture notes from class. May not be entirely coherent.

# Scattering theory.

READING: section 19, section 20 of the text [1].

Here’s a simple classical picture of a two particle scattering collision (\ref{fig:qmTwoL21:qmTwoL21Fig1})

\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{qmTwoL21Fig1}
\caption{classical collision of particles.}
\end{figure}

We will focus on point particle elastic collisions (no energy lost in the collision). With particles of mass $m_1$ and $m_2$ we write for the total and reduced mass respectively

\begin{aligned}M = m_1 + m_2\end{aligned} \hspace{\stretch{1}}(2.1)

\begin{aligned}\frac{1}{{\mu}} = \frac{1}{{m_1}} + \frac{1}{{m_2}},\end{aligned} \hspace{\stretch{1}}(2.2)

so that interaction due to a potential $V(\mathbf{r}_1 - \mathbf{r}_2)$ that depends on the difference in position $\mathbf{r} = \mathbf{r}_1 - \mathbf{r}_2$ has, in the center of mass frame, the Hamiltonian

\begin{aligned}H = \frac{\mathbf{p}^2}{2 \mu} + V(\mathbf{r})\end{aligned} \hspace{\stretch{1}}(2.3)

In the classical picture we would investigate the scattering radius $r_0$ associated with the impact parameter $\rho$ as depicted in figure (\ref{fig:qmTwoL21:qmTwoL21Fig2})

\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{qmTwoL21Fig2}
\caption{Classical scattering radius and impact parameter.}
\end{figure}

## 1D QM scattering. No potential wave packet time evolution.

Now let's move to the QM picture, where we assume that we have a particle that can be represented as a wave packet as in figure (\ref{fig:qmTwoL21:qmTwoL21Fig3})
\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{qmTwoL21Fig3}
\caption{Wave packet for a particle wavefunction $\Re(\psi(x,0))$}
\end{figure}

First, without any potential ($V(x) = 0$), let's consider the evolution. Our position and momentum space representations are related by

\begin{aligned}\int {\left\lvert{\psi(x, t)}\right\rvert}^2 dx = 1 = \int {\left\lvert{\psi(p, t)}\right\rvert}^2 dp,\end{aligned} \hspace{\stretch{1}}(2.4)

and by Fourier transform

\begin{aligned}\psi(x, t) = \int \frac{dp}{\sqrt{2 \pi \hbar}} \overline{\psi}(p, t) e^{i p x/\hbar}.\end{aligned} \hspace{\stretch{1}}(2.5)

Schr\”{o}dinger’s equation takes the form

\begin{aligned}i \hbar \frac{\partial {\psi(x,t)}}{\partial {t}} = - \frac{\hbar^2}{2 \mu} \frac{\partial^2 {{\psi(x, t)}}}{\partial {{x}}^2},\end{aligned} \hspace{\stretch{1}}(2.6)

or more simply in momentum space

\begin{aligned}i \hbar \frac{\partial {\overline{\psi}(p,t)}}{\partial {t}} = \frac{p^2}{2 \mu} \overline{\psi}(p, t).\end{aligned} \hspace{\stretch{1}}(2.7)

Rearranging to integrate we have

\begin{aligned}\frac{\partial {\overline{\psi}}}{\partial {t}} = -\frac{i p^2}{2 \mu \hbar} \overline{\psi},\end{aligned} \hspace{\stretch{1}}(2.8)

and integrating

\begin{aligned}\ln \overline{\psi} = -\frac{i p^2 t}{2 \mu \hbar} + \ln C,\end{aligned} \hspace{\stretch{1}}(2.9)

or

\begin{aligned}\overline{\psi} = C e^{-\frac{i p^2 t}{2 \mu \hbar}} = \overline{\psi}(p, 0) e^{-\frac{i p^2 t}{2 \mu \hbar}}.\end{aligned} \hspace{\stretch{1}}(2.10)

Time evolution in momentum space for the free particle changes only the phase of the wavefunction, leaving the momentum probability density of that particle unchanged.

Fourier transforming, we find our position space wavefunction to be

\begin{aligned}\psi(x, t) = \int \frac{dp}{\sqrt{2 \pi \hbar}} \overline{\psi}(p, 0) e^{i p x/\hbar} e^{-i p^2 t/2 \mu \hbar}.\end{aligned} \hspace{\stretch{1}}(2.11)

To clean things up, write

\begin{aligned}p = \hbar k,\end{aligned} \hspace{\stretch{1}}(2.12)

for

\begin{aligned}\psi(x, t) = \int \frac{dk}{\sqrt{2 \pi}} a(k, 0) e^{i k x} e^{-i \hbar k^2 t/2 \mu},\end{aligned} \hspace{\stretch{1}}(2.13)

where

\begin{aligned}a(k, 0) = \sqrt{\hbar} \overline{\psi}(p, 0).\end{aligned} \hspace{\stretch{1}}(2.14)

Putting

\begin{aligned}a(k, t) = a(k, 0) e^{ -i \hbar k^2 t/2 \mu},\end{aligned} \hspace{\stretch{1}}(2.15)

we have

\begin{aligned}\psi(x, t) = \int \frac{dk}{\sqrt{2 \pi}} a(k, t) e^{i k x}.\end{aligned} \hspace{\stretch{1}}(2.16)

Observe that we have

\begin{aligned}\int dk {\left\lvert{ a(k, t)}\right\rvert}^2 = \int dp {\left\lvert{ \overline{\psi}(p, t)}\right\rvert}^2 = 1.\end{aligned} \hspace{\stretch{1}}(2.17)
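The phase-only evolution 2.10 can be exercised numerically. This is a sketch of my own (grid sizes and packet parameters are arbitrary choices): propagate a normalized Gaussian packet by multiplying its momentum components by $e^{-i p^2 t/2 \mu \hbar}$, and confirm that the normalization 2.17 is preserved.

```python
import numpy as np

hbar, mu = 1.0, 1.0
x = np.linspace(-50.0, 50.0, 2048)
dx = x[1] - x[0]

# normalized Gaussian packet, mean momentum hbar*k0, width Delta
k0, Delta = 2.0, 1.0
psi0 = (np.pi * Delta**2)**-0.25 * np.exp(1j * k0 * x - x**2 / (2 * Delta**2))

# momentum grid matching the discrete Fourier transform convention
p = hbar * 2 * np.pi * np.fft.fftfreq(x.size, d=dx)
t = 3.0

# free evolution: transform, multiply by the phase of 2.10, transform back
psi_t = np.fft.ifft(np.fft.fft(psi0) * np.exp(-1j * p**2 * t / (2 * mu * hbar)))

norm0 = np.sum(np.abs(psi0)**2) * dx
norm_t = np.sum(np.abs(psi_t)**2) * dx
print(np.isclose(norm0, 1.0), np.isclose(norm_t, norm0))  # True True
```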

## A Gaussian wave packet

Suppose that we have, as depicted in figure (\ref{fig:qmTwoL21:qmTwoL21Fig4})
\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{qmTwoL21Fig4}
\caption{Gaussian wave packet.}
\end{figure}

a Gaussian wave packet of the form

\begin{aligned}\psi(x, 0) = \frac{1}{ (\pi \Delta^2)^{1/4}} e^{i k_0 x} e^{- x^2/2 \Delta^2}.\end{aligned} \hspace{\stretch{1}}(2.18)

This is actually a minimum uncertainty packet with

\begin{aligned}\Delta x &= \frac{\Delta}{\sqrt{2}} \\ \Delta p &= \frac{\hbar}{\Delta \sqrt{2}}.\end{aligned} \hspace{\stretch{1}}(2.19)

Taking Fourier transforms we have

\begin{aligned}a(k, 0) &= \left(\frac{\Delta^2}{\pi}\right)^{1/4} e^{-(k - k_0)^2 \Delta^2/2} \\ a(k, t) &= \left(\frac{\Delta^2}{\pi}\right)^{1/4} e^{-(k - k_0)^2 \Delta^2/2} e^{ -i \hbar k^2 t/ 2\mu} \equiv \alpha(k, t)\end{aligned} \hspace{\stretch{1}}(2.21)

For $t > 0$ our wave packet will start moving and spreading as in figure (\ref{fig:qmTwoL21:qmTwoL21Fig5})
\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{qmTwoL21Fig5}
\end{figure}
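The minimum uncertainty claim 2.19 can be checked numerically for the packet 2.18. This sketch is my own, with illustrative values $\hbar = \Delta = 1$, $k_0 = 2$:

```python
import numpy as np

hbar, Delta, k0 = 1.0, 1.0, 2.0
x = np.linspace(-30.0, 30.0, 4096)
dx = x[1] - x[0]

# the Gaussian packet of 2.18
psi = (np.pi * Delta**2)**-0.25 * np.exp(1j * k0 * x - x**2 / (2 * Delta**2))

# position uncertainty from the position probability density
prob_x = np.abs(psi)**2
mean_x = np.sum(x * prob_x) * dx
delta_x = np.sqrt(np.sum((x - mean_x)**2 * prob_x) * dx)

# momentum uncertainty from the (normalized) discrete Fourier density
p = hbar * 2 * np.pi * np.fft.fftfreq(x.size, d=dx)
dp = p[1] - p[0]
prob_p = np.abs(np.fft.fft(psi))**2
prob_p /= np.sum(prob_p) * dp
mean_p = np.sum(p * prob_p) * dp
delta_p = np.sqrt(np.sum((p - mean_p)**2 * prob_p) * dp)

print(np.isclose(delta_x, Delta / np.sqrt(2), rtol=1e-2))            # True
print(np.isclose(delta_p, hbar / (Delta * np.sqrt(2)), rtol=1e-2))   # True
```

The product $\Delta x \, \Delta p$ then comes out to $\hbar/2$, the minimum allowed by the uncertainty relation.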

## With a potential.

Now “switch on” a potential, still assuming a wave packet representation for the particle. With a positive (repulsive) potential as in figure (\ref{fig:qmTwoL21:qmTwoL21Fig6}), at a time long before the interaction of the wave packet with the potential we can visualize the packet as heading towards the barrier.

\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{qmTwoL21Fig6}
\caption{QM wave packet prior to interaction with repulsive potential.}
\end{figure}

Long after the interaction, classically, for this sort of potential where the particle kinetic energy is less than the barrier “height”, we would have total reflection. In the QM case, we’ve seen before that we will have both a reflected and a transmitted portion of the wave packet, as depicted in figure (\ref{fig:qmTwoL21:qmTwoL21Fig7})
\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{qmTwoL21Fig7}
\caption{QM wave packet long after interaction with repulsive potential.}
\end{figure}

Even if the particle kinetic energy is greater than the barrier height, as in figure (\ref{fig:qmTwoL21:qmTwoL21Fig8}), we can still have a reflected component.
\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{qmTwoL21Fig8}
\caption{Kinetic energy greater than potential energy.}
\end{figure}

This is even true for a negative potential as depicted in figure (\ref{fig:qmTwoL21:qmTwoL21Fig9})!

\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{qmTwoL21Fig9}
\caption{Reflection from an attractive potential.}
\end{figure}

Consider the probability for the particle to be found anywhere long after the interaction. Summing the transmitted and reflected wave functions, we have

\begin{aligned}1 &= \int {\left\lvert{\psi_r + \psi_t}\right\rvert}^2 \\ &= \int {\left\lvert{\psi_r}\right\rvert}^2 + \int {\left\lvert{\psi_t}\right\rvert}^2 + 2 \Re \int \psi_r^{*} \psi_t\end{aligned}

Observe that long after the interaction the cross terms vanish because the reflected and transmitted packets are non-overlapping, leaving just the probability densities for the transmitted and reflected portions independently.

We define

\begin{aligned}T &= \int {\left\lvert{\psi_t(x, t)}\right\rvert}^2 dx \\ R &= \int {\left\lvert{\psi_r(x, t)}\right\rvert}^2 dx.\end{aligned} \hspace{\stretch{1}}(2.23)

The objective of most of our scattering problems will be the calculation of these probabilities and the comparisons of their ratios.

Question: can we have more than one wave packet reflect off the potential? Yes; we could have multiple wave packets for both the reflected and the transmitted portions. For example, if the potential has some internal structure there could be internal reflections before anything emerges on either side, and things could get quite messy.

# Considering the time independent case temporarily.

We are going to work through something that is going to seem at first to be completely unrelated. We will (eventually) see that this can be applied to this problem, so a bit of patience will be required.

We will be using the time independent Schr\”{o}dinger equation

\begin{aligned}- \frac{\hbar^2}{2 \mu} \psi_k''(x) + V(x) \psi_k(x) = E \psi_k(x),\end{aligned} \hspace{\stretch{1}}(3.25)

where we have added a subscript $k$ to our wave function with the intention (later) of allowing this to vary. For “future use” we define for $k > 0$

\begin{aligned}E = \frac{\hbar^2 k^2}{2 \mu}.\end{aligned} \hspace{\stretch{1}}(3.26)

Consider a potential as in figure (\ref{fig:qmTwoL21:qmTwoL21Fig10}), where $V(x) = 0$ for $x > x_2$ and $x < x_1$.

\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.2\textheight]{qmTwoL21Fig10}
\caption{potential zero outside of a specific region.}
\end{figure}

We won't have bound states here (repulsive potential). There will be many possible solutions, but we want to look for a solution that is of the form

\begin{aligned}\psi_k(x) = C e^{i k x}, \qquad x > x_2\end{aligned} \hspace{\stretch{1}}(3.27)

Suppose that at $x = x_3 > x_2$ we have

\begin{aligned}\psi_k(x_3) = C e^{i k x_3}\end{aligned} \hspace{\stretch{1}}(3.28)

\begin{aligned}{\left.{{\frac{d\psi_k}{dx}}}\right\vert}_{{x = x_3}} = i k C e^{i k x_3} \equiv \phi_k(x_3)\end{aligned} \hspace{\stretch{1}}(3.29)

\begin{aligned}{\left.{{\frac{d^2\psi_k}{dx^2}}}\right\vert}_{{x = x_3}} = -k^2 C e^{i k x_3} \end{aligned} \hspace{\stretch{1}}(3.30)

Defining

\begin{aligned}\phi_k(x) = \frac{d\psi_k}{dx},\end{aligned} \hspace{\stretch{1}}(3.31)

we write Schr\”{o}dinger’s equation as a pair of coupled first order equations

\begin{aligned}\frac{d\psi_k}{dx} &= \phi_k(x) \\ -\frac{\hbar^2}{2 \mu} \frac{d\phi_k(x)}{dx} &= - V(x) \psi_k(x) + \frac{\hbar^2 k^2}{2\mu} \psi_k(x).\end{aligned} \hspace{\stretch{1}}(3.32)

At this $x = x_3$ specifically, we “know” both $\phi_k(x_3)$ and $\psi_k(x_3)$ and have

\begin{aligned}{\left.{{\frac{d\psi_k}{dx}}}\right\vert}_{{x_3}} &= \phi_k(x_3) \\ -\frac{\hbar^2}{2 \mu} {\left.{{\frac{d\phi_k(x)}{dx}}}\right\vert}_{{x_3}} &= - V(x_3) \psi_k(x_3) + \frac{\hbar^2 k^2}{2\mu} \psi_k(x_3).\end{aligned} \hspace{\stretch{1}}(3.34)

This allows us to find both

\begin{aligned}{\left.{{\frac{d\psi_k}{dx}}}\right\vert}_{{x_3}}, \qquad {\left.{{\frac{d\phi_k}{dx}}}\right\vert}_{{x_3}},\end{aligned} \hspace{\stretch{1}}(3.36)

then proceed to numerically calculate $\phi_k(x)$ and $\psi_k(x)$ at neighboring points $x = x_3 + \epsilon$. Essentially, this allows us to numerically integrate backwards from $x_3$ to find the wave function at previous points for any sort of potential.
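Here's a sketch of that backwards numerical integration (my own, with assumed step sizes and a free-particle potential so the exact answer is known): stepping the coupled system 3.32 from $x_3$ with a simple Euler scheme. With $V = 0$ everywhere, the integration should reproduce the plane wave at earlier points.

```python
import numpy as np

hbar, mu, k = 1.0, 1.0, 1.5
C, x3 = 1.0, 10.0
E = hbar**2 * k**2 / (2 * mu)   # as in 3.26

def V(x):
    return 0.0                  # free case: the exact solution is the plane wave itself

psi = C * np.exp(1j * k * x3)   # psi_k(x3), from 3.28
phi = 1j * k * psi              # phi_k(x3) = dpsi_k/dx at x3, from 3.29

dx = -1e-4                      # negative step: integrate backwards
x = x3
for _ in range(100000):         # from x3 = 10 back to x = 0
    # Euler step of the coupled system: psi' = phi, phi' = (2 mu/hbar^2)(V - E) psi
    dphi = (2 * mu / hbar**2) * (V(x) - E) * psi
    psi, phi = psi + dx * phi, phi + dx * dphi
    x += dx

# for a plane wave, |psi| should stay equal to |C| all the way back
print(np.isclose(abs(psi), C, rtol=1e-2))  # True
```

For a nonzero $V(x)$ the same loop applies unchanged; only the exact solution to compare against is no longer known in closed form.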

# References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.