Peeter Joot's (OLD) Blog.

Math, physics, perl, and programming obscurity.



Archive for February, 2013

Rotation of diatomic molecules

Posted by peeterjoot on February 28, 2013

[Click here for a PDF of this post with nicer formatting (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Question: Rotation of diatomic molecules ([2] problem 3.6)

In our first look at the ideal gas we considered only the translational energy of the particles. But molecules can rotate, with kinetic energy. The rotation motion is quantized; and the energy levels of a diatomic molecule are of the form

\begin{aligned}\epsilon(j) = j(j + 1) \epsilon_0\end{aligned} \hspace{\stretch{1}}(1.0.1)

where $j$ is a non-negative integer: $j = 0, 1, 2, \cdots$. The multiplicity of each rotation level is $g(j) = 2 j + 1$.

a

Find the partition function $Z_R(\tau)$ for the rotational states of one molecule. Remember that $Z$ is a sum over all states, not over all levels — this makes a difference.

b

Evaluate $Z_R(\tau)$ approximately for $\tau \gg \epsilon_0$, by converting the sum to an integral.

c

Do the same for $\tau \ll \epsilon_0$, by truncating the sum after the second term.

d

Give expressions for the energy $U$ and the heat capacity $C$, as functions of $\tau$, in both limits. Observe that the rotational contribution to the heat capacity of a diatomic molecule approaches 1 (or, in conventional units, $k_{\mathrm{B}}$) when $\tau \gg \epsilon_0$.

e

Sketch the behavior of $U(\tau)$ and $C(\tau)$, showing the limiting behaviors for $\tau \rightarrow \infty$ and $\tau \rightarrow 0$.

a. Partition function $Z_R(\tau)$

To understand the reference to multiplicity recall (section 4.13 [1]) that the rotational Hamiltonian was of the form

\begin{aligned}H = \frac{\mathbf{L}^2}{2 M r^2},\end{aligned} \hspace{\stretch{1}}(1.0.2)

where the $\mathbf{L}^2$ eigenvectors satisfied

\begin{subequations}

\begin{aligned}\mathbf{L}^2 {\left\lvert {l m} \right\rangle} = l (l + 1) \hbar^2 {\left\lvert {l m} \right\rangle}\end{aligned} \hspace{\stretch{1}}(1.0.3a)

\begin{aligned}L_z {\left\lvert {l m} \right\rangle} = m \hbar {\left\lvert {l m} \right\rangle}\end{aligned} \hspace{\stretch{1}}(1.0.3b)

\end{subequations}

and $-l \le m \le l$, where $l$ is a non-negative integer. We see that $\epsilon_0$ has the form

\begin{aligned}\epsilon_0 = \frac{\hbar^2}{2 M r^2},\end{aligned} \hspace{\stretch{1}}(1.0.4)

and our partition function is

\begin{aligned}Z_R(\tau) = \sum_{l = 0}^\infty \sum_{m = -l}^l e^{-l (l + 1)\epsilon_0/\tau}= \sum_{l = 0}^\infty (2 l + 1) e^{-l (l + 1)\epsilon_0/\tau}.\end{aligned} \hspace{\stretch{1}}(1.0.5)

The summand, sketched in fig 1, has no dependence on $m$, so the sum over $m$ is trivial; this is where the multiplicity $2 l + 1$ comes from.

Fig 1: Summation over m

To get a feel for how many terms are significant in these sums, we refer to the plot of the summand in fig 2. The partition function itself, truncated at $l = 30$ terms, is plotted in fig 3.

Fig 2: Plotting the partition function summand

Fig 3: Z_R(tau) truncated after 30 terms in log plot
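These truncated sums are easy to evaluate numerically. Here's a minimal Python sketch (the function name `Z_R` and the choice $\epsilon_0 = 1$ are mine), which can be used to reproduce the qualitative behavior of fig 3:

```python
import numpy as np

def Z_R(tau, eps0=1.0, lmax=30):
    """Rotational partition function, truncated after l = lmax."""
    l = np.arange(0, lmax + 1)
    return np.sum((2 * l + 1) * np.exp(-l * (l + 1) * eps0 / tau))

# The summand (2l+1) exp(-l(l+1) eps0/tau) decays super-exponentially in l,
# so a 30 term truncation is accurate for tau up to roughly 30 eps0.
```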

b. Evaluate partition function for large temperatures

If $\tau \gg \epsilon_0$, so that $\epsilon_0/\tau \ll 1$, the summand varies slowly with $l$, and we can approximate the sum by an integral. Somewhat miraculously, that integral can be evaluated in closed form

\begin{aligned}Z_R(\tau) &\approx \int_0^\infty dl (2 l + 1) e^{-l(l+1)\epsilon_0/\tau} \\ &= \int_0^\infty dl \frac{d}{dl} \left( -\frac{\tau}{\epsilon_0} e^{-l(l+1)\epsilon_0/\tau} \right) \\ &= \frac{\tau}{\epsilon_0}\end{aligned} \hspace{\stretch{1}}(1.0.6)
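A quick numeric check of this integral approximation (a sketch with $\epsilon_0 = 1$ assumed; the summation cutoff is chosen so the neglected tail is negligible):

```python
import numpy as np

tau, eps0 = 100.0, 1.0
l = np.arange(0, 1000)
Z_sum = np.sum((2 * l + 1) * np.exp(-l * (l + 1) * eps0 / tau))
Z_int = tau / eps0

# The sum exceeds the integral by an O(1) correction (Euler-Maclaurin),
# so the relative error of the approximation is O(eps0/tau).
```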

c. Evaluate partition function for small temperatures

When $\tau \ll \epsilon_0$, so that $\epsilon_0/\tau \gg 1$, the exponentials decay rapidly as $l$ increases. Keeping only the first two terms ($l = 0, 1$) we have

\begin{aligned}Z_R(\tau) \approx 1 + 3 e^{-2 \epsilon_0/\tau}\end{aligned} \hspace{\stretch{1}}(1.0.7)
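Checking this truncation numerically (again a sketch with $\epsilon_0 = 1$ assumed):

```python
import numpy as np

tau, eps0 = 0.2, 1.0
l = np.arange(0, 50)
Z_sum = np.sum((2 * l + 1) * np.exp(-l * (l + 1) * eps0 / tau))
Z_trunc = 1 + 3 * np.exp(-2 * eps0 / tau)
# the first dropped term, 5 exp(-6 eps0/tau), is ~5e-13 at this temperature
```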

d. Energy and heat capacity

In the large $\epsilon_0/\tau$ domain (small temperatures) we have

\begin{aligned}U &= \tau^2 \frac{\partial {}}{\partial {\tau}} \ln Z \\ &= \tau^2 \frac{\partial {}}{\partial {\tau}} \ln \left( 1 + 3 e^{-2 \epsilon_0/\tau} \right) \\ &= \tau^2 \frac{3 e^{-2 \epsilon_0/\tau} (-2\epsilon_0)(-1/\tau^2)}{1 + 3 e^{-2 \epsilon_0/\tau}} \\ &= \frac{6 \epsilon_0 e^{-2 \epsilon_0/\tau}}{1 + 3 e^{-2 \epsilon_0/\tau}} \\ &\approx 6 \epsilon_0 e^{-2 \epsilon_0/\tau}.\end{aligned} \hspace{\stretch{1}}(1.0.8)

The specific heat in this domain is

\begin{aligned}C_{\mathrm{V}} = \frac{\partial {U}}{\partial {\tau}}\approx\frac{\partial {}}{\partial {\tau}}\left( 6 \epsilon_0 e^{-2 \epsilon_0/\tau} \right)= 12 \left( \frac{\epsilon_0}{\tau} \right)^2 e^{-2 \epsilon_0/\tau}\end{aligned} \hspace{\stretch{1}}(1.0.9)

For the small $\epsilon_0/\tau$ (large temperatures) case we have

\begin{aligned}U = \tau^2 \frac{\partial {}}{\partial {\tau}} \ln Z= \tau^2 \frac{\partial {}}{\partial {\tau}} \ln \frac{\tau}{\epsilon_0}= \tau^2 \frac{1}{{\tau}}= \tau\end{aligned} \hspace{\stretch{1}}(1.0.10)

The heat capacity in this large temperature region is

\begin{aligned}C_{\mathrm{V}} = \frac{\partial {U}}{\partial {\tau}} = 1,\end{aligned} \hspace{\stretch{1}}(1.0.11)

which is unity as described in the problem.
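Both limits can be checked by differentiating $\ln Z$ numerically. A sketch (with $\epsilon_0 = 1$ assumed and ad hoc finite difference step sizes):

```python
import numpy as np

def Z_R(tau, eps0=1.0, lmax=200):
    l = np.arange(0, lmax + 1)
    return np.sum((2 * l + 1) * np.exp(-l * (l + 1) * eps0 / tau))

def U(tau, eps0=1.0, h=1e-3):
    # U = tau^2 d(ln Z)/d(tau), central difference
    return tau**2 * (np.log(Z_R(tau + h, eps0)) - np.log(Z_R(tau - h, eps0))) / (2 * h)

def C(tau, eps0=1.0, h=0.1):
    # C = dU/dtau, central difference
    return (U(tau + h, eps0) - U(tau - h, eps0)) / (2 * h)
```

At $\tau = 100 \epsilon_0$ the heat capacity comes out close to unity, and at $\tau = 0.2 \epsilon_0$ both $U$ and $C$ are exponentially small, in agreement with the limits above.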

e. Sketch

The energy and heat capacities are roughly sketched in fig 4.

Fig 4: Energy and heat capacity

Observe that the low temperature energy vanishes as $\tau \rightarrow 0$, so there is no rotational zero point energy. This is consistent with the energy plotted (truncating the sums to 30 terms) in fig 5.

Fig 5: Exact plot of the energy for a range of temperatures (30 terms of the sums retained)

That plotted energy is as follows, computed without first dropping any terms of the partition function

\begin{aligned}U &= \tau^2 \frac{\partial}{\partial \tau} \ln\left( \sum_{l = 0}^\infty (2 l + 1) e^{-l (l + 1)\epsilon_0/\tau} \right) \\ &= \epsilon_0\frac{\left( \sum_{l = 1}^\infty l (l + 1)(2 l + 1) e^{-l (l + 1)\epsilon_0/\tau} \right)}{\left( \sum_{l = 0}^\infty (2 l + 1) e^{-l (l + 1)\epsilon_0/\tau} \right)} \\ &= \epsilon_0\frac{\left( \sum_{l = 1}^\infty l (l + 1)(2 l + 1) e^{-l (l + 1)\epsilon_0/\tau} \right)}{Z}\end{aligned} \hspace{\stretch{1}}(1.0.12)

For a better low temperature approximation, we can apply the integral approximation to this expression rather than to the truncated partition function. Doing that calculation (which isn’t as convenient, so I cheated and used Mathematica), we obtain

\begin{aligned}U \approx \epsilon_0 \frac{\int_1^\infty dl \, l (l + 1)(2 l + 1) e^{-l (l + 1)\epsilon_0/\tau}}{\int_0^\infty dl \, (2 l + 1) e^{-l (l + 1)\epsilon_0/\tau}}=\epsilon_0 e^{-2 \epsilon_0/\tau} \left( 2 + \frac{\tau}{\epsilon_0} \right).\end{aligned} \hspace{\stretch{1}}(1.0.13)

This approximation, which has taken the sums to infinity, is plotted in fig 6.

Fig 6: Low temperature approximation of the energy

From eq. 1.0.12, we can take one more derivative to calculate the exact specific heat

\begin{aligned}C_{\mathrm{V}} &= \epsilon_0\frac{\partial {}}{\partial {\tau}}\left(\frac{\left( \sum_{l = 1}^\infty l (l + 1)(2 l + 1) e^{-l (l + 1)\epsilon_0/\tau} \right)}{\left( \sum_{l = 0}^\infty (2 l + 1) e^{-l (l + 1)\epsilon_0/\tau} \right)}\right) \\ &= \left( \frac{\epsilon_0}{\tau} \right)^2\left(\frac{\left( \sum_{l = 1}^\infty l^2 (l + 1)^2 (2 l + 1) e^{-l (l + 1)\epsilon_0/\tau} \right)}{\left( \sum_{l = 0}^\infty (2 l + 1) e^{-l (l + 1)\epsilon_0/\tau} \right)}-\frac{\left( \sum_{l = 1}^\infty l (l + 1)(2 l + 1) e^{-l (l + 1)\epsilon_0/\tau} \right)^2}{\left( \sum_{l = 0}^\infty (2 l + 1) e^{-l (l + 1)\epsilon_0/\tau} \right)^2}\right) \\ &= \left( \frac{\epsilon_0}{\tau} \right)^2\frac{\left( \sum_{l = 1}^\infty l^2 (l + 1)^2 (2 l + 1) e^{-l (l + 1)\epsilon_0/\tau} \right)}{Z}- \frac{U^2}{\tau^2}.\end{aligned} \hspace{\stretch{1}}(1.0.14)

This is plotted to 30 terms in fig 7.

Fig 7: Specific heat to 30 terms
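The sum expressions for $U$ and $C_{\mathrm{V}}$ can be sanity checked against each other, since $C_{\mathrm{V}}$ has the variance form $(\epsilon_0/\tau)^2 \left( \left\langle x^2 \right\rangle - \left\langle x \right\rangle^2 \right)$ with $x = l(l+1)$, and must equal a direct numerical derivative of $U$. A sketch with $\epsilon_0 = 1$ assumed:

```python
import numpy as np

def moments(tau, eps0=1.0, lmax=100):
    """Thermal averages of l(l+1) and (l(l+1))^2 over the rotational states."""
    l = np.arange(0, lmax + 1)
    w = (2 * l + 1) * np.exp(-l * (l + 1) * eps0 / tau)
    x = l * (l + 1)
    return np.sum(x * w) / np.sum(w), np.sum(x**2 * w) / np.sum(w)

def U(tau, eps0=1.0):
    return eps0 * moments(tau, eps0)[0]

def C_V(tau, eps0=1.0):
    m1, m2 = moments(tau, eps0)
    # variance form of the heat capacity
    return (eps0 / tau)**2 * (m2 - m1**2)

h = 1e-4
dU = (U(1.0 + h) - U(1.0 - h)) / (2 * h)   # should match C_V(1.0)
```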

References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

[2] C. Kittel and H. Kroemer. Thermal physics. WH Freeman, 1980.

PHY452H1S Basic Statistical Mechanics. Lecture 12: Helmholtz free energy. Taught by Prof. Arun Paramekanti

Posted by peeterjoot on February 28, 2013


Disclaimer

Peeter’s lecture notes from class. May not be entirely coherent.

Canonical partition

We found

\begin{subequations}

\begin{aligned}\frac{\sigma_{\mathrm{E}}}{E} \propto \frac{T \sqrt{k_{\mathrm{B}} C_V}}{E}\end{aligned} \hspace{\stretch{1}}(1.0.1a)

\begin{aligned}Z = \sum_{\{c\}} e^{-\beta E(c)}\end{aligned} \hspace{\stretch{1}}(1.0.1b)

\begin{aligned}C_V \sim N\end{aligned} \hspace{\stretch{1}}(1.0.1c)

\begin{aligned}E \sim N\end{aligned} \hspace{\stretch{1}}(1.0.1d)

\end{subequations}

where the partition function \index{partition function} acts as a probability distribution so that we can define an average as

\begin{aligned}\left\langle{{A}}\right\rangle = \frac{\sum_{\{c\}} A(c) e^{-\beta E(c)}}{Z}\end{aligned} \hspace{\stretch{1}}(1.0.2)

If we suppose that the energy is typically close to the average energy, as sketched in fig. 1.1,

Fig 1.1: Peaked energy distribution

then we can approximate the partition function as

\begin{aligned}Z \approx e^{-\beta \left\langle{{E}}\right\rangle} \sum_{\{c\}} \delta_{E, \bar{E}}= e^{-\beta \left\langle{{E}}\right\rangle} e^{S/k_{\mathrm{B}}},\end{aligned} \hspace{\stretch{1}}(1.0.3)

where we’ve used $S = k_{\mathrm{B}} \ln \Omega$ to express the number of states where the energy matches the average energy $\Omega = \sum \delta_{E, \bar{E}}$.

This gives us

\begin{aligned}Z = e^{-\beta (\left\langle{{E}}\right\rangle - k_{\mathrm{B}} T S/k_{\mathrm{B}}) } = e^{-\beta (\left\langle{{E}}\right\rangle - T S) } \end{aligned} \hspace{\stretch{1}}(1.0.4)

or

\begin{aligned}\boxed{Z = e^{-\beta F},}\end{aligned} \hspace{\stretch{1}}(1.0.5)

where we define the Helmholtz free energy $F$ as

\begin{aligned}\boxed{F = \left\langle{{E}}\right\rangle - T S.}\end{aligned} \hspace{\stretch{1}}(1.0.6)

Equivalently, the log of the partition function provides us with the free energy

\begin{aligned}F = - k_{\mathrm{B}} T \ln Z.\end{aligned} \hspace{\stretch{1}}(1.0.7)

Recalling our expression for the average energy, we can now write that in terms of the free energy

\begin{aligned}\left\langle{{E}}\right\rangle = \frac{\sum_{\{c\}} E(c) e^{-\beta E(c)}}{\sum_{\{c\}} e^{-\beta E(c)}}= -\frac{\partial {}}{\partial {\beta}}\ln Z=\frac{\partial {(\beta F)}}{\partial {\beta}}\end{aligned} \hspace{\stretch{1}}(1.0.8)

Quantum mechanical picture

Consider a subsystem as in fig. 1.2 where we have states of the form

Fig 1.2: subsystem in heat bath

\begin{aligned}{\left\lvert {\Psi_{\text{full}}} \right\rangle} = {\left\lvert {\chi_{\text{subsystem}}} \right\rangle} {\left\lvert {\phi_{\text{bath}}} \right\rangle}\end{aligned} \hspace{\stretch{1}}(1.0.9)

and a total Hamiltonian operator of the form

\begin{aligned}H_{\text{full}} = H_{\text{subsystem}} + H_{\text{bath}} (+ H_{\text{coupling}})\end{aligned} \hspace{\stretch{1}}(1.0.10)

where the total energy of the state, given energy eigenvalues $\mathcal{E}_m$ and $\lambda_n$ for the states ${\left\lvert {\chi_{\text{subsystem}}} \right\rangle}$ and ${\left\lvert {\phi_{\text{bath}}} \right\rangle}$ respectively, is given by the sum

\begin{aligned}E = \mathcal{E}_m + \lambda_n.\end{aligned} \hspace{\stretch{1}}(1.0.11)

Here $\mathcal{E}_m, \lambda_n$ are many body energies, so that $\delta E \sim \#e^{-\#N}$.

We can now write the total number of states as

\begin{aligned}\Omega(E) &= \underbrace{\sum_m}_{\text{subsystem}}\underbrace{\sum_n}_{\text{bath}}\delta(E - \mathcal{E}_m -\lambda_n)\\ &= \sum_m e^{\frac{1}{{k_{\mathrm{B}}}} S(E - \mathcal{E}_m)} \\ &\approx \sum_m e^{\frac{1}{{k_{\mathrm{B}}}} S(E)}e^{-\beta \mathcal{E}_m}\end{aligned} \hspace{\stretch{1}}(1.0.12)

from which we identify the subsystem partition function

\begin{aligned}Z = \sum_m e^{-\beta \mathcal{E}_m} = \text{Tr} \left( e^{-\beta \hat{H}_{\text{subsystem}}} \right)\end{aligned} \hspace{\stretch{1}}(1.0.13)

We’ve ignored the coupling term in eq. 1.0.10. This is actually a problem in quantum mechanics since we require this coupling to introduce state changes.

Example: Spins

Given $N$ spin $1/2$ objects $\uparrow$, $\downarrow$, satisfying

\begin{aligned}S_z = \pm \frac{1}{{2}} \hbar\end{aligned} \hspace{\stretch{1}}(1.0.14)

Dropping $\hbar$ we have

\begin{aligned}S_z \rightarrow \pm \frac{1}{{2}} \sigma\end{aligned} \hspace{\stretch{1}}(1.0.15)

Our system has a state ${\left\lvert {\sigma_1, \sigma_2, \cdots \sigma_N} \right\rangle}$ where $\sigma_i = \pm 1$. The total number of states is $2^N$.

Our Hamiltonian is

\begin{aligned}\hat{H} = - B \sum_i \hat{S}_{z_i}.\end{aligned} \hspace{\stretch{1}}(1.0.16)

This is associated with the Zeeman effect, where states can be split by a magnetic field, as in fig. 1.3.

Fig 1.3: Zeeman splitting

Our minimum and maximum energies are

\begin{subequations}

\begin{aligned}E_{\mathrm{min}} = -\frac{B}{2} N\end{aligned} \hspace{\stretch{1}}(1.0.17a)

\begin{aligned}E_{\mathrm{max}} = +\frac{B}{2} N\end{aligned} \hspace{\stretch{1}}(1.0.17b)

\end{subequations}

The total energy difference is

\begin{aligned}\Delta E = B N,\end{aligned} \hspace{\stretch{1}}(1.0.18)

and the energy differences are

\begin{aligned}\delta E \sim \frac{B N}{2^N} \sim \# e^{-\# N}.\end{aligned} \hspace{\stretch{1}}(1.0.19)

This is a measure of the average energy difference between two adjacent energy levels. In a real system we cannot assume that we have non-interacting spins. Any weak interaction will split our degenerate energy levels as in fig. 1.4.

Fig 1.4: Interaction splitting

We can now express the partition function

\begin{aligned}Z &= \sum_{\{\sigma\}} e^{-\beta \left( -\frac{B}{2} \sum_i \sigma_i \right)} \\ &= \left( \sum_{\sigma_1 = \pm 1} \exp \left( \frac{\beta B}{2} \sigma_1 \right) \right)\left( \sum_{\sigma_2 = \pm 1} \exp \left( \frac{\beta B}{2} \sigma_2 \right) \right)\cdots \\ &= \left( \exp \left( \frac{\beta B}{2} \right) + \exp \left( -\frac{\beta B}{2} \right) \right)^N \\ &= \left( 2 \cosh\left( \frac{B}{2 k_B T} \right) \right)^N\end{aligned} \hspace{\stretch{1}}(1.0.20)

Our free energy is

\begin{aligned}F = - k_B T N \ln \left( 2 \cosh \left( \frac{B}{2 k_B T} \right) \right)\end{aligned} \hspace{\stretch{1}}(1.0.21)

For the expected value of the spin we find

\begin{aligned}\left\langle{{S_z}}\right\rangle = \sum_i \left\langle{{S_{z_i}}}\right\rangle\end{aligned} \hspace{\stretch{1}}(1.0.22)

\begin{aligned}\left\langle{{S_{z_i}}}\right\rangle=\frac{1}{{2}} \frac{\sum_\sigma \sigma e^{\beta B \sigma/2}}{\sum_\sigma e^{\beta B \sigma/2}}= \frac{1}{{2}} \tanh \left( \frac{B}{2 k_B T} \right)\end{aligned} \hspace{\stretch{1}}(1.0.23)
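These closed forms are easy to verify against a brute force enumeration of all $2^N$ spin configurations for small $N$. A Python sketch (with $k_{\mathrm{B}} T = 1$ and arbitrary illustrative values of $N$ and $B$ assumed):

```python
import numpy as np
from itertools import product

def brute_force(N, B, kT=1.0):
    """Enumerate all 2^N configurations sigma_i = ±1 with H = -(B/2) sum(sigma_i)."""
    beta = 1.0 / kT
    Z = 0.0
    Sz_weighted = 0.0
    for sigmas in product((+1, -1), repeat=N):
        E = -(B / 2.0) * sum(sigmas)
        w = np.exp(-beta * E)
        Z += w
        Sz_weighted += 0.5 * sum(sigmas) * w
    return Z, Sz_weighted / Z

N, B = 6, 0.7
Z, Sz = brute_force(N, B)
Z_closed = (2.0 * np.cosh(B / 2.0))**N       # partition function
Sz_closed = (N / 2.0) * np.tanh(B / 2.0)     # expected total spin
```

The free energy $F = -k_B T \ln Z$ then reproduces the logarithmic expression above.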

PHY452H1S Basic Statistical Mechanics. Lecture 11: Statistical and thermodynamic connection. Taught by Prof. Arun Paramekanti

Posted by peeterjoot on February 27, 2013

Disclaimer

Peeter’s lecture notes from class. May not be entirely coherent.

Connections between statistical and thermodynamic views

• “Heat”. Disorganized energy.
• $S_{\text{Statistical entropy}}$. This is the statistical (microscopic) entropy introduced by Boltzmann.

Ideal gas

\begin{aligned}H = \sum_{i = 1}^N \frac{\mathbf{p}_i^2}{2m}\end{aligned} \hspace{\stretch{1}}(1.3.1)

\begin{aligned}\Omega(E) = \frac{1}{{h^{3N} N!}}\int d\mathbf{x}_1 d\mathbf{x}_2 \cdots d\mathbf{x}_Nd\mathbf{p}_1 d\mathbf{p}_2 \cdots d\mathbf{p}_N\delta( E - H )\end{aligned} \hspace{\stretch{1}}(1.3.2)

Let’s isolate the contribution of the Hamiltonian from a single particle and all the rest

\begin{aligned}H = \frac{\mathbf{p}_1^2}{2m}+\sum_{i \ne 1}^N \frac{\mathbf{p}_i^2}{2m}=\frac{\mathbf{p}_1^2}{2m}+H'\end{aligned} \hspace{\stretch{1}}(1.3.3)

so that the number of states in the phase space region associated with this energy is

\begin{aligned}\Omega(N, E) &= \frac{V^N}{h^{3N} N!}\int d\mathbf{p}_1\int d\mathbf{p}_2 d\mathbf{p}_3 \cdots d\mathbf{p}_N\delta( E - H' - H_1) \\ &= \frac{V^{N-1}}{h^{3(N-1)} (N-1)!} \frac{V}{h^3 N}\int d\mathbf{p}_1\int d\mathbf{p}_2 d\mathbf{p}_3 \cdots d\mathbf{p}_N\delta( E - H' - H_1) \\ &= \frac{ V }{ h^3 N} \int d\mathbf{p}_1 \Omega( N-1, E - H_1 )\end{aligned} \hspace{\stretch{1}}(1.3.4)

With entropy defined by

\begin{aligned}S = k_{\mathrm{B}} \ln \Omega,\end{aligned} \hspace{\stretch{1}}(1.3.5)

we have

\begin{aligned}\Omega( N-1, E - H_1 ) = \exp\left( \frac{1}{k_{\mathrm{B}}} S \left( N-1, E - \frac{\mathbf{p}_1^2}{2m} \right) \right)\end{aligned} \hspace{\stretch{1}}(1.3.6)

so that

\begin{aligned}\Omega(N, E) =\frac{ V }{ h^3 N} \int d\mathbf{p}_1 \exp\left( \frac{1}{k_{\mathrm{B}}} S \left( N-1, E - \frac{\mathbf{p}_1^2}{2m} \right) \right)\end{aligned} \hspace{\stretch{1}}(1.3.7)

For $N \gg 1$ and $E \gg \mathbf{p}_1^2/2m$, the exponential can be approximated by

\begin{aligned}\exp\left( \frac{1}{k_{\mathrm{B}}} S \left( N-1, E - \frac{\mathbf{p}_1^2}{2m} \right) \right)= \exp\left( \frac{1}{k_{\mathrm{B}}} \left( S(N, E) - \left( \frac{\partial {S}}{\partial {N}} \right)_{E, V} - \frac{\mathbf{p}_1^2}{2m} \left( \frac{\partial {S}}{\partial {E}} \right)_{N, V} \right) \right),\end{aligned} \hspace{\stretch{1}}(1.3.8)

so that

\begin{aligned}\Omega(N, E) = \underbrace{\frac{ V }{ h^3 N} e^{S(N, E)/k_{\mathrm{B}}}e^{-\frac{1}{{k_{\mathrm{B}}}}\left( \frac{\partial {S}}{\partial {N}} \right)_{E, V}}}_{B}\int d\mathbf{p}_1 e^{-\frac{\mathbf{p}_1^2}{2m k_{\mathrm{B}}}\left( \frac{\partial {S}}{\partial {E}} \right)_{N, V}}.\end{aligned} \hspace{\stretch{1}}(1.3.9)

or

\begin{aligned}\Omega(N, E) = B\int d\mathbf{p}_1 e^{-\frac{\mathbf{p}_1^2}{2m k_{\mathrm{B}}}\left( \frac{\partial {S}}{\partial {E}} \right)_{N, V}}.\end{aligned} \hspace{\stretch{1}}(1.3.10)

Since $\left( {\partial {S}}/{\partial {E}} \right)_{N, V} = 1/T$, the single particle momentum distribution is

\begin{aligned}\mathcal{P}(\mathbf{p}_1) \propto e^{-\frac{\mathbf{p}_1^2}{2m k_{\mathrm{B}} T}}.\end{aligned} \hspace{\stretch{1}}(1.3.11)

This is the Maxwell distribution.

Non-ideal gas. General classical system

Fig 1: Partitioning out a subset of a larger system

Breaking the system into a subsystem $1$ and the reservoir $2$ so that with

\begin{aligned}H = H_1 + H_2\end{aligned} \hspace{\stretch{1}}(1.4.12)

we have

\begin{aligned}\Omega(N, V, E) &= \int d\{x_1\}d\{p_1\}d\{x_2\}d\{p_2\}\delta( E - H_1 - H_2 ) \frac{1}{{ h^{3N_1} N_1! h^{3 N_2} N_2!}} \\ &\propto \int d\{x_1\}d\{p_1\}e^{\frac{1}{{k_{\mathrm{B}}}} S(E - H_1, N - N_1)}\end{aligned} \hspace{\stretch{1}}(1.4.13)

\begin{aligned}\Omega(N, V, E) \sim \int d\{x_1\}d\{p_1\}\underbrace{e^{\frac{1}{{k_{\mathrm{B}}}}S(E, N)}e^{-\frac{N_1 }{k_{\mathrm{B}}}\left( \frac{\partial {S}}{\partial {N}} \right)_{E, V}}}_{\text{``environment'', or ``heat bath''}}e^{-\frac{H_1 }{k_{\mathrm{B}}}\left( \frac{\partial {S}}{\partial {E}} \right)_{N, V}}\end{aligned} \hspace{\stretch{1}}(1.4.14)

\begin{aligned}H_1 = \sum_{i \in 1} \frac{\mathbf{p}_i^2}{2m}+\sum_{i, j \in 1} V(\mathbf{x}_i - \mathbf{x}_j)+ \sum_{i \in 1} \Phi(\mathbf{x}_i)\end{aligned} \hspace{\stretch{1}}(1.4.15)

\begin{aligned}\mathcal{P} \propto e^{-\frac{H( \{x_1\} \{p_1\} ) }{k_{\mathrm{B}} T} }\end{aligned} \hspace{\stretch{1}}(1.4.16)

and for the subsystem

\begin{aligned}\mathcal{P}_1 =\frac{e^{-\frac{H_1}{k_{\mathrm{B}} T} }}{\int d\{x_1\}d\{p_1\}e^{-\frac{H_1}{k_{\mathrm{B}} T} }}\end{aligned} \hspace{\stretch{1}}(1.4.17)

Canonical ensemble

Can we use results for this subvolume to infer results for the entire system? Suppose we break the system into a number of smaller subsystems as in fig. 2.

Fig 2: Larger system partitioned into many small subsystems

\begin{aligned}\underbrace{(N, V, E)}_{\text{microcanonical}}\rightarrow \underbrace{(N, V, T)}_{\text{canonical}}\end{aligned} \hspace{\stretch{1}}(1.5.18)

We’d have to understand how large the differences between the energy fluctuations of the different subsystems are. We’ve already assumed that we have minimal long range interactions since we’ve treated the subsystem $1$ above in isolation. With $\beta = 1/(k_{\mathrm{B}} T)$ the average energy is

\begin{aligned}\left\langle{{E}}\right\rangle = \frac{\int d\{x_1\}d\{p_1\}He^{- \beta H }}{\int d\{x_1\}d\{p_1\}e^{- \beta H }}\end{aligned} \hspace{\stretch{1}}(1.5.19)

\begin{aligned}\left\langle{{E^2}}\right\rangle = \frac{\int d\{x_1\}d\{p_1\}H^2e^{- \beta H }}{\int d\{x_1\}d\{p_1\}e^{- \beta H }}\end{aligned} \hspace{\stretch{1}}(1.5.20)

We define the partition function

\begin{aligned}Z \equiv \frac{1}{{h^{3N} N!}}\int d\{x_1\}d\{p_1\}e^{- \beta H }.\end{aligned} \hspace{\stretch{1}}(1.5.21)

Observe that the derivative of $Z$ is

\begin{aligned}\frac{\partial {Z}}{\partial {\beta}} = -\frac{1}{{h^{3N} N!}}\int d\{x_1\}d\{p_1\}He^{- \beta H },\end{aligned} \hspace{\stretch{1}}(1.5.22)

allowing us to express the average energy compactly in terms of the partition function

\begin{aligned}\left\langle{{E}}\right\rangle = -\frac{1}{{Z}} \frac{\partial {Z}}{\partial {\beta}} = - \frac{\partial {\ln Z}}{\partial {\beta}}.\end{aligned} \hspace{\stretch{1}}(1.5.23)

Taking second derivatives we find the variance of the energy

\begin{aligned}\frac{\partial^2 {{\ln Z}}}{\partial {{\beta}}^2} &=\frac{\partial {}}{\partial {\beta}}\frac{\int d\{x_1\}d\{p_1\}(-H)e^{- \beta H }}{\int d\{x_1\}d\{p_1\}e^{- \beta H }} \\ &= \frac{\int d\{x_1\}d\{p_1\}(-H)^2e^{- \beta H }}{\int d\{x_1\}d\{p_1\}e^{- \beta H }}-\frac{\left( \int d\{x_1\} d\{p_1\} (-H) e^{- \beta H } \right)^2}{\left( \int d\{x_1\} d\{p_1\} e^{- \beta H } \right)^2} \\ &= \left\langle{{E^2}}\right\rangle - \left\langle{{E}}\right\rangle^2 \\ &= \sigma_{\mathrm{E}}^2\end{aligned} \hspace{\stretch{1}}(1.5.24)

We also have

\begin{aligned}\sigma_{\mathrm{E}}^2 &= -\frac{\partial {\left\langle{{E}}\right\rangle}}{\partial {\beta}} \\ &= \frac{\partial {\left\langle{{E}}\right\rangle}}{\partial {T}} \frac{\partial {T}}{\partial {\beta}} \\ &= -\frac{\partial {\left\langle{{E}}\right\rangle}}{\partial {T}} \frac{\partial {}}{\partial {\beta}} \frac{1}{{k_{\mathrm{B}} \beta}} \\ &= \frac{\partial {\left\langle{{E}}\right\rangle}}{\partial {T}} \frac{1}{{k_{\mathrm{B}} \beta^2}} \\ &= k_{\mathrm{B}} T^2 \frac{\partial {\left\langle{{E}}\right\rangle}}{\partial {T}}\end{aligned} \hspace{\stretch{1}}(1.5.25)

Recalling that the heat capacity was defined by

\begin{aligned}C_V = \frac{\partial {\left\langle{{E}}\right\rangle}}{\partial {T}},\end{aligned} \hspace{\stretch{1}}(1.5.26)

we have

\begin{aligned}\sigma_{\mathrm{E}}^2 = k_{\mathrm{B}} T^2 C_V \propto N\end{aligned} \hspace{\stretch{1}}(1.5.27)

\begin{aligned}\frac{\sigma_{\mathrm{E}}}{\left\langle{{E}}\right\rangle} \propto \frac{1}{{\sqrt{N}}}\end{aligned} \hspace{\stretch{1}}(1.5.28)
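The fluctuation relation $\sigma_{\mathrm{E}}^2 = k_{\mathrm{B}} T^2 C_V$ holds for any discrete spectrum, which makes it easy to check numerically. A sketch (the random energies are purely illustrative, and $k_{\mathrm{B}} = 1$ is assumed):

```python
import numpy as np

kB = 1.0
rng = np.random.default_rng(0)
energies = rng.uniform(0.0, 5.0, size=100)   # arbitrary spectrum, for illustration

def avg_E(T):
    w = np.exp(-energies / (kB * T))
    return np.sum(energies * w) / np.sum(w)

def var_E(T):
    w = np.exp(-energies / (kB * T))
    m1 = np.sum(energies * w) / np.sum(w)
    return np.sum(energies**2 * w) / np.sum(w) - m1**2

T, h = 1.5, 1e-4
C_V = (avg_E(T + h) - avg_E(T - h)) / (2 * h)  # numerical heat capacity
# var_E(T) should match kB * T^2 * C_V
```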

One dimensional well problem from Pathria chapter II

Posted by peeterjoot on February 16, 2013

Problem 2.5 of [2] asks us to show that

\begin{aligned}\oint p dq = \left( { n + \frac{1}{{2}} } \right) h,\end{aligned} \hspace{\stretch{1}}(1.0.1)

provided the particle’s potential is such that

\begin{aligned}m \hbar \left\lvert { \frac{dV}{dq} } \right\rvert \ll \left( { m ( E - V ) } \right)^{3/2}.\end{aligned} \hspace{\stretch{1}}(1.0.2)

I took a guess that this was actually the WKB condition

\begin{aligned}\frac{k'}{k^2} \ll 1,\end{aligned} \hspace{\stretch{1}}(1.0.3)

where the WKB solution was of the form

\begin{aligned}k^2(q) = 2 m (E - V(q))/\hbar^2\end{aligned} \hspace{\stretch{1}}(1.0.4a)

\begin{aligned}\psi(q) = \frac{1}{{\sqrt{k}}} e^{\pm i \int k(q) dq}.\end{aligned} \hspace{\stretch{1}}(1.0.4b)

The WKB validity condition is

\begin{aligned}1 \gg \frac{-2 m V'}{\hbar} \frac{1}{{2}} \frac{1}{{\sqrt{2 m (E - V)}}} \frac{\hbar^2}{2 m(E - V)}\end{aligned} \hspace{\stretch{1}}(1.0.5)

or

\begin{aligned}m \hbar \left\lvert {V'} \right\rvert \ll \left( {2 m (E - V)} \right)^{3/2}.\end{aligned} \hspace{\stretch{1}}(1.0.6)

This differs by a factor of $2 \sqrt{2}$ from the constraint specified in the problem, but I’m guessing that constant factors of that sort have just been dropped.

Even after figuring out that this question was referring to WKB, I didn’t know what to make of the oriented integral $\oint p dq$. With $p$ being an operator in the QM context, what did this even mean? I found the answer in [1] section 12.12. Here $p$ just means

\begin{aligned}p(q) = \hbar k(q),\end{aligned} \hspace{\stretch{1}}(1.0.7)

where $k(q)$ is given by eq. 1.0.4a. The rest of the problem can also be found there, and relies on the WKB connection formulas, which aren’t derived in any text that I own. Quoting results based on other results whose origin I don’t know doesn’t seem worthwhile, so that’s as far as I’ll attempt this question (but I do plan to eventually look up and understand those WKB connection formulas, and then see how they can be applied in a problem like this).
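Although the connection formula derivation is out of scope, the quantization condition itself can be spot checked for the harmonic oscillator, where $E_n = (n + 1/2)\hbar\omega$, $V = m \omega^2 q^2/2$, and $p(q) = \sqrt{2 m (E - V)}$; the closed integral should give exactly $(n + 1/2) h$. A numeric sketch (all physical constants set to 1; the substitution $q = a \sin\theta$ tames the square root at the turning points):

```python
import numpy as np

hbar, m, omega = 1.0, 1.0, 1.0
h_planck = 2 * np.pi * hbar

def action_integral(n, npts=20000):
    """Evaluate the closed integral of p dq over one oscillation at E_n."""
    E = (n + 0.5) * hbar * omega
    a = np.sqrt(2 * E / (m * omega**2))          # classical turning point
    # substitute q = a sin(theta); p dq = sqrt(2mE) cos(theta) * a cos(theta) dtheta
    dtheta = np.pi / npts
    theta = -np.pi / 2 + (np.arange(npts) + 0.5) * dtheta   # midpoint rule
    integrand = np.sqrt(2 * m * E) * a * np.cos(theta)**2
    return 2 * np.sum(integrand) * dtheta        # factor of 2: out and back
```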

References

[1] D. Bohm. Quantum Theory. Courier Dover Publications, 1989.

[2] RK Pathria. Statistical mechanics. Butterworth Heinemann, Oxford, UK, 1996.

1D pendulum problem in phase space

Posted by peeterjoot on February 15, 2013

Problem 2.6 in [1] asks for some analysis of the (presumably small angle) pendulum problem in phase space, including a relation of the energy and period of the system to the area $A$ enclosed by a phase space trajectory. With coordinates as in fig. 1.1, our Lagrangian is

Fig 1.1: 1d pendulum

\begin{aligned}\mathcal{L} = \frac{1}{{2}} m l^2 \dot{\theta}^2 - g m l ( 1 - \cos\theta ).\end{aligned} \hspace{\stretch{1}}(1.0.1)

As a sign check we find for small $\theta$ from the Euler-Lagrange equations $\ddot{\theta} = -(g/l) \theta$, as expected. For the Hamiltonian, we need the canonical momentum

\begin{aligned}p_\theta = \frac{\partial {\mathcal{L}}}{\partial {\dot{\theta}}} = m l^2 \dot{\theta}.\end{aligned} \hspace{\stretch{1}}(1.0.2)

Observe that this canonical momentum does not have dimensions of momentum, but that of angular momentum ($m l \dot{\theta} \times l$).

Our Hamiltonian is

\begin{aligned}H = \frac{1}{{2 m l^2}} p_\theta^2 + g m l ( 1 - \cos\theta ).\end{aligned} \hspace{\stretch{1}}(1.0.3)

Hamilton’s equations for this system, in matrix form are

\begin{aligned}\frac{d{{}}}{dt}\begin{bmatrix}\theta \\ p_\theta\end{bmatrix}=\begin{bmatrix}\frac{\partial {H}}{\partial {p_\theta}} \\ -\frac{\partial {H}}{\partial {\theta}} \end{bmatrix}=\begin{bmatrix}p_\theta/m l^2 \\ - g m l \sin\theta\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(1.0.4)

With $\omega^2 = g/l$, it is convenient to non-dimensionalize this

\begin{aligned}\frac{d{{}}}{dt}\begin{bmatrix}\theta \\ p_\theta/ \omega m l^2\end{bmatrix}=\omega\begin{bmatrix}p_\theta/\omega m l^2 \\ - \sin\theta\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(1.0.5)

Now we can make the small angle approximation. Writing

\begin{aligned}\mathbf{u} = \begin{bmatrix}\theta \\ p_\theta/ \omega m l^2\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(1.0.6a)

\begin{aligned}i = \begin{bmatrix}0 & 1 \\ -1 & 0\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(1.0.6b)

Our pendulum equation is reduced to

\begin{aligned}\mathbf{u}' = i \omega \mathbf{u},\end{aligned} \hspace{\stretch{1}}(1.0.7)

with a solution that we can read off by inspection

\begin{aligned}\mathbf{u} = e^{i \omega t} \mathbf{u}_0=\begin{bmatrix}\cos\omega t & \sin\omega t \\ -\sin\omega t & \cos \omega t\end{bmatrix}\mathbf{u}_0\end{aligned} \hspace{\stretch{1}}(1.0.8)
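As a sanity check, this rotation matrix solution can be compared against a direct numeric integration of eq. 1.0.7. A sketch (forward Euler with a small step; the parameter values are arbitrary):

```python
import numpy as np

omega = 2.0
i_mat = np.array([[0.0, 1.0],
                  [-1.0, 0.0]])
u0 = np.array([0.1, 0.0])

def u_exact(t):
    """Rotation matrix solution u(t) = exp(i omega t) u0."""
    c, s = np.cos(omega * t), np.sin(omega * t)
    return np.array([[c, s], [-s, c]]) @ u0

# crude forward Euler integration of u' = i omega u
dt, nsteps = 1e-4, 10000       # integrate out to t = 1
u = u0.copy()
for _ in range(nsteps):
    u = u + dt * omega * (i_mat @ u)
```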

Let’s put the initial phase space point into polar form

\begin{aligned}\mathbf{u}_0^2= \theta_0^2 + \frac{p_0^2}{\omega^2 m^2 l^4}= \frac{2}{\omega^2 m l^2}\left( { \frac{p_0^2}{2 m l^2} + \frac{1}{{2}} \omega^2 m l^2 \theta_0^2 } \right)=\frac{2}{g m l}\left( { \frac{p_0^2}{2 m l^2} + \frac{1}{{2}} g m l \theta_0^2 } \right)\end{aligned} \hspace{\stretch{1}}(1.0.9)

This doesn’t appear to be an exact match for eq. 1.0.3, but we can write for small $\theta_0$

\begin{aligned}1 - \cos\theta_0=2 \sin^2 \left( { \frac{\theta_0}{2} } \right)\approx2 \left( { \frac{\theta_0}{2} } \right)^2=\frac{\theta_0^2}{2}.\end{aligned} \hspace{\stretch{1}}(1.0.10)

This shows that we can rewrite our initial conditions as

\begin{aligned}\mathbf{u}_0 = \sqrt{ \frac{2 E}{g m l} }e^{i \phi }\begin{bmatrix}1 \\ 0\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(1.0.11)

where

\begin{aligned}\tan \phi = -\frac{ p_0 }{ \omega m l^2 \theta_0 }.\end{aligned} \hspace{\stretch{1}}(1.0.12)

Our time evolution in phase space is given by

\begin{aligned}\begin{bmatrix}\theta(t) \\ p_\theta(t)\end{bmatrix}=\sqrt{ \frac{2 E}{g m l} }\begin{bmatrix}\cos(\omega t + \phi) \\ - \omega m l^2\sin(\omega t + \phi)\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(1.0.13)

or

\begin{aligned}\boxed{\begin{bmatrix}\theta(t) \\ p_\theta(t)\end{bmatrix}=\frac{1}{{\omega l}}\sqrt{ \frac{2 E}{m} }\begin{bmatrix}\cos(\omega t + \phi) \\ - \omega m l^2\sin(\omega t + \phi)\end{bmatrix}.}\end{aligned} \hspace{\stretch{1}}(1.0.14)

This is plotted in fig. 1.2.

Fig 1.2: Phase space trajectory for small angle pendulum

The area of this ellipse is

\begin{aligned}A = \pi \frac{1}{{\omega^2 l^2}} \frac{2 E}{m} \omega m l^2 = \frac{2 \pi}{\omega} E.\end{aligned} \hspace{\stretch{1}}(1.0.15)

With $\tau$ for the period of the trajectory, this is

\begin{aligned}A = \tau E.\end{aligned} \hspace{\stretch{1}}(1.0.16)

As a final note, observe that the oriented integral from problem 2.5 of the text, $\oint p_\theta d\theta$, is also this area. This is a general property, which can be seen geometrically in fig. 1.3, where a counterclockwise oriented integral $\oint p dq$ would give the negative area. The integrals along the $c_4, c_1$ paths give the area under the blob, whereas the integrals along the remaining paths, traversed with the opposite sense, give the complete area under the top boundary. Since they are oppositely sensed, adding them leaves just the area of the blob.

Fig 1.3: Area from oriented integral along path

Let’s do this $\oint p_\theta d\theta$ integral for the pendulum phase trajectories. With

\begin{aligned}\theta = \frac{1}{{\omega l}} \sqrt{\frac{2 E}{m}} \cos(\omega t + \phi)\end{aligned} \hspace{\stretch{1}}(1.0.17a)

\begin{aligned}p_\theta = -m l \sqrt{\frac{2 E}{m}} \sin(\omega t + \phi)\end{aligned} \hspace{\stretch{1}}(1.0.17b)

We have

\begin{aligned}\oint p_\theta d\theta = \frac{m l}{\omega l} \frac{2 E}{m} \int_0^{2\pi/\omega} \sin^2( \omega t + \phi) \omega dt= 2 E \int_0^{2\pi/\omega} \frac{ 1 - \cos\left( { 2(\omega t + \phi) } \right) }{2} dt= E \frac{2 \pi}{\omega} = E \tau.\end{aligned} \hspace{\stretch{1}}(1.0.18)
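The same loop integral can be verified by discretizing the trajectory over one period. A sketch (trapezoid rule; the parameter values are arbitrary, with the energy chosen small enough for the small angle regime):

```python
import numpy as np

m, l, g = 1.0, 1.0, 9.8
omega = np.sqrt(g / l)
E, phi = 0.01, 0.3
period = 2 * np.pi / omega

t = np.linspace(0.0, period, 200001)
theta = (1.0 / (omega * l)) * np.sqrt(2 * E / m) * np.cos(omega * t + phi)
p_theta = -m * l * np.sqrt(2 * E / m) * np.sin(omega * t + phi)

# oriented loop integral of p dtheta along the closed curve
loop = np.sum(0.5 * (p_theta[1:] + p_theta[:-1]) * np.diff(theta))
# expected: E * period, the enclosed phase space area
```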

References

[1] RK Pathria. Statistical mechanics. Butterworth Heinemann, Oxford, UK, 1996.

PHY452H1S Basic Statistical Mechanics. Lecture 10: Continuing review of thermodynamics. Taught by Prof. Arun Paramekanti

Posted by peeterjoot on February 14, 2013


Disclaimer

Peeter’s lecture notes from class. May not be entirely coherent.

Continuing review of thermodynamics

We have energy conservation split into two types of energy

\begin{aligned}dE = \underbrace{d W }_{\text{Organized macroscopic variable X}}+ \underbrace{d Q}_{\text{disorganized}}\end{aligned} \hspace{\stretch{1}}(1.2.1)

In fig. 1 we plot changes that are adiabatic processes ($d Q = 0$) and heating and cooling processes (with $d W = 0$).

Given a state space of dimensionality $d_w + 1$ (one dimension per work coordinate, plus one characterizing heat exchange), a cyclic change is one for which we have

\begin{aligned}\{ X_{\mathrm{initial}} \} \rightarrow \{ X_{\mathrm{final}} \} \end{aligned} \hspace{\stretch{1}}(1.0.2a)

\begin{aligned}E_{\mathrm{initial}} \rightarrow E_{\mathrm{final}}\end{aligned} \hspace{\stretch{1}}(1.0.2b)

\begin{aligned}\Delta W \ne 0\end{aligned} \hspace{\stretch{1}}(1.0.2c)

\begin{aligned}\Delta Q \ne 0\end{aligned} \hspace{\stretch{1}}(1.0.2d)

\begin{aligned}\Delta E = 0\end{aligned} \hspace{\stretch{1}}(1.0.2e)

Such a cyclic process could be represented as in fig. 2.

Fig2: Cyclic process

Here we’ve labeled the level curves with a parameter $\sigma$, as yet undefined. We call $\sigma$ the thermodynamic entropy, and say that

\begin{aligned}\left( {\sigma, \{x_i\}} \right),\end{aligned} \hspace{\stretch{1}}(1.0.3)

specifies the state of the system.

Example: Pushing a block against a surface with friction.

Equilibrium

Considering two systems in contact as in fig. 3.

Fig3: Two systems in contact

We require

• Mechanical equilibrium.

requires balance of the forces $f_i$

\begin{aligned}\frac{\partial {E}}{\partial {x_i}} = f_i,\end{aligned} \hspace{\stretch{1}}(1.0.4)

(Note the neglect of the sign here; the direction of the force isn’t really of interest.)

and

\begin{aligned}\frac{\partial {E_1}}{\partial {x_i}} = \frac{\partial {E_2}}{\partial {x_i}} \end{aligned} \hspace{\stretch{1}}(1.0.5)

• Thermal stability

\begin{aligned}\frac{\partial {E_1}}{\partial {\sigma}} = \frac{\partial {E_2}}{\partial {\sigma}} \end{aligned} \hspace{\stretch{1}}(1.0.6)

We must have some quantity that characterizes the state of the system in a non-macroscopic fashion. The identity eq. 1.0.6 is a statement that we have equal temperatures.

We define temperature as

\begin{aligned}T \equiv \frac{\partial {E}}{\partial {\sigma}}.\end{aligned} \hspace{\stretch{1}}(1.0.7)

We could potentially define different sorts of temperature, for example, perhaps $T^3 \equiv {\partial {E}}/{\partial {\sigma}}$. Should we do this, we effectively also define $\sigma$ in a specific way. The definition eq. 1.0.7 effectively defines this non-macroscopic parameter $\sigma$ (the entropy) in the simplest possible way.

Cyclic state variables versus non-state variables

\begin{aligned}\{x_i\}, \sigma \rightarrow \text{``state variables''}\end{aligned} \hspace{\stretch{1}}(1.0.8)

A non-cyclic process changes these, whereas a cyclic process takes $\sigma, \{x_i\}$ back to the initial values. This is characterized by

\begin{aligned}\oint d\sigma = 0\end{aligned} \hspace{\stretch{1}}(1.0.9a)

\begin{aligned}\oint dx_i = 0.\end{aligned} \hspace{\stretch{1}}(1.0.9b)

This doesn’t mean that the closed loop integral of other quantities, such as $T d\sigma$ or $f_i dx_i$, is necessarily zero

\begin{aligned}\oint T d\sigma = \oint d Q \ne 0\end{aligned} \hspace{\stretch{1}}(1.0.10a)

\begin{aligned}\oint f_i dx_i = \oint d W \ne 0.\end{aligned} \hspace{\stretch{1}}(1.0.10b)

Note that the identification of $d Q = T d\sigma$ follows from our definition

\begin{aligned}\left( { \frac{\partial {E}}{\partial {\sigma}} } \right)_x = T\end{aligned} \hspace{\stretch{1}}(1.0.11)

so that with $d W = 0$ we have

\begin{aligned}dE = T d \sigma\end{aligned} \hspace{\stretch{1}}(1.0.12)

Graphically we have for a cyclic process fig. 4.

Fig4: Cyclic process

We have

\begin{aligned}d W_{\rightarrow} = -d W_{\leftarrow} \end{aligned} \hspace{\stretch{1}}(1.0.13)

\begin{aligned}d Q_{\rightarrow} = -d Q_{\leftarrow} \end{aligned} \hspace{\stretch{1}}(1.0.14)

so that

\begin{aligned}\Delta Q_{12}^{(\mathrm{A})} + \Delta Q_{21}^{(\mathrm{B})} \ne 0,\end{aligned} \hspace{\stretch{1}}(1.0.15)

or

\begin{aligned}\Delta Q_{12}^{(\mathrm{A})} \ne \Delta Q_{12}^{(\mathrm{B})}.\end{aligned} \hspace{\stretch{1}}(1.0.16)

Irreversible and reversible processes

Reversible means that undoing the macroscopic changes brings us back to the initial state. A counterexample is a block on a spring as illustrated in fig. 5.

Fig 5: Heat loss and irreversibility

In such a system the block will hit gas atoms as it moves. It’s hard to imagine that such gas particles will somehow spontaneously reorganize themselves so that they return to their initial positions and velocities. This is the gist of the second law of thermodynamics. Real processes introduce a degree of irreversibility with

\begin{aligned}\text{Energy}_1 \rightarrow \text{Energy}_2\end{aligned} \hspace{\stretch{1}}(1.0.17)

\begin{aligned}\text{Work} \rightarrow \text{Heat}\end{aligned} \hspace{\stretch{1}}(1.0.18)

but not all

\begin{aligned}\text{Heat} \rightarrow \text{Work}.\end{aligned} \hspace{\stretch{1}}(1.0.19)

PHY452H1S Basic Statistical Mechanics. Lecture 9: Lightning review of thermodynamics. Taught by Mr. (Eric) Kin-Ho Lee

Posted by peeterjoot on February 13, 2013

Disclaimer

Peeter’s lecture notes from class. May not be entirely coherent.

Lightning review of thermodynamics

First law

Energy conservation.

• Work. Macroscopic control
• heat. Uncontrollable (microscopically)

This is summarized by the differential relationship

\begin{aligned}dE = d W + d Q.\end{aligned} \hspace{\stretch{1}}(1.2.1)

Examples of work

We have many types of work (in contrast to only one type of heat). Examples:

1. $-P dV = d W$
2. $q\mathbf{E} \cdot d\mathbf{l}$
3. $k x dx$
4. $H dm$

Homework: verify the signs of these.

We put these into a general form, to first order, of

\begin{aligned}d W_i = f_i dx_i,\end{aligned} \hspace{\stretch{1}}(1.2.2)

where we assume that higher order terms are not significant.

\begin{aligned}d W = \sum_i d W_i = \sum_i f_i dx_i.\end{aligned} \hspace{\stretch{1}}(1.2.3)

Heat

We have only one type of heat, which we loosely describe as something imbued by contact with a “hotter” system, as in (Fig 1)

Fig1: System in contact with heat source

An adiabatic process is defined as one where we have no heat exchange with the environment, or

\begin{aligned}d Q = 0.\end{aligned} \hspace{\stretch{1}}(1.2.4)

We contrast this with heating processes for which we have

\begin{aligned}d W = 0.\end{aligned} \hspace{\stretch{1}}(1.2.5)

Since we have $N$ coordinates ($d W = \sum_{i = 1}^N f_i dx_i$), we can think about an $N + 1$ dimensional space, where

1. $N$ dimensions are the $x_i$
2. 1 dimension characterizes the heat exchange.

N = 1

Given work on gas

\begin{aligned}d W = -P dV\end{aligned} \hspace{\stretch{1}}(1.2.6)

We have a coordinate, not yet precisely defined, for which fixed levels indicate that there is no heat exchange occurring, as in (Fig 2)

Fig2: Adiabatic and heat exchange processes

We’ll call this axis $\sigma$, the thermodynamic entropy.

We’ve been introduced to statistical entropy

\begin{aligned}S = k_B \ln \Omega.\end{aligned} \hspace{\stretch{1}}(1.2.7)

We’ll assume for now that these are not related and will eventually figure out the connection between these two concepts.

N = 2

A representation of an adiabatic, or constant $\sigma$-hypersurface, process is given in (Fig 3), a heating/cooling process with a transition between $\sigma$-hypersurfaces in (Fig 4), and a cyclic process in (Fig 5)

Fig4: Heat exchange process

Fig 5: Cyclic process

The cyclic process is one for which $dE = d W + d Q = 0$, however, this does not imply $d W = 0$ and $d Q = 0$ since we only require that the sum of the two is zero. In this whole process, we can have for example a net change in heat. Example: the engine of a car. Work is done, and heat is generated, but a car that was initially stopped and returns to its final destination, stops and cools down again, has still had significant internal action in the process.

Reversible processes

What do we mean by reversible? We mean that any of the changes in the system have been done so slowly that we could reverse the direction of the processes at any point, and should we do so, both the system and the environment will be returned to their initial states. This is an idealization that is, most of the time, a good approximation, but gives us an excellent idea of the limits of what we can theoretically describe.

Question: Why does the speed of the process make a difference?

If we are making changes to the system quickly, imagine that we are compressing a gas as in (Fig 6)

Fig 6: Fast gas compression by a piston

Doing work slowly means that the whole system can react to the change imposed. If we compressed the gas quickly, then changes to the system start only at the contact point with the piston. This can’t be reversed: if we pull the piston out at this point, none of the non-front gas particles will be able to react. The system will not be in thermal equilibrium for fast changes.

Cartesian to spherical change of variables in 3d phase space

Posted by peeterjoot on February 11, 2013

Question: Cartesian to spherical change of variables in 3d phase space

[1] problem 2.2 (a). Try a spherical change of vars to verify explicitly that phase space volume is preserved.

Our kinetic Lagrangian in spherical coordinates is

\begin{aligned}\mathcal{L} &= \frac{1}{{2}} m \left( \dot{r} \hat{\mathbf{r}} + r \sin\theta \dot{\phi} \hat{\boldsymbol{\phi}} + r \dot{\theta} \hat{\boldsymbol{\theta}} \right)^2 \\ &= \frac{1}{{2}} m \left( \dot{r}^2 + r^2 \sin^2\theta \dot{\phi}^2 + r^2 \dot{\theta}^2 \right)\end{aligned} \hspace{\stretch{1}}(1.0.1)

We read off our canonical momenta

\begin{aligned}p_r &= \frac{\partial {\mathcal{L}}}{\partial {\dot{r}}} \\ &= m \dot{r}\end{aligned} \hspace{\stretch{1}}(1.0.2a)

\begin{aligned}p_\theta &= \frac{\partial {\mathcal{L}}}{\partial {\dot{\theta}}} \\ &= m r^2 \dot{\theta}\end{aligned} \hspace{\stretch{1}}(1.0.2b)

\begin{aligned}p_\phi &= \frac{\partial {\mathcal{L}}}{\partial {\dot{\phi}}} \\ &= m r^2 \sin^2\theta \dot{\phi},\end{aligned} \hspace{\stretch{1}}(1.0.2c)

and can now express the Hamiltonian in spherical coordinates

\begin{aligned}H &= \frac{1}{{2}} m \left(\left( \frac{p_r}{m} \right)^2+ r^2 \sin^2\theta \left( \frac{p_\phi}{m r^2 \sin^2\theta} \right)^2+ r^2 \left( \frac{p_\theta}{m r^2} \right)^2\right) \\ &= \frac{p_r^2}{2m} + \frac{p_\phi^2}{2 m r^2 \sin^2\theta} + \frac{p_\theta^2}{2 m r^2}\end{aligned} \hspace{\stretch{1}}(1.0.3)

Now we want to do a change of variables. The coordinates transform as

\begin{aligned}x = r \sin\theta \cos\phi\end{aligned} \hspace{\stretch{1}}(1.0.4a)

\begin{aligned}y = r \sin\theta \sin\phi\end{aligned} \hspace{\stretch{1}}(1.0.4b)

\begin{aligned}z = r \cos\theta,\end{aligned} \hspace{\stretch{1}}(1.0.4c)

or

\begin{aligned}r = \sqrt{x^2 + y^2 + z^2}\end{aligned} \hspace{\stretch{1}}(1.0.5a)

\begin{aligned}\theta = \arccos(z/r)\end{aligned} \hspace{\stretch{1}}(1.0.5b)

\begin{aligned}\phi = \arctan(y/x).\end{aligned} \hspace{\stretch{1}}(1.0.5c)

It’s not too hard to calculate the change of variables for the momenta (verified in sphericalPhaseSpaceChangeOfVars.nb). We have

\begin{aligned}p_r = \frac{x p_x + y p_y + z p_z}{\sqrt{x^2 + y^2 + z^2}}\end{aligned} \hspace{\stretch{1}}(1.0.6a)

\begin{aligned}p_\theta = \frac{(p_x x + p_y y) z - p_z (x^2 + y^2)}{\sqrt{x^2 + y^2}}\end{aligned} \hspace{\stretch{1}}(1.0.6b)

\begin{aligned}p_\phi = x p_y - y p_x\end{aligned} \hspace{\stretch{1}}(1.0.6c)

Now let’s compute the volume element in spherical coordinates. This is

\begin{aligned}d\omega &= dr\, d\theta\, d\phi\, dp_r\, dp_\theta\, dp_\phi \\ &= \frac{\partial(r, \theta, \phi, p_r, p_\theta, p_\phi)}{\partial(x, y, z, p_x, p_y, p_z)}dx dy dz dp_x dp_y dp_z \\ &= \begin{vmatrix} \frac{x}{\sqrt{x^2+y^2+z^2}} & \frac{y}{\sqrt{x^2+y^2+z^2}} & \frac{z}{\sqrt{x^2+y^2+z^2}} & 0 & 0 & 0 \\ \frac{x z}{\sqrt{x^2+y^2} \left(x^2+y^2+z^2\right)} & \frac{y z}{\sqrt{x^2+y^2} \left(x^2+y^2+z^2\right)} & -\frac{\sqrt{x^2+y^2}}{x^2+y^2+z^2} & 0 & 0 & 0 \\ -\frac{y}{x^2+y^2} & \frac{x}{x^2+y^2} & 0 & 0 & 0 & 0 \\ \frac{\left(y^2+z^2\right) p_x-x y p_y-x z p_z}{\left(x^2+y^2+z^2\right)^{3/2}} & \frac{\left(x^2+z^2\right) p_y-y \left(x p_x+z p_z\right)}{\left(x^2+y^2+z^2\right)^{3/2}} & \frac{\left(x^2+y^2\right) p_z-z \left(x p_x+y p_y\right)}{\left(x^2+y^2+z^2\right)^{3/2}} & \frac{x}{\sqrt{x^2+y^2+z^2}} & \frac{y}{\sqrt{x^2+y^2+z^2}} & \frac{z}{\sqrt{x^2+y^2+z^2}} \\ \frac{y z \left(y p_x-x p_y\right)-x \left(x^2+y^2\right) p_z}{\left(x^2+y^2\right)^{3/2}} & \frac{x z \left(x p_y-y p_x\right)-y \left(x^2+y^2\right) p_z}{\left(x^2+y^2\right)^{3/2}} & \frac{x p_x+y p_y}{\sqrt{x^2+y^2}} & \frac{x z}{\sqrt{x^2+y^2}} & \frac{y z}{\sqrt{x^2+y^2}} & -\sqrt{x^2+y^2} \\ p_y & -p_x & 0 & -y & x & 0 \\ \end{vmatrix}dx dy dz dp_x dp_y dp_z \\ &= dx dy dz dp_x dp_y dp_z\end{aligned} \hspace{\stretch{1}}(1.0.7)

This also has a unit determinant, as we found in the similar cylindrical change of phase space variables.
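Since I don't want to transcribe that determinant expansion by hand, here is an independent numerical check of the unit determinant, a sympy sketch using the transformation equations above (the sample point is an arbitrary choice of mine):

```python
import sympy as sp

x, y, z, px, py, pz = sp.symbols('x y z p_x p_y p_z', real=True)

# The Cartesian to spherical phase space transformation, eqs. 1.0.5 and 1.0.6.
r = sp.sqrt(x**2 + y**2 + z**2)
theta = sp.acos(z/r)
phi = sp.atan(y/x)
p_r = (x*px + y*py + z*pz)/r
p_theta = ((px*x + py*y)*z - pz*(x**2 + y**2))/sp.sqrt(x**2 + y**2)
p_phi = x*py - y*px

J = sp.Matrix([r, theta, phi, p_r, p_theta, p_phi]).jacobian([x, y, z, px, py, pz])

# Evaluate the 6x6 Jacobian determinant at an arbitrary point (x > 0 octant).
point = {x: 0.7, y: 1.3, z: 0.9, px: 0.2, py: -0.4, pz: 1.1}
det = J.subs(point).det()
print(det)  # 1, to floating point precision
```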

References

[1] RK Pathria. Statistical mechanics. Butterworth Heinemann, Oxford, UK, 1996.

Change of variables in 2d phase space

Posted by peeterjoot on February 10, 2013

Motivation

In [1] problem 2.2, it’s suggested to try a spherical change of vars to verify explicitly that phase space volume is preserved, and to explore some related ideas. As a first step let’s try a similar, but presumably easier change of variables, going from Cartesian to cylindrical phase spaces.

Canonical momenta and Hamiltonian

Our cylindrical velocity is

\begin{aligned}\mathbf{v} = \dot{r} \hat{\mathbf{r}} + r \dot{\theta} \hat{\boldsymbol{\theta}},\end{aligned} \hspace{\stretch{1}}(1.2.1)

so a purely kinetic Lagrangian would be

\begin{aligned}\mathcal{L} &= \frac{1}{{2}} m \mathbf{v}^2 \\ &= \frac{1}{{2}} m \left( \dot{r}^2 + r^2 \dot{\theta}^2 \right).\end{aligned} \hspace{\stretch{1}}(1.2.2)

Our canonical momenta are

\begin{subequations}

\begin{aligned}p_r &= \frac{\partial {\mathcal{L}}}{\partial {\dot{r}}} \\ &= m \dot{r}\end{aligned} \hspace{\stretch{1}}(1.0.3a)

\begin{aligned}p_\theta &= \frac{\partial {\mathcal{L}}}{\partial {\dot{\theta}}} \\ &= m r^2 \dot{\theta}.\end{aligned} \hspace{\stretch{1}}(1.0.3b)

\end{subequations}

and our Hamiltonian, which is numerically equal to $\mathcal{L}$ since the Lagrangian is purely kinetic, is

\begin{aligned}H &= \mathcal{L} \\ &= \frac{1}{{2m}} p_r^2 + \frac{1}{{2 m r^2}} p_\theta^2.\end{aligned} \hspace{\stretch{1}}(1.0.4)

Now we need to express our momenta in terms of the Cartesian coordinates. We have for the radial momentum

\begin{aligned}p_r &= m \dot{r} \\ &= m \frac{d}{dt} \sqrt{x^2 + y^2} \\ &= \frac{1}{{2}} \frac{2 m}{r} \left( x \dot{x} + y \dot{y} \right)\end{aligned} \hspace{\stretch{1}}(1.0.5)

or

\begin{aligned}p_r = \frac{1}{{r}} \left( x p_x + y p_y \right)\end{aligned} \hspace{\stretch{1}}(1.0.6)

\begin{aligned}p_\theta &= m r^2 \frac{d\theta}{dt} \\ &= m r^2 \frac{d}{dt} \arctan \left( \frac{y}{x} \right).\end{aligned} \hspace{\stretch{1}}(1.0.7)

After some reduction (cyclindrialMomenta.nb), we find

\begin{aligned}p_\theta = p_y x - p_x y.\end{aligned} \hspace{\stretch{1}}(1.0.8)

We can assemble these into a complete set of change of variable equations

\begin{subequations}

\begin{aligned}r = \sqrt{x^2 + y^2}\end{aligned} \hspace{\stretch{1}}(1.0.9a)

\begin{aligned}\theta = \arctan\left( \frac{y}{x} \right)\end{aligned} \hspace{\stretch{1}}(1.0.9b)

\begin{aligned}p_r = \frac{1}{{\sqrt{x^2 + y^2}}} \left( x p_x + y p_y \right)\end{aligned} \hspace{\stretch{1}}(1.0.9c)

\begin{aligned}p_\theta = p_y x - p_x y.\end{aligned} \hspace{\stretch{1}}(1.0.9d)

\end{subequations}

Our phase space volume element change of variables is

\begin{aligned}dr d\theta dp_r dp_\theta &= \frac{\partial(r, \theta, p_r, p_\theta)}{\partial(x, y, p_x, p_y)}dx dy dp_x dp_y \\ &= \begin{vmatrix} \frac{x}{\sqrt{x^2+y^2}} & \frac{y}{\sqrt{x^2+y^2}} & 0 & 0 \\ -\frac{y}{x^2+y^2} & \frac{x}{x^2+y^2} & 0 & 0 \\ \frac{y \left(y p_x-x p_y\right)}{\left(x^2+y^2\right)^{3/2}} & \frac{x \left(x p_y-y p_x\right)}{\left(x^2+y^2\right)^{3/2}} & \frac{x}{\sqrt{x^2+y^2}} & \frac{y}{\sqrt{x^2+y^2}} \\ p_y & -p_x & -y & x \end{vmatrix}dx dy dp_x dp_y \\ &= \frac{x^2 + y^2}{\left(x^2 + y^2\right)^{3/2}}\frac{x^2 + y^2}{\left(x^2 + y^2\right)^{1/2}} dx dy dp_x dp_y \\ &= dx dy dp_x dp_y.\end{aligned} \hspace{\stretch{1}}(1.0.10)

We see explicitly that this point transformation has a unit Jacobian, preserving area.
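The same conclusion can be reached mechanically. A sympy sketch of this Jacobian computation (restricting to $x, y > 0$ so the arctan branch is unambiguous):

```python
import sympy as sp

x, y, px, py = sp.symbols('x y p_x p_y', positive=True)

# The change of variable equations 1.0.9a-1.0.9d.
r = sp.sqrt(x**2 + y**2)
theta = sp.atan(y/x)
p_r = (x*px + y*py)/r
p_theta = py*x - px*y

# The 4x4 Jacobian of (r, theta, p_r, p_theta) with respect to (x, y, p_x, p_y).
J = sp.Matrix([r, theta, p_r, p_theta]).jacobian([x, y, px, py])
print(sp.simplify(J.det()))  # 1
```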

References

[1] RK Pathria. Statistical mechanics. Butterworth Heinemann, Oxford, UK, 1996.

Some problems from Kittel chapter 3

Posted by peeterjoot on February 10, 2013

[Click here for a PDF of this post with nicer formatting (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Question: Classical gas partition function

[1] expresses the classical gas partition function (3.77) as

\begin{aligned}Z_1 \propto \int \exp\left( - \frac{p_x^2 + p_y^2 + p_z^2 }{2 M \tau}\right) dp_x dp_y dp_z\end{aligned} \hspace{\stretch{1}}(1.0.1)

Show that this leads to the expected $3 \tau/2$ result for the thermal average energy.

Let’s use the adjustment technique from the text for the $N$ partition case and write

\begin{aligned}Z_N = \frac{1}{{N!}} Z_1^N,\end{aligned} \hspace{\stretch{1}}(1.0.2)

with $Z_1$ as above. This gives us

\begin{aligned}U &= \tau^2 \frac{\partial {}}{\partial {\tau}} \ln Z_N \\ &= \tau^2 \frac{\partial {}}{\partial {\tau}} \left(N \ln Z_1 - \ln N!\right) \\ &= N \tau^2 \frac{\partial {\ln Z_1 }}{\partial {\tau}} \\ &= N \tau^2 \frac{\partial {}}{\partial {\tau}} \sum_{k = 1}^{3} \ln\int \exp\left( - \frac{p_k^2 }{2 M \tau}\right) dp_k \\ &= N \tau^2\sum_{k = 1}^{3}\frac{\frac{\partial {}}{\partial {\tau}} \int \exp\left( - \frac{p_k^2 }{2 M \tau} \right) dp_k}{\int \exp\left( - \frac{p_k^2 }{2 M \tau} \right) dp_k} \\ &= N \tau^2\sum_{k = 1}^{3}\frac{\frac{\partial {}}{\partial {\tau}} \sqrt{ 2 \pi M \tau }}{\sqrt{ 2 \pi M \tau}} \\ &= 3 N \tau^2\frac{\frac{1}{{2}} \tau^{-1/2}}{\sqrt{ \tau}} \\ &= \frac{3}{2} N \tau \\ &= \frac{3}{2} N k_{\mathrm{B}} T\end{aligned} \hspace{\stretch{1}}(1.0.3)
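The Gaussian momentum integrals and the $\tau$ derivative above are simple enough to delegate to a CAS. A sympy sketch of the same computation (symbol names are mine):

```python
import sympy as sp

tau, M, p, N = sp.symbols('tau M p N', positive=True)

# One Cartesian momentum integral of the Boltzmann factor: sqrt(2 pi M tau).
Z_k = sp.integrate(sp.exp(-p**2/(2*M*tau)), (p, -sp.oo, sp.oo))
Z_1 = Z_k**3

# U = tau^2 d(ln Z_N)/d tau with Z_N = Z_1^N/N!; the ln N! term has no
# tau dependence, so it drops out of the derivative.
U = sp.simplify(tau**2 * sp.diff(N*sp.log(Z_1), tau))
print(U)  # 3*N*tau/2
```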

Question: Two state system

[1] problem 3.1.

Find an expression for the free energy as a function of $\tau$ of a system with two states, one at energy $0$ and one at energy $\epsilon$. From the free energy, find expressions for the energy and entropy of the system.

Our partition function is

\begin{aligned}Z = 1 + e^{-\epsilon /\tau}\end{aligned} \hspace{\stretch{1}}(1.0.4)

The free energy is just

\begin{aligned}F = -\tau \ln Z = -\tau \ln (1 + e^{-\epsilon/\tau})\end{aligned} \hspace{\stretch{1}}(1.0.5)

The entropy follows immediately

\begin{aligned}\sigma &= -\frac{\partial {F}}{\partial {\tau}} \\ &= \frac{\partial {}}{\partial {\tau}}\left( \tau \ln \left( 1 + e^{-\epsilon/\tau} \right) \right) \\ &= \ln \left( 1 + e^{-\epsilon/\tau} \right)+\tau \frac{\epsilon}{\tau^2} \frac{e^{-\epsilon/\tau}}{1 + e^{-\epsilon/\tau}} \\ &= \ln \left( 1 + e^{-\epsilon/\tau} \right)+\frac{\epsilon}{\tau} \frac{e^{-\epsilon/\tau}}{1 + e^{-\epsilon/\tau}}\end{aligned} \hspace{\stretch{1}}(1.0.6)

The energy is

\begin{aligned}U &= F + \tau \sigma \\ &= -\tau \ln (1 + e^{-\epsilon/\tau}) + \tau \sigma \\ &= \tau\left( \not{{\ln \left( 1 + e^{-\epsilon/\tau} \right)}} + \frac{\epsilon}{\tau} \frac{e^{-\epsilon/\tau}}{1 + e^{-\epsilon/\tau}} -\not{{\ln (1 + e^{-\epsilon/\tau}) }} \right)\end{aligned} \hspace{\stretch{1}}(1.0.7)

This is

\begin{aligned}U=\frac{\epsilon e^{-\epsilon/\tau}}{1 + e^{-\epsilon/\tau}}=\frac{\epsilon}{1 + e^{\epsilon/\tau}}.\end{aligned} \hspace{\stretch{1}}(1.0.8)

These are all plotted in (Fig 1).

Fig1: Plots for two state system

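A quick numeric check of these expressions (a sketch; the helper function and sample values are my own). For $\tau \gg \epsilon$ the entropy should approach $\ln 2$ and the energy $\epsilon/2$:

```python
import math

def two_state(eps, tau):
    """F, sigma, U for a system with levels at 0 and eps (eqs. 1.0.5, 1.0.6, 1.0.8)."""
    Z = 1 + math.exp(-eps/tau)
    F = -tau*math.log(Z)
    sigma = math.log(Z) + (eps/tau)*math.exp(-eps/tau)/Z
    U = eps/(1 + math.exp(eps/tau))
    return F, sigma, U

F, sigma, U = two_state(eps=1.0, tau=100.0)
print(sigma)  # approaches ln 2 ~ 0.6931: both states nearly equally likely
print(U)      # approaches eps/2
print(abs(F + 100.0*sigma - U) < 1e-9)  # True: U = F + tau*sigma holds
```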

Question: Magnetic susceptibility

[1] problem 3.2.

Use the partition function to find an exact expression for the magnetization $M$ and the susceptibility $\chi = dM/dB$ as a function of temperature and magnetic field for the model system of magnetic moments in a magnetic field. The result for the magnetization, found by other means, was $M = n m \tanh( m B/\tau)$, where $n$ is the particle concentration. Find the free energy and express the result as a function only of $\tau$ and the parameter $x = M/nm$. Show that the susceptibility is $\chi = n m^2/\tau$ in the limit $m B \ll \tau$.

Our partition function for a unit volume containing $n$ spins is

\begin{aligned}Z=\frac{\left( e^{-m B/\tau} +e^{m B/\tau} \right)^n}{n!}=\frac{2^n \left( \cosh\left( m B/\tau \right) \right)^n}{n!},\end{aligned} \hspace{\stretch{1}}(1.0.9)

so that the Free energy is

\begin{aligned}F = -\tau\left( n \ln 2 - \ln n! + n \ln \cosh\left( m B/\tau \right) \right).\end{aligned} \hspace{\stretch{1}}(1.0.10)

The energy, magnetization and magnetic field were interrelated by

\begin{aligned}- M B &= U \\ &= \tau^2 \frac{\partial {}}{\partial {\tau}}\left( -\frac{F}{\tau} \right) \\ &= \tau^2 n\frac{\partial {}}{\partial {\tau}}\ln \cosh\left( m B/\tau \right) \\ &= \tau^2 n \frac{ -m B/\tau^2\sinh\left( m B/\tau \right)}{\cosh\left( m B/\tau \right)} \\ &= - m B n \tanh \left( m B/\tau \right).\end{aligned} \hspace{\stretch{1}}(1.0.11)

This gives us

\begin{aligned}M = m n \tanh \left( m B/\tau \right),\end{aligned} \hspace{\stretch{1}}(1.0.12)

so that

\begin{aligned}\chi = \frac{dM}{dB}= \frac{m^2 n}{\tau \cosh^2 \left( m B/\tau \right)}.\end{aligned} \hspace{\stretch{1}}(1.0.13)

For $m B/\tau \ll 1$, the cosh term goes to unity, so we have

\begin{aligned}\chi \approx \frac{m^2 n}{\tau},\end{aligned} \hspace{\stretch{1}}(1.0.14)

as desired.
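These expressions can also be generated directly from the free energy using $M = -\partial F/\partial B$ (equivalent to the $U = -M B$ route above). A sympy sketch; any $B$-independent pieces of $F$ (the normalization and $\ln n!$ terms) drop out of the derivative, so only the cosh term is kept:

```python
import sympy as sp

m, B, tau, n = sp.symbols('m B tau n', positive=True)

# Only the B-dependent part of the free energy matters for M and chi.
F = -tau*n*sp.log(sp.cosh(m*B/tau))

M = -sp.diff(F, B)    # magnetization: M = n m tanh(m B/tau)
chi = sp.diff(M, B)   # susceptibility: chi = n m^2/(tau cosh^2(m B/tau))

print(sp.simplify(M - n*m*sp.tanh(m*B/tau)))               # 0
print(sp.simplify(chi - n*m**2/(tau*sp.cosh(m*B/tau)**2))) # 0
print(sp.limit(chi, B, 0))  # the m B << tau limit, n m^2/tau
```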

With $x = M/nm$, or $m = M/nx$, the free energy is

\begin{aligned}F =-\tau\left( n \ln 2 - \ln n! + n \ln \cosh\left( \frac{M B}{n x \tau} \right) \right)\end{aligned} \hspace{\stretch{1}}(1.0.15)

That last expression isn’t particularly illuminating. What was the point of that substitution?

Question: Free energy of a harmonic oscillator

[1] problem 3.3.

A one dimensional harmonic oscillator has an infinite series of equally spaced energy states, with $\epsilon_s = s \hbar \omega$, where $s$ is a positive integer or zero, and $\omega$ is the classical frequency of the oscillator. We have chosen the zero of energy at the state $s = 0$. Show that for a harmonic oscillator the free energy is

\begin{aligned}F = \tau \ln\left( 1 - e^{-\hbar \omega/\tau} \right).\end{aligned} \hspace{\stretch{1}}(1.0.16)

Note that at high temperatures such that $\tau \gg \hbar \omega$ we may expand the argument of the logarithm to obtain $F \approx \tau \ln (\hbar \omega/\tau)$. From 1.0.16 show that the entropy is

\begin{aligned}\sigma = \frac{\hbar\omega/\tau}{e^{\hbar \omega/\tau} - 1} -\ln\left( 1 - e^{-\hbar \omega/\tau} \right)\end{aligned} \hspace{\stretch{1}}(1.0.17)

I found it curious that this problem dropped the factor of $\hbar\omega/2$ from the energy. Including it we have

\begin{aligned}\epsilon_s = \left( s + \frac{1}{{2}} \right) \hbar \omega,\end{aligned} \hspace{\stretch{1}}(1.0.18)

So that the partition function is

\begin{aligned}Z= \sum_{s = 0}^\infty e^{-\left( s + \frac{1}{{2}} \right) \hbar \omega/\tau }=e^{-\hbar \omega/2\tau}\sum_{s = 0}^\infty e^{-s \hbar \omega/\tau}.\end{aligned} \hspace{\stretch{1}}(1.0.19)

The free energy is

\begin{aligned}F &= -\tau \ln Z \\ &= -\tau\left( -\frac{\hbar \omega}{2\tau} + \ln \left( \sum_{s = 0}^\infty e^{-s \hbar \omega/\tau} \right) \right) \\ &= \frac{\hbar \omega}{2} - \tau \ln\left( \sum_{s = 0}^\infty e^{-s \hbar \omega/\tau} \right)\end{aligned} \hspace{\stretch{1}}(1.0.20)

We see that the contribution of the $\hbar \omega/2$ in the energy of each state just adds a constant to the free energy. This will drop out when we compute the entropy. Dropping that factor now that we know why it doesn’t contribute, we can sum the geometric series, obtaining

\begin{aligned}F = -\tau \ln Z=\tau \ln\left( 1 - e^{-\hbar \omega/\tau} \right).\end{aligned} \hspace{\stretch{1}}(1.0.21)

Taking derivatives for the entropy we have

\begin{aligned}\sigma &= -\frac{\partial {F}}{\partial {\tau}} \\ &= -\ln\left( 1 - e^{-\hbar \omega/\tau} \right)+\tau\frac{\hbar \omega}{\tau^2} \frac{e^{-\hbar \omega/\tau}}{1 - e^{-\hbar \omega/\tau}} \\ &= -\ln\left( 1 - e^{-\hbar \omega/\tau} \right)+\frac{\frac{\hbar \omega}{\tau}}{e^{\hbar \omega/\tau} - 1}\end{aligned} \hspace{\stretch{1}}(1.0.22)
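A sympy cross-check that this entropy matches eq. 1.0.17 (the symbol `hw` stands in for $\hbar\omega$; my own naming):

```python
import sympy as sp

tau, hw = sp.symbols('tau hw', positive=True)  # hw plays the role of hbar*omega

# Z for levels s*hw, s = 0, 1, 2, ...: a geometric series with ratio exp(-hw/tau).
Z = 1/(1 - sp.exp(-hw/tau))
F = -tau*sp.log(Z)    # = tau*log(1 - exp(-hw/tau)), matching eq. 1.0.21
sigma = -sp.diff(F, tau)

# The textbook form of the entropy, eq. 1.0.17:
expected = (hw/tau)/(sp.exp(hw/tau) - 1) - sp.log(1 - sp.exp(-hw/tau))
print(sp.simplify(sigma - expected))
```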

Question: Energy fluctuation

[1] problem 3.4.

Consider a system of fixed volume in thermal contact with a reservoir. Show that the mean square fluctuation in the energy of the system is

\begin{aligned}\left\langle{{ (\epsilon - \left\langle{\epsilon}\right\rangle)^2 }}\right\rangle = \tau^2\left( \frac{\partial {U}}{\partial {\tau}} \right)_V\end{aligned} \hspace{\stretch{1}}(1.0.23)

Here $U$ is the conventional symbol for $\left\langle{{\epsilon}}\right\rangle$. Hint: Use the partition function $Z$ to relate ${\partial {U}}/{\partial {\tau}}$ to the mean square fluctuation. Also, multiply out the term $(\cdots)^2$.

With a probability of finding the system in state $s$ of

\begin{aligned}P_s = \frac{e^{-\epsilon_s/\tau}}{Z}\end{aligned} \hspace{\stretch{1}}(1.0.24)

the average energy is

\begin{aligned}U &= \left\langle{{\epsilon}}\right\rangle \\ &= \sum_s P_s \epsilon_s \\ &= \sum_s \epsilon_s \frac{e^{-\epsilon_s/\tau}}{Z} \\ &= \frac{1}{{Z}} \sum_s \epsilon_s e^{-\epsilon_s/\tau}\end{aligned} \hspace{\stretch{1}}(1.0.25)

So we have

\begin{aligned}\tau^2 \frac{\partial {U}}{\partial {\tau}} &= -\frac{\tau^2}{Z^2} \frac{dZ}{d\tau}\sum_s \epsilon_s e^{-\epsilon_s/\tau}+ \frac{\tau^2}{Z}\sum_s \frac{\epsilon_s^2}{\tau^2} e^{-\epsilon_s/\tau} \\ &= -\frac{\tau^2}{Z^2} \frac{dZ}{d\tau}\sum_s \epsilon_s e^{-\epsilon_s/\tau}+ \frac{1}{{Z}}\sum_s \epsilon_s^2 e^{-\epsilon_s/\tau}.\end{aligned} \hspace{\stretch{1}}(1.0.26)

But

\begin{aligned}\frac{dZ}{d\tau}=\frac{d}{d\tau} \sum_s e^{-\epsilon_s/\tau}=\sum_s \frac{\epsilon_s}{\tau^2} e^{-\epsilon_s/\tau},\end{aligned} \hspace{\stretch{1}}(1.0.27)

giving

\begin{aligned}\tau^2 \frac{\partial {U}}{\partial {\tau}} &= -\frac{1}{Z^2}\sum_s \epsilon_s e^{-\epsilon_s/\tau} \sum_s \epsilon_s e^{-\epsilon_s/\tau}+ \frac{1}{{Z}}\sum_s \epsilon_s^2 e^{-\epsilon_s/\tau} \\ &= -\left\langle{{ \epsilon }}\right\rangle^2 + \left\langle{{\epsilon^2}}\right\rangle,\end{aligned} \hspace{\stretch{1}}(1.0.28)

which shows eq. 1.0.23 as desired.
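This identity is also easy to check numerically for any small discrete spectrum. A sketch (the three-level spectrum and temperature are arbitrary choices of mine), comparing $\tau^2 \partial U/\partial \tau$ against $\left\langle \epsilon^2 \right\rangle - \left\langle \epsilon \right\rangle^2$ via a central difference:

```python
import math

def moments(energies, tau):
    """Thermal averages <e> and <e^2> over the Boltzmann distribution."""
    ws = [math.exp(-e/tau) for e in energies]
    Z = sum(ws)
    e1 = sum(e*w for e, w in zip(energies, ws))/Z
    e2 = sum(e*e*w for e, w in zip(energies, ws))/Z
    return e1, e2

# A made-up three-level spectrum, just to exercise the identity numerically.
levels = [0.0, 0.8, 2.5]
tau, h = 1.3, 1e-6

U1, _ = moments(levels, tau - h)
U2, _ = moments(levels, tau + h)
dU_dtau = (U2 - U1)/(2*h)   # central difference approximation of dU/dtau

mean, mean_sq = moments(levels, tau)
variance = mean_sq - mean**2

print(tau**2 * dU_dtau, variance)  # the two values agree
```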

References

[1] C. Kittel and H. Kroemer. Thermal physics. WH Freeman, 1980.